Semi-supervised learning has proven to be a powerful paradigm for leveraging unlabeled data to mitigate the reliance on large labeled datasets. In this work, we unify the current dominant approaches to semi-supervised learning to produce a new algorithm, MixMatch, that works by guessing low-entropy labels for data-augmented unlabeled examples and mixing labeled and unlabeled data using mixup. We show that MixMatch obtains state-of-the-art results by a large margin across many datasets and labeled-data amounts. For example, on CIFAR-10 with 250 labels, we reduce the error rate by a factor of 4 (from 38% to 11%), and by a factor of 2 on STL-10. We also demonstrate how MixMatch can help achieve a dramatically better accuracy-privacy trade-off for differential privacy. Finally, we perform an ablation study to tease apart which components of MixMatch are most important for its success.
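The two operations the abstract names, low-entropy label guessing and mixup, are compact enough to sketch. The following is a minimal NumPy illustration rather than the paper's implementation; the temperature T, the Beta parameter alpha, the augmentation count K, and the model and augment callables are illustrative placeholders.

```python
import numpy as np

def sharpen(p, T=0.5):
    """Lower the entropy of a guessed label distribution p (temperature T < 1
    pushes the distribution toward one-hot)."""
    p = p ** (1.0 / T)
    return p / p.sum(axis=-1, keepdims=True)

def mixup(x1, y1, x2, y2, alpha=0.75):
    """Interpolate two labeled examples; lam is biased toward the first input."""
    lam = np.random.beta(alpha, alpha)
    lam = max(lam, 1.0 - lam)  # keep the mixed example closer to (x1, y1)
    return lam * x1 + (1 - lam) * x2, lam * y1 + (1 - lam) * y2

def guess_label(model, augment, x_unlabeled, K=2):
    """Guess a label by averaging predictions over K augmentations, then
    sharpen. `model` and `augment` are placeholder callables, not an API
    taken from the paper."""
    preds = np.stack([model(augment(x_unlabeled)) for _ in range(K)])
    return sharpen(preds.mean(axis=0))
```

Sharpening raises each probability to the power 1/T and renormalizes, which is what makes the guessed labels "low-entropy" in the sense the abstract uses.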
We design a novel, communication-efficient, failure-robust protocol for secure aggregation of high-dimensional data. Our protocol allows a server to compute the sum of large, user-held data vectors from mobile devices in a secure manner (i.e., without learning each user's individual contribution), and can be used, for example, in a federated learning setting to aggregate user-provided model updates for a deep neural network. We prove the security of our protocol in the honest-but-curious and active adversary settings, and show that security is maintained even if an arbitrarily chosen subset of users drops out at any time. We evaluate the efficiency of our protocol and show, by complexity analysis and a concrete implementation, that its runtime and communication overhead remain low even on large datasets and client pools. For 16-bit input values, our protocol offers 1.73x communication expansion for 2^10 users and 2^20-dimensional vectors, and 1.98x expansion for 2^14 users and 2^24-dimensional vectors over sending data in the clear.
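The cancellation idea at the heart of such a protocol can be shown in a few lines: each pair of users shares a random mask that one adds and the other subtracts, so all masks vanish in the server-side sum. This toy sketch assumes pre-shared pairwise seeds and omits the key agreement, secret sharing, and dropout recovery that the actual protocol provides.

```python
import numpy as np

MOD = 2**16  # 16-bit inputs, arithmetic modulo 2^16

def masked_update(user_id, x, all_ids, pairwise_seeds):
    """Mask a user's vector with pairwise masks that cancel in the sum.
    `pairwise_seeds` maps frozenset({i, j}) -> shared seed (an assumption
    standing in for a real key-agreement step)."""
    masked = x.copy()
    for other in all_ids:
        if other == user_id:
            continue
        seed = pairwise_seeds[frozenset((user_id, other))]
        mask = np.random.default_rng(seed).integers(0, MOD, size=x.shape)
        # The lower-id party adds the mask, the higher-id party subtracts it,
        # so each pair's masks cancel when the server sums all updates.
        masked = (masked + mask) % MOD if user_id < other else (masked - mask) % MOD
    return masked

rng = np.random.default_rng(0)
ids = [0, 1, 2]
seeds = {frozenset(p): s for s, p in enumerate([(0, 1), (0, 2), (1, 2)])}
xs = {i: rng.integers(0, 100, size=4) for i in ids}
total = sum(masked_update(i, xs[i], ids, seeds) for i in ids) % MOD
assert np.array_equal(total, sum(xs.values()) % MOD)  # server sees only the sum
```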
Transfer learning aims at reusing the knowledge in some source tasks to improve the learning of a target task. Many transfer learning methods assume that the source tasks and the target task are related, even though many tasks are not related in reality. However, when two tasks are unrelated, the knowledge extracted from a source task may not help, and can even hurt, the performance of the target task. Thus, how to avoid negative transfer and ensure a "safe transfer" of knowledge is crucial in transfer learning. In this paper, we propose an adaptive transfer learning algorithm based on Gaussian processes (AT-GP), which can be used to adapt the transfer learning scheme by automatically estimating the similarity between a source and a target task. The main contribution of our work is that we propose a new semi-parametric transfer kernel for transfer learning from a Bayesian perspective, and propose to learn the model with respect to the target task, rather than all tasks as in multi-task learning. We can formulate the transfer learning problem as a unified Gaussian process (GP) model.

We demonstrate that a character-level recurrent neural network is able to learn out-of-vocabulary (OOV) words under federated learning settings, for the purpose of expanding the vocabulary of a virtual keyboard for smartphones without exporting sensitive text to servers. High-frequency words can be sampled from the trained generative model by drawing from the joint posterior directly. We study the feasibility of the approach in two settings: (1) using simulated federated learning on a publicly available non-IID per-user dataset from a popular social networking website, and (2) using federated learning on data hosted on user mobile devices. The model achieves good recall and precision compared to ground-truth OOV words in setting (1). With (2), we demonstrate the practicality of this approach by showing that we can learn meaningful OOV words with good character-level prediction accuracy and cross-entropy loss.

The protection of user privacy is an important concern in machine learning, as evidenced by the roll-out of the General Data Protection Regulation (GDPR) in the European Union (EU) in May 2018. The GDPR is designed to give users more control over their personal data, which motivates us to explore machine learning frameworks for data sharing that do not violate user privacy. To meet this goal, we propose a novel lossless privacy-preserving tree-boosting system known as SecureBoost in the setting of federated learning. SecureBoost first conducts entity alignment under a privacy-preserving protocol and then constructs boosting trees across multiple parties with a carefully designed encryption strategy. This federated learning system allows the learning process to be jointly conducted over multiple parties with common user samples but different feature sets, which corresponds to a vertically partitioned dataset. An advantage of SecureBoost is that it provides the same level of accuracy as the non-privacy-preserving approach while revealing no information about each private data provider. We show that the SecureBoost framework is as accurate as other non-federated gradient tree-boosting algorithms.

The explosion of image data on the Internet has the potential to foster more sophisticated and robust models and algorithms to index, retrieve, organize and interact with images and multimedia data. But exactly how such data can be harnessed and organized remains a critical problem. We introduce here a new database called ImageNet, a large-scale ontology of images built upon the backbone of the WordNet structure. ImageNet aims to populate the majority of the 80,000 synsets of WordNet with an average of 500-1,000 clean, full-resolution images. This will result in tens of millions of annotated images organized by the semantic hierarchy of WordNet. This paper offers a detailed analysis of ImageNet in its current state: 12 subtrees with 5,247 synsets and 3.2 million images in total. We show that ImageNet is much larger in scale and diversity, and much more accurate, than the current image datasets. Constructing such a large-scale database is a challenging task. We describe the data collection scheme with Amazon Mechanical Turk. Lastly, we illustrate the usefulness of ImageNet through three simple applications in object recognition, image classification and automatic object clustering.

We introduce a new language representation model called BERT, which stands for Bidirectional Encoder Representations from Transformers. Unlike recent language representation models (Peters et al., 2018a; Radford et al., 2018), BERT is designed to pre-train deep bidirectional representations from unlabeled text by jointly conditioning on both left and right context in all layers. As a result, the pre-trained BERT model can be fine-tuned with just one additional output layer to create state-of-the-art models for a wide range of tasks, such as question answering and language inference, without substantial task-specific architecture modifications. BERT is conceptually simple and empirically powerful. It obtains new state-of-the-art results on eleven natural language processing tasks, including pushing the GLUE score to 80.5 (7.7 points absolute improvement), MultiNLI accuracy to 86.7% (4.6% absolute improvement), SQuAD v1.1 question answering test F1 to 93.2 (1.5 points absolute improvement) and SQuAD v2.0 test F1 to 83.1 (5.1 points absolute improvement).

Federated learning allows population-level models to be trained without centralizing client data, by transmitting the global model to clients, calculating gradients locally, then averaging the gradients. Downloading models and uploading gradients uses the client's bandwidth, so minimizing these transmission costs is important. The data on each client is highly variable, so the benefit of training on different clients may differ dramatically. To exploit this, we propose active federated learning, where in each round clients are selected not uniformly at random, but with a probability conditioned on the current model and the data on the client, to maximize efficiency. We propose a cheap, simple and intuitive sampling scheme which reduces the number of required training iterations by 20-70% while maintaining the same model accuracy, and which mimics well-known resampling techniques under certain conditions.

Several machine learning models, including neural networks, consistently misclassify adversarial examples: inputs formed by applying small but intentionally worst-case perturbations to examples from the dataset, such that the perturbed input causes the model to output an incorrect answer with high confidence. Early attempts at explaining this phenomenon focused on nonlinearity and overfitting. We argue instead that the primary cause of neural networks' vulnerability to adversarial perturbation is their linear nature. This explanation is supported by new quantitative results, and it gives the first explanation of the most intriguing fact about adversarial examples: their generalization across architectures and training sets. Moreover, this view yields a simple and fast method of generating adversarial examples. Using this approach to provide examples for adversarial training, we reduce the test set error of a maxout network on the MNIST dataset.
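The "simple and fast method" this last abstract refers to is commonly known as the fast gradient sign method: one worst-case step in the direction of the sign of the input gradient. A hedged PyTorch sketch, with eps as an illustrative perturbation budget:

```python
import torch
import torch.nn.functional as F

def fgsm(model, x, y, eps=0.1):
    """Fast gradient sign method: a single step along the sign of the input
    gradient of the loss, i.e., the worst case under a linearized model."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    return (x + eps * x.grad.sign()).detach()

# Adversarial training then mixes clean and perturbed examples, e.g.:
#   x_adv = fgsm(model, x, y)
#   loss = 0.5 * (F.cross_entropy(model(x), y) + F.cross_entropy(model(x_adv), y))
```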
The following topics are dealt with: image segmentation; image texture; image motion analysis; object detection; tracking; feature selection; clustering; image reconstruction; face recognition; image sequences; computer vision; image sensors; and object recognition.

We train a recurrent neural network language model using a distributed, on-device learning framework called federated learning for the purpose of next-word prediction in a virtual keyboard for smartphones. Server-based training using stochastic gradient descent is compared with training on client devices using the federated averaging algorithm. The federated algorithm, which enables training on a higher-quality dataset for this use case, is shown to achieve better prediction recall. This work demonstrates the feasibility and benefit of training language models on client devices without exporting sensitive user data to servers. The federated learning environment gives users greater control over the use of their data and simplifies the task of incorporating privacy by default with distributed training and aggregation across a population of client devices.

A very simple way to improve the performance of almost any machine learning algorithm is to train many different models on the same data and then average their predictions. Unfortunately, making predictions using a whole ensemble of models is cumbersome and may be too computationally expensive to allow deployment to a large number of users, especially if the individual models are large neural nets. Caruana and his collaborators have shown that it is possible to compress the knowledge in an ensemble into a single model which is much easier to deploy, and we develop this approach further using a different compression technique. We achieve some surprising results on MNIST, and we show that we can significantly improve the acoustic model of a heavily used commercial system by distilling the knowledge in an ensemble of models into a single model. We also introduce a new type of ensemble composed of one or more full models and many specialist models which learn to distinguish fine-grained classes that the full models confuse. Unlike a mixture of experts, these specialist models can be trained rapidly and in parallel.
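The compression technique described here trains the single model against the ensemble's temperature-softened output distribution. A minimal PyTorch sketch of such a soft-target loss; the temperature T and mixing weight w are illustrative values, not ones taken from the paper:

```python
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=2.0, w=0.5):
    """Blend a soft-target term (teacher's temperature-softened predictions)
    with the usual hard-label cross entropy."""
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)  # rescale so gradient magnitude stays comparable across T
    hard = F.cross_entropy(student_logits, labels)
    return w * soft + (1 - w) * hard
```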
Deep learning has recently become hugely popular in machine learning, providing significant improvements in classification accuracy in the presence of highly structured and large databases. Researchers have also considered the privacy implications of deep learning. Models are typically trained in a centralized manner, with all the data being processed by the same training algorithm. If the data is a collection of users' private data, including habits, personal pictures, geographical positions, interests, and more, the centralized server will have access to sensitive information that could potentially be mishandled. To tackle this problem, collaborative deep learning models have recently been proposed in which parties locally train their deep learning structures and share only a subset of the parameters in an attempt to keep their respective training sets private. Parameters can also be obfuscated via differential privacy (DP) to make information extraction even more challenging, as proposed by Shokri and Shmatikov at CCS'15. Unfortunately, we show that any privacy-preserving collaborative deep learning is susceptible to a powerful attack that we devise in this paper. In particular, we show that a distributed, federated, or decentralized deep learning approach does not protect the training sets of honest participants.

Large-scale labeled data are generally required to train deep neural networks in order to obtain better performance in visual feature learning from images or videos for computer vision applications. To avoid the extensive cost of collecting and annotating large-scale datasets, self-supervised learning methods, a subset of unsupervised learning methods, have been proposed to learn general image and video features from large-scale unlabeled data without using any human-annotated labels. This paper provides an extensive review of deep learning-based self-supervised general visual feature learning methods from images or videos. First, the motivation, general pipeline, and terminology of this field are described. Then the common deep neural network architectures used for self-supervised learning are summarized. Next, the schema and evaluation metrics of self-supervised learning methods are reviewed, followed by the commonly used datasets for images, videos, audio, and 3D data, as well as the existing self-supervised visual feature learning methods. Finally, quantitative performance comparisons of the reviewed methods on benchmark datasets are summarized and discussed for both image and video feature learning. The paper concludes with a set of promising future directions.

Federated learning (FL) is a machine learning setting where many clients (e.g., mobile devices or whole organizations) collaboratively train a model under the orchestration of a central server (e.g., a service provider), while keeping the training data decentralized. FL embodies the principles of focused data collection and minimization, and can mitigate many of the systemic privacy risks and costs resulting from traditional, centralized machine learning and data science approaches. Motivated by the explosive growth in FL research, this paper discusses recent advances and presents an extensive collection of open problems and challenges.

We present a scalable approach for semi-supervised learning on graph-structured data that is based on an efficient variant of convolutional neural networks which operate directly on graphs. We motivate the choice of our convolutional architecture via a localized first-order approximation of spectral graph convolutions. Our model scales linearly in the number of graph edges and learns hidden-layer representations that encode both local graph structure and features of nodes. In a number of experiments on citation networks and on a knowledge graph dataset, we demonstrate that our approach outperforms related methods by a significant margin.
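The localized first-order propagation rule this abstract describes can be written in a few lines of NumPy; the ReLU nonlinearity and the random feature and weight shapes below are illustrative:

```python
import numpy as np

def gcn_layer(A, X, W):
    """One graph-convolution layer: H = ReLU(D^{-1/2} (A + I) D^{-1/2} X W),
    i.e., a symmetrically normalized adjacency with self-loops followed by a
    linear map, as in the first-order approximation described above."""
    A_hat = A + np.eye(A.shape[0])          # add self-loops
    d = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(d ** -0.5)
    return np.maximum(D_inv_sqrt @ A_hat @ D_inv_sqrt @ X @ W, 0.0)

A = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)  # toy graph
X = np.random.default_rng(0).normal(size=(3, 4))              # node features
W = np.random.default_rng(1).normal(size=(4, 2))              # layer weights
H = gcn_layer(A, X, W)  # (3, 2) hidden representations
```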
Gradient boosting decision trees (GBDTs) have become very successful in recent years, with many awards in machine learning and data mining competitions. There have been several recent studies on how to train GBDTs in the federated learning setting. In this paper, we focus on horizontal federated learning, where data samples with the same features are distributed among multiple parties. However, existing studies are not efficient or effective enough for practical use. They suffer either from inefficiency due to costly data transformations such as secret sharing and homomorphic encryption, or from low model accuracy due to differential privacy designs. In this paper, we study a practical federated environment with relaxed privacy constraints. In this environment, a dishonest party might obtain some information about the other parties' data, but it is still impossible for the dishonest party to derive the actual raw data of other parties. Specifically, each party boosts a number of trees by exploiting similarity information based on locality-sensitive hashing. We prove that our framework is secure without exposing the original records to other parties.

Federated learning enables a large number of edge computing devices to jointly learn a model without data sharing. As a leading algorithm in this setting, federated averaging (FedAvg) runs stochastic gradient descent (SGD) in parallel on a small subset of the total devices and averages the resulting models only once in a while. Despite its simplicity, it lacks theoretical guarantees under realistic settings. In this paper, we analyze the convergence of FedAvg on non-IID data and establish a convergence rate of O(1/T) for strongly convex and smooth problems, where T is the number of SGD iterations. Importantly, our bound demonstrates a trade-off between communication efficiency and convergence rate. As user devices may be disconnected from the server, we relax the assumption of full device participation to partial device participation and study different averaging schemes; a low device participation rate can be achieved without severely slowing down the learning. Our results indicate that heterogeneity of data slows down the convergence, which matches empirical observations.
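A minimal sketch of one FedAvg round on a toy least-squares objective, to make the "local SGD, then average" structure concrete; the client sample size, learning rate, and local step count are illustrative choices:

```python
import numpy as np

def fedavg_round(global_w, client_data, lr=0.1, local_steps=5, sample=3):
    """One FedAvg round: run SGD locally on a sampled subset of clients,
    then average the resulting weight vectors. Each client holds (X, y)
    for a least-squares problem min_w ||Xw - y||^2 (a toy objective)."""
    rng = np.random.default_rng()
    chosen = rng.choice(len(client_data), size=sample, replace=False)
    new_ws = []
    for c in chosen:
        X, y = client_data[c]
        w = global_w.copy()
        for _ in range(local_steps):
            grad = 2 * X.T @ (X @ w - y) / len(y)
            w -= lr * grad
        new_ws.append(w)
    return np.mean(new_ws, axis=0)  # server-side averaging step
```

Partial participation, the setting the convergence analysis relaxes to, corresponds to `sample` being smaller than the total number of clients.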
Machine learning relies on the availability of vast amounts of data for training. In reality, however, data are mostly scattered across different organizations and cannot be easily integrated due to many legal and practical constraints. To address this important challenge in the field of machine learning, we introduce a new technique and framework, known as federated transfer learning (FTL), to improve statistical modeling under a data federation. FTL allows knowledge to be shared without compromising user privacy and enables complementary knowledge to be transferred across domains in a data federation, thereby enabling a target-domain party to build flexible and effective models by leveraging rich labels from a source domain. This framework requires minimal modifications to the existing model structure and provides the same level of accuracy as non-privacy-preserving transfer learning. It is flexible and can be effectively adapted to various secure multiparty machine learning tasks.

After entering the big data era, the new term "big knowledge" has been coined to deal with the challenges of mining a mass of knowledge from big data. While researchers have explored the basic characteristics of big data, we have not seen any studies on the general and essential properties of big knowledge. To fill this gap, this paper studies the concepts of big knowledge, big-knowledge systems, and big-knowledge engineering. Ten massiveness characteristics for big knowledge and big-knowledge systems, including massive concepts, connectedness, clean data resources, cases, confidence, capabilities, cumulativeness, concerns, consistency, and completeness, are defined and explored. Based on these characteristics, a comprehensive investigation is conducted on some large-scale knowledge engineering projects, including the fifth comprehensive traffic survey in Shanghai, China's Xia-Shang-Zhou chronology project, the Troy and Trojan War project, and the international Human Genome Project, as well as the online free encyclopedia Wikipedia. We also investigate recent research efforts on knowledge graphs, analyzing which ones can be considered big knowledge.

Now that data science receives a lot of attention, the three disciplines of data analysis, databases, and the sciences are discussed with respect to the roles they play. In several discussions, I observed misunderstandings of artificial intelligence. Hence, it might be the right time to give a personal view of AI and the part of machine learning therein. Since the relation between machine learning and statistics is so close that sometimes the boundaries are blurred, explicit pointers to statistical research are made. Although not at all complete, the references are intended to support further interdisciplinary understanding of the fields.

We propose a new regularization method based on virtual adversarial loss: a new measure of local smoothness of the conditional label distribution given the input. Virtual adversarial loss is defined as the robustness of the conditional label distribution around each input data point against local perturbation. Unlike adversarial training, our method defines the adversarial direction without label information and is hence applicable to semi-supervised learning. Because the directions in which we smooth the model are only virtually adversarial, we call our method virtual adversarial training (VAT). The computational cost of VAT is relatively low. For neural networks, the approximated gradient of the virtual adversarial loss can be computed with no more than two pairs of forward and backward propagations. In our experiments, we applied VAT to supervised and semi-supervised learning tasks on multiple benchmark datasets. With a simple enhancement of the algorithm based on the entropy minimization principle, our VAT achieves state-of-the-art performance for semi-supervised learning tasks on SVHN and CIFAR-10.
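The virtual adversarial perturbation can be approximated with the two forward-backward pairs the abstract mentions: a power-iteration step to find the most label-sensitive direction, then a KL penalty at that direction. A hedged PyTorch sketch; xi, eps, and the single power iteration are illustrative choices, not values asserted from the paper:

```python
import torch
import torch.nn.functional as F

def vat_loss(model, x, xi=1e-6, eps=2.0, n_power=1):
    """Virtual adversarial training loss: find the local perturbation that
    most changes the predicted distribution (no labels needed), then
    penalize the resulting KL divergence."""
    with torch.no_grad():
        p = F.softmax(model(x), dim=-1)
    d = torch.randn_like(x)
    for _ in range(n_power):  # power iteration for the worst-case direction
        d = (xi * F.normalize(d.flatten(1), dim=1)).view_as(x)
        d.requires_grad_(True)
        kl = F.kl_div(F.log_softmax(model(x + d), dim=-1), p,
                      reduction="batchmean")
        d = torch.autograd.grad(kl, d)[0]
    r_adv = eps * F.normalize(d.flatten(1), dim=1).view_as(x)
    return F.kl_div(F.log_softmax(model(x + r_adv), dim=-1), p,
                    reduction="batchmean")
```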
We extend generative adversarial networks (GANs) to the semi-supervised context by forcing the discriminator network to output class labels. We train a generative model G and a discriminator D on a dataset with inputs belonging to one of N classes. At training time, D is made to predict which of N+1 classes the input belongs to, where an extra class is added to correspond to the outputs of G. We show that this method can be used to create a more data-efficient classifier and that it allows for generating higher-quality samples than a regular GAN.

Semi-supervised learning (SSL) provides a powerful framework for leveraging unlabeled data when labels are limited or expensive to obtain. SSL algorithms based on deep neural networks have recently proven successful on standard benchmark tasks. However, we argue that these benchmarks fail to address many issues that SSL algorithms would face in real-world applications. After creating a unified reimplementation of various widely used SSL techniques, we test them in a suite of experiments designed to address these issues. We find that the performance of simple baselines which do not use unlabeled data is often underreported, that SSL methods differ in sensitivity to the amount of labeled and unlabeled data, and that performance can degrade substantially when the unlabeled dataset contains out-of-distribution examples. To help guide SSL research towards real-world applicability, we make our unified reimplementation and evaluation platform publicly available.

We present an unsupervised visual feature learning algorithm driven by context-based pixel prediction. By analogy with auto-encoders, we propose context encoders: a convolutional neural network trained to generate the contents of an arbitrary image region conditioned on its surroundings. In order to succeed at this task, context encoders need both to understand the content of the entire image and to produce a plausible hypothesis for the missing part(s). When training context encoders, we have experimented with both a standard pixel-wise reconstruction loss and a reconstruction-plus-adversarial loss. The latter produces much sharper results because it can better handle multiple modes in the output. We found that a context encoder learns a representation that captures not just appearance but also the semantics of visual structures. We quantitatively demonstrate the effectiveness of our learned features for CNN pre-training on classification, detection, and segmentation tasks. Furthermore, context encoders can be used for semantic inpainting tasks, either stand-alone or as initialization for non-parametric methods.
Federated learning improves data privacy and efficiency in machine learning performed over networks of distributed devices, such as mobile phones, IoT and wearable devices. Yet models trained with federated learning can still fail to generalize to new devices due to the problem of domain shift. Domain shift occurs when the labeled data collected by source nodes statistically differs from the target node's unlabeled data. In this work, we present a principled approach to the problem of federated domain adaptation, which aims to align the representations learned among the different nodes with the data distribution of the target node. Our approach extends adversarial adaptation techniques to the constraints of the federated setting. In addition, we devise a dynamic attention mechanism and leverage feature disentanglement to enhance knowledge transfer. Empirically, we perform extensive experiments on several image and text classification tasks and show promising results in the unsupervised federated domain adaptation setting.

In this paper, we introduce a new model for leveraging unlabeled data to improve the generalization performance of image classifiers: a two-branch encoder-decoder architecture called HybridNet. The first branch receives the supervision signal and is dedicated to the extraction of invariant class-related representations. The second branch is fully unsupervised and dedicated to modeling the information discarded by the first branch in order to reconstruct the input data. To further support the expected behavior of our model, we propose an original training objective. It favors stability in the discriminative branch and complementarity between the representations learned in the two branches. HybridNet is able to outperform state-of-the-art results on CIFAR-10, SVHN and STL-10 in various semi-supervised settings. In addition, visualizations and ablation studies validate our contributions and the behavior of the model on both the CIFAR-10 and STL-10 datasets.

Which active learning methods can we expect to yield good performance in learning binary and multi-category logistic regression classifiers? Addressing this question is a natural first step in providing robust solutions for active learning across a wide variety of exponential models, including maximum entropy, generalized linear, log-linear, and conditional random field models. For the logistic regression model, we re-derive the variance reduction method known in experimental design circles as 'A-optimality.' We then run comparisons against different variations of the most widely used heuristic schemes, query by committee and uncertainty sampling, to discover which methods work best for different classes of problems and why. We find that among the strategies tested, the experimental design methods are most likely to match or beat a random-sample baseline. The heuristic alternatives produced mixed results, with an uncertainty sampling variant called margin sampling and a derivative method called QBB-MM providing the most promising performance at very low computational cost. Computational running times of the experimental design methods were a bottleneck to the evaluations. Meanwhile, evaluation of the heuristic methods led to an accumulation of negative results.

We quantitatively investigate how machine learning models leak information about the individual data records on which they were trained. We focus on the basic membership inference attack: given a data record and black-box access to a model, determine if the record was in the model's training dataset. To perform membership inference against a target model, we make adversarial use of machine learning and train our own inference model to recognize differences in the target model's predictions on the inputs that it trained on versus the inputs that it did not train on. We empirically evaluate our inference techniques on classification models trained by commercial "machine learning as a service" providers such as Google and Amazon. Using realistic datasets and classification tasks, including a hospital discharge dataset whose membership is sensitive from the privacy perspective, we show that these models can be vulnerable to membership inference attacks. We then investigate the factors that influence this leakage and evaluate mitigation strategies.
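The shadow-model recipe in this last abstract reduces to: train a model whose membership you control, then train an attack classifier on its confidence vectors for members versus non-members. A toy, fully synthetic scikit-learn sketch (no real target model or MLaaS provider is queried; all data are made up):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 10))
y = (X[:, 0] + 0.5 * rng.normal(size=2000) > 0).astype(int)
X_in, y_in, X_out = X[:1000], y[:1000], X[1000:]  # members vs. non-members

# Shadow model: we know exactly which records it was trained on.
shadow = RandomForestClassifier(n_estimators=50, random_state=0).fit(X_in, y_in)
conf_in = shadow.predict_proba(X_in)    # confidences on training members
conf_out = shadow.predict_proba(X_out)  # confidences on unseen points

# Attack model: learns to separate member from non-member confidence vectors.
attack_X = np.vstack([conf_in, conf_out])
attack_y = np.concatenate([np.ones(1000), np.zeros(1000)])
attack = LogisticRegression().fit(attack_X, attack_y)
print("attack accuracy:", attack.score(attack_X, attack_y))  # > 0.5 means leakage
```

The gap the attack exploits is overconfidence on training members; models that generalize well leak less, which is one of the mitigation directions the abstract alludes to.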
Variational autoencoders (VAEs) learn representations of data by jointly training a probabilistic encoder and decoder network. Typically these models encode all features of the data into a single variable. Here we are interested in learning disentangled representations that encode distinct aspects of the data into separate variables. We propose to learn such representations using model architectures that generalise from standard VAEs, employing a general graphical model structure in the encoder and decoder. This allows us to train partially specified models that make relatively strong assumptions about a subset of interpretable variables and rely on the flexibility of neural networks to learn representations for the remaining variables. We further define a general objective for semi-supervised learning in this model class, which can be approximated using an importance sampling procedure. We evaluate our framework's ability to learn disentangled representations, both by qualitative exploration of its generative capacity and by quantitative evaluation of its discriminative ability on a variety of models and datasets.

In this paper we present a method for learning a discriminative classifier from unlabeled or partially labeled data. Our approach is based on an objective function that trades off mutual information between observed examples and their predicted categorical class distribution against robustness of the classifier to an adversarial generative model. The resulting algorithm can be interpreted either as a natural generalization of the generative adversarial networks (GAN) framework or as an extension of the regularized information maximization (RIM) framework to robust classification against an optimal adversary. We empirically evaluate our method, which we dub categorical generative adversarial networks (CatGAN), on synthetic data as well as on challenging image classification tasks, demonstrating the robustness of the learned classifiers. We further qualitatively assess the fidelity of samples generated by the adversarial generator that is learned alongside the discriminative classifier, and identify links between the CatGAN objective and discriminative clustering algorithms such as RIM.

The recently proposed temporal ensembling has achieved state-of-the-art results in several semi-supervised learning benchmarks. It maintains an exponential moving average of label predictions on each training example and penalizes predictions that are inconsistent with this target. However, because the targets change only once per epoch, temporal ensembling becomes unwieldy when learning on large datasets. To overcome this problem, we propose Mean Teacher, a method that averages model weights instead of label predictions. As an additional benefit, Mean Teacher improves test accuracy and enables training with fewer labels than temporal ensembling. Without changing the network architecture, Mean Teacher achieves an error rate of 4.35% on SVHN with 250 labels, outperforming temporal ensembling trained with 1,000 labels. We also show that a good network architecture is crucial to performance. Combining Mean Teacher and residual networks, we improve the state of the art on CIFAR-10 with 4,000 labels from 10.55% to 6.28%, and on ImageNet 2012 with 10% of the labels from 35.24% to 9.11%.
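The weight-averaging step that distinguishes Mean Teacher from temporal ensembling is a one-liner per parameter. A minimal PyTorch sketch; the decay value and the mean-squared consistency loss are illustrative choices:

```python
import torch

@torch.no_grad()
def ema_update(teacher, student, decay=0.99):
    """Mean Teacher: the teacher's weights are an exponential moving average
    of the student's weights, updated after every training step."""
    for t_param, s_param in zip(teacher.parameters(), student.parameters()):
        t_param.mul_(decay).add_(s_param, alpha=1.0 - decay)

def consistency_loss(student_logits, teacher_logits):
    """Penalize disagreement between student and teacher predictions."""
    return torch.mean((torch.softmax(student_logits, -1)
                       - torch.softmax(teacher_logits, -1)) ** 2)
```

Because the teacher is updated every step rather than once per epoch, the targets track the student far more closely than in temporal ensembling, which is the scalability point the abstract makes.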
We introduce a pretraining technique called Selfie, which stands for SELF-supervised Image Embedding. Selfie generalizes the concept of masked language modeling from BERT (Devlin et al., 2019) to continuous data, such as images, by making use of the contrastive predictive coding loss (Oord et al., 2018). Given masked-out patches in an input image, our method learns to select the correct patch, among other "distractor" patches sampled from the same image, to fill in the masked location. This classification objective sidesteps the need to predict the exact pixel values of the target patches. The pretraining architecture of Selfie includes a network of convolutional blocks to process patches, followed by an attention pooling network to summarize the content of unmasked patches before predicting the masked ones. During finetuning, we reuse the convolutional weights found by pretraining. We evaluate Selfie on three benchmarks (CIFAR-10, ImageNet 32x32, and ImageNet 224x224) with varying amounts of labeled data, from 5% to 100% of the training sets. Our pretraining method provides consistent improvements to ResNet-50 across these settings.

Federated learning allows edge devices to collaboratively learn a shared model while keeping the training data on device, decoupling the ability to do model training from the need to store the data in the cloud. We propose the federated matched averaging (FedMA) algorithm, designed for federated learning of modern neural network architectures, e.g., convolutional neural networks (CNNs) and LSTMs. FedMA constructs the shared global model in a layer-wise manner by matching and averaging hidden elements (i.e., channels for convolutional layers, hidden states for LSTMs, neurons for fully connected layers) with similar feature extraction signatures. Our experiments indicate that FedMA not only outperforms popular state-of-the-art federated learning algorithms on deep CNN and LSTM architectures trained on real-world datasets, but also reduces the overall communication burden.

Federated learning is a distributed form of machine learning where both the training data and the model training are decentralized. In this paper, we use federated learning in a commercial, global-scale setting to train, evaluate and deploy a model that improves virtual keyboard search suggestion quality without direct access to the underlying user data. We describe our observations from federated training, compare metrics to live deployments, and present the resulting quality increases. In all, we demonstrate how federated learning can be applied end-to-end both to improve user experiences and to enhance user privacy.

Today's AI still faces two major challenges. One is that in most industries, data exists in the form of isolated islands. The other is the strengthening of data privacy and security. We propose a possible solution to these challenges: secure federated learning. Beyond the federated learning framework first proposed by Google in 2016, we introduce a comprehensive secure federated learning framework, which includes horizontal federated learning, vertical federated learning and federated transfer learning. We provide definitions, architectures and applications for the federated learning framework, and provide a comprehensive survey of existing works on this subject. In addition, we propose building data networks among organizations based on federated mechanisms as an effective solution for allowing knowledge to be shared without compromising user privacy.
The reinforcement learning paradigm is a popular way to address problems that have only limited environmental feedback, rather than correctly labeled examples, as is common in other machine learning contexts. While significant progress has been made to improve learning in a single task, the idea of transfer learning has only recently been applied to reinforcement learning tasks. The core idea of transfer is that experience gained in learning to perform one task can help improve learning performance in a related, but different, task. In this article we present a framework that classifies transfer learning methods in terms of their capabilities and goals, and then use it to survey the existing literature, as well as to suggest future directions for transfer learning work.

The EU and other public organizations, at different levels of national and local government across the world, have funded and invested in numerous research and development projects on big data transport applications over the last few years. The mid- and long-term effectiveness of these applications is very difficult to measure, and their benefits and usability are not easy to calculate. NOESIS, funded under the EU H2020 programme, aims to design a decision-support tool by gathering and analyzing these applications as use cases, to formulate sufficient knowledge for policy makers to make informed decisions about their big data transport applications. The challenges in this work are associated with a small number of samples with incomplete information, but a good-sized set of features that must be analyzed to make a confident enough recommendation. This paper reports the various statistical and machine learning approaches used to address these challenges and their results.

Federated learning (FL) is a heavily promoted approach for training ML models on sensitive data, e.g., text typed by users on their smartphones. FL is expressly designed for training on data that are unbalanced and non-IID across the participants. To ensure privacy and integrity of the federated model, the latest FL approaches use differential privacy or robust aggregation to limit the influence of "outlier" participants. First, we show that on standard tasks such as next-word prediction, many participants gain no benefit from FL because the federated model is less accurate on their data than the models they can train locally on their own. Second, we show that differential privacy and robust aggregation make this problem worse by further reducing the accuracy of the federated model for many participants. We then evaluate three techniques for local adaptation of federated models: fine-tuning, multi-task learning, and knowledge distillation. We analyze where each technique is applicable and demonstrate that all participants benefit from local adaptation. Participants whose local models are poor obtain large accuracy improvements over conventional FL.
The genetic testing and genetic screening of children are commonplace. Decisions about whether to offer genetic testing and screening should be driven by the best interest of the child. The growing literature on the psychosocial and clinical effects of such testing and screening can help inform best practices. This technical report provides ethical justification and empirical data in support of the proposed policy recommendations regarding such practices in a myriad of settings.
Intensity-modulated radiation therapy (IMRT) can sculpt the high-dose volume around the site of disease with hitherto unachievable precision. Conformal avoidance of normal tissues goes hand in hand with this, and inhomogeneous dose painting is possible. The technique has become a clinical reality and is likely to be the dominant approach this decade for improving the clinical practice of photon therapy. This series will explore all aspects of the "IMRT chain". Only 15 years ago just a handful of physicists were working on this subject; IMRT has developed so rapidly that its recent past is also its ancient history. This article reviews the history of IMRT with just a glance at precursors. The physical basis of IMRT is then described, including an attempt to introduce the concepts of convex and concave dose distributions, ill-conditioning, inverse-problem degeneracy, cost functions and complex solutions, all with a minimum of technical jargon or mathematics. The many techniques for inverse planning are described, and the review concludes with a look forward to the future of image-guided IMRT (IG-IMRT).

This paper reports on the analysis of intensity-modulated radiation treatment optimization problems in the presence of non-convex feasible parameter spaces caused by the specification of dose-volume constraints for the organs at risk (OARs). The main aim was to determine whether the presence of those non-convex spaces affects the optimization of clinical cases in any significant way. This was done in two phases. (1) Using a carefully designed two-dimensional mathematical phantom that exhibits two controllable minima, and with randomly initialized beamlet weights, we developed a methodology for exploring the convergence characteristics of quadratic cost function optimizations (deterministic or stochastic). The methodology is based on observing the statistical behaviour of the residual cost at the end of optimizations in which the stopping criterion is progressively more demanding, carrying out those optimizations to very small error changes per iteration. (2) Seven clinical cases were then analysed with dose-volume constraints that are stronger than originally used in the clinic. The clinical cases are two prostate cases posed differently, a meningioma case, two head-and-neck cases, a spleen case and a spine case.

Dose optimization requires that the treatment goals be specified in a meaningful manner, but also that alterations to the specification lead to predictable changes in the resulting dose distribution. Within the framework of constrained optimization, it is possible to devise a tool that quantifies the impact, on the objective of target volume coverage, of any change to a dosimetric constraint of normal tissue or target dose homogeneity. This sensitivity analysis relies on properties of the Lagrange function associated with the constrained optimization problem, but does not depend on the method used to solve this problem. It is particularly useful in cases with multiple target volumes and critical normal structures, where constraints and objectives can interact in a non-intuitive manner.
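The Lagrange-function property this last abstract relies on is the standard sensitivity interpretation of multipliers: at an optimum, the multiplier of an active constraint gives the rate of change of the optimal objective with respect to that constraint's bound. In notation of our own choosing (not the paper's):

```latex
% Constrained dose optimization with bounds c_j on dosimetric constraints:
\min_{x}\; F(x) \quad \text{subject to} \quad g_j(x) \le c_j, \; j = 1, \dots, m,
% Sensitivity of the optimal objective F^* to a bound c_j is given by the
% optimal Lagrange multiplier of the corresponding active constraint:
\frac{\partial F^{\ast}}{\partial c_j} = -\lambda_j^{\ast}.
```

Inactive constraints have zero multipliers, which is why the tool can report that loosening them would not improve target coverage at all.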
The major challenge in intensity-modulated radiotherapy planning is to find the right balance between tumor control and normal tissue sparing. The most desirable solution is never physically feasible, and a compromise has to be found. One possible way to approach this problem is constrained optimization. In this context, it is worthwhile to quantitatively predict the impact of adjustments of the constraints on the optimal dose distribution. This has been dealt with in regard to cost functions in a previous paper. The aim of the present paper is to introduce spatial resolution to this formalism. Our method reveals the active constraints in a target subvolume that was previously selected by the practitioner for its insufficient dose, which is useful when a multitude of constraints could be the cause of a cold spot. The response of the optimal dose distribution to an adjustment of constraints (a perturbation) is predicted. We conclude with a clinical example.

Objective: Radiobiological models provide a means of evaluating treatment plans. Keeping in mind their inherent limitations, they can also be used prospectively to design new treatment strategies that maximise the therapeutic ratio. We propose here a new method to customise fractionation and prescription dose. Methods: To illustrate our new approach, two non-small cell lung cancer treatment plans and one prostate plan from our archive are analysed using the in-house software tool BioSuite. BioSuite computes normal tissue complication probability and tumour control probability using various radiobiological models, and can suggest radiobiologically optimal prescription doses and fractionation schemes with limited toxicity. Results: Dose-response curves present varied aspects depending on the nature of each case. The optimisation process suggests doses and fractionation schemes differing from the original ones. Patterns of optimisation depend on the degree of conformality and the behaviour of the normal tissue.

Purpose: To investigate the potential role of incidental heart irradiation in the risk of radiation pneumonitis (RP) for patients receiving definitive radiation therapy for non-small-cell lung cancer (NSCLC). Material and methods: Two hundred and nine patient datasets were available for this study. Heart and lung dose-volume parameters were extracted for modeling, based on Monte Carlo-based, heterogeneity-corrected dose distributions. Clinical variables tested included age, gender, chemotherapy, pre-treatment weight loss, performance status, and smoking history. The risk of RP was modeled using logistic regression. Results: The most significant univariate variables were heart related, such as heart V65 (percent volume receiving at least 65 Gy) (Spearman rs = 0.245, p < 0.001). The best-performing logistic regression model included heart D10 (minimum dose to the hottest 10% of the heart), lung D35, and maximum lung dose (Spearman rs = 0.268, p < 0.0001).
Determining the "best" optimization parameters in IMRT planning is typically a time-consuming trial-and-error process with no unambiguous termination point. Recently, we and others proposed a goal-programming approach which better captures the desired prioritization of dosimetric goals. Here, individual prescription goals are addressed stepwise in their order of priority. In the first step, only the highest-order goals are considered (target coverage and dose-limiting normal structures). In subsequent steps, the achievements of the previous steps are turned into hard constraints, and lower-priority goals are optimized in turn, subject to the higher-priority constraints. So-called "slip" factors were introduced to allow for slight, clinically acceptable violations of the constraints. Focusing on head and neck cases, we present several examples of this planning technique. The main advantages of the new optimization method are (i) its ability to generate plans that meet the clinical goals as well as possible, without tuning any weighting factors or dose-volume constraints, and (ii) the ability to conveniently include additional terms such as fluence-map smoothness.

In this work, a prioritized optimization algorithm is adapted and applied to treatment planning for intensity-modulated proton therapy (IMPT). Originally, this algorithm was developed for intensity-modulated radiation therapy (IMRT) with photons. Prioritized optimization converts the clinical hierarchy of treatment goals into an effective optimization scheme for treatment planning. It presents an alternative to conventional methods that combine all optimization goals into a single optimization run with a weighted sum of all planning aims in the objective function. The highest-order goal in the first step is to achieve a homogeneous dose distribution of the prescribed dose in the tumour. In subsequent steps, the dose to organs at risk (OARs) is minimized according to their clinical priority, whereby the results of previous steps are turned into hard constraints. The large number of degrees of freedom provided by the additional energy modulation of protons enables better protection of OARs while maintaining the prescribed dose in the planning target volume (PTV). The solution space of subsequent optimization steps can be extended by introducing a slip factor, which allows a slight, clinically acceptable relaxation of the higher-priority constraints.

Optimization problems in IMRT inverse planning are inherently multicriterial, since they involve multiple planning goals for targets and their neighbouring critical tissue structures. Clinical decisions are generally required, based on trade-offs among these goals. Since the trade-offs cannot be quantitatively determined prior to optimization, the decision-making process is usually indirect and iterative, requiring many repeated optimizations. This situation becomes even more challenging for cases with a large number of planning goals. To address this challenge, a multicriteria optimization strategy called lexicographic ordering (LO) has been implemented and evaluated for IMRT planning. The LO approach is a hierarchical method in which the planning goals are categorized into different priority levels and a sequence of suboptimization problems is solved in order of priority. This prioritization concept is demonstrated using two clinical cases (a simple prostate case and a relatively complex head and neck case). In addition, a unique feature of LO in a decision-support role is discussed. We demonstrate that a comprehensive list of planning goals (e.g., 23 for the head and neck case) can be optimized using only a small number of sequential suboptimizations.
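The stepwise scheme the last three abstracts share (optimize the top goal, freeze its achieved level as a constraint with a small slip, then move down the hierarchy) can be sketched with SciPy on toy quadratic objectives. The objectives, the slip value, and the two-level hierarchy below are all illustrative stand-ins for real dose metrics:

```python
import numpy as np
from scipy.optimize import minimize, NonlinearConstraint

def f_target(x):  # priority 1: target-coverage surrogate (toy quadratic)
    return (x[0] - 1.0) ** 2 + (x[1] - 1.0) ** 2

def f_oar(x):     # priority 2: organ-at-risk dose surrogate (toy quadratic)
    return x[0] ** 2 + 0.5 * x[1] ** 2

x0 = np.zeros(2)
res1 = minimize(f_target, x0)            # stage 1: optimize the top goal
slip = 0.05  # allow a clinically negligible degradation of the higher goal
cons = NonlinearConstraint(f_target, -np.inf, res1.fun + slip)
res2 = minimize(f_oar, res1.x, constraints=[cons])  # stage 2, constrained
print("stage-1 objective:", res1.fun, "stage-2 solution:", res2.x)
```

Without the slip the stage-2 feasible set can collapse to a single point; a small relaxation is what widens the search space, as the abstracts note.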
Treatment planning for intensity-modulated radiation therapy (IMRT) is challenging due to both the size of the computational problems (thousands of variables and constraints) and the multi-objective, imprecise nature of the goals. We apply hierarchical programming to IMRT treatment planning. In this formulation, treatment planning goals/objectives are ordered in an absolute hierarchy, and the problem is solved from the top down, such that more important goals are optimized in turn. After each objective is optimized, that objective function is converted into a constraint when optimizing lower-priority objectives. We also demonstrate the usefulness of a linear/quadratic formulation, including the use of mean-tail-dose (the mean dose to the hottest fraction of a given structure), to facilitate computational efficiency. In contrast to the conventional use of dose-volume constraints (no more than x% of the volume of a structure should receive more than y dose), the mean-tail-dose formulation ensures convex feasibility spaces and convex objective functions. To widen the search space without seriously degrading higher-priority goals, we allow higher-priority constraints to relax, or "slip", by a clinically negligible amount during lower-priority iterations.

In multi-objective radiotherapy planning, we are interested in Pareto surfaces of dimension 2 up to about 10 (for head and neck cases, the number of structures to trade off can be this large). A key question that has not been answered yet is: how many plans does it take to sufficiently represent a high-dimensional Pareto surface? In this paper, we present a method to answer this question, and we show that the number of points needed is modest: 75 plans always controlled the error to within 5%, and in all cases but one, N+1 plans, where N is the number of objectives, was enough for less than 15% error. We introduce objective correlation matrices and principal component analysis (PCA) of the beamlet solutions as two methods to understand this. PCA reveals that the feasible beamlet solutions of a Pareto database lie in a narrow, small-dimensional subregion of the full beamlet space, which helps explain why the number of plans needed to characterize the database is small.

Multiobjective radiotherapy planning aims to capture all clinically relevant trade-offs between the various planning goals. This is accomplished by calculating a representative set of Pareto optimal solutions and storing them in a database. The structure of these representative Pareto sets is still not fully investigated. We propose two methods for a systematic analysis of multiobjective databases: principal component analysis and the isomap method. Both methods are able to extract the key trade-offs from a database and provide information which can lead to a better understanding of the clinical case and of intensity-modulated radiation therapy planning in general.

Approaches to approximating the efficient and Pareto sets of multiobjective programs are reviewed. Special attention is given to approximating structures, methods for generating Pareto points, and approximation quality. The survey covers 48 articles published since 1975.

Contents: Preface. Acknowledgements. Notation and symbols. Part I, Terminology and theory: introduction; concepts; theoretical background. Part II, Methods: introduction; no-preference methods; a posteriori methods; a priori methods; interactive methods. Part III, Related issues: comparing methods; software; graphical illustration; future directions; epilogue. References. Index.
The authors recently proposed the normal constraint (NC) method for generating a set of evenly spaced solutions on a Pareto frontier for multiobjective optimization problems. Since few methods offer this desirable characteristic, the new method can be of significant practical use in the choice of an optimal solution in a multiobjective setting. This paper's specific contribution is two-fold. First, it presents a new formulation of the NC method that incorporates a critical linear mapping of the design objectives. This mapping has the desirable property that the resulting performance of the method is entirely independent of the scales of the design objectives; we address here the fact that scaling issues can pose formidable difficulties. Second, the notion of a Pareto filter is presented and an algorithm for it is developed. As its name suggests, a Pareto filter is an algorithm that retains only the global Pareto points, given a set of points in objective space. As is explained in the paper, the Pareto filter is useful in the application of the NC and other methods. Numerical examples are provided.
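A Pareto filter as described, assuming minimization in every objective, is short enough to state exactly; the sample points below are illustrative:

```python
import numpy as np

def pareto_filter(points):
    """Retain only globally non-dominated points (minimization in every
    objective): drop p if some q satisfies q <= p everywhere and q < p
    somewhere."""
    pts = np.asarray(points, dtype=float)
    keep = []
    for i, p in enumerate(pts):
        dominated = np.any(np.all(pts <= p, axis=1) & np.any(pts < p, axis=1))
        if not dominated:
            keep.append(i)
    return pts[keep]

pts = np.array([[1.0, 3.0], [2.0, 2.0], [3.0, 1.0], [2.5, 2.5]])
print(pareto_filter(pts))  # drops (2.5, 2.5), which (2.0, 2.0) dominates
```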
, using less time-consuming optimizations . secondly , we introduce a new method story_separator_special_tag we consider the problem of approximating pareto surfaces of convex multicriteria optimization problems by a discrete set of points and their convex combinations . finding the scalarization parameters that optimally limit the approximation error when generating a single pareto optimal solution is a nonconvex optimization problem . this problem can be solved by enumerative techniques but at a cost that increases exponentially with the number of objectives . we present an algorithm for solving the pareto surface approximation problem that is practical with 10 or fewer conflicting objectives , motivated by an application to radiation therapy optimization . our enumerative scheme is , in a sense , dual to a family of previous algorithms . the proposed technique retains the quality of the best previous algorithm in this class while solving fewer subproblems . a further improvement is provided by a procedure for discarding subproblems based on reusing information from previous solves . the combined effect of the enhancements is empirically demonstrated to reduce the computational expense of solving the pareto surface approximation problem by orders of magnitude . for problems where the objectives have positive curvature , an improved bound on the approximation error is demonstrated using transformations of story_separator_special_tag inherently , imrt treatment planning involves compromising between different planning goals . multi-criteria imrt planning directly addresses this compromising and thus makes it more systematic . usually , several plans are computed from which the planner selects the most promising following a certain procedure . applying pareto navigation for this selection step simultaneously increases the variety of planning options and eases the identification of the most promising plan . pareto navigation is an interactive multi-criteria optimization method that consists of the two navigation mechanisms 'selection ' and 'restriction ' . the former allows the formulation of wishes whereas the latter allows the exclusion of unwanted plans . they are realized as optimization problems on the so-called plan bundle -- a set constructed from pre-computed plans . they can be approximately reformulated so that their solution time is a small fraction of a second . thus , the user can be provided with immediate feedback regarding his or her decisions . pareto navigation was implemented in the mira navigator software and allows real-time manipulation of the current plan and the set of considered plans . the changes are triggered by simple mouse operations on the so-called navigation star and lead to story_separator_special_tag purpose we completed an implementation of pencil-beam scanning ( pbs ) , a technology whereby a focused beam of protons , of variable intensity and energy , is scanned over a plane perpendicular to the beam axis and in depth . the aim of radiotherapy is to improve the target to healthy tissue dose differential . we illustrate how pbs achieves this aim in a patient with a bulky tumor . methods and materials our first deployment of pbs uses `` broad '' pencil-beams ranging from 20 to 35 mm ( full-width-half-maximum ) over the range interval from 32 to 7 g/cm² . such beam-brushes offer a unique opportunity for treating bulky tumors .
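the plan database generation procedure mentioned above , and the sandwich and enumerative approximations before it , all rest on the same primitive : solve one scalarized problem per weight vector and store the resulting pareto-optimal plan . a minimal sketch on a toy convex bi-objective problem whose scalarized minimizer is available in closed form ( the minimizers a and b are hypothetical ; a real system would run a full fluence-map optimization per weight ) :

```python
# a minimal sketch of building a discrete pareto database by weighted-sum
# scalarization of a toy convex bi-objective problem: minimize
#   w * f1(x) + (1 - w) * f2(x),   f_i(x) = ||x - c_i||^2,
# whose minimizer is the convex combination of the two centers.
import numpy as np

a = np.array([0.0, 0.0])                     # minimizer of objective 1 (toy)
b = np.array([2.0, 1.0])                     # minimizer of objective 2 (toy)
f1 = lambda x: float(np.sum((x - a) ** 2))
f2 = lambda x: float(np.sum((x - b) ** 2))

database = []
for w in np.linspace(0.01, 0.99, 9):         # sweep scalarization weights
    x_star = w * a + (1 - w) * b             # argmin of w*f1 + (1-w)*f2
    database.append((w, f1(x_star), f2(x_star)))

for w, v1, v2 in database:
    print(f"w={w:.2f}  f1={v1:.3f}  f2={v2:.3f}")
```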
we present a case study of a large ( 4,295 cc clinical target volume ) retroperitoneal sarcoma treated to 50.4 gy relative biological effectiveness ( rbe ) ( presurgery ) using a course of photons and protons to the clinical target volume and a course of protons to the gross target volume . results we describe our system and present the dosimetry for all courses and provide an interdosimetric comparison . discussion the use of pbs for bulky targets reduces the complexity of treatment planning story_separator_special_tag purpose : to introduce a method to simultaneously explore a collection of pareto surfaces . the method will allow radiotherapy treatment planners to interactively explore treatment plans for different beam angle configurations as well as different treatment modalities . methods : the authors assume a convex optimization setting and represent the pareto surface for each modality or given beam set by a set of discrete points on the surface . weighted averages of these discrete points produce a continuous representation of each pareto surface . the authors calculate a set of pareto surfaces and use linear programming to navigate across the individual surfaces , allowing switches between surfaces . the switches are organized such that the plan profits in the requested way , while trying to keep the change in dose as small as possible . results : the system is demonstrated on a phantom pancreas imrt case using 100 different five beam configurations and a multicriteria formulation with six objectives . the system has intuitive behavior and is easy to control . also , because the underlying linear programs are small , the system is fast enough to offer real-time exploration for the pareto surfaces of the given beam story_separator_special_tag we consider pareto surface based multi-criteria optimization for step and shoot imrt planning . by analyzing two navigation algorithms , we show both theoretically and in practice that the number of plans needed to form convex combinations of plans during navigation can be kept small ( much less than the theoretical maximum number needed in general , which is equal to the number of objectives for on-surface pareto navigation ) . therefore a workable approach for directly deliverable navigation in this setting is to segment the underlying pareto surface plans and then enforce the mild restriction that only a small number of these plans are active at any time during plan navigation , thus limiting the total number of segments used in the final plan . story_separator_special_tag the optimization of beam angles in imrt planning is still an open problem , with literature focusing on heuristic strategies and exhaustive searches on discrete angle grids . we show how a beam angle set can be locally refined in a continuous manner using gradient-based optimization in the beam angle space . the gradient is derived using linear programming duality theory . applying this local search to 100 random initial angle sets of a phantom pancreatic case demonstrates the method , and highlights the many-local-minima aspect of the bao problem . due to this function structure , we recommend a search strategy of a thorough global search followed by local refinement at promising beam angle sets . extensions to nonlinear imrt formulations are discussed . 
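the lp-based navigation across stored plans described above is small enough to write out . weighted averages of database plans are represented by a weight vector on the simplex ; for objectives that are linear in dose , the mixed plan 's objective values are exactly f @ w , and for general convex objectives they are upper bounds , which is the usual justification for navigating on them . the database f and the bounds below are toy placeholders :

```python
# a minimal sketch of pareto-surface navigation over a database of
# pre-computed plans: a small lp finds convex-combination weights that
# improve one objective while keeping bounds on the others.
import numpy as np
from scipy.optimize import linprog

def navigate(F, j, bounds_on_others):
    """minimize objective j over convex combinations of database plans,
    keeping every other objective i at or below bounds_on_others[i].
    F is an (n_objectives x n_plans) matrix of per-plan objective values."""
    m, k = F.shape
    others = [i for i in range(m) if i != j]
    res = linprog(c=F[j],                             # objective j of the mix
                  A_ub=F[others],                     # F[i] @ w <= bound_i
                  b_ub=np.array([bounds_on_others[i] for i in others]),
                  A_eq=np.ones((1, k)), b_eq=[1.0],   # weights sum to one
                  bounds=[(0, None)] * k)             # w >= 0
    return res.x, res.fun

# toy 3-objective, 5-plan database
F = np.array([[1.0, 2.0, 3.0, 2.5, 1.5],
              [3.0, 1.0, 2.0, 1.5, 2.5],
              [2.0, 3.0, 1.0, 2.0, 2.0]])
w, val = navigate(F, j=0, bounds_on_others={1: 2.2, 2: 2.5})
print("weights:", np.round(w, 3), "objective 0 of the mix:", round(val, 3))
```

because an lp vertex solution has at most as many nonzero weights as there are constraints , only a handful of plans come out active , which is consistent with the observation above that the number of plans combined during navigation can be kept small .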
story_separator_special_tag purpose : in current intensity-modulated radiation therapy ( imrt ) plan optimization , the focus is on either finding optimal beam angles ( or other beam delivery parameters such as field segments , couch angles , gantry angles ) or optimal beam intensities . in this article we offer a mixed integer programming ( mip ) approach for simultaneously determining an optimal intensity map and optimal beam angles for imrt delivery . using this approach , we pursue an experimental study designed to ( a ) gauge differences in plan quality metrics with respect to different tumor sites and different mip treatment planning models , and ( b ) test the concept of a critical-normal-tissue ring , a tissue ring of 5 mm thickness drawn around the planning target volume ( ptv ) , and its use for designing conformal plans . methods and materials : our treatment planning models use two classes of decision variables to capture the beam configuration and intensities simultaneously . binary ( 0/1 ) variables are used to capture `` on '' or `` off '' or `` yes '' or `` no '' decisions for each field , and nonnegative continuous variables are used to represent intensities of story_separator_special_tag we view the beam orientation optimization ( boo ) problem in intensity-modulated radiation therapy ( imrt ) treatment planning as a global optimization problem with expensive objective function evaluations . we propose a response surface method that , in contrast with other approaches , allows for the generation of problem data only for promising beam orientations as the algorithm progresses . this enables the consideration of additional degrees of freedom in the treatment delivery , i.e. , many more candidate beam orientations than is possible with existing approaches to boo . this ability allows us to include noncoplanar beams and consider the question of whether or not noncoplanar beams can provide significant improvement in treatment plan quality . we also show empirically that using our approach , we can generate clinically acceptable treatment plans that require fewer beams than are used in current practice . story_separator_special_tag purpose : the purpose of this article is to explore the use of the accelerated exhaustive search strategy for developing and validating methods for optimizing beam orientations for intensity-modulated radiation therapy ( imrt ) . combining beam-angle optimization ( bao ) with intensity distribution optimization is expected to improve the quality of imrt treatment plans . however , bao is one of the most difficult problems to solve adequately because of the huge hyperspace of possible beam configurations ( e.g. , selecting 7 of 36 uniformly spaced coplanar beams would require the intercomparison of 8,347,680 imrt plans ) . methods and materials : an influence vector ( iv ) approximation technique for high-speed estimation of imrt dose distributions was used in combination with a fast gradient search algorithm ( newton 's method ) for imrt optimization . in the iv approximation , it is assumed that the change in intensity of a ray ( or bixel ) proportionately changes dose along the ray . evidence is presented that the iv approximation is valid for bao .
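a cheap baseline for the beam-orientation searches discussed above , and the strategy the icycle passage below adopts , is greedy sequential selection : score every remaining candidate direction by the plan quality achieved when it is added to the current set , keep the best , and repeat . in this sketch plan_score is a hypothetical stand-in for a full fluence-map optimization , and the candidate set is toy data :

```python
# a minimal sketch of greedy, sequential beam-angle selection. the scoring
# function is a toy surrogate; a real system would solve a fluence-map
# optimization for each trial beam set and score the resulting plan.
import numpy as np

candidates = list(range(0, 360, 20))          # hypothetical coplanar candidates

def plan_score(angles):
    """toy surrogate for the optimal plan objective restricted to `angles`
    (lower is better): rewards more beams and larger angular spread."""
    if not angles:
        return float("inf")
    return 1.0 / (1 + len(angles)) + 1.0 / (1 + float(np.ptp(angles)))

selected = []
for _ in range(5):                            # add five beams, one at a time
    best = min((a for a in candidates if a not in selected),
               key=lambda a: plan_score(selected + [a]))
    selected.append(best)

print("selected beam angles:", sorted(selected))
```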
the scatter contribution at points away from the ray is accounted for fully in imrt optimization after the optimum beam orientation has been determined story_separator_special_tag purpose : to introduce icycle , a novel algorithm for integrated , multicriterial optimization of beam angles , and intensity modulated radiotherapy ( imrt ) profiles . methods : a multicriterial plan optimization with icycle is based on a prescription called a wish-list , containing hard constraints and objectives with ascribed priorities . priorities are ordinal parameters used for relative importance ranking of the objectives . the higher an objective priority is , the higher the probability that the corresponding objective will be met . beam directions are selected from an input set of candidate directions . input sets can be restricted , e.g. , to allow only generation of coplanar plans , or to avoid collisions between patient/couch and the gantry in a noncoplanar setup . obtaining clinically feasible calculation times was an important design criterion for development of icycle . this could be realized by sequentially adding beams to the treatment plan in an iterative procedure . each iteration loop starts with selection of the optimal direction to be added . then , a pareto-optimal imrt plan is generated for the ( fixed ) beam setup that includes all so far selected directions , using a previously published algorithm for story_separator_special_tag intensity-modulated radiation therapy is the technique of delivering radiation to cancer patients by using non-uniform radiation fields from selected angles , with the aim of reducing the intensity of the beams that go through critical structures while reaching the dose prescription in the target volume . two decisions are of fundamental importance : to select the beam angles and to compute the intensity of the beams used to deliver the radiation to the patient . often , these two decisions are made separately : first , the treatment planners , on the basis of experience and intuition , decide the orientation of the beams and then the intensities of the beams are optimized by using an automated software tool . automatic beam angle selection ( also known as beam angle optimization ) is an important problem and is today often based on human experience . in this context , we face the problem of optimizing both the decisions , developing an algorithm which automatically selects the beam angles and computes the beam intensities . we propose a hybrid heuristic method , which combines a simulated annealing procedure with the knowledge of the gradient . gradient information is used to quickly story_separator_special_tag imrt treatment plans for step-and-shoot delivery have traditionally been produced through the optimization of intensity distributions ( or maps ) for each beam angle . the optimization step is followed by the application of a leaf-sequencing algorithm that translates each intensity map into a set of deliverable aperture shapes . in this article , we introduce an automated planning system in which we bypass the traditional intensity optimization , and instead directly optimize the shapes and the weights of the apertures . we call this approach direct aperture optimization . this technique allows the user to specify the maximum number of apertures per beam direction , and hence provides significant control over the complexity of the treatment delivery .
this is possible because the machine dependent delivery constraints imposed by the mlc are enforced within the aperture optimization algorithm rather than in a separate leaf-sequencing step . the leaf settings and the aperture intensities are optimized simultaneously using a simulated annealing algorithm . we have tested direct aperture optimization on a variety of patient cases using the egs4/beam monte carlo package for our dose calculation engine . the results demonstrate that direct aperture optimization can produce highly conformal step-and-shoot treatment story_separator_special_tag we consider the problem of intensity-modulated radiation therapy ( imrt ) treatment planning using direct aperture optimization . while this problem has been relatively well studied in recent years , most approaches employ a heuristic approach to the generation of apertures . in contrast , we use an exact approach that explicitly formulates the fluence map optimization ( fmo ) problem as a convex optimization problem in terms of all multileaf collimator ( mlc ) deliverable apertures and their associated intensities . however , the number of deliverable apertures , and therefore the number of decision variables and constraints in the new problem formulation , is typically enormous . to overcome this , we use an iterative approach that employs a subproblem whose optimal solution either provides a suitable aperture to add to a given pool of allowable apertures or concludes that the current solution is optimal . we are able to handle standard consecutiveness , interdigitation and connectedness constraints that may be imposed by the particular mlc system used , as well as jaws-only delivery . our approach has the additional advantage that it can explicitly account for transmission of dose through the part of an aperture that is story_separator_special_tag navigation-based multi-criteria optimization has been introduced to radiotherapy planning in order to allow the interactive exploration of trade-offs between conflicting clinical goals . however , this has been mainly applied to fluence map optimization . the subsequent leaf sequencing step may cause dose discrepancy , leading to human iteration loops in the treatment planning process that multi-criteria methods were meant to avoid . to circumvent this issue , this paper investigates the application of direct aperture optimization methods in the context of multi-criteria optimization . we develop a solution method to directly obtain a collection of apertures that can adequately span the entire pareto surface . to that end , we extend the column generation method for direct aperture optimization to a multi-criteria setting in which apertures that can improve the entire pareto surface are sequentially identified and added to the treatment plan . our proposed solution method can be embedded in a navigation-based multi-criteria optimization framework , in which the treatment planner explores the trade-off between treatment objectives directly in the space of deliverable apertures . our solution method is demonstrated for a paraspinal case where the trade-off between target coverage and spinal-cord sparing is studied . the computational story_separator_special_tag purpose : to make the planning of volumetric modulated arc therapy ( vmat ) faster and to explore the tradeoffs between planning objectives and delivery efficiency . methods : a convex multicriteria dose optimization problem is solved for an angular grid of 180 equi-spaced beams . 
this allows the planner to navigate the ideal dose distribution pareto surface and select a plan of desired target coverage versus organ at risk sparing . the selected plan is then made vmat deliverable by a fluence map merging and sequencing algorithm , which combines neighboring fluence maps based on a similarity score and then delivers the merged maps together , simplifying delivery . successive merges are made as long as the dose distribution quality is maintained . the complete algorithm is called vmerge . results : vmerge is applied to three cases : a prostate , a pancreas , and a brain . in each case , the selected pareto-optimal plan is matched almost exactly with the vmat merging routine , resulting in a high quality plan delivered with a single arc in less than five minutes on average . vmerge offers significant improvements over existing vmat algorithms . the first is the story_separator_special_tag to formulate and solve the fluence-map merging procedure of the recently-published vmat treatment-plan optimization method , called vmerge , as a bi-criteria optimization problem . using an exact merging method rather than the previously-used heuristic , we are able to better characterize the trade-off between the delivery efficiency and dose quality . vmerge begins with a solution of the fluence-map optimization problem with 180 equi-spaced beams that yields the 'ideal ' dose distribution . neighboring fluence maps are then successively merged , meaning that they are added together and delivered as a single map . the merging process improves the delivery efficiency at the expense of deviating from the initial high-quality dose distribution . we replace the original merging heuristic by considering the merging problem as a discrete bi-criteria optimization problem with the objectives of maximizing the treatment efficiency and minimizing the deviation from the ideal dose . we formulate this using a network-flow model that represents the merging problem . since the problem is discrete and thus non-convex , we employ a customized box algorithm to characterize the pareto frontier . the pareto frontier is then used as a benchmark to evaluate the performance of the standard vmerge story_separator_special_tag purpose : to develop a method for inverse volumetric-modulated arc therapy ( vmat ) planning that combines multicriteria optimization ( mco ) with direct machine parameter optimization . the ultimate goal . story_separator_special_tag for a bounded system of linear equalities and inequalities , we show that the np-hard l0-norm minimization problem is completely equivalent to the concave lp-norm minimization problem , for a sufficiently small p. a local solution to the latter problem can be easily obtained by solving a provably finite number of linear programs . computational results frequently leading to a global solution of the l0-minimization problem and often producing sparser solutions than the corresponding l1-solution are given . a similar approach applies to finding minimal l0-solutions of linear programs . story_separator_special_tag l0 norm based signal recovery is attractive in compressed sensing as it can facilitate exact recovery of sparse signals with very high probability . unfortunately , the direct l0 norm minimization problem is np-hard . this paper describes an approximate l0 norm algorithm for sparse representation which preserves most of the advantages of the l0 norm .
the algorithm shows attractive convergence properties , and provides remarkable performance improvement in a noisy environment compared to other popular algorithms . the sparse representation algorithm presented is capable of very fast signal recovery , thereby reducing retrieval latency when handling story_separator_special_tag an intensity-modulated radiation therapy ( imrt ) field is composed of a series of segmented beams . it is practically important to reduce the number of segments while maintaining the conformality of the final dose distribution . in this article , the authors quantify the complexity of an imrt fluence map by introducing the concept of sparsity of fluence maps and formulate the inverse planning problem into a framework of compressed sensing . in this approach , the treatment planning is modeled as a multiobjective optimization problem , with one objective on the dose performance and the other on the sparsity of the resultant fluence maps . a pareto frontier is calculated , and the achieved dose distributions associated with the pareto efficient points are evaluated using clinical acceptance criteria . the clinically acceptable dose distribution with the smallest number of segments is chosen as the final solution . the method is demonstrated in the application of fixed-gantry imrt on a prostate patient . the result shows that the total number of segments is greatly reduced while a satisfactory dose distribution is still achieved . with the focus on the sparsity of the optimal solution , the proposed method is story_separator_special_tag purpose : a new treatment scheme coined as dense angularly sampled and sparse intensity modulated radiation therapy ( dassim-rt ) has recently been proposed to bridge the gap between imrt and vmat . by increasing the angular sampling of radiation beams while eliminating dispensable segments of the incident fields , dassim-rt is capable of providing improved conformity in dose distributions while maintaining high delivery efficiency . the fact that dassim-rt utilizes a large number of incident beams represents a major computational challenge for the clinical applications of this powerful treatment scheme . the purpose of this work is to provide a practical solution to the dassim-rt inverse planning problem . methods : the inverse planning problem is formulated as a fluence-map optimization problem with total-variation ( tv ) minimization . a newly released l1-solver , templates for first-order conic solvers ( tfocs ) , was adopted in this work . tfocs achieves faster convergence with less memory usage as compared with conventional quadratic programming ( qp ) for the tv form through the effective use of conic forms , dual-variable updates , and optimal first-order approaches . as such , it is tailored to specifically address the computational challenges of story_separator_special_tag purpose to provide a mathematical approach for quantifying the tradeoff between intensity-modulated radiotherapy ( imrt ) complexity and plan quality . methods and materials we solve a multi-objective program that includes imrt complexity , measured as the number of monitor units ( mu ) needed to deliver the plan using a multileaf collimator , as an objective . clinical feasibility of plans is ensured by the use of hard constraints in the formulation . optimization output is a pareto surface of treatment plans , which allows the tradeoffs between imrt complexity , tumor coverage , and tissue sparing to be observed . paraspinal and lung cases are presented .
results although the amount of possible mu reduction is highly dependent on the difficulty of the underlying treatment plan ( difficult plans requiring a high degree of intensity modulation are more sensitive to mu reduction ) , in some cases the number of mu can be reduced more than twofold with a < 1 % increase in the objective function . conclusions the largely increased number of mu and irradiation time in imrt is sometimes unnecessary . tools like the one presented should be considered for integration into daily clinical practice story_separator_special_tag it is now well understood that ( 1 ) it is possible to reconstruct sparse signals exactly from what appear to be highly incomplete sets of linear measurements and ( 2 ) that this can be done by constrained l1 minimization . in this paper , we study a novel method for sparse signal recovery that in many situations outperforms l1 minimization in the sense that substantially fewer measurements are needed for exact recovery . the algorithm consists of solving a sequence of weighted l1-minimization problems where the weights used for the next iteration are computed from the value of the current solution . we present a series of experiments demonstrating the remarkable performance and broad applicability of this algorithm in the areas of sparse signal recovery , statistical estimation , error correction and image processing . interestingly , superior gains are also achieved when our method is applied to recover signals with assumed near-sparsity in overcomplete representations not by reweighting the l1 norm of the coefficient sequence as is common , but by reweighting the l1 norm of the transformed object . an immediate consequence is the possibility of highly efficient data acquisition protocols by improving on a technique story_separator_special_tag purpose : selection of beam configuration in currently available intensity-modulated radiotherapy ( imrt ) treatment planning systems is still based on trial-and-error search . computer beam orientation optimization has the potential to improve the situation , but its practical implementation is hindered by the excessive computing time associated with the calculation . the purpose of this work is to provide an effective means to speed up the beam orientation optimization by incorporating a priori geometric and dosimetric knowledge of the system and to demonstrate the utility of the new algorithm for beam placement in imrt . methods and materials : beam orientation optimization was performed in two steps . first , the quality of each possible beam orientation was evaluated using beam's-eye-view dosimetrics ( bevd ) developed in our previous study . a simulated annealing algorithm was then employed to search for the optimal set of beam orientations , taking into account the bevd scores of different incident beam directions . during the calculation , sampling of gantry angles was weighted according to the bevd score computed before the optimization . a beam direction with a higher bevd score had a higher probability of being included in the trial story_separator_special_tag purpose to test whether multicriteria optimization ( mco ) can reduce treatment planning time and improve plan quality in intensity-modulated radiotherapy ( imrt ) . methods and materials ten imrt patients ( 5 with glioblastoma and 5 with locally advanced pancreatic cancers ) were logged during the standard treatment planning procedure currently in use at massachusetts general hospital ( mgh ) .
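the reweighted l1 scheme described above is short enough to state in full : each iteration solves a weighted l1 minimization under the measurement constraints , with weights set from the previous solution as w_i = 1 / ( |x_i| + eps ) . a minimal sketch with toy data , assuming the cvxpy modeling library ; the problem sizes , eps and the iteration count are arbitrary choices , not values from the paper :

```python
# a minimal sketch of iteratively reweighted l1 minimization for sparse
# recovery. all sizes and data are toy placeholders.
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(4)
m, n, k = 40, 100, 8
A = rng.normal(size=(m, n))                  # toy measurement matrix
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.normal(size=k)
b = A @ x_true                               # noiseless measurements

w, eps = np.ones(n), 1e-3
for _ in range(5):                           # a few reweighting iterations
    x = cp.Variable(n)
    cp.Problem(cp.Minimize(cp.norm1(cp.multiply(w, x))), [A @ x == b]).solve()
    w = 1.0 / (np.abs(x.value) + eps)        # emphasize small coordinates

print("recovery error:", float(np.linalg.norm(x.value - x_true)))
```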
planning durations and other relevant planning information were recorded . in parallel , the patients were planned using an mco planning system , and similar planning time data were collected . the patients were treated with the standard plan , but each mco plan was also approved by the physicians . plans were then blindly reviewed 3 weeks after planning by the treating physician . results in all cases , the treatment planning time was vastly shorter for the mco planning ( average mco treatment planning time was 12 min ; average standard planning time was 135 min ) . the physician involvement time in the planning process increased from an average of 4.8 min for the standard process to 8.6 min for the mco process . in all cases , the mco plan was blindly identified as story_separator_special_tag background : health systems in sub-saharan africa are not prepared for the rapid rise in cancer rates projected in the region over the next decades . more must be understood about the current state of cancer care in this region to target improvement efforts . yaounde general hospital ( ygh ) currently is the only site in cameroon ( population : 18.8 million ) where adults can receive chemotherapy from trained medical oncologists . the experiences of patients at this facility represent a useful paradigm for describing cancer care in this region . methods : in july and august 2010 , our multidisciplinary team conducted closed-end interviews with 79 consecutive patients who had confirmed breast cancer , kaposi sarcoma , or lymphoma . results : thirty-five percent of patients waited > 6 months to speak to a health care provider after the first sign of their cancer . the delay between first consultation with a health care provider and receipt of a cancer diagnosis was > 3 months for 47 % of patients . the total delay from the first sign of cancer to receipt of the correct diagnosis was > 6 months for 63 % of patients . twenty-three story_separator_special_tag an algorithm , which calculates the motions of the collimator jaws required to generate a given arbitrary intensity profile , is presented . the intensity profile is assumed to be piecewise linear , i.e. , to consist of segments of straight lines . the jaws move unidirectionally and continuously with variable speed during radiation delivery . during each segment , at least one of the jaws is set to move at the maximum permissible speed . the algorithm is equally applicable for multileaf collimators ( mlc ) , where the transmission through the collimator leaves is taken into account . examples are presented for different intensity profiles with varying degrees of complexity . typically , the calculation takes less than 10 ms on a vax 8550 computer . story_separator_special_tag intensity-modulated radiation therapy ( imrt ) generally requires complex equipment for delivery . just one study has investigated the use of 'jaws-only ' imrt with not discouraging conclusions . however , the monitor-unit efficiency is still considered to be too low compared with the use of a multileaf collimator ( mlc ) . in this paper a new imrt delivery technique is proposed which does not require the mlc and is only moderately more complex than the use of jaws alone . in this method a secondary collimator ( mask ) is employed together with the jaws . this mask may translate parallel to the jaw axes . two types of mask have been investigated . one is a regular binary-attenuation pattern and the other is a random binary-attenuation pattern . 
studies show that the monitor-unit efficiency of this 'jaws-plus-mask ' technique , with a random binary mask , is more than double that of the jaws-only technique for typical two-dimensional intensity-modulated beams of size 10 × 10 bixels² and with a peak value of 10 mu ( or quantized into 10 fluence increments ) . for two-dimensional intensity-modulated beams of size 15 × 15 bixels² with a peak value story_separator_special_tag using direct aperture optimization , we have developed an inverse planning approach that is capable of producing efficient intensity modulated radiotherapy ( imrt ) treatment plans that can be delivered without a multileaf collimator . this `` jaws-only '' approach to imrt uses a series of rectangular field shapes to achieve a high degree of intensity modulation from each beam direction . direct aperture optimization is used to directly optimize the jaw positions and the relative weights assigned to each aperture . because the constraints imposed by the jaws are incorporated into the optimization , the need for leaf sequencing is eliminated . results are shown for five patient cases covering three treatment sites : pancreas , breast , and prostate . for these cases , between 15 and 20 jaws-only apertures were required per beam direction in order to obtain conformal imrt treatment plans . each plan was delivered to a phantom , and absolute and relative dose measurements were recorded . the typical treatment time to deliver these plans was 18 min . the jaws-only approach provides an additional imrt delivery option for clinics without a multileaf collimator .
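in one dimension , the jaw-motion and rectangular-aperture sequencing ideas above reduce to a classic decomposition of an integer fluence profile into unit-weight open intervals : open an interval at every positive gradient , close one at every negative gradient , so the total monitor units equal the sum of the positive increments . the sketch below is an illustrative textbook construction , not a reimplementation of any one of the papers above :

```python
# a minimal sketch of 1d step-and-shoot decomposition: write an integer
# fluence profile as a sum of unit-weight intervals (rectangular apertures).
def sweep_segments(profile):
    """decompose an integer 1d fluence profile into (left, right) half-open
    intervals, one per monitor unit, opened/closed at the profile's
    positive/negative gradients."""
    f = [0] + list(profile) + [0]
    segments, open_lefts = [], []
    for i in range(1, len(f)):
        rise = f[i] - f[i - 1]
        for _ in range(max(rise, 0)):    # positive gradient: open a segment
            open_lefts.append(i - 1)
        for _ in range(max(-rise, 0)):   # negative gradient: close a segment
            segments.append((open_lefts.pop(), i - 1))
    return segments

profile = [1, 3, 2, 4, 1]
segs = sweep_segments(profile)
print(segs)                              # five unit-weight segments (5 mu)
# re-summing the unit-weight segments reproduces the profile exactly
rebuilt = [sum(1 for l, r in segs if l <= x < r) for x in range(len(profile))]
assert rebuilt == profile
```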
conservation equations . thermodynamics of irreversible processes : the linear region . nonlinear thermodynamics . systems involving chemical reactions and diffusion-stability . mathematical tools . simple autocatalytic models . some further aspects of dissipative structures and self-organization phenomena . general comments . birth and death descriptions of fluctuations : nonlinear master equation . self-organization in chemical reactions . regulatory processes at the subcellular level . regulatory processes at the cellular level . cellular differentiation and pattern formation . thermodynamics of evolution . thermodynamics of ecosystems . perspectives and concluding remarks . references . index . story_separator_special_tag complexity in nature is astounding yet the explanation lies in the fundamental laws of physics . the second law of thermodynamics and the principle of least action are the two theories of science that have always stood the test of time . in this article , we use these fundamental principles as tools to understand how and why things happen . in order to achieve that , it is of absolute necessity to define things precisely yet preserving their applicability in a broader sense . we try to develop precise , mathematically rigorous definitions of the commonly used terms in this context , such as action , organization , system , process , etc. , and in parallel argue the behavior of the system from the first principles . this article , thus , acts as a mathematical framework for more discipline-specific theories . story_separator_special_tag a comprehensive review of spatiotemporal pattern formation in systems driven away from equilibrium is presented , with emphasis on comparisons between theory and quantitative experiments . examples include patterns in hydrodynamic systems such as thermal convection in pure fluids and binary mixtures , taylor-couette flow , parametric-wave instabilities , as well as patterns in solidification fronts , nonlinear optics , oscillatory chemical reactions and excitable biological media . the theoretical starting point is usually a set of deterministic equations of motion , typically in the form of nonlinear partial differential equations . these are sometimes supplemented by stochastic terms representing thermal or instrumental noise , but for macroscopic systems and carefully designed experiments the stochastic forces are often negligible . an aim of theory is to describe solutions of the deterministic equations that are likely to be reached starting from typical initial conditions and to persist at long times . a unified description is developed , based on the linear instabilities of a homogeneous state , which leads naturally to a classification of patterns in terms of the characteristic wave vector q0 and frequency ω0 of the instability . type i_s systems ( ω0 = 0 , q0 ≠ 0 ) are stationary story_separator_special_tag the theory of self-organization and adaptivity has grown out of a variety of disciplines , including thermodynamics , cybernetics and computer modelling . the present article reviews its most important concepts and principles . it starts with an intuitive overview , illustrated by the examples of magnetization and bénard convection , and concludes with the basics of mathematical modelling . self-organization can be defined as the spontaneous creation of a globally coherent pattern out of local interactions .
because of its distributed character , this organization tends to be robust , resisting perturbations . the dynamics of a self-organizing system is typically non-linear , because of circular or feedback relations between the components . positive feedback leads to an explosive growth , which ends when all components have been absorbed into the new configuration , leaving the system in a stable , negative feedback state . non-linear systems have in general several stable states , and this number tends to increase ( bifurcate ) as an increasing input of energy pushes the system farther from its thermodynamic equilibrium . to adapt to a changing environment , the system needs a variety of stable states that is large enough to react story_separator_special_tag isolated systems tend to evolve towards equilibrium , a special state that has been the focus of many-body research for a century . yet much of the richness of the world around us arises from conditions far from equilibrium . phenomena such as turbulence , earthquakes , fracture , and life itself occur only far from equilibrium . subjecting materials to conditions far from equilibrium leads to otherwise unattainable properties . for example , rapid cooling is a key process in manufacturing the strongest metallic alloys and toughest plastics . processes that occur far from equilibrium also create some of the most intricate structures known , from snowflakes to the highly organized structures of life . while much is understood about systems at or near equilibrium , we are just beginning to uncover the basic principles governing systems far from equilibrium . story_separator_special_tag knowledge of the statistical properties of chemical systems at equilibrium can be very helpful for understanding their behavior . however , much of the world surrounding us is not in equilibrium . in his perspective , egolf explains how equilibrium properties , such as free energies , can nevertheless be determined based on nonequilibrium data . he highlights the report by liphardt et al . , who show experimentally that by measuring the work required to unfold an rna molecule repeatedly as a function of its extension , the free energy of unfolding can be determined . story_separator_special_tag an account of the experimental discovery of complex dynamical behavior in the continuous-flow , stirred tank reactor ( cstr ) belousov-zhabotinsky ( bz ) reaction , as well as numerical simulations based on the bz chemistry are given . the most recent four- and three-variable models that are deduced from the well-accepted , updated chemical mechanism of the bz reaction and which exhibit robust chaotic states are summarized . chaos has been observed in experiments and simulations embedded in the regions of complexities at both low and high flow rates . the deterministic nature of the observed aperiodicities at low flow rates is unequivocally established . however , controversy still remains in the interpretation of certain aperiodicities observed at high flow rates . story_separator_special_tag active systems can produce a far greater variety of ordered patterns than conventional equilibrium systems . in particular , transitions between disorder and either polar- or nematically ordered phases have been predicted and observed in two-dimensional active systems . however , coexistence between phases of different types of order has not been reported . 
we demonstrate the emergence of dynamic coexistence of ordered states with fluctuating nematic and polar symmetry in an actomyosin motility assay . combining experiments with agent-based simulations , we identify sufficiently weak interactions that lack a clear alignment symmetry as a prerequisite for coexistence . thus , the symmetry of macroscopic order becomes an emergent and dynamic property of the active system . these results provide a pathway by which living systems can express different types of order by using identical building blocks . story_separator_special_tag traditional approaches to materials synthesis have largely relied on uniform , equilibrated phases leading to static condensed-matter structures ( e.g. , monolithic single crystals ) . departures from these modes of materials design are pervasive in biology . from the folding of proteins to the reorganization of self-regulating cytoskeletal networks , biological materials reflect a major shift in emphasis from equilibrium thermodynamic regimes to out-of-equilibrium regimes . here , equilibrium structures , determined by global free-energy minima , are replaced by highly structured dynamical states that are out of equilibrium , calling into question the utility of global thermodynamic energy minimization as a first-principles approach . thus , the creation of new materials capable of performing life-like functions such as complex and cooperative processes , self-replication , and self-repair , will ultimately rely upon incorporating biological principles of spatiotemporal modes of self-assembly . elucidating fundamental principles for the design of such out-of-equilibrium dynamic self-assembling materials systems is the focus of this issue of mrs bulletin . story_separator_special_tag living systems are open , out-of-equilibrium thermodynamic entities , that maintain order by locally reducing their entropy . aging is a process by which these systems gradually lose their ability to maintain their out-of-equilibrium state , as measured by their free-energy rate density , and hence , their order . thus , the process of aging reduces the efficiency of those systems , making them fragile and less adaptive to the environmental fluctuations , gradually driving them towards the state of thermodynamic equilibrium . in this paper , we discuss the various metrics that can be used to understand the process of aging from a complexity science perspective . among all the metrics that we propose , action efficiency , is observed to be of key interest as it can be used to quantify order and self-organization in any physical system . based upon our arguments , we present the dependency of other metrics on the action efficiency of a system , and also argue as to how each of the metrics , influences all the other system variables . in order to support our claims , we draw parallels between technological progress and biological growth . such parallels are story_separator_special_tag this book has discussed some of the most important aspects in the current state of the sciences of complexity , self-organization , and evolution . a central theme in this field is the search for mechanisms that can explain the self-organization of complex systems . the quest for the main guiding principles for causal explanations can be viewed as a very timely and central aspect of this search . this book is devoted to such topics and is a necessary read for anyone working at the forefront of complexity , self-organization , and evolution . 
as an addition to the lines of reasoning in this book , we focus on a quantitative description of self-organization and evolution . to create a measure of a degree of organization , we have applied the principle of least action from physics . action for a trajectory is defined as the integral of the difference between kinetic and potential energy over time . this principle states that the equations of motion in nature are obeyed when action is minimized . in complex systems , there are constraints to motion that prevent the agents from moving along the paths of least action . using free story_separator_special_tag in this paper , we model the bus networks of six major indian cities as graphs in l-space , and evaluate their various statistical properties . while airline and railway networks have been extensively studied , a comprehensive study on the structure and growth of bus networks is lacking . in india , where bus transport plays an important role in day-to-day commutation , it is of significant interest to analyze its topological structure and answer basic questions on its evolution , growth , robustness and resiliency . although the common feature of small-world property is observed , our analysis reveals a wide spectrum of network topologies arising due to significant variation in the degree-distribution patterns in the networks . we also observe that these networks , although robust and resilient to random attacks , are particularly degree-sensitive . unlike real-world networks , such as internet , www and airline , that are virtual , bus networks are physically constrained . our findings , therefore , throw light on the evolution of such geographically constrained networks that will help us in designing more efficient bus networks in the future . story_separator_special_tag in this paper , we study the structural properties of the complex bus network of chennai . we formulate this extensive network structure by identifying each bus stop as a node , and a bus which stops at any two adjacent bus stops as an edge connecting the nodes . rigorous statistical analysis of this data shows that the chennai bus network displays small-world properties and a scale-free degree distribution with a power-law exponent γ = 3.8 . chennai is one of the metropolitan cities in india with a structured and a close-knit bus transport network . the chennai bus network ( cbn ) is operated by the metropolitan transport corporation ( mtc ) , a state government undertaking . spanning an area of 3,929 sq . km and with over 800 routes sprawling across entire chennai , this extensive network also boasts of the largest bus terminus in asia . with the population of the city being the sixth largest in the country , this medium of transport is most widely used for day-to-day commutation . the reason that the bus network , in general , achieves this favourable status lies primarily in two story_separator_special_tag in recent times , the domain of network science has become extremely useful in understanding the underlying structure of various real-world networks and to answer non-trivial questions regarding them . in this study , we rigorously analyze the statistical properties of the bus networks of six major indian cities as graphs in l- and p-space , using tools from network science . although public transport networks , such as airline and railway networks have been extensively studied , a comprehensive study on the structure and growth of bus networks is lacking .
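the l-space construction used in the bus-network studies above takes one line per route with the networkx library : each stop is a node and consecutive stops on any route share an edge . a minimal sketch on hypothetical toy routes ( the real studies then examine degree distributions , attack resilience and p-space projections on the empirical data ) :

```python
# a minimal sketch of building an l-space bus-network graph and reading off
# standard small-world statistics. the routes are hypothetical toy data.
import networkx as nx

routes = [
    ["a", "b", "c", "d", "e"],
    ["c", "f", "g"],
    ["a", "f", "d", "h"],
]

g = nx.Graph()
for route in routes:
    nx.add_path(g, route)        # adjacent stops on a route become an edge

print("nodes:", g.number_of_nodes(), "edges:", g.number_of_edges())
print("mean clustering:", round(nx.average_clustering(g), 3))
print("mean shortest path:", round(nx.average_shortest_path_length(g), 3))
```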
in india , where bus networks play an important role in day-to-day commutation , it is of significant interest to analyze their topological structure , and answer some of the basic questions on their evolution , growth , robustness and resiliency . we start from an empirical analysis of these networks , and determine their principal characteristics in terms of the complex network theory . the common features of small-world property and heavy tails in degree-distribution plots are observed in all the networks studied . our analysis further reveals a wide spectrum of network topologies arising due to an interplay between preferential and random attachment of nodes . story_separator_special_tag bus transportation is the most convenient and cheapest way of public transportation in indian cities . due to cost-effectiveness and wide reachability , buses bring people to their destinations every day . although the bus transportation has numerous advantages over other ways of public transportation , this mode of transportation also poses a serious threat of spreading contagious diseases throughout the city . it is extremely difficult to predict the extent and spread of such an epidemic . earlier studies have focused on the contagion processes on scale-free network topologies ; whereas , real-world networks such as bus networks exhibit a wide spectrum of network topologies . therefore , we aim in this study to understand this complex dynamical process of epidemic outbreak and information diffusion on the bus networks for six different indian cities using si and sir models . we identify epidemic thresholds for these networks which help us in controlling outbreaks by developing node-based immunization techniques . story_separator_special_tag far-from-equilibrium systems are ubiquitous in nature . they are also rich in terms of diversity and complexity . therefore , it is an intellectual challenge to be able to understand the physics of far-from-equilibrium phenomena . in this article , we revisit a standard tabletop experiment , the rayleigh-bénard convection , to explore some fundamental questions and present a new perspective from a first-principles point of view . we address how nonequilibrium fluctuations differ from equilibrium fluctuations , how emergence of order out of equilibrium breaks symmetries in the system , and how free energy of a system gets locally bifurcated to operate a carnot-like engine to maintain order . the exploration and investigation of these nontrivial questions are the focus of this article . story_separator_special_tag a challenge in fundamental physics and especially in thermodynamics is to understand emergent order in far-from-equilibrium systems . while at equilibrium , temperature plays the role of a key thermodynamic variable whose uniformity in space and time defines the equilibrium state the system is in , this is not the case in a far-from-equilibrium driven system . when energy flows through a finite system at steady-state , temperature takes on a time-independent but spatially varying character . in this study , the convection patterns of a rayleigh-bénard fluid cell at steady-state are used as a prototype system where the temperature profile and fluctuations are measured spatio-temporally .
the thermal data is obtained by performing high-resolution real-time infrared calorimetry on the convection system as it is first driven out-of-equilibrium when the power is applied , achieves steady-state , and then as it gradually relaxes back to room temperature equilibrium when the power is removed . our study provides new experimental data on the non-trivial nature of thermal fluctuations when stable complex convective structures emerge . the thermal analysis of these convective cells at steady-state further yields local equilibrium-like statistics . in conclusion , these results correlate the spatial ordering of the story_separator_special_tag in this paper we present a detailed description of the statistical and computational techniques that were employed to study a driven far-from-equilibrium steady-state rayleigh-bénard system in the non-turbulent regime ( $ ra \leq 3500 $ ) . in our previous work on the rayleigh-bénard convection system we try to answer two key open problems that are of great interest in contemporary physics : ( i ) how does an out-of-equilibrium steady-state differ from an equilibrium state and ( ii ) how do we explain the spontaneous emergence of stable structures and simultaneously interpret the physical notion of temperature when out-of-equilibrium . we believe that this paper will offer a useful repository of the technical details for a first principles study of similar kind . in addition , we are also hopeful that our work will spur considerable interest in the community which will lead to the development of more sophisticated and novel techniques to study far-from-equilibrium behavior . story_separator_special_tag a chimera state is a spatio-temporal pattern in a network of identical coupled oscillators in which synchronous and asynchronous oscillation coexist . this state of broken symmetry , which usually coexists with a stable spatially symmetric state , has intrigued the nonlinear dynamics community since its discovery in the early 2000s . recent experiments have led to increasing interest in the origin and dynamics of these states . here we review the history of research on chimera states and highlight major advances in understanding their behaviour . story_separator_special_tag ising discussed the following model of a ferromagnetic body : assume n elementary magnets of moment μ to be arranged in a regular lattice ; each of them is supposed to have only two possible orientations , which we call positive and negative . assume further that there is an interaction energy u for each pair of neighbouring magnets of opposite direction . further , there is an external magnetic field of magnitude h such as to produce an additional energy of -μh ( +μh ) for each magnet with positive ( negative ) direction . story_separator_special_tag the aim of this chapter is to present examples from the physical sciences where monte carlo methods are widely applied . here we focus on examples from statistical physics and discuss two of the most studied models , the ising model and the potts model for the interaction among classical spins . these models have been widely used for studies of phase transitions . story_separator_special_tag the transfer matrix methodology is proposed as a systematic tool for the statistical mechanical description of dna protein drug binding involved in gene regulation .
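the transfer-matrix methodology invoked in the last passage above is easiest to see in the textbook case it generalizes : the one-dimensional ising chain , whose partition function with periodic boundaries is the trace of the n-th power of a 2 × 2 matrix . a minimal sketch , cross-checked by brute-force enumeration ( all parameter values are arbitrary ) :

```python
# a minimal sketch of the transfer-matrix method for the periodic 1d ising
# chain with energy e = -sum_i ( j * s_i * s_{i+1} + h * s_i ).
import numpy as np

def transfer_matrix_z(n, j, h, beta):
    """exact partition function via t[s, s'] = exp(beta*(j*s*s' + h*(s+s')/2));
    z = trace(t**n) for a ring of n spins."""
    s = np.array([1.0, -1.0])
    t = np.exp(beta * (j * np.outer(s, s) + h * (s[:, None] + s[None, :]) / 2))
    return float(np.trace(np.linalg.matrix_power(t, n)))

def brute_force_z(n, j, h, beta):
    """direct sum over all 2**n spin configurations (small n only)."""
    z = 0.0
    for config in range(2 ** n):
        s = [1 if (config >> i) & 1 else -1 for i in range(n)]
        e = -sum(j * s[i] * s[(i + 1) % n] + h * s[i] for i in range(n))
        z += np.exp(-beta * e)
    return z

print(transfer_matrix_z(8, 1.0, 0.3, 0.5))   # transfer matrix
print(brute_force_z(8, 1.0, 0.3, 0.5))       # should agree to rounding
```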
we show that a genetic system of several cis-regulatory modules is calculable using this method , considering explicitly the site-overlapping , competitive , cooperative binding of regulatory proteins , their multilayer assembly and dna looping . in the methodological section , the matrix models are solved for the basic types of short- and long-range interactions between dna-bound proteins , drugs and nucleosomes . we apply the matrix method to gene regulation at the or operator of phage lambda . the transfer matrix formalism allowed the description of the lambda-switch at a single-nucleotide resolution , taking into account the effects of a range of inter-protein distances . our calculations confirm previously established roles of the contact ci-cro-rnap interactions . concerning long-range interactions , we show that while the dna loop between the or and ol operators is important at the lysogenic ci concentrations , the interference between the adjacent promoters pr and prm becomes more important at small ci concentrations . a large change in the expression pattern may arise in this regime due to story_separator_special_tag computational properties of use to biological organisms or to the construction of computers can emerge as collective properties of systems having a large number of simple equivalent components ( or neurons ) . the physical meaning of content-addressable memory is described by an appropriate phase space flow of the state of a system . a model of such a system is given , based on aspects of neurobiology but readily adapted to integrated circuits . the collective properties of this model produce a content-addressable memory which correctly yields an entire memory from any subpart of sufficient size . the algorithm for the time evolution of the state of the system is based on asynchronous parallel processing . additional emergent collective properties include some capacity for generalization , familiarity recognition , categorization , error correction , and time sequence retention . the collective properties are only weakly sensitive to details of the modeling or the failure of individual devices . story_separator_special_tag bacterial chemotaxis is controlled by the signaling of a cluster of receptors . a cooperative model is presented , in which coupling between neighboring receptor dimers enhances the sensitivity with which stimuli can be detected , without diminishing the range of chemoeffector concentration over which chemotaxis can operate . individual receptor dimers have two stable conformational states : one active , one inactive . noise gives rise to a distribution between these states , with the probability influenced by ligand binding , and also by the conformational states of adjacent receptor dimers . the two-state model is solved , based on an equivalence with the ising model in a randomly distributed magnetic field . the model has only two effective parameters , and unifies a number of experimental findings . according to the value of the parameter comparing coupling and noise , the signal can be arbitrarily sensitive to changes in the fraction of receptor dimers to which the ligand is bound . the counteracting effect of a change of methylation level is mapped to an induced field in the ising model . by returning the activity to the prestimulus level , this adapts the receptor cluster to a new story_separator_special_tag in all organisms , dna molecules are tightly compacted into a dynamic 3d nucleoprotein complex .
in bacteria , this compaction is governed by the family of nucleoid-associated proteins ( naps ) . under conditions of stress and starvation , an nap called dps ( dna-binding protein from starved cells ) becomes highly up-regulated and can massively reorganize the bacterial chromosome . although static structures of dps-dna complexes have been documented , little is known about the dynamics of their assembly . here , we use fluorescence microscopy and magnetic-tweezers measurements to resolve the process of dna compaction by dps . real-time in vitro studies demonstrated a highly cooperative process of dps binding characterized by an abrupt collapse of the dna extension , even under applied tension . surprisingly , we also discovered a reproducible hysteresis in the process of compaction and decompaction of the dps-dna complex . this hysteresis is extremely stable over hour-long timescales despite the rapid binding and dissociation rates of dps . a modified ising model is successfully applied to fit these kinetic features . we find that long-lived hysteresis arises naturally as a consequence of protein cooperativity in large complexes and provides a useful mechanism story_separator_special_tag the partition function of a two-dimensional `` ferromagnetic '' with scalar `` spins '' ( ising model ) is computed rigorously for the case of vanishing field . the eigenwert problem involved in the corresponding computation for a long strip crystal of finite width ( $ n $ atoms ) , joined straight to itself around a cylinder , is solved by direct product decomposition ; in the special case $ n = \infty $ an integral replaces a sum . the choice of different interaction energies ( $ \pm j , \pm j' $ ) in the ( 0 1 ) and ( 1 0 ) directions does not complicate the problem . the two-way infinite crystal has an order-disorder transition at a temperature $ t = t_c $ given by the condition $ \sinh ( 2j / k t_c ) \sinh ( 2j' / k t_c ) = 1 $ . story_separator_special_tag the kuramoto model describes a large population of coupled limit-cycle oscillators whose natural frequencies are drawn from some prescribed distribution . if the coupling strength exceeds a certain threshold , the system exhibits a phase transition : some of the oscillators spontaneously synchronize , while others remain incoherent . the mathematical analysis of this bifurcation has proved both problematic and fascinating . we review 25 years of research on the kuramoto model , highlighting the false turns as well as the successes , but mainly following the trail leading from kuramoto 's work to crawford 's recent contributions . it is a lovely winding road , with excursions through mathematical biology , statistical physics , kinetic theory , bifurcation theory , and plasma physics . story_separator_special_tag synchronization phenomena in large populations of interacting elements are the subject of intense research efforts in physical , biological , chemical , and social systems . a successful approach to the problem of synchronization consists of modeling each member of the population as a phase oscillator . in this review , synchronization is analyzed in one of the most representative models of coupled phase oscillators , the kuramoto model .
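a minimal numerical sketch of the kuramoto model reviewed above : n phase oscillators with all-to-all coupling integrated by forward euler , with the usual order parameter r measuring synchrony ; the frequency distribution , step size and coupling values are illustrative assumptions .

# a minimal sketch of the kuramoto model in its mean-field form; r is the synchronization order parameter.
import numpy as np

rng = np.random.default_rng(1)

def kuramoto(n=200, coupling=1.5, dt=0.01, steps=5000):
    omega = rng.normal(0.0, 1.0, n)          # natural frequencies
    theta = rng.uniform(0, 2 * np.pi, n)     # initial phases
    for _ in range(steps):
        # mean-field form: dtheta_i/dt = omega_i + k * r * sin(psi - theta_i)
        z = np.exp(1j * theta).mean()
        r, psi = np.abs(z), np.angle(z)
        theta += dt * (omega + coupling * r * np.sin(psi - theta))
    return np.abs(np.exp(1j * theta).mean())  # final order parameter r

print(kuramoto(coupling=0.5), kuramoto(coupling=3.0))  # incoherent vs. partially synchronized

running the two calls shows the transition the reviews describe : below the critical coupling r stays near zero , while well above it a macroscopic fraction of oscillators locks together and r approaches one .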
a rigorous mathematical treatment , specific numerical methods , and many variations and extensions of the original model that have appeared in the last few years are presented . relevant applications of the model in different contexts are also included . story_separator_special_tag abstract in this article , we have generalised the kuramoto model to allow one to model neuronal synchronisation more appropriately . the generalised version allows for different connective arrangements , time-varying natural frequencies and time-varying coupling strengths to be realised within the framework of the original kuramoto model . by incorporating the above mentioned features into the original kuramoto model one can allow for the adaptive nature of neurons in the brain to be accommodated . extensive tests using the generalised kuramoto model were performed on an n = 4 coupled oscillator network . examination of how different connective arrangements , time-varying natural frequencies and time-varying coupling strengths affected synchronisation separately and in combination is reported . the effects on synchronisation for large n are also reported . story_separator_special_tag oscillating chemical reactions result from complex periodic changes in the concentration of the reactants . in spatially ordered ensembles of candle flame oscillators the fluctuations in the ratio of oxygen atoms with respect to that of carbon , hydrogen and nitrogen produce an oscillation in the visible part of the flame related to the energy released per unit mass of oxygen . thus , the products of the reaction vary in concentration as a function of time , giving rise to an oscillation in the amount of soot and radiative emission . synchronisation of interacting dynamical sub-systems occurs as arrays of flames that act as master and slave oscillators , with groups of candles numbering greater than two , creating a synchronised motion in three-dimensions . in a ring of candles the visible parts of each flame move together , up and down and back and forth , in a manner that appears like a worship . here this effect is shown for rings of flames which collectively empower a central flame to pulse to greater heights . in contrast , situations where the central flames are suppressed are also found . the phenomenon leads to in-phase synchronised states emerging story_separator_special_tag the field of far-from-equilibrium thermodynamics is often quoted as work in progress despite the extensive depth in the equilibrium based theory . one of the major shortcomings of equilibrium based theory is its inability to explain the emergence of order . some examples of far-from-equilibrium systems include reaction-diffusion systems , ordered patterns in solids such as snowflakes and alloys in a stronger molecular structure when heated . this paper looks into some of the standard far-from-equilibrium systems such as rayleigh-bénard cells , the kuramoto model , the ising model , spatial population growth and heat flow through a simple solid . stochastic simulations were carried out in order to explicitly compute the variations in the system 's intensive properties spatially and temporally . as all of these systems evolved into a steady-state they exhibited certain similarities that were characteristically different from that of the system when at equilibrium . one of the striking differences was a non-gaussian probability distribution of the thermodynamic parameters when driven far-from-equilibrium .
this spread of thermodynamic values across systems serves as the common connection as order emerges in out-of-equilibrium systems at steady-state . story_separator_special_tag abstract this review summarizes results for rayleigh-benard convection that have been obtained over the past decade or so . it concentrates on convection in compressed gases and gas mixtures with prandtl numbers near one and smaller . in addition to the classical problem of a horizontal stationary fluid layer heated from below , it also briefly covers convection in such a layer with rotation about a vertical axis , with inclination , and with modulation of the vertical acceleration . story_separator_special_tag recent advances in the understanding of rayleigh-bénard convection and turbulence are reviewed in light of work using liquid helium . the discussion includes both experiments which have probed the steady flows preceding time dependence and experiments which have been directed toward understanding the ways in which turbulence evolves . comparison is made where appropriate to the many important contributions which have been obtained using room-temperature fluids , and a discussion is given explaining the advantages of cryogenic techniques . brief reviews are given for recent experimental investigations of convection in $ ^{3} $ he - $ ^{4} $ he mixtures -- in both the superfluid and the normal states -- and investigations of convection in rotating layers of liquid helium . story_separator_special_tag part i. benard convection and rayleigh-benard convection : 1. benard 's experiments 2. linear theory of rayleigh-benard convection 3. theory of surface tension driven benard convection 4. surface tension driven benard convection experiments 5. linear rayleigh-benard convection experiments 6. supercritical rayleigh-benard convection experiments 7. nonlinear theory of rayleigh-benard convection 8. miscellaneous topics part ii . taylor vortex flow : 9. circular couette flow 10. rayleigh 's stability criterion 11. g. i. taylor 's work 12. other early experiments 13. supercritical taylor vortex experiments 14. experiments with two independently rotating cylinders 15. nonlinear theory of taylor vortices 16. miscellaneous topics . story_separator_special_tag our unifying theory of turbulent thermal convection [ grossmann and lohse , j. fluid mech . 407 , 27 ( 2000 ) ; phys . rev . lett . 86 , 3316 ( 2001 ) ; phys . rev . e 66 , 016305 ( 2002 ) ] is revisited , considering the role of thermal plumes for the thermal dissipation rate and addressing the local distribution of the thermal dissipation rate , which had numerically been calculated by verzicco and camussi [ j. fluid mech . 477 , 19 ( 2003 ) ; eur . phys . j. b 35 , 133 ( 2003 ) ] . predictions for the local heat flux and for the temperature and velocity fluctuations as functions of the rayleigh and prandtl numbers are offered . we conclude with a list of suggestions for measurements that seem suitable to verify or falsify our present understanding of heat transport and fluctuations in turbulent thermal convection . story_separator_special_tag turbulent rayleigh-benard convection displays a large-scale order in the form of rolls and cells on lengths larger than the layer height once the fluctuations of temperature and velocity are removed . these turbulent superstructures are reminiscent of the patterns close to the onset of convection .
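full rayleigh-bénard simulations are beyond a few lines of code , but the classic low-order truncation of the convection equations , the lorenz-63 system ( with rho playing the role of a reduced rayleigh number ) , gives a runnable feel for the transition from steady rolls to chaotic dynamics ; the parameter values are the standard textbook ones , not those of the experiments above .

# the lorenz-63 system, a three-mode truncation of rayleigh-benard convection, integrated with rk4.
import numpy as np

def lorenz(state, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = state
    return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

def rk4(f, state, dt):
    # one classical runge-kutta step
    k1 = f(state)
    k2 = f(state + dt / 2 * k1)
    k3 = f(state + dt / 2 * k2)
    k4 = f(state + dt * k3)
    return state + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

state = np.array([1.0, 1.0, 1.0])
for _ in range(5000):          # rho = 28 is well past the chaotic threshold
    state = rk4(lorenz, state, 0.01)
print(np.round(state, 3))

lowering rho below about 24.74 makes the trajectory settle onto a fixed point ( steady convection rolls ) , which loosely mirrors the onset-versus-turbulence distinction drawn in the reviews above .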
here we report numerical simulations of turbulent convection in fluids at different prandtl numbers ranging from 0.005 to 70 and for rayleigh numbers up to $ 10^7 $ . we identify characteristic scales and times that separate the fast , small-scale turbulent fluctuations from the gradually changing large-scale superstructures . the characteristic scales of the large-scale patterns , which change with prandtl and rayleigh number , are also correlated with the boundary layer dynamics , and in particular the clustering of thermal plumes at the top and bottom plates . our analysis suggests a scale separation and thus the existence of a simplified description of the turbulent superstructures in geo- and astrophysical settings . story_separator_special_tag characteristic properties of turbulent rayleigh-bénard convection in the bulk and the boundary layers are summarized for a wide range of rayleigh and prandtl numbers , with a specific emphasis on low-prandtl-number convection . story_separator_special_tag a data writer is described comprising : a memory to store at least one amount of source data that is to be written to a data storage medium ; a processor to arrange the source data into subsets and generate ecc data in respect of each subset , wherein the source data and the associated ecc data are to be written to a data storage medium via a plurality of individual data channels , and wherein the ecc data comprises at least a first degree of ecc protection having a first level of redundancy in respect of a first subset and a second degree of ecc protection having a second level of redundancy in respect of a second subset ; a plurality of data writing elements , each to write data from an associated data channel , concurrently with the writing by the other data writing elements of data from respective data channels , to a data storage medium ; and a controller , to control the writing by the data writing elements of the source data and the associated ecc data to the data storage medium . story_separator_special_tag many combinatorial optimization problems can be mapped to finding the ground states of the corresponding ising hamiltonians . the physical systems that can solve optimization problems in this way , namely ising machines , have been attracting more and more attention recently . our work shows that ising machines can be realized using almost any nonlinear self-sustaining oscillators with logic values encoded in their phases . many types of such oscillators are readily available for large-scale integration , with potentials in high-speed and low-power operation . in this paper , we describe the operation and mechanism of oscillator-based ising machines . the feasibility of our scheme is demonstrated through several examples in simulation and hardware , among which a simulation study reports average solutions exceeding those from state-of-the-art ising machines on a benchmark combinatorial optimization problem of size 2000 . story_separator_special_tag abstract we introduce a model of interacting lattices at different resolutions driven by the two-dimensional ising dynamics with a nearest-neighbor interaction . we study this model both with tools borrowed from equilibrium statistical mechanics as well as non-equilibrium thermodynamics . our findings show that this model keeps the signature of the equilibrium phase transition .
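a hedged sketch of the oscillator-based ising machine idea described above , using the common phase-domain macromodel : generalized kuramoto dynamics plus a second-harmonic injection term that pulls each phase toward 0 or pi , after which phases decode to spins ; the gain constants , integration scheme and the small frustrated instance are illustrative assumptions , not the paper 's hardware design .

# phase-domain sketch of an oscillator-based ising machine: couplings j define the ising problem,
# and a second-harmonic term binarizes the oscillator phases to {0, pi}.
import numpy as np

rng = np.random.default_rng(2)

def oim_solve(j, k=1.0, ks=2.0, dt=0.05, steps=4000):
    n = j.shape[0]
    theta = rng.uniform(0, 2 * np.pi, n)
    for _ in range(steps):
        coupling = (j * np.sin(theta[:, None] - theta[None, :])).sum(axis=1)
        theta += dt * (-k * coupling - ks * np.sin(2 * theta))  # 2nd harmonic locks phases to {0, pi}
    spins = np.where(np.cos(theta) > 0, 1, -1)   # decode phase -> ising spin
    energy = -0.5 * (spins @ j @ spins)          # ising energy for the decoded configuration
    return spins, energy

# tiny antiferromagnetic triangle: frustrated, so the best reachable energy is -1 here
j = np.array([[0, -1, -1], [-1, 0, -1], [-1, -1, 0]], float)
print(oim_solve(j))

the coupling term is gradient descent on an energy that , once the phases are binarized , coincides with the ising energy , which is the mechanism the abstract appeals to .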
the critical temperature of the equilibrium models corresponds to the state maximizing the entropy and delimits two out-of-equilibrium regimes , one satisfying the onsager relations for systems close to equilibrium and one resembling convective turbulent states . since the model preserves the entropy and energy fluxes in the scale space , it seems a good candidate for parametric studies of out-of-equilibrium turbulent systems . story_separator_special_tag nonequilibrium thermodynamics has shown its applicability in a wide variety of different situations pertaining to fields such as physics , chemistry , biology , and engineering . as successful as it is , however , its current formulation considers only systems close to equilibrium , those satisfying the so-called local equilibrium hypothesis . here we show that diffusion processes that occur far away from equilibrium can be viewed as at local equilibrium in a space that includes all the relevant variables in addition to the spatial coordinate . in this way , nonequilibrium thermodynamics can be used and the difficulties and ambiguities associated with the lack of a thermodynamic description disappear . we analyze explicitly the inertial effects in diffusion and outline how the main ideas can be applied to other situations . story_separator_special_tag possible universal dynamics of a many-body system far from thermal equilibrium are explored . a focus is set on meta-stable non-thermal states exhibiting critical properties such as self-similarity and independence of the details of how the respective state has been reached . it is proposed that universal dynamics far from equilibrium can be tuned to exhibit a dynamical phase transition where these critical properties change qualitatively . this is demonstrated for the case of a superfluid two-component bose gas exhibiting different types of long-lived but non-thermal critical order . scaling exponents controlled by the ratio of experimentally tuneable coupling parameters offer themselves as natural smoking guns . the results shed light on the wealth of universal phenomena expected to exist in the far-from-equilibrium realm . story_separator_special_tag usually , in a nonequilibrium setting , a current brings mass from the highest density regions to the lowest density ones . although rare , the opposite phenomenon ( known as `` uphill diffusion '' ) has also been observed in multicomponent systems , where it appears as an artificial effect of the interaction among components . we show here that uphill diffusion can be a substantial effect , i.e. , it may occur even in single component systems as a consequence of some external work . to this aim we consider the two-dimensional ferromagnetic ising model in contact with two reservoirs that fix , at the left and the right boundaries , magnetizations of the same magnitude but of opposite signs . we provide numerical evidence that a class of nonequilibrium steady states exists in which , by tuning the reservoir magnetizations , the current in the system changes from `` downhill '' to `` uphill '' . moreover , we also show that , in such nonequilibrium setup , the current vanishes when the reservoir magnetization attains a value approaching , in the large volume limit , the magnetization of the equilibrium dynamics , thus establishing a relation between equilibrium
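a hedged sketch of the two-reservoir ising setup in the preceding abstract : the left and right boundary columns are resampled from reservoirs of fixed magnetization + m and - m while bulk spins follow metropolis dynamics , and the steady magnetization profile across the strip is recorded ; the update protocol and all parameters here are illustrative assumptions rather than the paper 's exact construction .

# boundary-driven 2d ising strip: reservoir columns at +m and -m, metropolis dynamics in the bulk.
import numpy as np

rng = np.random.default_rng(3)

def reservoir_ising(n=24, temperature=3.0, m=0.6, sweeps=2000):
    s = rng.choice([-1, 1], size=(n, n))
    beta = 1.0 / temperature
    profile = np.zeros(n)
    for t in range(sweeps):
        # boundary columns: draw spins so that the mean matches the reservoir magnetization
        s[:, 0] = np.where(rng.random(n) < (1 + m) / 2, 1, -1)
        s[:, -1] = np.where(rng.random(n) < (1 - m) / 2, 1, -1)
        for _ in range(n * n):
            i = rng.integers(0, n)
            k = rng.integers(1, n - 1)          # bulk sites only
            nb = s[(i + 1) % n, k] + s[(i - 1) % n, k] + s[i, k + 1] + s[i, k - 1]
            d_e = 2.0 * s[i, k] * nb
            if d_e <= 0 or rng.random() < np.exp(-beta * d_e):
                s[i, k] *= -1
        if t >= sweeps // 2:                    # average the profile in the (putative) steady state
            profile += s.mean(axis=0)
    return profile / (sweeps - sweeps // 2)

print(np.round(reservoir_ising(), 2))

scanning m and the temperature in such a toy reproduces the qualitative point of the abstract : the shape of the magnetization profile , and hence the direction of the induced current , depends on how the reservoir values sit relative to the equilibrium magnetization .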
deep architectures have demonstrated state-of-the-art results in a variety of settings , especially with vision datasets . beyond the model definitions and the quantitative analyses , there is a need for qualitative comparisons of the solutions learned by various deep architectures . the goal of this paper is to find good qualitative interpretations of high level features represented by such models . to this end , we contrast and compare several techniques applied on stacked denoising autoencoders and deep belief networks , trained on several vision datasets . we show that , perhaps counter-intuitively , such interpretation is possible at the unit level , that it is simple to accomplish and that the results are consistent across various techniques . we hope that such techniques will allow researchers in deep architectures to understand more of how and why deep architectures work . story_separator_special_tag the objective of the research is to analyze the ability of the artificial neural network model developed to forecast the credit risk of a panel of italian manufacturing companies . from a theoretical point of view , this paper introduces a literature review on the application of artificial intelligence systems for credit risk management . from an empirical point of view , this research compares the architecture of the artificial neural network model developed in this research to another one , built for a study conducted in 2004 with a similar panel of companies , showing the differences between the two neural network models . story_separator_special_tag deep neural nets with a large number of parameters are very powerful machine learning systems . however , overfitting is a serious problem in such networks . large networks are also slow to use , making it difficult to deal with overfitting by combining the predictions of many different large neural nets at test time . dropout is a technique for addressing this problem . the key idea is to randomly drop units ( along with their connections ) from the neural network during training . this prevents units from co-adapting too much . during training , dropout samples from an exponential number of different `` thinned '' networks . at test time , it is easy to approximate the effect of averaging the predictions of all these thinned networks by simply using a single unthinned network that has smaller weights . this significantly reduces overfitting and gives major improvements over other regularization methods . we show that dropout improves the performance of neural networks on supervised learning tasks in vision , speech recognition , document classification and computational biology , obtaining state-of-the-art results on many benchmark data sets . story_separator_special_tag the theory of reinforcement learning provides a normative account , deeply rooted in psychological and neuroscientific perspectives on animal behaviour , of how agents may optimize their control of an environment . to use reinforcement learning successfully in situations approaching real-world complexity , however , agents are confronted with a difficult task : they must derive efficient representations of the environment from high-dimensional sensory inputs , and use these to generalize past experience to new situations .
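a minimal numpy sketch of the dropout idea from the abstract above , in the common `` inverted '' form that folds the paper 's test-time weight rescaling into training ; the shapes and drop probability are illustrative .

# inverted dropout: drop units at random during training, rescale survivors, do nothing at test time.
import numpy as np

rng = np.random.default_rng(4)

def dropout(x, p_drop=0.5, train=True):
    if not train or p_drop == 0.0:
        return x                                  # test time: use the full network unchanged
    mask = rng.random(x.shape) >= p_drop          # keep each unit with probability 1 - p_drop
    return x * mask / (1.0 - p_drop)              # rescale so the expected activation is preserved

h = np.ones((2, 8))
print(dropout(h))                                 # roughly half the units zeroed, survivors scaled by 2
print(dropout(h, train=False))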
remarkably , humans and other animals seem to solve this problem through a harmonious combination of reinforcement learning and hierarchical sensory processing systems , the former evidenced by a wealth of neural data revealing notable parallels between the phasic signals emitted by dopaminergic neurons and temporal difference reinforcement learning algorithms . while reinforcement learning agents have achieved some successes in a variety of domains , their applicability has previously been limited to domains in which useful features can be handcrafted , or to domains with fully observed , low-dimensional state spaces . here we use recent advances in training deep neural networks to develop a novel artificial agent , termed a deep q-network , that can learn successful policies directly from high-dimensional sensory story_separator_special_tag we propose to train trading systems and portfolios by optimizing objective functions that directly measure trading and investment performance . rather than basing a trading system on forecasts or training via a supervised learning algorithm using labelled trading data , we train our systems using recurrent reinforcement learning ( rrl ) algorithms . the performance functions that we consider for reinforcement learning are profit or wealth , economic utility , the sharpe ratio and our proposed differential sharpe ratio . the trading and portfolio management systems require prior decisions as input in order to properly take into account the effects of transactions costs , market impact , and taxes . this temporal dependence on system state requires the use of reinforcement versions of standard recurrent learning algorithms . we present empirical results in controlled experiments that demonstrate the efficacy of some of our methods for optimizing trading systems and portfolios . for a long/short trader , we find that maximizing the differential sharpe ratio yields more consistent results than maximizing profits , and that both methods outperform a trading system based on forecasts that minimize mse . we find that portfolio traders trained to maximize the differential sharpe ratio achieve story_separator_special_tag we consider strategies which use a collection of popular technical indicators as input and seek a profitable trading rule defined in terms of them . we consider two popular computational learning approaches , reinforcement learning and genetic programming , and compare them to a pair of simpler methods : the exact solution of an appropriate markov decision problem , and a simple heuristic . we find that although all methods are able to generate significant in-sample and out-of-sample profits when transaction costs are zero , the genetic algorithm approach is superior for non-zero transaction costs , although none of the methods produce significant profits at realistic transaction costs . we also find that there is a substantial danger of overfitting if in-sample learning is not constrained . story_separator_special_tag the present study addresses the learning mechanism of boundedly rational agents in the dynamic and noisy environment of financial markets . the main objective is the development of a system that `` decodes '' the knowledge-acquisition strategy and the decision-making process of technical analysts called `` chartists '' . it advances the literature on heterogeneous learning in speculative markets by introducing a trading system wherein market environment and agent beliefs are represented by fuzzy inference rules . 
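a hedged numpy sketch of the core deep q-network recipe from the abstract above -- epsilon-greedy acting , a replay buffer , and a periodically synced target network -- with a tiny linear q-function and a toy chain mdp standing in for the high-dimensional sensory setting ( both stand-ins are illustrative assumptions ) .

# the dqn ingredients on a toy chain mdp: replay buffer, target network, epsilon-greedy policy.
import random
from collections import deque
import numpy as np

rng = np.random.default_rng(5)
n_states, n_actions, gamma, eps, lr = 5, 2, 0.95, 0.1, 0.05
w = np.zeros((n_states, n_actions))        # "online" q-values (a linear q-network in disguise)
w_target = w.copy()                        # target network, synced every few steps
buffer = deque(maxlen=1000)

def step(s, a):                            # chain mdp: action 1 moves right, reward at the end
    s2 = min(s + 1, n_states - 1) if a == 1 else max(s - 1, 0)
    return s2, (1.0 if s2 == n_states - 1 else 0.0)

s = 0
for t in range(5000):
    a = int(rng.integers(n_actions)) if rng.random() < eps else int(np.argmax(w[s]))
    s2, r = step(s, a)
    buffer.append((s, a, r, s2))
    s = 0 if s2 == n_states - 1 else s2    # restart the episode at the goal
    for (bs, ba, br, bs2) in random.sample(list(buffer), min(32, len(buffer))):
        # q-learning target computed with the frozen target network
        target = br + gamma * np.max(w_target[bs2]) * (bs2 != n_states - 1)
        w[bs, ba] += lr * (target - w[bs, ba])
    if t % 100 == 0:
        w_target = w.copy()

print(np.argmax(w, axis=1))                # greedy policy should mostly move right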
the resulting functionality leads to the derivation of the parameters of the fuzzy rules by means of adaptive training . in technical terms , it expands the literature that has utilized actor-critic reinforcement learning and fuzzy systems in agent-based applications , by presenting an adaptive fuzzy reinforcement learning approach that provides accurate and prompt identification of market turning points and thus higher predictability . the purpose of this paper is to illustrate this concretely through a comparative investigation against other well-established models . the results indicate that with the inclusion of transaction costs , the profitability of the novel system in the case of nasdaq composite , ftse100 and nikkei 225 indices is consistently superior to that of a recurrent neural network , a story_separator_special_tag 1. introduction - the new paradigm in high frequency trading david easley ( cornell university ) , marcos lopez de prado ( hess corporation ) , maureen o'hara ( cornell university ) 2. high frequency trading strategies in fx markets richard olsen ( olsen ltd ) 3. execution strategies in equity markets michael g sotiropoulos ( bank of america merrill lynch ) 4. execution strategies in fixed income markets robert almgren ( quantitative brokers ) 5. machine learning and high frequency data michael kearns ( university of pennsylvania ) and yuriy nevmyvaka ( sac capital ) 6. the regulatory problems in high frequency markets oliver linton ( university of cambridge ) , maureen o'hara ( cornell university ) and j.p. zigrand ( lse ) 7. microstructure research methodologies for hf markets terry hendershott ( university of california , berkeley ) , charles jones ( columbia university ) , albert menkveld ( vu university amsterdam ) 8. do algo executions leak information ? george sofianos and juanjuan xiang ( goldman sachs ) 9. volatility and contagion david easley ( cornell university ) , robert engle ( nyu stern school of business ) , ( hess corporation ) , maureen o'hara ( story_separator_special_tag learning to store information over extended time intervals by recurrent backpropagation takes a very long time , mostly because of insufficient , decaying error backflow . we briefly review hochreiter 's ( 1991 ) analysis of this problem , then address it by introducing a novel , efficient , gradient based method called long short-term memory ( lstm ) . truncating the gradient where this does not do harm , lstm can learn to bridge minimal time lags in excess of 1000 discrete-time steps by enforcing constant error flow through constant error carousels within special units . multiplicative gate units learn to open and close access to the constant error flow . lstm is local in space and time ; its computational complexity per time step and weight is o ( 1 ) . our experiments with artificial data involve local , distributed , real-valued , and noisy pattern representations . in comparisons with real-time recurrent learning , back propagation through time , recurrent cascade correlation , elman nets , and neural sequence chunking , lstm leads to many more successful runs , and learns much faster . lstm also solves complex , artificial long-time-lag tasks that have never been solved by story_separator_special_tag in this paper we argue for the fundamental importance of the value distribution : the distribution of the random return received by a reinforcement learning agent .
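a minimal numpy sketch of a single lstm cell as described above : multiplicative gates controlling reads and writes to the cell state , the `` constant error carousel '' ; the weights are random and untrained , purely to show the computation .

# one lstm cell step: input, forget and output gates plus a candidate update, all from one matrix.
import numpy as np

rng = np.random.default_rng(6)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_cell(x, h, c, w, b):
    z = w @ np.concatenate([x, h]) + b
    i, f, o, g = np.split(z, 4)
    c_new = sigmoid(f) * c + sigmoid(i) * np.tanh(g)   # gated write into the cell state
    h_new = sigmoid(o) * np.tanh(c_new)                # gated read out of the cell state
    return h_new, c_new

n_in, n_hid = 3, 4
w = rng.normal(0, 0.1, (4 * n_hid, n_in + n_hid))
b = np.zeros(4 * n_hid)
h, c = np.zeros(n_hid), np.zeros(n_hid)
for x in rng.normal(size=(10, n_in)):                  # run the cell over a short sequence
    h, c = lstm_cell(x, h, c, w, b)
print(h)

the additive update of c is what keeps gradients from decaying over long lags : when the forget gate stays near one , error flows back through c essentially unchanged .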
this is in contrast to the common approach to reinforcement learning which models the expectation of this return , or value . although there is an established body of literature studying the value distribution , thus far it has always been used for a specific purpose such as implementing risk-aware behaviour . we begin with theoretical results in both the policy evaluation and control settings , exposing a significant distributional instability in the latter . we then use the distributional perspective to design a new algorithm which applies bellman 's equation to the learning of approximate value distributions . we evaluate our algorithm using the suite of games from the arcade learning environment . we obtain both state-of-the-art results and anecdotal evidence demonstrating the importance of the value distribution in approximate reinforcement learning . finally , we combine theoretical and empirical evidence to highlight the ways in which the value distribution impacts learning in the approximate setting . story_separator_special_tag in reinforcement learning ( rl ) , an agent interacts with the environment by taking actions and observing the next state and reward . when sampled probabilistically , these state transitions , rewards , and actions can all induce randomness in the observed long-term return . traditionally , reinforcement learning algorithms average over this randomness to estimate the value function . in this paper , we build on recent work advocating a distributional approach to reinforcement learning in which the distribution over returns is modeled explicitly instead of only estimating the mean . that is , we examine methods of learning the value distribution instead of the value function . we give results that close a number of gaps between the theoretical and algorithmic results given by bellemare , dabney , and munos ( 2017 ) . first , we extend existing results to the approximate distribution setting . second , we present a novel distributional reinforcement learning algorithm consistent with our theoretical formulation . finally , we evaluate this new algorithm on the atari 2600 games , observing that it significantly outperforms many of the recent improvements on dqn , including the related distributional algorithm c51 . story_separator_special_tag distributional approaches to value-based reinforcement learning model the entire distribution of returns , rather than just their expected values , and have recently been shown to yield state-of-the-art empirical performance . this was demonstrated by the recently proposed c51 algorithm , based on categorical distributional reinforcement learning ( cdrl ) [ bellemare et al. , 2017 ] . however , the theoretical properties of cdrl algorithms are not yet well understood . in this paper , we introduce a framework to analyse cdrl algorithms , establish the importance of the projected distributional bellman operator in distributional rl , draw fundamental connections between cdrl and the cramer distance , and give a proof of convergence for sample-based categorical distributional reinforcement learning algorithms . story_separator_special_tag standard reinforcement learning ( rl ) aims to optimize decision-making rules in terms of the expected return . however , especially for risk-management purposes , other criteria such as the expected shortfall are sometimes preferred . here , we describe a method of approximating the distribution of returns , which allows us to derive various kinds of information about the returns . 
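a hedged numpy sketch of the categorical projection at the heart of the c51-style distributional bellman update discussed above : the return distribution is pushed through z ' = r + gamma z and its mass is redistributed onto a fixed grid of atoms ; the atom range and count are the usual illustrative choices .

# categorical projection: transform the atom locations, then split each atom's mass
# between the two nearest atoms of the fixed grid.
import numpy as np

def project(probs, atoms, r, gamma):
    v_min, v_max = atoms[0], atoms[-1]
    dz = atoms[1] - atoms[0]
    tz = np.clip(r + gamma * atoms, v_min, v_max)      # transformed atom locations
    b = (tz - v_min) / dz                              # fractional grid index of each atom
    lo, hi = np.floor(b).astype(int), np.ceil(b).astype(int)
    out = np.zeros_like(probs)
    np.add.at(out, lo, probs * (hi - b))               # split mass between neighbouring atoms
    np.add.at(out, hi, probs * (b - lo))
    np.add.at(out, lo, probs * (lo == hi))             # atom fell exactly on a grid point
    return out

atoms = np.linspace(-10, 10, 51)                       # the "51" in c51
probs = np.full(51, 1 / 51)                            # flat initial return distribution
print(project(probs, atoms, r=1.0, gamma=0.99).sum())  # mass is conserved: 1.0

in the full algorithm this projected distribution is the training target for the predicted distribution at ( s , a ) , typically under a cross-entropy loss .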
we first show that the bellman equation , which is a recursive formula for the expected return , can be extended to the cumulative return distribution . then we derive a nonparametric return distribution estimator with particle smoothing based on this extended bellman equation . a key aspect of the proposed algorithm is to represent the recursion relation in the extended bellman equation by a simple replacement procedure of particles associated with a state by using those of the successor state . we show that our algorithm leads to a risk-sensitive rl paradigm . the usefulness of the proposed approach is demonstrated through numerical experiments . story_separator_special_tag deep reinforcement learning agents have achieved state-of-the-art results by directly maximising cumulative reward . however , environments contain a much wider variety of possible training signals . in this paper , we introduce an agent that also maximises many other pseudo-reward functions simultaneously by reinforcement learning . all of these tasks share a common representation that , like unsupervised learning , continues to develop in the absence of extrinsic rewards . we also introduce a novel mechanism for focusing this representation upon extrinsic rewards , so that learning can rapidly adapt to the most relevant aspects of the actual task . our agent significantly outperforms the previous state-of-the-art on atari , averaging 880 % expert human performance , and a challenging suite of first-person , three-dimensional labyrinth tasks leading to a mean speedup in learning of 10 x and averaging 87 % expert human performance on labyrinth . story_separator_special_tag we propose a conceptually simple and lightweight framework for deep reinforcement learning that uses asynchronous gradient descent for optimization of deep neural network controllers . we present asynchronous variants of four standard reinforcement learning algorithms and show that parallel actor-learners have a stabilizing effect on training allowing all four methods to successfully train neural network controllers . the best performing method , an asynchronous variant of actor-critic , surpasses the current state-of-the-art on the atari domain while training for half the time on a single multi-core cpu instead of a gpu . furthermore , we show that asynchronous actor-critic succeeds on a wide variety of continuous motor control problems as well as on a new task of navigating random 3d mazes using a visual input . story_separator_special_tag we explore the use of evolution strategies ( es ) , a class of black box optimization algorithms , as an alternative to popular mdp-based rl techniques such as q-learning and policy gradients . experiments on mujoco and atari show that es is a viable solution strategy that scales extremely well with the number of cpus available : by using a novel communication strategy based on common random numbers , our es implementation only needs to communicate scalars , making it possible to scale to over a thousand parallel workers . this allows us to solve 3d humanoid walking in 10 minutes and obtain competitive results on most atari games after one hour of training . in addition , we highlight several advantages of es as a black box optimization technique : it is invariant to action frequency and delayed rewards , tolerant of extremely long horizons , and does not need temporal discounting or value function approximation .
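a minimal sketch of the evolution-strategies estimator described above : perturb the parameters with gaussian noise , evaluate returns , and step along the return-weighted noise ; the toy objective stands in for an episodic return and all constants are illustrative .

# a basic es loop: the gradient of the smoothed objective is estimated from gaussian perturbations.
import numpy as np

rng = np.random.default_rng(7)

def es_optimize(f, theta, sigma=0.1, lr=0.02, pop=50, iters=200):
    for _ in range(iters):
        eps = rng.normal(size=(pop, theta.size))       # one noise row per (simulated) worker
        returns = np.array([f(theta + sigma * e) for e in eps])
        returns = (returns - returns.mean()) / (returns.std() + 1e-8)  # normalize for stability
        theta = theta + lr / (pop * sigma) * eps.T @ returns
    return theta

# toy "return": negative squared distance to a hidden optimum
target = np.array([3.0, -2.0, 0.5])
f = lambda w: -np.sum((w - target) ** 2)
print(np.round(es_optimize(f, np.zeros(3)), 2))        # approaches the hidden optimum

because each worker only needs its own scalar return and a shared random seed to reconstruct the noise , the communication cost per iteration is tiny , which is what makes the large-scale parallelism in the abstract possible .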
story_separator_special_tag this article presents a general class of associative reinforcement learning algorithms for connectionist networks containing stochastic units . these algorithms , called reinforce algorithms , are shown to make weight adjustments in a direction that lies along the gradient of expected reinforcement in both immediate-reinforcement tasks and certain limited forms of delayed-reinforcement tasks , and they do this without explicitly computing gradient estimates or even storing information from which such estimates could be computed . specific examples of such algorithms are presented , some of which bear a close relationship to certain existing algorithms while others are novel but potentially interesting in their own right . also given are results that show how such algorithms can be naturally integrated with backpropagation . we close with a brief discussion of a number of additional issues surrounding the use of such algorithms , including what is known about their limiting behaviors as well as further considerations that might be used to help develop similar but potentially more powerful reinforcement learning algorithms . story_separator_special_tag in this paper we consider deterministic policy gradient algorithms for reinforcement learning with continuous actions . the deterministic policy gradient has a particularly appealing form : it is the expected gradient of the action-value function . this simple form means that the deterministic policy gradient can be estimated much more efficiently than the usual stochastic policy gradient . to ensure adequate exploration , we introduce an off-policy actor-critic algorithm that learns a deterministic target policy from an exploratory behaviour policy . we demonstrate that deterministic policy gradient algorithms can significantly outperform their stochastic counterparts in high-dimensional action spaces . story_separator_special_tag technical process control is a highly interesting area of application with high practical impact . since classical controller design is , in general , a demanding job , this area constitutes a highly attractive domain for the application of learning approaches -- in particular , reinforcement learning ( rl ) methods . rl provides concepts for learning controllers that , by cleverly exploiting information from interactions with the process , can acquire high-quality control behaviour from scratch . this article focuses on the presentation of four typical benchmark problems whilst highlighting important and challenging aspects of technical process control : nonlinear dynamics ; varying set-points ; long-term dynamic effects ; influence of external variables ; and the primacy of precision . we propose performance measures for controller quality that apply both to classical control design and learning controllers , measuring precision , speed , and stability of the controller . a second set of key figures describes the performance from the perspective of a learning approach while providing information about the efficiency of the method with respect to the learning effort needed . for all four benchmark problems , extensive and detailed information is provided with which to carry out story_separator_special_tag we adapt the ideas underlying the success of deep q-learning to the continuous action domain . we present an actor-critic , model-free algorithm based on the deterministic policy gradient that can operate over continuous action spaces .
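a minimal sketch of the reinforce score-function update described above , for a two-armed bandit with a bernoulli policy ; the arm payoffs and learning rate are illustrative , and no baseline is used .

# reinforce on a two-armed bandit: move the policy logit along reward * grad log pi(a).
import numpy as np

rng = np.random.default_rng(8)

theta = 0.0                                   # logit of picking arm 1
lr = 0.05
arm_means = (0.2, 0.8)                        # arm 1 pays more on average
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-theta))          # pi(a = 1)
    a = int(rng.random() < p)
    r = float(rng.random() < arm_means[a])    # bernoulli reward
    grad_logp = a - p                         # d/dtheta log pi(a) for a bernoulli policy
    theta += lr * r * grad_logp               # reinforce: reward-weighted score function
print(1.0 / (1.0 + np.exp(-theta)))           # probability of the better arm -> close to 1

note that no gradient of the reward itself is ever computed , which is exactly the property the abstract emphasizes ; subtracting a baseline from r would reduce the variance of the update without changing its expectation .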
using the same learning algorithm , network architecture and hyper-parameters , our algorithm robustly solves more than 20 simulated physics tasks , including classic problems such as cartpole swing-up , dexterous manipulation , legged locomotion and car driving . our algorithm is able to find policies whose performance is competitive with those found by a planning algorithm with full access to the dynamics of the domain and its derivatives . we further demonstrate that for many of the tasks the algorithm can learn policies end-to-end : directly from raw pixel inputs . story_separator_special_tag invited talks.- data analysis in the life sciences - sparking ideas -.- machine learning for natural language processing ( and vice versa ? ) .- statistical relational learning : an inductive logic programming perspective.- recent advances in mining time series data.- focus the mining beacon : lessons and challenges from the world of e-commerce.- data streams and data synopses for massive data sets ( invited talk ) .- long papers.- clustering and metaclustering with nonnegative matrix decompositions.- a sat-based version space algorithm for acquiring constraint satisfaction problems.- estimation of mixture models using co-em.- nonrigid embeddings for dimensionality reduction.- multi-view discriminative sequential learning.- robust bayesian linear classifier ensembles.- an integrated approach to learning bayesian networks of rules.- thwarting the nigritude ultramarine : learning to identify link spam.- rotational prior knowledge for svms.- on the learnability of abstraction theories from observations for relational learning.- beware the null hypothesis : critical value tables for evaluating classifiers.- kernel basis pursuit.- hybrid algorithms with instance-based classification.- learning and classifying under hard budgets.- training support vector machines with multiple equality constraints.- a model based method for automatic facial expression recognition.- margin-sparsity trade-off for the set covering machine.- learning from positive and unlabeled examples with different data story_separator_special_tag this paper presents an actor-critic deep reinforcement learning agent with experience replay that is stable , sample efficient , and performs remarkably well on challenging environments , including the discrete 57-game atari domain and several continuous control problems . to achieve this , the paper introduces several innovations , including truncated importance sampling with bias correction , stochastic dueling network architectures , and a new trust region policy optimization method . story_separator_special_tag in this work we present a new reinforcement learning agent , called reactor ( for retrace-actor ) , based on an off-policy multi-step return actor-critic architecture . the agent uses a deep recurrent neural network for function approximation . the network outputs a target policy $ \pi $ ( the actor ) , an action-value q-function ( the critic ) evaluating the current policy $ \pi $ , and an estimated behavioral policy $ \hat{\mu} $ which we use for off-policy correction . the agent maintains a memory buffer filled with past experiences . the critic is trained by the multi-step off-policy retrace algorithm and the actor is trained by a novel $ \beta $ -leave-one-out policy gradient estimate ( which uses both the off-policy corrected return and the estimated q-function ) .
the reactor is sample-efficient thanks to the use of memory replay , and numerically efficient since it uses multi-step returns . also both acting and learning can be parallelized . we evaluated our algorithm on 57 atari 2600 games and demonstrated that it achieves state-of-the-art performance . story_separator_special_tag policy gradient is an efficient technique for improving a policy in a reinforcement learning setting . however , vanilla online variants are on-policy only and not able to take advantage of off-policy data . in this paper we describe a new technique that combines policy gradient with off-policy q-learning , drawing experience from a replay buffer . this is motivated by making a connection between the fixed points of the regularized policy gradient algorithm and the q-values . this connection allows us to estimate the q-values from the action preferences of the policy , to which we apply q-learning updates . we refer to the new technique as pgql , for policy gradient and q-learning . we also establish an equivalency between action-value fitting techniques and actor-critic algorithms , showing that regularized policy gradient techniques can be interpreted as advantage function learning algorithms . we conclude with some numerical examples that demonstrate improved data efficiency and stability of pgql . in particular , we tested pgql on the full suite of atari games and achieved performance exceeding that of both asynchronous advantage actor-critic ( a3c ) and q-learning . story_separator_special_tag model-free reinforcement learning algorithms , such as q-learning , perform poorly in the early stages of learning in noisy environments , because much effort is spent unlearning biased estimates of the state-action value function . the bias results from selecting , among several noisy estimates , the apparent optimum , which may actually be suboptimal . we propose g-learning , a new off-policy learning algorithm that regularizes the value estimates by penalizing deterministic policies in the beginning of the learning process . we show that this method reduces the bias of the value-function estimation , leading to faster convergence to the optimal value and the optimal policy . moreover , g-learning enables the natural incorporation of prior domain knowledge , when available . the stochastic nature of g-learning also makes it avoid some exploration costs , a property usually attributed only to on-policy algorithms . we illustrate these ideas in several examples , where g-learning results in significant improvements of the convergence rate and the cost of the learning process . story_separator_special_tag we propose a method for learning expressive energy-based policies for continuous states and actions , which has been feasible only in tabular domains before . we apply our method to learning maximum entropy policies , resulting in a new algorithm , called soft q-learning , that expresses the optimal policy via a boltzmann distribution . we use the recently proposed amortized stein variational gradient descent to learn a stochastic sampling network that approximates samples from this distribution . the benefits of the proposed algorithm include improved exploration and compositionality that allows transferring skills between tasks , which we confirm in simulated experiments with swimming and walking robots . we also draw a connection to actor-critic methods , which can be viewed as performing approximate inference on the corresponding energy-based model .
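a hedged tabular sketch of the soft ( entropy-regularized ) q-learning update discussed above : the backup replaces the hard max with a temperature-scaled log-sum-exp , and the induced policy is the boltzmann distribution over q-values ; the chain mdp and all constants are illustrative assumptions .

# tabular soft q-learning: v(s) = alpha * logsumexp(q(s, .) / alpha), policy = softmax(q / alpha).
import numpy as np

rng = np.random.default_rng(9)
n_s, n_a, gamma, alpha, lr = 5, 2, 0.9, 0.5, 0.1       # alpha is the entropy temperature
q = np.zeros((n_s, n_a))

def soft_value(qs):                                     # soft state value
    return alpha * np.log(np.sum(np.exp(qs / alpha)))

def policy(qs):                                         # boltzmann policy over q-values
    p = np.exp((qs - qs.max()) / alpha)
    return p / p.sum()

s = 0
for _ in range(20000):
    a = int(rng.choice(n_a, p=policy(q[s])))
    s2 = min(s + 1, n_s - 1) if a == 1 else max(s - 1, 0)
    r = 1.0 if s2 == n_s - 1 else 0.0
    done = s2 == n_s - 1
    target = r + gamma * (0.0 if done else soft_value(q[s2]))
    q[s, a] += lr * (target - q[s, a])
    s = 0 if done else s2

print(np.round([policy(q[s]) for s in range(n_s)], 2))  # stochastic but biased toward "right"

as the temperature alpha goes to zero the log-sum-exp collapses to the max and ordinary q-learning is recovered , which is the limit underlying the equivalence result in the next abstract .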
story_separator_special_tag two of the leading approaches for model-free reinforcement learning are policy gradient methods and $ q $ -learning methods . $ q $ -learning methods can be effective and sample-efficient when they work , however , it is not well-understood why they work , since empirically , the $ q $ -values they estimate are very inaccurate . a partial explanation may be that $ q $ -learning methods are secretly implementing policy gradient updates : we show that there is a precise equivalence between $ q $ -learning and policy gradient methods in the setting of entropy-regularized reinforcement learning , that `` soft '' ( entropy-regularized ) $ q $ -learning is exactly equivalent to a policy gradient method . we also point out a connection between $ q $ -learning methods and natural policy gradient methods . experimentally , we explore the entropy-regularized versions of $ q $ -learning and policy gradients , and we find them to perform as well as ( or slightly better than ) the standard variants on the atari benchmark . we also show that the equivalence holds in practical settings by constructing a $ q $ -learning method that closely matches the story_separator_special_tag abstract : learning to predict future images from a video sequence involves the construction of an internal representation that models the image evolution accurately , and therefore , to some degree , its content and dynamics . this is why pixel-space video prediction may be viewed as a promising avenue for unsupervised feature learning . in addition , while optical flow has been a much-studied problem in computer vision for a long time , future frame prediction is rarely approached . still , many vision applications could benefit from the knowledge of the next frames of videos , which does not require the complexity of tracking every pixel trajectory . in this work , we train a convolutional network to generate future frames given an input sequence . to deal with the inherently blurry predictions obtained from the standard mean squared error ( mse ) loss function , we propose three different and complementary feature learning strategies : a multi-scale architecture , an adversarial training method , and an image gradient difference loss function . we compare our predictions to different published results based on recurrent neural networks on the ucf101 dataset story_separator_special_tag conventional wisdom holds that model-based planning is a powerful approach to sequential decision-making . it is often very challenging in practice , however , because while a model can be used to evaluate a plan , it does not prescribe how to construct a plan . here we introduce the `` imagination-based planner '' , the first model-based , sequential decision-making agent that can learn to construct , evaluate , and execute plans . before any action , it can perform a variable number of imagination steps , which involve proposing an imagined action and evaluating it with its model-based imagination . all imagined actions and outcomes are aggregated , iteratively , into a `` plan context '' which conditions future real and imagined actions . the agent can even decide how to imagine : testing out alternative imagined actions , chaining sequences of actions together , or building a more complex `` imagination tree '' by navigating flexibly among the previously imagined states using a learned policy .
and our agent can learn to plan economically , jointly optimizing for external rewards and computational costs associated with using its imagination . we show that our architecture can learn to story_separator_special_tag in this paper , we introduce pilco , a practical , data-efficient model-based policy search method . pilco reduces model bias , one of the key problems of model-based reinforcement learning , in a principled way . by learning a probabilistic dynamics model and explicitly incorporating model uncertainty into long-term planning , pilco can cope with very little data and facilitates learning from scratch in only a few trials . policy evaluation is performed in closed form using state-of-the-art approximate inference . furthermore , policy gradients are computed analytically for policy improvement . we report unprecedented learning efficiency on challenging and high-dimensional control tasks . story_separator_special_tag data-efficient learning in continuous state-action spaces using very high-dimensional observations remains a key challenge in developing fully autonomous systems . in this paper , we consider one instance of this challenge , the pixels-to-torques problem , where an agent must learn a closed-loop control policy from pixel information only . we introduce a data-efficient , model-based reinforcement learning algorithm that learns such a closed-loop policy directly from pixel information . the key ingredient is a deep dynamical model that uses deep autoencoders to learn a low-dimensional embedding of images jointly with a prediction model in this low-dimensional feature space . this joint learning ensures that not only static properties of the data are accounted for , but also dynamic properties . this is crucial for long-term predictions , which lie at the core of the adaptive model predictive control strategy that we use for closed-loop control . compared to state-of-the-art reinforcement learning methods , our approach learns quickly , scales to high-dimensional state spaces and facilitates fully autonomous learning from pixels to torques . story_separator_special_tag direct policy search can effectively scale to high-dimensional systems , but complex policies with hundreds of parameters often present a challenge for such methods , requiring numerous samples and often falling into poor local optima . we present a guided policy search algorithm that uses trajectory optimization to direct policy learning and avoid poor local optima . we show how differential dynamic programming can be used to generate suitable guiding samples , and describe a regularized importance sampled policy optimization that incorporates these samples into the policy search . we evaluate the method by learning neural network controllers for planar swimming , hopping , and walking , as well as simulated 3d humanoid running . story_separator_special_tag model-free reinforcement learning has been successfully applied to a range of challenging problems , and has recently been extended to handle large neural network policies and value functions . however , the sample complexity of model-free algorithms , particularly when using high-dimensional function approximators , tends to limit their applicability to physical systems . in this paper , we explore algorithms and representations to reduce the sample complexity of deep reinforcement learning for continuous control tasks . we propose two complementary techniques for improving the efficiency of such algorithms .
first , we derive a continuous variant of the q-learning algorithm , which we call normalized advantage functions ( naf ) , as an alternative to the more commonly used policy gradient and actor-critic methods . the naf representation allows us to apply q-learning with experience replay to continuous tasks , and substantially improves performance on a set of simulated robotic control tasks . to further improve the efficiency of our approach , we explore the use of learned models for accelerating model-free reinforcement learning . we show that iteratively refitted local linear models are especially effective for this , and demonstrate substantially faster learning on domains where such models are story_separator_special_tag model-free deep reinforcement learning algorithms have been shown to be capable of learning a wide range of robotic skills , but typically require a very large number of samples to achieve good performance . model-based algorithms , in principle , can provide for much more efficient learning , but have proven difficult to extend to expressive , high-capacity models such as deep neural networks . in this work , we demonstrate that neural network dynamics models can in fact be combined with model predictive control ( mpc ) to achieve excellent sample complexity in a model-based reinforcement learning algorithm , producing stable and plausible gaits that accomplish various complex locomotion tasks . we further propose using deep neural network dynamics models to initialize a model-free learner , in order to combine the sample efficiency of model-based approaches with the high task-specific performance of model-free methods . we empirically demonstrate on mujoco locomotion tasks that our pure model-based approach trained on just random action data can follow arbitrary trajectories with excellent sample efficiency , and that our hybrid algorithm can accelerate model-free learning on high-speed benchmark tasks , achieving sample efficiency gains of 3-5 x on swimmer , cheetah , story_separator_special_tag we present a unified framework for learning continuous control policies using backpropagation . it supports stochastic control by treating stochasticity in the bellman equation as a deterministic function of exogenous noise . the product is a spectrum of general policy gradient algorithms that range from model-free methods with value functions to model-based methods without value functions . we use learned models but only require observations from the environment instead of observations from model-predicted trajectories , minimizing the impact of compounded model errors . we apply these algorithms first to a toy stochastic control problem and then to several physics-based control problems in simulation . one of these variants , svg ( 1 ) , shows the effectiveness of learning models , value functions , and policies simultaneously in continuous domains . story_separator_special_tag the recent adaptation of deep neural network-based methods to reinforcement learning and planning domains has yielded remarkable progress on individual tasks . nonetheless , progress on task-to-task transfer remains limited . in pursuit of efficient and robust generalization , we introduce the schema network , an object-oriented generative physics simulator capable of disentangling multiple causes of events and reasoning backward through causes to achieve goals . the richly structured architecture of the schema network can learn the dynamics of an environment directly from data .
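a hedged sketch of the model-based recipe described above : fit a one-step dynamics model to random-action data , then control by model predictive control with random shooting -- sample candidate action sequences , roll them through the learned model , execute the first action of the best sequence , and replan ; the linear point-mass system , horizon and sample counts are illustrative assumptions .

# learn a one-step dynamics model from random rollouts, then do mpc via random shooting.
import numpy as np

rng = np.random.default_rng(10)

def true_dyn(s, a):                         # unknown to the learner: a damped point mass
    return 0.9 * s + 0.5 * a

# 1) fit a linear dynamics model s' ~ [s, a] @ w from random-action data
S = rng.normal(size=400)
A = rng.uniform(-1, 1, 400)
X = np.stack([S, A], axis=1)
w, *_ = np.linalg.lstsq(X, true_dyn(S, A), rcond=None)  # learned model parameters

def mpc_action(s, horizon=10, n_samples=200, goal=2.0):
    seqs = rng.uniform(-1, 1, (n_samples, horizon))
    costs = np.zeros(n_samples)
    sim = np.full(n_samples, s)
    for t in range(horizon):                # roll every candidate sequence through the model
        sim = w[0] * sim + w[1] * seqs[:, t]
        costs += (sim - goal) ** 2
    return seqs[np.argmin(costs), 0]        # execute only the first action, then replan

s = 0.0
for _ in range(30):
    s = true_dyn(s, mpc_action(s))
print(round(s, 2))                          # settles near the goal state 2.0

replanning at every step is what keeps the compounding of model error in check , which is the property the mpc-based abstract leans on .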
we compare schema networks with asynchronous advantage actor-critic and progressive networks on a suite of breakout variations , reporting results on training efficiency and zero-shot generalization , consistently demonstrating faster , more robust learning and better transfer . we argue that generalizing from limited data and learning causal relationships are essential abilities on the path toward generally intelligent systems . story_separator_special_tag four machine learning algorithms are used for prediction in stock markets . focus is on data pre-processing to improve the prediction accuracy . technical indicators are discretised by exploiting the inherent opinion . prediction accuracy of algorithms increases when discrete data is used . this paper addresses the problem of predicting the direction of movement of stock and stock price index for indian stock markets . the study compares four prediction models , artificial neural network ( ann ) , support vector machine ( svm ) , random forest and naive-bayes with two approaches for input to these models . the first approach for input data involves computation of ten technical parameters using stock trading data ( open , high , low & close prices ) while the second approach focuses on representing these technical parameters as trend deterministic data . accuracy of each of the prediction models for each of the two input approaches is evaluated . evaluation is carried out on 10 years of historical data from 2003 to 2012 of two stocks namely reliance industries and infosys ltd. and two stock price indices cnx nifty and s & p bombay stock exchange ( bse ) sensex . the experimental results suggest that for the first approach story_separator_special_tag financial news has been proven to be a crucial factor which causes fluctuations in stock prices . however , previous studies heavily relied on analyzing shallow features and ignored the structural relation among words in a sentence . several sentiment analysis studies have tried to point out the relationship between investors ' reaction and news events . however , the sentiment dataset was usually constructed from the lingual dataset which is unrelated to the financial sector and led to poor performance . this paper proposes a novel framework to predict the directions of stock prices by using both financial news and sentiment dictionary . the original contributions of this paper include the proposal of a novel two-stream gated recurrent unit network and stock2vec a sentiment word embedding trained on financial news dataset and harvard iv-4 . two main experiments are conducted : the first experiment predicts sp 2 ) stock2vec is more efficient in dealing with financial datasets ; and 3 ) applying the model , a simulation scenario proves that our model is effective for the stock sector . story_separator_special_tag from past to present , the prediction of stock price in stock market has been a knotty problem . many researchers have made various attempts and studies to predict stock prices . the prediction of stock price in stock market has been of concern to researchers in many disciplines , including economics , mathematics , physics , and computer science . this study intends to learn fluctuation of stock prices in stock market by using recently spotlighted techniques of deep learning to predict future stock price . in previous studies , we have used price-based input-features to measure performance changes in deep learning models .
the results of these studies have revealed that the performance of stock price models changes according to the input-features configured based on stock price . therefore , we have concluded that more novel input-features in deep learning models are needed to predict patterns of stock price fluctuation more precisely . in this paper , for predicting stock price fluctuation , we design a deep learning model using 715 novel input-features configured on the basis of technical analyses . the performance of the prediction model was then compared to another model that employed simple price-based input-features . story_separator_special_tag in this paper we begin by finding ways to predict stock value movements using deep learning . the purpose of this paper is to analyze the patterns in stock values and the relationships among them by deep learning , in order to predict which pattern the stock value will follow next . we construct the data by dividing the time-series stock value information into fixed periods and derive stock value patterns by analyzing these data . a model is configured for deep learning , the patterned time-series information is learned using this model , and the next pattern of the stock value is then predicted . this paper focuses on machine learning , using time-series stock value information to predict the rise and fall of stock values . this paper is about how to analyze and how to predict . by analyzing the pattern of the current chart and anticipating the pattern to follow , we can expect the trend of the stock value with high probability ; the focus is on what the deep-learning machine analyzes and what it predicts . story_separator_special_tag the main attraction of the stock market is the speedy growth of stock value in a short period of time . investors analyse the performance , estimated value and growth of organizations before investing money in the market . conventional processes or the few available methods suggested by different studies may not be enough for this analysis . with the large number of stocks available in the market today , it is very difficult to study each stock with the few suggested forecasting methods . knowing the anticipated stock value therefore requires advanced prediction technology for the stock market . this paper introduces a method to plan and analyze the stock performance of different organizations in the market and to identify the most suitable stock by predicting its closing price . the proposed system is based on a multilayer deep neural network optimized with the adam optimizer . six years ( 2010-2016 ) of recent data from different organizations are applied to the model to demonstrate the effectiveness of the proposed method . the results show that the proposed framework suits datasets from various sectors , and the prediction error is minimal story_separator_special_tag in this paper , we introduce a new prediction model that depends on a bidirectional gated recurrent unit ( bgru ) . our predictive model relies on both online financial news and historical stock price data to predict stock movements in the future . experimental results show that our model accuracy achieves nearly 60 % in s & p 500 index prediction whereas the individual stock prediction is over 65 % .
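a minimal pytorch sketch of a bidirectional gru movement classifier in the spirit of the bgru record above ; dimensions are hypothetical , and the published model also consumes news embeddings ( stock2vec-style ) , which are omitted here .

```python
import torch
import torch.nn as nn

class BGRUClassifier(nn.Module):
    """Bidirectional GRU over a window of daily price features,
    predicting up/down movement from the last time step."""
    def __init__(self, n_features=5, hidden=64):
        super().__init__()
        self.gru = nn.GRU(n_features, hidden, batch_first=True,
                          bidirectional=True)
        self.head = nn.Linear(2 * hidden, 2)  # up / down logits

    def forward(self, x):            # x: (batch, days, n_features)
        out, _ = self.gru(x)
        return self.head(out[:, -1])

logits = BGRUClassifier()(torch.randn(8, 30, 5))  # 8 windows of 30 days
```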
story_separator_special_tag in the business sector , it has always been a difficult task to predict the exact daily price of the stock market index ; hence , there is a great deal of research being conducted regarding the prediction of the direction of stock price index movement . many factors such as political events , general economic conditions , and traders ' expectations may have an influence on the stock market index . there are numerous research studies that use indicators to forecast the direction of the stock market index . in this study , we applied two types of input variables to predict the direction of the daily stock market index . the main contribution of this study is the ability to predict the direction of the next day 's price of the japanese stock market index by using an optimized artificial neural network ( ann ) model . to improve the prediction accuracy of the trend of the stock market index in the future , we optimize the ann model using genetic algorithms ( ga ) . we demonstrate and verify the predictability of stock price direction by using the hybrid ga-ann model and then compare the performance with story_separator_special_tag automobile insurance fraud is one of the main challenges for insurance companies . this form of fraud is performed either opportunistically or professionally through group cooperation , leading to greater financial losses , while most methods presented thus far are unsuited for flagging these groups . the article puts forward a new approach for identification , representation , and analysis of organized fraudulent groups in automobile insurance through focusing on structural aspects of networks , and cycles in particular , that demonstrate the occurrence of potential fraud . suspicious groups are detected by applying cycle detection algorithms ( using both dfs and bfs trees ) ; afterward , the probability of being fraudulent is investigated for suspicious components to reveal the fraudulent groups with the maximum likelihood , and their reviews are prioritized . the actual data of iran insurance company is used for evaluating the provided approach . as a result , the detection of cycles is not only more efficient and accurate , but also less time-consuming in comparison with previous methods for finding such groups . story_separator_special_tag in recent years , technological improvements have provided a variety of new opportunities for insurance companies to adopt telematics devices in line with usage-based insurance models . this paper sheds new light on the application of big data analytics for car insurance companies that may help to estimate the risks associated with individual policyholders based on complex driving patterns . we propose a conceptual framework that describes the structural design of a risk predictor model for insurance customers and combines the value of telematics data with deep learning algorithms . the model 's components consist of data transformation , criteria mining , risk modelling , driving style detection , and risk prediction . the expected outcome is a methodology that generates more accurate results than other methods in this area . story_separator_special_tag multiple objects may be sold by posting a schedule consisting of one price for each possible bundle and permitting the buyer to select the price-bundle pair of his choice . we identify conditions that must be satisfied by any price schedule that maximizes revenue within the class of all such schedules .
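a plain-python sketch of the dfs-based cycle detection used for fraud-ring discovery in the automobile-insurance record above ; the collision graph ( participants linked by shared accidents ) is assumed to be built already , and only one representative cycle per back edge is reported .

```python
def find_cycles(adj):
    """DFS-based detection of cycles in an undirected graph given as
    {node: set(neighbours)}; nodes are claim participants and edges are
    shared accidents, so cycles mark the 'suspicious components'."""
    cycles, visited = [], set()

    def dfs(node, parent, path):
        visited.add(node)
        for nxt in adj[node]:
            if nxt == parent:
                continue
            if nxt in path:                      # back edge closes a cycle
                cycles.append(path[path.index(nxt):])
            elif nxt not in visited:
                dfs(nxt, node, path + [nxt])

    for start in adj:
        if start not in visited:
            dfs(start, None, [start])
    return cycles

print(find_cycles({1: {2, 3}, 2: {1, 3}, 3: {1, 2}, 4: {5}, 5: {4}}))
# one fraud-candidate ring [1, 2, 3] (order may vary); no cycle through 4-5
```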
we then provide conditions under which a price schedule maximizes expected revenue within the class of all incentive compatible and individually rational mechanisms in the n-object case . we use these results to characterize environments , mainly distributions of valuations , where bundling is the optimal mechanism in the two and three good cases . story_separator_special_tag we solve for the optimal mechanism for selling two goods when the buyer 's demand characteristics are unobservable . in the case of substitutable goods , the seller has an incentive to offer lotteries over goods in order to charge the buyers with large differences in the valuations a higher price for obtaining their desired good with certainty . however , the seller also has a countervailing incentive to make the allocation of the goods among the participating buyers more efficient in order to increase the overall demand . in the case when the buyer can consume both goods , the seller has an incentive to underprovide one of the goods in order to charge the buyers with large valuations a higher price for the bundle of both goods . as in the case of substitutable goods , the seller also has a countervailing incentive to lower the price of the bundle in order to increase the overall demand . story_separator_special_tag we obtain a characterization of feasible , bayesian , multi-item multi-bidder auctions with independent , additive bidders as distributions over hierarchical mechanisms . combined with cyclic-monotonicity our results provide a complete characterization of feasible , bayesian incentive compatible ( bic ) auctions for this setting . our characterization is enabled by a novel , constructive proof of border 's theorem , and a new generalization of this theorem to independent ( but not necessarily iid ) bidders . for one item and independent bidders , we show that any feasible reduced form auction can be implemented as a distribution over hierarchical mechanisms . we also give a polytime algorithm for determining feasibility of a reduced form , or finding a separation hyperplane from feasible reduced forms . finally , we provide polytime algorithms to find and exactly sample from a distribution over hierarchical mechanisms consistent with a given feasible reduced form . our results generalize to multi-item reduced forms for independent , additive bidders . for multiple items , additive bidders with hard demand constraints , and arbitrary value correlation across items or bidders , we give a proper generalization of border 's theorem , and characterize feasible reduced forms story_separator_special_tag designing an auction that maximizes expected revenue is an intricate task . despite major efforts , only the single-item case is fully understood . we explore the use of tools from deep learning on this topic . the design objective is revenue optimal , dominant-strategy incentive compatible auctions . for a baseline , we show that multi-layer neural networks can learn almost-optimal auctions for a variety of settings for which there are analytical solutions , and even without encoding characterization results into the design of the network . our research also demonstrates the potential that deep nets have for deriving auctions with high revenue for poorly understood problems . a fundamental result in auction theory is the characterization of revenue optimal auctions as virtual value maximizers [ 21 ] .
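a worked example of the virtual-value characterization just cited : phi ( v ) = v - ( 1 - F ( v ) ) / f ( v ) , with F the value cdf and f its density , so for i.i.d . uniform [ 0 , 1 ] values the optimal reserve solves phi ( r ) = 0 , giving r = 1/2 .

```python
from scipy.optimize import brentq

def virtual_value(v, F, f):
    """Myerson virtual value phi(v) = v - (1 - F(v)) / f(v)."""
    return v - (1.0 - F(v)) / f(v)

# Uniform[0, 1] values: F(v) = v, f(v) = 1, so phi(v) = 2v - 1.
F, f = (lambda v: v), (lambda v: 1.0)
reserve = brentq(lambda v: virtual_value(v, F, f), 1e-9, 1 - 1e-9)
print(reserve)  # 0.5 -- a second-price auction with this reserve is optimal
```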
we know , for example , that second price auctions with a suitably chosen reserve price are optimal when selling to bidders with i.i.d . values , and how to prioritize one bidder over another in settings with bidder asymmetry . myerson 's theory is as rare as it is beautiful . in a single item auction , a bidder 's type is a single number ( her value for story_separator_special_tag the design of revenue-maximizing auctions for settings with private budgets is a hard task . even the single-item case is not fully understood , and there are no analytical results for optimal , dominant-strategy incentive compatible , two-item auctions . in this work , we model a mechanism as a neural network , and use machine learning for the automated design of optimal auctions . we extend the regretnet framework [ deep-auction ] to handle private budget constraints and bayesian incentive compatibility . we discover new auctions with very close approximations to incentive-compatibility and high revenue for multi-unit auctions with private budgets , including problems with unit-demand bidders . for benchmarking purposes , we also illustrate that regretnet can obtain essentially optimal designs for simpler settings where analytical solutions are available [ che2000 , malakhov2008 , pai2014 ] . story_separator_special_tag we explore an approach to designing false-name-proof auction mechanisms using deep learning . while multi-agent systems researchers have recently proposed data-driven approaches to automatically designing auction mechanisms through deep learning , false-name-proofness , which generalizes strategy-proofness by assuming that a bidder can submit multiple bids under fictitious identifiers , has not been taken into account as a property that a mechanism has to satisfy . we extend the regretnet neural network architecture to incorporate false-name-proof constraints and then conduct experiments demonstrating that the generated mechanisms satisfy false-name-proofness . story_separator_special_tag blockchain has recently been applied in many applications such as bitcoin , smart grid , and internet of things ( iot ) as a public ledger of transactions . however , the use of blockchain in mobile environments is still limited because the mining process consumes too much computing and energy resources on mobile devices . edge computing offered by the edge computing service provider ( ecsp ) can be adopted as a viable solution for offloading the mining tasks from the mobile devices , i.e. , miners , in the mobile blockchain environment . however , a mechanism for edge resource allocation to maximize the revenue for the ecsp and to ensure incentive compatibility and individual rationality is still open . in this paper , we develop an optimal auction based on deep learning for the edge resource allocation . specifically , we construct a multi-layer neural network architecture based on an analytical solution of the optimal auction . the neural networks first perform monotone transformations of the miners ' bids . then , they calculate allocation and conditional payment rules for the miners . we use valuations of the miners as the training data to adjust parameters of story_separator_special_tag designing an incentive compatible auction that maximizes expected revenue is an intricate task . the single-item case was resolved in a seminal piece of work by myerson in 1981. even after 30-40 years of intense research the problem remains unsolved for settings with two or more items .
in this work , we initiate the exploration of the use of tools from deep learning for the automated design of optimal auctions . we model an auction as a multi-layer neural network , frame optimal auction design as a constrained learning problem , and show how it can be solved using standard machine learning pipelines . we prove generalization bounds and present extensive experiments , recovering essentially all known analytical solutions for multi-item settings , and obtaining novel mechanisms for settings in which the optimal mechanism is unknown . story_separator_special_tag fraudulent credit card transactions are still one of the problems facing companies and the banking sector ; they cause losses of billions of dollars every year . the design of efficient algorithms is one of the most important challenges in this area . this paper proposes an efficient approach that automatically detects credit card fraud related to insurance companies using a deep learning algorithm called autoencoders . the effectiveness of the proposed method has been proved in identifying fraud in actual data from transactions made by credit cards in september 2013 by european cardholders . in addition , this paper provides a solution for data imbalance , which affects most current algorithms . the suggested solution relies on training the autoencoder to reconstruct normal data . anomalies are detected by defining a reconstruction error threshold and treating cases that exceed it as anomalies . the algorithm detected fraudulent transactions at rates of 64 % at threshold = 5 , 79 % at threshold = 3 and 91 % at threshold = 0.7 , better than logistic regression ( 57 % ) on the unbalanced dataset . story_separator_special_tag most of the current anti money laundering ( aml ) systems , using handcrafted rules , are heavily reliant on existing structured databases , which are not capable of effectively and efficiently identifying hidden and complex ml activities , especially those with dynamic and time-varying characteristics , resulting in a high percentage of false positives . therefore , analysts are engaged for further investigation which significantly increases human capital cost and processing time . to alleviate these issues , this paper presents a novel framework for the next generation aml by applying and visualizing deep learning-driven natural language processing ( nlp ) technologies in a distributed and scalable manner to augment aml monitoring and investigation . the proposed distributed framework performs news and tweet sentiment analysis , entity recognition , relation extraction , entity linking and link analysis on different data sources ( e.g . news articles and tweets ) to provide additional evidence to human investigators for final decision-making . each nlp module is evaluated on a task-specific data set , and the overall experiments are performed on synthetic and real-world datasets . feedback from aml practitioners suggests that our system can reduce time and cost by approximately 30 % story_separator_special_tag a positive slope of the yield curve is associated with a future increase in real economic activity : consumption ( nondurables plus services ) , consumer durables , and investment . it has extra predictive power over the index of leading indicators , real short-term interest rates , lagged growth in economic activity , and lagged rates of inflation .
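a minimal pytorch sketch of the reconstruction-error thresholding described in the credit-card fraud record above : an autoencoder trained ( with mse ) on normal transactions only , with samples whose reconstruction error exceeds a threshold flagged as fraud ; layer sizes are hypothetical .

```python
import torch
import torch.nn as nn

class AE(nn.Module):
    def __init__(self, n_features=30):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(n_features, 14), nn.ReLU(),
                                 nn.Linear(14, 7))
        self.dec = nn.Sequential(nn.Linear(7, 14), nn.ReLU(),
                                 nn.Linear(14, n_features))

    def forward(self, x):
        return self.dec(self.enc(x))

def flag_anomalies(model, x, threshold):
    """Per-sample reconstruction MSE; errors above the threshold
    (the paper sweeps roughly the 0.7-5 range) are flagged as fraud."""
    with torch.no_grad():
        err = ((model(x) - x) ** 2).mean(dim=1)
    return err > threshold

flags = flag_anomalies(AE(), torch.randn(16, 30), threshold=0.7)
```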
it outperforms survey forecasts , both in-sample and out-of-sample . historically , the information in the slope reflected , inter alia , factors that were independent of monetary policy and , thus , the slope could have provided useful information both to private investors and to policymakers . story_separator_special_tag economic policymaking relies upon accurate forecasts of economic conditions . current methods for unconditional forecasting are dominated by inherently linear models that exhibit model dependence and have high data demands . we explore deep neural networks as an opportunity to improve upon forecast accuracy with limited data and while remaining agnostic as to functional form . we focus on predicting civilian unemployment using models based on four different neural network architectures . each of these models outperforms benchmark models at short time horizons . one model , based on an encoder-decoder architecture , outperforms benchmark models at every forecast horizon ( up to four quarters ) . story_separator_special_tag an artificial neural network ( hereafter , ann ) is an information processing paradigm that is inspired by the way biological nervous systems , such as the brain , process information . in the previous two decades , ann applications in economics and finance , for such tasks as pattern recognition and time series forecasting , have dramatically increased . many central banks use forecasting models based on ann methodology for predicting various macroeconomic indicators , like inflation , gdp growth and currency in circulation etc . in this paper , we have attempted to forecast monthly yoy inflation for pakistan by using ann for fy08 on the basis of monthly data of july 1993 to june 2007. we also compare the forecast performance of the ann model with conventional univariate time series forecasting models such as ar ( 1 ) and arima based models and observed that the rmse of ann based forecasts is much less than the rmse of forecasts based on ar ( 1 ) and arima models . at least by this criterion , forecasts based on ann are more precise . story_separator_special_tag we show how one can use deep neural networks with macro-economic data in conjunction with price-volume data in a walk-forward setting to do tactical asset allocation . low cost publicly traded etfs corresponding to major asset classes ( equities , fixed income , real estate ) and geographies ( us , ex-us developed , emerging ) are used as proxies for asset classes and for back-testing performance . we take dropout as a bayesian approximation to obtain prediction uncertainty and show it often deviates significantly from other measures of uncertainty such as volatility . we propose two very different ways of portfolio construction - one based on expected returns and uncertainty and the other which obtains allocations as part of the neural network and optimizes a custom utility function such as portfolio sharpe . we also find that adding a layer of error correction helps reduce drawdown significantly during the 2008 financial crisis . finally , we compare results to risk parity and show that the above deep learning strategies trained in a totally walk-forward manner have comparable performance . story_separator_special_tag in financial risk , credit risk management is one of the most important issues in financial decision-making .
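a sketch of the dropout-as-bayesian-approximation idea used for prediction uncertainty in the tactical asset allocation record above : dropout is kept active at inference , and the spread of sampled predictions serves as a model-uncertainty estimate distinct from realised volatility ; the network shape is hypothetical .

```python
import torch
import torch.nn as nn

net = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Dropout(0.2),
                    nn.Linear(64, 1))   # predicts a next-period return

def mc_dropout_predict(net, x, n_samples=100):
    """Monte Carlo dropout: the net stays in train mode so dropout is
    active, and each forward pass samples a different thinned network;
    the mean is the prediction, the std an uncertainty estimate."""
    net.train()
    with torch.no_grad():
        preds = torch.stack([net(x) for _ in range(n_samples)])
    return preds.mean(0), preds.std(0)

mean, std = mc_dropout_predict(net, torch.randn(1, 20))
```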
reliable credit scoring models are crucial for financial agencies to evaluate credit applications and have been widely studied in the field of machine learning and statistics . deep learning is a powerful classification tool which is currently an active research area and successfully solves classification problems in many domains . deep learning provides training stability , generalization , and scalability with big data . deep learning is quickly becoming the algorithm of choice for the highest predictive accuracy . feature selection is a process of selecting a subset of relevant features , which can decrease the dimensionality , reduce the running time , and improve the accuracy of classifiers . in this study , we constructed a credit scoring model based on deep learning and feature selection to evaluate the applicant 's credit score from the applicant 's input features . two public datasets , the australian and german credit datasets , have been used to test our method . the experimental results of the real world data showed that the proposed method results in a higher prediction rate than a baseline method story_separator_special_tag this paper analyzes multi-period mortgage risk at loan and pool levels using an unprecedented dataset of over 120 million prime and subprime mortgages originated across the united states between 1995 and 2014 , which includes the individual characteristics of each loan , monthly updates on loan performance over the life of a loan , and a number of time-varying economic variables at the zip code level . we develop , estimate , and test dynamic machine learning models for mortgage prepayment , delinquency , and foreclosure which capture loan-to-loan correlation due to geographic proximity and exposure to common risk factors . the basic building block is a deep neural network which addresses the nonlinear relationship between the explanatory variables and loan performance . our likelihood estimators , which are based on 3.5 billion borrower-month observations , indicate that mortgage risk is strongly influenced by local economic factors such as zip-code level foreclosure rates . the out-of-sample predictive performance of our deep learning model is a significant improvement over linear models such as logistic regression . model parameters are estimated using gpu parallel computing due to the computational challenges associated with the large amount of data . the deep learning model story_separator_special_tag the aim of this paper is to lay out deep investment techniques in financial markets using deep learning models . financial prediction problems usually involve a huge variety of datasets with complex data interactions , which makes it difficult to design an economic model . applying deep learning models to such problems can exploit potentially non-linear patterns in data . in this paper the author introduces deep learning hierarchical decision models for prediction analysis and better decision making for financial domain problems such as pricing securities , risk factor analysis and portfolio selection . section 3 includes the architecture as well as details on training a financial domain deep learning neural network . it further lays out different models such as lstm , auto-encoding , smart indexing , and credit risk analysis models for solving the complex data interactions . the experiments along with their results show how these models can be useful in deep investments for financial domain problems .
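a hedged scikit-learn sketch of the feature-selection-plus-classifier pipeline from the credit scoring record above , evaluated on synthetic data shaped like the german credit set ; the paper uses a deep model , for which an mlp stands in here .

```python
from sklearn.datasets import make_classification
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import cross_val_score

# synthetic stand-in for applicant features and good/bad credit labels
X, y = make_classification(n_samples=1000, n_features=24, random_state=0)

clf = make_pipeline(
    StandardScaler(),
    SelectKBest(f_classif, k=15),    # keep the 15 most relevant features
    MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500),
)
print(cross_val_score(clf, X, y, cv=10).mean())  # vs. a baseline model
```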
story_separator_special_tag this paper presents the random neural network in a deep learning cluster structure with a new learning algorithm based on genetics according to the genome model , where information is transmitted in the combination of genes rather than the genes themselves . the proposed genetic model transmits information to future generations in the network weights rather than the neurons . the innovative genetic algorithm is implanted in a complex deep learning structure that emulates the human brain : reinforcement learning takes fast local current decisions , deep learning clusters provide identity and memory , deep learning management clusters take final strategic decisions and finally genetic learning transmits the information learned to future generations . this proposed structure has been applied and validated in fintech ; a smart investment application : an intelligent banker that performs buy and sell decisions on several assets with an associated market and risk . our results are promising ; we have connected the human brain and genetics with machine learning based on the random neural network model , where biology , similarly to artificial intelligence , learns gradually and continuously while adapting to the environment . story_separator_special_tag we propose a nonparametric method for estimating derivative financial asset pricing formulae using learning networks . to demonstrate feasibility , we first simulate black-scholes option prices and show that learning networks can recover the black-scholes formula from a two-year training set of daily options prices , and that the resulting network formula can be used successfully to both price and delta-hedge options out-of-sample . for comparison , we estimate models using four popular methods : ordinary least squares , radial basis functions , multilayer perceptrons , and projection pursuit . to illustrate practical relevance , we also apply our approach to s & p 500 futures options data from 1987 to 1991 . story_separator_special_tag customer behavior analysis is an essential issue for retailers , allowing for optimized store performance , enhanced customer experience , reduced operational costs , and consequently higher profitability . nevertheless , not much attention has been given to computer vision approaches to automatically extract relevant information from images that could be of great value to retailers . in this paper , we present a low-cost deep learning approach to estimate the number of people in retail stores in real-time and to detect and visualize hot spots . for this purpose , only an inexpensive rgb camera , such as a surveillance camera , is required . to solve the people counting problem , we employ a supervised learning approach based on a convolutional neural network ( cnn ) regression model . we also present a four channel image representation named rgbp image , composed of the conventional rgb image and an extra binary image p representing whether there is a visible person in each pixel of the image . to extract the latter information , we developed a foreground/background detection method that considers the peculiarities of people behavior in retail stores . the p image is also exploited to detect story_separator_special_tag retail food packages contain various types of information such as food name , ingredients list and use-by dates .
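the learning-networks record above fits networks to simulated black-scholes prices ; a sketch of generating such a training set from the closed-form call price , with the strike normalised to one .

```python
import numpy as np
from scipy.stats import norm

def bs_call(S, K, T, r, sigma):
    """Black-Scholes European call price."""
    d1 = (np.log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * np.sqrt(T))
    d2 = d1 - sigma * np.sqrt(T)
    return S * norm.cdf(d1) - K * np.exp(-r * T) * norm.cdf(d2)

# training pairs (moneyness, maturity) -> price, the inputs from which a
# network should recover the pricing (and delta-hedging) map out-of-sample
rng = np.random.default_rng(0)
m = rng.uniform(0.8, 1.2, 10_000)        # S / K
T = rng.uniform(0.05, 1.0, 10_000)       # time to maturity in years
y = bs_call(m, 1.0, T, r=0.05, sigma=0.2)
```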
such information is critical to ensure proper distribution of products to the market and eliminate health risks due to erroneous mislabelling . the latter is considerably detrimental to both consumers and suppliers alike . in this paper , an adaptable deep learning based system is proposed and tested across various possible scenarios : a ) for the identification of blurry images and/or missing information from food packaging photos . these were captured during the validation process in supply chains ; b ) for deep neural network adaptation . this was achieved through a novel methodology that utilises facets of the same convolutional neural network architecture . latent variables were extracted from different datasets and used as input into a k-means clustering and k-nearest neighbour classification algorithm , to compute a new set of centroids which better adapts to the target dataset 's distribution . furthermore , visualisation and analysis of network adaptation provides insight into how higher accuracy was achieved when compared to the original deep neural network . the proposed system performed very well in the conducted experiments , story_separator_special_tag to tackle the complex problem of providing business intelligence solutions based on business data , bioinspired deep learning has to be considered . this paper focuses on the application of artificial metaplasticity learning in business intelligence systems as an alternative paradigm of achieving a deeper information extraction and learning from arbitrary size data sets . as a case study , an artificial metaplasticity multilayer perceptron applied to the automation of credit approval decisions based on collected client data is analyzed , showing its potential and improvements over the state-of-the-art techniques . this paper successfully introduces the relevant novelty that the artificial neural network itself estimates the pdf of the input data to be used in the metaplasticity learning , so it is much closer to the biological reality than previous implementations of artificial metaplasticity . story_separator_special_tag in this paper , we propose binet , a neural network architecture for real-time multivariate anomaly detection in business process event logs . binet has been designed to handle both the control flow and the data perspective of a business process . additionally , we propose a heuristic for setting the threshold of an anomaly detection algorithm automatically . we demonstrate that binet can be used to detect anomalies in event logs not only on a case level , but also on event attribute level . we compare binet to 6 other state-of-the-art anomaly detection algorithms and evaluate their performance on an elaborate data corpus of 60 synthetic and 21 real life event logs using artificial anomalies . binet reached an average f1 score over all detection levels of 0.83 , whereas the next best approach , a denoising autoencoder , reached only 0.74. this f1 score is calculated over two different levels of detection , namely case and attribute level . binet reached 0.84 on case and 0.82 on attribute level , whereas the next best approach reached 0.78 and 0.71 respectively . story_separator_special_tag this paper investigates the credit scoring accuracy of five neural network models : multilayer perceptron , mixture-of-experts , radial basis function , learning vector quantization , and fuzzy adaptive resonance .
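one plausible reading of the adaptation step in the food-packaging record above , sketched with scikit-learn : latent cnn features are assumed to be extracted already , k-means recomputes centroids on the unlabelled target data , and a k-nearest-neighbour classifier fitted on labelled source latents labels them .

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.neighbors import KNeighborsClassifier

def adapt_and_classify(latent_src, labels_src, latent_tgt, k=10):
    """Recompute centroids on target latents with k-means, then label
    them with a k-NN classifier fitted on source latents; the CNN
    feature extractor is assumed fixed and shared between domains."""
    km = KMeans(n_clusters=k, n_init=10).fit(latent_tgt)
    knn = KNeighborsClassifier(n_neighbors=5).fit(latent_src, labels_src)
    return km.cluster_centers_, knn.predict(km.cluster_centers_)

rng = np.random.default_rng(0)
centers, labels = adapt_and_classify(rng.normal(size=(200, 32)),
                                     rng.integers(0, 3, 200),
                                     rng.normal(size=(100, 32)), k=3)
```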
the neural network credit scoring models are tested using 10-fold cross-validation with two real world data sets . results are benchmarked against more traditional methods under consideration for commercial applications including linear discriminant analysis , logistic regression , k nearest neighbor , kernel density estimation , and decision trees . results demonstrate that the multilayer perceptron may not be the most accurate neural network model , and that both the mixture-of-experts and radial basis function neural network models should be considered for credit scoring applications . logistic regression is found to be the most accurate of the traditional methods . scope and purpose : in the last few decades quantitative methods known as credit scoring models have been developed for the credit granting decision . the objective of quantitative credit scoring models is to assign credit applicants to one of two groups : a good credit group that is likely to repay the financial obligation , or a bad credit group that should be denied credit because of story_separator_special_tag speed and scalability are two essential issues in data mining and knowledge discovery . this paper proposed a mathematical programming model that addresses these two issues and applied the model to credit classification problems . the proposed multi-criteria convex quadratic programming ( mcqp ) model is highly efficient ( computing time complexity o ( n^{1.5-2} ) ) and scalable to massive problems ( size of o ( 10^9 ) ) because it only needs to solve linear equations to find the global optimal solution . kernel functions were introduced to the model to solve nonlinear problems . in addition , the theoretical relationship between the proposed mcqp model and svm was discussed . story_separator_special_tag this study proposes an intelligent hybrid trading system for discovering technical trading rules . this study deals with the optimization problem of data discretization and reducts . rough set analysis is adopted to represent trading rules . a genetic algorithm is used to discover optimal and sub-optimal trading rules . to evaluate the proposed system , a sliding window method is applied . discovering intelligent technical trading rules from nonlinear and complex stock market data , and then developing decision support trading systems , is an important challenge . the objective of this study is to develop an intelligent hybrid trading system for discovering technical trading rules using rough set analysis and a genetic algorithm ( ga ) . in order to obtain better trading decisions , a novel rule discovery mechanism using a ga approach is proposed for solving optimization problems ( i.e. , data discretization and reducts ) of rough set analysis when discovering technical trading rules for the futures market . experiments are designed to test the proposed model against comparable approaches ( i.e. , random , correlation , and ga approaches ) . in addition , these comprehensive experiments cover most of the current trading system topics , including the use of a story_separator_special_tag stock trading strategy plays a crucial role in investment companies . however , it is challenging to obtain an optimal strategy in the complex and dynamic stock market . we explore the potential of deep reinforcement learning to optimize stock trading strategy and thus maximize investment return . 30 stocks are selected as our trading stocks and their daily prices are used as the training and trading market environment .
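the credit scoring comparison above benchmarks neural models against traditional classifiers with 10-fold cross-validation ; a compact scikit-learn version of that protocol on synthetic stand-in data .

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
models = {
    "logistic regression": LogisticRegression(max_iter=1000),
    "k nearest neighbor": KNeighborsClassifier(),
    "decision tree": DecisionTreeClassifier(),
    "multilayer perceptron": MLPClassifier(max_iter=500),
}
for name, m in models.items():
    print(name, cross_val_score(m, X, y, cv=10).mean())
```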
we train a deep reinforcement learning agent and obtain an adaptive trading strategy . the agent 's performance is evaluated and compared with the dow jones industrial average and the traditional min-variance portfolio allocation strategy . the proposed deep reinforcement learning approach is shown to outperform the two baselines in terms of both the sharpe ratio and cumulative returns . story_separator_special_tag portfolio allocation is crucial for investment companies . however , getting the best strategy in a complex and dynamic stock market is challenging . in this paper , we propose a novel adaptive deep deterministic reinforcement learning scheme ( adaptive ddpg ) for the portfolio allocation task , which incorporates optimistic or pessimistic deep reinforcement learning that is reflected in the influence from prediction errors . dow jones 30 component stocks are selected as our trading stocks and their daily prices are used as the training and testing data . we train the adaptive ddpg agent and obtain a trading strategy . the adaptive ddpg 's performance is compared with the vanilla ddpg , the dow jones industrial average index and the traditional min-variance and mean-variance portfolio allocation strategies . adaptive ddpg outperforms the baselines in terms of the investment return and the sharpe ratio . story_separator_special_tag the stock market plays a major role in the entire financial market . how to obtain effective trading signals in the stock market is a long-discussed topic . this paper first reviews deep reinforcement learning theory and models , validates the models through empirical data , and compares the benefits of three classical deep reinforcement learning models . from the perspective of the automated stock market investment transaction decision-making mechanism , deep reinforcement learning models provide a useful reference for the construction of automated investment models and stock market investment strategies , the application of artificial intelligence in the field of financial investment , and the improvement of investors ' strategy yields . story_separator_special_tag in this paper , we implement three state-of-the-art continuous reinforcement learning algorithms , deep deterministic policy gradient ( ddpg ) , proximal policy optimization ( ppo ) and policy gradient ( pg ) in portfolio management . all of them are widely used in game playing and robot control . what 's more , ppo has appealing theoretical properties which make it a promising candidate for portfolio management . we present their performances under different settings , including different learning rates , objective functions and feature combinations , in order to provide insights for parameter tuning , feature selection and data preparation . we also conduct intensive experiments on the chinese stock market and show that pg is more desirable in financial markets than ddpg and ppo , although both of them are more advanced . what 's more , we propose a so-called adversarial training method and show that it can greatly improve the training efficiency and significantly promote average daily return and sharpe ratio in back tests . based on this new modification , our experimental results show that our agent based on policy gradient can outperform ucrp . story_separator_special_tag portfolio management is the decision-making process of allocating an amount of funds into different financial investment products .
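the trading records above evaluate agents by sharpe ratio and cumulative return ; the standard computations from a daily-return series , with a placeholder series standing in for an agent 's returns .

```python
import numpy as np

def cumulative_return(daily_returns):
    """Total compounded return over the series."""
    return np.prod(1.0 + np.asarray(daily_returns)) - 1.0

def sharpe_ratio(daily_returns, risk_free=0.0, periods=252):
    """Annualised Sharpe ratio from daily returns."""
    r = np.asarray(daily_returns) - risk_free / periods
    return np.sqrt(periods) * r.mean() / r.std()

agent = np.random.default_rng(0).normal(5e-4, 0.01, 252)  # placeholder
print(cumulative_return(agent), sharpe_ratio(agent))
```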
cryptocurrencies are electronic and decentralized alternatives to government-issued money , with bitcoin as the best-known example of a cryptocurrency . this paper presents a model-less convolutional neural network with historic prices of a set of financial assets as its input , outputting portfolio weights of the set . the network is trained with 0.7 years ' price data from a cryptocurrency exchange . the training is done in a reinforcement manner , maximizing the accumulative return , which is regarded as the reward function of the network . back-test trading experiments with a trading period of 30 minutes are conducted in the same market , achieving 10-fold returns over 1.8-month periods . some recently published portfolio selection strategies are also used to perform the same back tests , whose results are compared with the neural network . the network is not limited to cryptocurrency , but can be applied to other financial markets . story_separator_special_tag dynamic portfolio optimization is the process of sequentially allocating wealth to a collection of assets in some consecutive trading periods , based on investors ' return-risk profile . automating this process with machine learning remains a challenging problem . here , we design a deep reinforcement learning ( rl ) architecture with an autonomous trading agent such that investment decisions and actions are made periodically , based on a global objective , with autonomy . in particular , without relying on a purely model-free rl agent , we train our trading agent using a novel rl architecture consisting of an infused prediction module ( ipm ) , a generative adversarial data augmentation module ( dam ) and a behavior cloning module ( bcm ) . our model-based approach works with both on-policy and off-policy rl algorithms . we further design the back-testing and execution engine which interacts with the rl agent in real time . using historical real financial market data , we simulate trading with practical constraints , and demonstrate that our proposed model is robust , profitable and risk-sensitive , as compared to baseline trading strategies and model-free rl agents from prior work . story_separator_special_tag recommendation is crucial in both academia and industry , and various techniques are proposed such as content-based collaborative filtering , matrix factorization , logistic regression , factorization machines , neural networks and multi-armed bandits . however , most of the previous studies suffer from two limitations : ( 1 ) considering the recommendation as a static procedure and ignoring the dynamic interactive nature between users and the recommender systems , ( 2 ) focusing on the immediate feedback of recommended items and neglecting the long-term rewards . to address the two limitations , in this paper we propose a novel recommendation framework based on deep reinforcement learning , called drr . the drr framework treats recommendation as a sequential decision making procedure and adopts an `` actor-critic '' reinforcement learning scheme to model the interactions between the users and recommender systems , which can consider both the dynamic adaptation and long-term rewards . furthermore , a state representation module is incorporated into drr , which can explicitly capture the interactions between items and users . three instantiation structures are developed . extensive experiments on four real-world datasets are conducted under both the offline and online evaluation settings .
the experimental results demonstrate the effectiveness of the proposed framework . story_separator_special_tag bidding optimization is one of the most critical problems in online advertising . the sponsored search ( ss ) auction , due to the randomness of user query behavior and platform nature , usually adopts keyword-level bidding strategies . in contrast , display advertising ( da ) , as a relatively simpler scenario for auction , has taken advantage of real-time bidding ( rtb ) to boost the performance for advertisers . in this paper , we consider the rtb problem in sponsored search auction , named ss-rtb . ss-rtb has a much more complex dynamic environment , due to stochastic user query behavior and more complex bidding policies based on multiple keywords of an ad . most previous methods for da cannot be applied . we propose a reinforcement learning ( rl ) solution for handling the complex dynamic environment . although some rl methods have been proposed for online advertising , they all fail to address the `` environment changing '' problem : the state transition probabilities vary between two days . motivated by the observation that auction sequences of two days share similar transition patterns at a proper aggregation level , we formulate a robust mdp story_separator_special_tag in this paper we present an end-to-end framework for addressing the problem of dynamic pricing on an e-commerce platform using methods based on deep reinforcement learning ( drl ) . by using four groups of different business data to represent the states of each time period , we model the dynamic pricing problem as a markov decision process ( mdp ) . compared with the state-of-the-art drl-based dynamic pricing algorithms , our approaches make the following three contributions . first , we extend the discrete set problem to the continuous price set . second , instead of using revenue as the reward function directly , we define a new function named difference of revenue conversion rates ( drcr ) . third , the cold-start problem of mdp is tackled by pre-training and evaluation using some carefully chosen historical sales data . our approaches are evaluated by both an offline evaluation method using a real dataset from alibaba inc. and online field experiments on a major online shopping website owned by alibaba inc. in particular , experiment results suggest that drcr is a more appropriate reward function than revenue , which is widely used by current literature . in story_separator_special_tag in this paper , we propose a novel deep reinforcement learning framework for news recommendation . online personalized news recommendation is a highly challenging problem due to the dynamic nature of news features and user preferences . although some online recommendation models have been proposed to address the dynamic nature of news recommendation , these methods have three major issues . first , they only try to model current reward ( e.g. , click through rate ) . second , very few studies consider using user feedback other than click / no click labels ( e.g. , how frequently a user returns ) to help improve recommendation . third , these methods tend to keep recommending similar news to users , which may cause users to get bored . therefore , to address the aforementioned challenges , we propose a deep q-learning based recommendation framework , which can model future reward explicitly . we further consider user return pattern as a supplement to click / no click label in order to capture more user feedback information .
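the core target computation behind the deep q-learning recommendation framework just described , sketched in pytorch ; folding the user-return signal in as a bonus reward term is an assumption about how it combines with the click reward .

```python
import torch

def dqn_target(reward, next_q, gamma=0.9, user_return_bonus=0.0):
    """One-step Q-learning target y = r + gamma * max_a' Q(s', a'),
    which is how future reward is modelled explicitly; the user-return
    ('activeness') signal enters here as an extra reward term."""
    return reward + user_return_bonus + gamma * next_q.max(dim=-1).values

# next_q: target-network Q-values over 20 candidate news items
y = dqn_target(torch.tensor([1.0]), torch.randn(1, 20))
```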
in addition , an effective exploration strategy is incorporated to find new attractive news for users . extensive experiments are conducted on the story_separator_special_tag invited papers : is personalization all about technology ? ; adaptive linking between text and photos using common sense reasoning ; resource-adaptive interfaces to hybrid navigation systems . full papers : ubiquitous user assistance in a tourist information server ; automatic extraction of semantically-meaningful information from the web ; towards open adaptive hypermedia ; gas : group adaptive system ; tv scout : lowering the entry barrier to personalized tv program recommendation ; adaptivity , adaptability , and reading behaviour : some results from the evaluation of a dynamic hypertext system ; towards generic adaptive systems : analysis of a case study ; a methodology for developing adaptive educational-game environments ; multi-model , metadata driven approach to adaptive hypermedia services for personalized elearning ; adaptation and personalization on board cars : a framework and its application to tourist services ; adaptive authoring of adaptive educational hypermedia ; hypermedia presentation adaptation on the semantic web ; user data management and usage model acquisition in an adaptive educational collaborative environment ; personalizing assessment in adaptive educational hypermedia systems ; visual based content understanding towards web adaptation ; knowledge modeling for open adaptive hypermedia ; adaptive navigation for learners in hypermedia is scaffolded navigation ; pros and cons of controllability : an empirical study ; personis : a server for user models ; the munich reference story_separator_special_tag recent progress in artificial intelligence through reinforcement learning ( rl ) has shown great success on increasingly complex single-agent environments and two-player turn-based games . however , the real world contains multiple agents , each learning and acting independently to cooperate and compete with other agents , and environments reflecting this degree of complexity remain an open challenge . in this work , we demonstrate for the first time that an agent can achieve human-level performance in a popular 3d multiplayer first-person video game , quake iii arena capture the flag , using only pixels and game points as input . these results were achieved by a novel two-tier optimisation process in which a population of independent rl agents are trained concurrently from thousands of parallel matches with agents playing in teams together and against each other on randomly generated environments . each agent in the population learns its own internal reward signal to complement the sparse delayed reward from winning , and selects actions using a novel temporally hierarchical representation that enables the agent to reason at multiple timescales . during game-play , these agents display human-like behaviours such as navigating , following , and defending based on a rich learned story_separator_special_tag this is the fourth volume of the paris-princeton lectures in mathematical finance . the goal of this series is to publish cutting-edge research in self-contained articles prepared by established academics or promising young researchers invited by the editors . contributions are refereed and particular attention is paid to the quality of the exposition , the goal being to publish articles that can serve as introductory references for research .
the series is a result of frequent exchanges between researchers in finance and financial mathematics in paris and princeton . many of us felt that the field would benefit from timely exposés of topics in which there is important progress . rene carmona , erhan cinlar , ivar ekeland , elyes jouini , jose scheinkman and nizar touzi serve on the first editorial board of the paris-princeton lectures in financial mathematics . although many of the chapters involve lectures given in paris or princeton , we also invite other contributions . springer verlag kindly offered to host the initiative under the umbrella of the lecture notes in mathematics series , and we are thankful to catriona byrne for her encouragement and her help . this fourth volume contains five chapters . story_separator_special_tag this thesis gives a gentle introduction to mean field games . it aims to produce a coherent text that progresses from simple notions of deterministic control theory to current mean field games theory . the framework is gradually extended from single-agent stochastic control problems to multi-agent stochastic differential mean field games . the concept of nash equilibrium is introduced to define a solution of the mean field game . to achieve considerable simplifications , the number of agents is taken to infinity and the problem is formulated on the basis of mckean-vlasov theory for interacting particle systems . furthermore , the problem at infinity is solved by a variation of the stochastic maximum principle and forward-backward stochastic differential equations . finally , the aiyagari macroeconomic model in continuous time is presented using mfg techniques .
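for reference , the coupled system that " mean field game " refers to in the thesis record above , in standard lasry-lions form ( value function u , population density m , hamiltonian H , viscosity nu ; notation assumed ) :

```latex
% backward Hamilton-Jacobi-Bellman equation for the representative
% agent's value u, coupled with a forward Fokker-Planck equation for
% the population density m (the mean field):
\begin{aligned}
  -\partial_t u - \nu \Delta u + H(x, \nabla u) &= f(x, m) , \\
  \partial_t m - \nu \Delta m - \operatorname{div}\!\big( m \, \partial_p H(x, \nabla u) \big) &= 0 , \\
  u(T, x) = g(x, m_T) , \qquad m(0, \cdot) &= m_0 .
\end{aligned}
```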
the rapid uptake of mobile devices and the rising popularity of mobile applications and services pose unprecedented demands on mobile and wireless networking infrastructure . upcoming 5g systems are evolving to support exploding mobile traffic volumes , real-time extraction of fine-grained analytics , and agile management of network resources , so as to maximize user experience . fulfilling these tasks is challenging , as mobile environments are increasingly complex , heterogeneous , and evolving . one potential solution is to resort to advanced machine learning techniques , in order to help manage the rise in data volumes and algorithm-driven applications . the recent success of deep learning underpins new and powerful tools that tackle problems in this space . in this paper , we bridge the gap between deep learning and mobile and wireless networking research , by presenting a comprehensive survey of the crossovers between the two areas . we first briefly introduce essential background and state-of-the-art in deep learning techniques with potential applications to networking . we then discuss several techniques and platforms that facilitate the efficient deployment of deep learning onto mobile systems . subsequently , we provide an encyclopedic review of mobile and wireless networking research story_separator_special_tag the internet of things ( iot ) is expected to require more effective and efficient wireless communications than ever before . for this reason , techniques such as spectrum sharing , dynamic spectrum access , extraction of signal intelligence and optimized routing will soon become essential components of the iot wireless communication paradigm . in this vision , iot devices must be able to not only learn to autonomously extract spectrum knowledge on-the-fly from the network but also leverage such knowledge to dynamically change appropriate wireless parameters ( e.g . , frequency band , symbol modulation , coding rate , route selection , etc . ) to reach the network 's optimal operating point . given that the majority of the iot will be composed of tiny , mobile , and energy-constrained devices , traditional techniques based on a priori network optimization may not be suitable , since ( i ) an accurate model of the environment may not be readily available in practical scenarios ; ( ii ) the computational requirements of traditional optimization techniques may prove unbearable for iot devices . to address the above challenges , much research has been devoted to exploring the use of machine learning story_separator_special_tag despite more than two decades of continuous development , learning from imbalanced data is still a focus of intense research . starting as a problem of skewed distributions of binary tasks , this topic evolved way beyond this conception . with the expansion of machine learning and data mining , combined with the arrival of the big data era , we have gained a deeper insight into the nature of imbalanced learning , while at the same time facing new emerging challenges . data-level and algorithm-level methods are constantly being improved and hybrid approaches gain increasing popularity . recent trends focus on analyzing not only the disproportion between classes , but also other difficulties embedded in the nature of data . new real-life problems motivate researchers to focus on computationally efficient , adaptive and real-time methods .
this paper aims at discussing open issues and challenges that need to be addressed to further develop the field of imbalanced learning . seven vital areas of research in this topic are identified , covering the full spectrum of learning from imbalanced data : classification , regression , clustering , data streams , big data analytics and applications , e.g. , in social media and story_separator_special_tag deep learning , as one of the most remarkable current machine learning techniques , has achieved great success in many applications such as image analysis , speech recognition and text understanding . it uses supervised and unsupervised strategies to learn multi-level representations and features in hierarchical architectures for the tasks of classification and pattern recognition . recent development in sensor networks and communication technologies has enabled the collection of big data . although big data provides great opportunities for a broad range of areas including e-commerce , industrial control and smart medicine , it poses many challenging issues on data mining and information processing due to its characteristics of large volume , large variety , large velocity and large veracity . in the past few years , deep learning has played an important role in big data analytic solutions . in this paper , we review the emerging research on deep learning models for big data feature learning . furthermore , we point out the remaining challenges of big data deep learning and discuss the future topics . story_separator_special_tag human-centered data collection is typically costly and implicates issues of privacy . various solutions have been proposed in the literature to reduce this cost , such as crowdsourced data collection , or the use of semi-supervised algorithms . however , semi-supervised algorithms require a source of unlabeled data , and crowd-sourcing methods require large numbers of active participants . an alternative passive data collection modality is fingerprint-based localization . such methods use received signal strength ( rss ) or channel state information ( csi ) in wireless sensor networks to localize users in indoor/outdoor environments . in this paper , we introduce a novel approach to reduce training data collection costs in fingerprint-based localization by using synthetic data . generative adversarial networks ( gans ) are used to learn the distribution of a limited sample of collected data and , following this , to produce synthetic data that can be used to augment the real collected data in order to increase overall positioning accuracy . experimental results on a benchmark dataset show that by applying the proposed method and using a combination of 10 % collected data and 90 % synthetic data , we can obtain essentially similar positioning accuracy to story_separator_special_tag many image-to-image translation problems are ambiguous , as a single input image may correspond to multiple possible outputs . in this work , we aim to model a distribution of possible outputs in a conditional generative modeling setting . the ambiguity of the mapping is distilled in a low-dimensional latent vector , which can be randomly sampled at test time . a generator learns to map the given input , combined with this latent code , to the output . we explicitly encourage the connection between output and the latent code to be invertible .
this helps prevent a many-to-one mapping from the latent code to the output during training , also known as the problem of mode collapse , and produces more diverse results . we explore several variants of this approach by employing different training objectives , network architectures , and methods of injecting the latent code . our proposed method encourages bijective consistency between the latent encoding and output modes . we present a systematic comparison of our method and other variants on both perceptual realism and diversity . story_separator_special_tag face recognition systems have been shown to be vulnerable to adversarial faces resulting from adding small perturbations to probe images . such adversarial images can lead state-of-the-art face matchers to falsely reject a genuine subject ( obfuscation attack ) or falsely match to an impostor ( impersonation attack ) . current approaches to crafting adversarial faces lack perceptual quality and take an unreasonable amount of time to generate them . we propose , advfaces , an automated adversarial face synthesis method that learns to generate minimal perturbations in the salient facial regions via generative adversarial networks . once advfaces is trained , a hacker can automatically generate imperceptible face perturbations that can evade four black-box state-of-the-art face matchers with attack success rates as high as 97.22 % and 24.30 % at 0.1 % false accept rate , for obfuscation and impersonation attacks , respectively . story_separator_special_tag we investigate conditional adversarial networks as a general-purpose solution to image-to-image translation problems . these networks not only learn the mapping from input image to output image , but also learn a loss function to train this mapping . this makes it possible to apply the same generic approach to problems that traditionally would require very different loss formulations . we demonstrate that this approach is effective at synthesizing photos from label maps , reconstructing objects from edge maps , and colorizing images , among other tasks . indeed , since the release of the pix2pix software associated with this paper , a large number of internet users ( many of them artists ) have posted their own experiments with our system , further demonstrating its wide applicability and ease of adoption without the need for parameter tweaking . as a community , we no longer hand-engineer our mapping functions , and this work suggests we can achieve reasonable results without hand-engineering our loss functions either . story_separator_special_tag image-to-image translation is a class of vision and graphics problems where the goal is to learn the mapping between an input image and an output image using a training set of aligned image pairs . however , for many tasks , paired training data will not be available . we present an approach for learning to translate an image from a source domain x to a target domain y in the absence of paired examples . our goal is to learn a mapping g : x → y such that the distribution of images from g ( x ) is indistinguishable from the distribution y using an adversarial loss . because this mapping is highly under-constrained , we couple it with an inverse mapping f : y → x and introduce a cycle consistency loss to push f ( g ( x ) ) ≈ x ( and vice versa ) .
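to make the cycle-consistency objective above concrete , here is a minimal pytorch-style sketch of the loss ; the two generator modules and the weight lambda_cyc are illustrative assumptions rather than the paper 's exact configuration .

```python
import torch.nn.functional as F

def cycle_consistency_loss(g_xy, f_yx, real_x, real_y, lambda_cyc=10.0):
    # forward cycle : x -> g ( x ) -> f ( g ( x ) ) should reconstruct x
    rec_x = f_yx(g_xy(real_x))
    # backward cycle : y -> f ( y ) -> g ( f ( y ) ) should reconstruct y
    rec_y = g_xy(f_yx(real_y))
    # l1 reconstruction error on both cycles , weighted by lambda_cyc
    return lambda_cyc * (F.l1_loss(rec_x, real_x) + F.l1_loss(rec_y, real_y))
```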
qualitative results are presented on several tasks where paired training data does not exist , including collection style transfer , object transfiguration , season transfer , photo enhancement , etc . quantitative comparisons against several prior methods demonstrate the superiority of our approach . story_separator_special_tag conditional generative adversarial networks ( gans ) for cross-domain image-to-image translation have made much progress recently [ 7 , 8 , 21 , 12 , 4 , 18 ] . depending on the task complexity , thousands to millions of labeled image pairs are needed to train a conditional gan . however , human labeling is expensive , even impractical , and large quantities of data may not always be available . inspired by dual learning from natural language translation [ 23 ] , we develop a novel dual-gan mechanism , which enables image translators to be trained from two sets of unlabeled images from two domains . in our architecture , the primal gan learns to translate images from domain u to those in domain v , while the dual gan learns to invert the task . the closed loop made by the primal and dual tasks allows images from either domain to be translated and then reconstructed . hence a loss function that accounts for the reconstruction error of images can be used to train the translators . experiments on multiple image translation tasks with unlabeled data show considerable performance gain of dualgan over a single gan . story_separator_special_tag procedural content generation is helping game developers to create a significant quantity of high-quality dynamic content for video games at a fraction of the cost of traditional methods . procedural texture synthesis is a subcategory of procedural content generation which helps video games to have significant variations in the textures of the environments and the objects across the progress of the game and to avoid repetition . generative adversarial networks are a class of deep learning algorithms which are capable of learning patterns and creating new patterns . in this paper , generative adversarial networks are used for procedural content generation , specifically for original texture synthesis in video game development . this method is used by video game designers for autonomous redesigning of objects and environment textures . this process saves significant time and cost in video game development . the particular attention in this paper is on the procedural synthesis of ground surface textures . the generated texture samples are visually acceptable and have a mean score of 2.45 with a standard deviation of 0.1 after 2k iterations . also the discriminator loss of generated samples reached 0.74 at the final stage of training . the proposed framework can be used as an story_separator_special_tag generative adversarial networks ( gans ) are a recent approach to train generative models of data , which have been shown to work particularly well on image data . in the current paper we introduce a new model for texture synthesis based on gan learning . by extending the input noise distribution space from a single vector to a whole spatial tensor , we create an architecture with properties well suited to the task of texture synthesis , which we call spatial gan ( sgan ) . to our knowledge , this is the first successful completely data-driven texture synthesis method based on gans . our method has the following features which make it a state of the art algorithm for texture synthesis : high image quality of the generated textures , very high scalability w.r.t .
the output texture size , fast real-time forward generation , the ability to fuse multiple diverse source images in complex textures . to illustrate these capabilities we present multiple experiments with different classes of texture images and use cases . we also discuss some limitations of our method with respect to the types of texture images it can synthesize , and compare it story_separator_special_tag generative adversarial networks have gained a lot of attention in the computer vision community due to their capability of data generation without explicitly modelling the probability density function . the adversarial loss brought by the discriminator provides a clever way of incorporating unlabeled samples into training and imposing higher order consistency . this has proven to be useful in many cases , such as domain adaptation , data augmentation , and image-to-image translation . these properties have attracted researchers in the medical imaging community , and we have seen rapid adoption in many traditional and novel applications , such as image reconstruction , segmentation , detection , classification , and cross-modality synthesis . based on our observations , this trend will continue and we therefore conducted a review of recent advances in medical imaging using the adversarial training scheme with the hope of benefiting researchers interested in this technique . story_separator_special_tag despite the breakthroughs in accuracy and speed of single image super-resolution using faster and deeper convolutional neural networks , one central problem remains largely unsolved : how do we recover the finer texture details when we super-resolve at large upscaling factors ? the behavior of optimization-based super-resolution methods is principally driven by the choice of the objective function . recent work has largely focused on minimizing the mean squared reconstruction error . the resulting estimates have high peak signal-to-noise ratios , but they are often lacking high-frequency details and are perceptually unsatisfying in the sense that they fail to match the fidelity expected at the higher resolution . in this paper , we present srgan , a generative adversarial network ( gan ) for image super-resolution ( sr ) . to our knowledge , it is the first framework capable of inferring photo-realistic natural images for 4x upscaling factors . to achieve this , we propose a perceptual loss function which consists of an adversarial loss and a content loss . the adversarial loss pushes our solution to the natural image manifold using a discriminator network that is trained to differentiate between the super-resolved images and original photo-realistic images . story_separator_special_tag speech blind bandwidth extension technologies have been available for some time , but until now have not seen widespread deployment , partly because the added bandwidth has been accompanied by added artifacts . in this paper , we present three generations of blind bandwidth extension technologies , from vector quantization mapping through gaussian mixture models , to our latest architecture based on deep neural networks using generative adversarial networks . this latest approach shows a sharp jump in quality , and demonstrates that machine-learning based blind bandwidth extension algorithms can achieve quality equal to wideband codecs , both objectively and subjectively . 
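the perceptual loss described in the srgan abstract above combines a content term computed on fixed feature maps with an adversarial term ; the sketch below is a hedged approximation in which the feature extractor , the logit-producing discriminator and the weight 1e-3 are assumptions .

```python
import torch
import torch.nn.functional as F

def perceptual_loss(discriminator, feat_extractor, sr, hr, adv_weight=1e-3):
    # content loss : mse between feature maps of the super-resolved
    # and ground-truth images ( a vgg-style extractor is assumed )
    content = F.mse_loss(feat_extractor(sr), feat_extractor(hr))
    # adversarial loss : push the generator toward images the
    # discriminator scores as real ( non-saturating form )
    adv = -torch.log(torch.sigmoid(discriminator(sr)) + 1e-8).mean()
    return content + adv_weight * adv
```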
we believe that blind bandwidth extension can now achieve sufficiently high quality to warrant deployment in the existing telecommunication networks . story_separator_special_tag with the extensive development of deep learning , automatic composition has become a vanguard subject exercising the minds of scientists in the area of computer music . this paper proposes an advanced algorithm for generating music using generative adversarial networks ( gan ) . the music is divided into tracks and the note segments of the tracks are expressed as a piano-roll ; a gan model is trained in which the generator and discriminator play a continuous zero-sum game in order to generate a complete piece of music . although gans excel mostly in image generation , the model in this paper adopts a full-channel lateral deep convolutional network structure suited to the characteristics of music data , generating music more in line with human hearing and aesthetics . story_separator_special_tag environmental sounds , everyday audio events that do not consist of music or speech data and are often more diverse and chaotic in their structure , have proven to be a promising type of carrier signals to carry out covert communication as they occur frequently in the natural environment , e.g. , marine communication by mimicking dolphin or sea lion whistles . however , a mass collection of the carrier signals still remains a challenging task . recently proposed generative models represented by generative adversarial nets ( gan ) have provided an effective way to synthesize environmental sounds . in this study , an end-to-end convolutional neural network ( cnn ) is proposed to directly transform randomly sampled gaussian noise into environmental sound that contains the secret message . the proposed network structure is composed of upsampling groups and an orthogonal quantization layer , which can simultaneously realize factor analysis and information embedding . the design of the orthogonal quantization layer to complete the message embedding task is inspired by spread spectrum , model-based modulation , and compensative quantization . the underlying idea in this study is to treat the secret message as the constraint information in the generative model story_separator_special_tag anomaly detection is a significant problem faced in several research areas . detecting and correctly classifying something unseen as anomalous is a challenging problem that has been tackled in many different manners over the years . generative adversarial networks ( gans ) and the adversarial training process have been recently employed to face this task , yielding remarkable results . in this paper we survey the principal gan-based anomaly detection methods , highlighting their pros and cons . our contributions are the empirical validation of the main gan models for anomaly detection , an expanded set of experimental results on different datasets and the public release of a complete open source toolbox for anomaly detection using gans . story_separator_special_tag as the next generation of the power system , the smart grid is developing towards automation and intelligence . along with the benefits brought by smart grids , e.g. , improved energy conversion rate , power utilization rate , and power supply quality , come security challenges . one of the most important issues in smart grids is to ensure reliable communication between secondary equipment . the state-of-the-art method to ensure smart grid security is to detect cyber attacks by deep learning .
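the gan-based anomaly detection methods surveyed above commonly score a sample by how well the trained generator can reconstruct it ; the following is a hypothetical sketch of such a reconstruction-based score , with the latent search procedure and its hyperparameters assumed rather than taken from any specific paper .

```python
import torch

def anomaly_score(generator, x, z_dim=100, steps=200, lr=0.01):
    # find the latent code whose generated sample best reconstructs x ;
    # inputs the generator cannot reconstruct are flagged as anomalous
    z = torch.randn(x.size(0), z_dim, requires_grad=True)
    opt = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = torch.mean((generator(z) - x) ** 2)
        loss.backward()
        opt.step()
    return loss.item()  # higher residual -> more anomalous
```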
however , due to the small number of negative samples , the performance of the detection system is limited . in this paper , we propose a novel approach that utilizes the generative adversarial network ( gan ) to generate abundant negative samples , which helps to improve the performance of the state-of-art detection system . the evaluation results demonstrate that the proposed method can effectively improve the performance of the detection system by 4 % . story_separator_special_tag in many real applications , the ground truths of class labels from voltage dip sequences used for training a voltage dip classification system are unknown , and require manual labelling by human experts . this paper proposes a novel deep active learning method for automatic labelling of voltage dip sequences used for the training process . we propose a novel deep active learning method , guided by a generative adversarial network ( gan ) , where the generator is formed by modelling data with a gaussian mixture model and provides the estimated probability distribution function ( pdf ) where the query criterion of the deep active learning method is built upon . furthermore , the discriminator is formed by a support vector machine ( svm ) . the proposed method has been tested on a voltage dip dataset ( containing 916 dips ) measured in a european country . the experiments have resulted in good performance ( classification rate 83 % and false alarm 3.2 % ) , which have demonstrated the effectiveness of the proposed method . story_separator_special_tag the availability of fine grained time series data is a pre-requisite for research in smart-grids . while data for transmission systems is relatively easily obtainable , issues related to data collection , security and privacy hinder the widespread public availability/accessibility of such datasets at the distribution system level . this has prevented the larger research community from effectively applying sophisticated machine learning algorithms to significantly improve the distribution-level accuracy of predictions and increase the efficiency of grid operations . synthetic dataset generation has proven to be a promising solution for addressing data availability issues in various domains such as computer vision , natural language processing and medicine . however , its exploration in the smart grid context remains unsatisfactory . previous works have tried to generate synthetic datasets by modeling the underlying system dynamics : an approach which is difficult , time consuming , error prone and often times infeasible in many problems . in this work , we propose a novel data-driven approach to synthetic dataset generation by utilizing deep generative adversarial networks ( gan ) to learn the conditional probability distribution of essential features in the real dataset and generate samples based on the learned distribution . to story_separator_special_tag identification of the defective patterns of the wafer maps can provide insights for the quality control in the semiconductor wafer fabrication systems ( swfss ) . in real swfss , the collected wafer maps are usually imbalanced from the defective types , which will result in misidentification . in this paper , a novel deep learning model called adaptive balancing generative adversarial network ( adabalgan ) is proposed for the defective pattern recognition ( dpr ) of wafer maps with imbalanced data . 
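the query criterion in the voltage-dip abstract above is built on a pdf estimated by a gaussian mixture model ; as a rough sketch of one plausible such criterion ( the component count , batch size and the choice to query lowest-density samples are assumptions ) :

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def query_by_density(X_labeled, X_pool, n_components=5, n_query=10):
    # fit a gaussian mixture to the labeled data as an estimated pdf
    gmm = GaussianMixture(n_components=n_components).fit(X_labeled)
    # query the pool samples with the lowest likelihood under the gmm ,
    # i.e . the dips the current model explains worst
    log_density = gmm.score_samples(X_pool)
    return np.argsort(log_density)[:n_query]
```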
in addition , a categorical generative adversarial network is improved to generate simulated wafer maps in high fidelity and classify the patterns with high accuracy for all defective categories . taking the various learning abilities of the dpr model for different patterns into account , an adaptive generative controller is designed to balance the number of samples of each defective type according to the classification accuracy . the experiment results indicated that the proposed adabalgan model outperforms conventional models with higher accuracy and stability for the dpr of wafer maps . further results of comparative experiments revealed that the proposed adaptive generative mechanism can enhance and balance the recognition accuracy for all categories in the story_separator_special_tag in back-end analog/mixed-signal ( ams ) design flow , well generation persists as a fundamental challenge for layout compactness , routing complexity , circuit performance and robustness . the immaturity of ams layout automation tools comes to a large extent from the difficulty in comprehending and incorporating designer expertise . to mimic the behavior of experienced designers in well generation , we propose a generative adversarial network ( gan ) guided well generation framework with a post-refinement stage leveraging previous high-quality manually-crafted layouts . guiding regions for wells are first created by a trained gan model , after which the well generation results are legalized through post-refinement to satisfy design rules . experimental results show that the proposed technique is able to generate wells close to manual designs with comparable post-layout circuit performance . story_separator_special_tag deep learning can be applied to the field of fault diagnosis for its powerful feature representation capabilities . when the available fault samples of a certain class are very limited , the dataset is inevitably unbalanced . the fault features extracted from unbalanced data via deep learning are inaccurate , which can lead to a high misclassification rate . to solve this problem , a new generator and discriminator of a generative adversarial network ( gan ) are designed in this paper to generate more discriminant fault samples using a scheme of global optimization . the generator is designed to generate the fault features extracted from a few fault samples via an autoencoder ( ae ) instead of raw fault data samples . the training of the generator is guided by the fault features and the fault diagnosis error instead of the statistical coincidence used in a traditional gan . the discriminator is designed to filter out unqualified generated samples , in the sense that qualified samples are helpful for more accurate fault diagnosis . the experimental results on rolling bearings verify the effectiveness of the proposed algorithm . story_separator_special_tag mechanical fault datasets are always highly imbalanced , with abundant common mechanical fault samples but a paucity of samples from rare fault conditions . to overcome this weakness , the simulation of rare fault signals is proposed in this paper . specifically , frequency spectra are employed as model signals , then a wasserstein generative adversarial network ( wgan ) is implemented to generate simulated signals based on a labeled dataset . finally , the real and artificial signals are combined to train stacked autoencoders ( sae ) to detect mechanical health conditions .
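the adaptive generative controller in the adabalgan abstract above balances per-class sample counts according to classification accuracy ; a toy sketch of one way such an allocation could look ( the inverse-accuracy rule and the generation budget are assumptions ) :

```python
import numpy as np

def samples_per_class(class_accuracy, budget=1000, eps=1e-6):
    # allocate the generation budget in inverse proportion to
    # per-class accuracy , so poorly recognized ( rare ) defect
    # patterns receive more synthetic samples
    need = 1.0 - np.asarray(class_accuracy) + eps
    weights = need / need.sum()
    return np.round(weights * budget).astype(int)

# e.g . accuracies [ 0.95 , 0.60 , 0.80 ] -> roughly [ 77 , 615 , 308 ] samples
```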
to validate the effectiveness of the proposed wgan-sae method , two specially designed experiments are carried out and some traditional methods are adopted for comparison . the diagnosis results show that the proposed method can deal with the imbalanced fault classification problem much more effectively . the improved performance is mainly due to the artificial fault signals generated from the wgan to balance the dataset , where the signals that are lacking in the training dataset are effectively augmented . furthermore , the learned features in each layer of the generator network are also analyzed via visualization , which may help us understand the working process of the wgan . story_separator_special_tag transfer learning has become important in recent years because labeled data are rare in real applications under different conditions . in modern industry systems , collected sample signals are usually not equally distributed , meaning the quantities of data from different working conditions are rarely the same . researchers have proposed a number of methods tackling the issue , most of which try to extract the features of the original data to unify them . a basic and valid family of algorithms is distribution adaptation , which includes transfer component analysis ( tca ) , joint distribution adaptation ( jda ) , correlation alignment ( coral ) and other varieties . these methods have shown great effectiveness in practice . generative adversarial networks ( gans ) are newly developed generative models which can generate new sample data similar to the original data through a specially designed competitive training procedure . while distribution adaptation unifies the signals under all conditions , gan models are able to achieve approximate distribution functions and generate fake samples for different working conditions . in this paper , a new fault diagnosis transfer learning approach is proposed with a cycle-consistent gan model . the designed gan tries story_separator_special_tag aero-engine fault diagnosis is essential to the safety of long-endurance aircraft . the problem of fault diagnosis for aero-engines is essentially a sort of model classification problem . due to the difficulty of modeling engine faults , a data-driven approach is used in this paper , based on the relevance vector machine for classification . however , the collection of fault samples is so difficult that it causes an imbalanced learning problem . to solve this problem , a semi-supervised learning approach based on the improved wasserstein generative adversarial networks and the k-means cluster technique is proposed in this paper . the theoretical analysis and the experiment show that , compared with another sampling method , the synthetic minority oversampling technique ( smote ) , the proposed approach can better fit the fault sample distribution and generate much more appropriate new samples by learning from the small number of fault samples . it is more efficient to prevent over-fitting by training with the original samples that are mixed with the improved wasserstein generative adversarial networks generated samples . story_separator_special_tag the appearance of generative adversarial networks ( gan ) provides a new approach and framework for computer vision . compared with traditional machine learning algorithms , gans work via an adversarial training concept and are more powerful in both feature learning and representation .
gans also exhibit some problems , such as non-convergence , model collapse , and uncontrollability due to a high degree of freedom . how to improve the theory of gans and apply it to computer-vision-related tasks has now attracted much research effort . in this paper , recently proposed gan models and their applications in computer vision are systematically reviewed . in particular , we first survey the history and development of generative algorithms , the mechanism of gan , its fundamental network structures , and theoretical analysis of the original gan . classical gan algorithms are then compared comprehensively in terms of the mechanism , visual results of generated samples , and frechet inception distance . these networks are further evaluated from network construction , performance , and applicability aspects by extensive experiments conducted over public datasets . after that , several typical applications of gan in computer vision , including high-quality sample generation , style transfer story_separator_special_tag multilayer neural networks trained with the backpropagation algorithm constitute the best example of a successful gradient-based learning technique . given an appropriate network architecture , gradient-based learning algorithms can be used to synthesize a complex decision surface that can classify high-dimensional patterns such as handwritten characters , with minimal preprocessing . this paper reviews various methods applied to handwritten character recognition and compares them on a standard handwritten digit recognition task . convolutional neural networks , which are specifically designed to deal with the variability of 2d shapes , are shown to outperform all other techniques . real-life document recognition systems are composed of multiple modules including field extraction , segmentation , recognition , and language modeling . a new learning paradigm , called graph transformer networks ( gtn ) , allows such multi-module systems to be trained globally using gradient-based methods so as to minimize an overall performance measure . two systems for on-line handwriting recognition are described . experiments demonstrate the advantage of global training , and the flexibility of graph transformer networks . a graph transformer network for reading bank checks is also described . it uses convolutional neural network character recognizers combined with global training techniques story_separator_special_tag we present fashion-mnist , a new dataset comprising 28x28 grayscale images of 70,000 fashion products from 10 categories , with 7,000 images per category . the training set has 60,000 images and the test set has 10,000 images . fashion-mnist is intended to serve as a direct drop-in replacement for the original mnist dataset for benchmarking machine learning algorithms , as it shares the same image size , data format and the structure of training and testing splits . the dataset is freely available at this https url story_separator_special_tag recently , generative adversarial networks ( gans ) have become a research focus of artificial intelligence . inspired by the two-player zero-sum game , gans comprise a generator and a discriminator , both trained under the adversarial learning idea . the goal of gans is to estimate the potential distribution of real data samples and generate new samples from that distribution .
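since fashion-mnist is described above as a direct drop-in replacement for mnist , a minimal loading sketch with torchvision ( the local data path is an assumption ) :

```python
from torchvision import datasets, transforms
from torch.utils.data import DataLoader

# same 28x28 grayscale format and 60k/10k split as mnist ,
# so existing mnist pipelines work unchanged
train_set = datasets.FashionMNIST("./data", train=True, download=True,
                                  transform=transforms.ToTensor())
test_set = datasets.FashionMNIST("./data", train=False, download=True,
                                 transform=transforms.ToTensor())
train_loader = DataLoader(train_set, batch_size=64, shuffle=True)
```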
since their initiation , gans have been widely studied due to their enormous prospect for applications , including image and vision computing , speech and language processing , etc . in this review paper , we summarize the state of the art of gans and look into the future . firstly , we survey gans ' proposal background , theoretic and implementation models , and application fields . then , we discuss gans ' advantages and disadvantages , and their development trends . in particular , we investigate the relation between gans and parallel intelligence , with the conclusion that gans have a great potential in parallel systems research in terms of virtual-real interaction and integration . clearly , gans can provide substantial algorithmic support for parallel intelligence . story_separator_special_tag generative adversarial networks ( gans ) are one of the most important research avenues in the field of artificial intelligence , and their outstanding data generation capacity has received wide attention . in this paper , we present the recent progress on gans . first , the basic theory of gans and the differences among different generative models in recent years are analyzed and summarized . then , the derived models of gans are classified and introduced one by one . third , the training tricks and evaluation metrics are given . fourth , the applications of gans are introduced . finally , the problems that we need to address and future directions are discussed . story_separator_special_tag with the recent improvements in computation power and high scale datasets , many interesting studies have been presented based on discriminative models such as convolutional neural network ( cnn ) and recurrent neural network ( rnn ) for various classification problems . these models have achieved current state-of-the-art results in almost all applications of computer vision but are not sufficient for sampling new data or understanding the data distribution . by pioneers of the deep learning community , generative adversarial training is defined as the most exciting topic of the computer vision field nowadays . with the influence of these views and potential usages of generative models , much research has been conducted using generative models , especially generative adversarial network ( gan ) and autoencoder ( ae ) based models , with an increasing trend . in this study , a comprehensive review of generative models , defining the relations among them , is presented for a better understanding of gans and aes , pointing out the importance of generative models . story_separator_special_tag this report summarizes the tutorial presented by the author at nips 2016 on generative adversarial networks ( gans ) . the tutorial describes : ( 1 ) why generative modeling is a topic worth studying , ( 2 ) how generative models work , and how gans compare to other generative models , ( 3 ) the details of how gans work , ( 4 ) research frontiers in gans , and ( 5 ) state-of-the-art image models that combine gans with other methods . finally , the tutorial contains three exercises for readers to complete , and the solutions to these exercises . story_separator_special_tag deep learning has achieved great success in the field of artificial intelligence , and many deep learning models have been developed . generative adversarial networks ( gan ) are one such deep learning model , proposed based on zero-sum game theory , and have become a new research hotspot .
the significance of the model variation is to obtain the data distribution through unsupervised learning and to generate more realistic/actual data . currently , gans have been widely studied due to the enormous application prospect , including image and vision computing , video and language processing , etc . in this paper , the background of the gan , theoretic models and extensional variants of gans are introduced , where the variants can further optimize the original gan or change the basic structures . then the typical applications of gans are explained . finally the existing problems of gans are summarized and future work on gan models is given . story_separator_special_tag generative adversarial networks ( gans ) have received wide attention in the machine learning field because they can generate realistic data by estimating the real data probability distribution . gans have been successfully applied to many fields such as computer vision , pattern recognition , natural language processing and so on . by now many kinds of extended models of gans have been proposed and investigated by different researchers from different viewpoints . although there are a few review papers on the extended models of gans in the literature , some remarkable extensions of gans published in recent years are not included in these surveys . this paper attempts to provide potential readers with the recent advances in gans by surveying twelve representative variants . furthermore , we also present the lineage of the extended models of gans . this paper can provide researchers engaged in related work with very valuable help . story_separator_special_tag this paper presents a survey of image synthesis and editing with generative adversarial networks ( gans ) . gans consist of two deep networks , a generator and a discriminator , which are trained in a competitive way . due to the power of deep networks and the competitive training manner , gans are capable of producing reasonable and realistic images , and have shown great capability in many image synthesis and editing applications . this paper surveys recent gan papers regarding topics including , but not limited to , texture synthesis , image inpainting , image-to-image translation , and image editing . story_separator_special_tag deep convolutional neural networks have performed remarkably well on many computer vision tasks . however , these networks are heavily reliant on big data to avoid overfitting . overfitting refers to the phenomenon when a network learns a function with very high variance such that it perfectly models the training data . unfortunately , many application domains do not have access to big data , such as medical image analysis . this survey focuses on data augmentation , a data-space solution to the problem of limited data . data augmentation encompasses a suite of techniques that enhance the size and quality of training datasets such that better deep learning models can be built using them . the image augmentation algorithms discussed in this survey include geometric transformations , color space augmentations , kernel filters , mixing images , random erasing , feature space augmentation , adversarial training , generative adversarial networks , neural style transfer , and meta-learning . the application of augmentation methods based on gans is heavily covered in this survey .
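a few of the augmentation families listed in the survey abstract above , expressed as a torchvision pipeline ; the specific parameter values are illustrative assumptions :

```python
from torchvision import transforms

# geometric transformations , color-space augmentation and random
# erasing , three of the families discussed in the survey
augment = transforms.Compose([
    transforms.RandomHorizontalFlip(),
    transforms.RandomRotation(degrees=15),
    transforms.ColorJitter(brightness=0.2, contrast=0.2),
    transforms.ToTensor(),              # random erasing operates on tensors
    transforms.RandomErasing(p=0.25),
])
```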
in addition to augmentation techniques , this paper will briefly discuss other characteristics of data augmentation such as test-time augmentation , resolution impact , final dataset story_separator_special_tag in recent years , frameworks that employ generative adversarial networks ( gans ) have achieved immense results for various applications in many fields , especially those related to image generation , both due to their ability to create highly realistic and sharp images as well as train on huge data sets . however , successfully training gans is a notoriously difficult task if high resolution images are required . in this article , we discuss five applicable and fascinating areas for image synthesis based on the state-of-the-art gans techniques , including text-to-image synthesis , image-to-image translation , face manipulation , 3d image synthesis and deepmasterprints . we provide a detailed review of current gans-based image generation models with their advantages and disadvantages . the results of the publications in each section show the gans based algorithms are growing fast and their constant improvement , whether in the same field or in others , will solve complicated image generation tasks in the future . story_separator_special_tag generative adversarial networks ( gans ) provide a way to learn deep representations without extensively annotated training data . they achieve this through deriving backpropagation signals through a competitive process involving a pair of networks . the representations that can be learned by gans may be used in a variety of applications , including image synthesis , semantic image editing , style transfer , image super-resolution and classification . the aim of this review paper is to provide an overview of gans for the signal processing community , drawing on familiar analogies and concepts where possible . in addition to identifying different methods for training and constructing gans , we also point to remaining challenges in their theory and application . story_separator_special_tag despite the remarkable success of deep rl in learning control policies from raw pixels , the resulting models do not generalize . we demonstrate that a trained agent fails completely when facing small visual changes , and that fine-tuning , the common transfer learning paradigm , fails to adapt to these changes , to the extent that it is faster to re-train the model from scratch . we show that by separating the visual transfer task from the control policy we achieve substantially better sample efficiency and transfer behavior , allowing an agent trained on the source task to transfer well to the target tasks . the visual mapping from the target to the source domain is performed using unaligned gans , resulting in a control policy that can be further improved using imitation learning from imperfect demonstrations . we demonstrate the approach on synthetic visual variants of the breakout game , as well as on transfer between subsequent levels of road fighter , a nintendo car-driving game . a visualization of our approach can be seen in this https url and this https url . story_separator_special_tag generative adversarial networks ( gans ) are a recently proposed class of generative models in which a generator is trained to optimize a cost function that is being simultaneously learned by a discriminator .
while the idea of learning cost functions is relatively new to the field of generative modeling , learning costs has long been studied in control and reinforcement learning ( rl ) domains , typically for imitation learning from demonstrations . in these fields , learning the cost function underlying observed behavior is known as inverse reinforcement learning ( irl ) or inverse optimal control . while at first the connection between cost learning in rl and cost learning in generative modeling may appear to be a superficial one , we show in this paper that certain irl methods are in fact mathematically equivalent to gans . in particular , we demonstrate an equivalence between a sample-based algorithm for maximum entropy irl and a gan in which the generator 's density can be evaluated and is provided as an additional input to the discriminator . interestingly , maximum entropy irl is a special case of an energy-based model . we discuss the interpretation of gans as an algorithm story_separator_special_tag over the past years , generative adversarial networks ( gans ) have shown a remarkable generation performance especially in image synthesis . unfortunately , they are also known for having an unstable training process and might lose parts of the data distribution for heterogeneous input data . in this paper , we propose a novel gan extension for multi-modal distribution learning ( mmgan ) . in our approach , we model the latent space as a gaussian mixture model with a number of clusters referring to the number of disconnected data manifolds in the observation space , and include a clustering network , which relates each data manifold to one gaussian cluster . thus , the training gets more stable . moreover , mmgan allows for clustering real data according to the learned data manifold in the latent space . by a series of benchmark experiments , we illustrate that mmgan outperforms competitive state-of-the-art models in terms of clustering performance . story_separator_special_tag semi-supervised learning methods using generative adversarial networks ( gans ) have shown promising empirical success recently . most of these methods use a shared discriminator/classifier which discriminates real examples from fake while also predicting the class label . motivated by the ability of the gans generator to capture the data manifold well , we propose to estimate the tangent space to the data manifold using gans and employ it to inject invariances into the classifier . in the process , we propose enhancements over existing methods for learning the inverse mapping ( i.e. , the encoder ) which greatly improves in terms of semantic similarity of the reconstructed sample with the input sample . we observe considerable empirical gains in semi-supervised learning over baselines , particularly in the cases when the number of labeled examples is low . we also provide insights into how fake examples influence the semi-supervised learning procedure . story_separator_special_tag nowadays , it is very popular to use deep architectures in machine learning . deep belief networks ( dbns ) are deep architectures that use a stack of restricted boltzmann machines ( rbms ) to create a powerful generative model using training data . dbns have many abilities , like feature extraction and classification , that are used in many applications like image processing , speech processing , etc . this paper introduces a new object oriented matlab toolbox with most of the abilities needed for the implementation of dbns .
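the core modelling choice in the mmgan abstract above is a gaussian mixture latent space ; a minimal sketch of drawing latent codes this way ( cluster count , dimensionality and scale are assumptions , and the centres would normally be learned alongside the clustering network ) :

```python
import torch

N_CLUSTERS, Z_DIM = 10, 64
# fixed cluster centres , one per assumed disconnected data manifold
MEANS = torch.randn(N_CLUSTERS, Z_DIM)

def sample_mixture_latent(batch, sigma=0.2):
    # choose a component uniformly , then sample around its centre
    idx = torch.randint(0, N_CLUSTERS, (batch,))
    return MEANS[idx] + sigma * torch.randn(batch, Z_DIM)
```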
in the new version , the toolbox can be used in octave . according to the results of the experiments conducted on mnist ( image ) , isolet ( speech ) , and 20 newsgroups ( text ) datasets , it was shown that the toolbox can automatically learn a good representation of the input from unlabeled data with better discrimination between different classes . also on all datasets , the obtained classification errors are comparable to those of state of the art classifiers . in addition , the toolbox supports different sampling methods ( e.g . gibbs , cd , pcd and our new fepcd method ) , different sparsity methods story_separator_special_tag the choice of approximate posterior distribution is one of the core problems in variational inference . most applications of variational inference employ simple families of posterior approximations in order to allow for efficient inference , focusing on mean-field or other simple structured approximations . this restriction has a significant impact on the quality of inferences made using variational methods . we introduce a new approach for specifying flexible , arbitrarily complex and scalable approximate posterior distributions . our approximations are distributions constructed through a normalizing flow , whereby a simple initial density is transformed into a more complex one by applying a sequence of invertible transformations until a desired level of complexity is attained . we use this view of normalizing flows to develop categories of finite and infinitesimal flows and provide a unified view of approaches for constructing rich posterior approximations . we demonstrate that the theoretical advantages of having posteriors that better match the true posterior , combined with the scalability of amortized variational approaches , provides a clear improvement in performance and applicability of variational inference . story_separator_special_tag we propose a new framework for estimating generative models via an adversarial process , in which we simultaneously train two models : a generative model g that captures the data distribution , and a discriminative model d that estimates the probability that a sample came from the training data rather than g. the training procedure for g is to maximize the probability of d making a mistake . this framework corresponds to a minimax two-player game . in the space of arbitrary functions g and d , a unique solution exists , with g recovering the training data distribution and d equal to 1/2 everywhere . in the case where g and d are defined by multilayer perceptrons , the entire system can be trained with backpropagation . there is no need for any markov chains or unrolled approximate inference networks during either training or generation of samples . experiments demonstrate the potential of the framework through qualitative and quantitative evaluation of the generated samples . story_separator_special_tag we present a variety of new architectural features and training procedures that we apply to the generative adversarial networks ( gans ) framework . using our new techniques , we achieve state-of-the-art results in semi-supervised classification on mnist , cifar-10 and svhn . the generated images are of high quality as confirmed by a visual turing test : our model generates mnist samples that humans cannot distinguish from real data , and cifar-10 samples that yield a human error rate of 21.3 % .
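the minimax game in the gan framework abstract above alternates discriminator and generator updates ; a compact pytorch sketch of one such alternating step , using the common non-saturating generator loss ( the architectures , optimizers and the ( n , 1 ) logit output shape are assumptions ) :

```python
import torch
import torch.nn.functional as F

def gan_step(G, D, opt_g, opt_d, real, z_dim=100):
    n = real.size(0)
    fake = G(torch.randn(n, z_dim))
    # discriminator : push d ( real ) toward 1 and d ( fake ) toward 0
    d_loss = F.binary_cross_entropy_with_logits(D(real), torch.ones(n, 1)) \
           + F.binary_cross_entropy_with_logits(D(fake.detach()), torch.zeros(n, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()
    # generator : maximize the probability of d making a mistake
    g_loss = F.binary_cross_entropy_with_logits(D(fake), torch.ones(n, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```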
we also present imagenet samples with unprecedented resolution and show that our methods enable the model to learn recognizable features of imagenet classes . story_separator_special_tag generative adversarial nets [ 8 ] were recently introduced as a novel way to train generative models . in this work we introduce the conditional version of generative adversarial nets , which can be constructed by simply feeding the data , y , we wish to condition on to both the generator and discriminator . we show that this model can generate mnist digits conditioned on class labels . we also illustrate how this model could be used to learn a multi-modal model , and provide preliminary examples of an application to image tagging in which we demonstrate how this approach can generate descriptive tags which are not part of training labels . story_separator_special_tag the ability of the generative adversarial networks ( gans ) framework to learn generative models mapping from simple latent distributions to arbitrarily complex data distributions has been demonstrated empirically , with compelling results showing that the latent space of such generators captures semantic variation in the data distribution . intuitively , models trained to predict these semantic latent representations given data may serve as useful feature representations for auxiliary problems where semantics are relevant . however , in their existing form , gans have no means of learning the inverse mapping -- projecting data back into the latent space . we propose bidirectional generative adversarial networks ( bigans ) as a means of learning this inverse mapping , and demonstrate that the resulting learned feature representation is useful for auxiliary supervised discrimination tasks , competitive with contemporary approaches to unsupervised and self-supervised feature learning . story_separator_special_tag while humans easily recognize relations between data from different domains without any supervision , learning to automatically discover them is in general very challenging and needs many ground-truth pairs that illustrate the relations . to avoid costly pairing , we address the task of discovering cross-domain relations given unpaired data . we propose a method based on generative adversarial networks that learns to discover relations between different domains ( discogan ) . using the discovered relations , our proposed network successfully transfers style from one domain to another while preserving key attributes such as orientation and face identity . source code for official implementation is publicly available this https url story_separator_special_tag this paper describes infogan , an information-theoretic extension to the generative adversarial network that is able to learn disentangled representations in a completely unsupervised manner . infogan is a generative adversarial network that also maximizes the mutual information between a small subset of the latent variables and the observation . we derive a lower bound of the mutual information objective that can be optimized efficiently . specifically , infogan successfully disentangles writing styles from digit shapes on the mnist dataset , pose from lighting of 3d rendered images , and background digits from the central digit on the svhn dataset . it also discovers visual concepts that include hair styles , presence/absence of eyeglasses , and emotions on the celeba face dataset .
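returning to the conditional gan abstract above , the conditioning mechanism is simply to feed the label y to both networks ; a hedged sketch with one-hot concatenation ( the flattened-vector input representation is an assumption ) :

```python
import torch
import torch.nn.functional as F

def condition(inputs, y, n_classes=10):
    # feed the data we wish to condition on to both the generator
    # and the discriminator by concatenating a one-hot label
    onehot = F.one_hot(y, n_classes).float()
    return torch.cat([inputs, onehot], dim=1)

# usage : fake = G(condition(z, y)) ; score = D(condition(x_flat, y))
```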
experiments show that infogan learns interpretable representations that are competitive with representations learned by existing supervised methods . story_separator_special_tag contrastive learning and supervised learning have both seen significant progress and success . however , thus far they have largely been treated as two separate objectives , brought together only by having a shared neural network . in this paper we show that through the perspective of hybrid discriminative-generative training of energy-based models we can make a direct connection between contrastive learning and supervised learning . beyond presenting this unified view , we show our specific choice of approximation of the energy-based loss significantly improves energy-based models and contrastive learning based methods in confidence-calibration , out-of-distribution detection , adversarial robustness , generative modeling , and image classification tasks . in addition to significantly improved performance , our method also gets rid of sgld training and does not suffer from training instability . our evaluations also demonstrate that our method performs better than or on par with state-of-the-art hand-tailored methods in each task . story_separator_special_tag in dental computed tomography ( ct ) scanning , high-quality images are crucial for oral disease diagnosis and treatment . however , many artifacts , such as metal artifacts , downsampling artifacts and motion artifacts , can degrade the image quality in practice . the main purpose of this article is to reduce motion artifacts . motion artifacts are caused by the movement of patients during data acquisition in the dental ct scanning process . to remove motion artifacts , the goal of this study was to develop a dental ct motion artifact-correction algorithm based on a deep learning approach . we used dental ct data with motion artifacts reconstructed by conventional filtered back-projection ( fbp ) as inputs to a deep neural network and used the corresponding high-quality ct data as labeled data during training . we proposed training a generative adversarial network ( gan ) with wasserstein distance and mean squared error ( mse ) loss to remove motion artifacts and to obtain high-quality ct dental images . in our network , to improve the generator structure , the generator used a cascaded cnn-net style network with residual blocks . to the best of our knowledge , this story_separator_special_tag unsupervised learning with generative adversarial networks ( gans ) has proven hugely successful . regular gans hypothesize the discriminator as a classifier with the sigmoid cross entropy loss function . however , we found that this loss function may lead to the vanishing gradients problem during the learning process . to overcome such a problem , we propose in this paper the least squares generative adversarial networks ( lsgans ) which adopt the least squares loss function for the discriminator . we show that minimizing the objective function of lsgan yields minimizing the pearson $\chi^2$ divergence . there are two benefits of lsgans over regular gans . first , lsgans are able to generate higher quality images than regular gans . second , lsgans perform more stably during the learning process . we evaluate lsgans on five scene datasets and the experimental results show that the images generated by lsgans are of better quality than the ones generated by regular gans .
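the least squares objectives from the lsgan abstract above can be sketched as follows ; the 0/1 target coding is one common choice , not necessarily the exact one used in the paper :

```python
import torch

def lsgan_d_loss(D, real, fake):
    # least squares loss : regress real scores to 1 and fake scores to 0
    return 0.5 * ((D(real) - 1) ** 2).mean() + 0.5 * (D(fake.detach()) ** 2).mean()

def lsgan_g_loss(D, fake):
    # the generator regresses fake scores toward the real target 1 ,
    # giving non-vanishing gradients for confidently rejected samples
    return 0.5 * ((D(fake) - 1) ** 2).mean()
```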
we also conduct two comparison experiments between lsgans and regular gans to illustrate the stability of lsgans . story_separator_special_tag in recent years , supervised learning with convolutional networks ( cnns ) has seen huge adoption in computer vision applications . comparatively , unsupervised learning with cnns has received less attention . in this work we hope to help bridge the gap between the success of cnns for supervised learning and unsupervised learning . we introduce a class of cnns called deep convolutional generative adversarial networks ( dcgans ) , which have certain architectural constraints , and demonstrate that they are a strong candidate for unsupervised learning . training on various image datasets , we show convincing evidence that our deep convolutional adversarial pair learns a hierarchy of representations from object parts to scenes in both the generator and discriminator . additionally , we use the learned features for novel tasks - demonstrating their applicability as general image representations . story_separator_special_tag synthesizing high resolution photorealistic images has been a long-standing challenge in machine learning . in this paper we introduce new methods for the improved training of generative adversarial networks ( gans ) for image synthesis . we construct a variant of gans employing label conditioning that results in 128x128 resolution image samples exhibiting global coherence . we expand on previous work for image quality assessment to provide two new analyses for assessing the discriminability and diversity of samples from class-conditional image synthesis models . these analyses demonstrate that high resolution samples provide class information not present in low resolution samples . across 1000 imagenet classes , 128x128 samples are more than twice as discriminable as artificially resized 32x32 samples . in addition , 84.7 % of the classes have samples exhibiting diversity comparable to real imagenet data . story_separator_special_tag the goal of this paper is not to introduce a single algorithm or method , but to make theoretical steps towards fully understanding the training dynamics of generative adversarial networks . in order to substantiate our theoretical analysis , we perform targeted experiments to verify our assumptions , illustrate our claims , and quantify the phenomena . this paper is divided into three sections . the first section introduces the problem at hand . the second section is dedicated to studying and proving rigorously the problems including instability and saturation that arise when training generative adversarial networks . the third section examines a practical and theoretically grounded direction towards solving these problems , while introducing new tools to study them . story_separator_special_tag generative adversarial networks ( gans ) are powerful generative models , but suffer from training instability . the recently proposed wasserstein gan ( wgan ) makes progress toward stable training of gans , but sometimes can still generate only poor samples or fail to converge . we find that these problems are often due to the use of weight clipping in wgan to enforce a lipschitz constraint on the critic , which can lead to undesired behavior . we propose an alternative to clipping weights : penalize the norm of the gradient of the critic with respect to its input .
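the gradient penalty proposed above replaces weight clipping by penalizing the critic 's input-gradient norm on interpolates between real and fake samples ; a sketch under the assumption of image-shaped inputs and the commonly used coefficient of 10 :

```python
import torch

def gradient_penalty(critic, real, fake, lam=10.0):
    # evaluate the critic on random interpolates between real and fake
    eps = torch.rand(real.size(0), 1, 1, 1, device=real.device)
    x_hat = (eps * real + (1 - eps) * fake).requires_grad_(True)
    grad = torch.autograd.grad(critic(x_hat).sum(), x_hat, create_graph=True)[0]
    # penalize the gradient norm 's deviation from 1 ( soft lipschitz constraint )
    return lam * ((grad.view(grad.size(0), -1).norm(2, dim=1) - 1) ** 2).mean()
```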
our proposed method performs better than standard wgan and enables stable training of a wide variety of gan architectures with almost no hyperparameter tuning , including 101-layer resnets and language models with continuous generators . we also achieve high quality generations on cifar-10 and lsun bedrooms . story_separator_special_tag the fifth generation of mobile communications is anticipated to open up innovation opportunities for new industries such as vertical markets . however , these verticals originate myriad use cases with diverging requirements that future 5g networks have to efficiently support . network slicing may be a natural solution to simultaneously accommodate , over a common network infrastructure , the wide range of services that vertical-specific use cases will demand . in this article , we present the network slicing concept , with a particular focus on its application to 5g systems . we start by summarizing the key aspects that enable the realization of so-called network slices . then we give a brief overview of the sdn architecture proposed by the onf and show that it provides tools to support slicing . we argue that although such an architecture paves the way for network slicing implementation , it lacks some essential capabilities that can be supplied by nfv . hence , we analyze a proposal from etsi to incorporate the capabilities of sdn into the nfv architecture . additionally , we present an example scenario that combines sdn and nfv technologies to address the realization of network slices . finally story_separator_special_tag with the rapid development of the mobile network and growing complexity of new networking applications , it is challenging to meet the diverse resource demands under the current mobile network architecture , especially for iot applications . in this paper , we propose ganslicing , a dynamic service-oriented software-defined mobile network slicing scheme that leverages generative adversarial networks ( gans ) based prediction to allocate resources for iot applications in a timely and flexible manner and to improve the quality of experience ( qoe ) of users . compared with the current tenant-oriented mobile network slicing scheme , ganslicing is able to accept 16 % more requests with 12 % fewer resources for the same service request batch according to our evaluation . the result demonstrates that the proposed scheme not only improves the utilization of resources but also enhances the qoe of iot applications . story_separator_special_tag this article surveys the literature over the period of the last decade on the emerging field of self organisation as applied to wireless cellular communication networks . self organisation has been extensively studied and applied in ad hoc networks , wireless sensor networks and autonomic computer networks ; however in the context of wireless cellular networks , this is the first attempt to put in perspective the various efforts in the form of a tutorial/survey . we provide a comprehensive survey of the existing literature , projects and standards in self organising cellular networks . additionally , we also aim to present a clear understanding of this active research area , identifying a clear taxonomy and guidelines for the design of self organising mechanisms . we compare the strengths and weaknesses of existing solutions and highlight the key research areas for further development .
this paper serves as a guide and a starting point for anyone willing to delve into research on self organisation in wireless cellular communication networks . story_separator_special_tag while an al dente character of 5g is yet to emerge , network densification , miscellany of node types , split of control and data plane , network virtualization , heavy and localized cache , infrastructure sharing , concurrent operation at multiple frequency bands , simultaneous use of different medium access control and physical layers , and flexible spectrum allocations can be envisioned as some of the potential ingredients of 5g . it is not difficult to prognosticate that with such a conglomeration of technologies , the complexity of operation and opex can become the biggest challenge in 5g . to cope with similar challenges in the context of 3g and 4g networks , recently , self-organizing networks , or sons , have been researched extensively . however , the ambitious quality of experience requirements and emerging multifarious vision of 5g , and the associated scale of complexity and cost , demand a significantly different , if not totally new , approach toward sons in order to make 5g technically as well as financially feasible . in this article we first identify what challenges hinder the current self-optimizing networking paradigm from meeting the requirements of 5g . we then propose story_separator_special_tag in this paper , a survey of the literature of the past 15 years involving machine learning ( ml ) algorithms applied to self-organizing cellular networks is performed . in order for future networks to overcome the current limitations and address the issues of current cellular systems , it is clear that more intelligence needs to be deployed so that a fully autonomous and flexible network can be enabled . this paper focuses on the learning perspective of self-organizing network ( son ) solutions and provides not only an overview of the most common ml techniques encountered in cellular networks , but also classifies each paper in terms of its learning solution , while also giving some examples . the authors also classify each paper in terms of its self-organizing use-case and discuss how each proposed solution performed . in addition , a comparison between the most commonly found ml algorithms in terms of certain son metrics is performed , and general guidelines on when to choose each ml algorithm for each son function are proposed . lastly , this paper also provides future research directions and new paradigms that the use of more robust and intelligent algorithms , story_separator_special_tag in this paper , we provide an analysis of self-organized network management , with an end-to-end perspective of the network . self-organization as applied to cellular networks is usually referred to as self-organizing networks ( sons ) , and it is a key driver for improving operations , administration , and maintenance ( oam ) activities . son aims at reducing the cost of installation and management of 4g and future 5g networks , by simplifying operational tasks through the capability to configure , optimize and heal itself . to satisfy 5g network management requirements , this autonomous management vision has to be extended to the end-to-end network . in the literature and also in some instances of products available in the market , machine learning ( ml ) has been identified as the key tool to implement autonomous adaptability and take advantage of experience when making decisions .
in this paper , we survey how network management can significantly benefit from ml solutions . we review and provide the basic concepts and taxonomy for son , network management and ml . we analyse the available state of the art in the literature , standardization , and in the market story_separator_special_tag self-organizing networks ( sons ) aim at automating the management of cellular networks . however , tasks such as the selection of the most appropriate performance indicators for son functions are still carried out by experts . in this letter , an unsupervised and autonomous technique for the selection of the most useful performance indicators is proposed , consisting of a data clustering stage followed by a supervised procedure for feature selection . results show that the proposed method effectively relieves and outperforms an expert 's selection , allowing a drastic reduction of the volume and complexity of both network databases and son procedures without human intervention . story_separator_special_tag network operators require a high level of performance and reliability for the cellular radio access network ( ran ) to deliver high quality of service for mobile users . however , these networks do experience some rare and hard-to-predict anomaly events , for example , hardware failures and high radio interference , which can significantly degrade performance and end-user experience . in this work , we propose sora , a self-organizing cellular radio access network system enhanced with deep learning . sora involves four core components : self-kpis monitoring , self-anomaly prediction , self-root cause analysis , and self-healing . in particular , we design and implement the anomaly prediction and root cause analysis components with deep learning methods and evaluate the system performance with 6 months of real-world data from a top-tier us cellular network operator . we demonstrate that the proposed methods can achieve 86.9 % accuracy in predicting anomalies and 99.5 % accuracy for root cause analysis of network faults . story_separator_special_tag self-organizing networks ( son ) aim at simplifying network management ( nm ) and optimizing network capital and operational expenditure through automation . most son functions ( sfs ) are rule-based control structures , which evaluate metrics and decide actions based on a set of rules . these rigid structures are , however , very complex to design since rules must be derived for each sf in each possible scenario . in practice , rules only support generic behavior , which can not respond to the specific scenarios in each network or cell . moreover , son coordination becomes very complicated with such varied control structures . in this paper , we propose to advance son toward cognitive cellular networks ( ccn ) by adding cognition that enables the sfs to independently learn the required optimal configurations . we propose a generalized q-learning framework for the ccn functions and show how the framework fits into a general sf control loop . we then apply this framework to two functions , mobility robustness optimization ( mro ) and mobility load balancing ( mlb ) . our results show that the mro function learns to optimize handover performance while the mlb function story_separator_special_tag we propose an algorithm to automate fault management in an outdoor cellular network using deep reinforcement learning ( rl ) against wireless impairments .
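before the details continue , a toy sketch of the kind of learning loop such rl-driven self-healing schemes rest on ( the cited work uses deep rl ; plain tabular q-learning over hypothetical discretized alarm states and corrective actions is shown here for brevity , and `env` is an assumed simulator returning the next state and a reward such as a sinr improvement ) :

```python
import random
from collections import defaultdict

q = defaultdict(float)        # q[(state, action)] -> estimated long-term reward
actions = [0, 1, 2, 3]        # e.g. reset cell, adjust tilt, raise power, no-op
alpha, gamma, eps = 0.1, 0.9, 0.1

def step(state, env):
    # epsilon-greedy exploration over corrective actions
    if random.random() < eps:
        action = random.choice(actions)
    else:
        action = max(actions, key=lambda a: q[(state, a)])
    next_state, reward = env(state, action)
    best_next = max(q[(next_state, a)] for a in actions)
    # standard q-learning update toward the bellman target
    q[(state, action)] += alpha * (reward + gamma * best_next - q[(state, action)])
    return next_state
```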
this algorithm enables the cellular network cluster to self-heal by allowing rl to learn how to improve the downlink signal to interference plus noise ratio through exploration and exploitation of various alarm corrective actions . the main contributions of this paper are to 1 ) introduce a deep rl-based fault handling algorithm which self-organizing networks can implement in a polynomial runtime and 2 ) show that this fault management method can improve the radio link performance in a realistic network setup . simulation results show that our proposed algorithm learns an action sequence to clear alarms and improve the performance in the cellular cluster better than existing algorithms , even against the randomness of the network fault occurrences and user movements . story_separator_special_tag for enabling automatic deployment and management of cellular networks , self-organizing network ( son ) was boosted to enhance network performance , to improve service quality , and to reduce operational and capital expenditure . cell outage detection is an essential functionality of son to autonomously detect cells that fail to provide services , due to either software or hardware faults . machine learning represents an effective tool for such a task . however , traditional classification algorithms for cell outage detection are likely to construct a biased classifier when training samples in one class significantly outnumber other classes . to counter this problem , in this letter , we present a novel method that is able to learn from imbalanced cell outage data in cellular networks , through combining generative adversarial network ( gan ) and adaboost . specifically , the proposed approach utilizes gan to change the distribution of the imbalanced dataset by synthesizing more samples for the minority class , and then uses adaboost to classify the calibrated dataset . experimental results show significant improvement of classification performance for imbalanced cell outage data , on the basis of several metrics including receiver operating characteristic ( roc ) , precision , story_separator_special_tag in the wake of diverse service requirements and an increasing push for extreme efficiency , adaptability propelled by machine learning ( ml ) , a.k.a . self organizing networks ( son ) , is emerging as an inevitable design feature for future mobile 5g networks . the implementation of son with ml as a foundation requires significant amounts of real labeled sample data for the networks to train on , with high correlation between the amount of sample data and the effectiveness of the son algorithm . as real labeled data is generally scarce , it can become a bottleneck for ml-empowered son , preventing them from unleashing their true potential . in this work , we propose a method of expanding these sample data sets using generative adversarial networks ( gans ) , which are based on two interconnected deep artificial neural networks . this method is an alternative to taking more data to expand the sample set , preferred in cases where taking more data is not simple , feasible , or efficient . we demonstrate how the method can generate large amounts of realistic synthetic data , utilizing the gan 's ability of generation and discrimination , able to be easily added to story_separator_special_tag in this paper , we provide a novel approach that uses a generative adversarial network to produce synthetic network traffic .
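the two augmentation approaches above ( and the flowgan work later in this section ) share a simple augment-then-classify recipe ; a hedged sketch with hypothetical helper interfaces , where the generator is any gan already trained on the minority class :

```python
import numpy as np

def augment_minority(generator, x_minority, x_majority, n_extra, noise_dim=64):
    # assumed interface: generator maps gaussian noise to minority-like samples
    z = np.random.randn(n_extra, noise_dim).astype(np.float32)
    x_fake = generator(z)
    # union of real majority, real minority, and synthetic minority samples
    x = np.concatenate([x_majority, x_minority, x_fake])
    y = np.concatenate([np.zeros(len(x_majority)),
                        np.ones(len(x_minority) + n_extra)])
    return x, y   # feed to any downstream classifier, e.g. adaboost or an mlp
```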
the intent is to leverage this synthetic data to improve the robustness of machine learning algorithms that perform analysis on communication networks . in our experimental results , we demonstrate that a generative adversarial network can construct samples of network traffic that are statistically similar to an original set of reference samples . additionally , we provide insight into the performance of our approach when evaluating different varieties of generative adversarial networks for their ability to produce and converge to realistic output . story_separator_special_tag this paper presents a spiral-based manta ray foraging algorithm ( smrfo ) . it is an improvement of the manta ray foraging algorithm ( mrfo ) . the original mrfo has a competitive performance in terms of its accuracy in locating an optimal solution . its performance can be improved further provided the balance between exploration and exploitation strategies throughout a search operation is improved . a modification in the somersault phase of the mrfo is proposed . a spiral strategy is incorporated into the somersault phase of the mrfo . this guides all agents toward the best agent in a spiral-based trajectory in every iteration . the spiral strategy also offers a dynamic step size scheme for all search agents during the operation . the proposed algorithm is tested on a set of benchmark functions that consist of various fitness landscapes . in terms of solving an engineering application , the proposed algorithm is applied to optimize a pid controller for a flexible manipulator system . results of the accuracy performance test on benchmark functions show that the proposed algorithm outperforms the original mrfo significantly . in solving the engineering problem , both smrfo and mrfo optimize the pid control story_separator_special_tag network traffic classification is a central topic nowadays in the field of computer science . it is an essential task for internet service providers ( isps ) to know which types of network applications flow in a network . network traffic classification is the first step to analyze and identify different types of applications flowing in a network . through this technique , internet service providers or network operators can manage the overall performance of a network . there are many traditional techniques to classify internet traffic , such as port-based , payload-based and machine learning based techniques . the most common technique used these days is machine learning ( ml ) , which has been used by many researchers and has achieved very effective accuracy results . in this paper , we discuss network traffic classification techniques step by step ; a real-time internet dataset is developed using a network traffic capture tool , after which a feature extraction tool is used to extract features from the captured traffic , and then four machine learning classifiers , support vector machine , c4.5 decision tree , naive bayes and bayes net , are applied . experimental analysis shows that c4.5 classifiers story_separator_special_tag it is crucial to accurately identify the type of traffic and application so as to enable various policy-driven network management and security monitoring . however , as more and more internet applications adopt encryption protocols to transmit data , traffic classification is becoming more difficult .
although existing machine learning methods and novel deep learning methods have many advantages and can solve the drawbacks of port- and payload-based methods , there are still some shortcomings , one of which is the imbalanced property of network traffic data . in this paper , we propose a gan based method called flowgan to tackle the problem of class imbalance for traffic classification . as an instance of generative adversarial network ( gan ) , flowgan leverages the superiority of gan 's data augmentation to produce synthetic traffic data for classes with few samples . furthermore , we trained a classical deep learning model , a multilayer perceptron ( mlp ) based network traffic classifier , to evaluate the performance of flowgan . based on the public dataset 'iscx ' , our experimental results show that our proposed flowgan can outperform an unbalanced dataset and balancing dataset by the story_separator_special_tag traffic analysis attacks , including website fingerprinting and protocol fingerprinting , are widely adopted by internet censorship to block a specific type of traffic . to mitigate these attacks , some advanced approaches such as traffic morphing and protocol tunneling techniques have been proposed . however , the existing traffic morphing/protocol tunneling techniques suffer from showing a strong traffic pattern or can be uncovered with a low false positive rate . further , they mainly rely on learning the pattern for specific traffic , which makes them highly likely to be identified due to a lack of dynamics . in this paper , we propose a dynamic traffic camouflaging technique , coined flowgan , to dynamically morph traffic features into those of another normal network flow to bypass internet censorship . the core idea of flowgan is to automatically learn the features of the normal network flow , and dynamically morph the on-going traffic flows based on the learned features by adoption of the recently proposed generative adversarial network ( gan ) model . to measure the indistinguishability of the target traffic and the morphed traffic , we introduce a novel concept of ε-indistinguishability . we evaluate the proposed method on a story_separator_special_tag in this paper , we generally formulate the dynamics prediction problem of various network systems ( e.g. , the prediction of mobility , traffic and topology ) as the temporal link prediction task . different from conventional techniques of temporal link prediction that ignore the potential non-linear characteristics and the informative link weights in the dynamic network , we introduce a novel non-linear model gcn-gan to tackle the challenging temporal link prediction task of weighted dynamic networks . the proposed model leverages the benefits of the graph convolutional network ( gcn ) , long short-term memory ( lstm ) as well as the generative adversarial network ( gan ) . thus , the dynamics , topology structure and evolutionary patterns of weighted dynamic networks can be fully exploited to improve the temporal link prediction performance . concretely , we first utilize gcn to explore the local topological characteristics of each single snapshot and then employ lstm to characterize the evolving features of the dynamic networks .
moreover , gan is used to enhance the ability of the model to generate the next weighted network snapshot , which can effectively tackle the sparsity and the wide-value-range problem of edge weights in story_separator_special_tag social tie prediction is an important issue in social network analysis . transfer learning is often used for social tie prediction to address the problem of insufficient labeled training data , since few users manually annotate their social relationships . in this paper , we propose trangan , a novel generative adversarial network ( gan ) based transfer learning framework for social tie prediction , which leverages social theories as the common knowledge to bridge the source network and the target network . gan helps augment the original data set by generating data samples that have a similar probability distribution to that of the original data , and the training of trangan converges faster compared to existing transfer learning models . we evaluate the performance of trangan with extensive experiments , and show that trangan outperforms traditional learning algorithms and existing transfer learning algorithms on several metrics , and is efficient for large-scale social networks . story_separator_special_tag generative adversarial nets ( gans ) have shown promise in image generation and semi-supervised learning ( ssl ) . however , existing gans in ssl have two problems : ( 1 ) the generator and the discriminator ( i.e . the classifier ) may not be optimal at the same time ; and ( 2 ) the generator can not control the semantics of the generated samples . the problems essentially arise from the two-player formulation , where a single discriminator shares incompatible roles of identifying fake samples and predicting labels and it only estimates the data without considering the labels . to address the problems , we present triple generative adversarial net ( triple-gan ) , which consists of three players : a generator , a discriminator and a classifier . the generator and the classifier characterize the conditional distributions between images and labels , and the discriminator solely focuses on identifying fake image-label pairs . we design compatible utilities to ensure that the distributions characterized by the classifier and the generator both converge to the data distribution . our results on various datasets demonstrate that triple-gan as a unified model can simultaneously ( 1 ) achieve the state-of-the-art story_separator_special_tag in practice , there are few available attack datasets . although there are many methods that can be used to simulate cyberattacks and produce attack data , such as using specific tools or writing scripts to simulate attack scenes , the disadvantages of those methods are obvious . tool developers and script authors need professional network security knowledge . as tools are implemented in different ways , users also need to have some expertise . what 's more , it may take a long time to generate a large amount of attack data . in this paper , we present some of the existing network attack tools and propose a method to generate attack data based on a generative adversarial network . using our method , one does not need professional network security knowledge ; with only some basic network attack data , one can generate a large amount of attack data in a very short period of time . as network malicious activities become increasingly complex and diverse , network security analysts face serious challenges .
our method can also generate mixed-feature attack data by setting up training data with different attack types . story_separator_special_tag wireless sensor networks ( wsns ) have found more and more applications in a variety of pervasive computing environments . however , how to support the development , maintenance , deployment and execution of applications over wsns remains a nontrivial and challenging task , mainly because of the gap between the high level requirements from pervasive computing applications and the underlying operation of wsns . middleware for wsn can help bridge the gap and remove impediments . in recent years , research has been carried out on wsn middleware from different aspects and for different purposes . in this paper , we provide a comprehensive review of the existing work on wsn middleware , seeking a better understanding of the current issues and future directions in this field . we propose a reference framework to analyze the functionalities of wsn middleware in terms of the system abstractions and the services provided . we review the approaches and techniques for implementing the services . on the basis of the analysis and by using a feature tree , we provide a taxonomy of the features of wsn middleware and their relationships , and use the taxonomy to classify and evaluate story_separator_special_tag despite the popularity of wireless sensor networks ( wsns ) in a wide range of applications , security problems associated with them have not been completely resolved . middleware is generally introduced as an intermediate layer between wsns and the end user to resolve some limitations , but most of the existing middleware is unable to protect data from malicious and unknown attacks during transmission . this paper introduces an intelligent middleware based on an unsupervised learning technique called the generative adversarial networks ( gans ) algorithm . gans contain two networks : a generator ( g ) network and a detector ( d ) network . the g creates fake data similar to the real samples and combines it with real data from the sensors to confuse the attacker . the d contains multiple layers that have the ability to differentiate between real and fake data . the intended output of this algorithm is an accurate interpretation of the data , which is securely communicated through the wsn . the framework is implemented in python with experiments performed using keras . results illustrate that the suggested algorithm not only improves the accuracy of the data but also enhances its security by protecting story_separator_special_tag with the fast growing demand for location-based services in indoor environments , indoor positioning based on fingerprinting has attracted a lot of interest due to its high accuracy . in this paper , we present a novel deep learning based indoor fingerprinting system using channel state information ( csi ) , which is termed deepfi . based on three hypotheses on csi , the deepfi system architecture includes an off-line training phase and an on-line localization phase . in the off-line training phase , deep learning is utilized to train all the weights of a deep network as fingerprints . moreover , a greedy learning algorithm is used to train the weights layer-by-layer to reduce complexity . in the on-line localization phase , we use a probabilistic method based on the radial basis function to obtain the estimated location .
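the on-line phase just described admits a compact illustration ; a hedged sketch of one rbf-weighted estimate of this general kind ( a hypothetical variant , not the deepfi implementation , with the bandwidth sigma an assumed tuning parameter ) :

```python
import numpy as np

def estimate_location(live_fp, ref_fps, ref_coords, sigma=1.0):
    # ref_fps: (n, d) stored fingerprints ; ref_coords: (n, 2) known positions
    d2 = ((ref_fps - live_fp) ** 2).sum(axis=1)
    w = np.exp(-d2 / (2.0 * sigma ** 2))   # gaussian radial basis weights
    w /= w.sum()
    return w @ ref_coords                  # convex combination of reference points
```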
experimental results are presented to confirm that deepfi can effectively reduce location error compared with three existing methods in two representative indoor environments . story_separator_special_tag smartphones are linked with individuals and are valuable and yet easily available sources for characterizing users ' behavior and activities . a user 's location is among the characteristics of each individual that can be utilized in the provision of location-based services ( lbs ) in numerous scenarios such as remote health-care and interactive museums . mobile phone tracking and positioning techniques approximate the position of a mobile phone , and thereby its user , by disclosing the actual coordinates of a mobile phone . considering the advances in positioning techniques , indoor positioning is still a challenging issue , because the coverage of satellite signals is limited in indoor environments . one of the promising solutions for indoor positioning is fingerprinting , in which the signals of some known transmitters are measured at several reference points ( rps ) . this measured data , which is called a dataset , is stored and used to train a mathematical model that relates the received signal from the transmitters ( model input ) and the location of that user ( the output of the model ) . considering all the improvements in indoor positioning , there is still a gap between practical solutions and the optimal story_separator_special_tag wi-fi positioning is currently the mainstream indoor localization method , and the construction of the fingerprint database is crucial to a wi-fi based localization system . however , the accuracy requirement needs enough data sampled at many reference points , which consumes significant manpower and time . in this paper , we convert the acquired channel state information ( csi ) data to feature maps using the complex wavelet transform and then extend the fingerprint database with the proposed wavelet transform-feature deep convolutional generative adversarial network model . with this model , the convergence process in the training phase can be accelerated and the diversity of generated feature maps can be increased significantly . based on the extended fingerprint database , the accuracy of the indoor localization system can be improved with reduced human effort . story_separator_special_tag this paper describes and evaluates the use of generative adversarial networks ( gans ) for path planning in support of smart mobility applications such as indoor and outdoor navigation applications , individualized wayfinding for people with disabilities ( e.g. , vision impairments , physical disabilities , etc . ) , path planning for evacuations , robotic navigations , and path planning for autonomous vehicles . we propose an architecture based on gans to recommend accurate and reliable paths for navigation applications . the proposed system can use crowd-sourced data to learn the trajectories and infer new ones . the system provides users with generated paths that help them navigate from their local environment to reach a desired location . as a use case , we experimented with the proposed method in support of a wayfinding application in an indoor environment . our experiments indicate that the generated paths are correct and reliable . the accuracy of the classification task for the generated paths is up to 99 % and the quality of the generated paths has a mean opinion score of 89 % .
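one plausible reading of the generation-plus-classification pipeline just described is sample-then-score ; a hypothetical sketch ( the interfaces are ours , not published code : the generator maps endpoints and noise to a waypoint sequence , and a separate classifier scores its validity ) :

```python
import numpy as np

def propose_path(generator, validity_clf, start, goal, tries=16, noise_dim=32):
    # assumed interfaces: generator(start, goal, z) -> (k, 2) waypoints ;
    # validity_clf(path) -> probability the path is traversable
    best_path, best_score = None, -1.0
    for _ in range(tries):
        z = np.random.randn(noise_dim).astype(np.float32)
        path = generator(start, goal, z)
        score = validity_clf(path)
        if score > best_score:
            best_path, best_score = path, score
    return best_path
```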
story_separator_special_tag with the advancement of wireless technologies and sensing methodologies , many studies have shown the success of re-using wireless signals ( e.g. , wifi ) to sense human activities and thereby realize a set of emerging applications , ranging from intrusion detection , daily activity recognition and gesture recognition to vital signs monitoring and user identification involving even finer-grained motion sensing . these applications arguably can support various domains for smart home and office environments , including safety protection , well-being monitoring/management , smart healthcare and smart-appliance interaction . the movements of the human body impact the wireless signal propagation ( e.g. , reflection , diffraction and scattering ) , which provides great opportunities to capture human motions by analyzing the received wireless signals . researchers take advantage of the existing wireless links among mobile/smart devices ( e.g. , laptops , smartphones , smart thermostats , smart refrigerators and virtual assistance systems ) by either extracting the ready-to-use signal measurements or adopting frequency modulated signals to detect the frequency shift . due to the low-cost and non-intrusive sensing nature , wireless-based human activity sensing has drawn considerable attention and become a prominent research field over the past decade . story_separator_special_tag indoor human activity recognition ( har ) explores the correlation between human body movements and the reflected wifi signals to classify different activities . by analyzing wifi signal patterns , especially the dynamics of channel state information ( csi ) , different activities can be distinguished . gathering csi data is expensive from both the timing and equipment perspectives . in this paper , we use synthetic data to reduce the need for real measured csi . we present a semi-supervised learning method for csi-based activity recognition systems in which long short-term memory ( lstm ) is employed to learn features and recognize seven different actions . we apply principal component analysis ( pca ) on csi amplitude data , while short-time fourier transform ( stft ) extracts the features in the frequency domain . at first , we train the lstm network entirely with raw csi data , which takes much more processing time . to reduce this cost , we generate data by using 50 % of the raw data in conjunction with a generative adversarial network ( gan ) . our experimental results confirm that this model can increase classification accuracy by 3.4 % and reduce the story_separator_special_tag as a cornerstone service for many internet of things applications , channel state information ( csi ) -based activity recognition has received immense attention over recent years . however , the recognition performance of general approaches might significantly decrease when applying the trained model to a left-out user whose csi data are not used for model training . to overcome this challenge , we propose a semi-supervised generative adversarial network ( gan ) for csi-based activity recognition ( csigan ) .
based on the general semi-supervised gans , we mainly design three components for csigan to meet scenarios where unlabeled data from left-out users are very limited and to enhance recognition performance : 1 ) we introduce a new complement generator , which can use limited unlabeled data to produce diverse fake samples for training a robust discriminator ; 2 ) for the discriminator , we change the number of probability outputs from $k+1$ to $2k+1$ ( here , $k$ is the number of categories ) , which can help to obtain the correct decision boundary for each category ; and 3 ) based on the introduced generator , we propose a manifold regularization , story_separator_special_tag we present and discuss several novel applications of deep learning for the physical layer . by interpreting a communications system as an autoencoder , we develop a fundamental new way to think about communications system design as an end-to-end reconstruction task that seeks to jointly optimize transmitter and receiver components in a single process . we show how this idea can be extended to networks of multiple transmitters and receivers and present the concept of radio transformer networks as a means to incorporate expert domain knowledge in the machine learning model . lastly , we demonstrate the application of convolutional neural networks on raw iq samples for modulation classification , which achieves competitive accuracy with respect to traditional schemes relying on expert features . the paper is concluded with a discussion of open challenges and areas for future investigation . story_separator_special_tag this paper presents a novel method for synthesizing new physical layer modulation and coding schemes for communications systems using a learning-based approach which does not require an analytic model of the impairments in the channel . it extends prior work published on the channel autoencoder to consider the case where the stochastic channel response is not known or can not be easily modeled in a closed form analytic expression . by adopting an adversarial approach for learning a channel response approximation and information encoding , we jointly learn a solution to both tasks applicable over a wide range of channel environments . we describe the operation of the proposed adversarial system , share results for its training and validation over-the-air , and discuss implications and future work in the area . story_separator_special_tag channel modeling is a critical topic when it comes to accurately designing or evaluating the performance of a communications system . most prior work in designing or learning new modulation schemes has focused on using simplified analytic channel models such as additive white gaussian noise ( awgn ) , rayleigh fading channels or other similar compact parametric models . in this paper , we extend recent work on training generative adversarial networks ( gans ) to approximate wireless channel responses to more accurately reflect the probability distribution functions ( pdfs ) of stochastic channel behaviors . we introduce the use of variational gans to provide appropriate architecture and loss functions which accurately capture these stochastic behaviors . finally , we illustrate why prior gan-based methods failed to accurately capture these behaviors and share results illustrating the performance of such a system over a range of complex realistic channel effects .
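the channel-gan idea running through these abstracts ( and the conditional-gan system described next ) reduces to a generator standing in for the channel so that transmitter gradients can flow through it ; a minimal sketch , assuming pytorch-style modules , real-valued baseband vectors , and dimensions chosen only for illustration :

```python
import torch
import torch.nn as nn

class ChannelGenerator(nn.Module):
    # maps (transmitted symbols, noise) -> fake received signal y given x
    def __init__(self, dim=16, noise_dim=8):
        super().__init__()
        self.noise_dim = noise_dim
        self.net = nn.Sequential(nn.Linear(dim + noise_dim, 64),
                                 nn.ReLU(), nn.Linear(64, dim))
    def forward(self, x):
        z = torch.randn(x.size(0), self.noise_dim, device=x.device)
        return self.net(torch.cat([x, z], dim=1))

class PairDiscriminator(nn.Module):
    # scores (x, y) pairs: real measured pairs vs generator output
    def __init__(self, dim=16):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(2 * dim, 64),
                                 nn.ReLU(), nn.Linear(64, 1))
    def forward(self, x, y):
        return self.net(torch.cat([x, y], dim=1))
```

because the generator is an ordinary differentiable network , an autoencoder transmitter can backpropagate reconstruction error through it even when the physical channel itself admits no closed-form model .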
story_separator_special_tag in this article , we use deep neural networks ( dnns ) to develop an end-to-end wireless communication system , in which dnns are employed for all signal-related functionalities , including encoding , decoding , modulation , and equalization . however , an accurate instantaneous channel transfer function , i.e . , the channel state information ( csi ) , is necessary to compute the gradient of the dnn representing the channel . in many communication systems , the channel transfer function is hard to obtain in advance and varies with time and location . in this article , this constraint is relaxed by developing a channel agnostic end-to-end system that does not rely on any prior information about the channel . we use a conditional generative adversarial net ( gan ) to represent the channel effects , where the encoded signal of the transmitter will serve as the conditioning information . in addition , in order to obtain accurate channel state information for signal detection at the receiver , the received signal corresponding to the pilot data is added as a part of the conditioning information . from the simulation results , the proposed method is effective on additive white gaussian noise ( story_separator_special_tag autoencoder-based communication systems use neural network channel models to backwardly propagate message reconstruction error gradients across an approximation of the physical communication channel . in this work , we develop and test a new generative adversarial network ( gan ) architecture for the purpose of training a stochastic channel approximating neural network . in previous research , investigators have focused on additive white gaussian noise ( awgn ) channels and/or simplified rayleigh fading channels , both of which are linear and have well defined analytic solutions . given that training a neural network is computationally expensive , channel approximation networks , and more generally the autoencoder systems , should be evaluated in communication environments that are traditionally difficult . to that end , our investigation focuses on channels that contain a combination of non-linear amplifier distortion , pulse shape filtering , intersymbol interference , frequency-dependent group delay , multipath , and non-gaussian statistics . each of our models is trained without any prior knowledge of the channel . we show that the trained models have learned to generalize over an arbitrary amplifier drive level and constellation alphabet . we demonstrate the versatility of our gan architecture by comparing the marginal probability density function of story_separator_special_tag in modern wireless communication systems , wireless channel modeling has always been a fundamental task in system design and performance optimization . traditional channel modeling methods , such as ray-tracing and geometry-based stochastic channel models , require in-depth domain-specific knowledge and technical expertise in radio signal propagation across electromagnetic fields . to avoid these difficulties and complexities , a novel generative adversarial network ( gan ) framework is proposed for the first time to address the problem of autonomous wireless channel modeling without complex theoretical analysis or data processing . specifically , the gan is trained with raw measurement data to reach the nash equilibrium of a minmax game between a channel data generator and a channel data discriminator .
once this process converges , the resulting channel data generator is extracted as the target channel model for a specific application scenario . to demonstrate this , the distribution of a typical additive white gaussian noise channel is successfully approximated by using the proposed gan-based channel modeling framework , thus verifying its good performance and effectiveness . story_separator_special_tag beyond their benign uses , small unmanned aerial vehicles ( uavs ) are expected to take a major role in future smart cities , which has attracted the attention of the public and authorities . therefore , detecting , tracking and classifying the type of uavs is important for surveillance and air traffic management applications . existing uav detection works focus on radars , visual detection , and acoustic sensors . however , prior work applying support vector machine ( svm ) and k-nearest neighbor ( knn ) based methods to classify uavs needs a large number of samples for feature extraction to train a model . in this paper , we propose a new small uav classification system using auxiliary classifier wasserstein generative adversarial networks ( ac-wgans ) based on the wireless signals collected from uavs of various types . before classification , a universal software radio peripheral ( usrp ) , an oscilloscope and an antenna are used to collect the wireless signals , with preprocessing and dimensionality reduction applied to represent the information in a lower-dimensional space . the processed data from the uavs is input to the uav discriminant model of the ac-wgans for classification story_separator_special_tag cognitive radio offers the promise of intelligent radios that can learn from and adapt to their environment . to date , most cognitive radio research has focused on policy-based radios that are hard-coded with a list of rules on how the radio should behave in certain scenarios . some work has been done on radios with learning engines tailored for very specific applications . this article describes a concrete model for a generic cognitive radio to utilize a learning engine . the goal is to incorporate the results of the learning engine into a predicate calculus-based reasoning engine so that radios can remember lessons learned in the past and act quickly in the future . we also investigate the differences between reasoning and learning , and the fundamentals of when a particular application requires learning , and when simple reasoning is sufficient . the basic architecture is consistent with cognitive engines seen in ai research . the focus of this article is not to propose new machine learning algorithms , but rather to formalize their application to cognitive radio and develop a framework from within which they can be useful . we describe how our generic cognitive engine can tackle story_separator_special_tag in this survey paper , we characterize the learning problem in cognitive radios ( crs ) and state the importance of artificial intelligence in achieving real cognitive communications systems . we review various learning problems that have been studied in the context of crs , classifying them under two main categories : decision-making and feature classification . decision-making is responsible for determining policies and decision rules for crs while feature classification permits identifying and classifying different observation models . the learning algorithms encountered are categorized as either supervised or unsupervised algorithms .
we describe in detail several challenging learning issues that arise in cognitive radio networks ( crns ) , in particular in non-markovian environments and decentralized networks , and present possible solution methods to address them . we discuss similarities and differences among the presented algorithms and identify the conditions under which each of the techniques may be applied . story_separator_special_tag two key tasks in the development of cognitive radio networks in commercial and military applications are spectrum sensing and automatic modulation classification ( amc ) . these tasks become even more difficult when the cognitive radio receiver has no information about the channel or the modulation type . an integrated scheme which includes both these aspects is proposed in this paper . spectrum sensing is done using cumulants derived from fractional lower order statistics . it is shown through simulations that the proposed sensing method has improved performance , especially in low snr environments in gaussian and non-gaussian noise , when compared with the conventional higher-order statistics ( hos ) based method . the performance of the automatic modulation classifier is presented in the form of conditional probability of classification , probability of correct classification and confusion matrices under noisy and fading conditions . simulations in our previous work showed that the proposed method achieved better classification accuracy when compared to the cumulant based amc method in noise conditions that are more impulsive than gaussian . in this paper , simulations show significant improvement in the performance of amc in the presence of awgn and under multipath fading , for story_separator_special_tag a novel approach of training data augmentation and domain adaptation is presented to support machine learning applications for cognitive radio . machine learning provides effective tools to automate cognitive radio functionalities by reliably extracting and learning intrinsic spectrum dynamics . however , there are two important challenges to overcome in order to fully utilize the machine learning benefits with cognitive radios . first , machine learning requires a significant amount of truthed data to capture complex channel and emitter characteristics , and to train the underlying algorithm ( e.g. , a classifier ) . second , the training data that has been identified for one spectrum environment can not be used for another one ( e.g. , after channel and emitter conditions change ) . to address these challenges , a generative adversarial network ( gan ) with deep learning structures is used to 1 ) generate additional synthetic training data to improve classifier accuracy , and 2 ) adapt training data to spectrum dynamics . this approach is applied to spectrum sensing by assuming only limited training data without knowledge of spectrum statistics . machine learning classifiers are trained to detect signals using either limited , augmented or adapted training story_separator_special_tag we introduce generative adversarial networks ( gans ) into the radio machine learning domain for the task of modulation recognition by proposing a general , scalable , end-to-end framework named radio classify generative adversarial networks ( rcgans ) . this method natively learns its features through self-optimization during an extensive data-driven gpu-based training process .
several experiments are conducted on a synthetic radio frequency dataset . simulation results show that , compared with some renowned deep learning methods and classic machine learning methods , the proposed method achieves higher or equivalent classification accuracy , superior data utilization , and robustness against noise . story_separator_special_tag this paper presents a novel approach to spoofing wireless signals by using a generative adversarial network ( gan ) to generate and transmit synthetic signals that can not be reliably distinguished from intended signals . it is of paramount importance to authenticate wireless signals at the phy layer before they proceed through the receiver chain . for that purpose , various waveform , channel , and radio hardware features that are inherent to original wireless signals need to be captured . in the meantime , adversaries become sophisticated with the cognitive radio capability to record , analyze , and manipulate signals before spoofing . building upon deep learning techniques , this paper introduces a spoofing attack by an adversary pair of a transmitter and a receiver that assume the generator and discriminator roles in the gan and play a minimax game to generate the best spoofing signals that aim to fool the best trained defense mechanism . the output of this approach is two-fold . from the attacker point of view , a deep learning-based spoofing mechanism is trained to potentially fool a defense mechanism such as rf fingerprinting . from the defender point of view , a deep learning-based story_separator_special_tag an adversarial machine learning approach is introduced to launch jamming attacks on wireless communications and a defense strategy is presented . a cognitive transmitter uses a pre-trained classifier to predict the current channel status based on recent sensing results and decides whether to transmit or not , whereas a jammer collects channel status and acks to build a deep learning classifier that reliably predicts the next successful transmissions and effectively jams them . this jamming approach is shown to reduce the transmitter 's performance much more severely compared with random or sensing-based jamming . the deep learning classification scores are used by the jammer for power control subject to an average power constraint . next , a generative adversarial network is developed for the jammer to reduce the time to collect the training dataset by augmenting it with synthetic samples . as a defense scheme , the transmitter deliberately takes a small number of wrong actions in spectrum access ( in the form of a causative attack against the jammer ) and therefore prevents the jammer from building a reliable classifier . the transmitter systematically selects when to take wrong actions and adapts the level of defense to mislead the jammer story_separator_special_tag covert communication conceals the transmission of the message from an attentive adversary . recent work on the limits of covert communication in additive white gaussian noise channels has demonstrated that a covert transmitter ( alice ) can reliably transmit a maximum of $\mathcal{O}(\sqrt{n})$ bits to a covert receiver ( bob ) without being detected by an adversary ( warden willie ) in $n$ channel uses . this paper focuses on the scenario where other friendly nodes distributed according to a two-dimensional poisson point process with density $m$ are present .
we propose a strategy where the friendly node closest to the adversary , without close coordination with alice , produces artificial noise . we show that this method allows alice to reliably and covertly send $\mathcal{O}(\min\{n, m^{\gamma/2}\sqrt{n}\})$ bits to bob in $n$ channel uses , where $\gamma$ is the path-loss exponent . we also consider a setting where there are $n_{\mathrm{w}}$ story_separator_special_tag this letter investigates a power allocation problem for a cooperative cognitive covert communication system , where the relay secondary transmitter ( st ) covertly transmits private information under the supervision of the primary transmitter ( pt ) . aiming to achieve a tradeoff between the covert rate and the probability of detection errors , a novel generative adversarial network based power allocation algorithm ( gan-pa ) is proposed to perform power allocation at the relay st for covert communication . under the proposed gan-pa , the generator adaptively generates the power allocation solution for covert communication , while the discriminator determines whether a covert message is transmitted or not . in particular , by utilizing the proposed deep neural network ( dnn ) , the discriminator and the generator are alternately trained in a competitive manner . numerical results show that the proposed gan-pa can attain a near-optimal power allocation solution for covert communication and achieve rapid convergence . story_separator_special_tag this survey paper describes a focused literature survey of machine learning ( ml ) and data mining ( dm ) methods for cyber analytics in support of intrusion detection . short tutorial descriptions of each ml/dm method are provided . based on the number of citations or the relevance of an emerging method , papers representing each method were identified , read , and summarized . because data are so important in ml/dm approaches , some well-known cyber data sets used in ml/dm are described . the complexity of ml/dm algorithms is addressed , discussion of challenges for using ml/dm for cyber security is presented , and some recommendations on when to use a given method are provided . story_separator_special_tag cyber-physical systems ( cpss ) have become ubiquitous in recent years and have become the core of modern critical infrastructure and industrial applications . therefore , ensuring security is a prime concern . due to the success of deep learning ( dl ) in a multitude of domains , the development of dl based cps security applications has received increased interest in the past few years . developing generalized models is critical since the models have to perform well under threats that they have not been trained on . however , despite the broad body of work on using dl for ensuring the security of cpss , to the best of our knowledge very little work exists where the focus is on the generalization capabilities of these dl applications . in this paper , we intend to provide a concise survey of the regularization methods for dl algorithms used in security-related applications in cpss , which could thus be used to improve the generalization capability of dl based cyber-physical system security applications . further , we provide a brief insight into the current challenges and future directions as well . story_separator_special_tag the internet of things ( iot ) integrates billions of smart devices that can communicate with one another with minimal human intervention .
iot is one of the fastest developing fields in the history of computing , with an estimated 50 billion devices by the end of 2020. however , the crosscutting nature of iot systems and the multidisciplinary components involved in the deployment of such systems have introduced new security challenges . implementing security measures , such as encryption , authentication , access control , and network and application security , for iot devices and their inherent vulnerabilities is ineffective . therefore , existing security methods should be enhanced to effectively secure the iot ecosystem . machine learning and deep learning ( ml/dl ) have advanced considerably over the last few years , and machine intelligence has transitioned from laboratory novelty to practical machinery in several important applications . consequently , ml/dl methods are important in transforming the security of iot systems from merely facilitating secure communication between devices to security-based intelligence systems . the goal of this work is to provide a comprehensive survey of ml methods and recent advances in dl methods that can be used to develop enhanced story_separator_special_tag this survey paper describes a literature review of deep learning ( dl ) methods for cyber security applications . a short tutorial-style description of each dl method is provided , including deep autoencoders , restricted boltzmann machines , recurrent neural networks , generative adversarial networks , and several others . then we discuss how each of the dl methods is used for security applications . we cover a broad array of attack types including malware , spam , insider threats , network intrusions , false data injection , and malicious domain names used by botnets . story_separator_special_tag software defined networking ( sdn ) has recently emerged to become one of the promising solutions for the future internet . with the logical centralization of controllers and a global network overview , sdn brings us a chance to strengthen our network security . however , sdn also brings us a dangerous increase in potential threats . in this paper , we apply a deep learning approach for flow-based anomaly detection in an sdn environment . we build a deep neural network ( dnn ) model for an intrusion detection system and train the model with the nsl-kdd dataset . in this work , we use just six basic features ( that can be easily obtained in an sdn environment ) taken from the forty-one features of the nsl-kdd dataset . through experiments , we confirm that the deep learning approach shows strong potential to be used for flow-based anomaly detection in sdn environments . story_separator_special_tag intrusion detection systems ( idss ) are an essential cog of the network security suite that can defend the network from malicious intrusions and anomalous traffic . many machine learning ( ml ) -based idss have been proposed in the literature for the detection of malicious network traffic . however , recent works have shown that ml models are vulnerable to adversarial perturbations through which an adversary can cause idss to malfunction by introducing a small imperceptible perturbation in the network traffic . in this paper , we propose an adversarial ml attack using generative adversarial networks ( gans ) that can successfully evade an ml-based ids . we also show that gans can be used to inoculate the ids and make it more robust to adversarial perturbations .
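a hedged sketch of the attack-then-inoculate loop this abstract describes ( the interfaces are hypothetical ; the actual feature masks , generator and ids models are not specified here ) :

```python
import numpy as np

def craft_and_inoculate(generator, ids_fit, ids_predict, x_mal, x_ben, mask):
    # assumed interfaces: generator(x, z) -> perturbations ; mask zeroes out
    # functional features so crafted flows remain valid attacks
    z = np.random.randn(*x_mal.shape).astype(np.float32)
    x_adv = x_mal + generator(x_mal, z) * mask
    evasion_rate = float(np.mean(ids_predict(x_adv) == 0))
    # inoculation: retrain the ids with the crafted flows labeled malicious
    x = np.concatenate([x_ben, x_mal, x_adv])
    y = np.concatenate([np.zeros(len(x_ben)), np.ones(len(x_mal) + len(x_adv))])
    ids_fit(x, y)
    return evasion_rate
```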
story_separator_special_tag generative adversarial networks have been able to generate striking results in various domains . this generation capability can be general while the networks gain deep understanding regarding the data distribution . in many domains , this data distribution consists of anomalies and normal data , with the anomalies commonly occurring relatively less , creating datasets that are imbalanced . the capabilities that generative adversarial networks offer can be leveraged to examine these anomalies and help alleviate the challenge that imbalanced datasets pose by creating synthetic anomalies . this anomaly generation can be especially beneficial in domains that have costly data creation processes as well as inherently imbalanced datasets . one of the domains that fits this description is the host-based intrusion detection domain . in this work , the adfa-ld dataset is chosen as the dataset of interest , containing system calls of small-footprint next generation attacks . the data is first converted into images , and then a cycle-gan is used to create images of anomalous data from images of normal data . the generated data is combined with the original dataset and is used to train a model to detect anomalies . by doing so , it is shown story_separator_special_tag as an important tool in security , the intrusion detection system bears the responsibility of defending the network against attacks performed by malicious traffic . nowadays , with the help of machine learning algorithms , intrusion detection systems develop rapidly . however , the robustness of these systems is questionable when they face adversarial attacks . to improve the detection system , more potential attack approaches are under research . in this paper , a framework based on generative adversarial networks , called idsgan , is proposed to generate adversarial malicious traffic records aiming to attack intrusion detection systems by deceiving and evading detection . given that the internal structure of the detection system is unknown to attackers , the adversarial attack examples perform black-box attacks against the detection system . idsgan leverages a generator to transform original malicious traffic records into adversarial malicious ones . a discriminator classifies traffic examples and learns the black-box detection system . more significantly , to guarantee the validity of the intrusion , only part of the nonfunctional features are modified in attack traffic . based on tests with the nsl-kdd dataset , the feasibility of the model story_separator_special_tag a controller area network ( can ) bus in a vehicle is an efficient standard bus enabling communication between all electronic control units ( ecus ) . however , the can bus is not able to protect itself because of a lack of security features . to detect suspicious network connections effectively , an intrusion detection system ( ids ) is strongly required . unlike traditional idss for the internet , there is only a small number of known attack signatures for vehicle networks . also , an ids for a vehicle requires high accuracy because any false-positive error can seriously affect the safety of the driver . to solve this problem , we propose a novel ids model for in-vehicle networks , gids ( gan based intrusion detection system ) , using the deep-learning model generative adversarial nets . gids can learn to detect unknown attacks using only normal data . as the experimental results show , gids achieves high detection accuracy for four unknown attacks .
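the detect-unknown-attacks-from-normal-data idea can be sketched as thresholding the trained discriminator 's score ( a hedged reading ; the threshold and interface are assumptions , not the gids code ) :

```python
import numpy as np

def flag_anomalies(discriminator, frames, tau=0.5):
    # assumed interface: discriminator returns a high score when an input
    # resembles the normal traffic it was trained on
    scores = np.asarray(discriminator(frames))
    return scores < tau   # boolean mask of suspected intrusions (tau is a guess)
```

because only normal data is needed at training time , attacks never seen before can still be flagged whenever they fall outside the learned normal distribution .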
story_separator_special_tag malicious software is increasingly generated with modifications to exactly the features that detection methods rely on as characteristics . automatic classification of malicious software is efficient because it does not need to store every characteristic . in this paper , we propose a transferred generative adversarial network ( tgan ) for automatic classification and detection of zero-day attacks . since gan training is unstable , often resulting in a generator that produces nonsensical outputs , a method to pre-train the gan with an autoencoder structure is proposed . we analyze the detector , and the performance of the detector is visualized by observing the clustering pattern of malicious software using the t-sne algorithm . the proposed model achieves the best performance compared with conventional machine learning algorithms . story_separator_special_tag understanding and analyzing the radio frequency ( rf ) environment have become indispensable for various autonomous wireless deployments . to this end , machine learning techniques have become popular as they can learn , analyze and even predict the rf signals and associated parameters that characterize an rf environment . however , classical machine learning methods have their limitations and there are situations where such methods become ineffective . one such setting is where active adversaries are present and try to disrupt the rf environment through malicious activities like jamming or spoofing . in this paper we propose an adversarial learning technique for identifying rogue rf transmitters and classifying trusted ones by designing and implementing generative adversarial nets ( gan ) . the gan exploits the in-phase ( i ) and quadrature ( q ) imbalance ( i.e. , the iq imbalance ) present in all transmitters to learn the unique high-dimensional features that can be used as fingerprints for identifying and classifying the transmitters . we implement a generative model that learns the sample space of the iq values of the known transmitters and use the learned representation to generate fake signals that imitate the transmissions of the known transmitters . story_separator_special_tag generative adversarial networks ( gans ) have been successfully used in a large number of domains . this paper proposes the use of gans for generating network traffic in order to mimic other types of traffic . in particular , our method modifies the network behavior of a real malware in order to mimic the traffic of a legitimate application , and therefore avoid detection . by modifying the source code of a malware to receive parameters from a gan , it was possible to adapt the behavior of its command and control ( c2 ) channel to mimic the behavior of facebook chat network traffic . in this way , it was possible to avoid the detection of new-generation intrusion prevention systems that use machine learning and behavioral characteristics . a real-life scenario was successfully implemented using the stratosphere behavioral ips in a router , while the malware and the gan were deployed in the local network of our laboratory , and the c2 server was deployed in the cloud . results show that a gan can successfully modify the traffic of a malware to make it undetectable . the modified malware also tested if it was being blocked story_separator_special_tag machine learning has been used to detect new malware in recent years , while malware authors have strong motivation to attack such algorithms .
malware authors usually have no access to the detailed structures and parameters of the machine learning models used by malware detection systems , and therefore they can only perform black-box attacks . this paper proposes a generative adversarial network ( gan ) based algorithm named malgan to generate adversarial malware examples , which are able to bypass black-box machine learning based detection models . malgan uses a substitute detector to fit the black-box malware detection system . a generative network is trained to minimize the generated adversarial examples ' malicious probabilities predicted by the substitute detector . the superiority of malgan over traditional gradient based adversarial example generation algorithms is that malgan is able to decrease the detection rate to nearly zero and render retraining-based defenses against adversarial examples largely ineffective . story_separator_special_tag in recent years , research on malware detection using machine learning has been attracting wide attention . at the same time , how to evade such detection is also regarded as an emerging topic . in this paper , we focus on the evasion of malware detection based on generative adversarial networks ( gans ) . previous gan-based research uses the same feature quantities for learning malware detection . moreover , existing learning algorithms use multiple malware samples , which affects evasion performance and is not realistic for attackers . to address this issue , we apply differentiated learning methods with different feature quantities and only one malware sample . experimental results show that our method can achieve better performance than existing ones . story_separator_special_tag machine learning ( ml ) models , e.g. , deep neural networks ( dnns ) , are vulnerable to adversarial examples : malicious inputs modified to yield erroneous model outputs , while appearing unmodified to human observers . potential attacks include having malicious content like malware identified as legitimate or controlling vehicle behavior . yet , all existing adversarial example attacks require knowledge of either the model internals or its training data . we introduce the first practical demonstration of an attacker controlling a remotely hosted dnn with no such knowledge . indeed , the only capability of our black-box adversary is to observe labels given by the dnn to chosen inputs . our attack strategy consists in training a local model to substitute for the target dnn , using inputs synthetically generated by an adversary and labeled by the target dnn . we use the local substitute to craft adversarial examples , and find that they are misclassified by the targeted dnn . to perform a real-world and properly-blinded evaluation , we attack a dnn hosted by metamind , an online deep learning api . we find that their dnn misclassifies 84.24 % of the adversarial examples crafted with story_separator_special_tag state-of-the-art password guessing tools , such as hashcat and john the ripper , enable users to check billions of passwords per second against password hashes . in addition to performing straightforward dictionary attacks , these tools can expand password dictionaries using password generation rules , such as concatenation of words ( e.g. , password123456 ) and leet speak ( e.g. , password becomes p4s5w0rd ) . although these rules work well in practice , creating and expanding them to model further passwords is a labor-intensive task that requires specialized expertise .
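a minimal sketch of the malgan idea described above, assuming binary api-call feature vectors: the generator may only switch features on (so the malware keeps its functionality), and the training signal comes from a substitute detector that would be fit to the black-box detector's labels (that fitting loop is omitted here). all sizes are illustrative.

```python
# malgan-style sketch over binary feature vectors: the generator can
# only add features (clamped or with the original), and is trained to
# drive the substitute detector's "malicious" prediction toward zero.
import torch
import torch.nn as nn

N_FEAT, N_NOISE = 128, 16

G = nn.Sequential(nn.Linear(N_FEAT + N_NOISE, 256), nn.ReLU(),
                  nn.Linear(256, N_FEAT), nn.Sigmoid())
substitute = nn.Sequential(nn.Linear(N_FEAT, 256), nn.ReLU(),
                           nn.Linear(256, 1))   # fit to black-box labels

def make_adversarial(x_mal, hard=False):
    """x_mal: (batch, N_FEAT) binary malware features; returns adversarial ones."""
    z = torch.randn(x_mal.size(0), N_NOISE)
    added = G(torch.cat([x_mal, z], dim=1))
    if hard:                                    # discretize at attack time
        added = (added > 0.5).float()
    return torch.clamp(x_mal + added, max=1.0)  # features can only be added

bce = nn.BCEWithLogitsLoss()
opt_g = torch.optim.Adam(G.parameters(), lr=1e-4)

x_mal = torch.randint(0, 2, (32, N_FEAT)).float()
opt_g.zero_grad()
# label 0 = "benign": minimize the predicted malicious probability
loss = bce(substitute(make_adversarial(x_mal)), torch.zeros(32, 1))
loss.backward()
opt_g.step()
```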
story_separator_special_tag current credit card fraud detection methods usually utilize the idea of classification , requiring a balanced training dataset which should contain both positive and negative samples . however , we often get highly skewed datasets with very few frauds . in this paper , we want to apply deep learning techniques to help handle this situation . we first use a sparse autoencoder ( sae ) to obtain representations of normal transactions and then train a generative adversarial network ( gan ) with these representations . finally , we combine the sae and the discriminator of the gan and apply them to detect whether a transaction is genuine or fraudulent . the experimental results show that our solution outperforms other state-of-the-art one-class methods . story_separator_special_tag credit card transactions have become the preferred mode of payment in developed countries and their utility is rapidly growing in developing countries , making fraud an increasingly consequential problem leading to financial losses and erosion of consumer confidence . however , credit card data is highly class-imbalanced , and this makes training models to classify fraud data difficult . this study employs multiple adversarial networks to generate pseudo data to enhance model performance . this study uses the vanilla , least squares , wasserstein , margin adaptive , and relaxed wasserstein variants of gans . the distribution of the generated data against the original fraud data , the classifier accuracy , the convergence of each model , and an optimal number of data generations are analyzed . the generated data is then augmented and tested using an artificial neural network model , and a 12.86 % increase in recall is recorded for a dataset with an initial class imbalance of 579 to 1 . story_separator_special_tag host-based anomaly intrusion detection system design is very challenging due to the notoriously high false alarm rate . this paper introduces a new host-based anomaly intrusion detection methodology using discontiguous system call patterns , in an attempt to increase detection rates whilst reducing false alarm rates . the key concept is to apply a semantic structure to kernel-level system calls in order to reflect intrinsic activities hidden in high-level programming languages , which can help understand program anomaly behaviour . excellent results were demonstrated using a variety of decision engines , evaluating the kdd98 and unm data sets , and a new , modern data set . the adfa linux data set was created as part of this research using a modern operating system and contemporary hacking methods , and is now publicly available . furthermore , the new semantic method possesses an inherent resilience to mimicry attacks , and demonstrated a high level of portability between different operating system versions . story_separator_special_tag the impacts of the i/q imbalance in the quadrature down-converter on the performance of a qpsk-ofdm-qam system are studied . either amplitude or phase imbalance introduces inter-channel interference ( ici ) . in addition to the ici , there is cross-talk between the in-phase and quadrature channels in each and every sub-carrier when both amplitude and phase imbalances are present . the ber ( bit error ratio ) performance of qpsk sub-carriers is also calculated to illustrate the impacts of the i/q imbalance .
it is observed that with an amplitude imbalance of less than 1 db and a phase imbalance of less than 5 degrees , the degradation of ber performance is less than 0.5 db for a ber > 10^-6 . story_separator_special_tag abstract android pattern lock system is a popular form of user authentication extensively used in mobile phones today . however , it is vulnerable to potential security attacks such as shoulder surfing , camera attacks and smudge attacks . this study proposes a new kind of authentication system based on a generative deep neural network that can defend against attacks by any imposter other than the registered user . this network adopts the anomaly detection paradigm , where only normal data is used while training the network . for this purpose , we utilize both generative adversarial networks as an anomaly detector and long short-term memory that processes 1d time-varying signals converted from 2d android patterns . to handle the stability problem of gans during training , a replay buffer , which has been effectively used in deep q-networks , is also utilized . evaluation of the proposed method was carried out thoroughly and the accuracy reached 0.95 in terms of the area under the curve . although training this network requires extensive computing resources , it runs well on a mobile phone since the testing version is very light . further experiments conducted using a group of mobile phone users story_separator_special_tag as online systems based on machine learning are offered to public or paid subscribers via application programming interfaces ( apis ) , they become vulnerable to frequent exploits and attacks . this paper studies adversarial machine learning in the practical case when there are rate limitations on api calls . the adversary launches an exploratory ( inference ) attack by querying the api of an online machine learning system ( in particular , a classifier ) with input data samples , collecting the returned labels to build up the training data , and training an adversarial classifier that is functionally equivalent and statistically close to the target classifier . the exploratory attack with limited training data is shown to fail to reliably infer the target classifier of a real text classifier api that is available online to the public . in return , a generative adversarial network ( gan ) based on deep learning is built to generate synthetic training data from a limited number of real training data samples , thereby extending the training data and improving the performance of the inferred classifier . the exploratory attack provides the basis to launch the causative attack ( that aims to poison story_separator_special_tag we propose the margin adaptation for generative adversarial networks ( magans ) algorithm , a novel training procedure for gans to improve stability and performance by using an adaptive hinge loss function . we estimate the appropriate hinge loss margin with the expected energy of the target distribution , and derive principled criteria for when to update the margin . we prove that our method converges to its global optimum under certain assumptions . evaluated on the task of unsupervised image generation , the proposed training procedure is simple yet robust on a diverse set of data , and achieves qualitative and quantitative improvements compared to the state of the art . story_separator_special_tag abstract generative models , in particular generative adversarial networks ( gans ) , have gained significant attention in recent years .
a number of gan variants have been proposed and have been utilized in many applications . despite large strides in terms of theoretical progress , evaluating and comparing gans remains a daunting task . while several measures have been introduced , as of yet , there is no consensus as to which measure best captures the strengths and limitations of models and should be used for fair model comparison . as in other areas of computer vision and machine learning , it is critical to settle on one or a few good measures to steer the progress in this field . in this paper , i review and critically discuss more than 24 quantitative and 5 qualitative measures for evaluating generative models , with a particular emphasis on gan-derived models . i also provide a set of 7 desiderata , followed by an evaluation of whether a given measure or a family of measures is compatible with them . story_separator_special_tag although generative adversarial networks achieve state-of-the-art results on a variety of generative tasks , they are regarded as highly unstable and prone to miss modes . we argue that these bad behaviors of gans are due to the very particular functional shape of the trained discriminators in high dimensional spaces , which can easily cause training to get stuck or push probability mass in the wrong direction , towards that of higher concentration than that of the data generating distribution . we introduce several ways of regularizing the objective , which can dramatically stabilize the training of gan models . we also show that our regularizers can help the fair distribution of probability mass across the modes of the data generating distribution during the early phases of training , thus providing a unified solution to the missing modes problem . story_separator_special_tag generative adversarial networks ( gans ) excel at creating realistic images with complex models for which maximum likelihood is infeasible . however , the convergence of gan training has still not been proved . we propose a two time-scale update rule ( ttur ) for training gans with stochastic gradient descent on arbitrary gan loss functions . ttur has an individual learning rate for both the discriminator and the generator . using the theory of stochastic approximation , we prove that the ttur converges under mild assumptions to a stationary local nash equilibrium . the convergence carries over to the popular adam optimization , for which we prove that it follows the dynamics of a heavy ball with friction and thus prefers flat minima in the objective landscape . for the evaluation of the performance of gans at image generation , we introduce the " frechet inception distance " ( fid ) which captures the similarity of generated images to real ones better than the inception score . in experiments , ttur improves learning for dcgans and improved wasserstein gans ( wgan-gp ) , outperforming conventional gan training on celeba , cifar-10 , svhn , lsun bedrooms , and the one story_separator_special_tag evaluating generative adversarial networks ( gans ) is inherently challenging . in this paper , we revisit several representative sample-based evaluation metrics for gans , and address the problem of how to evaluate the evaluation metrics . we start with a few necessary conditions for metrics to produce meaningful scores , such as distinguishing real from generated samples , identifying mode dropping and mode collapsing , and detecting overfitting .
with a series of carefully designed experiments , we comprehensively investigate existing sample-based metrics and identify their strengths and limitations in practical settings . based on these results , we observe that the kernel maximum mean discrepancy ( mmd ) and the 1-nearest-neighbor ( 1-nn ) two-sample test seem to satisfy most of the desirable properties , provided that the distances between samples are computed in a suitable feature space . our experiments also unveil interesting properties about the behavior of several popular gan models , such as whether they are memorizing training samples , and how far they are from learning the target distribution . story_separator_special_tag abstract : probabilistic generative models can be used for compression , denoising , inpainting , texture synthesis , semi-supervised learning , unsupervised feature learning , and other tasks . given this wide range of applications , it is not surprising that a lot of heterogeneity exists in the way these models are formulated , trained , and evaluated . as a consequence , direct comparison between models is often difficult . this article reviews mostly known but often underappreciated properties relating to the evaluation and interpretation of generative models , with a focus on image models . in particular , we show that three of the currently most commonly used criteria -- average log-likelihood , parzen window estimates , and visual fidelity of samples -- are largely independent of each other when the data is high-dimensional . good performance with respect to one criterion therefore need not imply good performance with respect to the other criteria . our results show that extrapolation from one criterion to another is not warranted and generative models need to be evaluated directly with respect to the application ( s ) they were intended for . in addition , we provide examples demonstrating that parzen window story_separator_special_tag we propose a framework for analyzing and comparing distributions , which we use to construct statistical tests to determine if two samples are drawn from different distributions . our test statistic is the largest difference in expectations over functions in the unit ball of a reproducing kernel hilbert space ( rkhs ) , and is called the maximum mean discrepancy ( mmd ) . we present two distribution-free tests based on large deviation bounds for the mmd , and a third test based on the asymptotic distribution of this statistic . the mmd can be computed in quadratic time , although efficient linear-time approximations are available . our statistic is an instance of an integral probability metric , and various classical metrics on distributions are obtained when alternative function classes are used in place of an rkhs . we apply our two-sample tests to a variety of problems , including attribute matching for databases using the hungarian marriage method , where they perform strongly . excellent performance is also obtained when comparing distributions over graphs , for which these are the first such tests . story_separator_special_tag detecting users in an indoor environment based on wi-fi signal strength has a wide domain of applications . this can be used for objectives like locating users in smart home systems , locating criminals in bounded regions , obtaining the count of users on an access point , etc .
the paper develops an optimized model that could be deployed in monitoring and tracking devices used for locating users based on the wi-fi signal strength they receive in their personal devices . here , we procure data of signal strengths from various routers , map them to the user 's location and consider this mapping as a classification problem . we train a neural network using the weights obtained by the proposed fuzzy hybrid of particle swarm optimization & gravitational search algorithm ( fpsogsa ) , an optimization strategy that results in better accuracy of the model . story_separator_special_tag generative adversarial networks ( gans ) are a learning framework that relies on training a discriminator to estimate a measure of difference between a target and generated distributions . gans , as normally formulated , rely on the generated samples being completely differentiable w.r.t . the generative parameters , and thus do not work for discrete data . we introduce a method for training gans with discrete data that uses the estimated difference measure from the discriminator to compute importance weights for generated samples , thus providing a policy gradient for training the generator . the importance weights have a strong connection to the decision boundary of the discriminator , and we call our method boundary-seeking gans ( bgans ) . we demonstrate the effectiveness of the proposed algorithm with discrete image and character-based natural language generation . in addition , the boundary-seeking objective extends to continuous data , which can be used to improve the stability of training , and we demonstrate this on celeba , large-scale scene understanding ( lsun ) bedrooms , and imagenet without conditioning . story_separator_special_tag as a new way of training generative models , the generative adversarial net ( gan ) , which uses a discriminative model to guide the training of the generative model , has enjoyed considerable success in generating real-valued data . however , it has limitations when the goal is to generate sequences of discrete tokens . a major reason lies in that the discrete outputs from the generative model make it difficult to pass the gradient update from the discriminative model to the generative model . also , the discriminative model can only assess a complete sequence , while for a partially generated sequence , it is nontrivial to balance its current score and the future one once the entire sequence has been generated . in this paper , we propose a sequence generation framework , called seqgan , to solve these problems . modeling the data generator as a stochastic policy in reinforcement learning ( rl ) , seqgan bypasses the generator differentiation problem by directly performing gradient policy update . the rl reward signal comes from the gan discriminator judged on a complete sequence , and is passed back to the intermediate state-action steps using monte carlo search . extensive experiments story_separator_special_tag consider learning a policy from example expert behavior , without interaction with the expert or access to a reinforcement signal . one approach is to recover the expert 's cost function with inverse reinforcement learning , then extract a policy from that cost function with reinforcement learning . this approach is indirect and can be slow . we propose a new general framework for directly extracting a policy from data , as if it were obtained by reinforcement learning following inverse reinforcement learning .
we show that a certain instantiation of our framework draws an analogy between imitation learning and generative adversarial networks , from which we derive a model-free imitation learning algorithm that obtains significant performance gains over existing model-free methods in imitating complex behaviors in large , high-dimensional environments .
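to make the analogy in the last abstract concrete, the pytorch sketch below shows the discriminator half of a gail-style learner: expert and policy state-action pairs train a classifier, and its log-output is fed back as a surrogate reward to any standard rl policy optimizer (the policy update itself is out of scope). the sign convention, network sizes, and stand-in data are assumptions.

```python
# gail-style discriminator sketch: d(s, a) is trained to tell expert
# pairs from policy pairs; with this convention, sigmoid(d) is the
# probability that a pair came from the policy, so -log sigmoid(d) is
# high when a pair looks expert-like and works as a surrogate reward.
import torch
import torch.nn as nn

S_DIM, A_DIM = 8, 2
D = nn.Sequential(nn.Linear(S_DIM + A_DIM, 64), nn.Tanh(),
                  nn.Linear(64, 1))
bce = nn.BCEWithLogitsLoss()
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)

def d_step(expert_sa, policy_sa):
    """one discriminator update: expert labeled 0, policy labeled 1."""
    opt_d.zero_grad()
    loss = (bce(D(expert_sa), torch.zeros(len(expert_sa), 1)) +
            bce(D(policy_sa), torch.ones(len(policy_sa), 1)))
    loss.backward()
    opt_d.step()
    return loss.item()

def surrogate_reward(sa):
    """reward = -log sigmoid(d(s, a)); feed this to any rl optimizer."""
    with torch.no_grad():
        return -torch.nn.functional.logsigmoid(D(sa))

expert = torch.randn(64, S_DIM + A_DIM)   # stand-in expert demonstrations
rollout = torch.randn(64, S_DIM + A_DIM)  # stand-in policy rollouts
d_step(expert, rollout)
print(surrogate_reward(rollout).mean())
```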
we define an invariant $\nabla_g(m)$ of pairs ( m , g ) , where m is a 3-manifold obtained by surgery on some framed link in the cylinder $s \times i$ , s is a connected surface with at least one boundary component , and g is a fatgraph spine of s. in effect , $\nabla_g$ is the composition with the $\iota_n$ maps of le-murakami-ohtsuki of the link invariant of andersen-mattes-reshetikhin computed relative to choices determined by the fatgraph g ; this provides a basic connection between 2d geometry and 3d quantum topology . for each fixed g , this invariant is shown to be universal for homology cylinders , i.e. , $\nabla_g$ establishes an isomorphism from an appropriate vector space $\bar{h}$ of homology cylinders to a certain algebra of jacobi diagrams . via composition $\nabla_{g'} \circ \nabla_g^{-1}$ for any pair of fatgraph spines g , g ' of s , we derive a representation of the ptolemy groupoid , i.e. , the combinatorial model for the fundamental path groupoid of teichmüller space , as a story_separator_special_tag introduction . we consider an orientable closed surface f of genus p > 1 and pose the question : ( j ) when are two simple curves on f isotopic ? in § 2 we succeed in reducing the answer to this question to that of question ( h ) , which we treated in a previous paper . namely , we prove : two simple curves on f are isotopic if and only if they are homotopic . this shows that the invariants of a simple curve type also completely determine the class of isotopic simple curves contained in it , i.e. , that a class of homotopic simple curves contains only one class of isotopic curves . we extend the validity of these results to systems of finitely many mutually disjoint simple curves by reducing the isotopy of such systems to that of the individual curves of the system . we then use what has been obtained to study the surface itself . we ask : ( a ) when are two topological mappings of f onto itself deformable into one another ? the answer to ( a ) story_separator_special_tag the theory of knot invariants of finite type ( vassiliev invariants ) is described . these invariants turn out to be at least as powerful as the jones polynomial and its numerous generalizations coming from various quantum groups , and it is conjectured that these invariants are precisely as powerful as those polynomials . as invariants of finite type are much easier to define and manipulate than the quantum group invariants , it is likely that in attempting to classify knots , invariants of finite type will play a more fundamental role than the various knot polynomials .
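for the reader's convenience, the finite-type condition invoked in the last abstract can be written out in two lines; this is the standard textbook formulation, not quoted from any particular paper above.

```latex
% extend a knot invariant v to singular knots with transverse double
% points via the vassiliev skein relation, resolving each double point:
\[
   v(K_\times) \;=\; v(K_+) \;-\; v(K_-),
\]
% where K_x has a transverse double point and K_+, K_- are its positive
% and negative resolutions. then v is of finite type (type n) if the
% extension vanishes on every knot with more than n double points:
\[
   v\bigl(K^{(m)}\bigr) = 0 \qquad \text{whenever } m > n,
\]
% with K^{(m)} denoting a knot with m double points.
```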
story_separator_special_tag abstract we make a systematic study of filtrations of a free group f defined as products of powers of the lower central series of f. under some assumptions on the exponents , we characterize these filtrations in terms of the group algebra , the magnus algebra of non-commutative power series , and linear representations by upper-triangular unipotent matrices . these characterizations generalize classical results of grün , magnus , witt , and zassenhaus from the 1930s , as well as later results on the lower p-central filtration and the p-zassenhaus filtrations . we derive alternative recursive definitions of such filtrations , extending results of lazard . finally , we relate these filtrations to massey products in group cohomology . story_separator_special_tag in this article we give an overview of the proof of a conjecture of f. oort that every prime-to-p hecke orbit in the moduli space $a_g$ of principally polarized abelian varieties over $f_p$ is dense in the leaf containing it . see conjecture 4.1 for a precise statement , definition 2.1 for the definition of hecke orbits , and definition 3.1 for the definition of a leaf . roughly speaking , a leaf is the locus in $a_g$ consisting of all points s such that the principally quasi-polarized barsotti-tate group attached to s belongs to a fixed isomorphism class , while the prime-to-p hecke orbit of a closed point x consists of all closed points y such that there exists a prime-to-p quasi-isogeny from $a_x$ to $a_y$ which preserves the polarizations . here $( a_x , \lambda_x )$ , $( a_y , \lambda_y )$ denote the principally polarized abelian varieties attached to x , y respectively ; a prime-to-p quasi-isogeny is the composition of a prime-to-p isogeny with the inverse of a prime-to-p isogeny . story_separator_special_tag contents : 1 getting started ; 1.1 braids in the natural world ; 1.2 braids in mathematics ; 1.2.1 braid patterns ; 1.2.2 multiplying fractions and multiplying braid patterns ; 1.2.3 the integer 1 and the identity braid pattern i ; 1.2.4 inverses of fractions and inverses of braid patterns ; 1.2.5 the associative law for fractions and for braid patterns ; 1.2.6 the group $b_n$ of n-braids ; 1.3 looking ahead . story_separator_special_tag let $\mathcal{h}(n)$ be the group of orientation-preserving self-homeomorphisms of a closed oriented surface bd u of genus n , and let $\mathcal{k}(n)$ be the subgroup of those elements which induce the identity on $h_1($ bd u $; z)$ . to each element $h \in \mathcal{h}(n)$ we associate a 3-manifold $m(h)$ which is defined by a heegaard splitting . it is shown that for each $h \in \mathcal{h}(n)$ there is a representation $\rho$ of $\mathcal{k}(n)$ into $z/2z$ such that if $k \in \mathcal{k}(n)$ , then the $\mu$-invariant $\mu(m(h))$ is equal to the $\mu$-invariant $\mu(m(kh))$ if and only if $k \in \ker \rho$ . thus , properties of the 4-manifolds which a given 3-manifold bounds are related to group-theoretical structure in the group of homeomorphisms of a 2-manifold . the kernels of the homomorphisms from $\mathcal{k}(n)$ onto $z/2z$ are studied and are shown to constitute a complete conjugacy class of subgroups of $\mathcal{h}(n)$ . the class has nontrivial finite order . story_separator_special_tag quantization allows physical signals to be processed using digital devices . quantizers are commonly implemented using analog-to-digital converters ( adcs ) , which operate in a serial and scalar manner and are designed to yield an accurate digital representation of the observed signal .
however , in many practical scenarios quantization is part of a system whose task is not to recover the observed signal , but some function of it . recent works have shown that properly designed task-based quantizers , which include pre-quantization analog combining as well as digital processing , can achieve notable gains in recovering linear functions of the observations . in this work we focus on quantization for the task of recovering quadratic functions . our analysis is based on principal inertia components ( pics ) , which form a basis for decomposing the statistical dependence between random quantities . using pics , we identify a practical structure of the pre-quantization mapping for recovering quadratic functions , which allows us to design a task-based quantization system capable of accurately estimating these functions . our numerical study demonstrates that , when using scalar adcs , notable performance gains can be achieved using the proposed design story_separator_special_tag garoufalidis and levine introduced the homology cobordism group of homology cylinders over a surface . this group can be regarded as an enlargement of the mapping class group . using torsion invariants , we show that the abelianization of this group is infinitely generated provided that the first betti number of the surface is positive . in particular , this shows that the group is not perfect . this answers questions of garoufalidis and levine , and goda and sakasai . furthermore , we show that the abelianization of the group has infinite rank for the case that the surface has more than one boundary component . these results also hold for the homology cylinder analogue of the torelli group . story_separator_special_tag lagrangian cobordisms are three-dimensional compact oriented cobordisms between once-punctured surfaces , subject to some homological conditions . we extend the le-murakami-ohtsuki invariant of homology three-spheres to a functor from the category of lagrangian cobordisms to a certain category of jacobi diagrams . we prove some properties of this functorial lmo invariant , including its universality among rational finite-type invariants of lagrangian cobordisms . finally , we apply the lmo functor to the study of homology cylinders from the point of view of their finite-type invariants . story_separator_special_tag we construct a topological quantum field theory associated to the universal finite-type invariant of 3-dimensional manifolds , as a functor from a category of 3-dimensional manifolds with parametrized boundary , satisfying some additional conditions , to an algebraic-combinatorial category . this is built together with its truncations with respect to a natural grading , and we prove that these tqfts are non-degenerate and anomaly-free . the tqft ( s ) induce ( s ) a ( series of ) representation ( s ) of a subgroup $\mathcal{l}_g$ of the mapping class group that contains the torelli group . the n = 1 truncation is a tqft for the casson-walker-lescop invariant . story_separator_special_tag we show that reasonably well behaved 3d and 4d tqfts must contain certain algebraic structures . in 4d , we find both hopf categories and trialgebras . story_separator_special_tag for an oriented integral homology 3-sphere $\sigma$ , a. casson has introduced an integer invariant $\lambda(\sigma)$ that is defined by using the space $\mathcal{r}(\sigma)$ of conjugacy classes of irreducible representations of $\pi_1(\sigma)$ into su ( 2 ) ( see [ 1 ] ) .
this invariant $\lambda(\sigma)$ can be computed from a surgery or heegaard description of $\sigma$ and satisfies $\lambda(\sigma) \equiv \mu(\sigma) \pmod{2}$ , where $\mu(\sigma)$ is the kervaire-milnor-rochlin invariant of $\sigma$ . this powerful new invariant was used to settle an outstanding problem in 3-manifold topology ; namely , showing that if $\sigma$ is a homotopy 3-sphere , then $\mu(\sigma) = 0$ . c. taubes [ 17 ] , utilizing gauge-theoretic considerations , has reinterpreted casson 's invariant in terms of flat connections . refining this approach , a. floer [ 13 ] has recently defined another invariant of $\sigma$ , its ' instanton homology ' , which takes the form of an abelian group $i_*(\sigma)$ with a natural $z_8$-grading that is an enhancement of $\lambda(\sigma)$ story_separator_special_tag the purpose of the present paper is to introduce and explore two surprises that arise when we apply a standard procedure to study the number of finite type invariants of 3-manifolds introduced independently by m. goussarov and k. habiro based on surgery on claspers , y-graphs or clovers [ gu , ha , ggp ] . one surprise is that the upper bounds depend on a bit more than a choice of generators for $h_1$ . a complementary surprise is a curious brane relation ( in two flavors , open and closed ) which shows that the upper bounds are in a certain sense independent of the choice of generators of $h_1$ . story_separator_special_tag a clover is a framed trivalent graph with some additional structure , embedded in a 3-manifold . we define surgery on clovers , generalizing surgery on y-graphs used earlier by the second author to define a new theory of finite-type invariants of 3-manifolds . we give a systematic exposition of a topological calculus of clovers and use it to deduce some important results about the corresponding theory of finite type invariants . in particular , we give a description of the weight systems in terms of uni-trivalent graphs modulo the as and ihx relations , reminiscent of the similar results for links . we then compare several definitions of finite type invariants of homology spheres ( based on surgery on y-graphs , blinks , algebraically split links , and boundary links ) and prove in a self-contained way their equivalence . story_separator_special_tag recently ohtsuki [ oh2 ] , motivated by the notion of finite type knot invariants , introduced the notion of finite type invariants for oriented , integral homology 3-spheres . in the present paper we propose another definition of finite type invariants of integral homology 3-spheres and give equivalent reformulations of our notion . we show that our invariants form a filtered commutative algebra . we compare the two induced filtrations on the vector space on the set of integral homology 3-spheres . as an observation , we discover a new set of restrictions that finite type invariants in the sense of ohtsuki satisfy and give a set of axioms that characterize the casson invariant . finally , we pose a set of questions relating the finite type 3-manifold invariants with the ( vassiliev ) knot invariants . story_separator_special_tag abstract . using the recently developed theory of finite type invariants of integral homology 3-spheres we study the structure of the torelli group of a closed surface . explicitly , we construct ( a ) natural cocycles of the torelli group ( with coefficients in a space of trivalent graphs ) and cohomology classes of the abelianized torelli group ; ( b ) group homomorphisms that detect ( rationally ) the nontriviality of the lower central series of the torelli group .
our results are motivated by the appearance of trivalent graphs in topology and in representation theory and the dual role played by the casson invariant in the theory of finite type invariants of integral homology 3-spheres and in morita 's study [ mo2 , mo3 ] of the structure of the torelli group . our results generalize those of s. morita [ mo2 , mo3 ] and complement the recent calculation , due to r. hain [ ha2 ] , of the i-adic completion of the rational group ring of the torelli group . we also give analogous results for two other subgroups of the mapping class group . story_separator_special_tag we show that the tree-level part of a recent theory of invariants of 3-manifolds ( due , independently , to goussarov and habiro ) is essentially given by classical algebraic topology in terms of the johnson homomorphism and massey products , for arbitrary 3-manifolds . a key role in our proof is played by the notion of a homology cylinder , viewed as an enlargement of the mapping class group , and an apparently new lie algebra of graphs colored by the first homology of a closed surface , closely related to deformation quantization on a surface as well as to a lie algebra that encodes the symmetries of massey products and the johnson homomorphism . in addition , we present a realization theorem for massey products and the johnson homomorphism on homology cylinders . story_separator_special_tag a homology cylinder over a surface consists of a homology cobordism between two copies of the surface and markings of its boundary . the set of isomorphism classes of homology cylinders over a fixed surface has a natural monoid structure , and it is known that this monoid can be seen as an enlargement of the mapping class group of the surface . we now focus on abelian quotients of this monoid . we show that both the monoid of all homology cylinders and that of irreducible homology cylinders are not finitely generated , and moreover they have big abelian quotients . these properties contrast with the fact that the mapping class group is perfect in general . the proof is given by applying sutured floer homology theory to homologically fibered knots studied in a previous paper . story_separator_special_tag a real 3-manifold is a smooth 3-manifold together with an orientation-preserving smooth involution , called a real structure . in this article we study open book decompositions on smooth real 3-manifolds that are compatible with the real structure . we call them real open book decompositions . we show that each real open book carries a real contact structure and that two real contact structures supported by the same real open book decomposition are equivariantly isotopic . we also show that every real contact structure on a closed 3-dimensional real manifold is supported by a real open book . finally , we conjecture that two real open books on a real contact manifold supporting the same real contact structure are related by positive real stabilizations and equivariant isotopy , and that the giroux correspondence applies to real manifolds as well , namely that there is a one-to-one correspondence between the real contact structures on a real 3-manifold up to equivariant contact isotopy and the real open books up to positive real stabilization . meanwhile , we study some examples of real open books and real heegaard decompositions in lens spaces .
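several abstracts above (the torelli-group cocycles, the tree-level goussarov-habiro theory, and the homology cylinder results) lean on the first johnson homomorphism; a standard formulation, recorded here for reference rather than quoted from any of these papers, is the following.

```latex
% let \Sigma_{g,1} be a genus-g surface with one boundary component,
% \pi = \pi_1(\Sigma_{g,1}) (a free group), H = H_1(\Sigma_{g,1};\mathbb{Z}),
% and \pi = \Gamma_1 \supset \Gamma_2 \supset \cdots its lower central
% series. for \varphi in the torelli group, define
\[
  \tau_1(\varphi)\colon H \longrightarrow \Gamma_2/\Gamma_3 \cong \Lambda^2 H,
  \qquad
  \tau_1(\varphi)([x]) \;=\; \varphi(x)\,x^{-1} \bmod \Gamma_3 .
\]
% this is well defined because \varphi acts trivially on
% H = \Gamma_1/\Gamma_2 (and hence on \Gamma_2/\Gamma_3), and \tau_1 is a
% homomorphism whose image, after identifying \mathrm{Hom}(H,\Lambda^2 H)
% with H \otimes \Lambda^2 H via poincaré duality, is the subspace \Lambda^3 H.
```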
story_separator_special_tag we announce a new approach to a definition of finite type invariants and introduce a notion of n-equivalence for 3-manifolds with boundary . for integral homology spheres we state the equivalence of different definitions of finite type invariants . the theory for 3-manifolds is made completely parallel to the corresponding theory for links . we show how to reduce general classification problems of arbitrary 3-manifolds up to n-equivalence to the study of finite type invariants of string links and integer homology spheres . story_separator_special_tag we define two new families of invariants for ( 3-manifold , graph ) pairs which detect the unknot and are additive under connected sum of pairs and ( -1/2 ) -additive under trivalent vertex sum of pairs . the first of these families is closely related to both bridge number and tunnel number . the second of these families is a variation and generalization of gabai 's width for knots in the 3-sphere . we give applications to the tunnel number and higher-genus bridge number of connected sums of knots . story_separator_special_tag we prove homological mirror symmetry for milnor fibers of simple singularities , which are among the log fano cases of conjecture 1.5 in arxiv:1806.04345 . the proof is based on a relation between matrix factorizations and calabi-yau completions . as an application , we give an explicit computation of the symplectic cohomology group of the milnor fiber of a simple singularity in all dimensions . story_separator_special_tag though the study of knots and links in dimension three has been with us for well over a century , progress towards the ultimate goal of their classification has been slow . various methods have been used in their study , ranging from braid theory to the study of the link complement and its fundamental group . braid theory succeeded in the classification of braids ( i.e. , in solving the word and conjugacy problems in the braid groups ) . an equivalence relation generated by markov moves on braids was found , yielding the set of isotopy classes of links ( see [ b ] ) . however , the combinatorics of the markov moves are very difficult , and braid theory has not yet led to the classification of links , although recent work of birman and menasco shows progress in that direction [ bm ] . on the other hand , it has led to polynomial invariants , via the work of jones and others [ j ] . ( recently , witten [ w ] has given a physical interpretation of these polynomials in terms of particle " transmission . " possible relations with the results in story_separator_special_tag we establish several new results about both the ( n ) -solvable filtration of the set of link concordance classes and the ( n ) -solvable filtration of the string link concordance group , $c^m$ . the set of ( n ) -solvable m-component string links is denoted by $f^m_n$ . we first establish a relationship between milnor 's invariants and links , l , with certain restrictions on the 4-manifold bounded by $m_l$ , the zero-framed surgery of $s^3$ on l. using this relationship , we can relate the ( n ) -solvability of a link ( or string link ) with its milnor 's $\bar{\mu}$-invariants . specifically , we show that if a link is ( n ) -solvable , then its milnor 's invariants vanish for lengths up to $2^{n+2} - 1$ . previously , there were no known results about the other half of the filtration , namely $f^m_{n.5}/f^m_{n+1}$ .
we establish the effect of the bing doubling operator on ( n ) -solvability and , using this , we show that $f^m_{n.5}/f^m_{n+1}$ is nontrivial for links ( and string links ) with sufficiently many components . moreover , we show that these quotients contain story_separator_special_tag abstract a formula for computing the milnor ( concordance ) invariants from the kontsevich integral is obtained . the reduced kontsevich integral ( with values in the quotient by all loop diagrams ) is shown to be the universal concordance invariant of finite type . some applications are discussed . story_separator_special_tag let $t_{g,1}$ denote the torelli group of a surface of genus g with 1 boundary component and let $l(t_{g,1})$ denote the associated graded lie algebra of lower central series quotients . using the methods of hain [ h ] , we give a presentation of $l(t_{g,1})$ over the rationals for $g \geq 6$ . story_separator_special_tag we introduce the concept of ' claspers , ' which are surfaces in 3-manifolds with some additional structure on which surgery operations can be performed . using claspers we define for each positive integer k an equivalence relation on links called ' $c_k$-equivalence , ' which is generated by surgery operations of a certain kind called ' $c_k$-moves ' . we prove that two knots in the 3-sphere are $c_{k+1}$-equivalent if and only if they have equal values of vassiliev-goussarov invariants of type k with values in any abelian groups . this result gives a characterization in terms of surgery operations of the information that can be carried by vassiliev-goussarov invariants . in the last section we also describe outlines of some applications of claspers to other fields in 3-dimensional topology . story_separator_special_tag let s be a compact connected oriented surface , whose boundary is connected or empty . a homology cylinder over the surface s is a cobordism between s and itself , homologically equivalent to the cylinder over s. the y-filtration on the monoid of homology cylinders over s is defined by clasper surgery . using a functorial extension of the le-murakami-ohtsuki invariant , we show that the graded lie algebra associated to the y-filtration is isomorphic to the lie algebra of " symplectic jacobi diagrams . " this lie algebra consists of the primitive elements of a certain hopf algebra whose multiplication is a diagrammatic analogue of the moyal-weyl product . the mapping cylinder construction embeds the torelli group into the monoid of homology cylinders , sending the lower central series to the y-filtration . we give a combinatorial description of the graded lie algebra map induced by this embedding , by connecting hain 's infinitesimal presentation of the torelli group to the lie algebra of symplectic jacobi diagrams . this lie algebra map is shown to be injective in degree two , and the question of the injectivity in higher degrees is discussed . story_separator_special_tag we define new bordism and spin bordism invariants of certain subgroups of the mapping class group of a surface . in particular , they are invariants of the johnson filtration of the mapping class group .
the second and third terms of this filtration are the well-known torelli group and johnson subgroup , respectively . we introduce a new representation in terms of spin bordism , and we prove that this single representation contains all of the information given by the johnson homomorphism , the birman-craggs homomorphism , and the morita homomorphism . story_separator_special_tag it is natural to try to place the new polynomial invariants of links in algebraic topology ( e.g . to try to interpret them using homology or homotopy groups ) . however , one can think that these new polynomial invariants are byproducts of a new , more delicate algebraic invariant of 3-manifolds which measures the obstruction to isotopy of links ( which are homotopic ) . we propose such an algebraic invariant based on skein theory introduced by conway ( 1969 ) and developed by giller ( 1982 ) as well as lickorish and millett ( 1987 ) . ( this is the first paper i wrote about skein modules , almost 20 years ago . the recent survey of skein modules is available at this http url ) story_separator_special_tag introduction . in investigating the processes of fused-salt electrolysis , richard lorenz found that certain heavy metals such as lead , cadmium , tin , zinc , etc . , instead of sinking to the bottom in metallic form upon deposition , partly diffuse into the electrolyte in the form of colored clouds or remain suspended in it as fine droplets ( zinc ) . during electrolysis these clouds pass through the molten halides to the anode and are there consumed again by the anodic products . the phenomenon proved to be independent of the electric field insofar as it also occurs when the metals are simply melted together with the salts . if one heats , e.g. , lead under molten lead chloride to 600 degrees , that is , nearly 1000 degrees below the boiling point of lead , it rather suddenly emits reddish-brown clouds which color the originally light-yellow melt red-brown , later dark brown to black . if one lowers the temperature , a fine black mist which looks like finely divided metal sinks to the bottom before the melt solidifies , and the melt takes on a faintly reddish-yellow coloration . lorenz story_separator_special_tag in december 2019 a novel disease ( coronavirus disease 19 , covid-19 ) emerged in the wuhan province of the people 's republic of china . covid-19 is caused by a novel coronavirus ( sars-cov-2 ) thought to have jumped species , from another mammal to humans . a pandemic caused by this virus is running rampant throughout the world . thousands of cases of covid-19 are reported in england and over 10,000 patients have died . whilst there has been progress in managing this disease , it is not clear which factors , besides age , affect the severity and mortality of covid-19 . a recent analysis of covid-19 in italy identified links between air pollution and death rates . here , we explored the correlation between three major air pollutants linked to fossil fuels and sars-cov-2 lethality in england . we compare up-to-date , real-time sars-cov-2 case and death measurements from public databases to air pollution data monitored across over 120 sites in different regions . we found that the levels of some markers of poor air quality , nitrogen oxides and ozone , are associated with covid-19 lethality in different english regions .
we conclude that the story_separator_special_tag introduction . in this paper we study the ( discrete ) group ring of a finitely generated torsion-free nilpotent group over a field of characteristic zero . we show that if $\omega$ is the ideal of the group ring spanned by all elements of the form $g - 1$ , where g is a group element , then the only element belonging to $\omega^n$ for all n is the zero element ( cf . ( 4.3 ) below ) . story_separator_special_tag let $\mathcal{m}$ be the mapping class group of a surface of genus $g \geq 3$ , and $\mathcal{i}$ the subgroup of those classes acting trivially on homology . an infinite set of generators for $\mathcal{i}$ , involving three conjugacy classes , was obtained by powell . in this paper we improve powell 's result to show that $\mathcal{i}$ is generated by a single conjugacy class and that $[ \mathcal{m} , \mathcal{i} ] = \mathcal{i}$ . i. let $m = m_{g,1}$ be an orientable surface of genus $g \geq 3$ with one boundary component . ( we shall frequently refer to the boundary curve as " the hole " . ) let $\mathcal{m} = \mathcal{m}_{g,1}$ be its mapping class group ( that is , homeomorphisms of m which are 1 on the boundary modulo homeomorphisms which are isotopic to 1 by an isotopy which is fixed on the boundary ) , and let $\mathcal{i} = \mathcal{i}_{g,1}$ be the mapping classes of $\mathcal{m}$ which induce the identity map on the homology group $h_1(m , z)$ . the group $\mathcal{i}$ is of specific interest to topologists for a number of reasons . for example story_separator_special_tag let $\mathcal{m}_g$ be the mapping class group of a genus g orientable surface m , and $\mathcal{i}_g$ the subgroup of those maps acting trivially on the homology group $h_1(m , z)$ . birman and craggs produced homomorphisms from $\mathcal{i}_g$ to $z_2$ via the rochlin invariant and raised the question of enumerating them ; in this paper we answer their question . it is shown that the homomorphisms are closely related to the quadratic forms on $h_1(m , z_2)$ which induce the intersection form ; in fact , they are in 1-1 correspondence with those quadratic forms of arf invariant zero . furthermore , the methods give a description of the quotient of $\mathcal{i}_g$ by the intersection of the kernels of all these homomorphisms . it is a $z_2$-vector space isomorphic to a certain space of cubic polynomials over $h_1(m , z_2)$ . the dimension is then computed and found to be $\binom{2g}{3} + \binom{2g}{2}$ . these results are also extended to the case of a surface with one boundary component , and in this situation the linear relations among the various homomorphisms are also determined story_separator_special_tag this is the first of three papers concerning the so-called torelli group . let $m = m_{g,n}$ be a compact orientable surface of genus g having n boundary components and let $\mathcal{m} = \mathcal{m}_{g,n}$ be its mapping class group , that is , the group of orientation preserving diffeomorphisms of m which are 1 on the boundary $\partial m$ modulo isotopies which fix $\partial m$ pointwise . this group is also known to the complex analysts as the teichmüller group or modular group . if n = 0 or 1 , let further $\mathcal{i} = \mathcal{i}_{g,n}$ be the subgroup of $\mathcal{m}$ which acts trivially on $h_1(m , z)$ . the topologists have no traditional name for $\mathcal{i}$ , but the analysts tell me it was known classically and is called the torelli group . several interesting problems and conjectures exist concerning $\mathcal{i}$ . the principal one can be found in kirby 's problem list [ k ] and asks if $\mathcal{i}_g$ is finitely generated . in this first paper we shall answer the question affirmatively for both $\mathcal{i}_{g,0}$ and $\mathcal{i}_{g,1}$ when $g \geq 3$ and shall give a fairly simple set of generators .
two other story_separator_special_tag we generalize the notion of a magnus expansion of a free group in order to extend each of the johnson homomorphisms defined on a decreasing filtration of the torelli group for a surface with one boundary component to the whole of the automorphism group of a free group $aut(f_n)$ . the extended ones are not homomorphisms , but satisfy an infinite sequence of coboundary relations , so that we call them the johnson maps . in this paper we confine ourselves to studying the first and the second relations , which have cohomological consequences about the group $aut(f_n)$ and the mapping class groups for surfaces . the first one means that the first johnson map is a twisted 1-cocycle of the group $aut(f_n)$ . its cohomology class coincides with " the unique elementary particle " of all the morita-mumford classes on the mapping class group for a surface [ ka1 ] [ km1 ] . the second one restricted to the mapping class group is equal to a fundamental relation among twisted morita-mumford classes proposed by garoufalidis and nakamura [ gn ] and established by morita and the author [ km2 ] . this means we give story_separator_special_tag we develop a calculus of surgery data , called bridged links , which involves besides links also pairs of balls that describe one-handle attachments . as opposed to the usual link calculi of kirby and others , this description uses only elementary , local moves ( namely modifications and isolated cancellations ) , and it is valid also on non-simply-connected and disconnected manifolds . in particular , it allows us to give a presentation of a 3-manifold by doing surgery on any other 3-manifold with the same boundary . bridged link presentations on unions of handlebodies are used to give a cerf-theoretical derivation of presentations of 2+1-dimensional cobordism categories in terms of planar ribbon tangles and their composition rules . as an application we give a different , more natural proof of the matveev-polyak presentations of the mapping class group , and , furthermore , find systematically surgery presentations of general mapping tori . we discuss a natural extension of the reshetikhin-turaev invariant to the calculus of bridged links . invariance now follows - as for knot invariants - from simple identifications of the elementary moves with elementary categorial relations for invariances or cointegrals , respectively . hence story_separator_special_tag handlebodies and framed links ; intersection forms ; classification theorems ; spin structures ; $t^3_{lie}$ and ... ; immersing 4-manifolds in $r^6$ ; 3-manifolds , a digression ; bounding 5-manifolds ; $p_1(m) = 3\sigma(m)$ , $\omega_4^{so} = z$ and $\omega_4^{spin} = z$ ; wall 's diffeomorphisms and h-cobordism ; rohlin 's theorem ; casson handles ; freedman 's work ; exotic $r^4$ 's . story_separator_special_tag abstract johnson 's homomorphisms $\tau_k$ of subgroups of the mapping class group of surfaces are defined via the action on the lower central series of the fundamental group . we give some descriptions of $\tau_k$ by using the magnus expansion and thereby give a geometric meaning to it in terms of the massey products on mapping tori of the corresponding mapping classes . story_separator_special_tag some time ago b. feigin , v. retakh and i had tried to understand a remark of j. stasheff [ s1 ] on open string theory and higher associative algebras [ s2 ] .
then i found a strange construction of cohomology classes of mapping class groups using as initial data any differential graded algebra with finite-dimensional cohomology and a kind of poincare duality . story_separator_special_tag we shall describe a program here relating feynman diagrams , topology of manifolds , homotopical algebra , non-commutative geometry and several kinds of topological physics . story_separator_special_tag abstract in this paper we show that the lie algebra associated to the descending central series of a finitely generated group with a single primitive defining relation is a lie algebra with a single defining relation . the proof uses results of [ 1 ] . story_separator_special_tag using finite type invariants ( or vassiliev invariants ) of framed links and the kirby calculus we construct an invariant of closed oriented three-dimensional manifolds with values in a graded hopf algebra of certain kinds of 3-valent graphs ( of feynman diagrams ) . the degree 1 part of the invariant is essentially the casson-lescop-walker invariant of 3-manifolds . a generalization for links in 3-manifolds is also given . the theory of this invariant can be regarded as a mathematically rigorous realization of part of witten 's theory of quantum invariants in [ 33 ] . for a 3-manifold m , a compact lie group g , and an integer k , witten claimed that $ z_k ( m , g ) = \int e^ { 2 \pi \sqrt { -1 } \, k \, cs ( a ) } \mathcal { d } a $ story_separator_special_tag this book presents a new result in 3-dimensional topology . it is well known that any closed oriented 3-manifold can be obtained by surgery on a framed link in $ s^3 $ . in `` global surgery formula for the casson-walker invariant '' , a function f of framed links in $ s^3 $ is described , and it is proven that f consistently defines an invariant , $ \lambda ( l ) $ , of closed oriented 3-manifolds . $ \lambda $ is then expressed in terms of previously known invariants of 3-manifolds . for integral homology spheres , $ \lambda $ is the invariant introduced by casson in 1985 , which allowed him to solve old and famous questions in 3-dimensional topology . $ \lambda $ becomes simpler as the first betti number increases . as an explicit function of alexander polynomials and surgery coefficients of framed links , the function f extends in a natural way to framed links in rational homology spheres . it is proven that f describes the variation of $ \lambda $ under any surgery starting from a rational homology sphere . thus f yields a global surgery formula for the casson invariant . story_separator_special_tag we consider a homological enlargement of the mapping class group , defined by homology cylinders over a closed oriented surface ( up to homology cobordism ) . these are important model objects in the recent goussarov-habiro theory of finite-type invariants of 3-manifolds .
we study the structure of this group from several directions : the relative weight filtration of dennis johnson , the finite-type filtration of goussarov-habiro , and the relation to string link concordance . we also consider a new lagrangian filtration of both the mapping class group and the group of homology cylinders . story_separator_special_tag in a previous paper [ homology cylinders : an enlargement of the mapping class group , algebr . geom . topol . 1 ( 2001 ) 243 - 270 ] , a group $ \mathcal { h } _g $ of homology cylinders over the oriented surface of genus g is defined . a filtration of $ \mathcal { h } _g $ is defined , using the goussarov-habiro notion of finite-type . it is erroneously claimed that this filtration essentially coincides with the relative weight filtration . the present note corrects this error and studies the actual relation between the two filtrations . story_separator_special_tag we study the natural map $ \eta $ between a group of binary planar trees whose leaves are labeled by elements of a free abelian group h and a certain group d ( h ) derived from the free lie algebra over h. both of these groups arise in several different topological contexts . the map $ \eta $ is known to be an isomorphism over q , but not over z. we determine its cokernel and attack the conjecture that it is injective . story_separator_special_tag the following question has been posed by bing [ 1 ] : `` which compact , connected 3-manifolds can be obtained from $ s^3 $ as follows : remove a finite collection of mutually exclusive ( but perhaps knotted and linking ) polyhedral tori $ t_1 , t_2 , \ldots , t_n $ from $ s^3 $ , and sew them back . '' this paper answers that question by showing that every closed , connected , orientable 3-manifold is obtainable from $ s^3 $ in the above way . whereas this fact can now be deduced from general theorems of differential topology , the combinatorial proof given here is direct and elementary ; while , in the proof , a study is made of a certain type of homeomorphism of a two dimensional manifold that is of interest in itself . having obtained the above mentioned result on 3-manifolds , it is then easy to deduce the well known result ( theorem 3 ) that the combinatorial cobordism group for orientable 3-manifolds is trivial . story_separator_special_tag one may think of vassiliev invariants ( see [ bn1 ] , [ b ] , [ bl ] , [ ko ] and [ v1,2 ] ) as link invariants with certain nilpotency . from this point of view , it is not surprising that milnor 's link invariants ( [ m1,2 ] ) are of the same nature as vassiliev invariants ( [ bn2 ] , [ l2 ] ) . we give a conceptually clearer proof of this fact here by synthesizing kontsevich 's construction of vassiliev invariants and milnor invariants under the framework of power series expansions . the synthesis carried out in the first four sections of this paper motivates the discussion in section 5 of some questions about infinitesimal structures of discrete groups and the possibility of constructing 3-manifold invariants under the same framework . story_separator_special_tag resume . the `` mapping class group '' ( or diffeotopy group , or modular group ) of a genus g surface is the group of its orientation-preserving self-diffeomorphisms , considered up to isotopy . this finitely presented group acts naturally on mathematical objects attached to the surface : teichmuller space , spaces of flat connections , skein modules .
these actions make it possible to construct finite-dimensional representations of this group , called quantum representations because of their links with quantum field theory and the quantum invariants of knots . this talk discusses some recent results concerning these representations , notably their asymptotic faithfulness ( j. e. andersen , freedman-walker-wang ) . story_separator_special_tag for a certain class of compact oriented 3-manifolds , m. goussarov and k. habiro have conjectured that the information carried by finite-type invariants should be characterized in terms of `` cut-and-paste '' operations defined by the lower central series of the torelli group of a surface . in this paper , we observe that this is a variation of a classical problem in group theory , namely the `` dimension subgroup problem . '' this viewpoint allows us to prove , by purely algebraic methods , an analogue of the goussarov-habiro conjecture for finite-type invariants with values in a fixed field . we deduce that their original conjecture is true at least in a weaker form . story_separator_special_tag let s be a compact connected oriented surface with one boundary component , and let p be the fundamental group of s. the johnson filtration is a decreasing sequence of subgroups of the torelli group of s , whose k-th term consists of the self-homeomorphisms of s that act trivially at the level of the k-th nilpotent quotient of p. morita defined a homomorphism from the k-th term of the johnson filtration to the third homology group of the k-th nilpotent quotient of p. in this paper , we replace groups by their malcev lie algebras and we study the `` infinitesimal '' version of the k-th morita homomorphism , which corresponds to the original version by a canonical isomorphism . we give a diagrammatic description of the k-th infinitesimal morita homomorphism and , given an expansion of the free group p which is `` symplectic '' in some sense , we show how to compute it from kawazumi 's total johnson map . we also consider the diagrammatic representation of the torelli group that we derived from the le-murakami-ohtsuki invariant of 3-manifolds in a previous joint work with cheptea and habiro , and which we call the `` lmo story_separator_special_tag for a compact connected oriented surface $ \sigma $ , we consider homology cylinders over $ \sigma $ : these are homology cobordisms with an extra homological triviality condition . when considered up to $ y_2 $ -equivalence , which is a surgery equivalence relation arising from the goussarov-habiro theory , homology cylinders form an abelian group . in this paper , when $ \sigma $ has one or zero boundary component , we define a surgery map from a certain space of graphs to this group . this map is shown to be an isomorphism , with inverse given by some extensions of the first johnson homomorphism and birman-craggs homomorphisms . story_separator_special_tag let $ \sigma_g $ be a closed oriented surface of genus g and let $ \mathcal { m } _g $ be its mapping class group . namely it is the group of all isotopy classes of orientation preserving diffeomorphisms of $ \sigma_g $ . it is also called the teichmuller modular group because it acts on the teichmuller space $ t_g $ properly discontinuously with the quotient space $ m_g $ : the riemann moduli space of compact riemann surfaces of genus g. a classical theorem of nielsen asserts that $ \mathcal { m } _g $ can be naturally identified with the proper outer automorphism group of $ \pi_1 ( \sigma_g ) $ .
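the nielsen theorem just quoted has a standard symbolic form ; the display below is an editorial gloss ( with $ \sigma_g $ the closed genus-g surface and `` proper '' meaning orientation-preserving ) , not a line from the survey itself :

```latex
% dehn-nielsen : the mapping class group as outer automorphisms
\mathcal{M}_g \;\cong\; \mathrm{Out}^{+}\big( \pi_1 ( \Sigma_g ) \big)
```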
thus the mapping class group appears in diverse branches of mathematics and has been investigated from various points of view . in this article we would like to describe some of the recent progress in the topological aspects of the theory of the mapping class group . more precisely we will concern ourselves with the following three topics which are all of cohomological nature and mutually closely related : a brief review of some of the known results about the cohomology of $ \mathcal { m } _g $ , in particular various properties of some canonical cohomology classes of $ \mathcal { m } _g $ , called the characteristic classes of surface bundles ( story_separator_special_tag we define an $ sl_2 ( \mathbb { r } ) $ -casson invariant of closed 3-manifolds . we also observe procedures of computing the invariants in terms of reidemeister torsions . we discuss some approaches to giving the casson invariant some gradings . story_separator_special_tag in this paper , we survey recent works on the structure of the mapping class groups of surfaces mainly from the point of view of topology . we then discuss several possible directions for future research . these include the relation between the structure of the mapping class group and invariants of 3-manifolds , the unstable cohomology of the moduli space of curves and faber 's conjecture , the cokernel of the johnson homomorphisms and the galois as well as other new obstructions , the cohomology of certain infinite dimensional lie algebras and characteristic classes of outer automorphism groups of free groups , and the secondary characteristic classes of surface bundles . we give some experimental results concerning each of them and , partly based on them , we formulate several conjectures and problems . story_separator_special_tag we extend the universal quantum invariant defined in [ 15 ] to an invariant of 3-manifolds with boundaries , and show that the invariant satisfies modified axioms of tqft . story_separator_special_tag we prove that every closed , orientable 3-manifold has an open book decomposition with connected binding . we then give some applications of this result . story_separator_special_tag abstract we prove that for each positive integer n , the $ v_n $ -equivalence classes of ribbon knot types form a subgroup $ r_n $ , of index two , of the free abelian group $ v_n $ constructed by the author and stanford . as a corollary , any non-ribbon knot whose arf invariant is trivial can not be distinguished from ribbon knots by finitely many independent vassiliev invariants . furthermore , except the arf invariant , all non-trivial additive knot cobordism invariants are not of finite type . we prove a few more consequences about the relationship between knot cobordism and $ v_n $ -equivalence of knots . as a by-product , we prove that the number of independent vassiliev invariants of order n is bounded above by $ ( n - 2 ) ! / 2 $ if n > 5 , improving the previously known upper bound of $ ( n - 1 ) ! $ . story_separator_special_tag in super-symmetric quantum theory , or in string theory ( including generalizations of these theories to underlying quantum spaces ) , we study a certain partition function z ( q , a , g ) . here q denotes a supercharge , a denotes an observable with the property a^2 = i , and g denotes an element of a symmetry group of q. the supercharge may depend on a parameter lambda , namely q = q ( lambda ) . we give an elementary argument to show that z , as defined , does not actually depend on lambda .
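the bound stated at the end of the ribbon-knot entry above can be displayed as an inequality . the quotient formulation below is an editorial restatement ( $ v_n $ as in that entry , so that $ v_n / v_ { n-1 } $ counts independent invariants of order exactly n ) :

```latex
% upper bound on independent vassiliev invariants of order n
\dim \big( V_n / V_{n-1} \big) \;\le\; \frac{(n-2)!}{2}
\qquad ( n > 5 ) ,
```

improving the previously known bound of $ ( n - 1 ) ! $ .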
story_separator_special_tag using the fox differential calculus , one defines , for every positive integer k , a map on the mapping class group $ m_ { g,1 } $ of a genus g surface with one boundary component , which coincides with the ( k + 1 ) -st johnson-morita homomorphism when restricted to an appropriate subgroup . this yields , in a very simple way , a homomorphic extension of the second and third johnson-morita homomorphisms to the whole group $ m_ { g,1 } $ . story_separator_special_tag sullivan model and rationalization of a non-simply connected space.- homotopy lie algebra of a space and fundamental group of the rationalization.- model of a fibration.- holonomy operation in a fibration.- malcev completion of a group and examples.- lusternik-schnirelmann category.- depth of a sullivan lie algebra.- growth of rational homotopy groups.- structure of rational homotopy lie algebras.- weighted lie algebras story_separator_special_tag in this paper we show that if the group ring kg of a group g over a field k is filtered by the powers of its augmentation ideal , then the associated graded ring is isomorphic to the universal enveloping algebra of the p-lie algebra $ \mathrm { gr } \, g \otimes_ { \mathbb { z } } k $ , where $ \mathrm { gr } \, g $ is the graded p-lie algebra associated to the p-lower central series of g and where p is the characteristic exponent of k. the proof uses ideas of lazard 's thesis [ i ] . story_separator_special_tag we give a dehn-nielsen type theorem for the homology cobordism group of homology cylinders by considering its action on the acyclic closure , which was defined by levine , of a free group . then we construct an additive invariant of those homology cylinders which act on the acyclic closure trivially . we also describe some tools to study the automorphism group of the acyclic closure of a free group , generalizing those for the automorphism group of a free group or the homology cobordism group of homology cylinders . story_separator_special_tag we investigate the cohomology rings of regular semisimple hessenberg varieties whose hessenberg functions are of the form $ h = ( h ( 1 ) , n , \dots , n ) $ in lie type $ a_ { n-1 } $ . the main result of this paper gives an explicit presentation of the cohomology rings in terms of generators and their relations . our presentation naturally specializes to borel 's presentation of the cohomology ring of the flag variety and it is compatible with the representation of the symmetric group $ \mathfrak { s } _n $ on the cohomology constructed by j. tymoczko . as a corollary , we also give an explicit presentation of the $ \mathfrak { s } _n $ -invariant subring of the cohomology ring . story_separator_special_tag it was recently observed by novikov that if two compact oriented 4k-manifolds are glued by a diffeomorphism ( reversing orientation ) of their boundaries , then the signature of their union is the sum of their signatures . the proof is given in [ 1 , 7.1 ] ; the result has been exploited by jänich [ 2 ] to characterise the signature of closed manifolds . in constructing such manifolds , it is often desirable to consider a more general case of glueing : viz . along a common submanifold , which may itself have boundary $ z^ { 4k-2 } $ , of the boundaries of the original manifolds . additivity still holds if z is empty , or more generally if $ h_ { 2k-1 } ( z ; \mathbb { r } ) $ vanishes , by the same argument as before . however , it does not hold in general .
the simplest counterexample is the hopf bundle ( with fibre $ d^2 $ ) over $ s^2 $ , with signature $ \pm 1 $ depending on the choices of sign : this is the union of the induced bundles over the upper and lower hemispheres of $ s^2 $ , each of which ( being contractible ) has signature zero
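the additivity discussed in the entry above has a standard symbolic statement ; the display is an editorial gloss of what the entry says ( gluing along the full boundaries ) , with the wall phenomenon noted in the comment :

```latex
% novikov additivity : gluing two 4k-manifolds along their whole boundaries
% ( wall : additivity can fail when the gluing is only along a piece of the
%   boundaries with boundary z , e.g. the hopf disk bundle over the 2-sphere )
\sigma \big( M_1 \cup_{\partial} M_2 \big) \;=\; \sigma ( M_1 ) + \sigma ( M_2 )
```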
we study the orbits of g = gl ( v ) in the enhanced nilpotent cone $ v \times \mathcal { n } $ , where $ \mathcal { n } $ is the variety of nilpotent endomorphisms of v. these orbits are parametrized by bipartitions of n = dim v , and we prove that the closure ordering corresponds to a natural partial order on bipartitions . moreover , we prove that the local intersection cohomology of the orbit closures is given by certain bipartition analogues of kostka polynomials , defined by shoji . finally , we make a connection with kato 's exotic nilpotent cone in type c , proving that the closure ordering is the same , and conjecturing that the intersection cohomology is the same but with degrees doubled story_separator_special_tag in this thesis , i relate the rather arithmetic theory of m. rost 's cycle modules to the more geometric theory of v. voevodsky 's homotopy invariant sheaves with transfers . i show precisely that the latter category is a localization of the category of cycle modules . moreover , inspired by the construction of spectra in algebraic topology , i introduce the notion of a homotopy module with transfers , built from homotopy invariant sheaves with transfers . the category formed by these modules is equivalent to the category of cycle modules , thus extending the assertion concerning homotopy sheaves . this allows me to reprove , with the help of m. rost 's results , that homotopy invariant sheaves with transfers have homotopy invariant cohomology , a result already obtained by v. voevodsky . furthermore , i deduce that the category of cycle modules is a grothendieck abelian category , equipped with a monoidal structure for which milnor k-theory is the unit . finally , we show how the techniques employed extend to the category of motives , thereby obtaining formulas which involve the triangles of story_separator_special_tag new high-grade helium discoveries in tanzania d. danabalan , j.g . gluyas , c.g . macpherson , t.h . abraham-james , j.j . bluett , p.h . barry & c.j . ballentine story_separator_special_tag the springer resolution of the set of nilpotent elements in a semisimple lie algebra plays a central role in geometric representation theory . a new structure on this variety has arisen in several representation theoretic constructions , such as the ( local ) geometric langlands duality and modular representation theory . it is also related to some algebro-geometric problems , such as the derived equivalence conjecture and the description of t. bridgeland 's space of stability conditions . the structure can be described as a noncommutative counterpart of the resolution , or as a $ t $ -structure on the derived category of the resolution . the intriguing fact that the same $ t $ -structure appears in these seemingly disparate subjects has strong technical consequences for modular representation theory . story_separator_special_tag abstract let p be a parabolic subgroup of some general linear group gl ( v ) where v is a finite-dimensional vector space over an infinite field .
the group p acts by conjugation on its unipotent radical $ p_u $ and via the adjoint action on $ \mathfrak { p } _u $ , the lie algebra of $ p_u $ . more generally , we consider the action of p on the l-th member of the descending central series of $ \mathfrak { p } _u $ , denoted by $ \mathfrak { p } _u^ { ( l ) } $ . let $ \ell ( \mathfrak { p } _u ) $ denote the nilpotency class of $ p_u $ . in our main result we show that p acts on $ \mathfrak { p } _u^ { ( l ) } $ with a finite number of orbits precisely when $ \ell ( \mathfrak { p } _u ) \le 4 $ for l = 0 , or $ \ell ( \mathfrak { p } _u ) \le 5 + 2l $ for $ l \ge 1 $ . moreover , in case the field is algebraically closed , we consider the modality $ \mathrm { mod } ( p : \mathfrak { p } _u^ { ( l ) } ) $ of the action of p on $ \mathfrak { p } _u^ { ( l ) } $ . we show that $ \mathrm { mod } ( p : \mathfrak { p } _u^ { ( l ) } ) $ grows linearly in the minimal cases which admit infinitely many story_separator_special_tag it is well known that the auslander algebra of any representation finite algebra is quasi-hereditary . we consider the auslander algebra $ a_n $ of $ k [ t ] / ( t^n ) $ ( here , k is a field , t a variable and n a natural number ) . we determine all $ \delta $ -filtered $ a_n $ -modules without self-extensions . they can be described purely combinatorially . given any $ \delta $ -filtered module n , we show that there is ( up to isomorphism ) a unique $ \delta $ -filtered module m without self-extensions which has the same dimension vector . in the case where k is an infinite field , n is a degeneration of this module m. in particular , we see that in this case , the set of $ \delta $ -filtered modules with a fixed dimension vector is the closure of an open orbit ( thus irreducible ) . as observed by hille and röhrle , the problem of describing all $ \delta $ -filtered $ a_n $ -modules is the same as that of describing the conjugacy classes of elements in the unipotent radical of a parabolic subgroup p of gl ( m , k ) under the action of p , thus we recover richardson 's dense orbit theorem in this story_separator_special_tag in [ 7 ] , d. kazhdan and g. lusztig gave a conjecture on the multiplicity of simple modules which appear in a jordan-hölder series of the verma modules . this multiplicity is described in terms of coxeter groups and also by the geometry of schubert cells in the flag manifold ( see [ 8 ] ) . the purpose of this paper is to give the proof of their conjecture . the method employed here is to associate holonomic systems of linear differential equations with r.s . on the flag manifold with verma modules and to use the correspondence of holonomic systems and constructible sheaves . let g be a semi-simple lie group defined over $ \mathbb { c } $ and $ \mathfrak { g } $ its lie algebra . we take a pair ( b , b^- ) of opposed borel subgroups of g , and let $ t = b \cap b^- $ be a maximal torus and w the weyl group . let $ \mathfrak { b } $ , $ \mathfrak { b } ^- $ and $ \mathfrak { t } $ be the corresponding lie algebras and $ \mathfrak { n } $ the nilpotent radical of $ \mathfrak { b } $ . we consider the category of holonomic systems with r.s . on $ x = g/b $ whose characteristic varieties are contained in the union of the conormal bundles of $ x_w = bwb/b $ ( $ w \in w $ ) story_separator_special_tag we give an algebraic construction of standard modules ( infinite-dimensional modules categorifying the poincare-birkhoff-witt basis of the underlying quantized enveloping algebra ) for khovanov-lauda-rouquier algebras in all finite types . this allows us to prove in an elementary way that these algebras satisfy the homological properties of an `` affine quasihereditary algebra . '' in simply laced types these properties were established originally by kato via a geometric approach . we also construct some koszul-like projective resolutions of standard modules corresponding to multiplicity-free positive roots .
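the kazhdan-lusztig conjecture recalled in the entry above asserts a precise multiplicity formula . the display below is the standard formulation supplied editorially ( verma module $ m_y $ , simple module $ l_w $ , kazhdan-lusztig polynomial $ p_ { y , w } $ ; conventions for the parametrization vary across references ) :

```latex
% kazhdan-lusztig conjecture : jordan-hoelder multiplicities of verma modules
[ \, M_y : L_w \, ] \;=\; P_{y,w}(1)
\qquad ( y \le w \ \text{in the bruhat order} ) .
```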
story_separator_special_tag currently , there are many revolutionary technologies in construction , among which a significant place is occupied by construction 3d printing . it attracts an increasing number of researchers and entrepreneurs . however , the creation of effective compositions for this technology is still an urgent issue , as these mixtures must have a number of required characteristics : high plasticity during extrusion and low fluidity after laying the mixture , as well as a high setting speed . the results of research on the preparation of composite binders based on portland cement using a superplasticizer and a hardening accelerator are presented . to reduce the energy intensity and cost of production , wet magnetic separation dropouts of metallurgical production were added to the compositions . optimal dosages of the accelerator additive and superplasticizer when used together were established . a comprehensive study of the samples was performed using x-ray phase analysis and electron microscopy . a two-factor mathematical model of the obtained composite binders is proposed using regression equations and the optimal composition is selected for construction 3d printing . energy-efficient , cost-effective compositions were obtained that have the required characteristics for workability in 3d printing , as well story_separator_special_tag we describe noncommutative desingularizations of determinantal varieties , determinantal varieties defined by minors of generic symmetric matrices , and pfaffian varieties defined by pfaffians of generic anti-symmetric matrices . for maximal minors of square matrices and symmetric matrices , this gives a non-commutative crepant resolution . along the way , we describe a method to calculate the quiver with relations for any non-commutative desingularizations coming from exceptional collections over partial flag varieties . story_separator_special_tag in our paper non-commutative desingularization of determinantal varieties i we constructed and studied non-commutative resolutions of determinantal varieties defined by maximal minors . at the end of the introduction we asserted that the results could be generalized to determinantal varieties defined by non-maximal minors , at least in characteristic zero . in this paper we prove the existence of non-commutative resolutions in the general case in a manner which is still characteristic free . the explicit description of the resolution by generators and relations is deferred to a later paper . as an application of our results we prove that there is a fully faithful embedding between the bounded derived categories of the two canonical ( commutative ) resolutions of a determinantal variety , confirming a well-known conjecture of bondal and orlov in this special case . story_separator_special_tag one of the most remarkable results of this century in mathematics has been the classification completed in 1980 of all the finite simple groups . this took over 20 years and occupies almost 5000 pages in the literature , and it is conceivable that there are some errors there , so the details of classification are not really available to us , but the main results can be summarized . there are 17 families of simple groups , the alternating groups and 16 families of lie type . these in turn are broken up into subfamilies in several different ways . 
there are , first , the historical breakdowns , 6 families of classical groups , the projective special linear groups over finite fields , the orthogonal groups ( three types ) , unitary groups , and the symplectic groups . then there are the 5 families of groups of exceptional types , the groups $ g_2 ( q ) $ , $ f_4 ( q ) $ , $ e_6 ( q ) $ , $ e_7 ( q ) $ , and $ e_8 ( q ) $ , the q denoting the order of the finite field over which they are story_separator_special_tag ( 1.1 ) this paper concerns three aspects of the action of a compact group k on a space x . the first is concrete and the others are rather abstract . ( 1 ) equivariantly formal spaces . these have the property that their cohomology may be computed from the structure of the zero and one dimensional orbits of the action of a maximal torus in k. ( 2 ) koszul duality . this enables one to translate facts about equivariant cohomology into facts about its ordinary cohomology , and back . ( 3 ) equivariant derived category . many of the results in this paper apply not only to equivariant cohomology , but also to equivariant intersection cohomology . the equivariant derived category provides a framework in which both of these may be considered simultaneously , as examples of `` equivariant sheaves '' . we treat singular spaces on an equal footing with nonsingular ones . along the way , we give a description of equivariant homology and equivariant intersection homology in terms of equivariant geometric cycles . most of the themes in this paper have been considered by other authors in some context . in sect . 1.7 story_separator_special_tag considering homological algebra , this text is based on the systematic use of the language and ideas of derived categories and derived functors . relations with standard cohomology theory are described , and in most cases , complete proofs are given . story_separator_special_tag a new class of algebras has been introduced by khovanov and lauda and independently by rouquier . these algebras categorify one-half of the quantum group associated to arbitrary cartan data . in this paper , we use the combinatorics of lyndon words to construct the irreducible representations of those algebras associated to cartan data of finite type . this completes the classification of simple modules for the quiver hecke algebra initiated by kleshchev and ram . story_separator_special_tag in this paper , we provide a general ( functorial ) construction of modules over convolution algebras ( i.e. , where the multiplication is provided by a convolution operation ) starting with an appropriate equivariant derived category . the construction is sufficiently general to be applicable to different situations . one of the main applications is to the construction of modules over the graded hecke algebras associated to complex reductive groups starting with equivariant complexes on the unipotent variety . it also applies to the affine quantum enveloping algebras of type $ a_n $ . as is already known , in each case the algebra can be realized as a convolution algebra . our construction turns suitable equivariant derived categories into an abundant source of modules over such algebras ; most of these are new , in that , so far the only modules have been provided by suitable borel-moore homology or cohomology with respect to a constant sheaf ( or by an appropriate k-theoretic variant .
) in a sequel to this paper we will apply these constructions to equivariant perverse sheaves and also obtain a general multiplicity formula for the simple modules in the composition series of the modules constructed here story_separator_special_tag 0. introduction . the notion of the q-analogue of universal enveloping algebras was introduced independently by v. g. drinfeld and m. jimbo in 1985 in their study of exactly solvable models in statistical mechanics . this algebra $ u_q ( \mathfrak { g } ) $ contains a parameter q , and , when q = 1 , this coincides with the universal enveloping algebra . in the context of exactly solvable models , the parameter q is that of temperature , and q = 0 corresponds to the absolute temperature zero . for that reason , we can expect that the q-analogue has a simple structure at q = 0 . in [ k1 ] we named crystallization the study at q = 0 , and we introduced the notion of crystal bases . roughly speaking , crystal bases are bases of $ u_q ( \mathfrak { g } ) $ -modules at q = 0 that satisfy certain axioms . there , we proved the existence and the uniqueness of crystal bases of finite-dimensional representations of $ u_q ( \mathfrak { g } ) $ when $ \mathfrak { g } $ is one of the classical lie algebras $ a_n $ , $ b_n $ , $ c_n $ and $ d_n $ . k. misra and t. miwa ( [ m ] ) proved the story_separator_special_tag let g = sp ( 2n , c ) be a complex symplectic group . we introduce a $ ( g \times ( \mathbb { c } ^\times ) ^ { l+1 } ) $ -variety $ \mathfrak { n } _l $ , which we call the l-exotic nilpotent cone . then , we realize the hecke algebra h of type $ c_n^ { ( 1 ) } $ with three parameters via equivariant algebraic k-theory in terms of the geometry of $ \mathfrak { n } _2 $ . this enables us to establish a deligne-langlands type classification of simple h-modules under a mild assumption on parameters . as applications , we present a character formula and multiplicity formulas of h-modules story_separator_special_tag let g = sp ( 2n ) be the symplectic group over $ \mathbb { z } $ . we present a certain kind of deformation of the nilpotent cone of g with g-action . this enables us to make direct links between the springer correspondence of $ sp_ { 2n } $ over $ \mathbb { c } $ , that over characteristic two , and our exotic springer correspondence . as a by-product , we obtain a complete description of our exotic springer correspondence . story_separator_special_tag we present simple conditions which guarantee that a geometric extension algebra behaves like a variant of quasi-hereditary algebras . in particular , standard modules of affine hecke algebras of type $ \sf { bc } $ , and the quiver schur algebras are shown to satisfy the brauer-humphreys type reciprocity and the semi-orthogonality property . in addition , we present a new criterion of purity of weights in the geometric side . this yields a proof of shoji 's conjecture on limit symbols of type $ \sf { b } $ [ t. shoji , { \it adv . stud . pure math . } 40 ( 2004 ) ] , and the purity of the exotic springer fibers [ s. kato , { \it duke math . j . } 148 ( 2009 ) ] . using this , we describe the leading terms of the $ c^ { \infty } $ -realization of a solution of the lieb-mcguire system in the appendix . in [ s. kato , { \it duke math . j . } 163 ( 2014 ) ] , we apply the results of this paper to the klr algebras of type $ \sf { story_separator_special_tag let u be a unipotent element in a complex semisimple group g and let $ \mathcal { b } _u $ be the variety of borel subgroups of g which contain u .
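the variety introduced in the last sentence above is the springer fiber of u ; in symbols ( an editorial gloss , with $ \mathcal { b } $ the variety of all borel subgroups of g ) :

```latex
% the springer fiber over the unipotent element u
\mathcal{B}_u \;=\; \{ \, B \in \mathcal{B} \;:\; u \in B \, \}
```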
in [ 4 , 5 ] , springer has defined a representation of w , the weyl group of g , on the homology of $ \mathcal { b } _u $ , and showed how to decompose the w-representation in the top homology of $ \mathcal { b } _u $ so that all irreducible representations of w are obtained . his method used etale cohomology of algebraic varieties in characteristic p. in this paper , we shall give an elementary construction ( independent of etale cohomology ) . he conjectured that the representation in the top homology of $ z $ is the two-sided regular representation of w and showed how this could be used to prove the completeness of the set of w-representations in the top homologies of the varieties $ \mathcal { b } _u $ . story_separator_special_tag 0.2. we are interested in the problem of constructing bases of $ u^+ $ as a $ q ( v ) $ -vector space . one class of bases of $ u^+ $ has been given in [ dl ] . we call them ( or , rather , a slight modification of them , see § 2 ) bases of pbw type , since for v = 1 , they specialize to bases of $ u^+ $ of the type provided by the poincare-birkhoff-witt theorem ( see however § 12 ) . story_separator_special_tag 1. preliminaries 2. a class of perverse sheaves on $ e_v $ 3. multiplication 4. restriction 5. fourier-deligne transform 6. analysis of a sink 7. multiplicative generators 8. compatibility of multiplication with restriction 9. rank 2 10. definition of the canonical basis b of u 11. properties of the canonical basis b of u 12. the variety $ \lambda_v $ 13. singular supports 14. example : graphs of type a , d , e 15. example : graphs of affine type a 16. graphs with a cyclic group action story_separator_special_tag abstract the concept and some basic properties of a twisted hopf algebra are introduced and investigated . its unique difference from a hopf algebra is that the comultiplication $ \delta : a \to a \otimes a $ is an algebra homomorphism , not for the componentwise multiplication on $ a \otimes a $ , but for the twisted multiplication on $ a \otimes a $ given by lusztig 's rule . also , it is proved that any object a in green 's category has a twisted hopf algebra structure , any morphism between objects is a twisted hopf algebra homomorphism , the antipode s of a is self-adjoint under the lusztig form $ ( - , - ) $ on a , and the green polynomials $ m_ { a , b } ( t ) $ share a so-called cyclic-symmetry . as examples , the twisted ringel-hall algebras , ringel 's twisted composition algebras , lusztig 's free algebras f and non-degenerate algebras f , the positive part $ u^+ $ of the drinfeld-jimbo quantized enveloping algebras u , and rosso 's quantum shuffle algebra t ( v ) all are twisted hopf algebras . the antipode and its inverse for a twisted ringel-hall algebra are
story_separator_special_tag a class of desingularizations for orbit closures of representations of dynkin quivers is constructed , which can be viewed as a graded analogue of the springer resolution . a stratification of the singular fibres is introduced ; its geometry and combinatorics are studied . via the hall algebra approach , these constructions relate to bases of quantized enveloping algebras . using ginzburg 's theory of convolution algebras , the base change coefficients of lusztig 's canonical basis are expressed as decomposition numbers of certain convolution algebras . story_separator_special_tag we provide an introduction to the 2-representation theory of kac-moody algebras , starting with basic properties of nil hecke algebras and quiver hecke algebras , and continuing with the resulting monoidal categories , which have a geometric description via quiver varieties , in certain cases . we present basic properties of 2-representations and describe simple 2-representations , via cyclotomic quiver hecke algebras , and through microlocalized quiver varieties . story_separator_special_tag targeting a key enzyme in sars-cov-2 : scientists across the world are working to understand severe acute respiratory syndrome coronavirus 2 ( sars-cov-2 ) , the virus that causes coronavirus disease 2019 ( covid-19 ) . zhang et al . determined the x-ray crystal structure of a key protein in the virus ' life cycle : the main protease . this enzyme cuts the polyproteins translated from viral rna to yield functional viral proteins . the authors also developed a lead compound into a potent inhibitor and obtained a structure with the inhibitor bound , work that may provide a basis for development of anticoronaviral drugs . science , this issue p. 409 . optimized inhibitor of a key enzyme of the novel coronavirus exhibits pronounced lung tropism . the coronavirus disease 2019 ( covid-19 ) pandemic caused by severe acute respiratory syndrome coronavirus 2 ( sars-cov-2 ) is a global health emergency . an attractive drug target among coronaviruses is the main protease ( mpro , also called 3clpro ) because of its essential role in processing the polyproteins that are translated from the viral rna . we report the x-ray structures of the unliganded sars-cov-2 mpro and its complex with story_separator_special_tag european funding under framework 7 ( fp7 ) for the virtual physiological human ( vph ) project has been in place now for nearly 2 years . the vph network of excellence ( noe ) is helping in the development of common standards , open-source software , freely accessible data and model repositories , and various training and dissemination activities for the project . it is also helping to coordinate the many clinically targeted projects that have been funded under the fp7 calls . an initial vision for the vph was defined by the framework 6 strategy for a european physiome ( step ) project in 2006. it is now time to assess the accomplishments of the last 2 years and update the step vision for the vph . we consider the biomedical science , healthcare and information and communications technology challenges facing the project and we propose the vph institute as a means of sustaining the vision of vph beyond the time frame of the noe . story_separator_special_tag in the previous chapter , we investigated properties of -functions . in this chapter , we use them to derive results for function spaces defined by means of -functions .
story_separator_special_tag affine algebraic varieties , affine algebraic groups and their orbits.- first part : jordan decompositions , unipotent and diagonalizable groups.- second part : quotients and solvable groups.- reductive and semisimple algebraic groups , regular and subregular elements . story_separator_special_tag we prove a conjecture of kashiwara and miemietz on canonical bases and branching rules of affine hecke algebras of type d. the proof is similar to the proof of the type b case . story_separator_special_tag we give a variant of the proof of brundan and kleshchev that klr algebras for cyclic quivers and hecke algebras at roots of unity are isomorphic . this new proof constructs a different isomorphism , which has the advantages both of behaving better with respect to deformation of parameters , and of having a more conceptual construction .
we consider open-domain question answering ( qa ) where answers are drawn from either a corpus , a knowledge base ( kb ) , or a combination of both of these . we focus on a setting in which a corpus is supplemented with a large but incomplete kb , and on questions that require non-trivial ( e.g. , multi-hop ) reasoning . we describe pullnet , an integrated framework for ( 1 ) learning what to retrieve and ( 2 ) reasoning with this heterogeneous information to find the best answer . pullnet uses an iterative process to construct a question-specific subgraph that contains information relevant to the question . in each iteration , a graph convolutional network ( graph cnn ) is used to identify subgraph nodes that should be expanded using retrieval ( or pull ) operations on the corpus and/or kb . after the subgraph is complete , another graph cnn is used to extract the answer from the subgraph . this retrieve-and-reason process allows us to answer multi-hop questions using large kbs and corpora . pullnet is weakly supervised , requiring question-answer pairs but not gold inference paths . experimentally pullnet improves over the prior story_separator_special_tag background : few studies have addressed the predictive power of clinical parameters related to caries lesion activity for short term lesion progression . this study assessed the predictive validity and proposed simplified models to predict short term caries progression using clinical parameters related to caries lesion activity status . methods : the occlusal surfaces of primary molars , presenting no frank cavitation , were examined according to the following clinical predictors : colour , luster , cavitation , texture , and clinical depth . after one year , children were re-evaluated using the international caries detection and assessment system to assess caries lesion progression . progression was set as the outcome to be predicted . univariate multilevel poisson models were fitted to test each of the independent variables ( clinical features ) as predictors of short term caries progression . the multimodel inference was made based on the akaike information criterion and the c statistic . afterwards , plausible interactions among some of the variables were tested in the models to evaluate the benefit of combining these variables when assessing caries lesions . results : 205 children ( 750 surfaces ) presented no frank cavitations at the baseline . after one year , story_separator_special_tag training large-scale question answering systems is complicated because training sources usually cover a small portion of the range of possible questions . this paper studies the impact of multitask and transfer learning for simple question answering ; a setting for which the reasoning required to answer is quite easy , as long as one can retrieve the correct evidence given a question , which can be difficult in large-scale conditions . to this end , we introduce a new dataset of 100k questions that we use in conjunction with existing benchmarks . we conduct our study within the framework of memory networks ( weston et al. , 2015 ) because this perspective allows us to eventually scale up to more complex reasoning , and show that memory networks can be successfully trained to achieve excellent performance . story_separator_special_tag knowledge graph question answering aims to automatically answer natural language questions via well-structured relation information between entities stored in knowledge graphs .
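the pullnet entry above describes an iterative retrieve-and-reason loop ; the sketch below is a schematic paraphrase of that loop in python , not the authors ' code . `select_nodes` , `extract_answer` and the two `pull_*` helpers are hypothetical stand-ins for the learned graph-cnn components and retrieval operations named in the abstract :

```python
# schematic pullnet-style retrieve-and-reason loop (editorial sketch).

def pull_kb(kb, node):
    """retrieve kb facts mentioning `node` (stub: plain dictionary lookup)."""
    return kb.get(node, [])

def pull_corpus(corpus, node):
    """retrieve corpus documents linked to `node` (stub: plain dictionary lookup)."""
    return corpus.get(node, [])

def answer(question, seed_entities, kb, corpus, select_nodes, extract_answer, iterations=3):
    subgraph = set(seed_entities)  # question-specific subgraph, seeded with topic entities
    for _ in range(iterations):
        # in the paper a graph cnn scores subgraph nodes; high-scoring ones get expanded
        for node in select_nodes(subgraph, question):
            subgraph.update(pull_kb(kb, node))          # pull operation on the kb
            subgraph.update(pull_corpus(corpus, node))  # pull operation on the corpus
    # a second graph cnn reads the answer off the completed subgraph
    return extract_answer(subgraph, question)
```

a caller would supply the two learned components as callables ; with trivial stubs the loop degenerates to plain breadth-first retrieval .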
when faced with a multi-relation question , existing embedding-based approaches take the whole topic-entity-centric subgraph into account , resulting in high time complexity . meanwhile , due to the high cost for data annotations , it is impractical to exactly show how to answer a complex question step by step , and only the final answer is labeled , as weak supervision . to address these challenges , this paper proposes a neural method based on reinforcement learning , namely stepwise reasoning network , which formulates multi-relation question answering as a sequential decision problem . the proposed model performs effective path search over the knowledge graph to obtain the answer , and leverages beam search to reduce the number of candidates significantly . meanwhile , based on the attention mechanism and neural networks , the policy network can enhance the unique impact of different parts of a given question over triple selection . moreover , to alleviate the delayed and sparse reward problem caused by weak supervision , we propose a potential-based reward shaping strategy , which can story_separator_special_tag in this paper we introduce a novel semantic parsing approach to query freebase in natural language without requiring manual annotations or question-answer pairs . our key insight is to represent natural language via semantic graphs whose topology shares many commonalities with freebase . given this representation , we conceptualize semantic parsing as a graph matching problem . our model converts sentences to semantic graphs using ccg and subsequently grounds them to freebase guided by denotations as a form of weak supervision . evaluation experiments on a subset of the free917 and webquestions benchmark datasets show our semantic parser improves over the state of the art . story_separator_special_tag in this paper , we conduct an empirical investigation of neural query graph ranking approaches for the task of complex question answering over knowledge graphs . we experiment with six different ranking models and propose a novel self-attention based slot matching model which exploits the inherent structure of query graphs , our logical form of choice . our proposed model generally outperforms the other models on two qa datasets over the dbpedia knowledge graph , evaluated in different settings . in addition , we show that transfer learning from the larger of those qa datasets to the smaller dataset yields substantial improvements , effectively offsetting the general lack of training data . story_separator_special_tag abstract in recent years , many knowledge bases have been constructed or populated . these knowledge bases link real-world entities by their relationships on a large scale , serving as good resources to answer factoid questions . to answer a natural language question using a knowledge base , the main task is mapping it to a structured query of the same meaning , whose results from the knowledge base will be used as the question 's answers . this mapping task is non-trivial since different questions can express the same meaning and many queries can arise from a knowledge base . to fulfill the task , an important thing is to model a query 's structure as it conveys a part of the meaning and affects word orders in the question . however , state-of-the-art methods based on deep learning have neglected query structures and focused only on capturing semantic correlations between a question and a simple relation chain .
in this paper , we instead take a query as a tree , and encode the orders of entities and relations into its representations to better distinguish candidate queries of a given question . overall , we first construct candidate story_separator_special_tag standard accuracy metrics indicate that reading comprehension systems are making rapid progress , but the extent to which these systems truly understand language remains unclear . to reward systems with real language understanding abilities , we propose an adversarial evaluation scheme for the stanford question answering dataset ( squad ) . our method tests whether systems can answer questions about paragraphs that contain adversarially inserted sentences , which are automatically generated to distract computer systems without changing the correct answer or misleading humans . in this adversarial setting , the accuracy of sixteen published models drops from an average of 75 % f1 score to 36 % ; when the adversary is allowed to add ungrammatical sequences of words , average accuracy on four models decreases further to 7 % . we hope our insights will motivate the development of new models that understand language more precisely . story_separator_special_tag detecting user intents from utterances is the basis of the natural language understanding ( nlu ) task . to understand the meaning of utterances , some work focuses on fully representing utterances via semantic parsing , in which annotation cost is labor-intensive . while some researchers simply view this as intent classification or frequently asked questions ( faqs ) retrieval , they do not leverage the shared utterances among different intents . we propose a simple and novel multi-point semantic representation framework with relatively low annotation cost to leverage the fine-grained factor information , decomposing queries into four factors , i.e. , topic , predicate , object/condition , query type . besides , we propose a compositional intent bi-attention model under multi-task learning with three kinds of attention mechanisms among queries , labels and factors , which jointly combines coarse-grained intent and fine-grained factor information . extensive experiments show that our framework and model significantly outperform several state-of-the-art approaches with an improvement of 1.35 % -2.47 % in terms of accuracy .
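the stepwise reasoning entry further up mentions a potential-based reward shaping strategy to combat delayed , sparse rewards . the sketch below shows the generic ng-harada-russell form of such shaping in python ; the toy potential `phi` is a hypothetical placeholder , since the excerpt does not spell out the potential function actually used :

```python
# potential-based reward shaping: r' = r + gamma * phi(s') - phi(s).
# shaping of this form densifies sparse rewards without changing the optimal policy.

def shaped_reward(reward, state, next_state, phi, gamma=0.99):
    """augment a sparse environment reward with a potential difference."""
    return reward + gamma * phi(next_state) - phi(state)

# toy potential: fraction of the question's relations matched so far (made up)
def phi(state):
    return state["matched"] / max(state["total"], 1)

r = shaped_reward(0.0, {"matched": 1, "total": 3}, {"matched": 2, "total": 3}, phi)
print(r)  # small positive signal before the final answer reward arrives
```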
abstract background motor vehicle emissions contribute nearly a quarter of the world 's energy-related greenhouse gases and cause non-negligible air pollution , primarily in urban areas . changing people 's travel behaviour towards alternative transport is an efficient approach to mitigate harmful environmental impacts caused by a large number of vehicles . such a strategy also provides an opportunity to gain health co-benefits of improved air quality and enhanced physical activities . this study aimed at quantifying co-benefit effects of alternative transport use in adelaide , south australia . methods we made projections for 2030 under a business-as-usual scenario and alternative transport scenarios . separate models , including air pollution models and comparative risk assessment health models , were developed to link alternative transport scenarios with possible environmental and health benefits . results in the study region with an estimated population of 1.4 million in 2030 , by shifting 40 % of vehicle kilometres travelled ( vkt ) by passenger vehicles to alternative transport , annual average urban pm 2.5 would decline by approximately 0.4 µg/m^3 compared to business-as-usual , resulting in net health benefits of an estimated 13 deaths/year prevented and 118 disability-adjusted life years ( dalys ) prevented per year story_separator_special_tag in this paper , we combine the most complete record of daily mobility , based on large-scale mobile phone data , with detailed geographic information system ( gis ) data , uncovering previously hidden patterns in urban road usage . we find that the major usage of each road segment can be traced to its own - surprisingly few - driver sources . based on this finding we propose a network of road usage by defining a bipartite network framework , demonstrating that in contrast to traditional approaches , which define road importance solely by topological measures , the role of a road segment depends on both : its betweenness and its degree in the road usage network . moreover , our ability to pinpoint the few driver sources contributing to the major traffic flow allows us to create a strategy that achieves a significant reduction of the travel time across the entire road system , compared to a benchmark approach . story_separator_special_tag abstract travel information is one of the factors that contribute to the quality of public transport . in particular , integrated multimodal travel information ( imti ) is expected to affect customers ' modal choice . the objective of this research is to identify customers ' desired quality of imti provision in public transport . customers ' desired imti quality can vary throughout the pre-trip , wayside and on-board stages of a journey . the main determinants are time savings ( travel and search time ) and effort savings ( physical , cognitive , and affective effort ) . in a sample of dutch travellers with a substantial share of young persons , the pre-trip stage turns out to be the favourite stage to collect imti when planning multimodal travel ; desired imti types in this stage are used to plan the part of the journey that is made by public transport . wayside imti is most desired when it helps the traveller to catch the right vehicle en route . on-board travellers are most concerned about timely arrival at interchanges in order to catch connecting modes . in the whole travel process , travel time is the most important saving .
apart story_separator_special_tag this study quantifies the relationship between the perceived and actual waiting times experienced by passengers at a bus stop . understanding such a relationship would be useful in quantifying the value of providing real-time information to passengers on the time until the next bus is expected to arrive at a bus stop . data on perceived and actual passenger waiting times , along with socioeconomic characteristics , were collected at bus stops where no real-time bus arrival information is provided , and relationships between perceived and actual waiting times are estimated . the results indicate that passengers do perceive time to be greater than the actual amount of time waited . however , the hypothesis that the rate of change of perceived time does not vary with respect to the actual waiting time could not be rejected ( over a range of 3 to 15 minutes ) . assuming that a passenger 's perceived waiting time is equal to the actual time when presented with accurate real-time bus arrival information , the value of the eliminated additional time is assessed in the form of reduced vehicle hours per day resulting from a longer headway that produces the same mean passenger story_separator_special_tag abstract in order to attract more choice riders , transit service must not only have a high level of service in terms of frequency and travel time but also must be reliable . although transit agencies continuously work to improve on-time performance , such efforts often come at a substantial cost . one inexpensive way to combat the perception of unreliability from the user perspective is real-time transit information . the onebusaway transit traveler information system provides real-time next bus countdown information for riders of king county metro via website , telephone , text-messaging , and smart phone applications . although previous studies have looked at traveler response to real-time information , few have addressed real-time information via devices other than public display signs . for this study , researchers observed riders arriving at seattle-area bus stops to measure their wait time while asking a series of questions , including how long they perceived that they had waited . the study found that for riders without real-time information , perceived wait time is greater than measured wait time . however , riders using real-time information do not perceive their wait time to be longer than their measured wait time . story_separator_special_tag instantaneous and accurate prediction of bus arrival time can help improve the quality of bus-arrival-time information service , and attract additional ridership . on the basis of bus running processes , a self-adaptive exponential smoothing algorithm is proposed to predict the bus running speed based on the short-term running speeds of taxis and buses available . a bus travel time prediction model is then proposed , in which the delays caused by signal control and by acceleration and deceleration are considered . the research results show that there is a significant linear correlation between the speeds of buses and taxis on the same link during the same time period , and that the overall performance of the radio frequency identification ( rfid ) -data-based model is superior to that of the automatic vehicle location ( avl ) -data-only-based model , regardless of whether the traffic is congested or not .
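the last entry above predicts bus running speed with a self-adaptive exponential smoothing algorithm . the python sketch below shows plain exponential smoothing plus one common self-adaptation rule ( a trigg-leach style tracking signal that sets the smoothing weight from recent forecast errors ) ; that adaptation rule is an illustrative assumption , not necessarily the authors ' exact scheme :

```python
# exponential smoothing of link speeds with an adaptive smoothing weight:
# alpha_t = |smoothed error| / |smoothed absolute error| (trigg-leach style).

def adaptive_smooth(speeds, beta=0.2):
    """return one-step-ahead forecasts for a sequence of observed link speeds."""
    forecast = speeds[0]
    err_s, abs_err_s = 0.0, 1e-9          # smoothed error and smoothed absolute error
    forecasts = [forecast]
    for v in speeds[1:]:
        e = v - forecast
        err_s = beta * e + (1 - beta) * err_s
        abs_err_s = beta * abs(e) + (1 - beta) * abs_err_s
        alpha = min(1.0, abs(err_s) / abs_err_s)   # self-adaptive weight in [0, 1]
        forecast = alpha * v + (1 - alpha) * forecast
        forecasts.append(forecast)
    return forecasts

print(adaptive_smooth([32.0, 30.5, 28.0, 29.5, 31.0]))  # speeds in km/h on one link
```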
story_separator_special_tag abstract public transport users are increasingly expecting better service and up to date information , in pursuit of a seamless journey experience . in order to meet these expectations , many transport operators are already offering free mobile apps to help customers better plan their journeys and access real-time travel information . leveraging the spatio-temporal data that such apps can produce at scale ( i.e . timestamped gps traces ) opens an opportunity to bridge the gap between passenger expectations and capabilities of the operators by providing a real-time 360-degree view of the transport network based on the apps as infrastructure paradigm . the first step towards fulfilling this vision is to understand which routes and services the passengers are travelling on at any given time . mapping a gps trace onto a particular transport network is known as network matching . in this paper , the problem is formulated as a supervised sequence classification task , where sequences are made of geographic coordinates , time , and line and direction of travel as the label . we present and compare two data-driven approaches to this problem : ( i ) a heuristic algorithm , which looks for nearby story_separator_special_tag bus transportation plays an important role in modern society and has been developed in many parts of the world . it reduces private vehicle usage and fuel consumption , and moreover reduces traffic congestion , if the arrival time of the buses is accurate . in this paper , various works in the literature on the prediction of bus arrival time have been surveyed . real time prediction of arrival time is highly valuable for passengers and transport departments . it reduces the waiting time passengers face during trips and gives them the satisfaction of knowing the bus schedule . story_separator_special_tag abstract traffic flow in major urban roads is affected by several factors . it is often interrupted by stochastic conditions , such as traffic lights , road conditions , number of vehicles on the road , time of travel , weather conditions , and the driving styles of vehicles . the provision of timely and accurate travel time information of transit vehicles is valuable for both operators and passengers , especially when dispatching is based on estimation of potential passengers waiting along the route rather than the predefined time schedule . operators manage their dispatches in real time , and passengers can form travel preferences dynamically . arrival time estimation for time scheduled public transport buses has been studied by many researchers using various paradigms . however , dynamic prediction for some types of transit vehicles , which do not follow any dispatch time schedule or stop-station constraints , introduces extra complexities . in this paper , a survey of recent studies in which historical data , statistical methods , kalman filters and artificial neural networks ( ann ) have been applied to gps data collected from transit vehicles is presented , with an emphasis on their model and architecture story_separator_special_tag a precise prediction of transportation time is important to help both passengers to plan their trips and bus operations control to make an effective fleet management . in this study , we make use of gps data from a public transportation bus line to develop a public bus arrival time prediction at any distance along the route .
with large and complex information , a deep neural network ( dnn ) model is used to get high prediction accuracy . in this paper , variables and structures of the proposed dnn model are presented . the performance of the proposed model is evaluated using real bmta-8 bus data in bangkok , thailand , and comparing the result with the currently used ordinary least square ( ols ) regression model . the result shows that the proposed dnn model is more accurate than the ols regression model by around 55 % in terms of mean absolute percentage error ( mape ) . it outperforms the current prediction method of the studied bus line , and it is feasible and applicable for bus travel time prediction of any route . story_separator_special_tag public transport , especially the bus transport , has been well developed in most cities . thus the technique of bus arrival time prediction has become a research hotspot nowadays . it is a very important subject to improve the precision and reliability of the bus arrival time prediction , reduce travelers ' anxieties and waiting times at bus stops . in this paper , we propose easycomeeasygo ( eceg ) , a novel bus arrival time prediction system based on smart phones for passengers who are traveling by bus in real-life scenarios . this study presents an algorithm that uses real-time gps data from the field and takes delays automatically into account for an accurate prediction of bus arrival time . we develop a prototype system with different types of android-based smart phones , and actual gps data from bus route 26 located in dalian , china are used as a test bed . we have extensively evaluated the eceg system with the bus route 26 over a one-month period . our results suggest that the proposed system achieves outstanding prediction accuracy and gains most travelers ' satisfaction . story_separator_special_tag bus headway in a rural area usually is much larger than that in an urban area . providing real-time bus arrival information could make the public transit system more user-friendly and thus enhance its competitiveness among various transportation modes . as part of an operational test for rural traveler information systems currently ongoing in blacksburg , virginia , an experimental study has been conducted on forecasting the arrival time of the next bus with automatic vehicle location techniques . the process of developing arrival time estimation algorithms is discussed , including route representation , global positioning system ( gps ) data screening for identifying data quality and delay patterns , algorithm formulation , and development of measures of performance . whereas gps-based bus location data are adopted in all four algorithms presented , the extent to which other information is used in these algorithms varies . in addition to bus location data , information relevant to the performance of an algorithm includes scheduled arrival time , delay correlation , and waiting time at time-check stops . the performance of an algorithm using different levels of information is compared against three criteria : overall precision , robustness , and stability . story_separator_special_tag the main content of this paper is the prediction algorithm of the time that the campus bus needs to arrive at the position of the passenger . the prediction algorithm uses a method of piecewise prediction . according to the historical average velocity data of each section of the route , the residence time of the campus bus on each section is predicted , and then the total time is predicted .
the historical average velocity data is updated by the exponential smoothing method , which makes the new data account for a higher proportion of the forecast . story_separator_special_tag in recent times , most industries provide transportation facilities for their employees from scheduled pick-up and drop points . in order to reduce long waiting times , it is important to accurately predict the vehicle arrival in real time . this paper proposes a simple , lightweight yet powerful historical data based vehicle arrival time prediction model . unlike previous work , the proposed model uses very limited input features , namely vehicle trajectory and timestamp , considering the scarcity and unavailability of data in developing countries regarding traffic congestion , weather , scheduled arrival time , leg time , dwell time , etc . the authors ' proposed model is evaluated against standard artificial neural network ( ann ) and support vector machine ( svm ) regression models using real bus data from an industry campus at siruseri , chennai , collected over a four-month period . the result shows that the proposed historical data based model can predict approximately two and a half times faster than the ann model and approximately two times faster than the svm model , while it also achieves a comparable accuracy ( 75.56 % ) with respect to the ann model ( 76 story_separator_special_tag this research effort uses avl and apc dynamic data to develop a bus travel time model capable of providing real time information on bus arrival times to passengers , via traveler information services and to transit controllers for the application of proactive control strategies . the developed model is based on two kalman filter algorithms for the prediction of running times and dwell times alternately in an integrated framework . the avl and apc data used were obtained for a specific bus route in downtown toronto . the performance of the developed prediction model was tested using `` hold out '' data and other data from microsimulation representing different scenarios of bus operation along the investigated route using the vissim microsimulation software package . the kalman filter algorithm outperformed all other developed models in terms of accuracy , demonstrating the dynamic ability to update itself based on new data that reflected the changing characteristics of the transit-operating environment . a user-interactive system was developed to provide continuous information on the expected arrival time of buses at downstream stops , hence the expected deviations from schedule . the system enables the user to assess in real time transit stop-based control actions to story_separator_special_tag the emphasis of this research effort was on using automatic vehicle location ( avl ) and automatic passenger counter ( apc ) dynamic data to develop a bus travel time model capable of providing real time information on bus arrival and departure times to passengers and to transit controllers for the application of proactive control strategies . the developed model is comprised of two kalman filter algorithms for the prediction of running times and dwell times alternately in an integrated framework . the avl and apc data used were obtained for a specific bus route in downtown toronto .
the performance of the prediction model was tested using `` hold out '' data and other data from a microsimulation model representing different scenarios of bus operation along the investigated route using the vissim microsimulation software package . the kalman filter-based model outperformed other conventional models in terms of accuracy , demonstrating the dynamic ability to update itself based on new data that reflected the changing characteristics of the transit operating environment . story_separator_special_tag travel time information is a vital component of many intelligent transportation systems ( its ) applications . in recent years , the number of vehicles in india has increased tremendously , leading to severe traffic congestion and pollution in urban areas , particularly during peak periods . a desirable strategy to deal with such issues is to shift more people from personal vehicles to public transport by providing better service ( comfort , convenience and so on ) . in this context , advanced public transportation systems ( apts ) are one of the most important its applications , which can significantly improve the traffic situation in india . one such application will be to provide accurate information about bus arrivals to passengers , leading to reduced waiting times at bus stops . this needs a real-time data collection technique , a quick and reliable prediction technique to calculate the expected travel time based on real-time data and informing the passengers regarding the same . the scope of this study is to use global positioning system data collected from public transportation buses plying on urban roadways in the city of chennai , india , to predict travel times under heterogeneous story_separator_special_tag an algorithm is presented to predict transit vehicle arrival times up to 1 h in advance . it uses the time series of data from an automated vehicle location system , consisting of time and location pairs . these data are used with historical statistics in an optimal filtering framework to predict future arrivals . the algorithm is implemented for a large transit fleet in seattle , washington , and the prediction results for hundreds of locations are made widely available on the web . an evaluation of the second busiest but most complex prediction site is presented to demonstrate the value of prediction over the use of schedules alone . story_separator_special_tag in this paper we present a general prescription for the prediction of transit vehicle arrival/departure . the prescription identifies the set of activities that are necessary to perform the prediction task , and describes each activity in a component based framework . we identify the three components , a tracker , a filter , and a predictor , necessary to use automatic vehicle location ( avl ) data to position a vehicle in space and time and then predict the arrival/departure at a selected location . data , starting as an avl stream , flows through the three components , each component transforms the data , and the end result is a prediction of arrival/departure . the utility of this prescription is that it provides a framework that can be used to describe the steps in any prediction scheme . we describe a kalman filter for the filter component , and we present two examples of algorithms that are implemented in the predictor component . we use these implementations with avl data to create two examples of transit vehicle prediction systems for the cities of seattle and portland .
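the tracker/filter/predictor prescription above invites a compact sketch : a constant-velocity kalman filter smooths avl position fixes , and the predictor linearly extrapolates to a downstream stop . the avl fixes , noise covariances and stop location below are hypothetical , so this is a minimal illustration of the filter and predictor components rather than the seattle or portland systems themselves .

```python
import numpy as np

# Minimal filter + predictor sketch over AVL (distance-along-route) fixes.
# All noise magnitudes and measurements are assumed for illustration.

dt = 30.0                                   # seconds between AVL reports
F = np.array([[1.0, dt], [0.0, 1.0]])       # state: [position_m, speed_mps]
H = np.array([[1.0, 0.0]])                  # we only observe position
Q = np.diag([4.0, 0.25])                    # process noise (assumed)
R = np.array([[100.0]])                     # measurement noise (assumed)

x = np.array([[0.0], [8.0]])                # initial guess: 8 m/s
P = np.eye(2) * 100.0

for z in [240.0, 520.0, 790.0, 1015.0]:     # hypothetical AVL positions (m)
    x, P = F @ x, F @ P @ F.T + Q           # predict step
    y = np.array([[z]]) - H @ x             # innovation
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)          # Kalman gain
    x = x + K @ y
    P = (np.eye(2) - K @ H) @ P

# predictor component: linear extrapolation to the stop
stop_position_m = 1800.0
eta_s = (stop_position_m - x[0, 0]) / max(x[1, 0], 0.1)
print(f"filtered speed {x[1, 0]:.1f} m/s, ETA {eta_s / 60:.1f} min")
```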
story_separator_special_tag congestion has become a serious problem in the context of urban transport around the world . as more and more vehicles are being introduced into the urban streets every year , the mode share of the public transportation sector is declining at an alarming rate . particularly in developing countries , more people have moved to personalized modes since they are becoming easily affordable and the quality of service offered by the public transit is not improving . to attract more people , the public transit should provide a high level of quality service to the passengers . one way of achieving this is by using advanced public transport systems ( apts ) applications such as providing accurate real-time bus arrival information to the passengers , which will improve the service reliability of the public transit . travel time prediction has been a well-renowned topic of research for years . however , studies that are model based and incorporate dwell times at bus stops explicitly for heterogeneous traffic conditions are limited . the present study tries to explicitly incorporate the bus stop delays associated with the total travel times of the buses under heterogeneous traffic conditions . this will help in story_separator_special_tag in this paper , the time series model , autoregressive integrated moving average ( arima ) , is used to predict bus travel time . the arima model is simpler to use for predicting bus travel time from travel time series data ( historic data ) than a regression method , as the factors affecting bus travel time , such as delays at links , bus stops , intersections , etc . , are not available in detail . bus travel time prediction is an important aspect for bus operators in providing timetables for bus operation management and user information . the study aims at finding an appropriate time series model for predicting bus travel time by evaluating the minimum of mean absolute relative error ( mare ) and mean absolute percentage prediction error ( mappe ) . in this case , the data set was collected from the bus service operated on a divided 4-lane 2-way highway in the ipoh-lumut corridor , perak , malaysia . the estimated parameters , appropriate model , and measures of model performance evaluation are presented . the analysis of both ipoh to lumut and lumut to ipoh directions is separately performed . the results show that the predicted travel times by using story_separator_special_tag the objective of this study is to apply artificial neural networks ( ann ) for the development of a bus travel time prediction model . the bus travel time prediction model was developed to give real time bus arrival information to the passengers and transit agencies for applying proactive strategies . for development of the ann model , dwell time , delays and distance between the bus stops were taken as input data . arrival/departure times , delays , average speed between the bus stops and distance between the bus stops were collected for two urban routes in delhi . the model was developed , validated and tested using gps ( global positioning system ) data collected from a field study . a comparative study reveals that the ann model outperformed the regression model in terms of accuracy and robustness . story_separator_special_tag the provision of timely and accurate bus arrival time information is very important . it helps to attract additional ridership and increase the satisfaction of transit users . in this paper , a self-learning prediction algorithm is proposed based on a historical data model .
locations and speeds of the bus are periodically obtained from a gps sensor installed on the bus and stored in a database . historical travel time in all road sections is collected . these historical data are trained using a bp neural network to predict the average speed and arrival time of the road sections . experimental results indicate that the proposed algorithm achieves outstanding prediction accuracy compared with general solutions based on historical travel time . story_separator_special_tag a major component of atis is travel time information . the provision of timely and accurate transit travel time information is important because it attracts additional ridership and increases the satisfaction of transit users . the objectives of this research are to develop and apply a model to predict bus arrival time using automatic vehicle location ( avl ) data . in this research , the travel time prediction model considered schedule adherence and dwell times . actual avl data from a bus route located in houston , texas was used as a test bed . a historical data based model , regression models , and artificial neural network ( ann ) models were used to predict bus arrival time . it was found that ann models outperformed the historical data based model and the regression models in terms of prediction accuracy . story_separator_special_tag abstract most transit agencies are trying to increase their ridership . to achieve this goal , they are looking to maintain or even improve their level of service . this is very hard , since traffic congestion is normally increasing . as a result , bus travel times are higher and less reliable , which makes it harder to predict travel times and avoid bunching . being able to accurately predict bus travel speeds and update this prediction with real-time information could improve the quality and reliability of the information given to users , and increase the effectiveness of control schemes . in this work we implement and compare different machine learning methods ( artificial neural networks , support vector machines and bayes networks ) to predict bus travel speeds using real-time information about traffic conditions . the proposed algorithms are compared against two common approaches used to predict travel speeds . in order to feed our models , we apply traffic shockwaves theory to select our predictors . the input data used in each model was the speed obtained and processed from gps devices installed in each of the buses from transantiago , the public transportation system from santiago , story_separator_special_tag the arrival times of buses are often hard to predict due to variation of real time traffic conditions , deployment schedules and traffic incidents . the provision of timely arrival time information is thus vital for passengers to minimize their waiting time and improve riders ' confidence in the public transportation system , directly promoting more ridership . multiple buses are commonly observed to arrive at a bus stop every hour . in this research , the prediction of estimated time of arrival ( eta ) of buses is translated into a multi-label classification problem . using buses ' historical global positioning system ( gps ) arrival times , neural network ( ann ) models are shown to be reliable solutions for the problem , and ensembles of neural networks are explored for more relevant output . the experimental results demonstrate that 77-78 % of the time , ann models are able to accurately predict the arrival time of buses .
the neural network models are able to outperform the other algorithms ( i.e . decision tree , random forest , naive bayes ) in the classification of multi-label arrival times by up to 8 % based on performance metrics such story_separator_special_tag this letter proposes random neural networks ( rnns ) to randomly train several neural network ( nn ) models as an improvement over traditional nns . moreover , an arrival time prediction method ( atpm ) based on rnns is proposed to predict the stop-to-stop travel time for motor carriers . in experiments , the results showed that the average accuracies of rnns are 94.75 % for highway and 78.22 % for urban road , respectively . furthermore , the accuracies of the proposed atpm are higher than those of previous data mining methods . therefore , the proposed atpm is suitable to predict the stop-to-stop travel time for motor carriers . story_separator_special_tag accurate and real-time travel time information for buses can help passengers better plan their trips and minimize waiting times . a dynamic travel time prediction model for buses addressing the cases on roads with multiple bus routes is proposed in this paper , based on support vector machines ( svms ) and a kalman filtering-based algorithm . in the proposed model , the well-trained svm model predicts the baseline bus travel times from the historical bus trip data ; the kalman filtering-based dynamic algorithm can adjust bus travel times with the latest bus operation information and the estimated baseline travel times . the performance of the proposed dynamic model is validated with real-world data on a road with multiple bus routes in shenzhen , china . the results show that the proposed dynamic model is feasible and applicable for bus travel time prediction and has the best prediction performance among all the five models proposed in the study in terms of prediction accuracy on roads with multiple bus routes . story_separator_special_tag artificial neural networks have been used in a variety of prediction models because of their flexibility in modeling complicated systems . using the automatic passenger counter data collected by new jersey transit , a model based on a neural network was developed to predict bus arrival times . test runs showed that the predicted travel times generated by the models are reasonably close to the actual arrival times . story_separator_special_tag automatic passenger counter ( apc ) systems have been implemented in various public transit systems to obtain bus occupancy along with other information such as location , travel time , etc . such information has great potential as input data for a variety of applications including performance evaluation , operations management , and service planning . in this study , a dynamic model for predicting bus-arrival times is developed using data collected by a real-world apc system . the model consists of two major elements : the first one is an artificial neural network model for predicting bus travel time between time points for a trip occurring at given time-of-day , day-of-week , and weather condition ; the second one is a kalman filter-based dynamic algorithm to adjust the arrival-time prediction using up-to-the-minute bus location information . test runs show that this model is quite powerful in modeling variations in bus-arrival times along the service route .
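several abstracts above share a two-stage pattern : an ann supplies a baseline travel time from trip context , and a kalman-filter-style update corrects it with the latest observation . the sketch below is a minimal version of that pattern ; the training data are synthetic , the blending gain is assumed , and a scalar correction stands in for a full kalman filter .

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Two-stage sketch: ANN baseline + dynamic correction from the latest
# observed travel time. Data, model size and gain are all assumptions.

rng = np.random.default_rng(0)
hour = rng.uniform(6, 22, 500)
dow = rng.integers(0, 7, 500)
# hypothetical ground truth: peak-hour links are slower
y = 300 + 120 * np.exp(-((hour - 8) ** 2) / 4) + 10 * dow + rng.normal(0, 15, 500)
X = np.column_stack([hour, dow])

ann = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
ann.fit(X, y)

baseline = ann.predict([[8.5, 2]])[0]       # ANN baseline for this trip
observed_latest = 455.0                     # most recent measured link time (s)
gain = 0.4                                  # assumed blending gain
adjusted = baseline + gain * (observed_latest - baseline)
print(f"baseline {baseline:.0f} s -> adjusted {adjusted:.0f} s")
```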
story_separator_special_tag transit operations are interrupted frequently by stochastic variations in traffic and ridership conditions that deteriorate schedule or headway adherence and thus lengthen passenger wait times . providing passengers with accurate vehicle arrival information through advanced traveler information systems is vital to reducing wait time . two artificial neural networks ( anns ) , trained by link-based and stop-based data , are applied to predict transit arrival times . to improve prediction accuracy , both are integrated with an adaptive algorithm to adapt to the prediction error in real time . the bus arrival times predicted by the anns are assessed with the microscopic simulation model corsim , which has been calibrated and validated with real-world data collected from route number 39 of the new jersey transit corporation . results show that the enhanced anns outperform the ones without integration of the adaptive algorithm . story_separator_special_tag the prediction of bus travel time is one of the keys to public traffic guidance ; accurate bus arrival time information is vital to passengers for reducing their anxieties and waiting times at the bus stop , or for making reasonable travel arrangements before a trip . research on bus travel time prediction is extensive at home and abroad . this paper proposes a model that combines road traffic state with bus travel to form a bayesian network ; with a large amount of historical data , the parameters of the network can be estimated , and by estimating the real-time traffic state the bus travel time can be predicted . we introduce a markov transfer matrix to forecast the traffic state and substitute the estimated state value into the joint distribution of bus travel time and state , so that the real-time predicted value of bus travel time can be obtained . the bus travel time predicted by the proposed model is assessed with data of transit route 69 in guangzhou between two bus stops ; the results show that the proposed model is feasible , but the accuracy needs to be further improved . story_separator_special_tag public transport information has been a focus of social attention , especially bus arrival time ( bat ) prediction . historical data in combination with real-time data may be used to predict the future travel times of vehicles more accurately , thus improving the experience of the users who rely on such information . in this paper , we expound the correspondence among real-time data , historical data and bat . hence , we propose short distance bat prediction based on real-time traffic conditions and long distance bat prediction based on k nearest neighbors ( knn ) , respectively . furthermore , the original knn matching algorithm is modified twice to accelerate the matching procedure for computationally expensive queries . in empirical studies with real data from buses , the model in this paper outperforms ann or knn used alone both in accuracy and efficiency of the algorithm , with errors of less than 12 percent for a time horizon of 60 minutes . story_separator_special_tag urban mobility impacts urban life to a great extent . to enhance urban mobility , much research has been invested in traveling time prediction : given an origin and destination , provide a passenger with an accurate estimation of how long a journey lasts . in this work , we investigate a novel combination of methods from queueing theory and machine learning in the prediction process .
we propose a prediction engine that , given a scheduled bus journey ( route ) and a 'source/destination ' pair , provides an estimate for the traveling time , while considering both historical data and real-time streams of information that are transmitted by buses . we propose a model that uses natural segmentation of the data according to bus stops and a set of predictors , some of which use learning while others are learning-free , to compute traveling time . our empirical evaluation , using bus data that comes from the bus network in the city of dublin , demonstrates that the snapshot principle , taken from queueing theory , works well yet suffers from outliers . to overcome the outliers problem , we use machine learning techniques as a regulator that assists in identifying story_separator_special_tag in this paper we propose five neural network models for forecasting public transit . these models are evaluated in terms of accuracy and robustness . the research has two major objectives : to identify the best performing machine learning model in predicting bus travel time and to establish a set of methods in order to obtain a detailed dataset ( a variety of practical input values ) which will further result in more accurate predictions . favorably , the final result of this work will be an alternative bus arrival time predicting model , which can help encourage citizens to choose public transportation and consequently to reduce carbon dioxide emissions by decreasing the number of personal vehicles in the traffic . story_separator_special_tag the primary objective of this paper is to develop models to predict bus arrival time at a target stop using actual multi-route bus arrival time data from the previous stop as inputs . in order to mix and fully utilize the multiple routes ' bus arrival time data , the weighted average travel time and three forgetting factor functions ( fffs ) f1 , f2 and f3 are introduced . based on different combinations of input variables , five prediction models are proposed . three widely used algorithms , i.e . support vector machine ( svm ) , artificial neural network ( ann ) and linear regression ( lr ) , are tested to find the best for arrival time prediction . bus location data of 11 road segments from yichun ( china ) , covering 12 bus stops and 16 routes , are collected to evaluate the performance of the proposed approaches . the results show that the newly introduced parameter , the weighted average travel time , can significantly improve the prediction accuracy : the prediction errors reduce by around 20 % . the algorithm comparison demonstrates that the svm and ann outperform the lr . the fffs can also story_separator_special_tag the travel time between bus stops shows clear time-of-day distribution characteristics , and a bus is a typical spatio-temporal process whose operation undergoes state transitions . in order to predict the travel time between bus stations accurately , a support vector machine ( svm ) algorithm is proposed based on measured travel times between bus stations . a large amount of gps data from different time periods is classified into reasonable bins , and an appropriate kernel function is selected and verified . the algorithm is verified by the actual operation data of the no . 6 bus in the qingdao economic and technological development zone .
the results show that the predictions of the support vector machine model are basically in agreement with the actual measured data , the accuracy is relatively high , and the model can be used to predict bus travel time . story_separator_special_tag abstract the transportation literature is rich in the application of neural networks for travel time prediction . the uncertainty prevailing in the operation of transportation systems , however , highly degrades the prediction performance of neural networks . prediction intervals for neural network outcomes can properly represent the uncertainty associated with the predictions . this paper studies an application of the delta technique for the construction of prediction intervals for bus and freeway travel times . the quality of these intervals strongly depends on the neural network structure and a training hyperparameter . a genetic algorithm based method is developed that automates the neural network model selection and adjustment of the hyperparameter . model selection and parameter adjustment is carried out through minimization of a prediction interval-based cost function , which depends on the width and coverage probability of constructed prediction intervals . experiments conducted using the bus and freeway travel time datasets demonstrate the suitability of the proposed method for improving the quality of constructed prediction intervals in terms of their length and coverage probability . story_separator_special_tag bus arrival time prediction is an important part of an intelligent transport system , and the prediction accuracy directly affects the overall level of the intelligent transport system . this paper mainly studies a hybrid bus arrival time prediction model that combines real-time prediction with a support vector machine ( svm ) model . in the svm model , three factors were chosen as inputs : time , weather and holidays . this paper is based on an analysis of the characteristics of the yuxi intelligent transport system , and chooses the support vector machine ( svm ) model as the prediction model , which is adaptive , robust , and well suited to small-sample prediction . at the same time , gps technology is used , which addresses the complexity of road traffic and the limitations of historical data . the model of this paper effectively solves the problem of bus arrival time prediction . story_separator_special_tag providing real-time bus arrival information can help to improve the service quality of a transit system and enhance its competitiveness among other transportation modes . taking the city of jinan , china , as an example , this study proposes two artificial neural network ( ann ) models to predict the real-time bus arrivals , based on historical global positioning system ( gps ) data and automatic fare collection ( afc ) system data . also , to contend with the difficulty in capturing the traffic fluctuations over different time periods and account for the impact of signalized intersections , this study also subdivides the collected dataset into a number of clusters . sub-ann models are then developed for each cluster and further integrated into a hierarchical ann model . to validate the proposed models , six scenarios with respect to different time periods and route lengths are tested . the results reveal that both proposed ann models can outperform the kalman filter model .
particularly , with several selected performance indices , it has been found that the hierarchical ann model clearly outperforms the other two models in most scenarios . story_separator_special_tag predicting arrival times of buses is a key challenge in the context of building intelligent public transportation systems . in this paper , we describe an efficient non-parametric algorithm which provides highly accurate predictions based on real-time gps measurements . the key idea is to use a kernel regression model to represent the dependencies between position updates and the arrival times at bus stops . the performance of the proposed algorithm is evaluated on real data from the public bus transportation system in dublin , ireland . for a time horizon of 50 minutes , the prediction error of the algorithm is less than 10 percent on average . it clearly outperforms parametric methods which use a linear regression model , predictions based on the k-nearest neighbor algorithm , and a system which computes predictions of arrival times based on the current delay of buses . a study investigating the selection of interpolation points to reduce the size of the training set concludes the paper . story_separator_special_tag abstract this paper proposes an approach combining historical data and real-time situation information to forecast the bus arrival time . the approach includes two phases . firstly , a radial basis function neural network ( rbfnn ) model is used to learn and approximate the nonlinear relationship in historical data in the first phase . then , in the second phase , an online oriented method is introduced to adjust to the actual situation , which means using the practical information to modify the predicted result of the rbfnn in the first phase . afterwards , the system design outline is given to summarize the structure and components of the system . we did an experimental study on bus route no.21 in dalian by deploying this system to demonstrate the validity and effectiveness of this approach . in addition , a multiple linear regression model , bp neural networks and the rbfnn without online adjustment are used for comparison . results show that the approach with rbfnn and online adjustment has a better predicting performance . story_separator_special_tag provision of accurate bus arrival information is vital to passengers for reducing their anxieties and waiting times at the bus stop . gps-equipped buses can be regarded as mobile sensors probing traffic flows on road surfaces . in this paper , we present an approach that predicts bus arrival time in terms of the knowledge learned from a large number of historical bus gps trajectories . in our approach , we build a time-dependent path-section graph , where a path-section is a road segment between two adjacent bus stops , to model the properties of dynamic road networks . then , a clustering approach is designed to estimate the distribution of travel time on each path-section in different time slots . finally , bus arrival time is predicted based on the path-section graph and real-time gps information . using a real-world trajectory dataset generated by 1000 buses in a period of 2 months , a bus arrival time prediction system is built . then we evaluate the system with extensive experiments and realistic evaluations . experiments show that our method is close to the actual value and better than some typical algorithms .
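in its simplest form , the path-section idea above reduces to bucketing historical section travel times by time slot and summing a robust per-slot estimate over the sections remaining to the target stop . the records below are invented , and a median stands in for the paper 's clustering-based distribution estimate .

```python
from collections import defaultdict
from statistics import median

# Path-section sketch: bucket historical travel times by (section, slot),
# then sum per-slot estimates over remaining sections. Hypothetical data.

history = [  # (section_id, hour_slot, travel_time_s)
    ("s1-s2", 8, 95), ("s1-s2", 8, 110), ("s1-s2", 14, 70),
    ("s2-s3", 8, 160), ("s2-s3", 8, 150), ("s2-s3", 14, 120),
]
by_key = defaultdict(list)
for section, slot, t in history:
    by_key[(section, slot)].append(t)

def predict_arrival(remaining_sections, slot):
    # the median is a stand-in for the clustering-based estimate
    return sum(median(by_key[(s, slot)]) for s in remaining_sections)

print(predict_arrival(["s1-s2", "s2-s3"], slot=8), "seconds to stop s3")
```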
story_separator_special_tag abstract accurate bus arrival time is fundamental for efficient bus operation and dispatching decisions . this paper proposes a new prediction model based on support vector machine ( svm ) and artificial neural network ( ann ) to predict bus arrival time at an objective stop with multi-routes . the preceding bus arrival times of the objective route and of all other routes passing the same stop , and the travel speed of the target route , are the three inputs of the model . a case study was conducted with data collected on all workdays in october 2014 in zigong , sichuan , china . the results of the proposed model indicate that both svm and ann models have high accuracy , while the ann model is comparatively better than the svm model . the mean absolute percentage errors ( mape ) of prediction are less than 10 % in most cases . for comparison , two groups with inputs changed or removed are set up to demonstrate the suitability of the three inputs . no matter which method is used , svm or ann , the proposed model performs better than the comparison groups . story_separator_special_tag effective prediction of bus arrival times is important to advanced traveler information systems ( atis ) . here a hybrid model , based on support vector machine ( svm ) and kalman filtering technique , is presented to predict bus arrival times . in the model , the svm model predicts the baseline travel times on the basis of historical trip data at given time-of-day , weather conditions , route segment , the travel times on the current segment , and the latest travel times on the predicted segment ; the kalman filtering-based dynamic algorithm uses the latest bus arrival information , together with estimated baseline travel times , to predict arrival times at the next point . the predicted bus arrival times are examined with data from bus no . 7 in a satellite town of dalian in china . results show that the hybrid model proposed in this paper is feasible and applicable in the bus arrival time forecasting area , and generally provides better performance than artificial neural network ( ann ) based methods . story_separator_special_tag abstract provision of accurate bus arrival information is vital to passengers for reducing their anxieties and waiting times at the bus stop . this paper proposes models to predict bus arrival times at the same bus stop but with different routes . in the proposed models , bus running times of multiple routes are used for predicting the bus arrival time of each of these bus routes . several methods , which include support vector machine ( svm ) , artificial neural network ( ann ) , the k nearest neighbours algorithm ( k-nn ) and linear regression ( lr ) , are adopted for the bus arrival time prediction . observation surveys are conducted to collect bus running and arrival time data for validation of the proposed models . the results show that the proposed models are more accurate than the models based on the bus running times of a single route . moreover , it is found that the svm model performs the best among the four proposed models for predicting the bus arrival times at a bus stop with multiple routes . story_separator_special_tag the prediction of bus arrival time is important for passengers who want to determine their departure time and reduce anxiety at bus stops that lack timetables .
the random forest based on near neighbors ( rfnn ) method is proposed in this article to predict bus travel time ; it has been calibrated and validated with real-world data . a case study with two bus routes is conducted , and the proposed rfnn is compared with four methods : linear regression ( lr ) , k-nearest neighbors ( knn ) , support vector machine ( svm ) , and classic random forest ( rf ) . the results indicate that the proposed model achieves high accuracy . that is , the two bus routes have results of 13.65 mean absolute error ( mae ) , 6.90 % mean absolute percentage error ( mape ) and 26.37 root mean squared error ( rmse ) , and 13.77 mae , 7.58 % mape and 29.01 rmse , respectively . rfnn has a longer computation time of 44,301 seconds for a data set with 14,182 records . the proposed method can be optimized by the technology of story_separator_special_tag the ability to obtain accurate predictions of bus arrival time on a real time basis is vital to both bus operations control and passenger information systems . several studies have been devoted to this arrival time prediction problem in many countries ; however , few resulted in completely satisfactory algorithms . this paper presents an effective method that can be used to predict the expected bus arrival time at individual bus stops along a service route . this method is a hybrid scheme that combines a neural network ( nn ) that infers decision rules from historical data with a kalman filter ( kf ) that fuses prediction calculations with current gps measurements . the proposed algorithm relies on real-time location data and takes into account historical travel times as well as temporal and spatial variations of traffic conditions . a case study on a real bus route is conducted to evaluate the performance of the proposed algorithm in terms of prediction accuracy . the results indicate that the system is capable of achieving satisfactory performance and accuracy in predicting bus arrival times for egyptian environments . story_separator_special_tag countless learning tasks require dealing with sequential data . image captioning , speech synthesis , and music generation all require that a model produce outputs that are sequences . in other domains , such as time series prediction , video analysis , and musical information retrieval , a model must learn from inputs that are sequences . interactive tasks , such as translating natural language , engaging in dialogue , and controlling a robot , often demand both capabilities . recurrent neural networks ( rnns ) are connectionist models that capture the dynamics of sequences via cycles in the network of nodes . unlike standard feedforward neural networks , recurrent networks retain a state that can represent information from an arbitrarily long context window . although recurrent neural networks have traditionally been difficult to train , and often contain millions of parameters , recent advances in network architectures , optimization techniques , and parallel computation have enabled successful large-scale learning with them . in recent years , systems based on long short-term memory ( lstm ) and bidirectional ( brnn ) architectures have demonstrated ground-breaking performance on tasks as varied as image captioning , language translation , and handwriting recognition story_separator_special_tag abstract predicting bus arrival times and travel times are crucial elements to make public transport more attractive and reliable .
the present study explores the use of intelligent transportation systems ( its ) to make public transportation systems more attractive by providing timely and accurate travel time information of transit vehicles . however , for such systems to be successful , the prediction should be accurate , which ultimately depends on the prediction method as well as the input data used . in the present study , to identify significant inputs , a data mining technique , namely the k-nn classification algorithm , is used . it is based on the similarity in pattern between the input and historic data . these identified inputs are then used for predicting the travel time using a model-based recursive estimation scheme , based on kalman filtering . the performance is evaluated and compared with methods based on static inputs , to highlight the improved prediction accuracy . story_separator_special_tag background : as more and more researchers are turning to big data for new opportunities of biomedical discoveries , machine learning models , as the backbone of big data analysis , are mentioned more often in biomedical journals . however , owing to the inherent complexity of machine learning methods , they are prone to misuse . because of the flexibility in specifying machine learning models , the results are often insufficiently reported in research articles , hindering reliable assessment of model validity and consistent interpretation of model outputs . objective : to attain a set of guidelines on the use of machine learning predictive models within clinical settings to make sure the models are correctly applied and sufficiently reported so that true discoveries can be distinguished from random coincidence . methods : a multidisciplinary panel of machine learning experts , clinicians , and traditional statisticians were interviewed , using an iterative process in accordance with the delphi method . results : the process produced a set of guidelines that consists of ( 1 ) a list of reporting items to be included in a research article and ( 2 ) a set of practical sequential steps for developing predictive story_separator_special_tag largest replication study to date casts doubt on many published positive results . story_separator_special_tag psychology has historically been concerned , first and foremost , with explaining the causal mechanisms that give rise to behavior . randomized , tightly controlled experiments are enshrined as the gold standard of psychological research , and there are endless investigations of the various mediating and moderating variables that govern various behaviors . we argue that psychology 's near-total focus on explaining the causes of behavior has led much of the field to be populated by research programs that provide intricate theories of psychological mechanism but that have little ( or unknown ) ability to predict future behaviors with any appreciable accuracy . we propose that principles and techniques from the field of machine learning can help psychology become a more predictive science . we review some of the fundamental concepts and tools of machine learning and point out examples where these concepts have been used to conduct interesting and important psychological research that focuses on predictive research questions .
this report is a terminal evaluation of a un environment-gef project implemented between 2011 and 2018. the project 's overall development goal was to mainstream energy efficiency in buildings in east africa , thereby contributing to significantly reduced carbon emissions . the evaluation sought to assess project performance ( in terms of relevance , effectiveness and efficiency ) , and determine outcomes and impacts ( actual and potential ) stemming from the project , including their sustainability . story_separator_special_tag over the past few years , dozens of new techniques have been proposed for more accurate energy disaggregation , but the jury is still out on whether these techniques can actually save energy and , if so , whether higher accuracy translates into higher energy savings . in this paper , we explore both of these questions . first , we develop new techniques that use disaggregated power data to provide actionable feedback to residential users . we evaluate these techniques using power traces from 240 homes and find that they can detect homes that need feedback with as much as 84 % accuracy . second , we evaluate whether existing energy disaggregation techniques provide power traces with sufficient fidelity to support the feedback techniques that we created and whether more accurate disaggregation results translate into more energy savings for the users . results show that feedback accuracy is very low even while disaggregation accuracy is high . these results indicate a need to revisit the metrics by which disaggregation is evaluated . story_separator_special_tag non-intrusive load monitoring ( nilm ) is a popular approach to estimate appliance-level electricity consumption from aggregate consumption data of households . assessing the suitability of nilm algorithms to be used in real scenarios is however still cumbersome , mainly because there exists no standardized evaluation procedure for nilm algorithms and the availability of comprehensive electricity consumption data sets on which to run such a procedure is still limited . this paper contributes to the solution of this problem by : ( 1 ) outlining the key dimensions of the design space of nilm algorithms ; ( 2 ) presenting a novel , comprehensive data set to evaluate the performance of nilm algorithms ; ( 3 ) describing the design and implementation of a framework that significantly eases the evaluation of nilm algorithms using different data sets and parameter configurations ; ( 4 ) demonstrating the use of the presented framework and data set through an extensive performance evaluation of four selected nilm algorithms . both the presented data set and the evaluation framework are made publicly available . story_separator_special_tag to reduce energy demand in households it is useful to know which electrical appliances are in use at what times . monitoring individual appliances is costly and intrusive , whereas data on overall household electricity use is more easily obtained . in this paper , we consider the energy disaggregation problem where a household 's electricity consumption is disaggregated into the component appliances . the factorial hidden markov model ( fhmm ) is a natural model to fit this data . we enhance this generic model by introducing two constraints on the state sequence of the fhmm . the first is to use a non-homogeneous markov chain , modelling how appliance usage varies over the day , and the other is to enforce that at most one chain changes state at each time step . 
this yields a new model which we call the interleaved factorial non-homogeneous hidden markov model ( ifnhmm ) . we evaluated the ability of this model to perform disaggregation in an ultra-low frequency setting , over a data set of 251 english households . in this new setting , the ifnhmm outperforms the fhmm in terms of recovering the energy used by the component appliances story_separator_special_tag providing detailed appliance level energy consumption may lead consumers to understand their usage behavior and encourage them to optimize the energy usage . non-intrusive load monitoring ( nilm ) or energy disaggregation aims to estimate appliance level energy consumption from aggregate consumption data of households . hitherto , proposed nilm algorithms are either centralized or require high performance systems to derive appliance level data , owing to the associated computational complexity . this approach raises several issues related to scalability and privacy of consumers ' data . in this thesis , we present the nilm-loc framework , which utilizes occupancy of users to derive accurate appliance level usage information . the nilm-loc framework limits the appliances considered for disaggregation based on the current location of the occupants . thus , it can provide real-time feedback on appliance level energy consumption and run on an embedded system locally at the household . we propose several accuracy metrics to study the performance of nilm-loc . to test its robustness , we empirically evaluated it across multiple publicly available datasets . nilm-loc has significantly higher energy disaggregation accuracy while exponentially reducing the computational complexity . nilm-loc presents accuracy improvements up to 30 % story_separator_special_tag this article surveys existing and emerging disaggregation techniques for energy-consumption data and highlights signal features that might be used to sense disaggregated data in an easily installed and cost-effective manner . story_separator_special_tag a nonintrusive appliance load monitor that determines the energy consumption of individual appliances turning on and off in an electric load , based on detailed analysis of the current and voltage of the total load , as measured at the interface to the power source , is described . the theory and current practice of nonintrusive appliance load monitoring are discussed , including goals , applications , load models , appliance signatures , algorithms , prototype field-test results , current research directions , and the advantages and disadvantages of this approach relative to intrusive monitoring . story_separator_special_tag appliance load monitoring ( alm ) is essential for energy management solutions , allowing them to obtain appliance-specific energy consumption statistics that can further be used to devise load scheduling strategies for optimal energy utilization . fine-grained energy monitoring can be achieved by deploying smart power outlets on every device of interest ; however it incurs extra hardware cost and installation complexity . non-intrusive load monitoring ( nilm ) is an attractive method for energy disaggregation , as it can discern devices from the aggregated data acquired from a single point of measurement . this paper provides a comprehensive overview of nilm systems and their associated methods and techniques used for disaggregated energy sensing .
we review the state-of-the-art load signatures and disaggregation algorithms used for appliance recognition and highlight challenges and future research directions . story_separator_special_tag non-intrusive load monitoring ( nilm ) refers to the analysis of the aggregate power consumption of electric loads in order to recognize the existence and the consumption profile of each individual appliance . in this paper , we briefly describe our ongoing research on an unsupervised nilm system suitable for applications in the residential sector . the proposed system consists of the typical stages of an event-based nilm system , with the difference that only unsupervised algorithms are utilized in each stage , eliminating the need for a pre-training process and providing wider applicability . in the event detector , a grid-based clustering algorithm is utilized in order to segment the power signals into transient and steady-state sections . macroscopic features are extracted from the detected events and used in a mean-shift clustering algorithm . the system is tested on the publicly available blued dataset and shows event detection and clustering accuracy of more than 98 % . the system also shows possible disaggregation of up to 92 % of the energy of phase a of the blued dataset . moreover , the system has been utilized in an energy-disaggregation competition held by belkin and achieved a score within the top ten results with story_separator_special_tag smart meters are an enabling technology for many smart grid applications . this paper introduces a design for a low-cost smart meter system as well as the fundamentals of smart metering . the smart meter platform , provided as open hardware , is designed with a connector interface compatible with the arduino platform , thus opening the possibilities for smart meters with flexible hardware and computation features , starting from low-cost 8-bit microcontrollers up to powerful single board computers that can run linux . the metering platform features a current transformer which allows a non-intrusive installation of the current measurement unit . the suggested design can switch loads , offers a variable sampling frequency , and provides measurement data such as active power , reactive and apparent power . results indicate that measurement accuracy and resolution of the proposed metering platform are sufficient for a range of different applications and loads from a few watts up to five kilowatts . story_separator_special_tag utility companies around the world are replacing electro-mechanical power meters with new smart meters . these digital power meters have enhanced communication capabilities , but they are not actually smart . we present the cognitive power meter ( c-meter ) , a meter that is actually smart . by using load disaggregation intelligence , c-meter is the realization of demand response and other smart grid energy conservation initiatives . our c-meter is made of two key components : a prototype open source ammeter and an optimized embedded load disaggregation algorithm ( disagg ) . additionally , we provide an open source multi-circuit ammeter array that can build probabilistic appliance ( or load ) consumption models that are used by the c-meter . disagg is the first load disaggregation algorithm to be implemented on an inexpensive low-power embedded processor that runs in real-time using a typical/basic smart meter measurement ( current , in a ) . disagg can disaggregate loads with complex power states with a high degree of accuracy .
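the event-based disaggregation these abstracts build on , from hart 's seminal monitor to embedded implementations such as disagg , can be sketched as step-change detection plus signature matching on the aggregate signal . the signatures , thresholds and trace below are invented for illustration ; real systems add clustering , transient features and reactive power .

```python
# Hart-style event-based disaggregation sketch: detect step changes in
# the aggregate power trace and match them to known appliance signatures.
# All signatures, thresholds and samples are assumptions for illustration.

signatures = {"fridge": 120.0, "kettle": 1800.0, "tv": 90.0}  # watts (assumed)
TOLERANCE = 40.0     # match window in watts
THRESHOLD = 60.0     # minimum step treated as an on/off event

aggregate = [130, 132, 1930, 1935, 1932, 135, 131, 220, 223]  # 1 Hz samples

def detect_events(trace, threshold=THRESHOLD):
    """Yield (index, step) for every large jump in the aggregate signal."""
    for i in range(1, len(trace)):
        step = trace[i] - trace[i - 1]
        if abs(step) >= threshold:
            yield i, step

for i, step in detect_events(aggregate):
    # nearest-signature matching on the magnitude of the step
    match = min(signatures, key=lambda a: abs(abs(step) - signatures[a]))
    if abs(abs(step) - signatures[match]) <= TOLERANCE:
        state = "on" if step > 0 else "off"
        print(f"t={i}s: {match} switched {state} ({step:+.0f} W)")
```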
story_separator_special_tag the concept of smart grids is closely related to energy conservation and load shedding concepts . however , it is difficult to quantify the effectiveness of energy conservation efforts in residential settings without any sort of end-use energy information as feedback . in order to achieve that , load monitoring methods are normally used . in recent years , non-intrusive load monitoring ( nilm ) approaches have been gaining popularity due to their minimal installation requirements and cost effectiveness . for a nilm system to work , only one sensor at the entry point to a home is required . fluctuations in the aggregate power consumption signals are used to mathematically estimate the composition of operation of appliances . this approach eliminates the requirement of installing plug-meters for every appliance in the house . in this paper , we provide a review of recent research efforts on state-of-the-art nilm algorithms before concluding with a baseline and overall vision for our future research direction . story_separator_special_tag monitoring electricity consumption in the home is an important way to help reduce energy usage and non-intrusive load monitoring ( nilm ) techniques are a promising approach to obtain estimates of the electrical power consumption of individual appliances from aggregate measurements of voltage and/or current in the distribution system . in this paper , we discuss event detection algorithms used in the nilm literature and propose new metrics for evaluating them . in particular , we introduce metrics that incorporate information contained in the power signal instead of strict detection rates . we show that this information is important for nilm applications with the goal of improving appliance energy disaggregation . our work was carried out on a publicly-available week-long dataset of real residential power usage . story_separator_special_tag with ongoing large-scale smart energy metering deployments worldwide , disaggregation of a household 's total energy consumption down to individual appliances using analytical tools , also known as non-intrusive appliance load monitoring ( nalm ) , has generated increased research interest lately . nalm can deepen energy feedback , support appliance retrofit advice , and support home automation . however , despite the fact that nalm was proposed over 30 years ago , there are still many open challenges with respect to its practicality and effectiveness at low sampling rates . indeed , the majority of nalm approaches , supervised or unsupervised , require training to build appliance models , and are sensitive to appliance changes in the house , thus requiring regular re-training . in this paper , we tackle this challenge by proposing an nalm approach that does not require any training . the main idea is to build upon the emerging field of graph signal processing to perform adaptive thresholding , signal clustering , and pattern matching . we determine the performance limits of our approach and demonstrate its usefulness in practice . using two open access datasets , the us redd data set with active power measurements story_separator_special_tag disaggregating a household 's total energy data down to individual appliances via non-intrusive appliance load monitoring ( nalm ) has generated renewed interest with ongoing or planned large-scale smart meter deployments worldwide .
of special interest are nalm algorithms that are of low complexity and operate in near real time , supporting emerging applications such as in-home displays , remote appliance scheduling and home automation , and use low sampling rate data from commercial smart meters . nalm methods , based on hidden markov model ( hmm ) and its variations , have become the state of the art due to their high performance , but suffer from high computational cost . in this paper , we develop an alternative approach based on support vector machine ( svm ) and k-means , where k-means is used to reduce the svm training set size by identifying only the representative subset of the original dataset for the svm training . the resulting scheme outperforms individual k-means and svm classifiers and shows competitive performance to the state-of-the-art hmm-based nalm method with up to 45 times lower execution time ( including training and testing ) . story_separator_special_tag with the large-scale roll-out of smart metering worldwide , there is a growing need to account for the individual contribution of appliances to the load demand . in this paper , we design a graph signal processing ( gsp ) -based approach for non-intrusive appliance load monitoring ( nilm ) , i.e. , disaggregation of total energy consumption down to individual appliances used . leveraging piecewise smoothness of the power load signal , two gsp-based nilm approaches are proposed . the first approach , based on total graph variation minimization , searches for a smooth graph signal under known label constraints . the second approach uses the total graph variation minimizer as a starting point for further refinement via simulated annealing . the proposed gsp-based nilm approach aims to address the large training overhead and associated complexity of conventional graph-based methods through a novel event-based graph approach . simulation results using two datasets of real house measurements demonstrate the competitive performance of the gsp-based approaches with respect to traditionally used hidden markov model-based and decision tree-based approaches . story_separator_special_tag fear of increasing prices and concern about climate change are motivating residential power conservation efforts . we investigate the effectiveness of several unsupervised disaggregation methods on low frequency power measurements collected in real homes . specifically , we consider variants of the factorial hidden markov model . our results indicate that a conditional factorial hidden semi-markov model , which integrates additional features related to when and how appliances are used in the home and more accurately represents the power use of individual appliances , outperforms the other unsupervised disaggregation methods . our results show that unsupervised techniques can provide per-appliance power usage information in a non-invasive manner , which is ideal for enabling power conservation efforts . story_separator_special_tag this paper considers additive factorial hidden markov models , an extension to hmms where the state factors into multiple independent chains , and the output is an additive function of all the hidden states . although such models are very powerful , accurate inference is unfortunately difficult : exact inference is not computationally tractable , and existing approximate inference techniques are highly susceptible to local optima .
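the k-means/svm combination described above is straightforward to prototype with scikit-learn : cluster each class of training events and keep only the cluster centres as the svm training set . the sketch below is a minimal rendering of that idea under assumed parameters ( cluster count , rbf kernel ) , not the published implementation .

import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVC

def reduced_svm(X, y, clusters_per_class=20):
    """train an svm on a k-means-reduced training set, one k-means run
    per class, keeping only the cluster centres. the cluster count is
    an assumed parameter."""
    Xr, yr = [], []
    for label in np.unique(y):
        Xc = X[y == label]
        k = min(clusters_per_class, len(Xc))
        km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(Xc)
        Xr.append(km.cluster_centers_)
        yr.append(np.full(k, label))
    return SVC(kernel="rbf").fit(np.vstack(Xr), np.concatenate(yr))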
in this paper we propose an alternative inference method for such models , which exploits their additive structure by 1 ) looking at the observed difference signal of the observation , 2 ) incorporating a robust mixture component that can account for unmodeled observations , and 3 ) constraining the posterior to allow at most one hidden state to change at a time . combining these elements we develop a convex formulation of approximate inference that is computationally efficient , has no issues of local optima , and which performs much better than existing approaches in practice . the method is motivated by the problem of energy disaggregation , the task of taking a whole home electricity signal and decomposing it into its component appliances ; applied to this task , our algorithm story_separator_special_tag non-intrusive appliance load monitoring is the process of disaggregating a household 's total electricity consumption into its contributing appliances . in this paper we propose an approach by which individual appliances can be iteratively separated from an aggregate load . unlike existing approaches , our approach does not require training data to be collected by sub-metering individual appliances , nor does it assume complete knowledge of the appliances present in the household . instead , we propose an approach in which prior models of general appliance types are tuned to specific appliance instances using only signatures extracted from the aggregate load . the tuned appliance models are then used to estimate each appliance 's load , which is subsequently subtracted from the aggregate load . this process is applied iteratively until all appliances for which prior behaviour models are known have been disaggregated . we evaluate the accuracy of our approach using the redd data set , and show the disaggregation performance when using our training approach is comparable to when sub-metered training data is used . we also present a deployment of our system as a live application and demonstrate the potential for personalised energy saving feedback . story_separator_special_tag understanding how appliances in a house consume power is important when making intelligent and informed decisions about conserving energy . appliances can turn on and off either by the actions of occupants or by automatic sensing and actuation ( e.g. , thermostat ) . it is also difficult to understand how much a load consumes at any given operational state . occupants could buy sensors that would help , but this comes at a high financial cost . power utility companies around the world are now replacing old electro-mechanical meters with digital meters ( smart meters ) that have enhanced communication capabilities . these smart meters are essentially free sensors that offer an opportunity to use computation to infer what loads are running and how much each load is consuming ( i.e. , load disaggregation ) . we present a new load disaggregation algorithm that uses a super-state hidden markov model and a new viterbi algorithm variant which preserves dependencies between loads and can disaggregate multi-state loads , all while performing computationally efficient exact inference . our sparse viterbi algorithm can efficiently compute sparse matrices with a large number of super-states . 
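a heavily simplified version of the difference-signal idea with the one-at-a-time constraint can be written as a greedy matcher : each observed step is attributed to the single appliance whose rated power delta is closest . this is only a toy stand-in for the convex inference described above ; the appliance table and tolerance are invented .

import numpy as np

# hypothetical appliance power deltas (watts); invented for illustration
deltas = {"fridge": 120.0, "kettle": 2000.0, "tv": 80.0}

def assign_steps(step_magnitudes, tolerance=40.0):
    """greedy one-at-a-time assignment: each observed step in the
    difference signal is attributed to the appliance whose rated delta
    is closest, provided the gap is within a tolerance. this is a
    simplification of the convex inference described above."""
    names = list(deltas)
    rated = np.array([deltas[n] for n in names])
    labels = []
    for m in step_magnitudes:
        gaps = np.abs(np.abs(m) - rated)
        j = int(np.argmin(gaps))
        labels.append(names[j] if gaps[j] <= tolerance else None)
    return labels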
additionally , our disaggregator can run in real-time story_separator_special_tag finding models that can efficiently represent load signals is one key issue in non-intrusive load monitoring ( nilm ) because they are the foundation of most load disaggregation algorithms . in the past , the factorial hidden markov model ( fhmm ) has been proposed as one probabilistic model for the aggregate real power measurement . it is assumed that each load can be represented as one hidden markov model ( hmm ) and the hmms of all loads have been learned successfully before disaggregation . although fhmm showed some promising results for eventless disaggregation , a detailed investigation on how well hmm is suited to model load signals is still missing to date . in this paper , we study the feasibility of hmm modeling for different categories of loads by using the uk-dale dataset and propose a method for model adaptation across different houses . story_separator_special_tag this paper examines the electronic theses and dissertations ( etds ) deposited at inflibnet shodhganga project by indian universities . it is found that 32,000+ theses have been deposited across various disciplines by 201 universities . the study considered only the top five universities ranked by inflibnet shodhganga project . it is found that the top five universities have contributed 3145 theses in the repository . story_separator_special_tag load signature is the unique consumption pattern intrinsic to each individual electrical appliance/piece of equipment . this paper focuses on building a universal platform to better understand and explore the nature of electricity consumption patterns using load signatures and advanced technology , such as feature extraction and intelligent computing . through this knowledge , we can explore and develop innovative applications to achieve better utilization of resources and develop more intelligent ways of operation . this paper depicts the basic concept , features of load signatures , structure and methodology of applying mathematical programming techniques , pattern recognition tools , and a committee decision mechanism to perform load disaggregation . new indices are also introduced to aid our understanding of the nature of load signatures and different disaggregation algorithms . story_separator_special_tag activity sensing in the home has a variety of important applications , including healthcare , entertainment , home automation , energy monitoring and post-occupancy research studies . many existing systems for detecting occupant activity require large numbers of sensors , invasive vision systems , or extensive installation procedures . we present an approach that uses a single plug-in sensor to detect a variety of electrical events throughout the home . this sensor detects the electrical noise on residential power lines created by the abrupt switching of electrical devices and the noise created by certain devices while in operation . we use machine learning techniques to recognize electrically noisy events such as turning on or off a particular light switch , a television set , or an electric stove . we tested our system in one home for several weeks and in five homes for one week each to evaluate the system performance over time and in different types of houses . results indicate that we can learn and classify various electrical events with accuracies ranging from 85-90 % .
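since the hmm-based disaggregators above ultimately decode a most likely state sequence , a plain viterbi routine is the common primitive . the sketch below is the textbook recursion , not makonin 's sparse super-state variant .

import numpy as np

def viterbi(obs_loglik, log_trans, log_init):
    """standard viterbi decoding. obs_loglik is a (T, S) array of
    per-state observation log-likelihoods, log_trans an (S, S)
    log-transition matrix, log_init an (S,) log prior. the sparse
    super-state variant discussed above adds structure on top of this."""
    T, S = obs_loglik.shape
    delta = np.empty((T, S))
    back = np.zeros((T, S), dtype=int)
    delta[0] = log_init + obs_loglik[0]
    for t in range(1, T):
        scores = delta[t - 1][:, None] + log_trans
        back[t] = np.argmax(scores, axis=0)
        delta[t] = scores[back[t], np.arange(S)] + obs_loglik[t]
    path = np.empty(T, dtype=int)
    path[-1] = int(np.argmax(delta[-1]))
    for t in range(T - 2, -1, -1):
        path[t] = back[t + 1][path[t + 1]]
    return path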
story_separator_special_tag we propose two algorithms for power load disaggregation at low sampling rates ( intervals greater than 1 sec ) : a low-complexity , supervised approach based on decision trees and an unsupervised method based on dynamic time warping . both proposed algorithms share common pre-classification steps . we provide a reproducible algorithmic description and benchmark the proposed methods with a state-of-the-art hidden markov model ( hmm ) -based approach . experimental results using three us and three uk households show that both proposed methods outperform the hmm-based approach and are capable of disaggregating a range of domestic loads even when the training period is very short . story_separator_special_tag consumer systems for home energy management can provide significant energy savings . such systems may be based on nonintrusive appliance load monitoring ( nialm ) , in which individual appliance power consumption information is disaggregated from single-point measurements . the disaggregation methods constitute the most important part of nialm systems . this paper reviews the methodology of consumer systems for nialm in residential buildings . story_separator_special_tag a home-based intelligent energy conservation system needs to know what appliances ( or loads ) are being used in the home and when they are being used in order to provide intelligent feedback or to make intelligent decisions . this analysis task is known as load disaggregation or non-intrusive load monitoring ( nilm ) . the datasets used for nilm research generally contain real power readings , with the data often being too coarse for more sophisticated analysis algorithms , and often covering too short a time period . we present the almanac of minutely power dataset ( ampds ) for load disaggregation research ; it contains one year of data that includes 11 measurements at one minute intervals for 21 sub-meters . ampds also includes natural gas and water consumption data . finally , we use ampds to present findings from our own load disaggregation algorithm to show that current , rather than real power , is a more effective measure for nilm . story_separator_special_tag the problem of identifying end-use electrical appliances from their individual consumption profiles , known as the appliance identification problem , is a primary stage in both non-intrusive load monitoring ( nilm ) and automated plug-wise metering . therefore , appliance identification has received dedicated studies with various electric appliance signatures , classification models , and evaluation datasets . in this paper , we propose a neural network ensembles approach to address this problem using high resolution measurements . the models are trained on the raw current and voltage waveforms , thus eliminating the need for well-engineered appliance signatures . we evaluate the proposed model on a publicly available appliance dataset from 55 residential buildings , 11 appliance categories , and over 1,000 measurements . we further study the stability of the trained models with respect to the training dataset , sampling frequency , and variations in the steady-state operation of appliances . story_separator_special_tag with ongoing massive smart energy metering deployments , disaggregation of a household 's total energy consumption down to individual appliances using purely software tools , a.k.a. non-intrusive appliance load monitoring ( nalm ) , has generated increased interest .
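the unsupervised method above matches observed load segments against signatures with dynamic time warping . a minimal o ( nm ) dtw distance , without any of the paper 's pre-classification steps , can be written as :

import numpy as np

def dtw_distance(a, b):
    """plain dynamic-time-warping distance between two 1-d load
    signatures; a sketch of the matching primitive only."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]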
however , despite the fact that nalm was proposed over 30 years ago , there are still many open challenges . indeed , the majority of approaches require training and are sensitive to appliance changes , requiring regular re-training . in this paper , we tackle this challenge by proposing a `` blind '' nalm approach that does not require any training . the main idea is to build upon an emerging field of graph-based signal processing to perform adaptive thresholding , signal clustering and feature matching . using two datasets of active power measurements with 1 min and 8 sec resolution , we demonstrate the effectiveness of the proposed method using state-of-the-art nalm approaches as benchmarks . story_separator_special_tag energy disaggregation ( or non-intrusive load monitoring ( nilm ) ) is the process of deducing individual load profiles from aggregate measurements using different machine learning and pattern recognition tools . existing disaggregation algorithms can be categorized into either supervised approaches or unsupervised ones . supervised approaches require external information represented in either sub-metered loads or hand-labeled observations while unsupervised algorithms utilize only unlabeled aggregate data . we observed that very few works attempt to utilize both labeled and unlabeled data . in this paper , we introduce a semi-supervised learning tool , namely self-training , to the energy disaggregation problem . semi-supervised learning ( ssl ) tools leverage both external and internal structural information in order to enhance the learning process and/or reduce the required labeling effort . we also provide test results of the utilized ssl tool compared with a traditional classification component of an event-based nilm system . results show that even a simple ssl tool is able to reduce the required labeling effort and provides a learning disaggregation system whose performance gradually increases as it observes more unlabeled aggregate measurements . story_separator_special_tag large-scale smart metering deployments and energy saving targets across the world have ignited renewed interest in residential non-intrusive appliance load monitoring ( nalm ) , that is , disaggregating total household 's energy consumption down to individual appliances , using purely analytical tools . despite increased research efforts , nalm techniques that can disaggregate power loads at low sampling rates are still not accurate and/or practical enough , requiring substantial customer input and long training periods . in this paper , we address these challenges via a practical low-complexity low-rate nalm , by proposing two approaches based on a combination of the following machine learning techniques : k-means clustering and support vector machine , exploiting their strengths and addressing their individual weaknesses . the first proposed supervised approach is a low-complexity method that requires a very short training period and is fairly accurate even in the presence of labelling errors . the second approach relies on a database of appliance signatures that we designed using publicly available datasets . the database compactly represents over 200 appliances using statistical modelling of measured active power .
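the self-training loop described above can be sketched generically : fit a classifier on the labelled events , pseudo-label the unlabelled pool where the model is confident , and refit . the base model , confidence threshold and round count below are assumptions , not the cited setup .

import numpy as np
from sklearn.ensemble import RandomForestClassifier

def self_train(X_lab, y_lab, X_unlab, confidence=0.9, rounds=5):
    """generic self-training: grow the labelled set with confident
    pseudo-labels drawn from the unlabelled pool, then refit."""
    X, y = X_lab.copy(), y_lab.copy()
    pool = X_unlab.copy()
    clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
    for _ in range(rounds):
        if len(pool) == 0:
            break
        proba = clf.predict_proba(pool)
        keep = proba.max(axis=1) >= confidence
        if not keep.any():
            break
        X = np.vstack([X, pool[keep]])
        y = np.concatenate([y, clf.classes_[proba[keep].argmax(axis=1)]])
        pool = pool[~keep]
        clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
    return clf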
experimental results on three datasets from the us , italy , austria and the uk demonstrate the reliability story_separator_special_tag model-driven analytics of energy meter data in smart homes story_separator_special_tag this paper considers the problem of energy disaggregation , which aims to decompose a whole home 's electric consumption into the consumptions of individual appliances . recent studies have shown that the factorial hidden markov model ( fhmm ) is a favorable model for this problem . for effectiveness of inference , two key assumptions are often adopted : independence of devices and the one-at-a-time condition ( it assumes at most one device changes state at each time step ) . in this work , we argue that these assumptions in many cases do not hold in practical data . the contradiction of data and assumptions renders the disaggregation problem particularly challenging . we attempt to address this problem by introducing a novel inference framework named hierarchical fhmm that enables effective inference of fhmm when the assumptions are violated . this framework utilizes the relationship between devices to improve the speed and accuracy of inference . our approach also has the advantage that it can be easily integrated with existing or future inference algorithms of fhmm . experimental results on two benchmark datasets , redd and pecan , demonstrated that our method yields state-of-the-art energy disaggregation results . story_separator_special_tag research on smart grids has recently focused on the energy monitoring issue , with the objective to maximize the user consumption awareness in building contexts on one hand , and to provide a detailed description of customer habits to the utilities on the other . one of the hottest topics in this field is represented by non-intrusive load monitoring ( nilm ) : it refers to those techniques aimed at decomposing the aggregated consumption data acquired at a single point of measurement into the diverse consumption profiles of appliances operating in the electrical system under study . the focus here is on unsupervised algorithms , which are the most interesting and of practical use in real case scenarios . indeed , these methods rely on a sustainable amount of a-priori knowledge related to the applicative context of interest , thus minimizing the user intervention to operate , and are targeted to extract all information to operate directly from the measured aggregate data . this paper reports and describes the most promising unsupervised nilm methods recently proposed in the literature , by dividing them into two main categories : load classification and source separation approaches . an overview of the public story_separator_special_tag many countries are rolling out smart electricity meters . a smart meter measures the aggregate energy consumption of an entire building . however , appliance-by-appliance energy consumption information may be more valuable than aggregate data for a variety of uses including reducing energy demand and improving load forecasting for the electricity grid . electricity disaggregation algorithms , the focus of this thesis , estimate appliance-by-appliance electricity demand from aggregate electricity demand . this thesis has three main goals : 1 ) to critically evaluate the benefits of energy disaggregation ; 2 ) to develop tools to enable rigorous disaggregation research ; 3 ) to advance the state of the art in disaggregation algorithms .
the first part of this thesis explores whether disaggregated energy feedback helps domestic users to reduce energy consumption , and discusses threats to nilm . evidence is collected , summarised and aggregated by means of a critical , systematic review of the literature . multiple uses for disaggregated data are discussed . our review finds no robust evidence to support the hypothesis that current forms of disaggregated energy feedback are more effective than aggregate energy feedback at reducing energy consumption in the general population . but the story_separator_special_tag monitoring an individual electrical load 's energy usage is of great significance in energy-efficient buildings as it underlies sophisticated load control and energy optimization strategies . non-intrusive load monitoring ( nilm ) provides an economical tool to access per-load power consumption without deploying fine-grained , large-scale smart meters . however , existing nilm approaches require training data to be collected by sub-metering individual appliances as well as prior knowledge about the number of appliances attached to the meter , which are expensive or unlikely to obtain in practice . in this paper , we propose a fully unsupervised nilm framework based on non-parametric factorial hidden markov models , in which per-load power consumptions are disaggregated from the composite signal with minimal prerequisites . we develop an efficient inference algorithm to detect the number of appliances from data and disaggregate the power signal simultaneously . we also propose a criterion , generalized state prediction accuracy , to properly evaluate the overall performance for methods targeting both appliance number detection and load disaggregation . we evaluate our framework by comparing against other multi-tasking schemes , and the results show that our framework compares favorably to prior work in both story_separator_special_tag graph-based signal processing ( gsp ) is an emerging field that is based on representing a dataset using a discrete signal indexed by a graph . inspired by the recent success of gsp in image processing and signal filtering , in this paper , we demonstrate how gsp can be applied to non-intrusive appliance load monitoring ( nalm ) due to smoothness of appliance load signatures . nalm refers to disaggregating total energy consumption in the house down to individual appliances used . at low sampling rates , in the order of minutes , nalm is a difficult problem , due to significant random noise , unknown base load , many household appliances that have similar power signatures , and the fact that most domestic appliances ( for example , microwave , toaster ) have typical operation times of just over a minute . in this paper , we propose an nalm approach that differs from more traditional approaches by representing the dataset of active power signatures using a graph signal . we develop a regularization-on-graphs approach where , by maximizing smoothness of the underlying graph signal , we are able to perform disaggregation . simulation results using publicly story_separator_special_tag energy disaggregation estimates appliance-by-appliance electricity consumption from a single meter that measures the whole home 's electricity demand . recently , deep neural networks have driven remarkable improvements in classification performance in neighbouring machine learning fields such as image classification and automatic speech recognition .
in this paper , we adapt three deep neural network architectures to energy disaggregation : 1 ) a form of recurrent neural network called ` long short-term memory ' ( lstm ) ; 2 ) denoising autoencoders ; and 3 ) a network which regresses the start time , end time and average power demand of each appliance activation . we use seven metrics to test the performance of these algorithms on real aggregate power data from five appliances . tests are performed against a house not seen during training and against houses seen during training . we find that all three neural nets achieve better f1 scores ( averaged over all five appliances ) than either combinatorial optimisation or factorial hidden markov models and that our neural net algorithms generalise well to an unseen house . story_separator_special_tag in today 's increasingly urban society , the consumption of power by residential customers presents a difficult challenge for the energy market , while also having significant environmental implications . understanding the energy usage characteristics of each individual household can assist in mitigating some of these issues . however , this is very challenging because there is no simple way to measure the power consumption of the different appliances within a home without installation of many individual sensors . this process is prohibitive since it is highly intrusive and not cost-effective for both users and providers . non-intrusive load monitoring ( nilm ) is a technique for inferring the power consumption of each appliance within a home from one central meter ( usually a commercial smartmeter ) . the ability to obtain such information from widely spread existing hardware has the potential to overcome the cost and intrusiveness limitations of power usage research . various methods can be used for nilm , including hidden-markov-models ( hmms ) and integer programming ( ip ) , with deep learning gaining popularity in recent years . in this thesis , i will present three projects using novel deep learning approaches for solving story_separator_special_tag to low error rates occurs faster and yields ( on average ) better models . if the mean vectors of the multivariate gaussian density functions are placed according to the clusters organized by soms , only a couple of iterations of maximum likelihood estimation is required to set suitable values for the other cdhmm parameters . the lvq was used to get more discriminative clustering but it seems that the baum-welch algorithm cannot preserve this discriminativity very well . however , in the segmental k-means algorithm the lowest average speech recognition error rate was obtained when the mean vectors of the mixed gaussians were initially created from the reference vectors by using lvq . the weighting coefficient is then the probability of being in the state for which the parameters are estimated , computed with the old parameter values . the difference in the segmental k-means compared to the baum-welch is that only data points assigned to the current state in the most probable state sequences are used in the estimation . the most probable state sequence for story_separator_special_tag hidden markov models ( hmms ) have proven to be one of the most widely used tools for learning probabilistic models of time series data . in an hmm , information about the past is conveyed through a single discrete variable , the hidden state .
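a minimal sequence-to-sequence lstm in the spirit of the recurrent architecture adapted above can be sketched in pytorch ; the layer sizes are illustrative assumptions and training code is omitted .

import torch
import torch.nn as nn

class LSTMDisaggregator(nn.Module):
    """minimal sequence-to-sequence lstm: aggregate power in, one
    appliance's power out. layer sizes are illustrative assumptions."""
    def __init__(self, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(input_size=1, hidden_size=hidden,
                            num_layers=2, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):            # x: (batch, time, 1)
        h, _ = self.lstm(x)
        return self.head(h)          # (batch, time, 1)

# usage sketch: train with mse against the sub-metered appliance trace
# model = LSTMDisaggregator()
# loss = nn.MSELoss()(model(mains_window), appliance_window)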
we discuss a generalization of hmms in which this state is factored into multiple state variables and is therefore represented in a distributed manner . we describe an exact algorithm for inferring the posterior probabilities of the hidden state variables given the observations , and relate it to the forward-backward algorithm for hmms and to algorithms for more general graphical models . due to the combinatorial nature of the hidden state representation , this exact algorithm is intractable . as in other intractable systems , approximate inference can be carried out using gibbs sampling or variational methods . within the variational framework , we present a structured approximation in which the state variables are decoupled , yielding a tractable algorithm for learning the parameters of the model . empirical comparisons suggest that these approximations are efficient and provide accurate alternatives to the exact methods . finally , we use the structured approximation to story_separator_special_tag non-intrusive appliance load monitoring ( nialm ) is the process of disaggregating a household 's total electricity consumption into its contributing appliances . smart meters are currently being deployed on national scales , providing a platform to collect aggregate household electricity consumption data . existing approaches to nialm require a manual training phase in which either sub-metered appliance data is collected or appliance usage is manually labelled . this training data is used to build models of the household appliances , which are subsequently used to disaggregate the household 's electricity data . due to the requirement of such a training phase , existing approaches do not scale automatically to the national scales of smart meter data currently being collected . in this thesis we propose an unsupervised training method which , unlike existing approaches , does not require a manual training phase . instead , our approach combines general appliance knowledge with just aggregate smart meter data from the household to perform disaggregation . to do so , we address the following three problems : ( i ) how to generalise the behaviour of multiple appliances of the same type , ( ii ) how to tune general story_separator_special_tag the increasing energy consumption is one of the greatest environmental challenges of our time . residential buildings account for a considerable part of the total electricity consumption and are furt . story_separator_special_tag the bayesian approach to statistical modelling is a consistent and intuitive framework for dealing with uncertainty about the world . in this approach , we encode any prior knowledge about variables ( observed or unobserved ) with the goal of inferring a posterior distribution over unobserved variables . the most common approaches to bayesian modelling to date are the so-called parametric bayesian models : these are specified with a finite number of unobserved variables . with vast amounts of data readily available today , these models generally fail to leverage a learning opportunity : no additional structure beyond that which was defined in the prior can be learned . any increase in data passed into the model will only affect the accuracy of the inferred posteriors . non-parametric bayesian models address this problem : they are probabilistic models whose additional flexibility allows for learning the structure of complex datasets .
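the combinatorial nature of the factorial hmm 's hidden state , which makes exact inference intractable as noted above , is easy to see by flattening the model : for independent chains , the joint transition matrix over the product state space is the kronecker product of the per-chain matrices , so its size grows as s ** k . a small sketch with invented two-state chains :

import numpy as np
from functools import reduce

def joint_transition(chains):
    """flatten a factorial hmm with independent chains into one hmm
    over the product state space: the joint transition matrix is the
    kronecker product of the per-chain matrices."""
    return reduce(np.kron, chains)

# three hypothetical 2-state appliance chains
P = np.array([[0.95, 0.05], [0.10, 0.90]])
joint = joint_transition([P, P, P])
print(joint.shape)  # (8, 8) -- 2**3 joint states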
in this thesis we present new models and inference algorithms for non-parametric bayesian models in the context of hidden markov models . our contribution is three-fold : we introduce , for the first time , a family of algorithms for efficient and exact monte carlo inference in non-parametric bayesian markov models story_separator_special_tag analysis and processing of very large data sets , or big data , poses a significant challenge . massive data sets are collected and studied in numerous domains , from engineering sciences to social networks , biomolecular research , commerce , and security . extracting valuable information from big data requires innovative approaches that efficiently process large amounts of data as well as handle and , moreover , utilize their structure . this article discusses a paradigm for large-scale data analysis based on the discrete signal processing ( dsp ) on graphs ( dspg ) . dspg extends signal processing concepts and methodologies from the classical signal processing theory to data indexed by general graphs . big data analysis presents several challenges to dspg , in particular , in filtering and frequency analysis of very large data sets . we review fundamental concepts of dspg , including graph signals and graph filters , graph fourier transform , graph frequency , and spectrum ordering , and compare them with their counterparts from the classical signal processing theory . we then consider product graphs as a graph model that helps extend the application of dspg methods to large data sets through efficient story_separator_special_tag we present a novel data classifier that is based on the regularization of graph signals . our approach is based on the theory of discrete signal processing on graphs where the graph represents similarities between data points and we interpret labels for the dataset elements as a signal indexed by the nodes of the graph . we postulate that true labels form a low-frequency graph signal and the classifier finds the smoothest graph signal that satisfies constraints given by known data labels . our experiments demonstrate that our approach achieves high accuracy in multiclass classification and outperforms other classification approaches . story_separator_special_tag today 's web-enabled deluge of electronic data calls for automated methods of data analysis . machine learning provides these , developing methods that can automatically detect patterns in data and then use the uncovered patterns to predict future data . this textbook offers a comprehensive and self-contained introduction to the field of machine learning , based on a unified , probabilistic approach . the coverage combines breadth and depth , offering necessary background material on such topics as probability , optimization , and linear algebra as well as discussion of recent developments in the field , including conditional random fields , l1 regularization , and deep learning . the book is written in an informal , accessible style , complete with pseudo-code for the most important algorithms . all topics are copiously illustrated with color images and worked examples drawn from such application domains as biology , text processing , computer vision , and robotics . rather than providing a cookbook of different heuristic methods , the book stresses a principled model-based approach , often using the language of graphical models to specify models in a concise and intuitive way .
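the graph-signal classifier described above admits a compact closed form : fix the signal on labelled nodes and choose the unlabelled values that minimise the laplacian quadratic form s.T @ L @ s , which reduces to one linear solve . the sketch below assumes a symmetric similarity matrix w and a connected graph ; it is a generic rendering of the smoothing idea , not the cited implementation .

import numpy as np

def smooth_labels(W, labels, labeled_idx):
    """fix the signal on labelled nodes and solve for the smoothest
    extension: L_uu s_u = -L_ul s_l, where L is the graph laplacian.
    labels is an array of +/-1 values for the labelled nodes."""
    n = W.shape[0]
    L = np.diag(W.sum(axis=1)) - W
    u = np.setdiff1d(np.arange(n), labeled_idx)
    s = np.zeros(n)
    s[labeled_idx] = labels
    s[u] = np.linalg.solve(L[np.ix_(u, u)],
                           -L[np.ix_(u, labeled_idx)] @ labels)
    return s  # threshold s at zero for binary classification

# for multiclass, repeat with one-vs-rest indicator signals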
almost all the models described have been implemented in story_separator_special_tag this monograph provides an overview of general deep learning methodology and its applications to a variety of signal and information processing tasks . the application areas are chosen with the following three criteria in mind : ( 1 ) expertise or knowledge of the authors ; ( 2 ) the application areas that have already been transformed by the successful use of deep learning technology , such as speech recognition and computer vision ; and ( 3 ) the application areas that have the potential to be impacted significantly by deep learning and that have been experiencing research growth , including natural language and text processing , information retrieval , and multimodal information processing empowered by multi-task deep learning . story_separator_special_tag in modern face recognition , the conventional pipeline consists of four stages : detect => align => represent => classify . we revisit both the alignment step and the representation step by employing explicit 3d face modeling in order to apply a piecewise affine transformation , and derive a face representation from a nine-layer deep neural network . this deep network involves more than 120 million parameters using several locally connected layers without weight sharing , rather than the standard convolutional layers . thus we trained it on the largest facial dataset to date , an identity-labeled dataset of four million facial images belonging to more than 4,000 identities . the learned representations coupling the accurate model-based alignment with the large facial database generalize remarkably well to faces in unconstrained environments , even with a simple classifier . our method reaches an accuracy of 97.35 % on the labeled faces in the wild ( lfw ) dataset , reducing the error of the current state of the art by more than 27 % , closely approaching human-level performance . story_separator_special_tag we show that an end-to-end deep learning approach can be used to recognize either english or mandarin chinese speech , two vastly different languages . because it replaces entire pipelines of hand-engineered components with neural networks , end-to-end learning allows us to handle a diverse variety of speech including noisy environments , accents and different languages . key to our approach is our application of hpc techniques , enabling experiments that previously took weeks to now run in days . this allows us to iterate more quickly to identify superior architectures and algorithms . as a result , in several cases , our system is competitive with the transcription of human workers when benchmarked on standard datasets . finally , using a technique called batch dispatch with gpus in the data center , we show that our system can be inexpensively deployed in an online setting , delivering low latency when serving users at scale . story_separator_special_tag energy disaggregation ( a.k.a. nonintrusive load monitoring , nilm ) , a single-channel blind source separation problem , aims to decompose the mains signal , which records the whole-house electricity consumption , into appliance-wise readings . this problem is difficult because it is inherently unidentifiable . recent approaches have shown that the identifiability problem could be reduced by introducing domain knowledge into the model .
deep neural networks have been shown to be a promising approach for these problems , but sliding windows are necessary to handle the long sequences which arise in signal processing problems , which raises issues about how to combine predictions from different sliding windows . in this paper , we propose sequence-to-point learning , where the input is a window of the mains and the output is a single point of the target appliance . we use convolutional neural networks to train the model . interestingly , we systematically show that the convolutional neural networks can inherently learn the signatures of the target appliances , which are automatically added into the model to reduce the identifiability problem . we applied the proposed neural network approaches to real-world household energy data , and show that the methods achieve story_separator_special_tag this paper presents a new supervised approach to extract the power trace of individual loads from single channel aggregate power signals in non-intrusive load monitoring ( nilm ) systems . recent approaches to this source separation problem are based on factorial hidden markov models ( fhmm ) . drawbacks are the needed knowledge of hmm models for all loads , which is infeasible for large buildings , and the large combinatorial complexity . our approach trains an hmm with two emission probabilities , one for the single load to be extracted and the other for the aggregate power signal . a gaussian distribution is used to model observations of the single load whereas observations of the aggregate signal are modeled with a deep neural network ( dnn ) . by doing so , a single load can be extracted from the aggregate power signal without knowledge of the remaining loads . the performance of the algorithm is evaluated on the reference energy disaggregation ( redd ) dataset . story_separator_special_tag in this paper a novel approach for energy disaggregation is introduced that identifies additive sub-components of the power signal in an unsupervised way from high-frequency measurements of current . in a subsequent step , these sub-components are combined to create appliance power traces . once the sub-components that constitute an appliance are identified , energy disaggregation can be viewed as non-linear filtering of the current signal . the approach introduced here tries to avoid numerous pitfalls of existing energy disaggregation techniques such as computational complexity issues , data transmission limitations and prior knowledge of appliances . we test the approach on a publicly available dataset and report an overall disaggregation error of 0.07 . story_separator_special_tag in many statistical problems , a more coarse-grained model may be suitable for population-level behaviour , whereas a more detailed model is appropriate for accurate modelling of individual behaviour . this raises the question of how to integrate both types of models . methods such as posterior regularization follow the idea of generalized moment matching , in that they allow matching expectations between two models , but sometimes both models are most conveniently expressed as latent variable models . we propose latent bayesian melding , which is motivated by averaging the distributions over population statistics of both the individual-level and the population-level models under a logarithmic opinion pool framework .
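the sequence-to-point scheme described earlier maps a window of mains readings to the appliance power at the window midpoint . a minimal pytorch rendering follows ; the filter sizes are assumptions in the spirit of the description , not the exact published architecture .

import torch
import torch.nn as nn

class Seq2Point(nn.Module):
    """sequence-to-point sketch: a window of mains readings maps to
    one appliance power value. filter sizes are assumptions."""
    def __init__(self, window=99):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(1, 30, kernel_size=10), nn.ReLU(),
            nn.Conv1d(30, 40, kernel_size=8), nn.ReLU(),
            nn.Conv1d(40, 50, kernel_size=6), nn.ReLU(),
        )
        out_len = window - 9 - 7 - 5  # valid convolutions shrink the window
        self.head = nn.Sequential(nn.Flatten(),
                                  nn.Linear(50 * out_len, 128), nn.ReLU(),
                                  nn.Linear(128, 1))

    def forward(self, x):  # x: (batch, 1, window)
        return self.head(self.conv(x))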
in a case study on electricity disaggregation , which is a type of single-channel blind source separation problem , we show that latent bayesian melding leads to significantly more accurate predictions than an approach based solely on generalized moment matching . story_separator_special_tag non-intrusive load monitoring ( nilm ) provides homeowners with detailed feedback on their electricity usage , but an open problem is appliance labeling and generalizable appliance models that can be trained in one home and deployed in another . we therefore propose a semi-supervised learning appliance annotation scheme for home appliance signatures ( saraa ) . saraa utilizes time series of appliance turn-on and turn-off events to tune generic appliance classifiers to appliances in the target home using a mixture of labeled and unlabeled data . achieving this goal requires the development of a stopping criterion for semi-supervised learning , and we propose and evaluate a stopping heuristic for one-nearest neighbor semi-supervised learning of appliance signature time series . starting with only a single labeled instance in the target home , saraa produces classifiers with median f1 scores only 14.8 % lower than benchmark classifiers trained on the fully labeled ground truth data in the target home , outperforming classifiers trained only on data from other homes , which have a median f1 score that is 51.23 % poorer than the benchmark . the results of this paper will help develop nilm systems which can automatically learn story_separator_special_tag nonintrusive load monitoring ( nilm ) , sometimes referred to as load disaggregation , is the process of determining what loads or appliances are running in a house from analysis of the power signal of the whole-house power meter . as the popularity of nilm grows , we find that there is no consistent way researchers are measuring and reporting accuracies . in this short communication , we present a unified approach that would allow for consistent accuracy testing . story_separator_special_tag residential buildings contribute significantly to the overall energy consumption across most parts of the world . while smart monitoring and control of appliances can reduce the overall energy consumption , management and cost associated with such systems act as a big hindrance . prior work has established that detailed feedback in the form of appliance level consumption to building occupants improves their awareness and paves the way for reduction in electricity consumption . non-intrusive load monitoring ( nilm ) , i.e . the process of disaggregating the overall home electricity usage measured at the meter level into constituent appliances , provides a simple and cost effective methodology to provide such feedback to the occupants . in this paper we present improved non-intrusive load monitoring using load division and calibration ( indic ) that simplifies nilm by dividing the appliances across multiple instrumented points ( meters/phases ) and calibrating the measured power . the proposed approach is used together with the combinatorial optimization framework and evaluated on the popular redd dataset . empirical results demonstrate significant improvement in disaggregation accuracy , achieved by using indic-based combinatorial optimization .
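the combinatorial optimization framework referenced above is the classic baseline : at each time step , pick the on/off combination of appliances whose rated powers best explain the aggregate reading . a brute-force sketch , with invented rated powers and an exhaustive search that is only practical for small appliance counts :

import itertools
import numpy as np

def combinatorial_optimisation(aggregate, rated):
    """baseline co disaggregation: at each time step choose the on/off
    combination whose summed rated powers are closest to the aggregate.
    rated is a (K,) array of assumed appliance draws in watts."""
    K = len(rated)
    combos = np.array(list(itertools.product([0, 1], repeat=K)))  # (2**K, K)
    sums = combos @ rated
    states = np.empty((len(aggregate), K), dtype=int)
    for t, y in enumerate(aggregate):
        states[t] = combos[np.argmin(np.abs(sums - y))]
    return states

# usage: states = combinatorial_optimisation(aggregate, np.array([120., 2000., 80.]))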
story_separator_special_tag energy and sustainability issues raise a large number of problems that can be tackled using approaches from data mining and machine learning , but traction of such problems has been slow due to the lack of publicly available data . in this paper we present the reference energy disaggregation data set ( redd ) , a freely available data set containing detailed power usage information from several homes , which is aimed at furthering research on energy disaggregation ( the task of determining the component appliance contributions from an aggregated electricity signal ) . we discuss past approaches to disaggregation and how they have influenced our design choices in collecting data , we describe the hardware and software setups for the data collection , and we present initial benchmark disaggregation results using a well-known factorial hidden markov model ( fhmm ) technique . story_separator_special_tag retrieving the household electricity consumption at individual appliance level is an essential requirement to assess the contribution of different end uses to the total household consumption , and thus to design energy saving policies and user-tailored feedback for reducing household electricity usage . this has led to the development of nonintrusive appliance load monitoring ( nialm ) , or energy disaggregation , algorithms , which aim to decompose the aggregate energy consumption data collected from a single measurement point into device-level consumption estimations . existing nialm algorithms are able to provide accurate estimates of the fraction of energy consumed by each appliance . yet , in the authors ' experience , they provide poor performance in reconstructing the power consumption trajectories over time . in this brief , a new nialm algorithm is presented , which , besides providing very accurate estimates of the aggregated consumption by appliance , also accurately characterizes the appliance power consumption profiles over time . the proposed algorithm is based on the assumption that the unknown appliance power consumption profiles are piecewise constant over time ( as is typical for power use patterns of household appliances ) and it exploits the information on the time-of-day probability in which story_separator_special_tag in this demonstration , we present an open source toolkit for evaluating non-intrusive load monitoring research ; a field which aims to disaggregate a household 's total electricity consumption into individual appliances . the toolkit contains : a number of importers for existing public data sets , a set of preprocessing and statistics functions , a benchmark disaggregation algorithm and a set of metrics to evaluate the performance of such algorithms . specifically , this release of the toolkit has been designed to enable the use of large data sets by only loading individual chunks of the whole data set into memory at once for processing , before combining the results of each chunk . story_separator_special_tag non-intrusive load monitoring ( nilm ) , or energy disaggregation , is the process of using signal processing and machine learning to separate the energy consumption of a building into individual appliances . in recent years , a number of data sets have been released in order to evaluate such approaches , which contain both building-level and appliance-level energy data .
however , these data sets typically cover less than 10 households due to the financial cost of such deployments , and are not released in a format which allows the data sets to be easily used by energy disaggregation researchers . to this end , the dataport database was created by pecan street inc. , which contains 1 minute circuit-level and building-level electricity data from 722 households . furthermore , the non-intrusive load monitoring toolkit ( nilmtk ) was released in 2014 , which provides software infrastructure to support energy disaggregation research , such as data set parsers , benchmark disaggregation algorithms and accuracy metrics . this paper describes the release of a subset of the dataport database in nilmtk format , containing one month of electricity data from 669 households . through the release of this dataport data story_separator_special_tag over the past few years , dozens of new techniques have been proposed for more accurate energy disaggregation , but the jury is still out on whether these techniques can actually save energy and , if so , whether higher accuracy translates into higher energy savings . in this paper , we explore both of these questions . first , we develop new techniques that use disaggregated power data to provide actionable feedback to residential users . we evaluate these techniques using power traces from 240 homes and find that they can detect homes that need feedback with as much as 84 % accuracy . second , we evaluate whether existing energy disaggregation techniques provide power traces with sufficient fidelity to support the feedback techniques that we created and whether more accurate disaggregation results translate into more energy savings for the users . results show that feedback accuracy is very low even while disaggregation accuracy is high . these results indicate a need to revisit the metrics by which disaggregation is evaluated . story_separator_special_tag non-intrusive load monitoring ( nilm ) , or energy disaggregation , is the process of separating the total electricity consumption of a building as measured at a single point into the building 's constituent loads . previous research in the field has mostly focused on residential buildings , and although the potential benefits of applying this technology to commercial buildings have been recognised since the field 's conception , nilm in the commercial domain has been largely unexplored by the academic community . as a result of the heterogeneity of this section of the building stock ( i.e. , encompassing buildings as diverse as airports , malls and coffee shops ) , and hence the loads within them , many of the solutions developed for residential energy disaggregation do not apply directly . in this paper we highlight some insights for nilm in the commercial domain using data collected from a large smart meter deployment within an educational campus in delhi , india , of which a subset of the data has been released for public use . we present an empirical characterisation of loads in commercial buildings , highlighting the differences in energy consumption and load characteristics between residential story_separator_special_tag many countries are rolling out smart electricity meters . these measure a home 's total power demand . however , research into consumer behaviour suggests that consumers are best able to improve their energy efficiency when provided with itemised , appliance-by-appliance consumption information .
energy disaggregation is a computational technique for estimating appliance-by-appliance energy consumption from a whole-house meter signal . to conduct research on disaggregation algorithms , researchers require data describing not just the aggregate demand per building but also the ground truth demand of individual appliances . in this context , we present uk-dale : an open-access dataset from the uk recording domestic appliance-level electricity at a sample rate of 16 khz for the whole-house and at 1/6 hz for individual appliances . this is the first open access uk dataset at this temporal resolution . we recorded from five houses , one of which was recorded for 655 days , the longest duration we are aware of for any energy dataset at this sample rate . we also describe the low-cost , open-source , wireless system we built for collecting our dataset . machine-accessible metadata file describing the reported data ( isa-tab format ) story_separator_special_tag the concept of the smart home has attracted considerable attention in recent years , and energy management is one of its key components . this is attributable to growing concerns about environmental protection and energy conservation , as well as the demands for big data collection from utility companies and policy makers . current solutions often approach this problem by either centralized non-intrusive load monitoring ( nilm ) or decentralized smart controls , but seldom both , rendering them impractical to some extent . therefore , in this paper , we propose a novel framework of smart home energy management systems incorporating both approaches , so that accurate power consumption monitoring and intuitive interaction with the home appliances are simultaneously achieved . the smart components directly control the appliances , while the central controller coordinates the data collection and communication . the key feature is the capability of automatically mapping the appliances to their corresponding sockets , reducing the necessity for manual initial setup . numerical simulations prove the accuracy and efficiency of the framework . we believe that our systems , if widely deployed , can benefit not only individual households by saving energy bills and simplifying life but story_separator_special_tag in this paper we present our approach to create an end-to-end software platform to enable the creation of meaningful and systematic , cross-dataset performance evaluations and benchmarks of non-intrusive load monitoring technology . we specifically propose a new file format to represent public datasets , a software framework to implement algorithms and metrics as well as the application of ceiling analysis to evaluate the overall performance of nilm systems . story_separator_special_tag we first review some of the suggested methods for energy disaggregation . we then provide a real-life data set for researchers to help develop novel energy disaggregation algorithms . we also present a publicly available experimental data set captured from a typical building on the university of california , berkeley campus . story_separator_special_tag the problem of estimating the electricity consumption of individual appliances in a building from a limited number of voltage and/or current measurements in the distribution system has received renewed interest from the research community in recent years . in this paper , we present a building-level fully-labeled dataset for electricity disaggregation ( blued ) .
the dataset consists of voltage and current measurements for a single-family residence in the united states , sampled at 12 khz for a whole week . every state transition of each appliance in the home during this time was labeled and time-stamped , providing the necessary ground truth for the evaluation of event-based algorithms . with this dataset , we aim to motivate algorithm development and testing . the paper describes the hardware and software configuration , as well as the dataset 's benefits and limitations . we also present some of our detection results as a preliminary benchmark . story_separator_special_tag the goal of the smart* project is to optimize home energy consumption . as part of the project , we have designed and deployed a live system that continuously gathers a wide variety of environmental and operational data in three real homes . in contrast to prior work , our focus has been on sensing depth , i.e . , collecting as much data as possible from each home , rather than breadth , i.e . , collecting data from as many homes as possible . our data captures many important aspects of the home environment , including average household electricity usage every second , as well as usage at every circuit and nearly every plug load , electricity generation data from on-site solar panels and wind turbines , outdoor weather data , temperature and humidity data in indoor rooms , and , finally , data for a range of important binary events , e.g . , at wall switches , the hvac system , doors , and from motion sensors . we also have electricity usage data every minute from 400 anonymous homes . this data corpus has served as the foundation for much of our recent research . in this story_separator_special_tag providing detailed appliance-level energy consumption information may lead consumers to understand their usage behavior and encourage them to optimize their energy usage . non-intrusive load monitoring ( nilm ) or energy disaggregation aims to estimate appliance-level energy consumption from the aggregate consumption data of households . nilm algorithms proposed hitherto are either centralized or require high-performance systems to derive appliance-level data , owing to the associated computational complexity . this approach raises several issues related to scalability and the privacy of consumers ' data . in this paper , we present the location-aware energy disaggregation framework ( loced ) , which utilizes the occupancy of users to derive accurate appliance-level usage information . the loced framework limits the appliances considered for disaggregation based on the current location of occupants . thus , loced can provide real-time feedback on appliance-level energy consumption and run on an embedded system locally at the household . we propose several accuracy metrics to study the performance of loced . to test the robustness of loced , we empirically evaluated it across multiple publicly available datasets . loced achieves high energy disaggregation accuracy while greatly reducing the computational complexity . story_separator_special_tag dynamic load management , i.e . , allowing electricity utilities to remotely turn electric appliances in households on or off , represents a key element of the smart grid . appliances should however only be disconnected from mains when no negative side effects , e.g . , loss of data or thawing food , are incurred thereby .
this motivates the use of appliance identification techniques , which determine the type of an attached appliance based on the continuous sampling of its power consumption . while various implementations based on different sampling resolutions have been presented in the existing literature , the achievable classification accuracies have rarely been analyzed . we address this shortcoming and evaluate the accuracy of appliance identification based on the characteristic features of traces collected during the 24 hours of a day . we evaluate our algorithm using more than 1,000 traces of different electrical appliances ' power consumption . the results show that our approach can identify most of the appliances with high accuracy . story_separator_special_tag with the cost of consuming resources increasing ( both economically and ecologically ) , homeowners need to find ways to curb consumption . the almanac of minutely power dataset version 2 ( ampds2 ) has been released to help computational sustainability researchers , power and energy engineers , building scientists and technologists , utility companies , and eco-feedback researchers test their models , systems , algorithms , or prototypes on real house data . in the vast majority of cases , real-world datasets lead to more accurate models and algorithms . ampds2 is the first dataset to capture all three main types of consumption ( electricity , water , and natural gas ) over a long period of time ( 2 years ) and provide 11 measurement characteristics for electricity . no other such datasets from canada exist . each meter has 730 days of captured data . we also include environmental and utility billing data for cost analysis . ampds2 data has been pre-cleaned to provide for consistent and comparable accuracy results amongst different researchers and machine learning algorithms . story_separator_special_tag smart meter roll-outs provide easy access to granular meter measurements , enabling advanced energy services , ranging from demand response measures to tailored energy feedback and smart home/building automation . to design such services , and to train and validate models , access to data that resembles what is expected of smart meters , collected in a real-world setting , is necessary . the refit electrical load measurements dataset described in this paper includes whole-house aggregate loads and nine individual appliance measurements at 8-second intervals per house , collected continuously over a period of two years from 20 houses . during monitoring , the occupants were conducting their usual routines . at the time of publishing , the dataset has the largest number of houses monitored in the united kingdom at less than 1-minute intervals over a period greater than one year . the dataset comprises 1,194,958,790 readings , which represent over 250,000 monitored appliance uses . the data is accessible in an easy-to-use comma-separated format , is time-stamped , and has been cleaned to remove invalid measurements , correctly label appliance data and fill in small gaps of missing data . story_separator_special_tag home energy management systems can be used to monitor and optimize consumption and local production from renewable energy . to assess solutions before their deployment , researchers and designers of those systems call for energy consumption datasets . in this paper , we present the greend dataset , containing detailed power usage information obtained through a measurement campaign in households in austria and italy .
we provide a description of consumption scenarios and discuss design choices for the sensing infrastructure . finally , we benchmark the dataset with state-of-the-art techniques in load disaggregation , occupancy detection and appliance usage mining . story_separator_special_tag we report on the creation of a database of appliance consumption signatures and two test protocols to be used for appliance recognition tasks . by means of plug-based low-end sensors measuring the electrical consumption at low frequency , typically every 10 seconds , we made two acquisition sessions of one hour on about 100 home appliances divided into 10 categories : mobile phones ( via chargers ) , coffee machines , computer stations ( including monitor ) , fridges and freezers , hi-fi systems ( cd players ) , lamps ( cfl ) , laptops ( via chargers ) , microwave ovens , printers , and televisions ( lcd or led ) . we measured their consumption in terms of real power ( w ) , reactive power ( var ) , rms current ( a ) and phase of voltage relative to current ( φ ) . we now give free access to this acs-f1 database . the proposed test protocols will help the scientific community to objectively compare new algorithms . story_separator_special_tag non-intrusive load monitoring ( nilm ) has recently experienced a rebirth due to the expanding deployment of network-connected smart meters by utilities and the increasing availability of internet-enabled consumer-grade power meters . while many dimensions of the problem have been well-studied over the past 25 years , we argue that prior work has placed too much emphasis on incremental improvements in accuracy and not enough on designing novel nilm applications . as a result , the basic nilm problem and its primary application , a simple appliance-level breakdown of home energy usage , have remained unchanged since their inception . we believe a renewed focus on nilm applications could help steer future research in novel directions by exposing new problem variants , data analysis techniques , and evaluation metrics . in this paper , we summarize our own application-centric research agenda , which focuses on online applications that generate results in real time as smart meters produce data . as we discuss , our focus on applications has led us to consider efficiency and performance issues not addressed in prior work , which typically targets offline data analysis . story_separator_special_tag in this paper we propose an approach by which the energy efficiency of individual appliances can be estimated from an aggregate load . to date , energy disaggregation research has presented results for small data sets of 7 households or fewer , and as a result the generality of results is often unknown . in contrast , we have deployed household electricity sensors to 117 households and evaluated the accuracy with which our approach can identify the energy efficiency of refrigerators and freezers from an aggregate load . crucially , our approach does not require training data to be collected by sub-metering individual appliances , nor does it assume any knowledge of the appliances present in the household . instead , our approach uses prior models of general appliance types that are used to first identify which households contain either a combined fridge-freezer or separate refrigerator and freezer , and subsequently to estimate the energy efficiency of such appliances .
finally , we calculate the time until the energy savings of replacing such appliances have offset the cost of the replacement appliance , which we show can be as low as 2.5 years . story_separator_special_tag non-intrusive load monitoring ( nilm ) is a technique for deducing the power consumption and operational schedule of individual loads in a building from measurements of the overall voltage and current feeding it , using information and communication technologies . in this article , we review the potential of this technology to enhance residential electricity audits . first , we review the currently commercially available whole-house and plug-level technology for residential electricity monitoring in the context of supporting audits . we then contrast this with nilm and show the advantages and disadvantages of the approach by discussing results from a prototype system installed in an apartment unit . recommendations for improving the technology to allow detailed , continuous appliance-level auditing of residential buildings are provided , along with ideas for possible future work in the field . story_separator_special_tag we present a forecast for systems-focused applications of non-intrusive load monitoring ( nilm ) , which meet the needs of homeowners , the technology sector , the service sector , and/or utilities . we discuss both near- and long-term applications . story_separator_special_tag the deployment of smart meters has made available high-frequency ( minutes as opposed to monthly ) measurements of electricity usage at individual households . converting these measurements to knowledge that can improve energy efficiency in the residential sector is critical to attract further smart grid investments and engage electricity consumers in the path towards reducing the global carbon footprint . the goal of the reported research is to use smart meter measurement data to identify heating and cooling usage levels for a home . this is important to cost-effectively design consumer energy services such as energy audits and demand response targeted at improving an individual household 's heating usage efficiency . we present a machine learning approach akin to non-intrusive load monitoring ( nilm ) to disaggregate heating usage from measurements of a household 's total electricity usage . we use as input 15-minute interval meter data and hourly outdoor temperature measurements . our approach does not require a manual set-up procedure at each house . the method uses a hidden markov model to capture the dependence of heating usage on outdoor temperature . compared to existing methods based on linear regression , the proposed method provides details on heating story_separator_special_tag this paper brings the application of non-intrusive load monitoring ( nilm ) into demand response ( dr ) . nilm is usually applied to identify the major loads in buildings , which is very promising in meeting the load monitoring requirements of demand response . unlike the traditional approach of nilm in energy auditing , a new nilm system for dr is established based on a comprehensive analysis of the requirements of demand response . the new system is designed from both hardware and software aspects , with a more practical load space and more explicit measurement criteria . the ultimate goal of this paper is to pave the way for future researchers to work on nilm for demand response .
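as a concrete illustration of the payback arithmetic quoted earlier in this section ( annual savings from replacing an inefficient appliance offsetting its purchase cost ) , here is a minimal python sketch ; every number in it is a hypothetical stand-in rather than a value taken from the study .

# minimal sketch : years until annual savings offset the replacement cost .
# all figures below are hypothetical stand-ins , not values from the study .
OLD_FRIDGE_KWH_PER_YEAR = 600.0   # hypothetical , as estimated via disaggregation
NEW_FRIDGE_KWH_PER_YEAR = 150.0   # hypothetical , from an efficiency label
PRICE_PER_KWH = 0.20              # hypothetical electricity tariff
REPLACEMENT_COST = 225.0          # hypothetical purchase price

def payback_years(old_kwh, new_kwh, price, cost):
    """Years until the annual saving offsets the replacement cost."""
    annual_saving = (old_kwh - new_kwh) * price
    if annual_saving <= 0:
        return float("inf")  # replacement never pays for itself
    return cost / annual_saving

print(payback_years(OLD_FRIDGE_KWH_PER_YEAR, NEW_FRIDGE_KWH_PER_YEAR,
                    PRICE_PER_KWH, REPLACEMENT_COST))  # prints 2.5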
story_separator_special_tag since the early 1980s , the research community has developed ever more sophisticated algorithms for the problem of energy disaggregation , but despite decades of research , there is still a dearth of applications with demonstrated value . in this work , we explore a question that is highly pertinent to this research community : how good does energy disaggregation need to be in order to infer characteristics of a household ? we present novel techniques that use unsupervised energy disaggregation to predict both household occupancy and static properties of the household such as size of the home and number of occupants . results show that basic disaggregation approaches perform up to 30 % better at occupancy estimation than using aggregate power data alone , and are up to 10 % better at estimating static household characteristics . these results show that even rudimentary energy disaggregation techniques are sufficient for improved inference of household characteristics . to conclude , we re-evaluate the bar set by the community for energy disaggregation accuracy and try to answer the question `` how good is good enough ? '' story_separator_special_tag this paper presents a data collection and energy feedback platform for smart homes to enhance the value of information given by smart energy meter data by providing user-tailored real-time energy consumption feedback and advice that can be easily accessed and acted upon by the household . our data management platform consists of an sql server back-end which collects data , namely , aggregate power consumption as well as consumption of major appliances , temperature , humidity , light , and motion data . these data streams allow us to infer information about the household 's appliance usage and domestic activities , which in turn enables meaningful and useful energy feedback . the platform developed has been rolled out in 20 uk households over a period of just over 21 months . as well as the data streams mentioned , qualitative data such as appliance surveys , tariff , house construction type and occupancy information are also included . the paper presents a review of publicly available smart home datasets and a description of our own smart home set-up and monitoring platform . we then provide examples of the types of feedback story_separator_special_tag this study takes a look at the national energy outlook of nigeria . the country 's energy utilization pattern was investigated , and possible areas of energy conservation in the major economic sectors ( industry , transportation , office and residential buildings ) were considered . the study reveals that there is inefficient utilization of energy in the major economic sectors of the country . the study presents several energy conservation opportunities and identifies about six major areas in which energy conservation measures can effectively yield energy savings and support supply stability . such areas of focus for the application of energy conservation measures include manufacturing/industrial setups , office and residential buildings , power generation and distribution , transportation , and energy conservation through waste control .
various measures that need to be considered and appropriately addressed in moving towards energy sustainability in nigeria have been recommended , among which are energy use in ventilating equipment , lighting , electrically operated industrial machines and engines , and design for energy-efficient buildings . story_separator_special_tag residential buildings contribute significantly to the overall energy usage across the world . real deployments , and the data collected from them , play a critical role in providing insights into home energy consumption and occupant behavior . existing datasets from real residential deployments are all from developed countries . developing countries , such as india , present unique opportunities to evaluate the scalability of existing research in diverse settings . building upon more than a year of experience in sensor network deployments , we undertake an extensive deployment in a three-storey home in delhi , spanning 73 days from may to august 2013 , measuring electrical , water and ambient parameters . we used 33 sensors across the home , measuring these parameters , collecting a total of approx . 400 mb of data daily . we discuss the architectural implications for deployment systems that can be used for monitoring and control in the context of developing countries . addressing the unreliability of the electrical grid and internet in such settings , we present a sense local-store upload architecture for robust data collection . while providing several unique aspects , our deployment further validates the common considerations from similar residential deployments , story_separator_special_tag we examine 12 studies on the efficacy of disaggregated energy feedback . the average electricity reduction across these studies is 4.5 % . however , 4.5 % may be a positively-biased estimate of the savings achievable across the entire population because all 12 studies are likely to be prone to opt-in bias , hence none tests the effect of disaggregated feedback on the general population . disaggregation may not be required to achieve these savings : aggregate feedback alone drives 3 % reductions ; and the 4 studies which directly compared aggregate feedback against disaggregated feedback found that aggregate feedback is at least as effective as disaggregated feedback , possibly because web apps are viewed less often than in-home displays ( in the short term , at least ) and because some users do not trust fine-grained disaggregation ( although this may be an issue with the specific user interface studied ) . disaggregated electricity feedback may help a motivated sub-group of the population to save more energy , but fine-grained disaggregation may not be necessary to achieve these energy savings . disaggregation has many uses beyond those discussed in this paper but , on the specific question of promoting energy reduction in the story_separator_special_tag utilities have deployed tens of millions of smart meters , which record and transmit home energy usage at fine-grained intervals . these deployments are motivating researchers to develop new energy analytics that mine smart meter data to learn insights into home energy usage and behavior . unfortunately , a significant barrier to evaluating energy analytics is the overhead of instrumenting homes to collect aggregate energy usage data and data from each device .
as a result , researchers typically evaluate their analytics on only a small number of homes , and cannot rigorously vary a home 's characteristics to determine what attributes of its energy usage affect accuracy . to address the problem , we develop smartsim , a publicly available , device-accurate smart home energy trace generator . smartsim generates energy usage traces for devices by combining a device energy model , which captures its pattern of energy usage when active , with a device usage model , which specifies its frequency , duration , and time of activity . smartsim then generates aggregate energy data for a simulated home by combining the data from each device . we integrate smartsim with nilmtk , a publicly available toolkit for non-intrusive load
we show that the large n limit of certain conformal field theories in various dimensions includes in their hilbert space a sector describing supergravity on the product of anti-de sitter spacetimes , spheres and other compact manifolds . this is shown by taking some branes in the full m/string theory and then taking a low energy limit where the field theory on the brane decouples from the bulk . we observe that , in this limit , we can still trust the near horizon geometry for large n . the enhanced supersymmetries of the near horizon geometry correspond to the extra supersymmetry generators present in the superconformal group ( as opposed to just the super-poincare group ) . the 't hooft limit of 3+1 dimensional n=4 super-yang-mills at the conformal point is shown to contain strings : they are iib strings . we conjecture that compactifications of m/string theory on various anti-de sitter spacetimes are dual to various conformal field theories . this leads to a new proposal for a definition of m-theory which could be extended to incl . story_separator_special_tag recently , it has been proposed by maldacena that large n limits of certain conformal field theories in d dimensions can be described in terms of supergravity ( and string theory ) on the product of ( d+1 ) -dimensional ads space with a compact manifold . here we elaborate on this idea and propose a precise correspondence between conformal field theory observables and those of supergravity : correlation functions in conformal field theory are given by the dependence of the supergravity action on the asymptotic behavior at infinity . in particular , dimensions of operators in conformal field theory are given by masses of particles in supergravity . as quantitative confirmation of this correspondence , we note that the kaluza-klein modes of type iib supergravity on ads_5 × s^5 match with the chiral operators of n=4 super yang-mills theory in four dimensions . with some further assumptions , one can deduce a hamiltonian version of the correspondence and show that the n=4 theory has a large n phase transition related to the thermodynamics of ads black holes . story_separator_special_tag we calculate semiclassically the emission rate of spin 1/2 particles from charged , nonrotating black holes in d=5 , n=8 supergravity . the relevant dirac equation is solved by the same approximation as in the bosonic case . the resulting expression for the emission rate has a form which is predicted from d-brane effective field theory . story_separator_special_tag we construct three dimensional chern-simons-matter theories with gauge groups u ( n ) × u ( n ) and su ( n ) × su ( n ) which have explicit n = 6 superconformal symmetry . using brane constructions we argue that the u ( n ) × u ( n ) theory at level k describes the low energy limit of n m2-branes probing a c^4/z_k singularity . at large n the theory is then dual to m-theory on ads_4 × s^7/z_k . the theory also has a 't hooft limit ( of large n with a fixed ratio n/k ) which is dual to type iia string theory on ads_4 × cp^3 . for k = 1 the theory is conjectured to describe n m2-branes in flat space , although our construction realizes explicitly only six of the eight supersymmetries . we give some evidence for this conjecture , which is similar to the evidence for mirror symmetry in d = 3 gauge theories . when the gauge group is su ( 2 ) × su ( 2 ) our theory has extra symmetries and becomes identical to the bagger-lambert theory .
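the correspondence sketched in the abstracts above is often summarized by a single schematic relation ; the following latex rendering is a sketch of that dictionary ( for a bulk scalar of mass m dual to an operator of dimension δ in d boundary dimensions ) , not a formula taken verbatim from any one of the papers :

% schematic ads/cft dictionary : the cft generating functional equals the bulk
% partition function evaluated with prescribed boundary data for the fields .
\[
  \Big\langle \exp \int \phi_0\, \mathcal{O} \Big\rangle_{\mathrm{CFT}}
  \;=\; Z_{\mathrm{string}}\big[\,\phi \to \phi_0 \ \text{on} \ \partial AdS_{d+1}\,\big],
  \qquad
  \Delta\,(\Delta - d) \;=\; m^2 L^2 .
\]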
story_separator_special_tag we present a trace formula for an index over the spectrum of four dimensional superconformal field theories on s^3 × time . our index receives contributions from states invariant under at least one supercharge and captures all information that may be obtained purely from group theory about protected short representations in 4 dimensional superconformal field theories . in the case of the n=4 theory our index is a function of four continuous variables . we compute it at weak coupling using gauge theory and at strong coupling by summing over the spectrum of free massless particles in ads_5 × s^5 and find perfect agreement at large n and small charges . our index does not reproduce the entropy of supersymmetric black holes in ads_5 , but this is not a contradiction , as it differs qualitatively from the partition function over supersymmetric states of the n=4 theory . we note that entropy for some small supersymmetric ads_5 black holes may be reproduced via a d-brane counting involving giant gravitons . for big black holes we find a qualitative ( but not exact ) agreement with story_separator_special_tag we present a trace formula for a witten-type index for superconformal field theories in d = 3 , 5 and 6 dimensions , generalizing a similar recent construction in d = 4 . we perform a detailed study of the decomposition of long representations into sums of short representations at the unitarity bound to demonstrate that our trace formula yields the most general index ( i.e . a quantity that is guaranteed to be protected by superconformal symmetry alone ) for the corresponding superalgebras . using the dual gravitational description , we compute our index for the theory on the world volume of n m2 and m5 branes in the large n limit . we also compute our index for recently constructed chern-simons theories in three dimensions in the large n limit , and find that , in certain cases , this index undergoes a large n phase transition as a function of chemical potentials . story_separator_special_tag aharony , bergman , jafferis and maldacena have recently proposed a dual gravitational description for a family of superconformal chern-simons theories in three spacetime dimensions . in this note we perform the one loop computation that determines the field theory superconformal index of this theory and compare with the index computed over the fock space of dual supersymmetric gravitons . in the appropriate limit ( large n and large k ) we find a perfect match . story_separator_special_tag we calculate the superconformal index for n = 6 chern-simons-matter theory with gauge group u ( n ) _k × u ( n ) _ { -k } at arbitrary allowed value of the chern-simons level k . the calculation is based on localization of the path integral for the index . our index counts supersymmetric gauge invariant operators containing inclusions of magnetic monopole operators , where the latter operators create magnetic fluxes on the 2-sphere . through analytic and numerical calculations in various sectors , we show that our result perfectly agrees with the index over supersymmetric gravitons in ads_4 × s^7/z_k in the large n limit . monopole operators in nontrivial representations of u ( n ) × u ( n ) play important roles . we also comment on possible applications of our methods to other superconformal chern-simons theories .
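the trace formulas referred to in these index papers share a common witten-type structure ; the following latex expression is a schematic sketch ( the precise exponents and chemical potentials depend on the superalgebra , e.g . the four continuous variables of the n=4 index above ) :

% schematic superconformal index : only states annihilated by the chosen
% supercharge q contribute , so the index is independent of beta and is
% protected by superconformal symmetry alone .
\[
  \mathcal{I}(x,\mu_i) \;=\; \mathrm{Tr}\,\Big[ (-1)^F \, e^{-\beta\,\{Q,Q^\dagger\}} \,
  x^{\Delta + j} \prod_i \mu_i^{F_i} \Big].
\]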
story_separator_special_tag n=4 super yang-mills theory supplies us with a non-abelian 4d gauge theory with a meaningful perturbation expansion , both in the uv and in the ir . we calculate the free energy on a 3-sphere and observe a deconfinement transition for large n at zero coupling . the same thermodynamic behaviour is found for a wide class of toy models , possibly also including the case of nonzero coupling . below the transition we also find hagedorn behaviour , which is identified with fluctuations signaling the approach to the deconfined phase . the hagedorn and the deconfinement temperatures are identical . application of the ads/cft correspondence gives a connection between string hagedorn behaviour and black holes . story_separator_special_tag we demonstrate that weakly coupled , large n , d-dimensional su ( n ) gauge theories on a class of compact spatial manifolds ( including s^{d-1} × time ) undergo deconfinement phase transitions at temperatures proportional to the inverse length scale of the manifold in question . the low temperature phase has a free energy of order one , and is characterized by a stringy ( hagedorn ) growth in its density of states . the high temperature phase has a free energy of order n^2 . these phases are separated either by a single first-order transition that generically occurs below the hagedorn temperature or by two continuous phase transitions , the first of which occurs at the hagedorn temperature . these phase transitions could perhaps be continuously connected to the usual flat space deconfinement transition in the case of confining gauge theories , and to the hawking-page nucleation of ads_5 black holes in the case of the n=4 supersymmetric yang-mills theory . we suggest that deconfinement transitions may generally be interpreted in terms of black story_separator_special_tag we carry out a thorough survey of entropy for a large class of p-branes in various dimensions . we find that the bekenstein-hawking entropy may be given a simple world volume interpretation only for the non-dilatonic p-branes , those with the dilaton constant throughout spacetime . the entropy of extremal non-dilatonic p-branes is non-vanishing only for the solutions preserving 1/8 of the original supersymmetries . upon toroidal compactification these reduce to dyonic black holes in 4 and 5 dimensions . for the self-dual string in 6 dimensions , which preserves 1/4 of the original supersymmetries , the near-extremal entropy is found to agree with a world sheet calculation , in support of the existing literature . the remaining 3 interesting cases preserve 1/2 of the original supersymmetries . these are the self-dual 3-brane in 10 dimensions , and the 2- and 5-branes in 11 dimensions . for all of them the scaling of the near-extremal bekenstein-hawking entropy with the hawking temperature is in agreement with a statistical description in terms of free massless fields on the world volume . story_separator_special_tag we calculate the weyl anomaly for conformal field theories that can be described via the ads/cft correspondence . this entails regularizing the gravitational part of the corresponding supergravity action in a manner consistent with general covariance . up to a constant , the anomaly only depends on the dimension d of the manifold on which the conformal field theory is defined . we present concrete expressions for the anomaly in the physically relevant cases d = 2 , 4 and 6 .
in d = 2 we find for the central charge c = 3l/2g_n , in agreement with considerations based on the asymptotic symmetry algebra of ads_3 . in d = 4 the anomaly agrees precisely with that of the corresponding n = 4 superconformal su ( n ) gauge theory . the result in d = 6 provides new information for the ( 0 , 2 ) theory , since its weyl anomaly has not been computed previously . the anomaly in this case grows as n^3 , where n is the number of coincident m5 branes , and it vanishes for a ricci-flat background . story_separator_special_tag we examine the recently proposed technique of adding boundary counterterms to the gravitational action for spacetimes which are locally asymptotic to anti-de sitter spacetimes . in particular , we explicitly identify higher order counterterms , which allow us to consider spacetimes of dimensions d ≤ 7 . as the counterterms eliminate the need for `` background subtraction '' in calculating the action , we apply this technique to study examples where the appropriate background was ambiguous or unknown : topological black holes , taub-nut-ads and taub-bolt-ads . we also identify certain cases where the covariant counterterms fail to render the action finite , and we comment on the dual field theory interpretation of this result . in some examples , the case of a vanishing cosmological constant may be recovered in a limit , which allows us to check results and resolve ambiguities in certain asymptotically flat spacetime computations in the literature . story_separator_special_tag we propose a procedure for computing the boundary stress tensor associated with a gravitating system in asymptotically anti-de sitter space . our definition is free of ambiguities encountered by previous attempts , and correctly reproduces the masses and angular momenta of various spacetimes . via the ads/cft correspondence , our classical result is interpretable as the expectation value of the stress tensor in a quantum conformal field theory . we demonstrate that the conformal anomalies in two and four dimensions are recovered . the two dimensional stress tensor transforms with a schwarzian derivative and the expected central charge . we also find a nonzero ground state energy for global ads_5 , and show that it exactly matches the casimir energy of the dual n=4 super yang-mills theory on s^3 × r . story_separator_special_tag we use localization techniques to compute the expectation values of supersymmetric wilson loops in chern-simons theories with matter . we find the path-integral reduces to a non-gaussian matrix model . the wilson loops we consider preserve a single complex supersymmetry , and exist in any n=2 theory , though the localization requires superconformal symmetry . we present explicit results for the cases of pure chern-simons theory with gauge group u ( n ) , showing agreement with the known results , and abjm , showing agreement with perturbative calculations . our method applies to other theories , such as gaiotto-witten theories , blg , and their variants . story_separator_special_tag the partition function of n=6 supersymmetric chern-simons-matter theory ( known as abjm theory ) on s^3 , as well as certain wilson loop observables , are captured by a zero dimensional super-matrix model . this super-matrix model is closely related to a matrix model describing topological chern-simons theory on a lens space . we explore further these recent observations and extract more exact results in abjm theory from the matrix model .
in particular we calculate the planar free energy , which matches at strong coupling the classical iia supergravity action on ads_4 × cp^3 and gives the correct n^{3/2} scaling for the number of degrees of freedom of the m2-brane theory . furthermore we find contributions coming from world-sheet instanton corrections in cp^3 . we also calculate non-planar corrections , both to the free energy and to the wilson loop expectation values . this matrix model appears also in the study of topological strings on a toric calabi-yau manifold , and an intriguing connection arises between the space of couplings of the planar abjm theory and the moduli space of this calabi-yau . in particular it suggests that , in addition to the usual perturbative story_separator_special_tag localization methods reduce the path integrals in n ≥ 2 supersymmetric chern-simons gauge theories on s^3 to multimatrix integrals . a recent evaluation of such a two-matrix integral for the n=6 superconformal u ( n ) × u ( n ) aharony-bergman-jafferis-maldacena theory produced detailed agreement with the ads/cft correspondence , explaining , in particular , the n^{3/2} scaling of the free energy . we study a class of p-matrix integrals describing n=3 superconformal u ( n ) ^p chern-simons gauge theories . we present a simple method that allows us to evaluate the eigenvalue densities and the free energies in the large n limit keeping the chern-simons levels k_i fixed . the dual m-theory backgrounds are ads_4 × y , where y are seven-dimensional tri-sasaki einstein spaces specified by the k_i . the gravitational free energy scales inversely with the square root of the volume of y . we find a general formula for the p-matrix free energies that agrees with the available results for volumes of the tri-sasaki einstein spaces y , thus providing a story_separator_special_tag using the matrix model which calculates the exact free energy of abjm theory on s^3 we study non-perturbative effects in the large n expansion of this model , i.e . , in the genus expansion of type iia string theory on ads_4 × cp^3 . we propose a general prescription to extract spacetime instanton actions from general matrix models , in terms of period integrals of the spectral curve , and we use it to determine them explicitly in the abjm matrix model , as exact functions of the 't hooft coupling . we confirm numerically that these instantons control the asymptotic growth of the genus expansion . furthermore , we find that the dominant instanton action at strong coupling determined in this way exactly matches the action of a euclidean d2-brane instanton wrapping rp^3 . story_separator_special_tag the localization technique allows us to compute the free energy of the u ( n ) _k × u ( n ) _ { -k } chern-simons-matter theory dual to type iia strings on ads_4 × cp^3 from weak to strong 't hooft coupling λ = n/k at finite n , as demonstrated by drukker , marino , and putrov . in this note we study further the free energy at large 't hooft coupling with the aim of testing ads/cft at the quantum gravity level and , in particular , sum up all the 1/n corrections , apart from the worldsheet instanton contributions . the all-genus partition function takes a remarkably simple form : the airy function ai ( k^{4/3} λ_r ) , with the renormalized 't hooft coupling λ_r .
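for orientation , the two statements above can be written schematically as follows ; the leading n^{3/2} free energy of abjm theory is standard , while a ( k ) , b ( k ) and c ( k ) denote k-dependent constants whose exact values are derived in the papers summarized here :

% leading planar free energy and the conjectured all-genus airy resummation
% ( worldsheet-instanton corrections omitted ) .
\[
  F_{S^3} \;\simeq\; \frac{\pi\sqrt{2}}{3}\, k^{1/2} N^{3/2},
  \qquad
  Z(N) \;\simeq\; e^{A(k)}\, C(k)^{-1/3}\,
  \mathrm{Ai}\!\left[ C(k)^{-1/3}\big(N - B(k)\big) \right].
\]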
story_separator_special_tag the partition function on the three-sphere of many supersymmetric chern-simons-matter theories reduces , by localization , to a matrix model . we develop a new method to study these models in the m-theory limit , but at all orders in the 1/n expansion . the method is based on reformulating the matrix model as the partition function of an ideal fermi gas with a non-trivial , one-particle quantum hamiltonian . this new approach leads to a completely elementary derivation of the n^{3/2} behavior for abjm theory and n=3 quiver chern-simons-matter theories . in addition , the full series of 1/n corrections to the original matrix integral can be simply determined by a next-to-leading calculation in the wkb or semiclassical expansion of the quantum gas , and we show that , for several quiver chern-simons-matter theories , it is given by an airy function . this generalizes a recent result of fuji , hirano and moriyama for abjm theory . it turns out that the semiclassical expansion of the fermi gas corresponds to a strong coupling expansion in type iia theory , and it is dual to the genus expansion . this allows us to calculate explicitly non-perturbative story_separator_special_tag we report on the exact computation of the s^3 partition function of u ( n ) _k × u ( n ) _ { -k } abjm theory for k = 1 and n = 1 , ... , 19 . the result is a polynomial in π^{-1} with rational coefficients . as an application of our results , we numerically determine the coefficient of the membrane 1-instanton correction to the partition function . story_separator_special_tag we study the fermi gas quantum mechanics associated to the abjm matrix model . we develop the method to compute the grand partition function of the abjm theory , and compute exactly the partition function z ( n ) up to n=9 when the chern-simons level k=1 . we find that the eigenvalue problem of this quantum mechanical system is reduced to the diagonalization of a certain hankel matrix . in reducing the number of integrations by commuting coordinates and momenta , we find an exact relation concerning the grand partition function , which is interesting in its own right and very helpful for determining the partition function . we also study the tba-type integral equations that allow us to compute the grand partition function numerically . surprisingly , all of our exact results for the partition functions are written in terms of polynomials of 1/π with rational coefficients . story_separator_special_tag we study the instanton effects of the abjm partition function using the fermi gas formalism . we compute the exact values of the partition function at the chern-simons levels k=1,2,3,4,6 up to n=44,20,18,16,14 respectively , and extract non-perturbative corrections from these exact results . fitting the resulting non-perturbative corrections by their expected forms from the fermi gas , we determine unknown parameters in them . after separating the oscillating behavior of the grand potential , which originates in the periodicity of the grand partition function , and the worldsheet instanton contribution , which is computed from the topological string theory , we succeed in proposing an analytical expression for the leading d2-instanton correction . just as for the perturbative result , the instanton corrections to the partition function are expressed in terms of the airy function . story_separator_special_tag the partition function of the abjm theory receives non-perturbative corrections due to instanton effects .
we study these non-perturbative corrections , including bound states of worldsheet instantons and membrane instantons , in the fermi-gas approach . we require that the total non-perturbative correction should always be finite for arbitrary chern-simons level . this finiteness is realized quite non-trivially because each bound state contribution naively diverges at some levels . the poles of each contribution should be canceled out in total . we use this pole cancellation mechanism to find unknown bound state corrections from known ones . we conjecture a general expression for the bound state contribution . summing up all the bound state contributions , we find that the effect of bound states is simply incorporated into the worldsheet instanton correction by a redefinition of the chemical potential in the fermi-gas system . analytic expressions for the 3- and 4-membrane instanton corrections are also proposed . story_separator_special_tag the partition function on the three-sphere of abjm theory contains non-perturbative corrections which correspond to membrane instantons in m-theory . these corrections can be studied in the fermi gas approach to the partition function , and they are encoded in a system of integral equations of the tba type . we study a semiclassical or wkb expansion of this tba system in the abjm coupling k , which corresponds to the strong coupling expansion of the type iia string . this allows us to study membrane instanton corrections in m-theory at high order in the wkb expansion . using these wkb results , we verify the conjectures for the form of the one-instanton correction at finite k proposed recently by hatsuda , moriyama and okuyama ( hmo ) , which are in turn based on a conjectural cancellation of divergences between worldsheet instantons and membrane instantons . the hmo cancellation mechanism is important since it shows , in a precise , quantitative way , that the perturbative genus expansion is radically insufficient at strong coupling , and that non-perturbative membrane effects are essential to make sense of the theory . we propose analytic expressions in k for the full two-membrane instanton story_separator_special_tag the partition function of abjm theory on the three-sphere has non-perturbative corrections due to membrane instantons in the m-theory dual . we show that the full series of membrane instanton corrections is completely determined by the refined topological string on the calabi-yau manifold known as local p^1 × p^1 , in the nekrasov-shatashvili limit . our result can be interpreted as a first-principles derivation of the full series of non-perturbative effects for the closed topological string on this calabi-yau background . based on this , we make a proposal for the non-perturbative free energy of topological strings on general , local calabi-yau manifolds . story_separator_special_tag the three-sphere partition function , z , of three dimensional theories with four supercharges and an r-symmetry is computed using localization , resulting in a matrix integral over the cartan of the gauge group . there is a family of couplings to the curved background , parameterized by a choice of r-charge , such that supersymmetry is preserved ; z is a function of those parameters . the magnitude of the result is shown to be extremized for the superconformal r-charge of the infrared conformal field theory , in the absence of mixing of the r-symmetry with accidental symmetries . this exactly determines the ir superconformal r-charge .
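the extremization statement in the last abstract can be put in one line : with z_{s^3} ( δ_i ) the localized three-sphere partition function as a function of trial r-charges δ_i , a schematic latex rendering is

% f-maximization : the extremum picks out the superconformal r-charge of the
% ir fixed point ; the f-theorem proposal in the following abstracts further
% states that f decreases along rg flows .
\[
  F(\Delta_i) \;=\; -\log \big| Z_{S^3}(\Delta_i) \big| ,
  \qquad
  \frac{\partial F}{\partial \Delta_i}\bigg|_{\Delta_i = \Delta_i^*} = 0 .
\]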
story_separator_special_tag we extend the formula for partition functions of n=2 superconformal gauge theories on s^3 obtained recently by kapustin , willett and yaakov , to incorporate matter fields with arbitrary r-charge assignments . we use the result to check that the self-mirror property of n=4 sqed with two electron hypermultiplets is preserved under a certain mass deformation which breaks the supersymmetry to n=2 . story_separator_special_tag for 3-dimensional field theories with n = 2 supersymmetry the euclidean path integrals on the three-sphere can be calculated using the method of localization ; they reduce to certain matrix integrals that depend on the r-charges of the matter fields . we solve a number of such large n matrix models and calculate the free energy f as a function of the trial r-charges consistent with the marginality of the superpotential . in all our n = 2 superconformal examples , the local maximization of f yields answers that scale as n^{3/2} and agree with the dual m-theory backgrounds ads_4 × y , where y are 7-dimensional sasaki-einstein spaces . we also find in toric examples that local f-maximization is equivalent to the minimization of the volume of y over the space of sasakian metrics , a procedure also referred to as z-minimization . moreover , we find that the functions f and z are related for any trial r-charges . in the models we study f is positive and decreases along rg flows . we therefore propose the f-theorem that we hope applies to all 3-d field theories : story_separator_special_tag we consider three-dimensional n=2 superconformal field theories on a three-sphere and analyze their free energy f as a function of background gauge and supergravity fields . a crucial role is played by certain local terms in these background fields , including several chern-simons terms . the presence of these terms clarifies a number of subtle properties of f . this understanding allows us to prove the f-maximization principle . it also explains why computing f via localization leads to a complex answer , even though we expect it to be real in unitary theories . we discuss several corollaries of our results and comment on the relation to the f-theorem . story_separator_special_tag we show that the reeb vector , and hence in particular the volume , of a sasaki-einstein metric on the base of a toric calabi-yau cone of complex dimension n may be computed by minimising a function z on r^n which depends only on the toric data that defines the singularity . in this way one can extract certain geometric information for a toric sasaki-einstein manifold without finding the metric explicitly . for complex dimension n=3 the reeb vector and the volume correspond to the r-symmetry and the a central charge of the ads/cft dual superconformal field theory , respectively . we therefore interpret this extremal problem as the geometric dual of a-maximisation . we illustrate our results with some examples , including the y^{p,q} singularities and the complex cone over the second del pezzo surface . story_separator_special_tag we investigate infinite families of 3d n = 2 superconformal chern-simons quivers with an arbitrarily large number of gauge groups arising on m2-branes over toric cy_4 's . these theories have the same matter content and superpotential as those on d3-branes probing cones over l^{a,b,a} sasaki-einstein manifolds .
for all these infinite families , we explicitly show the correspondence between the free energy f on s^3 and the volume of the 7-dimensional base of the associated cy_4 , even before extremization . symmetries of the toric diagram are exploited for reducing the dimensionality of the space over which the volume of the sasaki-einstein manifold is extremized . similarly , the space of trial r-charges of the gauge theory is constrained using symmetries of the quiver . our results add to those existing in the literature , providing further support for the correspondence . we develop a lifting algorithm , based on the type iib realization of these theories , that takes from cy_3 's to cy_4 's , and we use it to efficiently generate the models studied in the paper . finally , we show that in all the story_separator_special_tag we study the supersymmetric free energy of three dimensional chern-simons-matter theories holographically dual to ads_4 times toric sasaki-einstein seven-manifolds . in the large n limit , we argue that the square of the free energy can be written as a quartic polynomial of trial r-charges . the coefficients of the polynomial are determined geometrically from the toric diagrams . we present the coefficients of the quartic polynomial explicitly for generic toric diagrams with up to 6 vertices , and some particular diagrams with 8 vertices . decomposing the trial r-charges into mesonic and baryonic variables , and eliminating the baryonic ones , we show that the quartic polynomial reproduces the inverse of the martelli-sparks-yau volume function . on the gravity side , we explore the possibility of using the same quartic polynomial as the prepotential in the ads gauged supergravity . comparing kaluza-klein gravity and gauged supergravity descriptions , we find perfect agreement in the mesonic sector but some discrepancy in the baryonic sector . story_separator_special_tag we establish an attractor mechanism for the horizon metric of asymptotically locally ads_4 supersymmetric black holes . the horizon is a smooth riemann surface with arbitrary metric at asymptotic infinity which is fixed to the constant curvature metric in the near horizon region . we show how this mechanism is realized for four-dimensional n = 2 gauged supergravity coupled to vector multiplets by focusing on the stu model . a similar analysis is performed for gauged supergravity theories in five , six , and seven dimensions where we establish the same mechanism by extending previous results on holographic uniformization . story_separator_special_tag the general form of n = 2 supergravity coupled to an arbitrary number of vector multiplets and hypermultiplets , with a generic gauging of the scalar manifold isometries , is given . this extends the results already available in the literature in that we use a coordinate-independent and manifestly symplectic covariant formalism which allows us to cover theories that are difficult to formulate within the superspace or tensor calculus approach . we provide the complete lagrangian and supersymmetry variations with all fermionic terms , and the form of the scalar potential for arbitrary quaternionic manifolds and special geometry , not necessarily in special coordinates . lagrangians for rigid theories are also written in this general setting and the connection with local theories elucidated . the derivation of these results using geometrical techniques is briefly summarized .
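the free-energy/volume correspondence discussed in the abstracts above takes a simple schematic form for m2-branes probing a cone over a sasaki-einstein seven-manifold y :

% free energy versus sasaki-einstein volume ; for y = s^7/z_k , with
% vol ( y ) = pi^4/(3k) , this reproduces the abjm result
% f = ( pi sqrt ( 2k ) / 3 ) n^{3/2} quoted earlier .
\[
  F_{S^3} \;=\; N^{3/2} \sqrt{\frac{2\pi^6}{27\,\mathrm{Vol}(Y)}}\; ,
\]

so maximizing f over trial r-charges on the field theory side matches minimizing vol ( y ) over sasakian metrics ( z-minimization ) on the geometry side .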
story_separator_special_tag we find a general principle which allows one to compute the area of the horizon of n=2 extremal black holes as an extremum of the central charge . one considers the adm mass , equal to the central charge , as a function of electric and magnetic charges and moduli and extremizes this function in the moduli space ( a minimum corresponds to a fixed point of attraction ) . the extremal value of the square of the central charge provides the area of the horizon , which depends only on electric and magnetic charges . the doubling of unbroken supersymmetry at the fixed point of attraction for n=2 black holes near the horizon is derived via conformal flatness of the bertotti-robinson-type geometry . these results provide an explicit model-independent expression for the macroscopic bekenstein-hawking entropy of n=2 black holes which is manifestly duality invariant . the presence of hypermultiplets in the solution does not affect the area formula . various examples of the general formula are displayed . we outline the attractor mechanism in n=4 , 8 supersymmetries and the relation to the n=2 case . the entropy-area formula in five story_separator_special_tag this paper addresses a long-standing problem , the counting of the microstates of supersymmetric asymptotically ads black holes in terms of a holographically dual field theory . we focus on a class of asymptotically ads_4 static black holes preserving two real supercharges which are dual to a topologically twisted deformation of the abjm theory . we evaluate in the large n limit the topologically twisted index of the abjm theory and we show that it correctly reproduces the entropy of the ads_4 black holes . an extremization of the index with respect to a set of chemical potentials is required . we interpret it as the selection of the exact r-symmetry of the superconformal quantum mechanics describing the horizon of the black hole . story_separator_special_tag we present a counting of microstates of a class of dyonic bps black holes in ads_4 which precisely reproduces their bekenstein-hawking entropy . the counting is performed in the dual boundary description , which provides a non-perturbative definition of quantum gravity , in terms of a twisted and mass-deformed abjm theory . we evaluate its twisted index and propose an extremization principle to extract the entropy , which reproduces the attractor mechanism in gauged supergravity . story_separator_special_tag we study the thermodynamics of the recently-discovered non-extremal charged rotating black holes of gauged supergravities in five , seven and four dimensions , obtaining energies , angular momenta and charges that are consistent with the first law of thermodynamics . we obtain their supersymmetric limits by using these expressions together with an analysis of the ads superalgebras including r-charges . we give a general discussion of the global structure of such solutions , and apply it in the various cases . we obtain new regular supersymmetric black holes in seven and four dimensions , as well as reproducing known examples in five and four dimensions . we also obtain new supersymmetric non-singular topological solitons in five and seven dimensions . the rest of the supersymmetric solutions either have naked singularities or naked time machines . the latter can be rendered non-singular if the asymptotic time is periodic .
this leads to a new type of quantum consistency condition , which we call a josephson quantisation condition . finally , we discuss some aspects of rotating black holes in gödel universe backgrounds . story_separator_special_tag we present new analytic rotating four-dimensional anti-de sitter space ( ads_4 ) black holes , found as solutions of gauged n=2 supergravity coupled to abelian vector multiplets with a symmetric scalar manifold . these configurations preserve two real supercharges and have a smooth limit to the bps kerr-newman-ads_4 black hole . we spell out the solution of the stu model admitting an uplift to m-theory on s^7 . we identify an entropy function , which upon extremization gives the black hole entropy , to be holographically reproduced by the leading n contribution of the generalized superconformal index of the dual theory . story_separator_special_tag we investigate the existence of supersymmetric static dyonic black holes with spherical horizon in the context of n = 2 u ( 1 ) gauged supergravity in four dimensions . we analyze the conditions for their existence and provide the general first-order flow equations driving the scalar fields and the metric warp factors from the asymptotic ads_4 geometry to the horizon . we work in a general duality-symmetric setup , which allows us to describe both electric and magnetic gaugings . we also discuss the attractor mechanism and the issue of moduli ( de- ) stabilization . story_separator_special_tag we provide a general formula for the partition function of three-dimensional n=2 gauge theories placed on s^2 × s^1 with a topological twist along s^2 , which can be interpreted as an index for chiral states of the theories immersed in background magnetic fields . the result is expressed as a sum over magnetic fluxes of the residues of a meromorphic form which is a function of the scalar zero-modes . the partition function depends on a collection of background magnetic fluxes and fugacities for the global symmetries . we illustrate our formula in many examples of 3d yang-mills-chern-simons theories with matter , including aharony and giveon-kutasov dualities . finally , our formula generalizes to ω-backgrounds , as well as two-dimensional theories on s^2 and four-dimensional theories on s^2 × t^2 . in particular this provides an alternative way to compute genus-zero a-model topological amplitudes and gromov-witten invariants . story_separator_special_tag suppose x is a compact symplectic manifold acted on by a compact lie group k ( which may be nonabelian ) in a hamiltonian fashion , with moment map μ : x → lie ( k ) ^* and marsden-weinstein reduction x_red = μ^{-1} ( 0 ) / k . there is then a natural surjective map κ_0 from the equivariant cohomology h^*_k ( x ) of x to the cohomology h^* ( x_red ) . in this paper we prove a formula ( theorem 8.1 , the residue formula ) for the evaluation on the fundamental class of x_red of any η_0 ∈ h^* ( x_red ) whose degree is the dimension of x_red , provided that 0 is a regular value of the moment map μ on x .
this formula is given in terms of any class $\eta \in H^*_K(X)$ for which $\kappa_0(\eta) = \eta_0$ , and story_separator_special_tag we compute the elliptic genera of two-dimensional $\mathcal{N}=(2,2)$ and $\mathcal{N}=(0,2)$ gauged linear sigma models via supersymmetric localization , for rank-one gauge groups . the elliptic genus is expressed as a sum over residues of a meromorphic function whose argument is the holonomy of the gauge field along both the spatial and the temporal directions of the torus . we illustrate our formulas by a few examples including the quintic calabi-yau , $\mathcal{N}=(2,2)$ su ( 2 ) and o ( 2 ) gauge theories coupled to n fundamental chiral multiplets , and a geometric $\mathcal{N}=(0,2)$ model . story_separator_special_tag we compute the elliptic genera of general two-dimensional $\mathcal{N}=(2,2)$ and $\mathcal{N}=(0,2)$ gauge theories . we find that the elliptic genus is given by the sum of jeffrey-kirwan residues of a meromorphic form , representing the one-loop determinant of fields , on the moduli space of flat connections on $T^2$ . we give several examples illustrating our formula , with both abelian and non-abelian gauge groups , and discuss some dualities for u ( k ) and su ( k ) theories . this paper is a sequel to the authors ' previous paper ( benini et al. , lett . math . phys . 104:465-493 , 2014 ) . story_separator_special_tag we compute the witten index of one-dimensional gauged linear sigma models with at least $\mathcal{N}=2$ supersymmetry . in the phase where the gauge group is broken to a finite group , the index is expressed as a certain residue integral . it is subject to a change as the fayet-iliopoulos parameter is varied through the phase boundaries . the wall crossing formula is expressed as an integral at infinity of the coulomb branch . the result is applied to many examples , including quiver quantum mechanics that is relevant for bps states in $d=4$ $\mathcal{N}=2$ theories .
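the three localization abstracts above all reduce a supersymmetric index to residues of a meromorphic one-loop form . as a reminder of the shared structure , a schematic version of both formulas follows ; the notation is an assumption for illustration ( $|W|$ is the order of the weyl group , $\mathfrak{m}$ runs over magnetic fluxes , $u$ is the gauge holonomy , $\eta$ is the jeffrey-kirwan reference covector ) , and normalizations follow the respective papers :

\begin{align*}
Z_{S^2 \times S^1} &= \frac{1}{|W|} \sum_{\mathfrak{m} \in \Gamma_{\mathfrak{h}}} \oint_{\mathrm{JK}} Z_{\text{int}}(u, \mathfrak{m})\, \mathrm{d}u , \\
Z_{T^2} &= \frac{1}{|W|} \sum_{u_*} \operatorname{JK-Res}_{u = u_*}\!\big(\mathrm{Q}(u_*), \eta\big)\, Z_{\text{1-loop}}(u)\, \mathrm{d}u ,
\end{align*}

with the sums running over the flux lattice and over the singular points of the one-loop form , respectively .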
face super-resolution is a subset of super-resolution ( sr ) that aims to retrieve a high-resolution ( hr ) image of a face from a lower resolution input . recently , deep learning ( dl ) methods have drastically improved the quality of sr-generated images . however , these qualitative improvements are not always followed by quantitative improvements in the traditional metrics of the field , namely psnr ( peak signal-to-noise ratio ) and ssim ( structural similarity index ) . in some cases , models that perform better in opinion scores and qualitative evaluation have worse performance in these metrics , indicating they are not sufficiently informative . to address this issue , we propose a task-based evaluation procedure based on the comparative performance of face recognition algorithms on hr and sr images to evaluate how well the models retrieve high-frequency and identity-defining information . furthermore , as our face recognition model is differentiable , this leads to a novel loss function that can be optimized to improve performance in these tasks . we successfully apply our evaluation method to validate this training method , yielding promising results . story_separator_special_tag in dental computed tomography ( ct ) scanning , high-quality images are crucial for oral disease diagnosis and treatment . however , many artifacts , such as metal artifacts , downsampling artifacts and motion artifacts , can degrade the image quality in practice . the main purpose of this article is to reduce motion artifacts , which are caused by the movement of patients during data acquisition in the dental ct scanning process . to remove them , the goal of this study was to develop a dental ct motion artifact-correction algorithm based on a deep learning approach . we used dental ct data with motion artifacts reconstructed by conventional filtered back-projection ( fbp ) as inputs to a deep neural network and used the corresponding high-quality ct data as labeled data during training . we propose training a generative adversarial network ( gan ) with wasserstein distance and mean squared error ( mse ) loss to remove motion artifacts and to obtain high-quality ct dental images . in our network , to improve the generator structure , the generator uses a cascaded cnn-style network with residual blocks . to the best of our knowledge , this story_separator_special_tag faces appear in low-resolution video sequences in various domains such as surveillance . the information accumulated over multiple frames can help super-resolution for high magnification factors . we present a method to super-resolve a face image using the consecutive frames of the face in the same sequence . our method is based on a novel multi-input-single-output framework with a siamese deep network architecture that fuses multiple frames into a single face image . contrary to existing work on video super-resolution , it is model-free and does not depend on facial landmark detection , which might be difficult to handle for very low-resolution faces . the experiments show that the use of multiple frames as input improves the performance compared to single-input-single-output systems . story_separator_special_tag in this paper , we study the various approaches and methodologies used for face hallucination . face hallucination was first presented as the problem of producing a high-resolution face image from a low-resolution one .
the numerous applications of this method include image enhancement , face recognition , surveillance and security . it is useful in surveillance and security systems to enhance a low-resolution face so that it possesses facial details matching those of a potential high-resolution image , helping in further analysis . in this paper we have analysed various approaches for enhancing low-resolution images , namely face hallucination ( fh ) with sparse representation , fh using eigentransformation , fh via locality constraint representation , and learning-based fh in the dct ( discrete cosine transform ) domain . story_separator_special_tag faces often appear very small in surveillance imagery because of the wide fields of view that are typically used and the relatively large distance between the cameras and the scene . for tasks such as face recognition , resolution enhancement techniques are therefore generally needed . although numerous resolution enhancement algorithms have been proposed in the literature , most of them are limited by the fact that they make weak , if any , assumptions about the scene . we propose an algorithm to learn a prior on the spatial distribution of the image gradient for frontal images of faces . we proceed to show how such a prior can be incorporated into a resolution enhancement algorithm to yield 4- to 8-fold improvements in resolution ( i.e. , 16 to 64 times as many pixels ) . the additional pixels are , in effect , hallucinated . story_separator_special_tag recent progress in face detection ( including keypoint detection ) and recognition is mainly being driven by ( i ) deeper convolutional neural network architectures , and ( ii ) larger datasets . however , most of the large datasets are maintained by private companies and are not publicly available . the academic computer vision community needs larger and more varied datasets to make further progress . in this paper , we introduce a new face dataset , called umdfaces , which has 367,888 annotated faces of 8,277 subjects . we also introduce a new face recognition evaluation protocol which will help advance the state-of-the-art in this area . we discuss how a large dataset can be collected and annotated using human annotators and deep networks . we provide human-curated bounding boxes for faces . we also provide estimated pose ( roll , pitch and yaw ) , locations of twenty-one key-points and gender information generated by a pre-trained neural network . in addition , the quality of keypoint annotations has been verified by humans for about 115,000 images . finally , we compare the quality of the dataset with other publicly available face datasets at similar scales story_separator_special_tag there are many factors affecting visual face recognition , such as low resolution images , aging , illumination and pose variance . one of the most important problems is low resolution face images , which can result in poor performance on face recognition . modern face hallucination models demonstrate reasonable performance in reconstructing high-resolution images from their corresponding low resolution images . however , they do not consider identity-level information during hallucination , which directly affects the recognition of low resolution faces . to address this issue , we propose a face hallucination generative adversarial network ( fh-gan ) which improves the quality of low resolution face images and accurately recognizes those low quality images .
concretely , we make the following contributions : ( 1 ) we propose the fh-gan network , an end-to-end system that improves both face hallucination and face recognition simultaneously . the novelty of this proposed network lies in incorporating identity information in a gan-based face hallucination algorithm by combining it with a face recognition network for identity preservation . ( 2 ) we also propose a new face hallucination network , namely dense sparse network ( dsnet ) , which improves upon story_separator_special_tag we propose a new equilibrium enforcing method paired with a loss derived from the wasserstein distance for training auto-encoder based generative adversarial networks . this method balances the generator and discriminator during training . additionally , it provides a new approximate convergence measure , fast and stable training and high visual quality . we also derive a way of controlling the trade-off between image diversity and visual quality . we focus on the image generation task , setting a new milestone in visual quality , even at higher resolutions . this is achieved while using a relatively simple model architecture and a standard training procedure . story_separator_special_tag we propose a novel single face image super-resolution method , which we name face conditional generative adversarial network ( fcgan ) , based on boundary equilibrium generative adversarial networks . without using any facial prior information , our method can generate a high-resolution face image from a low-resolution one . compared with existing studies , both our training and testing phases form an end-to-end pipeline with little pre/post-processing . to enhance the convergence speed and strengthen feature propagation , skip-layer connections are further employed in the generative and discriminative networks . extensive experiments demonstrate that our model achieves competitive performance compared with state-of-the-art models . story_separator_special_tag this paper investigates how far a very deep neural network is from attaining close to saturating performance on existing 2d and 3d face alignment datasets . to this end , we make the following 5 contributions : ( a ) we construct , for the first time , a very strong baseline by combining a state-of-the-art architecture for landmark localization with a state-of-the-art residual block , train it on a very large yet synthetically expanded 2d facial landmark dataset and finally evaluate it on all other 2d facial landmark datasets . ( b ) we create a network guided by 2d landmarks which converts 2d landmark annotations to 3d and unifies all existing datasets , leading to the creation of ls3d-w , the largest and most challenging 3d facial landmark dataset to date ( ~230,000 images ) . ( c ) following that , we train a neural network for 3d face alignment and evaluate it on the newly introduced ls3d-w. ( d ) we further look into the effect of all traditional factors affecting face alignment performance like large pose , initialization and resolution , and introduce a new one , namely the size of the network . ( story_separator_special_tag this paper addresses 2 challenging tasks : improving the quality of low resolution facial images and accurately locating the facial landmarks on such poor resolution images . to this end , we make the following 5 contributions : ( a ) we propose super-fan : the very first end-to-end system that addresses both tasks simultaneously , i.e . both improves face resolution and detects the facial landmarks .
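as a side note on the began abstract above : the equilibrium enforcing idea can be summarized by its loss pair and a closed-loop balance term . a minimal pytorch-style sketch follows ; the autoencoder critic ae , the generator g and the hyper-parameters gamma and lambda_k are illustrative assumptions rather than code from the paper :

def recon(ae, x):
    # began scores a sample by the l1 reconstruction error of an autoencoder critic
    return (ae(x) - x).abs().mean()

def began_losses(ae, g, x_real, z, k):
    l_real = recon(ae, x_real)
    l_fake = recon(ae, g(z))
    loss_d = l_real - k * l_fake   # critic : reconstruct real well , fake badly
    loss_g = l_fake                # generator : make fakes easy to reconstruct
    return loss_d, loss_g, l_real.item(), l_fake.item()

def update_k(k, l_real, l_fake, gamma=0.5, lambda_k=1e-3):
    # closed-loop balance : k_{t+1} = k_t + lambda_k * ( gamma * l_real - l_fake )
    return min(max(k + lambda_k * (gamma * l_real - l_fake), 0.0), 1.0)

def convergence_measure(l_real, l_fake, gamma=0.5):
    # the approximate convergence measure mentioned in the abstract
    return l_real + abs(gamma * l_real - l_fake)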
the novelty of super-fan lies in incorporating structural information in a gan-based super-resolution algorithm via integrating a sub-network for face alignment through heatmap regression and optimizing a novel heatmap loss . ( b ) we illustrate the benefit of training the two networks jointly by reporting good results not only on frontal images ( as in prior work ) but on the whole spectrum of facial poses , and not only on synthetic low resolution images ( as in prior work ) but also on real-world images . ( c ) we improve upon the state-of-the-art in face super-resolution by proposing a new residual-based architecture . ( d ) quantitatively , we show large improvement over the state-of-the-art for both face super-resolution and alignment . ( e ) qualitatively , we show story_separator_special_tag this paper is on image and face super-resolution . the vast majority of prior work for this problem focuses on how to increase the resolution of low-resolution images which are artificially generated by simple bilinear down-sampling ( or in a few cases by blurring followed by down-sampling ) . we show that such methods fail to produce good results when applied to real-world low-resolution , low quality images . to circumvent this problem , we propose a two-stage process which firstly trains a high-to-low generative adversarial network ( gan ) to learn how to degrade and downsample high-resolution images requiring , during training , only unpaired high and low-resolution images . once this is achieved , the output of this network is used to train a low-to-high gan for image super-resolution using this time paired low- and high-resolution images . our main result is that this network can now be used to effectively increase the quality of real-world low-resolution images . we have applied the proposed pipeline to the problem of face super-resolution where we report large improvement over baselines and prior work , although the proposed method is potentially applicable to other object categories . story_separator_special_tag combined variations containing low-resolution and occlusion are often present in face images in the wild , e.g. , under the scenario of video surveillance . while most of the existing face image recovery approaches can handle only one type of variation per model , in this work , we propose a deep generative adversarial network ( fcsr-gan ) for performing joint face completion and face super-resolution via multi-task learning . the generator of fcsr-gan aims to recover a high-resolution face image without occlusion given an input low-resolution face image with occlusion . the discriminator of fcsr-gan uses a set of carefully designed losses ( an adversarial loss , a perceptual loss , a pixel loss , a smooth loss , a style loss , and a face prior loss ) to assure the high quality of the recovered high-resolution face images without occlusion . the whole network of fcsr-gan can be trained end-to-end using our two-stage training strategy . experimental results on the public-domain celeba and helen databases show that the proposed approach outperforms the state-of-the-art methods in jointly performing face super-resolution ( up to $8\times$ ) and face completion , and shows good generalization ability in cross-database testing story_separator_special_tag combined variations such as low-resolution and occlusion are often present in face images in the wild , e.g. , under the scenario of video surveillance .
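the fcsr-gan abstract above enumerates six generator losses ( adversarial , perceptual , pixel , smooth , style , face prior ) . a hedged sketch of how such a weighted combination is typically assembled ; the weights , the feature extractor feat , the discriminator disc and the parsing network parser are all assumptions for illustration :

import torch.nn.functional as F

def gram(feat_map):
    # gram matrix of a feature map , used for the style term
    b, c, h, w = feat_map.shape
    f = feat_map.view(b, c, h * w)
    return f @ f.transpose(1, 2) / (c * h * w)

def total_variation(x):
    # simple smoothness term penalizing neighbouring-pixel differences
    return (x[..., 1:, :] - x[..., :-1, :]).abs().mean() + (x[..., :, 1:] - x[..., :, :-1]).abs().mean()

# hypothetical weights : the abstract lists the six loss terms but not their weights
WEIGHTS = dict(adv=1e-3, perc=1e-1, pixel=1.0, smooth=1e-4, style=1e-2, prior=1e-2)

def generator_loss(sr, hr, disc, feat, parser):
    terms = {
        'adv': -disc(sr).mean(),                              # adversarial term
        'perc': F.l1_loss(feat(sr), feat(hr)),                # perceptual term
        'pixel': F.l1_loss(sr, hr),                           # pixel term
        'smooth': total_variation(sr),                        # smoothness term
        'style': F.l1_loss(gram(feat(sr)), gram(feat(hr))),   # style term
        'prior': F.l1_loss(parser(sr), parser(hr)),           # face-prior term
    }
    return sum(WEIGHTS[k] * v for k, v in terms.items())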
while most of the existing face enhancement approaches only handle one type of variation per model , in this paper , we propose a deep generative adversarial network ( fcsr-gan ) for joint face completion and face super-resolution via one model . the generator of fcsr-gan aims to recover a high-resolution face image without occlusion given an input low-resolution face image with partial occlusions . the discriminator of fcsr-gan is trained with two adversarial losses , a perceptual loss , and a face parsing loss , which assure the high quality of the recovered face images . experimental results on several public-domain databases ( celeba and helen ) show that the proposed approach outperforms the state-of-the-art methods in jointly performing face super-resolution ( up to 4x ) and face completion from low-resolution face images with occlusions . story_separator_special_tag face super-resolution methods usually aim at producing visually appealing results rather than preserving distinctive features for further face identification . in this work , we propose a deep learning method for face verification on very low-resolution face images that involves identity-preserving face super-resolution . our framework includes a super-resolution network and a feature extraction network . we train a vgg-based deep face recognition network ( parkhi et al . 2015 ) to be used as a feature extractor . our super-resolution network is trained to minimize the feature distance between the high resolution ground truth image and the super-resolved image , where features are extracted using our pre-trained feature extraction network . we carry out experiments on frgc , multi-pie , lfw-a , and megaface datasets to evaluate our method in controlled and uncontrolled settings . the results show that the presented method outperforms conventional super-resolution methods in low-resolution face verification . story_separator_special_tag the image super-resolution algorithm can overcome the imaging system 's hardware limitations and obtain higher resolution and clearer images . existing super-resolution methods based on convolutional neural networks ( cnn ) can learn the mapping relationship between high-resolution ( hr ) and low-resolution ( lr ) images . however , when the reconstruction target is a face image , the reconstruction results often suffer from face areas that are too smooth and lack detail . we propose a guided cascaded face super-resolution network , called guided cascaded super-resolution network ( gcfsrnet ) . gcfsrnet takes the lr image and a high-quality guide image as inputs , and it consists of a pose deformation module and a super-resolution network . firstly , the pose deformation module converts the guide image 's pose into the same as that of the low-resolution face image based on 3d fitting and a 3d morphable model ( 3dmm ) . then , the lr image and the deformed guide image are used as input of the super-resolution network . the super-resolution network is formed by a cascade of two sub-networks , which extract different features . during the reconstruction process , the guide image can provide story_separator_special_tag face hallucination is a domain-specific super-resolution problem with the goal to generate high-resolution ( hr ) faces from low-resolution ( lr ) input .
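the identity-preserving abstract above trains the super-resolution network to minimize a feature distance computed by a frozen face recognition network . a minimal sketch of that training loss , assuming hypothetical modules sr_net and face_embedder and illustrative weights alpha and beta :

import torch
import torch.nn.functional as F

def identity_preserving_loss(sr_net, face_embedder, lr, hr, alpha=1.0, beta=1.0):
    # sr_net and face_embedder are assumed modules : an upscaling network and a
    # frozen , pretrained face recognition feature extractor ( vgg-face style )
    sr = sr_net(lr)
    with torch.no_grad():
        target_feat = face_embedder(hr)          # identity features of the ground truth
    sr_feat = face_embedder(sr)                  # identity features of the super-resolved face
    pixel_term = F.mse_loss(sr, hr)              # keeps the image close to the target
    feature_term = F.mse_loss(sr_feat, target_feat)  # keeps the identity close
    return alpha * pixel_term + beta * feature_term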
in contrast to existing methods that often learn a single patch-to-patch mapping from lr to hr images and disregard the contextual interdependency between patches , we propose a novel attention-aware face hallucination ( attention-fh ) framework which resorts to deep reinforcement learning for sequentially discovering attended patches and then performing the facial part enhancement by fully exploiting the global interdependency of the image . specifically , in each time step , the recurrent policy network is proposed to dynamically specify a new attended region by incorporating what happened in the past . the state ( i.e. , the face hallucination result for the whole image ) can thus be exploited and updated by the local enhancement network on the selected region . the attention-fh approach jointly learns the recurrent policy network and local enhancement network through maximizing the long-term reward that reflects the hallucination performance over the whole image . therefore , our proposed attention-fh is capable of adaptively personalizing an optimal searching path for each face image according to its own characteristics . story_separator_special_tag in this paper , we introduce a new large-scale face dataset named vggface2 . the dataset contains 3.31 million images of 9131 subjects , with an average of 362.6 images for each subject . images are downloaded from google image search and have large variations in pose , age , illumination , ethnicity and profession ( e.g . actors , athletes , politicians ) . the dataset was collected with three goals in mind : ( i ) to have both a large number of identities and also a large number of images for each identity ; ( ii ) to cover a large range of pose , age and ethnicity ; and ( iii ) to minimise the label noise . we describe how the dataset was collected , in particular the automated and manual filtering stages to ensure a high accuracy for the images of each identity . to assess face recognition performance using the new dataset , we train resnet-50 ( with and without squeeze-and-excitation blocks ) convolutional neural networks on vggface2 , on ms-celeb-1m , and on their union , and show that training on vggface2 leads to improved recognition performance over pose and age . story_separator_special_tag we present a learning-based method to super-resolve face images using a kernel principal component analysis-based prior model . a prior probability is formulated based on the energy lying outside the span of principal components identified in a higher-dimensional feature space . this is used to regularize the reconstruction of the high-resolution image . we demonstrate with experiments that including higher-order correlations results in significant improvements story_separator_special_tag we show that pre-trained generative adversarial networks ( gans ) , e.g. , stylegan , can be used as a latent bank to improve the restoration quality of large-factor image super-resolution ( sr ) . while most existing sr approaches attempt to generate realistic textures through learning with adversarial loss , our method , generative latent bank ( glean ) , goes beyond existing practices by directly leveraging rich and diverse priors encapsulated in a pre-trained gan . but unlike prevalent gan inversion methods that require expensive image-specific optimization at runtime , our approach only needs a single forward pass to generate the upscaled image .
glean can be easily incorporated in a simple encoder-bank-decoder architecture with multi-resolution skip connections . switching the bank allows the method to deal with images from diverse categories , e.g. , cat , building , human face , and car . images upscaled by glean show clear improvements in terms of fidelity and texture faithfulness in comparison to existing methods . story_separator_special_tag in this paper , we propose a novel method for solving single-image super-resolution problems . given a low-resolution image as input , we recover its high-resolution counterpart using a set of training examples . while this formulation resembles other learning-based methods for super-resolution , our method has been inspired by recent manifold learning methods , particularly locally linear embedding ( lle ) . specifically , small image patches in the low- and high-resolution images form manifolds with similar local geometry in two distinct feature spaces . as in lle , local geometry is characterized by how a feature vector corresponding to a patch can be reconstructed by its neighbors in the feature space . besides using the training image pairs to estimate the high-resolution embedding , we also enforce local compatibility and smoothness constraints between patches in the target high-resolution image through overlapping . experiments show that our method is very flexible and gives good empirical results . story_separator_special_tag this paper introduces a method for face recognition across age and also a dataset containing variations of age in the wild . we use a data-driven method to address the cross-age face recognition problem , called cross-age reference coding ( carc ) . by leveraging a large-scale image dataset freely available on the internet as a reference set , carc can encode the low-level feature of a face image with an age-invariant reference space . in the retrieval phase , our method only requires a linear projection to encode the feature and thus it is highly scalable . to evaluate our method , we introduce a large-scale dataset called cross-age celebrity dataset ( cacd ) . the dataset contains more than 160,000 images of 2,000 celebrities with age ranging from 16 to 62 . experimental results show that our method can achieve state-of-the-art performance on both cacd and the other widely used dataset for face recognition across age . to understand the difficulties of face recognition across age , we further construct a verification subset from the cacd called cacd-vs and conduct human evaluation using amazon mechanical turk . cacd-vs contains 2,000 positive pairs and 2,000 negative pairs and is story_separator_special_tag general image super-resolution techniques have difficulties in recovering detailed face structures when applied to low resolution face images . recent deep learning based methods tailored for face images have achieved improved performance by being jointly trained with additional tasks such as face parsing and landmark prediction . however , multi-task learning requires extra manually labeled data . besides , most of the existing works can only generate relatively low resolution face images ( e.g . , $128 \times 128$ ) , and their applications are therefore limited . in this paper , we introduce a novel spatial attention residual network ( sparnet ) built on our newly proposed face attention units ( faus ) for face super-resolution . specifically , we introduce a spatial attention mechanism to the vanilla residual blocks .
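a minimal pytorch-style sketch of a spatial attention residual block in the spirit of the face attention units just described ; the exact layer sizes and gating choices are assumptions , not the paper 's architecture :

import torch.nn as nn

class SpatialAttentionResBlock(nn.Module):
    def __init__(self, ch):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, ch, 3, padding=1),
        )
        self.attn = nn.Sequential(            # predicts a per-pixel gate in [0 , 1]
            nn.Conv2d(ch, ch // 4, 1), nn.ReLU(inplace=True),
            nn.Conv2d(ch // 4, 1, 3, padding=1), nn.Sigmoid(),
        )

    def forward(self, x):
        res = self.body(x)
        gate = self.attn(res)                 # expected to be high near key face structures
        return x + res * gate                 # attended residual update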
this enables the convolutional layers to adaptively bootstrap features related to the key face structures and pay less attention to those less feature-rich regions . this makes the training more effective and efficient as the key face structures only account for a very small portion of the face image . visualization of the attention maps shows that our spatial attention network can capture the key face structures well story_separator_special_tag face restoration is important in face image processing , and has been widely studied in recent years . however , previous works often fail to generate plausible high quality ( hq ) results for real-world low quality ( lq ) face images . in this paper , we propose a new progressive semantic-aware style transformation framework , named psfr-gan , for face restoration . specifically , instead of using an encoder-decoder framework as previous methods , we formulate the restoration of lq face images as a multi-scale progressive restoration procedure through semantic-aware style transformation . given a pair of an lq face image and its corresponding parsing map , we first generate a multi-scale pyramid of the inputs , and then progressively modulate different scale features from coarse-to-fine in a semantic-aware style transfer way . compared with previous networks , the proposed psfr-gan makes full use of the semantic ( parsing maps ) and pixel ( lq images ) space information from different scales of input pairs . in addition , we further introduce a semantic-aware style loss which calculates the feature style loss for each semantic region individually to improve the details of face textures . finally , we story_separator_special_tag face super-resolution reconstruction is the process of predicting high-resolution face images from one or more observed low-resolution face images , which is a typical ill-posed problem . as a domain-specific super-resolution task , we can use facial priori knowledge to improve the effect of super-resolution . we propose a method of face image super-resolution reconstruction based on a combined representation learning method , using deep residual networks and deep neural networks as generators and discriminators , respectively . first , the model uses residual learning and symmetrical cross-layer connections to extract multilevel features . local residual mapping improves the expressive capability of the network to enhance performance , mitigates gradient vanishing in network training , and reduces the number of convolution kernels in the model through feature reuse . the feature expression of the face image at the high-dimensional visual level is obtained . the visual feature is sent to the decoder through the cross-layer connection structure . the deconvolution layer is used to restore the spatial dimension gradually and repair the details and texture features of the face . finally , the attention block and the residual block are combined in the deep residual network to super-resolve face images that story_separator_special_tag face super-resolution ( sr ) is a domain-specific super-resolution problem . the facial prior knowledge can be leveraged to better super-resolve face images . we present a novel deep end-to-end trainable face super-resolution network ( fsrnet ) , which makes use of the geometry prior , i.e. , facial landmark heatmaps and parsing maps , to super-resolve very low-resolution ( lr ) face images without requiring well-aligned inputs .
specifically , we first construct a coarse sr network to recover a coarse high-resolution ( hr ) image . then , the coarse hr image is sent to two branches : a fine sr encoder and a prior information estimation network , which extract the image features and estimate landmark heatmaps/parsing maps , respectively . both image features and prior information are sent to a fine sr decoder to recover the hr image . to generate realistic faces , we also propose the face super-resolution generative adversarial network ( fsrgan ) to incorporate the adversarial loss into fsrnet . further , we introduce two related tasks , face alignment and parsing , as the new evaluation metrics for face sr , which address the inconsistency of classic metrics w.r.t . visual perception story_separator_special_tag face restoration from low resolution and noise is important for applications of face analysis and recognition . however , most existing face restoration models overlook the multi-scale issues in the face restoration problem , which are still not well solved in the research area . in this paper , we propose a sequential gating ensemble network ( sgen ) for multiscale noise-robust face restoration . to endow the network with multiscale representation ability , we first employ the principle of ensemble learning for the sgen network architecture design . the sgen aggregates multilevel base-encoders and base-decoders into the network , which enables the network to contain multiple scales of receptive field . instead of combining these base-en/decoders directly with nonsequential operations , the sgen takes base-en/decoders from different levels as sequential data . specifically , visualizations show that sgen learns to sequentially extract high-level information from base-encoders in a bottom-up manner and restore low-level information from base-decoders in a top-down manner . besides , we propose realizing bottom-up and top-down information combination and selection with a sequential gating unit ( sgu ) . the sgu sequentially takes information from two different levels as inputs and decides the story_separator_special_tag generative adversarial networks ( gans ) have received a tremendous amount of attention in the past few years , and have inspired applications addressing a wide range of problems . despite their great potential , gans are difficult to train . recently , a series of papers ( arjovsky & bottou , 2017a ; arjovsky et al . 2017b ; and gulrajani et al . 2017 ) proposed using wasserstein distance as the training objective and promised easy , stable gan training across architectures with minimal hyperparameter tuning . in this paper , we compare the performance of wasserstein distance with other training objectives on a variety of gan architectures in the context of single image super-resolution . our results agree that wasserstein gan with gradient penalty ( wgan-gp ) provides stable and converging gan training and that wasserstein distance is an effective metric to gauge training progress . story_separator_special_tag in this paper , we propose an identity-preserving face hallucination ( ipfh ) method via deep reinforcement learning . most existing methods ultra-resolve facial visual information under the guidance of appearance similarity and rarely attend to recovering the semantic property , undermining further face analysis ( e.g. , recognition ) .
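the sgen abstract above is truncated before it fully defines the sequential gating unit , so the following is only a generic gated fusion of two feature levels , flagged entirely as an assumption about what such a unit could look like :

import torch
import torch.nn as nn

class SequentialGatingUnit(nn.Module):
    # assumed form : a learned per-pixel gate selects between features coming
    # from two different levels of the encoder/decoder ensemble
    def __init__(self, ch):
        super().__init__()
        self.gate = nn.Sequential(nn.Conv2d(2 * ch, ch, 1), nn.Sigmoid())

    def forward(self, prev_level, curr_level):
        g = self.gate(torch.cat([prev_level, curr_level], dim=1))
        return g * prev_level + (1 - g) * curr_level  # soft selection between levels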
we present a visual-semantic hallucinator relying on deep reinforcement learning to adaptively repair local details for the restoration of both identity and appearance characteristics . specifically , we first capture the facial global topology structure to roughly recover the visual information with the pixel-wise similarity constraint . to super-resolve more photo-realistic faces , we explore the contextual interdependency to reconstruct facial local textural details ( e.g. , over-smoothed edges ) with the constraints of visual and identity similarity . in terms of the visual similarity constraint , we develop a dual domain network with bidirectional consistency on both the hr domain and the lr domain to improve the appearance quality . moreover , we introduce the identity constraint to encourage hallucinated faces to satisfy the identity property . experimental results on several benchmarks demonstrate our method achieves promising performance on the recovery of visual and semantic information . story_separator_special_tag existing facial image super-resolution ( sr ) methods focus mostly on improving `` artificially down-sampled '' low-resolution ( lr ) imagery . such sr models , although strong at handling artificial lr images , often suffer from significant performance drop on genuine lr test data . previous unsupervised domain adaptation ( uda ) methods address this issue by training a model using unpaired genuine lr and hr data as well as a cycle consistency loss formulation . however , this renders the model overstretched with two tasks : consistifying the visual characteristics and enhancing the image resolution . importantly , this makes the end-to-end model training ineffective due to the difficulty of back-propagating gradients through two concatenated cnns . to solve this problem , we formulate a method that joins the advantages of conventional sr and uda models . specifically , we separate and control the optimisations for characteristics consistifying and image super-resolving by introducing characteristic regularisation ( cr ) between them . this task split makes the model training more effective and computationally tractable . extensive evaluations demonstrate the performance superiority of our method over state-of-the-art sr and uda models on both genuine and artificial lr facial imagery data . story_separator_special_tag in video surveillance , low resolution in face recognition is a major problem . various super resolution ( sr ) approaches have been introduced to perform high-resolution face video recognition from low resolution videos . however , enhancing the resolution of face videos and reconstructing the high frequency data is a major problem in the research area . therefore , an effective face video super resolution method based on a deep convolutional neural network ( deep cnn ) is introduced in this paper to achieve face resolution enhancement effectively . initially , the input video collected from the database is passed into the frame extraction stage , where the video frames are extracted and face detection is carried out using the viola-jones algorithm . moreover , the detected image frame is processed by the deep convolutional neural network ( deep cnn ) to enhance the image resolution . deep cnn is highly effective in performing super resolution in face videos .
however , the proposed deep convolutional neural network attains better performance on metrics like the second derivative-like measure of enhancement ( sdme ) at 0.9743 using video-1 , and the feature similarity index ( fsim ) at story_separator_special_tag recovering details from dark images has received increasing attention due to its potential in applications such as video surveillance . we propose the first approach to detect and enhance human faces in extreme low-light images . our method consists of two stages : a novel face location network ( flnet ) to locate the face , followed by a face enhancement network ( fe-net ) that uses concatenated sub-modules to progressively recover the face from coarse to fine grained details . specifically , our enhancement modules exploit the semantic priors of facial landmarks to facilitate face recovery . extensive experiments show our method is quantitatively and qualitatively superior to the state-of-the-art in terms of enhancement quality and face recognition . we have also collected a real-world dataset to support relevant research . all code and data will be shared for reproducing our experiments . story_separator_special_tag nowadays , due to the ubiquitous visual media , there are vast amounts of already available high-resolution ( hr ) face images . therefore , for super-resolving a given very low-resolution ( lr ) face image of a person , it is very likely that another hr face image of the same person can be found , which can be used to guide the process . in this paper , we propose a convolutional neural network ( cnn ) -based solution , namely gwainet , which applies super-resolution ( sr ) by a factor of 8x on face images guided by another unconstrained hr face image of the same person with possible differences in age , expression , pose or size . gwainet is trained in an adversarial generative manner to produce the desired high quality perceptual image results . the utilization of the hr guiding image is realized via the use of a warper subnetwork that aligns its contents to the input image and the use of a feature fusion chain for the extracted features from the warped guiding image and the input image . in training , the identity loss further helps in preserving the identity related features by minimizing the distance between the story_separator_special_tag we propose a deep learning method for single image super-resolution ( sr ) . our method directly learns an end-to-end mapping between the low/high-resolution images . the mapping is represented as a deep convolutional neural network ( cnn ) that takes the low-resolution image as the input and outputs the high-resolution one . we further show that traditional sparse-coding-based sr methods can also be viewed as a deep convolutional network . but unlike traditional methods that handle each component separately , our method jointly optimizes all layers . our deep cnn has a lightweight structure , yet demonstrates state-of-the-art restoration quality , and achieves fast speed for practical on-line usage . we explore different network structures and parameter settings to achieve trade-offs between performance and speed . moreover , we extend our network to cope with three color channels simultaneously , and show better overall reconstruction quality . story_separator_special_tag generative adversarial networks ( gans ) have been employed for face super resolution but they easily bring distorted facial details and are still weak at recovering realistic texture .
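the srcnn abstract above describes a three-layer end-to-end mapping . a minimal sketch of that network ; the 9-1-5 kernel sizes and 64/32 channel widths follow the commonly cited configuration and should be treated as assumptions here :

import torch.nn as nn

class SRCNN(nn.Module):
    # three stages : patch extraction , non-linear mapping , reconstruction
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 64, 9, padding=4), nn.ReLU(inplace=True),  # patch extraction
            nn.Conv2d(64, 32, 1),           nn.ReLU(inplace=True),  # non-linear mapping
            nn.Conv2d(32, 1, 5, padding=2),                         # reconstruction
        )

    def forward(self, x):
        # x is a bicubically upsampled low-resolution luminance channel
        return self.net(x)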
to further improve the performance of gan-based models on super-resolving face images , we propose pca-srgan , which pays attention to the cumulative discrimination in the orthogonal projection space spanned by the pca projection matrix of face data . by feeding the principal component projections ranging from structure to details into the discriminator , the discrimination difficulty is greatly alleviated and the generator can be enhanced to reconstruct clearer contours and finer texture , helpful to achieve high perception and low distortion eventually . this incremental orthogonal projection discrimination ensures a precise optimization procedure from coarse to fine and avoids the dependence on perceptual regularization . we conduct experiments on the celeba and ffhq face datasets . the qualitative visual effect and quantitative evaluation demonstrate the overwhelming performance of our model over related works . story_separator_special_tag the cross-sensor gap is one of the challenges that have aroused much research interest in heterogeneous face recognition ( hfr ) . although recent methods have attempted to fill the gap with deep generative networks , most of them suffer from the inevitable misalignment between different face modalities . rather than from imaging sensors , the misalignment primarily results from facial geometric variations that are independent of the spectrum . rather than building a monolithic but complex structure , this paper proposes a pose aligned cross-spectral hallucination ( pach ) approach to disentangle the independent factors and deal with them in individual stages . in the first stage , an unsupervised face alignment ( ufa ) module is designed to align the facial shapes of the near-infrared ( nir ) images with those of the visible ( vis ) images in a generative way , where uv maps are effectively utilized as the shape guidance . thus the task of the second stage becomes spectrum translation with aligned paired data . we develop a texture prior synthesis ( tps ) module to achieve complexion control and consequently generate more realistic vis images than existing methods . experiments on three challenging nir-vis story_separator_special_tag this paper addresses the traditional issue of restoring a high-resolution ( hr ) facial image from a low-resolution ( lr ) counterpart . current state-of-the-art super-resolution ( sr ) methods commonly adopt convolutional neural networks to learn a non-linear complex mapping between paired lr and hr images . they discriminate local patterns expressed by the neighboring pixels along the planar directions but ignore the intrinsic 3d proximity including the depth map . as a special case of general images , the face has limited geometric variations , and we believe that the relevant depth map can be learned and used to guide the face sr task . motivated by this , we design a network including two branches : one for auxiliary depth map estimation and the other for the main sr task . adaptive geometric features are further learned from the depth map and used to modulate the mid-level features of the sr branch . the whole network is implemented in an end-to-end trainable manner under the extra supervision of the depth map . the supervisory depth map is either a paired one from rgb-d scans or a reconstructed one by a 3d prior model of faces .
the story_separator_special_tag as a domain-specific super-resolution problem , facial image hallucination has enjoyed a series of breakthroughs thanks to the advances of deep convolutional neural networks . however , the direct migration of existing methods to video still struggles to achieve good performance due to the lack of alignment and consistency modelling in the temporal domain . taking advantage of the high inter-frame dependency in videos , we propose a self-enhanced convolutional network for facial video hallucination . it is implemented by making full use of preceding super-resolved frames and a temporal window of adjacent low-resolution frames . specifically , the algorithm first obtains the initial high-resolution inference of each frame by taking into consideration a sequence of consecutive low-resolution inputs through temporal consistency modelling . it further recurrently exploits the reconstructed results and intermediate features of a sequence of preceding frames to improve the initial super-resolution of the current frame by modelling the coherence of structural facial features across frames . quantitative and qualitative evaluations demonstrate the superiority of the proposed algorithm against state-of-the-art methods . moreover , our algorithm also achieves excellent performance in the task of general video super-resolution in a single-shot setting . story_separator_special_tag recently , many convolutional neural network ( cnn ) algorithms have been proposed for image super-resolution , but most of them aim at architectural or natural scene images . in this paper , we propose a new fractal residual network model for face image super-resolution , which is very useful in the domain of surveillance and security . the architecture of the proposed model is composed of multiple branches . each branch is incrementally cascaded with multiple self-similar residual blocks , which makes the branch appear as a fractal structure . such a structure makes it possible to learn both the global residual and local residuals sufficiently . we propose a multi-scale progressive training strategy to enlarge the image size and make the training feasible . we propose to combine the loss of face attributes and face structure to refine the super-resolution results . meanwhile , adversarial training is introduced to generate details . the results of our proposed model outperform other benchmark methods in qualitative and quantitative analysis . story_separator_special_tag most face super-resolution methods assume that low- and high-resolution manifolds have similar local geometrical structure ; hence , they learn local models on the low-resolution manifold ( e.g . , sparse or locally linear embedding models ) , which are then applied on the high-resolution manifold . however , the low-resolution manifold is distorted by the one-to-many relationship between low- and high-resolution patches . this paper presents the linear model of coupled sparse support ( lm-css ) method , which learns linear models based on the local geometrical structure on the high-resolution manifold rather than on the low-resolution manifold . for this , in a first step , the low-resolution patch is used to derive a globally optimal estimate of the high-resolution patch . the approximated solution is shown to be close in the euclidean space to the ground truth , but is generally smooth and lacks the texture details needed by state-of-the-art face recognizers .
unlike existing methods , the sparse support that best estimates the first approximated solution is found on the high-resolution manifold . the derived support is then used to extract the atoms from the coupled low- and high-resolution dictionaries that are most suitable to learn story_separator_special_tag we propose a straightforward method that simultaneously reconstructs the 3d facial structure and provides dense alignment . to achieve this , we design a 2d representation called the uv position map which records the 3d shape of a complete face in uv space , then train a simple convolutional neural network to regress it from a single 2d image . we also integrate a weight mask into the loss function during training to improve the performance of the network . our method does not rely on any prior face model , and can reconstruct full facial geometry along with semantic meaning . meanwhile , our network is very light-weight and spends only 9.8ms to process an image , which is much faster than previous works . experiments on multiple challenging datasets show that our method surpasses other state-of-the-art methods on both reconstruction and alignment tasks by a large margin . story_separator_special_tag rendering the semantic content of an image in different styles is a difficult image processing task . arguably , a major limiting factor for previous approaches has been the lack of image representations that explicitly represent semantic information and , thus , allow us to separate image content from style . here we use image representations derived from convolutional neural networks optimised for object recognition , which make high level image information explicit . we introduce a neural algorithm of artistic style that can separate and recombine the image content and style of natural images . the algorithm allows us to produce new images of high perceptual quality that combine the content of an arbitrary photograph with the appearance of numerous well-known artworks . our results provide new insights into the deep image representations learned by convolutional neural networks and demonstrate their potential for high level image synthesis and manipulation . story_separator_special_tag in the past few years , a lot of work has been done towards reconstructing the 3d facial structure from single images by capitalizing on the power of deep convolutional neural networks ( dcnns ) . in the most recent works , differentiable renderers were employed in order to learn the relationship between the facial identity features and the parameters of a 3d morphable model for shape and texture . the texture features either correspond to components of a linear texture space or are learned by auto-encoders directly from in-the-wild images . in all cases , the facial texture reconstruction quality of the state-of-the-art methods is still not capable of modeling textures in high fidelity . in this paper , we take a radically different approach and harness the power of generative adversarial networks ( gans ) and dcnns in order to reconstruct the facial texture and shape from single images . that is , we utilize gans to train a very powerful generator of facial texture in uv space .
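the neural style transfer abstract above separates content from style ; in the standard formulation , content is matched on deep feature maps and style on their gram matrices . a minimal sketch , where the feature extractor , the layer choice and the weights w_c and w_s are assumptions :

import torch.nn.functional as F

def gram_matrix(feat):
    # style is represented by feature correlations ( gram matrices )
    b, c, h, w = feat.shape
    f = feat.view(b, c, h * w)
    return f @ f.transpose(1, 2) / (c * h * w)

def style_transfer_loss(feats_x, feats_content, feats_style, w_c=1.0, w_s=1e3):
    # feats_* : lists of cnn feature maps for the image being optimized ,
    # the content image and the style image ( the extractor itself is assumed )
    content = F.mse_loss(feats_x[-1], feats_content[-1])        # match deep content
    style = sum(F.mse_loss(gram_matrix(a), gram_matrix(b))
                for a, b in zip(feats_x, feats_style))          # match correlations
    return w_c * content + w_s * style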
then , we revisit the original 3d morphable models ( 3dmms ) fitting approaches making use of non-linear optimization to find the optimal latent parameters that best story_separator_special_tag face hallucination is a specific super-resolution problem which aims to generate high-resolution ( hr ) faces from low-resolution ( lr ) input . recently , deep learning methods have been widely applied in single-image super resolution . considering that face images have great similarities in both pixel value and global structure , we propose a wavelet-based deep learning method with a loop architecture for face hallucination . in contrast to existing wavelet-based methods that generate wavelet coefficients independently without considering relationships between them , we propose a three-stage method with a loop architecture . this alternately updated loop structure explores the statistical relationships among wavelet coefficients and makes maximum use of information flow with a small number of parameters . because of the multi-resolution property of the wavelet transform , we adopt a mixed input strategy to train images with different sizes to realize multi-scale face hallucination without retraining or adding extra sub-networks . experiments demonstrate that our method achieves robust performance with multi-scale face hallucination . story_separator_special_tag we propose a new framework for estimating generative models via an adversarial process , in which we simultaneously train two models : a generative model g that captures the data distribution , and a discriminative model d that estimates the probability that a sample came from the training data rather than g. the training procedure for g is to maximize the probability of d making a mistake . this framework corresponds to a minimax two-player game . in the space of arbitrary functions g and d , a unique solution exists , with g recovering the training data distribution and d equal to 1/2 everywhere . in the case where g and d are defined by multilayer perceptrons , the entire system can be trained with backpropagation . there is no need for any markov chains or unrolled approximate inference networks during either training or generation of samples . experiments demonstrate the potential of the framework through qualitative and quantitative evaluation of the generated samples . story_separator_special_tag contemporary face hallucination ( fh ) models exhibit considerable ability to reconstruct high-resolution ( hr ) details from low-resolution ( lr ) face images . this ability is commonly learned from examples of corresponding hr-lr image pairs , created by artificially down-sampling the hr ground truth data . this down-sampling ( or degradation ) procedure not only defines the characteristics of the lr training data , but also determines the type of image degradations the learned fh models are eventually able to handle . if the image characteristics encountered with real-world lr images differ from the ones seen during training , fh models are still expected to perform well , but in practice may not produce the desired results . in this paper we study this problem and explore the bias introduced into fh models by the characteristics of the training data . we systematically analyze the generalization capabilities of several fh models in various scenarios where the degradation function does not match the training setup and conduct experiments with synthetically downgraded as well as real-life low-quality images .
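the gan abstract above defines the adversarial game in words ; its standard value function , with the d = 1/2 fixed point noted there , reads

\[
\min_G \max_D V(D, G) = \mathbb{E}_{x \sim p_{\text{data}}}\,[\log D(x)] + \mathbb{E}_{z \sim p_z}\,[\log(1 - D(G(z)))] ,
\]

at the optimum , g reproduces the data distribution and d ( x ) = 1/2 everywhere , as stated in the abstract .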
we make several interesting findings that provide insight into existing problems with fh models and point to future research directions . story_separator_special_tag in this paper we address the problem of hallucinating high-resolution facial images from low-resolution inputs at high magnification factors . we approach this task with convolutional neural networks ( cnns ) and propose a novel ( deep ) face hallucination model that incorporates identity priors into the learning procedure . the model consists of two main parts : i ) a cascaded super-resolution network that upscales the low-resolution facial images , and ii ) an ensemble of face recognition models that act as identity priors for the super-resolution network during training . different from most competing super-resolution techniques that rely on a single model for upscaling ( even with large magnification factors ) , our network uses a cascade of multiple sr models that progressively upscale the low-resolution images using steps of $2\times$ . this characteristic allows us to apply supervision signals ( target appearances ) at different resolutions and incorporate identity constraints at multiple scales . the proposed c-srip model ( cascaded super resolution with identity priors ) is able to upscale ( tiny ) low-resolution images captured in unconstrained conditions and produce visually convincing results for diverse low-resolution inputs . we rigorously evaluate the proposed model on story_separator_special_tag generative adversarial networks ( gans ) are powerful generative models , but suffer from training instability . the recently proposed wasserstein gan ( wgan ) makes progress toward stable training of gans , but sometimes can still generate only poor samples or fail to converge . we find that these problems are often due to the use of weight clipping in wgan to enforce a lipschitz constraint on the critic , which can lead to undesired behavior . we propose an alternative to clipping weights : penalize the norm of the gradient of the critic with respect to its input . our proposed method performs better than standard wgan and enables stable training of a wide variety of gan architectures with almost no hyperparameter tuning , including 101-layer resnets and language models with continuous generators . we also achieve high quality generations on cifar-10 and lsun bedrooms . story_separator_special_tag face images that are captured by surveillance cameras usually have a very low resolution , which significantly limits the performance of face recognition systems . in the past , super-resolution techniques have been proposed to increase the resolution by combining information from multiple images . these techniques use super-resolution as a preprocessing step to obtain a high-resolution image that is later passed to a face recognition system . considering that most state-of-the-art face recognition systems use an initial dimensionality reduction method , we propose to transfer the super-resolution reconstruction from the pixel domain to a lower dimensional face space . such an approach has the advantage of a significant decrease in the computational complexity of the super-resolution reconstruction . the reconstruction algorithm no longer tries to obtain a visually improved high-quality image , but instead constructs the information required by the recognition system directly in the low dimensional domain without any unnecessary overhead .
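the wgan-gp abstract above proposes penalizing the gradient norm of the critic on its inputs . a minimal pytorch-style sketch of that penalty , evaluated on random interpolates between real and fake samples ; the coefficient lam = 10 is the commonly used value and an assumption here :

import torch

def gradient_penalty(critic, real, fake, lam=10.0):
    # penalize deviations of the critic 's input-gradient norm from 1
    eps = torch.rand(real.size(0), 1, 1, 1, device=real.device)
    x_hat = (eps * real + (1 - eps) * fake).requires_grad_(True)  # interpolates
    scores = critic(x_hat)
    grads = torch.autograd.grad(outputs=scores.sum(), inputs=x_hat,
                                create_graph=True)[0]
    norms = grads.flatten(1).norm(2, dim=1)
    return lam * ((norms - 1) ** 2).mean()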
in addition , we show that face-space super-resolution is more robust to registration errors and noise than pixel-domain super-resolution because of the addition of model-based constraints . story_separator_special_tag face hallucination refers to obtaining a clean face image from a degraded one . the degraded face is assumed to be related to the clean face through the forward imaging model that accounts for blurring , sampling and noise . in recent years , many methods have been proposed and have made good progress . these methods usually learn a regression function to reconstruct the entire picture . however , there are huge differences among the optimal learned regression functions in different regions . in other words , the learned regression function needs to process all regions , which makes it difficult to reconstruct a satisfactory picture . as a result , the reconstructed images in some regions are relatively smooth . in order to address the problem , we present a novel face hallucination framework , called adaptive aggregation network ( aan ) , which uses the aggregation network to guide face hallucination . our network contains two branches : aggregation branch and generator branch . specifically , our aggregation branch can explore regression functions from low-resolution ( lr ) to high-resolution ( hr ) images in different regions , and aggregate the regions by the similarity of the regression story_separator_special_tag generative adversarial networks ( gans ) excel at creating realistic images with complex models for which maximum likelihood is infeasible . however , the convergence of gan training has still not been proved . we propose a two time-scale update rule ( ttur ) for training gans with stochastic gradient descent on arbitrary gan loss functions . ttur has an individual learning rate for both the discriminator and the generator . using the theory of stochastic approximation , we prove that the ttur converges under mild assumptions to a stationary local nash equilibrium . the convergence carries over to the popular adam optimization , for which we prove that it follows the dynamics of a heavy ball with friction and thus prefers flat minima in the objective landscape . for the evaluation of the performance of gans at image generation , we introduce the `` frechet inception distance '' ( fid ) which captures the similarity of generated images to real ones better than the inception score . in experiments , ttur improves learning for dcgans and improved wasserstein gans ( wgan-gp ) outperforming conventional gan training on celeba , cifar-10 , svhn , lsun bedrooms , and the one story_separator_special_tag though generative adversarial networks ( gans ) can hallucinate high-quality high-resolution ( hr ) faces from low-resolution ( lr ) faces , they can not ensure identity preservation during face hallucination , making the hr faces difficult to recognize . to address this problem , we propose a siamese gan ( sigan ) to reconstruct hr faces that visually resemble their corresponding identities . on top of a siamese network , the proposed sigan consists of a pair of two identical generators and one discriminator . we incorporate reconstruction error and identity label information in the loss function of sigan in a pairwise manner . by iteratively optimizing the loss functions of the generator pair and the discriminator of sigan , we not only achieve visually-pleasing face reconstruction but also ensure that the reconstructed information is useful for identity recognition .
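a pairwise identity term of the kind sigan describes can be read as a contrastive loss on features of reconstructed face pairs ; the pytorch sketch below is one such illustrative reading , with the margin as an assumed hyperparameter , and is not the exact sigan objective .

import torch
import torch.nn.functional as F

def pairwise_identity_loss(feat_a, feat_b, same_identity, margin=1.0):
    # euclidean distance between the two faces' identity features
    d = F.pairwise_distance(feat_a, feat_b)
    # same identity (label 1): pull features together
    pull = same_identity * d.pow(2)
    # different identities (label 0): push apart up to the margin
    push = (1.0 - same_identity) * F.relu(margin - d).pow(2)
    return (pull + push).mean()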
experimental results demonstrate that sigan significantly outperforms existing face hallucination gans in objective face verification performance while achieving promising visual-quality reconstruction . moreover , for input lr faces with unseen identities that are not part of the training dataset , sigan can still achieve reasonable performance . story_separator_special_tag to hallucinate a super-resolution ( super-res ) face from a real low-quality face , a super-resolution technique based on definition-scalable inference ( srdsi ) is proposed in this paper . in the proposed strategy , all high-res labeled faces are first decomposed into basic faces and enhanced faces to train a basic face and an enhanced face inferring model , and then the two inferring models are used to hallucinate a super-res basic face with low definition and enhanced faces with high-frequency information from a single low-res face . finally , the basic face is merged with its enhanced face into a super-res face with high definition . in addition , this paper employs sift key-points to evaluate the similarity between the super-res face and its high-res labeled face . experimental results show that srdsi can effectively recover more structural information as well as sift key-points from real low-res faces and achieves better performance than state-of-the-art super-resolution techniques in terms of both visual and objective quality . story_separator_special_tag state-of-the-art face super-resolution methods employ deep convolutional neural networks to learn a mapping between low- and high-resolution facial patterns by exploring local appearance knowledge . however , most of these methods do not exploit facial structures and identity information well , and struggle to deal with facial images that exhibit large pose variations . in this paper , we propose a novel face super-resolution method that explicitly incorporates 3d facial priors which grasp the sharp facial structures . our work is the first to explore 3d morphable knowledge based on the fusion of parametric descriptions of face attributes ( e.g. , identity , facial expression , texture , illumination , and face pose ) . furthermore , the priors can easily be incorporated into any network and are extremely efficient in improving the performance and accelerating the convergence speed . firstly , a 3d face rendering branch is set up to obtain 3d priors of salient facial structures and identity knowledge . secondly , the spatial attention module is used to better exploit this hierarchical information ( i.e. , intensity similarity , 3d facial structure , and identity content ) for the super-resolution problem . extensive experiments demonstrate that story_separator_special_tag face hallucination aims to generate a high-resolution ( hr ) face image from an input low-resolution ( lr ) face image , which is a specific application field of image super resolution for face images . due to the complex and sensitive structure of face images , obtaining a super-resolved face image is more difficult than generic image super resolution . recently , deep learning based methods have been introduced in face hallucination . in this work , we develop a novel network architecture which integrates an image super-resolution convolutional neural network with a network-style iterative back-projection ( ibp ) method . extensive experiments demonstrate that the proposed improved model can obtain better performance .
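for reference , the classical iterative back-projection ( ibp ) loop that the network-style variant above emulates can be sketched as follows ; bicubic resizing stands in for the assumed imaging model , and the iteration count and step size are illustrative .

import cv2
import numpy as np

def ibp(lr, scale=2, iters=10, step=1.0):
    lr = lr.astype(np.float32)
    h, w = lr.shape[:2]
    # initial hr estimate by plain bicubic upsampling
    hr = cv2.resize(lr, (w * scale, h * scale), interpolation=cv2.INTER_CUBIC)
    for _ in range(iters):
        # simulate the lr image implied by the current hr estimate
        simulated = cv2.resize(hr, (w, h), interpolation=cv2.INTER_CUBIC)
        # back-project the lr residual into the hr estimate
        residual = cv2.resize(lr - simulated, (w * scale, h * scale),
                              interpolation=cv2.INTER_CUBIC)
        hr += step * residual
    return hr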
story_separator_special_tag most face databases have been created under controlled conditions to facilitate the study of specific parameters on the face recognition problem . these parameters include such variables as position , pose , lighting , background , camera quality , and gender . while there are many applications for face recognition technology in which one can control the parameters of image acquisition , there are also many applications in which the practitioner has little or no control over such parameters . this database , labeled faces in the wild , is provided as an aid in studying the latter , unconstrained , recognition problem . the database contains labeled face photographs spanning the range of conditions typically encountered in everyday life . the database exhibits natural variability in factors such as pose , lighting , race , accessories , occlusions , and background . in addition to describing the details of the database , we provide specific experimental paradigms for which the database is suitable . this is done in an effort to make research performed with the database as consistent and comparable as possible . we provide baseline results , including results of a state of the art face recognition story_separator_special_tag super-resolution reconstruction of face images is the problem of reconstructing a high resolution face image from one or more low resolution face images . assuming that high and low resolution images share similar intrinsic geometries , various recent super-resolution methods reconstruct high resolution images based on weights determined from nearest neighbors in the local embedding of low resolution images . these methods suffer from the finite number of samples and the nature of manifold learning techniques , and hence yield unrealistic reconstructed images . to address the problem , we apply canonical correlation analysis ( cca ) , which maximizes the correlation between the local neighbor relationships of high and low resolution images . we use it separately for the reconstruction of global face appearance and facial details . experiments using a collection of frontal human faces show that the proposed algorithm improves reconstruction quality over existing state-of-the-art super-resolution algorithms , both visually , and using a quantitative peak signal-to-noise ratio assessment . story_separator_special_tag most modern face super-resolution methods resort to convolutional neural networks ( cnn ) to infer high-resolution ( hr ) face images . when dealing with very low resolution ( lr ) images , the performance of these cnn based methods greatly degrades . meanwhile , these methods tend to produce over-smoothed outputs and miss some textural details . to address these challenges , this paper presents a wavelet-based cnn approach that can ultra-resolve a very low resolution face image of 16 × 16 or smaller pixel size to its larger version at multiple scaling factors ( 2× , 4× , 8× and even 16× ) in a unified framework . different from conventional cnn methods directly inferring hr images , our approach firstly learns to predict the lr input 's corresponding series of hr wavelet coefficients before reconstructing hr images from them . to capture both global topology information and local texture details of human faces , we present a flexible and extensible convolutional neural network with three types of loss : wavelet prediction loss , texture loss and full-image loss .
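the wavelet decomposition such methods predict can be made concrete with a one-level 2d haar transform ; real systems would use a library transform ( e.g . pywt ) , so this hand-rolled version is only illustrative and assumes even image dimensions .

import numpy as np

def haar2d(img):
    # corners of each 2x2 block (assumes even height and width)
    a = img[0::2, 0::2].astype(np.float32)
    b = img[0::2, 1::2].astype(np.float32)
    c = img[1::2, 0::2].astype(np.float32)
    d = img[1::2, 1::2].astype(np.float32)
    ll = (a + b + c + d) / 4.0   # approximation (low-frequency) sub-band
    lh = (a - b + c - d) / 4.0   # detail sub-band: within-row differences
    hl = (a + b - c - d) / 4.0   # detail sub-band: between-row differences
    hh = (a - b - c + d) / 4.0   # detail sub-band: diagonal differences
    return ll, lh, hl, hh

a network that predicts the three detail sub-bands at the target resolution and inverts the transform recovers exactly the high-frequency content that direct pixel regression tends to smooth away .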
extensive experiments demonstrate that the proposed approach achieves more appealing results both quantitatively and qualitatively than state-of-the-art super-resolution story_separator_special_tag most modern face hallucination methods resort to convolutional neural networks ( cnn ) to infer high-resolution ( hr ) face images . however , when dealing with very low-resolution ( lr ) images , these cnn based methods tend to produce over-smoothed outputs . to address this challenge , this paper proposes a wavelet-domain generative adversarial method that can ultra-resolve a very low-resolution ( like 16 × 16 or even 8 × 8 ) face image to its larger version at multiple upscaling factors ( 2× to 16× ) in a unified framework . different from most existing studies that hallucinate faces in the image pixel domain , our method firstly learns to predict the wavelet information of hr face images from their corresponding lr inputs before image-level super-resolution . to capture both global topology information and local texture details of human faces , a flexible and extensible generative adversarial network is designed with three types of losses : ( 1 ) wavelet reconstruction loss aims to push wavelets closer to the ground-truth ; ( 2 ) wavelet adversarial loss aims to generate realistic wavelets ; story_separator_special_tag a face image super-resolution ( sr ) reconstruction method based on convolutional neural networks is constructed . firstly , two-level feature maps are extracted by multiple convolution kernels . secondly , after each feature map is extracted , the extracted features are mapped to another plane by means of a non-linear mapping method . lastly , the final sr images are rebuilt by adding all the second-level feature maps plus a constant . the experimental results show that our method achieves better results in single face image sr reconstruction . story_separator_special_tag gatys et al . recently introduced a neural algorithm that renders a content image in the style of another image , achieving so-called style transfer . however , their framework requires a slow iterative optimization process , which limits its practical application . fast approximations with feed-forward neural networks have been proposed to speed up neural style transfer . unfortunately , the speed improvement comes at a cost : the network is usually tied to a fixed set of styles and can not adapt to arbitrary new styles . in this paper , we present a simple yet effective approach that for the first time enables arbitrary style transfer in real-time . at the heart of our method is a novel adaptive instance normalization ( adain ) layer that aligns the mean and variance of the content features with those of the style features . our method achieves speed comparable to the fastest existing approach , without the restriction to a pre-defined set of styles . in addition , our approach allows flexible user controls such as content-style trade-off , style interpolation , color & spatial controls , all using a single feed-forward neural network . story_separator_special_tag single image super-resolution ( sisr ) is an image reconstruction technique that aims to generate a high-resolution image from a low-resolution image . one of the sisr implementations is to reconstruct face images in order to gain more facial information from low-resolution face images .
in this paper , we propose a method to reconstruct face images using a generative adversarial network ( gan ) framework that is able to generate plausible high-resolution images . inside the gan framework , we use an inception residual network to improve the generated image quality and stabilize the training . experimental results demonstrated that our proposed method was able to generate visually pleasant face images with the highest psnr score of 26.615 and ssim score of 0.8461 . story_separator_special_tag 3d face reconstruction is a fundamental computer vision problem of extraordinary difficulty . current systems often assume the availability of multiple facial images ( sometimes from the same subject ) as input , and must address a number of methodological challenges such as establishing dense correspondences across large facial poses , expressions , and non-uniform illumination . in general these methods require complex and inefficient pipelines for model building and fitting . in this work , we propose to address many of these limitations by training a convolutional neural network ( cnn ) on an appropriate dataset consisting of 2d images and 3d facial models or scans . our cnn works with just a single 2d facial image , does not require accurate alignment or dense correspondence between images , works for arbitrary facial poses and expressions , and can be used to reconstruct the whole 3d facial geometry ( including the non-visible parts of the face ) bypassing the construction ( during training ) and fitting ( during testing ) of a 3d morphable model . we achieve this via a simple cnn architecture that performs direct regression of a volumetric representation of the 3d facial geometry from story_separator_special_tag convolutional neural networks define an exceptionally powerful class of models , but are still limited by the lack of ability to be spatially invariant to the input data in a computationally and parameter efficient manner . in this work we introduce a new learnable module , the spatial transformer , which explicitly allows the spatial manipulation of data within the network . this differentiable module can be inserted into existing convolutional architectures , giving neural networks the ability to actively spatially transform feature maps , conditional on the feature map itself , without any extra training supervision or modification to the optimisation process . we show that the use of spatial transformers results in models which learn invariance to translation , scale , rotation and more generic warping , resulting in state-of-the-art performance on several benchmarks , and for a number of classes of transformations . story_separator_special_tag the localization of human faces in digital images is a fundamental step in the process of face recognition . this paper presents a shape comparison approach to achieve fast , accurate face detection that is robust to changes in illumination and background . the proposed method is edge-based and works on grayscale still images . the hausdorff distance is used as a similarity measure between a general face model and possible instances of the object within the image . the paper describes an efficient implementation , making this approach suitable for real-time applications . a two-step process that allows both coarse detection and exact localization of faces is presented . experiments were performed on a large test set and rated with a new validation measure .
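the hausdorff distance used above as the model-to-image similarity can be sketched as follows ; the brute-force pairwise version is only for clarity ( scipy.spatial.distance.directed_hausdorff offers an equivalent routine ) , and edge detection is assumed to have produced the two 2d point sets .

import numpy as np

def directed_hausdorff(a, b):
    # for each point in a, distance to its nearest neighbour in b; take the max
    diff = a[:, None, :] - b[None, :, :]          # (n_a, n_b, 2)
    dists = np.sqrt((diff ** 2).sum(axis=-1))
    return dists.min(axis=1).max()

def hausdorff(model_pts, edge_pts):
    # symmetric version: the larger of the two directed distances
    return max(directed_hausdorff(model_pts, edge_pts),
               directed_hausdorff(edge_pts, model_pts))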
story_separator_special_tag recently , position-patch based approaches have been proposed to replace the probabilistic graph-based or manifold learning-based models for face hallucination . in order to obtain the optimal weights of face hallucination , these approaches represent one image patch through other patches at the same position in the training faces by employing least square estimation or sparse coding . however , they can not provide unbiased approximations or satisfy rational priors , and thus the obtained representation is not satisfactory . in this paper , we propose a simpler yet more effective scheme called locality-constrained representation ( lcr ) . compared with least square representation ( lsr ) and sparse representation ( sr ) , our scheme incorporates a locality constraint into the least square inversion problem to maintain locality and sparsity simultaneously . our scheme is capable of capturing the non-linear manifold structure of image patch samples while exploiting the sparse property of the redundant data representation . moreover , when the locality constraint is satisfied , face hallucination is robust to noise , a property that is desirable for video surveillance applications . a statistical analysis of the properties of lcr is given together with experimental results on some public face story_separator_special_tag most of the current face hallucination methods , whether they are shallow learning-based or deep learning-based , try to learn a relationship model between low-resolution ( lr ) and high-resolution ( hr ) spaces with the help of a training set . they mainly focus on modeling the image prior through either model-based optimization or discriminative inference learning . however , when the input lr face is tiny , the learned prior knowledge is no longer effective and their performance will drop sharply . to solve this problem , in this paper we propose a general face hallucination method that can integrate model-based optimization and discriminative inference . in particular , to exploit the model based prior , the deep convolutional neural network ( cnn ) denoiser prior is plugged into the super-resolution optimization model with the aid of image-adaptive laplacian regularization . additionally , we further develop a high-frequency details compensation method by dividing the face image into facial components and performing face hallucination in a multi-layer neighbor embedding manner . experiments demonstrate that the proposed method can achieve promising super-resolution results for tiny input lr faces . story_separator_special_tag along with the performance improvement of deep-learning-based face hallucination methods , various face priors ( facial shape , facial landmark heatmaps , or parsing maps ) have been used to describe holistic and partial facial features , making the generation of super-resolved face images expensive and laborious . to deal with this problem , we present a simple yet effective dual-path deep fusion network ( dpdfn ) for face image super-resolution ( sr ) without requiring additional face priors , which learns the global facial shape and local facial components through two individual branches . the proposed dpdfn is composed of three components : a global memory subnetwork ( gmn ) , a local reinforcement subnetwork ( lrn ) , and a fusion and reconstruction module ( frm ) .
in particular , gmn characterizes the holistic facial shape by employing recurrent dense residual learning to excavate wide-range context across spatial series . meanwhile , lrn is committed to learning local facial components , which focuses on the patch-wise mapping relations between low-resolution ( lr ) and high-resolution ( hr ) space on local regions rather than the entire image . furthermore , by aggregating the global and local story_separator_special_tag although tremendous strides have been recently made in face hallucination , existing methods based on a single deep learning framework can hardly provide satisfactory fine facial features from tiny faces under complex degradation . this article advocates an adaptive-threshold-based multi-model fusion network ( atmfn ) for compressed face hallucination , which unifies different deep learning models to take advantage of their respective learning merits . first of all , we construct cnn- , gan- and rnn-based underlying super-resolvers to produce candidate sr results . further , the attention subnetwork is proposed to learn the individual fusion weight matrices capturing the most informative components of the candidate sr faces . particularly , the hyper-parameters of the fusion matrices and the underlying networks are optimized together in an end-to-end manner to drive them for collaborative learning . finally , a threshold-based fusion and reconstruction module is employed to exploit the candidates ' complementarity and thus generate high-quality face images . extensive experiments on benchmark face datasets and real-world samples show that our model outperforms the state-of-the-art sr methods in terms of quantitative indicators and visual effects . the code and configurations are released at https://github.com/kuihua/atmfn . story_separator_special_tag we provide a position-patch based face hallucination method using convex optimization . recently , a novel position-patch based face hallucination method has been proposed to save computational time and achieve high-quality hallucinated results . this method has employed least square estimation to obtain the optimal weights for face hallucination . however , the least square estimation approach can provide biased solutions when the number of the training position-patches is much larger than the dimension of the patch . to overcome this problem , this letter proposes a new position-patch based face hallucination method which is based on convex optimization . experimental results demonstrate that our method is very effective in producing high-quality hallucinated face images . story_separator_special_tag to make the best use of the underlying structure of faces , the collective information across face datasets and the intermediate estimates during the upsampling process , here we introduce a fully convolutional multi-stage neural network for 4× super-resolution of face images . we implicitly impose facial component-wise attention maps using a segmentation network to allow our network to focus on face-inherent patterns . each stage of our network is composed of a stem layer , a residual backbone , and spatial upsampling layers . we recurrently apply stages to reconstruct an intermediate image , and then reuse its space-to-depth converted versions to bootstrap and enhance image quality progressively . our experiments show that our face super-resolution method achieves quantitatively superior and perceptually pleasing results in comparison to the state of the art .
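the space-to-depth conversion used to feed an intermediate estimate back into the next stage is a pure rearrangement of pixels into channels ; in pytorch it corresponds to pixel_unshuffle , as the minimal sketch below shows .

import torch
import torch.nn.functional as F

x = torch.randn(1, 3, 64, 64)      # intermediate rgb estimate
s2d = F.pixel_unshuffle(x, 2)      # (1, 12, 32, 32): each 2x2 block becomes channels
back = F.pixel_shuffle(s2d, 2)     # exact inverse rearrangement
assert torch.equal(back, x)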
story_separator_special_tag accurate recognition and tracking of human faces are indispensable in applications like face recognition , forensics , etc . the need for enhancing low-resolution faces for such applications has gathered increasing attention in the past few years . to recognize faces from surveillance video footage , the images need to be of a recognizable size . image super-resolution ( sr ) algorithms aid in enlarging or super-resolving the captured low-resolution image into a high-resolution frame . it thereby improves the visual quality of the image for recognition . this paper discusses some of the recent methodologies in face super-resolution ( fsr ) along with an analysis of their performance on some benchmark databases . learning based methods are by far the most widely used technique . sparse representation techniques , neighborhood-embedding techniques , and bayesian learning techniques are all different approaches to learning based methods . the review here demonstrates that , in general , learning based techniques provide better accuracy / performance even though the computational requirements are high . it is observed that neighbor embedding provides better performance among the learning based techniques . the focus of future research on learning based techniques , such story_separator_special_tag we describe a new training methodology for generative adversarial networks . the key idea is to grow both the generator and discriminator progressively : starting from a low resolution , we add new layers that model increasingly fine details as training progresses . this both speeds the training up and greatly stabilizes it , allowing us to produce images of unprecedented quality , e.g. , celeba images at 1024^2 . we also propose a simple way to increase the variation in generated images , and achieve a record inception score of 8.80 in unsupervised cifar10 . additionally , we describe several implementation details that are important for discouraging unhealthy competition between the generator and discriminator . finally , we suggest a new metric for evaluating gan results , both in terms of image quality and variation . as an additional contribution , we construct a higher-quality version of the celeba dataset . story_separator_special_tag we propose an alternative generator architecture for generative adversarial networks , borrowing from the style transfer literature . the new architecture leads to an automatically learned , unsupervised separation of high-level attributes ( e.g. , pose and identity when trained on human faces ) and stochastic variation in the generated images ( e.g. , freckles , hair ) , and it enables intuitive , scale-specific control of the synthesis . the new generator improves the state-of-the-art in terms of traditional distribution quality metrics , leads to demonstrably better interpolation properties , and also better disentangles the latent factors of variation . to quantify interpolation quality and disentanglement , we propose two new , automated methods that are applicable to any generator architecture . finally , we introduce a new , highly varied and high-quality dataset of human faces . story_separator_special_tag in this paper , we address the problem of face hallucination by proposing a novel multi-scale generative adversarial network ( gan ) architecture optimized for face verification .
first , we propose a multi-scale generator architecture for face hallucination with a high up-scaling factor , which has multiple intermediate outputs at different resolutions . the intermediate outputs pursue the progressive goal of synthesizing images from small to large . second , we incorporate a face verifier with the original gan discriminator and propose a novel discriminator which learns to discriminate different identities while distinguishing fake generated hr face images from their ground truth images . in particular , the learned generator cares not only about the visual quality of hallucinated face images but also about preserving the discriminative features in the hallucination process . in addition , to capture perceptually relevant differences we employ a perceptual similarity loss , instead of similarity in pixel space . we perform a quantitative and qualitative evaluation of our framework on the lfw and celeba datasets . the experimental results show the advantages of our proposed method against the state-of-the-art methods on the 8x downsampled testing dataset . story_separator_special_tag face super-resolution ( sr ) is a subfield of the sr domain that specifically targets the reconstruction of face images . the main challenge of face sr is to restore essential facial features without distortion . we propose a novel face sr method that generates photo-realistic 8x super-resolved face images with fully retained facial details . to that end , we adopt a progressive training method , which allows stable training by splitting the network into successive steps , each producing output with a progressively higher resolution . we also propose a novel facial attention loss and apply it at each step to focus on restoring facial attributes in greater detail by multiplying the pixel difference and heatmap values . lastly , we propose a compressed version of the state-of-the-art face alignment network ( fan ) for landmark heatmap extraction . with the proposed fan , we can extract the heatmaps suitable for face sr and also reduce the overall training time . experimental results verify that our method outperforms state-of-the-art methods in both qualitative and quantitative measurements , especially in perceptual quality . story_separator_special_tag we present a highly accurate single-image super-resolution ( sr ) method . our method uses a very deep convolutional network inspired by vgg-net used for imagenet classification . we find increasing our network depth shows a significant improvement in accuracy . our final model uses 20 weight layers . by cascading small filters many times in a deep network structure , contextual information over large image regions is exploited in an efficient way . with very deep networks , however , convergence speed becomes a critical issue during training . we propose a simple yet effective training procedure . we learn residuals only and use extremely high learning rates ( 10^4 times higher than srcnn ) enabled by adjustable gradient clipping . our proposed method performs better than existing methods in accuracy , and visual improvements in our results are easily noticeable . story_separator_special_tag face hallucination technique generates high-resolution face images from low-resolution ones . in this paper , we propose a patch based multitask deep learning method for face hallucination , which is robust to blurring of images .
our method is based on a fully connected feedforward neural network , and the weights of the final layers are fine-tuned separately on different clusters of patches . experimental results show that our system outperforms the prior state-of-the-art methods by a significant margin , while using less computation time at test time . story_separator_special_tag face alignment is a crucial step in face recognition tasks . especially , using landmark localization for geometric face normalization has been shown to be very effective , clearly improving the recognition results . however , no adequate databases exist that provide a sufficient number of annotated facial landmarks . the databases are either limited to frontal views , provide only a small number of annotated images or have been acquired under controlled conditions . hence , we introduce a novel database overcoming these limitations : annotated facial landmarks in the wild ( aflw ) . aflw provides a large-scale collection of images gathered from flickr , exhibiting a large variety in face appearance ( e.g. , pose , expression , ethnicity , age , gender ) as well as general imaging and environmental conditions . in total 25,993 faces in 21,997 real-world images are annotated with up to 21 landmarks per image . due to the comprehensive set of annotations aflw is well suited to train and test algorithms for multi-view face detection , facial landmark localization and face pose estimation . further , we offer a rich set of tools that ease the integration of other face databases and story_separator_special_tag the state-of-the-art convolutional neural network ( cnn ) based methods have achieved promising recognition performance on human face images . however , the accuracy can not be retained when face images are at very low resolution ( lr ) . in this paper , we propose a novel loss function , called identity-preserved loss , which combines with the image-content loss to jointly supervise cnns , for performing face hallucination and recognition simultaneously . therefore , the trained network is able to perform face hallucination and identity preservation , even if the query face is of very low resolution . more importantly , experimental results show that our proposed method can preserve the identities for the lr images from unknown subjects , who are not included in the training set . the source code of our proposed method is available at https://github.com/johnnysclai/sr_lrfr . story_separator_special_tag convolutional neural networks have recently demonstrated high-quality reconstruction for single-image super-resolution . in this paper , we propose the laplacian pyramid super-resolution network ( lapsrn ) to progressively reconstruct the sub-band residuals of high-resolution images . at each pyramid level , our model takes coarse-resolution feature maps as input , predicts the high-frequency residuals , and uses transposed convolutions for upsampling to the finer level . our method does not require bicubic interpolation as a pre-processing step and thus dramatically reduces the computational complexity . we train the proposed lapsrn with deep supervision using a robust charbonnier loss function and achieve high-quality reconstruction . furthermore , our network generates multi-scale predictions in one feed-forward pass through the progressive reconstruction , thereby facilitating resource-aware applications .
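the robust charbonnier loss used to train lapsrn is a smooth variant of the l1 penalty ; a minimal pytorch version follows , with epsilon as the usual small constant .

import torch

def charbonnier_loss(pred, target, eps=1e-3):
    # sqrt(diff^2 + eps^2): differentiable near zero, l1-like in the tails
    return torch.sqrt((pred - target) ** 2 + eps ** 2).mean()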
extensive quantitative and qualitative evaluations on benchmark datasets show that the proposed algorithm performs favorably against the state-of-the-art methods in terms of speed and accuracy . story_separator_special_tag over recent years , with the advent of generative adversarial networks ( gans ) , many face analysis tasks have accomplished astounding performance , with applications including , but not limited to , face generation and 3d face reconstruction from a single `` in-the-wild '' image . nevertheless , to the best of our knowledge , there is no method which can produce high-resolution photorealistic 3d faces from `` in-the-wild '' images and this can be attributed to : ( a ) the scarcity of available data for training , and ( b ) the lack of robust methodologies that can successfully be applied on very high-resolution data . in this paper , we introduce avatarme , the first method that is able to reconstruct photorealistic 3d faces from a single `` in-the-wild '' image with an increasing level of detail . to achieve this , we capture a large dataset of facial shape and reflectance and build on a state-of-the-art 3d texture and shape reconstruction method and successively refine its results , while generating the per-pixel diffuse and specular components that are required for realistic rendering . as we demonstrate in a series of qualitative and quantitative experiments , story_separator_special_tag we address the problem of interactive facial feature localization from a single image . our goal is to obtain an accurate segmentation of facial features on high-resolution images under a variety of pose , expression , and lighting conditions . although there has been significant work in facial feature localization , we are addressing a new application area , namely facilitating intelligent high-quality editing of portraits , which brings requirements not met by existing methods . we propose an improvement to the active shape model that allows for greater independence among the facial components and improves on the appearance fitting step by introducing a viterbi optimization process that operates along the facial contours . despite the improvements , we do not expect perfect results in all cases . we therefore introduce an interaction model whereby a user can efficiently guide the algorithm towards a precise solution . we introduce the helen facial feature dataset consisting of annotated portrait images gathered from flickr that are more diverse and challenging than currently existing datasets . we present experiments that compare our automatic method to published results , and also a quantitative evaluation of the effectiveness of our interactive method . story_separator_special_tag though existing face hallucination methods achieve great performance on the global region evaluation , most of them can not recover local attributes accurately , especially when super-resolving a very low-resolution face image from 14 × 12 pixels to its 8× larger one . in this paper , we propose a brand new attribute augmented convolutional neural network ( aacnn ) to assist face hallucination by exploiting facial attributes . the goal is to augment face hallucination , particularly the local regions , with informative attribute description . more specifically , our method fuses the advantages of both the image domain and the attribute domain , which significantly assists facial attribute recovery .
extensive experiments demonstrate that our proposed method achieves superior visual quality of hallucination on both local and global regions compared with the state-of-the-art methods . in addition , our aacnn still improves the performance of hallucination adaptively with partial attribute input . story_separator_special_tag surveillance cameras today often capture nir ( near infrared ) images in low-light environments . however , most face datasets accessible for training and verification are only collected in the vis ( visible light ) spectrum . it remains a challenging problem to match nir to vis face images due to the different light spectrum . recently , breakthroughs have been made for vis face recognition by applying deep learning on a huge amount of labeled vis face samples . the same deep learning approach can not be simply applied to nir face recognition for two main reasons : first , far fewer nir face images are available for training compared to the vis spectrum . second , face galleries to be matched are mostly available only in the vis spectrum . in this paper , we propose an approach to extend the deep learning breakthrough for vis face recognition to the nir spectrum , without retraining the underlying deep models that see only vis faces . our approach consists of two core components , cross-spectral hallucination and low-rank embedding , to optimize respectively input and output of a vis deep model for cross-spectral face recognition . cross-spectral hallucination produces story_separator_special_tag faces are of particular concern in video surveillance systems . it is challenging to reconstruct clear faces from low-resolution ( lr ) videos . in this paper , we propose a new method for face video super-resolution ( sr ) based on identity guided generative adversarial networks ( gans ) . we establish a two-stage convolutional neural network ( cnn ) for face video sr , and employ identity guided gans to recover high-resolution ( hr ) facial details . extensive experiments validate the effectiveness of our proposed method from the following aspects : fidelity , visual quality and robustness to pose , expression and illumination variations . story_separator_special_tag we propose an id preserving face super-resolution generative adversarial network ( ip-fsrgan ) to reconstruct realistic super-resolution face images from low-resolution ones . inspired by the success of generative adversarial networks ( gan ) , we introduce a novel id preserving module to help the generator learn to infer the facial details and synthesize more realistic super-resolution faces . our method produces satisfactory visual results and also quantitatively outperforms state-of-the-art super-resolution methods on face datasets including casia-webface , celeba , and lfw under the metrics of psnr , ssim , and cosine similarity . in addition , we propose a framework to apply the ip-fsrgan model to address the face verification task on low-resolution face images . the synthesized 4× super-resolved faces achieve a verification accuracy of 97.6 % , improved from 92.8 % on the low-resolution faces . we also show experimentally that the proposed ip-fsrgan model demonstrates excellent robustness under different downsample scaling factors and extensibility to various face verification models . story_separator_special_tag face hallucination aims to generate a high resolution face from a low resolution one .
generic super resolution methods can not solve this problem well , because the human face has a strong structure . with the rapid development of the deep learning technique , some convolutional neural network ( cnn ) models for face hallucination emerged and achieved state-of-the-art performance . in this paper , we propose a five-branch network based on five key parts of the human face . each branch of this network aims to generate a high resolution key part . the final high resolution face is the combination of the five branches ' outputs . in addition , we design a gated enhance unit ( geu ) and cascade it to form our network architecture . experimental results confirm that our method can generate pleasing high resolution faces . story_separator_special_tag face hallucination technique aims to generate high-resolution ( hr ) face images from low-resolution ( lr ) inputs . even though existing face hallucination methods have achieved great performance on the global region evaluation , most of them can not reasonably restore local attributes , especially when ultra-resolving a tiny lr face image ( 16 × 16 pixels ) to its larger version ( 8× upscaling factor ) . in this paper , we propose a novel attribute-guided face transfer and enhancement network for face hallucination . specifically , we first construct a face transfer network , which upsamples lr face images to hr feature maps , and then fuses facial attributes and the upsampled features to generate hr face images with rational attributes . finally , a face enhancement network is developed based on a generative adversarial network ( gan ) to improve visual quality by exploiting a composite loss that combines image color , texture and content . extensive experiments demonstrate that our method achieves superior face hallucination results and outperforms the state-of-the-art . story_separator_special_tag face hallucination technique generates high-resolution ( hr ) face images from low-resolution ( lr ) ones . in this paper , we propose to use a coarse-to-fine method for face hallucination by constructing a two-branch network , which makes full use of the specific prior knowledge of face images and the advantages of generic image super-resolution ( sr ) methods . specifically , we jointly build a deep neural network ( dnn ) with a face image sr branch and a semantic face parsing branch . the former branch implements the image upsampling and feature extraction using a cascade of convolutional layers . the latter branch extracts facial semantic parsing as prior knowledge . then , we combine the image features and the prior knowledge to reconstruct hr face images . finally , we optimize the dnn , by using adversarial training and a perceptual loss , in order to obtain high realism . extensive experiments show that the proposed method outperforms the state-of-the-art alternatives in terms of accuracy and realism . story_separator_special_tag face super-resolution is a domain-specific super-resolution ( sr ) problem of generating high-resolution ( hr ) face images from low-resolution ( lr ) inputs . even though existing face sr methods have achieved great performance on the global region evaluation , most of them can not restore local attributes and structure reasonably , especially to ultra-resolve tiny lr face images ( 16 × 16 pixels ) to their larger versions ( 8× upscaling factor ) . in this paper , we propose an open source face sr framework based on facial semantic attribute transformation and self-attentive structure enhancement .
specifically , the proposed framework introduces face semantic information ( i.e . , face attributes ) and face structure information ( i.e . , face boundaries ) in a successive two-stage fashion . in the first stage , an attribute transformation network ( at-net ) is established . it upsamples lr face images to hr feature maps and then combines facial attributes with these features to generate the intermediate hr results with rational attributes . in the second stage , a structure enhancement network ( se-net ) is built . it simultaneously extracts face features and estimates facial boundary heatmaps from story_separator_special_tag recent reference-based face restoration methods have received considerable attention due to their great capability in recovering high-frequency details on real low-quality images . however , most of these methods require a high-quality reference image of the same identity , making them only applicable in limited scenes . to address this issue , this paper suggests a deep face dictionary network ( termed dfdnet ) to guide the restoration process of degraded observations . to begin with , we use k-means to generate deep dictionaries for perceptually significant face components ( i.e . , left/right eyes , nose and mouth ) from high-quality images . next , with the degraded input , we match and select the most similar component features from their corresponding dictionaries and transfer the high-quality details to the input via the proposed dictionary feature transfer ( dft ) block . in particular , component adain is leveraged to eliminate the style diversity between the input and dictionary features ( e.g . , illumination ) , and a confidence score is proposed to adaptively fuse the dictionary feature into the input . finally , multi-scale dictionaries are adopted in a progressive manner to enable the coarse-to-fine restoration . experiments story_separator_special_tag in the past few years , we witnessed rapid advancement in face super-resolution from very low resolution ( vlr ) images . however , most of the previous studies focus on solving this problem without explicitly considering the impact of severe real-life image degradation ( e.g . blur and noise ) . we show that robustly recovering details from vlr images is a task beyond the ability of current state-of-the-art methods . in this paper , we borrow ideas from `` facial composite '' and propose an alternative approach to tackle this problem . we endow the degraded vlr images with additional cues by integrating existing face components from multiple reference images into a novel learning pipeline with both low-level and high-level semantic loss functions as well as a specialized adversarial based training scheme . we show that our method is able to effectively and robustly restore relevant facial details from 16x16 images with extreme degradation . we also tested our approach on real-life images , and our method performs favorably against previous methods . story_separator_special_tag in many real-world face restoration applications , e.g. , smartphone photo albums and old films , multiple high-quality ( hq ) images of the same person are usually available for a given degraded low-quality ( lq ) observation . however , most existing guided face restoration methods are based on a single hq exemplar image , and are limited in properly exploiting guidance to improve generalization to unknown degradation processes .
to address these issues , this paper suggests enhancing blind face restoration performance by utilizing multi-exemplar images and adaptive fusion of features from guidance and degraded images . first , given a degraded observation , we select the optimal guidance based on the weighted affine distance on landmark sets , where the landmark weights are learned to make the guidance image optimized for hq image reconstruction . second , moving least-squares and adaptive instance normalization are leveraged for spatial alignment and illumination translation of the guidance image in the feature space . finally , for better feature fusion , multiple adaptive spatial feature fusion ( asff ) layers are introduced to incorporate guidance features in an adaptive and progressive manner , resulting in our asffnet . story_separator_special_tag this paper studies the problem of blind face restoration from an unconstrained blurry , noisy , low-resolution , or compressed image ( i.e. , degraded observation ) . for better recovery of fine facial details , we modify the problem setting by taking both the degraded observation and a high-quality guided image of the same identity as input to our guided face restoration network ( gfrnet ) . however , the degraded observation and guided image are generally different in pose , illumination and expression , thereby making plain cnns ( e.g. , u-net ) fail to recover fine and identity-aware facial details . to tackle this issue , our gfrnet model includes both a warping subnetwork ( warpnet ) and a reconstruction subnetwork ( recnet ) . the warpnet is introduced to predict a flow field for warping the guided image to correct pose and expression ( i.e. , warped guidance ) , while the recnet takes the degraded observation and warped guidance as input to produce the restoration result . because the ground-truth flow field is unavailable , a landmark loss together with total variation regularization is incorporated to guide the learning of warpnet . furthermore , to story_separator_special_tag due to the numerous important applications of face images , such as long-distance video surveillance and identity verification , face hallucination has been an active research topic in the last decade . this paper makes a survey of approaches to high quality face hallucination by looking at theoretical backgrounds and practical results . the strengths and weaknesses of these approaches are identified to form a basis for a new sparsity-based method for super-resolving mis-aligned face images . story_separator_special_tag in this paper , we formulate face hallucination as an image decomposition problem , and propose a morphological component analysis ( mca ) based method for hallucinating a single face image . a novel three-step framework is presented for the proposed method . firstly , a low-resolution input image is up-sampled via interpolation . then , the interpolated image is decomposed into a global high-resolution image and an unsharp mask by using mca . finally , a residue compensation is performed on the global face to enhance its visual quality . in our proposal , mca plays a vital role , as it can properly decompose a signal into several semantic sub-signals in accordance with specific dictionaries . by virtue of the multi-channel decomposition capability of mca , the proposed method can also be extended to the simultaneous implementation of face hallucination and expression normalization .
experimental results demonstrate the effectiveness of our method for images from both lab environments and realistic scenarios . we also study the contribution of face hallucination to face recognition in the case where probe images and gallery images are at different resolutions . the main conclusion is that the contribution is significant story_separator_special_tag in this paper , we study face hallucination , or synthesizing a high-resolution face image from low-resolution input , with the help of a large collection of high-resolution face images . we develop a two-step statistical modeling approach that integrates both a global parametric model and a local nonparametric model . first , we derive a global linear model to learn the relationship between the high-resolution face images and their smoothed and down-sampled lower resolution ones . second , the residual between an original high-resolution image and the high-resolution image reconstructed by a learned linear model is modeled by a patch-based nonparametric markov network , to capture the high-frequency content of faces . by integrating both global and local models , we can generate photorealistic face images . our approach is demonstrated by extensive experiments with high-quality hallucinated faces . story_separator_special_tag face hallucination techniques generate high-resolution clean faces from low-resolution ones . traditional techniques generate facial features by incorporating manifold structure into the patch representation . in recent years , deep learning techniques have achieved great success on the topic . these deep learning based methods can maintain the middle- and low-frequency information well . however , they still can not recover the high-frequency facial features well , especially when the input is contaminated by noise . to address this problem , we propose a novel noise robust face hallucination framework via a cascaded model of deep convolutional networks and manifold learning . in general , we utilize a convolutional network to remove the noise and generate medium- and low-frequency facial information ; then , we further utilize another convolutional network to compensate for the lost high frequency with the help of a personalized manifold learning method . experimental results on a public dataset show the superiority of our method compared with state-of-the-art methods . story_separator_special_tag in recent years , deep learning has made great progress in many fields such as image recognition , natural language processing , speech recognition and video super-resolution . in this survey , we comprehensively investigate 33 state-of-the-art video super-resolution ( vsr ) methods based on deep learning . it is well known that leveraging the information within video frames is important for video super-resolution . thus we propose a taxonomy and classify the methods into six sub-categories according to the ways of utilizing inter-frame information . moreover , the architectures and implementation details of all the methods are depicted in detail . finally , we summarize and compare the performance of representative vsr methods on some benchmark datasets . we also discuss some challenges that need to be further addressed by researchers in the community of vsr .
to the best of our knowledge , this work is the first systematic review on vsr tasks , and it is expected to make a contribution to the development of recent studies in this area and potentially deepen our understanding of vsr techniques based on deep learning . story_separator_special_tag face hallucination aims to produce a high-resolution face image from an input low-resolution face image , which is of great importance for many practical face applications , such as face recognition and face verification . since the structure of the face image is complex and sensitive , obtaining a super-resolved face image is more difficult than generic image super-resolution . recently , with great success in the high-level face recognition task , deep learning methods , especially generative adversarial networks ( gans ) , have also been applied to the low-level vision task of face hallucination . this work provides a survey of model evolution in gan-based face hallucination . the principles of image resolution degradation and gan-based learning are presented first . then , a comprehensive review of the state-of-the-art gan-based face hallucination methods is provided . finally , comparisons of these gan-based face hallucination methods and discussions of related issues for future research directions are also provided . story_separator_special_tag face super-resolved ( sr ) images aid human perception . the state-of-the-art face sr methods leverage the spatial location of facial components as prior knowledge . however , it remains a great challenge to generate natural textures . in this paper , we propose a component semantic prior guided generative adversarial network ( cspgan ) to synthesize faces . specifically , semantic probability maps of facial components are exploited to modulate features in the cspgan through affine transformation . to compensate for the overly smooth outputs of the generative network , a gradient loss is proposed to recover the high-frequency details . meanwhile , the discriminative network is designed to perform multiple tasks , predicting the semantic category and distinguishing authenticity simultaneously . the extensive experimental results demonstrate the superiority of the cspgan in reconstructing photorealistic textures . story_separator_special_tag aiming at the problems of face image super-resolution reconstruction methods based on convolutional neural networks , such as a single feature extraction scale , low utilization of features and blurred face image textures , a model combining a convolutional neural network with a self-attention mechanism is proposed . firstly , the shallow features of the image are extracted by cascaded 3 × 3 convolutional kernels , and then a self-attention mechanism is combined with the residual blocks in a deep residual network to extract the deep detail features of faces . finally , the extracted features are fused globally by skip connections , which provide more high-frequency details for face reconstruction . experiments on the helen and celeba face datasets and real-world images showed that the proposed method could make full use of facial feature information ; its peak signal-to-noise ratio ( psnr ) and structural similarity ( ssim ) were both higher than those of the comparison methods , with better subjective visual effects .
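for reference , the psnr figure quoted throughout these comparisons is a simple function of the mean squared error ; a minimal sketch for 8-bit images ( peak value 255 ) follows .

import numpy as np

def psnr(x, y, peak=255.0):
    # peak signal-to-noise ratio in decibels
    mse = np.mean((x.astype(np.float64) - y.astype(np.float64)) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)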
story_separator_special_tag this paper addresses the deep face recognition ( fr ) problem under the open-set protocol , where ideal face features are expected to have smaller maximal intra-class distance than minimal inter-class distance under a suitably chosen metric space . however , few existing algorithms can effectively achieve this criterion . to this end , we propose the angular softmax ( a-softmax ) loss that enables convolutional neural networks ( cnns ) to learn angularly discriminative features . geometrically , a-softmax loss can be viewed as imposing discriminative constraints on a hypersphere manifold , which intrinsically matches the prior that faces also lie on a manifold . moreover , the size of the angular margin can be quantitatively adjusted by a parameter m . we further derive specific m to approximate the ideal feature criterion . extensive analysis and experiments on labeled faces in the wild ( lfw ) , youtube faces ( ytf ) and megaface challenge 1 show the superiority of a-softmax loss in fr tasks . story_separator_special_tag predicting face attributes in the wild is challenging due to complex face variations . we propose a novel deep learning framework for attribute prediction in the wild . it cascades two cnns , lnet and anet , which are fine-tuned jointly with attribute tags , but pre-trained differently . lnet is pre-trained by massive general object categories for face localization , while anet is pre-trained by massive face identities for attribute prediction . this framework not only outperforms the state-of-the-art by a large margin , but also reveals valuable facts on learning face representation . ( 1 ) it shows how the performances of face localization ( lnet ) and attribute prediction ( anet ) can be improved by different pre-training strategies . ( 2 ) it reveals that although the filters of lnet are fine-tuned only with image-level attribute tags , their response maps over entire images have strong indication of face locations . this fact enables training lnet for face localization with only image-level annotations , but without face bounding boxes or landmarks , which are required by all attribute recognition works . ( 3 ) it also demonstrates that the high-level hidden neurons of anet automatically discover story_separator_special_tag despite the great progress of image super-resolution in recent years , face super-resolution still has much room to explore good visual quality while preserving original facial attributes for larger up-scaling factors . this paper investigates a new research direction in face super-resolution , called reference based face super-resolution ( refsr ) , in which a reference facial image containing genuine attributes is provided in addition to the low-resolution images for super-resolution . we focus on transferring the key information extracted from reference facial images to the super-resolution process to guarantee the content similarity between the reference and super-resolution image . we propose a novel conditional variational autoencoder model for this reference based face super-resolution ( refsr-vae ) . by using the encoder to map the reference image to the joint latent space , we can then use the decoder to sample the encoder results to super-resolve low-resolution facial images to generate super-resolution images with good visual quality .
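a simplified sketch of the angular-margin idea behind the a-softmax loss described earlier in this section : classify by the angle between features and normalized class weights , and widen the target-class angle by a factor m . the exact a-softmax uses a piecewise monotonic psi ( theta ) and does not rescale features , so the feature normalization and scale s below are simplifying assumptions :

```python
# hedged sketch of an angular-margin softmax, not the exact a-softmax.
import torch
import torch.nn.functional as F

def angular_margin_logits(features, weight, labels, m=4, s=30.0):
    """features: (n, d), weight: (num_classes, d), labels: (n,)."""
    w = F.normalize(weight, dim=1)
    f = F.normalize(features, dim=1)                    # simplification
    cos_theta = (f @ w.t()).clamp(-1 + 1e-7, 1 - 1e-7)  # (n, num_classes)
    theta = torch.acos(cos_theta)
    target = F.one_hot(labels, num_classes=weight.size(0)).bool()
    # widen the angle only for the ground-truth class
    logits = torch.where(target, torch.cos(m * theta), cos_theta)
    return s * logits  # feed to F.cross_entropy(logits, labels)
```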
we create a benchmark dataset on reference based face super-resolution ( refsr-face ) for general research use , which contains reference images paired with low-resolution images of various poses , emotions , ages and appearances . both objective and subjective evaluations were story_separator_special_tag previous research on face restoration often focused on repairing a specific type of low-quality facial images such as low-resolution ( lr ) or occluded facial images . however , in the real world , both the above-mentioned forms of image degradation often coexist . therefore , it is important to design a model that can repair lr occluded images simultaneously . this paper proposes a multi-scale feature graph generative adversarial network ( mfg-gan ) to implement the face restoration of images in which both degradation modes coexist , and also to repair images with a single type of degradation . based on the gan , the mfg-gan integrates the graph convolution and feature pyramid network to restore occluded low-resolution face images to non-occluded high-resolution face images . the mfg-gan uses a set of customized losses to ensure that high-quality images are generated . in addition , we designed the network in an end-to-end format . experimental results on the public-domain celeba and helen databases show that the proposed approach outperforms state-of-the-art methods in performing face super-resolution ( up to 4x or 8x ) and face completion simultaneously . cross-database testing also revealed that the proposed approach has good generalizability . story_separator_special_tag face hallucination is a super-resolution algorithm specially designed to improve the resolution and quality of low-resolution ( lr ) input face images . although a deep neural network offers an end-to-end mapping from lr to high-resolution ( hr ) images , most of the deep learning-based face hallucinations neglect the structure prior for face images . to utilize the highly structured facial prior , a parallel region-based deep residual network ( prdrn ) was developed to predict the missing detailed information for accurate image reconstruction . initially , the image is divided into multiple regions with the symmetry of face structures . then , the sub-networks corresponding to multiple regions are trained in parallel . finally , all reconstructed regions are combined to form the hr image . the experimental results on the fei , casia-webface and cmu-mit public face databases show that the proposed network outperforms other state-of-the-art approaches . story_separator_special_tag most deep learning based face hallucinations exploit a random patch prior from training samples , then learn the mapping functions between low-resolution ( lr ) and high-resolution ( hr ) images , and achieve satisfactory reconstruction performance . however , most of them do not take into account the prior information on facial structure , which is pivotal for face hallucination . different from random patch prior based deep learning approaches , in this paper , we utilize a facial structural prior and develop a simple yet powerful face hallucination method , named region-based deep convolutional networks ( rdcn ) . firstly , we divide the facial image into several regions of interest ; then we train multiple parallel subnetworks on these regions to extract better structure priors ; finally , the hr output is reconstructed by stitching the facial parts .
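both prdrn and rdcn above share a divide-process-stitch pattern : split the face into regions , super-resolve each region with its own sub-network , and reassemble the outputs . a hedged sketch of that pattern , with a simple fixed grid standing in for the papers' face-structure-aware region layouts :

```python
# hedged sketch: per-region sub-networks over a fixed grid, then stitching.
import torch
import torch.nn as nn

class RegionBasedSR(nn.Module):
    def __init__(self, make_subnet, grid=(2, 2)):
        super().__init__()
        self.grid = grid
        self.subnets = nn.ModuleList(
            [make_subnet() for _ in range(grid[0] * grid[1])])

    def forward(self, x):
        rows = torch.chunk(x, self.grid[0], dim=2)       # split vertically
        regions = [r for row in rows
                   for r in torch.chunk(row, self.grid[1], dim=3)]
        outs = [net(r) for net, r in zip(self.subnets, regions)]
        # stitch: regroup region outputs into rows, then into the full face
        k = self.grid[1]
        rows_out = [torch.cat(outs[i * k:(i + 1) * k], dim=3)
                    for i in range(self.grid[0])]
        return torch.cat(rows_out, dim=2)
```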
experiments on the fei database demonstrate that the proposed region-based convolution networks outperform other state-of-the-art methods , including recently proposed deep learning based approaches , both in subjective and objective reconstruction qualities . story_separator_special_tag we are interested in attribute-guided face generation : given a low-res face input image , an attribute vector that can be extracted from a high-res image ( attribute image ) , our new method generates a high-res face image for the low-res input that satisfies the given attributes . to address this problem , we condition the cyclegan and propose conditional cyclegan , which is designed to 1 ) handle unpaired training data because the training low/high-res and high-res attribute images may not necessarily align with each other , and to 2 ) allow easy control of the appearance of the generated face via the input attributes . we demonstrate impressive results on the attribute-guided conditional cyclegan , which can synthesize realistic face images with appearance easily controlled by user-supplied attributes ( e.g. , gender , makeup , hair color , eyeglasses ) . using the attribute image as identity to produce the corresponding conditional vector and by incorporating a face verification network , the attribute-guided network becomes the identity-guided conditional cyclegan which produces impressive and interesting results on identity transfer . we demonstrate three applications on identity-guided conditional cyclegan : identity-preserving face superresolution , face swapping , and frontal story_separator_special_tag most of the current state-of-the-art tiny face super-resolution ( sr ) methods aim at learning a single one-to-one mapping to super-resolve low-resolution ( lr ) face images . in contrast with high-resolution ( hr ) face images , lr face images lack fine facial details , implying that an lr face image can be mapped to many hr candidates or vice versa . this ambiguity means that an hr face image super-resolved by one-to-one sr methods may not preserve accurate facial details . to alleviate this problem , we consider tiny face sr as a one-to-many mapping , and demonstrate that injecting reasonable additional facial prior knowledge can significantly reduce the ambiguity in face sr . specifically , with the gan architecture , we propose a novel face sr network consisting of an upsampling network and a discriminative network . the upsampling network is designed to embed facial prior knowledge ( represented as a vector ) into the residual features of lr inputs and super-resolve lr inputs ( 16×16 pixels ) with an up-scaling factor of 4× . the discriminative network aims at examining whether the generated hr face images match the corresponding story_separator_special_tag recent works based on deep learning and facial priors have succeeded in super-resolving severely degraded facial images . however , the prior knowledge is not fully exploited in existing methods , since facial priors such as landmark and component maps are always estimated by low-resolution or coarsely super-resolved images , which may be inaccurate and thus affect the recovery performance . in this paper , we propose a deep face super-resolution ( fsr ) method with iterative collaboration between two recurrent networks which focus on facial image recovery and landmark estimation respectively .
in each recurrent step , the recovery branch utilizes the prior knowledge of landmarks to yield higher-quality images which facilitate more accurate landmark estimation in turn . therefore , the iterative information interaction between two processes boosts the performance of each other progressively . moreover , a new attentive fusion module is designed to strengthen the guidance of landmark maps , where facial components are generated individually and aggregated attentively for better restoration . quantitative and qualitative experimental results show the proposed method significantly outperforms state-of-the-art fsr methods in recovering high-quality face images . story_separator_special_tag a novel face hallucination method is proposed in this paper for the reconstruction of a high-resolution face image from a low-resolution observation based on a set of high- and low-resolution training image pairs . different from most of the established methods based on probabilistic or manifold learning models , the proposed method hallucinates the high-resolution image patch using the same position image patches of each training image . the optimal weights of the training image position-patches are estimated and the hallucinated patches are reconstructed using the same weights . the final high-resolution facial image is formed by integrating the hallucinated patches . the necessity of two-step framework or residue compensation and the differences between hallucination based on patch and global image are discussed . experiments show that the proposed method without residue compensation generates higher-quality images and costs less computational time than some recent face image super-resolution ( hallucination ) techniques . story_separator_special_tag one of the most useful sub-fields of super-resolution ( sr ) is face sr. given a low-resolution ( lr ) image of a face , the high-resolution ( hr ) counterpart is demanded . however , performing sr task on extremely low resolution images is very challenging due to the image distortion in the hr results . many deep learning-based sr approaches have intended to solve this issue by using attribute domain information . however , they require more complex data and even additional networks . to simplify this process and yet preserve the precision , a novel multi-scale gradient gan with capsule network as its discriminator is proposed in this paper . msg-capsgan surpassed the state-of-the-art face sr networks in terms of psnr . this network is a step towards a precise pose invariant sr system . story_separator_special_tag we propose a novel method to use both audio and a low-resolution image to perform extreme face super-resolution ( a 16x increase of the input size ) . when the resolution of the input image is very low ( e.g. , 8x8 pixels ) , the loss of information is so dire that important details of the original identity have been lost and audio can aid the recovery of a plausible high-resolution image . in fact , audio carries information about facial attributes , such as gender and age . to combine the aural and visual modalities , we propose a method to first build the latent representations of a face from the lone audio track and then from the lone low-resolution image . we then train a network to fuse these two representations . we show experimentally that audio can assist in recovering attributes such as the gender , the age and the identity , and thus improve the correctness of the high-resolution image reconstruction process . 
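the position-patch hallucination method summarized above has a compact closed form : each lr input patch is approximated by a weighted combination of the training lr patches at the same position , and the same weights are reused on the hr training patches . a minimal sketch over vectorized patches ( the ridge regularizer and the sum-to-one normalization below are simplifications of the paper's constrained least squares ) :

```python
# hedged sketch of position-patch face hallucination for one patch position.
import numpy as np

def hallucinate_patch(lr_patch, lr_train_patches, hr_train_patches, reg=1e-4):
    """lr_patch: (d,), lr_train_patches: (n, d), hr_train_patches: (n, D)."""
    A = lr_train_patches
    G = A @ A.T                  # gram matrix of same-position training patches
    b = A @ lr_patch             # correlations with the input patch
    w = np.linalg.solve(G + reg * np.eye(len(A)), b)   # optimal weights
    w /= w.sum()                 # approximate the sum-to-one constraint
    return w @ hr_train_patches  # reuse the weights on hr patches -> (D,)
```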
our procedure does not make use of human annotation and thus can be easily trained with existing video datasets . moreover , we show that our model builds a factorized representation of story_separator_special_tag the primary aim of single-image super-resolution is to construct a high-resolution ( hr ) image from a corresponding low-resolution ( lr ) input . in previous approaches , which have generally been supervised , the training objective typically measures a pixel-wise average distance between the super-resolved ( sr ) and hr images . optimizing such metrics often leads to blurring , especially in high variance ( detailed ) regions . we propose an alternative formulation of the super-resolution problem based on creating realistic sr images that downscale correctly . we present a novel super-resolution algorithm addressing this problem , pulse ( photo upsampling via latent space exploration ) , which generates high-resolution , realistic images at resolutions previously unseen in the literature . it accomplishes this in an entirely self-supervised fashion and is not confined to a specific degradation operator used during training , unlike previous methods ( which require training on databases of lr-hr image pairs for supervised learning ) . instead of starting with the lr image and slowly adding detail , pulse traverses the high-resolution natural image manifold , searching for images that downscale to the original lr image . this is formalized through the downscaling loss story_separator_special_tag an important aim of research on the blind image quality assessment ( iqa ) problem is to devise perceptual models that can predict the quality of distorted images with as little prior knowledge of the images or their distortions as possible . current state-of-the-art general purpose no reference ( nr ) iqa algorithms require knowledge about anticipated distortions in the form of training examples and corresponding human opinion scores . however , we have recently derived a blind iqa model that only makes use of measurable deviations from statistical regularities observed in natural images , without training on human-rated distorted images , and , indeed , without any exposure to distorted images . thus , it is completely blind . the new iqa model , which we call the natural image quality evaluator ( niqe ) , is based on the construction of a quality aware collection of statistical features based on a simple and successful space domain natural scene statistic ( nss ) model . these features are derived from a corpus of natural , undistorted images . experimental results show that the new index delivers performance comparable to top performing nr iqa models that require training on large databases of human story_separator_special_tag one of the challenges in the study of generative adversarial networks is the instability of their training . in this paper , we propose a novel weight normalization technique called spectral normalization to stabilize the training of the discriminator . our new normalization technique is computationally light and easy to incorporate into existing implementations . we tested the efficacy of spectral normalization on the cifar10 , stl-10 , and ilsvrc2012 datasets , and we experimentally confirmed that spectrally normalized gans ( sn-gans ) are capable of generating images of better or equal quality relative to the previous training stabilization techniques .
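the core of the spectral normalization technique above is cheap to sketch : estimate the largest singular value of each weight matrix with a persistent power-iteration vector and divide the weights by that estimate before every forward pass . a minimal sketch with one power iteration per step ( variable names are illustrative ) :

```python
# hedged sketch of spectral normalization via power iteration.
import torch
import torch.nn.functional as F

def spectral_normalize(W, u, n_iters=1, eps=1e-12):
    """W: (out, in) weight matrix, u: (out,) persistent power-iteration vector."""
    for _ in range(n_iters):
        v = F.normalize(W.t() @ u, dim=0, eps=eps)   # right singular direction
        u = F.normalize(W @ v, dim=0, eps=eps)       # left singular direction
    sigma = u @ W @ v                                # largest singular value
    return W / sigma, u   # normalized weights, updated vector for next step
```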
story_separator_special_tag over the last few years , increased interest has arisen with respect to age-related tasks in the computer vision community . as a result , several `` in-the-wild '' databases annotated with respect to the age attribute became available in the literature . nevertheless , one major drawback of these databases is that they are semi-automatically collected and annotated and thus they contain noisy labels . therefore , the algorithms that are evaluated in such databases are prone to noisy estimates . in order to overcome such drawbacks , we present in this paper the first , to the best of our knowledge , manually collected `` in-the-wild '' age database , dubbed agedb , containing images annotated with accurate-to-the-year , noise-free labels . as demonstrated by a series of experiments utilizing state-of-the-art algorithms , this unique property renders agedb suitable when performing experiments on age-invariant face verification , age estimation and face age progression `` in-the-wild '' . story_separator_special_tag the lack of resolution of imaging systems has critically adverse impacts on the recognition and performance of biometric systems , especially in the case of long range biometrics and surveillance such as face recognition at a distance , iris recognition and gait recognition . super-resolution , as one of the core innovations in computer vision , has been an attractive but challenging solution to address this problem in both general imaging systems and biometric systems . however , a fundamental difference exists between conventional super-resolution motivations and those required for biometrics . the former aims to enhance the visual clarity of the scene while the latter , more significantly , aims to improve the recognition accuracy of classifiers by exploiting specific characteristics of the observed biometric traits . this paper comprehensively surveys the state-of-the-art super-resolution approaches proposed for four major biometric modalities : face ( 2d+3d ) , iris , fingerprint and gait . we approach the super-resolution problem in biometrics from several different perspectives , including from the spatial and frequency domains , single and multiple input images , learning-based and reconstruction-based approaches . especially , we highlight two special categories : feature-domain super-resolution which performs super-resolution directly on story_separator_special_tag deep neural networks have recently been widely used in single-image super-resolution ( sr ) . in this paper , we propose the super-resolution local training network ( srlt ) for face reconstruction . the first reflection on whether or not a face image is clear is to observe facial features , such as whether the lines around the eye , nose and mouth are clear . this instinctive reflection inspired our design . we designed a network structure to crop the face image in the process of training , and used local training methods to train the faces and backgrounds respectively . because of the similarity of face structures , this new network is less affected by the background when reconstructing faces . our proposed method performs better than existing methods in accuracy . story_separator_special_tag deep learning methods have been successfully used in many areas of computer vision , including super resolution . however , all of the previous deep learning methods have been proposed for generic image super resolution .
in this paper , we propose to use a convolutional neural network for face hallucination ( fh ) by combining the domain-specific prior knowledge of face images and properties of deep learning . in the proposed method , an end-to-end mapping is learned as a deep convolutional network between the low resolution ( lr ) images and their corresponding high resolution ( hr ) images to upscale the input face image directly . in order to achieve a larger magnification factor , we cascade several convolutional neural networks , each of which has a fixed up-scaling factor and upscales the lr image step by step . experimental results show that our proposed method achieves better performance compared to traditional face hallucination methods . story_separator_special_tag this paper proposes a face hallucination method for the reconstruction of high-resolution facial images from single-frame , low-resolution facial images . the proposed method has been derived from example-based hallucination methods and morphable face models . first , we propose a recursive error back-projection method to compensate for residual errors , and a region-based reconstruction method to preserve characteristics of local facial regions . then , we define an extended morphable face model , in which an extended face is composed of the interpolated high-resolution face from a given low-resolution face , and its original high-resolution equivalent . then , the extended face is separated into an extended shape and an extended texture . we performed various hallucination experiments using the mpi , xm2vts , and kf databases , compared the reconstruction errors , structural similarity index , and recognition rates , and showed the effects of face detection errors and shape estimation errors . the encouraging results demonstrate that the proposed methods can improve the performance of face recognition systems . in particular , the proposed method can enhance the resolution of single-frame , low-resolution facial images . story_separator_special_tag the goal of this paper is face recognition from either a single photograph or from a set of faces tracked in a video . recent progress in this area has been due to two factors : ( i ) end to end learning for the task using a convolutional neural network ( cnn ) , and ( ii ) the availability of very large scale training datasets . we make two contributions : first , we show how a very large scale dataset ( 2.6m images , over 2.6k people ) can be assembled by a combination of automation and human in the loop , and discuss the trade off between data purity and time ; second , we traverse through the complexities of deep network training and face recognition to present methods and procedures to achieve comparable state of the art results on the standard lfw and ytf face benchmarks . story_separator_special_tag over the last couple of years , face recognition researchers have been developing new techniques . these developments are being fueled by advances in computer vision techniques , computer design , sensor design , and interest in fielding face recognition systems . such advances hold the promise of reducing the error rate in face recognition systems by an order of magnitude over face recognition vendor test ( frvt ) 2002 results . the face recognition grand challenge ( frgc ) is designed to achieve this performance goal by presenting to researchers a six-experiment challenge problem along with a data corpus of 50,000 images .
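the step-by-step cascading idea in the face hallucination method above , several fixed-factor networks chained to reach a large magnification , can be sketched directly ; the pixel-shuffle stage below is a placeholder for each sub-network , not the paper's architecture :

```python
# hedged sketch of cascaded fixed-factor upscaling: chaining several 2x
# stages so that, e.g., three stages give an overall 8x magnification.
import torch.nn as nn

def make_stage(channels=3):
    # toy 2x sub-network: conv -> pixel shuffle -> conv
    return nn.Sequential(
        nn.Conv2d(channels, 4 * channels, 3, padding=1),
        nn.PixelShuffle(2),            # fixed 2x spatial upscaling
        nn.Conv2d(channels, channels, 3, padding=1))

class CascadedSR(nn.Module):
    def __init__(self, num_stages=3, channels=3):
        super().__init__()
        self.stages = nn.ModuleList(
            [make_stage(channels) for _ in range(num_stages)])

    def forward(self, x):
        for stage in self.stages:      # upscale the lr image step by step
            x = stage(x)
        return x
```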
the data consists of 3d scans and high resolution still imagery taken under controlled and uncontrolled conditions . this paper describes the challenge problem , data corpus , and presents baseline performance and preliminary results on natural statistics of facial imagery . story_separator_special_tag in several real-world scenarios , the recorded pictures often have various artifacts such as blur , noise , varying illuminations , occlusion , etc . due to many reasons including cheap and low-resolution imaging systems , different image processing errors , and far distance of an object from the camera/sensor . the facial images captured from such low-resolution pictures severely impact the performance of various systems , namely human-computer interaction , speaker recognition by mouth movements , visual speech recognition , facial expression recognition , face-recognition , etc . facial image super-resolution ( or hallucination ) , as one of the kernel innovations in the field of computer vision and image processing , has been an engaging but challenging technique to overcome the above problems . this paper provides a comprehensive survey of existing state-of-the-art and recently published face hallucination methods . along with this , the detailed reconstruction procedure of the most successful hallucination approach , i.e . position-patch based super-resolution , is also provided in this work . moreover , some useful research directions are also presented at the end , which may help the research community of this field to design and develop new face hallucination methods for providing the more story_separator_special_tag fast and robust three-dimensional reconstruction of facial geometric structure from a single image is a challenging task with numerous applications . here , we introduce a learning-based approach for reconstructing a three-dimensional face from a single image . recent face recovery methods rely on accurate localization of key characteristic points . in contrast , the proposed approach is based on a convolutional-neural-network ( cnn ) which extracts the face geometry directly from its image . although such deep architectures outperform other models in complex computer vision problems , training them properly requires a large dataset of annotated examples . in the case of three-dimensional faces , currently , there are no large volume data sets , while acquiring such big-data is a tedious task . as an alternative , we propose to generate random , yet nearly photo-realistic , facial images for which the geometric form is known . the suggested model successfully recovers facial shapes from real images , even for faces with extreme expressions and under various lighting conditions . story_separator_special_tag a constrained optimization type of numerical algorithm for removing noise from images is presented . the total variation of the image is minimized subject to constraints involving the statistics of the noise . the constraints are imposed using lagrange multipliers . the solution is obtained using the gradient-projection method . this amounts to solving a time dependent partial differential equation on a manifold determined by the constraints . as t → ∞ the solution converges to a steady state which is the denoised image . the numerical algorithm is simple and relatively fast . the results appear to be state-of-the-art for very noisy images . the method is noninvasive , yielding sharp edges in the image .
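a minimal numerical sketch of the total variation denoising scheme just described , using plain gradient descent on an unconstrained tv-plus-fidelity objective instead of the original constrained gradient-projection formulation ; the fidelity weight , step size and boundary handling are assumptions :

```python
# hedged sketch of total variation denoising by gradient descent.
import numpy as np

def tv_denoise(noisy, lam=0.1, step=0.01, iters=200, eps=1e-8):
    u = noisy.astype(float).copy()
    for _ in range(iters):
        # forward differences approximate the image gradient
        ux = np.diff(u, axis=1, append=u[:, -1:])
        uy = np.diff(u, axis=0, append=u[-1:, :])
        mag = np.sqrt(ux**2 + uy**2 + eps)
        # divergence of the normalized gradient acts as a curvature term
        px, py = ux / mag, uy / mag
        div = (px - np.roll(px, 1, axis=1)) + (py - np.roll(py, 1, axis=0))
        # descend: decrease total variation while staying close to the data
        u = u + step * (div - lam * (u - noisy))
    return u
```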
the technique could be interpreted as a first step of moving each level set of the image normal to itself with velocity equal to the curvature of the level set divided by the magnitude of the gradient of the image , and a second step which projects the image back onto the constraint set . story_separator_special_tag developing powerful deformable face models requires massive , annotated face databases on which techniques can be trained , validated and tested . manual annotation of each facial image in terms of landmarks requires a trained expert and the workload is usually enormous . fatigue is one of the reasons that in some cases annotations are inaccurate . this is why the majority of existing facial databases provide annotations for a relatively small subset of the training images . furthermore , there is hardly any correspondence between the annotated landmarks across different databases . these problems make cross-database experiments almost infeasible . to overcome these difficulties , we propose a semi-automatic annotation methodology for annotating massive face datasets . this is the first attempt to create a tool suitable for annotating massive facial databases . we employed our tool for creating annotations for the multipie , xm2vts , ar , and frgc ver . 2 databases . the annotations will be made publicly available from http://ibug.doc.ic.ac.uk/resources/facial-point-annotations/ . finally , we present experiments which verify the accuracy of the produced annotations . story_separator_special_tag we present a variety of new architectural features and training procedures that we apply to the generative adversarial networks ( gans ) framework . using our new techniques , we achieve state-of-the-art results in semi-supervised classification on mnist , cifar-10 and svhn . the generated images are of high quality as confirmed by a visual turing test : our model generates mnist samples that humans can not distinguish from real data , and cifar-10 samples that yield a human error rate of 21.3 % . we also present imagenet samples with unprecedented resolution and show that our methods enable the model to learn recognizable features of imagenet classes . story_separator_special_tag real low-resolution ( lr ) face images contain degradations which are too varied and complex to be captured by known downsampling kernels and signal-independent noises . so , in order to successfully super-resolve real faces , a method needs to be robust to a wide range of noise , blur , compression artifacts etc . some of the recent works attempt to model these degradations from a dataset of real images using a generative adversarial network ( gan ) . they generate synthetically degraded lr images and use them with the corresponding real high-resolution ( hr ) images to train a super-resolution ( sr ) network using a combination of a pixel-wise loss and an adversarial loss . in this paper , we propose a two module super-resolution network where the feature extractor module extracts robust features from the lr image , and the sr module generates an hr estimate using only these robust features . we train a degradation gan to convert bicubically downsampled clean images to real degraded images , and interpolate between the obtained degraded lr image and its clean lr counterpart .
this interpolated lr image is then used along with its corresponding hr counterpart to story_separator_special_tag we provide an image deformation method based on moving least squares using various classes of linear functions including affine , similarity and rigid transformations . these deformations are realistic and give the user the impression of manipulating real-world objects . we also allow the user to specify the deformations using either sets of points or line segments , the latter useful for controlling curves and profiles present in the image . for each of these techniques , we provide simple closed-form solutions that yield fast deformations , which can be performed in real-time . story_separator_special_tag we have collected a new face data set that will facilitate research in the problem of frontal to profile face verification in the wild . the aim of this data set is to isolate the factor of pose variation in terms of extreme poses like profile , where many features are occluded , along with other in the wild variations . we call this data set the celebrities in frontal-profile ( cfp ) data set . we find that human performance on frontal-profile verification in this data set is only slightly worse ( 94.57 % accuracy ) than that on frontal-frontal verification ( 96.24 % accuracy ) . however , we evaluated many state-of-the-art algorithms , including fisher vector , sub-sml and a deep learning algorithm . we observe that all of them degrade more than 10 % from frontal-frontal to frontal-profile verification . the deep learning implementation , which performs comparably to humans on frontal-frontal , performs significantly worse ( 84.91 % accuracy ) on frontal-profile . this suggests that there is a gap between human performance and automatic face recognition methods for large pose variation in unconstrained images . story_separator_special_tag face hallucination is a domain-specific super-resolution problem that aims to generate a high-resolution ( hr ) face image from a low-resolution ( lr ) input . in contrast to the existing patch-wise super-resolution models that divide a face image into regular patches and independently apply lr to hr mapping to each patch , we implement deep reinforcement learning and develop a novel attention-aware face hallucination ( attention-fh ) framework , which recurrently learns to attend a sequence of patches and performs facial part enhancement by fully exploiting the global interdependency of the image . specifically , our proposed framework incorporates two components : a recurrent policy network for dynamically specifying a new attended region at each time step based on the status of the super-resolved image and the past attended region sequence , and a local enhancement network for selected patch hallucination and global state updating . the attention-fh model jointly learns the recurrent policy network and local enhancement network through maximizing a long-term reward that reflects the hallucination result with respect to the whole hr image . extensive experiments demonstrate that our attention-fh significantly outperforms the state-of-the-art methods on in-the-wild face images with large pose and illumination variations . story_separator_special_tag in this work we investigate the effect of the convolutional network depth on its accuracy in the large-scale image recognition setting .
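the affine variant of the moving least squares deformation described earlier on this line admits a simple per-pixel closed form ; the weight exponent alpha and the epsilon guard below are conventional choices , not prescribed by the paper :

```python
# hedged sketch of affine moving least squares deformation for one pixel.
import numpy as np

def mls_affine_deform(v, p, q, alpha=1.0, eps=1e-8):
    """v: (2,) pixel, p: (n, 2) source control points, q: (n, 2) targets."""
    w = 1.0 / (np.sum((p - v) ** 2, axis=1) ** alpha + eps)  # per-point weights
    p_star = (w[:, None] * p).sum(0) / w.sum()   # weighted centroid of p
    q_star = (w[:, None] * q).sum(0) / w.sum()   # weighted centroid of q
    ph, qh = p - p_star, q - q_star              # centered control points
    # weighted least-squares affine part: solve (sum w p^T p) M = sum w p^T q
    M = np.linalg.solve(ph.T @ (w[:, None] * ph),
                        ph.T @ (w[:, None] * qh))
    return (v - p_star) @ M + q_star             # deformed position of v
```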
our main contribution is a thorough evaluation of networks of increasing depth using an architecture with very small ( 3x3 ) convolution filters , which shows that a significant improvement on the prior-art configurations can be achieved by pushing the depth to 16-19 weight layers . these findings were the basis of our imagenet challenge 2014 submission , where our team secured the first and the second places in the localisation and classification tracks respectively . we also show that our representations generalise well to other datasets , where they achieve state-of-the-art results . we have made our two best-performing convnet models publicly available to facilitate further research on the use of deep visual representations in computer vision . story_separator_special_tag the gap between sensing patterns of different face modalities remains a challenging problem in heterogeneous face recognition ( hfr ) . this paper proposes an adversarial discriminative feature learning framework to close the sensing gap via adversarial learning on both raw-pixel space and compact feature space . this framework integrates cross-spectral face hallucination and discriminative feature learning into an end-to-end adversarial network . in the pixel space , we make use of generative adversarial networks to perform cross-spectral face hallucination . an elaborate two-path model is introduced to alleviate the lack of paired images , which gives consideration to both global structures and local textures . in the feature space , an adversarial loss and a high-order variance discrepancy loss are employed to measure the global and local discrepancy between two heterogeneous distributions respectively . these two losses enhance domain-invariant feature learning and modality-independent noise removal . experimental results on three nir-vis databases show that our proposed approach outperforms state-of-the-art hfr methods , without requiring a complex network or a large-scale training dataset . story_separator_special_tag we address the problem of restoring a high-resolution face image from a blurry low-resolution input . this problem is difficult as super-resolution and deblurring need to be tackled simultaneously . moreover , existing algorithms can not handle face images well as low-resolution face images do not have much texture which is especially critical for deblurring . in this paper , we propose an effective algorithm by utilizing the domain-specific knowledge of human faces to recover high-quality faces . we first propose a facial component guided deep convolutional neural network ( cnn ) to restore a coarse face image , which is denoted as the base image where the facial component is automatically generated from the input face image . however , the cnn based method can not handle image details well . we further develop a novel exemplar-based detail enhancement algorithm via facial component matching . extensive experiments show that the proposed method outperforms the state-of-the-art algorithms both quantitatively and qualitatively . story_separator_special_tag we propose a two-stage method for face hallucination . first , we generate facial components of the input image using cnns . these components represent the basic facial structures . second , we synthesize fine-grained facial structures from high resolution training images . the details of these structures are transferred into facial components for enhancement .
therefore , we generate facial components to approximate ground truth global appearance in the first stage and enhance them through recovering details in the second stage . the experiments demonstrate that our method performs favorably against state-of-the-art methods story_separator_special_tag the reconstruction of dense 3d models of face geometry and appearance from a single image is highly challenging and ill-posed . to constrain the problem , many approaches rely on strong priors , such as parametric face models learned from limited 3d scan data . however , prior models restrict generalization of the true diversity in facial geometry , skin reflectance and illumination . to alleviate this problem , we present the first approach that jointly learns 1 ) a regressor for face shape , expression , reflectance and illumination on the basis of 2 ) a concurrently learned parametric face model . our multi-level face model combines the advantage of 3d morphable models for regularization with the out-of-space generalization of a learned corrective space . we train end-to-end on in-the-wild images without dense annotations by fusing a convolutional encoder with a differentiable expert-designed renderer and a self-supervised training loss , both defined at multiple detail levels . our approach compares favorably to the state-of-the-art in terms of reconstruction quality , better generalizes to real world faces , and runs at over 250 hz . story_separator_special_tag in this work we propose a novel model-based deep convolutional autoencoder that addresses the highly challenging problem of reconstructing a 3d human face from a single in-the-wild color image . to this end , we combine a convolutional encoder network with an expert-designed generative model that serves as decoder . the core innovation is our new differentiable parametric decoder that encapsulates image formation analytically based on a generative model . our decoder takes as input a code vector with exactly defined semantic meaning that encodes detailed face pose , shape , expression , skin reflectance and scene illumination . due to this new way of combining cnn-based with model-based face reconstruction , the cnn-based encoder learns to extract semantically meaningful parameters from a single monocular input image . for the first time , a cnn encoder and an expert-designed generative model can be trained end-to-end in an unsupervised manner , which renders training on very large ( unlabeled ) real world data feasible . the obtained reconstructions compare favorably to current state-of-the-art approaches in terms of quality and richness of representation . story_separator_special_tag face hallucination , which is the task of generating a high-resolution face image from a low-resolution input image , is a well-studied problem that is useful in widespread application areas . face hallucination is particularly challenging when the input face resolution is very low ( e.g. , 10 x 12 pixels ) and/or the image is captured in an uncontrolled setting with large pose and illumination variations . in this paper , we revisit the algorithm introduced in [ 1 ] and present a deep interpretation of this framework that achieves state-of-the-art under such challenging scenarios . in our deep network architecture the global and local constraints that define a face can be efficiently modeled and learned end-to-end using training data . 
conceptually , our network design can be partitioned into two sub-networks : the first one implements the holistic face reconstruction according to global constraints , and the second one enhances face-specific details and enforces local patch statistics . we optimize the deep network using a new loss function for super-resolution that combines reconstruction error with a learned face quality measure in an adversarial setting , producing improved visual results . we conduct extensive experiments in both controlled and uncontrolled setups story_separator_special_tag face verification and recognition problems have seen rapid progress in recent years , however recognition from small size images remains a challenging task that is inherently intertwined with the task of face super-resolution . tackling this problem using multiple frames is an attractive idea , yet requires solving the alignment problem that is also challenging for low-resolution faces . here we present a holistic system for multi-frame recognition , alignment , and superresolution of faces . our neural network architecture restores the central frame of each input sequence additionally taking into account a number of adjacent frames and making use of sub-pixel movements . we present our results using the popular dataset for video face recognition ( youtube faces ) . we show a notable improvement of identification score compared to several baselines including the one based on single-image super-resolution . story_separator_special_tag we propose to restore old photos that suffer from severe degradation through a deep learning approach . unlike conventional restoration tasks that can be solved through supervised learning , the degradation in real photos is complex and the domain gap between synthetic images and real old photos makes the network fail to generalize . therefore , we propose a novel triplet domain translation network by leveraging real photos along with massive synthetic image pairs . specifically , we train two variational autoencoders ( vaes ) to respectively transform old photos and clean photos into two latent spaces , and the translation between these two latent spaces is learned with synthetic paired data . this translation generalizes well to real photos because the domain gap is closed in the compact latent space . besides , to address multiple degradations mixed in one old photo , we design a global branch with a partial nonlocal block targeting the structured defects , such as scratches and dust spots , and a local branch targeting the unstructured defects , such as noises and blurriness . two branches are fused in the latent space , leading to improved capability to restore old photos story_separator_special_tag face hallucination that aims to transform a low-resolution ( lr ) face image to a high-resolution ( hr ) one is an active domain-specific image super-resolution problem . the performance of existing methods is usually not satisfactory , especially when the upscaling factor is large , such as 8× . in this paper , we propose an effective two-step face hallucination method based on a deep neural network with a multi-scale channel and spatial attention mechanism . specifically , we develop a parsingnet to extract the prior knowledge of an input lr face , which is then fed into a carefully designed fishsrnet to recover the target hr face . experimental results demonstrate that our method outperforms the state-of-the-arts in terms of quantitative metrics and visual quality .
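the two-step design above , a parsing network that predicts a facial prior followed by an sr network conditioned on it , can be sketched as a thin wrapper ; the module internals are placeholders , not the actual parsingnet or fishsrnet :

```python
# hedged sketch of prior-guided two-step face super-resolution.
import torch
import torch.nn as nn

class PriorGuidedFSR(nn.Module):
    def __init__(self, parsing_net: nn.Module, sr_net: nn.Module):
        super().__init__()
        self.parsing_net = parsing_net   # lr face -> parsing maps (prior)
        self.sr_net = sr_net             # [lr face, prior] -> hr face

    def forward(self, lr):
        prior = self.parsing_net(lr)
        # condition the sr network on the predicted prior by concatenation
        return self.sr_net(torch.cat([lr, prior], dim=1))
```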
story_separator_special_tag given a really low resolution input image of a face ( say 16×16 or 8×8 pixels ) , the goal of this paper is to reconstruct a high-resolution version thereof . this , by itself , is an ill-posed problem , as the high-frequency information is missing in the low-resolution input and needs to be hallucinated , based on prior knowledge about the image content . rather than relying on a generic face prior , in this paper we explore the use of a set of exemplars , i.e . other high-resolution images of the same person . these guide the neural network as we condition the output on them . multiple exemplars work better than a single one . to combine the information from multiple exemplars effectively , we introduce a pixel-wise weight generation module . besides standard face super-resolution , our method allows performing subtle face editing simply by replacing the exemplars with another set with different facial features . a user study is conducted and shows the super-resolved images can hardly be distinguished from real images on the celeba dataset story_separator_special_tag depth map estimation and 3-d reconstruction from a single or a few face images is an important research field in computer vision . many approaches have been proposed and developed over the last decade . however , issues like robustness are still to be resolved through additional research . with the advent of gpu computational methods , convolutional neural networks are being applied to many computer vision problems . later , conditional generative adversarial networks ( cgan ) have attracted attention for their easy adaptation to many picture-to-picture problems . cgans have been applied for a wide variety of tasks , such as background masking , segmentation , medical image processing , and superresolution . in this work , we developed a gan-based method for depth map estimation from any given single face image . many variants of gans have been tested for the depth estimation task for this work . we conclude that the conditional wasserstein gan structure offers the most robust approach . we have also compared the method with two other state-of-the-art methods based on deep learning and traditional approaches and experimentally shown that the proposed method offers great opportunities for estimation of face depth maps from story_separator_special_tag this paper comprehensively surveys the development of face hallucination ( fh ) , including both face super-resolution and face sketch-photo synthesis techniques . indeed , these two techniques share the same objective of inferring a target face image ( e.g . high-resolution face image , face sketch and face photo ) from a corresponding source input ( e.g . low-resolution face image , face photo and face sketch ) . considering the critical role of image interpretation in modern intelligent systems for authentication , surveillance , law enforcement , security control , and entertainment , fh has attracted growing attention in recent years . existing fh methods can be grouped into four categories : bayesian inference approaches , subspace learning approaches , a combination of bayesian inference and subspace learning approaches , and sparse representation-based approaches . in spite of achieving a certain level of development , fh is limited in its success by complex application conditions such as variant illuminations , poses , or views .
this paper provides a holistic understanding and deep insight into fh , and presents a comparative analysis of representative methods and promising future directions . story_separator_special_tag recently , 3d face reconstruction from a single image has achieved great success with the help of deep learning and shape prior knowledge , but such methods often fail to produce accurate geometry details . on the other hand , photometric stereo methods can recover reliable geometry details , but require dense inputs and need to solve a complex optimization problem . in this paper , we present a lightweight strategy that only requires sparse inputs or even a single image to recover high-fidelity face shapes with images captured under near-field lights . to this end , we construct a dataset containing 84 different subjects with 29 expressions under 3 different lights . data augmentation is applied to enrich the data in terms of diversity in identity , lighting , expression , etc . with this constructed dataset , we propose a novel neural network specially designed for photometric stereo based 3d face reconstruction . extensive experiments and comparisons demonstrate that our method can generate high-quality reconstruction results with one to three facial images captured under near-field lights . our full framework is available at https://github.com/juyong/facepsnet . story_separator_special_tag the super-resolution of a very low-resolution face image is a challenging task in single image super-resolution . most deep learning methods learn a non-linear mapping of input-to-target space by one-step upsampling . it is difficult for these methods to reconstruct a high-resolution face image from a single very low-resolution face image . in this paper , we propose an asymptotic residual back-projection network ( rbpnet ) to gradually learn the residual between the reconstructed face image and the ground truth by multi-step residual learning . firstly , the reconstructed high-resolution feature map is projected to the original low-resolution feature space to generate a low-resolution feature map ( the projected low-resolution feature map ) . secondly , the projected low-resolution feature map is subtracted from the original feature map to generate a low-resolution residual feature map . and finally , the low-resolution residual feature map is mapped to high-resolution feature space . the network will get a more accurate high-resolution image by iterative residual learning . meanwhile , we explicitly reconstruct the edge map of the face image and embed it into the reconstruction of the high-resolution face image to reduce distortion of super-resolution results . extensive experiments demonstrate the effectiveness and advantages of our proposed rbpnet qualitatively story_separator_special_tag in video surveillance , the faces of interest are often of small size . image resolution is an important factor affecting face recognition by humans and computers . in this paper , we propose a new face hallucination method using eigentransformation . different from most of the proposed methods based on probabilistic models , this method views hallucination as a transformation between different image styles . we use principal component analysis ( pca ) to fit the input face image as a linear combination of the low-resolution face images in the training set . the high-resolution image is rendered by replacing the low-resolution training images with high-resolution ones , while retaining the same combination coefficients .
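the eigentransformation just described has a compact linear-algebra core : fit the lr input as a combination of the centered lr training faces and reapply the same coefficients to the hr training faces . a minimal sketch over vectorized images ( the ridge term is an assumption for numerical stability ; the original performs the fit via pca ) :

```python
# hedged sketch of eigentransformation-style face hallucination.
import numpy as np

def eigentransform(lr_input, lr_train, hr_train, reg=1e-6):
    """lr_input: (d,), lr_train: (n, d), hr_train: (n, D)."""
    mu_l, mu_h = lr_train.mean(0), hr_train.mean(0)
    L = lr_train - mu_l                        # centered lr training faces
    # coefficients c minimizing ||lr_input - mu_l - L^T c||^2
    c = np.linalg.solve(L @ L.T + reg * np.eye(len(L)), L @ (lr_input - mu_l))
    # render hr: same combination coefficients on the hr training faces
    return mu_h + c @ (hr_train - mu_h)
```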
experiments show that the hallucinated face images are not only very helpful for recognition by humans , but also make the automatic recognition procedure easier , since they emphasize the face difference by adding more high-frequency details . story_separator_special_tag recently , convolutional neural networks ( cnns ) have been widely employed to promote the face hallucination due to the ability to predict high-frequency details from a large number of samples . however , most of them fail to take into account the overall facial profile and fine texture details simultaneously , resulting in reduced naturalness and fidelity of the reconstructed face , and further impairing the performance of downstream tasks ( e.g. , face detection , facial recognition ) . to tackle this issue , we propose a novel external-internal split attention group ( esag ) , which encompasses two paths responsible for facial structure information and facial texture details , respectively . by fusing the features from these two paths , the consistency of facial structure and the fidelity of facial details are strengthened at the same time . then , we propose a split-attention in split-attention network ( sisn ) to reconstruct photorealistic high-resolution facial images by cascading several esags . experimental results on face hallucination and face recognition unveil that the proposed method not only significantly improves the clarity of hallucinated faces , but also encourages the subsequent face recognition performance substantially . codes have been story_separator_special_tag super-resolution microscopy overcomes the diffraction limit of conventional light microscopy in spatial resolution . by providing novel spatial or spatiotemporal information on biological processes at nanometer resolution with molecular specificity , it plays an increasingly important role in life sciences . however , its technical limitations require trade-offs to balance its spatial resolution , temporal resolution , and light exposure of samples . recently , deep learning has achieved breakthrough performance in many image processing and computer vision tasks . it has also shown great promise in pushing the performance envelope of super-resolution microscopy . in this brief review , we survey recent advances in using deep learning to enhance performance of superresolution microscopy . we focus primarily on how deep learning advances reconstruction of super-resolution images . related key technical challenges are discussed . despite the challenges , deep learning is set to play an indispensable and transformative role in the development of super-resolution microscopy . we conclude with an outlook on how deep learning could shape the future of this new generation of light microscopy technology . story_separator_special_tag the structural similarity image quality paradigm is based on the assumption that the human visual system is highly adapted for extracting structural information from the scene , and therefore a measure of structural similarity can provide a good approximation to perceived image quality . this paper proposes a multiscale structural similarity method , which supplies more flexibility than previous single-scale methods in incorporating the variations of viewing conditions . we develop an image synthesis method to calibrate the parameters that define the relative importance of different scales . experimental comparisons demonstrate the effectiveness of the proposed method . 
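a rough sketch of the multiscale structural similarity computation described above : contrast/structure terms are accumulated across dyadic scales , the luminance term enters only at the coarsest scale , and per-scale exponents weight their relative importance . the exponents below are the commonly published ones ; the gaussian window width and the use of scipy 's zoom for downsampling are assumptions :

```python
# hedged sketch of ms-ssim for single-channel float images in [0, 1].
import numpy as np
from scipy.ndimage import gaussian_filter, zoom

def _ssim_terms(x, y, sigma=1.5, c1=0.01**2, c2=0.03**2):
    mx, my = gaussian_filter(x, sigma), gaussian_filter(y, sigma)
    vx = gaussian_filter(x * x, sigma) - mx * mx
    vy = gaussian_filter(y * y, sigma) - my * my
    cxy = gaussian_filter(x * y, sigma) - mx * my
    lum = (2 * mx * my + c1) / (mx * mx + my * my + c1)   # luminance term
    cs = (2 * cxy + c2) / (vx + vy + c2)                  # contrast/structure
    return lum.mean(), cs.mean()

def ms_ssim(x, y, weights=(0.0448, 0.2856, 0.3001, 0.2363, 0.1333)):
    score = 1.0
    for i, w in enumerate(weights):
        lum, cs = _ssim_terms(x, y)
        if i == len(weights) - 1:
            score *= (lum ** w) * (cs ** w)   # luminance only at coarsest scale
        else:
            score *= cs ** w                  # assumes cs > 0, typical in practice
            x, y = zoom(x, 0.5), zoom(y, 0.5) # move to the next coarser scale
    return score
```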
story_separator_special_tag computer vision systems have demonstrated considerable improvement in recognizing and verifying faces in digital images . still , recognizing faces appearing in unconstrained , natural conditions remains a challenging task . in this paper , we present a face-image pair-matching approach primarily developed and tested on the labeled faces in the wild ( lfw ) benchmark that reflects the challenges of face recognition from unconstrained images . the approach we propose makes the following contributions . 1 ) we present a family of novel face-image descriptors designed to capture statistics of local patch similarities . 2 ) we demonstrate how unlabeled background samples may be used to better evaluate image similarities . to this end , we describe a number of novel , effective similarity measures . 3 ) we show how labeled background samples , when available , may further improve classification performance , by employing a unique pair-matching pipeline . we present state-of-the-art results on the lfw pair-matching benchmarks . in addition , we show our system to be well suited for the multilabel face classification ( recognition ) problem , on both the lfw images and on images from the laboratory controlled multi-pie database . story_separator_special_tag deep models have achieved impressive performance for face hallucination tasks . however , we observe that directly feeding the hallucinated facial images into recognition models can even degrade the recognition performance despite the much better visualization quality . in this paper , we address this problem by jointly learning a deep model for two tasks , i.e . face hallucination and recognition . in particular , we design an end-to-end deep convolutional network with a hallucination sub-network cascaded with a recognition sub-network . the recognition sub-network is responsible for producing discriminative feature representations using the hallucinated images generated by the hallucination sub-network as inputs . during training , we feed lr facial images into the network and optimize the parameters by minimizing two loss items , i.e . 1 ) a face hallucination loss measured by the pixel-wise difference between the ground truth hr images and network-generated images ; and 2 ) a verification loss which is measured by the classification error and intra-class distance . we extensively evaluate our method on the lfw and ytf datasets . the experimental results show that our method can achieve recognition accuracy 97.95 % on the 4x down-sampled lfw testing set , outperforming the accuracy 96.35 % story_separator_special_tag we present a novel boundary-aware face alignment algorithm by utilising boundary lines as the geometric structure of a human face to help facial landmark localisation . unlike the conventional heatmap based method and regression based method , our approach derives face landmarks from boundary lines which remove the ambiguities in the landmark definition . three questions are explored and answered by this work : 1. why use boundary ? 2. how to use boundary ? 3. what is the relationship between boundary estimation and landmark localisation ? our boundary-aware face alignment algorithm achieves 3.49 % mean error on the 300-w fullset , which outperforms state-of-the-art methods by a large margin . our method can also easily integrate information from other datasets .
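the joint objective of the cascaded hallucination-plus-recognition network above combines a pixel-wise hallucination loss with a verification loss built from classification error and intra-class distance . a hedged sketch in which the intra-class term pulls features toward per-class centers ; the balancing weights and the center-based formulation are assumptions :

```python
# hedged sketch of a joint hallucination + verification training loss.
import torch
import torch.nn.functional as F

def joint_loss(sr, hr, logits, feats, labels, centers, lam_ver=0.1, lam_c=0.01):
    """sr/hr: images, logits: (n, classes), feats: (n, d), centers: (classes, d)."""
    hallucination = F.mse_loss(sr, hr)                  # pixel-wise difference
    classification = F.cross_entropy(logits, labels)   # identity prediction
    # squared distance of each feature to its class center (intra-class term)
    intra_class = (feats - centers[labels]).pow(2).sum(1).mean()
    return hallucination + lam_ver * classification + lam_c * intra_class
```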
by utilising boundary information of the 300-w dataset , our method achieves 3.92 % mean error with 0.39 % failure rate on the cofw dataset , and 1.25 % mean error on the aflw-full dataset . moreover , we propose a new dataset , wflw , to unify training and testing across different factors , including poses , expressions , illuminations , makeups , occlusions , and blurriness . dataset and model are publicly available at https://wywu.github.io/projects/lab/lab.html story_separator_special_tag facial prior knowledge based methods have recently achieved great success on the task of face image super-resolution ( sr ) . combinations of different types of facial knowledge can be leveraged to better super-resolve face images , e.g. , facial attribute information with texture and shape information . in this paper , we present a novel deep end-to-end network for face super-resolution , named residual attribute attention network ( raan ) , which realizes the efficient feature fusion of various types of facial information . specifically , we construct a multi-block cascaded structure network with dense connections . each block has three branches : texture prediction network ( tpn ) , shape generation network ( sgn ) and attribute analysis network ( aan ) . we divide the task of face image reconstruction into three steps : extracting the pixel level representation information from the input very low resolution ( lr ) image via tpn and sgn , extracting the semantic level representation information by aan from the input , and finally combining the pixel level and semantic level information to recover the high resolution ( hr ) image . experiments on a benchmark database illustrate that raan significantly outperforms story_separator_special_tag existing face super-resolution ( sr ) methods mainly assume the input image to be noise-free . their performance degrades drastically when applied to real-world scenarios where the input image is always contaminated by noise . in this paper , we propose a facial attribute capsules network ( facn ) to deal with the problem of high-scale super-resolution of noisy face images . a capsule is a group of neurons whose activity vector models different properties of the same entity . inspired by the concept of capsule , we propose an integrated representation model of facial information , which we name the facial attribute capsule ( fac ) . in the sr process , we first generate a group of facs from the input lr face , and then reconstruct the hr face from this group of facs . aiming to effectively improve the robustness of fac to noise , we generate facs in semantic , probabilistic and facial attribute manners by means of an integrated learning strategy . each fac can be divided into two sub-capsules : semantic capsule ( sc ) and probabilistic capsule ( pc ) . they describe an explicit facial attribute in detail from two aspects of semantic representation and story_separator_special_tag video super-resolution ( vsr ) methods have recently achieved a remarkable success due to the development of deep convolutional neural networks ( cnn ) . current state-of-the-art cnn methods usually treat the vsr problem as a large number of separate multi-frame super-resolution tasks , in which a batch of low resolution ( lr ) frames is utilized to generate a single high resolution ( hr ) frame , and running a sliding window to select lr frames over the entire video would obtain a series of hr frames .
however , due to the complex temporal dependency between frames , the performance of the reconstructed hr frames becomes worse as the number of lr input frames increases . the reason is that these methods lack the ability to model complex temporal dependencies and can hardly give an accurate motion estimation and compensation for the vsr process , which makes the performance degrade drastically when the motion between frames is complex . in this paper , we propose a motion-adaptive feedback cell ( mafc ) , a simple but effective block , which can efficiently capture the motion compensation and feed it back to the network in an adaptive way story_separator_special_tag most of the conventional face hallucination methods assume the input image is sufficiently large and aligned , and all require the input image to be noise-free . their performance degrades drastically if the input image is tiny , unaligned , and contaminated by noise . in this paper , we introduce a novel transformative discriminative autoencoder to 8x super-resolve unaligned noisy and tiny ( 16x16 ) low-resolution face images . in contrast to encoder-decoder based autoencoders , our method uses decoder-encoder-decoder networks . we first employ a transformative discriminative decoder network to upsample and denoise simultaneously . then we use a transformative encoder network to project the intermediate hr faces to aligned and noise-free lr faces . finally , we use the second decoder to generate hallucinated hr images . our extensive evaluations on a very large face dataset show that our method achieves superior hallucination results and outperforms the state-of-the-art by a large margin of 1.82db psnr . story_separator_special_tag although convolutional neural networks ( cnn ) have recently demonstrated high-quality reconstruction for single-image super-resolution ( sr ) , recovering natural and realistic texture remains a challenging problem . in this paper , we show that it is possible to recover textures faithful to semantic classes . in particular , we only need to modulate features of a few intermediate layers in a single network conditioned on semantic segmentation probability maps . this is made possible through a novel spatial feature transform ( sft ) layer that generates affine transformation parameters for spatial-wise feature modulation . sft layers can be trained end-to-end together with the sr network using the same loss function . during testing , it accepts an input image of arbitrary size and generates a high-resolution image with just a single forward pass conditioned on the categorical priors . our final results show that an sr network equipped with sft can generate more realistic and visually pleasing textures in comparison to state-of-the-art srgan [ 27 ] and enhancenet [ 38 ] . story_separator_special_tag we present an algorithm to directly restore a clear high-resolution image from a blurry low-resolution input . this problem is highly ill-posed and the basic assumptions for existing super-resolution methods ( requiring clear input ) and deblurring methods ( requiring high-resolution input ) no longer hold . we focus on face and text images and adopt a generative adversarial network ( gan ) to learn a category-specific prior to solve this problem . however , the basic gan formulation does not generate realistic high-resolution images . in this work , we introduce novel training losses that help recover fine details .
we also present a multi-class gan that can process multi-class image restoration tasks , i.e. , face and text images , using a single generator network . extensive experiments demonstrate that our method performs favorably against the state-of-the-art methods on both synthetic and real-world images at a lower computational cost . story_separator_special_tag super-resolution reconstruction technology is an important research topic in many fields such as image processing and computer vision . this technology can be used widely for security monitoring , old image reconstruction , image compression and transmission , and other fields . in this paper , super-resolution image reconstruction is performed on a low-resolution image at four times magnification . we propose to use dense convolutional networks as the generator instead of residual networks , and set perceptual loss as the optimization goal . we use the vgg network feature map as the loss function instead of mean squared error , which combines the perceptual loss with the adversarial loss and is beneficial for compensating the shortcomings of previous methods that lack high frequency detail . experimental results show that our method can produce clearer face images than the traditional methods . these reconstructed images have higher resolution and peak signal to noise ratio ( psnr ) and structural similarity index ( ssim ) than the images generated by the deep residual networks . story_separator_special_tag image deblurring and super-resolution are very important in image processing tasks such as face verification . however , images captured outdoors are often blurry and of low resolution . to solve the problem , we propose a deep gated fusion attention network ( dgfan ) to generate a high resolution image without blurring artifacts . we extract features from two task-independent structures for deblurring and super-resolution to avoid the error propagation in the cascade structure of deblurring and super-resolution . we also add an attention module in our network by using channel-wise and spatial-wise features to obtain better features , and propose an edge loss function to make the model focus on facial features like eyes and nose . dgfan performs favorably against the state-of-the-art methods in terms of psnr and ssim . also , using the clear images generated by dgfan can improve face verification accuracy . story_separator_special_tag the goal of face hallucination is to generate high-resolution images with fidelity from low-resolution ones . in contrast to existing methods based on patch similarity or holistic constraints in the image space , we propose to exploit local image structures for face hallucination . each face image is represented in terms of facial components , contours and smooth regions . the image structure is maintained via matching gradients in the reconstructed high-resolution output . for facial components , we align input images to generate accurate exemplars and transfer the high-frequency details for preserving structural consistency . for contours , we learn statistical priors to generate salient structures in the high-resolution images . a patch matching method is utilized on the smooth regions where the image gradients are preserved . experimental results demonstrate that the proposed algorithm generates hallucinated face images with favorable quality and adaptability . story_separator_special_tag most of the face hallucination methods are designed for complete inputs .
they will not work well if the inputs are very tiny or contaminated by large occlusion . inspired by this fact , we propose an obscured face hallucination network ( ofhnet ) . the ofhnet consists of four parts : an inpainting network , an upsampling network , a discriminative network , and a fixed facial landmark detection network . the inpainting network restores the low-resolution ( lr ) obscured face images . the following upsampling network upsamples the output of the inpainting network . in order to make the generated high-resolution ( hr ) face images more photo-realistic , we utilize the discriminative network and the facial landmark detection network to improve the result of the upsampling network . in addition , we present a semantic structure loss , which makes the generated hr face images more pleasing . extensive experiments show that our framework can restore appealing hr face images from lr face images with a 1/4 missing area and a challenging scaling factor of 8x . story_separator_special_tag face restoration is an inherently ill-posed problem , where additional prior constraints are typically considered crucial for mitigating such pathology . however , real-world image priors are often hard to simulate with precise mathematical models , which inevitably limits the performance and generalization ability of existing prior-regularized restoration methods . in this paper , we study the problem of face restoration under a more practical `` dual blind '' setting , i.e. , without prior assumptions or hand-crafted regularization terms on the degradation profile or image contents . to this end , a novel implicit subspace prior learning ( ispl ) framework is proposed as a generic solution to dual-blind face restoration , with two key elements : 1 ) an implicit formulation to circumvent the ill-defined restoration mapping and 2 ) a subspace prior decomposition and fusion mechanism to dynamically handle inputs at varying degradation levels with consistent high-quality restoration results . experimental results demonstrate significant perception-distortion improvement of ispl against existing state-of-the-art methods for a variety of restoration subtasks , including a 3.69db psnr and 45.8 % fid gain against esrgan , the 2018 ntire sr challenge winner . overall , we prove that it is possible to story_separator_special_tag existing face restoration research typically relies on either the image degradation prior or explicit guidance labels for training , which often leads to limited generalization ability over real-world images with heterogeneous degradation and rich background contents . in this paper , we investigate a more challenging and practical `` dual-blind '' version of the problem by lifting the requirements on both types of prior , termed as `` face renovation '' ( fr ) . specifically , we formulate fr as a semantic-guided generation problem and tackle it with a collaborative suppression and replenishment ( csr ) approach . this leads to hifacegan , a multi-stage framework containing several nested csr units that progressively replenish facial details based on the hierarchical semantic guidance extracted from the front-end content-adaptive suppression modules . extensive experiments on both synthetic and real face images have verified the superior performance of our hifacegan over a wide range of challenging restoration subtasks , demonstrating its versatility , robustness and generalization ability towards real-world face processing applications .
code is available at https://github.com/lotayou/face-renovation . story_separator_special_tag face detection is one of the most studied topics in the computer vision community . much of the progress has been made by the availability of face detection benchmark datasets . we show that there is a gap between current face detection performance and real-world requirements . to facilitate future face detection research , we introduce the wider face dataset , which is 10 times larger than existing datasets . the dataset contains rich annotations , including occlusions , poses , event categories , and face bounding boxes . faces in the proposed dataset are extremely challenging due to large variations in scale , pose and occlusion , as shown in fig . 1. furthermore , we show that the wider face dataset is an effective training source for face detection . we benchmark several representative detection systems , providing an overview of state-of-the-art performance and propose a solution to deal with large scale variation . finally , we discuss common failure cases that are worth further investigation . the dataset can be downloaded at mmlab.ie.cuhk.edu.hk/projects/widerface story_separator_special_tag single image super-resolution ( sisr ) is a notoriously challenging ill-posed problem that aims to obtain a high-resolution output from one of its low-resolution versions . recently , powerful deep learning algorithms have been applied to sisr and have achieved state-of-the-art performance . in this survey , we review representative deep learning-based sisr methods and group them into two categories according to their contributions to two essential aspects of sisr : the exploration of efficient neural network architectures for sisr and the development of effective optimization objectives for deep sisr learning . for each category , a baseline is first established , and several critical limitations of the baseline are summarized . then , representative works on overcoming these limitations are presented based on their original content , as well as our critical exposition and analyses , and relevant comparisons are conducted from a variety of perspectives . finally , we conclude this review with some current challenges and future trends in sisr that leverage deep learning algorithms . story_separator_special_tag recently , some generative adversarial network ( gan ) -based super-resolution ( sr ) methods have progressed to the point where they can produce photo-realistic natural images by using a generator ( g ) and discriminator ( d ) adversarial scheme . however , vanilla gan-based sr methods can not achieve good reconstruction and perceptual fidelity on real-world facial images at the same time . because of the d loss , they are hard to converge stably , which may cause model collapse . in this paper , we present an enhanced discriminative generative adversarial network ( edgan ) for sr facial recognition to achieve better reconstruction and perceptual fidelities . first , we discover that a versatile d boosts the adversarial framework to a preferable nash equilibrium . then , we design the d via dense connections , which brings more stable adversarial loss . furthermore , a novel perceptual loss function , by reusing the intermediate features of d , is used to eliminate the gradient vanishing problem of the g . to our knowledge , this is the first framework to focus on improving the performance of the d.
quantitatively , experimental results show the advantages of edgan story_separator_special_tag pushed by big data and deep convolutional neural networks ( cnn ) , the performance of face recognition is becoming comparable to that of humans . using private large scale training datasets , several groups achieve very high performance on lfw , i.e. , 97 % to 99 % . while there are many open source implementations of cnn , no large-scale face dataset is publicly available . the current situation in the field of face recognition is that data is more important than algorithms . to solve this problem , this paper proposes a semi-automatic way to collect face images from the internet and builds a large scale dataset containing about 10,000 subjects and 500,000 images , called casia-webface . based on the database , we use an 11-layer cnn to learn discriminative representations and obtain state-of-the-art accuracy on lfw and ytf . story_separator_special_tag super-resolution ( sr ) and landmark localization of tiny faces are highly correlated tasks . on the one hand , landmark localization could obtain higher accuracy with high-resolution ( hr ) faces . on the other hand , face sr would benefit from prior knowledge of facial attributes such as landmarks . thus , we propose a joint alignment and sr network to simultaneously detect facial landmarks and super-resolve tiny faces . more specifically , a shared deep encoder is applied to extract features for both tasks by leveraging complementary information . to exploit the representative power of the hierarchical encoder , intermediate layers of a shared feature extraction module are fused to form efficient feature representations . the fused features are then fed to task-specific modules to detect landmarks and super-resolve face images in parallel . extensive experiments demonstrate that the proposed model significantly outperforms the state-of-the-art in both landmark localization and sr of faces . we show a large improvement for landmark localization of tiny faces ( i.e. , 16 × 16 ) . furthermore , the proposed framework yields comparable results for landmark localization on low-resolution ( lr ) faces ( i.e. , 64 × 64 ) story_separator_special_tag to narrow the inherent sensing gap in heterogeneous face recognition ( hfr ) , recent methods have resorted to generative models and explored the recognition via generation framework . even so , it remains a very challenging task to synthesize photo-realistic visible faces ( vis ) from near-infrared ( nir ) images , especially when paired training data are unavailable . we present an approach to avert the data misalignment problem and faithfully preserve pose , expression and identity information during cross-spectral face hallucination . at the pixel level , we introduce an unsupervised attention mechanism to warping that is jointly learned with the generator to derive pixelwise correspondence from unaligned data . at the image level , an auxiliary generator is employed to facilitate the learning of the mapping from the nir to the vis domain . at the domain level , we first apply the mutual information constraint to explicitly measure the correlation between domains and thus benefit synthesis . extensive experiments on three heterogeneous face datasets demonstrate that our approach not only outperforms current state-of-the-art hfr methods but also produces visually appealing results at a high resolution ( 256 × 256 ) .
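the joint alignment and super-resolution abstract above shares one encoder between a landmark head and an sr head . below is an illustrative pytorch sketch of that shared-encoder , two-head pattern ; the layer sizes , module names and loss weighting are placeholder assumptions of our own , not the authors ' architecture .

```python
# illustrative shared-encoder multitask sketch ( pytorch ) ; all layer sizes and
# names are placeholder assumptions , not the architecture from the paper .
import torch
import torch.nn as nn

class JointSRAlignNet(nn.Module):
    def __init__(self, n_landmarks=68, scale=8):
        super().__init__()
        # shared encoder : features reused by both task-specific heads
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(inplace=True))
        # sr head : sub-pixel ( pixel-shuffle ) upsampling to the hr image
        self.sr_head = nn.Sequential(
            nn.Conv2d(64, 3 * scale ** 2, 3, padding=1), nn.PixelShuffle(scale))
        # landmark head : one heatmap per landmark at lr resolution
        self.lmk_head = nn.Conv2d(64, n_landmarks, 3, padding=1)

    def forward(self, lr):
        feats = self.encoder(lr)
        return self.sr_head(feats), self.lmk_head(feats)

net = JointSRAlignNet()
sr, heatmaps = net(torch.randn(2, 3, 16, 16))
print(sr.shape, heatmaps.shape)  # (2, 3, 128, 128) (2, 68, 16, 16)
# training would minimize e.g. l_sr + lambda * l_heatmap over both heads jointly
```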
story_separator_special_tag state-of-the-art face super-resolution methods leverage deep convolutional neural networks to learn a mapping between low-resolution ( lr ) facial patterns and their corresponding high-resolution ( hr ) counterparts by exploring local appearance information . however , most of these methods do not account for facial structure and suffer from degradations due to large pose variations and misalignments . in this paper , we propose a method that explicitly incorporates structural information of faces into the face super-resolution process by using a multi-task convolutional neural network ( cnn ) . our cnn has two branches : one for super-resolving face images and the other branch for predicting salient regions of a face coined facial component heatmaps . these heatmaps encourage the upsampling stream to generate super-resolved faces with higher-quality details . our method not only uses low-level information ( i.e. , intensity similarity ) , but also middle-level information ( i.e. , face structure ) to further explore spatial constraints of facial components from lr input images . therefore , we are able to super-resolve very small unaligned face images ( 16 × 16 pixels ) with a large upscaling factor of 8× story_separator_special_tag given a tiny face image , existing face hallucination methods aim at super-resolving its high-resolution ( hr ) counterpart by learning a mapping from an exemplar dataset . since a low-resolution ( lr ) input patch may correspond to many hr candidate patches , this ambiguity may lead to distorted hr facial details and wrong attributes such as gender reversal . an lr input contains low-frequency facial components of its hr version while its residual face image , defined as the difference between the hr ground-truth and interpolated lr images , contains the missing high-frequency facial details . we demonstrate that supplementing residual images or feature maps with additional facial attribute information can significantly reduce the ambiguity in face super-resolution . to explore this idea , we develop an attribute-embedded upsampling network , which consists of an upsampling network and a discriminative network . the upsampling network is composed of an autoencoder with skip-connections , which incorporates facial attribute vectors into the residual features of lr inputs at the bottleneck of the autoencoder and deconvolutional layers used for upsampling . the discriminative network is designed to examine whether super-resolved faces contain the desired attributes or not and then its loss story_separator_special_tag given a tiny face image , existing face hallucination methods aim at super-resolving its high-resolution ( hr ) counterpart by learning a mapping from an exemplary dataset . since a low-resolution ( lr ) input patch may correspond to many hr candidate patches , this ambiguity may lead to distorted hr facial details and wrong attributes such as gender reversal and rejuvenation . an lr input contains low-frequency facial components of its hr version while its residual face image , defined as the difference between the hr ground-truth and interpolated lr images , contains the missing high-frequency facial details . we demonstrate that supplementing residual images or feature maps with additional facial attribute information can significantly reduce the ambiguity in face super-resolution .
to explore this idea , we develop an attribute-embedded upsampling network , which consists of an upsampling network and a discriminative network . the upsampling network is composed of an autoencoder with skip-connections , which incorporates facial attribute vectors into the residual features of lr inputs at the bottleneck of the autoencoder , and deconvolutional layers used for upsampling . the discriminative network is designed to examine whether super-resolved faces contain the desired attributes or not and story_separator_special_tag conventional face super-resolution methods , also known as face hallucination , are limited up to 2×~4× scaling factors where 4~16 additional pixels are estimated for each given pixel . besides , they become very fragile when the input low-resolution image size is so small that only little information is available in the input image . to address these shortcomings , we present a discriminative generative network that can ultra-resolve a very low resolution face image of size 16 × 16 pixels to its 8× larger version by reconstructing 64 pixels from a single pixel . we introduce a pixel-wise ℓ2 regularization term to the generative model and exploit the feedback of the discriminative network to make the upsampled face images more similar to real ones . in our framework , the discriminative network learns the essential constituent parts of the faces and the generative network blends these parts in the most accurate fashion to the input image . since only frontal and ordinary aligned images are used in training , our method can ultra-resolve story_separator_special_tag conventional face hallucination methods rely heavily on accurate alignment of low-resolution ( lr ) faces before upsampling them . misalignment often leads to deficient results and unnatural artifacts for large upscaling factors . however , due to the diverse range of poses and different facial expressions , aligning an lr input image , in particular when it is tiny , is severely difficult . to overcome this challenge , here we present an end-to-end transformative discriminative neural network ( tdn ) devised for super-resolving unaligned and very small face images with an extreme upscaling factor of 8 . our method employs an upsampling network where we embed spatial transformation layers to allow local receptive fields to line up with similar spatial supports . furthermore , we incorporate a class-specific loss in our objective through a successive discriminative network to improve the alignment and upsampling performance with semantic information . extensive experiments on large face datasets show that the proposed method significantly outperforms the state-of-the-art . story_separator_special_tag conventional face hallucination methods heavily rely on accurate alignment of low-resolution ( lr ) faces before upsampling them . misalignment often leads to deficient results and unnatural artifacts for large upscaling factors . however , due to the diverse range of poses and different facial expressions , aligning an lr input image , in particular when it is tiny , is severely difficult . in addition , when the resolutions of lr input images vary , previous deep neural network based face hallucination methods require the interocular distances of input face images to be similar to the ones in the training datasets .
downsampling lr input faces to a required resolution will lose high-frequency information of the original input images . this may lead to suboptimal super-resolution performance for the state-of-the-art face hallucination networks . to overcome these challenges , we present an end-to-end multiscale transformative discriminative neural network devised for super-resolving unaligned and very small face images of different resolutions ranging from 16 × 16 to 32 × 32 pixels in a unified framework . our proposed network embeds spatial transformation layers to allow local receptive fields to line up with similar story_separator_special_tag in popular tv programs ( such as csi ) , a very low-resolution face image of a person , who is not even looking at the camera in many cases , is digitally super-resolved to a degree that suddenly the person 's identity is made visible and recognizable . of course , we suspect that this is merely a cinematographic special effect and such a magical transformation of a single image is not technically possible . or , is it ? in this paper , we push the boundaries of super-resolving ( hallucinating to be more accurate ) a tiny , non-frontal face image to understand how much of this is possible by leveraging the availability of large datasets and deep networks . to this end , we introduce a novel transformative adversarial neural network ( tann ) to jointly frontalize very-low resolution ( i.e. , 16 × 16 pixels ) out-of-plane rotated face images ( including profile views ) and aggressively super-resolve them ( 8× ) , regardless of their original poses and without using any 3d information . tann is composed of two components : a transformative upsampling network which embodies encoding , spatial transformation and deconvolutional layers story_separator_special_tag facial image super-resolution ( sr ) is an important aspect of facial analysis , and it can contribute significantly to tasks such as face alignment , face recognition , and image-based 3d reconstruction . recent convolutional neural network ( cnn ) based models have exhibited significant advancements by learning mapping relations using pairs of low-resolution ( lr ) and high-resolution ( hr ) facial images . however , because these methods are conventionally aimed at increasing the psnr and ssim metrics , the reconstructed hr images might be blurry and have an overall unsatisfactory perceptual quality even when state-of-the-art quantitative results are achieved . in this study , we address this limitation by proposing an adversarial framework intended to reconstruct perceptually high-quality hr facial images while simultaneously removing blur . to this end , a simple five-layer cnn is employed to extract feature maps from lr facial images , and this feature information is provided to two-branch encoder-decoder networks that generate hr facial images with and without blur . in addition , local and global discriminators are combined to focus on the reconstruction of hr facial structures . both qualitative and quantitative results demonstrate the effectiveness of the proposed method story_separator_special_tag in this paper , we present a new benchmark ( menpo benchmark ) for facial landmark localisation and summarise the results of the recent competition , the so-called menpo challenge , run in conjunction with cvpr 2017 .
the menpo benchmark , contrary to the previous benchmarks such as 300-w and 300-vw , contains facial images in both ( nearly ) frontal and profile pose ( annotated with a different markup of facial landmarks ) . furthermore , we considerably increase the number of annotated images so that deep learning algorithms can be robustly applied to the problem . the results of the menpo challenge demonstrate that recent deep learning architectures , when trained with abundant data , lead to excellent results . finally , we discuss directions for future benchmarks in this topic . story_separator_special_tag face hallucination is a generative task to super-resolve low-resolution facial images , while human perception of faces heavily relies on identity information . however , previous face hallucination approaches largely ignore facial identity recovery . this paper proposes a super-identity convolutional neural network ( sicnn ) to recover identity information for generating faces close to the real identity . specifically , we define a super-identity loss to measure the identity difference between a hallucinated face and its corresponding high-resolution face within the hypersphere identity metric space . however , directly using this loss will lead to a dynamic domain divergence problem , which is caused by the large margin between the high-resolution domain and the hallucination domain . to overcome this challenge , we present a domain-integrated training approach by constructing a robust identity metric for faces from these two domains . extensive experimental evaluations demonstrate that the proposed sicnn achieves superior visual quality over the state-of-the-art methods on a challenging task to super-resolve 12 × 14 faces with an 8× upscaling factor . in addition , sicnn significantly improves the recognizability of ultra-low-resolution faces . story_separator_special_tag for many face-related multimedia applications , low-resolution face images may greatly degrade the face recognition performance and necessitate face super-resolution ( sr ) . among the current sr methods , mse-oriented sr methods often produce over-smoothed outputs and could miss some texture details while gan-oriented sr methods may generate artifacts which are harmful to face recognition . to resolve the above issues , this paper presents a supervised pixel-wise generative adversarial network ( spgan ) that can resolve a very low-resolution face image of 16 × 16 or smaller pixel size to its larger version at multiple scaling factors ( 2× , 4× , 8× and even 16× ) in a unified story_separator_special_tag obtaining a high-quality frontal face image from a low-resolution ( lr ) non-frontal face image is primarily important for many facial analysis applications . however , mainstream methods either focus on super-resolving near-frontal lr faces or frontalizing non-frontal high-resolution ( hr ) faces . it is desirable to perform both tasks seamlessly for daily-life unconstrained face images .
in this paper , we present a novel vivid face hallucination generative adversarial network ( vividgan ) for simultaneously super-resolving and frontalizing tiny non-frontal face images . vividgan consists of coarse-level and fine-level face hallucination networks ( fhnet ) and two discriminators , i.e . , coarse-d and fine-d . the coarse-level fhnet generates a frontal coarse hr face and then the fine-level fhnet makes use of the facial component appearance prior , i.e . , fine-grained facial components , to attain a frontal hr face image with authentic details . in the fine-level fhnet , we also design a facial component-aware module that adopts the facial geometry guidance as clues to accurately align and merge the frontal coarse hr face and prior information . meanwhile , two-level discriminators are designed to capture both the global outline of a face image as well as story_separator_special_tag existing face hallucination methods based on convolutional neural networks ( cnn ) have achieved impressive performance on low-resolution ( lr ) faces in a normal illumination condition . however , their performance degrades dramatically when lr faces are captured in low or non-uniform illumination conditions . this paper proposes a copy and paste generative adversarial network ( cpgan ) to recover authentic high-resolution ( hr ) face images while compensating for low and non-uniform illumination . to this end , we develop two key components in our cpgan : internal and external copy and paste nets ( cpnets ) . specifically , our internal cpnet exploits facial information residing in the input image to enhance facial details ; while our external cpnet leverages an external hr face for illumination compensation . a new illumination compensation loss is thus developed to capture illumination from the external guided face image effectively . furthermore , our method offsets illumination and upsamples facial details alternately in a coarse-to-fine fashion , thus alleviating the correspondence ambiguity between lr inputs and external hr inputs . extensive experiments demonstrate that our method manifests authentic hr face images in a uniform illumination condition and outperforms state-of-the-art methods qualitatively story_separator_special_tag the majority of face super-resolution ( fsr ) approaches apply specific facial priors as guidance in super-resolving the given low-resolution ( lr ) images into high-resolution ( hr ) ones . to improve the fsr performance , various kinds of facial representations were explored in the past decades . nevertheless , there remains a challenge in estimating high-quality facial representations for lr images . to address this problem , we propose a novel facial representation : enhanced facial boundaries . by semantically connecting the facial landmark points , enhanced facial boundaries retain rich semantic information and are robust to different spatial resolution scales . based on the enhanced facial boundaries , we design a novel multi-stage fsr ( msfsr ) approach , which applies the multi-stage strategy to recover high-quality face images progressively . the enhanced facial boundaries and the coarse-to-fine supervision facilitate the facial boundaries estimation process in producing high-quality facial representations . the one-time projection of the fsr task is decomposed into multiple simpler sub-processes . in these ways , the msfsr estimates a more robust facial representation and achieves better performance .
experimental results indicate the superiority of our approach to the state-of-the-art approaches in both qualitative and story_separator_special_tag in this paper , we propose a novel patch-based face hallucination method that consists of two patch-based sparse autoencoder ( sae ) networks and a deep fully connected network ( namely traversal network ) . the sae networks are used to capture the intrinsic features of low-resolution ( lr ) images and high-resolution ( hr ) images in the hidden layers , while the traversal network is used to map features from the lr hidden layer to the hr hidden layer . in the training stage , these three networks are jointly optimized . compared with previous network-based methods that learn an end-to-end mapping from lr images to hr images , our method learns the mapping between hidden layers , which can better alleviate the over-fitting problem . experimental results demonstrate that our method is efficient and robust for hallucinating face images from both lab environments and the wild . the proposal achieves state-of-the-art performance when conducting face hallucination on the cas-peal-r1 , cmu-pie and casia databases . story_separator_special_tag face super-resolution ( face sr ) is a sub-domain of sr that reconstructs high-resolution face images from low-resolution ones . the prior knowledge of faces is widely used for recovering more realistic facial details , which will increase the complexity of the network and introduce additional knowledge extraction processes in both the training and evaluation stages . to address the above issues , we propose to combine face semantic prior extraction and face sr with the attention adaptation model and design a semantic attention adaptation network ( saan ) for face sr . specifically , we train the face semantic parsing network and face sr network jointly , by adopting the semantic attention adaptation ( saa ) model to transfer the ability of extracting face prior knowledge to the sr network . then our sr network can work independently in the testing stage without using the prior knowledge extraction network . to generate realistic face images , we also utilize gan loss to enrich the texture with more details ( i.e . saan-g ) . extensive experiments on the benchmark dataset illustrate that our saan and saan-g improve the state-of-the-art both on quality and efficiency . story_separator_special_tag face super-resolution has been studied for decades , and many approaches have been proposed to upsample low-resolution face images using information mined from paired low-resolution ( lr ) images and high-resolution ( hr ) images . however , most such works simply sharpen the blurry edges in the upsampled face images and typically no photo-realistic face is reconstructed in the final result . in this paper , we present a gan-based algorithm for face super-resolution which properly synthesizes a photo-realistic super-resolved face . to this end , we introduce semi-dual optimal transport to optimize our model such that the distribution of its generated data can match the distribution of a target domain as much as possible . this endows our model with the ability to learn the distribution mapping from unpaired lr and hr images with desired properties . we demonstrate the robustness of our algorithm by testing it on the color feret database and show that its performance is considerably superior to all state-of-the-art approaches .
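several abstracts in this span train a face sr generator with a pixel-wise fidelity loss plus an adversarial term from a discriminator . the pytorch sketch below shows that generic two-player update with tiny stand-in networks and an assumed loss weight ; each paper 's actual objective ( semi-dual optimal transport , feature discriminators , identity or attribute losses ) differs .

```python
# generic pixel + adversarial training step ( pytorch ) ; the tiny stand-in
# networks and the 1e-3 adversarial weight are our own assumptions .
import torch
import torch.nn as nn
import torch.nn.functional as F

G = nn.Sequential(nn.Upsample(scale_factor=8, mode="bilinear"),
                  nn.Conv2d(3, 3, 3, padding=1))              # stand-in generator
D = nn.Sequential(nn.Conv2d(3, 16, 4, stride=4), nn.Flatten(),
                  nn.LazyLinear(1))                           # stand-in discriminator
opt_g = torch.optim.Adam(G.parameters(), lr=1e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-4)
bce = F.binary_cross_entropy_with_logits

def train_step(lr_img, hr_img, adv_weight=1e-3):
    # discriminator : real hr vs detached hallucinated hr
    fake = G(lr_img).detach()
    real_logit, fake_logit = D(hr_img), D(fake)
    d_loss = bce(real_logit, torch.ones_like(real_logit)) + \
             bce(fake_logit, torch.zeros_like(fake_logit))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()
    # generator : pixel-wise fidelity plus fooling the discriminator
    sr = G(lr_img)
    sr_logit = D(sr)
    g_loss = F.mse_loss(sr, hr_img) + adv_weight * bce(sr_logit,
                                                       torch.ones_like(sr_logit))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
    return d_loss.item(), g_loss.item()

train_step(torch.rand(2, 3, 16, 16), torch.rand(2, 3, 128, 128))
```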
story_separator_special_tag face hallucination aims to produce a high-resolution ( hr ) face image from an input low-resolution ( lr ) face image , which is of great importance for many practical face applications , such as face recognition and face verification . since the structural features of face images are complex and sensitive , obtaining a super-resolved face image is more difficult than generic image super-resolution ( sr ) . to address these limitations , we present a novel gan ( generative adversarial network ) based feature-preserving face hallucination approach for very low resolution ( 16 × 16 pixels ) faces and large scale upsampling ( 8× ) . specifically , we design a new residual structure based face generator and adopt two different discriminators , an image discriminator and a feature discriminator , to encourage the model to acquire more realistic face features rather than artifacts . the evaluations based on both psnr and visual results reveal that the proposed model is superior to the state-of-the-art methods . story_separator_special_tag depth images are widely used in 3d head pose estimation and face reconstruction . the device-specific noise and the lack of texture constraints pose a major problem for estimating a nonrigid deformable face from a single noisy depth image . in this article , we present a deep neural network-based framework to infer a 3d face consistent with a single depth image captured by a consumer depth camera ( kinect ) . confronted with a lack of annotated depth images with facial parameters , we utilize the bidirectional cyclegan-based generator for denoising and noisy image simulation , which helps to generalize the model learned from synthetic depth images to real noisy ones . we generate the code regressors in the source ( synthetic ) and the target ( noisy ) depth image domains and present a fusion scheme in the parametric space for 3d face inference . the proposed multi-level shape consistency constraint , concerning the embedded features , depth maps , and 3d surfaces , couples the code regressor and the domain adaptation , avoiding shape distortions in the cyclegan-based generators . experiments demonstrate that the proposed method is effective in depth-based 3d head pose estimation and expressive face reconstruction compared story_separator_special_tag face hallucination methods are proposed to generate high-resolution images from low-resolution ones for better visualization . however , conventional hallucination methods are often designed for controlled settings and can not handle varying conditions of pose , resolution degree , and blur . in this paper , we present a new method of face hallucination , which can consistently improve the resolution of face images even with large appearance variations . our method is based on a novel network architecture called bi-channel convolutional neural network ( bi-channel cnn ) . it extracts robust face representations from raw input by using a deep convolutional network , then adaptively integrates two channels of information ( the raw input image and face representations ) to predict the high-resolution image . experimental results show our system outperforms the prior state-of-the-art methods . story_separator_special_tag we propose a new universal objective image quality index , which is easy to calculate and applicable to various image processing applications .
instead of using traditional error summation methods , the proposed index is designed by modeling any image distortion as a combination of three factors : loss of correlation , luminance distortion , and contrast distortion . although the new index is mathematically defined and no human visual system model is explicitly employed , our experiments on various image distortion types indicate that it performs significantly better than the widely used distortion metric mean squared error . demonstrative images and an efficient matlab implementation of the algorithm are available online at http://anchovy.ece.utexas.edu/~zwang/research/quality_index/demo.html . story_separator_special_tag we present a novel framework for hallucinating faces of unconstrained poses and with very low resolution ( face size as small as 5 px iod , i.e . inter-ocular distance ) . in contrast to existing studies that mostly ignore or assume a pre-aligned face spatial configuration ( e.g . facial landmark localization or a dense correspondence field ) , we alternately optimize two complementary tasks , namely face hallucination and dense correspondence field estimation , in a unified framework . in addition , we propose a new gated deep bi-network that contains two functionality-specialized branches to recover different levels of texture details . extensive experiments demonstrate that such formulation allows exceptional hallucination quality on in-the-wild low-res faces with significant pose and illumination variations . story_separator_special_tag a two-phase face hallucination approach is proposed in this paper to infer a high-resolution face image from a low-resolution observation based on a set of training image pairs . the proposed locality preserving hallucination ( lph ) algorithm combines locality preserving projection ( lpp ) and radial basis function ( rbf ) regression together to hallucinate the global high-resolution face . furthermore , in order to compensate the inferred global face with detailed inartificial facial features , the neighbor reconstruction based face residue hallucination is used . compared with existing approaches , the proposed lph algorithm can efficiently generate a global face more similar to the ground truth face . moreover , the patch structure and search strategy carefully designed for the neighbor reconstruction algorithm greatly reduce the computational complexity without diminishing the quality of high-resolution face detail . the details of the synthetic high-resolution face are further improved by a global linear smoother . experiments indicate that our approach can efficiently synthesize distinct high-resolution faces with various facial appearances such as facial expressions and eyeglasses .
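the locality preserving hallucination abstract above transfers reconstruction weights from lr patch neighbors to their paired hr patches . below is a minimal numpy sketch of that neighbor-reconstruction step for a single patch ; the choice of k , the constrained least-squares weighting and the flat patch representation are our own simplifications , and the paper 's lpp + rbf global face stage is not shown .

```python
# minimal neighbor-reconstruction patch hallucination sketch ( numpy ) , in the
# spirit of the lph abstract above ; k and the weighting are our own choices .
import numpy as np

def hallucinate_patch(lr_patch, lr_train, hr_train, k=5):
    """lr_patch: (d,) ; lr_train: (n, d) ; hr_train: (n, D) with matching rows."""
    # 1. find the k nearest lr training patches
    dists = np.linalg.norm(lr_train - lr_patch, axis=1)
    idx = np.argsort(dists)[:k]
    neighbors = lr_train[idx]                               # (k, d)
    # 2. reconstruction weights : min_w || lr_patch - w @ neighbors || , sum(w)=1
    G = (neighbors - lr_patch) @ (neighbors - lr_patch).T   # local gram matrix
    G += 1e-6 * np.eye(k)                                   # regularize for stability
    w = np.linalg.solve(G, np.ones(k))
    w /= w.sum()
    # 3. transfer the same weights to the paired hr patches
    return w @ hr_train[idx]

# toy usage with random 'training pairs'
rng = np.random.default_rng(0)
lr_train, hr_train = rng.normal(size=(100, 9)), rng.normal(size=(100, 81))
print(hallucinate_patch(lr_train[0], lr_train, hr_train).shape)   # (81,)
```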
story_separator_special_tag this work focuses on a quadratic dependence measure which can be used for blind source separation . after defining it , we show some links with other quadratic dependence measures used by feuerverger and rosenblatt . we develop a practical way for computing this measure , which leads us to a new solution for blind source separation in the case of nonlinear mixtures . it consists in first estimating the theoretical quadratic measure , then computing its relative gradient , and finally minimizing it through a gradient descent method . some examples illustrate our method on post-nonlinear mixtures . story_separator_special_tag this review outlines concepts of mathematical statistics , elements of probability theory , hypothesis tests and point estimation for use in the analysis of modern astronomical data . least squares , maximum likelihood , and bayesian approaches to statistical inference are treated . resampling methods , particularly the bootstrap , provide valuable procedures when distribution functions of statistics are not known . several approaches to model selection and goodness of fit are considered . applied statistics relevant to astronomical research are briefly discussed : nonparametric methods for use when little is known about the behavior of the astronomical populations or processes ; data smoothing with kernel density estimation and nonparametric regression ; unsupervised clustering and supervised classification procedures for multivariate problems ; survival analysis for astronomical datasets with nondetections ; time- and frequency-domain time series analysis for light curves ; and spatial statistics to interpret the spatial distributions of points in low dimensions . two types of resources are presented : first , about 40 recommended texts and monographs in various fields of statistics , and second , the public-domain r software system for statistical analysis . together with its 3500 ( and growing ) add-on cran packages , r implements a story_separator_special_tag we recall some fundamental notions of the theory of linear operators in hilbert spaces which are required for a rigorous formulation of the rules of quantum mechanics in the one-body case .
in particular , we introduce and discuss the main properties of bounded and unbounded operators , adjoint operators , symmetric and self-adjoint operators , self-adjointness criterion and stability of self-adjointness under small perturbations , spectrum , isometric and unitary operators , spectral story_separator_special_tag a class of tests for the two-sample problem that is based on the empirical characteristic function is investigated . they can be applied to continuous as well as to discrete data of any arbitrary fixed dimension . the tests are consistent against any fixed alternatives for adequate choices of the weight function involved in the definition of the test statistic . both the bootstrap and the permutation procedures can be employed to estimate consistently the null distribution . the goodness of these approximations and the power of some tests in this class for finite sample sizes are investigated by simulation . story_separator_special_tag kernel methods are among the most popular techniques in machine learning . from a frequentist/discriminative perspective they play a central role in regularization theory as they provide a natural choice for the hypotheses space and the regularization functional through the notion of reproducing kernel hilbert spaces . from a bayesian/generative perspective they are the key in the context of gaussian processes , where the kernel function is also known as the covariance function . traditionally , kernel methods have been used in supervised learning problems with scalar outputs and indeed there has been a considerable amount of work devoted to designing and learning kernels . more recently there has been an increasing interest in methods that deal with multiple outputs , motivated partly by frameworks like multitask learning . in this paper , we review different methods to design or learn valid kernel functions for multiple outputs , paying particular attention to the connection between probabilistic and functional methods . story_separator_special_tag the expected kernel for missing features is introduced and applied to training a support vector machine . the expected kernel is a measure of the mean similarity with respect to the distribution of the missing features . we compare the expected kernel svm with the robust second-order cone program ( socp ) svm , which accounts for missing kernel values by estimating the mean and covariance of missing similarities . further , we extend the socp svm to utilize the expected kernel by deriving the expected kernel variance . results show that the expected kernel , used with a traditional svm solver , shows performance on benchmark datasets competitive with the socp svm at a far-reduced computational burden . story_separator_special_tag test statistics are proposed for testing equality of two p -variate probability density functions . the statistics are based on the integrated square distance between two kernel-based density estimates and are two-sample versions of the statistic studied by hall ( 1984 , j. multivariate anal . 14 1-16 ) . particular emphasis is laid on the case where the two bandwidths are fixed and equal . asymptotic distributional results and power calculations are supplemented by an empirical study based on univariate examples . story_separator_special_tag we review adaptive markov chain monte carlo algorithms ( mcmc ) as a means to optimise their performance .
using simple toy examples we review their theoretical underpinnings , and in particular show why adaptive mcmc algorithms might fail when some fundamental properties are not satisfied . this leads to guidelines concerning the design of correct algorithms . we then review criteria and the useful framework of stochastic approximation , which allows one to systematically optimise generally used criteria , but also analyse the properties of adaptive mcmc algorithms . we then propose a series of novel adaptive algorithms which prove to be robust and reliable in practice . these algorithms are applied to artificial and high dimensional scenarios , but also to the classic mine disaster dataset inference problem . story_separator_special_tag this paper investigates the roles of partial correlation and conditional correlation as measures of the conditional independence of two random variables . it first establishes a sufficient condition for the coincidence of the partial correlation with the conditional correlation . the condition is satisfied not only for multivariate normal but also for elliptical , multivariate hypergeometric , multivariate negative hypergeometric , multinomial and dirichlet distributions . such families of distributions are characterized by a semigroup property as a parametric family of distributions . a necessary and sufficient condition for the coincidence of the partial covariance with the conditional covariance is also derived . however , a known family of multivariate distributions which satisfies this condition can not be found , except for the multivariate normal . the paper also shows that conditional independence has no close ties with zero partial correlation except in the case of the multivariate normal distribution ; it has rather close ties to the zero conditional correlation . it shows that the equivalence between zero conditional covariance and conditional independence for normal variables is retained by any monotone transformation of each variable . the results suggest that care must be taken when using such correlations story_separator_special_tag we consider supervised learning problems within the positive-definite kernel framework , such as kernel ridge regression , kernel logistic regression or the support vector machine . with kernels leading to infinite-dimensional feature spaces , a common practical limiting difficulty is the necessity of computing the kernel matrix , which most frequently leads to algorithms with running time at least quadratic in the number of observations n , i.e. , o ( n^2 ) . low-rank approximations of the kernel matrix are often considered as they allow the reduction of running time complexities to o ( p^2 n ) , where p is the rank of the approximation . the practicality of such methods thus depends on the required rank p . in this paper , we show that in the context of kernel ridge regression , for approximations based on a random subset of columns of the original kernel matrix , the rank p may be chosen to be linear in the degrees of freedom associated with the problem , a quantity which is classically used in the statistical analysis of such methods , and is often seen as the implicit number of parameters of non-parametric estimators story_separator_special_tag we show that kernel-based quadrature rules for computing integrals can be seen as a special case of random feature expansions for positive definite kernels , for a particular decomposition that always exists for such kernels .
story_separator_special_tag we show that kernel-based quadrature rules for computing integrals can be seen as a special case of random feature expansions for positive definite kernels , for a particular decomposition that always exists for such kernels . we provide a theoretical analysis of the number of required samples for a given approximation error , leading to both upper and lower bounds that are based solely on the eigenvalues of the associated integral operator and match up to logarithmic terms . in particular , we show that the upper bound may be obtained from independent and identically distributed samples from a specific non-uniform distribution , while the lower bound is valid for any set of points . applying our results to kernel-based quadrature , while our results are fairly general , we recover known upper and lower bounds for the special cases of sobolev spaces . moreover , our results extend to the more general problem of full function approximations ( beyond simply computing an integral ) , with results in l2- and l∞-norm that match known results for special cases . applying our results to random features , we show an improvement of the number of random features needed to story_separator_special_tag we present a class of algorithms for independent component analysis ( ica ) which use contrast functions based on canonical correlations in a reproducing kernel hilbert space . on the one hand , we show that our contrast functions are related to mutual information and have desirable mathematical properties as measures of statistical dependence . on the other hand , building on recent developments in kernel methods , we show that these criteria and their derivatives can be computed efficiently . minimizing these criteria leads to flexible and robust algorithms for ica . we illustrate with simulations involving a wide variety of source distributions , showing that our algorithms outperform many of the presently known algorithms . story_separator_special_tag we show that the herding procedure of welling ( 2009 ) takes exactly the form of a standard convex optimization algorithm -- namely a conditional gradient algorithm minimizing a quadratic moment discrepancy . this link enables us to invoke convergence results from convex optimization and to consider faster alternatives for the task of approximating integrals in a reproducing kernel hilbert space . we study the behavior of the different variants through numerical simulations . the experiments indicate that while we can improve over herding on the task of approximating integrals , the original herding algorithm tends to approach more often the maximum entropy distribution , shedding more light on the learning bias behind herding . story_separator_special_tag a new expression for average mutual information ( ami ) of two gaussian processes is obtained , extending earlier work of gelfand and yaglom . this result is expressed in terms of the covariance and cross-covariance operators of the two processes , while the previous results were stated in terms of projection operators in random variable space . relations are also obtained between the occurrence of finite or infinite ami and nonsingular or singular detection for a gaussian signal imbedded in additive gaussian noise . story_separator_special_tag let h1 ( resp. , h2 ) be a real and separable hilbert space with borel σ-field r1 ( resp. , r2 ) , and let ( h1 x h2 , r1 x r2 ) be the product measurable space generated by the measurable rectangles . this paper develops relations between probability measures on ( h1 x h2 , r1 x r2 ) , i.e . , joint measures , and the projections of such measures on ( h1 , r1 ) and ( h2 , r2 ) .
in particular , the class of all joint gaussian measures having two specified gaussian measures as projections is characterized , and conditions are obtained for two joint gaussian measures to be mutually absolutely continuous . the cross-covariance operator of a joint measure plays a major role in these results and these operators are characterized . ( * ) ∫ ||x||^2 dμ_i ( x ) < ∞ story_separator_special_tag in this paper we discuss a relation between learning theory and regularization of linear ill-posed inverse problems . it is well known that tikhonov regularization can be profitably used in the context of supervised learning , where it usually goes under the name of regularized least-squares algorithm . moreover , the gradient descent algorithm was studied recently , which is an analog of landweber regularization scheme . in this paper we show that a notion of regularization defined according to what is usually done for ill-posed inverse problems allows one to derive learning algorithms which are consistent and provide a fast convergence rate . it turns out that for priors expressed in terms of variable hilbert scales in reproducing kernel hilbert spaces our results for tikhonov regularization match those in smale and zhou [ learning theory estimates via integral operators and their approximations , submitted for publication , retrievable at , 2005 ] and improve the results for landweber iterations obtained in yao et al . [ on early stopping in gradient descent learning , constructive approximation ( 2005 ) , submitted for publication ] . the remarkable fact is that our analysis shows that the same properties are shared by story_separator_special_tag the class of schoenberg transformations , embedding euclidean distances into higher dimensional euclidean spaces , is presented , and derived from theorems on positive definite and conditionally negative definite matrices . original results on the arc lengths , angles and curvature of the transformations are proposed , and visualized on artificial data sets by classical multidimensional scaling . a simple distance-based discriminant algorithm illustrates the theory , intimately connected to the gaussian kernels of machine learning . story_separator_special_tag the purpose of this paper is to discuss the asymptotic behavior of the sequence ( f_n ( i ) ) generated by a nonlinear recurrence relation . this problem arises in connection with an equipment replacement problem , cf . s. dreyfus , a note on an industrial replacement process . story_separator_special_tag from the publisher : an introduction to the mathematical theory of multistage decision processes , this text takes a functional equation approach to the discovery of optimum policies . written by a leading developer of such policies , it presents a series of methods , uniqueness and existence theorems , and examples for solving the relevant equations . the text examines existence and uniqueness theorems , the optimal inventory equation , bottleneck problems in multistage production processes , a new formalism in the calculus of variation , strategies behind multistage games , and markovian decision processes . each chapter concludes with a problem set that eric v. denardo of yale university , in his informative new introduction , calls `` a rich lode of applications and research topics . '' 1957 edition . 37 figures .
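the functional equation approach described above can be made concrete with value iteration on a small finite markov decision process : repeated application of the bellman optimality operator converges to the unique fixed point of the equation . the sketch below is a generic textbook illustration ; the array layout and names are invented , not taken from the book .

    import numpy as np

    def value_iteration(P, R, gamma=0.9, tol=1e-8):
        # P[a, s, s2] : probability of moving from state s to s2 under
        # action a ; R[a, s] : expected immediate reward ; iterate the
        # bellman optimality operator to its fixed point
        n_actions, n_states, _ = P.shape
        v = np.zeros(n_states)
        while True:
            q = R + gamma * (P @ v)            # q[a, s]
            v_new = q.max(axis=0)
            if np.abs(v_new - v).max() < tol:
                return v_new, q.argmax(axis=0) # optimal values, greedy policy
            v = v_new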
story_separator_special_tag the problem of global estimation of the mean function θ ( · ) of a quite arbitrary gaussian process is considered . the loss function in estimating θ by a function a ( · ) is assumed to be of the form l ( θ , a ) = ∫ [ θ ( t ) - a ( t ) ]^2 μ ( dt ) , and estimators are evaluated in terms of their risk function ( expected loss ) . the usual minimax estimator of θ is shown to be inadmissible via the stein phenomenon ; in estimating the function θ we are trying to simultaneously estimate a larger number of normal means . estimators improving upon the usual minimax estimator are constructed , including an estimator which allows the incorporation of prior information about θ . the analysis is carried out by using a version of the karhunen-loeve expansion to represent the original problem as the problem of estimating a countably infinite sequence of means from independent normal distributions . story_separator_special_tag a common statistical problem is the testing of independence of two ( response ) variables conditionally on a third ( control ) variable . in the first part of this paper , we extend hoeffding 's concept of estimability of degree r to testability of degree r , and show that independence is testable of degree two , while conditional independence is not testable of any degree if the control variable is continuous . hence , in a well-defined sense , conditional independence is much harder to test than independence . in the second part of the paper , a new method is introduced for the nonparametric testing of conditional independence of continuous responses given an arbitrary , not necessarily continuous , control variable . the method allows the automatic conversion of any test of independence to a test of conditional independence . hence , robust tests and tests with power against broad ranges of alternatives can be used , which are favorable properties not shared by the most commonly used test , namely the one based on the partial correlation coefficient . the method is based on a new concept , the partial copula , which is an average story_separator_special_tag 1 theory.- 2 rkhs and stochastic processes.- 3 nonparametric curve estimation.- 4 measures and random measures.- 5 miscellaneous applications.- 6 computational aspects.- 7 a collection of examples.- to sobolev spaces.- a.1 schwartz-distributions or generalized functions.- a.1.1 spaces and their topology.- a.1.2 weak-derivative or derivative in the sense of distributions.- a.1.3 facts about fourier transforms.- a.2 sobolev spaces.- a.2.1 absolute continuity of functions of one variable.- a.2.2 sobolev space with non-negative integer exponent.- a.2.3 sobolev space with real exponent.- a.2.4 periodic sobolev space.- a.3 beppo-levi spaces . story_separator_special_tag many applications require the analysis of complex interactions between time series . these interactions can be non-linear and involve vector valued as well as complex data structures such as graphs or strings . here we provide a general framework for the statistical analysis of these dependencies when random variables are sampled from stationary time-series of arbitrary objects . to achieve this goal , we study the properties of the kernel cross-spectral density ( kcsd ) operator induced by positive definite kernels on arbitrary input domains . this framework enables us to develop an independence test between time series , as well as a similarity measure to compare different types of coupling .
the performance of our test is compared to the hsic test using i.i.d . assumptions , showing improvements in terms of detection errors , as well as the suitability of this approach for testing dependency in complex dynamical systems . this similarity measure enables us to identify different types of interactions in electrophysiological neural time series . story_separator_special_tag we present two simple and explicit procedures for testing homogeneity of two independent multivariate samples of size n. the nonparametric tests are based on the statistic t_n , which is the l_1 distance between the two empirical distributions restricted to a finite partition . both tests reject the null hypothesis of homogeneity if t_n becomes large , i.e . , if t_n exceeds a threshold . we first discuss chernoff-type large deviation properties of t_n . this results in a distribution-free strongly consistent test of homogeneity . then the asymptotic null distribution of the test statistic is obtained , leading to an asymptotically α-level test procedure . story_separator_special_tag probability distributions.- linear models for regression.- linear models for classification.- neural networks.- kernel methods.- sparse kernel machines.- graphical models.- mixture models and em.- approximate inference.- sampling methods.- continuous latent variables.- sequential data.- combining models . story_separator_special_tag we consider the problem of assigning class labels to an unlabeled test data set , given several labeled training data sets drawn from similar distributions . this problem arises in several applications where data distributions fluctuate because of biological , technical , or other sources of variation . we develop a distribution-free , kernel-based approach to the problem . this approach involves identifying an appropriate reproducing kernel hilbert space and optimizing a regularized empirical risk over the space . we present generalization error analysis , describe universal kernels , and establish universal consistency of the proposed methodology . experimental results on flow cytometry data are presented . story_separator_special_tag let g be a symmetric function with k arguments . a u-statistic is the arithmetic mean of g 's based on the n = n ! / ( k ! ( n - k ) ! ) subsamples of size k taken from a sample of size n. when n is large , it may be convenient to use instead an 'incomplete ' u-statistic based on m suitably selected subsamples . the variance of such a statistic is studied , exactly and asymptotically for large m and n. it is shown that an incomplete statistic may be asymptotically efficient compared with the 'complete ' one even when m increases much less rapidly than n. some sufficient conditions for asymptotic normality of an incomplete u-statistic are given .
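a minimal sketch of the incomplete u-statistic just described : average a symmetric kernel g over m randomly drawn size-k subsamples rather than over all possible subsamples . the function names and the variance-kernel example are hypothetical .

    import numpy as np

    def incomplete_u(sample, g, k, m, seed=0):
        # average the symmetric kernel g over m randomly selected
        # size-k subsamples of the data
        rng = np.random.default_rng(seed)
        n = len(sample)
        vals = [g(*sample[rng.choice(n, size=k, replace=False)])
                for _ in range(m)]
        return float(np.mean(vals))

    # example : estimate the variance through the degree-2 kernel
    # g(x1, x2) = (x1 - x2)^2 / 2
    x = np.random.default_rng(1).normal(size=1000)
    est = incomplete_u(x, lambda a, b: 0.5 * (a - b) ** 2, k=2, m=2000)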
story_separator_special_tag random projection is a simple technique that has had a number of applications in algorithm design . in the context of machine learning , it can provide insight into questions such as `` why is a learning problem easier if data is separable by a large margin ? '' and `` in what sense is choosing a kernel much like choosing a set of features ? '' this talk is intended to provide an introduction to random projection and to survey some simple learning algorithms and other applications to learning based on it . i will also discuss how , given a kernel as a black-box function , we can use various forms of random projection to extract an explicit small feature space that captures much of what the kernel is doing . this talk is based in large part on work in [ bb05 , bbv04 ] joint with nina balcan and santosh vempala . story_separator_special_tag predictive state representations ( psrs ) are an expressive class of models for controlled stochastic processes . psrs represent state as a set of predictions of future observable events . because psrs are defined entirely in terms of observable data , statistically consistent estimates of psr parameters can be learned efficiently by manipulating moments of observed training data . most learning algorithms for psrs have assumed that actions and observations are finite with low cardinality . in this paper , we generalize psrs to infinite sets of observations and actions , using the recent concept of hilbert space embeddings of distributions . the essence is to represent the state as a nonparametric conditional embedding operator in a reproducing kernel hilbert space ( rkhs ) and leverage recent work in kernel methods to estimate , predict , and update the representation . we show that these hilbert space embeddings of psrs are able to gracefully handle continuous actions and observations , and that our learned models outperform competing system identification algorithms on several prediction benchmarks . story_separator_special_tag motivation : many problems in data integration in bioinformatics can be posed as one common question : are two sets of observations generated by the same distribution ? we propose a kernel-based statistical test for this problem , based on the fact that two distributions are different if and only if there exists at least one function having different expectation on the two distributions . consequently we use the maximum discrepancy between function means as the basis of a test statistic . the maximum mean discrepancy ( mmd ) can take advantage of the kernel trick , which allows us to apply it not only to vectors , but also strings , sequences , graphs , and other common structured data types arising in molecular biology . results : we study the practical feasibility of an mmd-based test on three central data integration tasks : testing cross-platform comparability of microarray data , cancer diagnosis , and data-content based schema matching for two different protein function classification schemas . in all of these experiments , including high-dimensional ones , mmd is very accurate in finding samples that were generated from the same distribution , and outperforms its best competitors . conclusions : story_separator_special_tag a training algorithm that maximizes the margin between the training patterns and the decision boundary is presented . the technique is applicable to a wide variety of classification functions , including perceptrons , polynomials , and radial basis functions . the effective number of parameters is adjusted automatically to match the complexity of the problem . the solution is expressed as a linear combination of supporting patterns . these are the subset of training patterns that are closest to the decision boundary . bounds on the generalization performance based on the leave-one-out method and the vc-dimension are given . experimental results on optical character recognition problems demonstrate the good generalization obtained when compared with other learning algorithms .
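the mmd-based test summarized in the bioinformatics abstract above rests on an unbiased estimate of the squared maximum mean discrepancy . a minimal numpy version with a gaussian kernel is sketched below ; the kernel choice , its bandwidth and the names are assumptions . a p-value can then be obtained by recomputing the statistic over random relabelings of the pooled sample .

    import numpy as np

    def mmd2_unbiased(x, y, gamma=1.0):
        # unbiased estimate of the squared maximum mean discrepancy with a
        # gaussian kernel ; diagonal terms are removed from the
        # within-sample averages to make the estimate unbiased
        def k(a, b):
            d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
            return np.exp(-gamma * d2)
        n, m = len(x), len(y)
        kxx, kyy, kxy = k(x, x), k(y, y), k(x, y)
        term_x = (kxx.sum() - np.trace(kxx)) / (n * (n - 1))
        term_y = (kyy.sum() - np.trace(kyy)) / (m * (m - 1))
        return term_x + term_y - 2.0 * kxy.mean()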
story_separator_special_tag we present a new type of probabilistic model which we call dissimilarity coefficient networks ( disco nets ) . disco nets allow us to efficiently sample from a posterior distribution parametrised by a neural network . during training , disco nets are learned by minimising the dissimilarity coefficient between the true distribution and the estimated distribution . this allows us to tailor the training to the loss related to the task at hand . we empirically show that ( i ) by modeling uncertainty on the output value , disco nets outperform equivalent non-probabilistic predictive networks and ( ii ) disco nets accurately model the uncertainty of the output , outperforming existing probabilistic models based on deep neural networks . story_separator_special_tag we describe a novel non-parametric statistical hypothesis test of relative dependence between a source variable and two candidate target variables . such a test enables us to determine whether one source variable is significantly more dependent on a first target variable or a second . dependence is measured via the hilbert-schmidt independence criterion ( hsic ) , resulting in a pair of empirical dependence measures ( source-target 1 , source-target 2 ) . we test whether the first dependence measure is significantly larger than the second . modeling the covariance between these hsic statistics leads to a provably more powerful test than the construction of independent hsic statistics by subsampling . the resulting test is consistent and unbiased , and ( being based on u-statistics ) has favorable convergence properties . the test can be computed in quadratic time , matching the computational complexity of standard empirical hsic estimators . the effectiveness of the test is demonstrated on several real-world problems : we identify language groups from a multilingual corpus , and we prove that tumor location is more dependent on gene expression than chromosomal imbalances . source code is available for download at https://github.com/wbounliphone/reldep . story_separator_special_tag probabilistic generative models provide a powerful framework for representing data that avoids the expense of manual annotation typically needed by discriminative approaches . model selection in this generative setting can be challenging , however , particularly when likelihoods are not easily accessible . to address this issue , we introduce a statistical test of relative similarity , which is used to determine which of two models generates samples that are significantly closer to a real-world reference dataset of interest . we use as our test statistic the difference in maximum mean discrepancies ( mmds ) between the reference dataset and each model dataset , and derive a powerful , low-variance test based on the joint asymptotic distribution of the mmds between each reference-model pair . in experiments on deep generative models , including the variational auto-encoder and generative moment matching network , the tests provide a meaningful ranking of model performance as a function of parameter and training settings . story_separator_special_tag the tutorial starts with an overview of the concepts of vc dimension and structural risk minimization . we then describe linear support vector machines ( svms ) for separable and non-separable data , working through a non-trivial example in detail . we describe a mechanical analogy , and discuss when svm solutions are unique and when they are global .
we describe how support vector training can be practically implemented , and discuss in detail the kernel mapping technique which is used to construct svm solutions which are nonlinear in the data . we show how support vector machines can have very large ( even infinite ) vc dimension by computing the vc dimension for homogeneous polynomial and gaussian radial basis function kernels . while very high vc dimension would normally bode ill for generalization performance , and while at present there exists no theory which shows that good generalization performance is guaranteed for svms , there are several arguments which support the observed high accuracy of svms , which we review . results of some experiments which were inspired by these arguments are also presented . we give numerous examples and proofs of most of the key theorems . story_separator_special_tag we develop a theoretical analysis of the performance of the regularized least-square algorithm on a reproducing kernel hilbert space in the supervised learning setting . the presented results hold in the general framework of vector-valued functions ; therefore they can be applied to multitask problems . in particular , we observe that the concept of effective dimension plays a central role in the definition of a criterion for the choice of the regularization parameter as a function of the number of samples . moreover , a complete minimax analysis of the problem is described , showing that the convergence rates obtained by regularized least-squares estimators are indeed optimal over a suitable class of priors defined by the considered kernel . finally , we give an improved lower rate result describing worst asymptotic behavior on individual probability measures rather than over classes of priors . story_separator_special_tag in this paper we are concerned with reproducing kernel hilbert spaces hk of functions from an input space into a hilbert space y , an environment appropriate for multi-task learning . the reproducing kernel k associated to hk has its values as operators on y. our primary goal here is to derive conditions which ensure that the kernel k is universal . this means that on every compact subset of the input space , every continuous function with values in y can be uniformly approximated by sections of the kernel . we provide various characterizations of universal kernels and highlight them with several concrete examples of some practical importance . our analysis uses basic principles of functional analysis and especially the useful notion of vector measures which we describe in sufficient detail to clarify our results . story_separator_special_tag we characterize the reproducing kernel hilbert spaces whose elements are p-integrable functions in terms of the boundedness of the integral operator whose kernel is the reproducing kernel . moreover , for p = 2 , we show that the spectral decomposition of this integral operator gives a complete description of the reproducing kernel , extending the mercer theorem . story_separator_special_tag the vicinal risk minimization principle establishes a bridge between generative models and methods derived from the structural risk minimization principle such as support vector machines or statistical regularization . we explain how vrm provides a framework which integrates a number of existing algorithms , such as parzen windows , support vector machines , ridge regression , constrained logistic classifiers and tangent-prop . 
we then show how the approach implies new algorithms for solving problems usually associated with generative models . new algorithms are described for dealing with pattern recognition problems with very different pattern distributions and dealing with unlabeled data . preliminary empirical results are presented . story_separator_special_tag we extend the herding algorithm to continuous spaces by using the kernel trick . the resulting `` kernel herding '' algorithm is an infinite memory deterministic process that learns to approximate a pdf with a collection of samples . we show that kernel herding decreases the error of expectations of functions in the hilbert space at a rate o ( 1/t ) , which is much faster than the usual o ( 1/√t ) for iid random samples . we illustrate kernel herding by approximating bayesian predictive distributions . story_separator_special_tag causal discovery via the asymmetry between the cause and the effect has proved to be a promising way to infer the causal direction from observations . the basic idea is to assume that the mechanism generating the cause distribution px and that generating the conditional distribution py|x correspond to two independent natural processes and thus px and py|x fulfill some sort of independence condition . however , in many situations , the independence condition does not hold for the anticausal direction ; if we consider px , y as generated via pypx|y , then there are usually some contrived mutual adjustments between py and px|y . this kind of asymmetry can be exploited to identify the causal direction . based on this postulate , in this letter , we define an uncorrelatedness criterion between px and py|x and , based on this uncorrelatedness , show asymmetry between the cause and the effect in terms that a certain complexity metric on px and py|x is less than the complexity metric on py and px|y . we propose a hilbert space embedding-based method emd ( an abbreviation for embedding ) to calculate the complexity metric and show that this method preserves the relative magnitude story_separator_special_tag during the last years support vector machines ( svms ) have been successfully applied in situations where the input space x is not necessarily a subset of r^d . examples include svms for the analysis of histograms or colored images , svms for text classification and web mining , and svms for applications from computational biology using , e.g . , kernels for trees and graphs . moreover , svms are known to be consistent with respect to the bayes risk , if either the input space is a complete separable metric space and the reproducing kernel hilbert space ( rkhs ) h ⊂ lp ( px ) is dense , or if the svm uses a universal kernel k. so far , however , there are no kernels of practical interest known that satisfy these assumptions if x is not a subset of r^d . we close this gap by providing a general technique based on taylor-type kernels to explicitly construct universal kernels on compact metric spaces which are not subsets of r^d . we apply this technique for the following special cases : universal kernels on the set of probability measures , universal kernels based on fourier transforms , and universal kernels for signal processing .
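the kernel herding procedure above admits a simple finite-pool approximation : greedily pick the point whose feature map best closes the gap to the pool 's kernel mean . the sketch below assumes a gaussian kernel and a finite candidate set ; it is an illustrative reading of the algorithm , not the authors' implementation .

    import numpy as np

    def kernel_herding(pool, n_select, gamma=1.0):
        # greedy kernel herding over a finite candidate pool : at step t,
        # maximize (t + 1) * mu(x) - sum_s k(x_s, x), which is the usual
        # herding objective written with kernel evaluations only
        d2 = ((pool[:, None, :] - pool[None, :, :]) ** 2).sum(-1)
        K = np.exp(-gamma * d2)
        mu = K.mean(axis=1)      # kernel mean embedding of the pool
        picked = []
        for t in range(n_select):
            obj = (t + 1) * mu - (K[:, picked].sum(axis=1) if picked else 0.0)
            picked.append(int(np.argmax(obj)))
        return pool[picked]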
story_separator_special_tag a new non-parametric approach to the problem of testing the independence of two random processes is developed . the test statistic is the hilbert schmidt independence criterion ( hsic ) , which was used previously in testing independence for i.i.d . pairs of variables . the asymptotic behaviour of hsic is established when computed from samples drawn from random processes . it is shown that earlier bootstrap procedures which worked in the i.i.d . case will fail for random processes , and an alternative consistent estimate of the p-values is proposed . tests on artificial data and real-world forex data indicate that the new test procedure discovers dependence which is missed by linear approaches , while the earlier bootstrap procedure returns an elevated number of false positives . the code is available online : this https url . story_separator_special_tag a wild bootstrap method for nonparametric hypothesis tests based on kernel distribution embeddings is proposed . this bootstrap method is used to construct provably consistent tests that apply to random processes , for which the naive permutation-based bootstrap fails . it applies to a large group of kernel tests based on v-statistics , which are degenerate under the null hypothesis , and non-degenerate elsewhere . to illustrate this approach , we construct a two-sample test , an instantaneous independence test and a multiple lag independence test for time series . in experiments , the wild bootstrap gives strong performance on synthetic examples , on audio data , and in performance benchmarking for the gibbs sampler . story_separator_special_tag we propose a class of nonparametric two-sample tests with a cost linear in the sample size . two tests are given , both based on an ensemble of distances between analytic functions representing each of the distributions . the first test uses smoothed empirical characteristic functions to represent the distributions , the second uses distribution embeddings in a reproducing kernel hilbert space . analyticity implies that differences in the distributions may be detected almost surely at a finite number of randomly chosen locations/frequencies . the new tests are consistent against a larger class of alternatives than the previous linear-time tests based on the ( non-smoothed ) empirical characteristic functions , while being much faster than the current state-of-the-art quadratic-time kernel-based or energy distance-based tests . experiments on artificial benchmarks and on challenging real-world testing problems demonstrate that our tests give a better power/time tradeoff than competing approaches , and in some cases , better outright power than even the most expensive quadratic-time tests . this performance advantage is retained even in high dimensions , and in cases where the difference in distributions is not observable with low order statistics . story_separator_special_tag we propose a nonparametric statistical test for goodness-of-fit : given a set of samples , the test determines how likely it is that these were generated from a target density function . the measure of goodness-of-fit is a divergence constructed via stein 's method using functions from a reproducing kernel hilbert space . our test statistic is based on an empirical estimate of this divergence , taking the form of a v-statistic in terms of the log gradients of the target density and the kernel . we derive a statistical test , both for i.i.d . and non-i.i.d . samples , where we estimate the null distribution quantiles using a wild bootstrap procedure . we apply our test to quantifying convergence of approximate markov chain monte carlo methods , statistical model criticism , and evaluating quality of fit vs model complexity in nonparametric density estimation .
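the goodness-of-fit statistic in the preceding abstract , a kernel stein discrepancy , has a closed form once the score function ( the log gradient of the target density ) is available ; for a gaussian kernel the required kernel derivatives are elementary . the v-statistic sketch below is a generic rendering under those assumptions , not the authors' code .

    import numpy as np

    def ksd_vstat(x, score, gamma=1.0):
        # v-statistic estimate of a squared kernel stein discrepancy with
        # the gaussian kernel k(a, b) = exp(-gamma ||a - b||^2) ;
        # `score` maps an n x d sample to grad log q of the target density
        n, d = x.shape
        diff = x[:, None, :] - x[None, :, :]       # x_i - x_j
        r2 = (diff ** 2).sum(-1)
        k = np.exp(-gamma * r2)
        s = score(x)                               # n x d score values
        ss = s @ s.T                               # <s(x_i), s(x_j)>
        sd_i = 2.0 * gamma * (s[:, None, :] * diff).sum(-1) * k
        sd_j = -2.0 * gamma * (s[None, :, :] * diff).sum(-1) * k
        trace = (2.0 * gamma * d - 4.0 * gamma**2 * r2) * k
        return (ss * k + sd_i + sd_j + trace).mean()

for a standard normal target the call would be ksd_vstat ( samples , lambda z : -z ) , since the score of that density is -z .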
story_separator_special_tag the problem of learning a transduction , that is a string-to-string mapping , is a common problem arising in natural language processing and computational biology . previous methods proposed for learning such mappings are based on classification techniques . this paper presents a new and general regression technique for learning transductions and reports the results of experiments showing its effectiveness . our transduction learning consists of two phases : the estimation of a set of regression coefficients and the computation of the pre-image corresponding to this set of coefficients . a novel and conceptually cleaner formulation of kernel dependency estimation provides a simple framework for estimating the regression coefficients , and an efficient algorithm for computing the pre-image from the regression coefficients extends the applicability of kernel dependency estimation to output sequences . we report the results of a series of experiments illustrating the application of our regression technique for learning transductions . story_separator_special_tag we examine the problem of approximating the mean of a set of vectors as a sparse linear combination of those vectors . this problem is motivated by a common methodology in machine learning where a probability distribution is represented as the sample mean of kernel functions . in applications where this kernel mean function is evaluated repeatedly , having a sparse approximation is essential for scalability . however , existing sparse approximation algorithms such as matching pursuit and basis pursuit scale quadratically in the sample size , and are therefore not well suited to this problem for large sample sizes . we introduce an approximation bound involving a novel incoherence measure , and propose bound minimization as a sparse approximation strategy . in the context of sparsely approximating a kernel mean function , the bound is efficiently minimized by solving an appropriate instance of the k-center problem , and the resulting algorithm has linear complexity in the sample size . story_separator_special_tag ( 1 ) a main theme of this report is the relationship of approximation to learning and the primary role of sampling ( inductive inference ) . we try to emphasize relations of the theory of learning to the mainstream of mathematics . in particular , there are large roles for probability theory , for algorithms such as least squares , and for tools and ideas from linear algebra and linear analysis . an advantage of doing this is that communication is facilitated and the power of core mathematics is more easily brought to bear . we illustrate what we mean by learning theory by giving some instances . ( a ) the understanding of language acquisition by children or the emergence of languages in early human cultures . ( b ) in manufacturing engineering , the design of a new wave of machines is anticipated which uses sensors to sample properties of objects before , during , and after treatment . the information gathered from these samples is to be analyzed by the machine to decide how to better deal with new input objects ( see [ 43 ] ) . ( c ) pattern recognition of objects ranging story_separator_special_tag we present a family of positive definite kernels on measures , characterized by the fact that the value of the kernel between two measures is a function of their sum .
these kernels can be used to derive kernels on structured objects , such as images and texts , by representing these objects as sets of components , such as pixels or words , or more generally as measures on the space of components . several kernels studied in this work make use of common quantities defined on measures such as entropy or generalized variance to detect similarities . given an a priori kernel on the space of components itself , the approach is further extended by restating the previous results in a more efficient and flexible framework using the `` kernel trick '' . finally , a constructive approach to such positive definite kernels through an integral representation theorem is proved , before presenting experimental results on a benchmark experiment of handwritten digits classification to illustrate the validity of the approach . story_separator_special_tag do two data samples come from different distributions ? recent studies of this fundamental problem focused on embedding probability distributions into sufficiently rich characteristic reproducing kernel hilbert spaces ( rkhss ) , to compare distributions by the distance between their embeddings . we show that regularized maximum mean discrepancy ( rmmd ) , our novel measure for kernel-based hypothesis testing , yields substantial improvements even when sample sizes are small , and excels at hypothesis tests involving multiple comparisons with power control . we derive asymptotic distributions under the null and alternative hypotheses , and assess power control . outstanding results are obtained on : challenging eeg data , mnist , the berkeley covertype , and the flare-solar dataset . story_separator_special_tag a result of johnson and lindenstrauss [ 13 ] shows that a set of n points in high dimensional euclidean space can be mapped into an o ( log n / ε^2 ) -dimensional euclidean space such that the distance between any two points changes by only a factor of ( 1 ± ε ) . in this note , we prove this theorem using elementary probabilistic techniques . story_separator_special_tag we define partial φ^2 and additive partial φ^2 measures of association between two random variables and we characterize conditional independence by the lack of correlation between functions of the random variables . we study decompositions of φ^2 which generalize the correlation ratio decomposition . application to regression on qualitative variables is an illustration . this article is in the spirit of lancaster 's work ( 1969 ) . he studied the φ^2 measure of association between random variables . in the next section we recall lancaster 's definition of φ^2 and its properties . the definition and results of sections 3 , 4 and 5 are new . in section 3 we give a useful characterization of conditional independence , and define partial φ^2 and additive partial association measures , the latter being a simplified form of the former . we prove a conjecture of wermuth ( 1976 ) . in section 4 we use the definitions and results of section 3 for studying the problem of probabilistic multiple regression , and the decomposition of the prediction space and of φ^2 . sections 2 , 3 and 4 of this paper shed light story_separator_special_tag this wide ranging but self-contained account of the spectral theory of non-self-adjoint linear operators is ideal for postgraduate students and researchers , and contains many illustrative examples and exercises .
fredholm theory , hilbert-schmidt and trace class operators are discussed , as are one-parameter semigroups and perturbations of their generators . two chapters are devoted to using these tools to analyze markov semigroups . the text also provides a thorough account of the new theory of pseudospectra , and presents the recent analysis by the author and barry simon of the form of the pseudospectra at the boundary of the numerical range . this was a key ingredient in the determination of properties of the zeros of certain orthogonal polynomials on the unit circle . finally , two methods , both very recent , for obtaining bounds on the eigenvalues of non-self-adjoint schrodinger operators are described . the text concludes with a description of the surprising spectral properties of the non-self-adjoint harmonic oscillator . story_separator_special_tag theoretical neuroscience provides a quantitative basis for describing what nervous systems do , determining how they function , and uncovering the general principles by which they operate . this text introduces the basic mathematical and computational methods of theoretical neuroscience and presents applications in a variety of areas including vision , sensory-motor integration , development , learning , and memory . the book is divided into three parts . part i discusses the relationship between sensory stimuli and neural responses , focusing on the representation of information by the spiking activity of neurons . part ii discusses the modeling of neurons and neural circuits on the basis of cellular and synaptic biophysics . part iii analyzes the role of plasticity in development and learning . an appendix covers the mathematical methods used , and exercises are available on the book 's web site . story_separator_special_tag in this paper we show that a large class of regularization methods designed for solving ill-posed inverse problems gives rise to novel learning algorithms . all these algorithms are consistent kernel methods which can be easily implemented . the intuition behind our approach is that , by looking at regularization from a filter function perspective , filtering out undesired components of the target function ensures stability with respect to the random sampling thereby inducing good generalization properties . we present a formal derivation of the methods under study by recalling that learning can be written as the inversion of a linear embedding equation given a stochastic discretization . consistency as well as finite sample bounds are derived for both regression and classification . story_separator_special_tag kernel k-means and spectral clustering have both been used to identify clusters that are non-linearly separable in input space . despite significant research , these methods have remained only loosely related . in this paper , we give an explicit theoretical connection between them . we show the generality of the weighted kernel k-means objective function , and derive the spectral clustering objective of normalized cut as a special case . given a positive definite similarity matrix , our results lead to a novel weighted kernel k-means algorithm that monotonically decreases the normalized cut . this has important implications : a ) eigenvector-based algorithms , which can be computationally prohibitive , are not essential for minimizing normalized cuts , b ) various techniques , such as local search and acceleration schemes , may be used to improve the quality as well as speed of kernel k-means . finally , we present results on several interesting data sets , including diametrical clustering of large gene-expression matrices and a handwriting recognition data set .
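a minimal sketch of the kernel k-means idea above ( with uniform weights ) : distances to cluster means in feature space are computed entirely through a precomputed kernel matrix , so no eigenvector computation is needed . the initialization , the crude empty-cluster guard and the names are illustrative assumptions .

    import numpy as np

    def kernel_kmeans(K, k, n_iter=50, seed=0):
        # lloyd-style kernel k-means on a precomputed kernel matrix K ;
        # ||phi(x_i) - m_c||^2 expands into three kernel-only terms
        rng = np.random.default_rng(seed)
        n = K.shape[0]
        labels = rng.integers(k, size=n)
        for _ in range(n_iter):
            dist = np.zeros((n, k))
            for c in range(k):
                mask = labels == c
                nc = max(mask.sum(), 1)
                dist[:, c] = (np.diag(K)
                              - 2.0 * K[:, mask].sum(axis=1) / nc
                              + K[np.ix_(mask, mask)].sum() / nc**2)
            new = dist.argmin(axis=1)
            if np.array_equal(new, labels):
                break
            labels = new
        return labels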
story_separator_special_tag vector integration . the stochastic integral . martingales . processes with finite variation . processes with finite semivariation . the ito formula . stochastic integration in the plane . two-parameter martingales . two-parameter processes with finite variation . two-parameter processes with finite semivariation . references . story_separator_special_tag i propose to investigate learning in the multiple-instance ( mi ) framework as a problem of learning from distributions . in many mi applications , bags of instances can be thought of as samples from bag-generating distributions . recent kernel approaches for learning from distributions have the potential to be successfully applied to these domains and other mi learning problems . understanding when distribution-based techniques work for mi learning will lead to new theoretical insights , improved algorithms , and more accurate solutions for real-world problems . story_separator_special_tag determining conditional independence ( ci ) relationships between random variables is a challenging but important task for problems such as bayesian network learning and causal discovery . we propose a new kernel ci test that uses a single , learned permutation to convert the ci test problem into an easier two-sample test problem . the learned permutation leaves the joint distribution unchanged if and only if the null hypothesis of ci holds . then , a kernel two-sample test , which has been studied extensively in prior work , can be applied to a permuted and an unpermuted sample to test for ci . we demonstrate that the test ( 1 ) easily allows the incorporation of prior knowledge during the permutation step , ( 2 ) has power competitive with state-of-the-art kernel ci tests , and ( 3 ) accurately estimates the null distribution of the test statistic , even as the dimensionality of the conditioning variable grows . story_separator_special_tag provides a unified , comprehensive and up-to-date treatment of both statistical and descriptive methods for pattern recognition . the topics treated include bayesian decision theory , supervised and unsupervised learning , nonparametric techniques , discriminant analysis , clustering , preprocessing of pictorial data , spatial filtering , shape description techniques , perspective transformations , projective invariants , linguistic procedures , and artificial intelligence techniques for scene analysis . story_separator_special_tag over the past five years a new approach to privacy-preserving data analysis has borne fruit [ 13 , 18 , 7 , 19 , 5 , 37 , 35 , 8 , 32 ] . this approach differs from much ( but not all ! ) of the related literature in the statistics , databases , theory , and cryptography communities , in that a formal and ad omnia privacy guarantee is defined , and the data analysis techniques presented are rigorously proved to satisfy the guarantee . the key privacy guarantee that has emerged is differential privacy . roughly speaking , this ensures that ( almost , and quantifiably ) no risk is incurred by joining a statistical database . in this survey , we recall the definition of differential privacy and two basic techniques for achieving it .
we then show some interesting applications of these techniques , presenting algorithms for three specific tasks and three general results on differentially private learning . story_separator_special_tag we consider training a deep neural network to generate samples from an unknown distribution given i.i.d . data . we frame learning as an optimization minimizing a two-sample test statistic -- informally speaking , a good generator network produces samples that cause a two-sample test to fail to reject the null hypothesis . as our two-sample test statistic , we use an unbiased estimate of the maximum mean discrepancy , which is the centerpiece of the nonparametric kernel two-sample test proposed by gretton et al . ( 2012 ) . we compare to the adversarial nets framework introduced by goodfellow et al . ( 2014 ) , in which learning is a two-player game between a generator network and an adversarial discriminator network , both trained to outwit the other . from this perspective , the mmd statistic plays the role of the discriminator . in addition to empirical comparisons , we prove bounds on the generalization error incurred by optimizing the empirical mmd . story_separator_special_tag we present a novel bayesian approach to the problem of value function estimation in continuous state spaces . we define a probabilistic generative model for the value function by imposing a gaussian prior over value functions and assuming a gaussian noise model . due to the gaussian nature of the random processes involved , the posterior distribution of the value function is also gaussian and is therefore described entirely by its mean and covariance . we derive exact expressions for the posterior process moments , and utilizing an efficient sequential sparsification method , we describe an on-line algorithm for learning them . we demonstrate the operation of the algorithm on a 2-dimensional continuous spatial navigation domain . story_separator_special_tag much of the research in machine learning has centered around the search for inference algorithms that are both general-purpose and efficient . the problem is extremely challenging and general inference remains computationally expensive . we seek to address this problem by observing that in most specific applications of a model , we typically only need to perform a small subset of all possible inference computations . motivated by this , we introduce just-in-time learning , a framework for fast and flexible inference that learns to speed up inference at run-time . through a series of experiments , we show how this framework can allow us to combine the flexibility of sampling with the efficiency of deterministic message-passing . story_separator_special_tag certain probability properties of c_n ( t ) , the empirical characteristic function ( ecf ) , are investigated . more specifically it is shown under some general restrictions that c_n ( t ) converges uniformly almost surely to the population characteristic function c ( t ) . the weak convergence of n^ { 1/2 } ( c_n ( t ) - c ( t ) ) to a gaussian complex process is proved . it is suggested that the ecf may be a useful tool in numerous statistical problems . application of these ideas is illustrated with reference to testing for symmetry about the origin : the statistic ∫ [ im c_n ( t ) ]^2 dg ( t ) is proposed and its asymptotic distribution evaluated .
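the empirical characteristic function and the symmetry statistic above are straightforward to approximate numerically : evaluate c_n ( t ) on random frequencies and average [ im c_n ( t ) ]^2 under a gaussian weight , a monte carlo stand-in for the integral against dg ( t ) . the names and the choice of weight are illustrative assumptions .

    import numpy as np

    def ecf(x, t):
        # empirical characteristic function c_n(t) = mean of exp(i t x)
        return np.exp(1j * np.outer(t, x)).mean(axis=1)

    def symmetry_stat(x, n_t=200, scale=1.0, seed=0):
        # monte carlo version of the statistic integral [im c_n(t)]^2 dg(t)
        # with g a centred gaussian weight ; under symmetry about the
        # origin the imaginary part of the ecf is close to zero
        rng = np.random.default_rng(seed)
        t = rng.normal(scale=scale, size=n_t)
        return float((ecf(x, t).imag ** 2).mean())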
story_separator_special_tag svm training is a convex optimization problem which scales with the training set size rather than the feature space dimension . while this is usually considered to be a desired quality , in large scale problems it may cause training to be impractical . the common techniques to handle this difficulty basically build a solution by solving a sequence of small scale subproblems . our current effort is concentrated on the rank of the kernel matrix as a source for further enhancement of the training procedure . we first show that for a low rank kernel matrix it is possible to design a better interior point method ( ipm ) in terms of storage requirements as well as computational complexity . we then suggest an efficient use of a known factorization technique to approximate a given kernel matrix by a low rank matrix , which in turn will be used to feed the optimizer . finally , we derive an upper bound on the change in the objective function value based on the approximation error and the number of active constraints ( support vectors ) . this bound is general in the sense that it holds regardless of the approximation story_separator_special_tag in applied fields , practitioners hoping to apply causal structure learning or causal orientation algorithms face an important question : which independence test is appropriate for my data ? in the case of real-valued iid data , linear dependencies , and gaussian error terms , partial correlation is sufficient . but once any of these assumptions is modified , the situation becomes more complex . kernel-based tests of independence have gained popularity to deal with nonlinear dependencies in recent years , but testing for conditional independence remains a challenging problem . we highlight the important issue of non-iid observations : when data are observed in space , time , or on a network , nearby observations are likely to be similar . this fact biases estimates of dependence between variables . inspired by the success of gaussian process regression for handling non-iid observations in a wide variety of areas and by the usefulness of the hilbert-schmidt independence criterion ( hsic ) , a kernel-based independence test , we propose a simple framework to address all of these issues : first , use gaussian process regression to control for certain variables and to obtain residuals . second , use hsic to story_separator_special_tag we present a new solution to the `` ecological inference '' problem , of learning individual-level associations from aggregate data . this problem has a long history and has attracted much attention , debate , claims that it is unsolvable , and purported solutions . unlike other ecological inference techniques , our method makes use of unlabeled individual-level data by embedding the distribution over these predictors into a vector in hilbert space . our approach relies on recent learning theory results for distribution regression , using kernel embeddings of distributions . our novel approach to distribution regression exploits the connection between gaussian process regression and kernel ridge regression , giving us a coherent , bayesian approach to learning and inference and a convenient way to include prior information in the form of a spatial covariance function . our approach is highly scalable as it relies on fastfood , a randomized explicit feature representation for kernel embeddings . 
we apply our approach to the challenging political science problem of modeling the voting behavior of demographic groups based on aggregate voting data . we consider the 2012 us presidential election , and ask : what was the probability that members of various story_separator_special_tag kernel methods are one of the mainstays of machine learning , but the problem of kernel learning remains challenging , with only a few heuristics and very little theory . this is of particular importance in methods based on estimation of kernel mean embeddings of probability measures . for characteristic kernels , which include most commonly used ones , the kernel mean embedding uniquely determines its probability measure , so it can be used to design a powerful statistical testing framework , which includes non-parametric two-sample and independence tests . in practice , however , the performance of these tests can be very sensitive to the choice of kernel and its lengthscale parameters . to address this central issue , we propose a new probabilistic model for kernel mean embeddings , the bayesian kernel embedding model , combining a gaussian process prior over the reproducing kernel hilbert space containing the mean embedding with a conjugate likelihood function , thus yielding a closed form posterior over the mean embedding . the posterior mean of our model is closely related to recently proposed shrinkage estimators for kernel mean embeddings , while the posterior uncertainty is a new , interesting feature with various story_separator_special_tag we propose a novel method of dimensionality reduction for supervised learning problems . given a regression or classification problem in which we wish to predict a response variable y from an explanatory variable x , we treat the problem of dimensionality reduction as that of finding a low-dimensional `` effective subspace '' for x which retains the statistical relationship between x and y. we show that this problem can be formulated in terms of conditional independence . to turn this formulation into an optimization problem we establish a general nonparametric characterization of conditional independence using covariance operators on reproducing kernel hilbert spaces . this characterization allows us to derive a contrast function for estimation of the effective subspace . unlike many conventional methods for dimensionality reduction in supervised learning , the proposed method requires neither assumptions on the marginal distribution of x , nor a parametric model of the conditional distribution of y. we present experiments that compare the performance of the method with conventional methods . story_separator_special_tag while kernel canonical correlation analysis ( cca ) has been applied in many contexts , the convergence of finite sample estimates of the associated functions to their population counterparts has not yet been established . this paper gives a mathematical proof of the statistical convergence of kernel cca , providing a theoretical justification for the method . the proof uses covariance operators defined on reproducing kernel hilbert spaces , and analyzes the convergence of their empirical estimates of finite rank to their population counterparts , which can have infinite rank . the result also gives a sufficient condition for convergence on the regularization coefficient involved in kernel cca : this should decrease as n^ { -1/3 } , where n is the number of data .
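one common regularized formulation of kernel cca , in the spirit of the analysis above , reduces to a generalized symmetric eigenproblem over centred kernel matrices . the sketch below uses scipy and an assumed ridge-style regularization ( adding reg times the identity ) , one of several variants in the literature rather than the paper 's exact scheme .

    import numpy as np
    from scipy.linalg import eigh

    def center(K):
        # double-centre a kernel matrix
        n = K.shape[0]
        H = np.eye(n) - np.ones((n, n)) / n
        return H @ K @ H

    def kernel_cca(Kx, Ky, reg=1e-3):
        # top generalized eigenvalue = first kernel canonical correlation ;
        # reg plays the role of the regularization coefficient whose decay
        # rate is discussed in the abstract above
        n = Kx.shape[0]
        Kx, Ky = center(Kx), center(Ky)
        Z = np.zeros((n, n))
        A = np.block([[Z, Kx @ Ky], [Ky @ Kx, Z]])
        B = np.block([[Kx @ Kx + reg * np.eye(n), Z],
                      [Z, Ky @ Ky + reg * np.eye(n)]])
        return eigh(A, B, eigvals_only=True)[-1]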
story_separator_special_tag we propose a new measure of conditional dependence of random variables , based on normalized cross-covariance operators on reproducing kernel hilbert spaces . unlike previous kernel dependence measures , the proposed criterion does not depend on the choice of kernel in the limit of infinite data , for a wide class of kernels . at the same time , it has a straightforward empirical estimate with good convergence behaviour . we discuss the theoretical properties of the measure , and demonstrate its application in experiments . story_separator_special_tag we present a new methodology for sufficient dimension reduction ( sdr ) . our methodology derives directly from the formulation of sdr in terms of the conditional independence of the covariate x from the response y , given the projection of x on the central subspace [ cf . j. amer . statist . assoc . 86 ( 1991 ) 316-342 and regression graphics ( 1998 ) wiley ] . we show that this conditional independence assertion can be characterized in terms of conditional covariance operators on reproducing kernel hilbert spaces and we show how this characterization leads to an m-estimator for the central subspace . the resulting estimator is shown to be consistent under weak conditions ; in particular , we do not have to impose linearity or ellipticity conditions of the kinds that are generally invoked for sdr methods . we also present empirical results showing that the new methodology is competitive in practice . story_separator_special_tag embeddings of random variables in reproducing kernel hilbert spaces ( rkhss ) may be used to conduct statistical inference based on higher order moments . for sufficiently rich ( characteristic ) rkhss , each probability distribution has a unique embedding , allowing all statistical properties of the distribution to be taken into consideration . necessary and sufficient conditions for an rkhs to be characteristic exist for r^n . in the present work , conditions are established for an rkhs to be characteristic on groups and semigroups . illustrative examples are provided , including characteristic kernels on periodic domains , rotation matrices , and r^n_+ . story_separator_special_tag a kernel method for realizing bayes ' rule is proposed , based on representations of probabilities in reproducing kernel hilbert spaces . probabilities are uniquely characterized by the mean of the canonical map to the rkhs . the prior and conditional probabilities are expressed in terms of rkhs functions of an empirical sample : no explicit parametric model is needed for these quantities . the posterior is likewise an rkhs mean of a weighted sample . the estimator for the expectation of a function of the posterior is derived , and rates of consistency are shown . some representative applications of the kernel bayes ' rule are presented , including bayesian computation without likelihood and filtering with a nonparametric state-space model . story_separator_special_tag kernel methods in general and support vector machines in particular have been successful in various learning tasks on data represented in a single table . much 'real-world ' data , however , is structured -- it has no natural representation in a single table . usually , to apply kernel methods to 'real-world ' data , extensive pre-processing is performed to embed the data into a real vector space and thus in a single table . this survey describes several approaches of defining positive definite kernels on structured instances directly .
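the simplest kernel defined directly on structured instances of the set type , and the starting point for the multi-instance kernel in the next abstract , is the mean-map ( set ) kernel : average a base kernel over all cross pairs of instances , which equals the inner product of the two bags' empirical kernel mean embeddings . the rbf base kernel and the names below are assumptions .

    import numpy as np

    def set_kernel(bag_a, bag_b, gamma=1.0):
        # mean-map kernel between two bags of instances (2-d arrays) :
        # the average of an rbf kernel over all cross pairs , i.e. the
        # inner product of the bags' kernel mean embeddings
        d2 = ((bag_a[:, None, :] - bag_b[None, :, :]) ** 2).sum(-1)
        return np.exp(-gamma * d2).mean()

a gram matrix built by evaluating set_kernel over all pairs of bags can then be handed to any standard kernel machine .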
story_separator_special_tag learning from structured data is becoming increasingly important . however , most prior work on kernel methods has focused on learning from attribute-value data . only recently , research started investigating kernels for structured data . this paper considers kernels for multi-instance problems - a class of concepts on individuals represented by sets . the main result of this paper is a kernel on multi-instance data that can be shown to separate positive and negative sets under natural assumptions . this kernel compares favorably with state of the art multi-instance learning algorithms in an empirical study . finally , we give some concluding remarks and propose future work that might further improve the results . story_separator_special_tag in this paper , we present classes of kernels for machine learning from a statistics perspective . indeed , kernels are positive definite functions and thus also covariances . after discussing key properties of kernels , as well as a new formula to construct kernels , we present several important classes of kernels : anisotropic stationary kernels , isotropic stationary kernels , compactly supported kernels , locally stationary kernels , nonstationary kernels , and separable nonstationary kernels . compactly supported kernels and separable nonstationary kernels are of prime interest because they provide a computational reduction for kernel-based methods . we describe the spectral representation of the various classes of kernels and conclude with a discussion on the characterization of nonlinear maps that reduce nonstationary kernels to either stationarity or local stationarity . story_separator_special_tag we consider the problem of multi-step ahead prediction in time series analysis using the non-parametric gaussian process model . k-step ahead forecasting of a discrete-time non-linear dynamic system can be performed by doing repeated one-step ahead predictions . for a state-space model of the form $y_t = f(y_{t-1}, \ldots, y_{t-L})$ , the prediction of $y$ at time $t + k$ is based on the point estimates of the previous outputs . in this paper , we show how , using an analytical gaussian approximation , we can formally incorporate the uncertainty about intermediate regressor values , thus updating the uncertainty on the current prediction . story_separator_special_tag remote sensing image classification constitutes a challenging problem since very few labeled pixels are typically available from the analyzed scene . in such situations , labeled data extracted from other images modeling similar problems might be used to improve the classification accuracy . however , when training and test samples follow even slightly different distributions , classification is very difficult . this problem is known as sample selection bias . in this paper , we propose a new method to combine labeled and unlabeled pixels to increase classification reliability and accuracy . a semisupervised support vector machine classifier based on the combination of clustering and the mean map kernel is proposed . the method reinforces samples in the same cluster belonging to the same class by combining sample and cluster similarities implicitly in the kernel space . a soft version of the method is also proposed where only the most reliable training samples , in terms of likelihood of the image data distribution , are used .
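an illustration of the naive plug-in strategy that the gaussian-approximation abstract above improves upon : repeated one-step-ahead prediction in which each point estimate is fed back as if it were observed , discarding its uncertainty . the lag order , kernel , and function name are illustrative choices of ours :

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def iterated_forecast(y, lag, k):
    # build autoregressive training pairs: (y_j, ..., y_{j+lag-1}) -> y_{j+lag}
    X = np.column_stack([y[i:len(y) - lag + i] for i in range(lag)])
    t = y[lag:]
    gp = GaussianProcessRegressor(kernel=RBF()).fit(X, t)
    window = list(y[-lag:])
    preds = []
    for _ in range(k):
        yhat = gp.predict(np.array(window)[None, :])[0]
        preds.append(yhat)
        window = window[1:] + [yhat]   # feed the point estimate back in
    return np.array(preds)
```

the abstract's contribution is precisely to replace the deterministic feedback in the loop above with a gaussian approximation over the uncertain regressors .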
capabilities of the proposed method are illustrated in a cloud screening application using data from the medium resolution imaging spectrometer ( meris ) instrument onboard the european space agency envisat satellite . cloud story_separator_special_tag we propose a new framework for estimating generative models via an adversarial process , in which we simultaneously train two models : a generative model g that captures the data distribution , and a discriminative model d that estimates the probability that a sample came from the training data rather than g. the training procedure for g is to maximize the probability of d making a mistake . this framework corresponds to a minimax two-player game . in the space of arbitrary functions g and d , a unique solution exists , with g recovering the training data distribution and d equal to 1/2 everywhere . in the case where g and d are defined by multilayer perceptrons , the entire system can be trained with backpropagation . there is no need for any markov chains or unrolled approximate inference networks during either training or generation of samples . experiments demonstrate the potential of the framework through qualitative and quantitative evaluation of the generated samples . story_separator_special_tag the very early appearance of abstract knowledge is often taken as evidence for innateness . we explore the relative learning speeds of abstract and specific knowledge within a bayesian framework and the role for innate structure . we focus on knowledge about causality , seen as a domain-general intuitive theory , and ask whether this knowledge can be learned from co-occurrence of events . we begin by phrasing the causal bayes nets theory of causality and a range of alternatives in a logical language for relational theories . this allows us to explore simultaneous inductive learning of an abstract theory of causality and a causal model for each of several causal systems . we find that the correct theory of causality can be learned relatively quickly , often becoming available before specific causal theories have been learned -- an effect we term the blessing of abstraction . we then explore the effect of providing a variety of auxiliary evidence and find that a collection of simple perceptual input analyzers can help to bootstrap abstract knowledge . together , these results suggest that the most efficient route to causal knowledge may be to build in not an abstract notion of causality story_separator_special_tag probabilistic programs are usual functional or imperative programs with two added constructs : ( 1 ) the ability to draw values at random from distributions , and ( 2 ) the ability to condition values of variables in a program via observations . models from diverse application areas such as computer vision , coding theory , cryptographic protocols , biology and reliability analysis can be written as probabilistic programs . probabilistic inference is the problem of computing an explicit representation of the probability distribution implicitly specified by a probabilistic program . depending on the application , the desired output from inference may vary ; we may want to estimate the expected value of some function f with respect to the distribution , or the mode of the distribution , or simply a set of samples drawn from the distribution .
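for reference , the minimax value function of the adversarial framework summarized above can be written in its standard form ( this is the textbook formulation , not text from the abstract ) :

```latex
\min_G \max_D \; V(D, G) =
  \mathbb{E}_{x \sim p_{\mathrm{data}}}\left[\log D(x)\right]
+ \mathbb{E}_{z \sim p_z}\left[\log\bigl(1 - D(G(z))\bigr)\right]
```

at the optimum the discriminator outputs $D^*(x) = p_{\mathrm{data}}(x) / (p_{\mathrm{data}}(x) + p_g(x))$ , which equals 1/2 everywhere exactly when the generator matches the data distribution .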
in this paper , we describe connections this research area called `` probabilistic programming '' has with programming languages and software engineering , and this includes language design , and the static and dynamic analysis of programs . we survey current state of the art and speculate on promising directions for future research . story_separator_special_tag this tutorial reviews a series of reinforcement learning ( rl ) methods implemented in a reproducing kernel hilbert space ( rkhs ) developed to address the challenges imposed on decoder design . rl-based decoders enable the user to learn the prosthesis control through interactions without desired signals and better represent the subject 's goal to complete the task . the numerous actions in complex tasks and nonstationary neural states form a vast and dynamic state-action space , imposing a computational challenge in the decoder to detect the emerging neural patterns as well as quickly establish and adjust the globally optimal policy . story_separator_special_tag we propose an independence criterion based on the eigen-spectrum of covariance operators in reproducing kernel hilbert spaces ( rkhss ) , consisting of an empirical estimate of the hilbert-schmidt norm of the cross-covariance operator ( we term this a hilbert-schmidt independence criterion , or hsic ) . this approach has several advantages , compared with previous kernel-based independence criteria . first , the empirical estimate is simpler than any other kernel dependence test , and requires no user-defined regularisation . second , there is a clearly defined population quantity which the empirical estimate approaches in the large sample limit , with exponential convergence guaranteed between the two : this ensures that independence tests based on hsic do not suffer from slow learning rates . finally , we show in the context of independent component analysis ( ica ) that the performance of hsic is competitive with that of previously published kernel-based criteria , and of other recently published ica methods . story_separator_special_tag we introduce two new functionals , the constrained covariance and the kernel mutual information , to measure the degree of independence of random variables . these quantities are both based on the covariance between functions of the random variables in reproducing kernel hilbert spaces ( rkhss ) . we prove that when the rkhss are universal , both functionals are zero if and only if the random variables are pairwise independent . we also show that the kernel mutual information is an upper bound near independence on the parzen window estimate of the mutual information . analogous results apply for two correlation-based dependence functionals introduced earlier : we show the kernel canonical correlation and the kernel generalised variance to be independence measures for universal kernels , and prove the latter to be an upper bound on the mutual information near independence . the performance of the kernel dependence functionals in measuring independence is verified in the context of independent component analysis . story_separator_special_tag we propose a framework for analyzing and comparing distributions , allowing us to design statistical tests to determine if two samples are drawn from different distributions . our test statistic is the largest difference in expectations over functions in the unit ball of a reproducing kernel hilbert space ( rkhs ) . 
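before turning to the two-sample setting , a minimal sketch of the biased empirical hilbert-schmidt independence criterion referenced above ; the gaussian kernel and bandwidth are assumed choices :

```python
import numpy as np

def rbf_gram(X, sigma=1.0):
    sq = np.sum(X**2, 1)[:, None] + np.sum(X**2, 1)[None, :] - 2 * X @ X.T
    return np.exp(-sq / (2 * sigma**2))

def hsic_biased(X, Y, sigma=1.0):
    # empirical HSIC: trace(K H L H) / (n-1)^2, with H the centering matrix;
    # no user-defined regularization is needed, as noted in the abstract
    n = X.shape[0]
    H = np.eye(n) - np.ones((n, n)) / n
    K, L = rbf_gram(X, sigma), rbf_gram(Y, sigma)
    return np.trace(K @ H @ L @ H) / (n - 1) ** 2
```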
we present two tests based on large deviation bounds for the test statistic , while a third is based on the asymptotic distribution of this statistic . the test statistic can be computed in quadratic time , although efficient linear time approximations are available . several classical metrics on distributions are recovered when the function space used to compute the difference in expectations is allowed to be more general ( e.g . a banach space ) . we apply our two-sample tests to a variety of problems , including attribute matching for databases using the hungarian marriage method , where they perform strongly . excellent performance is also obtained when comparing distributions over graphs , for which these are the first such tests . story_separator_special_tag we propose a framework for analyzing and comparing distributions , which we use to construct statistical tests to determine if two samples are drawn from different distributions . our test statistic is the largest difference in expectations over functions in the unit ball of a reproducing kernel hilbert space ( rkhs ) , and is called the maximum mean discrepancy ( mmd ) . we present two distribution free tests based on large deviation bounds for the mmd , and a third test based on the asymptotic distribution of this statistic . the mmd can be computed in quadratic time , although efficient linear time approximations are available . our statistic is an instance of an integral probability metric , and various classical metrics on distributions are obtained when alternative function classes are used in place of an rkhs . we apply our two-sample tests to a variety of problems , including attribute matching for databases using the hungarian marriage method , where they perform strongly . excellent performance is also obtained when comparing distributions over graphs , for which these are the first such tests . story_separator_special_tag given samples from distributions p and q , a two-sample test determines whether to reject the null hypothesis that p = q , based on the value of a test statistic measuring the distance between the samples . one choice of test statistic is the maximum mean discrepancy ( mmd ) , which is a distance between embeddings of the probability distributions in a reproducing kernel hilbert space . the kernel used in obtaining these embeddings is critical in ensuring the test has high power , and correctly distinguishes unlike distributions with high probability . a means of parameter selection for the two-sample test based on the mmd is proposed . for a given test level ( an upper bound on the probability of making a type i error ) , the kernel is chosen so as to maximize the test power , and minimize the probability of making a type ii error . the test statistic , test threshold , and optimization over the kernel parameters are obtained with cost linear in the sample size . these properties make the kernel selection and test procedures suited to data streams , where the observations cannot all be stored in story_separator_special_tag we propose a new , nonparametric approach to learning and representing transition dynamics in markov decision processes ( mdps ) , which can be combined easily with dynamic programming methods for policy optimisation and value estimation . this approach makes use of a recently developed representation of conditional distributions as embeddings in a reproducing kernel hilbert space ( rkhs ) .
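correspondingly , a sketch of the unbiased quadratic-time estimate of the squared maximum mean discrepancy from the two-sample abstracts above ; in practice the null threshold is usually calibrated by permuting the pooled sample , which is omitted here :

```python
import numpy as np

def mmd2_unbiased(X, Y, sigma=1.0):
    # unbiased u-statistic estimate of MMD^2 with a gaussian kernel:
    # diagonal (self-similarity) terms are excluded from Kxx and Kyy
    def k(A, B):
        sq = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2 * A @ B.T
        return np.exp(-sq / (2 * sigma**2))
    m, n = len(X), len(Y)
    Kxx, Kyy, Kxy = k(X, X), k(Y, Y), k(X, Y)
    term_x = (Kxx.sum() - np.trace(Kxx)) / (m * (m - 1))
    term_y = (Kyy.sum() - np.trace(Kyy)) / (n * (n - 1))
    return term_x + term_y - 2 * Kxy.mean()
```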
such representations bypass the need for estimating transition probabilities or densities , and apply to any domain on which kernels can be defined . this avoids the need to calculate intractable integrals , since expectations are represented as rkhs inner products whose computation has linear complexity in the number of points used to represent the embedding . we provide guarantees for the proposed applications in mdps : in the context of a value iteration algorithm , we prove convergence to either the optimal policy , or to the closest projection of the optimal policy in our model class ( an rkhs ) , under reasonable assumptions . in experiments , we investigate a learning task in a typical classical control setting ( the under-actuated pendulum ) , and on a navigation problem where only images from a sensor are story_separator_special_tag we demonstrate an equivalence between reproducing kernel hilbert space ( rkhs ) embeddings of conditional distributions and vector-valued regressors . this connection introduces a natural regularized loss function which the rkhs embeddings minimise , providing an intuitive understanding of the embeddings and a justification for their use . furthermore , the equivalence allows the application of vector-valued regression methods and results to the problem of learning conditional distributions . using this link we derive a sparse version of the embedding by considering alternative formulations . further , by applying convergence results for vector-valued regression to the embedding problem we derive minimax convergence rates which are $O(\log(n)/n)$ - compared to current state of the art rates of $O(n^{-1/4})$ - and are valid under milder and more intuitive assumptions . these minimax upper rates coincide with lower rates up to a logarithmic factor , showing that the embedding method achieves nearly optimal rates . we study our sparse embedding algorithm in a reinforcement learning task where the algorithm shows significant improvement in sparsity over an incomplete cholesky decomposition . story_separator_special_tag we develop a generic approach to form smooth versions of basic mathematical operations like multiplication , composition , change of measure , and conditional expectation , among others . operations which result in functions outside the reproducing kernel hilbert space ( such as the product of two rkhs functions ) are approximated via a natural cost function , such that the solution is guaranteed to be in the targeted rkhs . this approximation problem is reduced to a regression problem using an adjoint trick , and solved in a vector-valued rkhs , consisting of continuous , linear , smooth operators which map from an input , real-valued rkhs to the desired target rkhs . important constraints , such as an almost everywhere positive density , can be enforced or approximated naturally in this framework , using convex constraints on the operators . finally , smooth operators can be composed to accomplish more complex machine learning tasks , such as the sum rule and kernelized approximate bayesian inference , where state-of-the-art convergence rates are obtained . story_separator_special_tag we are interested in learning causal relationships between pairs of random variables , purely from observational data . to effectively address this task , the state-of-the-art relies on strong assumptions on the mechanisms mapping causes to effects , such as invertibility or the existence of additive noise , which only hold in limited situations .
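the vector-valued-regression view described above yields the usual ridge-style estimate of a conditional expectation ; a sketch under an assumed gaussian kernel , with illustrative parameter names :

```python
import numpy as np

def conditional_expectation(Xtr, Ytr, f, x, lam=1e-3, sigma=1.0):
    # conditional mean embedding as ridge regression:
    # E[f(Y) | X = x] ~= f(Ytr)^T (Kx + n*lam*I)^{-1} kx(x),
    # where f maps the (n, p) training outputs to an (n,) vector f(y_i)
    def k(A, B):
        sq = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2 * A @ B.T
        return np.exp(-sq / (2 * sigma**2))
    n = len(Xtr)
    beta = np.linalg.solve(k(Xtr, Xtr) + n * lam * np.eye(n),
                           k(Xtr, x[None, :])).ravel()
    return beta @ f(Ytr)
```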
on the contrary , this short paper proposes to learn how to perform causal inference directly from data , without the need of feature engineering . in particular , we pose causality as a kernel mean embedding classification problem , where inputs are samples from arbitrary probability distributions on pairs of random variables , and labels are types of causal relationships . we validate the performance of our method on synthetic and real-world data against the state-of-the-art . moreover , we submitted our algorithm to chalearn 's `` fast causation coefficient challenge '' competition , with which we won the fastest code prize and ranked third in the overall leaderboard . story_separator_special_tag variable and feature selection have become the focus of much research in areas of application for which datasets with tens or hundreds of thousands of variables are available . these areas include text processing of internet documents , gene expression array analysis , and combinatorial chemistry . the objective of variable selection is three-fold : improving the prediction performance of the predictors , providing faster and more cost-effective predictors , and providing a better understanding of the underlying process that generated the data . the contributions of this special issue cover a wide range of aspects of such problems : providing a better definition of the objective function , feature construction , feature ranking , multivariate feature selection , efficient search methods , and feature validity assessment methods . story_separator_special_tag we test the notion that many microstructures have an underlying stationary probability distribution . the stationary probability distribution is ubiquitous : we know that different windows taken from a polycrystalline microstructure are generally 'statistically similar ' . to enable computation of such a probability distribution , microstructures are represented in the form of undirected probabilistic graphs called markov random fields ( mrfs ) . in the model , pixels take up integer or vector states and interact with multiple neighbors over a window . using this lattice structure , algorithms are developed to sample the conditional probability density for the state of each pixel given the known states of its neighboring pixels . the sampling is performed using reference experimental images . 2d microstructures are artificially synthesized using the sampled probabilities . statistical features such as grain size distribution and autocorrelation functions closely match with those of the experimental images . the mechanical properties of the synthesized microstructures were computed using the finite element method and were also found to match the experimental values . story_separator_special_tag we propose to investigate test statistics for testing homogeneity based on kernel fisher discriminant analysis . asymptotic null distributions under the null hypothesis are derived , and consistency against fixed alternatives is assessed . finally , experimental evidence of the performance of the proposed approach on both artificial and real datasets is provided . story_separator_special_tag we introduce a kernel-based method for change-point analysis within a sequence of temporal observations .
change-point analysis of an unlabelled sample of observations consists in , first , testing whether a change in the distribution occurs within the sample , and second , if a change occurs , estimating the change-point instant after which the distribution of the observations switches from one distribution to another different distribution . we propose a test statistic based upon the maximum kernel fisher discriminant ratio as a measure of homogeneity between segments . we derive its limiting distribution under the null hypothesis ( no change occurs ) , and establish the consistency under the alternative hypothesis ( a change occurs ) . this allows us to build a statistical hypothesis testing procedure for testing the presence of a change-point , with a prescribed false-alarm probability and detection probability tending to one in the large-sample setting . if a change actually occurs , the test statistic also yields an estimator of the change-point location . promising experimental results in temporal segmentation of mental tasks from bci data and pop song indexation are presented . story_separator_special_tag we introduce a regularized kernel-based rule for unsupervised change detection based on a simpler version of the recently proposed kernel fisher discriminant ratio . compared to other kernel-based change detectors found in the literature , the proposed test statistic is easier to compute and has a known asymptotic distribution which can effectively be used to set the false alarm rate a priori . this technique is applied for segmenting tracks from tv shows , both for segmentation into semantically homogeneous sections ( applause , movie , music , etc . ) and for speaker diarization within the speech sections . on these tasks , the proposed approach outperforms other kernel-based tests and is competitive with a standard hmm-based supervised alternative . story_separator_special_tag we establish a link between fourier optics and a recent construction from the machine learning community termed the kernel mean map . using the fraunhofer approximation , it identifies the kernel with the squared fourier transform of the aperture . this allows us to use results about the invertibility of the kernel mean map to provide a statement about the invertibility of fraunhofer diffraction , showing that imaging processes with arbitrarily small apertures can in principle be invertible , i.e. , do not lose information , provided the objects to be imaged satisfy a generic condition . a real world experiment shows that we can super-resolve beyond the rayleigh limit . story_separator_special_tag we introduce a new method of constructing kernels on sets whose elements are discrete structures like strings , trees and graphs . the method can be applied iteratively to build a kernel on an infinite set from kernels involving generators of the set . the family of kernels generated generalizes the family of radial basis kernels . it can also be used to define kernels in the form of joint gibbs probability distributions . kernels can be built from hidden markov random fields , generalized regular expressions , pair-hmms , or anova decompositions . uses of the method lead to open problems involving the theory of infinitely divisible positive definite functions . fundamentals of this theory and the theory of reproducing kernel hilbert spaces are reviewed and applied in establishing the validity of the method .
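a simplified change-point scan in the spirit of the kernel change-point abstracts above , with the squared mmd standing in for the kernel fisher discriminant ratio those papers actually use ; the segment-length floor and kernel width are illustrative :

```python
import numpy as np

def mmd2_biased(X, Y, sigma=1.0):
    # biased (v-statistic) estimate of MMD^2 between two segments
    def k(A, B):
        sq = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2 * A @ B.T
        return np.exp(-sq / (2 * sigma**2))
    return k(X, X).mean() + k(Y, Y).mean() - 2 * k(X, Y).mean()

def scan_change_point(Z, min_seg=10, sigma=1.0):
    # scan candidate split points; the arg-max of the segment-homogeneity
    # statistic is the change-point estimate
    scores = {t: mmd2_biased(Z[:t], Z[t:], sigma)
              for t in range(min_seg, len(Z) - min_seg)}
    return max(scores, key=scores.get), scores
```

calibrating a false-alarm threshold for the scan statistic is the part the papers above treat carefully ( via limiting distributions ) and this sketch leaves out .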
story_separator_special_tag this paper gives a survey of results in the mathematical literature on positive definite kernels and their associated structures . we concentrate on properties which seem potentially relevant for machine learning and try to clarify some results that have been misused in the literature . moreover we consider different lines of generalizations of positive definite kernels . namely we deal with operator-valued kernels and present the general framework of hilbertian subspaces of schwartz which we use to introduce kernels which are distributions . finally indefinite kernels and their associated reproducing kernel spaces are considered . story_separator_special_tag we investigate the problem of defining hilbertian metrics and , respectively , positive definite kernels on probability measures , continuing the work in [ 5 ] . this type of kernel has shown very good results in text classification and has a wide range of possible applications . in this paper we extend the two-parameter family of hilbertian metrics of topsoe such that it now includes all commonly used hilbertian metrics on probability measures . this allows us to do model selection among these metrics in an elegant and unified way . second we investigate further our approach to incorporate similarity information of the probability space into the kernel . the analysis provides a better understanding of these kernels and gives in some cases a more efficient way to compute them . finally we compare all proposed kernels in two text and two image classification problems . story_separator_special_tag consider samples from continuous distributions $f(x)$ and $f(x - \theta)$ . we may test the hypothesis $\theta = 0$ by using the two-sample wilcoxon test . we show in section 1 that its asymptotic pitman efficiency , relative to the t-test , never falls below 0.864 . this result also holds for the kruskal-wallis test compared with the f-test , and for testing the location parameter of a single symmetric distribution . story_separator_special_tag let $x_1, \ldots, x_n$ be n independent random vectors , $x_\nu = (x_\nu^{(1)}, \ldots, x_\nu^{(r)})$ , and $\phi(x_1, \ldots, x_m)$ a function of $m (\le n)$ vectors . a statistic of the form $u = [n(n-1) \cdots (n-m+1)]^{-1} \sum \phi(x_{\alpha_1}, \ldots, x_{\alpha_m})$ , where the sum is extended over all permutations $(\alpha_1, \ldots, \alpha_m)$ of different integers , $1 \le \alpha_i \le n$ , is called a u-statistic . if $x_1, \ldots, x_n$ have the same ( cumulative ) distribution function ( d.f . ) $f(x)$ , u is an unbiased estimate of the population characteristic $\theta(f) = \int \cdots \int \phi(x_1, \ldots, x_m) \, df(x_1) \cdots df(x_m)$ . $\theta(f)$ is called a regular functional of the d.f . $f(x)$ . certain optimal properties of u-statistics as unbiased estimates of regular functionals have been established by halmos [ 9 ] ( cf . section 4 ) story_separator_special_tag we review machine learning methods employing positive definite kernels . these methods formulate learning and estimation problems in a reproducing kernel hilbert space ( rkhs ) of functions defined on the data domain , expanded in terms of a kernel . working in linear spaces of functions has the benefit of facilitating the construction and analysis of learning algorithms while at the same time allowing large classes of functions . the latter include nonlinear functions as well as functions defined on nonvectorial data . we cover a wide range of methods , ranging from binary classifiers to sophisticated methods for estimation with structured data . story_separator_special_tag hidden markov models ( hmms ) are one of the most fundamental and widely used statistical tools for modeling discrete time series .
in general , learning hmms from data is computationally hard ( under cryptographic assumptions ) , and practitioners typically resort to search heuristics which suffer from the usual local optima issues . we prove that under a natural separation condition ( bounds on the smallest singular value of the hmm parameters ) , there is an efficient and provably correct algorithm for learning hmms . the sample complexity of the algorithm does not explicitly depend on the number of distinct ( discrete ) observations ; it implicitly depends on this quantity through spectral properties of the underlying hmm . this makes the algorithm particularly applicable to settings with a large number of observations , such as those in natural language processing where the space of observation is sometimes the words in a language . the algorithm is also simple , employing only a singular value decomposition and matrix multiplications . story_separator_special_tag we consider the scenario where training and test data are drawn from different distributions , commonly referred to as sample selection bias . most algorithms for this setting try to first recover sampling distributions and then make appropriate corrections based on the distribution estimate . we present a nonparametric method which directly produces resampling weights without distribution estimation . our method works by matching distributions between training and testing sets in feature space . experimental results demonstrate that our method works well in practice . story_separator_special_tag this paper contains a new approach toward a theory of robust estimation ; it treats in detail the asymptotic theory of estimating a location parameter for contaminated normal distributions , and exhibits estimators , intermediaries between sample mean and sample median , that are asymptotically most robust ( in a sense to be specified ) among all translation invariant estimators . for the general background , see tukey ( 1960 ) ( p. 448 ff . ) story_separator_special_tag herding and kernel herding are deterministic methods of choosing samples which summarise a probability distribution . a related task is choosing samples for estimating integrals using bayesian quadrature . we show that the criterion minimised when selecting samples in kernel herding is equivalent to the posterior variance in bayesian quadrature . we then show that sequential bayesian quadrature can be viewed as a weighted version of kernel herding which achieves performance superior to any other weighted herding method . we demonstrate empirically a rate of convergence faster than $O(1/n)$ . our results also imply an upper bound on the empirical error of the bayesian quadrature estimate . story_separator_special_tag one often wants to estimate statistical models where the probability density function is known only up to a multiplicative normalization constant . typically , one then has to resort to markov chain monte carlo methods , or approximations of the normalization constant . here , we propose that such models can be estimated by minimizing the expected squared distance between the gradient of the log-density given by the model and the gradient of the log-density of the observed data . while the estimation of the gradient of log-density function is , in principle , a very difficult non-parametric problem , we prove a surprising result that gives a simple formula for this objective function .
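the `` simple formula '' promised here is , in the usual notation ( standard statement of the score matching identity , not quoted from the abstract ) :

```latex
J(\theta) = \mathbb{E}_{x \sim p_{\mathrm{data}}}
  \sum_{i=1}^{d} \Bigl[ \partial_i \psi_i(x;\theta)
  + \tfrac{1}{2}\,\psi_i(x;\theta)^2 \Bigr] + \mathrm{const},
\qquad
\psi(x;\theta) = \nabla_x \log q(x;\theta)
```

here $q(\cdot\,;\theta)$ may be unnormalized , since the normalization constant is annihilated by the gradient of the log-density .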
the density function of the observed data does not appear in this formula , which simplifies to a sample average of a sum of some derivatives of the log-density given by the model . the validity of the method is demonstrated on multivariate gaussian and independent component analysis models , and by estimating an overcomplete filter set for natural image data . story_separator_special_tag cochlear implants ( cis ) are implantable medical devices that can restore the hearing sense of people suffering from profound hearing loss . the ci uses a set of electrode contacts placed inside the cochlea to stimulate the auditory nerve with current pulses . the exact location of these electrodes may be an important parameter to improve and predict the performance with these devices . currently the methods used in clinics to characterize the geometry of the cochlea as well as to estimate the electrode positions are manual , error-prone and time consuming . we propose a markov random field ( mrf ) model for ci electrode localization for cone beam computed tomography ( cbct ) data-sets . intensity and shape of electrodes are included as prior knowledge as well as distance and angles between contacts . mrf inference is based on slice sampling particle belief propagation and guided by several heuristics . a stochastic search finds the best maximum a posteriori estimation among sampled mrf realizations . we evaluate our algorithm on synthetic and real cbct data-sets and compare its performance with two state of the art algorithms . an increase of localization precision up to 31.5 % ( mean ) , or story_separator_special_tag generative probability models such as hidden markov models provide a principled way of treating missing information and dealing with variable length sequences . on the other hand , discriminative methods such as support vector machines enable us to construct flexible decision boundaries and often result in classification performance superior to that of the model based approaches . an ideal classifier should combine these two complementary approaches . in this paper , we develop a natural way of achieving this combination by deriving kernel functions for use in discriminative methods such as support vector machines from generative probability models . we provide a theoretical justification for this combination as well as demonstrate a substantial improvement in the classification performance in the context of dna and protein sequence analysis . story_separator_special_tag a widely used class of models for stochastic systems is hidden markov models . systems that can be modeled by hidden markov models are a proper subclass of linearly dependent processes , a class of s . story_separator_special_tag it has long been customary to measure the adequacy of an estimator by the smallness of its mean squared error . the least squares estimators were studied by gauss and by other authors later in the nineteenth century . a proof that the best unbiased estimator of a linear function of the means of a set of observed random variables is the least squares estimator was given by markov [ 12 ] , a modified version of whose proof is given by david and neyman [ 4 ] . a slightly more general theorem is given by aitken [ 1 ] . fisher [ 5 ] indicated that for large samples the maximum likelihood estimator approximately minimizes the mean squared error when compared with other reasonable estimators .
this paper will be concerned with optimum properties or failure of optimum properties of the natural estimator in certain special problems with the risk usually measured by the mean squared error or , in the case of several parameters , by a quadratic function of the estimators . we shall first mention some recent papers on this subject and then give some results , mostly unpublished , in greater detail . story_separator_special_tag we describe a method that infers whether statistical dependences between two observed variables x and y are due to a `` direct '' causal link or only due to a connecting causal path that contains an unobserved variable of low complexity , e.g. , a binary variable . this problem is motivated by statistical genetics . given a genetic marker that is correlated with a phenotype of interest , we want to detect whether this marker is causal or only correlates with a causal one . our method is based on the analysis of the location of the conditional distributions p ( y|x ) in the simplex of all distributions of y. we report encouraging results on semi-empirical data . story_separator_special_tag importance : prior neuroimaging studies have suggested that alterations in brain structure may be a consequence of cannabis use . siblings discordant for cannabis use offer an opportunity to use cross-sectional data to disentangle such causal hypotheses from shared effects of genetics and familial environment on brain structure and cannabis use . objectives : to determine whether cannabis use is associated with differences in brain structure in a large sample of twins/siblings and to examine sibling pairs discordant for cannabis use to separate potential causal and predispositional factors linking lifetime cannabis exposure to volumetric alterations . design , setting , and participants : cross-sectional diagnostic interview , behavioral , and neuroimaging data were collected from community sampling and established family registries from august 2012 to september 2014. this study included data from 483 participants ( 22-35 years old ) enrolled in the ongoing human connectome project , with 262 participants reporting cannabis exposure ( ie , ever used cannabis in their lifetime ) . main outcomes and measures : cannabis exposure was measured with the semistructured assessment for the genetics of alcoholism . whole-brain , hippocampus , amygdala , ventral striatum , and orbitofrontal cortex volumes were related to lifetime cannabis use ( ever story_separator_special_tag the advantages of discriminative learning algorithms and kernel machines are combined with generative modeling using a novel kernel between distributions . in the probability product kernel , data points in the input space are mapped to distributions over the sample space and a general inner product is then evaluated as the integral of the product of pairs of distributions . the kernel is straightforward to evaluate for all exponential family models such as multinomials and gaussians and yields interesting nonlinear kernels . furthermore , the kernel is computable in closed form for latent distributions such as mixture models , hidden markov models and linear dynamical systems . for intractable models , such as switching linear dynamical systems , structured mean-field approximations can be brought to bear on the kernel evaluation . for general distributions , even if an analytic expression for the kernel is not feasible , we show a straightforward sampling method to evaluate it .
thus , the kernel permits discriminative learning methods , including support vector machines , to exploit the properties , metrics and invariances of the generative models we infer from each datum . experiments are shown using multinomial models for text , hidden markov story_separator_special_tag we generalize traditional goals of clustering towards distinguishing components in a non-parametric mixture model . the clusters are not necessarily based on point locations , but on higher order criteria . this framework can be implemented by embedding probability distributions in a hilbert space . the corresponding clustering objective is very general and relates to a range of common clustering concepts . story_separator_special_tag we propose an efficient nonparametric strategy for learning a message operator in expectation propagation ( ep ) , which takes as input the set of incoming messages to a factor node , and produces an outgoing message as output . this learned operator replaces the multivariate integral required in classical ep , which may not have an analytic expression . we use kernel-based regression , which is trained on a set of probability distributions representing the incoming messages , and the associated outgoing messages . the kernel approach has two main advantages : first , it is fast , as it is implemented using a novel two-layer random feature representation of the input message distributions ; second , it has principled uncertainty estimates , and can be cheaply updated online , meaning it can request and incorporate new training data when it encounters inputs on which it is uncertain . in experiments , our approach is able to solve learning problems where a single message operator is required for multiple , substantially different data sets ( logistic regression for a variety of classification problems ) , where it is essential to accurately assess uncertainty and to efficiently and robustly update story_separator_special_tag two semimetrics on probability distributions are proposed , given as the sum of differences of expectations of analytic functions evaluated at spatial or frequency locations ( i.e. , features ) . the features are chosen so as to maximize the distinguishability of the distributions , by optimizing a lower bound on test power for a statistical test using these features . the result is a parsimonious and interpretable indication of how and where two distributions differ locally . an empirical estimate of the test power criterion converges with increasing sample size , ensuring the quality of the returned features . in real-world benchmarks on high-dimensional text and image data , linear-time tests using the proposed semimetrics achieve comparable performance to the state-of-the-art quadratic-time maximum mean discrepancy test , while returning human-interpretable features that explain the test results . story_separator_special_tag recent advances of kernel methods have yielded a framework for nonparametric statistical inference called rkhs embeddings , in which all probability distributions are represented as elements in a reproducing kernel hilbert space , namely kernel means . in this paper , we consider the recovery of the information of a distribution from an estimate of the kernel mean , when a gaussian kernel is used . to this end , we theoretically analyze the properties of a consistent estimator of a kernel mean , which is represented as a weighted sum of feature vectors .
first , we prove that the weighted average of a function in a besov space , whose weights and samples are given by the kernel mean estimator , converges to the expectation of the function . as corollaries , we show that the moments and the probability measures on intervals can be recovered from an estimate of the kernel mean . we also prove that a consistent estimator of the density of a distribution can be defined using a kernel mean estimator . this result confirms that we can in fact completely recover the information of distributions from rkhs embeddings . story_separator_special_tag this paper addresses the problem of filtering with a state-space model . standard approaches for filtering assume that a probabilistic model for observations ( i.e . the observation model ) is given explicitly or at least parametrically . we consider a setting where this assumption is not satisfied ; we assume that the knowledge of the observation model is only provided by examples of state-observation pairs . this setting is important and appears when state variables are defined as quantities that are very different from the observations . we propose kernel monte carlo filter , a novel filtering method that is focused on this setting . our approach is based on the framework of kernel mean embeddings , which enables nonparametric posterior inference using the state-observation examples . the proposed method represents state distributions as weighted samples , propagates these samples by sampling , estimates the state posteriors by kernel bayes ' rule , and resamples by kernel herding . in particular , the sampling and resampling procedures are novel in being expressed using kernel mean embeddings , so we theoretically analyze their behaviors . we reveal the following properties , which are similar to those of corresponding procedures in story_separator_special_tag kernel-based quadrature rules are becoming important in machine learning and statistics , as they achieve super-$\sqrt{n}$ convergence rates in numerical integration , and thus provide alternatives to monte carlo integration in challenging settings where integrands are expensive to evaluate or where integrands are high dimensional . these rules are based on the assumption that the integrand has a certain degree of smoothness , which is expressed as the requirement that the integrand belong to a certain reproducing kernel hilbert space ( rkhs ) . however , this assumption can be violated in practice ( e.g. , when the integrand is a black box function ) , and no general theory has been established for the convergence of kernel quadratures in such misspecified settings . our contribution is in proving that kernel quadratures can be consistent even when the integrand does not belong to the assumed rkhs , i.e. , when the integrand is less smooth than assumed . specifically , we derive convergence rates that depend on the ( unknown ) lesser smoothness of the integrand , where the degree of smoothness is expressed via powers of rkhss or via sobolev spaces . story_separator_special_tag a modification of a test for independence based on the empirical characteristic function is investigated . the initial test is not consistent in the general case . the modification makes the test always consistent and asymptotically distribution free . it is based on a special transformation of the data .
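a greedy kernel herding selection over a finite candidate pool , in the sense of the herding and quadrature abstracts above ; treating the pool and the precomputed kernel mean as given inputs is our simplification :

```python
import numpy as np

def kernel_herding(pool, mu_pool, n_pick, sigma=1.0):
    # pool    : (m, d) candidate locations
    # mu_pool : (m,) kernel mean mu(x) = E_{x'~p} k(x, x') at each candidate
    # greedy rule: x_{t+1} = argmax_x mu(x) - (1/(t+1)) * sum_s k(x, x_s)
    def k(A, B):
        sq = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2 * A @ B.T
        return np.exp(-sq / (2 * sigma**2))
    K = k(pool, pool)
    chosen = []
    for t in range(n_pick):
        penalty = K[:, chosen].sum(axis=1) / (t + 1) if chosen else 0.0
        chosen.append(int(np.argmax(mu_pool - penalty)))
    return pool[chosen]
```

the penalty term repels new samples from those already chosen , which is what makes the selected set summarize the distribution rather than cluster at its mode .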
story_separator_special_tag approximating non-linear kernels using feature maps has gained a lot of interest in recent years due to applications in reducing training and testing times of svm classifiers and other kernel based learning algorithms . we extend this line of work and present low distortion embeddings for dot product kernels into linear euclidean spaces . we base our results on a classical result in harmonic analysis characterizing all dot product kernels and use it to define randomized feature maps into explicit low dimensional euclidean spaces in which the native dot product provides an approximation to the dot product kernel with high confidence . story_separator_special_tag example-based explanations are widely used in the effort to improve the interpretability of highly complex distributions . however , prototypes alone are rarely sufficient to represent the gist of the complexity . in order for users to construct better mental models and understand complex data distributions , we also need criticism to explain what is not captured by prototypes . motivated by the bayesian model criticism framework , we develop mmd-critic , which efficiently learns prototypes and criticism , designed to aid human interpretability . a human subject pilot study shows that mmd-critic selects prototypes and criticism that are useful to facilitate human understanding and reasoning . we also evaluate the prototypes selected by mmd-critic via a nearest prototype classifier , showing competitive performance compared to baselines . story_separator_special_tag we propose a method for nonparametric density estimation that exhibits robustness to contamination of the training sample . this method achieves robustness by combining a traditional kernel density estimator ( kde ) with ideas from classical $ m $ -estimation . we interpret the kde based on a radial , positive semi-definite kernel as a sample mean in the associated reproducing kernel hilbert space . since the sample mean is sensitive to outliers , we estimate it robustly via $ m $ -estimation , yielding a robust kernel density estimator ( rkde ) . an rkde can be computed efficiently via a kernelized iteratively re-weighted least squares ( irwls ) algorithm . necessary and sufficient conditions are given for kernelized irwls to converge to the global minimizer of the $ m $ -estimator objective function . the robustness of the rkde is demonstrated with a representer theorem , the influence function , and experimental results for density estimation and anomaly detection . story_separator_special_tag in recent years , kernel principal component analysis ( kpca ) has been suggested for various image processing tasks requiring an image model such as , e.g. , denoising or compression . the original form of kpca , however , can only be applied to strongly restricted image classes due to the limited number of training examples that can be processed . we therefore propose a new iterative method for performing kpca , the kernel hebbian algorithm , which iteratively estimates the kernel principal components with only linear order memory complexity . in our experiments , we compute models for complex image classes such as faces and natural images which require a large number of training examples . the resulting image models are tested in single-frame super-resolution and denoising applications .
the kpca model is not specifically tailored to these tasks ; in fact , the same model can be used in super-resolution with variable input resolution , or denoising with unknown noise characteristics . in spite of this , both super-resolution and denoising performance are comparable to existing methods . story_separator_special_tag in various application domains , including image recognition , it is natural to represent each example as a set of vectors . with a base kernel we can implicitly map these vectors to a hilbert space and fit a gaussian distribution to the whole set using kernel pca . we define our kernel between examples as bhattacharyya 's measure of affinity between such gaussians . the resulting kernel is computable in closed form and enjoys many favorable properties , including graceful behavior under transformations , potentially justifying the vector set representation even in cases when more conventional representations also exist . story_separator_special_tag many nonparametric regressors were recently shown to converge at rates that depend only on the intrinsic dimension of data . these regressors thus escape the curse of dimension when high-dimensional data has low intrinsic dimension ( e.g . a manifold ) . we show that k-nn regression is also adaptive to intrinsic dimension . in particular our rates are local to a query x and depend only on the way masses of balls centered at x vary with radius . furthermore , we show a simple way to choose k = k ( x ) locally at any x so as to nearly achieve the minimax rate at x in terms of the unknown intrinsic dimension in the vicinity of x. we also establish that the minimax rate does not depend on a particular choice of metric space or distribution , but rather that this minimax rate holds for any metric space and doubling measure . story_separator_special_tag 1. review of basic concepts 2. an introduction to regression and correlation analysis 3. statistical inferences in the simple regression model 4. multiple regression : using two or more predictor variables 5. residual analysis and model specification 6. using qualitative and limited dependent variables 7. heteroscedasticity 8. autocorrelation 9. non-linear regression and the selection of the proper functional form 10. simultaneous equations : two stage least squares 11. forecasting with time series data and distributed lag models story_separator_special_tag determinantal point processes ( dpps ) are elegant probabilistic models of repulsion that arise in quantum physics and random matrix theory . in contrast to traditional structured models like markov random fields , which become intractable and hard to approximate in the presence of negative correlations , dpps offer efficient and exact algorithms for sampling , marginalization , conditioning , and other inference tasks . while they have been studied extensively by mathematicians , giving rise to a deep and beautiful theory , dpps are relatively new in machine learning . determinantal point processes for machine learning provides a comprehensible introduction to dpps , focusing on the intuitions , algorithms , and extensions that are most relevant to the machine learning community , and shows how dpps can be applied to real-world applications like finding diverse sets of high-quality search results , building informative summaries by selecting diverse sentences from documents , modeling non-overlapping human poses in images or video , and automatically building timelines of important news stories .
it presents the general mathematical background to dpps along with a range of modeling extensions , efficient algorithms , and theoretical results that aim to enable practical modeling and learning story_separator_special_tag in this paper , we address the problem of finding the pre-image of a feature vector in the feature space induced by a kernel . this is of central importance in some kernel applications , such as using kernel principal component analysis ( pca ) for image denoising . unlike the traditional method , which relies on nonlinear optimization , our proposed method directly finds the location of the pre-image based on distance constraints in the feature space . it is noniterative , involves only linear algebra and does not suffer from numerical instability or local minimum problems . evaluations on performing kernel pca and kernel clustering on the usps data set show much improved performance . story_separator_special_tag recently , the frank-wolfe optimization algorithm was suggested as a procedure to obtain adaptive quadrature rules for integrals of functions in a reproducing kernel hilbert space ( rkhs ) with a potentially faster rate of convergence than monte carlo integration ( and `` kernel herding '' was shown to be a special case of this procedure ) . in this paper , we propose to replace the random sampling step in a particle filter by frank-wolfe optimization . by optimizing the position of the particles , we can obtain better accuracy than random or quasi-monte carlo sampling . in applications where the evaluation of the emission probabilities is expensive ( such as in robot localization ) , the additional computational cost to generate the particles through optimization can be justified . experiments on standard synthetic examples as well as on a robot localization task indeed indicate an improvement of accuracy over random and quasi-monte carlo sampling . story_separator_special_tag despite their successes , what makes kernel methods difficult to use in many large scale problems is the fact that computing the decision function is typically expensive , especially at prediction time . in this paper , we overcome this difficulty by proposing fastfood , an approximation that accelerates such computation significantly . key to fastfood is the observation that hadamard matrices when combined with diagonal gaussian matrices exhibit properties similar to dense gaussian random matrices . yet unlike the latter , hadamard and diagonal matrices are inexpensive to multiply and store . these two matrices can be used in lieu of gaussian matrices in random kitchen sinks ( rahimi & recht , 2007 ) and thereby speed up the computation for a large range of kernel functions . specifically , fastfood requires $O(n \log d)$ time and $O(n)$ storage to compute n non-linear basis functions in d dimensions , a significant improvement from $O(nd)$ computation and storage , without sacrificing accuracy . we prove that the approximation is unbiased and has low variance . extensive experiments show that we achieve similar accuracy to full kernel expansions and random kitchen sinks story_separator_special_tag we consider the problem of learning deep generative models from data . we formulate a method that generates an independent sample via a single feedforward pass through a multilayer perceptron , as in the recently proposed generative adversarial networks ( goodfellow et al. , 2014 ) .
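a sketch of the random kitchen sinks feature map that fastfood accelerates , as described above ; the dense gaussian matrix below is exactly the object fastfood replaces with hadamard and diagonal factors :

```python
import numpy as np

def random_fourier_features(X, D=256, sigma=1.0, seed=0):
    # random features for the gaussian kernel:
    # z(x)^T z(y) ~= exp(-||x - y||^2 / (2 sigma^2))
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    W = rng.normal(0.0, 1.0 / sigma, size=(d, D))   # dense gaussian matrix
    b = rng.uniform(0.0, 2 * np.pi, size=D)
    return np.sqrt(2.0 / D) * np.cos(X @ W + b)
```

replacing the $O(nd)$ product `X @ W` with fast hadamard transforms is what yields fastfood 's $O(n \log d)$ cost .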
training a generative adversarial network , however , requires careful optimization of a difficult minimax program . instead , we utilize a technique from statistical hypothesis testing known as maximum mean discrepancy ( mmd ) , which leads to a simple objective that can be interpreted as matching all orders of statistics between a dataset and samples from the model , and can be trained by backpropagation . we further boost the performance of this approach by combining our generative network with an auto-encoder network , using mmd to learn to generate codes that can then be decoded to produce samples . we show that the combination of these techniques yields excellent generative models compared to baseline approaches as measured on mnist and the toronto face database . story_separator_special_tag because $k(\cdot, x)$ is in the stein class of $p$ for any $x$ , we can show that $\nabla_x k(\cdot, x)$ is also in the stein class , since $\int \nabla_{x'} ( p(x) \nabla_x k(x, x') ) \, dx = \nabla_{x'} \int \nabla_x ( p(x) k(x, x') ) \, dx = 0$ , and hence $v(\cdot, x')$ is also in the stein class ; applying lemma 2.3 on $v(\cdot, x')$ with fixed $x'$ gives $s(p, q) = \mathbb{E}_{x, x' \sim p} [ ( s_q(x) - s_p(x) )^\top v(x, x') ] = \mathbb{E}_{x, x' \sim p} [ s_q(x)^\top v(x, x') + \operatorname{trace} ( \nabla_x v(x, x') ) ]$ . the result then follows by noting that $\nabla_x v(x, x') = \nabla_x k(x, x') \, s_q(x')^\top + \nabla_x \nabla_{x'} k(x, x')$ story_separator_special_tag we propose an exploratory approach to statistical model criticism using maximum mean discrepancy ( mmd ) two sample tests . typical approaches to model criticism require a practitioner to select a statistic by which to measure discrepancies between data and a statistical model . mmd two sample tests are instead constructed as an analytic maximisation over a large space of possible statistics and therefore automatically select the statistic which most shows any discrepancy . we demonstrate on synthetic data that the selected statistic , called the witness function , can be used to identify where a statistical model most misrepresents the data it was trained on . we then apply the procedure to real data where the models being assessed are restricted boltzmann machines , deep belief networks and gaussian process regression and demonstrate the ways in which these models fail to capture the properties of the data they are trained on . story_separator_special_tag recent studies reveal that a deep neural network can learn transferable features which generalize well to novel tasks for domain adaptation . however , as deep features eventually transition from general to specific along the network , the feature transferability drops significantly in higher layers with increasing domain discrepancy . hence , it is important to formally reduce the dataset bias and enhance the transferability in task-specific layers . in this paper , we propose a new deep adaptation network ( dan ) architecture , which generalizes deep convolutional neural network to the domain adaptation scenario . in dan , hidden representations of all task-specific layers are embedded in a reproducing kernel hilbert space where the mean embeddings of different domain distributions can be explicitly matched . the domain discrepancy is further reduced using an optimal multi-kernel selection method for mean embedding matching . dan can learn transferable features with statistical guarantees , and can scale linearly by unbiased estimate of kernel embedding .
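the reconstructed identity above underlies the kernelized stein discrepancy ; a u-statistic estimate for a gaussian kernel might look as follows ( the closed-form kernel derivatives are standard calculus , while the function names and bandwidth are ours ) :

```python
import numpy as np

def ksd_u_statistic(X, score_q, h=1.0):
    # u-statistic estimate of the kernelized stein discrepancy S(p, q):
    # X is drawn from p, score_q(x) = grad_x log q(x), gaussian kernel width h
    n, d = X.shape
    S = np.apply_along_axis(score_q, 1, X)        # (n, d) matrix of scores
    diff = X[:, None, :] - X[None, :, :]          # pairwise x - x'
    sq = (diff ** 2).sum(-1)
    K = np.exp(-sq / (2 * h * h))
    grad_y = diff / (h * h) * K[..., None]        # grad of k wrt second argument
    term1 = (S @ S.T) * K                          # s_q(x)^T s_q(x') k(x, x')
    term2 = np.einsum('id,ijd->ij', S, grad_y)     # s_q(x)^T grad_{x'} k
    term3 = np.einsum('ijd,jd->ij', -grad_y, S)    # grad_x k^T s_q(x')
    term4 = K * (d / (h * h) - sq / h ** 4)        # trace(grad_x grad_{x'} k)
    U = term1 + term2 + term3 + term4
    return (U.sum() - np.trace(U)) / (n * (n - 1))
```

the same pairwise kernel machinery , applied between model samples and data rather than within one sample , gives the mmd training objective of the generative moment matching approach above .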
extensive empirical evidence shows that the proposed architecture yields state-of-the-art image classification error rates on standard domain adaptation benchmarks . story_separator_special_tag the recent success of deep neural networks relies on massive amounts of labeled data . for a target task where labeled data is unavailable , domain adaptation can transfer a learner from a different source domain . in this paper , we propose a new approach to domain adaptation in deep networks that can jointly learn adaptive classifiers and transferable features from labeled data in the source domain and unlabeled data in the target domain . we relax a shared-classifier assumption made by previous methods and assume that the source classifier and target classifier differ by a residual function . we enable classifier adaptation by plugging several layers into the deep network to explicitly learn the residual function with reference to the target classifier . we fuse features of multiple layers with tensor product and embed them into reproducing kernel hilbert spaces to match distributions for feature adaptation . the adaptation can be achieved in most feed-forward models by extending them with new residual layers and loss functions , which can be trained efficiently via back-propagation . empirical evidence shows that the new approach outperforms state of the art methods on standard domain adaptation benchmarks . story_separator_special_tag we pose causal inference as the problem of learning to classify probability distributions . in particular , we assume access to a collection $ \\ { ( s_i , l_i ) \\ } _ { i=1 } ^n $ , where each $ s_i $ is a sample drawn from the probability distribution of $ x_i \\times y_i $ , and $ l_i $ is a binary label indicating whether `` $ x_i \\rightarrow y_i $ '' or `` $ x_i \\leftarrow y_i $ '' . given these data , we build a causal inference rule in two steps . first , we featurize each $ s_i $ using the kernel mean embedding associated with some characteristic kernel . second , we train a binary classifier on such embeddings to distinguish between causal directions . we present generalization bounds showing the statistical consistency and learning rates of the proposed approach , and provide a simple implementation that achieves state-of-the-art cause-effect inference . furthermore , we extend our ideas to infer causal relationships between more than two variables . story_separator_special_tag consider the problem of estimating simultaneously the means $ \\theta_i $ of independent normal random variables $ x_i $ with unit variance . under the weighted quadratic loss $ l ( \\theta , a ) = \\sum_i\\lambda_i ( \\theta_i - a_i ) ^2 $ with positive weights it is well known that : ( 1 ) an estimator which is admissible under one set of weights is admissible under all weights . ( 2 ) estimating individual coordinates by proper bayes estimators results in an admissible estimator . ( 3 ) estimating individual coordinates by admissible estimators may result in an inadmissible estimator , when the number of coordinates is large enough . a dominating estimator must link observations in the sense that at least one $ \\theta_i $ is estimated using observations other than $ x_i $ . we consider an infinite model with a countable number of coordinates . in the infinite model admissibility does depend on the weights used and by linking coordinates it is possible to dominate even estimators which are proper bayes for individual coordinates .
specifically , we show that when $ \\theta_i $ are square summable , the estimator $ \\delta_i ( x story_separator_special_tag positive definite kernels on probability measures have been recently applied to classification problems involving text , images , and other types of structured data . some of these kernels are related to classic information theoretic quantities , such as ( shannon 's ) mutual information and the jensen-shannon ( js ) divergence . meanwhile , there have been recent advances in nonextensive generalizations of shannon 's information theory . this paper bridges these two trends by introducing nonextensive information theoretic kernels on probability measures , based on new js-type divergences . these new divergences result from extending the two building blocks of the classical js divergence : convexity and shannon 's entropy . the notion of convexity is extended to the wider concept of q-convexity , for which we prove a jensen q-inequality . based on this inequality , we introduce jensen-tsallis ( jt ) q-differences , a nonextensive generalization of the js divergence , and define a k-th order jt q-difference between stochastic processes . we then define a new family of nonextensive mutual information kernels , which allow weights to be assigned to their arguments , and which includes the boolean , js , and linear kernels story_separator_special_tag we present a novel estimation algorithm for filtering and regression with a number of advantages over existing methods . the algorithm has wide application in robotics as no assumptions are made about the underlying distributions , it can represent non-gaussian multi-modal posteriors , and learn arbitrary non-linear models from noisy data . our method is a generalisation of the kernel bayes ' rule that produces multi-modal posterior estimates represented as gaussian mixtures . the algorithm learns non-linear state transition and observation models from data and represents all distributions internally as elements in a reproducing kernel hilbert space . inference occurs in the hilbert space and can be performed recursively . when an estimate of the posterior distribution is required , we apply a quadratic programming pre-image method to determine the gaussian mixture components of the posterior representation . we demonstrate our algorithm with two filtering experiments and one regression experiment ; a multi-modal tracking simulation , a real tracking problem involving a miniature slot-car with an attached inertial measurement unit , and a regression problem of estimating the velocity field of a set of pedestrian paths for robot path-planning .
our algorithm compares favourably with the gaussian process in the story_separator_special_tag 1 basic concepts.- 1.1 preliminaries.- 1.2 norms.- 1.3 first properties of normed spaces.- 1.4 linear operators between normed spaces.- 1.5 baire category.- 1.6 three fundamental theorems.- 1.7 quotient spaces.- 1.8 direct sums.- 1.9 the hahn-banach extension theorems.- 1.10 dual spaces.- 1.11 the second dual and reflexivity.- 1.12 separability.- 1.13 characterizations of reflexivity.- 2 the weak and weak* topologies.- 2.1 topology and nets.- 2.2 vector topologies.- 2.3 metrizable vector topologies.- 2.4 topologies induced by families of functions.- 2.5 the weak topology.- 2.6 the weak* topology.- 2.7 the bounded weak* topology.- 2.8 weak compactness.- 2.9 james 's weak compactness theorem.- 2.10 extreme points.- 2.11 support points and subreflexivity.- 3 linear operators.- 3.1 adjoint operators.- 3.2 projections and complemented subspaces.- 3.3 banach algebras and spectra.- 3.4 compact operators.- 3.5 weakly compact operators.- 4 schauder bases.- 4.1 first properties of schauder bases.- 4.2 unconditional bases.- 4.3 equivalent bases.- 4.4 bases and duality.- 4.5 james 's space j.- 5 rotundity and smoothness.- 5.1 rotundity.- 5.2 uniform rotundity.- 5.3 generalizations of uniform rotundity.- 5.4 smoothness.- 5.5 uniform smoothness.- 5.6 generalizations of uniform smoothness.- a prerequisites.- b metric spaces.- d ultranets.- references.- list of symbols . story_separator_special_tag we introduce two kernels that extend the mean map , which embeds probability measures in hilbert spaces . the generative mean map kernel ( gmmk ) is a smooth similarity measure between probabilistic models . the latent mean map kernel ( lmmk ) generalizes the non-iid formulation of hilbert space embeddings of empirical distributions in order to incorporate latent variable models . when comparing certain classes of distributions , the gmmk exhibits beneficial regularization and generalization properties not shown for previous generative kernels . we present experiments comparing support vector machine performance using the gmmk and lmmk between hidden markov models to the performance of other methods on discrete and continuous observation sequence data . the results suggest that , in many cases , the gmmk has generalization error competitive with or better than other methods . story_separator_special_tag this memoir is concerned with continuous symmetric functions $ k ( s , t ) $ for which the double integral $ \\int_a^b \\int_a^b k ( s , t ) \\phi ( s ) \\phi ( t ) \\ , ds \\ , dt $ is either not negative , or not positive , for each function $ \\phi ( s ) $ which is continuous in the interval $ ( a , b ) $ ; in the former case the function $ k ( s , t ) $ is said to be of positive type , while in the latter it is said to be of negative type . the importance of these classes of functions in the theory of integral equations will be gathered from part i. the greater portion of the second part is devoted to a proof of the theorem that the necessary and sufficient condition , under which a continuous symmetric function , $ k ( s , t ) $ , is of positive type , is that the functions $ k ( s_1 , s_1 ) $ , $ k ( s_1 , s_2 ; s_1 , s_2 ) $ , $ \\ldots $ , $ k ( s_1 , s_2 , \\ldots , s_n ; s_1 , s_2 , \\ldots , s_n ) $ story_separator_special_tag in this letter , we provide a study of learning in a hilbert space of vector-valued functions .
we motivate the need for extending learning theory of scalar-valued functions by practical considerations and establish some basic results for learning vector-valued functions that should prove useful in applications . specifically , we allow an output space y to be a hilbert space , and we consider a reproducing kernel hilbert space of functions whose values lie in y. in this setting , we derive the form of the minimal norm interpolant to a finite set of data and apply it to study some regularization functionals that are important in learning theory . we consider specific examples of such functionals corresponding to multiple-output regularization networks and support vector machines , for both regression and classification . finally , we provide classes of operator-valued kernels of the dot product and translation-invariant type . story_separator_special_tag this graduate-level textbook introduces fundamental concepts and methods in machine learning . it describes several important modern algorithms , provides the theoretical underpinnings of these algorithms , and illustrates key aspects for their application . the authors aim to present novel theoretical tools and concepts while giving concise proofs even for relatively advanced topics . foundations of machine learning fills the need for a general textbook that also offers theoretical details and an emphasis on proofs . certain topics that are often treated with insufficient attention are discussed in more detail here ; for example , entire chapters are devoted to regression , multi-class classification , and ranking . the first three chapters lay the theoretical foundation for what follows , but each remaining chapter is mostly self-contained . the appendix offers a concise probability review , a short introduction to convex optimization , tools for concentration bounds , and several basic properties of matrices and norms used in the book . the book is intended for graduate students and researchers in machine learning , statistics , and related areas ; it can be used either as a textbook or as a reference text for a research seminar . story_separator_special_tag the discovery of causal relationships from purely observational data is a fundamental problem in science . the most elementary form of such a causal discovery problem is to decide whether x causes y or , alternatively , y causes x , given joint observations of two variables x , y. an example is to decide whether altitude causes temperature , or vice versa , given only joint measurements of both variables . even under the simplifying assumptions of no confounding , no feedback loops , and no selection bias , such bivariate causal discovery problems are challenging . nevertheless , several approaches for addressing those problems have been proposed in recent years . we review two families of such methods : additive noise methods ( anm ) and information geometric causal inference ( igci ) . we present the benchmark causeeffectpairs that consists of data for 100 different cause-effect pairs selected from 37 datasets from various domains ( e.g. , meteorology , biology , medicine , engineering , economy , etc . ) and motivate our decisions regarding the `` ground truth '' causal directions of all pairs . we evaluate the performance of several bivariate causal discovery methods on story_separator_special_tag over the last years significant efforts have been made to develop kernels that can be applied to sequence data such as dna , text , speech , video and images . 
the fisher kernel and similar variants have been suggested as good ways to combine an underlying generative model in the feature space and discriminant classifiers such as svm 's . in this paper we suggest an alternative procedure to the fisher kernel for systematically finding kernel functions that naturally handle variable length sequence data in multimedia domains . in particular for domains such as speech and images we explore the use of kernel functions that take full advantage of well known probabilistic models such as gaussian mixtures and single full covariance gaussian models . we derive a kernel distance based on the kullback-leibler ( kl ) divergence between generative models . in effect our approach combines the best of both generative and discriminative methods and replaces the standard svm kernels . we perform experiments on speaker identification/verification and image classification tasks and show that these new kernels have the best performance in speaker verification and mostly outperform the fisher kernel based svm 's and the generative classifiers in speaker story_separator_special_tag we propose one-class support measure machines ( ocsmms ) for group anomaly detection . unlike traditional anomaly detection , ocsmms aim at recognizing anomalous aggregate behaviors of data points . the ocsmms generalize well-known one-class support vector machines ( ocsvms ) to a space of probability measures . by formulating the problem as quantile estimation on distributions , we can establish interesting connections to the ocsvms and variable kernel density estimators ( vkdes ) over the input space on which the distributions are defined , bridging the gap between large-margin methods and kernel density estimators . in particular , we show that various types of vkdes can be considered as solutions to a class of regularization problems studied in this paper . experiments on the sloan digital sky survey dataset and a high energy particle physics dataset demonstrate the benefits of the proposed framework in real-world applications . story_separator_special_tag this paper presents a kernel-based discriminative learning framework on probability measures . rather than relying on large collections of vectorial training examples , our framework learns using a collection of probability distributions that have been constructed to meaningfully represent training data . by representing these probability distributions as mean embeddings in the reproducing kernel hilbert space ( rkhs ) , we are able to apply many standard kernel-based learning techniques in straightforward fashion . to accomplish this , we construct a generalization of the support vector machine ( svm ) called a support measure machine ( smm ) . our analyses of smms provide several insights into their relationship to traditional svms . based on such insights , we propose a flexible svm ( flex-svm ) that places different kernel functions on each training example . experimental results on both synthetic and real-world data demonstrate the effectiveness of our proposed framework . story_separator_special_tag this paper investigates domain generalization : how to take knowledge acquired from an arbitrary number of related domains and apply it to previously unseen domains ? we propose domain-invariant component analysis ( dica ) , a kernel-based optimization algorithm that learns an invariant transformation by minimizing the dissimilarity across domains , whilst preserving the functional relationship between input and output variables .
a learning-theoretic analysis shows that reducing dissimilarity improves the expected generalization ability of classifiers on new domains , motivating the proposed algorithm . experimental results on synthetic and real-world datasets demonstrate that dica successfully learns invariant features and improves classifier performance in practice . story_separator_special_tag a mean function in a reproducing kernel hilbert space ( rkhs ) , or a kernel mean , is an important part of many algorithms ranging from kernel principal component analysis to hilbert-space embedding of distributions . given a finite sample , an empirical average is the standard estimate for the true kernel mean . we show that this estimator can be improved due to a well-known phenomenon in statistics called stein 's phenomenon . our theoretical analysis reveals the existence of a wide class of estimators that are better than the standard one . focusing on a subset of this class , we propose efficient shrinkage estimators for the kernel mean . empirical evaluations on several applications clearly demonstrate that the proposed estimators outperform the standard kernel mean estimator . story_separator_special_tag the problem of estimating the kernel mean in a reproducing kernel hilbert space ( rkhs ) is central to kernel methods in that it is used by classical approaches ( e.g. , when centering a kernel pca matrix ) , and it also forms the core inference step of modern kernel methods ( e.g. , kernel-based non-parametric tests ) that rely on embedding probability distributions in rkhss . muandet et al . ( 2014 ) has shown that shrinkage can help in constructing `` better '' estimators of the kernel mean than the empirical estimator . the present paper studies the consistency and admissibility of the estimators in muandet et al . ( 2014 ) , and proposes a wider class of shrinkage estimators that improve upon the empirical estimator by considering appropriate basis functions . using the kernel pca basis , we show that some of these estimators can be constructed using spectral filtering algorithms which are shown to be consistent under some technical assumptions . our theoretical analysis also reveals a fundamental connection to the kernel-based supervised learning framework . the proposed estimators are simple to implement and perform well in practice . story_separator_special_tag a mean function in a reproducing kernel hilbert space ( rkhs ) , or a kernel mean , is central to kernel methods in that it is used by many classical algorithms such as kernel principal component analysis , and it also forms the core inference step of modern kernel methods that rely on embedding probability distributions in rkhss . given a finite sample , an empirical average has been used commonly as a standard estimator of the true kernel mean . despite a widespread use of this estimator , we show that it can be improved thanks to the well-known stein phenomenon . we propose a new family of estimators called kernel mean shrinkage estimators ( kmses ) , which benefit from both theoretical justifications and good empirical performance . the results demonstrate that the proposed estimators outperform the standard one , especially in a `` large d , small n '' paradigm . story_separator_special_tag we consider probability metrics of the following type : for a class $ \\mathcal { F } $ of functions and probability measures $ p , q $ we define $ \\gamma_ { \\mathcal { F } } ( p , q ) := \\sup_ { f \\in \\mathcal { F } } \\left| \\int f \\ , dp - \\int f \\ , dq \\right| $ . a unified study of such integral probability metrics is given .
we characterize the maximal class of functions that generates such a metric . further , we show how some interesting properties of these probability metrics arise directly from conditions on the generating class of functions . the results are illustrated by several examples , including the kolmogorov metric , the dudley metric and the stop-loss metric . story_separator_special_tag today 's web-enabled deluge of electronic data calls for automated methods of data analysis . machine learning provides these , developing methods that can automatically detect patterns in data and then use the uncovered patterns to predict future data . this textbook offers a comprehensive and self-contained introduction to the field of machine learning , based on a unified , probabilistic approach . the coverage combines breadth and depth , offering necessary background material on such topics as probability , optimization , and linear algebra as well as discussion of recent developments in the field , including conditional random fields , l1 regularization , and deep learning . the book is written in an informal , accessible style , complete with pseudo-code for the most important algorithms . all topics are copiously illustrated with color images and worked examples drawn from such application domains as biology , text processing , computer vision , and robotics . rather than providing a cookbook of different heuristic methods , the book stresses a principled model-based approach , often using the language of graphical models to specify models in a concise and intuitive way . almost all the models described have been implemented in story_separator_special_tag probabilistic inference is an attractive approach to uncertain reasoning and empirical learning in artificial intelligence . computational difficulties arise , however , because probabilistic models with the necessary realism and flexibility lead to complex distributions over high-dimensional spaces . related problems in other fields have been tackled using monte carlo methods based on sampling using markov chains , providing a rich array of techniques that can be applied to problems in artificial intelligence . the metropolis algorithm has been used to solve difficult problems in statistical physics for over forty years , and , in the last few years , the related method of gibbs sampling has been applied to problems of statistical inference . concurrently , an alternative method for solving problems in statistical physics by means of dynamical simulation has been developed as well , and has recently been unified with the metropolis algorithm to produce the hybrid monte carlo method . in computer science , markov chain sampling is the basis of the heuristic optimization technique of simulated annealing , and has recently been used in randomized algorithms for approximate counting of large sets . in this review , i outline the role of probabilistic inference in story_separator_special_tag we develop and analyze an algorithm for nonparametric estimation of divergence functionals and the density ratio of two probability distributions . our method is based on a variational characterization of f-divergences , which turns the estimation into a penalized convex risk minimization problem . we present a derivation of our kernel-based estimation algorithm and an analysis of convergence rates for the estimator . our simulation results demonstrate the convergence behavior of the method , which compares favorably with existing methods in the literature . 
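the variational divergence estimator in the preceding abstract admits a very small concrete illustration . the python sketch below is a hedged reconstruction , not the authors' code : it uses the representation kl ( p || q ) = sup_g e_p [ g ] - e_q [ exp ( g - 1 ) ] , restricts the witness g to a span of random fourier features , and ascends the concave empirical objective by plain gradient ascent . the feature dimension , step size , and iteration count are arbitrary illustrative choices .

import numpy as np

rng = np.random.default_rng(0)

def features(x, w, b):
    # random fourier features approximating a gaussian-kernel feature map
    return np.sqrt(2.0 / w.shape[1]) * np.cos(x @ w + b)

def kl_estimate(xp, xq, n_feat=200, steps=3000, lr=0.05):
    # witness g(x) = features(x) @ theta; for every g, the objective
    #   E_P[g] - E_Q[exp(g - 1)]
    # lower-bounds KL(P || Q), with equality at the supremum over g
    w = rng.normal(size=(xp.shape[1], n_feat))
    b = rng.uniform(0.0, 2.0 * np.pi, n_feat)
    fp, fq = features(xp, w, b), features(xq, w, b)
    theta = np.zeros(n_feat)
    for _ in range(steps):
        eg = np.exp(fq @ theta - 1.0)
        grad = fp.mean(axis=0) - (eg[:, None] * fq).mean(axis=0)
        theta += lr * grad
    return (fp @ theta).mean() - np.exp(fq @ theta - 1.0).mean()

# toy check: kl between n(0,1) and n(1,1) equals 0.5
xp = rng.normal(0.0, 1.0, size=(4000, 1))
xq = rng.normal(1.0, 1.0, size=(4000, 1))
print(kl_estimate(xp, xq))  # should land roughly near 0.5

a penalty term on theta would play the role of the paper's convex regularization ; it is omitted here only to keep the sketch short .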
story_separator_special_tag we connect shift-invariant characteristic kernels to infinitely divisible distributions on $ \\mathbb { R } ^d $ . characteristic kernels play an important role in machine learning applications with their kernel means to distinguish any two probability measures . the contribution of this paper is twofold . first , we show , using the levy-khintchine formula , that any shift-invariant kernel given by a bounded , continuous , and symmetric probability density function ( pdf ) of an infinitely divisible distribution on $ \\mathbb { R } ^d $ is characteristic . we mention some closure properties of such characteristic kernels under addition , pointwise product , and convolution . second , in developing various kernel mean algorithms , it is fundamental to compute the following values : ( i ) kernel mean values $ m_p ( x ) $ , $ x \\in \\mathcal { X } $ , and ( ii ) kernel mean rkhs inner products $ \\langle m_p , m_q \\rangle_ { \\mathcal { H } } $ , for probability measures $ p , q $ . if $ p $ , $ q $ , and kernel $ k $ are gaussians , then the computation of ( i ) and ( ii ) results in gaussian pdfs that are tractable . we generalize this gaussian combination to more general cases in the class of infinitely divisible distributions . we then introduce story_separator_special_tag a nonparametric approach for policy learning for pomdps is proposed . the approach represents distributions over the states , observations , and actions as embeddings in feature spaces , which are reproducing kernel hilbert spaces . distributions over states given the observations are obtained by applying the kernel bayes ' rule to these distribution embeddings . policies and value functions are defined on the feature space over states , which leads to a feature space expression for the bellman equation . value iteration may then be used to estimate the optimal value function and associated policy . experimental results confirm that the correct policy is learned using the feature space representation . story_separator_special_tag a non-parametric extension of control variates is presented . these leverage gradient information on the sampling density to achieve substantial variance reduction . it is not required that the sampling density be normalised . the novel contribution of this work is based on two important insights ; ( i ) a trade-off between random sampling and deterministic approximation and ( ii ) a new gradient-based function space derived from stein 's identity . unlike classical control variates , our estimators achieve super-root- $ n $ convergence , often requiring orders of magnitude fewer simulations to achieve a fixed level of precision . theoretical and empirical results are presented , the latter focusing on integration problems arising in hierarchical models and models based on non-linear ordinary differential equations . story_separator_special_tag we analyze 'distribution to distribution regression ' where one is regressing a mapping where both the covariate ( inputs ) and response ( outputs ) are distributions . no parameters on the input or output distributions are assumed , nor are any strong assumptions made on the measure from which input distributions are drawn . we develop an estimator and derive an upper bound for the l2 risk ; also , we show that when the effective dimension is small enough ( as measured by the doubling dimension ) , then the risk converges to zero with a polynomial rate .
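the two regression-on-distributions abstracts surrounding this point share one mechanical core : represent each input sample set by its empirical kernel mean and regress on those embeddings . the python sketch below shows the scalar-response variant under that reading ; the gaussian bandwidths , the ridge parameter , and the toy task are invented for the example and are not taken from either paper .

import numpy as np

rng = np.random.default_rng(1)

def mmd2(x, y, gamma=1.0):
    # biased (v-statistic) estimate of the squared mmd between two sample
    # sets under a gaussian kernel; equals the squared distance between
    # their empirical kernel mean embeddings
    def k(a, b):
        d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
        return np.exp(-gamma * d2)
    return k(x, x).mean() + k(y, y).mean() - 2.0 * k(x, y).mean()

def fit_predict(train_bags, y, test_bags, gamma2=1.0, lam=1e-3):
    # second-level gaussian kernel on embeddings: K_ij = exp(-gamma2 * mmd^2)
    def gram(bags_a, bags_b):
        return np.exp(-gamma2 * np.array(
            [[mmd2(a, b) for b in bags_b] for a in bags_a]))
    K = gram(train_bags, train_bags)
    alpha = np.linalg.solve(K + lam * np.eye(len(y)), y)  # kernel ridge step
    return gram(test_bags, train_bags) @ alpha

# toy task: each bag is drawn from n(mu, 1) and the response is mu itself
mus = rng.uniform(-2.0, 2.0, size=40)
bags = [rng.normal(m, 1.0, size=(50, 1)) for m in mus]
test = [rng.normal(m, 1.0, size=(50, 1)) for m in (-1.0, 0.0, 1.0)]
print(fit_predict(bags, mus, test))  # roughly [-1, 0, 1]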
story_separator_special_tag we study the problem of distribution to real regression , where one aims to regress a mapping $ f $ that takes in a distribution input covariate $ p \\in \\mathcal { I } $ ( for a non-parametric family of distributions $ \\mathcal { I } $ ) and outputs a real-valued response $ y = f ( p ) + \\epsilon $ . this setting was recently studied in [ 15 ] , where the `` kernel-kernel '' estimator was introduced and shown to have a polynomial rate of convergence . however , evaluating a new prediction with the kernel-kernel estimator scales as $ \\Omega ( n ) $ . this causes the difficult situation where a large amount of data may be necessary for a low estimation risk , but the computation cost of estimation becomes infeasible when the data-set is too large . to this end , we propose the double-basis estimator , which looks to alleviate this big data problem in two ways : first , the double-basis estimator is shown to have a computation complexity that is independent of the number of instances $ n $ when evaluating new predictions after training ; secondly , the double-basis estimator is shown to have a fast rate of convergence for a general class of mappings $ f \\in \\mathcal { F } $ . story_separator_special_tag domain adaptation allows knowledge from a source domain to be transferred to a different but related target domain . intuitively , discovering a good feature representation across domains is crucial . in this paper , we first propose to find such a representation through a new learning method , transfer component analysis ( tca ) , for domain adaptation . tca tries to learn some transfer components across domains in a reproducing kernel hilbert space using maximum mean discrepancy . in the subspace spanned by these transfer components , data properties are preserved and data distributions in different domains are close to each other . as a result , with the new representations in this subspace , we can apply standard machine learning methods to train classifiers or regression models in the source domain for use in the target domain . furthermore , in order to uncover the knowledge hidden in the relations between the data labels from the source and target domains , we extend tca in a semisupervised learning setting , which encodes label information into transfer components learning . we call this extension semisupervised tca . the main contribution of our work is that we propose a story_separator_special_tag complicated generative models often result in a situation where computing the likelihood of observed data is intractable , while simulating from the conditional density given a parameter value is relatively easy . approximate bayesian computation ( abc ) is a paradigm that enables simulation-based posterior inference in such cases by measuring the similarity between simulated and observed data in terms of a chosen set of summary statistics . however , there is no general rule to construct sufficient summary statistics for complex models . insufficient summary statistics will `` leak '' information , which leads to abc algorithms yielding samples from an incorrect ( partial ) posterior . in this paper , we propose a fully nonparametric abc paradigm which circumvents the need for manually selecting summary statistics . our approach , k2-abc , uses maximum mean discrepancy ( mmd ) as a dissimilarity measure between the distributions over observed and simulated data . mmd is easily estimated as the squared difference between their empirical kernel embeddings .
experiments on a simulated scenario and a real-world biological problem illustrate the effectiveness of the proposed algorithm . story_separator_special_tag from the publisher : probabilistic reasoning in intelligent systems is a complete and accessible account of the theoretical foundations and computational methods that underlie plausible reasoning under uncertainty . the author provides a coherent explication of probability as a language for reasoning with partial belief and offers a unifying perspective on other ai approaches to uncertainty , such as the dempster-shafer formalism , truth maintenance systems , and nonmonotonic logic . the author distinguishes syntactic and semantic approaches to uncertainty -- and offers techniques , based on belief networks , that provide a mechanism for making semantics-based systems operational . specifically , network-propagation techniques serve as a mechanism for combining the theoretical coherence of probability theory with modern demands of reasoning-systems technology : modular declarative inputs , conceptually meaningful inferences , and parallel distributed computation . application areas include diagnosis , forecasting , image interpretation , multi-sensor fusion , decision support systems , plan recognition , planning , speech recognition -- in short , almost every task requiring that conclusions be drawn from uncertain clues and incomplete information . probabilistic reasoning in intelligent systems will be of special interest to scholars and researchers in ai , decision theory , statistics , logic , philosophy , story_separator_special_tag 1. introduction to probabilities , graphs , and causal models 2. a theory of inferred causation 3. causal diagrams and the identification of causal effects 4. actions , plans , and direct effects 5. causality and structural models in the social sciences 6. simpson 's paradox , confounding , and collapsibility 7. structural and counterfactual models 8. imperfect experiments : bounds and counterfactuals 9. probability of causation : interpretation and identification epilogue : the art and science of cause and effect . story_separator_special_tag ( 1901 ) . liii . on lines and planes of closest fit to systems of points in space . the london , edinburgh , and dublin philosophical magazine and journal of science : vol . 2 , no . 11 , pp . 559-572 . story_separator_special_tag compact explicit feature maps provide a practical framework to scale kernel methods to large-scale learning , but deriving such maps for many types of kernels remains a challenging open problem . among the commonly used kernels for nonlinear classification are polynomial kernels , for which low approximation error has thus far necessitated explicit feature maps of large dimensionality , especially for higher-order polynomials . meanwhile , because polynomial kernels are unbounded , they are frequently applied to data that has been normalized to unit l2 norm . the question we address in this work is : if we know a priori that data is normalized , can we devise a more compact map ? we show that a putative affirmative answer to this question based on random fourier features is impossible in this setting , and introduce a new approximation paradigm , spherical random fourier ( srf ) features , which circumvents these issues and delivers a compact approximation to polynomial kernels for data on the unit sphere .
compared to prior work , srf features are less rank-deficient , more compact , and achieve better kernel approximation , especially for higher-order polynomials . the resulting predictions have lower variance story_separator_special_tag approximation of non-linear kernels using random feature mapping has been successfully employed in large-scale data analysis applications , accelerating the training of kernel machines . while previous random feature mappings run in $ O ( ndD ) $ time for $ n $ training samples in $ d $ -dimensional space and $ D $ random feature maps , we propose a novel randomized tensor product technique , called tensor sketching , for approximating any polynomial kernel in $ O ( n ( d + D \\log { D } ) ) $ time . also , we introduce both absolute and relative error bounds for our approximation to guarantee the reliability of our estimation algorithm . empirically , tensor sketching achieves higher accuracy and often runs orders of magnitude faster than the state-of-the-art approach for large-scale real-world datasets . story_separator_special_tag low-dimensional embedding , manifold learning , clustering , classification , and anomaly detection are among the most important problems in machine learning . the existing methods usually consider the case when each instance has a fixed , finite-dimensional feature representation . here we consider a different setting . we assume that each instance corresponds to a continuous probability distribution . these distributions are unknown , but we are given some i.i.d . samples from each distribution . our goal is to estimate the distances between these distributions and use these distances to perform low-dimensional embedding , clustering/classification , or anomaly detection for the distributions . we present estimation algorithms , describe how to apply them for machine learning tasks on distributions , and show empirical results on synthetic data , real world images , and astronomical data sets . story_separator_special_tag we introduce a new discriminative learning method for image classification . we assume that the images are represented by unordered , multi-dimensional , finite sets of feature vectors , and that these sets might have different cardinality . this allows us to use consistent nonparametric divergence estimators to define new kernels over these sets , and then apply them in kernel classifiers . our numerical results demonstrate that in many cases this approach can outperform state-of-the-art competitors on both simulated and challenging real-world datasets . story_separator_special_tag distribution regression refers to the situation where a response y depends on a covariate p where p is a probability distribution . the model is $ y = f ( p ) + \\epsilon $ where $ f $ is an unknown regression function and $ \\epsilon $ is a random error . typically , we do not observe p directly , but rather , we observe a sample from p . in this paper we develop theory and methods for distribution-free versions of distribution regression . this means that we do not make strong distributional assumptions about the error term and covariate p . we prove that when the effective dimension is small enough ( as measured by the doubling dimension ) , then the excess prediction risk converges to zero with a polynomial rate . story_separator_special_tag we propose a novel approach to optimize partially observable markov decision processes ( pomdps ) defined on continuous spaces .
to date , most algorithms for model-based pomdps are restricted to discrete states , actions , and observations , but many real-world problems such as , for instance , robot navigation , are naturally defined on continuous spaces . in this work , we demonstrate that the value function for continuous pomdps is convex in the beliefs over continuous state spaces , and piecewise-linear convex for the particular case of discrete observations and actions but still continuous states . we also demonstrate that continuous bellman backups are contracting and isotonic , ensuring the monotonic convergence of value-iteration algorithms . relying on those properties , we extend the algorithm , originally developed for discrete pomdps , to work in continuous state spaces by representing the observation , transition , and reward models using gaussian mixtures , and the beliefs using gaussian mixtures or particle sets . with these representations , the integrals that appear in the bellman backup can be computed in closed form and , therefore , the algorithm is computationally feasible . finally , we further extend to deal with story_separator_special_tag we consider the nonparametric functional estimation of the drift of a gaussian process via minimax and bayes estimators . in this context , we construct superefficient estimators of stein type for such drifts using the malliavin integration by parts formula and superharmonic functionals on gaussian space . our results are illustrated by numerical simulations and extend the construction of james-stein type estimators for gaussian processes by berger and wolpert [ j. multivariate anal . 13 ( 1983 ) 401-424 ] . story_separator_special_tag object matching is a fundamental operation in data analysis . it typically requires the definition of a similarity measure between the classes of objects to be matched . instead , we develop an approach which is able to perform matching by requiring a similarity measure only within each of the classes . this is achieved by maximizing the dependency between matched pairs of observations by means of the hilbert-schmidt independence criterion . this problem can be cast as a quadratic assignment problem with special structure and we present a simple algorithm for finding a locally optimal solution . story_separator_special_tag this paper introduces a novel mathematical and computational framework , namely the log-hilbert-schmidt metric between positive definite operators on a hilbert space . this is a generalization of the log-euclidean metric on the riemannian manifold of positive definite matrices to the infinite-dimensional setting . the general framework is applied in particular to compute distances between covariance operators on a reproducing kernel hilbert space ( rkhs ) , for which we obtain explicit formulas via the corresponding gram matrices . empirically , we apply our formulation to the task of multi-category image classification , where each image is represented by an infinite-dimensional rkhs covariance operator . on several challenging datasets , our method significantly outperforms approaches based on covariance matrices computed directly on the original input features , including those using the log-euclidean metric , stein and jeffreys divergences , achieving new state of the art results . story_separator_special_tag to accelerate the training of kernel machines , we propose to map the input data to a randomized low-dimensional feature space and then apply existing fast linear methods .
the features are designed so that the inner products of the transformed data are approximately equal to those in the feature space of a user specified shift-invariant kernel . we explore two sets of random features , provide convergence bounds on their ability to approximate various radial basis kernels , and show that in large-scale classification and regression tasks linear machine learning algorithms applied to these features outperform state-of-the-art large-scale kernel machines . story_separator_special_tag this paper deals with the problem of nonparametric independence testing , a fundamental decision-theoretic problem that asks if two arbitrary ( possibly multivariate ) random variables $ x , y $ are independent or not , a question that comes up in many fields like causality and neuroscience . while quantities like correlation of $ x , y $ only test for ( univariate ) linear independence , natural alternatives like mutual information of $ x , y $ are hard to estimate due to a serious curse of dimensionality . a recent approach , avoiding both issues , estimates norms of an \\textit { operator } in reproducing kernel hilbert spaces ( rkhss ) . our main contribution is strong empirical evidence that by employing \\textit { shrunk } operators when the sample size is small , one can attain an improvement in power at low false positive rates . we analyze the effects of stein shrinkage on a popular test statistic called hsic ( hilbert-schmidt independence criterion ) . our observations provide insights into two recently proposed shrinkage estimators , scose and fcose - we prove that scose is ( essentially ) the optimal linear shrinkage method for story_separator_special_tag nonparametric two sample testing is a decision theoretic problem that involves identifying differences between two random variables without making parametric assumptions about their underlying distributions . we refer to the most common settings as mean difference alternatives ( mda ) , for testing differences only in first moments , and general difference alternatives ( gda ) , which is about testing for any difference in distributions . a large number of test statistics have been proposed for both these settings . this paper connects three classes of statistics - high dimensional variants of hotelling 's t-test , statistics based on reproducing kernel hilbert spaces , and energy statistics based on pairwise distances . we ask the question : how much statistical power do popular kernel and distance based tests for gda have when the unknown distributions differ in their means , compared to specialized tests for mda ? we formally characterize the power of popular tests for gda like the maximum mean discrepancy with the gaussian kernel ( gmmd ) and bandwidth-dependent variants of the energy distance with the euclidean norm ( eed ) in the high-dimensional mda regime . some practically important properties include ( a ) eed and
story_separator_special_tag we consider the existence of bounded harmonic functions on simply connected manifolds $ n^n $ of negative curvature story_separator_special_tag this paper studies the interaction between the geometry of complete riemannian manifolds of negative curvature and some aspects of function theory on these spaces . the study of harmonic functions on the unit disc provides a classical and beautiful example of this interaction ; we recall some aspects of this below . there is a well-known representation of positive harmonic functions on the unit disc $ u $ , due to herglotz [ 13 ] , in terms of positive borel measures $ \\mu $ on the circle $ s^1 $ : story_separator_special_tag given a closed 3-manifold m endowed with a radial symmetric metric of negative sectional curvature , we define the cross curvature flow on m ; using the maximum principle , we demonstrate that the solution to the cross curvature flow exists for all time and converges pointwise to a hyperbolic metric . story_separator_special_tag we study the dirichlet problem for the following prescribed mean curvature pde $ $ \\begin { aligned } { \\left\\ { \\begin { array } { ll } - { \\text { div } } \\dfrac { \\nabla v } { \\sqrt { 1+| \\nabla v|^ { 2 } } } =f ( x , v ) \\text { in } \\omega \\\\ v=\\varphi \\text { on } \\partial \\omega , \\end { array } \\right . } \\end { aligned } $ $ where $ $ \\omega $ $ is a domain contained in a complete riemannian manifold m , $ $ f : \\omega \\times \\mathbb { r } \\rightarrow \\mathbb { r } $ $ is a fixed function and $ $ \\varphi $ $ is a given continuous function on $ $ \\partial \\omega $ $ . this is done in three parts . in the first one we consider this problem in the most general form , proving the existence of solutions when $ $ \\omega $ $ is a bounded $ $ c^ { 2 , \\alpha } $ $ domain , under suitable conditions on f , with no restrictions on m besides completeness . in story_separator_special_tag m. t. anderson and d. sullivan showed that the dirichlet problem at infinity for simply connected manifolds is solvable if the curvature satisfies $ -a^2 \\le k \\le -b^2 < 0 $ . his theorem , which is the same as d. sullivan 's [ 8 ] , is the following : theorem b. let $ n^n $ be a complete simply connected riemannian manifold with sectional curvature $ k $ , satisfying $ -a^2 \\le k \\le -b^2 < 0 $ . then the dirichlet problem at infinity for $ \\delta $ is uniquely solvable for any $ f \\in c^0 ( s^ { n-1 } ( \\infty ) ) $ . story_separator_special_tag the nonsolvability of the dirichlet problem at infinity for negatively curved manifolds was proved recently by a. ancona , using brownian motion and probability theory . the aim of this paper is to give a different , nonprobabilistic proof of this result . the existence of bounded nontrivial harmonic functions which can not be constructed via choi 's method using convex sets , is also shown .
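the remaining abstracts lean on the asymptotic dirichlet problem without restating it , so a compact formulation may help ; the latex fragment below uses the standard cartan-hadamard conventions and is a reading aid rather than a quotation from any of the papers .

% let M be a cartan-hadamard manifold, \partial_\infty M its sphere at
% infinity, and \bar{M} = M \cup \partial_\infty M the geometric
% compactification. given f \in C(\partial_\infty M), find
% u \in C(\bar{M}) with
\[
  \begin{cases}
    \Delta u = 0 & \text{in } M, \\
    u = f        & \text{on } \partial_\infty M .
  \end{cases}
\]
% the problem is called (uniquely) solvable when such a u exists for every
% continuous f. the quasilinear variants in the surrounding abstracts
% replace \Delta by the p-laplacian
%   \Delta_p u = \mathrm{div}(|\nabla u|^{p-2} \nabla u)
% or by the minimal graph operator
%   \mathrm{div}\bigl(\nabla u / \sqrt{1 + |\nabla u|^2}\bigr).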
story_separator_special_tag we study the asymptotic dirichlet problem for the minimal graph equation on a cartan hadamard manifold m whose radial sectional curvatures outside a compact set satisfy an upper bound $ $ \\begin { aligned } k ( p ) \\le - \\frac { \\phi ( \\phi -1 ) } { r ( x ) ^2 } \\end { aligned } $ $ and a pointwise pinching condition $ $ \\begin { aligned } |k ( p ) |\\le c_k|k ( p ' ) | \\end { aligned } $ $ for some constants $ $ \\phi > 1 $ $ and $ $ c_k\\ge 1 $ $ , where $ $ p $ $ and $ $ p ' $ $ are any 2-dimensional subspaces of $ $ t_xm $ $ containing the ( radial ) vector $ $ \\nabla r ( x ) $ $ and $ $ r ( x ) =d ( o , x ) $ $ is the distance to a fixed point $ $ o\\in m $ $ . we solve the asymptotic dirichlet problem with any continuous boundary data for dimensions $ $ n=\\dim m > 4/\\phi +1 $ $ . story_separator_special_tag we prove that every entire solution of the minimal graph equation that is bounded from below and has at most linear growth must be constant on a complete riemannian manifold $ m $ with only one end if $ m $ has asymptotically non-negative sectional curvature . on the other hand , we prove the existence of bounded non-constant minimal graphic and $ p $ -harmonic functions on rotationally symmetric cartan-hadamard manifolds under optimal assumptions on the sectional curvatures . story_separator_special_tag we study the asymptotic dirichlet problem for killing graphs with prescribed mean curvature h in warped product manifolds $ $ m\\times _\\varrho \\mathbb { r } $ $ . in the first part of the paper , we prove the existence of killing graphs with prescribed boundary on geodesic balls under suitable assumptions on h and the mean curvature of the killing cylinders over geodesic spheres . in the process we obtain a uniform interior gradient estimate improving previous results by dajczer and de lira . in the second part we solve the asymptotic dirichlet problem in a large class of manifolds whose sectional curvatures are allowed to go to 0 or to $ $ -\\infty $ $ provided that h satisfies certain bounds with respect to the sectional curvatures of m and the norm of the killing vector field . finally we obtain non-existence results if the prescribed mean curvature function h grows too fast . story_separator_special_tag we study the dirichlet problem at infinity on a cartan-hadamard manifold for a large class of operators containing in particular the p-laplacian and the minimal graph operator . story_separator_special_tag we study the asymptotic dirichlet and plateau problems on cartan hadamard manifolds satisfying the so-called strict convexity ( abbr . sc ) condition . the main part of the paper consists in studying the sc condition on a manifold whose sectional curvatures are bounded from above and below by certain functions depending on the distance to a fixed point . in particular , we are able to verify the sc condition on manifolds whose curvature lower bound can go to $ $ -\\infty $ $ and upper bound to 0 simultaneously at certain rates , or on some manifolds whose sectional curvatures go to $ $ -\\infty $ $ faster than any prescribed rate . these improve previous results of anderson , borbély , and ripoll and telichevsky .
we then solve the asymptotic plateau problem for locally rectifiable currents with $ $ \\mathbb { z } _2 $ $ -multiplicity in a cartan-hadamard manifold satisfying the sc condition given any compact topologically embedded $ $ ( k-1 ) $ $ -dimensional submanifold of $ $ \\partial _ { \\infty } m $ $ , $ $ 2\\le k\\le n-1 $ $ , as the story_separator_special_tag we define the asymptotic dirichlet problem and give a sufficient condition for solving it . this proves an existence of nontrivial bounded harmonic functions on certain classes of noncompact complete riemannian manifolds . 0. introduction . in this paper we will prove the existence of nonconstant bounded harmonic functions on certain classes of noncompact riemannian manifolds by defining and solving an asymptotically defined dirichlet problem for harmonic functions . the motivation comes from the classical uniformization theorem of riemann surfaces which says that a simply connected riemann surface is biholomorphic to the riemann sphere s2 , the complex plane c , or the unit disk d. this is a geometric theorem , but its original proof due to koebe relies heavily on function theory . the function-theoretic interpretation of this theorem is the following : among the noncompact simply connected surfaces , c is characterized by the fact that it admits no nonconstant bounded harmonic functions , and d by that it admits nonconstant bounded harmonic functions . the geometric aspect of the uniformization theorem could be roughly stated as follows : let m be a simply connected riemann surface equipped with a complete riemannian metric with gaussian curvature story_separator_special_tag we study complete minimal graphs in $ \\mathbb { h } \\times \\mathbb { r } $ , which take asymptotic boundary values plus and minus infinity on alternating sides of an ideal inscribed polygon $ \\gamma $ in $ \\mathbb { h } $ . we give necessary and sufficient conditions on the `` lengths '' of the sides of the polygon ( and all inscribed polygons in $ \\gamma $ ) that ensure the existence of such a graph . we then apply this to construct entire minimal graphs in $ \\mathbb { h } \\times \\mathbb { r } $ that are conformally the complex plane c. the vertical projection of such a graph yields a harmonic diffeomorphism from $ \\mathbb { c } $ onto $ \\mathbb { h } $ , disproving a conjecture of rick schoen and s.-t. yau . story_separator_special_tag we extend the interior gradient estimate due to korevaar-simon for solutions of the mean curvature equation from the case of euclidean graphs to the general case of killing graphs . our main application is the proof of existence of killing graphs with prescribed mean curvature function for continuous boundary data , thus extending a result due to dajczer , hinojosa , and lira . in addition , we prove the existence and uniqueness of radial graphs in hyperbolic space with prescribed mean curvature function and asymptotic boundary data at infinity . story_separator_special_tag the existence and uniqueness of killing graphs with prescribed mean curvature in a large class of riemannian manifolds is proved . story_separator_special_tag in this article we extend a well-known theorem of j. serrin about existence and uniqueness of graphs of constant mean curvature in euclidean space to a broad class of riemannian manifolds . our result also generalizes several others proved recently and includes the new case of euclidean rotational graphs with constant mean curvature .
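since several of the preceding abstracts turn on the dirichlet problem for the minimal graph equation , a small numerical illustration may help fix ideas . the python sketch below is purely euclidean and independent of the manifold settings above : it runs gradient descent on the discretized area functional over the unit square with fixed boundary values , whose critical points are discrete minimal graphs . grid size , step size , iteration count , and boundary data are arbitrary .

import numpy as np

n, h, lr, steps = 41, 1.0 / 40, 0.25, 20000
_, y = np.meshgrid(np.linspace(0, 1, n), np.linspace(0, 1, n), indexing="ij")
u = np.zeros((n, n))
u[0, :], u[-1, :] = np.sin(np.pi * y[0]), -np.sin(np.pi * y[-1])  # dirichlet data
interior = np.zeros((n, n), dtype=bool)
interior[1:-1, 1:-1] = True

def area_grad(u):
    # gradient of the discrete area A(u) = sum_cells sqrt(1 + ux^2 + uy^2) * h^2,
    # using forward differences on each grid cell
    ux = (u[1:, :-1] - u[:-1, :-1]) / h
    uy = (u[:-1, 1:] - u[:-1, :-1]) / h
    w = np.sqrt(1.0 + ux ** 2 + uy ** 2)
    gx, gy = ux / w * h, uy / w * h
    g = np.zeros_like(u)
    g[:-1, :-1] -= gx + gy   # contribution to each cell's base node
    g[1:, :-1] += gx         # node one step in x
    g[:-1, 1:] += gy         # node one step in y
    return g

for _ in range(steps):
    u[interior] -= lr * area_grad(u)[interior]  # descend in the interior only
print(np.abs(area_grad(u))[interior].max())     # near zero at a discrete minimal graph

gradient descent is the crudest possible solver here ; the only point is that the minimal graph equation in the abstracts is exactly the euler-lagrange equation of this area functional .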
story_separator_special_tag the existence of a solution to the dirichlet problem for the minimal graph equation with prescribed asymptotic boundary is proved in a product space $ m \\times \\mathbb { r } $ where m is a complete , simply connected , n-dimensional riemannian manifold with sectional curvature $ k_m $ satisfying $ k_m \\le -k^2 $ , $ k > 0 $ . story_separator_special_tag the existence of solutions to the exterior dirichlet problem for the minimal hypersurface equation is proved in complete noncompact riemannian manifolds either with negative sectional curvature and simply connected or with nonnegative ricci curvature under a growth condition on the sectional curvature . story_separator_special_tag understanding what a character observes in a game as they move through the space is useful for game analysis and design . in this work we describe a tool that generates visibility manifolds given a game terrain and game agent path . our tool accommodates multiple kinds of game agent motion and considers the impact of visual occlusion in order to efficiently generate a closed 3d mesh representation of the area seen over time . the resulting shape demonstrates unintuitive properties of game agent observations , and an efficient unity implementation allows the constructed shape to be used in interactive game design . story_separator_special_tag we construct harmonic diffeomorphisms from the complex plane $ { \\bf c } $ onto any hadamard surface $ \\mathbb { m } $ whose curvature is bounded above by a negative constant . for that , we prove a jenkins-serrin type theorem for minimal graphs in $ \\mathbb { m } \\times \\mathbb { r } $ over domains of $ \\mathbb { m } $ bounded by ideal geodesic polygons and show the existence of a sequence of minimal graphs over polygonal domains converging to an entire minimal graph in $ \\mathbb { m } \\times \\mathbb { r } $ with the conformal structure of $ { \\bf c } $ . story_separator_special_tag chapter 1. introduction part i : linear equations chapter 2. laplace 's equation 2.1 the mean value inequalities 2.2 maximum and minimum principle 2.3 the harnack inequality 2.4 green 's representation 2.5 the poisson integral 2.6 convergence theorems 2.7 interior estimates of derivatives 2.8 the dirichlet problem ; the method of subharmonic functions 2.9 capacity problems chapter 3. the classical maximum principle 3.1 the weak maximum principle 3.2 the strong maximum principle 3.3 a priori bounds 3.4 gradient estimates for poisson 's equation 3.5 a harnack inequality 3.6 operators in divergence form notes problems chapter 4. poisson 's equation and newtonian potential 4.1 holder continuity 4.2 the dirichlet problem for poisson 's equation 4.3 holder estimates for the second derivatives 4.4 estimates at the boundary 4.5 holder estimates for the first derivatives notes problems chapter 5. banach and hilbert spaces 5.1 the contraction mapping 5.2 the method of continuity 5.3 the fredholm alternative 5.4 dual spaces and adjoints 5.5 hilbert spaces 5.6 the projection theorem 5.7 the riesz representation theorem 5.8 the lax-milgram theorem 5.9 the fredholm alternative in hilbert spaces 5.10 weak compactness notes problems chapter 6. classical solutions ; the schauder approach 6.1 the schauder interior estimates 6.2 boundary and global story_separator_special_tag there exists a well-known criterion for the solvability of the dirichlet problem for the constant mean curvature equation in bounded smooth domains in euclidean space . this classical result was established by serrin in 1969.
focusing on the dirichlet problem for radial vertical graphs , p.-a. nitsche has established existence and non-existence results on account of a criterion based on the notion of a hyperbolic cylinder . in this work we carry out a similar but distinct result in hyperbolic space considering a different dirichlet problem based on another system of coordinates . we consider a non-standard cylinder generated by horocycles cutting orthogonally a geodesic plane $ \\mathcal p $ along the boundary of a domain $ \\omega\\subset \\mathcal p. $ we prove that a non-strict inequality between the mean curvature $ \\mathcal h ' _ { \\mathcal c } ( y ) $ of this cylinder along $ \\partial \\omega $ and the prescribed mean curvature $ \\mathcal h ( y ) , $ i.e. $ \\mathcal h ' _ { \\mathcal c } ( y ) \\geq |\\mathcal h ( y ) | , \\forall y\\in\\partial\\omega $ yields existence of our dirichlet problem . thus we story_separator_special_tag the theme that unifies the articles [ a , b , c , d , e ] composing this dissertation is the existence and non-existence of continuous , entire solutions of non-linear differential equations on a riemannian manifold m . the existence results for such solutions are proved by studying the asymptotic dirichlet problem under various hypotheses on the geometry of the manifold . functions defining minimal graphs are studied in articles [ a ] and [ d ] . article [ a ] deals with an existence result , whereas in [ d ] we obtain both existence and non-existence results with respect to the curvature of m . moreover , p-harmonic functions are also studied in [ d ] . article [ b ] deals with the existence of a-harmonic functions under curvature hypotheses similar to those in [ a ] . in article [ c ] , we study the existence of f-minimal graphs , which generalize the usual minimal graphs . finally , in article [ e ] story_separator_special_tag we show the existence of nonconstant bounded p-harmonic functions on cartan-hadamard manifolds of pinched negative curvature by solving the asymptotic dirichlet problem at infinity for the p-laplacian . more precisely , we prove that given a continuous function h on the sphere at infinity there exists a unique p-harmonic function u on m with boundary values h . story_separator_special_tag we show , by modifying borbély 's example , that there are 3-dimensional cartan hadamard manifolds m , with sectional curvatures $ \\le -1 $ , such that the asymptotic dirichlet problem for a class of quasilinear elliptic pdes , including the minimal graph equation , is not solvable . story_separator_special_tag we study the dirichlet problem at infinity for the p-laplacian and p-regularity of points at infinity on cartan-hadamard manifolds . we also survey the recent result of the authors and lang on the asymptotic dirichlet problem for p-harmonic functions on gromov hyperbolic metric measure spaces . story_separator_special_tag we consider brownian motion $ x $ on a rotationally symmetric manifold $ m_g = ( \\mathbb { r } ^n , ds^2 ) , ds^2 = dr^2 + g ( r ) ^2 d\\theta^2 $ . an integral test is presented which gives a necessary and sufficient condition for the nontriviality of the invariant $ \\sigma $ -field of $ x $ , hence for the existence of nonconstant bounded harmonic functions on $ m_g $ . conditions on the sectional curvatures are given which imply the convergence or the divergence of the test integral .
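the rotationally symmetric setting of the preceding abstract admits a classical closed-form type test , discussed for surfaces in the 1977 monthly note cited a little further below : with metric ds^2 = dr^2 + g ( r )^2 d\theta^2 , the surface is conformally parabolic , and hence carries no nonconstant bounded harmonic functions , exactly when the integral of dr / g ( r ) over [ 1 , infinity ) diverges . a small numerical probe of that dichotomy , assuming scipy is available and with cutoffs chosen arbitrarily :

import numpy as np
from scipy.integrate import quad

# integrands 1/g(r); growth of the integral as the cutoff increases
# indicates the parabolic type, stabilization the hyperbolic type
integrands = {
    "euclidean   g(r) = r":          lambda r: 1.0 / r,
    # algebraically 1/sinh(r), rewritten to avoid overflow at large r
    "hyperbolic  g(r) = sinh r":     lambda r: 2.0 * np.exp(-r) / (1.0 - np.exp(-2.0 * r)),
    "slow growth g(r) = r log(1+r)": lambda r: 1.0 / (r * np.log1p(r)),
}
for name, f in integrands.items():
    for upper in (1e2, 1e4, 1e6):
        val, _ = quad(f, 1.0, upper, limit=500)
        print(f"{name:30s} cutoff {upper:8.0e}  integral {val:9.4f}")
# the first and third integrals keep growing with the cutoff (parabolic);
# the second stabilizes near log(coth(1/2)), about 0.77 (hyperbolic)

march's test in the abstract above refines this dichotomy to higher dimensions , where conformal type alone no longer decides the question .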
story_separator_special_tag in this paper , we develop the theory of properly embedded minimal surfaces in $ m \\times \\mathbb { r } $ , where $ m $ is a closed orientable riemannian surface . we construct many examples of different topology and geometry . we establish several global results . the first of these theorems states that examples of bounded curvature have linear area growth , and so , are quasiperiodic . we then apply this theorem to study and classify the stable examples . we prove the topological result that every example has a finite number of ends . we apply the recent theory of colding and minicozzi to prove that examples of finite topology have bounded curvature . also we prove the topological unicity of the embedding of some of these surfaces . story_separator_special_tag ( 1977 ) . on deciding whether a surface is parabolic or hyperbolic . the american mathematical monthly , vol . 84 , no . 1 , pp . 43-46 . story_separator_special_tag abstract . we prove regularity results for certain degenerate quasilinear elliptic systems with coefficients which depend on two different weights . by using sobolev and poincare inequalities due to chanillo and wheeden [ s. chanillo , r.l . wheeden , weighted poincare and sobolev inequalities and estimates for weighted peano maximal functions , amer . j. math . 107 ( 1985 ) 1191-1226 ; s. chanillo , r.l . wheeden , harnack 's inequality and mean-value inequalities for solutions of degenerate elliptic equations , comm . partial differential equations 11 ( 1986 ) 1111-1134 ] we derive a new weak harnack inequality and adapt an idea due to l. caffarelli [ l.a. caffarelli , regularity theorems for weak solutions of some nonlinear systems , comm . pure appl . math . 35 ( 1982 ) 833-838 ] to prove a priori estimates for bounded weak solutions . for example we show that every bounded weak solution of the system $ \\mathrm { div } ( a ( x , u , \\nabla u ) \\nabla u^i ) = 0 $ with $ \\lambda | x |^ { 2\\alpha } | \\xi |^2 \\leq a ( x , u , \\nabla u ) \\xi \\cdot \\xi \\leq \\lambda^ { -1 } | x |^ { 2\\alpha } | \\xi |^2 $ , $ | x | \\leq 1 $ story_separator_special_tag in this paper , we study the asymptotic plateau problem in $ \\mathbb { h } ^2 \\times \\mathbb { r } $ . we construct the first examples of non-fillable finite curves with no thin tail in $ s^1 \\times \\mathbb { r } $ . story_separator_special_tag we obtain conditions on the behavior at infinity of the mean curvature of a graph under volume growth assumptions . an $ l^q $ comparison result is also given . story_separator_special_tag it is proved that if $ m $ is a rotationally symmetric hadamard surface which is conformally equivalent to the hyperbolic disk then the asymptotic dirichlet problem for the minimal surface equation is uniquely solvable for any continuous asymptotic boundary data . this result gives a partial answer to a question in galvez and rosenberg ( am . j. math . 132:1249-1273 , 2010 ) about the existence of entire minimal graphs on hadamard surfaces with sectional curvature possibly degenerating at infinity . story_separator_special_tag let $ m $ be a hadamard manifold with sectional curvature $ k_ { m } \\leq-k^ { 2 } $ , $ k > 0 $ . denote by $ \\partial_ { \\infty } m $ the asymptotic boundary of $ m $ . we say that $ m $ satisfies the strict convexity condition ( sc condition ) if , given $ x\\in\\partial_ { \\infty } m $ and a relatively open subset $ w\\subset\\partial_ { \\infty } m $ containing $ x $ , there exists a $ c^ { 2 } $ open subset $ \\omega\\subset m $ such that $ x\\in\\operatorname * { int } ( \\partial_ { \\infty } \\omega ) \\subset w $ and $ m\\setminus\\omega $ is convex .
we prove that the sc condition implies that $ m $ is regular at infinity relative to the operator $$ \\mathcal { q } [ u ] : = \\mathrm { div } \\left ( \\frac { a ( | \\nabla u | ) } { | \\nabla u | } \\nabla u \\right ) , $$ subject to some conditions . it follows that under the sc condition , the dirichlet problem for the minimal hypersurface story_separator_special_tag we show that a properly immersed minimal hypersurface in $ m \\times \\mathbb { r } _+ $ equals some slice $ m \\times \\ { c \\ } $ when $ m $ is a complete , recurrent n-dimensional riemannian manifold with bounded curvature . if , on the other hand , $ m $ has nonnegative ricci curvature with curvature bounded below , the same result holds for any positive entire minimal graph over $ m $ . story_separator_special_tag in this paper we prove a general and sharp asymptotic theorem for minimal surfaces in $ h^2\\times r $ . as a consequence , we prove that there is no properly immersed minimal surface whose asymptotic boundary $ c $ is a jordan curve homologous to zero in the asymptotic boundary of $ h^2\\times r $ , say $ \\partial_\\infty h^2\\times r $ , such that $ c $ is contained in a slab between two horizontal circles of $ \\partial_\\infty h^2\\times r $ with width equal to $ \\pi $ . we construct minimal vertical graphs in $ h^2\\times r $ over certain unbounded admissible domains taking certain prescribed finite boundary data and certain prescribed asymptotic boundary data . our admissible unbounded domains $ \\omega $ in $ h^2\\times \\ { 0\\ } $ are not necessarily convex and not necessarily bounded by convex arcs ; each component of their boundary is properly embedded with zero , one or two points on its asymptotic boundary , satisfying a further geometric condition . story_separator_special_tag this paper deals with the local behavior of solutions of quasi-linear partial differential equations of second order in $ n \\geq 2 $ independent variables . we shall be concerned specifically with the a priori majorization of solutions , the nature of removable singularities , and the behavior of a positive solution in the neighborhood of an isolated singularity . corresponding results are for the most part well known for the case of the laplace equation ; roughly speaking , our work constitutes an extension of these results to a wide class of non-linear equations . throughout the paper we are concerned with real quasi-linear equations of the general form $ \\mathrm { div } \\ , \\mathcal { a } ( x , u , u_x ) = \\mathcal { b } ( x , u , u_x ) $ . ( 1 ) story_separator_special_tag this paper is concerned with the existence of solutions of the dirichlet problem for quasilinear elliptic partial differential equations of second order , the conclusions being in the form of necessary conditions and sufficient conditions for this problem to be solvable in a given domain with arbitrarily assigned smooth boundary data . a central position in the discussion is played by the concept of global barrier functions and by certain fundamental invariants of the equation . with the help of these invariants we are able to distinguish an important class of regularly elliptic equations which , as far as the dirichlet problem is concerned , behave comparably to uniformly elliptic equations . for equations which are not regularly elliptic it is necessary to impose significant restrictions on the curvatures of the boundaries of the underlying domains in order for the dirichlet problem to be generally solvable ; the determination of the precise form of these restrictions constitutes a second primary aim of the paper .
by maintaining a high level of generality throughout , we are able to treat as special examples the minimal surface equation , the equation for surfaces having prescribed mean curvature , and a number of story_separator_special_tag given an unbounded domain $ \\omega $ of a hadamard manifold $ m $ , it makes sense to consider the problem of finding minimal graphs with prescribed continuous data on its cone-topology boundary , i.e. , on its ordinary boundary together with its asymptotic boundary . in this article it is proved that under the hypothesis that the sectional curvature of $ m $ is $ \\le -1 $ this dirichlet problem is solvable if $ \\omega $ satisfies a certain convexity condition at infinity and if $ \\partial \\omega $ is mean convex . we also prove that mean convexity of $ \\partial \\omega $ is a necessary condition , extending to unbounded domains some results that are valid on bounded ones . story_separator_special_tag we study the dirichlet problem at infinity for a-harmonic functions on a cartan-hadamard manifold $ m $ and give a sufficient condition for a point at infinity $ x_0 \\in m ( \\infty ) $ to be a-regular . this condition is local in the sense that it only involves sectional curvatures of $ m $ in a set $ u \\cap m $ , where $ u $ is an arbitrary neighborhood of $ x_0 $ in the cone topology . the results apply to the laplacian and p-laplacian , $ 1 < p < \\infty $ , as special cases . story_separator_special_tag we consider one formulation of the dirichlet problem for a-harmonic functions on an unbounded domain of a riemannian manifold . more specifically , if $ m $ is a connected riemannian manifold , $ \\omega \\subset m $ is an unbounded domain , and $ \\theta : m \\to \\mathbb { r } $ is a bounded lipschitz function , then we provide a sufficient condition so that there exists an a-harmonic function $ u : \\omega \\to \\mathbb { r } $ such that $ \\lim_ { x \\to x_0 } u ( x ) = \\theta ( x_0 ) $ for every $ x_0 \\in \\partial \\omega $ and $ | u ( x ) - \\theta ( x ) | \\to 0 $ as $ d ( x , o ) \\to \\infty $ , where $ o \\in m $ is a fixed basepoint . this condition involves geometric inequalities for $ m $ and an integral condition for $ | \\nabla \\theta | $ . we then apply these results in the context of the dirichlet problem at infinity on a cartan-hadamard manifold and prove new solvability results . the existence of globally defined bounded nonconstant harmonic functions on a given riemannian manifold $ m = m^n $ depends heavily on the manifold . yau [ 13 ] proved that if $ m $ has nonnegative ricci curvature , then there are no positive ( or bounded ) harmonic functions other than constants
templates are an important asset for question answering over knowledge graphs , simplifying the semantic parsing of input utterances and generating structured queries for interpretable answers . state-of-the-art methods rely on hand-crafted templates with limited coverage . this paper presents quint , a system that automatically learns utterance-query templates solely from user questions paired with their answers . additionally , quint is able to harness language compositionality for answering complex questions without having any templates for the entire question . experiments with different benchmarks demonstrate the high quality of quint . story_separator_special_tag translating natural language questions to semantic representations such as sparql is a core challenge in open-domain question answering over knowledge bases ( kb-qa ) . existing methods rely on a clear separation between an offline training phase , where a model is learned , and an online phase where this model is deployed . two major shortcomings of such methods are that ( i ) they require access to a large annotated training set that is not always readily available and ( ii ) they fail on questions from before-unseen domains . to overcome these limitations , this paper presents neqa , a continuous learning paradigm for kb-qa . offline , neqa automatically learns templates mapping syntactic structures to semantic ones from a small number of training question-answer pairs . once deployed , continuous learning is triggered on cases where templates are insufficient . using a semantic similarity function between questions and by judicious invocation of non-expert user feedback , neqa learns new templates that capture previously-unseen syntactic structures . this way , neqa gradually extends its template repository . neqa periodically re-trains its underlying models , allowing it to adapt to the language used after deployment . our experiments story_separator_special_tag webquestions and simplequestions are two benchmark data-sets commonly used in recent knowledge-based question answering ( kbqa ) work . most questions in them are simple questions which can be answered based on a single relation in the knowledge base . such data-sets lack the capability of evaluating kbqa systems on complicated questions . motivated by this issue , we release a new data-set , namely complexquestions , aiming to measure the quality of kbqa systems on multi-constraint questions which require multiple knowledge base relations to get the answer . besides , we propose a novel systematic kbqa approach to solve multi-constraint questions . compared to state-of-the-art methods , our approach not only obtains comparable results on the two existing benchmark data-sets , but also achieves significant improvements on complexquestions . story_separator_special_tag real-world factoid or list questions often have a simple structure , yet are hard to match to facts in a given knowledge base due to high representational and linguistic variability . for example , answering `` who is the ceo of apple '' on freebase requires a match to an abstract `` leadership '' entity with three relations `` role '' , `` organization '' and `` person '' , and two other entities `` apple inc '' and `` managing director '' . recent years have seen a surge of research activity on learning-based solutions for this problem .
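to make the `` ceo of apple '' example concrete , the following sketch shows the mediated lookup it describes as a sparql-style query assembled in python . this is our illustration only : the predicate names and entity identifiers are placeholders in the spirit of the freebase schema , not verified freebase ids .

import textwrap

# hypothetical predicate and entity names; real freebase uses opaque
# machine ids (mids), so treat every identifier below as illustrative.
def build_mediated_query(org_id: str, role_id: str) -> str:
    # the ?leadership variable plays the role of the abstract mediator
    # entity from the abstract above: it ties together the organization,
    # the role, and the person holding that role.
    return textwrap.dedent(f"""
        select ?person where {{
          ?leadership ns:organization.leadership.organization ns:{org_id} .
          ?leadership ns:organization.leadership.role         ns:{role_id} .
          ?leadership ns:organization.leadership.person       ?person .
        }}
    """)

print(build_mediated_query("apple_inc", "managing_director"))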
we further advance the state of the art by adopting learning-to-rank methodology and by fully addressing the inherent entity recognition problem , which was neglected in recent works . we evaluate our system , called aqqu , on two standard benchmarks , free917 and webquestions , improving the previous best result for each benchmark considerably . these two benchmarks exhibit quite different challenges , and many of the existing approaches were evaluated ( and work well ) only for one of them . we also consider efficiency aspects and take care that all questions can be answered interactively ( story_separator_special_tag a central challenge in semantic parsing is handling the myriad ways in which knowledge base predicates can be expressed . traditionally , semantic parsers are trained primarily from text paired with knowledge base information . our goal is to exploit the much larger amounts of raw text not tied to any knowledge base . in this paper , we turn semantic parsing on its head . given an input utterance , we first use a simple method to deterministically generate a set of candidate logical forms with a canonical realization in natural language for each . then , we use a paraphrase model to choose the realization that best paraphrases the input , and output the corresponding logical form . we present two simple paraphrase models , an association model and a vector space model , and train them jointly from question-answer pairs . our system parasempre improves state-of-the-art accuracies on two recently released question-answering datasets . story_separator_special_tag in this paper , we train a semantic parser that scales up to freebase . instead of relying on annotated logical forms , which are especially expensive to obtain at large scale , we learn from question-answer pairs . the main challenge in this setting is narrowing down the huge number of possible logical predicates for a given question . we tackle this problem in two ways : first , we build a coarse mapping from phrases to predicates using a knowledge base and a large text corpus . second , we use a bridging operation to generate additional predicates based on neighboring predicates . on the dataset of cai and yates ( 2013 ) , despite not having annotated logical forms , our system outperforms their state-of-the-art parser . additionally , we collected a more realistic and challenging dataset of question-answer pairs , on which our system improves over a natural baseline . story_separator_special_tag recent years have seen a surge of knowledge-based question answering ( kb-qa ) systems which provide crisp answers to user-issued questions by translating them to precise structured queries over a knowledge base ( kb ) . a major challenge in kb-qa is bridging the gap between natural language expressions and the complex schema of the kb . as a result , existing methods focus on simple questions answerable with one main relation path in the kb and struggle with complex questions that require joining multiple relations . we propose a kb-qa system , textray , which answers complex questions using a novel decompose-execute-join approach . it constructs complex query patterns using a set of simple queries . it uses a semantic matching model which is able to learn simple queries using implicit supervision from question-answer pairs , thus eliminating the need for complex query patterns . our proposed system significantly outperforms existing kb-qa systems on complex questions while achieving comparable results on simple questions .
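as a toy illustration of the decompose-execute-join idea just described ( our own simplification , not textray 's code ) , a complex question can be reduced to a chain of single-relation queries whose intermediate answer sets are joined by feeding each into the next hop :

def run_hop(triples, heads, relation):
    # one simple query: follow `relation` from every entity in `heads`.
    return {t for (h, r, t) in triples if r == relation and h in heads}

def answer_by_decomposition(triples, topic_entity, relation_path):
    # execute the decomposed chain, joining the results hop by hop.
    frontier = {topic_entity}
    for relation in relation_path:
        frontier = run_hop(triples, frontier, relation)
    return frontier

# tiny worked example over a toy kb (entities and relations invented):
kb = {("apple_inc", "founded_by", "steve_jobs"),
      ("steve_jobs", "born_in", "san_francisco")}
assert answer_by_decomposition(kb, "apple_inc",
                               ["founded_by", "born_in"]) == {"san_francisco"}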
story_separator_special_tag freebase is a practical , scalable tuple database used to structure general human knowledge . the data in freebase is collaboratively created , structured , and maintained . freebase currently contains more than 125,000,000 tuples , more than 4000 types , and more than 7000 properties . public read/write access to freebase is allowed through an http-based graph-query api using the metaweb query language ( mql ) as a data query and manipulation language . mql provides an easy-to-use object-oriented interface to the tuple data in freebase and is designed to facilitate the creation of collaborative , web-based , data-oriented applications . story_separator_special_tag training large-scale question answering systems is complicated because training sources usually cover a small portion of the range of possible questions . this paper studies the impact of multitask and transfer learning for simple question answering ; a setting for which the reasoning required to answer is quite easy , as long as one can retrieve the correct evidence given a question , which can be difficult in large-scale conditions . to this end , we introduce a new dataset of 100k questions that we use in conjunction with existing benchmarks . we conduct our study within the framework of memory networks ( weston et al. , 2015 ) because this perspective allows us to eventually scale up to more complex reasoning , and show that memory networks can be successfully trained to achieve excellent performance . story_separator_special_tag one of the most remarkable properties of word embeddings is the fact that they capture certain types of semantic and syntactic relationships . recently , pre-trained language models such as bert have achieved groundbreaking results across a wide range of natural language processing tasks . however , it is unclear to what extent such models capture relational knowledge beyond what is already captured by standard word embeddings . to explore this question , we propose a methodology for distilling relational knowledge from a pre-trained language model . starting from a few seed instances of a given relation , we first use a large text corpus to find sentences that are likely to express this relation . we then use a subset of these extracted sentences as templates . finally , we fine-tune a language model to predict whether a given word pair is likely to be an instance of some relation , when given an instantiated template for that relation as input . story_separator_special_tag supervised training procedures for semantic parsers produce high-quality semantic parsers , but they have difficulty scaling to large databases because of the sheer number of logical constants for which they must see labeled training data . we present a technique for developing semantic parsers for large databases based on a reduction to standard supervised training algorithms , schema matching , and pattern learning . leveraging techniques from each of these areas , we develop a semantic parser for freebase that is capable of parsing questions with an f1 that improves by 0.42 over a purely-supervised learning algorithm . story_separator_special_tag question answering has emerged as an intuitive way of querying structured data sources , and has attracted significant advancements over the years . in this article , we provide an overview of these recent advancements , focusing on neural network based question answering systems over knowledge graphs .
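for readers unfamiliar with mql ( mentioned in the freebase abstract above ) , read queries are json objects in which empty values mark the fields the service should fill in . the sketch below shows only the general shape ; the property paths are illustrative and not verified against the freebase schema .

import json

# an mql-style fill-in-the-blank query: the empty list asks the service
# to return all albums of the named artist. property paths illustrative.
mql_query = [{
    "type": "/music/artist",
    "name": "the beatles",
    "album": [],
}]
print(json.dumps(mql_query, indent=2))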
we introduce readers to the challenges in the task and the current paradigms of approaches , discuss notable advancements , and outline the emerging trends in the field . through this article , we aim to provide newcomers to the field with a suitable entry point , and ease their process of making informed decisions while creating their own qa system . story_separator_special_tag in relation extraction for knowledge-based question answering , searching from one entity to another entity via a single relation is called one hop . in related work , an exhaustive search over all one-hop relations , two-hop relations , and so on up to the max-hop relations in the knowledge graph is necessary but expensive . therefore , the number of hops is generally restricted to two or three . in this paper , we propose uhop , an unrestricted-hop framework which relaxes this restriction by using a transition-based search framework in place of the relation-chain-based one . we conduct experiments on conventional 1- and 2-hop questions as well as lengthy questions , including datasets such as webqsp , pathquestion , and grid world . results show that the proposed framework provides the ability to halt , works well with state-of-the-art models , achieves competitive performance without exhaustive searches , and opens the performance gap for long relation paths . story_separator_special_tag formal query building is an important part of complex question answering over knowledge bases . it aims to build correct executable queries for questions . recent methods try to rank candidate queries generated by a state-transition strategy . however , this candidate generation strategy ignores the structure of queries , resulting in a considerable number of noisy queries . in this paper , we propose a new formal query building approach that consists of two stages . in the first stage , we predict the query structure of the question and leverage the structure to constrain the generation of the candidate queries . we propose a novel graph generation framework to handle the structure prediction task and design an encoder-decoder model to predict the argument of the predetermined operation in each generative step . in the second stage , we follow the previous methods to rank the candidate queries . the experimental results show that our formal query building approach outperforms existing methods on complex questions while staying competitive on simple questions . story_separator_special_tag formal query generation aims to generate correct executable queries for question answering over knowledge bases ( kbs ) , given entity and relation linking results . current approaches build universal paraphrasing or ranking models for whole questions , which are likely to fail in generating queries for complex , long-tail questions . in this paper , we propose subqg , a new query generation approach based on frequent query substructures , which helps rank the existing ( but non-significant ) query structures or build new query structures . our experiments on two benchmark datasets show that our approach significantly outperforms the existing ones , especially for complex questions . also , it achieves promising performance with limited training data and noisy entity/relation linking results . story_separator_special_tag answering natural language questions over a knowledge base is an important and challenging task .
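a minimal sketch of the hop-by-hop candidate-path growth that uhop-style frameworks perform ( our toy version ; the relation-scoring function and the triple layout are stand-ins , and a real system would also learn when to halt , which is exactly the ability highlighted above ) :

def grow_paths(triples, score_relation, beam, width=4):
    # one expansion step of beam search over relation paths: extend each
    # candidate path by every relation leaving its entity frontier, score
    # the extension against the question, and keep the `width` best.
    expanded = []
    for path, frontier, score in beam:
        relations = {r for (h, r, t) in triples if h in frontier}
        for rel in relations:
            nxt = {t for (h, r, t) in triples if r == rel and h in frontier}
            expanded.append((path + [rel], nxt, score + score_relation(rel)))
    expanded.sort(key=lambda cand: cand[2], reverse=True)
    return expanded[:width]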
most existing systems typically rely on hand-crafted features and rules to conduct question understanding and/or answer ranking . in this paper , we introduce multi-column convolutional neural networks ( mccnns ) to understand questions from three different aspects ( namely , answer path , answer context , and answer type ) and learn their distributed representations . meanwhile , we jointly learn low-dimensional embeddings of entities and relations in the knowledge base . question-answer pairs are used to train the model to rank candidate answers . we also leverage question paraphrases to train the column networks in a multi-task learning manner . we use freebase as the knowledge base and conduct extensive experiments on the webquestions dataset . experimental results show that our method achieves better or comparable performance compared with baseline systems . in addition , we develop a method to compute the salience scores of question words in different column networks . the results help us intuitively understand what mccnns learn . story_separator_special_tag question answering ( qa ) over a knowledge base ( kb ) aims to automatically answer natural language questions via well-structured relation information between entities stored in knowledge bases . in order to make kbqa more applicable in actual scenarios , researchers have shifted their attention from simple questions to complex questions , which require more kb triples and constraint inference . in this paper , we introduce the recent advances in complex qa . besides traditional methods relying on templates and rules , the research is categorized into a taxonomy that contains two main branches , namely information retrieval-based and neural semantic parsing-based . after describing the methods of these branches , we analyze directions for future research and introduce the models proposed by the alime team . story_separator_special_tag recovering the lost voice : the cathartic prose of su tong and yu hua . the cultural revolution left distinct scars on the memory of the chinese people and deeply influenced chinese literature , especially the works written by representatives of the generation that personally experienced its excesses . yu hua in the novels to live ! and chronicle of a blood merchant , and su tong in the novel binu , grapple with the suffering and traumas bound up with chinese history . trying to overcome the trauma , they draw on the tradition of oral literature in search of new , linear , coherent , and sometimes even optimistic narratives about the past experiences of ordinary people , who are given a chance to create their own stories in opposition to the dominant historiographic discourse . walter ong 's theory of orality and michel foucault 's notion of counter-history open new possibilities for a researcher attempting to analyze and understand these works and their cathartic effect . a careful reading of su tong 's and yu hua 's novels also raises the question of the authenticity of the healing from trauma presented in them and the risk of their complicity in story_separator_special_tag the incompleteness of knowledge bases ( kb ) is a vital factor limiting the performance of question answering ( qa ) . this paper proposes a novel qa method that leverages text information to enhance the incomplete kb . the model enriches the entity representation through semantic information contained in the text , and employs graph convolutional networks to update the entity status .
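the entity-update step mentioned just above is typically some variant of the well-known graph-convolution layer ; the following is a generic kipf-welling style sketch , not the paper 's exact architecture :

import numpy as np

def gcn_layer(entity_states, adjacency, weights):
    # symmetrically normalized neighborhood averaging followed by a
    # linear map and a relu: h' = relu(d^{-1/2} (a + i) d^{-1/2} h w).
    a_hat = adjacency + np.eye(adjacency.shape[0])
    degree = a_hat.sum(axis=1)
    d_inv_sqrt = np.diag(1.0 / np.sqrt(degree))
    propagated = d_inv_sqrt @ a_hat @ d_inv_sqrt @ entity_states @ weights
    return np.maximum(propagated, 0.0)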
furthermore , to exploit the latent structural information of text , we treat the text as hyperedges connecting the entities within it , complementing the deficient relations in the kb , and hypergraph convolutional networks are further applied to reason on the hypergraph-formed text . extensive experiments on the webquestionssp benchmark with different kb settings prove the effectiveness of our model . story_separator_special_tag multi-hop knowledge base question answering ( kbqa ) aims at finding the answers to a factoid question by reasoning across multiple triples . note that when a human performs multi-hop reasoning , one tends to concentrate on a specific relation at different hops and pinpoint a group of entities connected by the relation . hypergraph convolutional networks ( hgcn ) can simulate this behavior by leveraging hyperedges to connect more than two nodes , going beyond pairwise connections . however , hgcn is designed for undirected graphs and does not consider the direction of information transmission . we introduce the directed hgcn ( dhgcn ) to adapt to knowledge graphs with directionality . inspired by humans ' hop-by-hop reasoning , we propose an interpretable kbqa model based on dhgcn , namely two-phase hypergraph based reasoning with dynamic relations , which explicitly updates relation information and dynamically pays attention to different relations at different hops . moreover , the model predicts relations hop-by-hop to generate an intermediate relation path . we conduct extensive experiments on two widely used multi-hop kbqa datasets to prove the effectiveness of our model . story_separator_special_tag the task of knowledge graph completion ( kgc ) aims to automatically infer the missing fact information in a knowledge graph ( kg ) . in this paper , we take a new perspective that aims to leverage rich user-item interaction data ( user interaction data for short ) for improving the kgc task . our work is inspired by the observation that many kg entities correspond to online items in application systems . however , the two kinds of data sources have very different intrinsic characteristics , and a simple fusion strategy is likely to hurt the original performance . to address this challenge , we propose a novel adversarial learning approach that leverages user interaction data for the kgc task . our generator is isolated from user interaction data , and serves to improve the performance of the discriminator . the discriminator takes the learned useful information from user interaction data as input , and gradually enhances its evaluation capacity in order to identify the fake samples generated by the generator . to discover the implicit entity preference of users , we design an elaborate collaborative learning algorithm based on graph neural networks , which will be jointly optimized story_separator_special_tag multi-hop knowledge base question answering ( kbqa ) aims to find the answer entities that are multiple hops away in the knowledge base ( kb ) from the entities in the question . a major challenge is the lack of supervision signals at intermediate steps . therefore , multi-hop kbqa algorithms can only receive feedback from the final answer , which makes the learning unstable or ineffective . to address this challenge , we propose a novel teacher-student approach for the multi-hop kbqa task .
in our approach , the student network aims to find the correct answer to the query , while the teacher network tries to learn intermediate supervision signals for improving the reasoning capacity of the student network . the major novelty lies in the design of the teacher network , where we utilize both forward and backward reasoning to enhance the learning of intermediate entity distributions . by considering bidirectional reasoning , the teacher network can produce more reliable intermediate supervision signals , which can alleviate the issue of spurious reasoning . extensive experiments on three benchmark datasets have demonstrated the effectiveness of our approach on the kbqa task . story_separator_special_tag rdf question/answering ( q/a ) allows users to ask questions in natural language over a knowledge base represented by rdf . to answer a natural language question , the existing work takes a two-stage approach : question understanding and query evaluation . its focus is on question understanding , to deal with the disambiguation of natural language phrases . the most common technique is joint disambiguation , which has an exponential search space . in this paper , we propose a systematic framework to answer natural language questions over an rdf repository ( rdf q/a ) from a graph data-driven perspective . we propose a semantic query graph to model the query intention in the natural language question in a structural way , based on which rdf q/a is reduced to a subgraph matching problem . more importantly , we resolve the ambiguity of natural language questions at the time when matches of the query are found . the cost of disambiguation is saved if no matches are found . more specifically , we propose two different frameworks to build the semantic query graph : one is relation ( edge ) -first and the other is node-first . we story_separator_special_tag although natural language question answering over knowledge graphs has been studied in the literature , existing methods have some limitations in answering complex questions . to address that , in this paper , we propose a state transition-based approach to translate a complex natural language question n to a semantic query graph ( sqg ) , which is used to match the underlying knowledge graph to find the answers to question n . in order to generate the sqg , we propose four primitive operations ( expand , fold , connect and merge ) and a learning-based state transition approach . extensive experiments on several benchmarks ( such as qald , webquestions and complexquestions ) with two knowledge bases ( dbpedia and freebase ) confirm the superiority of our approach compared with the state of the art . story_separator_special_tag programming massively parallel processors : a hands-on approach shows both students and professionals alike the basic concepts of parallel programming and gpu architecture . various techniques for constructing parallel programs are explored in detail . case studies demonstrate the development process , which begins with computational thinking and ends with effective and efficient parallel programs . topics of performance , floating-point format , parallel patterns , and dynamic parallelism are covered in depth . this best-selling guide to cuda and gpu parallel programming has been revised with more parallel programming examples , commonly-used libraries such as thrust , and explanations of the latest tools .
with these improvements , the book retains its concise , intuitive , practical approach based on years of road-testing in the authors ' own parallel computing courses . updates in this new edition include : new coverage of cuda 5.0 , improved performance , enhanced development tools , and increased hardware support ; increased coverage of related technology , including opencl ; new material on algorithm patterns , gpu clusters , host programming , and data parallelism ; and two new case studies ( on mri reconstruction and molecular visualization ) that explore the latest applications of cuda story_separator_special_tag question answering is an effective method for obtaining information from knowledge bases ( kbs ) . in this paper , we propose ns-cqa , a data-efficient reinforcement learning framework for complex question answering that uses only a modest number of training samples . our framework consists of a neural generator and a symbolic executor that , respectively , transform a natural-language question into a sequence of primitive actions and execute them over the knowledge base to compute the answer . we carefully formulate a set of primitive symbolic actions that allows us to not only simplify our neural network design but also accelerate model convergence . to reduce the search space , we employ copy and masking mechanisms in our encoder-decoder architecture to drastically reduce the decoder output vocabulary and improve model generalizability . we equip our model with a memory buffer that stores high-reward promising programs . besides , we propose an adaptive reward function . by comparing the generated trial with the trials stored in the memory buffer , we derive the curriculum-guided reward bonus , i.e. , the proximity and the novelty . to mitigate the sparse reward problem , we combine the adaptive reward and story_separator_special_tag we introduce the neural state machine , seeking to bridge the gap between the neural and symbolic views of ai and integrate their complementary strengths for the task of visual reasoning . given an image , we first predict a probabilistic graph that represents its underlying semantics and serves as a structured world model . then , we perform sequential reasoning over the graph , iteratively traversing its nodes to answer a given question or draw a new inference . in contrast to most neural architectures that are designed to closely interact with the raw sensory data , our model operates instead in an abstract latent space , by transforming both the visual and linguistic modalities into semantic concept-based representations , thereby achieving enhanced transparency and modularity . we evaluate our model on vqa-cp and gqa , two recent vqa datasets that involve compositionality , multi-step inference and diverse reasoning skills , achieving state-of-the-art results in both cases . we provide further experiments that illustrate the model 's strong generalization capacity across multiple dimensions , including novel compositions of concepts , changes in the answer distribution , and unseen linguistic structures , demonstrating the qualities and efficacy of our approach story_separator_special_tag in the task of question answering , memory networks have recently been shown to be quite effective at complex reasoning as well as scalability , in spite of the limited range of topics covered in training data . in this paper , we introduce the factual memory network , which learns to answer questions by extracting and reasoning over relevant facts from a knowledge base .
our system generates distributed representations of questions and the kb in the same word vector space , extracts a subset of initial candidate facts , and then tries to find a path to the answer entity using multi-hop reasoning and refinement . additionally , we also improve the run-time efficiency of our model using various computational heuristics . story_separator_special_tag recent work has presented intriguing results examining the knowledge contained in language models ( lms ) by having the lm fill in the blanks of prompts such as `` obama is a _ by profession '' . these prompts are usually manually created , and quite possibly sub-optimal ; another prompt such as `` obama worked as a _ '' may result in more accurately predicting the correct profession . because of this , given an inappropriate prompt , we might fail to retrieve facts that the lm does know , and thus any given prompt only provides a lower bound estimate of the knowledge contained in an lm . in this paper , we attempt to more accurately estimate the knowledge contained in lms by automatically discovering better prompts to use in this querying process . specifically , we propose mining-based and paraphrasing-based methods to automatically generate high-quality and diverse prompts , as well as ensemble methods to combine answers from different prompts . extensive experiments on the lama benchmark for extracting relational knowledge from lms demonstrate that our methods can improve accuracy from 31.1 % to 39.6 % , providing a tighter lower bound on what lms know story_separator_special_tag knowledge base question answering ( kbqa ) is an important task in natural language processing . existing approaches face significant challenges including complex question understanding , the necessity for reasoning , and the lack of large training datasets . in this work , we propose a semantic parsing and reasoning-based neuro-symbolic question answering ( nsqa ) system that leverages ( 1 ) abstract meaning representation ( amr ) parses for task-independent question understanding ; ( 2 ) a novel path-based approach to transform amr parses into candidate logical queries that are aligned to the kb ; ( 3 ) a neuro-symbolic reasoner called logical neural network ( lnn ) that executes logical queries and reasons over kb facts to provide an answer ; ( 4 ) a system-of-systems approach , which integrates multiple , reusable modules that are trained specifically for their individual tasks ( e.g . semantic parsing , entity linking , and relationship linking ) and do not require end-to-end training data . nsqa achieves state-of-the-art performance on qald-9 and lc-quad 1.0 . nsqa 's novelty lies in its modular neuro-symbolic architecture and its task-general approach to interpreting natural language questions . story_separator_special_tag we consider the challenge of learning semantic parsers that scale to large , open-domain problems , such as question answering with freebase . in such settings , the sentences cover a wide variety of topics and include many phrases whose meaning is difficult to represent in a fixed target ontology . for example , even simple phrases such as `` daughter '' and `` number of people living in '' can not be directly represented in freebase , whose ontology instead encodes facts about gender , parenthood , and population . in this paper , we introduce a new semantic parsing approach that learns to resolve such ontological mismatches .
the parser is learned from question-answer pairs , uses a probabilistic ccg to build linguistically motivated logical-form meaning representations , and includes an ontology matching model that adapts the output logical forms for each target ontology . experiments demonstrate state-of-the-art performance on two benchmark semantic parsing datasets , including a nine point accuracy improvement on a recent freebase qa corpus . story_separator_special_tag previous work on answering complex questions from knowledge bases usually separately addresses two types of complexity : questions with constraints and questions with multiple hops of relations . in this paper , we handle both types of complexity at the same time . motivated by the observation that early incorporation of constraints into query graphs can more effectively prune the search space , we propose a modified staged query graph generation method with more flexible ways to generate query graphs . our experiments clearly show that our method achieves the state of the art on three benchmark kbqa datasets . story_separator_special_tag audio pattern recognition is an important research topic in the machine learning area , and includes several tasks such as audio tagging , acoustic scene classification , music classification , speech emotion classification and sound event detection . recently , neural networks have been applied to tackle audio pattern recognition problems . however , previous systems are built on specific datasets with limited durations . recently , in computer vision and natural language processing , systems pretrained on large-scale datasets have generalized well to several tasks . however , there is limited research on pretraining systems on large-scale datasets for audio pattern recognition . in this paper , we propose pretrained audio neural networks ( panns ) trained on the large-scale audioset dataset . these panns are transferred to other audio related tasks . we investigate the performance and computational complexity of panns modeled by a variety of convolutional neural networks . we propose an architecture called wavegram-logmel-cnn using both log-mel spectrogram and waveform as input features . our best pann system achieves a state-of-the-art mean average precision ( map ) of 0.439 on audioset tagging , outperforming the best previous system ( 0.392 ) . we transfer panns to six audio story_separator_special_tag knowledge base question answering ( kbqa ) is an important task in natural language processing . existing methods for kbqa usually start with entity linking , which considers mostly named entities found in a question as the starting points in the kb to search for answers to the question . however , relying only on entity linking to look for answer candidates may not be sufficient . in this paper , we propose to perform topic unit linking , where topic units cover a wider range of units of a kb . we use a generation-and-scoring approach to gradually refine the set of topic units . furthermore , we use reinforcement learning to jointly learn the parameters for topic unit linking and answer candidate ranking in an end-to-end manner . experiments on three commonly used benchmark datasets show that our method consistently works well and outperforms the previous state of the art on two datasets . story_separator_special_tag knowledge base question answering ( kbqa ) has attracted much attention and recently there has been more interest in multi-hop kbqa .
in this paper , we propose a novel iterative sequence matching model to address several limitations of previous methods for multi-hop kbqa . our method iteratively grows the candidate relation paths that may lead to answer entities . the method prunes away less relevant branches and incrementally assigns matching scores to the paths . empirical results demonstrate that our method can significantly outperform existing methods on three different benchmark datasets . story_separator_special_tag the dbpedia community project extracts structured , multilingual knowledge from wikipedia and makes it freely available on the web using semantic web and linked data technologies . the project extracts knowledge from 111 different language editions of wikipedia . the largest dbpedia knowledge base , which is extracted from the english edition of wikipedia , consists of over 400 million facts that describe 3.7 million things . the dbpedia knowledge bases that are extracted from the other 110 wikipedia editions together consist of 1.46 billion facts and describe 10 million additional things . the dbpedia project maps wikipedia infoboxes from 27 different language editions to a single shared ontology consisting of 320 classes and 1,650 properties . the mappings are created via a world-wide crowd-sourcing effort and enable knowledge from the different wikipedia editions to be combined . the project publishes releases of all dbpedia knowledge bases for download and provides sparql query access to 14 out of the 111 language editions via a global network of local dbpedia chapters . in addition to the regular releases , the project maintains a live knowledge base which is updated whenever a page in wikipedia changes . dbpedia sets 27 million rdf links pointing story_separator_special_tag the availability of large amounts of open , distributed and structured semantic data on the web has no precedent in the history of computer science . in recent years , there have been important advances in semantic search and question answering over rdf data . in particular , natural language interfaces to online semantic data have the advantage that they can exploit the expressive power of semantic web data models and query languages , while at the same time hiding their complexity from the user . however , despite the increasing interest in this area , there are no evaluations so far that systematically evaluate this kind of system , in contrast to traditional question answering and search interfaces to document spaces . to address this gap , we have set up a series of evaluation challenges for question answering over linked data . the main goal of the challenge was to get insight into the strengths , capabilities and current shortcomings of question answering systems as interfaces to query linked data sources , as well as benchmarking how these interaction paradigms can deal with the fact that the amount of rdf data available on the web is very large story_separator_special_tag answering complex questions that involve multiple entities and multiple relations using a standard knowledge base is an open and challenging task . most existing kbqa approaches focus on simpler questions and do not work very well on complex questions because they are not able to simultaneously represent the question and the corresponding complex query structure .
in this work , we encode such complex query structure into a uniform vector representation , and thus successfully capture the interactions between individual semantic components within a complex question . this approach consistently outperforms existing methods on complex questions while staying competitive on simple questions . story_separator_special_tag in this paper , we conduct an empirical investigation of neural query graph ranking approaches for the task of complex question answering over knowledge graphs . we experiment with six different ranking models and propose a novel self-attention based slot matching model which exploits the inherent structure of query graphs , our logical form of choice . our proposed model generally outperforms the other models on two qa datasets over the dbpedia knowledge graph , evaluated in different settings . in addition , we show that transfer learning from the larger of those qa datasets to the smaller dataset yields substantial improvements , effectively offsetting the general lack of training data . story_separator_special_tag directly reading documents and being able to answer questions from them is an unsolved challenge . to avoid its inherent difficulty , question answering ( qa ) has been directed towards using knowledge bases ( kbs ) instead , which has proven effective . unfortunately , kbs often suffer from being too restrictive , as the schema can not support certain types of answers , and too sparse , e.g . wikipedia contains much more information than freebase . in this work we introduce a new method , key-value memory networks , that makes reading documents more viable by utilizing different encodings in the addressing and output stages of the memory read operation . to compare using kbs , information extraction or wikipedia documents directly in a single framework , we construct an analysis tool , wikimovies , a qa dataset that contains raw text alongside a preprocessed kb , in the domain of movies . our method reduces the gap between all three settings . it also achieves state-of-the-art results on the existing wikiqa benchmark . story_separator_special_tag distant supervision , heuristically labeling a corpus using a knowledge base , has emerged as a popular choice for training relation extractors . in this paper , we show that a significant number of negative examples generated by the labeling process are false negatives because the knowledge base is incomplete . therefore the heuristic for generating negative examples has a serious flaw . building on a state-of-the-art distantly-supervised extraction algorithm , we propose an algorithm that learns from only positive and unlabeled labels at the pair-of-entity level . experimental results demonstrate its advantage over existing algorithms . story_separator_special_tag objective : to report two patients infected with severe acute respiratory syndrome coronavirus-2 ( sars-cov-2 ) who acutely presented with miller fisher syndrome and polyneuritis cranialis , respectively . methods : patient data were obtained from medical records from the university hospital principe de asturias , alcala de henares , madrid , spain and from the university hospital 12 de octubre , madrid , spain . results : the first patient was a 50-year-old man who presented with anosmia , ageusia , right internuclear ophthalmoparesis , right fascicular oculomotor palsy , ataxia , areflexia , albuminocytologic dissociation and positive testing for gd1b-igg antibodies .
five days before , he had developed a cough , malaise , headache , low back pain , and a fever . the second patient was a 39-year-old man who presented with ageusia , bilateral abducens palsy , areflexia and albuminocytologic dissociation . three days before , he had developed diarrhea , a low-grade fever , and a poor general condition . the oropharyngeal swab test for coronavirus disease 2019 ( covid-19 ) by qualitative real-time reverse-transcriptase polymerase-chain-reaction assay was positive in both patients and negative in the cerebrospinal fluid . the first patient was treated story_separator_special_tag knowledge graph question answering aims to automatically answer natural language questions via well-structured relation information between entities stored in knowledge graphs . when faced with a multi-relation question , existing embedding-based approaches take the whole topic-entity-centric subgraph into account , resulting in high time complexity . meanwhile , due to the high cost of data annotation , it is impractical to show exactly how to answer a complex question step by step , and only the final answer is labeled , as weak supervision . to address these challenges , this paper proposes a neural method based on reinforcement learning , namely the stepwise reasoning network , which formulates multi-relation question answering as a sequential decision problem . the proposed model performs effective path search over the knowledge graph to obtain the answer , and leverages beam search to significantly reduce the number of candidates . meanwhile , based on the attention mechanism and neural networks , the policy network can enhance the unique impact of different parts of a given question on triple selection . moreover , to alleviate the delayed and sparse reward problem caused by weak supervision , we propose a potential-based reward shaping strategy , which can story_separator_special_tag knowledge graph question answering aims to automatically answer natural language questions via well-structured relation information between entities stored in knowledge graphs . when faced with a complex question with compositional semantics , query graph generation is a practical semantic parsing-based method . however , existing works rely on heuristic rules with limited coverage , making them impractical on more complex questions . this paper proposes a director-actor-critic framework to overcome these challenges . through options over a markov decision process , query graph generation is formulated as a hierarchical decision problem . the director determines which types of triples the query graph needs , the actor generates the corresponding triples by choosing nodes and edges , and the critic calculates the semantic similarity between the generated triples and the given questions . moreover , to train from weak supervision , we base the framework on hierarchical reinforcement learning with intrinsic motivation . to accelerate the training process , we pre-train the critic with high-reward trajectories generated by hand-crafted rules , and leverage curriculum learning to gradually increase the complexity of questions during query graph generation . extensive experiments conducted over widely-used benchmark datasets demonstrate the effectiveness of the proposed framework . story_separator_special_tag in this paper we introduce a novel semantic parsing approach to query freebase in natural language without requiring manual annotations or question-answer pairs .
our key insight is to represent natural language via semantic graphs whose topology shares many commonalities with freebase . given this representation , we conceptualize semantic parsing as a graph matching problem . our model converts sentences to semantic graphs using ccg and subsequently grounds them to freebase guided by denotations as a form of weak supervision . evaluation experiments on a subset of the free917 and webquestions benchmark datasets show our semantic parser improves over the state of the art . story_separator_special_tag recent years have seen increasingly complex question-answering on knowledge bases ( kbqa ) involving logical , quantitative , and comparative reasoning over kb subgraphs . neural program induction ( npi ) . story_separator_special_tag knowledge graphs ( kg ) are multi-relational graphs consisting of entities as nodes and relations among them as typed edges . the goal of the question answering over kg ( kgqa ) task is to answer natural language queries posed over the kg . multi-hop kgqa requires reasoning over multiple edges of the kg to arrive at the right answer . kgs are often incomplete with many missing links , posing additional challenges for kgqa , especially for multi-hop kgqa . recent research on multi-hop kgqa has attempted to handle kg sparsity using relevant external text , which is not always readily available . in a separate line of research , kg embedding methods have been proposed to reduce kg sparsity by performing missing link prediction . such kg embedding methods , even though highly relevant , have not been explored for multi-hop kgqa so far . we fill this gap in this paper and propose embedkgqa . embedkgqa is particularly effective in performing multi-hop kgqa over sparse kgs . embedkgqa also relaxes the requirement of answer selection from a pre-specified neighborhood , a sub-optimal constraint enforced by previous multi-hop kgqa methods . through extensive experiments on multiple benchmark datasets , story_separator_special_tag complex question answering over knowledge base ( complex kbqa ) is challenging because it requires compositional reasoning capability . existing benchmarks have three shortcomings that limit the development of complex kbqa : 1 ) they only provide qa pairs without explicit reasoning processes ; 2 ) questions are either generated by templates , leading to poor diversity , or on a small scale ; and 3 ) they mostly only consider the relations among entities but not attributes . to this end , we introduce kqa pro , a large-scale dataset for complex kbqa . we generate questions , sparqls , and functional programs with recursive templates and then paraphrase the questions by crowdsourcing , giving rise to around 120k diverse instances . the sparqls and programs depict the reasoning processes in various manners , which can benefit a large spectrum of qa methods . we contribute a unified codebase and conduct extensive evaluations for baselines and state-of-the-arts : a blind gru obtains 31.58 % , the best model achieves only 35.15 % , and humans top at 97.5 % , which offers great research potential to fill the gap . story_separator_special_tag open domain question answering ( qa ) is evolving from complex pipelined systems to end-to-end deep neural networks . specialized neural models have been developed for extracting answers from either text alone or knowledge bases ( kbs ) alone .
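before moving on , a pointer to how the kg-embedding answer scoring used by embedkgqa-style methods ( described earlier in this passage ) works in practice . this sketch is our paraphrase of the complex scoring function such methods build on ; the variable names and the use of a question embedding in place of a relation embedding are ours :

import numpy as np

def complex_score(head, rel_or_question, tail):
    # complex bilinear score re(<h, r, conj(t)>) over complex-valued
    # embeddings; candidate answer entities are ranked by this score,
    # with the question embedding standing in for a relation embedding.
    return float(np.real(np.sum(head * rel_or_question * np.conj(tail))))

# toy usage: rank two candidate tails for a question embedding q.
rng = np.random.default_rng(0)
emb = lambda: rng.standard_normal(8) + 1j * rng.standard_normal(8)
h, q = emb(), emb()
candidates = [emb(), emb()]
best = max(candidates, key=lambda t: complex_score(h, q, t))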
in this paper we look at a more practical setting , namely qa over the combination of a kb and entity-linked text , which is appropriate when an incomplete kb is available with a large text corpus . building on recent advances in graph representation learning we propose a novel model , graft-net , for extracting answers from a question-specific subgraph containing text and kb entities and relations . we construct a suite of benchmark tasks for this problem , varying the difficulty of questions , the amount of training data , and kb completeness . we show that graft-net is competitive with the state-of-the-art when tested using either kbs or text alone , and vastly outperforms existing methods in the combined setting . source code is available at this https url . story_separator_special_tag we consider open-domain question answering ( qa ) where answers are drawn from either a corpus , a knowledge base ( kb ) , or a combination of both of these . we focus on a setting in which a corpus is supplemented with a large but incomplete kb , and on questions that require non-trivial ( e.g. , multi-hop ) reasoning . we describe pullnet , an integrated framework for ( 1 ) learning what to retrieve and ( 2 ) reasoning with this heterogeneous information to find the best answer . pullnet uses an iterative process to construct a question-specific subgraph that contains information relevant to the question . in each iteration , a graph convolutional network ( graph cnn ) is used to identify subgraph nodes that should be expanded using retrieval ( or pull ) operations on the corpus and/or kb . after the subgraph is complete , another graph cnn is used to extract the answer from the subgraph . this retrieve-and-reason process allows us to answer multi-hop questions using large kbs and corpora . pullnet is weakly supervised , requiring question-answer pairs but not gold inference paths . experimentally pullnet improves over the prior story_separator_special_tag semantic parsing transforms a natural language question into a formal query over a knowledge base . many existing methods rely on syntactic parsing , such as dependency parsing . however , the accuracy of producing such expressive formalisms is not satisfactory on long complex questions . in this paper , we propose a novel skeleton grammar to represent the high-level structure of a complex question . this dedicated coarse-grained formalism with a bert-based parsing algorithm helps to improve the accuracy of the downstream fine-grained semantic parsing . besides , to align the structure of a question with the structure of a knowledge base , our multi-strategy method combines sentence-level and word-level semantics . our approach shows promising performance on several datasets . story_separator_special_tag answering complex questions is a time-consuming activity for humans that requires reasoning and integration of information . recent work on reading comprehension made headway in answering simple questions , but tackling complex questions is still an ongoing research challenge . conversely , semantic parsers have been successful at handling compositionality , but only when the information resides in a target knowledge-base . in this paper , we present a novel framework for answering broad and complex questions , assuming answering simple questions is possible using a search engine and a reading comprehension model . we propose to decompose complex questions into a sequence of simple questions , and compute the final answer from the sequence of answers .
to illustrate the viability of our approach , we create a new dataset of complex questions , complexwebquestions , and present a model that decomposes questions and interacts with the web to compute an answer . we empirically demonstrate that question decomposition improves performance from 20.8 precision @ 1 to 27.5 precision @ 1 on this new dataset . story_separator_special_tag collaborative knowledge bases that make their data freely available in a machine-readable form are central for the data strategy of many projects and organizations . the two major collaborative knowledge bases are wikimedia 's wikidata and google 's freebase . due to the success of wikidata , google decided in 2014 to offer the content of freebase to the wikidata community . in this paper , we report on the ongoing transfer efforts and data mapping challenges , and provide an analysis of the effort so far . we describe the primary sources tool , which aims to facilitate this and future data migrations . throughout the migration , we have gained deep insights into both wikidata and freebase , and share and discuss detailed statistics on both knowledge bases . story_separator_special_tag being able to access knowledge bases in an intuitive way has been an active area of research over the past years . in particular , several question answering ( qa ) approaches which allow users to query rdf datasets in natural language have been developed , as they allow end users to access knowledge without needing to learn the schema of a knowledge base or a formal query language . to foster this research area , several training datasets have been created , e.g. , in the qald ( question answering over linked data ) initiative . however , existing datasets are insufficient in terms of size , variety or complexity to apply and evaluate a range of machine learning based qa approaches for learning complex sparql queries . with the provision of the large-scale complex question answering dataset ( lc-quad ) , we close this gap by providing a dataset with 5000 questions and their corresponding sparql queries over the dbpedia dataset . in this article , we describe the dataset creation process and how we ensure a high variety of questions , which should make it possible to assess the robustness and accuracy of the next generation of qa systems for story_separator_special_tag question answering over knowledge base ( kbqa ) is the problem of answering a natural language question accurately and concisely over a knowledge base . the core task of kbqa is to understand the real semantics of a natural language question and match them against the semantics of the whole knowledge base . however , this is a significant challenge due to the variable semantics of natural language questions in the real world . recently , more and more off-the-shelf kbqa approaches have appeared in many applications . it becomes interesting to compare and analyze them so that users can choose among them well . in this paper , we give a survey of kbqa approaches by classifying them in two categories . following the two categories , we introduce current mainstream techniques in kbqa , and discuss similarities and differences among them . finally , based on this discussion , we outline some interesting open problems . story_separator_special_tag compared with ms-coco , the dataset for the competition has a larger proportion of large objects whose area is greater than 96x96 pixels .
as getting fine boundaries is vitally important for large object segmentation , mask r-cnn with pointrend is selected as the base segmentation framework to output high-quality object boundaries . besides , a better engine that integrates resnest , fpn and dcnv2 , and a range of effective tricks , including multi-scale training and test time augmentation , are applied to improve segmentation performance . our best performance is an ensemble of four models ( three pointrend-based models and solov2 ) , which won the 2nd place in ijcai-pricai 3d ai challenge 2020 : instance segmentation . story_separator_special_tag we propose a new end-to-end question answering model , which learns to aggregate answer evidence from an incomplete knowledge base ( kb ) and a set of retrieved text snippets . under the assumptions that structured data is easier to query and the acquired knowledge can help the understanding of unstructured text , our model first accumulates knowledge of kb entities from a question-related kb sub-graph , then reformulates the question in the latent space and reads the text with the accumulated entity knowledge at hand . the evidence from kb and text is finally aggregated to predict answers . on the widely-used kbqa benchmark webqsp , our model achieves consistent improvements across settings with different extents of kb incompleteness . story_separator_special_tag traditional key-value memory neural networks ( kv-memnns ) have proved effective for supporting shallow reasoning over a collection of documents in domain specific question answering or reading comprehension tasks . however , extending kv-memnns to knowledge based question answering ( kb-qa ) is not trivial , as one should properly decompose a complex question into a sequence of queries against the memory , and update the query representations to support multi-hop reasoning over the memory . in this paper , we propose a novel mechanism to enable conventional kv-memnn models to perform interpretable reasoning for complex questions . to achieve this , we design a new query updating strategy to mask previously-addressed memory information from the query representations , and introduce a novel stop strategy to avoid invalid or repeated memory reading without strong annotation signals . this also enables kv-memnns to produce structured queries and work in a semantic parsing fashion . experimental results on benchmark datasets show that our solution , trained with question-answer pairs only , can provide conventional kv-memnn models with better reasoning abilities on complex questions , and achieve state-of-the-art performances . story_separator_special_tag we propose a novel semantic parsing framework for question answering using a knowledge base . we define a query graph that resembles subgraphs of the knowledge base and can be directly mapped to a logical form . semantic parsing is reduced to query graph generation , formulated as a staged search problem . unlike traditional approaches , our method leverages the knowledge base in an early stage to prune the search space and thus simplifies the semantic matching problem . by applying an advanced entity linking system and a deep convolutional neural network model that matches questions and predicate sequences , our system outperforms previous methods substantially , and achieves an f1 measure of 52.5 % on the webquestions dataset . story_separator_special_tag we demonstrate the value of collecting semantic parse labels for knowledge base question answering .
in particular , ( 1 ) unlike previous studies on small-scale datasets , we show that learning from labeled semantic parses significantly improves overall performance , resulting in an absolute 5-point gain compared to learning from answers , ( 2 ) we show that with an appropriate user interface , one can obtain semantic parses with high accuracy and at a cost comparable to or lower than obtaining just answers , and ( 3 ) we have created and shared the largest semantic-parse labeled dataset to date in order to advance research in question answering . story_separator_special_tag knowledge graph ( kg ) is known to be helpful for the task of question answering ( qa ) , since it provides well-structured relational information between entities , and allows one to further infer indirect facts . however , it is challenging to build qa systems which can learn to reason over knowledge graphs based on question-answer pairs alone . first , when people ask questions , their expressions are noisy ( for example , typos in texts , or variations in pronunciations ) , making it non-trivial for the qa system to match the mentioned entities to the knowledge graph . second , many questions require multi-hop logical reasoning over the knowledge graph to retrieve the answers . to address these challenges , we propose a novel and unified deep learning architecture , and an end-to-end variational learning algorithm which can handle noise in questions , and learn multi-hop reasoning simultaneously . our method achieves state-of-the-art performance on a recent benchmark dataset in the literature . we also derive a series of new benchmark datasets , including questions for multi-hop reasoning , questions paraphrased by a neural translation model , and questions in human voice . our method yields story_separator_special_tag the gap between unstructured natural language and structured data makes it challenging to build a system that supports using natural language to query large knowledge graphs . many existing methods construct a structured query for the input question based on a syntactic parser . if the input question is parsed incorrectly , a false structured query will be generated , which may result in false or incomplete answers . the problem gets worse especially for complex questions . in this paper , we propose a novel systematic method to understand natural language questions by using a large number of binary templates rather than semantic parsers . as sufficient templates are critical in the procedure , we present a low-cost approach that can build a huge number of templates automatically . to reduce the search space , we carefully devise an index to facilitate the online template decomposition . moreover , we design effective strategies to perform two-level disambiguation ( i.e. , entity-level ambiguity and structure-level ambiguity ) by considering the query semantics . extensive experiments over several benchmarks demonstrate that our proposed approach is effective as it significantly outperforms state-of-the-art methods in terms of both precision and recall . story_separator_special_tag in recent years , many knowledge bases have been constructed or populated . these knowledge bases link real-world entities by their relationships on a large scale , serving as good resources to answer factoid questions .
to answer a natural language question using a knowledge base , the main task is mapping it to a structured query of the same meaning , whose results from the knowledge base will be used as the question 's answers . this mapping task is non-trivial since different questions can express the same meaning and many queries can arise from a knowledge base . to fulfill the task , it is important to model a query 's structure , as it conveys a part of the meaning and affects word orders in the question . however , state-of-the-art methods based on deep learning have neglected query structures and focused only on capturing semantic correlations between a question and a simple relation chain . in this paper , we instead take a query as a tree , and encode the orders of entities and relations into its representations to better distinguish candidate queries of a given question . overall , we first construct candidate
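several of the preceding abstracts describe staged or beam search over candidate relation paths and query graphs in a knowledge graph , keeping only the most promising partial candidates at each step . the following is a minimal beam-search sketch over a toy graph ; the graph , the scoring function and every name here are illustrative stand-ins , not any particular cited system 's method .

def beam_search(graph, start, score, max_hops=2, beam=3):
    # each candidate is (entity reached, relation path taken, accumulated score)
    frontier = [(start, (), 0.0)]
    for _ in range(max_hops):
        expanded = [
            (dst, path + (rel,), s + score(rel, dst))
            for ent, path, s in frontier
            for rel, dst in graph.get(ent, [])
        ]
        if not expanded:
            break
        # keep only the `beam` highest-scoring partial paths
        frontier = sorted(expanded, key=lambda c: c[2], reverse=True)[:beam]
    return frontier

# toy knowledge graph and scorer for a question like "where was obama born ?"
graph = {
    "obama": [("born_in", "honolulu"), ("party", "democratic")],
    "honolulu": [("located_in", "hawaii")],
}
score = lambda rel, dst: 1.0 if rel in ("born_in", "located_in") else 0.1
for ent, path, s in beam_search(graph, "obama", score):
    print(ent, path, round(s, 1))

pruning the frontier to a fixed width is what keeps the number of candidate paths from growing exponentially with the number of hops , which is the point the abstracts above make about beam search and staged query graph generation .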
the concept of a reduction between subsets of a given space is described , giving rise to various complexity hierarchies , studied both in descriptive set theory and in automata theory . we discuss in particular the wadge and lipschitz hierarchies for subsets of the baire and cantor spaces and the hierarchy of borel reducibility for finitary relations on standard borel spaces . the notions of wadge and lipschitz reductions are related to corresponding perfect information games . story_separator_special_tag different kinds of infinite behaviours of different kinds of transition systems are characterized by their topological properties . story_separator_special_tag this chapter is devoted to context-free languages . context-free languages and grammars were designed initially to formalize grammatical properties of natural languages [ 9 ] . they subsequently appeared to be well adapted to the formal description of the syntax of programming languages . this led to a considerable development of the theory . story_separator_special_tag we are interested in infinitary languages recognized by a pushdown automaton . we then give characterization theorems for such closed , central , normal or perfect languages ( considering a number of hypotheses of continuity in the computations of the automaton , for the last three classes ) . besides , it is proved that , under the same hypotheses , the largest central ( respectively normal , perfect ) language included in an algebraic infinitary language remains algebraic . story_separator_special_tag linear codes . nonlinear codes , hadamard matrices , designs and the golay code . an introduction to bch codes and finite fields . finite fields . dual codes and their weight distribution . codes , designs and perfect codes . cyclic codes . cyclic codes : idempotents and mattson-solomon polynomials . bch codes . reed-solomon and justesen codes . mds codes . alternant , goppa and other generalized bch codes . reed-muller codes . first-order reed-muller codes . second-order reed-muller , kerdock and preparata codes . quadratic-residue codes . bounds on the size of a code . methods for combining codes . self-dual codes and invariant theory . the golay codes . association schemes . appendix a. tables of the best codes known . appendix b. finite geometries . bibliography . index . story_separator_special_tag in this paper we give some new results about context-free sets of infinite words . the presentation will be a generalization of mcnaughton 's approach in [ 7 ] , where he analyzed regular sets of infinite words . however , our extension of the regular case is not straightforward and thus differs from the approach given in [ 4 ] . story_separator_special_tag this paper studies context-free sets of finite and infinite words . in particular , it gives a natural way of associating to a language a set of infinite words . it then becomes possible to begin a study of families of sets of infinite words rather similar to the classical studies of families of languages . story_separator_special_tag we consider infinite two-player games on pushdown graphs , the reachability game where the first player must reach a given set of vertices to win , and the buchi game where he must reach this set infinitely often . we provide an automata theoretic approach to compute uniformly the winning region of a player and corresponding winning strategies , if the goal set is regular .
two kinds of strategies are computed : positional ones , which however require linear execution time in each step , and strategies with pushdown memory , where a step can be executed in constant time . story_separator_special_tag we study infinite two-player games over pushdown graphs with a winning condition that refers explicitly to the infinity of the game graph : a play is won by player 0 if some vertex is visited infinitely often during the play . we show that the set of winning plays is a proper sigma^0_3-set in the borel hierarchy , thus transcending the boolean closure of sigma^0_2-sets which arises with the standard automata theoretic winning conditions ( such as the muller , rabin , or parity condition ) . we also show that this sigma^0_3-game over pushdown graphs can be solved effectively ( by a computation of the winning region of player 0 and his memoryless winning strategy ) . this seems to be a first example of an effectively solvable game beyond the second level of the borel hierarchy . story_separator_special_tag in his thesis baire defined functions of baire class 1 . a function f is of baire class 1 if it is the pointwise limit of a sequence of continuous functions . baire proves the following theorem : a function f is not of class 1 if and only if there exists a closed nonempty set f such that the restriction of f to f has no point of continuity . we prove the automaton version of this theorem . an omega-rational function is not of class 1 if and only if there exists a closed nonempty set f recognized by a buchi automaton such that the restriction of f to f has no point of continuity . this gives us the opportunity for a discussion on hausdorff 's analysis of delta^0_2 sets , ordinals , transfinite induction and some applications to computer science . story_separator_special_tag we introduce several equivalent notions that generalize ones introduced by klaus wagner for finite muller automata under the name of chains and superchains . we define such objects in relation to omega-rational sets , muller automata or also omega-semigroups . we prove their equivalence and derive some basic properties of these objects . in a subsequent paper , we show how these concepts allow us to derive a new presentation of the hierarchy due to k. wagner and w. wadge . story_separator_special_tag we investigate several conceptions of linguistic structure to determine whether or not they can provide simple and `` revealing '' grammars that generate all of the sentences of english and only these . we find that no finite-state markov process that produces symbols with transition from state to state can serve as an english grammar . furthermore , the particular subclass of such processes that produce n-th-order statistical approximations to english do not come closer , with increasing n , to matching the output of an english grammar . we formalize the notions of `` phrase structure '' and show that this gives us a method for describing language which is essentially more powerful , though still representable as a rather elementary type of finite-state process . nevertheless , it is successful only when limited to a small subset of simple sentences .
we study the formal properties of a set of grammatical transformations that carry sentences with phrase structure into new sentences with derived phrase structure , showing that transformational grammars are processes of the same elementary type as phrase-structure grammars ; that the grammar of english is materially simplified if phrase structure description is limited to a kernel story_separator_special_tag using a combinatorial lemma on regular sets , and a technique of attaching a control unit to a parallel battery of finite automata , a simple and transparent development of mcnaughton 's theory of automata on omega-tapes is given . the lemma and the technique are then used to give an independent and equally simple development of buchi 's theory of nondeterministic automata on these tapes . some variants of these models are also studied . finally a third independent approach , modelled after a simplified version of rabin 's theory of automata on infinite trees , is developed . story_separator_special_tag the theory of finite automata and regular expressions over a finite alphabet is here generalized to infinite tapes x = x_1 ... x_k , where the x_i are themselves tapes of length omega^n , for some n >= 0 . closure under the usual set-theoretical operations is established , and the equivalence of deterministic and nondeterministic automata is proved . a kleene-type characterization of the definable sets is given and finite-length generalized regular expressions are developed for finitely denoting these sets . decision problems are treated ; a characterization of regular tapes by multiperiodic sets is specified . characterization by equivalence relations is discussed while stressing dissimilarities with the finite case . story_separator_special_tag the subject of this thesis is the proof theory of logics with fixed points , such as the mu-calculus , linear logic with fixed points , etc . these logics are usually equipped with finitary deductive systems that rely on park 's rules for induction . other proof systems for these logics exist , which rely on infinitary proofs , but they are much less developed . this thesis contributes to reducing this deficiency by developing the infinitary proof theory of logics with fixed points , with two domains of application in mind : programming languages with ( co ) inductive data types and verification of reactive systems . this thesis contains three parts . in the first part , we recall the two main approaches to the proof theory for logics with fixed points : the finitary and the infinitary one , then we show their relationships . in the second part , we argue that infinitary proofs have a true proof-theoretical status by showing that the multiplicative additive linear logic with fixed points admits focalization and cut-elimination . in the third part , we apply our proof-theoretical investigations to obtain a constructive proof of completeness for the linear-time mu-calculus w.r.t . kozen 's axiomatization . story_separator_special_tag deterministic pushdown machines working on omega-tapes are studied ; the omega-languages recognized by such machines are called omega-dcfl 's . various omega-recognition mechanisms in the machines are considered , yielding a hierarchy of i-recognizable classes of omega-dcfl 's . algebraic characterizations are obtained for each of these classes . certain decision problems , generally undecidable , are shown to be decidable within some of the classes .
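as a toy illustration of the omega-acceptance conditions compared in the surrounding abstracts , the following sketch decides whether a deterministic buchi automaton accepts an ultimately periodic word u v^omega ; the dict encoding of the automaton and the example are assumptions made for the illustration , not part of any cited construction .

def run(delta, state, word):
    # follow the deterministic transitions on a finite word
    for a in word:
        state = delta[(state, a)]
    return state

def accepts_buchi(delta, start, accepting, u, v):
    # after reading u , iterate v until the state seen before reading v
    # repeats ; in a finite deterministic automaton this must happen .
    state = run(delta, start, u)
    seen, blocks = {}, []
    while state not in seen:
        seen[state] = len(blocks)
        s, block = state, []
        for a in v:                # record states visited while reading one v
            s = delta[(s, a)]
            block.append(s)
        blocks.append(block)
        state = s
    # the states visited infinitely often are exactly those inside the cycle
    loop = [q for block in blocks[seen[state]:] for q in block]
    return any(q in accepting for q in loop)

# example over { a , b } : accept the words containing infinitely many b 's
delta = {("q0", "a"): "q0", ("q0", "b"): "q1",
         ("q1", "a"): "q0", ("q1", "b"): "q1"}
print(accepts_buchi(delta, "q0", {"q1"}, "aa", "ab"))  # True : aa (ab)^omega
print(accepts_buchi(delta, "q0", {"q1"}, "b", "a"))    # False : b a^omega

the same loop-detection idea adapts to muller acceptance by comparing the set of states inside the detected cycle against a table of accepting state sets , which is one concrete way to read the different omega-recognition mechanisms discussed above .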
story_separator_special_tag the paper develops the theory of turing machines as recognizers of infinite ( omega-type ) input tapes . various models of omega-type turing acceptors are considered , varying mainly in their mechanism for recognizing omega-tapes . a comparative study of the models is made . it is shown that regardless of the omega-recognition model considered , non-deterministic omega-turing acceptors are strictly more powerful than their deterministic counterparts . canonical forms are obtained for each of the omega-turing acceptor models . the corresponding families of omega-sets are studied ; normal forms and algebraic characterizations are derived for each family . story_separator_special_tag by a borel set of reals we mean a subset a of the set of infinite sequences over an at most countable alphabet , borel for the product of the discrete topology on the alphabet . given two such sets a and b , a continuously reduces to b if there exists a continuous function f such that a is the inverse image of b under this function . for each borel set of reals a of finite rank , we give a normal form for a , by exhibiting a borel set b of maximal simplicity such that a and b continuously reduce to each other . in more technical terms , we define simple borel operations which are homomorphic to the ordinal operations of sum , multiplication by a countable ordinal , and exponentiation of base the first uncountable ordinal , under the function which sends every borel set of finite rank a to its wadge degree . we then consider the cantor normal form , of base the first uncountable ordinal , of the ordinal given by the wadge degree of a . since this normal form is expressed in terms of the operations mentioned above , we obtain the canonical set story_separator_special_tag we consider borel sets a of finite rank , included in lambda^omega , where the cardinality of the alphabet lambda is less than some uncountable regular cardinal . we obtain a normal form of a , by finding a borel set b , such that a and b continuously reduce to each other . in more technical terms : we define simple borel operations which are homomorphic to ordinal sum , to multiplication by a countable ordinal , and to ordinal exponentiation of base the first uncountable ordinal , under the map which sends every borel set a of finite rank to its wadge degree . story_separator_special_tag twenty years ago , klaus w. wagner came up with a hierarchy of omega-regular sets that now bears his name . it turned out to be exactly the wadge hierarchy of the sets of omega-words recognized by deterministic finite automata . we describe the wadge hierarchy of context-free omega-languages , which stands as an extension of wagner 's work from automata to pushdown automata . story_separator_special_tag in the construction of a composite golf club head made up of several shell components united together along their mating edges , a face shell component is made of titanium alloy and a rear shell component is made of pure titanium . the titanium alloy withstands well the hard impact of striking balls with the face , whereas the use of cheap pure titanium allows plastic shaping of the intricate rear shell component even at a low temperature to significantly lower the total production cost of the golf club head . story_separator_special_tag ( i ) wadge defined a natural refinement of the borel hierarchy , now called the wadge hierarchy wh . the fundamental properties of wh follow from the results of kuratowski , martin , wadge and louveau .
we give a transparent restatement and proof of wadge 's main theorem . our method is new , for it yields a wide and unexpected extension : from borel sets of reals to a class of natural but non borel sets of infinite sequences . wadge 's theorem is quite ineffective and our generalization clearly worsens in this respect . yet paradoxically our method is appropriate to effectivize this whole theory in the context discussed below . ( ii ) wagner defined on buchi automata ( accepting words of length omega ) a hierarchy and proved for it an effective analog of wadge 's results . we extend wagner 's results to more general kinds of automata : counters , push-down automata and buchi automata reading transfinite words . the notions and methods developed in ( i ) are quite useful for this extension , and we start to use them in order to look for extensions of the fundamental effective determinacy results of story_separator_special_tag in 1997 , following the works of klaus w. wagner on omega-regular sets , olivier carton and dominique perrin introduced the notions of chains and superchains for omega-semigroups . there is a clear correspondence between the algebraic representation of each of these operations and the automata-theoretical one . unfortunately , chains and superchains do not suffice to describe the whole wagner hierarchy . we introduce a third notion that completes the task undertaken by these two authors . story_separator_special_tag an iterated pushdown is a pushdown of pushdowns of ... of pushdowns . an iterated exponential function is 2 to the 2 to the ... to the 2 to some polynomial . the main result is that nondeterministic 2-way and multi-head iterated pushdown automata characterize deterministic iterated exponential time complexity classes . this is proved by investigating both nondeterministic and alternating auxiliary iterated pushdown automata , for which similar characterization results are given . in particular it is shown that alternation corresponds to one more iteration of pushdowns . these results are applied to the 1-way iterated pushdown automata : ( 1 ) they form a proper hierarchy with respect to the number of iterations , ( 2 ) their emptiness problem is complete in deterministic iterated exponential time . story_separator_special_tag for any storage type x , the omega-languages accepted by x-automata are investigated . six accepting conditions ( including those introduced by landweber ) are compared for x-automata . the inclusions between the corresponding six families of omega-languages are essentially the same as for finite-state automata . apart from unrestricted automata , also real-time and deterministic automata are considered . the main tools for this investigation are : ( 1 ) a characterization of the omega-languages accepted by x-automata in terms of inverse x-transductions of finite-state omega-languages ; and ( 2 ) the existence of topological upper bounds on some of the families of accepted omega-languages ( independent of the storage type x ) . story_separator_special_tag the extension of the wagner hierarchy to blind counter automata accepting infinite words with a muller acceptance condition is effective . we determine precisely this hierarchy . story_separator_special_tag the main result of this paper is that the length of the wadge hierarchy of omega context free languages is greater than the cantor ordinal epsilon_omega , which is the omega-th fixed point of the ordinal exponentiation of base omega , and the same result holds for the conciliating wadge hierarchy , defined by j.
duparc , of infinitary context free languages , studied by d. beauquier . story_separator_special_tag this paper is a study of topological properties of omega context-free languages ( omega-cfl ) . we first extend some decidability results for the deterministic ones ( omega-dcfl ) , proving that one can decide whether an omega-dcfl is in a given borel class , or in the wadge class of a given omega-regular language . we prove that omega-cfl exhaust the hierarchy of borel sets of finite rank , and that one can not decide the borel class of an omega-cfl , giving an answer to a question of lescow and thomas ( a decade of concurrency , springer lecture notes in computer science , vol . 803 , springer , berlin , 1994 , pp . 583-621 ) . we give also a ( partial ) answer to a question of simonnet ( automates et theorie descriptive , ph.d. thesis , universite paris 7 , march 1992 ) about omega powers of finitary languages . we show that the buchi-landweber theorem can not be extended to even closed omega-cfl : in a gale-stewart game with a ( closed ) omega-cfl winning set , one can not decide story_separator_special_tag the main result of this paper is that the length of the wadge hierarchy of omega context-free languages is greater than the cantor ordinal epsilon_0 , and the same result holds for the conciliating wadge hierarchy , defined by duparc ( j. symbolic logic , to appear ) , of infinitary context-free languages , studied by beauquier ( ph.d. thesis , universite paris 7 , 1984 ) . in the course of our proof , we get results on the wadge hierarchy of iterated counter omega-languages , which we define as an extension of classical ( finitary ) iterated counter languages to omega-languages . story_separator_special_tag we extend the well-known notions of ambiguity and of degrees of ambiguity of finitary context free languages to the case of omega context free languages ( omega-cfl ) accepted by buchi or muller pushdown automata . we show that these notions may be defined independently of the buchi or muller acceptance condition which is considered . we investigate first properties of the subclasses of omega context free languages we get in that way , giving many examples and studying topological properties of omega-cfl of a given degree of ambiguity . story_separator_special_tag we give in this paper additional answers to questions of lescow and thomas [ logical specifications of infinite computations , in : '' a decade of concurrency '' , springer lncs 803 ( 1994 ) , 583-621 ] , proving new topological properties of omega context free languages : there exist some omega-cfl which are non borel sets , and one can not decide whether an omega-cfl is a borel set . we give also an answer to questions of niwinski and simonnet about omega powers of finitary languages , giving an example of a finitary context free language l such that l^omega is not a borel set . then we prove some recursive analogues to the preceding properties : in particular one can not decide whether an omega-cfl is an arithmetical set . story_separator_special_tag this paper is a continuation of the study of topological properties of omega context free languages ( omega-cfl ) . we proved before that the class of omega-cfl exhausts the hierarchy of borel sets of finite rank , and that there exist some omega-cfl which are analytic but non borel sets .
we prove here that there exist some omega context free languages which are borel sets of infinite ( but not finite ) rank , giving an additional answer to questions of lescow and thomas [ logical specifications of infinite computations , in : '' a decade of concurrency '' , springer lncs 803 ( 1994 ) , 583-621 ] . story_separator_special_tag omega-powers of finitary languages are omega languages of the form v^omega , where v is a finitary language over a finite alphabet x . since the set of infinite words over x can be equipped with the usual cantor topology , the question of the topological complexity of omega-powers naturally arises and has been raised by niwinski , by simonnet , and by staiger . it has been recently proved that for each integer n > 0 , there exist some omega-powers of context free languages which are pi^0_n-complete borel sets , and that there exists a context free language l such that l^omega is analytic but not borel . but the question was still open whether there exists a finitary language v such that v^omega is a borel set of infinite rank . we answer this question in this paper , giving an example of a finitary language whose omega-power is borel of infinite rank . story_separator_special_tag we show that the borel hierarchy of the class of context free omega-languages , or even of the class of omega-languages accepted by buchi 1-counter automata , is the same as the borel hierarchy of the class of omega-languages accepted by turing machines with a buchi acceptance condition . in particular , for each recursive non-null ordinal alpha , there exist some sigma^0_alpha-complete and some pi^0_alpha-complete omega-languages accepted by buchi 1-counter automata . and the supremum of the set of borel ranks of context free omega-languages is an ordinal gamma^1_2 that is strictly greater than the first non-recursive ordinal omega_1^ck . we then extend this result , proving that the wadge hierarchy of context free omega-languages , or even of omega-languages accepted by buchi 1-counter automata , is the same as the wadge hierarchy of omega-languages accepted by turing machines with a buchi or a muller acceptance condition . story_separator_special_tag we prove in this paper that the length of the wadge hierarchy of omega-context-free languages is greater than the cantor ordinal epsilon_omega , which is the omega-th fixed point of the ordinal exponentiation of base omega . we show also that there exist some sigma^0_omega-complete omega-context-free languages , improving previous results on omega-context-free languages and the borel hierarchy . story_separator_special_tag some decidable winning conditions of arbitrarily high finite borel complexity for games on finite graphs or on pushdown graphs have been recently presented by o. serre in [ games with winning conditions of high borel complexity , in the proceedings of the international conference icalp 2004 , lncs , volume 3142 , p. 1150-1162 ] . we answer in this paper several questions which were raised by serre in the above cited paper . we first show that , for every positive integer n , the class c_n ( a ) , which arises in the definition of decidable winning conditions , is included in the class of non-ambiguous context free omega languages , and that it is neither closed under union nor under intersection .
we prove also that there exist pushdown games , equipped with such decidable winning conditions , where the winning sets are not deterministic context free languages , giving examples of winning sets which are non-deterministic non-ambiguous context free languages , inherently ambiguous context free languages , or even non context free languages . story_separator_special_tag we show that , from the topological point of view , 2-tape buchi automata have the same accepting power as turing machines equipped with a buchi acceptance condition . the borel and the wadge hierarchies of the class rat_omega of infinitary rational relations accepted by 2-tape buchi automata are equal to the borel and the wadge hierarchies of omega-languages accepted by real-time buchi 1-counter automata or by buchi turing machines . in particular , for every non-null recursive ordinal alpha , there exist some sigma^0_alpha-complete and some pi^0_alpha-complete infinitary rational relations . and the supremum of the set of borel ranks of infinitary rational relations is an ordinal gamma^1_2 which is strictly greater than the first non-recursive ordinal omega_1^ck . this very surprising result gives answers to questions of simonnet ( 1992 ) and of lescow and thomas ( 1988 , 1994 ) . story_separator_special_tag the operation v -> v^omega is a fundamental operation over finitary languages leading to omega-languages . since the set of infinite words over a finite alphabet can be equipped with the usual cantor topology , the question of the topological complexity of omega-powers of finitary languages naturally arises and has been posed by niwinski [ niw90 ] , simonnet [ sim92 ] and staiger [ sta97a ] . it has been recently proved that for each integer n >= 1 , there exist some omega-powers of context free languages which are pi^0_n-complete borel sets [ fin01 ] , that there exists a context free language l such that l^omega is analytic but not borel [ fin03 ] , and that there exists a finitary language v such that v^omega is a borel set of infinite rank [ fin04 ] . but it was still unknown which could be the possible infinite borel ranks of omega-powers . we fill this gap here , proving the following very surprising result which shows that omega-powers exhibit a great topological complexity : for each non-null countable ordinal alpha , there exist some sigma^0_alpha-complete omega-powers , and some pi^0_alpha-complete omega-powers . story_separator_special_tag we study the links between the topological complexity of an omega context free language and its degree of ambiguity . in particular , using known facts from classical descriptive set theory , we prove that non borel omega context free languages which are recognized by buchi pushdown automata have a maximum degree of ambiguity . this result implies that degrees of ambiguity are really not preserved by the operation of taking the omega power of a finitary context free language . we prove also that taking the adherence or the delta-limit of a finitary language preserves neither unambiguity nor inherent ambiguity . on the other hand , we show that methods used in the study of omega context free languages can also be applied to study the notion of ambiguity in infinitary rational relations accepted by buchi 2-tape automata , and we get first results in that direction . story_separator_special_tag this paper provides an overview of the latest advances in road vehicle suspension design , dynamics , and control , together with the authors ' perspectives , in the context of vehicle ride , handling , and stability .
the general aspects of road vehicle suspension dynamics and design are discussed , followed by descriptions of road-roughness excitations with a particular emphasis on road potholes . passive suspension system designs and their effects on road vehicle dynamics and stability are presented in terms of in-plane and full-vehicle arrangements . controlled suspensions are also reviewed and discussed . the paper concludes with some potential research topics , in particular those associated with the development of hybrid and electric vehicles . story_separator_special_tag we consider one-way nondeterministic machines which have counters allowed to hold positive or negative integers and which accept by final state with all counters zero . such machines are called blind if their action depends on state and input alone and not on the counter configuration . they are partially blind if they block when any counter is negative ( i.e. , only nonnegative counter contents are permissible ) but do not know whether or not any of the counters contain zero . blind multicounter machines are equivalent in power to the reversal bounded multicounter machines of baker and book [ 1 ] , and for both blind and reversal bounded multicounter machines , the quasirealtime family is as powerful as the full family . the family of languages accepted by blind multicounter machines is the least intersection closed semiafl containing { a^n b^n | n >= 0 } and also the least intersection closed semiafl containing the two-sided dyck set on one letter . blind multicounter machines are strictly less powerful than quasirealtime partially blind multicounter machines . quasirealtime partially blind multicounter machines accept the family of computation state sequences or petri net languages which is equal to the least intersection closed semiafl story_separator_special_tag this book is a rigorous exposition of formal languages and models of computation , with an introduction to computational complexity . the authors present the theory in a concise and straightforward manner , with an eye out for the practical applications . exercises at the end of each chapter , including some that have been solved , help readers confirm and enhance their understanding of the material . this book is appropriate for upper-level computer science undergraduates who are comfortable with mathematical arguments . story_separator_special_tag descriptive set theory has been one of the main areas of research in set theory for almost a century . this text attempts to present a largely balanced approach , which combines many elements of the different traditions of the subject . it includes a wide variety of examples , exercises ( over 400 ) , and applications , in order to illustrate the general concepts and results of the theory . this text provides a first basic course in classical descriptive set theory and covers material with which mathematicians interested in the subject for its own sake or those that wish to use it in their field should be familiar . over the years , researchers in diverse areas of mathematics , such as logic and set theory , analysis , topology , probability theory , etc. , have brought to the subject of descriptive set theory their own intuitions , concepts , terminology and notation . story_separator_special_tag the results in this paper were motivated by the following question of sacks . suppose t is a recursive theory with countably many countable models .
what can you say about the least ordinal beta such that all models of t have scott rank below beta ? if martin 's conjecture is true for t then beta is bounded by an ordinal of the form gamma · 2 . our goal was to look at this problem in a more abstract setting . let e be an equivalence relation on the reals , in a given projective pointclass , with countably many classes , each of which is borel . what can you say about the least alpha such that each equivalence class is of additive borel class alpha ? this problem is closely related to the following question . suppose x is in this pointclass and borel . what can you say about the least alpha such that x is of additive borel class alpha ? in § 1 we answer these questions in zfc . in § 2 we give more informative answers under the added assumptions of v = l or determinacy hypotheses . the final section contains related results on the separation of such sets by borel sets . our notation is standard . the reader may consult moschovakis [ 5 ] for undefined terms . some of these results were proved first by sami and rediscovered by kechris story_separator_special_tag a two-dimensional finite automaton has a read-only input head that moves in four directions on a finite array of cells labelled by symbols of the input alphabet . a three-way two-dimensional automaton is prohibited from making upward moves , while a two-way two-dimensional automaton can only move downward and rightward . we show that the language emptiness problem for unary three-way nondeterministic two-dimensional automata is np-complete , and is in p for general-alphabet two-way nondeterministic two-dimensional automata . we show that the language equivalence problem for two-way deterministic two-dimensional automata is decidable , while both the equivalence and universality problems for two-way nondeterministic two-dimensional automata are undecidable . the deterministic case is the first known positive decidability result for the equivalence problem on two-dimensional automata over a general alphabet . we show that there exists a unary three-way deterministic two-dimensional automaton with a nonregular column projection , and we show that the row projection of a unary three-way nondeterministic two-dimensional automaton is always regular . story_separator_special_tag we study the sets of infinite sentences constructible with a dictionary over a finite alphabet , from the viewpoint of descriptive set theory . among other things , this gives some true co-analytic sets . the case where the dictionary is finite is studied and gives a natural example of a set at the level omega of the wadge hierarchy . story_separator_special_tag starting from an identification of infinite computations with omega-words , we present a framework in which different classification schemes for specifications are naturally compared . thereby we connect logical formalisms with hierarchies of descriptive set theory ( e.g. , the borel hierarchy ) , of recursion theory , and with the hierarchy of acceptance conditions of omega-automata . in particular , it is shown in which sense these hierarchies can be viewed as classifications of logical formulas by the complexity measure of quantifier alternation . in this context , the automaton theoretic approach to logical specifications over omega-words turns out to be a technique to reduce quantifier complexity of formulas . finally , we indicate some perspectives of this approach , discuss variants of the logical framework ( first-order logic , temporal logic ) , and outline the effects which arise when branching computations are considered ( i.e. , when infinite trees instead of omega-words are taken as model of computation ) .
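omega-powers v^omega , which recur throughout the abstracts above and below , are built by concatenating infinitely many words of a finitary language v . as a small finitary companion , the following sketch decides by dynamic programming whether a finite word factorizes completely into dictionary words , i.e. , membership in v* , the finitary analogue of the v^omega construction ; the dictionary and the examples are illustrative assumptions .

def in_v_star(word, dictionary):
    n = len(word)
    ok = [False] * (n + 1)
    ok[0] = True                      # the empty word factorizes trivially
    for i in range(1, n + 1):
        # position i is reachable if some dictionary word ends exactly there
        ok[i] = any(ok[i - len(v)]
                    for v in dictionary
                    if 0 < len(v) <= i and word[i - len(v):i] == v)
    return ok[n]

print(in_v_star("abab", {"ab"}))       # True : (ab)(ab)
print(in_v_star("aba", {"ab", "a"}))   # True : (ab)(a)
print(in_v_star("ba", {"ab"}))         # False

the topological subtlety studied in the cited papers begins exactly where this sketch stops : deciding membership of an infinite word in v^omega requires infinitely many such factorization steps , and the resulting omega-powers can , as stated above , be complete at arbitrarily high countable levels of the borel hierarchy .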
story_separator_special_tag new families of omega-languages ( sets of infinite sequences ) associated with context-free languages and pushdown automata are introduced . their basic properties , such as inclusion relations , closure under the boolean operations and periodicity , are studied and compared with the corresponding properties of the families of omega-languages accepted by finite automata . moreover , a number of solvability and unsolvability results are proved . the results obtained imply that there is a definite difference between the family of omega-languages accepted by pushdown automata and the family associated with context-free languages . story_separator_special_tag it is shown that the problem whether an effectively given deterministic omega-context-free language is in the family of all closures of deterministic context-free languages is decidable . story_separator_special_tag this article is an investigation into the theory of omega-languages . the characterization of linear omega-languages by automata and grammars is given . the closure properties under boolean and topological operations for such a family of omega-languages are studied . story_separator_special_tag descriptive set theory is the study of sets in separable , complete metric spaces that can be defined ( or constructed ) , and so can be expected to have special properties not enjoyed by arbitrary pointsets . this subject was started by the french analysts at the turn of the 20th century , most prominently lebesgue , and , initially , was concerned primarily with establishing regularity properties of borel and lebesgue measurable functions , and analytic , coanalytic , and projective sets . its rapid development came to a halt in the late 1930s , primarily because it bumped against problems which were independent of classical axiomatic set theory . the field became very active again in the 1960s , with the introduction of strong set-theoretic hypotheses and methods from logic ( especially recursion theory ) , which revolutionized it . this monograph develops descriptive set theory systematically , from its classical roots to the modern 'effective ' theory and the consequences of strong ( especially determinacy ) hypotheses . the book emphasizes the foundations of the subject , and it sets the stage for the dramatic results ( established since the 1980s ) relating large cardinals and story_separator_special_tag a regular set may also be thought of as a projection of a set of paths in a finite directed linear graph . this linear graph model is the one which is often most convenient to use . for example , if we regard a regular set as the set of possible state sequences of an inputless , nondeterministic machine , then the linear graph model is quite natural . such machines are considered in the theory of speed-independent circuits . however , state sequences of inputless , nondeterministic machines may either terminate in some equilibrium condition , or else the machine may pass from one state to another without ever reaching equilibrium . story_separator_special_tag buchi ( 1962 ) has given a decision procedure for a system of logic known as `` the sequential calculus , '' by showing that each well formed formula of the system is equivalent to a formula that , roughly speaking , says something about the infinite input history of a finite automaton . in so doing he managed to answer an open question that was of concern to pure logicians , some of whom had no interest in the theory of automata .
muller ( 1963 ) came upon quite similar concepts in studying a problem in asynchronous switching theory . the problem was to describe the behavior of an asynchronous circuit that does not reach any stability condition when starting from a certain state and subject to a certain input condition . many different things can happen , since there is no control over how fast various parts of the circuit react with respect to each other . since at no time during the presence of that input condition will the circuit reach a terminal condition , it will be possible to describe the total set of possibilities in an ideal fashion only if an infinite amount story_separator_special_tag several concepts of context-freeness of sets of finite/infinite words are characterized by means of greatest solutions of systems of equations of the form x_i = g_i , i = 1 , ... , n , where g_i is a ( not necessarily finite ) union of monomials . consideration of the systems with the components g_i context-free , regular or finite leads to characterizations of the following classes of omega-languages : the omega-kleene closure of the family of context-free languages , omega-algebraic languages infinitely generated by context-free grammars in the sense of nivat ( 1977 ) and cantor-like topological closures of context-free languages , respectively . story_separator_special_tag consider the sequence { f_n } _ { n >= 0 } of fibonacci numbers defined by f_0 = 0 , f_1 = 1 , and f_ { n+2 } = f_ { n+1 } + f_n for all n >= 0 . in this paper , we find all integers c having at least two representations as a difference between a fibonacci number and a power of 3 . story_separator_special_tag introduction . in this paper we solve the decision problem of a certain second-order mathematical theory and apply it to obtain a large number of decidability results . the method of solution involves the development of a theory of automata on infinite trees , a chapter in combinatorial mathematics which may be of independent interest . let sigma = { 0 , 1 } , and denote by t the set of all words ( finite sequences ) on sigma . let r_0 : t -> t and r_1 : t -> t be , respectively , the successor functions r_0 ( x ) = x0 and r_1 ( x ) = x1 , for x in t . our main result is that the ( monadic ) second-order theory of the structure ( t , r_0 , r_1 ) of two successor functions is decidable . this answers a question raised by buchi [ 1 ] . it turns out that this result is very powerful and many difficult decidability results follow from it by simple reductions . the decision procedures obtained by this method are elementary recursive ( in the sense of kalmar ) . the applications include the following . ( whenever we refer , in story_separator_special_tag by applying descriptive set theory we get several facts on the fine structure of regular omega-languages considered by k. wagner . we present quite different , shorter proofs of his main results and get new results . our description of the fine structure is new , very clear and automata-free . we prove also a closure property of the fine structure under boolean operations . our results demonstrate deep interconnections between descriptive set theory and the theory of omega-languages . story_separator_special_tag we describe wadge degrees of omega-languages recognizable by deterministic turing machines .
in particular , it is shown that the ordinal corresponding to these degrees is xi^omega , where xi = omega_1^ck is the first non-recursive ordinal , known as the church-kleene ordinal . this answers a question raised in [ du0 ? ] . story_separator_special_tag two-player games on finite or infinite graphs make it possible to model numerous problems related to the verification of systems . the system being specified depends on the nature of the game graph considered , while the property to be verified is described by the winning condition . the first player , eve , represents a program that evolves in a hostile environment represented by the second player , adam . in this formalism , eve has a winning strategy if and only if the program can be controlled so as to satisfy the property specified by the winning condition . one then wishes to decide whether eve has a winning strategy and , if so , to determine it , in order to then synthesize a controller . in this thesis , the game graphs considered are graphs of pushdown processes , which offer a simple finite representation of relatively complex infinite systems . on such graphs , one can consider classical winning conditions ( reachability , buchi or parity ) but also conditions more specific to the model , such as those concerning the boundedness of the stack . one can also combine the latter story_separator_special_tag we first consider infinite two-player games on pushdown graphs . in previous work , cachat et al . [ solving pushdown games with a sigma^0_3 winning condition , in : proc . 11th annu . conf . of the european association for computer science logic , csl 2002 , lecture notes in computer science , vol . 2471 , springer , berlin , 2002 , pp . 322-336 ] have presented a decidable winning condition that is sigma^0_3-complete in the borel hierarchy . this was the first example of a decidable winning condition of such borel complexity . we extend this result by giving a family of decidable winning conditions of arbitrary finite borel complexity . from this family , we deduce a family of decidable winning conditions of arbitrary finite borel complexity for games played on finite graphs . the problem of deciding the winner for these conditions is shown to be non-elementary . story_separator_special_tag in this work we study the topological complexity of the sets recognizable by automata on infinite words and on infinite trees , and the links between the hierarchies of automata and the classical and effective hierarchies of analysis story_separator_special_tag algebraic theory of processes provides the first general and systematic introduction to the semantics of concurrent systems , a relatively new research area in computer science . it develops the mathematical foundations of the algebraic approach to the formal semantics of languages and applies these ideas to a particular semantic theory of distributed processes . the book is unique in developing three complementary views of the semantics of concurrent processes : a behavioral view where processes are deemed to be equivalent if they can not be distinguished by any experiment ; a denotational model where processes are interpreted as certain kinds of trees ; and a proof-theoretic view where processes may be transformed into equivalent processes using valid equations or transformations .
it is an excellent guide on how to reason about and relate behavioral , denotational , and proof-theoretical aspects of languages in general : all three views are developed for a sequence of increasingly complex algebraic languages for concurrency and in each case they are shown to be equivalent . algebraic theory of processes is a valuable source of information for theoretical computer scientists , not only as an elegant and comprehensive introduction to the field but also story_separator_special_tag we introduce stanza , an open-source python natural language processing toolkit supporting 66 human languages . compared to existing widely used toolkits , stanza features a language-agnostic fully neural pipeline for text analysis , including tokenization , multi-word token expansion , lemmatization , part-of-speech and morphological feature tagging , dependency parsing , and named entity recognition . we have trained stanza on a total of 112 datasets , including the universal dependencies treebanks and other multilingual corpora , and show that the same neural architecture generalizes well and achieves competitive performance on all languages tested . additionally , stanza includes a native python interface to the widely used java stanford corenlp software , which further extends its functionality to cover other tasks such as coreference resolution and relation extraction . source code , documentation , and pretrained models for 66 languages are available at https://stanfordnlp.github.io/stanza/ . story_separator_special_tag the infinite or ω-power is one of the basic operations to associate with a language of finite words ( a finitary language ) an ω-language . it plays a crucial role in the characterization of regular and of context-free ω-languages , that is , ω-languages accepted by ( nondeterministic ) finite or pushdown automata , respectively ( cf . the surveys [ st87a , th90 ] ) . but in connection with the determinization of finite ω-automata it turned out that the properties of the ω-power are remarkably elusive , resulting in the well-known complicated proof of mcnaughton 's theorem [ mn66 ] . later work [ tb70 , ei74 , ch74 ] showed a connection between the ω-power of regular ω-languages and a limit operation ( called here δ-limit ) transferring languages to ω-languages . it was , therefore , asked in [ ch74 ] for more transparent relationships between the ω-power and the δ-limit of languages . it turned out that this δ-limit is a useful tool in translating the finite to the infinite behaviour of deterministic accepting devices ( cf . [ li76 , cg78 , st87a , th90 , eh93 ] ) . as story_separator_special_tag this chapter focuses on finite automata on infinite sequences and infinite trees . the chapter discusses the complexity of the complementation process and the equivalence test . deterministic muller automata and nondeterministic buchi automata are equivalent in recognition power . the chapter shows that any nonempty rabin recognizable set contains a regular tree and that the emptiness problem for rabin tree automata is decidable . the chapter discusses the formulation of two interesting generalizations of rabin 's tree theorem and presents some remarks on the undecidable extensions of the monadic theory of the binary tree . a short overview of the work that studies the fine structure of the class of rabin recognizable sets of trees is also presented in the chapter .
depending on the formalism in which tree properties are classified , the results fall into three categories : monadic second-order logic , tree automata , and fixed-point calculi . story_separator_special_tag the purpose of this tutorial is to survey the essentials of the algorithmic theory of infinite games , its role in automatic program synthesis and verification , and some challenges of current research . story_separator_special_tag the problem of identifying an unknown regular set from examples of its members and nonmembers is addressed . it is assumed that the regular set is presented by a minimally adequate teacher , which can answer membership queries about the set and can also test a conjecture and indicate whether it is equal to the unknown set and provide a counterexample if not . ( a counterexample is a string in the symmetric difference of the correct set and the conjectured set . ) a learning algorithm l* is described that correctly learns any regular set from any minimally adequate teacher in time polynomial in the number of states of the minimum dfa for the set and the maximum length of any counterexample provided by the teacher . it is shown that in a stochastic setting the ability of the teacher to test conjectures may be replaced by a random sampling oracle , ex ( ) . a polynomial-time learning algorithm is shown for a particular problem of context-free language identification . story_separator_special_tag games given by transition graphs of pushdown processes are considered . it is shown that if there is a winning strategy in such a game then there is a winning strategy that is realized by a pushdown process . this fact turns out to be connected with the model checking problem for pushdown automata and the propositional mu-calculus . it is shown that this model checking problem is dexptime-complete . story_separator_special_tag based on a detailed graph-theoretical analysis , wagner 's fundamental results of 1979 are turned into efficient algorithms to compute the wadge degree , the lifschitz degree , and the rabin index of a regular ω-language : the former two can be computed in time o ( f^2 qb + k log k ) and the latter in time o ( f^2 qb ) if the language is represented by a deterministic muller automaton over an alphabet of cardinality b , with f accepting sets , q states , and k strongly connected components .
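the graph-theoretic core of the wagner-style analysis above is the enumeration of the loops ( strongly connected components ) of a deterministic muller automaton 's transition graph . the following is a minimal python sketch of that building block only , not the authors ' algorithm ; the edge-list encoding of the automaton is a hypothetical choice for illustration .

# a minimal sketch : enumerate the scc's (loops) of an automaton's
# transition graph with kosaraju's algorithm. the automaton encoding
# (states 0..n-1, a list of (source, target) edges) is hypothetical.
from collections import defaultdict

def strongly_connected_components(n_states, edges):
    """kosaraju's two-pass algorithm; returns a list of scc's, each a set of states."""
    fwd, rev = defaultdict(list), defaultdict(list)
    for u, v in edges:
        fwd[u].append(v)
        rev[v].append(u)

    order, seen = [], [False] * n_states
    for s in range(n_states):            # first pass: finishing order on the forward graph
        if seen[s]:
            continue
        stack = [(s, iter(fwd[s]))]
        seen[s] = True
        while stack:
            u, it = stack[-1]
            nxt = next(it, None)
            if nxt is None:
                order.append(u)          # u is finished
                stack.pop()
            elif not seen[nxt]:
                seen[nxt] = True
                stack.append((nxt, iter(fwd[nxt])))

    sccs, comp = [], [None] * n_states
    for s in reversed(order):            # second pass: sweep the reversed graph
        if comp[s] is not None:
            continue
        cur, frontier = {s}, [s]
        comp[s] = len(sccs)
        while frontier:
            u = frontier.pop()
            for v in rev[u]:
                if comp[v] is None:
                    comp[v] = len(sccs)
                    cur.add(v)
                    frontier.append(v)
        sccs.append(cur)
    return sccs

# toy deterministic automaton with 4 states, given as edges:
edges = [(0, 1), (1, 0), (1, 2), (2, 3), (3, 2)]
print(strongly_connected_components(4, edges))   # [{0, 1}, {2, 3}]

after the scc's are known , a wagner-style classification proceeds by inspecting which loops are accepting and how accepting and rejecting loops are nested , which the sketch deliberately leaves out .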
automatic face recognition performance has been steadily improving over years of research ; however , it remains significantly affected by factors such as illumination , pose , expression , and resolution that can impact matching scores . the focus of this paper is the pose problem , which remains largely overlooked in most real-world applications . specifically , we focus on one-to-one matching scenarios where a query face image of a random pose is matched against a set of gallery images . we propose a method that relies on two fundamental components : ( a ) a 3d modeling step to geometrically correct the viewpoint of the face . for this purpose , we extend a recent technique for efficient synthesis of 3d face models called 3d generic elastic model . ( b ) a sparse feature extraction step using subspace modeling and l1-minimization to induce pose-tolerance in coefficient space . this in turn enables the synthesis of an equivalent frontal-looking face , which can be used towards recognition . we show significant performance improvements in verification rates compared to commercial matchers , and also demonstrate the resilience of the proposed method with respect to degrading input story_separator_special_tag this paper presents a novel and efficient facial image representation based on local binary pattern ( lbp ) texture features . the face image is divided into several regions from which the lbp feature distributions are extracted and concatenated into an enhanced feature vector to be used as a face descriptor . the performance of the proposed method is assessed in the face recognition problem under different challenges . other applications and several extensions are also discussed story_separator_special_tag canonical correlation analysis is a technique to extract common features from a pair of multivariate data . in complex situations , however , it does not extract useful features because of its linearity . on the other hand , the kernel method used in support vector machines is an efficient approach to improving such a linear method . in this paper , we investigate the effectiveness of applying the kernel method to canonical correlation analysis . keywords : multivariate analysis , multimodal data , kernel method , regularization story_separator_special_tag in this paper , we present a complete framework to inverse render faces with a 3d morphable model ( 3dmm ) . by decomposing the image formation process into geometric and photometric parts , we are able to state the problem as a multilinear system which can be solved accurately and efficiently . as we treat each contribution as independent , the objective function is convex in the parameters and a global solution is guaranteed . we start by recovering 3d shape using a novel algorithm which incorporates the generalization error of the model obtained from empirical measurements . we then describe two methods to recover facial texture , diffuse lighting , specular reflectance , and camera properties from a single image . the methods make increasingly weak assumptions and can be solved in a linear fashion . we evaluate our findings on a publicly available database , where we are able to outperform an existing state-of-the-art algorithm . we demonstrate the usability of the recovered parameters in a recognition experiment conducted on the cmu-pie database .
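the sparse feature extraction step in the 3d generic elastic model abstract above relies on l1-minimization to obtain pose-tolerant coefficients . as a hedged illustration of what such a solver does , here is a minimal iterative soft-thresholding ( ista ) sketch for the lasso objective 0.5 * || a x - b ||^2 + lam * || x ||_1 ; the dictionary a , the dimensions , and lam are illustrative assumptions , not values from the paper .

# a minimal sketch of l1-minimization via ista (iterative soft-thresholding);
# all sizes and the regularization weight below are illustrative only.
import numpy as np

def soft_threshold(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista(a, b, lam, n_iters=500):
    step = 1.0 / np.linalg.norm(a, 2) ** 2   # 1 / lipschitz constant of the gradient
    x = np.zeros(a.shape[1])
    for _ in range(n_iters):
        grad = a.T @ (a @ x - b)             # gradient of the smooth least-squares part
        x = soft_threshold(x - step * grad, step * lam)
    return x

rng = np.random.default_rng(0)
a = rng.standard_normal((64, 256))           # e.g. a dictionary of gallery features
x_true = np.zeros(256)
x_true[[3, 57, 190]] = [1.0, -0.5, 2.0]      # a sparse ground-truth code
b = a @ x_true
x_hat = ista(a, b, lam=0.05)
print(np.flatnonzero(np.abs(x_hat) > 0.1))   # recovers (roughly) the support [3, 57, 190]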
story_separator_special_tag face recognition in the wild can be defined as recognizing individuals unabated by pose , illumination , expression , and uncertainties from the image acquisition . in this paper , we propose a framework recognizing human faces under such uncertainties by focusing on the pose problem while considering the other factors together . the proposed work introduces an automatic front-end stereo-based system , which starts with image acquisition and ends with face recognition . once an individual is detected by one of the stereo cameras , its facial features are identified using a facial feature extraction model . these features are used to steer the second camera to see the same subject . then , a stereo pair is captured and a 3d face is reconstructed . the proposed stereo matching approach carefully handles illumination variance , occlusion , and disparity discontinuity . the reconstructed 3d shape is used to synthesize virtual 2d views in novel poses . all these steps are done off-line in an enrollment stage . to recognize a face from a 2d image , which is captured under unknown environmental conditions , another fast on-line stage starts with facial feature detection . then , a facial signature story_separator_special_tag we introduce deep canonical correlation analysis ( dcca ) , a method to learn complex nonlinear transformations of two views of data such that the resulting representations are highly linearly correlated . parameters of both transformations are jointly learned to maximize the ( regularized ) total correlation . it can be viewed as a nonlinear extension of the linear method canonical correlation analysis ( cca ) . it is an alternative to the nonparametric method kernel canonical correlation analysis ( kcca ) for learning correlated nonlinear transformations . unlike kcca , dcca does not require an inner product , and has the advantages of a parametric method : training time scales well with data size and the training data need not be referenced when computing the representations of unseen instances . in experiments on two real-world datasets , we find that dcca learns representations with significantly higher correlation than those learned by cca and kcca . we also introduce a novel non-saturating sigmoid function based on the cube root that may be useful more generally in feedforward neural networks . story_separator_special_tag subspace-based face representation can be viewed as a regression problem . from this viewpoint , we first revisit the problem of recognizing faces across pose differences , which is a bottleneck in face recognition . then , we propose a new approach for cross-pose face recognition using a regressor with a coupled bias-variance tradeoff . we found that striking a coupled balance between bias and variance in regression for different poses could improve the regressor-based cross-pose face representation , i.e. , the regressor can be made more stable against pose differences . based on this idea , ridge regression and lasso regression are explored . experimental results on the cmu pie , feret , and multi-pie face databases show that the proposed bias-variance tradeoff can achieve considerable reinforcement in recognition performance . story_separator_special_tag a pose-invariant face recognition system based on an image matching method formulated on mrfs is presented . the method uses the energy of the established match between a pair of images as a measure of goodness-of-match .
the method can tolerate moderate global spatial transformations between the gallery and the test images and alleviate the need for geometric preprocessing of facial images by encapsulating a registration step as part of the system . it requires no training on nonfrontal face images . a number of innovations , such as a dynamic block size and block shape adaptation , as well as label pruning and error prewhitening measures , have been introduced to increase the effectiveness of the approach . the experimental evaluation of the method is performed on two publicly available databases . first , the method is tested on the rotation shots of the xm2vts data set in a verification scenario . next , the evaluation is conducted in an identification scenario on the cmu-pie database . the method compares favorably with the existing 2d or 3d generative model-based methods on both databases in both identification and verification scenarios . story_separator_special_tag the paper addresses the problem of pose-invariant recognition of faces via an mrf matching model . unlike previous costly matching approaches , the proposed algorithm employs effective techniques to reduce the mrf inference time . to this end , processing is done in a parallel fashion on a gpu employing a dual decomposition framework . the optimisation is further accelerated by taking a multi-resolution approach based on the renormalisation group theory ( rgt ) along with efficient methods for message passing and the incremental subgradient approach . for the graph construction , daisy features are used as node attributes exhibiting high cross-pose invariance , while high discriminatory capability in the classification stage is obtained via multi-scale lbp histograms . the experimental evaluation of the method is performed via extensive tests on the databases of xm2vts , feret and lfw in verification , identification and the unseen pair-matching paradigms . the proposed approach achieves state-of-the-art performance in pose-invariant recognition of faces and performs as well or better than the existing methods in the unconstrained settings of the challenging lfw database using a single feature for classification . story_separator_special_tag this paper addresses face verification in unconstrained settings . for this purpose , first , a nonlinear binary class-specific kernel discriminant analysis classifier ( cs-kda ) based on spectral regression kernel discriminant analysis is proposed . by virtue of the two-class formulation , the proposed cs-kda approach offers a number of desirable properties such as specificity of the transformation for each subject , computational efficiency , simplicity of training , isolation of the enrolment of each client from others and increased speed in probe testing . using the proposed cs-kda approach , a regional discriminative face image representation based on a multiscale variant of the binarized statistical image features is proposed next . the proposed component-based representation , when coupled with the dense pixel-wise alignments provided by a symmetric mrf matching model , reduces the sensitivity to misalignments and pose variations , gauging the similarity more effectively . finally , the discriminative representation is combined with two other effective image descriptors , namely the multiscale local binary patterns and the multiscale local phase quantization histograms via a kernel fusion approach to further enhance system accuracy .
the experimental evaluation of the proposed methodology on challenging databases demonstrates its advantage over other methods story_separator_special_tag recognition of faces in arbitrary pose is addressed in this paper . for this task , an mrf-based classification approach is proposed which employs the energy of the established match between a pair of images as a criterion of goodness-of-match . by incorporating an image matching method as part of the recognition process , the system is made robust to moderate global spatial transformations . the approach draws on a method [ 1 ] which has the potential to cope with pose changes but a direct application of which suffers from several shortcomings . in order to overcome these problems , a number of enhancements are proposed . first , by adopting a multi-scale relaxation scheme based on the super coupling transform , the inference using the sequential tree re-weighted message passing approach [ 2 ] is accelerated . next , by taking advantage of a statistical shape prior for the matching , the results are regularized and constrained , making the system robust to spurious structures and outliers . for classification , both textural and structural similarities of the facial images are taken into account . the method is evaluated on two databases and promising results are obtained . story_separator_special_tag variation due to viewpoint is one of the key challenges that stand in the way of a complete solution to the face recognition problem . it is easy to note that local regions of the face change differently in appearance as the viewpoint varies . recently , patch-based approaches , such as those of kanade and yamada , have taken advantage of this effect , resulting in improved viewpoint-invariant face recognition . in this paper we propose a data-driven extension to their approach , in which we not only model how a face patch varies in appearance , but also how it deforms spatially as the viewpoint varies . we propose a novel alignment strategy which we refer to as `` stack flow '' that discovers viewpoint-induced spatial deformities undergone by a face at the patch level . one can then view the spatial deformation of a patch as the correspondence of that patch between two viewpoints . we present improved identification and verification results to demonstrate the utility of our technique . story_separator_special_tag in this paper we propose a framework for gradient descent image alignment in the fourier domain . specifically , we propose an extension to the classical lucas & kanade ( lk ) algorithm where we represent the source and template image 's intensity pixels in the complex 2d fourier domain rather than in the 2d spatial domain . we refer to this approach as the fourier lk ( flk ) algorithm . the flk formulation is especially advantageous , over traditional lk , when it comes to pre-processing the source and template images with a bank of filters ( e.g.
, gabor filters ) as : ( i ) it can handle substantial illumination variations , ( ii ) the inefficient pre-processing filter bank step can be subsumed within the flk algorithm as a sparse diagonal weighting matrix , ( iii ) unlike traditional lk , the computational cost is invariant to the number of filters and as a result far more efficient , ( iv ) this approach can be extended to the inverse compositional form of the lk algorithm where nearly all steps ( including fourier transform and filter bank pre-processing ) can be pre-computed , leading to an extremely story_separator_special_tag we present a novel approach to pose-invariant face recognition that handles continuous pose variations , is not database-specific , and achieves high accuracy without any manual intervention . our method uses multidimensional gaussian process regression to learn a nonlinear mapping function from the 2d shapes of faces at any non-frontal pose to the corresponding 2d frontal face shapes . we use this mapping to take an input image of a new face at an arbitrary pose and pose-normalize it , generating a synthetic frontal image of the face that is then used for recognition . our fully automatic system for face recognition includes automatic methods for extracting 2d facial feature points and accurately estimating 3d head pose , and this information is used as input to the 2d pose-normalization algorithm . the current system can handle pose variation up to 45 degrees to the left or right ( yaw angle ) and up to 30 degrees up or down ( pitch angle ) . the system demonstrates high accuracy in recognition experiments on the cmu-pie , usf 3d , and multi-pie databases , showing excellent generalization across databases and convincingly outperforming other automatic methods . story_separator_special_tag an ideal approach to the problem of pose-invariant face recognition would handle continuous pose variations , would not be database specific , and would achieve high accuracy without any manual intervention . most of the existing approaches fail to match one or more of these goals . in this paper , we present a fully automatic system for pose-invariant face recognition that not only meets these requirements but also outperforms other comparable methods . we propose a 3d pose normalization method that is completely automatic and leverages the accurate 2d facial feature points found by the system . the current system can handle 3d pose variation up to ±45° in yaw and ±30° in pitch angles . recognition experiments were conducted on the usf 3d , multi-pie , cmu-pie , feret , and facepix databases . our system not only shows excellent generalization by achieving high accuracy on all 5 databases but also outperforms other methods convincingly . story_separator_special_tag face recognition in real-world conditions requires the ability to deal with a number of conditions , such as variations in pose , illumination and expression . in this paper , we focus on variations in head pose and use a computationally efficient regression-based approach for synthesising face images in different poses , which are used to extend the face recognition training set . in this data-driven approach , the correspondences between facial landmark points in frontal and non-frontal views are learnt offline from manually annotated training data via gaussian process regression . we then use this learner to synthesise non-frontal face images from any unseen frontal image .
to demonstrate the utility of this approach , two frontal face recognition systems ( the commonly used pca and the recent multi-region histograms ) are augmented with synthesised non-frontal views for each person . this synthesis and augmentation approach is experimentally validated on the feret dataset , showing a considerable improvement in recognition rates for ±40 and ±60 views , while maintaining high recognition rates for ±15 and ±25 views . story_separator_special_tag driven by key law enforcement and commercial applications , research on face recognition from video sources has intensified in recent years . the ensuing results have demonstrated that videos possess unique properties that allow both humans and automated systems to perform recognition accurately in difficult viewing conditions . however , significant research challenges remain as most video-based applications do not allow for controlled recordings . in this survey , we categorize the research in this area and present a broad and deep review of recently proposed methods for overcoming the difficulties encountered in unconstrained settings . we also draw connections between the ways in which humans and current algorithms recognize faces . an overview of the most popular and difficult publicly available face video databases is provided to complement these discussions . finally , we cover key research challenges and opportunities that lie ahead for the field as a whole . story_separator_special_tag we propose a method of face verification that takes advantage of a reference set of faces , disjoint by identity from the test faces , labeled with identity and face part locations . the reference set is used in two ways . first , we use it to perform an identity-preserving alignment , warping the faces in a way that reduces differences due to pose and expression but preserves differences that indicate identity . second , using the aligned faces , we learn a large set of identity classifiers , each trained on images of just two people . we call these tom-vs-pete classifiers to stress their binary nature . we assemble a collection of these classifiers able to discriminate among a wide variety of subjects and use their outputs as features in a same-or-different classifier on face pairs . we evaluate our method on the labeled faces in the wild benchmark , achieving an accuracy of 93.10 % , significantly improving on the published state of the art . story_separator_special_tag as face recognition applications progress from constrained sensing and cooperative subjects scenarios ( e.g. , driver 's license and passport photos ) to unconstrained scenarios with uncooperative subjects ( e.g. , video surveillance ) , new challenges are encountered . these challenges are due to variations in ambient illumination , image resolution , background clutter , facial pose , expression , and occlusion . in forensic investigations where the goal is to identify a person of interest , often based on low quality face images and videos , we need to utilize whatever source of information is available about the person . this could include one or more video tracks , multiple still images captured by bystanders ( using , for example , their mobile phones ) , 3-d face models constructed from image ( s ) and video ( s ) , and verbal descriptions of the subject provided by witnesses . these verbal descriptions can be used to generate a face sketch and provide ancillary information about the person of interest ( e.g. , gender , race , and age ) .
while traditional face matching methods generally take a single media ( i.e. , a still face story_separator_special_tag to create a pose-invariant face recognizer , one strategy is the view-based approach , which uses a set of real example views at different poses . but what if we only have one real view available , such as a scanned passport photo , can we still recognize faces under different poses ? given one real view at a known pose , it is still possible to use the view-based approach by exploiting prior knowledge of faces to generate virtual views , or views of the face as seen from different poses . to represent prior knowledge , we use 2d example views of prototype faces under different rotations . we develop example-based techniques for applying the rotation seen in the prototypes to essentially `` rotate '' the single real view which is available . next , the combined set of one real and multiple virtual views is used as example views for a view-based , pose-invariant face recognizer . our experiments suggest that among the techniques for expressing prior knowledge of faces , 2d example-based approaches should be considered alongside the more standard 3d modeling techniques . story_separator_special_tag researchers in computer vision and pattern recognition have worked on automatic techniques for recognizing human faces for the last 20 years . while some systems , especially template-based ones , have been quite successful on expressionless , frontal views of faces with controlled lighting , not much work has taken face recognizers beyond these narrow imaging conditions . our goal is to build a face recognizer that works under varying pose , the difficult part of which is to handle face rotations in depth . building on successful template-based systems , our basic approach is to represent faces with templates from multiple model views that cover different poses from the viewing sphere . to recognize a novel view , the recognizer locates the eyes and nose features , uses these locations to geometrically register the input with model views , and then uses correlation on model templates to find the best match in the database of people . our system has achieved a recognition rate of 98 % on a database of 62 people containing 10 testing and 15 modeling views per person . story_separator_special_tag face images captured by surveillance cameras usually have poor resolution in addition to uncontrolled poses and illumination conditions , all of which adversely affect the performance of face matching algorithms . in this paper , we develop a completely automatic , novel approach for matching surveillance quality facial images to high-resolution images in frontal pose , which are often available during enrollment . the proposed approach uses multidimensional scaling to simultaneously transform the features from the poor quality probe images and the high-quality gallery images in such a manner that the distances between them approximate the distances had the probe images been captured in the same conditions as the gallery images . tensor analysis is used for facial landmark localization in the low-resolution uncontrolled probe images for computing the features . thorough evaluation on the multi-pie dataset and comparisons with state-of-the-art super-resolution and classifier-based approaches are performed to illustrate the usefulness of the proposed approach . experiments on surveillance imagery further signify the applicability of the framework .
we also show the usefulness of the proposed approach for the application of tracking and recognition in surveillance videos . story_separator_special_tag in this paper , a new technique for modeling textured 3d faces is introduced . 3d faces can either be generated automatically from one or more photographs , or modeled directly through an intuitive user interface . users are assisted in two key problems of computer aided face modeling . first , new face images or new 3d face models can be registered automatically by computing dense one-to-one correspondence to an internal face model . second , the approach regulates the naturalness of modeled faces , avoiding faces with an unlikely appearance . starting from an example set of 3d face models , we derive a morphable face model by transforming the shape and texture of the examples into a vector space representation . new faces and expressions can be modeled by forming linear combinations of the prototypes . shape and texture constraints derived from the statistics of our example faces are used to guide manual modeling or automated matching algorithms . we show 3d face reconstructions from single images and their applications for photo-realistic image manipulations . we also demonstrate face manipulations according to complex parameters such as gender , fullness of a face or its distinctiveness . story_separator_special_tag this paper presents a method for face recognition across variations in pose , ranging from frontal to profile views , and across a wide range of illuminations , including cast shadows and specular reflections . to account for these variations , the algorithm simulates the process of image formation in 3d space , using computer graphics , and it estimates 3d shape and texture of faces from single images . the estimate is achieved by fitting a statistical , morphable model of 3d faces to images . the model is learned from a set of textured 3d scans of heads . we describe the construction of the morphable model , an algorithm to fit the model to images , and a framework for face identification . in this framework , faces are represented by model parameters for 3d shape and texture . we present results obtained with 4,488 images from the publicly available cmu-pie database and 1,940 images from the feret database . story_separator_special_tag the decomposition of deformations by principal warps is demonstrated . the method is extended to deal with curving edges between landmarks . this formulation is related to other applications of splines current in computer vision . how they might aid in the extraction of features for analysis , comparison , and diagnosis of biological and medical images is indicated . story_separator_special_tag this survey focuses on recognition performed by matching models of the three-dimensional shape of the face , either alone or in combination with matching corresponding two-dimensional intensity images . research trends to date are summarized , and challenges confronting the development of more accurate three-dimensional face recognition are identified . these challenges include the need for better sensors , improved recognition algorithms , and more rigorous experimental methodology . story_separator_special_tag two new algorithms for computer recognition of human faces , one based on the computation of a set of geometrical features , such as nose width and length , mouth position , and chin shape , and the second based on almost-gray-level template matching , are presented .
the results obtained for the testing sets show about 90 % correct recognition using geometrical features and perfect recognition using template matching . story_separator_special_tag pose variation is one of the challenging factors for face recognition . in this paper , we propose a novel cross-pose face recognition method named regularized latent least square regression ( rllsr ) . the basic assumption is that the images captured under different poses of one person can be viewed as pose-specific transforms of a single ideal object . we treat the observed images as regressors and the ideal object as the response , and then formulate this assumption in the least square regression framework , so as to learn the multiple pose-specific transforms . specifically , we incorporate some prior knowledge as two regularization terms into the least square approach : 1 ) the smoothness regularization , as the transforms for nearby poses should not differ too much ; 2 ) the local consistency constraint , as the distribution of the latent ideal objects should preserve the geometric structure of the observed image space . we develop an alternating algorithm to simultaneously solve for the ideal objects of the training individuals and a set of pose-specific transforms . the experimental results on the multi-pie dataset demonstrate the effectiveness of the proposed method and superiority over the previous methods story_separator_special_tag we present facewarehouse , a database of 3d facial expressions for visual computing applications . we use kinect , an off-the-shelf rgbd camera , to capture 150 individuals aged 7-80 from various ethnic backgrounds . for each person , we captured the rgbd data of her different expressions , including the neutral expression and 19 other expressions such as mouth-opening , smile , kiss , etc . for every rgbd raw data record , a set of facial feature points on the color image such as eye corners , mouth contour , and the nose tip are automatically localized , and manually adjusted if better accuracy is required . we then deform a template facial mesh to fit the depth data as closely as possible while matching the feature points on the color image to their corresponding points on the mesh . starting from these fitted face meshes , we construct a set of individual-specific expression blendshapes for each person . these meshes with consistent topology are assembled as a rank-3 tensor to build a bilinear face model with two attributes : identity and expression . compared with previous 3d facial databases , for every person in our database , story_separator_special_tag we present a novel approach to address the representation issue and the matching issue in face recognition ( verification ) . firstly , our approach encodes the micro-structures of the face by a new learning-based encoding method . unlike many previous manually designed encoding methods ( e.g. , lbp or sift ) , we use unsupervised learning techniques to learn an encoder from the training examples , which can automatically achieve a very good tradeoff between discriminative power and invariance . then we apply pca to get a compact face descriptor . we find that a simple normalization mechanism after pca can further improve the discriminative ability of the descriptor . the resulting face representation , the learning-based ( le ) descriptor , is compact , highly discriminative , and easy to extract .
to handle the large pose variation in real-life scenarios , we propose a pose-adaptive matching method that uses pose-specific classifiers to deal with different pose combinations ( e.g. , frontal vs. frontal , frontal vs. left ) of the matching face pair . our approach is comparable with the state-of-the-art methods on the labeled faces in the wild ( lfw ) benchmark ( we achieved 84.45 % recognition rate story_separator_special_tag we propose using stereo matching for 2-d face recognition across pose . we match one 2-d query image to one 2-d gallery image without performing 3-d reconstruction . then the cost of this matching is used to evaluate the similarity of the two images . we show that this cost is robust to pose variations . to illustrate this idea we built a face recognition system on top of a dynamic programming stereo matching algorithm . the method works well even when the epipolar lines we use do not exactly fit the viewpoints . we have tested our approach on the pie dataset . in all the experiments , our method demonstrates effective performance compared with other algorithms . story_separator_special_tag face recognition across pose is a problem of fundamental importance in computer vision . we propose to address this problem by using stereo matching to judge the similarity of two 2d images of faces seen from different poses . stereo matching allows for arbitrary , physically valid , continuous correspondences . we show that the stereo matching cost provides a very robust measure of similarity of faces that is insensitive to pose variations . to enable this , we show that , for conditions common in face recognition , the epipolar geometry of face images can be computed using either four or three feature points . we also provide a straightforward adaptation of a stereo matching algorithm to compute the similarity between faces . the proposed approach has been tested on the cmu pie dataset and demonstrates superior performance compared to existing methods in the presence of pose variation . it also shows robustness to lighting variation . story_separator_special_tag stereo matching has been used for face recognition in the presence of pose variation . in this approach , stereo matching is used to compare two 2-d images based on correspondences that reflect the effects of viewpoint variation and allow for occlusion . we show how to use stereo matching to derive image descriptors that can be used to train a classifier . this improves face recognition performance , producing the best published results on the cmu pie dataset . we also demonstrate that classification based on stereo matching can be used for general object classification in the presence of pose variation . in preliminary experiments we show promising results on the 3d object class dataset , a standard , challenging 3d classification dataset . story_separator_special_tag 2-d face recognition in the presence of large pose variations presents a significant challenge . when comparing a frontal image of a face to a near profile image , one must cope with large occlusions , non-linear correspondences , and significant changes in appearance due to viewpoint . stereo matching has been used to handle these problems , but performance of this approach degrades with large pose changes . we show that some of this difficulty is due to the effect that foreshortening of slanted surfaces has on window-based matching methods , which are needed to provide robustness to lighting change .
we address this problem by designing a new , dynamic programming stereo algorithm that accounts for surface slant . we show that on the cmu pie dataset this method results in significant improvements in recognition performance . story_separator_special_tag the variation of facial appearance due to the viewpoint ( pose ) degrades face recognition systems considerably , which is one of the bottlenecks in face recognition . one of the possible solutions is generating a virtual frontal view from any given nonfrontal view to obtain a virtual gallery/probe face . following this idea , this paper proposes a simple , but efficient , novel locally linear regression ( llr ) method , which generates the virtual frontal view from a given nonfrontal face image . we first justify the basic assumption of the paper that there exists an approximate linear mapping between a nonfrontal face image and its frontal counterpart . then , by formulating the estimation of the linear mapping as a prediction problem , we present the regression-based solution , i.e. , globally linear regression . to improve the prediction accuracy in the case of coarse alignment , llr is further proposed . in llr , we first perform dense sampling in the nonfrontal face image to obtain many overlapped local patches . then , the linear regression technique is applied to each small patch for the prediction of its virtual frontal patch . through the combination of story_separator_special_tag building a high-dimensional ( e.g. , 100k-dim ) feature for face recognition seems a bad idea because it brings difficulties to subsequent training , computation , and storage . this prevents further exploration of the use of a high-dimensional feature . in this paper , we study the performance of a high-dimensional feature . we first empirically show that high dimensionality is critical to high performance . a 100k-dim feature , based on a single-type local binary pattern ( lbp ) descriptor , can achieve significant improvements over both its low-dimensional version and the state-of-the-art . we also make the high-dimensional feature practical . with our proposed sparse projection method , named rotated sparse regression , both computation and model storage can be reduced by over 100 times without sacrificing accuracy . story_separator_special_tag expression and pose variations are major challenges for reliable face recognition ( fr ) in 2d . in this paper , we aim to endow state-of-the-art face recognition sdks with robustness to facial expression variations and pose changes by using an extended 3d morphable model ( 3dmm ) which isolates identity variations from those due to facial expressions . specifically , given a probe with expression , a novel view of the face is generated where the pose is rectified and the expression neutralized . we present two methods of expression neutralization . the first one uses prior knowledge to infer the neutral expression image from an input image . the second method , specifically designed for verification , is based on the transfer of the gallery face expression to the probe . experiments using rectified and neutralized views with a standard commercial fr sdk on two 2d face databases , namely multi-pie and ar , show significant performance improvement of the commercial sdk in dealing with expression and pose variations and demonstrate the effectiveness of the proposed approach . story_separator_special_tag we demonstrate a novel method of interpreting images using an active appearance model ( aam ) .
an aam contains a statistical model of the shape and grey-level appearance of the object of interest which can generalise to almost any valid example . during a training phase we learn the relationship between model parameter displacements and the residual errors induced between a training image and a synthesised model example . to match to an image we measure the current residuals and use the model to predict changes to the current parameters , leading to a better fit . a good overall match is obtained in a few iterations , even from poor starting estimates . we describe the technique in detail and give results of quantitative performance tests . we anticipate that the aam algorithm will be an important method for locating deformable objects in many applications . story_separator_special_tag model-based vision is firmly established as a robust approach to recognizing and locating known rigid objects in the presence of noise , clutter , and occlusion . it is more problematic to apply model-based methods to images of objects whose appearance can vary , though a number of approaches based on the use of flexible templates have been proposed . the problem with existing methods is that they sacrifice model specificity in order to accommodate variability , thereby compromising robustness during image interpretation . we argue that a model should only be able to deform in ways characteristic of the class of objects it represents . we describe a method for building models by learning patterns of variability from a training set of correctly annotated images . these models can be used for image search in an iterative refinement algorithm analogous to that employed by active contour models ( snakes ) . the key difference is that our active shape models can only deform to fit the data in ways consistent with the training set . we show several practical examples where we have built such models and used them to locate partially occluded objects in noisy , story_separator_special_tag to perform unconstrained face recognition robust to variations in illumination , pose and expression , this paper presents a new scheme to extract multi-directional multi-level dual-cross patterns ( mdml-dcps ) from face images . specifically , the mdml-dcps scheme exploits the first derivative of the gaussian operator to reduce the impact of differences in illumination and then computes the dcp feature at both the holistic and component levels . dcp is a novel face image descriptor inspired by the unique textural structure of human faces . it is computationally efficient and only doubles the cost of computing local binary patterns , yet is extremely robust to pose and expression variations . mdml-dcps comprehensively yet efficiently encodes the invariant characteristics of a face image from multiple levels into patterns that are highly discriminative of inter-personal differences but robust to intra-personal variations . experimental results on the feret , cas-peal-r1 , frgc 2.0 , and lfw databases indicate that dcp outperforms the state-of-the-art local descriptors ( e.g. , lbp , ltp , lpq , poem , tlbp , and lgxp ) for both face identification and face verification tasks . more impressively , the best performance is achieved on the challenging lfw and story_separator_special_tag face images appearing in multimedia applications , e.g.
, social networks and digital entertainment , usually exhibit dramatic pose , illumination , and expression variations , resulting in considerable performance degradation for traditional face recognition algorithms . this paper proposes a comprehensive deep learning framework to jointly learn face representation using multimodal information . the proposed deep learning structure is composed of a set of elaborately designed convolutional neural networks ( cnns ) and a three-layer stacked auto-encoder ( sae ) . the set of cnns extracts complementary facial features from multimodal data . then , the extracted features are concatenated to form a high-dimensional feature vector , whose dimension is compressed by sae . all of the cnns are trained using a subset of 9,000 subjects from the publicly available casia-webface database , which ensures the reproducibility of this work . using the proposed single cnn architecture and limited training data , 98.43 % verification rate is achieved on the lfw database . benefitting from the complementary information contained in multimodal data , our small ensemble system achieves higher than 99.0 % recognition rate on lfw using a publicly available training set . story_separator_special_tag face images captured in unconstrained environments usually contain significant pose variation , which dramatically degrades the performance of algorithms designed to recognize frontal faces . this paper proposes a novel face identification framework capable of handling the full range of pose variations within ±90° of yaw . the proposed framework first transforms the original pose-invariant face recognition problem into a partial frontal face recognition problem . a robust patch-based face representation scheme is then developed to represent the synthesized partial frontal faces . for each patch , a transformation dictionary is learnt under the proposed multi-task learning scheme . the transformation dictionary transforms the features of different poses into a discriminative subspace . finally , face matching is performed at patch level rather than at the holistic level . extensive and systematic experimentation on feret , cmu-pie , and multi-pie databases shows that the proposed method consistently outperforms single-task-based baselines as well as state-of-the-art methods for the pose problem . we further extend the proposed algorithm for the unconstrained face verification problem and achieve top-level performance on the challenging lfw dataset . story_separator_special_tag pose variation is a great challenge for robust face recognition . in this paper , we present a fully automatic pose normalization algorithm that can handle continuous pose variations and achieve high face recognition accuracy . first , an automatic method is proposed to find pose-dependent correspondences between 2-d facial feature points and a 3-d face model . this method is based on a multi-view random forest embedded active shape model . then we densely map each pixel in the face image onto the 3-d face model and rotate it to the frontal view . the filling of occluded face regions is guided by facial symmetry . recognition experiments were conducted on the two western databases , cmu-pie and feret , and one eastern database , cas-peal . currently the algorithm has been trained with pose variation up to ±50° in yaw . our algorithm not only achieves high recognition accuracy for learnt poses but also shows good generalizability for extreme poses .
furthermore , it suggests promising application to people of different races . story_separator_special_tag face recognition across large pose changes is one of the hardest problems for automatic face recognition . recently , approaches that use partial least squares ( pls ) to compute pairwise pose-independent coupled subspaces have achieved good results on this problem . in this paper , we perform a thorough experimental analysis of the pls approach for pose-invariant face recognition . we find that the use of different alignment methods can have a significant influence on the results . we propose a simple and consistent alignment method that is easily reproducible and uses only a few hand-tuned parameters . further , we find that block-based approaches outperform those using a holistic face representation . however , we note that the size , positioning and selection of the extracted blocks have a large influence on the performance of pls-based approaches , with the optimal sizes and selections differing significantly for different feature representations . finally , we show that local pls using simple intensity values performs almost as well as more sophisticated feature extraction methods like gabor features for frontal gallery images . however , gabor features perform significantly better with non-frontal gallery images . the achieved results exceed the previously reported story_separator_special_tag we focused this work on handling variation in facial appearance caused by 3d head pose . a pose normalization approach based on fitting active appearance models ( aam ) on a given face image was investigated . profile faces with different rotation angles in depth were warped into shape-free frontal view faces . face recognition experiments were carried out on the pose normalized facial images with a local appearance-based approach . the experimental results showed a significant improvement in accuracy . the local appearance-based face recognition approach is found to be robust against errors introduced by face model fitting . story_separator_special_tag one of the key remaining problems in face recognition is that of handling the variability in appearance due to changes in pose . the authors present a simple and computationally efficient 3-d pose recovery methodology . it addresses the computational expense of current generic 3-d model pose recovery methods and thus can be applied in real-time applications . compared with the virtual view methods , the face identification system with the proposed pose recovery method demands much less storage space as it transforms the 2-d rotated face to the 2-d fronto-parallel view for subsequent identification rather than generating multiple virtual views for a single input face . experiments evaluating the effectiveness of the technique are reported . the systems are compared with human performance and existing techniques . story_separator_special_tag we present a generative appearance-based method for recognizing human faces under variation in lighting and viewpoint . our method exploits the fact that the set of images of an object in fixed pose , but under all possible illumination conditions , is a convex cone in the space of images . using a small number of training images of each face taken with different lighting directions , the shape and albedo of the face can be reconstructed .
in turn , this reconstruction serves as a generative model that can be used to render ( or synthesize ) images of the face under novel poses and illumination conditions . the pose space is then sampled and , for each pose , the corresponding illumination cone is approximated by a low-dimensional linear subspace whose basis vectors are estimated using the generative model . our recognition algorithm assigns to a test image the identity of the closest approximated illumination cone . test results show that the method performs almost without error , except on the most extreme lighting directions . story_separator_special_tag this paper proposes novel ways to deal with pose variations in a 2-d face recognition scenario . using a training set of sparse face meshes , we built a point distribution model and identified the parameters which are responsible for controlling the apparent changes in shape due to turning and nodding the head , namely the pose parameters . based on them , we propose two approaches for pose correction : 1 ) a method in which the pose parameters from both meshes are set to typical values of frontal faces , and 2 ) a method in which one mesh adopts the pose parameters of the other one . finally , we obtain pose-corrected meshes and , taking advantage of facial symmetry , synthesize virtual views via thin-plate-spline-based warping . given that the corrected images are not embedded into a constant reference frame , holistic methods are not suitable for feature extraction . instead , the virtual faces are fed into a system that makes use of gabor filtering for recognition . unlike other approaches that warp faces onto a mean shape , we show that if only pose parameters are modified , client specific story_separator_special_tag a close relationship exists between the advancement of face recognition algorithms and the availability of face databases varying factors that affect facial appearance in a controlled manner . the cmu pie database has been very influential in advancing research in face recognition across pose and illumination . despite its success , the pie database has several shortcomings : a limited number of subjects , a single recording session , and only a few captured expressions . to address these issues we collected the cmu multi-pie database . it contains 337 subjects , imaged under 15 viewpoints and 19 illumination conditions in up to four recording sessions . in this paper we introduce the database and describe the recording procedure . we furthermore present results from baseline experiments using pca and lda classifiers to highlight similarities and differences between pie and multi-pie . story_separator_special_tag in a vast number of real-world face recognition applications , gallery and probe image sets are captured from different scenarios . for such multi-view data , face recognition systems often perform poorly . to tackle this problem , in this paper we propose a graph embedding framework , which can project the multi-view data into a common subspace of higher discriminability between classes . this framework can be readily utilized to extend classical dimensionality reduction methods to multi-view scenarios . hence , by utilizing the framework for multi-view face recognition , we propose multi-view linear discriminant analysis ( milda ) . we also empirically demonstrate that , for several distinct multi-view face recognition scenarios , milda has excellent performance and outperforms many popular approaches .
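several abstracts above ( cca , kcca , dcca , and milda ) revolve around projecting two views of the same faces into a common subspace where they are maximally correlated . as a point of reference , here is a minimal sketch of the classical linear baseline , canonical correlation analysis , computed by whitening each view and taking an svd of the cross-covariance ; the regularization eps and the toy data are illustrative assumptions , and this is not the milda algorithm itself .

# a minimal sketch of classical canonical correlation analysis (cca);
# eps is an illustrative ridge term to keep the covariances invertible.
import numpy as np

def cca(x, y, n_components, eps=1e-6):
    """x: (n, dx), y: (n, dy), rows are paired samples; returns projections wx, wy and correlations."""
    x = x - x.mean(axis=0)
    y = y - y.mean(axis=0)
    n = x.shape[0]
    cxx = x.T @ x / n + eps * np.eye(x.shape[1])
    cyy = y.T @ y / n + eps * np.eye(y.shape[1])
    cxy = x.T @ y / n
    # whiten each view via cholesky factors, then svd the whitened cross-covariance
    lx_inv_t = np.linalg.inv(np.linalg.cholesky(cxx)).T   # satisfies m.T @ cxx @ m = i
    ly_inv_t = np.linalg.inv(np.linalg.cholesky(cyy)).T
    u, s, vt = np.linalg.svd(lx_inv_t.T @ cxy @ ly_inv_t)
    wx = lx_inv_t @ u[:, :n_components]
    wy = ly_inv_t @ vt.T[:, :n_components]
    return wx, wy, s[:n_components]

# toy usage: two views generated from the same latent factors
rng = np.random.default_rng(1)
z = rng.standard_normal((500, 2))                          # shared latent factors
x = z @ rng.standard_normal((2, 6)) + 0.1 * rng.standard_normal((500, 6))
y = z @ rng.standard_normal((2, 4)) + 0.1 * rng.standard_normal((500, 4))
wx, wy, corrs = cca(x, y, n_components=2)
print(corrs)                                               # close to 1 for the shared factors

in this common subspace , gallery and probe views can be compared directly , which is the basic idea the nonlinear ( kernel and deep ) variants above build on .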
story_separator_special_tag 3d face modeling from 2d face images is of significant importance for face analysis , animation and recognition . previous research on this topic mainly focused on 3d face modeling from a single 2d face image ; however , a single face image can only provide a limited description of a 3d face . in many applications , for example , law enforcement , multi-view face images are usually captured for a subject during enrollment , which makes it desirable to build a 3d face texture model , given a pair of frontal and profile face images . we first determine the correspondence between un-calibrated frontal and profile face images through facial landmark alignment . an initial 3d face shape is then reconstructed from the frontal face image , followed by shape refinement utilizing the depth information provided by the profile image . finally , face texture is extracted by mapping the frontal face image on the recovered 3d face shape . the proposed method is utilized for 2d face recognition in two scenarios : ( i ) normalization of probe image , and ( ii ) enhancing the representation capability of gallery set . experimental results comparing the proposed story_separator_special_tag taking advantage of the statistical learning-based point of view , several approaches to frontal face image synthesis have achieved remarkable results . however , the existing methods mainly utilize either ordinary least squares ( ols ) or fixed $\ell_1$-norm-penalized sparse regression to estimate the solution . for the former , the solution is unstable when the system of linear equations is ill-conditioned . for the latter , sparsity is only considered , while the significance of local similarity between the input image and each training sample is ignored . thus the synthesized result fails to faithfully approximate the ground truth . moreover , these traditional methods cannot ensure the consistency between corresponding patches in frontal and profile faces . to address these problems , we present a unified regularization framework ( urf ) by imposing two regularization terms onto the solution . firstly , we introduce an $\ell_2$-norm constraint and impose a diagonal weights matrix onto it , in which each diagonal entry is defined by the spatial distance between the input image patch and each individual patch in the training set . secondly , to mitigate the aforementioned inconsistency problem , we story_separator_special_tag we present a data-driven method for estimating the 3d shapes of faces viewed in single , unconstrained photos ( aka `` in-the-wild '' ) . our method was designed with an emphasis on robustness and efficiency - with the explicit goal of deployment in real-world applications which reconstruct and display faces in 3d . our key observation is that for many practical applications , warping the shape of a reference face to match the appearance of a query is enough to produce realistic impressions of the query 's 3d shape . doing so , however , requires matching visual features between the ( possibly very different ) query and reference images , while ensuring that a plausible face shape is produced . to this end , we describe an optimization process which seeks to maximize the similarity of appearances and depths , jointly , to those of a reference model . we describe our system for monocular face shape reconstruction and present both qualitative and quantitative experiments , comparing our method against alternative systems , and demonstrating its capabilities .
finally , as a testament to its suitability for real-world applications , we offer an open , on-line implementation story_separator_special_tag `` frontalization '' is the process of synthesizing frontal-facing views of faces appearing in single unconstrained photos . recent reports have suggested that this process may substantially boost the performance of face recognition systems . it does so by transforming the challenging problem of recognizing faces viewed from unconstrained viewpoints into the easier problem of recognizing faces in constrained , forward-facing poses . previous frontalization methods did this by attempting to approximate 3d facial shapes for each query image . we observe that 3d face shape estimation from unconstrained photos may be a harder problem than frontalization and can potentially introduce facial misalignments . instead , we explore the simpler approach of using a single , unmodified , 3d surface as an approximation to the shape of all input faces . we show that this leads to a straightforward , efficient and easy to implement method for frontalization . more importantly , it produces aesthetic new frontal views and is surprisingly effective when used for face recognition and gender estimation . story_separator_special_tag pose , illumination , expression and the generalization of such effects to unseen face data samples are the fundamental problems faced in face recognition . the significant contribution of this thesis is the ability to match any two face images with a large pose angle variation . this approach utilizes a proposed 3d prior face model in order to cover a wide range of poses . to achieve this , a rapid 3d modeling scheme is proposed , called 3d generic elastic model ( gem ) , which allows the synthesis of novel 2d images faster and more realistically than traditional 3d morphable model ( 3dmm ) approaches used to date . in contrast , our work only requires the observed facial landmarks in a face image ( see appendix a for proposed work in robust facial landmarking and alignment using combined active shape and active appearance models ) , coupled with the proposed 3d gem depth-map generated from the usf human-id database . although we only use a single gem , we show that we can model a diverse set of 3d dense face shapes which provide visually accurate novel 2d pose synthesis of faces . indeed , we story_separator_special_tag this paper proposes an efficient way of modeling 3d faces by using only two images : a frontal and a profile one . although it is desirable to utilize only a single image for 3d face modeling , more accurate depth information can be obtained if we use a profile face image additionally . despite this seemingly straightforward task , however , no standard solutions for 3d face modeling with two images have yet been reported . to tackle this problem , in our work , we first extract facial shape information from each image and then align these two shapes in order to obtain a sparse 3d face . then , the observed sparse 3d face is combined with generic dense depth information . by doing so , we reflect both the observed 3d sparse depth information and smooth depth changes around facial areas in our reconstructed 3d shape . finally , the intensity of the frontal image is texture-mapped onto the reconstructed 3d shape for realistic 3d modeling . unlike other 3d modeling methods , our proposed work is extremely fast ( within a few seconds ) and does not require any complex hardware settings or calibration .
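the alignment step of this two-image modeling method can be sketched in a few lines : the frontal view supplies ( x , y ) , the profile view supplies ( y , z ) , and a least-squares scale and offset on the shared y coordinate reconciles the two before assembling a sparse 3d shape . the landmark values below are hypothetical , and the orthographic , pre-aligned setup is an assumption made for brevity .

import numpy as np

# hypothetical corresponding landmarks : frontal view gives ( x , y ) ,
# profile view gives ( y , z ) for the same facial points
frontal_xy = np.array([[0.0, 0.0], [1.0, 0.2], [0.5, 1.0], [0.5, 1.8]])
profile_yz = np.array([[0.1, 0.0], [0.3, 0.4], [1.1, 0.9], [1.9, 0.1]])

# align the shared y coordinate with a 1-d least-squares scale + offset
A = np.column_stack([profile_yz[:, 0], np.ones(len(profile_yz))])
scale, offset = np.linalg.lstsq(A, frontal_xy[:, 1], rcond=None)[0]

# the same scale converts profile depth ( z ) into frontal-image units ;
# the offset only shifts y , so it is not applied to z
sparse_3d = np.column_stack([
    frontal_xy[:, 0],          # x from the frontal image
    frontal_xy[:, 1],          # y from the frontal image
    scale * profile_yz[:, 1],  # z from the profile image , rescaled
])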
we illustrate story_separator_special_tag this paper provides an in-depth analysis of face shape alignment for pose-insensitive face recognition . the dissimilarity between two face images can be modeled as the difference in intensity between these two images , obtained by warping these faces onto the same shape . in order to achieve this , we must first align both face images independently to obtain a sparse 2-d shape representation . we achieve this by using a combination of asms and aams ( casaams ) . we then exchange these two shapes and obtain new intensity ( texture ) faces based on these exchanged shapes . this allows us to align the two faces with increased pixel-level correspondence while simultaneously achieving a certain degree of pose correction . in order to account for large pose variation , it becomes necessary to model the underlying 3-d face structure for the synthesis of novel 2-d poses . however , in many real-world scenarios , only a single image of the subject is provided and acquisition of the 3-d model is not always feasible . to tackle this common real-world scenario , we propose a novel approach for modeling faces , called 3d generic elastic model ( story_separator_special_tag in this paper , we propose a novel method for generating a realistic 3d human face from a single 2d face image for the purpose of synthesizing new 2d face images at arbitrary poses using gender- and ethnicity-specific models . we employ the generic elastic model ( gem ) approach , which elastically deforms a generic 3d depth-map based on the sparse observations of an input face image in order to estimate the depth of the face image . particularly , we show that gender- and ethnicity-specific gems ( ge-gems ) can approximate the 3d shape of the input face image more accurately , achieving a better generalization of 3d face modeling and reconstruction compared to the original gem approach . we qualitatively validate our method using publicly available databases by showing each reconstructed 3d shape generated from a single image and new synthesized poses of the same person at arbitrary angles . for quantitative comparisons , we compare our synthesized results against 3d scanned data and also perform face recognition using synthesized images generated from a single enrollment frontal image . we obtain promising results for handling pose and expression changes based on the proposed method . story_separator_special_tag one of the key challenges for current face recognition techniques is how to handle pose variations between the probe and gallery face images . in this paper , we present a method for reconstructing the virtual frontal view from a given nonfrontal face image using markov random fields ( mrfs ) and an efficient variant of the belief propagation algorithm . in the proposed approach , the input face image is divided into a grid of overlapping patches , and a globally optimal set of local warps is estimated to synthesize the patches at the frontal view . a set of possible warps for each patch is obtained by aligning it with images from a training database of frontal faces . the alignments are performed efficiently in the fourier domain using an extension of the lucas-kanade algorithm that can handle illumination variations . the problem of finding the optimal warps is then formulated as a discrete labeling problem using an mrf . the reconstructed frontal face image can then be used with any face recognition technique .
the two main advantages of our method are that it requires neither manually selected facial landmarks nor head pose estimation . in story_separator_special_tag approaches for cross-pose face recognition can be split into 2d image-based and 3d model-based . many 2d based methods are reported with promising performance but can only work for the same poses as those in the training set . although 3d based methods can handle arbitrary poses , only a small number of approaches are available . extending a recent face reconstruction method using a single 3d reference model , this study focuses on using the reconstructed 3d face for recognition . the reconstructed 3d face allows the generation of multi-pose samples for recognition . the recognition performance varies with pose : the closer the pose is to frontal , the better the performance attained . several ways to improve the performance are attempted , including different numbers of fiducial points for alignment , multiple reference models considered in the reconstruction phase , and both frontal and profile poses available in the gallery . these attempts make this approach competitive with the state-of-the-art methods . story_separator_special_tag the 3d morphable model ( 3dmm ) is currently receiving considerable attention for human face analysis . most existing work focuses on fitting a 3dmm to high resolution images . however , in many applications , fitting a 3dmm to low-resolution images is also important . in this paper , we propose a resolution-aware 3dmm ( ra-3dmm ) , which consists of 3 different resolution 3dmms : high-resolution 3dmm ( hr-3dmm ) , medium-resolution 3dmm ( mr-3dmm ) and low-resolution 3dmm ( lr-3dmm ) . ra-3dmm can automatically select the best model to fit the input images of different resolutions . the multi-resolution model was evaluated in experiments conducted on pie and xm2vts databases . the experimental results verified that hr-3dmm achieves the best performance for input images of high resolution , and mr-3dmm and lr-3dmm worked best for medium and low resolution input images , respectively . a model selection strategy incorporated in the ra-3dmm is proposed based on these results . the ra-3dmm model has been applied to pose correction of face images ranging from high to low resolution . the face verification results obtained with the pose-corrected images show considerable performance improvement over story_separator_special_tag large pose and illumination variations are very challenging for face recognition . the 3d morphable model ( 3dmm ) approach is one of the effective methods for pose and illumination invariant face recognition . however , it is very difficult for the 3dmm to recover the illumination of the 2d input image because the ratio of the albedo and illumination contributions in a pixel intensity is ambiguous . unlike the traditional idea of separating the albedo and illumination contributions using a 3dmm , we propose a novel albedo based 3d morphable model ( ab3dmm ) , which removes the illumination component from the images using illumination normalisation in a preprocessing step . a comparative study of different illumination normalisation methods for this step is conducted on pie and multi-pie databases . the results show that the overall performance of our method exceeds that of state-of-the-art methods . story_separator_special_tag 3d face reconstruction from a single 2d image can be performed using a 3d morphable model ( 3dmm ) in an analysis-by-synthesis approach .
however , the reconstruction is an ill-posed problem . the recovery of the illumination characteristics of the 2d input image is particularly difficult because the proportion of the albedo and shading contributions in a pixel intensity is ambiguous . in this paper we propose the use of a facial symmetry constraint , which helps to identify the relative contributions of albedo and shading . the facial symmetry constraint is incorporated in a multi-feature optimisation framework , which realises the fitting process . by virtue of this constraint , better illumination parameters can be recovered , and as a result the estimated 3d face shape and surface texture are more accurate . the proposed method is validated on the pie face database . the experimental results show that the introduction of the facial symmetry constraint improves the performance of both face reconstruction and face recognition . story_separator_special_tag the labeled faces in the wild ( lfw ) database has spurred significant research in the problem of unconstrained face verification and other related problems . while careful usage guidelines were established in the original technical report describing the database , certain unforeseen issues have arisen . one of the major issues is how to make fair comparisons among algorithms that use additional outside data , i.e. , data that is not part of lfw , for training . another issue is the need for a clear definition of the unsupervised paradigm and the proper protocols for producing results under this paradigm . this technical report discusses these issues in detail and provides a new description of how we curate results and how we group algorithms together based on the details of the training data that they use . we encourage any authors who intend to publish their results on lfw to read both the original technical report and this one carefully . story_separator_special_tag most face databases have been created under controlled conditions to facilitate the study of specific parameters on the face recognition problem . these parameters include such variables as position , pose , lighting , background , camera quality , and gender . while there are many applications for face recognition technology in which one can control the parameters of image acquisition , there are also many applications in which the practitioner has little or no control over such parameters . this database , labeled faces in the wild , is provided as an aid in studying the latter , unconstrained , recognition problem . the database contains labeled face photographs spanning the range of conditions typically encountered in everyday life . the database exhibits natural variability in factors such as pose , lighting , race , accessories , occlusions , and background . in addition to describing the details of the database , we provide specific experimental paradigms for which the database is suitable . this is done in an effort to make research performed with the database as consistent and comparable as possible . we provide baseline results , including results of a state of the art face recognition story_separator_special_tag this paper addresses the problem of automatically tuning multiple kernel parameters for the kernel-based linear discriminant analysis ( lda ) method . the kernel approach has been proposed to solve face recognition problems under complex distributions by mapping the input space to a high-dimensional feature space .
some recognition algorithms such as the kernel principal components analysis , kernel fisher discriminant , generalized discriminant analysis , and kernel direct lda have been developed in the last five years . the experimental results show that the kernel-based method is a good and feasible approach to tackle pose and illumination variations . one of the crucial factors in the kernel approach is the selection of kernel parameters , which highly affects the generalization capability and stability of the kernel-based learning methods . in view of this , we propose an eigenvalue-stability-bounded margin maximization ( esbmm ) algorithm to automatically tune the multiple parameters of the gaussian radial basis function kernel for the kernel subspace lda ( kslda ) method , which is developed based on our previously developed subspace lda method . the esbmm algorithm improves the generalization capability of the kernel-based lda method by maximizing the margin maximization criterion while story_separator_special_tag recently , more and more approaches are emerging to solve the cross-view matching problem where reference samples and query samples are from different views . in this paper , inspired by graph embedding , we propose a unified framework for these cross-view methods called cross-view graph embedding . the proposed framework can not only reformulate most traditional cross-view methods ( e.g. , cca , pls and cdfe ) , but also extend the typical single-view algorithms ( e.g. , pca , lda and lpp ) to cross-view editions . furthermore , our general framework also facilitates the development of new cross-view methods . in this paper , we present a new algorithm named cross-view local discriminant analysis ( cloda ) under the proposed framework . different from previous cross-view methods only preserving inter-view discriminant information or the intra-view local structure , cloda preserves the local structure and the discriminant information of both intra-view and inter-view . extensive experiments are conducted to evaluate our algorithms on two cross-view face recognition problems : face recognition across poses and face recognition across resolutions . these real-world face recognition experiments demonstrate that our framework achieves impressive performance in the cross-view problems . story_separator_special_tag face recognition with varying pose , illumination and expression ( pie ) is a challenging problem . in this paper , we propose an analysis-by-synthesis framework for face recognition with varying pie . first , an efficient two-dimensional ( 2d ) -to-three-dimensional ( 3d ) integrated face reconstruction approach is introduced to reconstruct a personalized 3d face model from a single frontal face image with neutral expression and normal illumination . then , realistic virtual faces with different pie are synthesized based on the personalized 3d face to characterize the face subspace . finally , face recognition is conducted based on these representative virtual faces . compared with other related work , this framework has the following advantages : ( 1 ) only a single frontal face is required for face recognition , which avoids the burdensome enrollment work ; ( 2 ) the synthesized face samples provide the capability to conduct recognition under difficult conditions like complex pie ; and ( 3 ) compared with other 3d reconstruction approaches , our proposed 2d-to-3d integrated face reconstruction approach is fully automatic and more efficient .
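a crude stand-in for the virtual-face synthesis step just described is to rotate the reconstructed 3d vertices and project them orthographically ; real systems also render texture and handle occlusion , which are omitted here . the point cloud and yaw angles below are hypothetical .

import numpy as np

def synthesize_view(points_3d, yaw_deg):
    # rotate a 3d point cloud about the vertical ( y ) axis and project
    # orthographically , a minimal stand-in for rendering a virtual pose
    t = np.deg2rad(yaw_deg)
    r = np.array([[np.cos(t), 0.0, np.sin(t)],
                  [0.0, 1.0, 0.0],
                  [-np.sin(t), 0.0, np.cos(t)]])
    rotated = points_3d @ r.T
    return rotated[:, :2]  # drop depth : orthographic projection

# hypothetical reconstructed face vertices
face = np.random.default_rng(1).normal(size=(500, 3))
virtual_views = {yaw: synthesize_view(face, yaw) for yaw in (-30, 0, 30)}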
the extensive experimental results show that the synthesized virtual faces significantly improve the accuracy of face story_separator_special_tag the 3d morphable model ( 3dmm ) and the structure from motion ( sfm ) methods are widely used for 3d facial reconstruction from 2d single-view or multiple-view images . however , model-based methods suffer from disadvantages such as high computational costs and vulnerability to local minima and head pose variations . the sfm-based methods require multiple facial images in various poses . to overcome these disadvantages , we propose a single-view-based 3d facial reconstruction method that is person-specific and robust to pose variations . our proposed method combines the simplified 3dmm and the sfm methods . first , 2d initial frontal facial feature points ( ffps ) are estimated from a preliminary 3d facial image that is reconstructed by the simplified 3dmm . second , a bilateral symmetric facial image and its corresponding ffps are obtained from the original side-view image and corresponding ffps by using the mirroring technique . finally , a more accurate 3d facial shape is reconstructed by the sfm using the frontal , original , and bilateral symmetric ffps . we evaluated the proposed method using facial images in 35 different poses . the reconstructed facial images and the ground-truth 3d facial shapes obtained story_separator_special_tag face recognition has been studied extensively ; however , real-world face recognition still remains a challenging task . the demand for unconstrained practical face recognition is rising with the explosion of online multimedia such as social networks , and video surveillance footage where face analysis is of significant importance . in this paper , we approach face recognition in the context of graph theory . we recognize an unknown face using an external reference face graph ( rfg ) . an rfg is generated and recognition of a given face is achieved by comparing it to the faces in the constructed rfg . centrality measures are utilized to identify distinctive faces in the reference face graph . the proposed rfg-based face recognition algorithm is robust to changes in pose and is also alignment-free . the rfg recognition is used in conjunction with dct locality sensitive hashing for efficient retrieval to ensure scalability . experiments are conducted on several publicly available databases and the results show that the proposed approach outperforms the state-of-the-art methods without any preprocessing necessities such as face alignment . due to the richness in the reference set construction , the proposed method can also story_separator_special_tag identifying subjects with variations caused by poses is one of the most challenging tasks in face recognition , since the difference in appearances caused by poses may be even larger than the difference due to identity . inspired by the observation that pose variations change non-linearly but smoothly , we propose to learn pose-robust features by modeling the complex non-linear transform from the non-frontal face images to frontal ones through a deep network in a progressive way , termed stacked progressive auto-encoders ( spae ) . specifically , each shallow progressive auto-encoder of the stacked network is designed to map the face images at large poses to a virtual view at smaller ones , while keeping those images already at smaller poses unchanged .
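a toy version of this progressive mapping can be written with scikit-learn regressors standing in for the auto-encoder layers : each layer regresses larger-pose images onto the same identities at the next smaller pose while reproducing already-small poses . the pose bins , dimensions and synthetic data below are assumptions for illustration , not the spae architecture itself .

import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
# hypothetical flattened face images of the same 200 identities at
# 60 , 30 and 0 degrees yaw ( row i is the same person in each array )
x60, x30, x0 = (rng.normal(size=(200, 256)) for _ in range(3))

# layer 1 : map 60-degree faces to 30 degrees , keep 30 and 0 unchanged
layer1 = MLPRegressor(hidden_layer_sizes=(128,), max_iter=200)
layer1.fit(np.vstack([x60, x30, x0]), np.vstack([x30, x30, x0]))
h60, h30, h0 = np.split(layer1.predict(np.vstack([x60, x30, x0])), 3)

# layer 2 : map everything to the frontal ( 0 degree ) view
layer2 = MLPRegressor(hidden_layer_sizes=(128,), max_iter=200)
layer2.fit(np.vstack([h60, h30, h0]), np.vstack([x0, x0, x0]))
# the top layer 's outputs ( or hidden activations ) then act as
# pose-robust features for a downstream recognizer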
then , stacking multiple such shallow auto-encoders can convert non-frontal face images to frontal ones progressively , which means the pose variations are narrowed down to zero step by step . as a result , the outputs of the topmost hidden layers of the stacked network contain very small pose variations , which can be used as the pose-robust features for face recognition . an additional attractive property of the proposed method is that no story_separator_special_tag the same object can be observed at different viewpoints or even by different sensors , thus generating multiple distinct , even heterogeneous , samples . nowadays , more and more applications need to recognize objects from distinct views . some seminal works have been proposed for object recognition across two views and applied to multiple views in some inefficient pairwise manner . in this paper , we propose a multi-view discriminant analysis ( mvda ) method , which seeks a discriminant common space by jointly learning multiple view-specific linear transforms for robust object recognition from multiple views , in a non-pairwise manner . specifically , our mvda is formulated to jointly solve the multiple linear transforms by optimizing a generalized rayleigh quotient , i.e. , maximizing the between-class variations and minimizing the within-class variations of the low-dimensional embeddings from both intra-view and inter-view in the common space . by reformulating this problem as a ratio trace problem , an analytical solution can be achieved by using the generalized eigenvalue decomposition . the proposed method is applied to three multi-view face recognition problems : face recognition across poses , photo-sketch face recognition , and visual ( vis ) image vs. near infrared story_separator_special_tag the inverse compositional image alignment ( icia ) is known as an efficient matching method for 3d morphable models ( 3dmms ) . however , it requires a long computation time since the 3d face models consist of a large number of vertices . also , it requires recomputing the hessian matrix using the visible vertices every iteration . for fast and efficient matching , we propose the efficient and accurate hierarchical icia ( hicia ) matching method for 3dmms . the proposed matching method requires multi-resolution 3d face models and the gaussian image pyramid . the multi-resolution 3d face models are built by sub-sampling at the 2:1 sampling rate to construct the lower-resolution 3d face models . for more accurate matching , we use a two-stage model parameter update that only updates the rigid and the texture parameters and then updates all parameters after the initial convergence . we present several experimental results to prove that the proposed method shows better performance than that of the conventional icia matching method . story_separator_special_tag we address the problem of pose-invariant face recognition based on a single model image . to cope with novel view face images , a model of the effect of pose changes on face appearance must be available . face images at an arbitrary pose can be mapped to a reference pose by the model , yielding a view-invariant representation . such a model typically relies on dense correspondences of different view face images , which are difficult to establish in practice . errors in the correspondences seriously degrade the accuracy of any recognizer . therefore , we assume only the minimal possible set of correspondences , given by the corresponding eye positions .
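such a two-point eye correspondence fixes a similarity transform ( rotation , scale and translation ) , which is all that is needed to bring a face into a canonical frame . a sketch with scikit-image follows ; the eye coordinates , the crop size and the use of a bundled test image are hypothetical choices .

import numpy as np
from skimage import data
from skimage.transform import SimilarityTransform, warp

# hypothetical eye coordinates detected in the input image ( x , y )
src_eyes = np.array([[112.0, 110.0], [148.0, 108.0]])
# canonical eye positions in the aligned 128 x 128 crop
dst_eyes = np.array([[44.0, 56.0], [84.0, 56.0]])

# estimate the transform mapping output ( canonical ) coordinates to
# input coordinates , which is what warp expects as its inverse map
tform = SimilarityTransform()
tform.estimate(dst_eyes, src_eyes)

image = data.astronaut().mean(axis=2)  # any grayscale test image
aligned = warp(image, tform, output_shape=(128, 128))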
we investigate a number of approaches to pose-invariant face recognition exploiting such a minimal set of facial feature correspondences . four different methods are proposed as pose-invariant face recognition `` experts '' and combined in a single framework of expert fusion . each expert explicitly or implicitly realizes the three sequential functions jointly required to capture the nonlinear manifolds of face pose changes : representation , view transformation , and class discriminative feature extraction . within this structure , the experts are designed for diversity . we compare a story_separator_special_tag rapid progress in unconstrained face recognition has resulted in a saturation in recognition accuracy for current benchmark datasets . while important for early progress , a chief limitation in most benchmark datasets is the use of a commodity face detector to select face imagery . the implication of this strategy is restricted variations in face pose and other confounding factors . this paper introduces the iarpa janus benchmark a ( ijb-a ) , a publicly available media in the wild dataset containing 500 subjects with manually localized face images . key features of the ijb-a dataset are : ( i ) full pose variation , ( ii ) joint use for face recognition and face detection benchmarking , ( iii ) a mix of images and videos , ( iv ) wider geographic variation of subjects , ( v ) protocols supporting both open-set identification ( 1 : n search ) and verification ( 1 : 1 comparison ) , ( vi ) an optional protocol that allows modeling of gallery subjects , and ( vii ) ground truth eye and nose locations . the dataset has been developed using 1,501,267 crowd-sourced annotations . baseline accuracies for both face detection story_separator_special_tag we present two novel methods for face verification . our first method - attribute classifiers - uses binary classifiers trained to recognize the presence or absence of describable aspects of visual appearance ( e.g. , gender , race , and age ) . our second method - simile classifiers - removes the manual labeling required for attribute classification and instead learns the similarity of faces , or regions of faces , to specific reference people . neither method requires costly , often brittle , alignment between image pairs ; yet , both methods produce compact visual descriptions , and work on real-world images . furthermore , both the attribute and simile classifiers improve on the current state-of-the-art for the lfw data set , reducing the error rates compared to the current best by 23.92 % and 26.34 % , respectively , and 31.68 % when combined . for further testing across pose , illumination , and expression , we introduce a new data set - termed pubfig - of real-world images of public figures ( celebrities and politicians ) acquired from the internet . this data set is both larger ( 60,000 images ) and deeper ( 300 images per story_separator_special_tag deep learning allows computational models that are composed of multiple processing layers to learn representations of data with multiple levels of abstraction . these methods have dramatically improved the state-of-the-art in speech recognition , visual object recognition , object detection and many other domains such as drug discovery and genomics .
deep learning discovers intricate structure in large data sets by using the backpropagation algorithm to indicate how a machine should change its internal parameters that are used to compute the representation in each layer from the representation in the previous layer . deep convolutional nets have brought about breakthroughs in processing images , video , speech and audio , whereas recurrent nets have shone light on sequential data such as text and speech . story_separator_special_tag the paper proposes a novel , pose-invariant face recognition system based on a deformable , generic 3d face model , which is a composite of : ( 1 ) an edge model , ( 2 ) a color region model and ( 3 ) a wireframe model for jointly describing the shape and important features of the face . the first two submodels are used for image analysis and the third mainly for face synthesis . in order to match the model to face images in arbitrary poses , the 3d model can be projected onto different 2d viewplanes based on rotation , translation and scale parameters , thereby generating multiple face-image templates ( in different sizes and orientations ) . face shape variations among people are taken into account by the deformation parameters of the model . given an unknown face , its pose is estimated by model matching and the system synthesizes face images of known subjects in the same pose . the face is then classified as the subject whose synthesized image is most similar . the synthesized images are generated using a 3d face representation scheme which encodes the 3d shape and texture characteristics of the story_separator_special_tag the state-of-the-art 3d morphable model ( 3dmm ) is widely used for 3d face reconstruction based on a single image . however , this method has a high computational cost , and hence , a simplified 3d morphable model ( s3dmm ) was proposed as an alternative . unlike the original 3dmm , s3dmm uses only a sparse 3d facial shape , and therefore , it incurs a lower computational cost . however , this method is vulnerable to self-occlusion due to head rotation . therefore , we propose a solution to the self-occlusion problem in s3dmm-based 3d face reconstruction . this research is novel compared with previous works , in the following three respects . first , self-occlusion of the input face is detected automatically by estimating the head pose using a cylindrical head model . second , a 3d model fitting scheme is designed based on selected visible facial feature points , which facilitates 3d face reconstruction without any effect from self-occlusion . third , the reconstruction performance is enhanced by using the estimated pose as the initial pose parameter during the 3d model fitting process . the experimental results showed that the self-occlusion detection had high accuracy and our proposed method delivered story_separator_special_tag in this paper , we have selected some recent advanced correlation filters : minimum average correlation energy ( mace ) filter , unconstrained mace filter ( umace ) , phase-only unconstrained mace filter ( poumace ) , distance-classifier correlation filter ( dccf ) [ b.v.k . vijaya kumar , d. casasent , a. mahalanobis , distance-classifier correlation filters for multiclass target recognition . appl . opt . 35 ( 1996 ) 3127-3133 ] and minimax distance transform correlation filter ( mdtc ) and used them to test recognition performance in different situations involving variations in facial expression , illumination conditions and head pose .
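the classifiers compared here all score a test image by correlating it with a trained filter in the frequency domain . the sketch below shows only that shared correlation step on synthetic data with a naive template standing in for the actual mace/umace filter designs , plus a crude peak-sharpness score loosely modeled on the peak-to-sidelobe ratio .

import numpy as np

def correlate_fft(image, template):
    # frequency-domain cross-correlation , the core operation behind
    # correlation filter classifiers
    F = np.fft.fft2(image)
    H = np.fft.fft2(template, s=image.shape)  # zero-pad the template
    return np.real(np.fft.ifft2(F * np.conj(H)))

rng = np.random.default_rng(0)
face = rng.normal(size=(64, 64))
plane = correlate_fft(face, face[16:48, 16:48])

# a sharp , isolated peak indicates a match ; this simplified score does
# not exclude the peak neighborhood as a true psr computation would
psr_like = (plane.max() - plane.mean()) / plane.std()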
the paper introduces the first application of correlation filter classifiers to facial images subject to head pose variations . it also demonstrates that it is possible to obtain illumination invariance without using any training images for this purpose . a comparison of mdtc with traditional discriminant learning methods ( e.g. , kpca [ scholkopf , b. , smola , a. , muller , k.r. , nonlinear component analysis as a kernel eigenvalue problem . neural comput. , 10 ( 1998 ) 1299-1319 ] , ipca [ 16 ] , gda [ baudat , g. , anouar , story_separator_special_tag the variations of pose lead to significant performance decline in face recognition systems , which is a bottleneck in face recognition . a key problem is how to measure the similarity between two image vectors of unequal length that are viewed from different poses . in this paper , we propose a novel approach for pose robust face recognition , in which the similarity is measured by correlations in a media subspace between different poses on the patch level . the media subspace is constructed by canonical correlation analysis , such that the intra-individual correlations are maximized . based on the media subspace , two recognition approaches are developed . in the first , we transform the non-frontal face into a frontal one for recognition . in the second , we perform recognition in the media subspace with probabilistic modeling . the experimental results on the feret database demonstrate the efficiency of our approach . story_separator_special_tag the pose problem is one of the bottlenecks for face recognition . in this paper we propose a novel cross-pose face recognition method based on partial least squares ( pls ) . by training on the coupled face images of the same identities and across two different poses , pls maximizes the squares of the intra-individual correlations . therefore , it leads to improvements in recognizing faces across pose differences . the experimental results demonstrate the effectiveness of the proposed method . story_separator_special_tag face recognition methods , which usually represent face images using holistic or local facial features , rely heavily on alignment . their performances also suffer a severe degradation under variations in expressions or poses , especially when there is one gallery per subject only . with the easy access to high-resolution ( hr ) face images nowadays , some hr face databases have recently been developed . however , few studies have tackled the use of hr information for face recognition or verification . in this paper , we propose a pose-invariant face-verification method , which is robust to alignment errors , using the hr information based on pore-scale facial features . a new keypoint descriptor , namely , pore-principal component analysis ( pca ) -scale invariant feature transform ( ppcasift ) adapted from pca-sift is devised for the extraction of a compact set of distinctive pore-scale facial features . having matched the pore-scale features of two face regions , an effective robust-fitting scheme is proposed for the face-verification task . experiments show that , with one frontal-view gallery only per subject , our proposed method outperforms a number of standard verification methods , and can achieve excellent accuracy even the story_separator_special_tag pose variation remains one of the major factors adversely affecting the accuracy of real-world face recognition systems .
inspired by the recently proposed probabilistic elastic part ( pep ) model and the success of the deep hierarchical architecture in a number of visual tasks , we propose the hierarchical-pep model to approach the unconstrained face recognition problem . we apply the pep model hierarchically to decompose a face image into face parts at different levels of detail to build pose-invariant part-based face representations . following the hierarchy from the bottom up , we stack the face part representations at each layer , discriminatively reduce its dimensionality , and hence aggregate the face part representations layer-by-layer to build a compact and invariant face representation . the hierarchical-pep model exploits the fine-grained structures of the face parts at different levels of detail to address the pose variations . it is also guided by supervised information in constructing the face part/face representations . we empirically verify the hierarchical-pep model on two public benchmarks ( i.e. , the lfw and youtube faces ) and a face recognition challenge ( i.e. , the pasc grand challenge ) for image-based and video-based face verification . the state-of-the-art performance story_separator_special_tag pose variation remains a major challenge for real-world face recognition . we approach this problem through a probabilistic elastic matching method . we take a part-based representation by extracting local features ( e.g. , lbp or sift ) from densely sampled multi-scale image patches . by augmenting each feature with its location , a gaussian mixture model ( gmm ) is trained to capture the spatial-appearance distribution of all face images in the training corpus . each mixture component of the gmm is confined to be a spherical gaussian to balance the influence of the appearance and the location terms . each gaussian component builds a correspondence between a pair of features to be matched between two faces/face tracks . for face verification , we train an svm on the vector concatenating the difference vectors of all the feature pairs to decide if a pair of faces/face tracks is matched or not . we further propose a joint bayesian adaptation algorithm to adapt the universally trained gmm to better model the pose variations between the target pair of faces/face tracks , which consistently improves face verification accuracy . our experiments show that our method outperforms the state-of-the-art in story_separator_special_tag many face recognition algorithms use distance-based methods : feature vectors are extracted from each face and distances in feature space are compared to determine matches . in this paper , we argue for a fundamentally different approach . we consider each image as having been generated from several underlying causes , some of which are due to identity ( latent identity variables , or livs ) and some of which are not . in recognition , we evaluate the probability that two faces have the same underlying identity cause . we make these ideas concrete by developing a series of novel generative models which incorporate both within-individual and between-individual variation . we consider both the linear case , where signal and noise are represented by a subspace , and the nonlinear case , where an arbitrary face manifold can be described and noise is position-dependent . we also develop a tied version of the algorithm that allows explicit comparison of faces across quite different viewing conditions .
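the spatial-appearance gmm at the heart of the probabilistic elastic matching described above can be reproduced with scikit-learn 's spherical gaussian mixture on location-augmented descriptors ; the descriptor dimensionality , component count and random data below are all hypothetical .

import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# hypothetical local descriptors ( e.g. , lbp-like , 32-d ) from densely
# sampled patches , each augmented with its normalized ( x , y ) location
descriptors = rng.normal(size=(5000, 32))
locations = rng.uniform(size=(5000, 2))
features = np.hstack([descriptors, locations])

# spherical components balance the appearance and location terms
gmm = GaussianMixture(n_components=64, covariance_type="spherical")
gmm.fit(features)

# for one face ( here , the first 100 features ) each component claims its
# highest-responsibility feature , giving part correspondences that can
# be compared across two faces
resp = gmm.predict_proba(features[:100])
claimed = resp.argmax(axis=0)  # index of the feature claimed per component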
we demonstrate that our model produces results that are comparable to or better than the state of the art for both frontal face recognition and face recognition under varying pose . story_separator_special_tag fully automatic face recognition across pose ( frap ) is one of the most desirable techniques , but also one of the most challenging tasks , in the face recognition field . matching a pair of face images in different poses can be converted into matching their pixels corresponding to the same semantic facial point . following this idea , given two images g and p in different poses , we propose a novel method , named morphable displacement field ( mdf ) , to match g with p 's virtual view under g 's pose . by formulating mdf as a convex combination of a number of template displacement fields generated from a 3d face database , our model satisfies both global conformity and local consistency . we further present an approximate but effective solution of the proposed mdf model , named implicit morphable displacement field ( imdf ) , which synthesizes virtual view implicitly via an mdf by minimizing matching residual . this formulation not only avoids intractable optimization of the high-dimensional displacement field but also facilitates a constrained quadratic optimization . the proposed method can work well even when only 2 facial landmarks are labeled , which makes story_separator_special_tag due to the misalignment of image features , the performance of many conventional face recognition methods degrades considerably in across pose scenario . to address this problem , many image matching-based methods are proposed to estimate semantic correspondence between faces in different poses . in this paper , we aim to solve two critical problems in previous image matching-based correspondence learning methods : 1 ) failure to fully exploit face-specific structure information in correspondence estimation and 2 ) failure to learn personalized correspondence for each probe image . to this end , we first build a model , termed morphable displacement field ( mdf ) , to encode face-specific structure information of semantic correspondence from a set of real samples of correspondences calculated from 3d face models . then , we propose a maximal likelihood correspondence estimation ( mlce ) method to learn personalized correspondence based on a maximal likelihood frontal face assumption . after obtaining the semantic correspondence encoded in the learned displacement , we can synthesize virtual frontal images of the profile faces for subsequent recognition . using the linear discriminant analysis method with pixel-intensity features , state-of-the-art performance is achieved on three multipose benchmarks , i.e. story_separator_special_tag one approach to computer object recognition and modeling the brain 's ventral stream involves unsupervised learning of representations that are invariant to common transformations . however , applications of these ideas have usually been limited to 2d affine transformations , e.g. , translation and scaling , since they are easiest to solve via convolution . in accord with a recent theory of transformation-invariance [ 1 ] , we propose a model that , while capturing other common convolutional networks as special cases , can also be used with arbitrary identity-preserving transformations . the model 's wiring can be learned from videos of transforming objects or any other grouping of images into sets by their depicted object .
through a series of successively more complex empirical tests , we study the invariance/discriminability properties of this model with respect to different transformations . first , we empirically confirm theoretical predictions ( from [ 1 ] ) for the case of 2d affine transformations . next , we apply the model to non-affine transformations ; as expected , it performs well on face verification tasks requiring invariance to the relatively smooth transformations of 3d rotation-in-depth and changes in illumination direction . surprisingly , story_separator_special_tag numerous methods have been developed for holistic face recognition with impressive performance . however , few studies have tackled how to recognize an arbitrary patch of a face image . partial faces frequently appear in unconstrained scenarios , with images captured by surveillance cameras or handheld devices ( e.g. , mobile phones ) in particular . in this paper , we propose a general partial face recognition approach that does not require face alignment by eye coordinates or any other fiducial points . we develop an alignment-free face representation method based on multi-keypoint descriptors ( mkd ) , where the descriptor size of a face is determined by the actual content of the image . in this way , any probe face image , holistic or partial , can be sparsely represented by a large dictionary of gallery descriptors . a new keypoint descriptor called gabor ternary pattern ( gtp ) is also developed for robust and discriminative face recognition . experimental results are reported on four public domain face databases ( frgcv2.0 , ar , lfw , and pubfig ) under both the open-set identification and verification scenarios . comparisons with two leading commercial face recognition sdks ( pittpatt story_separator_special_tag in this paper , we present a methodology for precisely comparing the robustness of face recognition algorithms with respect to changes in pose angle and illumination angle . for this study , we have chosen four widely-used algorithms : two subspace analysis methods ( principal component analysis ( pca ) and linear discriminant analysis ( lda ) ) and two probabilistic learning methods ( hidden markov models ( hmm ) and bayesian intra-personal classifier ( bic ) ) . we compare the recognition robustness of these algorithms using a novel database ( facepix ) that captures face images with a wide range of pose angles and illumination angles . we propose a method for deriving a robustness measure for each of these algorithms , with respect to pose and illumination angle changes . the results of this comparison indicate that the subspace methods perform more robustly than the probabilistic learning methods in the presence of pose and illumination angle changes . story_separator_special_tag this paper presents a novel gabor-based kernel principal component analysis ( pca ) method by integrating the gabor wavelet representation of face images and the kernel pca method for face recognition . gabor wavelets first derive desirable facial features characterized by spatial frequency , spatial locality , and orientation selectivity to cope with the variations due to illumination and facial expression changes . the kernel pca method is then extended to include fractional power polynomial models for enhanced face recognition performance . a fractional power polynomial , however , does not necessarily define a kernel function , as it might not define a positive semidefinite gram matrix .
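a toy numpy sketch of kernel pca with such a fractional power polynomial kernel follows ; since the gram matrix may be indefinite , only components with positive eigenvalues are retained to keep the features real , as the text goes on to prescribe . the data and the exponent are hypothetical .

import numpy as np

def fractional_power_kernel(X, d=0.8):
    # k ( x , y ) = sign ( x . y ) * | x . y | ** d , a fractional power
    # polynomial ; not guaranteed positive semidefinite
    G = X @ X.T
    return np.sign(G) * np.abs(G) ** d

rng = np.random.default_rng(0)
X = rng.normal(size=(80, 100))
K = fractional_power_kernel(X)

n = len(K)
J = np.eye(n) - np.ones((n, n)) / n
Kc = J @ K @ J  # center the gram matrix in feature space
vals, vecs = np.linalg.eigh(Kc)

keep = vals > 1e-10  # retain only positive eigenvalues for real features
alphas = vecs[:, keep] / np.sqrt(vals[keep])
features = Kc @ alphas  # kernel pca projections of the training data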
note that the sigmoid kernels , one of the three classes of widely used kernel functions ( polynomial kernels , gaussian kernels , and sigmoid kernels ) , do not actually define a positive semidefinite gram matrix either . nevertheless , the sigmoid kernels have been successfully used in practice , such as in building support vector machines . in order to derive real kernel pca features , we apply only those kernel pca eigenvectors that are associated with positive eigenvalues . the feasibility of the gabor-based kernel pca method with fractional story_separator_special_tag in this paper , we study a classification problem in which sample labels are randomly corrupted . in this scenario , there is an unobservable sample with noise-free labels . however , before being observed , the true labels are independently flipped with a probability $\rho \in [0, 0.5)$ , and the random label noise can be class-conditional . here , we address two fundamental problems raised by this scenario . the first is how to best use the abundant surrogate loss functions designed for the traditional classification problem when there is label noise . we prove that any surrogate loss function can be used for classification with noisy labels by using importance reweighting , with consistency assurance that the label noise does not ultimately hinder the search for the optimal classifier of the noise-free sample . the other is the open problem of how to obtain the noise rate $\rho$ . we show that the rate is upper bounded by the conditional probability $p(\hat{y}|x)$ of the noisy sample . consequently , the rate can be estimated , because the upper bound can be easily reached story_separator_special_tag researchers have been working on human face recognition for decades . face recognition is hard due to different types of variations in face images , such as pose , illumination and expression , among which pose variation is the hardest one to deal with . to improve face recognition under pose variation , this paper presents a geometry-assisted probabilistic approach . we approximate a human head with a 3d ellipsoid model , so that any face image is a 2d projection of such a 3d ellipsoid at a certain pose . in this approach , both training and test images are back-projected to the surface of the 3d ellipsoid , according to their estimated poses , to form the texture maps . thus the recognition can be conducted by comparing the texture maps instead of the original images , as done in traditional face recognition . in addition , we represent the texture map as an array of local patches , which enables us to train a probabilistic model for comparing corresponding patches . by conducting experiments on the cmu pie database , we show that the proposed algorithm provides better performance than the existing algorithms . story_separator_special_tag this paper presents a method for extracting distinctive invariant features from images that can be used to perform reliable matching between different views of an object or scene . the features are invariant to image scale and rotation , and are shown to provide robust matching across a substantial range of affine distortion , change in 3d viewpoint , addition of noise , and change in illumination . the features are highly distinctive , in the sense that a single feature can be correctly matched with high probability against a large database of features from many images . this paper also describes an approach to using these features for object recognition .
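the matching stage of this pipeline , described next , is commonly reproduced with opencv 's sift implementation and lowe 's ratio test . the file names below are placeholders , and the 0.75 threshold is the conventional choice rather than a value taken from the paper .

import cv2

# hypothetical image pair ; any two views of the same object would do
img1 = cv2.imread("view1.png", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("view2.png", cv2.IMREAD_GRAYSCALE)

sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(img1, None)
kp2, des2 = sift.detectAndCompute(img2, None)

# nearest-neighbor matching with the ratio test to drop ambiguous matches
matcher = cv2.BFMatcher(cv2.NORM_L2)
matches = matcher.knnMatch(des1, des2, k=2)
good = [m for m, n in matches if m.distance < 0.75 * n.distance]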
the recognition proceeds by matching individual features to a database of features from known objects using a fast nearest-neighbor algorithm , followed by a hough transform to identify clusters belonging to a single object , and finally performing verification through a least-squares solution for consistent pose parameters . this approach to recognition can robustly identify objects among clutter and occlusion while achieving near real-time performance . story_separator_special_tag we present a method for recognizing objects ( faces ) on the basis of just one stored view , in spite of rotation in depth . the method is not based on the construction of a three-dimensional model for the object . our recognition results represent a significant improvement over a previous system developed in our laboratory . we achieve this with the help of a simple assumption about the transformation of local feature vectors with rotation in depth . story_separator_special_tag in this paper , a novel method for face recognition under pose and expression variations is proposed from only a single image in the gallery . a 3d probabilistic facial expression recognition generic elastic model is proposed to reconstruct a 3d model from a real-world human face using only a single 2d frontal image with/without facial expressions . then , a feature library matrix ( flm ) is generated for each subject in the gallery from all face poses by rotating the 3d reconstructed models and extracting features in the rotated face pose . therefore , each flm is subsequently rendered for each subject in the gallery based on triplet angles of face poses . in addition , before matching the flm , an initial estimate of triplet angles is obtained from the face pose in probe images using an automatic head pose estimation approach . then , an array of the flm is selected for each subject based on the estimated triplet angles . finally , the selected arrays from flms are compared with extracted features from the probe image by iterative scoring classification using the support vector machine . convincing results are acquired to handle pose and expression changes story_separator_special_tag we propose an automatic pose invariant approach for face recognition at a distance ( frad ) . since face alignment is a crucial step in face recognition systems , we propose a novel facial features extraction model , which guides extended asm to accurately align the face . our main concern is to recognize human faces in uncontrolled environments at far distances , accurately and fast . to achieve this goal , we perform an offline stage where 3d faces are reconstructed from stereo pair images . these 3d shapes are used to synthesize virtual 2d views in novel poses . to obtain good synthesized images from the 3d shape , we propose an accurate 3d reconstruction framework , which carefully handles illumination variance , occlusion , and the disparity discontinuity . the online phase is fast : a 2d image with unknown pose is matched with the closest virtual images in sampled poses . experiments show that our approach outperforms the state-of-the-art approaches . story_separator_special_tag this paper proposes an automatic pose-invariant face recognition system . in our approach , we consider the texture information around the facial features to compute the similarity measure between the probe and gallery images . the weight of each facial feature is dynamically estimated based on its robustness to the pose of the captured image .
an approach to extract the 9 facial features used to initialize the active shape model is proposed . the approach is not dependent on the texture around the facial feature only but incorporates the information obtained about the facial feature relations . our face recognition system is tested on common datasets for pose evaluation , cmu-pie and feret . the results show that it outperforms state-of-the-art automatic face recognition systems . story_separator_special_tag face recognition systems have to deal with the problem that not all variations of all persons can be enrolled . rather , the variations of most persons must be modeled . explicit modeling of different poses is awkward and time-consuming . here , we present a subsystem that builds a model of pose variation by keeping a model database of persons in both poses , in addition to the gallery of clients known in only one pose . an identification or verification decision for probe images is made on the basis of the rank order of similarities with the model database . identification achieves up to 100 % recognition rate on 300 pairs of testing images with 45 degrees pose variation within the cas-peal database ; the equal error rate for verification reaches 0.5 % . story_separator_special_tag one of the major challenges encountered by face recognition lies in the difficulty of handling arbitrary pose variations . while different approaches have been developed for face recognition across pose variations , many methods either require manual landmark annotations or assume the face poses to be known . these constraints prevent many face recognition systems from working automatically . in this paper , we propose a fully automatic method for multiview face recognition . we first build a 3d model from each frontal target face image , which is used to generate synthetic target face images . the pose of a query face image is also estimated using a multi-view face detector so that the synthetic target face images can be generated to resemble the pose variation of a query face image . procrustes analysis is then applied to align the synthetic target images and the query image , and block-based mlbp features are extracted for face matching . experimental results on two public-domain databases ( color feret and pubfig ) , and a mobile face database collected using mobile phones show that the proposed approach outperforms two state-of-the-art face matchers ( facevacs and mkd-src ) in automatic multi-view story_separator_special_tag heterogeneous face recognition ( hfr ) refers to matching face imagery across different domains . it has received much interest from the research community as a result of its profound implications in law enforcement . a wide variety of new invariant features , cross-modality matching models and heterogeneous datasets have been established in recent years . this survey provides a comprehensive review of established techniques and recent developments in hfr . moreover , we offer a detailed account of datasets and benchmarks commonly used for evaluation . we finish by assessing the state of the field and discussing promising directions for future research .
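the procrustes alignment step used in the multi-view pipeline above is available directly in scipy ; the sketch below aligns a synthetically rotated , scaled and shifted landmark set and checks that the residual disparity is small . all coordinates are hypothetical .

import numpy as np
from scipy.spatial import procrustes

rng = np.random.default_rng(0)
# hypothetical 2d landmark sets for a synthetic gallery view and a probe
gallery = rng.normal(size=(68, 2))
theta = np.deg2rad(20.0)
rot = np.array([[np.cos(theta), -np.sin(theta)],
                [np.sin(theta), np.cos(theta)]])
probe = 1.3 * gallery @ rot.T + np.array([4.0, -2.0]) \
    + 0.01 * rng.normal(size=(68, 2))

m1, m2, disparity = procrustes(gallery, probe)
# disparity is near zero , confirming the shapes match up to a
# similarity transform ( translation , scale and rotation )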
story_separator_special_tag soft biometric traits embedded in a face ( e.g. , gender and facial marks ) are ancillary information and are not fully distinctive by themselves in face-recognition tasks . however , this information can be explicitly combined with face matching score to improve the overall face-recognition accuracy . moreover , in certain application domains , e.g. , visual surveillance , where a face image is occluded or is captured in off-frontal pose , soft biometric traits can provide even more valuable information for face matching or retrieval . facial marks can also be useful to differentiate identical twins whose global facial appearances are very similar . the similarities found from soft biometrics can also be useful as a source of evidence in courts of law because they are more descriptive than the numerical matching scores generated by a traditional face matcher . we propose to utilize demographic information ( e.g. , gender and ethnicity ) and facial marks ( e.g. , scars , moles , and freckles ) for improving face image matching and retrieval performance . an automatic facial mark detection method has been developed that uses ( 1 ) the active appearance model for locating primary facial features story_separator_special_tag in this paper we revisit the process of constructing a high resolution 3d morphable model of face shape variation . we demonstrate how the statistical tools of thin-plate splines and procrustes analysis can be used to construct a morphable model that is both more efficient and generalises to novel face surfaces more accurately than previous models . we also reformulate the probabilistic prior that the model provides on the distribution of parameter vector lengths . this distribution is determined solely by the number of model dimensions and can be used as a regularisation constraint in fitting the model to data without the need to empirically choose a parameter controlling the trade off between plausibility and quality of fit . as an example application of this improved model , we show how it may be fitted to a sparse set of 2d feature points ( approximately 100 ) . this provides a rapid means to estimate high resolution 3d face shape for a face in any pose given only a single face image . we present experimental results using ground truth data and hence provide absolute reconstruction errors . on average , the per vertex error of the reconstructed faces is story_separator_special_tag generative 3d face models are a powerful tool in computer vision . they provide pose and illumination invariance by modeling the space of 3d faces and the imaging process . the power of these models comes at the cost of an expensive and tedious construction process , which has led the community to focus on more easily constructed but less powerful models . with this paper we publish a generative 3d shape and texture model , the basel face model ( bfm ) , and demonstrate its application to several face recognition tasks . we improve on previous models by offering higher shape and texture accuracy due to a better scanning device and less correspondence artifacts due to an improved registration algorithm .
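under an orthographic camera and a gaussian prior on the coefficients, fitting a linear morphable model to a sparse set of 2d feature points, as described above, reduces to regularized linear least squares; a sketch under those assumptions (the interface and the ridge regularizer are hypothetical):

```python
import numpy as np

def fit_shape_to_landmarks(mean_shape, basis, landmark_idx, landmarks_2d, lam=1e-3):
    """Least-squares fit of a linear 3d shape model to sparse 2d landmarks.

    mean_shape:   (3n,) mean face shape, vertices stacked as (x, y, z) triples.
    basis:        (3n, m) shape basis (e.g. pca modes of a morphable model).
    landmark_idx: indices of the k model vertices with 2d annotations.
    landmarks_2d: (k, 2) observed landmark positions (an aligned orthographic
                  camera is assumed here for simplicity).
    lam:          ridge regularizer standing in for the model's gaussian prior.
    """
    rows = []
    for v in landmark_idx:
        rows += [3 * v, 3 * v + 1]           # keep the x and y rows of each vertex
    A = basis[rows]                           # (2k, m)
    b = landmarks_2d.reshape(-1) - mean_shape[rows]
    # Regularized normal equations: (A^T A + lam I) c = A^T b.
    coeffs = np.linalg.solve(A.T @ A + lam * np.eye(A.shape[1]), A.T @ b)
    return mean_shape + basis @ coeffs        # dense (3n,) reconstruction
```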
the same 3d face model can be fit to 2d or 3d images acquired under different situations and with different sensors using an analysis by synthesis method . the resulting model parameters separate pose , lighting , imaging and identity parameters , which facilitates invariant face recognition across sensors and data sets by comparing only the identity parameters . we hope that the availability of this registered face model will spur research in generative models . together story_separator_special_tag we describe experiments with eigenfaces for recognition and interactive search in a large-scale face database . accurate visual recognition is demonstrated using a database of o ( 10^3 ) faces . the problem of recognition under general viewing orientation is also examined . a view-based multiple-observer eigenspace technique is proposed for use in face recognition under variable pose . in addition , a modular eigenspace description technique is used which incorporates salient features such as the eyes , nose and mouth , in an eigenfeature layer . this modular representation yields higher recognition rates as well as a more robust framework for face recognition . an automatic feature extraction technique using feature eigentemplates is also demonstrated . story_separator_special_tag two of the most critical requirements in support of producing reliable face-recognition systems are a large database of facial images and a testing procedure to evaluate systems . the face recognition technology ( feret ) program has addressed both issues through the feret database of facial images and the establishment of the feret tests . to date , 14,126 images from 1,199 individuals are included in the feret database , which is divided into development and sequestered portions of the database . in september 1996 , the feret program administered the third in a series of feret face-recognition tests . the primary objectives of the third test were to 1 ) assess the state of the art , 2 ) identify future areas of research , and 3 ) measure algorithm performance . story_separator_special_tag classical face recognition techniques have been successful at operating under well-controlled conditions ; however , they have difficulty in robustly performing recognition in uncontrolled real-world scenarios where variations in pose , illumination , and expression are encountered . in this paper , we propose a new method for real-world unconstrained pose-invariant face recognition . we first construct a 3d model for each subject in our database using only a single 2d image by applying the 3d generic elastic model ( 3d gem ) approach . these 3d models comprise an intermediate gallery database from which novel 2d pose views are synthesized for matching . before matching , an initial estimate of the pose of the test query is obtained using a linear regression approach based on automatic facial landmark annotation . each 3d model is subsequently rendered at different poses within a limited search space about the estimated pose , and the resulting images are matched against the test query . finally , we compute the distances between the synthesized images and test query by using a simple normalized correlation matcher to show the effectiveness of our pose synthesis method to real-world data . we present convincing results on challenging story_separator_special_tag a major goal for face recognition is to identify faces where the pose of the probe is different from the stored face .
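a compact sketch of the eigenface machinery referenced in the preceding abstracts; a view-based variant would simply train one such eigenspace per pose bin:

```python
import numpy as np

def train_eigenfaces(faces, n_components=50):
    """faces: (n, p) matrix of vectorized training images."""
    mean = faces.mean(axis=0)
    # Principal components via SVD of the centered data; rows of vt are
    # the eigenfaces.
    _, _, vt = np.linalg.svd(faces - mean, full_matrices=False)
    eigenfaces = vt[:n_components]            # (k, p)
    weights = (faces - mean) @ eigenfaces.T   # gallery projections, (n, k)
    return mean, eigenfaces, weights

def recognize(probe, mean, eigenfaces, weights, labels):
    """Nearest neighbor in the eigenface subspace."""
    w = (probe - mean) @ eigenfaces.T
    return labels[np.argmin(np.linalg.norm(weights - w, axis=1))]
```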
typical feature vectors vary more with pose than with identity , leading to very poor recognition performance . we propose a non-linear many-to-one mapping from a conventional feature space to a new space constructed so that each individual has a unique feature vector regardless of pose . training data is used to implicitly parameterize the position of the multi-dimensional face manifold by pose . we introduce a co-ordinate transform , which depends on the position on the manifold . this transform is chosen so that different poses of the same face are mapped to the same feature vector . the same approach is applied to illumination changes . we investigate different methods for creating features , which are invariant to both pose and illumination . we provide a metric to assess the discriminability of the resulting features . our technique increases the discriminability of faces under unknown pose and lighting compared to contemporary methods . story_separator_special_tag face recognition algorithms perform very unreliably when the pose of the probe face is different from the gallery face : typical feature vectors vary more with pose than with identity . we propose a generative model that creates a one-to-many mapping from an idealized `` identity '' space to the observed data space . in identity space , the representation for each individual does not vary with pose . we model the measured feature vector as being generated by a pose-contingent linear transformation of the identity variable in the presence of gaussian noise . we term this model `` tied '' factor analysis . the choice of linear transformation ( factors ) depends on the pose , but the loadings are constant ( tied ) for a given individual . we use the em algorithm to estimate the linear transformations and the noise parameters from training data . we propose a probabilistic distance metric that allows a full posterior over possible matches to be established . we introduce a novel feature extraction process and investigate recognition performance by using the feret , xm2vts , and pie databases . recognition performance compares favorably with contemporary approaches . story_separator_special_tag this paper presents a fully automatic system that recovers 3d face models from sequences of facial images . unlike most 3d morphable model ( 3dmm ) fitting algorithms that simultaneously reconstruct the shape and texture from a single input image , our approach builds on a more efficient least squares method to directly estimate the 3d shape from sparse 2d landmarks , which are localized by face alignment algorithms . the inconsistency between self-occluded 2d and 3d feature positions caused by head pose is addressed . a novel framework to enhance robustness across multiple frames selected based on their 2d landmarks combined with individual self-occlusion handling is proposed . evaluation on ground-truth 3d scans shows superior shape and pose estimation over previous work . the whole system is also evaluated on an in-the-wild video dataset [ 12 ] and delivers personalized and realistic 3d face shape and texture models under less constrained conditions , which only takes seconds to process each video clip . story_separator_special_tag we discuss the problem of pose invariant face recognition using a markov random field ( mrf ) model . mrf image to image matching has been shown to be very promising in earlier studies ( arashloo and kittler , 2011 ) [ 4 ] .
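a toy illustration of the tied factor analysis model described above: one loading matrix per pose, a single identity vector tied across poses, and identity inference by regularized least squares (the paper estimates the loadings with em; random matrices stand in for them here):

```python
import numpy as np

rng = np.random.default_rng(0)
d, q = 100, 10                      # observation dim, identity dim

# One loading matrix and offset per pose; the identity vector h is "tied".
poses = {p: (rng.normal(size=(d, q)), rng.normal(size=d))
         for p in ("frontal", "profile")}

def generate(h, pose, noise=0.1):
    W, mu = poses[pose]
    return W @ h + mu + noise * rng.normal(size=d)

def infer_identity(x, pose, lam=1e-2):
    """MAP estimate of the tied identity variable under a unit gaussian prior."""
    W, mu = poses[pose]
    return np.linalg.solve(W.T @ W + lam * np.eye(q), W.T @ (x - mu))

h = rng.normal(size=q)
x_front, x_prof = generate(h, "frontal"), generate(h, "profile")
# Identity estimates from different poses should land near the same point,
# so matching can be done by comparing them directly.
print(np.linalg.norm(infer_identity(x_front, "frontal")
                     - infer_identity(x_prof, "profile")))
```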
its demanding computational complexity has been addressed in arashloo et al . ( 2011 ) [ 6 ] by means of multiresolution mrfs linked by the super coupling transform advocated by petrou et al . ( 1998 ) [ 37 , 11 ] . in this paper , we benefit from the daisy descriptor for face image representation in image matching . most importantly , we design an innovative gpu implementation of the proposed multiresolution mrf matching process . the significant speed-up achieved ( a factor of 25 ) has multiple benefits : it makes the mrf approach a practical proposition . it facilitates extensive empirical optimisation and evaluation studies . the latter , conducted on benchmarking databases including the challenging labelled faces in the wild ( lfw ) database , show the outstanding potential of the proposed method , which consistently achieves state-of-the-art performance in standard benchmarking tests . the experimental studies also show story_separator_special_tag we present a novel algorithm aiming to estimate the 3d shape , the texture of a human face , along with the 3d pose and the light direction from a single photograph by recovering the parameters of a 3d morphable model . generally , the algorithms tackling the problem of 3d shape estimation from image data use only the pixel intensities as input to drive the estimation process . this was previously achieved either with a simple model , such as the lambertian reflectance model , leading to a linear fitting algorithm , or with a more precise model , minimizing a non-convex cost function with many local minima . one way to reduce the local minima problem is to use a stochastic optimization algorithm . however , the convergence properties ( such as the radius of convergence ) of such algorithms are limited . here , in addition to the pixel intensities , we use various image features such as the edges or the location of the specular highlights . the 3d shape , texture and imaging parameters are then estimated by maximizing the posterior of the parameters given these image features . story_separator_special_tag canonical correlation analysis ( cca ) is a method for finding linear relations between two multidimensional random variables . this paper presents a generalization of the method to more than two variables . the approach is highly scalable , since it scales linearly with respect to the number of training examples and number of views ( standard cca implementations yield cubic complexity ) . the method is also extended to handle nonlinear relations via the kernel trick ( this increases the complexity to quadratic ) . the scalability is demonstrated on a large scale cross-lingual information retrieval task . story_separator_special_tag despite significant recent advances in the field of face recognition , implementing face verification and recognition efficiently at scale presents serious challenges to current approaches . in this paper we present a system , called facenet , that directly learns a mapping from face images to a compact euclidean space where distances directly correspond to a measure of face similarity . once this space has been produced , tasks such as face recognition , verification and clustering can be easily implemented using standard techniques with facenet embeddings as feature vectors . our method uses a deep convolutional network trained to directly optimize the embedding itself , rather than an intermediate bottleneck layer as in previous deep learning approaches .
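for reference, the classical two-view cca that the scalable method above generalizes can be written as a generalized symmetric eigenvalue problem; note that this straightforward formulation has exactly the cubic cost the abstract is trying to avoid:

```python
import numpy as np
from scipy.linalg import eigh

def cca(X, Y, k=2, reg=1e-4):
    """Linear CCA between views X (n, p) and Y (n, q) via a generalized
    symmetric eigenvalue problem A w = rho B w."""
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    n, p = X.shape
    q = Y.shape[1]
    Cxx = X.T @ X / n + reg * np.eye(p)   # regularized view covariances
    Cyy = Y.T @ Y / n + reg * np.eye(q)
    Cxy = X.T @ Y / n
    A = np.zeros((p + q, p + q))
    A[:p, p:], A[p:, :p] = Cxy, Cxy.T
    B = np.zeros_like(A)
    B[:p, :p], B[p:, p:] = Cxx, Cyy
    # Top eigenvectors give the projection directions for each view.
    vals, vecs = eigh(A, B)
    top = vecs[:, np.argsort(vals)[::-1][:k]]
    return top[:p], top[p:]               # Wx (p, k), Wy (q, k)
```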
to train , we use triplets of roughly aligned matching / non-matching face patches generated using a novel online triplet mining method . the benefit of our approach is much greater representational efficiency : we achieve state-of-the-art face recognition performance using only 128-bytes per face . on the widely used labeled faces in the wild ( lfw ) dataset , our system achieves a new record accuracy of 99.63 % . on youtube faces db it achieves 95.12 % . our story_separator_special_tag face recognition approaches have traditionally focused on direct comparisons between aligned images , e.g . using pixel values or local image features . such comparisons become prohibitively difficult when comparing faces across extreme differences in pose , illumination and expression . the goal of this work is to develop a face-similarity measure that is largely invariant to these differences . we propose a novel data-driven method based on the insight that comparing images of faces is most meaningful when they are in comparable imaging conditions . to this end we describe an image of a face by an ordered list of identities from a library . the order of the list is determined by the similarity of the library images to the probe image . the lists act as a signature for each face image : similarity between face images is determined via the similarity of the signatures . here the cmu multi-pie database , which includes images of 337 individuals in more than 2000 pose , lighting and illumination combinations , serves as the library . we show improved performance over state-of-the-art face-similarity measures based on local features , such as fplbp , especially across story_separator_special_tag we propose a novel pose-invariant face recognition approach which we call discriminant multiple coupled latent subspace framework . it finds the sets of projection directions for different poses such that the projected images of the same subject in different poses are maximally correlated in the latent space . discriminant analysis with artificially simulated pose errors in the latent space makes it robust to small pose errors caused by a subject 's incorrect pose estimation . we do a comparative analysis of three popular latent space learning approaches : partial least squares ( pls ) , bilinear model ( blm ) and canonical correlation analysis ( cca ) in the proposed coupled latent subspace framework . we experimentally demonstrate that using more than two poses simultaneously with cca results in better performance . we report state-of-the-art results for pose-invariant face recognition on cmu pie and feret and comparable results on multipie when using only four fiducial points for alignment and intensity features . story_separator_special_tag this paper presents a novel way to perform multi-modal face recognition . we use partial least squares ( pls ) to linearly map images in different modalities to a common linear subspace in which they are highly correlated . pls has been previously used effectively for feature selection in face recognition . we show both theoretically and experimentally that pls can be used effectively across modalities . we also formulate a generic intermediate subspace comparison framework for multi-modal recognition . surprisingly , we achieve high performance using only pixel intensities as features .
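the triplet objective described in the facenet abstract above, in minimal numpy form; the 0.2 margin is a commonly used value and an assumption here, and in practice the triplets would come from online (semi-)hard mining rather than random pairing:

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=0.2):
    """FaceNet-style triplet loss on L2-normalized embeddings.

    Each argument is an (n, d) batch of embeddings; the margin separates
    positive pairs from negative pairs in the embedding space.
    """
    pos_dist = np.sum((anchor - positive) ** 2, axis=1)
    neg_dist = np.sum((anchor - negative) ** 2, axis=1)
    # Hinge: only triplets violating the margin contribute to the loss.
    return np.maximum(pos_dist - neg_dist + margin, 0.0).mean()
```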
we experimentally demonstrate the highest published recognition rates on the pose variations in the pie data set , and also show that pls can be used to compare sketches to photos , and to compare images taken at different resolutions . story_separator_special_tag this paper presents a general multi-view feature extraction approach that we call generalized multiview analysis or gma . gma has all the desirable properties required for cross-view classification and retrieval : it is supervised , it allows generalization to unseen classes , it is multi-view and kernelizable , it affords an efficient eigenvalue-based solution and is applicable to any domain . gma exploits the fact that most popular supervised and unsupervised feature extraction techniques are the solution of a special form of a quadratically constrained quadratic program ( qcqp ) , which can be solved efficiently as a generalized eigenvalue problem . gma solves a joint , relaxed qcqp over different feature spaces to obtain a single ( non ) linear subspace . intuitively , gma is a supervised extension of canonical correlation analysis ( cca ) , which is useful for cross-view classification and retrieval . the proposed approach is general and has the potential to replace cca whenever classification or retrieval is the purpose and label information is available . we outperform previous approaches for text-image retrieval on pascal and wiki text-image data . we report state-of-the-art results for pose and lighting invariant face recognition on the story_separator_special_tag in the fall of 2000 , we collected a database of more than 40,000 facial images of 68 people . using the carnegie mellon university 3d room , we imaged each person across 13 different poses , under 43 different illumination conditions , and with four different expressions . we call this the cmu pose , illumination , and expression ( pie ) database . we describe the imaging hardware , the collection procedure , the organization of the images , several possible uses , and how to obtain the database . story_separator_special_tag several recent papers on automatic face verification have significantly raised the performance bar by developing novel , specialised representations that outperform standard features such as sift for this problem . this paper makes two contributions : first , and somewhat surprisingly , we show that fisher vectors on densely sampled sift features , i.e . an off-the-shelf object recognition representation , are capable of achieving state-of-the-art face verification performance on the challenging labeled faces in the wild benchmark ; second , since fisher vectors are very high dimensional , we show that a compact descriptor can be learnt from them using discriminative metric learning . this compact descriptor has a better recognition accuracy and is very well suited to large scale identification tasks . story_separator_special_tag mosaicing entails the consolidation of information represented by multiple images through the application of a registration and blending procedure . we describe a face mosaicing scheme that generates a composite face image during enrollment based on the evidence provided by frontal and semi-profile face images of an individual . face mosaicing obviates the need to store multiple face templates representing multiple poses of a user 's face image .
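a usage sketch of the pls idea from the multi-modal abstract above, using scikit-learn's pls implementation; the random matrices are stand-ins for row-aligned image pairs of the same subjects in two poses or modalities:

```python
import numpy as np
from sklearn.cross_decomposition import PLSCanonical

# Hypothetical gallery: frontal images in X, profile images of the same
# subjects (row-aligned) in Y, both as vectorized pixel intensities.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 400))
Y = rng.normal(size=(200, 400))

pls = PLSCanonical(n_components=20)
pls.fit(X, Y)                       # learn coupled projections for the two views

# Project a probe from each modality into the shared latent space and
# match there by cosine similarity.
xs, ys = pls.transform(X[:1], Y[:1])
cos = float((xs * ys).sum() / (np.linalg.norm(xs) * np.linalg.norm(ys)))
```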
in the proposed scheme , the side profile images are aligned with the frontal image using a hierarchical registration algorithm that exploits neighborhood properties to determine the transformation relating the two images . multiresolution splining is then used to blend the side profiles with the frontal image , thereby generating a composite face image of the user . a texture-based face recognition technique that is a slightly modified version of the c2 algorithm proposed by serre et al . is used to compare a probe face image with the gallery face mosaic . experiments conducted on three different databases indicate that face mosaicing , as described in this paper , offers significant benefits by accounting for the pose variations that are commonly observed in face images . story_separator_special_tag this paper designs a high-performance deep convolutional network ( deepid2+ ) for face recognition . it is learned with the identification-verification supervisory signal . by increasing the dimension of hidden representations and adding supervision to early convolutional layers , deepid2+ achieves a new state-of-the-art on the lfw and youtube faces benchmarks . through empirical studies , we have discovered three properties of its deep neural activations critical for the high performance : sparsity , selectiveness and robustness . ( 1 ) it is observed that neural activations are moderately sparse . moderate sparsity maximizes the discriminative power of the deep net as well as the distance between images . it is surprising that deepid2+ can still achieve high recognition accuracy even after the neural responses are binarized . ( 2 ) its neurons in higher layers are highly selective to identities and identity-related attributes . we can identify different subsets of neurons which are either constantly excited or inhibited when different identities or attributes are present . although deepid2+ is not taught to distinguish attributes during training , it has implicitly learned such high-level concepts . ( 3 ) it is much more robust to occlusions , although occlusion patterns are not story_separator_special_tag in modern face recognition , the conventional pipeline consists of four stages : detect => align => represent => classify . we revisit both the alignment step and the representation step by employing explicit 3d face modeling in order to apply a piecewise affine transformation , and derive a face representation from a nine-layer deep neural network . this deep network involves more than 120 million parameters using several locally connected layers without weight sharing , rather than the standard convolutional layers . thus we trained it on the largest facial dataset to date , an identity-labeled dataset of four million facial images belonging to more than 4,000 identities . the learned representations coupling the accurate model-based alignment with the large facial database generalize remarkably well to faces in unconstrained environments , even with a simple classifier . our method reaches an accuracy of 97.35 % on the labeled faces in the wild ( lfw ) dataset , reducing the error of the current state of the art by more than 27 % , closely approaching human-level performance . story_separator_special_tag scaling machine learning methods to very large datasets has attracted considerable attention in recent years , thanks to easy access to ubiquitous sensing and data from the web .
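the multiresolution splining step of the mosaicing scheme above is essentially burt-adelson laplacian pyramid blending; a sketch assuming grayscale float32 images of power-of-two size that have already been registered:

```python
import cv2
import numpy as np

def pyramid_blend(frontal, profile, mask, levels=4):
    """Multiresolution splining of two registered grayscale images.

    frontal, profile: float32 images of identical power-of-two size.
    mask: float32 weights in [0, 1] selecting the frontal image.
    """
    gf, gp, gm = [frontal], [profile], [mask]
    for _ in range(levels):                 # gaussian pyramids of both images
        gf.append(cv2.pyrDown(gf[-1]))      # and of the blending mask
        gp.append(cv2.pyrDown(gp[-1]))
        gm.append(cv2.pyrDown(gm[-1]))
    blended = gf[-1] * gm[-1] + gp[-1] * (1 - gm[-1])   # blend coarsest level
    for i in range(levels - 1, -1, -1):
        # Laplacian band = gaussian level minus upsampled next level.
        lf = gf[i] - cv2.pyrUp(gf[i + 1])
        lp = gp[i] - cv2.pyrUp(gp[i + 1])
        # Blend each band with the mask at the matching resolution.
        blended = cv2.pyrUp(blended) + lf * gm[i] + lp * (1 - gm[i])
    return blended
```

blending band by band hides the seam at every spatial frequency, which is why it works far better than a single alpha blend of the registered views.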
we study face recognition and show that three distinct properties have surprising effects on the transferability of deep convolutional networks ( cnns ) : ( 1 ) the bottleneck of the network serves as an important transfer learning regularizer , and ( 2 ) in contrast to the common wisdom , performance saturation may exist in cnns ( as the number of training samples grows ) ; we propose a solution for alleviating this by replacing the naive random subsampling of the training set with a bootstrapping process . moreover , ( 3 ) we find a link between the representation norm and the ability to discriminate in a target domain , which sheds light on how such networks represent faces . based on these discoveries , we are able to improve face recognition accuracy on the widely used lfw benchmark , both in the verification ( 1:1 ) and identification ( 1 : n ) protocols , and directly compare , for the first time , with the state story_separator_special_tag one of the main challenges faced by the current face recognition techniques lies in the difficulties of collecting samples . fewer samples per person mean less laborious collection effort and lower cost for storing and processing them . unfortunately , many reported face recognition techniques rely heavily on the size and representativeness of the training set , and most of them will suffer a serious performance drop or even fail to work if only one training sample per person is available to the systems . this situation is called the `` one sample per person '' problem : given a stored database of faces , the goal is to identify a person from the database later in time in any different and unpredictable poses , lighting , etc . from just one image . such a task is very challenging for most current algorithms due to the extremely limited representativeness of the training sample . numerous techniques have been developed to attack this problem , and the purpose of this paper is to categorize and evaluate these algorithms . the prominent algorithms are described and critically analyzed . relevant issues such as data collection , the influence of the small sample size story_separator_special_tag traditional image representations are not suited to conventional classification methods such as the linear discriminant analysis ( lda ) because of the undersample problem ( usp ) : the dimensionality of the feature space is much higher than the number of training samples . motivated by the successes of the two-dimensional lda ( 2dlda ) for face recognition , we develop a general tensor discriminant analysis ( gtda ) as a preprocessing step for lda . the benefits of gtda , compared with existing preprocessing methods such as the principal components analysis ( pca ) and 2dlda , include the following : 1 ) the usp is reduced in subsequent classification by , for example , lda , 2 ) the discriminative information in the training tensors is preserved , and 3 ) gtda provides stable recognition rates because the alternating projection optimization algorithm to obtain a solution of gtda converges , whereas that of 2dlda does not . we use human gait recognition to validate the proposed gtda . the averaged gait images are utilized for gait representation . given the popularity of gabor-function-based image decompositions for image understanding and object recognition , we develop three different gabor-function-based image story_separator_special_tag subspace selection approaches are powerful tools in pattern classification and data visualization .
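the abstract does not spell out its bootstrapping procedure; one plausible minimal reading, replacing uniform subsampling with loss-weighted sampling so that hard examples are preferentially retained, is:

```python
import numpy as np

def bootstrap_subset(losses, n_keep, rng):
    """Loss-weighted resampling of the training set.

    Instead of a uniform random subsample, examples are drawn with
    probability proportional to their current loss, so hard examples
    are kept while easy ones are gradually dropped.
    """
    p = losses / losses.sum()
    return rng.choice(len(losses), size=n_keep, replace=False, p=p)
```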
one of the most important subspace approaches is the linear dimensionality reduction step in fisher 's linear discriminant analysis ( flda ) , which has been successfully employed in many fields such as biometrics , bioinformatics , and multimedia information management . however , the linear dimensionality reduction step in flda has a critical drawback : for a classification task with c classes , if the dimension of the projected subspace is strictly lower than c - 1 , the projection tends to merge those classes which are close together in the original feature space . if separate classes are sampled from gaussian distributions , all with identical covariance matrices , then the linear dimensionality reduction step in flda maximizes the mean value of the kullback-leibler ( kl ) divergences between different classes . based on this viewpoint , the geometric mean for subspace selection is studied in this paper . three criteria are analyzed : 1 ) maximization of the geometric mean of the kl divergences , 2 ) maximization of the geometric mean of the normalized kl divergences , and story_separator_special_tag relevance feedback schemes based on support vector machines ( svm ) have been widely used in content-based image retrieval ( cbir ) . however , the performance of svm-based relevance feedback is often poor when the number of labeled positive feedback samples is small . this is mainly due to three reasons : 1 ) an svm classifier is unstable on a small-sized training set , 2 ) svm 's optimal hyperplane may be biased when the positive feedback samples are far fewer than the negative feedback samples , and 3 ) overfitting happens because the number of feature dimensions is much higher than the size of the training set . in this paper , we develop a mechanism to overcome these problems . to address the first two problems , we propose an asymmetric bagging-based svm ( ab-svm ) . for the third problem , we combine the random subspace method and svm for relevance feedback , which is named random subspace svm ( rs-svm ) . finally , by integrating ab-svm and rs-svm , an asymmetric bagging and random subspace svm ( abrs-svm ) is built to solve these three problems and further improve the relevance feedback performance story_separator_special_tag scientists working with large volumes of high-dimensional data , such as global climate patterns , stellar spectra , or human gene distributions , regularly confront the problem of dimensionality reduction : finding meaningful low-dimensional structures hidden in their high-dimensional observations . the human brain confronts the same problem in everyday perception , extracting from its high-dimensional sensory inputs ( 30,000 auditory nerve fibers or 10^6 optic nerve fibers ) a manageably small number of perceptually relevant features . here we describe an approach to solving dimensionality reduction problems that uses easily measured local metric information to learn the underlying global geometry of a data set . unlike classical techniques such as principal component analysis ( pca ) and multidimensional scaling ( mds ) , our approach is capable of discovering the nonlinear degrees of freedom that underlie complex natural observations , such as human handwriting or images of a face under different viewing conditions .
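under the shared-covariance gaussian assumption stated above, the pairwise kl divergence in a projected subspace has a closed form, so the geometric-mean criterion can be evaluated directly; a sketch that scores a candidate projection (the paper optimizes such a criterion, whereas this only evaluates it):

```python
import numpy as np

def geometric_mean_kl(W, means, cov):
    """Score a projection W (d, k) by the geometric mean of pairwise
    KL divergences between the projected class gaussians.

    Classes are assumed gaussian with shared covariance `cov`, under which
    KL(i || j) = 0.5 * (mu_i - mu_j)^T cov^-1 (mu_i - mu_j) in the subspace.
    """
    pm = [W.T @ m for m in means]              # projected class means
    pc_inv = np.linalg.inv(W.T @ cov @ W)      # projected shared covariance
    log_kls = []
    for i in range(len(pm)):
        for j in range(i + 1, len(pm)):
            d = pm[i] - pm[j]
            log_kls.append(np.log(0.5 * d @ pc_inv @ d))
    # Geometric mean = exp of the mean of logs; unlike the arithmetic mean,
    # it heavily penalizes projections that merge any single close pair.
    return np.exp(np.mean(log_kls))
```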
in contrast to previous algorithms for nonlinear dimensionality reduction , ours efficiently computes a globally optimal solution , and , for an important class of data manifolds , is guaranteed to converge asymptotically to the true structure . story_separator_special_tag we have developed a near-real-time computer system that can locate and track a subject 's head , and then recognize the person by comparing characteristics of the face to those of known individuals . the computational approach taken in this system is motivated by both physiology and information theory , as well as by the practical requirements of near-real-time performance and accuracy . our approach treats the face recognition problem as an intrinsically two-dimensional ( 2-d ) recognition problem rather than requiring recovery of three-dimensional geometry , taking advantage of the fact that faces are normally upright and thus may be described by a small set of 2-d characteristic views . the system functions by projecting face images onto a feature space that spans the significant variations among known face images . the significant features are known as `` eigenfaces , '' because they are the eigenvectors ( principal components ) of the set of faces ; they do not necessarily correspond to features such as eyes , ears , and noses . the projection operation characterizes an individual face by a weighted sum of the eigenface features , and so to recognize a particular face it is necessary only story_separator_special_tag 1. introduction . image processing as picture analysis . the advantages of interactive graphics . representative uses of computer graphics . classification of applications . development of hardware and software for computer graphics . conceptual framework for interactive graphics . 2. programming in the simple raster graphics package ( srgp ) . drawing with srgp . basic interaction handling . raster graphics features . limitations of srgp . 3. basic raster graphics algorithms for drawing 2d primitives . overview . scan converting lines . scan converting circles . scan converting ellipses . filling rectangles . filling polygons . filling ellipse arcs . pattern filling . thick primitives . line style and pen style . clipping in a raster world . clipping lines . clipping circles and ellipses . clipping polygons . generating characters . srgp_copypixel . antialiasing . 4. graphics hardware . hardcopy technologies . display technologies . raster-scan display systems . the video controller . random-scan display processor . input devices for operator interaction . image scanners . 5. geometrical transformations . 2d transformations . homogeneous coordinates and matrix representation of 2d transformations . composition of 2d transformations . the window-to-viewport transformation . efficiency . matrix representation of story_separator_special_tag images formed by a human face change with viewpoint . a new technique is described for synthesizing images of faces from new viewpoints , when only a single 2d image is available . a novel 2d image of a face can be computed without explicitly computing the 3d structure of the head . the technique draws on a single generic 3d model of a human head and on prior knowledge of faces based on example images of other faces seen in different poses . the example images are used to learn a pose-invariant shape and texture description of a new face . the 3d model is used to solve the correspondence problem between images showing faces in different poses .
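the geodesic approach described at the start of this passage is the isomap algorithm, and an off-the-shelf implementation exists in scikit-learn; a usage sketch on stand-in data:

```python
import numpy as np
from sklearn.manifold import Isomap

# Hypothetical data: vectorized face images of one person under smoothly
# varying pose, which should occupy a low-dimensional manifold.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 1024))

# Isomap: build a k-nearest-neighbor graph, estimate geodesic distances by
# shortest paths on it, then embed those distances with classical MDS.
emb = Isomap(n_neighbors=10, n_components=2).fit_transform(X)
```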
the proposed method is interesting for view-independent face recognition tasks as well as for image synthesis problems in areas like teleconferencing and virtualized reality . story_separator_special_tag this paper describes a face detection framework that is capable of processing images extremely rapidly while achieving high detection rates . there are three key contributions . the first is the introduction of a new image representation called the integral image which allows the features used by our detector to be computed very quickly . the second is a simple and efficient classifier which is built using the adaboost learning algorithm ( freund and schapire , 1995 ) to select a small number of critical visual features from a very large set of potential features . the third contribution is a method for combining classifiers in a cascade which allows background regions of the image to be quickly discarded while spending more computation on promising face-like regions . a set of experiments in the domain of face detection is presented . the system yields face detection performance comparable to the best previous systems ( sung and poggio , 1998 ; rowley et al. , 1998 ; schneiderman and kanade , 2000 ; roth et al. , 2000 ) . implemented on a conventional desktop , face detection proceeds at 15 frames per second . story_separator_special_tag this paper presents a comprehensive survey of facial feature point detection with the assistance of abundant manually labeled images . facial feature point detection benefits many applications such as face recognition , animation , tracking , hallucination , expression analysis and 3d face modeling . existing methods are categorized into two primary categories according to whether a parametric shape model is needed : parametric shape model-based methods and nonparametric shape model-based methods . parametric shape model-based methods are further divided into two secondary classes according to their appearance models : local part model-based methods ( e.g . constrained local model ) and holistic model-based methods ( e.g . active appearance model ) . nonparametric shape model-based methods are divided into several groups according to their model construction process : exemplar-based methods , graphical model-based methods , cascaded regression-based methods , and deep learning-based methods . though significant progress has been made , facial feature point detection is still limited in its success by wild and real-world conditions : large variations across poses , expressions , illuminations , and occlusions . a comparative illustration and analysis of representative methods provides us with a holistic understanding and deep story_separator_special_tag the comparison of heterogeneous samples arises extensively in many applications , especially in the task of image classification . in this paper , we propose a simple but effective coupled neural network , called deeply coupled autoencoder networks ( dcan ) , which seeks to build two deep neural networks , coupled with each other in every corresponding layer . in dcan , each deep structure is developed via stacking multiple discriminative coupled auto-encoders , a denoising auto-encoder trained with maximum margin criterion consisting of intra-class compactness and inter-class penalty . this single layer component makes our model simultaneously preserve the local consistency and enhance its discriminative capability .
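the integral image representation from the detection framework above allows any rectangle sum (and hence any haar-like feature) to be evaluated in constant time; a minimal sketch:

```python
import numpy as np

def integral_image(img):
    """Summed-area table with a zero top row and left column for easy indexing."""
    ii = np.zeros((img.shape[0] + 1, img.shape[1] + 1), dtype=np.int64)
    ii[1:, 1:] = np.cumsum(np.cumsum(img, axis=0), axis=1)
    return ii

def box_sum(ii, r0, c0, r1, c1):
    """Sum of img[r0:r1, c0:c1] from four table lookups, independent of box size."""
    return ii[r1, c1] - ii[r0, c1] - ii[r1, c0] + ii[r0, c0]
```

a two-rectangle haar feature is then just the difference of two such box sums, which is what makes exhaustive feature evaluation cheap enough for the cascade.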
with an increasing number of layers , the coupled networks can gradually narrow the gap between the two views . extensive experiments on cross-view image classification tasks demonstrate the superiority of our method over state-of-the-art methods . story_separator_special_tag over the past two decades , a number of face recognition methods have been proposed in the literature . most of them use holistic face images to recognize people . however , human faces are easily occluded by other objects in many real-world scenarios and we have to recognize the person of interest from his/her partial faces . in this paper , we propose a new partial face recognition approach by using feature set matching , which is able to align partial face patches to holistic gallery faces automatically and is robust to occlusions and illumination changes . given each gallery image and probe face patch , we first detect key points and extract their local features . then , we propose a metric learned extended robust point matching ( mlerpm ) method to discriminatively match local feature sets of a pair of gallery and probe samples . lastly , the similarity of two faces is computed from the distance between the two feature sets . experimental results on three public face databases are presented to show the effectiveness of the proposed approach . story_separator_special_tag we present a system for recognizing human faces from single images out of a large database containing one image per person . faces are represented by labeled graphs , based on a gabor wavelet transform . image graphs of new faces are extracted by an elastic graph matching process and can be compared by a simple similarity function . the system differs from the preceding one ( lades et al. , 1993 ) in three respects . phase information is used for accurate node positioning . object-adapted graphs are used to handle large rotations in depth . image graph extraction is based on a novel data structure , the bunch graph , which is constructed from a small set of sample image graphs . story_separator_special_tag we present a new approach to robust pose-variant face recognition , which exhibits excellent generalization ability even across completely different datasets due to its weak dependence on data . most face recognition algorithms assume that the face images are very well-aligned . this assumption is often violated in real-life face recognition tasks , in which face detection and rectification have to be performed automatically prior to recognition . although great improvements have been made in face alignment recently , significant pose variations may still occur in the aligned faces . we propose a multiscale local descriptor-based face representation to mitigate this issue . first , discriminative local image descriptors are extracted from a dense set of multiscale image patches . the descriptors are expanded by their spatial locations . each expanded descriptor is quantized by a set of random projection trees . the final face representation is a histogram of the quantized descriptors . the location expansion constrains the quantization regions to be localized not just in feature space but also in image space , allowing us to achieve an implicit elastic matching for face images . our experiments on challenging face recognition benchmarks demonstrate the advantages of the proposed story_separator_special_tag mathematical optimization plays a fundamental role in solving many problems in computer vision ( e.g.
, camera calibration , image alignment , structure from motion ) . it is generally accepted that second order descent methods are the most robust , fast , and reliable approaches for nonlinear optimization of a general smooth function . however , in the context of computer vision , second order descent methods have two main drawbacks : 1 ) the function might not be analytically differentiable and numerical approximations are impractical , and 2 ) the hessian may be large and not positive definite . recently , the supervised descent method ( sdm ) , a method that learns weighted averaged gradients in a supervised manner , has been proposed to solve these issues . however , sdm is a local algorithm and it is likely to average conflicting gradient directions . this paper proposes global sdm ( gsdm ) , an extension of sdm that divides the search space into regions of similar gradient directions . gsdm provides a better and more efficient strategy to minimize non-linear least squares functions in computer vision problems . we illustrate the effectiveness of gsdm in two problems story_separator_special_tag it is practical to assume that an individual view is unlikely to be sufficient for effective multi-view learning . therefore , integration of multi-view information is both valuable and necessary . in this paper , we propose the multi-view intact space learning ( misl ) algorithm , which integrates the encoded complementary information in multiple views to discover a latent intact representation of the data . even though each view on its own is insufficient , we show theoretically that by combining multiple views we can obtain abundant information for latent intact space learning . employing the cauchy loss ( a technique used in statistical learning ) as the error measurement strengthens robustness to outliers . we propose a new definition of multi-view stability and then derive the generalization error bound based on multi-view stability and rademacher complexity , and show that the complementarity between multiple views is beneficial for the stability and generalization . misl is efficiently optimized using a novel iteratively reweight residuals ( irr ) technique , whose convergence is theoretically analyzed . experiments on synthetic data and real-world datasets demonstrate that misl is an effective and promising algorithm for practical applications . story_separator_special_tag most existing pose robust methods are too computationally complex for practical applications , and their performance under unconstrained environments is rarely evaluated . in this paper , we propose a novel method for pose robust face recognition towards practical applications , which is fast , pose robust and can work well under unconstrained environments . firstly , a 3d deformable model is built and a fast 3d model fitting algorithm is proposed to estimate the pose of the face image . secondly , a group of gabor filters are transformed according to the pose and shape of the face image for feature extraction . finally , pca is applied on the pose adaptive gabor features to remove the redundancies and a cosine metric is used to evaluate the similarity .
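a plain (non-pose-adaptive) version of the gabor feature extraction and cosine matching just described, using opencv's gabor kernels; the filter-bank parameters are assumptions, and the paper's key step of transforming the kernels by pose, as well as the pca reduction, are omitted for brevity:

```python
import cv2
import numpy as np

def gabor_features(img, scales=(4, 8, 16), n_orient=8):
    """Stack filter responses from a small gabor bank into one feature vector.

    img: grayscale image (uint8 or float32).
    """
    feats = []
    for lambd in scales:                      # wavelength of each scale
        for k in range(n_orient):
            theta = k * np.pi / n_orient      # filter orientation
            kern = cv2.getGaborKernel((31, 31), sigma=lambd / 2.0,
                                      theta=theta, lambd=lambd, gamma=0.5)
            feats.append(cv2.filter2D(img, cv2.CV_32F, kern).ravel())
    return np.concatenate(feats)

def cosine_score(f1, f2):
    return float(f1 @ f2 / (np.linalg.norm(f1) * np.linalg.norm(f2)))
```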
the proposed method has three advantages : ( 1 ) the pose correction is applied in the filter space rather than image space , which makes our method less affected by the precision of the 3d model , ( 2 ) by combining the holistic pose transformation and local gabor filtering , the final feature is robust to pose and other negative factors in face recognition , ( 3 ) the 3d structure story_separator_special_tag face recognition under viewpoint and illumination changes is a difficult problem , so many researchers have tried to solve this problem by producing pose- and illumination-invariant features . zhu et al . [ 26 ] changed all arbitrary pose and illumination images to the frontal view image to use for the invariant feature . in this scheme , preserving identity while rotating the pose image is a crucial issue . this paper proposes a new deep architecture based on a novel type of multitask learning , which can achieve superior performance in rotating to a target-pose face image from an arbitrary pose and illumination image while preserving identity . the target pose can be controlled by the user 's intention . this novel type of multi-task model significantly improves identity preservation over the single task model . by using all the synthesized controlled pose images , called controlled pose image ( cpi ) , for the pose-illumination-invariant feature and voting among the multiple face recognition results , we clearly outperform the state-of-the-art algorithms by more than 4-6 % on the multipie dataset . story_separator_special_tag face information processing relies on the quality of data resource . from the data modality point of view , a face database can be 2d or 3d , and static or dynamic . from the task point of view , the data can be used for research of computer-based automatic face recognition , face expression recognition , face detection , or cognitive and psychological investigation . with the advancement of 3d imaging technologies , 3d dynamic facial sequences ( called 4d data ) have been used for face information analysis . in this paper , we focus on the modality of 3d dynamic data for the task of facial expression recognition . we present a newly created high-resolution 3d dynamic facial expression database , which is made available to the scientific research community . the database contains 606 3d facial expression sequences captured from 101 subjects of various ethnic backgrounds . the database has been validated through our facial expression recognition experiment using an hmm-based 3d spatio-temporal facial descriptor . it is expected that such a database shall be used to facilitate the facial expression analysis from a static 3d space to a dynamic 3d space , with story_separator_special_tag handling intra-personal variation is a major challenge in face recognition . it is difficult to appropriately measure the similarity between human faces under significantly different settings ( e.g. , pose , illumination , and expression ) . in this paper , we propose a new model , called associate-predict ( ap ) model , to address this issue . the associate-predict model is built on an extra generic identity data set , in which each identity contains multiple images with large intra-personal variation . when considering two faces under significantly different settings ( e.g. , non-frontal and frontal ) , we first associate one input face with alike identities from the generic identity data set .
using the associated faces , we generatively predict the appearance of one input face under the setting of another input face , or discriminatively predict the likelihood that two input faces are from the same person . we call the two proposed prediction methods appearance-prediction and likelihood-prediction . by leveraging an extra data set ( memory ) and the associate-predict model , the intra-personal variation can be effectively handled . to improve the generalization ability of our model , we story_separator_special_tag we propose a pose-robust face recognition method to handle the challenging task of face recognition in the presence of large pose difference between gallery and probe faces . the proposed method exploits the sparse property of the representation coefficients of a face image over its corresponding view-dictionary . by assuming the representation coefficients are invariant to pose , we can synthesize for the probe image a novel face image which has a smaller pose difference from the gallery faces . furthermore , face recognition in the presence of pose variations is achieved based on the synthesized face image again via sparse representation . extensive experiments on the cmu multi-pie face database are conducted to verify the efficacy of the proposed method . story_separator_special_tag one of the major challenges encountered by current face recognition techniques lies in the difficulties of handling varying poses , i.e. , recognition of faces in arbitrary in-depth rotations . the face image differences caused by rotations are often larger than the inter-person differences used in distinguishing identities . face recognition across pose , on the other hand , has great potential in many applications dealing with uncooperative subjects , in which the full power of face recognition as a passive biometric technique can be implemented and utilised . extensive efforts have been put into the research toward pose-invariant face recognition in recent years and many prominent approaches have been proposed . however , several issues in face recognition across pose still remain open , such as lack of understanding about subspaces of pose variant images , problem intractability in 3d face modelling , complex face surface reflection mechanism , etc . this paper provides a critical survey of research on image-based face recognition across pose . the existing techniques are comprehensively reviewed and discussed . they are classified into different categories according to their methodologies in handling pose variations . their strategies , advantages/disadvantages and performances are elaborated . story_separator_special_tag this paper proposes a novel heterogeneous specular and diffuse ( hsd ) 3-d surface approximation which considers spatial variability of specular and diffuse reflections in face modelling and recognition . traditional 3-d face modelling and recognition methods constrain human faces with either the lambertian assumption or the homogeneity assumption , resulting in suboptimal shape and texture models . the proposed hsd approach allows both specular and diffuse reflectance coefficients to vary spatially to better accommodate surface properties of real human faces . from a small number of face images of a person under different lighting conditions , 3d shape and surface reflectivity properties are estimated using a localized stochastic optimization method .
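a sketch of the view-dictionary idea from the sparse-representation abstract above: code the probe over a dictionary in its own pose, then, assuming the coefficients are pose-invariant as the paper does, reuse them with the frontal dictionary to synthesize a frontal view (the alpha value and interface are assumptions):

```python
import numpy as np
from sklearn.linear_model import Lasso

def synthesize_frontal(probe, dict_profile, dict_frontal, alpha=0.01):
    """Sparse-code a profile probe over the profile view-dictionary, then
    transfer the coefficients to the frontal dictionary.

    dict_profile, dict_frontal: (p, n_atoms) column dictionaries built from
    the same subjects imaged in the two poses, so atom i in one dictionary
    corresponds to atom i in the other.
    probe: (p,) vectorized profile face image.
    """
    lasso = Lasso(alpha=alpha, max_iter=5000)
    lasso.fit(dict_profile, probe)             # probe ~ dict_profile @ coef
    return dict_frontal @ lasso.coef_          # reuse coefficients across pose
```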
the resultant personalized 3-d face model is used to render novel gallery views under different poses for recognition across pose . the proposed approach is evaluated on both synthetic and real face datasets and benchmarked against the state-of-the-art approaches . experimental results demonstrate that it achieves a higher level of performance in modelling accuracy , algorithm reliability , and recognition accuracy , which suggests that face modelling and recognition beyond the lambertian and homogeneity assumptions is a feasible and better solution towards pose-invariant face recognition . story_separator_special_tag one of the key remaining problems in face recognition is that of handling the variability in appearance due to changes in pose . one strategy is to synthesize virtual face views from real views . in this paper , a novel 3d face shape-modeling algorithm , multilevel quadratic variation minimization ( mqvm ) , is proposed . our method makes sole use of two orthogonal real views of a face , i.e. , the frontal and profile views . by applying quadratic variation minimization iteratively in a coarse-to-fine hierarchy of control lattices , the mqvm algorithm can generate c^2-smooth 3d face surfaces . then realistic virtual face views can be synthesized by rotating the 3d models . the algorithm works properly on sparse constraint points and large images . it is much more efficient than single-level quadratic variation minimization . the modeling results suggest the validity of the mqvm algorithm for 3d face modeling and 2d face view synthesis under different poses . story_separator_special_tag one possible solution for pose- and illumination-invariant face recognition is to employ appearance-based approaches , which rely greatly on correct facial textures . however , existing facial texture analysis algorithms are suboptimal , because they usually neglect specular reflections and require numerous training images for virtual view synthesis . this paper presents a novel texture synthesis approach from a single frontal view for face recognition . using a generic 3d face shape , facial textures are analyzed with consideration of all of the ambient , diffuse , and specular reflections . virtual views are synthesized under different poses and illuminations . the proposed approach was evaluated using the cmu-pie face database . encouraging results show that the proposed approach improves face recognition performance across pose and illumination variations story_separator_special_tag mug shot photography has been used to identify criminals by the police for more than a century . however , the common scenario of face recognition using frontal and side-view mug shots as gallery remains largely uninvestigated in computerized face recognition across pose . this paper presents a novel appearance-based approach using frontal and side-face images to handle pose variations in face recognition , which has great potential in forensic and security applications involving police mugshot databases . virtual views in different poses are generated in two steps : 1 ) shape modelling and 2 ) texture synthesis . in the shape modelling step , a multilevel variation minimization approach is applied to generate personalized 3-d face shapes . in the texture synthesis step , face surface properties are analyzed and virtual views in arbitrary viewing conditions are rendered , taking diffuse and specular reflections into account .
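the ambient-plus-diffuse-plus-specular shading that these texture synthesis abstracts rely on is the classical phong model; a per-point sketch with hypothetical reflectance coefficients, which the papers instead estimate per face (or per surface point, in the hsd model above):

```python
import numpy as np

def phong_intensity(normal, light, view, ka=0.1, kd=0.6, ks=0.3, shininess=20.0):
    """Ambient + diffuse + specular shading at one surface point.

    normal, light, view: unit 3-vectors (surface normal, direction to the
    light, direction to the camera).
    """
    diffuse = max(float(normal @ light), 0.0)
    # Mirror reflection of the light direction about the normal.
    reflect = 2.0 * (normal @ light) * normal - light
    # No specular highlight on surfaces facing away from the light.
    specular = max(float(reflect @ view), 0.0) ** shininess if diffuse > 0 else 0.0
    return ka + kd * diffuse + ks * specular
```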
appearance-based face recognition is performed with the augmentation of synthesized virtual views covering possible viewing angles to recognize probe views in arbitrary conditions . the encouraging experimental results demonstrated that the proposed approach by using frontal and side-view images is a feasible and effective solution to recognizing rotated faces , which can story_separator_special_tag tolerance to pose variations is one of the key remaining problems in face recognition . it is of great interest in airport surveillance systems using mugshot databases to screen travellers ' faces . this paper presents a novel pose-invariant face recognition approach using two orthogonal face images from mugshot databases . virtual views under different poses are generated in two steps : shape modeling and texture synthesis . in the shape modeling step , a feature-based multilevel quadratic variation minimization approach is applied to generate smooth 3d face shapes . in the texture synthesis step , a non-lambertian reflectance model is explored to synthesize facial textures taking into account both diffuse and specular reflections . a view-based face recognizer is used to examine the feasibility and effectiveness of the proposed pose-invariant face recognition . the experimental results show that the proposed method provides a new solution to the problem of recognizing rotated faces story_separator_special_tag one of the most challenging tasks in face recognition is to identify people with varied poses . namely , the test faces have significantly different poses compared with the registered faces . in this paper , we propose a high-level feature learning scheme to extract pose-invariant identity features for face recognition . first , we build a single-hidden-layer neural network with a sparsity constraint to extract pose-invariant features in a supervised fashion . second , we further enhance the discriminative capability of the proposed feature by using multiple random faces as the target values for multiple encoders . by enforcing the target values to be unique for input faces over different poses , the learned high-level feature that is represented by the neurons in the hidden layer is pose-free and only relevant to the identity information . finally , we conduct face identification on cmu multi-pie , and verification on labeled faces in the wild ( lfw ) databases , where identification rank-1 accuracy and face verification accuracy with roc curve are reported . these experiments demonstrate that our model is superior to other state-of-the-art approaches on handling pose variations . story_separator_special_tag this paper addresses two critical but rarely addressed issues in 2d face recognition : wider-range tolerance to pose variation and misalignment . we propose a new textural hausdorff distance ( thd ) , which is a compound measurement integrating both spatial and textural features . the thd is applied to a significant jet point ( sjp ) representation of face images , where a varied number of shape-driven sjps are detected automatically from a low-level edge map with rich information content . the comparative experiments conducted on publicly available feret and ar face databases demonstrated that the proposed approach has a considerably wider range of tolerance against both in-depth head rotation and face misalignment .
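the abstract does not give the exact form of the textural hausdorff distance, so the following is only a plausible compound measurement in its spirit: a modified-hausdorff-style set distance whose pointwise cost mixes a spatial and a textural term:

```python
import numpy as np

def textural_hausdorff(pts_a, feat_a, pts_b, feat_b, w=0.5):
    """A modified-hausdorff-style distance mixing spatial and textural costs.

    pts_*:  (n, 2) jet point locations;  feat_*: (n, d) local texture features.
    The cost between two points is a weighted sum of both terms; the set
    distance is the larger of the two directed mean min-costs.
    """
    spatial = np.linalg.norm(pts_a[:, None] - pts_b[None], axis=2)   # (n, m)
    textural = np.linalg.norm(feat_a[:, None] - feat_b[None], axis=2)
    cost = w * spatial + (1 - w) * textural
    forward = cost.min(axis=1).mean()          # a -> b
    backward = cost.min(axis=0).mean()         # b -> a
    return max(forward, backward)
```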
story_separator_special_tag as one of the most successful applications of image analysis and understanding , face recognition has recently received significant attention , especially during the past several years . at least two reasons account for this trend : the first is the wide range of commercial and law enforcement applications , and the second is the availability of feasible technologies after 30 years of research . even though current machine recognition systems have reached a certain level of maturity , their success is limited by the conditions imposed by many real applications . for example , recognition of face images acquired in an outdoor environment with changes in illumination and/or pose remains a largely unsolved problem . in other words , current systems are still far away from the capability of the human perception system . this paper provides an up-to-date critical survey of still- and video-based face recognition research . there are two underlying motivations for us to write this survey paper : the first is to provide an up-to-date review of the existing literature , and the second is to offer some insights into the studies of machine recognition of faces . to provide a comprehensive survey , we not only story_separator_special_tag pose and expression normalization is a crucial step to recover the canonical view of faces under arbitrary conditions , so as to improve face recognition performance . an ideal normalization method is desired to be automatic , database independent and high-fidelity , where the face appearance should be preserved with little artifact and information loss . however , most normalization methods fail to satisfy one or more of these goals . in this paper , we propose a high-fidelity pose and expression normalization ( hpen ) method with a 3d morphable model ( 3dmm ) which can automatically generate a natural face image in frontal pose and neutral expression . specifically , we firstly make a landmark marching assumption to describe the non-correspondence between 2d and 3d landmarks caused by pose variations and propose a pose adaptive 3dmm fitting algorithm . secondly , we mesh the whole image into a 3d object and eliminate the pose and expression variations using an identity-preserving 3d transformation . finally , we propose an inpainting method based on poisson editing to fill the invisible region caused by self-occlusion . extensive experiments on multi-pie and lfw demonstrate that the proposed method significantly improves story_separator_special_tag face recognition with large pose and illumination variations is a challenging problem in computer vision . this paper addresses this challenge by proposing a new learning-based face representation : the face identity-preserving ( fip ) features . unlike conventional face descriptors , the fip features can significantly reduce intra-identity variances , while maintaining discriminativeness between identities . moreover , the fip features extracted from an image under any pose and illumination can be used to reconstruct its face image in the canonical view . this property makes it possible to improve the performance of traditional descriptors , such as lbp [ 2 ] and gabor [ 31 ] , which can be extracted from our reconstructed images in the canonical view to eliminate variations . in order to learn the fip features , we carefully design a deep network that combines the feature extraction layers and the reconstruction layer .
the former encodes a face image into the fip features , while the latter transforms them to an image in the canonical view . extensive experiments on the large multipie face database [ 7 ] demonstrate that it significantly outperforms the state-of-the-art face recognition methods . story_separator_special_tag various factors , such as identity , view , and illumination , are coupled in face images . disentangling the identity and view representations is a major challenge in face recognition . existing face recognition systems either use handcrafted features or learn features discriminatively to improve recognition accuracy . this is different from the behavior of the primate brain . recent studies [ 5 , 19 ] discovered that the primate brain has a face-processing network , where view and identity are processed by different neurons . taking this instinct into account , this paper proposes a novel deep neural net , named multi-view perceptron ( mvp ) , which can untangle the identity and view features , and meanwhile infer a full spectrum of multi-view images , given a single 2d face image . the identity features of mvp achieve superior performance on the multipie dataset . mvp is also capable of interpolating and predicting images under viewpoints that are unobserved in the training data . story_separator_special_tag face images in the wild undergo large intra-personal variations , such as poses , illuminations , occlusions , and low resolutions , which cause great challenges to face-related applications . this paper addresses this challenge by proposing a new deep learning framework that can recover the canonical view of face images . it dramatically reduces the intra-person variances , while maintaining the inter-person discriminativeness . unlike the existing face reconstruction methods that were either evaluated in a controlled 2d environment or employed 3d information , our approach directly learns the transformation from the face images with a complex set of variations to their canonical views . at the training stage , to avoid the costly process of labeling canonical-view images from the training set by hand , we have devised a new measurement to automatically select or synthesize a canonical-view image for each identity . as an application , this face recovery approach is used for face verification . facial features are learned from the recovered canonical-view face images by using a facial component-based convolutional neural network . our approach achieves the state-of-the-art performance on the lfw dataset . story_separator_special_tag the illumination variation problem is one of the well-known problems in face recognition in uncontrolled environments . in this paper an extensive and up-to-date survey of the existing techniques to address this problem is presented . this survey covers the passive techniques that attempt to solve the illumination problem by studying the visible light images in which face appearance has been altered by varying illumination , as well as the active techniques that aim to obtain images of face modalities invariant to environmental illumination . story_separator_special_tag nonlinear models arise when e [ y ] is a nonlinear function of unknown parameters . hypotheses about these parameters may be linear or nonlinear . such models tend to be used when they are suggested by theoretical considerations or used to build non-linear behavior into a model .
even when a linear approximation works well , a nonlinear model may still be used to retain a clear interpretation of the parameters . once we have established a nonlinear relationship , the next problem is how to incorporate the error term \( \varepsilon \) . sometimes a nonlinear relationship can be transformed into a linear one but in doing so we may end up with an error term that has awkward properties . in this case it is usually better to work with the non-linear model . these kinds of problems are demonstrated by several examples . story_separator_special_tag one of the more exciting and unsolved problems in computer vision nowadays is automatic , fast and full interpretation of face images under variable conditions of lighting and pose . interpretation is the inference of knowledge from an image . this knowledge covers relevant information , such as 3d shape and albedo , both related to the identity , but also information about physical factors which affect the appearance of faces , such as pose and lighting . interpretation of faces should not only retrieve the aforementioned pieces of information , but should also be capable of synthesizing novel facial images in which some of these pieces of information have been modified . this kind of interpretation can be achieved by using the paradigm known as analysis by synthesis , see figure 1 . ideally , an approach based on analysis by synthesis should consist of a generative facial parametric model that codes all the sources of appearance variation separately and independently , and an optimization algorithm which systematically varies the model parameters until the synthetic image produced by the model is as similar as possible to the test image , also called the input image . story_separator_special_tag traditional methods for image-based 3d face reconstruction and facial motion retargeting fit a 3d morphable model ( 3dmm ) to the face , which has limited modeling capacity and fails to generalize well to in-the-wild data . use of deformation transfer or a multilinear tensor as a personalized 3dmm for blendshape interpolation does not address the fact that facial expressions result in different local and global skin deformations in different persons . moreover , existing methods learn a single albedo per user , which is not enough to capture the expression-specific skin reflectance variations . we propose an end-to-end framework that jointly learns a personalized face model per user and per-frame facial motion parameters from a large corpus of in-the-wild videos of user expressions . specifically , we learn user-specific expression blendshapes and dynamic ( expression-specific ) albedo maps by predicting personalized corrections on top of a 3dmm prior . we introduce novel constraints to ensure that the corrected blendshapes retain their semantic meanings and the reconstructed geometry is disentangled from the albedo . experimental results show that our personalization accurately captures fine-grained facial dynamics in a wide range of conditions and efficiently decouples the learned face model from facial motion , story_separator_special_tag auto-cpap gives an opportunity to decrease the costs of evaluating patients with osa , replacing manual titration of pressure during psg . the aim of this study was to compare automatic ( auto-cpap ) and manual cpap pressure titration in patients with osa .
we studied 50 obese patients ( bmi 35 +/- 6 kg/m2 ) , mean age 52.4 +/- 9.4 years , with severe osa : mean ahi 62.9 +/- 22.1 , mean overnight sao2 89.1 +/- 3.7 % , t90 54.4 +/- 29.6 % . two polysomnographies were performed : first when the patient slept with cpap and pressure was titrated manually by a technician , and second on an auto-cpap device . both methods had similar efficacy in the reduction of ahi ( < 10/h ) and hypoxaemia , despite the lower pressure established during auto-cpap mode preventing apnoeas and hypopnoeas during 90 % of sleep time ( 8.2 +/- 1.7 cm h2o ) compared to manual cpap titration ( 9.2 +/- 1.7 cm h2o ) ( p < 0.05 ) . in conclusion , auto-cpap seems to be a reliable alternative to manual titration of the therapeutic pressure in patients with osa . this may help to cut story_separator_special_tag a manually operated signal-change head , essentially characterized by comprising a preferably prismatic housing constructed in resistant material and fitted with an interior lamp holder compatible with the mains , the side walls being panels of high-strength translucent material suitable for making the interior light visible from outside and at a distance , the housing providing access to its interior through hinged doors , and the head being associated with an assembly able to support its secure mounting and fixing by conventional means such as assembly and screwing . ( machine-translation by google translate , not legally binding ) story_separator_special_tag in order to enhance the photoelectrochemical ( pec ) performance of tungsten oxide ( wo3 ) , it is critical to overcome the problems of a narrow visible light absorption range and low carrier separation efficiency . in this work , we first prepared the 2d plate-like wo3/cuwo4 uniform core-shell heterojunction through an in-situ synthesis method . after modification with the amorphous co-pi co-catalyst , the ternary uniform core-shell structure photoanode achieved a photocurrent of 1.4 ma/cm2 at 1.23 v vs. rhe , which was about 6.67 and 1.75 times higher than that of pristine wo3 and the 2d uniform core-shell heterojunction , respectively . furthermore , the onset potential of the 2d wo3/cuwo4/co-pi core-shell heterojunction exhibited a negative shift of about 20 mv . experiments demonstrated that the enhanced pec performance of the wo3/cuwo4/co-pi photoanode was attributed to the broader light absorption , reduced carrier transfer barrier and increased carrier separation efficiency . the work provides a strategy of maximizing the advantages of core-shell heterojunctions and co-catalysts to achieve effective pec performance .
clustering is a key technique used to reduce energy consumption . it can increase the scalability and lifetime of the network . energy-efficient clustering protocols should be designed for the characteristics of heterogeneous wireless sensor networks . we propose and evaluate a new distributed energy-efficient clustering scheme for heterogeneous wireless sensor networks , which is called deec . in deec , the cluster-heads are elected by a probability based on the ratio between the residual energy of each node and the average energy of the network . the epochs for which nodes serve as cluster-heads differ according to their initial and residual energy . the nodes with high initial and residual energy will have more chances to be cluster-heads than the nodes with low energy . finally , the simulation results show that deec achieves longer lifetime and more effective messages than current important clustering protocols in heterogeneous environments . story_separator_special_tag many routing protocols based on clustering structures have been proposed in recent years . achieving energy efficiency , lifetime , deployment of nodes , fault tolerance and latency , in short high reliability and robustness , has become the main research goal of wireless sensor networks . many of these clustering-based protocols rely on heterogeneity . we propose edeec for three types of nodes to prolong the lifetime and stability of the network . hence , it increases the heterogeneity and energy level of the network . simulation results show that edeec performs better than sep , with more stability and more effective messages . story_separator_special_tag in recent advances , many routing protocols have been proposed based on heterogeneity , with main research goals such as achieving energy efficiency , lifetime , deployment of nodes , fault tolerance and latency , in short high reliability and robustness . in this paper , we have proposed an energy-efficient cluster head scheme for heterogeneous wireless sensor networks , called the tdeec ( threshold distributed energy efficient clustering ) protocol , which modifies the threshold value based on which a node decides whether or not to become a cluster head . simulation results show that the proposed algorithm performs better compared to others . story_separator_special_tag the hierarchical clustering technique strongly reduces direct transmissions , which heavily consume the nodes ' energy . many new protocols are specifically designed for wireless sensor networks using this strategy , where the objectives are to save energy and extend the network 's lifetime . adapting this approach , we propose a more equitable and stochastic technique which uniformly distributes the energy consumed through the whole network using a dynamic probability to elect the cluster head ( ch ) . this protocol provides a more energy-efficient distribution where the bs is localised far away from the network . moreover , it is suited to cases where the collected data define the maximum or minimum values in the supervised region . simulation results show that our protocol extends the network lifetime and improves performance compared to the low-energy adaptive clustering hierarchy ( leach ) , the distributed energy-efficient clustering ( deec ) and the equitable distributed energy-efficient clustering ( edeec ) .
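the deec election rule described above can be made concrete . a minimal sketch , assuming a leach-style rotating threshold and a desired cluster-head fraction of 0.1 ( both the constant and the helper names are illustrative assumptions , not taken from the papers ) :

import random

P_OPT = 0.1  # assumed desired fraction of cluster-heads per round

def ch_probability(residual_energy, avg_energy):
    # deec-style election probability: nodes with more residual energy
    # relative to the network average are more likely to become heads
    return P_OPT * residual_energy / max(avg_energy, 1e-12)

def elects_itself(p_i, round_no, eligible):
    # leach-style rotating threshold, reused here as an assumption about
    # how the per-round election could be implemented
    if not eligible:
        return False
    p = min(max(p_i, 1e-6), 0.999)  # clamp for numerical safety
    period = max(int(round(1.0 / p)), 1)
    threshold = p / (1.0 - p * (round_no % period))
    return random.random() < threshold

# usage: p_i = ch_probability(node_energy, network_avg_energy), then
# elects_itself(p_i, current_round, node_was_not_head_recently)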
story_separator_special_tag typically , a wireless sensor network contains a large number of inexpensive power-constrained sensors which collect data from the environment and transmit them towards the base station in a cooperative way . saving energy , and therefore extending the wireless sensor network 's lifetime , poses a great challenge . many new protocols are specifically designed for these reasons , where energy awareness is an essential consideration . clustering techniques are largely used for these purposes . in this paper , we present and evaluate a stochastic and balanced developed distributed energy-efficient clustering ( sbdeec ) scheme for heterogeneous wireless sensor networks . this protocol is based on dividing the network into dynamic clusters . the cluster 's nodes communicate with an elected node called the cluster head , and then the cluster heads communicate the information to the base station . sbdeec introduces a balanced and dynamic method where the cluster head election probability is more efficient . moreover , it uses a stochastic detection scheme to extend the network lifetime . simulation results show that our protocol performs better than the stable election protocol ( sep ) and the distributed energy-efficient clustering ( deec ) in terms story_separator_special_tag clustering algorithms are considered a key technique used to reduce energy consumption . they can help in increasing the stability period and network lifetime . routing protocols for efficient energy utilization should be designed for heterogeneous wireless sensor networks ( wsns ) . we propose hybrid-deec ( h-deec ) , a chain- and cluster-based ( hybrid ) distributed scheme for efficient energy utilization in wsns . in h-deec , elected cluster heads ( chs ) communicate with the base station ( bs ) through beta elected nodes , using multi-hopping . we logically divide the network into two parts , on the basis of the residual energy of nodes . the normal nodes with high initial and residual energy are more likely to be chs than the nodes with less energy . to overcome the deficiencies of h-deec , we propose multi-edged hybrid-deec ( mh-deec ) . in mh-deec the criteria for chain construction are modified . finally , the comparison of simulation results with other heterogeneous protocols shows that mh-deec and h-deec achieve longer stability time and network lifetime due to efficient energy utilization . story_separator_special_tag wireless sensor networks ( wsns ) consist of numerous sensors which send sensed data to the base station . energy conservation is an important issue for sensor nodes as they have limited power . many routing protocols have been proposed earlier for energy efficiency in both homogeneous and heterogeneous environments . stability and network lifetime can be prolonged by reducing energy consumption . in this research paper , we propose a protocol designed for the characteristics of reactive homogeneous wsns , the heer ( hybrid energy efficient reactive ) protocol . in heer , cluster head ( ch ) selection is based on the ratio between the residual energy of a node and the average energy of the network . moreover , to conserve more energy , we introduce a hard threshold ( ht ) and a soft threshold ( st ) . finally , simulations show that our protocol has not only prolonged the network lifetime but also significantly increased the stability period .
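the hard threshold ( ht ) and soft threshold ( st ) used by heer can be illustrated with a small reactive-reporting sketch ; the threshold values and the class name below are hypothetical :

HARD_THRESHOLD = 50.0  # hypothetical absolute value of the sensed attribute
SOFT_THRESHOLD = 2.0   # hypothetical minimum change that triggers a report

class ReactiveNode:
    """reactive reporting in the teen/heer style: transmit only when the
    sensed value exceeds the hard threshold, and afterwards only when it
    has changed by at least the soft threshold since the last report."""

    def __init__(self):
        self.last_reported = None

    def should_transmit(self, sensed_value):
        if sensed_value < HARD_THRESHOLD:
            return False
        if (self.last_reported is None
                or abs(sensed_value - self.last_reported) >= SOFT_THRESHOLD):
            self.last_reported = sensed_value
            return True
        return False

in this way a node stays silent , and so conserves energy , whenever its reading is uninteresting or essentially unchanged .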
story_separator_special_tag wireless sensor networks ( wsns ) consist of a large number of randomly deployed energy-constrained sensor nodes . sensor nodes have the ability to sense and send sensed data to the base station ( bs ) . sensing as well as transmitting data towards the bs requires high energy . in wsns , saving energy and extending network lifetime are great challenges . clustering is a key technique used to optimize energy consumption in wsns . in this paper , we propose a novel clustering-based routing technique : the enhanced developed distributed energy efficient clustering scheme ( eddeec ) for heterogeneous wsns . our technique is based on dynamically and more efficiently changing the cluster head ( ch ) election probability . simulation results show that our proposed protocol achieves longer lifetime , stability period and more effective messages to the bs than distributed energy efficient clustering ( deec ) , developed deec ( ddeec ) and enhanced deec ( edeec ) in heterogeneous environments . story_separator_special_tag in past years there has been increasing interest in the field of wireless sensor networks ( wsns ) . one of the major issues in wsns is the development of energy-efficient routing protocols . clustering is an effective way to increase energy efficiency . mostly , heterogeneous protocols consider two or three energy levels of nodes . in reality , heterogeneous wsns contain a large range of energy levels . by analyzing the communication energy consumption of the clusters and the large range of energy levels in heterogeneous wsns , we propose the beenish ( balanced energy efficient network integrated super heterogeneous ) protocol . it assumes a wsn containing four energy levels of nodes . here , cluster heads ( chs ) are elected on the basis of the residual energy levels of nodes . simulation results show that it performs better than existing clustering protocols in heterogeneous wsns . our protocol achieves longer stability , lifetime and more effective messages than distributed energy efficient clustering ( deec ) , developed deec ( ddeec ) and enhanced deec ( edeec ) . story_separator_special_tag the paper presents an analysis of energy-efficient routing protocols with the direct communication protocol . a comparison of these protocols is made by analyzing energy consumption at each node and explaining system lifetime after a certain number of rounds . the paper also proposes a novel energy-conscious cluster head selection algorithm for making the system more reliable and efficient . simulation shows that our proposed algorithm enhances the system reliability and accuracy . story_separator_special_tag we study the impact of heterogeneity of nodes , in terms of their energy , in wireless sensor networks that are hierarchically clustered . in these networks some of the nodes become cluster heads , aggregate the data of their cluster members and transmit it to the sink . we assume that a percentage of the population of sensor nodes is equipped with additional energy resources ; this is a source of heterogeneity which may result from the initial setting or arise as the operation of the network evolves . we also assume that the sensors are randomly ( uniformly ) distributed and are not mobile , and that the coordinates of the sink and the dimensions of the sensor field are known . we show that the behavior of such sensor networks becomes very unstable once the first node dies , especially in the presence of node heterogeneity .
classical clustering protocols assume that all the nodes are equipped with the same amount of energy and as a result , they can not take full advantage of the presence of node heterogeneity . we propose sep , a heterogeneous-aware protocol to prolong the time interval before the death of the first node ( story_separator_special_tag wireless sensor networks ( wsns ) are emerging in various fields like disaster management , battlefield surveillance and border security surveillance . a large number of sensors in these applications are unattended and work autonomously . clustering is a key technique to improve the network lifetime , reduce the energy consumption and increase the scalability of the sensor network . in this paper , we study the impact of the heterogeneity of the nodes on the performance of wsns . this paper surveys the different clustering algorithms for heterogeneous wsns . story_separator_special_tag recent advances in wireless sensor networks have led to many new protocols specifically designed for sensor networks where energy awareness is an essential consideration . most of the attention , however , has been given to the routing protocols since they might differ depending on the application and network architecture . this paper surveys recent routing protocols for sensor networks and presents a classification for the various approaches pursued . the three main categories explored in this paper are data-centric , hierarchical and location-based . each routing protocol is described and discussed under the appropriate category . moreover , protocols using contemporary methodologies such as network flow and quality of service modeling are also discussed . the paper concludes with open research issues . story_separator_special_tag sensor webs consisting of nodes with limited battery power and wireless communications are deployed to collect useful information from the field . gathering sensed information in an energy-efficient manner is critical to operating the sensor network for a long period of time . in w. heinzelman et al . ( proc . hawaii conf . on system sci. , 2000 ) , a data collection problem is defined where , in a round of communication , each sensor node has a packet to be sent to the distant base station . if each node transmits its sensed data directly to the base station then it will deplete its power quickly . the leach protocol presented by w. heinzelman et al . is an elegant solution where clusters are formed to fuse data before transmitting to the base station . by randomizing the cluster heads chosen to transmit to the base station , leach achieves a factor of 8 improvement compared to direct transmissions , as measured in terms of when nodes die . in this paper , we propose pegasis ( power-efficient gathering in sensor information systems ) , a near optimal chain-based protocol that is an improvement over story_separator_special_tag in wireless sensor networks , routing is the process by which the data gathered by sensors are relayed towards the end user ( usually termed the sink ) . a lot of routing protocols have been developed so far , and these protocols differ according to network structure and field of application . in this paper , a survey of the routing protocols developed so far in the field of wsns is presented . broadly , routing protocols are classified depending on network structure and protocol operation . the advantages and performance issues of hierarchical protocols will also be highlighted .
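the pegasis abstract above builds a chain along which sensed data are fused before a single node transmits to the base station . a minimal sketch of the greedy nearest-neighbour chain construction usually associated with pegasis ( starting from the node farthest from the base station is an assumption about the variant ) :

import numpy as np

def build_chain(positions, bs_position):
    """greedy pegasis-style chain: start from the node farthest from the
    base station, then repeatedly append the nearest unvisited node."""
    positions = np.asarray(positions, dtype=float)
    bs_position = np.asarray(bs_position, dtype=float)
    remaining = set(range(len(positions)))
    # start from the node farthest from the base station
    start = max(remaining,
                key=lambda i: np.linalg.norm(positions[i] - bs_position))
    chain = [start]
    remaining.remove(start)
    while remaining:
        last = positions[chain[-1]]
        nxt = min(remaining,
                  key=lambda i: np.linalg.norm(positions[i] - last))
        chain.append(nxt)
        remaining.remove(nxt)
    return chain  # data are fused hop by hop along this chain each round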
story_separator_special_tag in this paper , we propose the mobility of a sink in the improved energy-efficient pegasis-based protocol ( ieepb ) to extend the network lifetime of wireless sensor networks ( wsns ) . the multi-head chain , the multi-chain concept and sink mobility contribute largely to enhancing the network lifetime of wireless sensors . thus , we recommend the mobile sink improved energy-efficient pegasis-based routing protocol ( mieepb ) , a multi-chain model with sink mobility , to achieve proficient energy utilization of wireless sensors . as the motorized movement of the mobile sink is powered by fuel or electric current , there is a need to confine this movement within boundaries , and the trajectory of the mobile sink should be fixed . in our technique , the mobile sink moves along its trajectory and stays for a sojourn time at each sojourn location to guarantee complete data collection . we develop an algorithm for the trajectory of the mobile sink . we ultimately perform wide-ranging experiments to assess the performance of the proposed method . the results reveal that the proposed solution is nearly optimal and also better than ieepb in terms of network lifetime . story_separator_special_tag in recent years , there has been a growing interest in wireless sensor networks . one of the major issues in wireless sensor networks is developing an energy-efficient clustering protocol . hierarchical clustering algorithms are very important in increasing the network 's lifetime . each clustering algorithm is composed of two phases , the setup phase and the steady-state phase . the key point in these algorithms is the cluster head selection . in this paper , we study the impact of heterogeneity of nodes in terms of their energy in wireless sensor networks that are hierarchically clustered . we assume that a percentage of the population of sensor nodes is equipped with additional energy resources . we also assume that the sensor nodes are randomly distributed and are not mobile , and that the coordinates of the sink and the dimensions of the sensor field are known . homogeneous clustering protocols assume that all the sensor nodes are equipped with the same amount of energy and as a result , they can not take advantage of the presence of node heterogeneity . adapting this approach , we introduce an energy efficient heterogeneous clustered scheme for wireless sensor
this paper will very briefly review the history of the relationship between modern optimal control and robust control . the latter is commonly viewed as having arisen in reaction to certain perceived inadequacies of the former . more recently , the distinction has effectively disappeared . once-controversial notions of robust control have become thoroughly mainstream , and optimal control methods permeate robust control theory . this has been especially true in h-infinity theory , the primary focus of this paper . story_separator_special_tag this paper presents an elementary solution to the non-singular h-infinity control problem . in this control problem , the underlying linear system satisfies a set of assumptions which ensures that the solution can be obtained by solving just two algebraic riccati equations of the game type . this leads to the central solution to the h-infinity control problem . the solution presented in this paper uses only elementary ideas beginning with the bounded real lemma . story_separator_special_tag 1. introduction.- 1.1 the concept of an uncertain system.- 1.2 overview of the book.- 2. uncertain systems.- 2.1 introduction.- 2.2 uncertain systems with norm-bounded uncertainty.- 2.2.1 special case : sector-bounded nonlinearities.- 2.3 uncertain systems with integral quadratic constraints.- 2.3.1 integral quadratic constraints.- 2.3.2 integral quadratic constraints with weighting coefficients.- 2.3.3 integral uncertainty constraints for nonlinear uncertain systems.- 2.3.4 averaged integral uncertainty constraints.- 2.4 stochastic uncertain systems.- 2.4.1 stochastic uncertain systems with multiplicative noise.- 2.4.2 stochastic uncertain systems with additive noise : finite-horizon relative entropy constraints.- 2.4.3 stochastic uncertain systems with additive noise : infinite-horizon relative entropy constraints.- 3. h-infinity control and related preliminary results.- 3.1 riccati equations.- 3.2 h-infinity control.- 3.2.1 the standard h-infinity control problem.- 3.2.2 h-infinity control with transients.- 3.2.3 h-infinity control of time-varying systems.- 3.3 risk-sensitive control.- 3.3.1 exponential-of-integral cost analysis.- 3.3.2 finite-horizon risk-sensitive control.- 3.3.3 infinite-horizon risk-sensitive control.- 3.4 quadratic stability.- 3.5 a connection between h-infinity control and the absolute stabilizability of uncertain systems.- 3.5.1 definitions.- 3.5.2 the equivalence between absolute stabilization and h-infinity control.- 4. the s-procedure.- 4.1 introduction.- 4.2 an s-procedure result for a quadratic functional and one quadratic constraint.- 4.2.1 proof of theorem 4.2.1.- 4.3 an story_separator_special_tag a necessary and sufficient condition , expressed simply as the dc loop gain ( i.e. the loop gain at zero frequency ) being less than unity , is given in this paper to guarantee the internal stability of a feedback interconnection of linear time-invariant ( lti ) multiple-input multiple-output ( mimo ) systems with negative imaginary frequency response . systems with negative imaginary frequency response arise for example when considering transfer functions from force actuators to co-located position sensors , and are important in , for example , lightly damped structures .
the key result presented here has similar application to the small-gain theorem , which refers to the stability of feedback interconnections of contractive gain systems , and the passivity theorem ( or more precisely the positive real theorem in the lti case ) , which refers to the stability of feedback interconnections of positive real systems . a complete state-space characterisation of systems with negative imaginary frequency response is also given in this paper , and an example that demonstrates the application of the key result is provided . story_separator_special_tag the note is concerned with linear negative imaginary systems . first , a previously established negative imaginary lemma is shown to remain true even if the system transfer function matrix has poles on the imaginary axis . this result is achieved by suitably extending the definition of negative imaginary transfer function matrices . second , a necessary and sufficient condition is established for the internal stability of the positive feedback interconnections of negative imaginary systems . meanwhile , some properties of linear negative imaginary systems are developed . finally , an undamped flexible structure example is presented to illustrate the theory . story_separator_special_tag this paper investigates the robustness of positive-position feedback control of flexible structures with colocated force actuators and position sensors . in particular , the theory of negative-imaginary systems is used to reveal the robustness properties of multi-input , multi-output ( mimo ) positive-position feedback controllers and related types of controllers for flexible structures . the negative-imaginary property of linear systems can be extended to nonlinear systems through the notion of counterclockwise input-output dynamics . story_separator_special_tag the paper is concerned with the notion of lossless negative imaginary systems and their stabilization using a strictly negative imaginary controller through positive feedback . firstly , some properties of lossless negative imaginary transfer functions are studied . secondly , a lossless negative imaginary lemma is given which establishes conditions on matrices appearing in a minimal state-space realization that are necessary and sufficient for a transfer function to be lossless negative imaginary . thirdly , a necessary and sufficient condition is developed for the stabilization of a lossless negative imaginary system by a strictly negative imaginary controller . finally , a numerical example is presented to illustrate the theory . story_separator_special_tag this technical note studies the negative imaginary properties of descriptor linear systems based on state-space realizations . under the assumption of a minimal realization , necessary and sufficient conditions are established to characterize the negative imaginary properties of descriptor systems in terms of linear matrix inequalities with equality constraints . in particular , a negative imaginary lemma , a strict negative imaginary lemma and a lossless negative imaginary lemma are developed . a multiple-input and multiple-output rlc circuit network is used as an illustrative example to validate the developed theory . story_separator_special_tag we consider second-order infinite-dimensional systems with force control and collocated position measurement interconnected with finite-dimensional controllers of the same type .
we show that under assumptions that generalize those in the finite-dimensional case ( the theory of negative imaginary systems ) , asymptotic stability of the closed-loop system can be concluded , but that the closed-loop system may be neither exponentially stable nor input-output stable . story_separator_special_tag systems with counterclockwise input-output dynamics ( or negative imaginary transfer functions ) arise in various applications such as the modeling of flexible mechanical structures or electrical circuits when certain kinds of measurements are taken . in this paper we introduce descriptor systems with such an additional structure . we state several of their properties and prove algebraic characterizations of negative imaginariness in terms of spectral conditions of certain structured matrix pencils . for this purpose we also analyze particular boundary cases which are characterized by properties of a structured kronecker canonical form . finally , we describe a method which can be used to restore the negative imaginary property in case it is lost . this happens , e.g. , when a system with a theoretically negative imaginary transfer function is obtained by model order reduction methods , linearization , or other approximations . the method is illustrated by numerical examples . story_separator_special_tag this note represents a first attempt to provide a definition and characterisation of negative imaginary systems for not necessarily rational transfer functions via a sign condition expressed in the entire domain of analyticity , along the same lines as the classic definition of positive real systems . under the standing assumption of symmetric transfer functions , we then derive a necessary and sufficient condition that characterises negative imaginary transfer functions in terms of a matrix sign condition restricted to the imaginary axis , once again following the same line of argument as in the standard positive real case . using this definition , even transfer functions with a pole at the origin with double multiplicity , as well as with a possibly negative relative degree , can be negative imaginary . story_separator_special_tag this paper studies the stability of the feedback interconnection of discrete-time negative imaginary ( d-ni ) systems through integral quadratic constraints ( iqcs ) . applying the latest iqc-based results . story_separator_special_tag this paper studies non-proper negative imaginary systems . first , the concept of negative imaginary transfer functions that may have poles at the origin and infinity is introduced . then , a generalized lemma is presented to provide a sufficient condition to characterize the non-proper negative imaginary properties of systems . the generalized lemma is given in terms of the complex variable s in the quarter domain of analyticity . compared to previous results , our result removes the symmetric restriction . also , a new relationship is established between ( lossless ) negative imaginary and ( lossless ) positive real transfer function matrices by using a minor decomposition . the results in this paper make it possible to address non-proper and non-symmetric descriptor systems with negative imaginary frequency response . several examples are presented to illustrate the results . story_separator_special_tag in this technical note we lay the foundations of a not necessarily rational negative imaginary systems theory and its relations with positive real systems theory .
in analogy with the theory of positive real functions , in our general framework negative imaginary systems are defined in terms of a domain of analyticity of the transfer function and of a sign condition that must be satisfied in such a domain . in this way , we do not need to restrict attention to systems with a rational transfer function . in this work , we also define various grades of negative imaginary systems and aim to provide a unified view of the different notions that have appeared so far in the literature within the framework of positive real systems and in the more recent theory of negative imaginary systems , and to show how these notions are characterized and linked to each other . story_separator_special_tag in this letter a strategy to make a system negative imaginary is introduced . we prove that a dynamical forward action is effective for lyapunov stable systems and discuss how to design the forward compensator for siso and mimo systems . some numerical examples are also included . story_separator_special_tag this note provides the connection between the paper `` absolute stability analysis for negative-imaginary systems '' and classical results in absolute stability . strictly negative-imaginary systems satisfy the aizerman conjecture . story_separator_special_tag for a string of ( possibly arbitrarily many ) coupled stable subsystems that are equipped with a dynamic property of negative imaginary frequency response , we characterize stability of the string by a dc gain condition that can be expressed as a continued fraction with verifiable convergence properties . through analysis of the convergence of the continued fraction , we establish stability results for the string with various coupling gains and patterns . the derived results are applied to locally decentralized control of large vehicle platoons , possibly with heterogeneous neighboring coupling . story_separator_special_tag in this paper , we present a generalized negative imaginary lemma based on a generalized negative imaginary system definition . then , an algebraic riccati equation method is given to determine if a system is negative imaginary . also , a state feedback control procedure is presented that stabilizes an uncertain system and leads to the satisfaction of the negative imaginary property . the controller synthesis procedure is based on the proposed negative imaginary lemma . using this procedure , the closed-loop system can be guaranteed to be robustly stable against any strict negative imaginary uncertainty , such as in the case of unmodeled spill-over dynamics in a lightly damped flexible structure . a numerical example is presented to illustrate the usefulness of the results . story_separator_special_tag this paper introduces a class of resonant controllers that can be used to minimize structural vibration using collocated piezoelectric actuator-sensor pairs . the proposed controller increases the damping of the structure so as to minimize a chosen number of resonant responses . the controller can be tuned to a chosen number of modes . this results in controllers of minimal dimension . the controller structure is chosen such that closed-loop stability is guaranteed . moreover , the controller can be designed such that the spatial h2 norm of the system is minimized . this will guarantee average reduction of vibration throughout the entire structure . experimental validation on a simply supported beam is presented showing the effectiveness of the proposed controller .
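several of the abstracts above reduce internal stability of the positive feedback interconnection of a negative imaginary plant m ( s ) with a strictly negative imaginary controller n ( s ) to a dc loop gain condition , lambda_max ( m ( 0 ) n ( 0 ) ) < 1 . a minimal numerical check from state-space data , assuming each system is given as an ( a , b , c , d ) tuple with invertible a ( i.e. no poles at the origin ) ; verifying the ni/sni properties themselves is a separate step :

import numpy as np

def dc_gain(a, b, c, d):
    # p(0) = d - c a^{-1} b, assuming a is invertible (no poles at s = 0)
    return d - c @ np.linalg.solve(a, b)

def ni_dc_stability_check(plant, controller):
    """check lambda_max(m(0) n(0)) < 1 for the positive feedback
    interconnection of an ni plant and an sni controller; the ni/sni
    properties are assumed to have been verified separately."""
    m0 = dc_gain(*plant)
    n0 = dc_gain(*controller)
    # for ni systems m(0) and n(0) are real symmetric, so the eigenvalues
    # of their product are real; .real only discards rounding noise
    lam = np.max(np.linalg.eigvals(m0 @ n0).real)
    return lam, lam < 1.0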
story_separator_special_tag in this paper we propose a special type of colocated feedback controller for smart structures . the controller is a parallel combination of high-q resonant circuits . each of the resonant circuits is tuned to a pole ( or the resonant frequency ) of the smart structure . it is proven that the parallel combination of resonant controllers is stable with an infinite gain margin . only one actuator-sensor pair is needed to damp multiple resonant modes with the resonant controllers . experimental results are presented to show the robustness of the proposed controller in damping multimode resonances . story_separator_special_tag to realize vibration suppression of flexible structures such as membranes , smart structures technology using materials such as piezoelectric elements has been the focus of attention . in this study , a vibration control system using a pvdf film as an actuator for the membrane structure is proposed . to confirm the effectiveness of the proposed method , the control properties with h-infinity control for reducing single-mode ( 1st or 2nd mode ) vibration are evaluated by using a non-contact laser excitation vibration test system . in the system , a high-power pulse laser is used for producing an ideal impulse excitation and a laser doppler vibrometer is used for measuring the response on the membrane . the obtained results show that vibration suppression is achieved at each mode in all experiments . therefore , the present method using the flexible piezoelectric element is effective in suppressing the vibration of flexible structures . story_separator_special_tag a novel approach for shape control of a flexible surface is presented . the main idea of this work is to develop a shape control algorithm using the potential field concept . the dynamical model derived for the flexible surface with embedded/bonded actuators is divided into two categories having relative and absolute actuator points , termed relative and absolute pixels . the algorithm designed is used to control each individual pixel , causing local deformation , which in turn causes global deformation of the flexible surface . separate control laws are designed for relative and absolute pixels . the convergence of the control law is guaranteed using lyapunov theory . model parameter uncertainties of the flexible structure are also taken into account and the control law is modified accordingly . the effectiveness of the control algorithm is highlighted with the simulation results presented . story_separator_special_tag a transfer-function is said to be negative imaginary if the corresponding frequency response function has a negative definite imaginary part ( on the positively increasing imaginary axis ) . negative imaginary transfer-functions can be stabilized using negative imaginary feedback controllers . flexible structures with compatible collocated sensor/actuator pairs have transfer-functions that are negative imaginary . in this paper a model structure that typically represents a collocated structure is considered . an identification algorithm which enforces the negative imaginary constraint is proposed for estimating the model parameters . a feedback control technique , known as integral resonant control ( irc ) , is proposed for damping vibrations in collocated flexible structures . conditions for the stability of the proposed controller are derived , and it is shown that the set of stabilizing ircs is convex .
finally , a flexible beam with two pairs of collocated piezoelectric actuators/sensors is considered . the proposed identification scheme is used to determine the transfer-function , and an irc is designed for damping the vibrations . the experimental results obtained are reported . story_separator_special_tag a computational scheme is proposed to estimate a state-space representation of mimo transfer functions from frequency response measurements . the approach can constrain the phase curve of selected elements of the transfer function matrix to certain regions . poles of the system are determined using a frequency domain subspace approach . the phase constraint is enforced by an lmi formulation based on the positive real lemma when the zeros of the system are estimated . the successful application of the algorithm to measurements from a cantilever beam with three collocated piezoelectric actuator/sensor pairs is demonstrated . story_separator_special_tag this paper reports experimental implementation of an extended positive position feedback ( ppf ) controller on an active structure consisting of a cantilevered beam with bonded collocated piezoelectric actuators and sensors . stability conditions for ppf control are rederived to allow for a feed-through term in the model of the structure . this feed-through term is needed to ensure that the system 's in-bandwidth zeros are captured with reasonable accuracy . the set of stabilizing ppf controllers is shown to be a convex set characterized by a set of linear matrix inequalities . a number of multivariable ppf controllers are designed and successfully implemented on the structure . story_separator_special_tag in recent years , the atomic force microscope ( afm ) has become an important tool in nanotechnology research . it was first conceived to generate 3-d images of conducting as well as nonconducting surfaces with a high degree of accuracy . presently , it is also being used in applications that involve manipulation of material surfaces at a nanoscale . in this paper , we describe a new scanning method for fast atomic force microscopy . in this technique , the sample is scanned in a spiral pattern instead of the well-established raster pattern . a constant angular velocity spiral scan can be produced by applying single-frequency cosine and sine signals with slowly varying amplitudes to the x-axis and y-axis of the afm nanopositioner , respectively . the use of single-frequency input signals allows the scanner to move at high speeds without exciting the mechanical resonance of the device . alternatively , the frequency of the sinusoidal set points can be varied to maintain a constant linear velocity ( clv ) while a spiral trajectory is being traced , thus producing a clv spiral . these scan methods can be incorporated into most modern afms with minimal story_separator_special_tag the negative imaginary ( ni ) property is exhibited by many systems such as flexible structures with force actuators and position sensors and can be used to prove the robust stability of flexible structure control systems . in this paper , we derive methods to check for the ni and strict negative imaginary ( sni ) properties in both the single-input single-output as well as multi-input multi-output cases . the proposed methods are based on spectral conditions on a corresponding hamiltonian matrix obtained for a given system transfer function matrix .
under certain conditions , a given transfer function matrix satisfies the ni property if and only if the corresponding hamiltonian matrix has no pure imaginary eigenvalues with odd multiplicity . it is also shown that a given transfer function matrix satisfies the sni property if and only if the corresponding hamiltonian matrix has no eigenvalues on the imaginary axis , except at the origin . the results of this paper are applied to check the ni property in two nanopositioning applications . story_separator_special_tag flexible structures with collocated force actuators and position sensors lead to negative imaginary dynamics . however , in some cases , the mathematical models obtained for these systems , for example , using system identification methods may not yield a negative imaginary system . this paper provides two methods for enforcing negative imaginary dynamics on such mathematical models , given that it is known that the underlying dynamics ought to belong to this system class . the first method is based on a study of the spectral properties of hamiltonian matrices . a test for checking the negativity of the imaginary part of a corresponding transfer function matrix is first developed . if an associated hamiltonian matrix has pure imaginary axis eigenvalues , the mathematical model loses the negative imaginary property in some frequency bands . in such cases , a first-order perturbation method is proposed for iteratively collapsing the frequency bands whose negative imaginary property is violated and finally displacing the . story_separator_special_tag the negative imaginary property is a property that many practical systems exhibit . this paper is concerned with the negative imaginary synthesis problem for linear time-invariant systems by output feedback control . sufficient conditions are developed for the design of static output feedback controllers , dynamic output feedback controllers and observer-based feedback controllers . based on the design conditions , a numerical algorithm is suggested to find the desired controllers . structural constraints can be imposed on the controllers to reflect the practical system constraints . also , the separation principle is shown to be valid for the observer-based design . finally , three numerical examples are presented to illustrate the efficiency of the developed theory . story_separator_special_tag a general theory of quantum-limited feedback for continuously monitored systems is presented . two approaches are used , one based on quantum measurement theory and one on hamiltonian system-bath interactions . the former gives rise to a stochastic non-markovian evolution equation for the density operator , and the latter a non-markovian quantum langevin equation . in the limit that the time delay in the feedback loop is negligible , a simple deterministic markovian master equation can be derived from either approach . two special cases of interest are treated : feedback mediated by optical homodyne detection and self-excited quantum point processes . story_separator_special_tag we present a formulation of feedback in quantum systems in which the best estimates of the dynamical variables are obtained continuously from the measurement record , and fed back to control the system . we apply this method to the problem of cooling and confining a single quantum degree of freedom , and compare it to current schemes in which the measurement signal is fed back directly in the manner usually considered in existing treatments of quantum feedback . 
direct feedback may be combined with feedback by estimation , and the resulting combination , performed on a linear system , is closely analogous to classical linear-quadratic-gaussian control theory with residual feedback . story_separator_special_tag this paper gives a unified approach to feedback control theory for quantum mechanical systems of bosonic modes described by noncommutative operators . a quantum optical closed loop , including a plant and controller , is developed and its fundamental structural properties are analyzed extensively from a purely quantum mechanical point of view , in order to facilitate the use of control theory in the microscopic world described by quantum theory . in particular , an input-output description of quantum mechanical systems , which is essential in describing the behavior of the feedback systems , is fully formulated and developed . this would then provide a powerful tool for quantum control and pave an avenue that connects control theory to quantum dynamics . this paper is divided into two parts . the first part is devoted to the basic formulation of quantum feedback control via quantum communication and local operations on an optical device , a cavity , that can be regarded as a unit of quantum dynamics of bosonic modes . the formulation introduced in this paper presents the features intrinsic to quantum feedback systems based on quantum stochastic differential equations . the input-output description provides a basis for developing quantum feedback control through the story_separator_special_tag based on the stochastic differential equation of quantum mechanical feedback obtained in the first part of this paper , detailed control concepts and applications are discussed for quantum systems interacting with a noncommutative noise source . a feedback system in our framework is purely nonclassical in the sense that feedback control is performed via local operation and quantum communication through a quantum channel . the role of the controller is to alter the quantum dynamic characteristics of the plant through entanglement , shared between the plant and controller by sending quantum states , which is modulated by the hamiltonian on the controller . the input-output relation of quantum systems provides a natural extension of control theory to the quantum domain . this enables one not only to present a control theoretical interpretation of some fundamental quantum mechanical notions such as the uncertainty principle , but also to find applications of ideas and tools of control theory . one of the most important applications is the production of squeezed states , which has been an important issue of quantum theory in relation to quantum computation and quantum communication . the method proposed here reduces the application to a conventional noise reduction problem with story_separator_special_tag this paper presents a survey on quantum control theory and applications from a control systems perspective . some of the basic concepts and main developments ( including open-loop control and closed-loop control ) in quantum control theory are reviewed . in the area of open-loop quantum control , the paper surveys the notion of controllability for quantum systems and presents several control design strategies including optimal control , lyapunov-based methodologies , variable structure control and quantum incoherent control .
in the area of closed-loop quantum control , the paper reviews closed-loop learning control and several important issues related to quantum feedback control , including quantum filtering , feedback stabilization , lqg control and robust quantum control . story_separator_special_tag based on a recently developed notion of physical realizability for quantum linear stochastic systems , we formulate a quantum lqg optimal control problem for quantum linear stochastic systems where the controller itself may also be a quantum system and the plant output signal can be fully quantum . such a control scheme is often referred to in the quantum control literature as `` coherent feedback control '' . it distinguishes the present work from previous works on the quantum lqg problem where measurement is performed on the plant and the measurement signals are used as the input to a fully classical controller with no quantum degrees of freedom . the difference in our formulation is the presence of additional non-linear and linear constraints on the coefficients of the sought-after controller , rendering the problem a type of constrained controller design problem . due to the presence of these constraints , our problem is inherently computationally hard and this also distinguishes it in an important way from the standard lqg problem . we propose a numerical procedure for solving this problem based on an alternating projections algorithm and , as an initial demonstration of the feasibility of this approach story_separator_special_tag gough , j. e. , gohm , r. , yanagisawa , m. ( 2008 ) . linear quantum feedback networks . physical review a , 78 ( 6 ) , article no : 062104 . story_separator_special_tag this paper considers the bounded real properties for a class of linear quantum systems which can be defined by complex quantum stochastic differential equations in terms of annihilation operators only . the paper considers complex quantum versions of the bounded real lemma , the strict bounded real lemma and the lossless bounded real lemma . for the class of quantum systems under consideration , it is shown that the question of physical realizability is related to the bounded real and lossless bounded real properties . story_separator_special_tag this paper considers a coherent h-infinity control problem for a class of linear quantum systems which can be defined by complex quantum stochastic differential equations in terms of annihilation operators only . for this class of quantum systems , a solution to the h-infinity control problem can be obtained in terms of a pair of complex riccati equations . the paper also considers complex versions of the bounded real lemma , the strict bounded real lemma and the lossless bounded real lemma . for the class of quantum systems under consideration , the question of physical realizability is related to the bounded real and lossless bounded real properties . story_separator_special_tag i present an experimental realization of a coherent-feedback control system that was recently proposed for testing basic principles of linear quantum stochastic control theory [ m. r. james , h. i. nurdin , and i. r. petersen , e-print arxiv : quant-ph/0703150v2 , ieee transactions on automatic control ( to be published ) ] . for a dynamical plant consisting of an optical ring resonator , i demonstrate $ \sim 7 $ db broadband disturbance rejection of injected laser signals via all-optical feedback with a tailored dynamic compensator .
comparison of the results with a transfer function model pinpoints critical parameters that determine the coherent-feedback control system 's performance . story_separator_special_tag in this paper , we consider a linear quantum network composed of two distantly separated cavities that are connected via a one-way optical field . when one of the cavities is damped and the other undamped , the overall cavity state acquires a large amount of entanglement in its quadratures . this entanglement , however , immediately decays and vanishes in a finite time . that is , entanglement sudden death occurs . we show that the direct measurement feedback method proposed by wiseman can avoid this entanglement sudden death , and , further , enhance the entanglement . it is also shown that the entangled state under feedback control is robust against signal loss in a realistic detector , indicating the reliability of the proposed direct feedback method in practical situations . story_separator_special_tag this paper presents a realization algorithm for a class of complex transfer functions corresponding to physically realizable complex linear quantum systems . the class of complex linear quantum systems under consideration includes interconnections of passive optical components such as cavities , beam-splitters , phase-shifters and interferometers . it is shown that for almost all quantum optical systems within this class , the corresponding transfer function can be realized as a cascade connection involving only cavities and phase-shifters . story_separator_special_tag a recently emerging approach to the feedback control of linear quantum systems involves the use of a controller which itself is a quantum linear system . this approach to quantum feedback control , referred to as coherent quantum feedback control , has the advantage that it does not destroy quantum information , is fast , and has the potential for efficient implementation . an important issue which arises both in the synthesis of linear coherent quantum controllers and in the modeling of linear quantum systems is the issue of physical realizability . this issue relates to the property of whether a given set of linear quantum stochastic differential equations corresponds to a physical quantum system satisfying the laws of quantum mechanics . under suitable assumptions , the paper shows that the question of physical realizability is equivalent to a frequency domain ( j , j ) -unitary condition . this is important in controller synthesis since it is the transfer function matrix of the controller which determines the closed loop system behavior .
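the physical realizability conditions invoked in the abstracts above can be checked mechanically . below is a minimal python sketch for the annihilation-operator-only ( passive ) case , assuming the conditions a + a† + b b† = 0 and b = -c† d with d unitary ; sign and ordering conventions differ between papers , and the cavity parameters are illustrative assumptions .

# minimal numpy sketch : checking physical realizability of an
# annihilation-operator-only linear quantum system
#   da = a_mat a dt + b_mat dA_in ,   dA_out = c_mat a dt + d_mat dA_in .
# assumed conditions ( conventions vary between papers ) :
#   a_mat + a_mat^dag + b_mat b_mat^dag = 0   and   b_mat = -c_mat^dag d_mat ,
# with d_mat unitary .
import numpy as np

def is_physically_realizable(a_mat, b_mat, c_mat, d_mat, tol=1e-9):
    cond1 = np.allclose(a_mat + a_mat.conj().T + b_mat @ b_mat.conj().T,
                        0.0, atol=tol)
    cond2 = np.allclose(b_mat, -c_mat.conj().T @ d_mat, atol=tol)
    unitary = np.allclose(d_mat @ d_mat.conj().T,
                          np.eye(d_mat.shape[0]), atol=tol)
    return cond1 and cond2 and unitary

# single optical cavity with detuning delta and decay rate kappa
delta, kappa = 1.0, 2.0
a_mat = np.array([[-(1j * delta + kappa / 2)]])
b_mat = np.array([[-np.sqrt(kappa)]])
c_mat = np.array([[np.sqrt(kappa)]])
d_mat = np.array([[1.0]])
print(is_physically_realizable(a_mat, b_mat, c_mat, d_mat))  # True

the cavity passes the check , whereas an arbitrary ( a , b , c , d ) quadruple generally would not -- which is exactly why realizability enters the controller synthesis problem described above .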
story_separator_special_tag the purpose of this paper is to present simple and general algebraic methods for describing series connections in quantum networks . these methods build on and generalize existing methods for series ( or cascade ) connections by allowing for more general interfaces , and by introducing an efficient algebraic tool , the series product . we also introduce another product , which we call the concatenation product , that is useful for assembling and representing systems without necessarily having connections . we show how the concatenation and series products can be used to describe feedforward and feedback networks . a selection of examples from the quantum control literature is analyzed to illustrate the utility of our network modeling methodology . story_separator_special_tag the aim of this article is to extend linear quantum dynamical network theory to include static bogoliubov components ( such as squeezers ) . within this integrated quantum network theory , we provide general methods for cascade or series connections , as well as feedback interconnections using linear fractional transformations . in addition , we define input-output maps and transfer functions for representing components and describing convergence . we also discuss the underlying group structure in this theory arising from series interconnection . several examples illustrate the theory . story_separator_special_tag the control of individual quantum systems promises a new technology for the 21st century : quantum technology . this book is the first comprehensive treatment of modern quantum measurement and measurement-based quantum control , which are vital elements for realizing quantum technology . readers are introduced to key experiments and technologies through dozens of recent experiments in cavity qed , quantum optics , mesoscopic electronics , and trapped particles , several of which are analyzed in detail . nearly 300 exercises help build understanding , and prepare readers for research in these exciting areas . this important book will interest graduate students and researchers in quantum information , quantum metrology , quantum control and related fields . novel topics covered include adaptive measurement ; realistic detector models ; mesoscopic current detection ; markovian , state-based and optimal feedback ; and applications to quantum information processing . story_separator_special_tag this paper considers the problem of robust stability for a class of uncertain linear quantum systems subject to unknown perturbations in the system hamiltonian . the case of a nominal linear quantum system is considered with quadratic perturbations to the system hamiltonian . a robust stability condition is given in terms of a strict bounded real condition . story_separator_special_tag this technical note uses a system theoretic approach to show that classical linear time invariant controllers cannot generate steady state entanglement in a bipartite gaussian quantum system which is initialized in a gaussian state . the technical note also shows that the use of classical linear controllers cannot generate entanglement in a finite time from a bipartite system initialized in a separable gaussian state . the approach reveals connections between system theoretic concepts and the well known physical principle that local operations and classical communications cannot generate entangled states starting from separable states .
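the series product described in the first abstract above composes two open systems g = ( s , l , h ) driven in cascade . a minimal numpy sketch follows , assuming the gough-james rule g2 ◁ g1 = ( s2 s1 , l2 + s2 l1 , h1 + h2 + im { l2† s2 l1 } ) for a single field channel ; the fock-space truncation and coupling strengths are illustrative assumptions .

# sketch of the gough-james series product for ( s , l , h ) triples ,
#   g2 <| g1 = ( s2 s1 , l2 + s2 l1 , h1 + h2 + im { l2^dag s2 l1 } ) ,
# for one field channel , with operators represented as matrices on a
# truncated fock space ( truncation and parameters are assumptions ) .
import numpy as np

def series_product(g1, g2):
    s1, l1, h1 = g1
    s2, l2, h2 = g2
    cross = l2.conj().T @ (s2 * l1)        # l2^dag s2 l1 ( s scalar here )
    h_int = (cross - cross.conj().T) / 2j  # matrix imaginary part ( hermitian )
    return (s2 * s1, l2 + s2 * l1, h1 + h2 + h_int)

n = 4                                      # fock-space truncation per mode
a = np.diag(np.sqrt(np.arange(1, n)), 1)   # annihilation operator
i = np.eye(n)
a1, a2 = np.kron(a, i), np.kron(i, a)      # the two cavity modes

# two cavities fed in cascade through one field channel
g1 = (1.0 + 0j, np.sqrt(2.0) * a1, 1.0 * a1.conj().T @ a1)
g2 = (1.0 + 0j, np.sqrt(3.0) * a2, -0.5 * a2.conj().T @ a2)
s, l, h = series_product(g1, g2)
print(np.allclose(h, h.conj().T))          # composed hamiltonian is hermitian -> True

the im { l2† s2 l1 } term is the effective interaction the cascade induces between the two otherwise uncoupled modes , which is the algebraic content of the series product .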
story_separator_special_tag for quantum systems with linear dynamics in phase space much of classical feedback control theory applies . however , there are some questions that are sensible only for the quantum case : given a fixed interaction between the system and the environment , what is the optimal measurement on the environment for a particular control problem ? we show that for a broad class of optimal ( state-based ) control problems ( the stationary linear-quadratic-gaussian class ) , this question is a semidefinite program . moreover , the answer also applies to markovian ( current-based ) feedback . story_separator_special_tag in the theory of quantum dynamical filtering , one of the biggest issues is that the underlying system dynamics represented by a quantum stochastic differential equation must be known exactly in order for the corresponding filter to provide optimal performance ; however , this assumption is generally unrealistic . therefore , in this paper , we consider a class of linear quantum systems subjected to time-varying norm-bounded parametric uncertainties and then propose a robust observer such that the variance of the estimation error is guaranteed to be within a certain bound . although in the linear case much of classical control theory can be applied to quantum systems , the quantum robust observer obtained in this paper does not have a classical analog due to the system 's specific structure with respect to the uncertainties . moreover , by considering a typical quantum control problem , we show that the proposed robust observer is fairly robust against a parametric uncertainty of the system even when the other estimators -- the optimal kalman filter and risk-sensitive observer -- fail in the estimation . story_separator_special_tag we examine a proposal by sherson and mølmer to generate polarization-squeezed light in terms of quantum stochastic calculus ( qsc ) . we investigate the statistics of the output field and confirm their results using the qsc formalism . in addition , we study the atomic dynamics of the system and find that this setup can produce up to 3 db of atomic spin squeezing . story_separator_special_tag using only the boson canonical commutation relations and the riemann-lebesgue integral we construct a simple theory of stochastic integrals and differentials with respect to the basic field operator processes . this leads to a noncommutative ito product formula , a realisation of the classical poisson process in fock space which gives a noncommutative central limit theorem , the construction of solutions of certain noncommutative stochastic differential equations , and finally to the integration of certain irreversible equations of motion governed by semigroups of completely positive maps . the classical ito product formula for stochastic differentials with respect to brownian motion and the poisson process is a special case . story_separator_special_tag `` elegantly written , with obvious appreciation for fine points of higher mathematics . most notable is [ the ] author 's effort to weave classical probability theory into [ a ] quantum framework . '' - the american mathematical monthly `` this is an excellent volume which will be a valuable companion both for those who are already active in the field and those who are new to it . furthermore there are a large number of stimulating exercises scattered through the text which will be invaluable to students .
'' - mathematical reviews an introduction to quantum stochastic calculus aims to deepen our understanding of the dynamics of systems subject to the laws of chance both from the classical and the quantum points of view and to stimulate further research in their unification . this is probably the first systematic attempt to weave classical probability theory into the quantum framework and provides a wealth of interesting features : the origin of ito 's correction formulae for brownian motion and the poisson process can be traced to commutation relations or , equivalently , the uncertainty principle . quantum stochastic interpretation enables the possibility of seeing new relationships between fermion and boson fields story_separator_special_tag this paper provides an introduction to quantum filtering theory . an introduction to quantum probability theory is given , focusing on the spectral theorem and the conditional expectation as a least squares estimate , and culminating in the construction of wiener and poisson processes on the fock space . we describe the quantum itô calculus and its use in the modelling of physical systems . we use both reference probability and innovations methods to obtain quantum filtering equations for system-probe models from quantum optics . story_separator_special_tag all-optical feedback can be effected by putting the output of a source cavity through a faraday isolator and into a second cavity which is coupled to the source cavity by a nonlinear crystal . if the driven cavity is heavily damped , then it can be adiabatically eliminated and a master equation or quantum langevin equation derived for the first cavity alone . this is done for an input bath in an arbitrary state , and for an arbitrary nonlinear coupling . if the intercavity coupling involves only the intensity ( or one quadrature ) of the driven cavity , then the effect on the source cavity is identical to that which can be obtained from electro-optical feedback using direct ( or homodyne ) detection . if the coupling involves both quadratures , this equivalence no longer holds and a coupling linear in the source amplitude can produce a nonclassical state in the source cavity . the analogous electro-optic scheme using heterodyne detection introduces extra noise which prevents the production of nonclassical light . unlike the electro-optical case , the all-optical feedback loop has an output beam ( reflected from the second cavity ) . we show that this may story_separator_special_tag in the conventional picture of quantum feedback control , sensors perform measurements on the system , a classical controller processes the results of the measurements , and actuators supply semiclassical potentials to alter the behavior of the quantum system . in this picture , the sensors tend to destroy coherence in the process of making measurements , and although the controller can use the actuators to act coherently on the quantum system , it is processing and feeding back classical information . this paper proposes an alternative method for quantum feedback control , in which the sensors , controller , and actuators are quantum systems that interact coherently with the system to be controlled . in this picture , the controller gets , processes , and feeds back quantum information . controllers that operate using such quantum feedback loops can perform tasks such as entanglement transfer that are not possible using classical feedback .
necessary and sufficient conditions are presented for hamiltonian quantum systems to be controllable and observable using both classical and quantum feedback . story_separator_special_tag the theory of quantum feedback networks has recently been developed with the aim of showing how quantum input-output components may be connected together so as to control , stabilize , or enhance the performance of one of the subcomponents . in this paper , we show how the degree to which an idealized component ( a degenerate parametric amplifier in the strong-coupling regime ) can squeeze input fields may be enhanced by placing the component in loop in a simple feedback mechanism involving a beam splitter . we study the spectral properties of output fields , placing particular emphasis on the elastic and inelastic components of the power density . story_separator_special_tag this paper surveys some recent results on the theory of quantum linear systems and presents them within a unified framework . quantum linear systems are a class of systems whose dynamics , which are described by the laws of quantum mechanics , take the specific form of a set of linear quantum stochastic differential equations ( qsdes ) . such systems commonly arise in the area of quantum optics and related disciplines . systems whose dynamics can be described or approximated by linear qsdes include interconnections of optical cavities , beam-splitters , phase-shifters , optical parametric amplifiers , optical squeezers , and cavity quantum electrodynamic systems . with advances in quantum technology , the feedback control of such quantum systems is generating new challenges in the field of control theory . potential applications of such quantum feedback control systems include quantum computing , quantum error correction , quantum communications , gravity wave detection , metrology , atom lasers , and superconducting quantum circuits . a recently emerging approach to the feedback control of quantum linear systems involves the use of a controller which itself is a quantum linear system . this approach to quantum feedback control , story_separator_special_tag this paper surveys some recent results on the feedback control of quantum linear systems and the robustness properties of these systems . quantum linear systems are a class of systems whose dynamics , which are described by the laws of quantum mechanics , take the specific form of a set of linear quantum stochastic differential equations ( qsdes ) . these systems can also be described in terms of a hamiltonian operator h and a coupling operator l , which in the case of quantum linear systems have a specific quadratic and linear form respectively . such systems commonly arise in the area of quantum optics and related disciplines . systems whose dynamics can be described or approximated by linear qsdes include interconnections of optical cavities , beam-splitters , phase-shifters , optical parametric amplifiers , optical squeezers , and cavity quantum electrodynamic systems . an important approach to the feedback control of quantum linear systems involves the use of a controller which itself is a quantum linear system . this approach to quantum feedback control , referred to as coherent quantum feedback control , has the advantage that it does not destroy quantum information , is fast , and has story_separator_special_tag negative imaginary ( ni ) systems play an important role in the robust control of highly resonant flexible structures .
in this paper , a generalized ni system framework is presented . a new ni system definition is given , which allows for flexible structure systems with colocated force actuators and position sensors , and with free body motion . this definition extends the existing definitions of ni systems . also , necessary and sufficient conditions are provided for the stability of positive feedback control systems where the plant is ni according to the new definition and the controller is strictly negative imaginary . the stability conditions in this paper are given purely in terms of properties of the plant and controller transfer function matrices , although the proofs rely on state space techniques . furthermore , the stability conditions given are independent of the plant and controller system order . as an application of these results , a case study involving the control of a flexible robotic arm with a piezo-electric actuator and sensor is presented . story_separator_special_tag in recent years , the classical theory of stochastic integration and stochastic differential equations has been extended to a non-commutative set-up to develop models for quantum noises . the author , a specialist in classical stochastic calculus and martingale theory , tries to provide an introduction to this rapidly expanding field in a way which should be accessible to probabilists familiar with the ito integral . it can also , on the other hand , provide a means of access to the methods of stochastic calculus for physicists familiar with fock space analysis . story_separator_special_tag this paper considers the physical realizability condition for multi-level quantum systems having a polynomial hamiltonian and multiplicative coupling with respect to several interacting boson fields . specifically , it generalizes a recent result the authors developed for two-level quantum systems . for this purpose , the algebra of su ( n ) was incorporated . as a consequence , the obtained condition is given in terms of the structure constants of su ( n ) . story_separator_special_tag arbitrary linear time invariant systems can be implemented as quantum systems if additional quantum noises are permitted in the implementation . we give several results concerning how many additional quantum noise channels are necessary to implement state space realizations and transfer functions as quantum systems . we also give algorithms to do so . we demonstrate the utility of these results with an algorithm for obtaining a suboptimal solution to a coherent quantum lqg control problem .
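the alternating-projections procedure mentioned in the quantum lqg abstracts above alternates between a performance-related set and the set of physically realizable controllers . the sketch below shows only the generic alternating-projections pattern ; the two sets used here ( the positive semidefinite cone and a trace-affine set ) are simple stand-ins , not the actual realizability constraints of those papers .

# generic alternating-projections skeleton of the kind invoked above .
# the two sets are stand-in assumptions chosen so the projections have
# closed forms : the psd cone and the affine set { x : trace ( x ) = 1 } .
import numpy as np

def project_psd(x):
    # project a symmetric matrix onto the positive semidefinite cone
    w, v = np.linalg.eigh((x + x.T) / 2)
    return v @ np.diag(np.clip(w, 0.0, None)) @ v.T

def project_trace(x, target=1.0):
    # project onto the affine set { x : trace ( x ) = target }
    n = x.shape[0]
    return x - (np.trace(x) - target) / n * np.eye(n)

rng = np.random.default_rng(0)
x = rng.standard_normal((5, 5))
for _ in range(200):                 # alternate between the two projections
    x = project_trace(project_psd(x))

w_min = np.linalg.eigvalsh((x + x.T) / 2).min()
print(np.trace(x), w_min)            # trace -> 1 , smallest eigenvalue -> 0

in the constrained quantum lqg setting the realizability set is non-convex , so this pattern yields at best a suboptimal controller , which matches how the abstracts above characterize the numerical procedure .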
the design and performance of the new cold neutron chopper spectrometer ( cncs ) at the spallation neutron source in oak ridge are described . cncs is a direct-geometry inelastic time-of-flight spectrometer , designed essentially to cover the same energy and momentum transfer ranges as in5 at ill , let at isis , dcs at nist , toftof at frm-ii , amateras at j-parc , pharos at lansce , and neat at hzb , at similar energy resolution . key figures such as neutron flux at the sample position and energy resolution are compared between measurements and ray tracing monte carlo simulations , and good agreement ( better than 20 % of absolute numbers ) has been achieved . the instrument performs very well in the cold and thermal neutron energy ranges , and promises to become a workhorse for the neutron scattering community for quasielastic and inelastic scattering experiments . story_separator_special_tag the spallation neutron source at oak ridge national laboratory now hosts four direct geometry time-of-flight chopper spectrometers . these instruments cover a range of wave-vector and energy transfer space with varying degrees of neutron flux and resolution . the regions of reciprocal and energy space available for measurement at these instruments are not exclusive and overlap significantly . we present a direct comparison of the capabilities of this instrumentation , conducted by data mining the instrument usage histories and specific scanning regimes . in addition , one of the common science missions for these instruments is the study of magnetic excitations in condensed matter systems . we have measured the powder averaged spin wave spectra in one particular sample using each of these instruments , and use these data in our comparisons . story_separator_special_tag we address recent progress in the continued neutronic design of the spallation neutron source target station as regards moderator performance . the spallation neutron source target station will receive 2 mw of 1 gev protons at 60 hz . this level of proton power offers unprecedented neutronic performance for pulsed neutron production for time-of-flight neutron scattering , as well as unprecedented challenges in providing such a performance . we report on the results of recent design optimization studies and the importance of carefully matching moderator performance characteristics to the needs of the neutron scattering instruments . story_separator_special_tag national user facilities such as the nist center for neutron research ( ncnr ) require a significant base of software to treat the data produced by their specialized measurement instruments . there is no universally accepted and used data treatment package for the reduction , visualization , and analysis of inelastic neutron scattering data . however , we believe that the software development approach adopted at the ncnr has some key characteristics that have resulted in a successful software package called dave ( the data analysis and visualization environment ) . it is developed using a high level scientific programming language , and it has been widely adopted in the united states and abroad . in this paper we describe the development approach , elements of the dave software suite , its usage and impact , and future directions and opportunities for development . story_separator_special_tag the mantid framework is a software solution developed for the analysis and visualization of neutron scattering and muon spin measurements .
the framework is jointly developed by software engineers and scientists at the isis neutron and muon facility and the oak ridge national laboratory . the objectives , functionality and novel design aspects of mantid are described . story_separator_special_tag the horace suite of programs has been developed to work with large multiple-measurement data sets collected from time-of-flight neutron spectrometers equipped with arrays of position-sensitive detectors . the software allows exploratory studies of the four dimensions of reciprocal space and excitation energy to be undertaken , enabling multi-dimensional subsets to be visualized , algebraically manipulated , and models for the scattering to be simulated or fitted to the data . the software is designed to be an extensible framework , thus allowing user-customized operations to be performed on the data . examples of the use of its features are given for measurements exploring the spin waves of the simple antiferromagnet rbmnf3 and ferromagnetic iron , and the phonons in uru2si2 . story_separator_special_tag superconductivity is a remarkable phenomenon that arises from the collective motion of electrons in materials , and in particular the partnering of electrons into so-called cooper pairs . superconductors can conduct electric current without dissipating energy into heat , and can levitate magnets owing to their perfect diamagnetism . since the late 1980s , much of the condensed matter physics community has focused on understanding high-temperature superconductivity , with the hope that a room-temperature superconductor could one day revolutionize power delivery . story_separator_special_tag we present the results of measurements taken with the newly constructed pastis coil set insert , which uses a wide-angle banana 3he neutron spin-filter cell to cover a large range of scattering angles . the neutron polarization direction is freely rotatable to allow 3-directional ( xyz ) polarization analysis and the coil set is designed to fit within the sample areas of most of the spectrometers at the institut laue-langevin ( ill ) in grenoble . no significant depolarization due to the repeated spin rotations is observed . the pastis insert is now available for use on selected instruments at the ill. this will enable xyz neutron polarization analysis studies to be implemented on the majority of public ill instruments . story_separator_special_tag here we report on the development of polarization analysis ( pa ) techniques to be employed at the isis pulsed neutron source second target station . both spin exchange optical pumping and metastability exchange optical pumping techniques are being developed at isis to produce polarized neutron spin filters for use as neutron polarizers and analysers . we focus on the developments of a polarization solution on the let spectrometer , including the updated design of the pastis xyz coil set and single crystal silicon analyser cell . we also report on the construction of a combined polarizer/analyser solution for the wish diffractometer . story_separator_special_tag the instrumental design of the polarization analysis neutron spectrometer with correlation method ( polano ) is almost ready for its construction , although some discussion on the design still remains .
polano is a new inelastic neutron spectrometer in the japan proton accelerator research complex ( j-parc ) utilizing polarized neutrons for comprehensive materials research , focusing on the use of quasi-elastic and inelastic scattering techniques . for instrumental construction , the basic shield design is complete and the shielding capability against radiation has been assessed . additionally , the designs of the beam transport section using 4 qc supermirror guide tubes are almost complete . the detector section and a large vacuum chamber are now being designed . the development of polarization and chopper devices is the key to the success of the very first application of polarization analysis to inelastic scattering at a spallation neutron source . high performance fermi and t0 choppers are now being manufactured . story_separator_special_tag hyspec is a high-intensity , direct-geometry time-of-flight spectrometer at the spallation neutron source , optimized for measurement of excitations in small single-crystal specimens with optional polarization analysis capabilities . the incident neutron beam is monochromated using a fermi chopper with short , straight blades , and is then vertically focused by bragg scattering onto the sample position by either a highly oriented pyrolytic graphite ( unpolarized ) or a heusler ( polarized ) crystal array . neutrons are detected by a bank of 3 he tubes that can be positioned over a wide range of scattering angles about the sample axis . hyspec entered the user program in february 2013 for unpolarized experiments , and is already experiencing a vibrant research program . polarization analysis will be accomplished by using the heusler crystal array to polarize the incident beam , and either a 3 he spin filter or a supermirror wide-angle polarization analyser to analyse the scattered beam . the 3 he spin filter employs the spin-exchange optical pumping technique . a 60-degree wide-angle 3 he cell that matches the detector coverage will be used for polarization analysis . the polarized gas in the post-sample wide angle cell is story_separator_special_tag the technique of longitudinal ( xyz ) polarization analysis has been used successfully for many years to study disordered magnetic materials in thermal and cold neutron diffraction experiments . the technique allows the simultaneous and unambiguous separation of the nuclear , magnetic , and nuclear spin-incoherent contributions to the scattering . the technical advances seen in recent years , such as the availability of polarized 3he analyzer cells to cover a large detector solid angle , the ability to detect out-of-plane scattering in a multi-detector , and a significant increase of the usable beam divergence , call for a generalization of the method . a general treatment of the formalism for carrying out neutron polarization analysis will be given in this paper , which describes a possible method of usage at a future , modern diffractometer or inelastic spectrometer with large area multi-detector coverage . story_separator_special_tag solid parahydrogen has been used as a transmitter of approximately hydrostatic pressure to study the effects of pressures up to 10 000 atmos on the superconducting transition temperatures of polycrystalline tin , indium , tantalum , thallium , and mercury .
the technique which was used allowed an approximate evaluation of the effects of sample deformation and pressure gradients , and the results are considerably more accurate than the high-pressure data previously available . the transition temperature data for tin and indium showed considerable curvature when plotted vs pressure , but gave a roughly linear relationship when plotted against volume . no curvature was found for tantalum . the thallium data agree qualitatively with previous work , and show a maximum in the transition temperature vs pressure curve at about 2000 atmos . the mercury results were anomalous in that two distinct transition temperature vs pressure curves ( with different zero-pressure transition temperatures ) were found ; one when the pressure was kept below 4000 atmos , and the other when the sample was cycled from zero to 10 000 atmos . other experiments have shown that these results are due to two different modifications of solid mercury , each story_separator_special_tag finding ways to achieve higher values of the transition temperature , tc , in superconductors remains a great challenge . the superconducting phase is often one of several competing types of electronic order , including antiferromagnetism and charge density waves . an emerging trend documented in heavy-fermion and organic conductors is that the maximum tc for superconductivity occurs under external conditions that cause the critical temperature for a competing order to go to zero . recently , such competition has been found in multilayer copper oxide high-temperature superconductors ( htscs ) that possess two crystallographically inequivalent cuo2 planes in the unit cell . however , whether the competing electronic state can be suppressed to enhance tc in htscs remains an open question . here we show that pressure-driven phase competition leads to an unusual two-step enhancement of tc in optimally doped trilayer bi2sr2ca2cu3o10+δ ( bi2223 ) . we find that tc first increases with pressure and then decreases after passing through a maximum . unexpectedly , tc increases again when the pressure is further raised above a critical value of around 24 gpa , surpassing the first maximum . the presence of this critical pressure is a manifestation of the story_separator_special_tag a superconductor is a material that can conduct electricity without resistance below a superconducting transition temperature , tc . the highest tc that has been achieved to date is in the copper oxide system : 133 kelvin at ambient pressure and 164 kelvin at high pressures . as the nature of superconductivity in these materials is still not fully understood ( they are not conventional superconductors ) , the prospects for achieving still higher transition temperatures by this route are not clear . in contrast , the bardeen-cooper-schrieffer theory of conventional superconductivity gives a guide for achieving high tc with no theoretical upper bound : all that is needed is a favourable combination of high-frequency phonons , strong electron-phonon coupling , and a high density of states . these conditions can in principle be fulfilled for metallic hydrogen and covalent compounds dominated by hydrogen , as hydrogen atoms provide the necessary high-frequency phonon modes as well as the strong electron-phonon coupling . numerous calculations support this idea and have predicted transition temperatures in the range 50-235 kelvin for many hydrides , but only a moderate tc of 17 kelvin has been observed experimentally .
here we story_separator_special_tag under pressure , metals exhibit increasingly shorter interatomic distances . intuitively , this response is expected to be accompanied by an increase in the widths of the valence and conduction bands and hence a more pronounced free-electron-like behaviour . but at the densities that can now be achieved experimentally , compression can be so substantial that core electrons overlap . this effect dramatically alters electronic properties from those typically associated with simple free-electron metals such as lithium ( li ; refs 1-3 ) and sodium ( na ; refs 4 , 5 ) , leading in turn to structurally complex phases and superconductivity with a high critical temperature . but the most intriguing prediction -- that the seemingly simple metals li ( ref . 1 ) and na ( ref . 4 ) will transform under pressure into insulating states , owing to pairing of alkali atoms -- has yet to be experimentally confirmed . here we report experimental observations of a pressure-induced transformation of na into an optically transparent phase at approximately 200 gpa ( corresponding to approximately 5.0-fold compression ) . experimental and computational data identify the new phase as a wide bandgap dielectric with a six-coordinated , highly distorted double-hexagonal close-packed story_separator_special_tag unifying principles that underlie recently discovered transitions between metallic and insulating states in elemental solids under pressure are developed . using group theory arguments and first-principles calculations , we show that the electronic properties of the phases involved in these transitions are controlled by symmetry principles . the valence bands in these systems are described by simple and composite band representations constructed from localized wannier functions centered on points unoccupied by atoms , and which are not necessarily all symmetrical . the character of the wannier functions is closely related to the degree of s-p ( -d ) hybridization and reflects multicenter chemical bonding in these insulating states . the conditions under which an insulating state is allowed for structures having an integer number of atoms per primitive unit cell as well as reentrant ( i.e. , metal-insulator-metal ) transition sequences are detailed , resulting in predictions of behavior such as phases having band-contact lines . the general principles developed are tested and applied to the alkali and alkaline earth metals , including elements where high-pressure insulating phases have been reported ( e.g. , story_separator_special_tag a metal-insulator transition ( mit ) in bifeo3 under pressure was investigated by a method combining generalized gradient corrected local density approximation with dynamical mean field theory ( gga+dmft ) . our paramagnetic calculations are found to be in agreement with the experimental phase diagram : magnetic and spectral properties of bifeo3 at ambient and high pressures were calculated for three experimental crystal structures r3c , pbnm and pm-3m . at ambient pressure in the r3c phase , an insulating gap of 1.2 ev was obtained in good agreement with its experimental value . both r3c and pbnm phases have a metal-insulator transition that occurs simultaneously with a high spin ( hs ) to low spin ( ls ) transition . the critical pressure for the pbnm phase is 25-33 gpa that agrees well with the experimental observations .
the high pressure and temperature pm-3m phase exhibits metallic behavior , observed experimentally as well as in our calculations in the whole range of considered pressures , and transforms to the ls state at 33 gpa , where a pbnm to pm-3m transition is experimentally observed . the antiferromagnetic gga+dmft calculations carried out for the pbnm structure result in story_separator_special_tag the nanoscale ordered materials diffractometer ( nomad ) is a neutron time-of-flight diffractometer designed to determine pair distribution functions of a wide range of materials ranging from short range ordered liquids to long range ordered crystals . due to the large neutron flux provided by the spallation neutron source ( sns ) and the large detector coverage , neutron count rates exceed those of comparable instruments by one to two orders of magnitude . this is achieved while maintaining a relatively high momentum transfer resolution of δq/q of 0.8 % fwhm ( typical ) , and a possible δq/q of 0.24 % fwhm ( best ) . the real space resolution is related to the maximum momentum transfer ; a maximum momentum transfer of 50 å^-1 can be obtained routinely and the maximum momentum transfer given by the detector configuration and the incident neutron spectrum is 125 å^-1 . high stability of the source and the detector allow small contrast isotope experiments to be performed . a detailed description of the instrument is given and the results of experiments with standard samples are discussed . story_separator_special_tag quantitative high pressure neutron-diffraction measurements have traditionally required large sample volumes of at least 25 mm3 due to limited neutron flux . therefore , pressures in these experiments have been limited to below 25 gpa . in comparison , for x-ray diffraction , sample volumes in conventional diamond cells for pressures up to 100 gpa have been less than 1 × 10^-4 mm3 . here , we report a new design of strongly supported conical diamond anvils for neutron diffraction that has reached 94 gpa with a sample volume of 2 × 10^-2 mm3 , a 100-fold increase . this sample volume is sufficient to measure full neutron-diffraction patterns of d2o ice to this pressure at the high flux spallation neutrons and pressure beamline at the oak ridge national laboratory . this provides an almost fourfold extension of the previous pressure regime for such measurements . story_separator_special_tag measurements of seven microscopic grüneisen parameters in kbr by inelastic neutron scattering are reported and compared with several theoretical predictions . story_separator_special_tag we report measurements of the phonon dispersion of ice ih under hydrostatic pressure up to 0.5 gpa , at 140 k , using inelastic neutron scattering . they reveal a pronounced softening of various low-energy modes , in particular , those of the transverse acoustic phonon branch in the [ 100 ] direction and polarization in the hexagonal plane . we demonstrate with the aid of a lattice dynamical model that these anomalous features in the phonon dispersion are at the origin of the negative thermal expansion ( nte ) coefficient in ice below 60 k. moreover , extrapolation to higher pressures shows that the mode frequencies responsible for the nte approach zero at approximately 2.5 gpa , which explains the known pressure-induced amorphization ( pia ) in ice . these results give the first clear experimental evidence that pia in ice is due to a lattice instability , i.e . , mechanical melting .
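the grüneisen-parameter and phonon-softening measurements above both reduce to tracking how mode frequencies shift with volume . a small worked sketch of the defining relation γi = -d ln ωi / d ln v , estimated by finite differences , follows ; the numbers are illustrative assumptions , not data from these papers .

# mode grueneisen parameter from finite differences ,
#   gamma_i = - d ln ( omega_i ) / d ln ( v ) ,
# estimated from a phonon frequency measured at two cell volumes .
# the numbers below are illustrative , not data from the papers above .
import math

def mode_grueneisen(omega_0, omega_1, v_0, v_1):
    return -(math.log(omega_1) - math.log(omega_0)) / \
            (math.log(v_1) - math.log(v_0))

# a softening mode : the frequency drops as the lattice is compressed ,
# giving a negative gamma -- the signature behind negative thermal expansion
print(mode_grueneisen(omega_0=2.00, omega_1=1.90, v_0=100.0, v_1=98.0))  # ~ -2.5

a negative mode grüneisen parameter of this kind , weighted by the mode heat capacity , is what drives the negative thermal expansion discussed in the ice abstract above .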
story_separator_special_tag neutron scattering techniques have been used to measure the phonon dispersion curves for sms in both the semiconducting and the metallic mixed-valent state . large softening of the longitudinal acoustic phonon branches was found in the mixed-valent state , particularly for the ( 111 ) direction . it is apparent that there is a strong coupling between the valence fluctuations and the phonons in the mixed-valent phase of sms . story_separator_special_tag acoustic-phonon dispersion relations of black phosphorus in its orthorhombic a11 phase have been measured along three principal directions mainly at 1 bar and 15.4 kbar . an anomalous softening has been observed on the la [ 100 ] branch whose vibrational patterns ( eigenvectors ) have been analyzed based on a force-constant-model fit by kaneta et al . it is suggested that this softening is caused by electron-phonon interactions associated with a large change in bonding angles . the ta_y [ 001 ] mode at the zone boundary , which can be assigned to the atomic displacements relevant to the a11-to-a7 ( rhombohedral ) structural transition at 45 kbar , does not soften , but at least the hardening shows a tendency to saturate at pressures higher than 15.4 kbar . story_separator_special_tag lattice dynamics of graphite at high pressures is studied by inelastic scattering of neutrons using an anvil technique . the pressure dependence of frequencies for hexagonal-axis-polarized phonons in the strongly anisotropic layered crystal lattice of graphite has been measured up to 60 kbar . it is shown that the pressure effect , resulting in a contraction of interlayer distances , gives rise to a monotonous hardening of the measured frequencies . the investigated longitudinal acoustic branch does not change its sinusoidal shape under pressure while the continuous evolution from quasi-two-dimensional to three-dimensional behavior is found for a transverse acoustic branch . it is pointed out that the observed higher rate of pressure variations of the lattice dynamics parameters as compared to the structural anisotropy can reflect changes of the crystal potential in graphite related to an available high-pressure phase transformation from the layered to more isotropic lattice . story_separator_special_tag we have performed an inelastic neutron scattering experiment in order to investigate the pressure dependence of the magnon dispersion of terbium in its ferromagnetic phase . our measurements were performed along the crystal c axis at 90 k ( well below the curie temperature tc = 220 k ) at ambient pressure , and at 4.3 and 15.2 kbar of applied hydrostatic pressure . the difference between the magnon dispersion curves at ambient pressure and at 4.3 kbar is small , while at 15.2 kbar the dispersion curve shifts appreciably towards higher energies . the measured magnon dispersion curves have been analyzed using a hamiltonian that includes heisenberg exchange as well as single ion anisotropy terms . for each dispersion curve we have calculated five exchange constants and two anisotropy terms . the energy gap at q = 0 due to the anisotropies is enhanced with pressure and exchange interactions acting between c planes appear to decrease with the application of high pressure . story_separator_special_tag the phonon dispersion of bcc iron under high pressure to 10 gpa was measured at 300 k by inelastic neutron scattering . its pressure dependence is surprisingly uniform .
contrary to the behavior found in other bcc elements , there is a lack of any significant pretransitional behavior close to the martensitic bcc-hcp transition which could be related to the burgers mechanism . this finding confirms predictions by spin-polarized total energy calculations that explain the transition by the effect of pressure on the magnetism of iron . the high pressure frequencies were used to develop a lattice dynamical model from which thermodynamic quantities can be determined at any pressure to 10 gpa . story_separator_special_tag we previously reported the existence and properties of a low-temperature modification of metastable β-fe2 ( po4 ) o . its structure was proposed by analogy with nicr ( po4 ) o , but the magnetic measurements were hampered by traces of fe3o4 . we have now obtained a purer sample and a single crystal , allowing precise structure refinement , detailed magnetic characterization , and an investigation of the temperature stability range . the single crystal x-ray study confirms the structure as previously proposed : tetragonal ( z = 4 ) , space group i41/amd , with a single iron site in face-sharing octahedra , and isolated po4 tetrahedra ; the reliability factor is r = 0.0345 ( rw = 0.0363 ) . the magnetic susceptibility has been measured from 4 to 850 k. the magnetization at zero applied field is around 0.01 emu/g at 300 and 88 k , and 0.04 emu/g at 3.5 k. the χ = f ( t ) curve displays several unusual features : above tn ( 408 k ) the curie constant continuously decreases as a consequence of short-range magnetic order ; below 100 k the susceptibility displays a small second maximum at 12 story_separator_special_tag inelastic-neutron-scattering studies show triple degeneracy of the γ1-γ4 exciton along ( 111 ) and double degeneracy along ( 100 ) with a band-structure-related softening of the longitudinal branch at x . the γ1-γ4 and γ4-γ5 transitions decrease with increasing pressure , suggesting the occurrence of a pressure-induced soft-mode magnetic transition . story_separator_special_tag we describe the inelastic-neutron-scattering technique to observe the crystal field in high-tc superconductors as a direct probe of the local symmetry and the charge distribution at the rare-earth site . this is exemplified for the compounds nd2-xcexcuo4 , erba2cu3ox and erba2cu4o8 . for erba2cu3ox we succeeded in directly proving the oxygen-vacancy induced charge redistribution in the cuo2 planes . an empirical relation between tc and the observed charge transfer o was derived which is highly nonlinear close to tc = 90 k. crystal-field studies performed for erba2cu3o7 and erba2cu4o8 under external pressure up to 10 kbar can be consistently described within this picture . story_separator_special_tag the crystal fields ( cfs ) of the binary rare-earth compounds pral3 and ndal3 have been examined at ambient pressure by means of inelastic neutron scattering . the cf of the latter compound has also been measured under hydrostatic pressure ( p = 0.84 gpa ) . the observed substantial changes of the cf under pressure are discussed within the framework of first-principles density functional theory calculations . story_separator_special_tag the boson peak in deeply cooled water confined in nanopores is studied with inelastic neutron scattering . we show that in the ( p , t ) plane , the locus of the emergence of the boson peak is nearly parallel to the widom line below 1600 bar .
above 1600 bar , the situation is different and from this difference the end pressure of the widom line is estimated . the frequency and width of the boson peak correlate with the density of water , which suggests a method to distinguish the hypothetical low-density liquid and high-density liquid phases in deeply cooled water . story_separator_special_tag the boson peak in deeply cooled water confined in nanopores is studied to examine the liquid-liquid transition ( llt ) . below 180 k , the boson peaks at pressures p higher than 3.5 kbar are evidently distinct from those at low pressures by higher mean frequencies and lower heights . moreover , the higher-p boson peaks can be rescaled to a master curve while the lower-p boson peaks can be rescaled to a different one . these phenomena agree with the existence of two liquid phases with different densities and local structures and the associated llt in the measured ( p , t ) region . in addition , the p dependence of the librational band also agrees with the above conclusion . story_separator_special_tag this paper presents a review of techniques and considerations in the design and construction of high pressure , low temperature diffraction experiments . also intended as an introductory text for new high pressure users , it covers the crucial aspects of pressure cell design . the general classification of common designs , and a discussion of the key beam interaction , mechanical , and thermal properties of commonly used materials is given . the advantages of different materials and high pressure cell classifications are discussed , and examples of designs developed for low temperature diffraction studies are presented , and compared . story_separator_special_tag the ability to manipulate structure and properties using pressure has been well known for many centuries . diffraction provides the unique ability to observe these structural changes in fine detail on lengthscales spanning atomic to nanometre dimensions . amongst the broad suite of diffraction tools available today , neutrons provide unique capabilities of fundamental importance . however , to date , the growth of neutron diffraction under extremes of pressure has been limited by the weakness of available sources . in recent years , substantial government investments have led to the construction of a new generation of neutron sources while existing facilities have been revitalized by upgrades . the timely convergence of these bright facilities with new pressure-cell technologies suggests that the field of high-pressure ( hp ) neutron science is on the cusp of substantial growth . here , the history of hp neutron research is examined with the hope of gleaning an accurate prediction of where some of these revolutionary capabilities will lead in the near future . in particular , a dramatic expansion of the current pressure-temperature range is likely , with corresponding increased scope for extreme-conditions science with neutron diffraction . this increase in coverage will be story_separator_special_tag in this paper , we present inelastic neutron-scattering experiments on the s = 1/2 frustrated gapped quantum magnet piperazinium hexachlorodicuprate ( phcc ) under applied hydrostatic pressure . these results show that at 9 kbar the magnetic triplet excitations in the system are gapless , contrary to what was previously reported . our results are in agreement with recent muon-spin relaxation experiments which found magnetic order above a quantum-critical point at 4.3 kbar .
finally , we show that the changes in the excitation spectrum can be primarily attributed to the change in a single exchange pathway . story_separator_special_tag the hydrogen atoms in hemimorphite , zn4si2o7 ( oh ) 2 · h2o , have been located and its crystal structure refined using 415 three-dimensional single-crystal neutron-diffraction data . the mineral is orthorhombic , space group imm2 , with a = 8.367 ( 5 ) , b = 10.730 ( 6 ) , c = 5.115 ( 3 ) å , and z = 2 . the structure consists of three-membered rings of corner-sharing zn ( oh ) o3 ( × 2 ) and sio4 tetrahedra arranged in compact sheets parallel to ( 010 ) . three oxygen atoms in each tetrahedron are bonded to two zinc atoms and one silicon atom , while a fourth oxygen atom forms a bridging bond to an equivalent cation in an adjacent sheet . the water molecules are oriented parallel to ( 010 ) inside large cavities between the tetrahedral sheets and are held in place by hydrogen bonds to and from the hydroxyl groups of the zn - oh - zn bridging linkages . mulliken population analyses calculated using constant bond lengths and the observed angles within and between the tetrahedra allow a rationalization of the bond-length variations in story_separator_special_tag the high-pressure structural evolution of hemimorphite , zn4si2o7 ( oh ) 2 · h2o , a = 8.3881 ( 13 ) , b = 10.7179 ( 11 ) , c = 5.1311 ( 9 ) å , v = 461.30 ( 12 ) å3 , space group imm2 , z = 2 , was studied by single-crystal x-ray diffraction with a diamond anvil cell under hydrostatic conditions up to 4.2 gpa . in the pressure range of 0.0001-2.44 gpa , the unit-cell parameters change almost linearly . the phase transition ( probably of the second order ) with symmetry reduction from imm2 ( hemimorphite-i ) to pnn2 ( hemimorphite-ii ) was found near 2.5 gpa . the structure compressibility increases somewhat above the phase transition . namely , the initial unit-cell volume decreases by 3.6 % at 2.44 gpa and by 7.2 % at 4.20 gpa . the hemimorphite framework can be described as built up of secondary building units ( sbu ) zn4si2o7 ( oh ) 2 . these blocks are combined to form the rods arranged along the c-axis ; these rods are multiplied by basic and i-translations of the orthorhombic unit cell . the symmetry reduction is caused by the rotation of the rods along their axis . in hemimorphite-i , the compression affects mainly story_separator_special_tag understanding the microscopic processes affecting the bulk thermal conductivity is crucial to develop more efficient thermoelectric materials . pbte is currently one of the leading thermoelectric materials , largely thanks to its low thermal conductivity . however , the origin of this low thermal conductivity in a simple rocksalt structure has so far been elusive . using a combination of inelastic neutron scattering measurements and first-principles computations of the phonons , we identify a strong anharmonic coupling between the ferroelectric transverse optic ( to ) mode and the longitudinal acoustic ( la ) modes in pbte . this interaction extends over a large portion of reciprocal space , and directly affects the heat-carrying la phonons . the la-to anharmonic coupling is likely to play a central role in explaining the low thermal conductivity of pbte . the present results provide a microscopic picture of why many good thermoelectric materials are found near a lattice instability of the ferroelectric type .
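the hemimorphite compression data quoted above ( a 3.6 % volume decrease at 2.44 gpa and 7.2 % at 4.20 gpa ) permit a rough secant estimate of the bulk modulus k ≈ -v δp / δv . the sketch below ignores equation-of-state curvature and the ~2.5 gpa phase transition , so the two numbers are only indicative ; consistent with the abstract , the estimate drops above the transition .

# rough bulk-modulus estimates for hemimorphite from the compression data
# quoted above ( 3.6 % volume loss at 2.44 gpa , 7.2 % at 4.20 gpa ) ,
# using the crude secant k ~ -v dp / dv and ignoring eos curvature
# and the ~2.5 gpa phase transition .
def secant_bulk_modulus(p0, p1, f0, f1):
    """pressures in gpa ; f is the fractional volume v / v0 ."""
    return -(p1 - p0) / ((f1 - f0) / ((f0 + f1) / 2))

print(secant_bulk_modulus(0.0, 2.44, 1.000, 0.964))   # ~ 67 gpa ( hemimorphite-i )
print(secant_bulk_modulus(2.44, 4.20, 0.964, 0.928))  # ~ 46 gpa ( hemimorphite-ii )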
story_separator_special_tag the structure and lattice dynamics of rock-salt thermoelectric materials snte and pbte are investigated with single crystal and powder neutron diffraction , inelastic neutron scattering ( ins ) , and first-principles simulations . our first-principles calculations of the radial distribution function ( rdf ) in both snte and pbte show a clear asymmetry in the first nearest-neighbor ( 1nn ) peak , which increases with temperature , in agreement with experimental reports ( ref . 1,2 ) . we show that this peak asymmetry for the 1nn sn-te or pb-te bond results from large-amplitude anharmonic vibrations ( phonons ) . no atomic off-centering is found in our simulations . in addition , the atomic mean square displacements derived from our diffraction data reveal stiffer bonding at the anion site , in good agreement with the partial phonon densities of states from ins , and first-principles calculations . in conclusion , these results provide clear evidence for large-amplitude anharmonic phonons associated with the resonant bonding leading to the ferroelectric instability . story_separator_special_tag materials with very low thermal conductivity are of great interest for both thermoelectric and optical phase-change applications . synthetic nanostructuring is most promising for suppressing thermal conductivity through phonon scattering , but challenges remain in producing bulk samples . in crystalline agsbte2 we show that a spontaneously forming nanostructure leads to a suppression of thermal conductivity to a glass-like level . our mapping of the phonon mean free paths provides a novel bottom-up microscopic account of thermal conductivity and also reveals intrinsic anisotropies associated with the nanostructure . ground-state degeneracy in agsbte2 leads to the natural formation of nanoscale domains with different orderings on the cation sublattice , and correlated atomic displacements , which efficiently scatter phonons . this mechanism is general and suggests a new avenue for the nanoscale engineering of materials to achieve low thermal conductivities for efficient thermoelectric converters and phase-change memory devices . story_separator_special_tag the anharmonic lattice dynamics of rock-salt thermoelectric compounds snte and pbte are investigated with inelastic neutron scattering ( ins ) and first-principles calculations . the experiments show that , surprisingly , although snte is closer to the ferroelectric instability , phonon spectra in pbte exhibit a more anharmonic character . this behavior is reproduced in first-principles calculations of the temperature-dependent phonon self-energy . our simulations reveal how the nesting of phonon dispersions induces prominent features in the self-energy , which account for the measured ins spectra and their temperature dependence . we establish that the phase space for three-phonon scattering processes , combined with the proximity to the lattice instability , is the mechanism determining the complex spectrum of the transverse-optic ferroelectric mode . story_separator_special_tag understanding elementary excitations and their couplings in condensed matter systems is critical for developing better energy-conversion devices . in thermoelectric materials , the heat-to-electricity conversion efficiency is directly improved by suppressing the propagation of phonon quasiparticles responsible for macroscopic thermal transport .
the current record material for thermoelectric conversion efficiency , snse , has an ultralow thermal conductivity , but the mechanism behind the strong phonon scattering remains largely unknown . from inelastic neutron scattering measurements and first-principles simulations , we mapped the four-dimensional phonon dispersion surfaces of snse , and found the origin of the ionic-potential anharmonicity responsible for the unique properties of snse . we show that the giant phonon scattering arises from an unstable electronic structure , with orbital interactions leading to a ferroelectric-like lattice instability . the present results provide a microscopic picture connecting electronic structure and phonon anharmonicity in snse , and offer new insights on how electron-phonon and phonon-phonon interactions may lead to the realization of ultralow thermal conductivity . tin selenide is at present the best thermoelectric conversion material . neutron scattering results and ab initio simulations show that the large phonon scattering is due to the development of a lattice story_separator_special_tag we have performed elastic and inelastic neutron experiments on single crystal samples of the coordination polymer compound cuf2 ( h2o ) 2 ( pyz ) ( pyz=pyrazine ) to study the magnetic structure and excitations . the elastic neutron diffraction measurements indicate a collinear antiferromagnetic structure with moments oriented along the [ 0.7 0 1 ] real-space direction and an ordered moment of 0.60 ± 0.03 μb/cu . this value is significantly smaller than the single ion magnetic moment , reflecting the presence of strong quantum fluctuations . the spin wave dispersion from magnetic zone center to the zone boundary points ( 0.5 1.5 0 ) and ( 0.5 0 1.5 ) can be described by a two dimensional heisenberg model with a nearest neighbor magnetic exchange constant j2d = 0.934 ± 0.0025 mev . the inter-layer interaction j⊥ in this compound is less than 1.5 % of j2d . the spin excitation energy at the ( 0.5 0.5 0.5 ) zone boundary point is reduced when compared to the ( 0.5 1 0.5 ) zone boundary point by ~ 10.3 ± 1.4 % . this zone boundary dispersion is consistent with quantum monte carlo and series expansion calculations which include corrections story_separator_special_tag the zero-field excitation spectrum of the strong-leg spin ladder ( c7h10n ) 2cubr4 is studied with a neutron time-of-flight technique . the spectrum is decomposed into its symmetric and asymmetric parts with respect to the rung momentum and compared with theoretical results obtained by the density matrix renormalization group method . additionally , the calculated dynamical correlations are shown for a wide range of rung and leg coupling ratios in order to point out the evolution of arising excitations , as , e.g . , of the two-magnon bound state from the strong to the weak coupling limit . story_separator_special_tag high intensity pulsed neutron scattering reveals a new set of magnetic excitations in the pinwheel valence bond solid state of the distorted kagome lattice antiferromagnet rb2cu3snf12 . the polarization of the dominant dispersive modes ( 2 mev < ħω < 7 mev ) is determined and found consistent with a dimer series expansion with strong dzyaloshinskii-moriya interactions ( d/j = 0.18 ) . a weakly dispersive mode near 5 mev and shifted `` ghosts '' of the main modes are attributed to the enlarged unit cell below a t = 215 k structural transition .
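the two-dimensional heisenberg fit in the cuf2 ( h2o ) 2 ( pyz ) study above lends itself to a compact linear spin-wave sketch ( the square-lattice geometry in reciprocal-lattice units and the optional renormalization factor zc are illustrative assumptions ; only the fitted j2d = 0.934 mev is taken from the study ) :

```python
import numpy as np

def lswt_energy(h, k, j=0.934, s=0.5, zc=1.0):
    """linear spin-wave energy (mev) of a nearest-neighbor square-lattice
    heisenberg antiferromagnet; (h, k) in reciprocal-lattice units.
    zc is an overall quantum renormalization factor (zc = 1 -> bare lswt)."""
    gamma = 0.5 * (np.cos(2 * np.pi * h) + np.cos(2 * np.pi * k))
    return zc * 4.0 * j * s * np.sqrt(1.0 - gamma**2)

# both zone-boundary points have gamma = 0, so bare lswt predicts equal
# energies there (2 j ~ 1.87 mev); the ~10 % difference measured above is
# a quantum correction beyond lswt, captured by qmc / series expansions.
print(lswt_energy(0.5, 0.0))    # (pi, 0)-type zone boundary
print(lswt_energy(0.25, 0.25))  # (pi/2, pi/2)-type zone boundary
```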
continuum scattering between 8 mev and 10 mev might be interpreted as a remnant of the kagome spinon continuum [ t.-h. han et al. , nature 492 , 406 ( 2012 ) ] story_separator_special_tag orbitals and charge go their separate ways . in certain materials at very low temperatures , an electron 's spin can separate from its charge , zooming through the crystal in the form of a spinon . such materials are usually one-dimensional , and their atoms have spins of 1/2 . wu et al . observed related behavior in a three-dimensional metal , yb2pt2pb , where the yb ions have a large magnetic moment that has its origin in the electrons ' orbital motion rather than their spin . neutron-scattering measurements indicated that these large magnetic moments can flip their direction through an exchange process similar to the one that occurs in spin 1/2 systems . this process results in effective charge-orbital separation . science , this issue p. 1206 neutron-scattering measurements indicate that a 3d metal with high atomic magnetic moments behaves like a 1d spin chain . exotic quantum states and fractionalized magnetic excitations , such as spinons in one-dimensional chains , are generally expected to occur in 3d transition metal systems with spin 1/2 . our neutron-scattering experiments on the 4f-electron metal yb2pt2pb overturn this conventional wisdom . we observe a broad magnetic continuum dispersing in only one direction story_separator_special_tag we present new magnetic heat capacity and neutron scattering results for two magnetically frustrated molybdate pyrochlores : the s = 1 oxide lu2mo2o7 and the s = 1/2 oxynitride lu2mo2o5n2 . lu2mo2o7 undergoes a transition to an unconventional spin glass ground state at tf ~ 16 k . however , the preparation of the corresponding oxynitride tunes the nature of the ground state from spin glass to quantum spin liquid . the comparison of story_separator_special_tag specific heat , elastic neutron scattering , and muon spin rotation experiments have been carried out on a well-characterized sample of stuffed ( pr-rich ) pr2+xir2-xo7-δ . elastic neutron scattering shows the onset of long-range spin-ice 2-in/2-out magnetic order at 0.93 kelvin , with an ordered moment of 1.7 ( 1 ) bohr magnetons per pr ion at low temperatures . approximate lower bounds on the correlation length and correlation time in the ordered state are 170 angstroms and 0.7 nanosecond , respectively . muon spin rotation experiments yield an upper bound of 2.6 ( 7 ) milliteslas on the local 4f field bloc at the muon site , which is nearly two orders of magnitude smaller than the expected dipolar field for long-range spin-ice ordering of 1.7-bohr magneton moments ( 120-270 milliteslas , depending on the muon site ) . this shortfall is due in part to splitting of the non-kramers crystal-field ground-state doublets of near-neighbor pr3+ ions by the positive-muon-induced lattice distortion . for this to be the only effect , however , ~160 pr moments out to a distance of ~14 angstroms must be suppressed .
an alternative scenario , one consistent with the observed reduced nuclear hyperfine schottky story_separator_special_tag we present single-crystal neutron scattering measurements of the spin-1/2 equilateral triangular-lattice antiferromagnet ba3cosb2o9 . besides confirming that the co2+ magnetic moments lie in the ab plane for zero magnetic field and then determining all the exchange parameters of the minimal quasi-2d spin hamiltonian , we provide conclusive experimental evidence of magnon decay through observation of intrinsic line broadening . through detailed comparisons with the linear and nonlinear spin-wave theories , we also point out that the large-s approximation , which is conventionally employed to predict magnon decay in noncollinear magnets , is inadequate to explain our experimental observation . thus , our results call for a new theoretical framework for describing excitation spectra in low-dimensional frustrated magnets under strong quantum effects . story_separator_special_tag the spin-wave excitations of the multiferroic mnwo4 have been measured in its low-temperature collinear commensurate phase using high-resolution inelastic neutron scattering . these excitations can be well described by a heisenberg model with competing long-range exchange interactions and a single-ion anisotropy term . we find that the magnetic interactions are strongly frustrated within the zigzag spin chain along the c-axis and between chains along the a-axis , while the coupling between spins along the b-axis is much weaker . we argue that the balance of these interactions results in the noncollinear incommensurate spin structure associated with the magnetoelectric effect , and the perturbation of the magnetic interactions leads to the observed rich phase diagrams of the chemically-doped materials . this delicate balance can also be tuned by the application of external electric or magnetic fields to achieve practical magnetoelectric control of this type of materials . story_separator_special_tag in this paper detailed neutron scattering measurements of the magnetic excitation spectrum of cucro2 in the ordered state below tn1 = 24.2 k are presented . the spectra are analyzed using a model hamiltonian which includes intralayer exchange up to the next-next-nearest neighbor and interlayer exchange . we obtain a definite parameter set and show that exchange interaction terms beyond the next-nearest neighbor are important to describe the inelastic excitation spectrum . the magnetic ground state structure generated with our parameter set is in agreement with the structure proposed for cucro2 from the results of single crystal diffraction experiments previously published . we argue that the role of the interlayer exchange is crucial to understand the incommensurability of the magnetic structure as well as the spin-charge coupling mechanism . story_separator_special_tag we show that a magnetic field applied transverse to the easy-axis ising direction in the quasi-two-dimensional kagome staircase magnet co3v2o8 induces three quantum phase transitions at low temperatures , ultimately producing a novel high-field polarized state with two distinct sublattices . new time-of-flight neutron scattering techniques , accompanied by large angular access and high magnetic field infrastructure , allow the mapping of a sequence of ferromagnetic and incommensurate phases and their accompanying spin excitations .
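the model class used in the mnwo4 analysis above , heisenberg exchange extended to competing longer-range paths plus a single-ion term , has the generic form ( a schematic restatement ; the sign conventions and the easy-axis direction are assumptions , and no parameter values from the paper are implied ) :

```latex
\mathcal{H} \;=\; \sum_{i<j} J_{ij}\,\mathbf{S}_i \cdot \mathbf{S}_j
\;+\; D \sum_i \left(S_i^{z}\right)^{2}
```

competing j_ij along and between the zigzag chains supply the frustration , while d selects the easy axis ; tipping the balance among the j_ij is what drives the system from the collinear commensurate phase into the noncollinear incommensurate one .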
also , at least one of the transitions to incommensurate phases at μ0hc1 ~ 6.25 t and μ0hc2 ~ 7 t is discontinuous , while the final quantum critical point at μ0hc3 ~ 13 t is continuous . story_separator_special_tag the majority of superconducting magnets for neutron scattering experiments take the form of split pairs with the magnetic field vector in the vertical plane . sample environment access is along the vertical magnetic axis whilst neutron access is in the horizontal plane . split pair superconducting magnets present significantly more challenges in terms of design than the simpler solenoid type arrangement and the addition of requirements for neutron access further complicates the situation . many of the requirements of split pair magnets for neutron scattering are conflicting and often compromises have to be made . presented here are some of the more important design criteria and the ways in which these are met in practical magnet designs . topics covered range from the choice of superconducting material through to the control of magnetic flux density profiles and mechanical aspects of the magnet former providing the neutron access between the coils . most of the information presented is based on recent or current production magnets manufactured by oxford instruments for a range of neutron related applications . story_separator_special_tag we have combined time-of-flight neutron laue diffraction and pulsed high magnetic fields at the spallation neutron source to study the phase diagram of the multiferroic material mnwo4 . the control of the field-pulse timing enabled an exploration of magnetic bragg scattering through the time dependence of both the neutron wavelength and the pulsed magnetic field . this allowed us to observe several magnetic bragg peaks in different field-induced phases of mnwo4 with a single instrument configuration . these phases were not previously amenable to neutron diffraction studies due to the large fields involved . story_separator_special_tag following the discovery of long-range antiferromagnetic order in the parent compounds of high-transition-temperature ( high-tc ) copper oxides , there have been efforts to understand the role of magnetism in the superconductivity that occurs when mobile electrons or holes are doped into the antiferromagnetic parent compounds . superconductivity in the newly discovered rare-earth iron-based oxide systems rofeas ( r , rare-earth metal ) also arises from either electron or hole doping of their non-superconducting parent compounds . the parent material laofeas is metallic but shows anomalies near 150 k in both resistivity and d.c. magnetic susceptibility . although optical conductivity and theoretical calculations suggest that laofeas exhibits a spin-density-wave ( sdw ) instability that is suppressed by doping with electrons to induce superconductivity , there has been no direct evidence of sdw order . here we report neutron-scattering experiments that demonstrate that laofeas undergoes an abrupt structural distortion below 155 k , changing the symmetry from tetragonal ( space group p4/nmm ) to monoclinic ( space group p112/n ) at low temperatures , and then , at 137 k , develops long-range sdw-type antiferromagnetic order with a small moment but simple magnetic structure . doping the system with fluorine suppresses story_separator_special_tag in this review , we present a summary of experimental studies of magnetism in fe-based superconductors .
the doping dependent phase diagram shows strong similarities to the generic phase diagram of the cuprates . parent compounds exhibit magnetic order together with a structural phase transition , both of which are progressively suppressed with doping , allowing superconductivity to emerge . the stripe-like spin arrangement of fe moments in the magnetically ordered state shows the identical in-plane structure for the rfeaso ( r=rare earth ) and afe2as2 ( a=sr , ca , ba , eu and k ) parent compounds , notably different from the spin configuration of the cuprates . interestingly , fe1+yte orders with a different spin order despite a very similar fermi surface topology . studies of the spin dynamics in the parent compounds show that the interactions are best characterized as anisotropic three-dimensional ( 3d ) interactions . despite the room temperature tetragonal structure , analysis of the low temperature spin waves under the assumption of a heisenberg hamiltonian indicates strong in-plane anisotropy with a significant next-nearest-neighbor interaction . in the superconducting state , a resonance , localized in both wavevector and energy , is observed in the spin excitation story_separator_special_tag exotic superconductivity has often been discovered in materials with a layered ( two-dimensional ) crystal structure . the low dimensionality can affect the electronic structure and can realize high transition temperatures ( tc ) and/or unconventional superconductivity mechanisms . as standard examples , we now have two types of high-tc superconductors . the first group is the cu-oxide superconductors whose crystal structure is basically composed of a stacking of spacer ( blocking ) layers and superconducting cuo2 layers . the second group is the fe-based superconductors which also possess a stacking structure of spacer layers and superconducting fe2an2 ( an = p , as , se , te ) layers . in both systems , dramatic enhancements of tc are achieved by optimizing the spacer layer structure , for instance , a variety of composing elements , spacer thickness , and carrier doping levels with respect to the superconducting layers . in this respect , to realize higher-tc superconductivity , other than cu-oxide and fe-based superconductors , the discovery of a new prototype of layered superconductors needs to be achieved . here we show superconductivity in a new bismuth-oxysulfide layered compound bi4o4s3 . crystal structure analysis indicates that this superconductor has a story_separator_special_tag inelastic neutron scattering measurements on ba ( fe0.963ni0.037 ) 2as2 manifest a neutron spin resonance in the superconducting state with anisotropic dispersion within the fe layer . whereas the resonance is sharply peaked at the antiferromagnetic ( afm ) wave vector qafm along the orthorhombic a axis , the resonance disperses upwards away from qafm along the b axis . in contrast to the downward dispersing resonance and hourglass shape of the spin excitations in superconducting cuprates , the resonance in electron-doped bafe2as2 compounds possesses a magnonlike upwards dispersion . story_separator_special_tag neutron scattering measurements have been performed on polycrystalline samples of the newly discovered layered superconductor lao0.5f0.5bis2 , and its nonsuperconducting parent compound laobis2 . the crystal structures and vibrational modes have been examined .
upon f-doping , while the lattice contracts significantly along c and expands slightly along a , the buckling of the bis2 plane remains almost the same . in the inelastic measurements , a large difference in the high energy phonon modes was observed upon f substitution . alternatively , the low energy modes remain almost unchanged between non-superconducting and superconducting states either by f-doping or by cooling through the transition temperature . using density functional perturbation theory we identify the phonon modes , and estimate the phonon density of states . we compare these calculations to the current measurements and other theoretical studies of this new superconducting material . story_separator_special_tag we use neutron scattering to study spin excitations in single crystals of life0.88co0.12as , which is located near the boundary of the superconducting phase of life1-xcoxas and exhibits non-fermi-liquid behavior indicative of a quantum critical point . by comparing spin excitations of life0.88co0.12as with a combined density functional theory and dynamical mean field theory calculation , we conclude that wave-vector correlated low energy spin excitations are mostly from the dxy orbitals , while high-energy spin excitations arise from the dyz and dxz orbitals . unlike most iron pnictides , the strong orbital selective spin excitations in the lifeas family cannot be described by an anisotropic heisenberg hamiltonian . while the evolution of low-energy spin excitations of life1-xcoxas is consistent with the electron-hole fermi surface nesting conditions for the dxy orbital , the reduced superconductivity in life0.88co0.12as suggests that fermi surface nesting conditions for the dyz story_separator_special_tag we present a detailed analysis of the picosecond-to-nanosecond motions of green fluorescent protein ( gfp ) and its hydration water using neutron scattering spectroscopy and hydrogen/deuterium contrast . the analysis reveals that hydration water suppresses protein motions at lower temperatures ( < ~ 200 k ) , and facilitates protein dynamics at high temperatures . experimental data demonstrate that the hydration water is harmonic at temperatures < ~ 180-190 k and is not affected by the protein 's methyl group rotations . the dynamics of the hydration water exhibits changes at ~ 180-190 k that we ascribe to the glass transition in the hydrated protein . our results confirm significant differences in the dynamics of protein and its hydration water at high temperatures : on the picosecond-to-nanosecond timescale , the hydration water exhibits diffusive dynamics , while the protein motions are localized to < ~ 3 å . the diffusion of the gfp hydration water is similar to the behavior of hydration water previously observed for other proteins .
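localization lengths and mean-square displacements of the kind quoted in these hydration studies are commonly extracted from the q-dependence of the elastic intensity ; a minimal sketch under the standard gaussian approximation ( conventions differ by factors of 3 , and the synthetic data below are illustrative only ) :

```python
import numpy as np

def fit_msd(q, i_el):
    """estimate a mean-square displacement <u^2> from elastic intensities
    using the gaussian approximation  i_el(q) ~ i0 * exp(-q^2 <u^2> / 3).
    q in 1/angstrom; returns <u^2> in angstrom^2."""
    # linearize: ln i_el = ln i0 - q^2 * <u^2> / 3, then fit the slope
    slope, _ = np.polyfit(q**2, np.log(i_el), 1)
    return -3.0 * slope

# synthetic check with <u^2> = 0.9 A^2
q = np.linspace(0.3, 1.6, 12)
i_el = np.exp(-q**2 * 0.9 / 3.0)
print(fit_msd(q, i_el))  # ~0.9
```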
comparison with other globular proteins ( e.g. , lysozyme ) reveals that on the timescale of 1 ns and at equivalent hydration level , gfp dynamics ( mean-square displacements and quasielastic intensity story_separator_special_tag phytoglycogen is a naturally occurring polysaccharide nanoparticle made up of extensively branched glucose monomers . it has a number of unusual and advantageous properties , such as high water retention , low viscosity , and high stability in water , which make this biomaterial a promising candidate for a wide variety of applications . in this study , we have characterized the structure and hydration of aqueous dispersions of phytoglycogen nanoparticles using neutron scattering . small angle neutron scattering results suggest that the phytoglycogen nanoparticles behave similarly to hard sphere colloids and are hydrated by a large number of water molecules ( each nanoparticle contains between 250 % and 285 % of its mass in water ) . this suggests that phytoglycogen is an ideal sample in which to study the dynamics of hydration water . to this end , we used quasielastic neutron scattering ( qens ) to provide an independent and consistent measure of the hydration number , and to estimate the retardation factor ( or degree of wa . story_separator_special_tag collective dynamics are considered to be one of the major properties of soft materials , including biological macromolecules . we present coherent neutron scattering studies of the low-frequency vibrations , the so-called boson peak , in fully deuterated green fluorescent protein ( gfp ) . our analysis revealed unexpectedly low coherence of the atomic motions in gfp . this result implies a low amount of in-phase collective motion of the secondary structural units contributing to the boson peak vibrations and fast conformational fluctuations on the picosecond timescale . these observations are in contrast to earlier studies of polymers and glass-forming systems , and suggest that random or out-of-phase motions of the β-strands contribute greater than two-thirds of the intensity to the low-frequency vibrational spectra of gfp . story_separator_special_tag complementary neutron- and light-scattering results on nine proteins and amino acids reveal the role of rigidity and secondary structure in determining the time- and length scales of low-frequency collective vibrational dynamics in proteins . these dynamics manifest in a spectral feature , known as the boson peak ( bp ) , which is common to all disordered materials . we demonstrate that bp position scales systematically with structural motifs , reflecting local rigidity : disordered proteins appear softer than α-helical proteins , which are softer than β-sheet proteins . our analysis also reveals a universal spectral shape of the bp in proteins and amino acid mixtures , superimposable on the shape observed in typical glasses . uniformity in the underlying physical mechanism , independent of the specific chemical composition , connects the bp vibrations to nanometer-scale heterogeneities , providing an experimental benchmark for coarse-grained simulations , structure/rigidity relationships , and engineering of proteins for novel applications . story_separator_special_tag there is tremendous interest in understanding the role that secondary structure plays in the rigidity and dynamics of proteins .
in this work we analyze nanomechanical properties of proteins chosen to represent different secondary structures : α-helices ( myoglobin and bovine serum albumin ) , β-barrels ( green fluorescent protein ) , and α + β + loop structures ( lysozyme ) . our experimental results show that in these model proteins , the β motif is a stiffer structural unit than the α-helix in both dry and hydrated states . this difference appears not only in the rigidity of the protein , but also in the amplitude of fast picosecond fluctuations . moreover , we show that for these examples the secondary structure correlates with the temperature- and hydration-induced changes in the protein dynamics and rigidity . analysis also suggests a connection between the length of the secondary structure ( α-helices ) and the low-frequency vibrational mode , the so-called boson peak . the presented results suggest an intimate connection of dynamics and rigidity with the protein secondary structure . story_separator_special_tag the emergence of intrinsically disordered proteins ( idps ) as a recognized structural class has forced the community to confront a new paradigm of structure , dynamics , and mechanical properties for proteins . we present novel data on the similarities and differences in the dynamics and nanomechanical properties of idps and other biomacromolecules on the picosecond time scale . an idp , β-casein ( cas ) , has been studied in a calcium bound and unbound state using neutron and light scattering techniques . we show that cas partially folds and stiffens upon calcium binding , but in the unfolded state , it is softer than folded proteins such as green fluorescent protein ( gfp ) . we also see that some localized diffusive motions in cas have a larger amplitude than in gfp at this time scale but are still smaller than those observed in trna . in spite of these differences , cas dynamics are consistent with the classes of motions seen in folded proteins on this time scale . story_separator_special_tag poly-l-glutamic acid ( pga ) is a widely used biomaterial , with applications ranging from drug delivery and biological glues to food products and as a tissue engineering scaffold . a biodegradable material with flexible conjugation functional groups , tunable secondary structure , and mechanical properties , pga has potential as a tunable matrix material in mechanobiology . recent studies in proteins connecting dynamics , nanometer length scale rigidity , and secondary structure suggest a new point of view from which to analyze and develop this promising material . we have characterized the structure , topology , and rigidity properties of pga prepared with different molecular weights and secondary structures through various techniques including scanning electron microscopy , ftir , light , and neutron scattering spectroscopy . on the length scale of a few nanometers , rigidity is determined by hydrogen bonding interactions in the presence of neutral species and by electrostatic interactions when the polypeptide is negatively charged . when probed over hundreds of nanometers , the rigidity of these materials is modified by long range intermolecular interactions that are introduced by the supramolecular structure . story_separator_special_tag we report on a neutron spin echo investigation of the intermediate scale dynamics of polyisobutylene studying both the self-motion and the collective motion .
the momentum transfer ( q ) dependences of the self-correlation times are found to follow a q^ ( -2/β ) law in agreement with the picture of gaussian dynamics . in the full q range of observation , their temperature dependence is weaker than the rheological shift factor . the same is true for the stress relaxation time as seen in sound wave absorption . the collective times show both temperature dependences ; at the structure factor peak , they follow the temperature dependence of the viscosity , but below the peak , one finds the stress relaxation behavior . story_separator_special_tag quasielastic neutron scattering with polarized neutrons allows for an experimental separation of single-particle and collective processes , as contained in the incoherent and coherent scattering contributions . this technique was used to investigate the dynamical processes in the pyridinium-based ionic liquid 1-butylpyridinium bis ( trifluoromethylsulfonyl ) imide . we observed two diffusion processes with different time scales . the slower diffusional process was present in both the coherent and the incoherent contribution , meaning that this process has at least a partial collective nature . the second , faster localized process is only present in the incoherent scattering contribution . we conclude that it is a true single-particle process on a shorter time scale . story_separator_special_tag as with most liquids , it is possible to supercool water ; this generally involves cooling the liquid below its melting temperature ( avoiding crystallization ) until it eventually forms a glass . the viscosity and related relaxation times ( τ ) of glass-forming liquids typically show non-arrhenius temperature ( t ) dependencies . liquids with highly non-arrhenius behaviour in the supercooled region are termed fragile . in contrast , liquids whose behaviour is close to the arrhenius law ( ln τ ∝ 1/t ) are termed strong ( ref . 5 ) . a unique fragile-to-strong transition around 228 k has been proposed for supercooled water ; however , experimental studies of bulk supercooled water in this temperature range are generally hampered because crystallization occurs . here we use broad-band dielectric spectroscopy to study the relaxation dynamics of supercooled water in a wide temperature range , including the usually inaccessible temperature region . this is possible because the supercooled water is held within a layered vermiculite clay : the geometrical confinement and presence of intercalated sodium ions prevent most of the water from crystallizing . we find a relaxational process with an arrhenius temperature dependence , consistent with the proposed strong nature story_separator_special_tag dynamics of water confined in 5 å diameter channels of beryl and cordierite single crystals were studied by using inelastic ( ins ) and quasielastic ( qens ) neutron scattering . the ins spectra for bo . story_separator_special_tag using neutron scattering and ab initio simulations , we document the discovery of a new `` quantum tunneling state '' of the water molecule confined in 5 å channels in the mineral beryl , characterized by extended proton and electron delocalization . we observed a number of peaks in the inelastic neutron scattering spectra that were uniquely assigned to water quantum tunneling . in addition , the water proton momentum distribution was measured with deep inelastic neutron scattering , which directly revealed coherent delocalization of the protons in the ground state .
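the q^ ( -2/β ) law that opened this passage follows from combining a stretched-exponential self-correlation with the gaussian approximation for sublinear center-of-mass displacements ; schematically ( a standard textbook identification , not a derivation from the paper ) :

```latex
S_{\mathrm{self}}(q,t)=\exp\!\left[-\,q^{2}\langle r^{2}(t)\rangle/6\right],
\qquad
\langle r^{2}(t)\rangle \propto t^{\beta}
\;\;\Rightarrow\;\;
S_{\mathrm{self}}(q,t)=\exp\!\left[-\left(t/\tau(q)\right)^{\beta}\right]
\;\;\text{with}\;\;
\tau(q)\propto q^{-2/\beta}
```

i.e . the scattering function depends on q and t only through the combination q² t^β , so the extracted relaxation times must scale as q^ ( -2/β ) .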
story_separator_special_tag studies of single-particle momentum distributions in light atoms and molecules are reviewed with specific emphasis on experimental measurements using the deep inelastic neutron scattering technique at ev energies . the technique has undergone a remarkable development since the mid-1980s , when intense fluxes of epithermal neutrons were made available from pulsed neutron sources . these types of measurements provide a probe of the short-time dynamics of the recoiling atoms or molecules as well as information on the local structure of the materials . the paper introduces the theoretical framework for the interpretation of deep inelastic neutron scattering experiments and thoroughly illustrates the physical principles underlying the impulse approximation for light atoms and molecules . the most relevant experimental studies performed on a variety of condensed matter systems in the last 20 years are reviewed . the experimental technique is critically presented in the context of a full list of published work . it is shown how , in some cases , these measurements can be used to extract directly the effective born-oppenheimer potential . a summary of the progress made to date in instrument development is also provided . current data analysis and the interpretation of the results story_separator_special_tag
in prostate cancer radiotherapy , the accurate identification of the prostate and organs at risk in planning computed tomography ( ct ) images is an important part of the therapy planning and optimization . manually contouring these organs can be a time-consuming process and subject to intra- and inter-expert variability . automatic identification of organ boundaries from these images is challenging due to the poor soft tissue contrast . atlas-based approaches may provide a priori structural information by propagating manual expert delineations to a new individual space ; however , the interindividual variability and registration errors may lead to biased results . multi-atlas approaches can partly overcome some of these difficulties by selecting the most similar atlases among a large database , but the definition of a similarity measure between the available atlases and the query individual has still to be addressed . the purpose of this chapter is to explain atlas-based segmentation approaches and the evaluation of different atlas-based strategies to simultaneously segment prostate , bladder , and rectum from ct images . a comparison between single and multiple atlases is performed . experiments on atlas ranking , selection strategies , and fusion-decision rules are carried out to illustrate story_separator_special_tag in prostate cancer radiotherapy , accurate segmentation of prostate and organs at risk in planning ct and follow-up cbct images is an essential part of the therapy planning and optimization . automatic segmentation is challenging because of the poor contrast in soft tissues . although atlas-based approaches may provide a priori structural information by propagating manual expert delineations to a new individual space , the interindividual variability and registration errors can introduce bias in the results . multi-atlas approaches can partly overcome some of these difficulties by selecting the most similar atlases among a large database , but the definition of a similarity measure between the available atlases and the query individual has still to be addressed . the purpose of this paper is the evaluation of different strategies to simultaneously segment prostate , bladder and rectum from ct images , by selecting the most similar atlases from a prebuilt 24-atlas subset . three similarity measures were considered : cross-correlation ( cc ) , sum of squared differences ( ssd ) and mutual information ( mi ) . experiments on atlas ranking , selection strategies and fusion decision rules were carried out . propagation of labels using the diffeomorphic demons story_separator_special_tag pelvic floor dysfunction is common in women after childbirth and precise segmentation of magnetic resonance images ( mri ) of the pelvic floor may facilitate diagnosis and treatment of patients . however , because of the complexity of its structures , manual segmentation of the pelvic floor is challenging and suffers from high inter- and intra-rater variability of expert raters . multiple template fusion algorithms are promising segmentation techniques for these types of applications , but they have been limited by imperfections in the alignment of templates to the target , and by template segmentation errors . a number of algorithms sought to improve segmentation performance by combining image intensities and template labels as two independent sources of information , carrying out fusion through local intensity weighted voting schemes .
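a minimal sketch of the local intensity-weighted voting idea just described ( voxelwise gaussian weights on the intensity difference rather than patch similarity ; the kernel form and sigma are illustrative choices , not the exact scheme of any one study above ) :

```python
import numpy as np

def local_weighted_vote(target, atlases, labels, sigma=0.1):
    """fuse propagated segmentations by voting, weighting each registered
    template voxelwise by its local intensity agreement with the target.
    target: (x, y, z) image; atlases: (n, x, y, z) registered intensities;
    labels: (n, x, y, z) propagated integer label maps."""
    w = np.exp(-((atlases - target[None]) ** 2) / (2 * sigma**2))
    n_labels = int(labels.max()) + 1
    votes = np.zeros((n_labels,) + target.shape)
    for lab in range(n_labels):
        votes[lab] = np.sum(w * (labels == lab), axis=0)  # weighted votes
    return np.argmax(votes, axis=0)  # fused label map
```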
this class of approach is a form of linear opinion pooling , and achieves unsatisfactory performance for this application . we hypothesized that better decision fusion could be achieved by assessing the contribution of each template in comparison to a reference standard segmentation of the target image and developed a novel segmentation algorithm to enable automatic segmentation of mri of the female pelvic floor . the algorithm achieves high performance story_separator_special_tag we introduce a 3d segmentation framework which uses principal shapes . the probabilistic energy function of the method is defined based on intensity , tissue type , and location information of the structures using a multiple atlas method . for intensity information , a nonparametric probability density function is used , which takes the intensity relations between different structures into account . to find a local minimum of the energy function , a two-step optimization strategy is used . in the first step , shape parameters are optimized based on the analytic derivatives of the energy function . in the second step , shapes of the structures are fine-tuned using a level set method . the proposed method is shown to be superior to some popular methods in the literature using a dataset of 64 patients with mesial temporal lobe epilepsy . in addition , the method can be used for lateralization with accuracy close to that of manual segmentation . story_separator_special_tag quantitative research in neuroimaging often relies on anatomical segmentation of human brain mr images . recent multi-atlas based approaches provide highly accurate structural segmentations of the brain by propagating manual delineations from multiple atlases in a database to a query subject and combining them . the atlas databases which can be used for these purposes are growing steadily . we present a framework to address the consequent problems of scale in multi-atlas segmentation . we show that selecting a custom subset of atlases for each query subject provides more accurate subcortical segmentations than those given by non-selective combination of random atlas subsets . using a database of 275 atlases , we tested an image-based similarity criterion as well as a demographic criterion ( age ) in a leave-one-out cross-validation study . using a custom ranking of the database for each subject , we combined a varying number n of atlases from the top of the ranked list . the resulting segmentations were compared with manual reference segmentations using dice overlap . image-based selection provided better segmentations than random subsets ( mean dice overlap 0.854 vs. 0.811 for the estimated optimal subset size , n=20 ) . age-based selection resulted in story_separator_special_tag analysis of structural neuroimaging studies often relies on volume or shape comparisons of labeled neuroanatomical structures in two or more clinical groups . such studies have common elements involving segmentation , morphological feature extraction for comparison , and subject and group discrimination . we combine two state-of-the-art analysis approaches , namely automated segmentation using label fusion and classification via spectral analysis , to explore the relationship between the morphology of neuroanatomical structures and clinical diagnosis in dementia . we apply this framework to a cohort of normal controls and patients with mild dementia , where accurate diagnosis is notoriously difficult .
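the image-based atlas ranking evaluated above ( top-n selection , with n = 20 near-optimal in that study ) can be sketched as follows ( global normalized cross-correlation as the similarity criterion is one of several reasonable choices ; nothing here is specific to the cited pipeline ) :

```python
import numpy as np

def rank_atlases(target, atlases, n_select=20):
    """rank registered atlas images by global normalized cross-correlation
    with the target image and return the indices of the n_select most
    similar atlases, for use in selective multi-atlas fusion."""
    t = (target - target.mean()) / target.std()
    scores = []
    for a in atlases:
        a_n = (a - a.mean()) / a.std()
        scores.append(float(np.mean(t * a_n)))  # ncc over all voxels
    return np.argsort(scores)[::-1][:n_select]
```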
we compare and contrast our ability to discriminate normal and abnormal groups on the basis of structural morphology with ( supervised ) and without ( unsupervised ) knowledge of each individual 's diagnosis . we test the hypothesis that morphological features resulting from alzheimer 's disease processes are the strongest discriminator between groups . story_separator_special_tag the corpus callosum ( cc ) is the largest fiber bundle connecting the left and right cerebral hemispheres . it has been a region examined extensively for indications of various pathologies , including alzheimer 's disease ( ad ) . almost all previous studies of the cc in ad have been concerned with its size , particularly its mid-sagittal cross-sectional area ( cca ) . in this study , we show that the cc shape , characterized by its circularity ( cir ) , may be affected more profoundly than its size in early ad . mri scans ( n = 196 ) were obtained from the publicly available open access series of imaging studies database . the cc cross-sectional region on the mid-sagittal section of the brain was automatically segmented using a novel algorithm . the cca and cir were compared in 98 normal control ( nc ) subjects , 70 patients with very mild ad ( ad-vm ) , and 28 patients with mild ad ( ad-m ) . statistical analysis of covariance controlling for age and intracranial capacity showed that both the cir and the cca were significantly reduced in the ad-vm group relative to the nc story_separator_special_tag it has been shown that employing multiple atlas images improves segmentation accuracy in atlas-based medical image segmentation . each atlas image is registered to the target image independently and the calculated transformation is applied to the segmentation of the atlas image to obtain a segmented version of the target image . several independent candidate segmentations result from the process , which must be somehow combined into a single final segmentation . majority voting is the generally used rule to fuse the segmentations , but more sophisticated methods have also been proposed . in this paper , we show that the use of global weights for the candidate segmentations has a major limitation . as a means to improve segmentation accuracy , we propose the generalized local weighted voting method . namely , the fusion weights adapt voxel-by-voxel according to a local estimation of segmentation performance . using digital phantoms and mr images of the human brain , we demonstrate that the performance of each combination technique depends on the gray level contrast characteristics of the segmented region , and that no fusion method yields better results than the others for all the regions . in particular , we show that story_separator_special_tag the spinal cord is an essential and vulnerable component of the central nervous system . differentiating and localizing the spinal cord internal structure ( i.e. , gray matter vs. white matter ) is critical for assessment of therapeutic impacts and determining prognosis of relevant conditions . fortunately , new magnetic resonance imaging ( mri ) sequences enable clinical study of the in vivo spinal cord 's internal structure . yet , low contrast-to-noise ratio , artifacts , and imaging distortions have limited the applicability of tissue segmentation techniques pioneered elsewhere in the central nervous system .
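the circularity measure used in the corpus callosum study above is not spelled out in the abstract ; the common definition , assumed here for illustration , is the isoperimetric ratio :

```python
import math

def circularity(area, perimeter):
    """shape circularity 4*pi*area / perimeter**2: 1.0 for a perfect disk,
    smaller for elongated or irregular outlines. assumed standard
    definition; the cited study may differ in detail."""
    return 4.0 * math.pi * area / perimeter**2

print(circularity(math.pi, 2.0 * math.pi))  # unit disk -> 1.0
```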
additionally , due to the inter-subject variability exhibited on cervical mri , typical deformable volumetric registrations perform poorly , limiting the applicability of a typical multi-atlas segmentation framework . thus , to date , no automated algorithms have been presented for the spinal cord 's internal structure . herein , we present a novel slice-based groupwise registration framework for robustly segmenting cervical spinal cord mri . specifically , we provide a method for ( 1 ) pre-aligning the slice-based atlases into a groupwise-consistent space , ( 2 ) constructing a model of spinal cord variability , ( 3 ) projecting the target slice into story_separator_special_tag purpose : multi-atlas segmentation has been shown to be highly robust and accurate across an extraordinary range of potential applications . however , it is limited to the segmentation of structures that are anatomically consistent across a large population of potential target subjects ( i.e . , multi-atlas segmentation is limited to in-atlas applications ) . herein , the authors propose a technique to determine the likelihood that a multi-atlas segmentation estimate is representative of the problem at hand , and , therefore , identify anomalous regions that are not well represented within the atlases . methods : the authors derive a technique to estimate the out-of-atlas ( ooa ) likelihood for every voxel in the target image . these estimated likelihoods can be used to determine and localize the probability of an abnormality being present on the target image . results : using a collection of manually labeled whole-brain datasets , the authors demonstrate the efficacy of the proposed framework on two distinct applications . first , the authors demonstrate the ability to accurately and robustly detect malignant gliomas in the human brain , an aggressive class of central nervous system neoplasms . second , the authors demonstrate how this ooa story_separator_special_tag segmentation and delineation of structures of interest in medical images is paramount to quantifying and characterizing structural , morphological , and functional correlations with clinically relevant conditions . the established gold standard for performing segmentation has been manual voxel-by-voxel labeling by a neuroanatomist expert . this process can be extremely time-consuming , resource-intensive and fraught with high inter-observer variability . hence , studies involving characterizations of novel structures or appearances have been limited in scope ( numbers of subjects ) , scale ( extent of regions assessed ) , and statistical power . statistical methods to fuse data sets from several different sources ( e.g. , multiple human observers ) have been proposed to simultaneously estimate both rater performance and the ground truth labels . however , with empirical datasets , statistical fusion has been observed to result in visually inconsistent findings . so , despite the ease and elegance of a statistical approach , single observers and/or direct voting are often used in practice . hence , rater performance is not systematically quantified and exploited during label estimation . to date , statistical fusion methods have relied on characterizations of rater performance that do not intrinsically include story_separator_special_tag to date , label fusion methods have primarily relied either on global [ e.g. , simultaneous truth and performance level estimation ( staple ) , globally weighted vote ] or voxelwise ( e.g.
, locally weighted vote ) performance models . optimality of the statistical fusion framework hinges upon the validity of the stochastic model of how a rater errs ( i.e. , the labeling process model ) . hitherto , approaches have tended to focus on the extremes of potential models . herein , we propose an extension to the staple approach to seamlessly account for spatially varying performance by extending the performance level parameters to account for a smooth , voxelwise performance level field that is unique to each rater . this approach , spatial staple , provides significant improvements over state-of-the-art label fusion algorithms in both simulated and empirical data sets . story_separator_special_tag multi-atlas segmentation provides a general purpose , fully-automated approach for transferring spatial information from an existing dataset ( atlases ) to a previously unseen context ( target ) through image registration . the method to resolve voxelwise label conflicts between the registered atlases ( label fusion ) has a substantial impact on segmentation quality . ideally , statistical fusion algorithms ( e.g. , staple ) would result in accurate segmentations as they provide a framework to elegantly integrate models of rater performance . the accuracy of statistical fusion hinges upon accurately modeling the underlying process of how raters err . despite success on human raters , current approaches inaccurately model multi-atlas behavior as they fail to seamlessly incorporate exogenous intensity information into the estimation process . as a result , locally weighted voting algorithms represent the de facto standard fusion approach in clinical applications . moreover , regardless of the approach , fusion algorithms are generally dependent upon large atlas sets and highly accurate registration as they implicitly assume that the registered atlases form a collectively unbiased representation of the target . herein , we propose a novel statistical fusion algorithm , non-local staple ( nls ) . nls reformulates story_separator_special_tag label fusion is a critical step in many image segmentation frameworks ( e.g. , multi-atlas segmentation ) as it provides a mechanism for generalizing a collection of labeled examples into a single estimate of the underlying segmentation . in the multi-label case , typical label fusion algorithms treat all labels equally fully neglecting the known , yet complex , anatomical relationships exhibited in the data . to address this problem , we propose a generalized statistical fusion framework using hierarchical models of rater performance . building on the seminal work in statistical fusion , we reformulate the traditional rater performance model from a multi-tiered hierarchical perspective . the proposed approach provides a natural framework for leveraging known anatomical relationships and accurately modeling the types of errors that raters ( or atlases ) make within a hierarchically consistent formulation . herein , the primary contributions of this manuscript are : ( 1 ) we provide a theoretical advancement to the statistical fusion framework that enables the simultaneous estimation of multiple ( hierarchical ) confusion matrices for each rater , ( 2 ) we highlight the amenability of the proposed hierarchical formulation to many of the state-of-the-art advancements to the statistical fusion story_separator_special_tag we present our submission to the stacom 2014 moco challenge for motion correction of dynamic contrast myocardial perfusion mri . 
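several of the preceding abstracts extend the staple estimator ; for orientation , a minimal binary sketch of its em iteration is given below ( flat spatial prior , dense arrays , no mrf or spatially varying performance terms ; real implementations add multi-label confusion matrices and the spatial or non-local extensions discussed above ) :

```python
import numpy as np

def staple_binary(d, n_iter=30, prior=0.5):
    """minimal binary staple-style em sketch (after warfield et al., 2004).
    d: (r, n) array of r rater/atlas decisions in {0, 1} over n voxels.
    returns (w, p, q): voxelwise posterior of label 1, plus per-rater
    sensitivity p and specificity q."""
    r, n = d.shape
    p = np.full(r, 0.9)  # initial sensitivities
    q = np.full(r, 0.9)  # initial specificities
    for _ in range(n_iter):
        # e-step: posterior probability that the true label is 1 per voxel
        a = prior * np.prod(np.where(d == 1, p[:, None], 1 - p[:, None]), axis=0)
        b = (1 - prior) * np.prod(np.where(d == 0, q[:, None], 1 - q[:, None]), axis=0)
        w = a / (a + b)
        # m-step: re-estimate rater performance given the soft reference
        p = (d * w).sum(axis=1) / w.sum()
        q = ((1 - d) * (1 - w)).sum(axis=1) / (1 - w).sum()
    return w, p, q
```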
our submission is based on the publicly available advanced normalization tools ( ants ) specifically tailored for this problem domain . we provide a brief description with actual code calls to facilitate reproducibility . time plots and k^trans values , based on the validation methodology of [ 11 ] , are also provided to determine clinically relevant performance levels . story_separator_special_tag we evaluate the impact of template choice on template-based segmentation of the hippocampus in epilepsy . four dataset-specific strategies are quantitatively contrasted : the closest to average individual template , the average shape version of the closest to average template , a best appearance template and the best appearance and shape template proposed here and implemented in the open source toolkit advanced normalization tools ( ants ) . the cross-correlation similarity metric drives the correspondence model and is used consistently to determine the optimal appearance . minimum shape distance in the diffeomorphic space determines optimal shape . our evaluation results show that , with respect to gold-standard manual labeling of hippocampi in epilepsy , optimal shape and appearance template construction outperforms the other strategies for gaining data-derived templates . our results also show the improvement is most significant on the diseased side and insignificant on the healthy side . thus , the importance of the template increases when used to study pathology and may be less critical for normal control studies . furthermore , explicit geometric optimization of the shape component of the unbiased template positively impacts the study of diseased hippocampi . story_separator_special_tag this paper proposes a novel theoretical framework to model and analyze the statistical characteristics of a wide range of segmentation methods that incorporate a database of label maps or atlases ; such methods are termed label fusion or multiatlas segmentation . we model these multiatlas segmentation problems as nonparametric regression problems in the high-dimensional space of image patches . we analyze the nonparametric estimator 's convergence behavior that characterizes expected segmentation error as a function of the size of the multiatlas database . we show that this error has an analytic form involving several parameters that are fundamental to the specific segmentation problem ( determined by the chosen anatomical structure , imaging modality , registration algorithm , and label-fusion algorithm ) . we describe how to estimate these parameters and show that several human anatomical structures exhibit the trends modeled analytically . we use these parameter estimates to optimize the regression estimator . we show that the expected error for large database sizes is well predicted by models learned on small databases . thus , a few expert segmentations can help predict the database sizes required to keep the expected error below a specified tolerance level . such cost-benefit story_separator_special_tag the automation of segmentation of subcortical structures in the brain is an active research area . we have comprehensively evaluated four novel methods of fully automated segmentation of subcortical structures using volumetric , spatial overlap and distance-based measures .
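two of the three classes of measures just named can be sketched in a few lines ( the distance-based hausdorff measure requires surface extraction and is omitted here ) :

```python
import numpy as np

def dice(a, b):
    """dice similarity coefficient between two binary masks (spatial overlap)."""
    a, b = a.astype(bool), b.astype(bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

def abs_volume_diff_pct(seg, ref):
    """percentage absolute volumetric difference relative to the reference."""
    return 100.0 * abs(int(seg.sum()) - int(ref.sum())) / ref.sum()
```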
two methods are atlas-based - classifier fusion and labelling ( cfl ) and expectation-maximisation segmentation using a brain atlas ( ems ) - and two incorporate statistical models of shape and appearance - profile active appearance models ( pam ) and bayesian appearance models ( bam ) . each method was applied to the segmentation of 18 subcortical structures in 270 subjects from a diverse pool varying in age , disease , sex and image acquisition parameters . our results showed that all four methods perform on par with recently published methods . cfl performed better than the others according to all three classes of metrics . in summary over all structures , the ranking by the dice coefficient was cfl , bam , joint ems and pam . the hausdorff distance ranked the methods as cfl , joint pam and bam , ems , whilst percentage absolute volumetric difference ranked them as joint cfl and pam , joint bam and story_separator_special_tag although many atlas-based segmentation methods have been developed and validated for the human brain , limited work has been done for the mouse brain . this paper investigated the roles of image registration and segmentation model complexity in mouse brain segmentation . we employed four segmentation models [ single atlas , multiatlas , simultaneous truth and performance level estimation ( staple ) and markov random field ( mrf ) ] via four different image registration algorithms [ affine , b-spline free-form deformation ( ffd ) , demons and large deformation diffeomorphic metric mapping ( lddmm ) ] for delineating 19 structures from in vivo magnetic resonance microscopy images . we validated their accuracies against manual segmentation . our results revealed that lddmm outperformed demons , ffd and affine in any of the segmentation models . under the same registration , increasing segmentation model complexity from single atlas to multiatlas , staple or mrf significantly improved the segmentation accuracy . interestingly , the multiatlas-based segmentations using nonlinear registrations ( ffd , demons and lddmm ) had similar performance to their staple counterparts , while they both outperformed their mrf counterparts . furthermore , when the single-atlas affine segmentation was used as reference story_separator_special_tag the evaluation of ventricular function is important for the diagnosis of cardiovascular diseases . it typically involves measurement of the left ventricular ( lv ) mass and lv cavity volume . manual delineation of the myocardial contours is time-consuming and dependent on the subjective experience of the expert observer . in this paper , a multi-atlas method is proposed for cardiac magnetic resonance ( mr ) image segmentation . the proposed method is novel in two aspects . first , it formulates a patch-based label fusion model in a bayesian framework . second , it improves image registration accuracy by utilizing label information , which leads to improvement of segmentation accuracy . the proposed method was evaluated on a cardiac mr image set of 28 subjects . the average dice overlap metric of our segmentation is 0.92 for the lv cavity , 0.89 for the right ventricular cavity and 0.82 for the myocardium . the results show that the proposed method is able to provide accurate information for clinical diagnosis . story_separator_special_tag
segmenting brain images via multiple atlases outperforms single-atlas labelling in humans . we present a set of atlases manually delineated on brain mri scans of the monkey macaca fascicularis .w e use this multi-atlas dataset to evaluate two automated methods in terms of accuracy , robustness and reliability in segmenting brain structures on mri and extracting regional pet measures . methods : twelve individual macaca fascicularis high-resolution 3dt1 mr images were acquired . four individual atlases were created by manually drawing 42 anatomical structures , including cortical and sub-cortical structures , white matter regions , and ventricles . to create the mri template , we first chose one mri to define a reference space , and then performed a two-step iterative procedure : affine registration of individual mris to the reference mri , followed by averaging of the twelve resampled mris . automated segmentation in native space was obtained in two ways : 1 ) maximum probability atlases were created by decision fusion of two to four individual atlases in the reference space , and transformation back into the individual story_separator_special_tag this paper examine the euler-lagrange equations for the solution of the large deformation diffeomorphic metric mapping problem studied in dupuis et al . ( 1998 ) and trouve ( 1995 ) in which two images i 0 , i 1 are given and connected via the diffeomorphic change of coordinates i 0 ? ? ? 1=i 1 where ? = ? 1 is the end point at t= 1 of curve ? t , t ? [ 0 , 1 ] satisfying . ? t =v t ( ? t ) , t ? [ 0,1 ] with ? 0=id . the variational problem takes the form $ $ \\mathop { \\arg { \\text { m } } in } \\limits_ { on : \\dot \\phi _t = on _t \\left ( { \\dot \\phi } \\right ) } \\left ( { \\int_0^1 { \\left\\| { on _t } \\right\\| } ^2 { \\text { d } } t + \\left\\| { i_0 \\circ \\phi _1^ { - 1 } - i_1 } \\right\\|_ { l^2 } ^2 } \\right ) , $ $ where ? v t ? v is an appropriate sobolev norm on the velocity field v story_separator_special_tag abstract volumetric measurements obtained from image parcellation have been instrumental in uncovering structure function relationships . however , anatomical study of the cerebellum is a challenging task . because of its complex structure , expert human raters have been necessary for reliable and accurate segmentation and parcellation . such delineations are time-consuming and prohibitively expensive for large studies . therefore , we present a three-part cerebellar parcellation system that utilizes multiple inexpert human raters that can efficiently and expediently produce results nearly on par with those of experts . this system includes a hierarchical delineation protocol , a rapid verification and evaluation process , and statistical fusion of the inexpert rater parcellations . the quality of the raters and fused parcellations was established by examining their dice similarity coefficient , region of interest ( roi ) volumes , and the intraclass correlation coefficient of region volume . the intra-rater icc was found to be 0.93 at the finest level of parcellation . story_separator_special_tag abstract in this paper , we present a set of techniques for the evaluation of brain tissue classifiers on a large data set of mr images of the head . due to the difficulty of establishing a gold standard for this type of data , we focus our attention on methods which do not require a ground truth , but instead rely on a common agreement principle . 
three different techniques are presented : the williams index , a measure of common agreement ; staple , an expectation maximization algorithm which simultaneously estimates performance parameters and constructs an estimated reference standard ; and multidimensional scaling , a visualization technique to explore similarity data . we apply these different evaluation methodologies to a set of eleven different segmentation algorithms on forty mr images . we then validate our evaluation pipeline by building a ground truth based on human expert tracings . the evaluations with and without a ground truth are compared . our findings show that comparing classifiers without a gold standard can provide a lot of interesting information . in particular , outliers can be easily detected , strongly consistent or highly variable techniques can be readily discriminated , and story_separator_special_tag purpose : expert manual labeling is the gold standard for image segmentation , but this process is difficult , time-consuming , and prone to inter-individual differences . while fully automated methods have successfully targeted many anatomies , automated methods have not yet been developed for numerous essential structures ( e.g. , the internal structure of the spinal cord as seen on magnetic resonance imaging ) . collaborative labeling is a new paradigm that offers a robust alternative that may realize both the throughput of automation and the guidance of experts . yet , distributing manual labeling expertise across individuals and sites introduces potential human factors concerns ( e.g. , training , software usability ) and statistical considerations ( e.g. , fusion of information , assessment of confidence , bias ) that must be further explored . during the labeling process , it is simple to ask raters to self-assess the confidence of their labels , but this is rarely done and has not been previously quantitatively studied . herein , the authors explore the utility of self-assessment in relation to automated assessment of rater performance in the context of statistical fusion . methods : the authors conducted a study of story_separator_special_tag we propose a new measure , the method noise , to evaluate and compare the performance of digital image denoising methods . we first compute and analyze this method noise for a wide class of denoising algorithms , namely the local smoothing filters . second , we propose a new algorithm , the nonlocal means ( nl-means ) , based on a nonlocal averaging of all pixels in the image . finally , we present some experiments comparing the nl-means algorithm and the local smoothing filters . story_separator_special_tag the national library of medicine ( nlm ) is developing a digital chest x-ray ( cxr ) screening system for deployment in resource constrained communities and developing countries worldwide with a focus on early detection of tuberculosis . a critical component in the computer-aided diagnosis of digital cxrs is the automatic detection of the lung regions . in this paper , we present a nonrigid registration-driven robust lung segmentation method using image retrieval-based patient specific adaptive lung models that detects lung boundaries , surpassing state-of-the-art performance . 
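( the staple estimator referred to above alternates between estimating a probabilistic reference standard and re-estimating each rater 's sensitivity and specificity . a compact sketch of the binary em iteration in python , assuming flattened binary rater decisions , a uniform foreground prior taken from the mean vote , and a fixed iteration count ; this follows the standard formulation and is not code from the cited evaluation study : )

import numpy as np

def staple_binary(D, n_iter=30):
    # D : (n_voxels, n_raters) binary decisions from each rater
    n, r = D.shape
    p = np.full(r, 0.99)          # per-rater sensitivity
    q = np.full(r, 0.99)          # per-rater specificity
    prior = D.mean()              # scalar prior probability of foreground
    for _ in range(n_iter):
        # e-step : posterior probability w that each voxel is truly foreground
        a = prior * np.prod(np.where(D == 1, p, 1 - p), axis=1)
        b = (1 - prior) * np.prod(np.where(D == 1, 1 - q, q), axis=1)
        w = a / (a + b + 1e-12)
        # m-step : re-estimate sensitivity and specificity of each rater
        p = (w[:, None] * D).sum(axis=0) / (w.sum() + 1e-12)
        q = ((1 - w)[:, None] * (1 - D)).sum(axis=0) / ((1 - w).sum() + 1e-12)
    return w, p, q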
the method consists of three main stages : 1 ) a content-based image retrieval approach for identifying training images ( with masks ) most similar to the patient cxr using a partial radon transform and bhattacharyya shape similarity measure , 2 ) creating the initial patient-specific anatomical model of lung shape using sift-flow for deformable registration of training masks to the patient cxr , and 3 ) extracting refined lung boundaries using a graph cuts optimization approach with a customized energy function . our average accuracy of 95.4 % on the public jsrt database is the highest among published results . a similar degree of accuracy of 94.1 % and 91.7 % on story_separator_special_tag atlas selection plays an important role in multiatlas based image segmentation . in atlas selection methods , manifold learning based techniques have recently emerged as very promising . however , due to the complexity of anatomical structures in raw images , it is difficult to get accurate atlas selection results by measuring only the distance between raw images on the manifolds . in this paper , we tackle this problem by proposing a label image constrained atlas selection ( licas ) method to exploit the shape and size information of the regions to be segmented from the label images . constrained by the label images , a new manifold projection method is developed to help uncover the intrinsic similarity between the regions of interest across images . compared with other existing methods , the experimental results of segmentation on 60 magnetic resonance ( mr ) images showed that the selected atlases are closer to the target structure and more accurate segmentation can be obtained by using the proposed method . story_separator_special_tag in medical image analysis , atlas-based segmentation has become a popular approach . given a target image , how to select the atlases whose anatomical structures are most similar in shape to the input image is one of the most critical factors affecting the segmentation accuracy . in this paper , we propose a novel strategy of putting the images on a manifold to analyze the intrinsic similarity between the images . a subset of atlases can be selected and the optimal fusion weights are computed in a low-dimensional manifold space . finally , the method combines the selected atlases by using the corresponding weights for image segmentation . the experimental results demonstrated that our proposed method is robust and accurate , especially when a large number of training samples are available . story_separator_special_tag anatomical segmentation of structures of interest is critical to quantitative analysis in medical imaging . several automated multi-atlas based segmentation propagation methods that utilise manual delineations from multiple templates appear promising . however , high levels of accuracy and reliability are needed for use in diagnosis or in clinical trials . we propose a new local ranking strategy for template selection based on the locally normalised cross correlation ( lncc ) and an extension to the classical staple algorithm by warfield et al . ( 2004 ) , which we refer to as steps for similarity and truth estimation for propagated segmentations . it addresses the well-known problems of local vs. global image matching and the bias introduced in the performance estimation due to structure size .
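( the locally normalised cross correlation ( lncc ) used above for local template ranking can be computed from gaussian-weighted local means and variances . an illustrative sketch in python , where the kernel width sigma is a free parameter of our choosing : )

import numpy as np
from scipy.ndimage import gaussian_filter

def lncc(target, atlas, sigma=2.0):
    # gaussian-weighted local means of each image
    mt, ma = gaussian_filter(target, sigma), gaussian_filter(atlas, sigma)
    # local covariance and variances from local second moments
    cov = gaussian_filter(target * atlas, sigma) - mt * ma
    vt = gaussian_filter(target * target, sigma) - mt * mt
    va = gaussian_filter(atlas * atlas, sigma) - ma * ma
    return cov / np.sqrt(np.clip(vt * va, 1e-12, None))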
we assessed the method on hippocampal segmentation using a leave-one-out cross validation with optimised model parameters ; steps achieved a mean dice score of 0.925 when compared with manual segmentation . this was significantly better in terms of segmentation accuracy when compared to other state-of-the-art fusion techniques . furthermore , due to the finer anatomical scale , steps also obtains more accurate segmentations even when using only a third of the templates , story_separator_special_tag classically , model-based segmentation procedures match magnetic resonance imaging ( mri ) volumes to an expertly labeled atlas using nonlinear registration . the accuracy of these techniques is limited due to atlas biases , misregistration , and resampling error . multi atlas based approaches are used as a remedy and involve matching each subject to a number of manually labeled templates . this approach yields numerous independent segmentations that are fused using a voxel by voxel label voting procedure . in this article , we demonstrate how the multi atlas approach can be extended to work with input atlases that are unique and extremely time consuming to construct by generating a library of multiple automatically generated templates of different brains ( maget brain ) . we demonstrate the efficacy of our method for the mouse and human using two different nonlinear registration algorithms ( animal and ants ) . the input atlases consist of a high-resolution mouse brain atlas and an atlas of the human basal ganglia and thalamus derived from serial histological data . maget brain segmentation improves the identification of the mouse anterior commissure ( mean dice kappa value = 0.801 ) , but may be story_separator_special_tag we developed and validated a new method to create automated 3d parametric surface models of the lateral ventricles in brain mri scans , providing an efficient approach to monitor degenerative disease in clinical studies and drug trials . first , we used a set of parameterized surfaces to represent the ventricles in four subjects ' manually labeled brain mri scans ( atlases ) . we fluidly registered each atlas and mesh model to mris from 17 alzheimer 's disease ( ad ) patients and 13 age- and gender-matched healthy elderly control subjects , and 18 asymptomatic apoe4-carriers and 18 age- and gender-matched non-carriers . we examined genotyped healthy subjects with the goal of detecting subtle effects of a gene that confers heightened risk for alzheimer 's disease . we averaged the meshes extracted for each 3d mr data set , and combined the automated segmentations with a radial mapping approach to localize ventricular shape differences in patients . validation experiments comparing automated and expert manual segmentations showed that ( 1 ) the hausdorff labeling error rapidly decreased , and ( 2 ) the power to detect disease- and gene-related alterations improved , as the number of atlases , n , story_separator_special_tag this paper presents diffeomorphic transformations of three-dimensional ( 3-d ) anatomical image data of the macaque occipital lobe and whole brain cryosection imagery and of deep brain structures in human brains as imaged via magnetic resonance imagery . these transformations are generated in a hierarchical manner , accommodating both global and local anatomical detail . the initial low-dimensional registration is accomplished by constraining the transformation to be in a low-dimensional basis .
the basis is defined by the green 's function of the elasticity operator placed at predefined locations in the anatomy and the eigenfunctions of the elasticity operator . the high-dimensional large deformations are vector fields generated via the mismatch between the template and target-image volumes , constrained to be the solution of a navier-stokes fluid model . as part of this procedure , the jacobian of the transformation is tracked , ensuring the generation of diffeomorphisms . it is shown that transformations constrained by quadratic regularization methods such as the laplacian , biharmonic , and linear elasticity models do not ensure that the transformation maintains topology and , therefore , must only be used for coarse global registration . story_separator_special_tag our aim was to compare the predictive accuracy of 4 different medial temporal lobe measurements for alzheimer 's disease ( ad ) in subjects with mild cognitive impairment ( mci ) . manual hippocampal measurement , automated atlas-based hippocampal measurement , a visual rating scale ( mta-score ) , and lateral ventricle measurement were compared . predictive accuracy for ad 2 years after baseline was assessed by receiver operating characteristic analyses with area under the curve as outcome . annual cognitive decline was assessed by slope analyses up to 5 years after baseline . correlations with biomarkers in cerebrospinal fluid ( csf ) were investigated . subjects with mci were selected from the development of screening guidelines and clinical criteria for predementia ad ( descripa ) multicenter study ( n = 156 ) and the single-center vu medical center ( n = 172 ) . at follow-up , area under the curve was highest for automated atlas-based hippocampal measurement ( 0.71 ) and manual hippocampal measurement ( 0.71 ) , and lower for mta-score ( 0.65 ) and lateral ventricle ( 0.60 ) . slope analysis yielded similar results . hippocampal measurements correlated with csf total tau and phosphorylated tau story_separator_special_tag explicit segmentation is required for many forms of quantitative neuroanatomic analysis . however , manual methods are time-consuming and subject to errors in both accuracy and reproducibility ( precision ) . a 3d model-based segmentation method is presented in this paper for the completely automatic identification and delineation of gross anatomical structures of the human brain based on their appearance in magnetic resonance images ( mri ) . story_separator_special_tag we present a new algorithm , called local map staple , to estimate from a set of multi-label segmentations both a reference standard segmentation and spatially varying performance parameters . it is based on a sliding window technique to estimate the segmentation and the segmentation performance parameters for each input segmentation . in order to allow for optimal fusion from the small amount of data in each local region , and to account for the possibility of labels not being observed in a local region of some ( or all ) input segmentations , we introduce prior probabilities for the local performance parameters through a new maximum a posteriori formulation of staple . further , we propose an expression to compute confidence intervals in the estimated local performance parameters . we carried out several experiments with local map staple to characterize its performance and value for local segmentation evaluation .
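( the sliding-window idea behind local map staple can be caricatured with a much simpler stand-in : fuse labels window by window , weighting each input segmentation by its local agreement with the window 's majority vote . this sketch only mimics the spatial adaptivity over a 2d volume ; it is not the maximum a posteriori formulation of the cited algorithm : )

import numpy as np

def local_weighted_fusion(D, shape, win=8):
    # D : (n_voxels, n_raters) binary decisions ; shape : 2d spatial shape
    stack = D.reshape(shape + (D.shape[1],))
    fused = np.zeros(shape)
    for x0 in range(0, shape[0], win):
        for y0 in range(0, shape[1], win):
            blk = stack[x0:x0 + win, y0:y0 + win]        # (wx, wy, n_raters)
            maj = blk.mean(axis=-1) > 0.5                # local majority mask
            # local weight of each rater = agreement with the local majority
            agree = (blk == maj[..., None]).mean(axis=(0, 1))
            w = agree / (agree.sum() + 1e-12)
            fused[x0:x0 + win, y0:y0 + win] = (blk * w).sum(axis=-1)
    return fused > 0.5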
first , with simulated segmentations with known reference standard segmentation and spatially varying performance , we show that local map staple performs better than both staple and majority voting . then we present evaluations with data sets from clinical applications . these experiments demonstrate that spatial adaptivity in segmentation performance story_separator_special_tag quantitative magnetic resonance analysis often requires accurate , robust , and reliable automatic extraction of anatomical structures . recently , template-warping methods incorporating a label fusion strategy have demonstrated high accuracy in segmenting cerebral structures . in this study , we propose a novel patch-based method using expert manual segmentations as priors to achieve this task . inspired by recent work in image denoising , the proposed nonlocal patch-based label fusion produces accurate and robust segmentation . validation with two different datasets is presented . in our experiments , the hippocampi of 80 healthy subjects and the lateral ventricles of 80 patients with alzheimer 's disease were segmented . the influence on segmentation accuracy of different parameters such as patch size and number of training subjects was also studied . a comparison with an appearance-based method and a template-based method was also carried out . the highest median kappa index values obtained with the proposed method were 0.884 for hippocampus segmentation and 0.959 for lateral ventricle segmentation . story_separator_special_tag clustering algorithms have found application in tissue classification in mri . standard techniques such as k-means iteratively define intensity clusters based on the distribution of voxels in intensity space . spectral clustering is potentially more powerful as it models voxel-to-voxel relationships rather than voxel-to-cluster relationships .
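( the nonlocal patch-based label fusion described above assigns each target voxel a weighted average of atlas labels , with weights decaying in the intensity distance between image patches , in the spirit of non-local means denoising . a single-voxel 2d sketch , assuming all indices stay in bounds ; patch radius r and decay h are free parameters of our choosing : )

import numpy as np

def patch(img, x, y, r):
    # square intensity patch of radius r centred on (x, y)
    return img[x - r:x + r + 1, y - r:y + r + 1]

def patch_fusion_at(target, atlas_imgs, atlas_labs, x, y, r=2, h=0.5):
    # estimate the label of target voxel (x, y) from patches in a small
    # search neighbourhood of every atlas ( non-local means style weights )
    tp = patch(target, x, y, r)
    num, den = 0.0, 0.0
    for img, lab in zip(atlas_imgs, atlas_labs):
        for dx in range(-r, r + 1):
            for dy in range(-r, r + 1):
                ap = patch(img, x + dx, y + dy, r)
                w = np.exp(-np.sum((tp - ap) ** 2) / (h ** 2))
                num += w * lab[x + dx, y + dy]   # binary labels assumed
                den += w
    return num / den > 0.5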
unfortunately , for images of n voxels , naive application leads to an n ( n - 1 ) / 2 voxel comparison problem and an order n × n eigenvalue problem , which has prevented these techniques being widely investigated in 3d medical imaging . in this paper we report an empirical evaluation of a stochastic sampling approach to modelling voxel-to-voxel relationships for spectral clustering . stochastic sampling captures sufficient intensity structure to give plausible tissue classification in 3d brain mri . we test the stability of our approach to similarity parameter choice , sample size and stochastic effects in simulated and real 3d mr images . story_separator_special_tag multi-atlas segmentation propagation has evolved quickly in recent years , becoming a state-of-the-art method for automatic structural parcellation for brain mri . however , few studies have applied these methods to preclinical research . in this study , we present a fully automatic multi-atlas segmentation pipeline for mouse brain mri tissue parcellation . the pipeline adopts the multi-steps multi-atlas segmentation algorithm , which utilises a locally normalised cross correlation ( lncc ) similarity metric for atlas selection and an extended staple framework for multi-label fusion . the segmentation accuracy of the pipeline was evaluated using an in vivo mouse brain atlas with pre-segmented manual labels as gold standard , and optimised parameters were obtained . results show a mean dice similarity coefficient of 0.839 over all the structures and for all the samples in the database , significantly higher than with a single atlas propagation strategy , and also generally higher than with the staple strategy , although the improvement is not significant . story_separator_special_tag purpose : the spatial normalization and registration of tomographic images from different subjects is a major problem in several medical imaging areas , including functional image analysis , morphometrics , and computer-aided neurosurgery . the focus of this article is the development of a computerized methodology for the spatial normalization of 3d images . method : we propose a technique that is based on geometric deformable models . in particular , we first describe a deformable surface algorithm that finds a mathematical representation of the outer cortical surface . based on this representation , a procedure for obtaining a map between corresponding regions of the outer cortex in two different images is established . this map is subsequently used to derive a 3d elastic warping transformation , which brings two images into register . results : the performance of our algorithm is demonstrated on several datasets . in particular , we first test our deformable surface algorithm on mr images . we then register mr images to atlas images . in our third experiment , we apply a procedure for matching distinct cortical features identified through the curvature map of the outer cortex . finally , we apply our technique to images from story_separator_special_tag the study presented in this paper tests the hypothesis that the combination of a global similarity transformation and local free-form deformations can be used for the accurate segmentation of internal structures in mr images of the brain .
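( one way to avoid the n ( n - 1 ) / 2 comparisons noted above is to build the affinity matrix only over a random sample of voxels , cluster the sample spectrally , and assign the remaining voxels to the nearest cluster mean intensity . a rough sketch using scikit-learn ; this is our own simplification , not the sampling scheme of the cited paper : )

import numpy as np
from sklearn.cluster import SpectralClustering

def sampled_spectral_tissue_labels(intensities, n_classes=3, m=2000, seed=0):
    # intensities : flattened voxel intensities ; cluster only m sampled voxels
    rng = np.random.default_rng(seed)
    idx = rng.choice(intensities.size, size=m, replace=False)
    s = intensities[idx]
    # affinity between sampled voxels from intensity differences
    W = np.exp(-(s[:, None] - s[None, :]) ** 2 / (2 * s.std() ** 2))
    labels_s = SpectralClustering(n_clusters=n_classes,
                                  affinity='precomputed',
                                  random_state=seed).fit_predict(W)
    # assign every remaining voxel to the cluster with the closest mean intensity
    means = np.array([s[labels_s == k].mean() for k in range(n_classes)])
    return np.abs(intensities[:, None] - means[None, :]).argmin(axis=1)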
to quantitatively evaluate the authors ' approach , the entire brain , the cerebellum , and the head of the caudate have been segmented manually by two raters on one of the volumes ( the reference volume ) and mapped back onto all the other volumes , using the computed transformations . the contours so obtained have been compared to contours drawn manually around the structures of interest in each individual brain . manual delineation was performed twice by the same two raters to test inter- and intrarater variability . for the brain and the cerebellum , results indicate that for each rater , contours obtained manually and contours obtained automatically by deforming his own atlas are virtually indistinguishable . furthermore , contours obtained manually by one rater and contours obtained automatically by deforming this rater 's own atlas are more similar than contours obtained manually by two raters . for the caudate , manual intra- and interrater similarity indexes remain slightly story_separator_special_tag label fusion is a multi-atlas segmentation approach that explicitly maintains and exploits the entire training dataset , rather than a parametric summary of it . recent empirical evidence suggests that label fusion can achieve significantly better segmentation accuracy over classical parametric atlas methods that utilize a single coordinate frame . however , this performance gain typically comes at an increased computational cost due to the many pairwise registrations between the novel image and training images . in this work , we present a modified label fusion method that approximates these pairwise warps by first pre-registering the training images via a diffeomorphic groupwise registration algorithm . the novel image is then only registered once , to the template image that represents the average training subject . the pairwise spatial correspondences between the novel image and training images are then computed via concatenation of appropriate transformations . our experiments on cardiac mr data suggest that this strategy for nonparametric segmentation dramatically improves computational efficiency , while producing segmentation results that are statistically indistinguishable from those obtained with regular label fusion . these results suggest that the key benefit of label fusion approaches is the underlying nonparametric inference algorithm , and not the story_separator_special_tag large variations occur in brain anatomical structures in human populations , presenting a critical challenge to the brain mapping process . this study investigates the major impact of these variations on the performance of atlas-based segmentation . it is based on two publicly available datasets , from each of which 17 t1-weighted brain atlases were extracted . each subject was registered to every other subject using the morphons , a non-rigid registration algorithm . the automatic segmentations , obtained by warping the segmentation of this template , were compared with the expert segmentations using dice index and the differences were statistically analyzed using bonferroni multiple comparisons at significance level 0.05. the results showed that an optimum atlas for accurate segmentation of all structures can not be found , and that the group of preferred templates , defined as being significantly superior to at least two other templates regarding the segmentation accuracy , varies significantly from structure to structure . 
moreover , compared to other templates , a template giving the best accuracy in segmentation of some structures can provide highly inferior segmentation accuracy for other structures . it is concluded that there is no template optimum for automatic segmentation of story_separator_special_tag rationale and objectives : we present a new method for automatic brain extraction on structural magnetic resonance images , based on a multi-atlas registration framework . materials and methods : our method addresses fundamental challenges of multi-atlas approaches . to overcome the difficulties arising from the variability of imaging characteristics between studies , we propose a study-specific template selection strategy , by which we select a set of templates that best represent the anatomical variations within the data set . against the difficulties of registering brain images with skull , we use a particularly adapted registration algorithm that is more robust to large variations between images , as it adaptively aligns different regions of the two images based not only on their similarity but also on the reliability of the matching between images . finally , a spatially adaptive weighted voting strategy , which uses the ranking of jacobian determinant values to measure the local similarity between the template and the target images , is applied for combining coregistered template masks . results : the method is validated on three different public data sets and obtained a higher accuracy than recent state-of-the-art brain extraction methods . also , the proposed method is successfully story_separator_special_tag multi-atlas segmentation has been widely used to segment various anatomical structures . the success of this technique partly relies on the selection of atlases that are best mapped to a new target image after registration . recently , manifold learning has been proposed as a method for atlas selection . each manifold learning technique seeks to optimize a unique objective function . therefore , different techniques produce different embeddings even when applied to the same data set . previous studies used a single technique in their method and gave no reason for the choice of the manifold learning technique employed nor the theoretical grounds for the choice of the manifold parameters . in this study , we compare side-by-side the results given by 3 manifold learning techniques ( isomap , laplacian eigenmaps and locally linear embedding ) on the same data set . we assess the ability of those 3 different techniques to select the best atlases to combine in the framework of multi-atlas segmentation . first , a leave-one-out experiment is used to optimize our method on a set of 110 manually segmented atlases of hippocampi and find the manifold learning technique and associated manifold parameters that give the story_separator_special_tag much recent research has been devoted to learning algorithms for deep architectures such as deep belief networks and stacks of auto-encoder variants , with impressive results obtained in several areas , mostly on vision and language data sets . the best results obtained on supervised learning tasks involve an unsupervised learning component , usually in an unsupervised pre-training phase . even though these new algorithms have enabled training deep models , many questions remain as to the nature of this difficult learning problem . the main question investigated here is the following : how does unsupervised pre-training work ?
answering this question is important if learning in deep architectures is to be further improved . we propose several explanatory hypotheses and test them through extensive simulations . we empirically show the influence of pre-training with respect to architecture depth , model capacity , and number of training examples . the experiments confirm and clarify the advantage of unsupervised pre-training . the results suggest that unsupervised pre-training guides the learning towards basins of attraction of minima that support better generalization from the training data set ; the evidence from these results supports a regularization explanation for the effect of pre-training . story_separator_special_tag we present a technique for automatically assigning a neuroanatomical label to each voxel in an mri volume based on probabilistic information automatically estimated from a manually labeled training set . in contrast to existing segmentation procedures that only label a small number of tissue classes , the current method assigns one of 37 labels to each voxel , including left and right caudate , putamen , pallidum , thalamus , lateral ventricles , hippocampus , and amygdala . the classification technique employs a registration procedure that is robust to anatomical variability , including the ventricular enlargement typically associated with neurological diseases and aging . the technique is shown to be comparable in accuracy to manual labeling , and of sufficient sensitivity to robustly detect changes in the volume of noncortical structures that presage the onset of probable alzheimer 's disease . story_separator_special_tag we propose a new method combining a population-specific nonlinear template atlas approach with non-local patch-based structure segmentation for whole brain segmentation into individual structures . this way , we benefit from the efficient intensity-driven segmentation of the non-local means framework and from the global shape constraints imposed by the nonlinear template matching . story_separator_special_tag purpose : hyperthermia treatment of head and neck tumors requires accurate treatment planning , based on 3d patient models that are derived from segmented 3d images . these segmentations are currently obtained by manual outlining of the relevant tissue regions , which is a tedious and time-consuming procedure ( 8 h ) limiting the clinical applicability of hyperthermia treatment . in this context , the authors present and evaluate an automatic segmentation algorithm for ct images of the head and neck . methods : the proposed method combines anatomical information , based on atlas registration , with local intensity information in a graph cut framework . the method is evaluated with respect to ground truth manual delineation and compared with multiatlas-based segmentation on a dataset of 18 labeled ct images using the dice similarity coefficient ( dsc ) , the mean surface distance ( msd ) , and the hausdorff surface distance ( hsd ) as evaluation measures . on a subset of 13 labeled images , the influence of different labelers on the method 's accuracy is quantified and compared with the interobserver variability . results : for the dsc , the proposed method performs significantly better for the story_separator_special_tag purpose : accurate delineation of organs at risk ( oars ) is a precondition for intensity modulated radiation therapy . however , manual delineation of oars is time-consuming and prone to high interobserver variability .
because of image artifacts and low image contrast between different structures , however , the number of available approaches for autosegmentation of structures in the head-neck area is still rather low . in this project , a new approach for automated segmentation of head-neck ct images that combines the robustness of multiatlas-based segmentation with the flexibility of geodesic active contours and the prior knowledge provided by statistical appearance models is presented . methods : the presented approach uses atlas-based segmentation in combination with label fusion in order to initialize a segmentation pipeline that is based on statistical appearance models and geodesic active contours . an anatomically correct approximation of the segmentation result provided by atlas-based segmentation acts as a starting point for an iterative refinement of this approximation . the final segmentation result is based on model-to-image registration and geodesic active contours , which are mutually influencing each other . results : 18 ct images in combination story_separator_special_tag a semi-supervised segmentation method using a single atlas is presented in this paper . traditional atlas-based segmentation suffers from either a strong bias towards the selected atlas or the need for manual effort to create multiple atlas images . similar to semi-supervised learning in computer vision , we study a method which exploits information contained in a set of unlabelled images by mutually registering them non-rigidly and propagating the single atlas segmentation over multiple such registration paths to each target . these multiple segmentation hypotheses are then fused by local weighting based on registration similarity . our results on two datasets of different anatomies and image modalities , corpus callosum mr and mandible ct images , show a significant improvement in segmentation accuracy compared to traditional single atlas based segmentation . we also show that the bias towards the selected atlas is minimized using our method . additionally , we devise a method for the selection of intermediate targets used for propagation , in order to reduce the number of necessary inter-target registrations without loss of final segmentation accuracy . story_separator_special_tag a system and method of analysis for a medical image are described . a medical image is received and analyzed , and an initial border of a region within the medical image is determined based on the analysis of the medical image . a user input is received indicating one or more control points , where each of the one or more control points is located inside or outside of the initial border . a modified border of the region is determined based on the analysis and the user input , the modified border passing through the one or more control points . story_separator_special_tag registration algorithms can facilitate the automatic anatomical segmentation of pediatric brain mr data sets when segmentation priors ( atlases ) are in hand . automatic segmentation can be achieved through label propagation and label fusion in target space . we investigated the performance of different age cohorts used as prior atlases for the segmentation of 13 mris of 1-year-olds . thirty adults and 33 2-year-olds ( including the 13 1-year-olds , scanned a year later ) served as priors for label propagation and fusion . in addition , we tested the accuracy of a single propagation step of the atlas of the same subject scanned at 2 years of age .
pediatric priors performed better than adult priors on visual inspection as well as manual validation of the caudate nucleus ( dice index = 0.89 ± 0.02 vs. 0.86 ± 0.03 ) . corresponding single atlases at the age of 2 performed better than the fusion of 30 adult priors ( 83 rois / average dice = 0.87 ± 0.05 vs. 0.84 ± 0.07 ) . story_separator_special_tag we studied methods for the automatic segmentation of neonatal and developing brain images into 50 anatomical regions , utilizing a new set of manually segmented magnetic resonance ( mr ) images from 5 term-born and 15 preterm infants imaged at term corrected age , called alberts . two methods were compared : individual registrations with label propagation and fusion ; and template based registration with propagation of a maximum probability neonatal albert ( mpna ) . in both cases we evaluated the performance of different neonatal atlases and the mpna , and the approaches were compared with the manual segmentations by means of the dice overlap coefficient . dice values , averaged across regions , were 0.81 ± 0.02 using label propagation and fusion for the preterm population , and 0.81 ± 0.02 using the single registration of an mpna for the term population . segmentations of 36 further unsegmented target images of developing brains yielded visibly high-quality results . this registration approach allows the rapid construction of automatically labeled age-specific brain atlases for neonates and the developing brain . story_separator_special_tag three-dimensional atlases and databases of the brain at different ages facilitate the description of neuroanatomy and the monitoring of cerebral growth and development . brain segmentation is challenging in young children due to structural differences compared to adults . we have developed a method , based on established algorithms , for automatic segmentation of young children 's brains into 83 regions of interest ( rois ) , and applied this to an exemplar group of 33 2-year-old subjects who had been born prematurely . the algorithm uses prior information from 30 normal adult brain magnetic resonance ( mr ) images , which had been manually segmented to create 30 atlases , each labeling 83 anatomical structures . each of these adult atlases was registered to each 2-year-old target mr image using non-rigid registration based on free-form deformations . label propagation from each adult atlas yielded a segmentation of each 2-year-old brain into 83 rois . the final segmentation was obtained by combination of the 30 propagated adult atlases using decision fusion , improving accuracy over individual propagations . we validated this algorithm by comparing the automatic approach with three representative manually segmented volumetric regions ( the subcortical caudate nucleus , story_separator_special_tag in temporal lobe epilepsy ( tle ) , hippocampal atrophy ( ha ) is a marker of poor prognosis regarding seizure remission , but predicts success of anterior temporal lobe resection . manual quantification of ha on mri is time-consuming and limited by investigator availability . normal ranges of hippocampal volumes , both in absolute terms and relative to intracranial volume , and of hippocampal asymmetry were defined using an automatic label propagation and decision fusion technique based on thirty manually derived atlases of healthy controls . manual test-retest reliability and overlaps of automatically and manually determined hippocampal volumes were quantified with similarity indices ( sis ) .
correct clinical identification of ipsilateral ha , and contralaterally normal hippocampal volumes , was determined in nine patients with histologically confirmed hippocampal sclerosis in terms of volumes and asymmetry indices ( ais ) for standard statistical thresholds and with receiver operating characteristic ( roc ) analysis . manual test-retest reliability was very high , with sis between 0.87 and 0.90. manual and automatic hippocampus labels overlapped with a si of 0.83 on the unaffected but with 0.76 on the atrophic side . accuracy was higher for less atrophic hippocampi . the automatic story_separator_special_tag brain structure segmentation is an important task in many neuroscience and clinical applications . in this paper , we introduce a novel mi-based dense deformable registration method and apply it to the automatic segmentation of detailed brain structures . together with a multiple atlas fusion strategy , very accurate segmentation results were obtained , as compared with other reported methods in the literature . to make multi-atlas segmentation computationally feasible , we also propose to take advantage of the recent advancements in gpu technology and introduce a gpu-based implementation of the proposed registration method . with gpu acceleration it takes less than 8 minutes to compile a multi-atlas segmentation for each subject even with as many as 17 atlases , which demonstrates that the use of gpus can greatly facilitate the application of such atlas-based segmentation methods in practice . story_separator_special_tag for medical image segmentation , multi-atlas based segmentation methods have attracted great attention recently . within the multi-atlas segmentation framework , labels of all atlases are propagated to the target image by means of image registration and then fused to achieve segmentation of the target image . while most multi-atlas based segmentation methods focus on developing effective label fusion strategies , few of them make an effort to improve the accuracy of image registration between atlas and target images . inspired by the idea that the estimated segmentation of the target image can be used to refine the pairwise registration performance , we propose an iterative strategy to improve registration accuracy between the atlas and target images using a multi-channel registration approach . in addition , an overfitting-resistant discriminative learning procedure , referred to as jackknife context model ( jcm ) , is adopted at each iteration to improve accuracy and robustness of label fusion results . validation experiments on hippocampal segmentation have demonstrated that our method can statistically significantly improve the performance of the state-of-art multi-atlas based methods . story_separator_special_tag automatic and reliable segmentation of subcortical structures is an important but difficult task in quantitative brain image analysis . multi-atlas based segmentation methods have attracted great interest due to their promising performance . under the multi-atlas based segmentation framework , using deformation fields generated for registering atlas images onto a target image to be segmented , labels of the atlases are first propagated to the target image space and then fused to get the target image segmentation based on a label fusion strategy . while many label fusion strategies have been developed , most of these methods adopt predefined weighting models that are not necessarily optimal . 
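( a typical predefined weighting model of the kind just criticised is gaussian intensity-similarity weighted voting : each registered atlas votes at every voxel with a weight that decays with its intensity difference from the target . an illustrative sketch , with sigma a free parameter of our choosing and binary labels assumed : )

import numpy as np

def similarity_weighted_vote(target, atlas_imgs, atlas_labs, sigma=10.0):
    # weight each registered atlas voxel-wise by its intensity similarity
    # to the target image , then take a weighted vote of the atlas labels
    acc_num = np.zeros_like(target, dtype=float)
    acc_den = np.zeros_like(target, dtype=float)
    for img, lab in zip(atlas_imgs, atlas_labs):
        w = np.exp(-((target - img) ** 2) / (2 * sigma ** 2))
        acc_num += w * lab
        acc_den += w
    return acc_num / acc_den > 0.5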
in this study , we propose a novel local label learning strategy to estimate the target image 's segmentation label using statistical machine learning techniques . in particular , we use an l1-regularized support vector machine ( svm ) with a k-nearest-neighbor ( knn ) based training sample selection strategy to learn a classifier for each target image voxel from its neighboring voxels in the atlases , based on both image intensity and texture features . our method has produced segmentation results consistently better than state-of-the-art label fusion methods in story_separator_special_tag the optic nerve ( on ) plays a critical role in many devastating pathological conditions . segmentation of the on has the ability to provide understanding of anatomical development and progression of diseases of the on . recently , methods have been proposed to segment the on , but progress toward full automation has been limited . we optimize registration and fusion methods for a new multi-atlas framework for automated segmentation of the ons , eye globes , and muscles on clinically acquired computed tomography ( ct ) data . briefly , the multi-atlas approach consists of determining a region of interest within each scan using affine registration , followed by nonrigid registration on reduced field of view atlases , and performing statistical fusion on the results . we evaluate the robustness of the approach by segmenting the on structure in 501 clinically acquired ct scan volumes obtained from 183 subjects from a thyroid eye disease patient population . a subset of 30 scan volumes was manually labeled to assess accuracy and guide method choice . of the 18 compared methods , the ants symmetric normalization registration and nonlocal spatial simultaneous truth and performance level estimation statistical fusion resulted in the story_separator_special_tag regions in three-dimensional magnetic resonance ( mr ) brain images can be classified using protocols for manually segmenting and labeling structures . for large cohorts , time and expertise requirements make this approach impractical . to achieve automation , an individual segmentation can be propagated to another individual using an anatomical correspondence estimate relating the atlas image to the target image . the accuracy of the resulting target labeling has been limited but can potentially be improved by combining multiple segmentations using decision fusion . we studied segmentation propagation and decision fusion on 30 normal brain mr images , which had been manually segmented into 67 structures . correspondence estimates were established by nonrigid registration using free-form deformations . both direct label propagation and an indirect approach were tested . individual propagations showed an average similarity index ( si ) of 0.754 ± 0.016 against manual segmentations . decision fusion using 29 input segmentations increased si to 0.836 ± 0.009 . for indirect propagation of a single source via 27 intermediate images , si was 0.779 ± 0.013 . we also studied the effect of the decision fusion procedure using a numerical simulation with synthetic input data . the results helped to formulate a model that story_separator_special_tag this paper presents a novel , publicly available repository of anatomically segmented brain images of healthy subjects as well as patients with mild cognitive impairment and alzheimer 's disease .
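( the local label learning strategy described above , with knn-based sample selection and an l1-regularized svm per voxel , can be sketched with scikit-learn as follows ; the feature construction and the fallback rule are our own simplifications : )

import numpy as np
from sklearn.svm import LinearSVC
from sklearn.neighbors import NearestNeighbors

def local_label_learning(atlas_feats, atlas_labs, target_feat, k=200):
    # atlas_feats : (n_samples, n_features) intensity/texture features of
    # atlas voxels near the target location ; atlas_labs : their binary labels
    # 1) knn-based training sample selection around the target feature
    nn = NearestNeighbors(n_neighbors=k).fit(atlas_feats)
    idx = nn.kneighbors(target_feat.reshape(1, -1), return_distance=False)[0]
    X, y = atlas_feats[idx], atlas_labs[idx]
    if len(np.unique(y)) < 2:          # degenerate window : fall back to majority
        return int(y.mean() > 0.5)
    # 2) l1-regularized linear svm as the per-voxel classifier
    clf = LinearSVC(penalty='l1', dual=False).fit(X, y)
    return int(clf.predict(target_feat.reshape(1, -1))[0])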
the underlying magnetic resonance images have been obtained from the alzheimer 's disease neuroimaging initiative ( adni ) database . t1-weighted screening and baseline images ( 1.5 t and 3 t ) have been processed with the multi-atlas based maper procedure , resulting in labels for 83 regions covering the whole brain in 816 subjects . selected segmentations were subjected to visual assessment . the segmentations are self-consistent , as evidenced by strong agreement between segmentations of paired images acquired at different field strengths ( jaccard coefficient : 0.802 ± 0.0146 ) . morphometric comparisons between diagnostic groups ( normal ; stable mild cognitive impairment ; mild cognitive impairment with progression to alzheimer 's disease ; alzheimer 's disease ) showed highly significant group differences for individual regions , the majority of which were located in the temporal lobe . additionally , significant effects were seen in the parietal lobe . increased left/right asymmetry was found in posterior cortical regions . an automatically derived white-matter hypointensities index was found to be a suitable means story_separator_special_tag automatic anatomical segmentation of magnetic resonance human brain images has been shown to be accurate and robust when based on multiple atlases that encompass the anatomical variability of the cohort of subjects . we observed that the method tends to fail when the segmentation target shows ventricular enlargement that is not captured by the atlas database . by incorporating tissue classification information into the image registration process , we aimed to increase the robustness of the method . for testing , subjects who participated in the oxford project to investigate memory and aging ( optima ) and the alzheimer 's disease neuroimaging initiative ( adni ) were selected for ventriculomegaly . segmentation quality was substantially improved in the ventricles and surrounding structures ( 9/9 successes on visual rating versus 4/9 successes using the baseline method ) . in addition , the modification resulted in a significant increase of segmentation accuracy in healthy subjects ' brain images . hippocampal segmentation results in a group of patients with temporal lobe epilepsy were near identical with both approaches . the modified approach ( maper , multi-atlas propagation with enhanced registration ) extends the applicability of multi-atlas based automatic whole-brain segmentation to subjects with story_separator_special_tag commercially available software for cardiovascular image analysis often has limited functionality and frequently lacks the careful validation that is required for clinical studies . we have already implemented a cardiovascular image analysis software package and released it as freeware for the research community . however , it was distributed as a stand-alone application and other researchers could not extend it by writing their own custom image analysis algorithms . we believe that the work required to make a clinically applicable prototype can be reduced by making the software extensible , so that researchers can develop their own modules or improvements . such an initiative might then serve as a bridge between image analysis research and cardiovascular research . the aim of this article is therefore to present the design and validation of a cardiovascular image analysis software package ( segment ) and to announce its release in a source code format .
segment can be used for image analysis in magnetic resonance imaging ( mri ) , computed tomography ( ct ) , single photon emission computed tomography ( spect ) and positron emission tomography ( pet ) . some of its main features include loading of dicom images from story_separator_special_tag high-dimensional data can be converted to low-dimensional codes by training a multilayer neural network with a small central layer to reconstruct high-dimensional input vectors . gradient descent can be used for fine-tuning the weights in such `` autoencoder '' networks , but this works well only if the initial weights are close to a good solution . we describe an effective way of initializing the weights that allows deep autoencoder networks to learn low-dimensional codes that work much better than principal components analysis as a tool to reduce the dimensionality of data . story_separator_special_tag automated analysis of mammograms requires robust methods for pectoralis segmentation and nipple detection . locating the nipple is especially important in multiview computer aided detection systems , in which findings are matched across images using the nipple-to-finding distance . segmenting the pectoralis is a key preprocessing step to avoid false positives when detecting masses due to the similarity of the texture of mammographic parenchyma and the pectoral muscle . a multi-atlas algorithm capable of providing very robust initial estimates of the nipple position and pectoral region in digitized mammograms is presented here . ten full-field digital mammograms , which are easily annotated owing to their excellent contrast , are robustly registered to the target digitized film-screen mammogram . the annotations are then propagated and fused into a final nipple position and pectoralis segmentation .
compared to other nipple detection methods in the literature , the system proposed here has the advantage that it is more robust and can provide a reliable estimate when the nipple is located outside the image . our results show that the change in the correlation between nipple-to-finding distances in craniocaudal and mediolateral oblique views is not significant when the detected nipple positions replace the manual story_separator_special_tag in this paper we present a novel label fusion algorithm suited for scenarios in which different manual delineation protocols with potentially disparate structures have been used to annotate the training scans ( hereafter referred to as atlases ) . such scenarios arise when atlases have missing structures , when they have been labeled with different levels of detail , or when they have been taken from different heterogeneous databases . the proposed algorithm can be used to automatically label a novel scan with any of the protocols from the training data . further , it enables us to generate new labels that are not present in any delineation protocol by defining intersections on the underlying labels . we first use probabilistic models of label fusion to generalize three popular label fusion techniques to the multi-protocol setting : majority voting , semi-locally weighted voting and staple . then , we identify some shortcomings of the generalized methods , namely the inability to produce meaningful posterior probabilities for the different labels ( majority voting , semi-locally weighted voting ) and to exploit the similarities between the atlases ( all three methods ) . finally , we propose a novel generative label fusion story_separator_special_tag current label fusion methods enhance multi-atlas segmentation by locally weighting the contribution of the atlases according to their similarity to the target volume after registration . however , these methods can not handle voxel intensity inconsistencies between the atlases and the target image , which limits their application across modalities or even across mri datasets due to differences in image contrast . here we present a generative model for multi-atlas image segmentation , which does not rely on the intensity of the training images . instead , we exploit the consistency of voxel intensities within regions in the target volume and their relation to the propagated labels . this is formulated in a probabilistic framework , where the most likely segmentation is obtained with variational expectation maximization ( em ) . the approach is demonstrated in an experiment where t1-weighted mri atlases are used to segment proton-density ( pd ) weighted brain mri scans , a scenario in which traditional weighting schemes can not be used . our method significantly improves the results provided by majority voting and staple . story_separator_special_tag many segmentation algorithms in medical image analysis use bayesian modeling to augment local image appearance with prior anatomical knowledge . such methods often contain a large number of free parameters that are first estimated and then kept fixed during the actual segmentation process . however , a faithful bayesian analysis would marginalize over such parameters , accounting for their uncertainty by considering all possible values they may take . here we propose to incorporate this uncertainty into bayesian segmentation methods in order to improve the inference process .
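( the marginalization over free parameters advocated above can be approximated by averaging the segmentation posterior over samples of the parameters , which also yields error bars on derived volumes . a schematic sketch ; seg_posterior and theta_samples are hypothetical placeholders for a model-specific posterior function and a set of mcmc parameter samples : )

import numpy as np

def marginalized_posterior(seg_posterior, theta_samples):
    # p(seg | data) ~ mean over theta samples of p(seg | data, theta)
    return np.mean([seg_posterior(t) for t in theta_samples], axis=0)

def volume_with_error_bars(seg_posterior, theta_samples, voxel_volume=1.0):
    # per-sample expected structure volume , then mean and spread across samples
    vols = [voxel_volume * seg_posterior(t).sum() for t in theta_samples]
    return np.mean(vols), np.std(vols)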
in particular , we approximate the required marginalization over model parameters using computationally efficient markov chain monte carlo techniques . we illustrate the proposed approach using a recently developed bayesian method for the segmentation of hippocampal subfields in brain mri scans , showing a significant improvement in an alzheimer 's disease classification task . as an additional benefit , the technique also allows one to compute informative error bars on the volume estimates of individual structures . story_separator_special_tag multi-atlas label fusion is a powerful image segmentation strategy that is becoming increasingly popular in medical imaging . a standard label fusion algorithm relies on independently computed pairwise registrations between individual atlases and the ( target ) image to be segmented . these registrations are then used to propagate the atlas labels to the target space and fuse them into a single final segmentation . such label fusion schemes commonly rely on the similarity between intensity values of the atlases and target scan , which is often problematic in medical imaging , in particular when the atlases and target images are obtained via different sensor types or imaging protocols . in this paper , we present a generative probabilistic model that yields an algorithm for solving the atlas-to-target registrations and label fusion steps simultaneously . the proposed model does not directly rely on the similarity of image intensities . instead , it exploits the consistency of voxel intensities within the target scan to drive the registration and label fusion , hence the atlases and target image can be of different modalities . furthermore , the framework models the joint warp of all the atlases , introducing interdependence between the story_separator_special_tag a novel atlas-based segmentation approach based on the combination of multiple registrations is presented . multiple atlases are registered to a target image . to obtain a segmentation of the target , labels of the atlas images are propagated to it . the propagated labels are combined by spatially varying decision fusion weights . these weights are derived from local assessment of the registration success . furthermore , an atlas selection procedure is proposed that is equivalent to sequential forward selection from statistical pattern recognition theory . the proposed method is compared to three existing atlas-based segmentation approaches , namely ( 1 ) single atlas-based segmentation , ( 2 ) average-shape atlas-based segmentation , and ( 3 ) multi-atlas-based segmentation with averaging as decision fusion . these methods were tested on the segmentation of the heart and the aorta in computed tomography scans of the thorax . the results show that the proposed method outperforms other methods and yields results very close to those of an independent human observer . moreover , the additional atlas selection step led to a faster segmentation at a comparable performance . story_separator_special_tag the striatum has a clear role in addictive disorders and is involved in drug-related craving . recently , enhanced striatal volume was associated with greater lifetime nicotine exposure , suggesting a bridge between striatal function and structural phenotypes . to assess this link between striatal structure and function , we evaluated the relationship between striatal morphology and this brain region 's well-established role in craving .
in tobacco smokers , we assessed striatal volume , surface area , and shape using a new segmentation methodology coupled with local shape indices . striatal morphology was then related to two measures of craving : state-based craving , assessed by the brief questionnaire of smoking urges ( qsu ) , and craving induced by smoking-related images . left striatal volume and surface area were positively associated with both measures of craving . a more specific relationship was found between both craving measures and the dorsal , but not the ventral , striatum . evaluating dorsal striatal subregions showed a single relationship between the caudate and the qsu . although cue-induced craving and the qsu were both associated with enlarged striatal volume and surface area , these measures were differentially associated with global story_separator_special_tag in this paper , we present a multi-atlas-based framework for accurate , consistent and simultaneous segmentation of a group of target images . multi-atlas-based segmentation algorithms concurrently consider complementary information from multiple atlases to produce optimal segmentation outcomes . however , the accuracy of these algorithms relies heavily on the precise alignment of the atlases with the target image . in particular , the commonly used pairwise registration may result in inaccurate alignment , especially between images with large shape differences . additionally , when segmenting a group of target images , most current methods consider these images independently , disregarding their correlation and thus producing inconsistent segmentations of the same structures across different target images . we propose two novel strategies to address these limitations : 1 ) a novel tree-based groupwise registration method for concurrent alignment of both the atlases and the target images , and 2 ) an iterative groupwise segmentation method for simultaneous consideration of segmentation information propagated from all available images , including the atlases and other newly segmented target images . evaluation based on various datasets indicates that the proposed multi-atlas-based multi-image segmentation ( mabmis ) framework yields substantial improvements in terms of story_separator_special_tag the accuracy of brain tumor detection and segmentation is greatly affected by tumor location , shape , and image properties . in some situations , brain tumor detection and segmentation processes are greatly complicated and far from being completely resolved . the accuracy of the segmentation process significantly influences the diagnosis process , such as abnormal tissue detection , disease classification , and assessment . however , medical images , in particular magnetic resonance imaging ( mri ) , often include undesirable artefacts such as noise , density inhomogeneity , and partial volume effects . although many segmentation methods have been proposed , the accuracy of the segmentation results can be further improved . subsequently , this study provides important prior properties of the tumor region of interest ( roi ) , namely its size , initial location , and shape , to kick-start the segmentation process . an mri study consists of a sequence of images ( slices ) of a particular person , not a single image . our method chooses the best image among them based on the tumor size , initial location and shape to avoid partial volume effects .
the selected algorithms to story_separator_special_tag purpose : to develop and demonstrate a rapid whole-body magnetic resonance imaging ( mri ) method for automatic quantification of total and regional skeletal muscle volume . materials and methods : the metho . story_separator_special_tag multi-atlas label fusion is a widely used approach in medical image analysis that has improved the accuracy of segmentation . majority voting , as the most common combination strategy , weighs each candidate in the atlas database equally . more sophisticated methods rely on the intensity similarity of each atlas to the target volume . however , these methods cannot handle those cases in which the atlases and the target image are in different modalities . a new method for label fusion is proposed , based on a structural similarity measure , relying on the structural relationships of features extracted from an undecimated wavelet transform instead of explicit image intensities . the new label fusion method has been tested on simulated and real mr images ; segmentation results are promising , and open the door to a wider range of multi-modal approaches . story_separator_special_tag brain structural volumes can be used for automatically classifying subjects into categories like controls and patients . we aimed to automatically separate patients with temporal lobe epilepsy ( tle ) with and without hippocampal atrophy on mri , ptle and ntle , from controls , and to determine the epileptogenic side . in the proposed framework , 83 brain structure volumes are identified using multi-atlas segmentation . we then perform structure selection using a divergence measure and classification based on structural volumes , as well as morphological similarities using svm . a spectral analysis step is used to convert the pairwise measures of similarity between subjects into per-subject features . up to 96 % of ptle patients were correctly separated from controls using 14 structural brain volumes . the classification method based on spectral analysis was 91 % accurate at separating ntle patients from controls . the right and left hippocampi were sufficient for the lateralization of the seizure focus in the ptle group and achieved 100 % accuracy . story_separator_special_tag the entorhinal cortex has been implicated in the early stages of alzheimer 's disease , which is characterized by changes in the tau protein and in the cleaved fragments of the amyloid precursor protein ( app ) . we used a high-resolution functional magnetic resonance imaging ( fmri ) variant that can map metabolic defects in patients and mouse models to address basic questions about entorhinal cortex pathophysiology . the entorhinal cortex is divided into functionally distinct regions , the medial entorhinal cortex ( mec ) and the lateral entorhinal cortex ( lec ) , and we exploited the high-resolution capabilities of the fmri variant to ask whether either of them was affected in patients with preclinical alzheimer 's disease . next , we imaged three mouse models of disease to clarify how tau and app relate to entorhinal cortex dysfunction and to determine whether the entorhinal cortex can act as a source of dysfunction observed in other cortical areas .
we found that the lec was affected in preclinical disease , that lec dysfunction could spread to the parietal cortex during preclinical disease , and that app expression potentiated tau toxicity in driving lec dysfunction , thereby helping to explain story_separator_special_tag in drug-resistant temporal lobe epilepsy ( tle ) , detecting hippocampal atrophy on mri is important as it allows the surgical target to be defined . the performance of automatic segmentation in tle has so far been considered unsatisfactory . in addition to atrophy , about 40 % of patients present with developmental abnormalities ( referred to as malrotation ) characterized by atypical morphologies of the hippocampus and collateral sulcus . our purpose was to evaluate the impact of malrotation and atrophy on the performance of three state-of-the-art automated algorithms . we segmented the hippocampus in 66 patients and 35 sex- and age-matched healthy subjects using a region-growing algorithm constrained by anatomical priors ( sacha ) , a freely available atlas-based software ( freesurfer ) , and a multi-atlas approach ( animal-multi ) . to quantify malrotation , we generated 3d models from manual hippocampal labels and automatically extracted collateral sulci . the accuracy of automated techniques was evaluated relative to manual labeling using the dice similarity index and surface-based shape mapping , for which we computed vertex-wise displacement vectors between automated and manual segmentations . we then correlated segmentation accuracy with malrotation features and atrophy . animal-multi demonstrated similar accuracy in patients story_separator_special_tag leveraging available annotated data is an essential component of many modern methods for medical image analysis . in particular , approaches making use of the neighbourhood structure between images for this purpose have shown significant potential . such techniques achieve high accuracy in analysing an image by propagating information from its immediate neighbours within an annotated database . despite their success in certain applications , wide use of these methods is limited due to the challenging task of determining the neighbours for an out-of-sample image . this task is either computationally expensive due to large database sizes and costly distance evaluations , or infeasible due to distance definitions over semantic information , such as ground truth annotations , which is not available for out-of-sample images . this article introduces neighbourhood approximation forests ( nafs ) , a supervised learning algorithm providing a general and efficient approach for the task of approximate nearest neighbour retrieval for arbitrary distances . starting from an image training database and a user-defined distance between images , the algorithm learns to use appearance-based features to cluster images , approximating the neighbourhood structure induced by the distance . naf is able to efficiently infer nearest neighbours of story_separator_special_tag purpose : the aims of this work were to ( a ) develop an approach for ex vivo mr volumetry of human brain hemispheres that does not contaminate the results of histopathological examination , ( b ) longitudinally assess regional brain volumes postmortem , and ( c ) investigate the relationship between mr volumetric measurements performed in vivo and ex vivo . methods : an approach for ex vivo mr volumetry of human brain hemispheres was developed .
five hemispheres from elderly subjects were imaged ex vivo longitudinally . all datasets were segmented . the longitudinal behavior of volumes measured ex vivo was assessed . the relationship between in vivo and ex vivo volumetric measurements was investigated in seven elderly subjects imaged both antemortem and postmortem . results : this approach for ex vivo mr volumetry did not contaminate the results of histopathological examination . for a period of 6 months postmortem , within-subject volume variation across time points was substantially smaller than intersubject volume variation . a close linear correspondence was detected between in vivo and ex vivo volumetric measurements . conclusion : regional brain volumes measured with this approach for ex vivo mr volumetry remain relatively unchanged for a period of 6 story_separator_special_tag a forward transform method for retrieving brain labels from the 1988 talairach atlas using x-y-z coordinates is presented . a hierarchical volume occupancy labeling scheme was created to simplify the organization of atlas labels using volume and subvolumetric components . segmentation rules were developed to define boundaries that were not given explicitly in the atlas . the labeling scheme and segmentation rules guided the segmentation and labeling of 160 contiguous regions within the atlas . a unique three-dimensional ( 3d ) database label server called the talairach daemon ( http : //ric.uthscsa.edu/projects ) was developed for serving labels keyed to the talairach coordinate system . given an x-y-z talairach coordinate , a corresponding hierarchical listing of labels is returned by the server . the accuracy and precision of the forward transform labeling method is now under evaluation . story_separator_special_tag introduction : preclinical in vivo imaging requires precise and reproducible delineation of brain structures . manual segmentation is time-consuming and operator dependent . automated segmentation , as usually performed via single-atlas registration , fails to account for anatomo-physiological variability . we present , evaluate , and make available a multi-atlas approach for automatically segmenting rat brain mri and extracting pet activities . methods : high-resolution 7t 2d t2 mr images of 12 sprague-dawley rat brains were manually segmented into 27-voi label volumes using detailed protocols . automated methods were developed with 7/12 atlas datasets , i.e . the mris and their associated label volumes . mris were registered to a common space , where an mri template and a maximum probability atlas were created . three automated methods were tested : 1 ) registering individual mris to the template and using a single atlas ( sa ) , 2 ) using the maximum probability atlas ( mp ) , and 3 ) registering the mris from the multi-atlas dataset to an individual mri , propagating the label volumes and fusing them in individual mri space ( propagation & fusion , pf ) . evaluation was performed on the five remaining rats which additionally underwent [ 18 story_separator_special_tag labels that identify specific anatomical and functional structures within medical images are essential to the characterization of the relationship between structure and function in many scientific and clinical studies .
automated methods that allow for high throughput have not yet been developed for all anatomical targets or validated for exceptional anatomies , and manual labeling remains the gold standard in many cases . however , manual placement of labels within a large image volume such as that obtained using magnetic resonance imaging ( mri ) is exceptionally challenging , resource intensive , and fraught with intra- and inter-rater variability . the use of statistical methods to combine labels produced by multiple raters has grown significantly in popularity , in part because it is thought that by estimating and accounting for rater reliability , estimates of the true labels will be more accurate . this paper demonstrates the performance of a class of these statistical label combination methodologies using real-world data contributed by minimally trained human raters . the consistency of the statistical estimates , the accuracy compared to the individual observations , and the variability of both the estimates and the individual observations with respect to the number of labels story_separator_special_tag image labeling and parcellation ( i.e. , assigning structure to a collection of voxels ) are critical tasks for the assessment of volumetric and morphometric features in medical imaging data . the process of image labeling is inherently error-prone as images are corrupted by noise and artifacts . even expert interpretations are affected by the subjectivity and the precision of the individual raters . hence , all labels must be considered imperfect with some degree of inherent variability . one may seek multiple independent assessments to both reduce this variability and quantify the degree of uncertainty . existing techniques have exploited maximum a posteriori statistics to combine data from multiple raters and simultaneously estimate rater reliabilities . although quite successful , wide-scale application has been hampered by unstable estimation with practical datasets , for example , with label sets with small or thin objects to be labeled or with partial or limited datasets . as well , these approaches have required each rater to generate a complete dataset , which is often impossible given both human foibles and the typical turnover rate of raters in a research or clinical environment . herein , we propose a robust approach to improve story_separator_special_tag purpose : automatic , atlas-based segmentation of medical images benefits from using multiple atlases , mainly in terms of robustness . however , a large disadvantage of using multiple atlases is the large computation time that is involved in registering atlas images to the target image . this paper aims to reduce the computation load of multiatlas-based segmentation by heuristically selecting atlases before registration . methods : to be able to select atlases , pairwise registrations are performed for all atlas combinations . based on the results of these registrations , atlases are clustered , such that each cluster contains atlases that register well to each other . this can all be done in a preprocessing step . then , the representatives of each cluster are registered to the target image . the quality of the result of this registration is estimated for each of the representatives and used to decide which clusters to fully register to the target image . finally , the segmentations of the registered images are combined into a single segmentation in a label fusion procedure .
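since several abstracts in this section revolve around staple-style fusion of labels from multiple raters or atlases , a minimal binary em sketch may help fix ideas : each rater j is modelled by a sensitivity p_j and a specificity q_j , the e-step computes the per-voxel posterior of the true label , and the m-step re-estimates p_j and q_j . this is a bare-bones rendering of the standard scheme , with a fixed scalar prior f and no spatial consistency term .

import numpy as np

def staple_binary(D, f=0.5, n_iter=30):
    # D : ( n_raters , n_voxels ) binary decisions ; f : prior p ( true = 1 )
    D = np.asarray(D, dtype=bool)
    p = np.full(D.shape[0], 0.9)  # per-rater sensitivities
    q = np.full(D.shape[0], 0.9)  # per-rater specificities
    for _ in range(n_iter):
        # e-step : posterior that the true label is 1 at each voxel
        a = f * np.prod(np.where(D, p[:, None], 1 - p[:, None]), axis=0)
        b = (1 - f) * np.prod(np.where(D, 1 - q[:, None], q[:, None]), axis=0)
        W = a / np.maximum(a + b, 1e-12)
        # m-step : re-estimate each rater's sensitivity and specificity
        p = (D * W).sum(axis=1) / np.maximum(W.sum(), 1e-12)
        q = ((~D) * (1 - W)).sum(axis=1) / np.maximum((1 - W).sum(), 1e-12)
    return W, p, q

multi-label and spatially varying extensions replace the scalar prior and the per-rater confusion parameters accordingly , as the variants discussed in this section do .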
results : the authors perform multiatlas segmentation once with postregistration atlas selection and once with the proposed preregistration method story_separator_special_tag in a multi-atlas based segmentation procedure , propagated atlas segmentations must be combined in a label fusion process . some current methods deal with this problem by using atlas selection to construct an atlas set either prior to or after registration . other methods estimate the performance of propagated segmentations and use this performance as a weight in the label fusion process . this paper proposes a selective and iterative method for performance level estimation ( simple ) , which combines both strategies in an iterative procedure . in subsequent iterations , the method refines both the estimated performance and the set of selected atlases . for a dataset of 100 mr images of prostate cancer patients , we show that the results of simple are significantly better than those of several existing methods , including the staple method and variants of weighted majority voting . story_separator_special_tag we propose a framework for the robust and fully-automatic segmentation of magnetic resonance ( mr ) brain images called `` multi-atlas label propagation with expectation-maximisation based refinement '' ( malp-em ) . the presented approach is based on a robust registration approach ( maper ) , highly performant label fusion ( joint label fusion ) and intensity-based label refinement using em . we further adapt this framework to be applicable to the segmentation of brain images with gross changes in anatomy . we propose to account for consistent registration errors by relaxing anatomical priors obtained by multi-atlas propagation and by using a weighting scheme that locally combines anatomical atlas priors and intensity-refined posterior probabilities . the method is evaluated on a benchmark dataset used in a recent miccai segmentation challenge . in this context we show that malp-em is competitive for the segmentation of mr brain scans of healthy adults when compared to state-of-the-art automatic labelling techniques . to demonstrate the versatility of the proposed approach , we employed malp-em to segment 125 mr brain images into 134 regions from subjects who had sustained traumatic brain injury ( tbi ) . we employ a protocol to assess segmentation quality if no manual story_separator_special_tag this paper presents a novel x-ray and mr image registration technique based on individual-specific biomechanical finite element ( fe ) models of the breasts . information from 3d magnetic resonance ( mr ) images was registered to x-ray mammographic images using non-linear fe models subject to contact mechanics constraints to simulate the large compressive deformations between the two imaging modalities . a physics-based perspective ray-casting algorithm was used to generate 2d pseudo-x-ray projections of the fe-warped 3d mr images . unknown input parameters to the fe models , such as the location and orientation of the compression plates , were optimised to provide the best match between the pseudo and clinical x-ray images . the methods were validated using images taken before and during compression of a breast-shaped phantom , for which 12 inclusions were tracked between imaging modalities . these methods were then applied to x-ray and mr images from six breast cancer patients .
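as a concrete illustration of the selective and iterative performance estimation idea ( simple ) summarized earlier in this passage , the sketch below alternates between fusing the retained atlases and discarding atlases whose agreement with the current consensus falls below a threshold ; the binary setting , the dice-based score and the mean-minus-alpha-sigma cutoff are simplifying assumptions .

import numpy as np

def dice(a, b):
    a, b = np.asarray(a, bool), np.asarray(b, bool)
    return 2.0 * (a & b).sum() / max(a.sum() + b.sum(), 1)

def simple_style_fusion(segs, alpha=1.0, n_iter=5):
    # segs : ( n_atlases , n_voxels ) binary propagated segmentations .
    # iteratively fuse , score every atlas against the current consensus ,
    # and drop the atlases scoring below mean - alpha * std of the kept set
    segs = np.asarray(segs, bool)
    keep = np.ones(len(segs), bool)
    consensus = segs.mean(axis=0) >= 0.5
    for _ in range(n_iter):
        consensus = segs[keep].mean(axis=0) >= 0.5
        scores = np.array([dice(s, consensus) for s in segs])
        thr = scores[keep].mean() - alpha * scores[keep].std()
        new_keep = keep & (scores >= thr)
        if new_keep.sum() < 2 or new_keep.sum() == keep.sum():
            break
        keep = new_keep
    return consensus, keep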
error measures ( such as centroid and surface distances ) of segmented tumours in simulated and actual x-ray mammograms were used to assess the accuracy of the methods . sensitivity analysis of the lesion co-localisation accuracy to rotation about the anterior-posterior axis was then story_separator_special_tag purpose : to develop a fully automated method to segment cartilage from magnetic resonance ( mr ) images of the knee and to evaluate the performance of the method on a public , open dataset . methods : the segmentation scheme consisted of three procedures : multiple-atlas building , applying a locally weighted vote ( lwv ) , and region adjustment . in the atlas building procedure , all training cases were registered to a target image by a nonrigid registration scheme and the best-matched atlases were selected . a lwv algorithm was applied to merge the information from these atlases and generate the initial segmentation result . subsequently , for the region adjustment procedure , the statistical information of bone , cartilage , and surrounding regions was computed from the initial segmentation result . the statistical information directed the automated determination of the seed points inside and outside bone regions for the graph-cut based method . finally , the region adjustment was conducted by the revision of outliers and the inclusion of abnormal bone regions . results : a total of 150 knee mr images from a public , open dataset ( available at www.ski10.org ) were used for the story_separator_special_tag whole brain extraction is an important pre-processing step in neuroimage analysis . manual or semi-automated brain delineations are labour-intensive and thus not desirable in large studies , meaning that automated techniques are preferable . the accuracy and robustness of automated methods are crucial because human expertise may be required to correct any suboptimal results , which can be very time-consuming . we compared the accuracy of four automated brain extraction methods : brain extraction tool ( bet ) , brain surface extractor ( bse ) , hybrid watershed algorithm ( hwa ) , and a multi-atlas propagation and segmentation ( maps ) technique we have previously developed for hippocampal segmentation . the four methods were applied to extract whole brains from 682 1.5t and 157 3t t1-weighted mr baseline images from the alzheimer 's disease neuroimaging initiative database . semi-automated brain segmentations with manual editing and checking were used as the gold standard to compare with the results . the median jaccard index of maps was higher than that of hwa , bet and bse in 1.5t and 3t scans ( p < 0.05 , all tests ) , and the 1st to 99th centile range of the jaccard story_separator_special_tag volume and change in volume of the hippocampus are both important markers of alzheimer 's disease ( ad ) . delineation of the structure on mri is time-consuming and therefore reliable automated methods are required . we describe an improvement ( multiple-atlas propagation and segmentation ( maps ) ) to our template library-based segmentation technique . the improved technique uses non-linear registration of the best-matched templates from our manually segmented library to generate multiple segmentations and combines them using the simultaneous truth and performance level estimation ( staple ) algorithm . change in volume over 12 months ( maps-hbsi ) was measured by applying the boundary shift integral using maps regions .
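the jaccard and dice indices used for evaluation above are worth writing down once ; a minimal numpy version for binary masks , noting that the two are monotonically related ( dice = 2j / ( 1 + j ) ) , so rankings agree even though the absolute values differ :

import numpy as np

def overlap_metrics(auto_seg, gold_seg):
    # jaccard = intersection / union ; dice = 2 * intersection / ( |a| + |b| )
    a, b = np.asarray(auto_seg, bool), np.asarray(gold_seg, bool)
    inter = (a & b).sum()
    union = (a | b).sum()
    jaccard = inter / union if union else 1.0
    dice = 2.0 * inter / (a.sum() + b.sum()) if (a.sum() + b.sum()) else 1.0
    return jaccard, dice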
methods were developed and validated against manual measures using subsets from the alzheimer 's disease neuroimaging initiative ( adni ) . the best method was applied to 682 adni subjects , at baseline and 12-month follow-up , enabling assessment of volumes and atrophy rates in control , mild cognitive impairment ( mci ) and ad groups , and within mci subgroups classified by subsequent clinical outcome . we compared our measures with those generated by surgical navigation technologies ( snt ) available from adni . the accuracy of our story_separator_special_tag according to clinical reports , people aged between 60 and 79 years have a high risk of stroke . the most obvious facial features of stroke are expressional asymmetry and a skewed mouth . in this study , we proposed a facial stroke recognition system that assists patients in self-judgment . facial landmarks were tracked by an ensemble of regression trees ( ert ) method . two symmetry indexes , the area ratio and the distance ratio between the left and right sides of the eye and mouth , were calculated . local ternary pattern ( ltp ) and gabor filter were used to enhance and to extract the texture features of the region of interest ( roi ) , respectively . the structural similarity of the roi between the left and right face was calculated . after that , we modified the original feature selection algorithm to select the best feature set . to classify facial stroke , the support vector machine ( svm ) , random forest ( rf ) , and bayesian classifier were adopted as classifiers . the experimental results show that the proposed system can accurately and effectively distinguish stroke from facial images . the recognition accuracy of svm , story_separator_special_tag the human cerebral cortex develops extremely dynamically in the first 2 years of life . accurate and consistent parcellation of longitudinal dynamic cortical surfaces during this critical stage is essential to understand the early development of cortical structure and function in both normal and high-risk infant brains . however , directly applying existing methods developed for cross-sectional studies often generates longitudinally inconsistent results , thus leading to inaccurate measurements of cortex development . in this paper , we propose a new method for accurate , consistent , and simultaneous labeling of longitudinal cortical surfaces in serial infant brain mr images . the proposed method is explicitly formulated as a minimization problem with an energy function that includes a data fitting term , a spatial smoothness term , and a temporal consistency term . specifically , inspired by multi-atlas based label fusion , the data fitting term is designed to integrate the contributions from multi-atlas surfaces adaptively , according to the similarities of their local cortical folding with that of the subject cortical surface . the spatial smoothness term is then designed to adaptively encourage label smoothness based on the local cortical folding geometries , i.e. , allowing label story_separator_special_tag in this paper , we propose a new prostate computed tomography ( ct ) segmentation method for image-guided radiation therapy . the main contributions of our method lie in the following aspects . 1 ) instead of using voxel intensity information alone , a patch-based representation in the discriminative feature space with logistic sparse lasso is used as an anatomical signature to deal with the low-contrast problem in prostate ct images .
2 ) based on the proposed patch-based signature , a new multi-atlas label fusion method formulated under a sparse representation framework is designed to segment the prostate in new treatment images , with guidance from the previously segmented images of the same patient . this method estimates the prostate likelihood of each voxel in the new treatment image from its nearby candidate voxels in the previously segmented images , based on the nonlocal mean principle and a sparsity constraint . 3 ) a hierarchical labeling strategy is further designed to perform label fusion , where voxels with high confidence are labeled first to provide useful context information in the same image for aiding the labeling of the remaining voxels . 4 ) an online update mechanism is finally adopted to progressively story_separator_special_tag since hippocampal volume has been found to be an early biomarker for alzheimer 's disease , there is large interest in automated methods to accurately , robustly , and reproducibly extract the hippocampus from mri data . in this work we present a segmentation method based on the minimization of an energy functional with intensity and prior terms , which are derived from manually labelled training images . the intensity energy is based on a statistical intensity model that is learned from the training images . the prior energy consists of a spatial and a regularity term . the spatial prior is obtained from a probabilistic atlas created by registering the training images to the unlabelled target image , and deforming and averaging the training labels . the regularity prior energy encourages smooth segmentations . the resulting energy functional is globally minimized using graph cuts . the method was evaluated using image data from a population-based study on diseases among the elderly . two sets of images were used : a small set of 20 manually labelled mr images and a larger set of 498 images , for which manual volume measurements were available , but no segmentations . this data story_separator_special_tag prostate cancer is one of the major causes of cancer death for men in the western world . magnetic resonance imaging ( mri ) is being increasingly used as a modality to detect prostate cancer . therefore , computer-aided detection of prostate cancer in mri images has become an active area of research . in this paper we investigate a fully automated computer-aided detection system which consists of two stages . in the first stage , we detect initial candidates using multi-atlas-based prostate segmentation , voxel feature extraction , classification and local maxima detection . the second stage segments the candidate regions and , using classification , we obtain cancer likelihoods for each candidate . features represent pharmacokinetic behavior , symmetry and appearance , among others . the system is evaluated on a large consecutive cohort of 347 patients with mr-guided biopsy as the reference standard . this set contained 165 patients with cancer and 182 patients without prostate cancer . performance evaluation is based on lesion-based free-response receiver operating characteristic curve and patient-based receiver operating characteristic analysis . the system is also compared to the prospective clinical performance of radiologists . results show a sensitivity of 0.42 , 0.75 , and story_separator_special_tag we introduce an optimised pipeline for multi-atlas brain mri segmentation . both accuracy and speed of segmentation are considered .
we study different similarity measures used in non-rigid registration . we show that intensity differences for intensity-normalised images can be used instead of standard normalised mutual information in registration without compromising the accuracy , while leading to a threefold decrease in the computation time . we also study and validate different methods for atlas selection . finally , we propose two new approaches for combining multi-atlas segmentation and intensity modelling , based on segmentation using expectation maximisation ( em ) and optimisation via graph cuts . the segmentation pipeline is evaluated with two data cohorts : ibsr data ( n=18 , six subcortical structures : thalamus , caudate , putamen , pallidum , hippocampus , amygdala ) and adni data ( n=60 , hippocampus ) . the average similarity index between automatically and manually generated volumes was 0.849 ( ibsr , six subcortical structures ) and 0.880 ( adni , hippocampus ) . the correlation coefficient for hippocampal volumes was 0.95 with the adni data . the computation time using a standard multicore pc was about 3-4 min . our results story_separator_special_tag multi-atlas segmentation propagation has evolved quickly in recent years , becoming a state-of-the-art methodology for automatic parcellation of structural images . however , few studies have applied these methods to preclinical research . in this study , we present a fully automatic framework for mouse brain mri structural parcellation using multi-atlas segmentation propagation . the framework adopts the similarity and truth estimation for propagated segmentations ( steps ) algorithm , which utilises a locally normalised cross-correlation similarity metric for atlas selection and an extended simultaneous truth and performance level estimation ( staple ) framework for multi-label fusion . the segmentation accuracy of the multi-atlas framework was evaluated using publicly available mouse brain atlas databases with pre-segmented manually labelled anatomical structures as the gold standard , and optimised parameters were obtained for the steps algorithm in the label fusion to achieve the best segmentation accuracy . we showed that our multi-atlas framework resulted in significantly higher segmentation accuracy compared to single-atlas based segmentation , as well as to the original staple framework . story_separator_special_tag magnetic resonance ( mr ) imaging is increasingly being used to assess brain growth and development in infants . such studies are often based on quantitative analysis of anatomical segmentations of brain mr images . however , the large changes in brain shape and appearance associated with development , the lower signal-to-noise ratio and partial volume effects in the neonatal brain present challenges for automatic segmentation of neonatal mr imaging data . in this study , we propose a framework for accurate intensity-based segmentation of the developing neonatal brain , from the early preterm period to term-equivalent age , into 50 brain regions . we present a novel segmentation algorithm that models the intensities across the whole brain by introducing a structural hierarchy and anatomical constraints . the proposed method is compared to standard atlas-based techniques and improves label overlaps with respect to manual reference segmentations . we demonstrate that the proposed technique achieves highly accurate results and is very robust across a wide range of gestational ages , from 24 weeks gestational age to term-equivalent age .
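to illustrate the similarity-measure swap described at the start of this passage , here are minimal implementations of both candidates : a squared intensity difference on z-score normalised images , and a joint-histogram normalised mutual information ; the bin count and the z-score normalisation are illustrative choices .

import numpy as np

def ssd_normalised(a, b):
    # mean squared difference after z-score intensity normalisation , the
    # cheap similarity that the pipeline above substitutes for nmi
    a = (a - a.mean()) / a.std()
    b = (b - b.mean()) / b.std()
    return ((a - b) ** 2).mean()

def normalised_mutual_information(a, b, bins=64):
    # nmi = ( h ( a ) + h ( b ) ) / h ( a , b ) from a joint histogram
    h, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    p = h / h.sum()
    px, py = p.sum(axis=1), p.sum(axis=0)
    hxy = -(p[p > 0] * np.log(p[p > 0])).sum()
    hx = -(px[px > 0] * np.log(px[px > 0])).sum()
    hy = -(py[py > 0] * np.log(py[py > 0])).sum()
    return (hx + hy) / hxy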
story_separator_special_tag in this paper we report the set-up and results of the multimodal brain tumor image segmentation benchmark ( brats ) organized in conjunction with the miccai 2012 and 2013 conferences . twenty state-of-the-art tumor segmentation algorithms were applied to a set of 65 multi-contrast mr scans of low- and high-grade glioma patients - manually annotated by up to four raters - and to 65 comparable scans generated using tumor image simulation software . quantitative evaluations revealed considerable disagreement between the human raters in segmenting various tumor sub-regions ( dice scores in the range 74-85 % ) , illustrating the difficulty of this task . we found that different algorithms worked best for different sub-regions ( reaching performance comparable to human inter-rater variability ) , but that no single algorithm ranked in the top for all subregions simultaneously . fusing several good algorithms using a hierarchical majority vote yielded segmentations that consistently ranked above all individual algorithms , indicating remaining opportunities for further methodological improvements . the brats image data and manual annotations continue to be publicly available through an online evaluation system as an ongoing benchmarking resource . story_separator_special_tag in this work , a supervised automatic multi-atlas based segmentation method for corpus callosum ( cc ) in magnetic resonance images ( mris ) of ms patients is presented . due to atrophy , the shape of disease affected cc differs distinctively from healthy ones . therefore , atlases are used that are built from the underlying dataset and do not originate from atlas datasets of healthy brains . the atlas construction is done by clustering the patient images into subgroups of similar images and building a mean image from each cluster . during this work , the optimal number of atlases and the best label fusion method are analyzed . the method is evaluated on 100 t1-weighted brain mri images from ms patients . accuracy is assessed by comparing the overlap of the segmentations from the developed method against manual segmentations obtained by a medical student . story_separator_special_tag we propose an automated multi-atlas and multi-roi based segmentation method for both skull-stripping of mouse brain and the roi-labeling of mouse brain structures from the three dimensional ( 3d ) magnetic resonance images ( mri ) .
three main steps are involved in our method . first , a region of interest ( roi ) guided warping algorithm is designed to register multi-atlas images to the subject space , by placing more emphasis on matching image content around the roi boundaries , which is more important for roi labeling . then , a multi-atlas and multi-roi based deformable segmentation method is adopted to refine the roi labeling result by deforming each roi surface via boundary recognizers ( i.e. , svm classifiers ) trained on local surface patches . finally , a local-mutual-information ( mi ) based multi-label fusion technique is proposed to allow atlases with better local image similarity to the subject to contribute more to the label fusion . the experimental results show that our method works better than the conventional methods on both in vitro and in vivo mouse brain datasets . story_separator_special_tag low-dose-rate brachytherapy is a radiation treatment method for localized prostate cancer . the standard of care for this treatment procedure is to acquire transrectal ultrasound images of the prostate in order to devise a plan to deliver sufficient radiation dose to the cancerous tissue . brachytherapy planning involves delineation of contours in these images , which closely follow the prostate boundary , i.e. , clinical target volume . this process is currently performed either manually or semi-automatically , which requires user interaction for landmark initialization . in this paper , we propose a multi-atlas fusion framework to automatically delineate the clinical target volume in ultrasound images . a dataset of a priori segmented ultrasound images , i.e. , atlases , is registered to a target image . we introduce a pairwise atlas agreement factor that combines an image-similarity metric and similarity between a priori segmented contours . this factor is used in an atlas selection algorithm to prune the dataset before combining the atlas contours to produce a consensus segmentation . we evaluate the proposed segmentation approach on a set of 280 transrectal prostate volume studies . the proposed method produces segmentation results that are within the range of observer story_separator_special_tag a general-purpose deformable registration algorithm referred to as `` dramms '' is presented in this paper . dramms bridges the gap between the traditional voxel-wise methods and landmark/feature-based methods with primarily two contributions . first , dramms renders each voxel relatively distinctively identifiable by a rich set of attributes , therefore largely reducing matching ambiguities . in particular , a set of multi-scale and multi-orientation gabor attributes are extracted and the optimal components are selected , so that they form a highly distinctive morphological signature reflecting the anatomical and geometric context around each voxel . moreover , the way in which the optimal gabor attributes are constructed is independent of the underlying image modalities or contents , which renders dramms generally applicable to diverse registration tasks . a second contribution of dramms is that it modulates the registration by assigning higher weights to those voxels having higher ability to establish unique ( hence reliable ) correspondences across images , therefore reducing the negative impact of those regions that are less capable of finding correspondences ( such as outlier regions ) .
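a minimal sketch of locally weighted voting in the spirit of the local-similarity-driven fusion described above : each atlas votes at each voxel with a weight derived from a local intensity-difference score between its warped image and the target . the box-filtered mse and the exponential weighting are stand-ins for the local mutual information weighting of the original method .

import numpy as np
from scipy.ndimage import uniform_filter

def locally_weighted_vote(atlas_images, atlas_labels, target, radius=3, h=0.1):
    # each warped atlas votes per voxel with weight exp ( - local mse / h ) ,
    # where the local mse is a box average of the squared intensity
    # difference between the warped atlas image and the target image
    size = 2 * radius + 1
    n_classes = int(np.max(atlas_labels)) + 1
    votes = np.zeros((n_classes,) + target.shape)
    for img, lab in zip(atlas_images, atlas_labels):
        local_mse = uniform_filter((img - target) ** 2, size=size)
        w = np.exp(-local_mse / h)
        for l in range(n_classes):
            votes[l] += w * (lab == l)
    return votes.argmax(axis=0)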
a continuously-valued weighting function named `` mutual-saliency '' is developed to reflect the matching uniqueness between a pair of story_separator_special_tag multiatlas methods have been successful for brain segmentation , but their application to smaller anatomies remains relatively unexplored . we evaluate seven statistical and voting-based label fusion algorithms ( and six additional variants ) to segment the optic nerves , eye globes , and chiasm . for nonlocal simultaneous truth and performance level estimation ( staple ) , we evaluate different intensity similarity measures ( including mean square difference , locally normalized cross-correlation , and a hybrid approach ) . each algorithm is evaluated in terms of the dice overlap and symmetric surface distance metrics . finally , we evaluate refinement of label fusion results using a learning-based correction method for consistent bias correction and markov random field regularization . the multiatlas labeling pipelines were evaluated on a cohort of 35 subjects including both healthy controls and patients . across all three structures , nonlocal spatial staple ( nlss ) with a mixed weighting type provided the most consistent results ; for the optic nerves , nlss resulted in a median dice similarity coefficient of 0.81 , a mean surface distance of 0.41 mm , and a hausdorff distance of 2.18 mm . joint label fusion resulted in slightly superior median performance story_separator_special_tag there have been significant efforts to build a probabilistic atlas of the brain and to use it for many common applications , such as segmentation and registration . though the work related to brain atlases can be applied to nonbrain organs , less attention has been paid to actually building an atlas for organs other than the brain . motivated by the automatic identification of normal organs for applications in radiation therapy treatment planning , we present a method to construct a probabilistic atlas of an abdomen consisting of four organs ( i.e. , liver , kidneys , and spinal cord ) . using 32 noncontrast abdominal computed tomography ( ct ) scans , 31 were mapped onto one individual scan using thin plate spline as the warping transform and mutual information ( mi ) as the similarity measure . except for an initial coarse placement of four control points by the operators , the mi-based registration was automatic . additionally , the four organs in each of the 32 ct data sets were manually segmented . the manual segmentations were warped onto the `` standard '' patient space using the same transform computed from their gray scale ct data story_separator_special_tag the cerebellum has classically been linked to motor learning and coordination . however , there is renewed interest in the role of the cerebellum in non-motor functions such as cognition and in the context of different neuropsychiatric disorders . the contribution of neuroimaging studies to advancing understanding of cerebellar structure and function has been limited , partly due to the cerebellum being understudied as a result of contrast and resolution limitations of standard structural magnetic resonance images ( mri ) . these limitations inhibit proper visualization of the highly compact and detailed cerebellar foliations . in addition , there is a lack of robust algorithms that automatically and reliably identify the cerebellum and its subregions , further complicating the design of large-scale studies of the cerebellum .
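the mean surface distance and hausdorff distance reported above can be computed from euclidean distance transforms of the segmentation surfaces ; a compact scipy sketch for binary 3d masks , with anisotropic voxel spacing passed through the sampling argument :

import numpy as np
from scipy.ndimage import binary_erosion, distance_transform_edt

def surface_distances(a, b, spacing=(1.0, 1.0, 1.0)):
    # mean symmetric surface distance and hausdorff distance between two
    # binary segmentations , from distance transforms of their surfaces
    a, b = np.asarray(a, bool), np.asarray(b, bool)
    sa = a ^ binary_erosion(a)  # surface voxels of a
    sb = b ^ binary_erosion(b)  # surface voxels of b
    da = distance_transform_edt(~sa, sampling=spacing)  # distance to a's surface
    db = distance_transform_edt(~sb, sampling=spacing)  # distance to b's surface
    d_ab, d_ba = db[sa], da[sb]
    msd = (d_ab.sum() + d_ba.sum()) / (d_ab.size + d_ba.size)
    return msd, max(d_ab.max(), d_ba.max())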
as such , automated segmentation of the cerebellar lobules would allow detailed population studies of the cerebellum and its subregions . in this manuscript , we describe a novel set of high-resolution in vivo atlases of the cerebellum developed by pairing mr imaging with a carefully validated manual segmentation protocol . using these cerebellar atlases as inputs , we validate a novel automated segmentation algorithm that takes advantage of the neuroanatomical variability story_separator_special_tag the theme of this special issue ( supplement ) is current challenging medical image analysis . this issue consists of two selected papers presented at the 35th ieee-embc workshop on current challenging image analysis and information processing in life science , which was held on 03 july 2013 , osaka , japan , and one invited paper to reflect the theme of this issue . all three papers were accepted based on the peer-review process of the biomedical engineering online journal . the first paper , entitled deformable part models for object detection in medical images by klaus toennies , et al. , presents a novel application of a deformable model of the finite element method for the detection of objects in medical images . this proposed approach is promising for context-based detection , model-based segmentation , and shape analysis of medical objects . the second paper , entitled motion correction of whole-body pet data with a joint pet-mri registration functional by michael fieseler , et al. , makes use of the multi-modal information simultaneously story_separator_special_tag computers can process large amounts of data , and medical practitioners can deliver better services and provide more accurate diagnoses and treatment regimens to patients . this document describes how 3d slicer supports the command line interface ( cli ) , python , jupyter , and matlab to process medical data . 3d slicer has become useful software worldwide since 1997 , especially in the medical field for preoperative visualization and analysis . today , 3d slicer is supported by the national alliance for medical imaging computing ( na-mic ) , the neuroimaging analysis center ( nac ) , the biomedical informatics research network ( birn ) , the national center for image-guided therapy ( ncigt ) , the harvard clinical and translational science center ( ctsc ) , and the slicer community worldwide as a platform to develop new ideas . in this paper , we demonstrate our knowledge in using the 3d slicer software . story_separator_special_tag introduction : advances in image segmentation of magnetic resonance images ( mri ) have demonstrated that multi-atlas approaches improve segmentation over regular atlas-based approaches . these approaches often rely on a large number of manually segmented atlases ( e.g . 30-80 ) that take significant time and expertise to produce . we present an algorithm , maget-brain ( multiple automatically generated templates ) , for the automatic segmentation of the hippocampus that minimises the number of atlases needed whilst still achieving similar agreement to multi-atlas approaches . thus , our method acts as a reliable multi-atlas approach when using special or hard-to-define atlases that are laborious to construct .
method : maget-brain works by propagating atlas segmentations to a template library , formed from a subset of target images , via transformations estimated by nonlinear image registration . the resulting segmentations are then propagated to each target image and fused using a label fusion method . we conduct two separate monte carlo cross-validation experiments comparing maget-brain and basic multi-atlas whole hippocampal segmentation using differing atlas and template library sizes , and registration and label fusion methods . the first experiment is a 10-fold validation ( per parameter setting ) over story_separator_special_tag the aim of this paper is to develop a probabilistic modeling framework for the segmentation of structures of interest from a collection of atlases . given a subset of atlases registered to the target image for a particular region of interest ( roi ) , a statistical model of appearance and shape is computed for fusing the labels . segmentations are obtained by minimizing an energy function associated with the proposed model , using a graph-cut technique . we test different label fusion methods on publicly available mr images of human brains . story_separator_special_tag the measurement of hippocampal volumes using mri is a useful in-vivo biomarker for detection and monitoring of early alzheimer 's disease ( ad ) , including during the amnestic mild cognitive impairment ( a-mci ) stage . the pathology underlying ad has regionally selective effects within the hippocampus . as such , we predict that hippocampal subfields are more sensitive in discriminating prodromal ad ( i.e. , a-mci ) from cognitively normal controls than whole hippocampal volumes , and attempt to demonstrate this using a semi-automatic method that can accurately segment hippocampal subfields . high-resolution coronal-oblique t2-weighted images of the hippocampal formation were acquired in 45 subjects ( 28 controls and 17 a-mci ; mean age : 69.5 ± 9.2 and 70.2 ± 7.6 , respectively ) . ca1 , ca2 , ca3 , and ca4/dg subfields , along with head and tail regions , were segmented using an automatic algorithm . ca1 and ca4/dg segmentations were manually edited . whole hippocampal volumes were obtained from the subjects ' t1-weighted anatomical images . automatic segmentation produced significant group differences in the following subfields : ca1 ( left : p = 0.001 , right : p = 0.038 ) , ca4/dg ( story_separator_special_tag a statistical model is presented that combines the registration of an atlas with the segmentation of magnetic resonance images . we use an expectation maximization-based algorithm to find a solution within the model , which simultaneously estimates image artifacts , anatomical labelmaps , and a structure-dependent hierarchical mapping from the atlas to the image space . the algorithm produces segmentations for brain tissues as well as their substructures . we demonstrate the approach on a set of 22 magnetic resonance images . on this set of images , the new approach performs significantly better than similar methods which sequentially apply registration and segmentation . story_separator_special_tag comprehensive visual and quantitative analysis of in vivo human mitral valve morphology is central to the diagnosis and surgical treatment of mitral valve disease . real-time 3d transesophageal echocardiography ( 3d tee ) is a practical , highly informative imaging modality for examining the mitral valve in a clinical setting .
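the template-library scheme described at the start of this passage amounts to a two-hop propagation , which can be sketched as bookkeeping around a registration routine ; the two warp callables below are placeholders for a real nonlinear registration and resampling step , so this is only the surrounding logic , not maget-brain itself .

import numpy as np

def maget_style_labels(atlas_labels, warp_atlas_to_template,
                       warp_template_to_target, templates, target):
    # two-hop propagation : every atlas label map is first resampled into
    # each template of the library , then every templated copy is resampled
    # into the target , and all copies are fused by majority vote . the two
    # warp callables are placeholders for registration + resampling .
    candidates = []
    for lab in atlas_labels:
        for tpl in templates:
            on_template = warp_atlas_to_template(lab, tpl)
            candidates.append(warp_template_to_target(on_template, tpl, target))
    candidates = np.stack(candidates)
    n_classes = int(candidates.max()) + 1
    votes = np.stack([(candidates == l).sum(axis=0) for l in range(n_classes)])
    return votes.argmax(axis=0)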
to facilitate visual and quantitative 3d tee image analysis , we describe a fully automated method for segmenting the mitral leaflets in 3d tee image data . the algorithm integrates complementary probabilistic segmentation and shape modeling techniques ( multi-atlas joint label fusion and deformable modeling with continuous medial representation ) to automatically generate 3d geometric models of the mitral leaflets from 3d tee image data . these models are unique in that they establish a shape-based coordinate system on the valves of different subjects and represent the leaflets volumetrically , as structures with locally varying thickness . in this work , expert image analysis is the gold standard for evaluating automatic segmentation . without any user interaction , we demonstrate that the automatic segmentation method accurately captures patient-specific leaflet geometry at both systole and diastole in 3d tee data acquired from a mixed population of subjects with normal valve story_separator_special_tag sequential search methods characterized by a dynamically changing number of features included or eliminated at each step , henceforth `` floating '' methods , are presented . they are shown to give very good results and to be computationally more effective than the branch and bound method . story_separator_special_tag biomarkers derived from brain magnetic resonance ( mr ) imaging show promise for assisting in the clinical diagnosis of brain pathologies . they have been used in many studies in which the goal has been to distinguish between pathologies such as alzheimer 's disease and healthy aging . however , other dementias , in particular frontotemporal dementia , also present overlapping pathological brain morphometry patterns . hence , a classifier that can use morphometric features from a brain mri to discriminate among the three classes of normal aging , alzheimer 's disease ( ad ) , and frontotemporal dementia ( ftd ) would offer considerable utility in correct group identification . compared to the conventional use of multiple pair-wise binary classifiers that learn to discriminate between two classes at each stage , we propose a single three-way classification system that can discriminate between three classes at the same time . we present a novel classifier that is able to perform a three-class discrimination test for discriminating among ad , ftd , and normal controls ( nc ) using volumes , shape invariants , and local displacements ( three features ) of hippocampi and lateral ventricles ( story_separator_special_tag in atlas-based segmentation , using a single atlas for segmenting all patients introduces a bias . multi-atlas techniques overcome this drawback by selecting and fusing the most appropriate atlases from a database for a given patient . globally assessing different multi-atlas strategies provides a biased evaluation of the atlas selection methods . to address this problem , we propose to evaluate atlas selection methods independently of the number of atlases selected and of the atlas fusion step . briefly , we first cluster the selection methods on the basis of rank correlation and then assess each sub-group of methods with respect to a sub-group of reference selection methods . we apply our method to 105 images of the head and neck region . story_separator_special_tag longitudinal image analysis has become increasingly important in clinical studies of normal aging and neurodegenerative disorders .
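the floating search idea summarized in this passage is easy to state in code : after every forward inclusion of the best feature , keep excluding previously selected features for as long as the criterion improves . the sketch below assumes a user-supplied subset criterion to maximise and returns the best subset of the requested size .

def sffs(score, n_features, k_target):
    # sequential forward floating selection : after each forward step , keep
    # removing the least useful selected feature while that improves the
    # best criterion value recorded for the resulting subset size
    selected, best = [], {}
    while len(selected) < k_target:
        remaining = [f for f in range(n_features) if f not in selected]
        add = max(remaining, key=lambda f: score(selected + [f]))
        selected.append(add)
        s = score(selected)
        if len(selected) not in best or s > best[len(selected)][0]:
            best[len(selected)] = (s, list(selected))
        while len(selected) > 2:  # floating ( backward ) steps
            drop = max(selected, key=lambda f: score([g for g in selected if g != f]))
            reduced = [g for g in selected if g != drop]
            if score(reduced) > best[len(reduced)][0]:
                selected = reduced
                best[len(selected)] = (score(selected), list(selected))
            else:
                break
    return best[k_target][1]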
furthermore , there is a growing appreciation of the potential utility of longitudinally acquired structural images and reliable image processing to evaluate disease-modifying therapies . challenges have been related to the variability that is inherent in the available cross-sectional processing tools , to the introduction of bias in longitudinal processing , and to potential over-regularization . in this paper we introduce a novel longitudinal image processing framework , based on unbiased , robust , within-subject template creation , for automatic surface reconstruction and segmentation of brain mri of arbitrarily many time points . we demonstrate that it is essential to treat all input images exactly the same , as removing only interpolation asymmetries is not sufficient to remove processing bias . we successfully reduce variability and avoid over-regularization by initializing the processing at each time point with common information from the subject template . the presented results show a significant increase in precision and discrimination power while preserving the ability to detect large anatomical deviations ; as such they hold great potential in clinical applications , e.g . allowing for smaller sample sizes or story_separator_special_tag a fully automatic system for segmentation of the liver from ct scans is presented . the core of the method consists of a voxel labeling procedure where the probability that each voxel is part of the liver is estimated using a statistical classifier ( k-nearest-neighbor ) and a set of features . several features encode positional information , obtained from a multi-atlas registration procedure . in addition , pre-processing steps are carried out to determine the vertical scan range of the liver and to rotate the scan so that the subject is in the supine position , and post-processing is applied to the voxel classification result to smooth and improve the final segmentation . the method is evaluated on 10 test scans and performs robustly , as the volumetric overlap error is 12.5 % on average and 15.3 % for the worst case . a careful inspection of the results reveals , however , that locally many errors are made and the localization of the border is often not precise . the causes and possible solutions for these failures are briefly discussed . story_separator_special_tag lung segmentation is a prerequisite for automated analysis of chest ct scans . conventional lung segmentation methods rely on large attenuation differences between lung parenchyma and surrounding tissue . these methods fail in scans where dense abnormalities are present , which often occurs in clinical data . some methods to handle these situations have been proposed , but they are too time-consuming or too specialized to be used in clinical practice . in this article , a new hybrid lung segmentation method is presented that automatically detects failures of a conventional algorithm and , when needed , resorts to a more complex algorithm , which is expected to produce better results in abnormal cases . in a large quantitative evaluation on a database of 150 scans from different sources , the hybrid method is shown to perform substantially better than a conventional approach at a relatively low increase in computational cost . story_separator_special_tag atlas-based segmentation is a powerful generic technique for automatic delineation of structures in volumetric images .
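the liver pipeline above turns segmentation into per-voxel classification ; a brute-force numpy rendering of the k-nearest-neighbour probability estimate ( adequate for illustration , not for whole volumes ) :

import numpy as np

def knn_probability(train_feats, train_labels, test_feats, k=15):
    # per-voxel foreground probability as the fraction of the k nearest
    # training samples ( in feature space ) that are labelled foreground
    train_labels = np.asarray(train_labels, dtype=float)
    d = ((test_feats[:, None, :] - train_feats[None, :, :]) ** 2).sum(axis=-1)
    nn = np.argpartition(d, k, axis=1)[:, :k]  # indices of k nearest neighbours
    return train_labels[nn].mean(axis=1)

thresholding the returned probabilities at 0.5 gives hard labels ; the pre- and post-processing steps described above are deliberately omitted .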
several studies have shown that multi-atlas segmentation methods outperform schemes that use only a single atlas , but running multiple registrations on volumetric data is time-consuming . moreover , for many scans or regions within scans , a large number of atlases may not be required to achieve good segmentation performance and may even deteriorate the results . it would therefore be worthwhile to include the decision which and how many atlases to use for a particular target scan in the segmentation process . to this end , we propose two generally applicable multi-atlas segmentation methods , adaptive multi-atlas segmentation ( amas ) and adaptive local multi-atlas segmentation ( almas ) . amas automatically selects the most appropriate atlases for a target image and automatically stops registering atlases when no further improvement is expected . almas takes this concept one step further by locally deciding how many and which atlases are needed to segment a target image . the methods employ a computationally cheap atlas selection strategy , an automatic stopping criterion , and a technique to locally inspect registration results and determine how story_separator_special_tag nonrigid registration of medical images is important for a number of applications such as the creation of population averages , atlas-based segmentation , or geometric correction of functional magnetic resonance imaging ( fmri ) images to name a few . in recent years , a number of methods have been proposed to solve this problem , one class of which involves maximizing a mutual information ( mi ) -based objective function over a regular grid of splines . this approach has produced good results but its computational complexity is proportional to the compliance of the transformation required to register the smallest structures in the image . here , we propose a method that permits the spatial adaptation of the transformation 's compliance . this spatial adaptation allows us to reduce the number of degrees of freedom in the overall transformation , thus speeding up the process and improving its convergence properties . to develop this method , we introduce several novelties : 1 ) we rely on radially symmetric basis functions rather than b-splines traditionally used to model the deformation field ; 2 ) we propose a metric to identify regions that are poorly registered and over which the transformation story_separator_special_tag this paper evaluates strategies for atlas selection in atlas-based segmentation of three-dimensional biomedical images . segmentation by intensity-based nonrigid registration to atlas images is applied to confocal microscopy images acquired from the brains of 20 bees . this paper evaluates and compares four different approaches for atlas image selection : registration to an individual atlas image ( ind ) , registration to an average-shape atlas image ( avg ) , registration to the most similar image from a database of individual atlas images ( sim ) , and registration to all images from a database of individual atlas images with subsequent multi-classifier decision fusion ( mul ) . the mul strategy is a novel application of multi-classifier techniques , which are common in pattern recognition , to atlas-based segmentation . for each atlas selection strategy , the segmentation performance of the algorithm was quantified by the similarity index ( si ) between the automatic segmentation result and a manually generated gold standard .
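as an illustration of the mul ( multi-classifier decision fusion ) strategy and of the similarity index used for evaluation , a minimal numpy sketch , assuming the atlas label volumes have already been warped to the target space :

```python
import numpy as np

def majority_vote(warped_labels):
    """fuse n warped atlas label volumes (integer arrays of equal shape,
    labels 0..c-1) by per-voxel majority vote, as in the mul strategy."""
    stack = np.stack(warped_labels)                      # (n_atlases, *vol)
    n_classes = int(stack.max()) + 1
    votes = np.stack([(stack == c).sum(axis=0) for c in range(n_classes)])
    return votes.argmax(axis=0)

def similarity_index(a, b):
    """similarity (dice) index between two binary segmentations."""
    a, b = a.astype(bool), b.astype(bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())
```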
the best segmentation accuracy was achieved using the mul paradigm , which resulted in a mean similarity index value between manual and automatic segmentation of 0.86 ( avg , 0.84 ; sim , 0.82 ; ind , story_separator_special_tag we develop and evaluate in this paper a multi-classifier framework for atlas-based segmentation , a popular segmentation method in biomedical image analysis . an atlas is a spatial map of classes ( e.g. , anatomical structures ) , which is usually derived from a reference individual by manual segmentation . an atlas-based classification is generated by registering an image to an atlas , that is , by computing a semantically correct coordinate mapping between the two . in the present paper the registration algorithm is an intensity-based non-rigid method that computes a free-form deformation ( ffd ) defined on a uniform grid of control points . the transformation is regularized by a weighted smoothness constraint term . different atlases , as well as different parameterizations of the registration algorithm , lead to different and somewhat independent atlas-based classifiers . the outputs of these classifiers can be combined in order to improve overall classification accuracy . in an evaluation study , biomedical images from seven subjects are segmented ( 1 ) using three individual atlases ; ( 2 ) using one atlas and three different resolutions of the ffd control point grid , ( 3 ) using one atlas and three story_separator_special_tag ( table of contents of a conference proceedings volume listing paper titles on shape modelling , shape analysis , and segmentation ; no abstract text available ) story_separator_special_tag we propose in this work a patch-based image labeling method relying on a label propagation framework . based on image intensity similarities between the input image and an anatomy textbook , an original strategy which does not require any nonrigid registration is presented . following recent developments in nonlocal image denoising , the similarity between images is represented by a weighted graph computed from an intensity-based distance between patches .
experiments on simulated and in vivo magnetic resonance images show that the proposed method is very successful in providing automated human brain labeling . story_separator_special_tag in this paper the authors present a new approach for the nonrigid registration of contrast-enhanced breast mri . a hierarchical transformation model of the motion of the breast has been developed . the global motion of the breast is modeled by an affine transformation while the local breast motion is described by a free-form deformation ( ffd ) based on b-splines . normalized mutual information is used as a voxel-based similarity measure which is insensitive to intensity changes as a result of the contrast enhancement . registration is achieved by minimizing a cost function , which represents a combination of the cost associated with the smoothness of the transformation and the cost associated with the image similarity . the algorithm has been applied to the fully automated registration of three-dimensional ( 3-d ) breast mri in volunteers and patients . in particular , the authors have compared the results of the proposed nonrigid registration algorithm to those obtained using rigid and affine registration techniques . the results clearly indicate that the nonrigid registration algorithm is much better able to recover the motion and deformation of the breast than rigid or affine registration algorithms . story_separator_special_tag we propose a nonparametric , probabilistic model for the automatic segmentation of medical images , given a training set of images and corresponding label maps . the resulting inference algorithms rely on pairwise registrations between the test image and individual training images . the training labels are then transferred to the test image and fused to compute the final segmentation of the test subject . such label fusion methods have been shown to yield accurate segmentation , since the use of multiple registrations captures greater inter-subject anatomical variability and improves robustness against occasional registration failures . to the best of our knowledge , this manuscript presents the first comprehensive probabilistic framework that rigorously motivates label fusion as a segmentation approach . the proposed framework allows us to compare different label fusion algorithms theoretically and practically . in particular , recent label fusion or multiatlas segmentation algorithms are interpreted as special cases of our framework . we conduct two sets of experiments to validate the proposed methods . in the first set of experiments , we use 39 brain mri scans - with manually segmented white matter , cerebral cortex , ventricles and subcortical structures - to compare different label fusion story_separator_special_tag the authors describe a computerized method to automatically find and label the cortical surface in three-dimensional ( 3-d ) magnetic resonance ( mr ) brain images . the approach the authors take is to model a prelabeled brain atlas as a physical object and give it elastic properties , allowing it to warp itself onto regions in a preprocessed image . preprocessing consists of boundary-finding and a morphological procedure which automatically extracts the brain and sulci from an mr image and provides a smoothed representation of the brain surface to which the deformable model can rapidly converge . the authors ' deformable models are energy-minimizing elastic surfaces that can accurately locate image features . 
the models are parameterized with 3-d bicubic b-spline surfaces . the authors design the energy function such that cortical fissure ( sulci ) points on the model are attracted to fissure points on the image and the remaining model points are attracted to the brain surface . a conjugate gradient method minimizes the energy function , allowing the model to automatically converge to the smoothed brain surface . finally , labels are propagated from the deformed atlas onto the high-resolution brain surface . story_separator_special_tag segmentation of organs at risk ( oars ) remains one of the most time-consuming tasks in radiotherapy treatment planning . atlas-based segmentation methods using single templates have emerged as a practical approach to automate the process for brain or head and neck anatomy , but pose significant challenges in regions where large interpatient variations are present . we show that significant changes are needed to autosegment thoracic and abdominal datasets by combining multi-atlas deformable registration with a level set-based local search . segmentation is hierarchical , with a first stage detecting bulk organ location , and a second step adapting the segmentation to fine details present in the patient scan . the first stage is based on warping multiple presegmented templates to the new patient anatomy using a multimodality deformable registration algorithm able to cope with changes in scanning conditions and artifacts . these segmentations are compacted in a probabilistic map of organ shape using the staple algorithm . final segmentation is obtained by adjusting the probability map for each organ type , using customized combinations of delineation filters exploiting prior knowledge of organ characteristics . validation is performed by comparing automated and manual segmentation using the dice coefficient , story_separator_special_tag in this paper , different methods to improve atlas based segmentation are presented . the first technique is a new mapping of the labels of an atlas consistent with a given intensity classification segmentation . this new mapping combines the two segmentations using the nearest neighbor transform and is especially effective for complex and folded regions like the cortex where the registration is difficult . then , in a multi atlas context , an original weighting is introduced to combine the segmentation of several atlases using a voting procedure . this weighting is derived from statistical classification theory and is computed offline using the atlases as a training dataset . concretely , the accuracy map of each atlas is computed and the vote is weighted by the accuracy of the atlases . numerical experiments have been performed on publicly available in vivo datasets and show that , when used together , the two techniques provide an important improvement of the segmentation accuracy . story_separator_special_tag this paper proposes a method to build a bone-cartilage atlas of the knee and to use it to automatically segment femoral and tibial cartilage from t1 weighted magnetic resonance ( mr ) images . anisotropic spatial regularization is incorporated into a three-label segmentation framework to improve segmentation results for the thin cartilage layers . we jointly use the atlas information and the output of a probabilistic k nearest neighbor classifier within the segmentation method . the resulting cartilage segmentation method is fully automatic . 
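the voxel-classification step shared by the liver and cartilage pipelines above can be sketched with scikit-learn ; the exact feature set ( intensity plus registration-derived positional features ) and the value of k are assumptions for illustration , not the authors ' settings :

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def knn_voxel_probabilities(train_feats, train_labels, test_feats, k=15):
    """estimate per-voxel class probabilities with a k-nn classifier.

    train_feats: (n_voxels, n_features) array, e.g. columns of intensity
    and atlas-registration-derived positional features.
    train_labels: (n_voxels,) integer labels from expert segmentations.
    """
    clf = KNeighborsClassifier(n_neighbors=k)
    clf.fit(train_feats, train_labels)
    return clf.predict_proba(test_feats)   # (n_test_voxels, n_classes)
```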
validation results on 18 knee mr images against manual expert segmentations from a dataset acquired for osteoarthritis research show good performance for the segmentation of femoral and tibial cartilage ( mean dice similarity coefficient of 78.2 % and 82.6 % respectively ) . story_separator_special_tag neonatal brain mri segmentation is a challenging problem due to its poor image quality . atlas-based segmentation approaches have been widely used for guiding brain tissue segmentation . existing brain atlases are usually constructed by equally averaging pre-segmented images in a population . however , such approaches diminish local inter-subject structural variability and thus lead to lower segmentation guidance capability . to deal with this problem , we propose a multi-region-multi-reference framework for atlas-based neonatal brain segmentation . for each region of a brain parcellation , a population of spatially normalized pre-segmented images is clustered into a number of sub-populations . each sub-population of a region represents an independent distribution from which a regional probability atlas can be generated . a selection of these regional atlases , across different sub-regions , will in the end be adaptively combined to form an overall atlas specific to the query image . given a query image , the determination of the appropriate set of regional atlases is achieved by comparing the query image regionally with the reference , or exemplar , of each sub-population . upon obtaining an overall atlas , an atlas-based joint registration segmentation strategy is employed to segment the story_separator_special_tag background and purpose : multi-atlas segmentation can yield better results than single atlas segmentation , but practical applications are limited by long calculation times for deformable registration . to shorten the calculation time , pre-calculated registrations of atlases could be linked via a single atlas registered in runtime to the current patient . the primary purpose of this work is to investigate and quantify segmentation quality changes introduced by such linked registrations . we also determine the optimal parameters for fusing linked multi-atlas labels using probabilistic weighted fusion . material and methods : computed tomography images of 10 head and neck cancer patients were used as atlases , with parotid glands , submandibular glands , the mandible and lymph node levels ii-iv segmented by an experienced radiation oncologist following published consensus guidelines . the change in segmentation quality scored by dice similarity coefficient ( dsc ) for linking free-form deformable registrations , modeled by b-splines , was investigated for both single- and multi-atlas label fusion by using a leave-one-out approach . results : the median decrease of the dsc was in the range 2.8 % to 8.4 % compared to direct registrations for all structures while reducing the computer calculation time story_separator_special_tag background : semi-automated segmentation using deformable registration of selected atlas cases consisting of expert segmented patient images has been proposed to facilitate the delineation of lymph . story_separator_special_tag deformable image registration is a fundamental task in medical image processing .
among its most important applications , one may cite : 1 ) multi-modality fusion , where information acquired by different imaging devices or protocols is fused to facilitate diagnosis and treatment planning ; 2 ) longitudinal studies , where temporal structural or anatomical changes are investigated ; and 3 ) population modeling and statistical atlases used to study normal anatomical variability . in this paper , we attempt to give an overview of deformable registration methods , putting emphasis on the most recent advances in the domain . additional emphasis has been given to techniques applied to medical images . in order to study image registration methods in depth , their main components are identified and studied independently . the most recent techniques are presented in a systematic fashion . the contribution of this paper is to provide an extensive account of registration techniques in a systematic manner . story_separator_special_tag ( table of contents of an artificial intelligence conference proceedings volume listing paper titles on ontologies , problem solving , knowledge discovery and data mining , and expert systems ; no abstract text available ) story_separator_special_tag the purpose of this study was to develop and validate an observer-independent approach for automatic generation of volume-of-interest ( voi ) brain templates to be used in emission tomography studies of the brain . the method utilizes a voi probability map created on the basis of a database of several subjects ' mr-images , where voi sets have been defined manually . high-resolution structural mr-images and 5-ht2a receptor binding pet-images ( in terms of 18f-altanserin binding ) from 10 healthy volunteers and 10 patients with mild cognitive impairment were included for the analysis . a template including 35 vois was manually delineated on the subjects ' mr images . through a warping algorithm , template voi sets defined from each individual were transferred to the other subjects ' mr-images and the voxel overlap was compared to the voi set specifically drawn for that particular individual . comparisons were also made for the voi templates ' 5-ht2a receptor binding values .
it was shown that when the generated voi set is based on more than one template voi set , delineation of vois is better reproduced and shows less variation as compared both to transfer of a single set of template vois as story_separator_special_tag this paper presents a fully automated method for segmenting articular knee cartilage and bone from in vivo 3-d dual echo steady state images . the magnetic resonance imaging ( mri ) datasets were obtained from the osteoarthritis initiative ( oai ) pilot study and include longitudinal images from controls and subjects with knee osteoarthritis ( oa ) scanned twice at each visit ( baseline , 24 month ) . initially , human experts segmented six mri series . five of the six resultant sets served as reference atlases for a multiatlas segmentation algorithm . the methodology created precise knee segmentations that were used to extract articular cartilage volume , surface area , and thickness as well as subchondral bone plate curvature . comparison to manual segmentation showed dice similarity coefficient ( dsc ) of 0.88 and 0.84 for the femoral and tibial cartilage . in oa subjects , thickness measurements showed test-retest precision ranging from 0.014 mm ( 0.6 % ) at the femur to 0.038 mm ( 1.6 % ) at the femoral trochlea . in the same population , the curvature test-retest precision ranged from 0.0005 mm-1 ( 3.6 % ) at the femur to 0.0026 mm-1 ( story_separator_special_tag this paper examines the multiple atlas random diffeomorphic orbit model in computational anatomy ( ca ) for parameter estimation and segmentation of subcortical and ventricular neuroanatomy in magnetic resonance imagery . we assume that there exist multiple magnetic resonance image ( mri ) atlases , each atlas containing a collection of locally-defined charts in the brain generated via manual delineation of the structures of interest . we focus on maximum a posteriori estimation of high dimensional segmentations of mr within the class of generative models representing the observed mri as a conditionally gaussian random field , conditioned on the atlas charts and the diffeomorphic change of coordinates of each chart that generates it . the charts and their diffeomorphic correspondences are unknown and viewed as latent or hidden variables . we demonstrate that the expectation-maximization ( em ) algorithm arises naturally , yielding the likelihood-fusion equation which the a posteriori estimator of the segmentation labels maximizes . the likelihoods being fused are modeled as conditionally gaussian random fields with mean fields a function of each atlas chart under its diffeomorphic change of coordinates onto the target . the conditional-mean in the em algorithm specifies the convex weights with which the story_separator_special_tag in this paper , we propose a novel method for parcellating the human brain into 193 anatomical structures based on diffusion tensor images ( dtis ) . this was accomplished in the setting of multi-contrast diffeomorphic likelihood fusion using multiple dti atlases . dti images are modeled as high dimensional fields , with each voxel exhibiting a vector valued feature comprising of mean diffusivity ( md ) , fractional anisotropy ( fa ) , and fiber angle . for each structure , the probability distribution of each element in the feature vector is modeled as a mixture of gaussians , the parameters of which are estimated from the labeled atlases . the structure-specific feature vector is then used to parcellate the test image . 
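a deliberately simplified sketch of the likelihood-fusion idea : each atlas supplies a gaussian likelihood of the voxel feature under every label , and the per-atlas likelihoods are multiplied into a posterior . the papers above estimate mixture parameters with em ; here the gaussian parameters are assumed given and a single gaussian per ( atlas , label ) pair is used :

```python
import numpy as np

def fuse_likelihoods(feature, params, prior):
    """feature: (d,) voxel feature vector (e.g. md, fa, fiber angle).
    params: list over atlases; params[i][l] = (mean, var) arrays of a
    diagonal gaussian for label l in atlas i (assumed pre-estimated).
    prior: (n_labels,) label prior at this voxel. returns the posterior."""
    post = np.asarray(prior, dtype=float).copy()
    for atlas_params in params:
        lik = np.empty(len(post))
        for l, (mean, var) in enumerate(atlas_params):
            # diagonal gaussian density; atlases are treated as independent,
            # which is a simplifying assumption of this sketch
            lik[l] = np.exp(-0.5 * np.sum((feature - mean) ** 2 / var)) \
                     / np.sqrt(np.prod(2.0 * np.pi * var))
        post *= lik
    return post / post.sum()
```

in practice the products would be accumulated in the log domain to avoid numerical underflow .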
for each atlas , a likelihood is iteratively computed based on the structure-specific vector feature . the likelihoods from multiple atlases are then fused . the updating and fusing of the likelihoods is achieved based on the expectation-maximization ( em ) algorithm for maximum a posteriori ( map ) estimation problems . we first demonstrate the performance of the algorithm by examining the parcellation accuracy of 18 structures from 25 subjects with a varying degree of story_separator_special_tag the success of radiation therapy depends critically on accurately delineating the target volume , which is the region of known or suspected disease in a patient . methods that can compute a contour set defining a target volume on a set of patient images will contribute greatly to the success of radiation therapy and dramatically reduce the workload of radiation oncologists , who currently draw the target by hand on the images using simple computer drawing tools . the most challenging part of this process is to estimate where there is microscopic spread of disease . given a set of reference ct images with `` gold standard '' lymph node regions drawn by the experts , we are proposing an image registration based method that could automatically contour the cervical lymph node levels for patients receiving radiation therapy . we are also proposing a method that could help us identify the reference models which could potentially produce the best results . the computer generated lymph node regions are evaluated quantitatively and qualitatively . although not conforming to clinical criteria , the results suggest the technique has promise . story_separator_special_tag reliable identification of thalamic nuclei is required to improve targeting of electrodes used in deep brain stimulation ( dbs ) , and for exploring the role of the thalamus in health and disease . a previously described method using probabilistic tractography to segment the thalamus based on connections to cortical target regions was implemented . both within- and between-subject reproducibility were quantitatively assessed by the overlap of the resulting segmentations ; the effect of two different numbers of target regions ( 6 and 31 ) on reproducibility of the segmentation results was also investigated . very high reproducibility was observed when a single dataset was processed multiple times using different starting conditions . thalamic segmentation was also very reproducible when multiple datasets from the same subject were processed using six cortical target regions . within-subject reproducibility was reduced when the number of target regions was increased , particularly in medial and posterior regions of the thalamus . a large degree of overlap in segmentation results from different subjects was obtained , particularly in thalamic regions classified as connecting to frontal , parietal , temporal and pre-central cortical target regions . story_separator_special_tag neointima thickening plays a decisive role in coronary restenosis after stenting . the aim of this study is to detect neointima tissue in intravascular optical coherence tomography ( ivoct ) sequences . we developed a multi-atlas based segmentation method to detect neointima without stent strut locations . the atlases are selected by measurements of stenosis and a similarity metric . the resulting probability map is then used to estimate the neointima label in the unseen image . to account for the registration errors , a patch-based label fusion approach is applied .
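the patch-based label fusion step just mentioned usually weights each candidate atlas patch by its intensity similarity to the target patch , in the style of non-local means ; a minimal sketch for a single target voxel , with the decay parameter h an assumed tuning knob :

```python
import numpy as np

def patch_fusion_vote(target_patch, atlas_patches, atlas_labels, h=0.5):
    """target_patch: (p,) flattened intensity patch around the target voxel.
    atlas_patches: (m, p) candidate patches from the atlases' search windows.
    atlas_labels: (m,) label of each candidate patch's center voxel.
    h: decay parameter of the exponential weight (assumed, tuned per data)."""
    d2 = np.sum((atlas_patches - target_patch) ** 2, axis=1)
    w = np.exp(-d2 / (h * target_patch.size))      # non-local means weights
    labels = np.unique(atlas_labels)
    votes = np.array([w[atlas_labels == l].sum() for l in labels])
    return labels[votes.argmax()]
```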
validation is performed using 18 typical in-vivo ivoct sequences . the comparison against manual expert segmentation and other fusion approaches demonstrates that the proposed neointima identification is robust and accurate . story_separator_special_tag accurate automated brain structure segmentation methods facilitate the analysis of large-scale neuroimaging studies . this work describes a novel method for brain structure segmentation in magnetic resonance images that combines information about a structure 's location and appearance . the spatial model is implemented by registering multiple atlas images to the target image and creating a spatial probability map . the structure 's appearance is modeled by a classifier based on gaussian scale-space features . these components are combined with a regularization term in a bayesian framework that is globally optimized using graph cuts . the incorporation of the appearance model enables the method to segment structures with complex intensity distributions and increases its robustness against errors in the spatial model . the method is tested in cross-validation experiments on two datasets acquired with different magnetic resonance sequences , in which the hippocampus and cerebellum were segmented by an expert . furthermore , the method is compared to two other segmentation techniques that were applied to the same data . results show that the atlas- and appearance-based method produces accurate results with mean dice similarity indices of 0.95 for the cerebellum , and 0.87 for the hippocampus . this was story_separator_special_tag we propose an efficient non-parametric diffeomorphic image registration algorithm based on thirion 's demons algorithm . in the first part of this paper , we show that thirion 's demons algorithm can be seen as an optimization procedure on the entire space of displacement fields . we provide strong theoretical roots to the different variants of thirion 's demons algorithm . this analysis predicts a theoretical advantage for the symmetric forces variant of the demons algorithm . we show on controlled experiments that this advantage is confirmed in practice and yields a faster convergence . in the second part of this paper , we adapt the optimization procedure underlying the demons algorithm to a space of diffeomorphic transformations . in contrast to many diffeomorphic registration algorithms , our solution is computationally efficient since in practice it only replaces an addition of displacement fields by a few compositions . our experiments show that in addition to being diffeomorphic , our algorithm provides results that are similar to the ones from the demons algorithm but with transformations that are much smoother and closer to the gold standard , available in controlled experiments , in terms of jacobians . story_separator_special_tag accurately monitoring the efficacy of disease-modifying drugs in glaucoma therapy is of critical importance . albeit high resolution spectral-domain optical coherence tomography ( sdoct ) is now in widespread clinical use , past landmark glaucoma clinical trials have used time-domain optical coherence tomography ( tdoct ) , which leads , however , to poor statistical power due to low signal-to-noise characteristics . here , we propose a probabilistic ensemble model for improving the statistical power of imaging-based clinical trials . tdoct are converted to synthesized sdoct images and segmented via bayesian fusion of an ensemble of generative adversarial networks ( gans ) . 
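the ensemble fusion step can be approximated very simply : each gan in the ensemble yields a per-voxel foreground probability map , and the maps are averaged before thresholding . this is an averaging stand-in for the bayesian fusion named above , not the authors ' exact scheme :

```python
import numpy as np

def fuse_ensemble(prob_maps, threshold=0.5):
    """prob_maps: list of (h, w) foreground-probability maps, one per
    ensemble member. returns the fused binary segmentation."""
    fused = np.mean(np.stack(prob_maps), axis=0)
    return fused > threshold
```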
the proposed model integrates super resolution ( sr ) and multi-atlas segmentation ( mas ) in a principled way . experiments on the uk glaucoma treatment study ( ukgts ) show that the model successfully combines the strengths of both techniques ( improved image quality of sr and effective label propagation of mas ) , and produces a significantly better separation between treatment arms than conventional segmentation of tdoct . story_separator_special_tag multi-atlas based label fusion methods have been successfully used for medical image segmentation . in the field of brain region segmentation , multi-atlas based methods propagate labels from multiple atlases to the target image by the similarity between patches in the target image and the atlases . most existing multi-atlas based methods use intensity features , which can hardly capture high-order information in brain images . in light of this , in this paper , we endeavor to apply high-order restricted boltzmann machines to represent brain images and use the learnt features for the segmentation of brain regions of interest ( rois ) . specifically , we first capture the covariance and the mean information from patches by a high-order boltzmann machine . then , we propagate the label by the similarity of the learnt high-order features . we validate our feature learning method on two well-known label fusion methods , e.g. , local-weighted voting ( lwv ) and the non-local mean patch-based method ( pbm ) . experimental results on the nirep dataset demonstrate that our method can improve the performance of both lwv and pbm by using the high-order features . story_separator_special_tag multi-atlas segmentation is an effective approach for automatically labeling objects of interest in biomedical images . in this approach , multiple expert-segmented example images , called atlases , are registered to a target image , and deformed atlas segmentations are combined using label fusion . among the proposed label fusion strategies , weighted voting with spatially varying weight distributions derived from atlas-target intensity similarity has been particularly successful . however , one limitation of these strategies is that the weights are computed independently for each atlas , without taking into account the fact that different atlases may produce similar label errors . to address this limitation , we propose a new solution for the label fusion problem in which weighted voting is formulated in terms of minimizing the total expectation of labeling error and in which pairwise dependency between atlases is explicitly modeled as the joint probability of two atlases making a segmentation error at a voxel . this probability is approximated using intensity similarity between a pair of atlases and the target image in the neighborhood of each voxel .
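this formulation admits a closed-form weighted vote : if mx is the matrix of expected pairwise labeling errors at a voxel , the weights that minimize the total expected error subject to summing to one are w = mx^-1 1 / ( 1^t mx^-1 1 ) . a minimal numpy sketch that , as the abstract suggests , approximates mx from patch intensity differences ( the regularization constant alpha is an assumption of this sketch ) :

```python
import numpy as np

def joint_fusion_weights(target_patch, atlas_patches, alpha=0.1):
    """atlas_patches: (n, p) patches from n atlases around one voxel.
    returns (n,) per-atlas voting weights in the joint-fusion style."""
    diff = atlas_patches - target_patch              # per-atlas error proxies
    mx = np.abs(diff) @ np.abs(diff).T               # pairwise error products
    mx += alpha * np.trace(mx) / len(mx) * np.eye(len(mx))  # conditioning
    ones = np.ones(len(mx))
    w = np.linalg.solve(mx, ones)
    return w / w.sum()
```

the regularization term keeps mx invertible when atlas patches are nearly identical , which is common in high-agreement regions .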
we validate our method in two medical image segmentation problems : hippocampus segmentation and hippocampus subfield segmentation in magnetic story_separator_special_tag ( table of contents of a medical image computing and computer-assisted intervention proceedings volume listing paper titles on tubular structures , augmented reality , and surgical navigation ; no abstract text available ) story_separator_special_tag automated segmentation and labeling of individual brain anatomical regions in mri are challenging due to the issue of individual structural variability . although atlas-based segmentation has shown its potential for both tissue and structure segmentation , due to the inherent natural variability as well as disease-related changes in mr appearance , a single atlas image is often inappropriate to represent the full population of datasets processed in a given neuroimaging study . as an alternative to single atlas segmentation , the use of multiple atlases alongside label fusion techniques has been introduced , using a set of individual atlases that encompasses the expected variability in the studied population . in our study , we propose a multi-atlas segmentation scheme with a novel graph-based atlas selection technique . we first pair and co-register all atlases and the subject mr scans . a directed graph with edge weights based on intensity and shape similarity between all mr scans is then computed . the set of neighboring templates is selected via clustering of the graph . finally , weighted majority voting is employed to create the final segmentation over the selected atlases . this multi-atlas segmentation scheme is used story_separator_special_tag purpose : cone-beam computed tomography ( cbct ) is an increasingly utilized imaging modality for the diagnosis and treatment planning of patients with craniomaxillofacial ( cmf ) deformities . accurate segmentation of the cbct image is an essential step in generating three-dimensional ( 3d ) models for the diagnosis and treatment planning of patients with cmf deformities .
however , due to the poor image quality , including very low signal-to-noise ratio and widespread image artifacts such as noise , beam hardening , and inhomogeneity , it is challenging to segment cbct images . in this paper , the authors present a new automatic segmentation method to address these problems . methods : to segment cbct images , the authors propose a new method for fully automated cbct segmentation by using patch-based sparse representation to ( 1 ) segment bony structures from the soft tissues and ( 2 ) further separate the mandible from the maxilla . specifically , a region-specific registration strategy is first proposed to warp all the atlases to the current testing subject and then a sparse-based label propagation strategy is employed to estimate a patient-specific atlas from all aligned atlases . finally , story_separator_special_tag segmentation of infant brain mr images is challenging due to poor spatial resolution , severe partial volume effect , and the ongoing maturation and myelination processes . during the first year of life , the brain image contrast between white and gray matter undergoes dramatic changes . in particular , the image contrast inverts around 6-8 months of age , where the white and gray matter tissues are isointense in t1 and t2 weighted images and hence exhibit extremely low tissue contrast , posing significant challenges for automated segmentation . in this paper , we propose a general framework that adopts sparse representation to fuse the multi-modality image information and further incorporates anatomical constraints for brain tissue segmentation . specifically , we first derive an initial segmentation from a library of aligned images with ground-truth segmentations by using sparse representation in a patch-based fashion for the multi-modality t1 , t2 and fa images . the segmentation result is further iteratively refined by integration of the anatomical constraint . the proposed method was evaluated on 22 infant brain mr images acquired at around 6 months of age by using a leave-one-out cross-validation , as well as on 10 unseen testing subjects story_separator_special_tag the segmentation of the neonatal brain mr image into white matter ( wm ) , gray matter ( gm ) , and cerebrospinal fluid ( csf ) is challenging due to the low spatial resolution , severe partial volume effect , high image noise , and dynamic myelination and maturation processes . atlas-based methods have been widely used for guiding neonatal brain segmentation . existing brain atlases were generally constructed by equally averaging all the aligned template images from a population . however , such population-based atlases might not be representative of a testing subject in regions with high inter-subject variability and thus often lead to a low capability in guiding segmentation in those regions . recently , patch-based sparse representation techniques have been proposed to effectively select the most relevant elements from a large group of candidates , which can be used to generate a subject-specific representation with rich local anatomical details for guiding the segmentation . accordingly , in this paper , we propose a novel patch-driven level set method for the segmentation of neonatal brain mr images by taking advantage of sparse representation techniques .
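the patch-based sparse representation recurring in these methods can be sketched as an l1-regularized , non-negative coding of the target patch over a dictionary of atlas patches , with the coefficients reused as label weights ; the lasso penalty alpha is an assumed hyperparameter :

```python
import numpy as np
from sklearn.linear_model import Lasso

def sparse_label_vote(target_patch, atlas_patches, atlas_labels, alpha=0.01):
    """code target_patch (p,) over atlas_patches (m, p); weight the atlas
    labels by the resulting sparse non-negative coefficients."""
    coder = Lasso(alpha=alpha, positive=True, max_iter=5000)
    coder.fit(atlas_patches.T, target_patch)   # columns = dictionary atoms
    w = coder.coef_                            # (m,) sparse weights
    labels = np.unique(atlas_labels)
    votes = np.array([w[atlas_labels == l].sum() for l in labels])
    return labels[votes.argmax()]
```

the sparsity is what suppresses misleading atlas patches : most candidates receive exactly zero weight and therefore cannot contribute to the vote .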
specifically , we first build a subject-specific atlas from a library story_separator_special_tag accurate diagnosis of alzheimer 's disease ( ad ) , especially mild cognitive impairment ( mci ) , is critical for treatment of the disease . many algorithms have been proposed to improve classification performance . while most existing methods focus on exploring different feature extraction and selection techniques , in this paper , we show that the pre-processing steps for mri scans , i.e. , registration and segmentation , significantly affect the classification performance . specifically , we evaluate the classification performance given by a multi-atlas based multi-image segmentation ( mabmis ) method , with respect to more conventional segmentation methods . by incorporating tree-based groupwise registration and iterative groupwise segmentation strategies , mabmis attains more accurate and consistent segmentation results compared with the conventional methods that do not take into account the inherent distribution of images under segmentation . this increased segmentation accuracy will benefit classification by minimizing errors that are propagated to the subsequent analysis steps . experimental results indicate that mabmis achieves better performance when compared with the conventional methods in the following classification tasks using the adni dataset : ad vs. mci ( accuracy : 71.8 % ) , ad vs. healthy control ( hc story_separator_special_tag we propose a novel segmentation approach based on deep convolutional encoder networks and apply it to the segmentation of multiple sclerosis ( ms ) lesions in magnetic resonance images . our model is a neural network that has both convolutional and deconvolutional layers , and combines feature extraction and segmentation prediction in a single model . the joint training of the feature extraction and prediction layers allows the model to automatically learn features that are optimized for accuracy for any given combination of image types . in contrast to existing automatic feature learning approaches , which are typically patch-based , our model learns features from entire images , which eliminates patch selection and redundant calculations at the overlap of neighboring patches and thereby speeds up the training . our network also uses a novel objective function that works well for segmenting underrepresented classes , such as ms lesions . we have evaluated our method on the publicly available labeled cases from the ms lesion segmentation challenge 2008 data set , showing that our method performs comparably to the state-of-the-art . in addition , we have evaluated our method on the images of 500 subjects from an ms clinical trial and story_separator_special_tag characterizing the performance of image segmentation approaches has been a persistent challenge . performance analysis is important since segmentation algorithms often have limited accuracy and precision . interactive drawing of the desired segmentation by human raters has often been the only acceptable approach , and yet suffers from intra-rater and inter-rater variability . automated algorithms have been sought in order to remove the variability introduced by raters , but such algorithms must be assessed to ensure they are suitable for the task . the performance of raters ( human or algorithmic ) generating segmentations of medical images has been difficult to quantify because of the difficulty of obtaining or estimating a known true segmentation for clinical data .
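the estimation problem posed here is what staple-type algorithms ( including the softstaple variant below ) solve : treat the true segmentation as hidden and iterate between estimating it and each rater 's sensitivity and specificity . a minimal binary em sketch , with a fixed scalar prior assumed for brevity :

```python
import numpy as np

def staple_binary(decisions, prior=0.5, n_iter=50):
    """decisions: (n_raters, n_voxels) binary array. returns the consensus
    foreground probability per voxel and per-rater (sensitivity, specificity)."""
    d = decisions.astype(float)
    p = np.full(d.shape[0], 0.9)   # sensitivities, initialized optimistically
    q = np.full(d.shape[0], 0.9)   # specificities
    for _ in range(n_iter):
        # e-step: posterior probability that each voxel is truly foreground
        a = prior * np.prod(np.where(d == 1, p[:, None], 1 - p[:, None]),
                            axis=0)
        b = (1 - prior) * np.prod(np.where(d == 1, 1 - q[:, None], q[:, None]),
                                  axis=0)
        w = a / (a + b + 1e-12)
        # m-step: re-estimate rater performance given the soft truth
        p = (d @ w) / (w.sum() + 1e-12)
        q = ((1 - d) @ (1 - w)) / ((1 - w).sum() + 1e-12)
    return w, p, q
```

thresholding w at 0.5 yields a consensus segmentation ; the same machinery doubles as a performance-weighted fusion of weak classifiers .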
although physical and digital phantoms can be constructed for which ground truth is known or readily estimated , such phantoms do not fully reflect clinical images due to the difficulty of constructing phantoms which reproduce the full range of imaging characteristics and normal and pathological anatomical variability observed in clinical data . comparison to a collection of segmentations by raters is an attractive alternative since it can be carried out directly on the relevant clinical imaging data . however , the most story_separator_special_tag reliable and fast segmentation of the human cerebellum with its complex architecture of lobes and lobules has been a challenge for the past decades . emerging knowledge of the functional integration of the cerebellum in various sensorimotor and cognitive behavioral circuits demands new automatic segmentation techniques , with accuracies similar to manual segmentations , but applicable to large subject numbers in a reasonable time frame . this article presents the development and application of a novel pipeline for rapid automatic segmentation of the human cerebellum and its lobules ( rascal ) combining patch-based label fusion and a template library of manually labeled cerebella of 16 healthy controls from the international consortium for brain mapping ( icbm ) database . leave-one-out experiments revealed a good agreement between manual and automatic segmentations ( dice kappa = 0.82 ) . intraclass correlation coefficients ( icc ) were calculated to test the reliability of segmented volumes and were highest ( icc > 0.9 ) for global measures ( total and hemispherical grey and white matter ) , followed by larger lobules of the posterior lobe ( icc > 0.8 ) . further , we applied the pipeline to all 152 young healthy controls story_separator_special_tag we introduce here a new algorithm , called softstaple , for computing estimates of segmentation generator performance and a reference standard segmentation from a collection of probabilistic segmentations of an image . these tasks have previously been investigated for segmentations with discrete label values , but few techniques exploit the information available in probabilistic segmentations . our new method may be used to evaluate classification algorithms , to fuse weak classifiers in a performance-weighted fashion , or to combine the results of a previous fusion of manual segmentations in a hierarchical manner . we describe and validate our new algorithm , and compare its performance to other techniques in two applications with real-world data . story_separator_special_tag the hippocampus is located within the medial temporal lobe and plays a key role in learning and episodic , semantic , and spatial memory . dysfunction has been reported in neurologic and psychiatric disorders including epilepsy ( wu et al. , 2005 ) , alzheimer 's disease ( apostolova et al. , 2006 ) , schizophrenia ( tanskanen et al. , 2005 ) , and depression ( bremner et al. , 2000 ) . temporal lobe epilepsy ( tle ) is the most common drug-resistant focal epilepsy , with seizures frequently arising from the hippocampus . in surgical series of tle , the pathology is often hippocampal sclerosis ( hs ) comprising neuronal loss and gliosis and marked by atrophy and signal change on magnetic resonance imaging ( van paesschen , 2004 ) . atrophy of the hippocampus through hs provides a good biomarker for the laterality of the seizure focus ( bernasconi et al.
, 2003 ) , and combined with concordant neurophysiology and neuropsychological data can be sufficient to recommend surgery . hippocampal atrophy is associated with a favorable surgical outcome ( schramm & clusmann , 2008 ) . visual assessment of hippocampal volumes is unreliable , as it may be compromised by story_separator_special_tag we propose a novel framework for the automatic propagation of a set of manually labeled brain atlases to a diverse set of images of a population of subjects . a manifold is learned from a coordinate system embedding that allows the identification of neighborhoods which contain images that are similar based on a chosen criterion . within the new coordinate system , the initial set of atlases is propagated to all images through a succession of multi-atlas segmentation steps . this breaks the problem of registering images that are very `` dissimilar '' down into a problem of registering a series of images that are `` similar '' . at the same time , it allows the potentially large deformation between the images to be modeled as a sequence of several smaller deformations . we applied the proposed method to an exemplar region centered around the hippocampus from a set of 30 atlases based on images from young healthy subjects and a dataset of 796 images from elderly dementia patients and age-matched controls enrolled in the alzheimer 's disease neuroimaging initiative ( adni ) . we demonstrate an increasing gain in accuracy of the new method , compared to standard story_separator_special_tag a robust automated segmentation of abdominal organs can be crucial for computer aided diagnosis and laparoscopic surgery assistance . many existing methods are specialized to the segmentation of individual organs and struggle to deal with the variability of the shape and position of abdominal organs . we present a general , fully-automated method for multi-organ segmentation of abdominal computed tomography ( ct ) scans . the method is based on a hierarchical atlas registration and weighting scheme that generates target specific priors from an atlas database by combining aspects from multi-atlas registration and patch-based segmentation , two widely used methods in brain segmentation . the final segmentation is obtained by applying an automatically learned intensity model in a graph-cuts optimization step , incorporating high-level spatial knowledge . the proposed approach allows to deal with high inter-subject variation while being flexible enough to be applied to different organs . we have evaluated the segmentation on a database of 150 manually segmented ct images . the achieved results compare well to state-of-the-art methods , that are usually tailored to more specific questions , with dice overlap values of 94 % , 93 % , 70 % , and 92 % for liver story_separator_special_tag we propose a method for simultaneous segmentation of serially acquired magnetic resonance ( mr ) images . an existing graph-cuts based algorithm is extended and applied to 4-d images . a probabilistic atlas is generated for each baseline scan by intersubject registration of multiple labeled images . the atlases are used for baseline and aligned follow-up images and are combined with an intensity model to define a weighted graph that connects the timepoints . a minimal cut on this graph yields the segmentation for all timepoints . 
the resulting segmentations are consistent over time in boundary regions with weak gray scale definition , but reflect atrophy well where the structure boundary is well defined by mr intensity . the hippocampus was segmented in 568 baseline and follow-up images provided by the alzheimer 's disease neuroimaging initiative ( adni ) . the estimated atrophy rates correctly classified ad patients from controls at a rate of 82 % ( atrophy rates : ad 3.85 % /y . ; mci 2.31 % /y . ; controls : 0.85 % /y . ) story_separator_special_tag automated labeling of anatomical structures in medical images is very important in many neuroscience studies . recently , patch-based labeling has been widely investigated to alleviate the possible mis-alignment when registering atlases to the target image . however , the weights used for label fusion from the registered atlases are generally computed independently and thus lack the capability of preventing the ambiguous atlas patches from contributing to the label fusion . more critically , these weights are often calculated based only on the simple patch similarity , thus not necessarily providing optimal solution for label fusion . to address these limitations , we propose a generative probability model to describe the procedure of label fusion in a multi-atlas scenario , for the goal of labeling each point in the target image by the best representative atlas patches that also have the largest labeling unanimity in labeling the underlying point correctly . specifically , sparsity constraint is imposed upon label fusion weights , in order to select a small number of atlas patches that best represent the underlying target patch , thus reducing the risks of including the misleading atlas patches . the labeling unanimity among atlas patches is achieved by story_separator_special_tag atlas-based segmentation of mr brain images typically uses a single atlas ( e.g. , mni colin27 ) for region identification . normal individual variations in human brain structures present a significant challenge for atlas selection . previous researches mainly focused on how to create a specific template for different requirements ( e.g. , for a certain population ) . we address atlas selection with a different approach : instead of choosing a fixed brain atlas , we use a family of brain templates for atlas-based segmentation . for each subject and each region , the template selection method automatically chooses the 'best ' template with the highest local registration accuracy , based on normalized mutual information . the region classification performances of the template selection method and the single template method were quantified by the overlap ratios ( ors ) and intraclass correlation coefficients ( iccs ) between the manual tracings and the respective automated labeled results . two groups of brain images and multiple regions of interest ( rois ) , including the right anterior cingulate cortex ( acc ) and several subcortical structures , were tested for both methods . we found that the template selection method produced story_separator_special_tag subthalamic nucleus ( stn ) deep brain stimulation ( dbs ) is an effective surgical therapy to treat parkinson 's disease ( pd ) . conventional methods employ standard atlas coordinates to target the stn , which , along with the adjacent red nucleus ( rn ) and substantia nigra ( sn ) , are not well visualized on conventional t1w mris . 
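returning to the template-selection criterion above , normalized mutual information between a target image and each candidate template can be computed from a joint intensity histogram ; a minimal sketch , with the bin count an assumed parameter :

```python
import numpy as np

def normalized_mutual_information(img_a, img_b, bins=64):
    """nmi = (h(a) + h(b)) / h(a, b) from a joint histogram of two
    spatially aligned images; higher values indicate a better match."""
    joint, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)

    def entropy(p):
        p = p[p > 0]
        return -np.sum(p * np.log(p))

    return (entropy(px) + entropy(py)) / entropy(pxy.ravel())

# per region, pick the 'best' template among aligned candidates, e.g.:
# best = max(templates, key=lambda t: normalized_mutual_information(roi, t))
```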
however , the positions and sizes of the nuclei may be more variable than the standard atlas suggests , thus making pre-surgical plans inaccurate . we investigated the morphometric variability of the stn , rn and sn by using label fusion segmentation results from 3t high resolution t2w mris of 33 advanced pd patients . in addition to comparing the size and position measurements of the cohort to the talairach atlas , principal component analysis ( pca ) was performed to acquire more intuitive and detailed perspectives of the measured variability . lastly , the potential correlation between the variability shown by the pca results and the clinical scores was explored . hum brain mapp 35:4330-4344 , 2014 . © 2014 wiley periodicals , inc . story_separator_special_tag purpose : to improve the efficiency of atlas-based segmentation without compromising accuracy , and to demonstrate the validity of the proposed method on an mri-based prostate segmentation application . methods : accurate and efficient automatic structure segmentation is an important task in medical image processing . atlas-based methods , as the state-of-the-art , provide good segmentation at the cost of a large number of computationally intensive nonrigid registrations , for anatomical sites/structures that are subject to deformation . in this study , the authors propose to utilize a combination of global , regional , and local metrics to improve the accuracy yet significantly reduce the number of required nonrigid registrations . the authors first perform an affine registration to minimize the global mean squared error ( gmse ) to coarsely align each atlas image to the target . subsequently , a target-specific regional mse ( rmse ) , demonstrated to be a good surrogate for the dice similarity coefficient ( dsc ) , is used to select a relevant subset from the training atlas . only within this subset are nonrigid registrations performed between the training images and the target image , to minimize a weighted combination of gmse and rmse . finally story_separator_special_tag cardiac ct angiography ( ccta ) is widely used in the diagnosis of coronary heart disease . it can provide a 4d ( 3d + t ) sequence with high spatial and temporal resolution . segmentation of the left ventricle ( lv ) in a 4d ccta sequence can provide useful information for clinical practice . in this paper , we present an automatic method for lv segmentation in 4d ccta sequences . this method mainly relies on an accurate multi-atlas registration method . thus , we first improve the multi-atlas registration method presented by kirisli et al . by adding an extra registration step with an estimated heart mask . then , we use a two-stage framework based on multi-atlas registration to segment the lv in the 4d sequence . quantitative evaluation results show that our proposed multi-atlas registration method outperforms kirisli 's method . finally , experimental results using two 4d ccta sequences indicate that our method can segment the lv accurately . story_separator_special_tag in non-rigid registration , the tradeoff between warp regularization and image fidelity is typically determined empirically . in atlas-based segmentation , this leads to a probabilistic atlas of arbitrary sharpness : weak regularization results in well-aligned training images and a sharp atlas ; strong regularization yields a blurry atlas . in this paper , we employ a generative model for the joint registration and segmentation of images .
the atlas construction process arises naturally as estimation of the model parameters . this framework allows the computation of unbiased atlases from manually labeled data at various degrees of sharpness , as well as the joint registration and segmentation of a novel brain in a consistent manner . we study the effects of the tradeoff between atlas sharpness and warp smoothness in the context of cortical surface parcellation . this is an important question because of the increasing availability of atlases in public databases , and the development of registration algorithms separate from the atlas construction process . we find that the optimal segmentation ( parcellation ) corresponds to a unique balance of atlas sharpness and warp regularization , yielding statistically significant improvements over the freesurfer parcellation algorithm . furthermore , story_separator_special_tag image registration is typically formulated as an optimization problem with multiple tunable , manually set parameters . we present a principled framework for learning thousands of parameters of registration cost functions , such as a spatially-varying tradeoff between the image dissimilarity and regularization terms . our approach belongs to the classic machine learning framework of model selection by optimization of cross-validation error . this second layer of optimization of cross-validation error over and above registration selects parameters in the registration cost function that result in good registration as measured by the performance of the specific application in a training data set . much research effort has been devoted to developing generic registration algorithms , which are then specialized to particular imaging modalities , particular imaging targets and particular postregistration analyses . our framework allows for a systematic adaptation of generic registration cost functions to specific applications by learning the free parameters in the cost functions . here , we consider the application of localizing underlying cytoarchitecture and functional regions in the cerebral cortex by alignment of cortical folding . most previous work assumes that perfectly registering the macro-anatomy also perfectly aligns the underlying cortical function even though macro-anatomy does not story_separator_special_tag active contour segmentation and its robust implementation using level set methods are well-established theoretical approaches that have been studied thoroughly in the image analysis literature . despite the existence of these powerful segmentation methods , the needs of clinical research continue to be fulfilled , to a large extent , using slice-by-slice manual tracing . to bridge the gap between methodological advances and clinical routine , we developed an open source application called itk-snap , which is intended to make level set segmentation easily accessible to a wide range of users , including those with little or no mathematical expertise . this paper describes the methods and software engineering philosophy behind this new tool and provides the results of validation experiments performed in the context of an ongoing child autism neuroimaging study . the validation establishes snap intrarater and interrater reliability and overlap error statistics for the caudate nucleus and finds that snap is a highly reliable and efficient alternative to manual tracing . analogous results for lateral ventricle segmentation are provided .
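several of the segmentation abstracts in this section report validation with overlap statistics such as the dice similarity coefficient ( dsc ) . as a minimal illustration of how such an overlap score is computed ( array names and shapes are hypothetical , not taken from any of the papers above ) :

```python
import numpy as np

def dice_overlap(mask_a: np.ndarray, mask_b: np.ndarray) -> float:
    """Dice similarity coefficient between two binary segmentation masks."""
    a = mask_a.astype(bool)
    b = mask_b.astype(bool)
    intersection = np.logical_and(a, b).sum()
    total = a.sum() + b.sum()
    if total == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * intersection / total

# hypothetical usage with a manual tracing and an automatic segmentation
manual = np.zeros((64, 64, 64), dtype=bool)
auto = np.zeros((64, 64, 64), dtype=bool)
manual[20:40, 20:40, 20:40] = True
auto[22:40, 20:38, 20:40] = True
print(f"dsc = {dice_overlap(manual, auto):.3f}")
```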
story_separator_special_tag we present and evaluate a new method for automatically labeling the subfields of the hippocampal formation in focal 0.4 × 0.5 × 2.0 mm³ resolution t2-weighted magnetic resonance images that can be acquired in the routine clinical setting with under 5 min scan time . the method combines multi-atlas segmentation , similarity-weighted voting , and a novel learning-based bias correction technique to achieve excellent agreement with manual segmentation . initial partitioning of mri slices into hippocampal 'head' , 'body' and 'tail' slices is the only input required from the user , necessitated by the nature of the underlying segmentation protocol . dice overlap between manual and automatic segmentation is above 0.87 for the larger subfields , ca1 and dentate gyrus , and is competitive with the best results for whole-hippocampus segmentation in the literature . intraclass correlation of volume measurements in ca1 and dentate gyrus is above 0.89 . overlap in smaller hippocampal subfields is lower in magnitude ( 0.54 for ca2 , 0.62 for ca3 , 0.77 for subiculum and 0.79 for entorhinal cortex ) but comparable to overlap between manual segmentations by trained human raters . these results support the feasibility of subfield-specific hippocampal story_separator_special_tag automatic segmentation of cardiac mri is an important but challenging task in the clinical study of cardiac morphology . recently , fusing segmentations from multiple classifiers has been shown to achieve more accurate results than a single classifier . in this work , we propose a new strategy , multiple path propagation and segmentation ( mupps ) , in contrast with the currently widely used multi-atlas propagation and segmentation ( maps ) scheme . we showed that mupps outperformed the standard maps in the experiment using twenty-one in vivo cardiac mr images . furthermore , we studied and compared different path selection strategies for mupps , to pursue an efficient implementation of the segmentation framework . we showed that the path ranking scheme using the image similarity after an affine registration converged faster and only needed eleven classifiers from the atlas repository . the fusion of eleven propagation results using the proposed path ranking scheme achieved a mean dice score of 0.911 in the whole heart segmentation and the highest gain in accuracy was obtained from myocardium segmentation . story_separator_special_tag we propose a method for multi-atlas label propagation ( malp ) based on encoding the individual atlases by randomized classification forests . most current approaches perform a non-linear registration between all atlases and the target image , followed by a sophisticated fusion scheme . while these approaches can achieve high accuracy , in general they do so at high computational cost . this might negatively affect the scalability to large databases and experimentation . to tackle this issue , we propose to use a small and deep classification forest to encode each atlas individually in reference to an aligned probabilistic atlas , resulting in an atlas forest ( af ) . our classifier-based encoding differs from current malp approaches , which represent each point in the atlas either directly as a single image/label value pair , or by a set of corresponding patches . at test time , each af produces one probabilistic label estimate , and their fusion is done by averaging .
our scheme performs only one registration per target image , achieves good results with a simple fusion scheme , and allows for efficient experimentation . in contrast to standard forest schemes , in which each tree story_separator_special_tag one of the main sources of error in multi-atlas segmentation propagation approaches comes from the use of atlas databases that are morphologically dissimilar to the target image . in this work , we exploit the segmentation errors associated with poor atlas selection to build a computer-aided diagnosis ( cad ) system for pathological classification in post-operative dextro-transposition of the great arteries ( d-tga ) . the proposed approach extracts a set of features , which describe the quality of a segmentation , and introduces them into a logical decision tree that provides the final diagnosis . we have validated our method on a set of 60 whole heart mr images containing healthy cases and two different forms of post-operative d-tga . the reported overall cad system accuracy was 93.33 % .
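the multi-atlas abstracts above repeatedly rely on fusing registered atlas label maps by ( similarity-weighted ) voting . the following is a minimal numpy sketch of that general idea ; the intensity-based weighting and array layout are illustrative assumptions rather than any specific paper 's method :

```python
import numpy as np

def weighted_label_fusion(atlas_labels, atlas_images, target_image, beta=1.0):
    """Fuse registered atlas label maps by voxel-wise similarity-weighted voting.

    atlas_labels: (n_atlases, *vol_shape) integer label maps, already
        registered to the target.
    atlas_images: (n_atlases, *vol_shape) registered atlas intensities.
    target_image: (*vol_shape,) target intensities.
    beta: sharpness of the intensity-similarity weighting (assumed form).
    """
    atlas_labels = np.asarray(atlas_labels)
    atlas_images = np.asarray(atlas_images, dtype=float)
    target = np.asarray(target_image, dtype=float)

    # per-atlas, per-voxel weight from local intensity agreement (one simple choice)
    weights = np.exp(-beta * (atlas_images - target) ** 2)

    # accumulate weighted votes for each candidate label, then take the argmax
    labels = np.unique(atlas_labels)
    votes = np.zeros((labels.size,) + target.shape)
    for i, lab in enumerate(labels):
        votes[i] = np.sum(weights * (atlas_labels == lab), axis=0)
    return labels[np.argmax(votes, axis=0)]
```

a simple design note : with beta = 0 the weights become uniform and the scheme reduces to plain majority voting , which is the baseline that the weighted variants above improve on .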
abstract : we give a new characterization of the affine kac-moody algebras in terms of extended affine lie algebras . we also present new realizations of the twisted affine kac-moody algebras . story_separator_special_tag abstract it is a well-known result that the fixed point subalgebra of a finite dimensional complex simple lie algebra under a finite order automorphism is a reductive lie algebra , so it is a direct sum of finite dimensional simple lie subalgebras and an abelian subalgebra . we consider this for the class of extended affine lie algebras and are able to show that the fixed point subalgebra of an extended affine lie algebra under a finite order automorphism ( which satisfies certain natural properties ) is a sum of extended affine lie algebras ( up to the existence of some isolated root spaces ) , a subspace of the center and a subspace which is contained in the centralizer of the core . moreover , we show that the core of the fixed point subalgebra modulo its center is isomorphic to the direct sum of the cores modulo centers of the involved summands . story_separator_special_tag we classify the bc-type extended affine root systems for nullity 3 , in the most general sense . we show that these abstractly defined root systems are the root systems of a class of lie algebras which are axiomatically defined and are closely related to the class of extended affine lie algebras . story_separator_special_tag lie algebras graded by finite reduced root systems have been classified up to isomorphism . in this paper we describe the derivation algebras of these lie algebras and determine when they possess invariant bilinear forms . the results which we develop to do this are much more general and apply to lie algebras that are completely reducible with respect to the adjoint action of a finite-dimensional subalgebra . story_separator_special_tag the nappi-witten lie algebra was first introduced by c. nappi and e. witten in the study of wess-zumino-novikov-witten ( wznw ) models . they showed that the wznw model ( nw model ) based on a central extension of the two-dimensional euclidean group describes the homogeneous four-dimensional space-time corresponding to a gravitational plane wave . the associated lie algebra is neither abelian nor semisimple . recently k. christodoulopoulou studied the irreducible whittaker modules for finite- and infinite-dimensional heisenberg algebras and for the lie algebra obtained by adjoining a degree derivation to an infinite-dimensional heisenberg algebra , and used these modules to construct a new class of modules for non-twisted affine algebras , which are called imaginary whittaker modules . in this paper , imaginary whittaker modules of the twisted affine nappi-witten lie algebra are constructed based on whittaker modules of heisenberg algebras . it is proved that the imaginary whittaker module with the center acting as a non-zero scalar is irreducible . story_separator_special_tag we develop general results on centroids of lie algebras and apply them to determine the centroid of extended affine lie algebras , loop-like and kac-moody lie algebras , and lie algebras graded by finite root systems .
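for reference , the notion of centroid used in the preceding abstract has a standard textbook definition , which can be stated as follows ( background material , not a quotation from the paper ) :

```latex
% centroid of a Lie algebra L over a field k: the k-linear endomorphisms
% that commute with all adjoint maps, equivalently
\mathrm{Cent}(L) = \{\, \chi \in \mathrm{End}_k(L) :
  \chi([x,y]) = [\chi(x),y] = [x,\chi(y)] \ \text{for all } x,y \in L \,\}
```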
story_separator_special_tag we classify centerless lie g-tori of type cr , including the most difficult case r = 2 , by applying techniques due to seligman . in particular , we show that the coordinate algebra of a lie g-torus of type c2 is either an associative g-torus with involution or a clifford g-torus . our results generalize the classification of the core of the extended affine lie algebras of type cr by allison and gao . story_separator_special_tag abstract . this paper classifies the lie algebras graded by doubly-laced finite root systems and applies this classification to identify the intersection matrix algebras arising from multiply affinized cartan matrices of types b , c , f , and g. this completes the determination of the lie algebras graded by finite root systems initiated by berman and moody who studied the simply-laced finite root systems of rank ≥ 2 . story_separator_special_tag abstract we study and classify those tame irreducible elliptic quasi-simple lie algebras which are simply laced and of rank l ≥ 3 . the first step is to identify the core of such an algebra up to central isogeny by identifying the coordinates . when the type is d or e the coordinates are laurent polynomials in ν variables , while for type a the coordinates can be any quantum torus in ν variables . the next step is to study the universal central extension as well as the derivation algebra of the core . these are related to the first connes cyclic homology group of the coordinates . the final step is to use this information to give constructions of lie algebras which we then prove yield representatives of all isomorphism classes of the above types of algebras . story_separator_special_tag this paper is about toroidal lie algebras , certain intersection matrix lie algebras defined by slodowy , and their relationship to one another and to certain lie algebra analogues of steinberg groups . the main result of the paper is the identification of the intersection matrix algebras arising from multiply-affinized cartan matrices of types a , d and e with certain steinberg lie algebras and toroidal lie algebras ( propositions 5.9 and 5.10 ) . a major part of the paper studies and classifies lie algebras graded by finite root systems . these become the principal tool in our analysis of intersection matrix algebras . each lie algebra graded by a simply-laced finite root system of rank > 2 has attached to it an algebra which , according to the type and rank , is either commutative and associative , only associative , or alternative . all these possibilities occur in our description of intersection matrix algebras . let r be any associative algebra with identity , not necessarily finite dimensional , over a field k of characteristic 0 . for each positive integer n the associative algebra m_n ( r ) of n × n matrices with story_separator_special_tag vertex representations are obtained for toroidal lie algebras for any number of variables . these representations afford representations of certain n-variable generalizations of the virasoro algebra that are abelian extensions of the lie algebra of vector fields on a torus . story_separator_special_tag abstract some general results concerning derivations of finitely generated lie algebras are established . these are employed in order to determine the derivations and central extensions of kac-moody lie algebras .
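similarly , the two notions in the preceding abstract have standard definitions that may help the reader ( again background material , not quotations from the paper ) :

```latex
% a derivation D of a Lie algebra L satisfies the Leibniz rule
D([x,y]) = [D(x),y] + [x,D(y)] \quad \text{for all } x,y \in L ,
% and a central extension of L by an abelian Lie algebra z is a short exact sequence
0 \to \mathfrak{z} \to \widehat{L} \to L \to 0 , \qquad \mathfrak{z} \subseteq Z(\widehat{L}) ,
% whose equivalence classes are classified by the cohomology group H^2(L, \mathfrak{z}).
```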
story_separator_special_tag we begin with a review of the structure of simple , simply-connected complex lie groups and their lie algebras , and describe the chevalley lattice and the associated split group over the integers . this gives us a hyperspecial maximal compact subgroup of the p-adic lie group and we describe the other maximal parahoric subgroups and their lie algebras starting from the hyperspecial one . we then consider the killing form on the chevalley lattice and show that it is divisible by 2 times the dual coxeter number . the same holds for the lie algebras of the other maximal parahorics . we compute the discriminants of the resulting scaled forms . finally we consider jordan subgroups of the exceptional groups . we show that these jordan subgroups are globally maximal and determine their maximal compact overgroups in the p-adic lie group . the last section treats the jordan subgroups of the classical groups . story_separator_special_tag abstract we present methods and explicit formulas for describing simple weight modules over twisted generalized weyl algebras . when a certain commutative subalgebra is finitely generated over an algebraically closed field we obtain a classification of a class of locally finite simple weight modules as those induced from simple modules over a subalgebra isomorphic to a tensor product of noncommutative tori . as an application we describe simple weight modules over the quantized weyl algebra . story_separator_special_tag a theory of root systems over a totally ordered commutative ring is developed . this theory includes , in particular , the usual finite root systems and the kac-moody real root systems . it is adapted to the construction of twisted kac-moody groups . story_separator_special_tag in this paper we give necessary and sufficient conditions for a family of right ( or left ) invariant vector fields on a lie group g to be transitive . the concept of transitivity is essentially that of controllability in the literature on control systems . we consider families of right ( resp . left ) invariant vector fields on a lie group g which is a semidirect product of a compact group k and a vector space v on which k acts linearly . if f is a family of right-invariant vector fields , then the values of the elements of f at the identity define a subset γ of l ( g ) , the lie algebra of g . we say that f is transitive on g if the semigroup generated by $\bigcup_{x \in \gamma} \{ \exp ( t x ) : t \ge 0 \}$ is equal to g . our main result is that f is transitive if and only if lie ( γ ) , the lie algebra generated by γ , is equal to l ( g ) . story_separator_special_tag one of the great early achievements in lie theory is the classification of the finite dimensional simple complex lie algebras by w. killing and e. cartan . more than 50 years later new aspects were added to this classification by the theory of coxeter groups and the visualization of the classification in terms of dynkin diagrams . furthermore serre 's description of the simple lie algebras by generators and relations provided a direct way to construct the lie algebras from the cartan matrix corresponding to the choice of a root base , i.e. , a system of simple roots . story_separator_special_tag jordan and alternative tori are the coordinate algebras of extended affine lie algebras of types a 1 and a 2 .
in this paper we show that the derivation algebra of a jordan torus is a semidirect product of the ideal of inner derivations and the subalgebra of central derivations . in the course of proving this result , we investigate derivations of the more general class of division graded jordan and alternative algebras . we also describe invariant forms of these algebras . story_separator_special_tag it was shown by rørdam and the second named author that a countable group g admits an action on a compact space such that the crossed product is a kirchberg algebra if , and only if , g is exact and non-amenable . this construction allows a certain amount of choice . we show that different choices can lead to different algebras , at least with the free group . story_separator_special_tag we are interested here in ( discrete , isometric ) actions of a group $\gamma$ on a measured metric space $x$ and in the way these actions move points apart . the classical margulis lemma applies when $x$ is a simply connected manifold of strictly negative and bounded curvature . a recent version ( due to g. besson , g. courtois and s. gallot ) applies when $x$ is a measured metric space of bounded entropy , but is essentially limited to the case where $\gamma$ is the fundamental group of a manifold of curvature bounded above by a negative constant and injectivity radius bounded below . we show that this last result ( and its geometric applications ) generalizes to a larger class ${\cal c}$ of groups ( which contains the hyperbolic groups in the sense of gromov , free products and `` malnormal '' amalgamated products ) and to quasi-actions by quasi-isometries ( possibly with fixed points ) of these groups on a measured metric space of bounded entropy . we also show that ${\cal c}$ is closed in a natural topology . we apply story_separator_special_tag if $\mathfrak{g}$ is a split lie algebra , which means that $\mathfrak{g}$ is a lie algebra with a root decomposition $\mathfrak{g} = \mathfrak{h} + \sum_{\alpha \in \Delta} \mathfrak{g}_\alpha$ , then the roots $\alpha \in \Delta$ can be classified into different types : a root $\alpha$ is said to be of nilpotent type if all subalgebras $\mathfrak{g}(x_\alpha , x_{-\alpha}) := \mathrm{span}\{ x_\alpha , x_{-\alpha} , [x_\alpha , x_{-\alpha}] \}$ for $x_{\pm\alpha} \in \mathfrak{g}_{\pm\alpha}$ are nilpotent , and of simple type if there exist elements $x_{\pm\alpha} \in \mathfrak{g}_{\pm\alpha}$ such that $\mathfrak{g}(x_\alpha , x_{-\alpha}) \cong \mathfrak{sl}(2 , \mathbb{k})$ . a simple root $\alpha$ is called integrable if there exist elements $x_{\pm\alpha} \in \mathfrak{g}_{\pm\alpha}$ such that $\mathfrak{g}(x_\alpha , x_{-\alpha}) \cong \mathfrak{sl}(2 , \mathbb{k})$ and the endomorphisms $\mathrm{ad}\, x_{\pm\alpha}$ are locally nilpotent ( section i ) . the role of integrable roots in split lie algebras has been investigated by k.-h. neeb in [ ne98 ] . one important result of this paper is the local finiteness theorem which states that a split lie algebra with only integrable roots is locally finite , i.e. , the lie algebra is the direct limit of its finite dimensional subalgebras . in this paper we focus from the outset on locally finite story_separator_special_tag apart from truth tables , which may be regarded as models for the propositional calculus and which would go back to frege , the notion of a model in mathematical logic goes back to the löwenheim-skolem theorem ( 1915-1921 ) . one had to wait , however , for tarski 's semantic theory ( der wahrheitsbegriff in den formalisierten sprachen , 1933 ) to have a precise definition of the truth value , true or false , taken by a logical formula in a model .
recall that the elementary calculus is the first-order predicate calculus with equality . an elementary class ( formerly called an arithmetical class ) is the class of those models which satisfy a formula of this calculus . two models are said to be elementarily equivalent when they satisfy the same formulas . the semantic theory gave rise to two theories , each of which links mathematical logic and algebra . the better known is algebraic logic : the cylindric algebras of tarski and jonsson , the monadic and polyadic algebras of halmos . one starts from the elementary logical calculus , which one wants to embed into algebra ; the notion of formula ( proposition or propositional function ) story_separator_special_tag 1. r. hartshorne , residues and duality , springer lecture notes 20 ( 1966 ) , is a standard reference . 2. c. weibel , an introduction to homological algebra , cambridge studies in advanced mathematics 38 ( 1994 ) , has a useful chapter at the end on derived categories and functors . 3. b. keller , derived categories and their uses , in handbook of algebra , vol . 1 , m. hazewinkel , ed. , elsevier ( 1996 ) , is another helpful synopsis . 4. j.-l. verdier 's thesis catégories dérivées is the original reference ; also his essay with the same title in sga 4-1/2 , springer lecture notes 569 ( 1977 ) .
a robot can feasibly be given knowledge of a set of tools for manipulation activities ( e.g . hammer , knife , spatula ) . if the robot then operates outside a closed environment it is likely to face situations where the tool it knows is not available , but alternative unknown tools are present . we tackle the problem of finding the best substitute tool based solely on 3d vision data . our approach has simple hand-coded models of known tools in terms of superquadrics and relationships among them . our system attempts to fit these models to point clouds of unknown tools , producing a numeric value for how good a fit is . this value can be used to rate candidate substitutes . we explicitly control how closely each part of a tool must match our model , under direction from parameters of a target task . we allow bottom-up information from segmentation to dictate the sizes that should be considered for various parts of the tool . these ideas allow for a flexible matching so that tools may be superficially quite different , but similar in the way that matters . we evaluate our system 's story_separator_special_tag recognizing manipulations performed by a human and the transfer and execution of this by a robot is a difficult problem . we address this in the current study by introducing a novel representation of the relations between objects at decisive time points during a manipulation . thereby , we encode the essential changes in a visual scenery in a condensed way such that a robot can recognize and learn a manipulation without prior object knowledge . to achieve this we continuously track image segments in the video and construct a dynamic graph sequence . topological transitions of those graphs occur whenever a spatial relation between some segments has changed in a discontinuous way and these moments are stored in a transition matrix called the semantic event chain ( sec ) . we demonstrate that these time points are highly descriptive for distinguishing between different manipulations . employing simple sub-string search algorithms , secs can be compared and type-similar manipulations can be recognized with high confidence . as the approach is generic , statistical learning can be used to find the archetypal sec of a given manipulation class . the performance of the algorithm is demonstrated on a set of real story_separator_special_tag understanding and learning the semantics of complex manipulation actions are intriguing and non-trivial issues for the development of autonomous robots . in this paper , we present a novel method for an on-line , incremental learning of the semantics of manipulation actions by observation . recently , we had introduced the semantic event chains ( secs ) as a new generic representation for manipulations , which can be directly computed from a stream of images and is based on the changes in the relationships between objects involved in a manipulation . we here show that the sec concept can be used to bootstrap the learning of the semantics of manipulation actions without using any prior knowledge about actions or objects . we create a new manipulation action benchmark with 8 different manipulation tasks including in total 120 samples to learn an archetypal sec model for each manipulation action . we then evaluate the learned sec models with 20 long and complex chained manipulation sequences including in total 103 manipulation samples . thereby we put the event chains to a decisive test asking how powerful is action classification when using this framework . 
we find that we reach up to 100 story_separator_special_tag the ability to perceive possible interactions with the environment is a key capability of task-guided robotic agents . an important subset of possible interactions depends solely on the objects of interest and their position and orientation in the scene . we call these object-based interactions 0-order affordances and divide them into non-hidden and hidden according to whether the current configuration of an object in the scene renders its affordance directly usable or not . in contrast to other works , we propose that detecting affordances that are not directly perceivable increases the usefulness of robotic agents with manipulation capabilities , so that by appropriate manipulation they can modify the object configuration until the sought affordance becomes available . in this paper we show how 0-order affordances depending on the geometry of the objects and their pose can be learned using a supervised learning strategy on 3d mesh representations of the objects , allowing the use of the whole object geometry . moreover , we show how the learned affordances can be detected in real scenes obtained with a low-cost depth sensor like the microsoft kinect through object recognition and 6dof pose estimation and present results for both learning on meshes and detection on real story_separator_special_tag this paper addresses the problem of having a robot execute motor tasks requested by a human through spoken language . verbal instructions do not typically have a one-to-one mapping to robot actions , due to various reasons : economy of spoken language , e.g. , one short instruction might indeed correspond to a complex sequence of robot actions , and details about action execution might be omitted ; grounding , e.g. , some actions might need to be added or adapted due to environmental contingencies ; embodiment , e.g. , a robot might have different means than the human ones to obtain the goals that the instruction refers to . we propose a general cognitive architecture to deal with these issues , based on three steps : i ) language-based semantic reasoning on the instruction ( high-level ) , ii ) formulation of goals in robot symbols and probabilistic planning to achieve them ( mid-level ) , iii ) action execution ( low-level ) . the description of the mid-level is the main focus of this paper . the robot plans are adapted to the current scenario , perceived in real-time and continuously updated , taking into consideration the robot story_separator_special_tag reasoning about object affordances allows an autonomous agent to perform generalised manipulation tasks among object instances . while current approaches to grasp affordance estimation are effective , they are limited to a single hypothesis . we present an approach for detection and extraction of multiple grasp affordances on an object via visual input . we define semantics as a combination of multiple attributes , which yields benefits in terms of generalisation for grasp affordance prediction . we use markov logic networks to build a knowledge base graph representation to obtain a probability distribution of grasp affordances for an object . to harvest the knowledge base , we collect and make available a novel dataset that relates different semantic attributes . we achieve reliable mappings of the predicted grasp affordances on the object by learning prototypical grasping patches from several examples .
we show our method 's generalisation capabilities on grasp affordance prediction for novel instances and compare with similar methods in the literature . moreover , using a robotic platform , on simulated and real scenarios , we evaluate the success of the grasping task when conditioned on the grasp affordance prediction . story_separator_special_tag cognitive developmental robotics ( cdr ) aims to provide new understanding of how humans ' higher cognitive functions develop by means of a synthetic approach that developmentally constructs cognitive functions . the core idea of cdr is `` physical embodiment '' that enables information structuring through interactions with the environment , including other agents . the idea is shaped based on the hypothesized development model of human cognitive functions from body representation to social behavior . along with the model , studies of cdr and related works are introduced , and discussion of the model and future issues is presented . story_separator_special_tag this article presents a method for online learning of robot navigation affordances from spatiotemporally correlated haptic and depth cues . the method allows the robot to incrementally learn which objects present in the environment are actually traversable . this is a critical requirement for any wheeled robot performing in natural environments , in which the inability to discern vegetation from non-traversable obstacles frequently hampers terrain progression . a wheeled robot prototype was developed in order to experimentally validate the proposed method . the robot prototype obtains haptic and depth sensory feedback from a pan-tilt telescopic antenna and from a structured light sensor , respectively . with the presented method , the robot learns a mapping between objects ' descriptors , given the range data provided by the sensor , and objects ' stiffness , as estimated from the interaction between the antenna and the object . learning confidence estimation is considered in order to progressively reduce the number of required physical interactions with acquainted objects . to raise the number of meaningful interactions per object under time pressure , the segments of the object under analysis are prioritised according to a set of morphological criteria . field trials show story_separator_special_tag we present two approaches to modeling affordance relations between objects , actions and effects . the first is a probabilistic approach which uses a voting function to learn which objects afford which types of grasps . we compare the success rate of this approach to a second approach which uses an ontological reasoning engine for learning affordances . our second approach employs a rule-based system with axioms to reason on grasp selection for a given object . story_separator_special_tag object grasping is commonly followed by some form of object manipulation - either when using the grasped object as a tool or actively changing its position in the hand through in-hand manipulation to afford further interaction . in this process , slippage may occur due to inappropriate contact forces , various types of noise and/or due to the unexpected interaction or collision with the environment . story_separator_special_tag in this paper , we address the problem of tactile exploration and subsequent extraction of grasp hypotheses for unknown objects with a multi-fingered anthropomorphic robot hand .
we present extensions on our tactile exploration strategy for unknown objects based on a dynamic potential field approach resulting in selective exploration in regions of interest . in the subsequent feature extraction , faces found in the object model are considered to generate grasp affordances . candidate grasps are validated in a four stage filtering pipeline to eliminate impossible grasps . to evaluate our approach , experiments were carried out in a detailed physics simulation using models of the five-finger hand and the test objects . story_separator_special_tag this paper presents work on vision based robotic grasping . the proposed method adopts a learning framework where prototypical grasping points are learnt from several examples and then used on novel objects . for representation purposes , we apply the concept of shape context and for learning we use a supervised learning approach in which the classifier is trained with labelled synthetic images . we evaluate and compare the performance of linear and non-linear classifiers . our results show that a combination of a descriptor based on shape context with a non-linear classification algorithm leads to a stable detection of grasping points for a variety of objects . story_separator_special_tag recent approaches in robot perception follow the insight that perception is facilitated by interaction with the environment . these approaches are subsumed under the term interactive perception ( ip ) . this view of perception provides the following benefits . first , interaction with the environment creates a rich sensory signal that would otherwise not be present . second , knowledge of the regularity in the combined space of sensory data and action parameters facilitates the prediction and interpretation of the sensory signal . in this survey , we postulate this as a principle for robot perception and collect evidence in its support by analyzing and categorizing existing work in this area . we also provide an overview of the most important applications of ip . we close this survey by discussing remaining open questions . with this survey , we hope to help define the field of interactive perception and to provide a valuable resource for future research . story_separator_special_tag the problem of object recognition has not yet been solved in its general form . the most successful approach to it so far relies on object models obtained by training a statistical method on visual features obtained from camera images . the images must necessarily come from huge visual datasets , in order to circumvent all problems related to changing illumination , point of view , etc . we hereby propose to also consider , in an object model , a simple model of how a human being would grasp that object ( its affordance ) . this knowledge is represented as a function mapping visual features of an object to the kinematic features of a hand while grasping it . the function is practically enforced via regression on a human grasping database . after describing the database ( which is publicly available ) and the proposed method , we experimentally evaluate it , showing that a standard object classifier working on both sets of features ( visual and motor ) has a significantly better recognition rate than that of a visual-only classifier . story_separator_special_tag we present a method for enabling robots to determine appropriate grasp configurations for handovers - i.e. , where to grasp , and how to orient an object when handing it over . 
in our method , a robot first builds a knowledge base by observing demonstrations of how certain objects are used and their proper handover grasp configurations . objects in the knowledge base are then organized based on their movements and inter-object interaction features . the key point in this process is that similarity in affordances should be recognized . when subsequently asked to handover an object , the robot then computes an appropriate grasp configuration based on the object 's recognized affordances . experimental results show that our method was able to differentiate and group together objects according to their affordances . furthermore , when given a new object , our method was able to generalize data in the knowledge base and determine an appropriate grasp configuration . story_separator_special_tag a theory of affordances is outlined according to which affordances are relations between the abilities of animals and features of the environment . as relations , affordances are both real and perceivable but are not properties of either the environment or the animal . i argue that this theory has advantages over extant theories of affordances and briefly discuss the relations among affordances and niches , perceivers , and events . story_separator_special_tag using hypersets as an analytic tool , we compare traditionally gibsonian ( chemero 2003 ; turvey 1992 ) and representationalist ( sahin et al . this issue ) understandings of the notion ` affordance ' . we show that representationalist understandings are incompatible with direct perception and erect barriers between animal and environment . they are , therefore , scarcely recognizable as understandings of ` affordance ' . in contrast , gibsonian understandings are shown to treat animal-environment systems as unified complex systems and to be compatible with direct perception . we discuss the fruitful connections between gibsonian affordances and dynamical systems explanation in the behavioral sciences and point to prior fruitful application of gibsonian affordances in robotics . we conclude that it is unnecessary to re-imagine affordances as representations in order to make them useful for researchers in robotics . story_separator_special_tag this letter presents a deep learning framework to predict the affordances of object parts for robotic manipulation . the framework segments affordance maps by jointly detecting and localizing candidate regions within an image . rather than requiring annotated real-world images , the framework learns from synthetic data and adapts to real-world data without supervision . the method learns domain-invariant region proposal networks and task-level domain adaptation components with regularization on the predicted domains . a synthetic version of the umd data set is collected for autogenerating annotated , synthetic input data . experimental results show that the proposed method outperforms an unsupervised baseline , and achieves performance close to state-of-the-art supervised approaches . an ablation study establishes the performance gap between the proposed method and the supervised equivalent ( 30 % ) . real-world manipulation experiments demonstrate use of the affordance segmentations for task execution , which achieves the same performance with supervised approaches . story_separator_special_tag our work focuses on robots to be deployed in human environments . 
these robots , which will need specialized object manipulation skills , should leverage end-users to efficiently learn the affordances of objects in their environment . this approach is promising because people naturally focus on showing salient aspects of the objects [ 1 ] . we replicate prior results and build on them to create a combination of self and supervised learning . we present experimental results with a robot learning 5 affordances on 4 objects using 1219 interactions . we compare three conditions : ( 1 ) learning through self-exploration , ( 2 ) learning from supervised examples provided by 10 naive users , and ( 3 ) self-exploration biased by the user input . our results characterize the benefits of self and supervised affordance learning and show that a combined approach is the most efficient and successful . story_separator_special_tag learning affordances can be defined as learning action potentials , i.e. , learning that an object exhibiting certain regularities offers the possibility of performing a particular action . we propose a method to endow an agent with the capability of acquiring this knowledge by relating the object invariants with the potentiality of performing an action via interaction episodes with each object . we introduce a biologically inspired model to test this learning hypothesis and a set of experiments to check its validity in a webots simulator with a khepera robot in a simple environment . the experiment set aims to show the use of a gwr network to cluster the sensory input of the agent ; furthermore , that the aforementioned algorithm for neural clustering can be used as a starting point to build agents that learn the relevant functional bindings between the cues in the environment and the internal needs of an agent . story_separator_special_tag in the future , robots will be used more extensively as assistants in home scenarios and must be able to acquire expertise from trainers by learning through crossmodal interaction . one promising approach is interactive reinforcement learning ( irl ) where an external trainer advises an apprentice on actions to speed up the learning process . in this paper we present an irl approach for the domestic task of cleaning a table and compare three different learning methods using simulated robots : 1 ) reinforcement learning ( rl ) ; 2 ) rl with contextual affordances to avoid failed states ; and 3 ) the previously trained robot serving as a trainer to a second apprentice robot . we then demonstrate that the use of irl leads to different performance with various levels of interaction and consistency of feedback . our results show that the simulated robot completes the task with rl , although working slowly and with a low rate of success . with rl and contextual affordances , fewer actions are needed and higher rates of success can be reached . for good performance with irl it is essential to consider the level of consistency of feedback since inconsistencies can story_separator_special_tag we show aspects of brain processing on how visual perception , recognition , attention , cognitive control , value attribution , decision-making , affordances and action can be melded together in a coherent manner in a cognitive control architecture of the perception action cycle for visually guided reaching and grasping of objects by a robot or an agent .
the work is based on the notion that separate visuomotor channels are activated in parallel by specific visual inputs and are continuously modulated by attention and reward , which control a robot s/agent s action repertoire . the suggested visual apparatus allows the robot/agent to recognize both the object s shape and location , extract affordances and formulate motor plans for reaching and grasping . a focus-of-attention signal plays an instrumental role in selecting the correct object in its corresponding location as well as selects the most appropriate arm reaching and hand grasping configuration from a list of other configurations based on the success of previous experiences . the cognitive control architecture consists of a number of neurocomputational mechanisms heavily supported by experimental brain evidence : spatial saliency , object selectivity , invariance to object transformations , focus of attention , story_separator_special_tag in this paper , we demonstrate that simple interactions with objects in the environment leads to a manifestation of the perceptual properties of objects . this is achieved by deriving a condensed representation of the effects of actions ( called effect prototypes in the paper ) , and investigating the relevance between perceptual features extracted from the objects and the actions that can be applied to them . with this at hand , we show that the agent can categorize ( i.e. , partition ) its raw sensory perceptual feature vector , extracted from the environment , which is an important step for development of concepts and language . moreover , after learning how to predict the effect prototypes of objects , the agent can categorize objects based on the predicted effects of actions that can be applied on them . story_separator_special_tag when presented with an object to be manipulated , a robot must identify the available forms of interaction . how might an agent acquire this mapping from object representation to action ? in this paper , we describe an approach that learns a mapping from objects to grasps from human demonstration . for a given object , the teacher demonstrates a set of feasible grasps . we cluster these grasps in terms of the position and orientation of the hand relative to the object . individual clusters in this pose space are represented using probability density functions , and thus correspond to variations around canonical grasp approaches . multiple clusters are captured through a mixture distribution-based representation . experimental results demonstrate the feasibility of extracting a compact set of canonical grasps from the human demonstration . each of these canonical grasps can then be used to parameterize a reach controller that brings the robot hand into a specific spatial relationship with the object . story_separator_special_tag the concept of affordances facilitates the encoding of relations between actions and effects in an environment centered around the agent . such an interpretation has important impacts on several cognitive capabilities and manifestations of intelligence , such as prediction and planning . in this paper , a new framework based on denoising auto-encoders ( da ) is proposed which allows an agent to explore its environment and actively learn the affordances of objects and tools by observing the consequences of acting on them . 
the da serves as a unified framework to fuse multi-modal data and retrieve an entire missing modality or a feature within a modality given information about other modalities . this work has two major contributions . first , since training the da is done in continuous space , there will be no need to discretize the dataset and higher accuracies in inference can be achieved with respect to approaches in which data discretization is required ( e.g . bayesian networks ) . second , by fixing the structure of the da , knowledge can be added incrementally , making the architecture particularly useful in online learning scenarios . evaluation scores of real and simulated robotic experiments show story_separator_special_tag we address the issue of learning and representing object grasp affordance models . we model grasp affordances with continuous probability density functions ( grasp densities ) which link object-relative grasp poses to their success probability . the underlying function representation is nonparametric and relies on kernel density estimation to provide a continuous model . grasp densities are learned and refined from exploration , by letting a robot play with an object in a sequence of grasp-and-drop actions : the robot uses visual cues to generate a set of grasp hypotheses , which it then executes , recording their outcomes . when a satisfactory amount of grasp data is available , an importance-sampling algorithm turns it into a grasp density . we evaluate our method in a largely autonomous learning experiment , run on three objects with distinct shapes . the experiment shows how learning increases success rates . it also measures the success rate of grasps chosen to maximize the probability of success , given reaching constraints . story_separator_special_tag this paper addresses the issue of human-swarm interactions by proposing a new set of affordances that make a multi-robot system amenable to human control . in particular , we propose to use clay - a deformable medium - as the joystick for controlling the swarm , supporting such affordances as stretching , splitting and merging , shaping , and mixing . the contribution beyond the formulation of these affordances is the coupling of an image recognition framework to decentralized control laws for the individual robots , and the developed human-swarm interaction methodology is applied to a team of mobile robots . story_separator_special_tag we propose affordancenet , a new deep learning approach to simultaneously detect multiple objects and their affordances from rgb images . our affordancenet has two branches : an object detection branch to localize and classify the object , and an affordance detection branch to assign each pixel in the object to its most probable affordance label . the proposed framework employs three key components for effectively handling the multiclass problem in the affordance mask : a sequence of deconvolutional layers , a robust resizing strategy , and a multi-task loss function . the experimental results on the public datasets show that our affordancenet outperforms recent state-of-the-art methods by a fair margin , while its end-to-end architecture allows inference at a speed of 150 ms per image . this makes our affordancenet well suited for real-time robotic applications . furthermore , we demonstrate the effectiveness of affordancenet in different testing environments and in real robotic applications . the source code is available at https://github.com/nqanh/affordance-net .
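the grasp-density abstract earlier on this page models grasp success as a continuous density over object-relative poses via kernel density estimation . a minimal sketch of that idea with scikit-learn follows ; the pose parameterization , bandwidth , and data are illustrative assumptions :

```python
import numpy as np
from sklearn.neighbors import KernelDensity

# hypothetical successful grasps from grasp-and-drop trials, each an
# object-relative pose (x, y, z, roll, pitch, yaw)
rng = np.random.default_rng(0)
successful_grasps = rng.normal(
    loc=[0.0, 0.0, 0.1, 0.0, 1.57, 0.0], scale=0.02, size=(200, 6)
)

# nonparametric grasp density over pose space
density = KernelDensity(kernel="gaussian", bandwidth=0.05).fit(successful_grasps)

# score candidate grasp hypotheses and pick the most promising one
candidates = rng.normal(
    loc=[0.0, 0.0, 0.1, 0.0, 1.57, 0.0], scale=0.1, size=(50, 6)
)
log_probs = density.score_samples(candidates)
best = candidates[np.argmax(log_probs)]
print("best candidate grasp pose:", np.round(best, 3))
```

note that treating euler angles as euclidean coordinates is a simplification ; a faithful implementation would use a kernel that respects the geometry of orientations , as work on grasp densities typically does .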
story_separator_special_tag in this paper , we studied how a mobile robot equipped with a 3d laser scanner can start from primitive behaviors and learn to use them to achieve goal-directed behaviors . for this purpose , we propose a learning scheme that is based on the concept of `` affordances '' , where the robot first learns about the different kind of effects it can create in the environment and then links these effects with the perception of the initial environment and the executed primitive behavior . it uses these learned relations to create certain effects in the environment and achieve more complex behaviors . story_separator_special_tag this paper addresses the problem of learning and efficiently representing discriminative probabilistic models of object-specific grasp affordances particularly when the number of labeled grasps is extremely limited . the proposed method does not require an explicit 3d model but rather learns an implicit manifold on which it defines a probability distribution over grasp affordances . we obtain hypothetical grasp configurations from visual descriptors that are associated with the contours of an object . while these hypothetical configurations are abundant , labeled configurations are very scarce as these are acquired via time-costly experiments carried out by the robot . kernel logistic regression ( klr ) via joint kernel maps is trained to map the hypothesis space of grasps into continuous class-conditional probability values indicating their achievability . we propose a soft-supervised extension of klr and a framework to combine the merits of semi-supervised and active learning approaches to tackle the scarcity of labeled grasps . experimental evaluation shows that combining active and semi-supervised learning is favorable in the existence of an oracle . furthermore , semi-supervised learning outperforms supervised learning , particularly when the labeled data is very limited . story_separator_special_tag the darpa robotics challenge trials held in december 2013 provided a landmark demonstration of dexterous mobile robots executing a variety of tasks aided by a remote human operator using only data from the robot 's sensor suite transmitted over a constrained , field-realistic communications link . we describe the design considerations , architecture , implementation , and performance of the software that team mit developed to command and control an atlas humanoid robot . our design emphasized human interaction with an efficient motion planner , where operators expressed desired robot actions in terms of affordances fit using perception and manipulated in a custom user interface . we highlight several important lessons we learned while developing our system on a highly compressed schedule . story_separator_special_tag within the field of neuro robotics we are driven primarily by the desire to understand how humans and animals live and grow and solve every day 's problems . to this aim we adopted a `` learn by doing '' approach by building artificial systems , e.g . robots that not only look like human beings but also represent a model of some brain process . they should , ideally , behave and interact like human beings ( being situated ) . the main emphasis in robotics has been on systems that act as a reaction to an external stimulus ( e.g . tracking , reaching ) , rather than as a result of an internal drive to explore or `` understand '' the environment . 
we think it is now appropriate to try to move from acting , in the sense explained above , to `` understanding '' . as a starting point we addressed the problem of learning about the effects and consequences of self-generated actions . how does the robot learn how to pull an object toward itself or to push it away ? how does the robot learn that spherical objects roll while a cube only story_separator_special_tag this work is about the relevance of gibson 's concept of affordances [ 1 ] for visual perception in interactive and autonomous robotic systems . in extension to existing functional views on visual feature representations , we identify the importance of learning in perceptual cueing for the anticipation of opportunities for interaction of robotic agents . we investigate how the originally defined representational concept for the perception of affordances - in terms of using either optical flow or heuristically determined 3d features of perceptual entities - should be generalized to using arbitrary visual feature representations . in this context we demonstrate the learning of causal relationships between visual cues and predictable interactions , and emphasize on a novel framework for cueing and hypothesis verification of affordances that could play an important role in future robot control architectures . we argue that affordance based perception should enable systems to react to environment stimuli both more efficient and autonomous , and provide a potential to plan on the basis of responses to more complex perceptual configurations . we verify the concept with a concrete implementation applying state-of-the-art visual descriptors and regions of interest within a simulated robot scenario and prove that these story_separator_special_tag now well into its second decade , the field of computer-supported collaborative learning ( cscl ) appears healthy , encompassing a diversity of topics of study , methodologies , and representatives of various research communities . it is an appropriate time to ask : what central questions can integrate our work into a coherent field ? this paper proposes the study of technology affordances for intersubjective meaning making as an integrating research agenda for cscl . a brief survey of epistemologies of collaborative learning and forms of computer support for that learning characterize the field to be integrated and motivate the proposal . a hybrid of experimental , descriptive and design methodologies is proposed in support of this agenda . a working definition of intersubjective meaning making as joint composition of interpretations of a dynamically evolving context is provided , and used to propose a framework around which dialogue between analytic approaches can take place . story_separator_special_tag one of the major challenges in developing autonomous systems is to make them able to recognize and categorize objects robustly . however , the appearance-based algorithms that are widely employed for robot perception do not explore the functionality of objects , described in terms of their affordances . these affordances ( e.g. , manipulation , grasping ) are discriminative for object categories and are important cues for reliable robot performance in everyday environments . in this paper , we propose a strategy for object recognition that integrates both visual appearance and grasp affordance features . 
following previous work , we hypothesize that additional grasp information improves object recognition , even if we reconstruct the grasp modality from visual features using a mapping function . we considered two different representations for the grasp modality : ( 1 ) motor information of the hand posture while grasping and ( 2 ) a more general grasp affordance descriptor . using a multi-modal classifier we show that having real grasp information significantly boosts object recognition . this improvement is preserved , although to a lesser extent , if the grasp modality is reconstructed using the mapping function . story_separator_special_tag inspired by the extraordinary ability of young infants to learn how to grasp and manipulate objects , many works in robotics have proposed developmental approaches to allow robots to learn the effects of their own motor actions on objects , i.e. , the objects ' affordances . while holding an object , infants also promote its contact with other objects , resulting in object-object interactions that may afford effects not possible otherwise . depending on the characteristics of both the held object ( intermediate ) and the acted object ( primary ) , systematic outcomes may occur , leading to the emergence of a primitive concept of tool . in this paper we describe experiments with a humanoid robot exploring object-object interactions in a playground scenario and learning a probabilistic causal model of the effects of actions as functions of the characteristics of both objects . the model directly links the objects ' 2d shape visual cues to the effects of actions . because no object recognition skills are required , generalization to novel objects is possible by exploiting the correlations between the shape descriptors . we show experiments where an affordance model is learned in a simulated environment , and is then used on the real story_separator_special_tag this paper introduces the affordance template ros package for quickly programming , adjusting , and executing robot applications in the ros rviz environment . this package extends the capabilities of rviz interactive markers [ 1 ] by allowing an operator to specify multiple end-effector waypoint locations and grasp poses in object-centric coordinate frames and to adjust these waypoints in order to meet the run-time demands of the task ( specifically , object scale and location ) . the affordance template package stores task specifications in a robot-agnostic json description format such that it is trivial to apply a template to a new robot . as such , the affordance template package provides a robot-generic ros tool appropriate for building semi-autonomous , manipulation-based applications . affordance templates were developed by the nasa-jsc darpa robotics challenge ( drc ) team and have since successfully been deployed on multiple platforms including the nasa valkyrie and robonaut 2 humanoids , the university of texas dreamer robot and the willow garage pr2 . in this paper , the specification and implementation of the affordance template package is introduced and demonstrated through examples for wheel ( valve ) turning , pick-and-place , and drill grasping , story_separator_special_tag we present a novel method for learning and predicting the affordances of an object based on its physical and visual attributes .
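aside : the reconstruction-based multi-modal recognition scheme above can be sketched as a regression from visual features to the grasp modality , followed by a classifier on the concatenation . everything below is synthetic stand-in data , assuming scikit-learn .

```python
# sketch: reconstruct a grasp modality from visual features with a mapping
# function, then classify objects from the concatenated modalities.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.svm import SVC

rng = np.random.default_rng(1)
visual = rng.normal(size=(300, 10))                        # visual features
grasp = visual @ rng.normal(size=(10, 4)) \
        + 0.1 * rng.normal(size=(300, 4))                  # true grasp modality
labels = (visual[:, 0] > 0).astype(int)                    # toy object classes

mapper = Ridge(alpha=1.0).fit(visual, grasp)               # visual -> grasp map
grasp_hat = mapper.predict(visual)                         # reconstructed modality

clf = SVC().fit(np.hstack([visual, grasp_hat]), labels)    # multi-modal classifier
```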
affordance prediction is a key task in autonomous robot learning , as it allows a robot to reason about the actions it can perform in order to accomplish its goals . previous approaches to affordance prediction have either learned direct mappings from visual features to affordances , or have introduced object categories as an intermediate representation . in this paper , we argue that physical and visual attributes provide a more appropriate mid-level representation for affordance prediction , because they support informationsharing between affordances and objects , resulting in superior generalization performance . in particular , affordances are more likely to be correlated with the attributes of an object than they are with its visual appearance or a linguistically-derived object category . we provide preliminary validation of our method experimentally , and present empirical comparisons to both the direct and category-based approaches of affordance prediction . our encouraging results suggest the promise of the attributebased approach to affordance prediction . story_separator_special_tag we present a method by which a robot learns to predict effective contact locations for pushing as a function of object shape . the robot performs push experiments at many contact locations on multiple objects and records local and global shape features at each point of contact . each trial attempts to either push the object in a straight line or to rotate the object to a new orientation . the robot observes the outcome trajectories of the manipulations and computes either a push-stability or rotate-push score for each trial . the robot then learns a regression function for each score in order to predict push effectiveness as a function of object shape . with this mapping , the robot can infer effective push locations for subsequent objects from their shapes , regardless of whether they belong to a previously encountered object class . these results are demonstrated on a mobile manipulator robot pushing a variety of household objects on a tabletop surface . story_separator_special_tag a novel behavior representation is introduced that permits a robot to systematically explore the best methods by which to successfully execute an affordance-based behavior for a particular object . the approach decomposes affordance-based behaviors into three components . we first define controllers that specify how to achieve a desired change in object state through changes in the agent 's state . for each controller we develop at least one behavior primitive that determines how the controller outputs translate to specific movements of the agent . additionally we provide multiple perceptual proxies that define the representation of the object that is to be computed as input to the controller during execution . a variety of proxies may be selected for a given controller and a given proxy may provide input for more than one controller . when developing an appropriate affordance-based behavior strategy for a given object , the robot can systematically vary these elements as well as note the impact of additional task variables such as location in the workspace . we demonstrate the approach using a pr2 robot that explores different combinations of controller , behavior primitive , and proxy to perform a push or pull positioning behavior on story_separator_special_tag in this paper , we consider the influence of gibson 's affordance theory on the design of robotic agents . 
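aside : a toy version of the shape-to-push-score regression described above , where a regressor trained on recorded push trials ranks candidate contact locations on a novel object ; the features and scores here are synthetic stand-ins .

```python
# sketch: learn a regression from local shape features at a contact point
# to a push-stability score, then rank candidate contacts on a new object.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(2)
feats = rng.normal(size=(500, 8))                   # local + global shape features
score = np.tanh(feats[:, 0] - feats[:, 3]) \
        + 0.05 * rng.normal(size=500)               # toy push-stability score

model = RandomForestRegressor(n_estimators=100).fit(feats, score)

candidates = rng.normal(size=(40, 8))               # contact points on a novel object
best = int(np.argmax(model.predict(candidates)))    # most promising push location
print("push at candidate", best)
```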
affordance theory ( and the ecological approach to agent design in general ) has in many cases contributed to the development of successful robotic systems ; we provide a brief survey of ai research in this area . however , there remain significant issues that complicate discussions on this topic , particularly in the exchange of ideas between researchers in artificial intelligence and ecological psychology . we identify some of these issues , specifically the lack of a generally accepted definition of `` affordance '' and fundamental differences in the current approaches taken in ai and ecological psychology . while we consider reconciliation between these fields to be possible and mutually beneficial , it will require some flexibility on the issue of direct perception . story_separator_special_tag data sets are crucial not only for model learning and evaluation but also to advance knowledge on human behavior , thus fostering mutual inspiration between neuroscience and robotics . however , choosing the right data set to use or creating a new data set is not an easy task , because of the variety of data that can be found in the related literature . the first step to tackle this issue is to collect and organize those that are available . in this work , we take a significant step forward by reviewing data sets that were published in the past 10 years and that are directly related to object manipulation and grasping . we report on modalities , activities , and annotations for each individual data set and we discuss our view on its use for object manipulation . we also compare the data sets and summarize them . finally , we conclude the survey by providing suggestions and discussing the best practices for the creation of new data sets . story_separator_special_tag in this paper we describe a cognitive architecture for humanoids interacting with objects and caregivers in a developmental robotics scenario . the architecture is foundational to the macsi project : it is designed to support experiments to make a humanoid robot gradually enlarge its repertoire of known objects and skills combining autonomous learning , social guidance and intrinsic motivation . this complex learning process requires the capability to learn affordances first . here , we present the general framework for achieving these goals , focusing on the elementary action , perception and interaction modules . preliminary experiments performed on the humanoid robot icub are also discussed . story_separator_special_tag the concept of affordances appeared in psychology during the late 60s as an alternative perspective on the visual perception of the environment . it was revolutionary in the intuition that the way living beings perceive the world is deeply influenced by the actions they are able to perform . then , across the last 40 years , it has influenced many applied fields , e.g. , design , human-computer interaction , computer vision , and robotics . in this paper , we offer a multidisciplinary perspective on the notion of affordances . we first discuss the main definitions and formalizations of the affordance theory , then we report the most significant evidence in psychology and neuroscience that supports it , and finally we review the most relevant applications of this concept in robotics . story_separator_special_tag for scene understanding , one popular approach has been to model the object-object relationships .
in this paper , we hypothesize that such relationships are only an artifact of certain hidden factors , such as humans . for example , the objects , monitor and keyboard , are strongly spatially correlated only because a human types on the keyboard while watching the monitor . our goal is to learn this hidden human context ( i.e. , the human-object relationships ) , and also use it as a cue for labeling the scenes . we present infinite factored topic model ( iftm ) , where we consider a scene as being generated from two types of topics : human configurations and human-object relationships . this enables our algorithm to hallucinate the possible configurations of the humans in the scene parsimoniously . given only a dataset of scenes containing objects but not humans , we show that our algorithm can recover the human object relationships . we then test our algorithm on the task of attribute and object labeling in 3d scenes and show consistent improvements over the state-of-the-art . story_separator_special_tag humanoid robots that have to operate in cluttered and unstructured environments , such as man-made and natural disaster scenarios , require sophisticated sensorimotor capabilities . a crucial prerequisite for the successful execution of whole-body locomotion and manipulation tasks in such environments is the perception of the environment and the extraction of associated environmental affordances , i.e . the action possibilities of the robot in the environment , in order to generate whole-body locomotion and manipulation actions . we believe that such a coupling between perception and action could be a key to substantially increase the flexibility of humanoid robots . in this paper , we present an approach for the generation of whole-body locomotion and manipulation actions based on the affordances associated with environmental elements in the scene which are extracted via multimodal exploration . based on the properties of detected environmental primitives and the estimated empty space in the scene , we propose methods to generate hypotheses for feasible whole-body actions while taking into account additional task constraints such as manipulability and balance . we combine visual and inertial sensing modalities by means of a novel depth model for generating segmented and categorized geometric primitives . a rule-based system story_separator_special_tag autonomous robots that are intended to work in disaster scenarios like collapsed or contaminated buildings need to be able to efficiently identify action possibilities in unknown environments . this includes the detection of environmental elements that allow interaction , such as doors or debris , as well as the utilization of fixed environmental structures for stable whole-body loco-manipulation . affordances that refer to whole-body actions are especially valuable for humanoid robots as the necessity of stabilization is an integral part of their control strategies . based on our previous work we propose to apply the concept of affordances to actions of stable whole-body loco-manipulation , in particular to pushing and lifting of large objects . we extend our perceptual pipeline in order to build large-scale representations of the robot 's environment in terms of environmental primitives like planes , cylinders and spheres . a rule-based system is employed to derive whole-body affordance hypotheses from these primitives , which are then subject to validation by the robot . 
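aside : the rule-based derivation of whole-body affordance hypotheses from environmental primitives , as described above , can be sketched as a small set of geometric predicates . the primitive format and the thresholds here are illustrative assumptions , not the authors ' actual rules .

```python
# sketch: rule-based whole-body affordance hypotheses from detected
# geometric primitives (planes, cylinders, ...), pending robot validation.
from dataclasses import dataclass

@dataclass
class Primitive:
    kind: str       # 'plane', 'cylinder', ...
    height: float   # height above ground [m]
    extent: float   # characteristic size [m]

def affordance_hypotheses(p: Primitive) -> list[str]:
    hyps = []
    if p.kind == "plane" and 0.6 <= p.height <= 1.2:
        hyps.append("support-hand")        # lean on it for stabilization
    if p.kind == "plane" and p.height < 0.3 and p.extent > 0.3:
        hyps.append("step-on")
    if p.kind == "cylinder" and p.extent < 0.08:
        hyps.append("grasp")               # rail / handle of graspable radius
    return hyps                            # hypotheses, subject to validation

print(affordance_hypotheses(Primitive("cylinder", 1.0, 0.04)))
```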
an experimental evaluation demonstrates our progress in detection , validation and utilization of whole-body affordances . story_separator_special_tag we propose a formalism for the hierarchical representation of affordances . starting with a perceived model of the environment consisting of geometric primitives like planes or cylinders , we define a hierarchical system for affordance extraction whose foundation are elementary power grasp affordances . higher-level affordances , e.g . bimanual affordances , result from combining lower-level affordances with additional properties concerning the underlying geometric primitives of the scene . we model affordances as continuous certainty functions taking into account properties of the environmental elements and the perceiving robot 's embodiment . the developed formalism is regarded as the basis for the description of whole-body affordances , i.e . affordances associated with whole-body actions . the proposed formalism was implemented and experimentally evaluated in multiple scenarios based on rgb-d camera data . the feasibility of the approach is demonstrated on a real robotic platform . story_separator_special_tag autonomous manipulation in unstructured environments will enable a large variety of exciting and important applications . despite its promise , autonomous manipulation remains largely unsolved . even the most rudimentary manipulation task -- such as removing objects from a pile -- remains challenging for robots . we identify three major challenges that must be addressed to enable autonomous manipulation : object segmentation , action selection , and motion generation . these challenges become more pronounced when unknown man-made or natural objects are cluttered together in a pile . we present a system capable of manipulating unknown objects in such an environment . our robot is tasked with clearing a table by removing objects from a pile and placing them into a bin . to that end , we address the three aforementioned challenges . our robot perceives the environment with an rgb-d sensor , segmenting the pile into object hypotheses using non-parametric surface models . our system then computes the affordances of each object , and selects the best affordance and its associated action to execute . finally , our robot instantiates the proper compliant motion primitive to safely execute the desired action . for efficient and reliable action selection story_separator_special_tag when a robot is deployed it needs to understand the nature of its surroundings . in this paper , we address the problem of semantic labeling 3d point clouds by object affordance ( e.g. , ` pushable ' , ` liftable ' ) . we propose a technique to extract geometric features from point cloud segments and build a classifier to predict associated object affordances . with the classifier , we have developed an algorithm to enhance object segmentation and reduce manipulation uncertainty by iterative clustering , along with minimizing labeling entropy . our incremental multiple view merging technique shows improved object segmentation . the novel feature of our approach is the semantic labeling that can be directly applied to manipulation planning . in our experiments with 6 affordance labels , an average of 81.8 % accuracy of affordance prediction is achieved . we demonstrate refined object segmentation by applying the classifier to data from the pr2 robot using a microsoft kinect in an indoor office environment . 
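aside : a minimal sketch of affordance labeling of point-cloud segments from geometric features , in the spirit of the approach above ; the eigenvalue-based descriptor and the toy labels are assumptions .

```python
# sketch: simple geometric features from a point-cloud segment, feeding a
# classifier over affordance labels such as 'pushable' or 'liftable'.
import numpy as np
from sklearn.svm import SVC

def segment_features(pts: np.ndarray) -> np.ndarray:
    """pts: (n, 3) points of one segment -> small geometric descriptor."""
    evals = np.sort(np.linalg.eigvalsh(np.cov(pts.T)))[::-1]  # shape spectrum
    extent = pts.max(axis=0) - pts.min(axis=0)                # bounding size
    return np.concatenate([evals / evals.sum(), extent])

rng = np.random.default_rng(3)
segs = [rng.normal(size=(100, 3)) * rng.uniform(0.1, 1.0, 3) for _ in range(60)]
X = np.stack([segment_features(s) for s in segs])
y = rng.integers(0, 2, size=60)           # toy labels: 0 = pushable, 1 = liftable
clf = SVC(probability=True).fit(X, y)     # per-segment affordance prediction
```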
story_separator_special_tag we describe a technique to build an affordance map interactively for robotic tasks . affordances are predicted by a trained classifier using geometric features extracted from objects . based on 2d occupancy grid , a markov random field ( mrf ) model builds an affordance map with relational affordance with neighboring cells . the quality of the affordance map is refined by sequences of interactive manipulations selected from the model to yield the highest reduction in uncertainty . story_separator_special_tag this paper investigates object categorization according to function , i.e. , learning the affordances of objects from human demonstration . object affordances ( functionality ) are inferred from observations of humans using the objects in different types of actions . the intended application is learning from demonstration , in which a robot learns to employ objects in household tasks , from observing a human performing the same tasks with the objects . we present a method for categorizing manipulated objects and human manipulation actions in context of each other . the method is able to simultaneously segment and classify human hand actions , and detect and classify the objects involved in the action . this can serve as an initial step in a learning from demonstration method . experiments show that the contextual information improves the classification of both objects and actions . story_separator_special_tag objects in human environments support various functionalities which govern how people interact with their environments in order to perform tasks . in this work , we discuss how to represent and learn a functional understanding of an environment in terms of object affordances . such an understanding is useful for many applications such as activity detection and assistive robotics . starting with a semantic notion of affordances , we present a generative model that takes a given environment and human intention into account , and grounds the affordances in the form of spatial locations on the object and temporal trajectories in the 3d environment . the probabilistic model also allows uncertainties and variations in the grounded affordances . we apply our approach on rgb-d videos from cornell activity dataset , where we first show that we can successfully ground the affordances , and we then show that learning such affordances improves performance in the labeling tasks . story_separator_special_tag an important aspect of human perception is anticipation , which we use extensively in our day-to-day activities when interacting with other humans as well as with our surroundings . anticipating which activities will a human do next ( and how ) can enable an assistive robot to plan ahead for reactive responses . furthermore , anticipation can even improve the detection accuracy of past activities . the challenge , however , is two-fold : we need to capture the rich context for modeling the activities and object affordances , and we need to anticipate the distribution over a large space of future human activities . in this work , we represent each possible future using an anticipatory temporal conditional random field ( atcrf ) that models the rich spatial-temporal relations through object affordances . we then consider each atcrf as a particle and represent the distribution over the potential futures using a set of particles . 
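aside : a toy stand-in for the uncertainty-driven selection of interactive manipulations above : probe the map cell whose current affordance belief has the highest entropy , i.e. , where interaction promises the largest reduction in labeling uncertainty .

```python
# sketch: entropy-guided choice of the next interactive manipulation over
# a grid of per-cell affordance beliefs (distributions over labels).
import numpy as np

def entropy(p, eps=1e-12):
    return -(p * np.log(p + eps)).sum(axis=-1)

rng = np.random.default_rng(4)
belief = rng.dirichlet(np.ones(3), size=(10, 10))  # belief[i, j]: label distribution

h = entropy(belief)                                # per-cell uncertainty
next_cell = np.unravel_index(np.argmax(h), h.shape)
print("probe cell", next_cell, "entropy", float(h[next_cell]))
```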
in extensive evaluation on cad-120 human activity rgb-d dataset , we first show that anticipation improves the state-of-the-art detection results . we then show that for new subjects ( not seen in the training set ) , we obtain an activity anticipation accuracy ( story_separator_special_tag when robots work alongside humans for performing collaborative tasks , they need to be able to anticipate human s future actions and plan appropriate actions . the tasks we consider are performed in contextually-rich environments containing objects , and there is a large variation in the way humans perform these tasks . we use a graphical model to represent the state-space , where we model the humans through their low-level kinematics as well as their high-level intent , and model their interactions with the objects through physically-grounded object affordances . this allows our model to anticipate a belief about possible future human actions , and we model the human s and robot s behavior through an mdp in this rich state-space . we further discuss that due to perception errors and the limitations of the model , the human may not take the optimal action and therefore we present robot s anticipatory planning with different behaviors of the human within the model s scope . in experiments on cornell activity dataset , we show that our method performs better than various baselines for collaborative planning . story_separator_special_tag understanding human activities and object affordances are two very important skills , especially for personal robots which operate in human environments . in this work , we consider the problem of extracting a descriptive labeling of the sequence of sub-activities being performed by a human , and more importantly , of their interactions with the objects in the form of associated affordances . given a rgb-d video , we jointly model the human activities and object affordances as a markov random field where the nodes represent objects and sub-activities , and the edges represent the relationships between object affordances , their relations with sub-activities , and their evolution over time . we formulate the learning problem using a structural support vector machine ( ssvm ) approach , where labelings over various alternate temporal segmentations are considered as latent variables . we tested our method on a challenging dataset comprising 120 activity videos collected from 4 subjects , and obtained an accuracy of 79.4 % for affordance , 63.4 % for sub-activity and 75.0 % for high-level activity labeling . we then demonstrate the use of such descriptive labeling in performing assistive tasks by a pr2 robot . story_separator_special_tag autonomous robots should be able to move freely in unknown environments and avoid impacts with obstacles . the overall traversability estimation of the terrain and the subsequent selection of an obstacle-free route are prerequisites of a successful autonomous operation . this work proposes a computationally efficient technique for the traversability estimation of the terrain , based on a machine learning classification method . additionally , a new method for collision risk assessment is introduced . the proposed system uses stereo vision as a first step in order to obtain information about the depth of the scene . 
then , a v-disparity image calculation processing step extracts information-rich features about the characteristics of the scene , which are used to train a support vector machine ( svm ) separating the traversable and non-traversable scenes . the ones classified as traversable are further processed exploiting the polar transformation of the depth map . the result is a distribution of obstacle existence likelihoods for each direction , parametrized by the robot 's embodiment . story_separator_special_tag we describe a system for autonomous learning of visual object representations and their grasp affordances on a robot-vision system . it segments objects by grasping and moving 3d scene features , and creates probabilistic visual representations for object detection , recognition and pose estimation , which are then augmented by continuous characterizations of grasp affordances generated through biased , random exploration . thus , based on a careful balance of generic prior knowledge encoded in ( 1 ) the embodiment of the system , ( 2 ) a vision system extracting structurally rich information from stereo image sequences as well as ( 3 ) a number of built-in behavioral modules on the one hand , and autonomous exploration on the other hand , the system is able to generate object and grasping knowledge through interaction with its environment . story_separator_special_tag future service robots will need to perform a wide range of tasks using various objects . in order to perform complex tasks , robots require a suitable internal representation of the task . we propose a hybrid framework for representing manipulation tasks , which combines continuous motion planning and discrete task-level planning . in addition , we use a mid-level planner to optimize individual actions according to the plan . the proposed framework incorporates biologically-inspired concepts , such as affordances and motor primitives , in order to efficiently plan for manipulation tasks . the final framework is modular , can generalize well to different situations , and is straightforward to expand . our demonstrations also show how the use of affordances and mid-level planning can lead to improved performance . story_separator_special_tag the direct perception of actions allows a robot to predict the afforded actions of observed objects . in this paper , we present a non-parametric approach to representing the affordance-bearing subparts of objects . this representation forms the basis of a kernel function for computing the similarity between different subparts . using this kernel function , together with motor primitive actions , the robot can learn the required mappings to perform direct action perception . the proposed approach was successfully implemented on a real robot , which could then quickly learn to generalize grasping and pouring actions to novel objects . story_separator_special_tag this paper formalises object action complexes ( oacs ) as a basis for symbolic representations of sensory motor experience and behaviours . oacs are designed to capture the interaction between objects and associated actions in artificial cognitive systems . this paper gives a formal definition of oacs , provides examples of their use for autonomous cognitive robots , and enumerates a number of critical learning problems in terms of oacs . story_separator_special_tag geometric information alone is not sufficient to guide foot placement in a robotic device .
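aside : the v-disparity step above admits a compact implementation : histogram the disparities occurring on each image row , so that the ground plane appears as a dominant slanted line usable for traversability features . the disparity map below is random placeholder data .

```python
# sketch: v-disparity image from a dense disparity map; each output row is
# the histogram of disparities seen on that image row.
import numpy as np

def v_disparity(disp: np.ndarray, max_d: int = 64) -> np.ndarray:
    rows, _ = disp.shape
    out = np.zeros((rows, max_d), dtype=np.int32)
    for v in range(rows):
        d = disp[v]
        d = d[(d >= 0) & (d < max_d)].astype(int)  # keep valid disparities
        np.add.at(out[v], d, 1)                    # accumulate histogram bins
    return out

disp = np.random.default_rng(5).integers(0, 64, size=(240, 320))
vd = v_disparity(disp)   # feature image for the downstream svm classifier
```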
a travel path may be firm or soft , slippery or sticky , and will thus affect locomotion . in humans , associations are made between various travel surfaces and gait modification . if a preferred foot position is unavailable , an alternate is selected . this selection is biased , and not random . here we present a model of alternate foot placement . we demonstrate some novel properties of the model , thus showing its predictive value . we give results in a small bipedal walking mechanism . the power of our approach is that it captures key features of human performance and can be easily implemented in most walking machines . story_separator_special_tag objects are made of parts , each with distinct geometry , physics , functionality , and affordances . developing such a distributed , physical , interpretable representation of objects will facilitate intelligent agents to better explore and interact with the world . in this paper , we study physical primitive decomposition -- -understanding an object through its components , each with physical and geometric attributes . as annotated data for object parts and physics are rare , we propose a novel formulation that learns physical primitives by explaining both an object 's appearance and its behaviors in physical events . our model performs well on block towers and tools in both synthetic and real scenarios ; we also demonstrate that visual and physical observations often provide complementary signals . we further present ablation and behavioral studies to better understand our model and contrast it with human performance . story_separator_special_tag in this paper we build an imitation learning algorithm for a humanoid robot on top of a general world model provided by learned object affordances . we consider that the robot has previously learned a task independent affordance-based model of its interaction with the world . this model is used to recognize the demonstration by another agent ( a human ) and infer the task to be learned . we discuss several important problems that arise in this combined framework , such as the influence of an inaccurate model in the recognition of the demonstration . we illustrate the ideas in the paper with some experimental results obtained with a real robot . story_separator_special_tag tools can afford similar functionality if they share some common geometrical features . moreover , the effect that can be achieved with a tool depends as much on the action performed as on the way in which it is grasped . in the current paper we present a two step model for learning and predicting tool affordances which specifically tackles these issues . in the first place , we introduce oriented multi-scale extended gaussian image ( oms-egi ) , a set of 3d features devised to describe tools in interaction scenarios , able to encapsulate in a general and compact way the geometrical properties of a tool relative to the way in which it is grasped . then , based on these features , we propose an approach to learn and predict tool affordances in which the robot first discovers the available tool-pose categories of a set of hand-held tools , and then learns a distinct affordance model for each of the discovered tool-pose categories . 
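aside : the two-step tool-affordance scheme above ( discover tool-pose categories , then fit one model per category ) can be sketched with k-means plus per-cluster regressors ; descriptors , actions and effects are synthetic stand-ins for oms-egi data .

```python
# sketch: cluster tool-pose descriptors into categories, then train a
# distinct effect regressor per discovered category.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import Ridge

rng = np.random.default_rng(6)
descr = rng.normal(size=(300, 12))         # tool-pose descriptors
action = rng.normal(size=(300, 2))         # action parameters
effect = descr[:, 0:1] * action[:, 0:1]    # toy effect of the action

cats = KMeans(n_clusters=4, n_init=10).fit(descr)   # tool-pose categories
models = {}
for k in range(4):
    m = cats.labels_ == k
    models[k] = Ridge().fit(np.hstack([descr[m], action[m]]), effect[m])

k = int(cats.predict(descr[:1])[0])        # category of a new tool-pose
pred = models[k].predict(np.hstack([descr[:1], action[:1]]))
print("predicted effect:", pred)
```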
results show that the combination of oms-egi 3d features and multi-model affordance learning approach is able to produce quite accurate predictions of the effect that an action performed with a tool grasped on a story_separator_special_tag the concept of affordance is popular in the hci community but not well understood . donald norman appropriated the concept of affordances from james j. gibson for the design of common objects and both implicitly and explicitly adjusted the meaning given by gibson . there was , however , ambiguity in norman s original definition and use of affordances which he has subsequently made efforts to clarify . his definition germinated quickly and through a review of the hci literature we show that this ambiguity has lead to widely varying uses of the concept . norman has recently acknowledged the ambiguity , however , important clarifications remain . using affordances as a basis , we elucidate the role of the designer and the distinction between usefulness and usability . we expand gibson s definition into a framework for design . story_separator_special_tag affordances capture the relationships between a robot and the environment in terms of the actions that the robot is able to perform . the notable characteristic of affordance-based perception is that an object is perceived by what it affords ( e.g. , graspable and rollable ) , instead of identities ( e.g. , name , color , and shape ) . affordances play an important role in basic robot capabilities such as recognition , planning , and prediction . the key challenges in affordance research are : 1 ) how to automatically discover the distinctive features that specify an affordance in an online and incremental manner and 2 ) how to generalize these features to novel environments . this survey provides an entry point for interested researchers , including : 1 ) a general overview ; 2 ) classification and critical analysis of existing work ; 3 ) discussion of how affordances are useful in developmental robotics ; 4 ) some open questions about how to use the affordance concept ; and 5 ) a few promising research directions . story_separator_special_tag searching for objects in occluded spaces is one of the problems robots need to solve when tackling mobile manipulation tasks . most approaches focus only on searching for a specific object . in this paper , we use the concept of relational affordances to improve occluded object search performance . affordances define action possibilities on an object in the environment and play a role in basic cognitive capabilities . relational affordances extend this concept by modelling relations between multiple objects . by learning and using a relational affordance model we can search for any of the multiple objects that afford a given action , each object type having a probability distribution over possible sizes and shapes , and where spatial relations between objects such as co-occurrence and stacking are modelled . the experimental results show the viability of the relational affordance models for occluded object search . story_separator_special_tag affordances define the action possibilities on an object in the environment and in robotics they play a role in basic cognitive capabilities . previous works have focused on affordance models for just one object even though in many scenarios they are defined by configurations of multiple objects that interact with each other . we employ recent advances in statistical relational learning to learn affordance models in such cases . 
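aside : a propositionalized toy stand-in for the relational affordance models above : predict an interaction effect from the joined features of an object pair and their spatial relation . real statistical relational learners additionally share parameters across objects and arities , which this flat sketch does not capture .

```python
# sketch: pairwise (object, object, relation) features feeding an effect
# classifier, as a flat approximation of a relational affordance model.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(7)
obj_a = rng.normal(size=(400, 5))              # e.g. size / shape descriptors
obj_b = rng.normal(size=(400, 5))
rel = rng.normal(size=(400, 2))                # e.g. relative pose, stacking cue
X = np.hstack([obj_a, obj_b, rel])
y = (obj_a[:, 0] > obj_b[:, 0]).astype(int)    # toy interaction-effect label

clf = RandomForestClassifier(n_estimators=100).fit(X, y)
```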
our models generalize over objects and can deal effectively with uncertainty . two-object interaction models are learned from robotic interaction with the objects in the world and employed in situations with arbitrary numbers of objects . we illustrate these ideas with experimental results of an action recognition task where a robot manipulates objects on a shelf . story_separator_special_tag in this paper we study the learning of affordances through self-experimentation . we study the learning of local visual descriptors that anticipate the success of a given action executed upon an object . consider , for instance , the case of grasping . although graspable is a property of the whole object , the grasp action will only succeed if applied in the right part of the object . we propose an algorithm to learn local visual descriptors of good grasping points based on a set of trials performed by the robot . the method estimates the probability of a successful action ( grasp ) based on simple local features . experimental results on a humanoid robot illustrate how our method is able to learn descriptors of good grasping points and to generalize to novel objects based on prior experience . story_separator_special_tag we present a developmental perspective of robot learning that uses affordances as the link between sensory-motor coordination and imitation . the key concept is a general model for affordances able to learn the statistical relations between actions , object properties and the effects of actions on objects . based on the learned affordances , it is possible to perform simple imitation games providing both task interpretation and planning capabilities . to evaluate the approach , we provide results of affordance learning with a real robot and simple imitation games with people . story_separator_special_tag affordances represent the behavior of objects in terms of the robot 's motor and perceptual skills . this type of knowledge plays a crucial role in developmental robotic systems , since it is at the core of many higher level skills such as imitation . in this paper , we propose a general affordance model based on bayesian networks linking actions , object features and action effects . the network is learnt by the robot through interaction with the surrounding objects . the resulting probabilistic model is able to deal with uncertainty , redundancy and irrelevant information . we evaluate the approach using a real humanoid robot that interacts with objects . story_separator_special_tag affordances encode relationships between actions , objects , and effects . they play an important role on basic cognitive capabilities such as prediction and planning . we address the problem of learning affordances through the interaction of a robot with the environment , a key step to understand the world properties and develop social skills . we present a general model for learning object affordances using bayesian networks integrated within a general developmental architecture for social robots . since learning is based on a probabilistic model , the approach is able to deal with uncertainty , redundancy , and irrelevant information . we demonstrate successful learning in the real world by having an humanoid robot interacting with objects . we illustrate the benefits of the acquired knowledge in imitation games . story_separator_special_tag as robots begin to collaborate with humans in everyday workspaces , they will need to understand the functions of tools and their parts . 
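aside : the action / object-feature / effect bayesian networks recurring in the abstracts above can be sketched with the pgmpy library ( an assumption ; class names vary slightly across pgmpy versions ) over a toy interaction log .

```python
# sketch: a discrete bayesian network linking action and object shape to
# the observed effect, learned from a small toy interaction log.
import pandas as pd
from pgmpy.models import BayesianNetwork
from pgmpy.estimators import MaximumLikelihoodEstimator
from pgmpy.inference import VariableElimination

data = pd.DataFrame([
    {"action": "tap",   "shape": "sphere", "effect": "rolls"},
    {"action": "tap",   "shape": "cube",   "effect": "slides"},
    {"action": "grasp", "shape": "sphere", "effect": "lifted"},
    {"action": "grasp", "shape": "cube",   "effect": "lifted"},
    {"action": "tap",   "shape": "sphere", "effect": "rolls"},
])

model = BayesianNetwork([("action", "effect"), ("shape", "effect")])
model.fit(data, estimator=MaximumLikelihoodEstimator)

infer = VariableElimination(model)
print(infer.query(["effect"], evidence={"action": "tap", "shape": "sphere"}))
```

the same network can be queried in the other direction , e.g. , inferring which action most likely produced an observed effect , which is what supports the imitation games described above .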
to cut an apple or hammer a nail , robots need to not just know the tool 's name , but they must localize the tool 's parts and identify their functions . intuitively , the geometry of a part is closely related to its possible functions , or its affordances . therefore , we propose two approaches for learning affordances from local shape and geometry primitives : 1 ) superpixel based hierarchical matching pursuit ( s-hmp ) ; and 2 ) structured random forests ( srf ) . moreover , since a part can be used in many ways , we introduce a large rgb-depth dataset where tool parts are labeled with multiple affordances and their relative rankings . with ranked affordances , we evaluate the proposed methods on 3 cluttered scenes and over 105 kitchen , workshop and garden tools , using ranked correlation and a weighted f-measure score [ 26 ] . experimental results over sequences containing clutter , occlusions , and viewpoint changes show that the approaches return precise predictions that could story_separator_special_tag we present a new method to detect object affordances in real-world scenes using deep convolutional neural networks ( cnn ) , an object detector and dense conditional random fields ( crf ) . our system first trains an object detector to generate bounding box candidates from the images . a deep cnn is then used to learn the depth features from these bounding boxes . finally , these feature maps are post-processed with dense crf to improve the prediction along class boundaries . the experimental results on our new challenging dataset show that the proposed approach outperforms recent state-of-the-art methods by a substantial margin . furthermore , from the detected affordances we introduce a grasping method that is robust to noisy data . we demonstrate the effectiveness of our framework on the full-size humanoid robot walk-man using different objects in real-world scenarios . story_separator_special_tag tool-body assimilation is one of the intelligent human abilities . through trial and experience , humans are capable of using tools as if they are part of their own bodies . this paper presents a method to apply a robot 's active sensing experience for creating the tool-body assimilation model . the model is composed of a feature extraction module , dynamics learning module , and a tool recognition module . self-organizing map ( som ) is used for the feature extraction module to extract object features from raw images . multiple time-scales recurrent neural network ( mtrnn ) is used as the dynamics learning module . parametric bias ( pb ) nodes are attached to the weights of mtrnn as second-order network to modulate the behavior of mtrnn based on the tool . the generalization capability of neural networks provide the model the ability to deal with unknown tools . experiments are performed with hrp-2 using no tool , i-shaped , t-shaped , and l-shaped tools . the distribution of pb values have shown that the model has learned that the robot 's dynamic properties change when holding a tool . the results of the experiment show that the story_separator_special_tag there are many situations in which an object that needs to be grasped is not graspable , but could be grasped if it was situated at a different location . by applying nonprehensile manipulation actions such as poking , the object can be moved to a new location without first being grasped . we consider these issues in the context of an artificial cognitive system . 
the goal of the paper is twofold ; firstly , we study how the robot can acquire nonprehensile manipulation knowledge by observing the outcomes of exploratory movements on objects . we propose a learning process that enables the robot to acquire a general pushing rule describing the relationship between the direction of poke and the observed object motion for a class of objects . in this way the robot acquires new action knowledge without having any specialized prior model about the action . secondly , we investigate how the acquired action knowledge can be used to realize grasping in complex situations where the robot could not grasp the object without moving it to a new location . here the learned poking behavior serves as a support action for robot grasping . the proposed approach story_separator_special_tag analyzing affordances has its root in socio-cognitive development of primates . knowing what the environment , including other agents , can offer in terms of action capabilities is important for our day-to-day interaction and cooperation . in this paper , we will merge two complementary aspects of affordances : from agent-object perspective , what an agent afford to do with an object , and from agent-agent perspective , what an agent can afford to do for other agent , and present a unified notion of affordance graph . the graph will encode affordances for a variety of tasks : take , give , pick , put on , put into , show , hide , make accessible , etc . another novelty will be to incorporate the aspects of effort and perspective-taking in constructing such graph . hence , the affordance graph will tell about the action-capabilities of manipulating the objects among the agents and across the places , along with the information about the required level of efforts and the potential places . we will also demonstrate some interesting applications . story_separator_special_tag this work introduces an affordance characterization employing mechanical wrenches as a metric for predicting and planning with workspace affordances . although affordances are a commonly used high-level paradigm for robotic task-level planning and learning , the literature has been sparse regarding how to characterize the agent in this object-agent-environment framework . in this work , we propose decomposing a behavior into a vocabulary of characteristic requirements and capabilities that are suitable to predict the affordances of various parts of the workspace . specifically , we investigate mechanical wrenches as a viable representation of these affordance requirements and capabilities . we then use this vocabulary in a planning system to compose complex motions from simple behavior types in continuous space . the utility of the framework for complex planning is demonstrated on example scenarios both in simulation and with real-world industrial manipulators . story_separator_special_tag when it comes to learning how to manipulate objects from experience with minimal prior knowledge , robots encounter significant challenges . when the objects are unknown to the robot , the lack of prior object models demands a robust feature descriptor such that the robot can reliably compare objects and the effects of their manipulation . in this paper , using an experimental platform that gathers 3-d data from the kinect rgb-d sensor , as well as push action trajectories from a tracking system , we address these issues using an action-grounded 3-d feature descriptor . 
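aside : the general pushing rule above ( direction of poke to observed object motion ) can be sketched as a linear regression , inverted approximately by scoring candidate pokes against a desired displacement ; the trial data below is simulated .

```python
# sketch: learn a pushing rule from exploratory pokes, then pick the poke
# whose predicted motion best matches a desired object displacement.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(8)
poke_dir = rng.normal(size=(200, 2))
poke_dir /= np.linalg.norm(poke_dir, axis=1, keepdims=True)
offset = rng.uniform(-1, 1, size=(200, 1))       # contact offset from centroid
motion = 0.8 * poke_dir + 0.3 * offset * poke_dir[:, ::-1] \
         + 0.05 * rng.normal(size=(200, 2))      # toy object dynamics

rule = LinearRegression().fit(np.hstack([poke_dir, offset]), motion)

goal = np.array([1.0, 0.0])                      # desired object displacement
cands = np.hstack([rng.normal(size=(100, 2)), rng.uniform(-1, 1, (100, 1))])
cands[:, :2] /= np.linalg.norm(cands[:, :2], axis=1, keepdims=True)
best = cands[np.argmin(np.linalg.norm(rule.predict(cands) - goal, axis=1))]
print("chosen poke:", best)
```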
rather than using pose-invariant visual features , as is often the case with object recognition , we ground the features of objects with respect to their manipulation , that is , by using shape features that describe the surface of an object relative to the push contact point and direction . using this setup , object push affordance learning trials are performed by a human and both pre-push and post-push object features are gathered , as well as push action trajectories . a self-supervised multi-view online learning algorithm is employed to bootstrap both the discovery of affordance classes in the post-push view , as story_separator_special_tag this paper introduces and evaluates a new tensor field representation to express the geometric affordance of one object relative to another , a key competence for cognitive and autonomous robots . we expand the bisector surface representation to one that is weight-driven and that retains the provenance of surface points with directional vectors . we also incorporate the notion of affordance keypoints which allow for faster decisions at a point of query and with a compact and straightforward descriptor . using a single interaction example , we are able to generalize to previously-unseen scenarios ; both synthetic and also real scenes captured with rgb-d sensors . evaluations also include crowdsourcing comparisons that confirm the validity of our affordance proposals , which agree on average 84 % of the time with human judgments , that is 20 40 % better than the baseline methods . story_separator_special_tag the concept of affordances was introduced by j. j. gibson to explain how inherent `` values '' and `` meanings '' of things in the environment can be directly perceived and how this information can be linked to the action possibilities offered to the organism by the environment . although introduced in psychology , the concept influenced studies in other fields ranging from human computer interaction to autonomous robotics . in this article , we first introduce the concept of affordances as conceived by j. j. gibson and review the use of the term in different fields , with particular emphasis on its use in autonomous robotics . then , we summarize four of the major formalization proposals for the affordance term . we point out that there are three , not one , perspectives from which to view affordances and that much of the confusion regarding discussions on the concept has arisen from this . we propose a new formalism for affordances and discuss its implications for autonomous robot control . we report preliminary results obtained with robots and link them with these implications . story_separator_special_tag in this paper we introduce a knowledge engine , which learns and shares knowledge representations , for robots to carry out a variety of tasks . building such an engine brings with it the challenge of dealing with multiple data modalities including symbols , natural language , haptic senses , robot trajectories , visual features and many others . the \\textit { knowledge } stored in the engine comes from multiple sources including physical interactions that robots have while performing tasks ( perception , planning and control ) , knowledge bases from the internet and learned representations from several robotics research groups . 
we discuss various technical aspects and associated challenges such as modeling the correctness of knowledge , inferring latent information and formulating different robotic tasks as queries to the knowledge engine . we describe the system architecture and how it supports different mechanisms for users and robots to interact with the engine . finally , we demonstrate its use in three important research areas : grounding natural language , perception , and planning , which are the key building blocks for many robotic tasks . this knowledge engine is a collaborative effort and we call it robobrain . story_separator_special_tag this paper studies the learning of task constraints that allow grasp generation in a goal-directed manner . we show how an object representation and a grasp generated on it can be integrated with the task requirements . the scientific problems tackled are ( i ) identification and modeling of such task constraints , and ( ii ) integration between a semantically expressed goal of a task and quantitative constraint functions defined in the continuous object-action domains . we first define constraint functions given a set of object and action attributes , and then model the relationships between object , action , constraint features and the task using bayesian networks . the probabilistic framework deals with uncertainty , combines a-priori knowledge with observed data , and allows inference on target attributes given only partial observations . we present a system designed to structure data generation and constraint learning processes that is applicable to new tasks , embodiments and sensory data . the application of the task constraint model is demonstrated in a goal-directed imitation experiment . story_separator_special_tag we study embodiment-specific robot grasping tasks , represented in a probabilistic framework . the framework consists of a bayesian network ( bn ) integrated with a novel multi-variate discretization model . the bn models the probabilistic relationships among tasks , objects , grasping actions and constraints . the discretization model provides compact data representation that allows efficient learning of the conditional structures in the bn . to evaluate the framework , we use a database generated in a simulated environment including examples of a human and a robot hand interacting with objects . the results show that the different kinematic structures of the hands affect both the bn structure and the conditional distributions over the modeled variables . both models achieve accurate task classification , and successfully encode the semantic task requirements in the continuous observation spaces . in an imitation experiment , we demonstrate that the representation framework can transfer task knowledge between different embodiments , therefore is a suitable model for grasp planning and imitation in a goal-directed manner . story_separator_special_tag the main contribution of this paper is a probabilistic method for predicting human manipulation intention from image sequences of human-object interaction . predicting intention amounts to inferring the imminent manipulation task when human hand is observed to have stably grasped the object . inference is performed by means of a probabilistic graphical model that encodes object grasping tasks over the 3d state of the observed scene . the 3d state is extracted from rgb-d image sequences by a novel vision-based , markerless hand-object 3d tracking framework . 
to deal with the high-dimensional state-space and mixed data types ( discrete and continuous ) involved in grasping tasks , we introduce a generative vector quantization method using mixture models and self-organizing maps . this yields a compact model for encoding of grasping actions , able of handling uncertain and partial sensory data . experimentation showed that the model trained on simulated data can provide a potent basis for accurate goal-inference with partial and noisy observations of actual real-world demonstrations . we also show a grasp selection process , guided by the inferred human intention , to illustrate the use of the system for goal-directed grasp imitation . story_separator_special_tag grasping and manipulating everyday objects in a goal-directed manner is an important ability of a service robot . the robot needs to reason about task requirements and ground these in the sensorimotor information . grasping and interaction with objects are challenging in real-world scenarios , where sensorimotor uncertainty is prevalent . this paper presents a probabilistic framework for the representation and modeling of robot-grasping tasks . the framework consists of gaussian mixture models for generic data discretization , and discrete bayesian networks for encoding the probabilistic relations among various task-relevant variables , including object and action features as well as task constraints . we evaluate the framework using a grasp database generated in a simulated environment including a human and two robot hand models . the generative modeling approach allows the prediction of grasping tasks given uncertain sensory data , as well as object and grasp selection in a task-oriented manner . furthermore , the graphical model framework provides insights into dependencies between variables and features relevant for object grasping . story_separator_special_tag appearance-based estimation of grasp affordances is desirable when 3-d scans become unreliable due to clutter or material properties . we develop a general framework for estimating grasp affordances from 2-d sources , including local texture-like measures as well as object-category measures that capture previously learned grasp strategies . local approaches to estimating grasp positions have been shown to be effective in real-world scenarios , but are unable to impart object-level biases and can be prone to false positives . we describe how global cues can be used to compute continuous pose estimates and corresponding grasp point locations , using a max-margin optimization for category-level continuous pose regression . we provide a novel dataset to evaluate visual grasp affordance estimation ; on this dataset we show that a fused method outperforms either local or global methods alone , and that continuous pose estimation improves over discrete output models . finally , we demonstrate our autonomous object detection and grasping system on the willow garage pr2 robot . story_separator_special_tag current approaches to visual object class detection mainly focus on the recognition of basic level categories , such as cars , motorbikes , mugs and bottles . although these approaches have demonstrated impressive performance in terms of recognition , their restriction to these categories seems inadequate in the context of embodied , cognitive agents . 
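aside : the mixture-model vector quantization above can be sketched by fitting a gaussian mixture to continuous grasp features and using component indices as discrete symbols for a downstream discrete model ; the feature data is synthetic .

```python
# sketch: gaussian-mixture discretization of continuous grasp features, so
# that a discrete bayesian network can consume them as symbols.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(9)
feats = np.vstack([rng.normal(-2, 0.5, size=(100, 3)),
                   rng.normal(+2, 0.5, size=(100, 3))])   # e.g. wrist-pose dims

gmm = GaussianMixture(n_components=4, random_state=0).fit(feats)
symbols = gmm.predict(feats)        # one discrete symbol per observation
print(np.bincount(symbols))         # occupancy of each mixture component
```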
here , distinguishing objects according to functional aspects based on object affordances is important in order to enable manipulation of and interaction between physical objects and cognitive agent . in this paper , we propose a system for the detection of functional object classes , based on a representation of visually distinct hints on object affordances ( affordance cues ) . it spans the complete range from tutordriven acquisition of affordance cues , learning of corresponding object models , and detecting novel instances of functional object classes in real images . story_separator_special_tag the idea that to perceive an object is to perceive its affordances that is , the interactions of the perceiver with the world that the object supports or affords is attractive from the point of view of theories in cognitive science that emphasize the fundamental role of actionsin representing an agent s knowledge about the world . however , in this general form , the notion has so far lacked a formal expression . this paper offers a representation for objects in terms of their affordances using linear dynamic event calculus , a formalism for reasoning about causal relations over events . it argues that a representation of this kind , linking objects to the events which they are characteristically involved in , underlies some universal operations of natural language syntactic and semantic composition that are postulated in combinatory categorial grammar ( ccg ) . these observations imply that the language faculty is more directly related to prelinguistic cognitive apparatus used for planning action than formal theories in either domain have previously seemed to allow . story_separator_special_tag in this article , i argue that affordances are properties of the animal-environment system , that is , that they are emergent properties that do not inhere in either the environment or the animal . i critique and review the formal definition of affordance offered by turvey ( 1992 ) . turvey defined affordances as properties of the environment ; i discuss some consequences of this and argue that turvey 's strategy of grounding the definition of affordance in terms of dispositional properties is problematic . i also suggest that turvey 's definition of affordance may lead to problems for the specification and direct perception of affordances . motivated by these problems , i propose a new definition of affordance , in which affordances are properties of the animal-environment system . this definition does not rely on the concept of dispositional properties and is consistent with direct perception . story_separator_special_tag this paper introduces a novel approach to representing and learning tool affordances by a robot . the tool representation described here uses a behavior-based approach to ground the tool affordances in the behavioral repertoire of the robot . the representation is learned during a behavioral babbling stage in which the robot randomly chooses different exploratory behaviors , applies them to the tool , and observes their effects on environmental objects . the paper shows how the autonomously learned affordance representation can be used to solve tool-using tasks by dynamically sequencing the exploratory behaviors based on their expected outcomes . the quality of the learned representation was tested on extension-of-reach tool-using tasks . story_separator_special_tag a fundamental requirement of any autonomous robot system is the ability to predict the affordances of its environment . 
the set of affordances defines the actions that are available to the agent given the robot 's context . a standard approach to affordance learning is direct perception , which learns direct mappings from sensor measurements to affordance labels . for example , a robot designed for cross-country navigation could map stereo depth information and image features directly into predictions about the traversability of terrain regions . while this approach can succeed for a small number of affordances , it does not scale well as the number of affordances increases . in this paper , we show that visual object categories can be used as an intermediate representation that makes the affordance learning problem scalable . we develop a probabilistic graphical model , the category-affordance ( ca ) model , which describes the relationships between object categories , affordances , and appearance . this model casts visual object categorization as an intermediate inference step in affordance prediction . we describe several novel affordance learning and training strategies that are supported by our new model . experimental results with indoor mobile story_separator_special_tag this paper presents a novel object-object affordance learning approach that enables intelligent robots to learn the interactive functionalities of objects from human demonstrations in everyday environments . instead of considering a single object , we model the interactive motions between paired objects in a human-object-object way . the innate interaction-affordance knowledge of the paired objects is learned from a labeled training dataset that contains a set of relative motions of the paired objects , human actions , and object labels . the learned knowledge is represented with a bayesian network , and the network can be used to improve the recognition reliability of both objects and human actions and to generate proper manipulation motion for a robot if a pair of objects is recognized . this paper also presents an image-based visual servoing approach that uses the learned motion features of the affordance in interaction as the control goals to control a robot to perform manipulation tasks . story_separator_special_tag this paper presents a hierarchical , statistical topic model for representing the grasp preshapes of a set of objects . observations provided by teleoperation are clustered into latent affordances shared among all objects . each affordance defines a joint distribution over position and orientation of the hand relative to the object and conditioned on visual appearance . the parameters of the model are learned using a gibbs sampling method . after training , the model can be used to compute grasp preshapes for a novel object based on its visual appearance . the model is evaluated experimentally on a set of objects for its ability to generate grasp preshapes that lead to successful grasps , and compared to a baseline approach . story_separator_special_tag learning to predict the effects of actions applied to pairs of objects is a difficult task that requires learning complex relations with sparse , incomplete and noisy information . our knowledge propagation approach propagates affordance predictions by exploiting similarities among object properties , action parameters and resulting effects .
the knowledge is propagated in a graph where a missing edge , corresponding to an unknown interaction between two objects ( nodes ) , is predicted via the superposition of all paths connecting those objects in the graph . the high complexity of affordance representation is addressed through the use of maximum margin multi-valued regression ( mmmvr ) , which scales well to complex problems of multiple layers . with increased diversity and size of object databases and the addition of other parametric combinatory actions , we expect to achieve complex systems that leverage learned structure for subsequent learning , achieving structural bootstrapping over lifelong development and learning . in this paper , we extend mmmvr for learning of paired-object affordances , i.e. , for predicting the effects of actions applied to pairs of objects . in our experiments , we evaluated this method on a dataset composed of 83 objects story_separator_special_tag a general learning task for a robot in a new environment is to learn about objects and what actions/effects they afford . to approach this , we look at ways that a human partner can intuitively help the robot learn , socially guided machine learning . we present experiments conducted with our robot , junior , and make six observations characterizing how people approached teaching about objects . we show that junior successfully used transparency to mitigate errors . finally , we present the impact of `` social '' versus `` non-social '' data sets when training svm classifiers . story_separator_special_tag one of the recurring challenges in humanoid robotics is the development of learning mechanisms to predict the effects of certain actions on objects . it is paramount to predict the functional properties of an object from afar , for example on a table , in a rack or a shelf , which would allow the robot to select beforehand and automatically an appropriate action ( or sequence of actions ) in order to achieve a particular goal . such sensory-to-motor schemas associated to objects , surfaces or other entities in the environment are called affordances [ 1 , 2 ] and , more recently , they have been formalized computationally under the name of object-action complexes [ 3 ] ( oacs ) . this paper describes an approach to the acquisition of affordances and tool use in a humanoid robot combining vision , learning and control . learning is structured to enable a natural progression of episodes that include objects , tools , and eventually knowledge of the complete task . we finally test the robot 's behavior in an object retrieval task where it has to choose among a number of possible elongated tools to reach the story_separator_special_tag actions must be controlled prospectively . this requires that the behavioral possibilities of surface layouts and events be perceived . in this article , the ontological basis for an understanding of prospective control in realist terms is outlined . the foundational idea is that of affordances and the promoted ontology is materialist and dynamicist . it is argued that research in the ecological approach to prospective control is ultimately the search for objective laws . because lawfulness is equated with real possibility , this amounts to the study of the affordances ( the real possibilities ) underlying prospective control and the circumstances that actualize them . the ontological assumptions and hypotheses bearing on this latter proposal are articulated .
it is suggested that critical evaluation of the identified ontological themes may benefit the experimental and theoretical study of perception in the service of activity . story_separator_special_tag this work aims for bottom-up and autonomous development of symbolic planning operators from continuous interaction experience of a manipulator robot that explores the environment using its action repertoire . development of the symbolic knowledge is achieved in two stages . in the first stage , the robot explores the environment by executing actions on single objects , forms effect and object categories , and gains the ability to predict the object/effect categories from the visual properties of the objects by learning the nonlinear and complex relations among them . in the next stage , with further interactions that involve stacking actions on pairs of objects , the system learns logical high-level rules that return a stacking-effect category given the categories of the involved objects and the discrete relations between them . finally , these categories and rules are encoded in planning domain definition language ( pddl ) , enabling symbolic planning . we realized our method by learning the categories and rules in a physics-based simulator . the learned symbols and operators are verified by generating and executing non-trivial symbolic plans on the real robot in a tower building task . story_separator_special_tag the concept of affordances , as proposed by j.j. gibson , refers to the relationship between the organism and its environment and has become popular in autonomous robot control . the learning of affordances in autonomous robots , however , typically requires a large set of training data obtained from the interactions of the robot with its environment . therefore , the learning process is not only time-consuming and costly but also risky , since some of the interactions may inflict damage on the robot . in this paper , we study the learning of traversability affordance on a mobile robot and investigate how the number of interactions required can be minimized with minimal degradation of the learning process . specifically , we propose a two-step learning process which consists of bootstrapping and curiosity-based learning phases . in the bootstrapping phase , a small set of initial interaction data are used to find the relevant perceptual features for the affordance , and a support vector machine ( svm ) classifier is trained . in the curiosity-driven learning phase , a curiosity band around the decision hyperplane of the svm is used to decide whether a given interaction opportunity story_separator_special_tag we are interested in how the concept of affordances can affect our view of autonomous robot control , and how the results obtained from autonomous robotics can be reflected back upon the discussion and studies on the concept of affordances . in this paper , we studied how a mobile robot , equipped with a 3d laser scanner , can learn to perceive the traversability affordance and use it to wander in a room filled with spheres , cylinders and boxes . the results showed that after learning , the robot can wander around avoiding contact with non-traversable objects ( i.e . boxes , upright cylinders , or lying cylinders in certain orientation ) , but moving over traversable objects ( such as spheres , and lying cylinders in a rollable orientation with respect to the robot ) , rolling them out of its way .
we have shown that for each action approximately 1 % of the perceptual features were relevant to determine whether it is afforded or not and that these relevant features are positioned in certain regions of the range image . the experiments are conducted both using a physics-based simulator and on a real robot . story_separator_special_tag in this paper we present the realization of the formalism we have proposed for affordance learning and its use for planning ( sahin et al. , 2007 ) on an anthropomorphic robotic hand . in this realization , the robot interacts with the objects in its environment using the programmed push and grasp-and-lift behaviors , and records its interactions in triples that consist of the initial percept of the object , the behavior applied and the observed effect , defined as the difference between the initial and the final percept . the interaction with the environment allows the robot to learn object affordance relations to predict the change in the percept of the object when a certain behavior is applied . these relations can then be used to develop multi-step plans using forward chaining . our experiments have shown that the robot is able to learn the physical affordances of objects from 3d range images and use them to build symbols and relations that are used for making multi-step plans to achieve a given goal . story_separator_special_tag in this paper , we show that through self-interaction and self-observation , an anthropomorphic robot equipped with a range camera can learn object affordances and use this knowledge for planning . in the first step of learning , the robot discovers commonalities in its action-effect experiences by discovering effect categories . once the effect categories are discovered , in the second step , affordance predictors for each behavior are obtained by learning the mapping from the object features to the effect categories . after learning , the robot can make plans to achieve desired goals , emulate end states of demonstrated actions , monitor the plan execution and take corrective actions using the perceptual structures employed or discovered during learning . we argue that the learning system proposed shares crucial elements with the development of infants of 7-10 months of age , who explore the environment and learn the dynamics of the objects through goal-free exploration . in addition , we discuss goal emulation and planning in relation to older infants with no symbolic inference capability and non-linguistic animals which utilize object affordances to make action plans . story_separator_special_tag in this paper , we use the notion of affordances , proposed in cognitive science , as a framework to propose a developmental method that would enable a robot to ground symbolic planning mechanisms in the continuous sensory-motor experiences of a robot . we propose a method that allows a robot to learn the symbolic relations that pertain to its interactions with the world and show that they can be used in planning . specifically , the robot interacts with the objects in its environment using a pre-coded repertoire of behaviors and records its interactions in a triple that consists of the initial percept of the object , the behavior applied and its effect , defined as the difference between the initial and the final percept . the method allows the robot to learn object affordance relations which can be used to predict the change in the percept of the object when a certain behavior is applied .
these relations can then be used to develop plans using forward chaining . the method is implemented and evaluated on a mobile robot system with limited object manipulation capabilities . we have shown that the robot is able to learn the physical story_separator_special_tag inspired by infant development , we propose a three-stage developmental framework for an anthropomorphic robot manipulator . in the first stage , the robot is initialized with a basic reach-and-enclose-on-contact movement capability , and discovers a set of behavior primitives by exploring its movement parameter space . in the next stage , the robot exercises the discovered behaviors on different objects , and learns the caused effects ; effectively building a library of affordances and associated predictors . finally , in the third stage , the learned structures and predictors are used to bootstrap complex imitation and action learning with the help of a cooperative tutor . the main contribution of this paper is the realization of an integrated developmental system where the structures emerging from the sensorimotor experience of an interacting real robot are used as the sole building blocks of the subsequent stages that generate increasingly more complex cognitive capabilities . the proposed framework includes a number of common features with infant sensorimotor development . furthermore , the findings obtained from the self-exploration and motionese-guided human-robot interaction experiments allow us to reason about the underlying mechanisms of simple-to-complex sensorimotor skill progression in human infants . story_separator_special_tag afnet , the affordance network , is an open affordance computing initiative that provides affordance knowledge ontologies for common household articles in terms of affordance features using surface forms termed afbits ( affordance bits ) . afnet currently offers 68 base affordance features ( 25 structural , 10 material , 33 grasp ) , providing over 200 object category definitions in terms of 4000 afbits . symbol grounding algorithms for these affordance features enable recognition of objects in visual ( rgb-d ) data . while afnet is built as a generic visual knowledge ontology for recognition , it is well suited for deployment on domestic robots . in this paper , we describe afrob , an extension of afnet for robotic applications . afrob builds upon afnet by imbibing semantic context and mapping for holistic recognition and manipulation of objects in domestic environments . afrob also offers modules to enable robots to interact and grasp objects through the generation of grasp affordances . the paper also details the inference mechanisms that adapt afnet for robots in domestic contexts . results demonstrate the efficiency of the affordance-driven approach to holistic visual processing . story_separator_special_tag an affordance is a relation between an object , an action , and the effect of that action in a given environmental context . one key benefit of the concept of affordance is that it provides information about the consequence of an action which can be stored and reused in a range of tasks that a robot needs to learn and perform . in this paper , we address the challenge of the on-line learning and use of affordances simultaneously while performing goal-directed tasks . this requires efficient online performance to ensure the robot is able to achieve its goal quickly .
by providing conceptual knowledge of action possibilities and desired effects , we show that the humanoid robot nao can learn and use affordances in two different task settings . we demonstrate the effectiveness of this approach by integrating affordances into an extended classifier system for learning general rules in a reinforcement learning framework . our experimental results show significant speedups in learning how a robot solves a given task . story_separator_special_tag for robots that have the capability to interact with the physical environment through their end effectors , understanding the surrounding scenes is not merely a task of image classification or object recognition . to perform actual tasks , it is critical for the robot to have a functional understanding of the visual scene . here , we address the problem of localization and recognition of functional areas in an arbitrary indoor scene , formulated as a two-stage deep-learning-based detection pipeline . a new scene functionality test-bed , which is compiled from two publicly available indoor scene datasets , is used for evaluation . our method is evaluated quantitatively on the new dataset , demonstrating the ability to perform efficient recognition of functional areas from arbitrary indoor scenes . we also demonstrate that our detection model can be generalized to novel indoor scenes by cross-validating it with images from two different datasets . story_separator_special_tag j. j. gibson 's concept of affordance , one of the central pillars of ecological psychology , is a truly remarkable idea that provides a concise theory of animal perception predicated on environmental .
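the curiosity-band strategy described in the traversability-affordance abstracts above lends itself to a compact illustration . the following python sketch is our illustration , not the authors' code ; the feature dimensions , threshold and labels are hypothetical stand-ins . it trains an svm on a small bootstrap set of interactions and then lets the robot execute only those candidate interactions whose features fall inside a band around the decision hyperplane , where the current classifier is least certain :

import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# bootstrapping phase : a small set of labeled interactions
# ( feature vectors , e.g . extracted from range images , with
#   1 = traversable , 0 = non-traversable ; synthetic here )
x_boot = rng.normal(size=(40, 5))
y_boot = (x_boot[:, 0] + 0.3 * x_boot[:, 1] > 0).astype(int)
svm = SVC(kernel="rbf").fit(x_boot, y_boot)

def in_curiosity_band(x, band=0.5):
    # interact only when the sample lies close to the decision
    # hyperplane , i.e . where the current model is uncertain
    return abs(svm.decision_function(x.reshape(1, -1))[0]) < band

# curiosity-driven phase : skip interactions the model already
# predicts confidently , saving wear and risk on the robot
candidates = rng.normal(size=(200, 5))
selected = [x for x in candidates if in_curiosity_band(x)]
print(f"executing {len(selected)} of {len(candidates)} candidate interactions")

the band width trades exploration cost against label informativeness ; shrinking it as the classifier stabilizes recovers a standard uncertainty-sampling active learner .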
the hamiltonian constraint remains the major unsolved problem in loop quantum gravity ( lqg ) . seven years ago a mathematically consistent candidate hamiltonian constraint was proposed , but there are still several unsettled questions which concern the algebra of commutators among smeared hamiltonian constraints and which must be faced in order to make progress . in this paper we propose a solution to this set of problems based on the so-called master constraint , which combines the smeared hamiltonian constraints for all smearing functions into a single constraint . if certain mathematical conditions , which still have to be proved , hold , then not only could the problems with the commutator algebra disappear , but chances are also good that one can control the solution space and the ( quantum ) dirac observables of lqg . even a decision on whether the theory has the correct classical limit and a connection with the path integral ( or spin foam ) formulation could be within reach . while these are exciting possibilities , we should warn the reader from the outset that , since the proposal is , to the best of our knowledge , completely new and has story_separator_special_tag in this work we will consider the concepts of partial and complete observables for canonical general relativity . these concepts provide a method to calculate dirac observables . the central result of this work is that one can compute dirac observables for general relativity by dealing with just one constraint . for this we have to introduce spatial diffeomorphism invariant hamiltonian constraints . it will turn out that these can be made abelian . furthermore the methods outlined here provide a connection between observables in the space-time picture , i.e . quantities invariant under space-time diffeomorphisms , and dirac observables in the canonical picture . story_separator_special_tag we introduce a general approximation scheme in order to calculate gauge invariant observables in the canonical formulation of general relativity . using this scheme we will show how the observables and the dynamics of field theories on a fixed background , or equivalently the observables of the linearized theory , can be understood as an approximation to the observables in full general relativity . gauge invariant corrections can be calculated up to an arbitrarily high order and we will explicitly calculate the first non-trivial correction . furthermore we will make a first investigation into the poisson algebra between observables corresponding to fields at different space-time points and consider the locality properties of the observables . story_separator_special_tag linear cosmological perturbation theory is pivotal to a theoretical understanding of current cosmological experimental data provided e.g . by cosmic microwave anisotropy probes . a key issue in that theory is to extract the gauge invariant degrees of freedom which allow unambiguous comparison between theory and experiment . when one goes beyond first ( linear ) order , the task of writing the einstein equations expanded to n-th order in terms of quantities that are gauge invariant up to terms of higher orders becomes highly non-trivial and cumbersome . this fact has prevented progress for instance on the issue of the stability of linear perturbation theory and is a subject of current debate in the literature . in this series of papers we circumvent these difficulties by passing to a manifestly gauge invariant framework .
in other words , we only perturb gauge invariant , i.e . measurable , quantities , rather than gauge variant ones . thus , gauge invariance is preserved non-perturbatively while we construct the perturbation theory for the equations of motion for the gauge invariant observables to all orders . in this first paper we develop the general framework which is based on a seminal paper story_separator_special_tag `` but we do not have quantum gravity . '' this phrase is often used when analysis of a physical problem enters the regime in which quantum gravity effects should be taken into account . in fact , there are several models of the gravitational field coupled to ( scalar ) fields for which the quantization procedure can be completed using loop quantum gravity techniques . the model we present in this paper consists of the gravitational field coupled to a scalar field . the result has similar structure to the loop quantum cosmology models , except that it involves all the local degrees of freedom of the gravitational field because no symmetry reduction has been performed at the classical level . story_separator_special_tag spin foam models are hoped to provide the dynamics of loop-quantum gravity . however , the most popular of these , the barrett-crane model , does not have a good boundary state space and there are indications that it fails to yield good low-energy n-point functions . we present an alternative dynamics that can be derived as a quantization of a regge discretization of euclidean general relativity , where second class constraints are imposed weakly . its state space matches the so ( 3 ) loop gravity one and it yields an so ( 4 ) -covariant vertex amplitude for euclidean loop gravity . story_separator_special_tag we extend the definition of the `` flipped '' loop-quantum-gravity vertex to the case of a finite immirzi parameter . we cover the euclidean as well as the lorentzian case . we show that the resulting dynamics is defined on a hilbert space isomorphic to the one of loop quantum gravity , and that the area operator has the same discrete spectrum as in loop quantum gravity . this includes the correct dependence on the immirzi parameter , and , remarkably , holds in the lorentzian case as well . the ad hoc flip of the symplectic structure that was initially required to derive the flipped vertex is no longer needed for finite immirzi parameter . these results establish a bridge between canonical loop quantum gravity and the spinfoam formalism in four dimensions . story_separator_special_tag the simplicial framework of engle-pereira-rovelli-livine spin-foam models is generalized to match the diffeomorphism invariant framework of loop quantum gravity . the simplicial spin-foams are generalized to arbitrary linear 2-cell spin-foams . the resulting framework admits all the spin-network states of loop quantum gravity , not only those defined by triangulations ( or cubulations ) . in particular the notion of embedded spin-foam we use allows one to consider knotting or linking spin-foam histories . also the main tools such as the vertex structure and the vertex amplitude are naturally generalized to the arbitrary valence case . the correspondence between all the su ( 2 ) intertwiners and the su ( 2 ) $ \times $ su ( 2 ) eprl intertwiners is proved to be 1-1 in the case of the barbero-immirzi parameter $ | \gamma | \ge 1 $ , unless the co-domain of the eprl map is trivial and the domain is non-trivial .
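the master constraint proposal described at the start of this group of abstracts admits a compact schematic statement ; the following latex rendering is our summary in standard conventions , not the paper 's exact notation :

$$ \mathsf{M} := \int_\sigma d^3x \, \frac{C(x)^2}{\sqrt{\det q(x)}} , \qquad \mathsf{M} = 0 \iff C(N) = 0 \;\; \forall N , \qquad \{ \mathsf{M} , \mathsf{M} \} = 0 , $$

so the infinitely many smeared constraints $ C(N) = \int_\sigma d^3x \, N(x) C(x) $ are traded for a single , spatially diffeomorphism invariant constraint whose poisson bracket with itself vanishes identically , removing the problematic structure functions from the commutator algebra .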
story_separator_special_tag do the su ( 2 ) intertwiners parametrize the space of the eprl solutions to the simplicity constraint ? what is a complete form of the partition function written in terms of this parametrization ? we prove that the eprl map is injective for the n-valent vertex in the case when it is a map from so ( 3 ) into so ( 3 ) x so ( 3 ) representations . we find , however , that the eprl map is not isometric . as a consequence , in order to be written in a su ( 2 ) amplitude form , the formula for the partition function has to be rederived . we do so and obtain a new , complete formula for the partition function . the result goes beyond the su ( 2 ) spin-foam model framework . story_separator_special_tag quantum gravity is expected to be necessary in order to understand situations in which classical general relativity breaks down . in particular in cosmology one has to deal with initial singularities , i.e. , the fact that the backward evolution of a classical spacetime inevitably comes to an end after a finite amount of proper time . this presents a breakdown of the classical picture and requires an extended theory for a meaningful description . since small length scales and high curvatures are involved , quantum effects must play a role . not only the singularity itself but also the surrounding spacetime is then modified . one particular theory is loop quantum cosmology , an application of loop quantum gravity to homogeneous systems , which removes classical singularities . its implications can be studied at different levels . the main effects are introduced into effective classical equations , which allow one to avoid the interpretational problems of quantum theory . they give rise to new kinds of early-universe phenomenology with applications to inflation and cyclic models . to resolve classical singularities and to understand the structure of geometry around them , the quantum description is necessary . classical evolution is story_separator_special_tag a fully consistent linear perturbation theory for cosmology is derived in the presence of quantum corrections as they are suggested by properties of inverse volume operators in loop quantum gravity . the underlying constraints present a consistent deformation of the classical system , which shows that the discreteness in loop quantum gravity can be implemented in effective equations without spoiling space-time covariance . nevertheless , nontrivial quantum corrections do arise in the constraint algebra . since correction terms must appear in tightly controlled forms to avoid anomalies , detailed insights for the correct implementation of constraint operators can be gained . the procedures of this article thus provide a clear link between fundamental quantum gravity and phenomenology . story_separator_special_tag some long-standing issues concerning the quantum nature of the big bang are resolved in the context of homogeneous isotropic models with a scalar field . specifically , the known results on the resolution of the big-bang singularity in loop quantum cosmology are significantly extended as follows : ( i ) the scalar field is shown to serve as an internal clock , thereby providing a detailed realization of the `` emergent time '' idea ; ( ii ) the physical hilbert space , dirac observables , and semiclassical states are constructed rigorously ; ( iii ) the hamiltonian constraint is solved numerically to show that the big bang is replaced by a big bounce .
thanks to the nonperturbative , background independent methods , and unlike in other approaches , the quantum evolution is deterministic across the deep planck regime . story_separator_special_tag in loop quantum cosmology , friedmann-lemaître-robertson-walker space-times arise as well-defined approximations to specific quantum geometries . we initiate the development of a quantum theory of test scalar fields on these quantum geometries . emphasis is on the new conceptual ingredients required in the transition from classical space-time backgrounds to quantum space-times . these include a `` relational time '' à la leibniz , the emergence of the hamiltonian operator of the test field from the quantum constraint equation , and ramifications of the quantum fluctuations of the background geometry on the resulting dynamics . the familiar quantum field theory on classical friedmann-lemaître-robertson-walker models arises as a well-defined reduction of this more fundamental theory . story_separator_special_tag several conceptual aspects of quantum gravity are studied on the example of the homogeneous isotropic lqc model . in particular : ( i ) the proper time of the co-moving observers is shown to be a quantum operator and a quantum spacetime metric tensor operator is derived . ( ii ) solutions of the quantum scalar constraint for two different choices of the lapse function are compared and contrasted . in particular it is shown that in the case of the model with massless scalar field and cosmological constant $ \lambda $ the physical hilbert spaces constructed for the two choices of lapse are the same for $ \lambda 0 $ . ( iii ) the mechanism of the singularity avoidance is analyzed via detailed studies of an energy density operator , whose essential spectrum was shown to be an interval $ [ 0 , \rho_c ] $ , where $ \rho_c \approx 0.41 \rho_{pl} $ . ( iv ) the relation between the kinematical and the physical quantum geometry is discussed at the level of relations between observables . story_separator_special_tag the goal of this article is to present an introduction to loop quantum gravity - a background independent , non-perturbative approach to the problem of unification of general relativity and quantum physics , based on a quantum theory of geometry . our presentation is pedagogical . thus , in addition to providing a bird 's eye view of the present status of the subject , the article should also serve as a vehicle to enter the field and explore it in detail . to aid non-experts , very little is assumed beyond elements of general relativity , gauge theories and quantum field theory . while the article is essentially self-contained , the emphasis is on communicating the underlying ideas and the significance of results rather than on presenting systematic derivations and detailed proofs . ( these can be found in the listed references . ) the subject can be approached in different ways . we have chosen one which is deeply rooted in well established physics and also has sufficient mathematical precision to ensure that there are no hidden infinities . in order to keep the article to a reasonable size , and to avoid overwhelming non-experts , we have had story_separator_special_tag this is an introduction to the by now fifteen-year-old research field of canonical quantum general relativity , sometimes called `` loop quantum gravity '' .
the term `` modern '' in the title refers to the fact that the quantum theory is based on formulating classical general relativity as a theory of connections rather than metrics , as compared to the original version due to arnowitt , deser and misner . canonical quantum general relativity is an attempt to define a mathematically rigorous , non-perturbative , background independent theory of lorentzian quantum gravity in four spacetime dimensions in the continuum . the approach is minimal in that one simply analyzes the logical consequences of combining the principles of general relativity with the principles of quantum mechanics . the requirement to preserve background independence has led to new , fascinating mathematical structures which one does not see in perturbative approaches , e.g . a fundamental discreteness of spacetime seems to be a prediction of the theory , providing first substantial evidence for a theory in which the gravitational field acts as a natural uv cut-off . an effort has been made to provide a self-contained exposition of a restricted amount story_separator_special_tag the problem of finding the quantum theory of the gravitational field , and thus understanding what is quantum spacetime , is still open . one of the most active of the current approaches is loop quantum gravity . loop quantum gravity is a mathematically well-defined , non-perturbative and background independent quantization of general relativity , with its conventional matter couplings . the research in loop quantum gravity forms today a vast area , ranging from mathematical foundations to physical applications . among the most significant results obtained are : ( i ) the computation of the physical spectra of geometrical quantities such as area and volume , which yields quantitative predictions on planck-scale physics . ( ii ) a derivation of the bekenstein-hawking black hole entropy formula . ( iii ) an intriguing physical picture of the microstructure of quantum physical space , characterized by a polymer-like planck scale discreteness . this discreteness emerges naturally from the quantum theory and provides a mathematically well-defined realization of wheeler 's intuition of a spacetime `` foam '' . long standing open problems within the approach ( lack of a scalar product , overcompleteness of the loop basis , implementation of reality conditions story_separator_special_tag a hamiltonian formulation of general relativity based on certain spinorial variables is introduced . these variables simplify the constraints of general relativity considerably and enable one to imbed the constraint surface in the phase space of einstein 's theory into that of yang-mills theory . the imbedding suggests new ways of attacking a number of problems in both classical and quantum gravity . some illustrative applications are discussed . story_separator_special_tag i suggest in this letter a new strategy to attack the problem of the reality conditions in the ashtekar approach to classical and quantum general relativity . by writing a modified hamiltonian constraint in the usual $ so ( 3 ) $ yang-mills phase space i show that it is possible to describe space-times with lorentzian signature without the introduction of complex variables . all the features of the ashtekar formalism related to the geometrical nature of the new variables are retained ; in particular , it is still possible , in principle , to use the loop variables approach in the passage to the quantum theory .
the key issue in the new formulation is how to deal with the more complicated hamiltonian constraint that must be used in order to avoid the introduction of complex fields . story_separator_special_tag we study the hamiltonian formulation of the general first order action of general relativity compatible with local lorentz invariance and background independence . the most general symplectic structure ( compatible with diffeomorphism invariance and local lorentz transformations ) is obtained by adding to the holst action the pontryagin , euler and nieh-yan invariants with independent coupling constants . we perform a detailed canonical analysis of this general formulation ( in the time gauge ) exploring the structure of the phase space in terms of connection variables . we explain the relationship of these topological terms , and the effect of large su ( 2 ) gauge transformations in quantum theories of gravity defined in terms of the ashtekar-barbero connection . story_separator_special_tag both real and complex connections have been used for canonical gravity : the complex connection has sl ( 2 , c ) as gauge group , while the real connection has su ( 2 ) as gauge group . we show that there is an arbitrary parameter $ \beta $ which enters in the definition of the real connection , in the poisson brackets , and therefore in the scale of the discrete spectra one finds for areas and volumes in the corresponding quantum theory . a value for $ \beta $ could be singled out in the quantum theory by the hamiltonian constraint , or by the rotation to the complex ashtekar connection . story_separator_special_tag the immirzi parameter is a constant appearing in the general-relativity action used as a starting point for the loop quantization of gravity . the parameter is commonly believed not to appear in the equations of motion and not to have any physical effect besides nonperturbative quantum gravity . we show that this is not true in general : in the presence of minimally coupled fermions , the parameter appears in the equations of motion : it determines the coupling constant of a four-fermion interaction . under some general assumptions , there is therefore a relation between the immirzi parameter and physical effects that are observable in principle , independently of nonperturbative quantum gravity . story_separator_special_tag we extend the recently developed kinematical framework for diffeomorphism invariant theories of connections for compact gauge groups to the case of a diffeomorphism invariant quantum field theory which includes besides connections also fermions and higgs fields . this framework is appropriate for coupling matter to quantum gravity . the presence of diffeomorphism invariance forces us to choose a representation which is a rather non-fock-like one : the elementary excitations of the connection are along open or closed strings while those of the fermions or higgs fields are at the end points of the string . nevertheless we are able to promote the classical reality conditions to quantum adjointness relations which in turn uniquely fix the gauge and diffeomorphism invariant probability measure that underlies the hilbert space . most of the fermionic part of this work is independent of the recent preprint by baez and krasnov and earlier work by rovelli and morales-técotl because we use new canonical fermionic variables , so-called grassmann-valued half-densities , which enable us to solve the difficult fermionic adjointness relations .
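the role of the parameter $ \beta $ discussed in the real-connection and immirzi-parameter abstracts above can be summarized in one line ; the following latex sketch uses our notation and standard conventions , not necessarily those of the papers :

$$ A_a^i = \Gamma_a^i + \beta K_a^i , $$

where $ \Gamma_a^i $ is the spin connection compatible with the triad and $ K_a^i $ the extrinsic curvature : the choice $ \beta = \pm i $ gives ashtekar 's complex self-dual connection , while any real $ \beta $ gives barbero 's su ( 2 ) connection . since $ \beta $ enters the symplectic structure , it rescales the discrete geometric spectra , e.g . the area spectrum picks up an overall factor of $ \beta $ .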
story_separator_special_tag hamiltonian dynamics is implemented in a simple non-perturbative framework of interacting loops . the use of the c-representation , which incorporates automatically the mandelstam identities , makes it possible to discuss the theory for general n . story_separator_special_tag we define a new representation for quantum general relativity , in which exact solutions of the quantum constraints may be obtained . the representation is constructed by means of a noncanonical graded poisson algebra of classical observables , defined in terms of ashtekar 's new variables . the observables in this algebra are nonlocal and involve parallel transport around loops in a three-manifold . the theory is quantized by constructing a linear representation of a deformation of this algebra . this representation is given in terms of an algebra of linear operators defined on a state space which consists of functionals of sets of loops in the three-manifold . the construction is general and can be applied also to yang-mills theories . the diffeomorphism constraint is defined in terms of a natural representation of the diffeomorphism group . the hamiltonian constraint , which contains the dynamics of quantum gravity , is constructed as a limit of a sequence of observables which incorporates a regularization prescription . we give the general solution of the diffeomorphism constraint in closed form . it is spanned by a countable basis which is in one-to-one correspondence with the diffeomorphism equivalence classes of multiple loops , which story_separator_special_tag holonomy algebras arise naturally in the classical description of yang-mills fields and gravity , and it has been suggested , at a heuristic level , that they may also play an important role in a nonperturbative treatment of the quantum theory . the aim of this paper is to provide a mathematical basis for this proposal . the quantum holonomy algebra is constructed , and , in the case of real connections , given the structure of a certain c * -algebra . a proper representation theory is then provided using the gel'fand spectral theory . a corollary of these general results is a precise formulation of the ` loop transform ' proposed by rovelli and smolin ( 1990 ) . several explicit representations of the holonomy algebra are constructed . the general theory developed here implies that the domain space of quantum states can always be taken to be the space of maximal ideals of the c * -algebra . the structure of this space is investigated and it is shown how observables labelled by ` strips ' arise naturally . story_separator_special_tag integral calculus on the space of gauge equivalent connections is developed . loops , knots , links and graphs feature prominently in this description . the framework is well-suited for quantization of diffeomorphism invariant theories of connections . the general setting is provided by the abelian c * algebra of functions on the quotient space of connections generated by wilson loops ( i.e. , by the traces of holonomies of connections around closed loops ) . the representation theory of this algebra leads to an interesting and powerful `` duality '' between gauge-equivalence classes of connections and certain equivalence classes of closed loops . in particular , regular measures on ( a suitable completion of ) the space of connections modulo gauge transformations are in 1-1 correspondence with certain functions of loops and diffeomorphism invariant measures correspond to ( generalized ) knot and link invariants .
by carrying out a non-linear extension of the theory of cylindrical measures on topological vector spaces , a faithful , diffeomorphism invariant measure is introduced . this measure can be used to define the hilbert space of quantum states in theories of connections . the wilson-loop functionals then serve as story_separator_special_tag integral calculus on the space of gauge equivalent connections is developed . by carrying out a non-linear generalization of the theory of cylindrical measures on topological vector spaces , a faithful , diffeomorphism invariant measure is introduced on a suitable completion of this space . the strip ( i.e . momentum ) operators are densely-defined in the resulting hilbert space and interact with the measure correctly story_separator_special_tag a general framework for integration over certain infinite dimensional spaces is first developed using projective limits of a projective family of compact hausdorff spaces . the procedure is then applied to gauge theories to carry out integration over the non-linear , infinite dimensional spaces of connections modulo gauge transformations . this method of evaluating functional integrals can be used either in the euclidean path integral approach or the lorentzian canonical approach . a number of measures discussed are diffeomorphism invariant and therefore of interest to ( the connection dynamics version of ) quantum general relativity . the account is pedagogical ; in particular , prior knowledge of projective techniques is not assumed . story_separator_special_tag in a quantum mechanical treatment of gauge theories ( including general relativity ) , one is led to consider a certain completion , $ \overline{a/g} $ , of the space $ a/g $ of gauge equivalent connections . this space serves as the quantum configuration space , or , as the space of all euclidean histories over which one must integrate in the quantum theory . $ \overline{a/g} $ is a very large space and serves as a `` universal home '' for measures in theories in which the wilson loop observables are well-defined . in this paper , $ \overline{a/g} $ is considered as the projective limit of a projective family of compact hausdorff manifolds , labelled by graphs ( which can be regarded as `` floating lattices '' in the physics terminology ) . using this characterization , differential geometry is developed through algebraic methods . in particular , we are able to introduce the following notions on $ \overline{a/g} $ : differential forms , exterior derivatives , volume forms , vector fields and lie brackets between them , divergence of a vector field with respect to a volume form , laplacians and associated heat kernels and heat story_separator_special_tag a new functional calculus , developed recently for a fully non-perturbative treatment of quantum gravity , is used to begin a systematic construction of a quantum theory of geometry . regulated operators corresponding to areas of 2-surfaces are introduced and shown to be self-adjoint on the underlying ( kinematical ) hilbert space of states . it is shown that their spectra are purely discrete , indicating that the underlying quantum geometry is far from what the continuum picture might suggest . indeed , the fundamental excitations of quantum geometry are 1-dimensional , rather like polymers , and the 3-dimensional continuum geometry emerges only on coarse graining . the full hilbert space admits an orthonormal decomposition into finite dimensional sub-spaces which can be interpreted as the spaces of states of spin systems .
using this property , the complete spectrum of the area operators is evaluated . the general framework constructed here will be used in a subsequent paper to discuss 3-dimensional geometric operators , e.g. , the ones corresponding to volumes of regions . story_separator_special_tag the basic framework for a systematic construction of a quantum theory of riemannian geometry was introduced recently . the quantum versions of riemannian structures , such as triad and area operators , exhibit a non-commutativity . at first sight , this feature is surprising because it implies that the framework does not admit a triad representation . to better understand this property and to reconcile it with intuition , we analyze its origin in detail . in particular , a careful study of the underlying phase space is made and the feature is traced back to the classical theory ; there is no anomaly associated with quantization . we also indicate why the uncertainties associated with this non-commutativity become negligible in the semi-classical regime . story_separator_special_tag loop quantum gravity is an approach to quantum gravity that starts from the hamiltonian formulation in terms of a connection and its canonical conjugate . quantization proceeds in the spirit of dirac : first one defines an algebra of basic kinematical observables and represents it through operators on a suitable hilbert space . in a second step , one implements the constraints . the main result of the paper concerns the representation theory of the kinematical algebra : we show that there is only one cyclic representation invariant under spatial diffeomorphisms . while this result is particularly important for loop quantum gravity , we are rather general : the precise definition of the abstract * -algebra of the basic kinematical observables we give could be used for any theory in which the configuration variable is a connection with a compact structure group . the variables are constructed from the holonomy map and from the fluxes of the momentum conjugate to the connection . the uniqueness result is relevant for any such theory invariant under spatial diffeomorphisms or being a part of a diffeomorphism invariant theory . story_separator_special_tag the weyl algebra a of continuous functions and exponentiated fluxes in quantum geometry , introduced by ashtekar , lewandowski and others , is studied . it is shown that , in the piecewise analytic category , every regular representation of a having a cyclic and diffeomorphism invariant vector is already unitarily equivalent to the fundamental representation . additional assumptions concern the dimension of the underlying analytic manifold ( at least three ) , the finite wide triangulizability of surfaces in it to be used for the fluxes and the naturality of the action of diffeomorphisms , but neither any domain properties of the represented weyl operators nor the requirement that the diffeomorphisms act by pull-backs . for this , the general behaviour of c * -algebras generated by continuous functions and pull-backs of homeomorphisms , as well as the properties of stratified analytic diffeomorphisms , are studied . additionally , the paper also includes a short and direct proof of the irreducibility of a . story_separator_special_tag we study the operator that corresponds to the measurement of volume , in non-perturbative quantum gravity , and we compute its spectrum .
the operator is constructed in the loop representation , via a regularization procedure ; it is finite , background independent , and diffeomorphism-invariant , and therefore well defined on the space of diffeomorphism invariant states ( knot states ) . we find that the spectrum of the volume of any physical region is discrete . a family of eigenstates is in one-to-one correspondence with the spin networks , which were introduced by penrose in a different context . we compute the corresponding component of the spectrum , and exhibit the eigenvalues explicitly . the other eigenstates are related to a generalization of the spin networks , and their eigenvalues can be computed by diagonalizing finite dimensional matrices . furthermore , we show that the eigenstates of the volume also diagonalize the area operator . we argue that the spectra of volume and area determined here can be considered as predictions of the loop-representation formulation of quantum gravity on the outcomes of ( hypothetical ) planck-scale sensitive measurements of the geometry of space . story_separator_special_tag the aim of this letter is to indicate the differences between the rovelli-smolin quantum volume operator and other quantum volume operators existing in the literature . the formulas for the operators are written in a unifying notation of the graph projective framework . it is clarified whose results apply to which operators and why . story_separator_special_tag a functional calculus on the space of ( generalized ) connections was recently introduced without any reference to a background metric . it is used to continue the exploration of the quantum riemannian geometry . operators corresponding to the volume of three-dimensional regions are regularized rigorously . it is shown that there are two natural regularization schemes , each of which leads to a well-defined operator . both operators can be completely specified by giving their action on states labelled by graphs . the two final results are closely related but differ from one another in that one of the operators is sensitive to the differential structure of graphs at their vertices while the second is sensitive only to the topological characteristics . ( the second operator was first introduced by rovelli and smolin and de pietri and rovelli using a somewhat different framework . ) the difference between the two operators can be attributed directly to the standard quantization ambiguity . underlying assumptions and subtleties of regularization procedures are discussed in detail in both cases because volume operators play an important role in the current discussions of quantum dynamics . story_separator_special_tag one of the celebrated results of loop quantum gravity ( lqg ) is the discreteness of the spectrum of geometrical operators such as the length , area and volume operators . this is an indication that planck scale geometry in lqg is discontinuous rather than smooth . however , there is no rigorous proof thereof at present , because the aforementioned operators are not gauge invariant : they do not commute with the quantum constraints . the relational formalism in the incarnation of rovelli 's partial and complete observables provides a possible mechanism for turning a non-gauge-invariant operator into a gauge invariant one . in this paper we investigate whether the spectrum of such a physical , that is gauge invariant , observable can be predicted from the spectrum of the corresponding gauge variant observables .
we will not do this in full lqg but rather consider much simpler examples where field theoretical complications are absent . we find , even in those simpler cases , that kinematical discreteness of the spectrum does not necessarily survive at the gauge invariant level . whether or not this happens depends crucially on how the gauge invariant completion is performed . story_separator_special_tag i argue that the prediction of physical discreteness at the planck scale in loop gravity is a reasonable conclusion that derives from a sensible ensemble of hypotheses , in spite of some contrary arguments considered in an interesting recent paper by dittrich and thiemann . the counter-example presented by dittrich and thiemann illustrates a pathology which does not seem to be present in gravity . i also point out a common confusion between two distinct frameworks for the interpretation of general-covariant quantum theory , and observe that within one of these , the derivation of physical discreteness is immediate , and not in contradiction with gauge invariance . story_separator_special_tag the variables introduced by ashtekar in general relativity have several useful and nice properties . we point out a further feature of these variables : the so ( 3 ) -invariant norm of the variable conjugate to ashtekar 's connection , namely , the inverse densitized triad or the analogue of the electric field , is the area two-form ; that is , it is the two-form that gives physical area to any surface . the new variables naturally determine areas in the same way in which the metric tensor naturally determines lengths . story_separator_special_tag we derive a closed formula for the matrix elements of the volume operator for canonical lorentzian quantum gravity in four spacetime dimensions in the continuum in a spin-network basis . we also display a new technique of regularization which is state dependent , but we are forced to it in order to maintain diffeomorphism covariance , and in that sense it is natural . we arrive naturally at the expression for the volume operator as defined by ashtekar and lewandowski up to a state independent factor . story_separator_special_tag the volume operator plays a crucial role in the definition of the quantum dynamics of loop quantum gravity ( lqg ) . efficient calculations for dynamical problems of lqg can therefore be performed only if one has sufficient control over the volume spectrum . while closed formulas for the matrix elements are currently available in the literature , these are complicated polynomials in 6j symbols , which in turn are given in terms of racah 's formula , which is too complicated to allow even numerical calculations for the semiclassically important regime of large spins . hence , so far the spectrum could not be accessed even numerically . in this article we demonstrate that by means of the elliott-biedenharn identity one can get rid of all the 6j symbols for any valence of the gauge invariant vertex , thus immensely reducing the computational effort . we use the resulting compact formula to study numerically the spectrum of the gauge invariant 4-vertex . the techniques derived in this paper could be of use also for the analysis of spin-spin interaction hamiltonians of many-particle problems in atomic and nuclear physics . story_separator_special_tag we analyze combinatorial structures which play a central role in determining spectral properties of the volume operator ( ashtekar a and lewandowski j 1998 adv . theor . math . phys .
1 388 ) in loop quantum gravity ( lqg ) . these structures encode geometrical information of the embedding of arbitrary valence vertices of a graph in three-dimensional riemannian space and can be represented by sign strings containing relative orientations of embedded edges . we demonstrate that these signature factors are a special representation of the general mathematical concept of an oriented matroid ( ziegler g m 1998 electron . j . comb . ; björner a et al 1999 oriented matroids ( cambridge : cambridge university press ) ) . moreover , we show that oriented matroids can also be used to describe the topology ( connectedness ) of directed graphs . hence , the mathematical methods developed for oriented matroids can be applied to the difficult combinatorics of embedded graphs underlying the construction of lqg . as a first application we revisit the analysis of brunnemann and rideout ( 2008 class . quantum grav . 25 065001 and 065002 ) , and find that story_separator_special_tag we describe preliminary results of a detailed numerical analysis of the volume operator as formulated by ashtekar and lewandowski . due to a simplified explicit expression for its matrix elements , it is possible for the first time to treat generic vertices of valence greater than four . it is found that the vertex geometry characterizes the volume spectrum . story_separator_special_tag we analyze the spectral properties of the volume operator of ashtekar and lewandowski in loop quantum gravity , which is the quantum analogue of the classical volume expression for regions in three dimensional riemannian space . our analysis considers for the first time generic graph vertices of valence greater than four . here we find that the geometry of the underlying vertex characterizes the spectral properties of the volume operator , in particular the presence of a ` volume gap ' ( a smallest non-zero eigenvalue in the spectrum ) is found to depend on the vertex embedding . we compute the set of all non-spatially diffeomorphic non-coplanar vertex embeddings for vertices of valence 5-7 , and argue that these sets can be used to label spatial diffeomorphism invariant states . we observe how gauge invariance connects vertex geometry and representation properties of the underlying gauge group in a natural way . analytical results on the spectrum of 4-valent vertices are included , for which the presence of a volume gap is proved . this paper presents our main results ; details are provided by a companion paper arxiv:0706.0382v1 . story_separator_special_tag given a real-analytic manifold m , a compact connected lie group g and a principal g-bundle p - > m , there is a canonical ` generalized measure ' on the space a/g of smooth connections on p modulo gauge transformations . this allows one to define a hilbert space l^2 ( a/g ) . here we construct a set of vectors spanning l^2 ( a/g ) . these vectors are described in terms of ` spin networks ' : graphs phi embedded in m , with oriented edges labelled by irreducible unitary representations of g , and with vertices labelled by intertwining operators from the tensor product of representations labelling the incoming edges to the tensor product of representations labelling the outgoing edges . we also describe an orthonormal basis of spin networks associated to any fixed graph phi . we conclude with a discussion of spin networks in the loop representation of quantum gravity , and give a category-theoretic interpretation of the spin network states .
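to make the vertex combinatorics above concrete , here is a minimal python sketch ( names and defaults are illustrative , not taken from any of the papers cited here ) of two ingredients that recur in these abstracts : the condition for an su ( 2 ) intertwiner to exist at a trivalent spin-network vertex , and the standard lqg area eigenvalue for a surface punctured by edges carrying spins j_i , a = 8 pi gamma l_p^2 sum_i sqrt ( j_i ( j_i + 1 ) ) . the default barbero-immirzi parameter gamma = 0.274 is only the value quoted later in this section ; it is fixed by black hole entropy arguments , not by the area spectrum itself .

from fractions import Fraction
from math import pi, sqrt

def admissible(j1, j2, j3):
    # an su(2) intertwiner exists at a trivalent vertex iff the three spins
    # satisfy the triangle inequality and add up to an integer
    j1, j2, j3 = Fraction(j1), Fraction(j2), Fraction(j3)
    return abs(j1 - j2) <= j3 <= j1 + j2 and (j1 + j2 + j3).denominator == 1

def area_eigenvalue(spins, gamma=0.274):
    # standard lqg area spectrum in units of the planck length squared:
    # a = 8 * pi * gamma * sum over punctures of sqrt(j * (j + 1))
    return 8.0 * pi * gamma * sum(sqrt(j * (j + 1)) for j in spins)

print(admissible(Fraction(1, 2), Fraction(1, 2), 1))  # True
print(admissible(Fraction(1, 2), 1, 2))               # False: triangle inequality fails
print(area_eigenvalue([0.5, 0.5, 1.0]))               # area for three punctures

the same admissibility test , applied pairwise in a recoupling basis , is what underlies the higher-valent vertex computations discussed above .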
story_separator_special_tag we introduce a new basis on the state space of non-perturbative quantum gravity . the states of this basis are linearly independent , are well defined in both the loop representation and the connection representation , and are labeled by a generalization of penrose 's spin networks . the new basis fully reduces the spinor identities ( su ( 2 ) mandelstam identities ) and simplifies calculations in non-perturbative quantum gravity . in particular , it allows a simple expression for the exact solutions of the hamiltonian constraint ( wheeler-dewitt equation ) that have been discovered in the loop representation . since the states in this basis diagonalize operators that represent the three geometry of space , such as the area and volumes of arbitrary surfaces and regions , these states provide a discrete picture of quantum geometry at the planck scale . story_separator_special_tag quantization of diffeomorphism invariant theories of connections is studied . a solution of the diffeomorphism constraints is found . the space of solutions is equipped with an inner product that is shown to satisfy the physical reality conditions . this provides , in particular , a quantization of the husain-kuchar model . the main results also pave the way to quantization of other diffeomorphism invariant theories such as general relativity . in the riemannian case ( i.e. , signature ++++ ) , the approach appears to contain all the necessary ingredients already . in the lorentzian case , it will have to be combined in an appropriate fashion with a coherent state transform to incorporate complex connections . story_separator_special_tag gravitons should have momentum just as photons do ; and since graviton momentum would cause compression rather than elongation of spacetime outside of matter ; it does not appear that gravitons are compatible with schwarzschild 's spacetime curvature . also , since energy is proportional to mass , and mass is proportional to gravity ; the energy of matter is proportional to gravity . the energy of matter could thus contract space within matter ; and because of the inter-connectedness of space , cause the elongation of space outside of matter . and this would be compatible with schwarzschild spacetime curvature . since gravity could be initiated within matter by the energy of mass , transmitted to space outside of matter by the inter-connectedness of space ; and also transmitted through space by the same inter-connectedness of space ; and since spatial and relativistic gravities can apparently be produced without the aid of gravitons ; massive gravity could also be produced without gravitons as well . gravity divided by an infinite number of segments would result in zero expression of gravity , because it could not curve spacetime . so spatial segments must have a minimum size , which is story_separator_special_tag an anomaly-free operator corresponding to the wheeler-dewitt constraint of lorentzian , four-dimensional , canonical , non-perturbative vacuum gravity is constructed in the continuum . this operator is entirely free of factor ordering singularities and can be defined in symmetric and non-symmetric form . we work in the real connection representation and obtain a well-defined quantum theory . we compute the complete solution to the quantum einstein equations for the non-symmetric version of the operator and a physical inner product thereon .
the action of the wheeler-dewitt constraint on spin-network states is by annihilating , creating and rerouting the quanta of angular momentum associated with the edges of the underlying graph while the adm-energy is essentially diagonalized by the spin-network states . we argue that the spin-network representation is the `` non-linear fock representation '' of quantum gravity , thus justifying the term `` quantum spin dynamics ( qsd ) '' . story_separator_special_tag we continue here the analysis of the previous paper of the wheeler-dewitt constraint operator for four-dimensional , lorentzian , non-perturbative , canonical vacuum quantum gravity in the continuum . in this paper we derive the complete kernel , as well as a physical inner product on it , for a non-symmetric version of the wheeler-dewitt operator . we then define a symmetric version of the wheeler-dewitt operator . for the euclidean wheeler-dewitt operator as well as for the generator of the wick transform from the euclidean to the lorentzian regime we prove the existence of self-adjoint extensions , and based on these we present a method of proof of self-adjoint extensions for the lorentzian operator . finally we comment on the status of the wick rotation transform in the light of the present results . story_separator_special_tag this paper deals with several technical issues of non-perturbative four-dimensional lorentzian canonical quantum gravity in the continuum that arose in connection with the recently constructed wheeler-dewitt quantum constraint operator . 1 ) the wheeler-dewitt constraint mixes the previously discussed diffeomorphism superselection sectors which thus become spurious , 2 ) thus , the inner product for diffeomorphism invariant states can be fixed by requiring that diffeomorphism group averaging is a partial isometry , 3 ) the established non-anomalous constraint algebra is clarified by computing commutators of duals of constraint operators , 4 ) the full classical constraint algebra is faithfully implemented on the diffeomorphism invariant hilbert space in an appropriate sense , 5 ) the hilbert space of diffeomorphism invariant states can be made separable if a natural new superselection principle is satisfied , 6 ) we propose a natural physical scalar product for quantum general relativity by extending the group average approach to the case of non-self-adjoint constraint operators like the wheeler-dewitt constraint and 7 ) equipped with this inner product , the construction of physical observables is straightforward . story_separator_special_tag the quantization of lorentzian or euclidean 2 + 1 gravity by canonical methods is a well studied problem . however , the constraints of 2 + 1 gravity are those of a topological field theory and therefore bear little resemblance to the corresponding lorentzian 3 + 1 constraints . in this paper we canonically quantize euclidean 2 + 1 gravity for an arbitrary genus of the spacelike hypersurface with new , classically equivalent constraints that maximally probe the lorentzian 3 + 1 situation . we choose the signature to be euclidean because this implies that the gauge group is , as in the 3 + 1 case , su ( 2 ) rather than su ( 1 , 1 ) . we employ , and carry out to full completion , the new quantization method introduced in preceding papers of this series which resulted in a finite 3 + 1 lorentzian quantum field theory for gravity . the space of solutions to all constraints turns out to be much larger than that obtained by traditional approaches ; however , it is fully included .
thus , by a suitable restriction of the solution space , we can recover all former results which gives confidence in story_separator_special_tag it is an old speculation in physics that , once the gravitational field is successfully quantized , it should serve as the natural regulator of infrared and ultraviolet singularities that plague quantum field theories in a background metric . we demonstrate that this idea is implemented in a precise sense within the framework of four-dimensional canonical lorentzian quantum gravity in the continuum . specifically , we show that the hamiltonian of the standard model supports a representation in which finite linear combinations of wilson loop functionals around closed loops , as well as along open lines with fermionic and higgs field insertions at the end points , are densely defined operators . this hamiltonian , surprisingly , does not suffer from any singularities ; it is completely finite without renormalization . this property is shared by string theory . in contrast to string theory , however , we are dealing with a particular phase of the standard model coupled to gravity which is entirely non-perturbatively defined and second quantized . story_separator_special_tag this work introduces a new space $ \\mathcal { t } ' _ * $ of ` vertex-smooth ' states for use in the loop approach to quantum gravity . such states provide a natural domain for euclidean hamiltonian constraint operators of the type introduced by thiemann ( and using certain ideas of rovelli and smolin ) . in particular , such operators map $ \\mathcal { t } ' _ * $ into itself , and so are actual operators in this space . their commutator can be computed on $ \\mathcal { t } ' _ * $ and compared with the classical hypersurface deformation algebra . although the classical poisson bracket of hamiltonian constraints yields an inverse metric times an infinitesimal diffeomorphism generator , and despite the fact that the diffeomorphism generator has a well-defined non-trivial action on $ \\mathcal { t } ' _ * $ , the commutator of quantum constraints vanishes identically for a large class of proposals . story_separator_special_tag we point out several features of the quantum hamiltonian constraints recently introduced by thiemann for euclidean gravity . in particular we discuss the issue of the constraint algebra and of the quantum realization of the object $ q^ { ab } v_b $ , which is classically the poisson bracket of two hamiltonians . story_separator_special_tag some typical quantization ambiguities of quantum geometry are studied within isotropic models . since this allows explicit computations of operators and their spectra , one can investigate the effects of ambiguities in a quantitative manner . it is shown that those ambiguities do not affect the fate of the classical singularity , demonstrating that the absence of a singularity in loop quantum cosmology is a robust implication of the general quantization scheme . the calculations also allow conclusions about modified operators in the full theory . in particular , using holonomies in a non-fundamental representation of su ( 2 ) to quantize connection components turns out to lead to significant corrections to classical behavior at macroscopic volume for large values of the spin of the chosen representation . story_separator_special_tag this is the second paper in our series of five in which we test the master constraint programme for solving the hamiltonian constraint in loop quantum gravity .
in this work we begin with the simplest examples : finite dimensional models with a finite number of first or second class constraints , abelian or non-abelian , with or without structure functions . story_separator_special_tag this is the fourth paper in our series of five in which we test the master constraint programme for solving the hamiltonian constraint in loop quantum gravity . we now move on to free field theories with constraints , namely maxwell theory and linearized gravity . since the master constraint involves squares of constraint operator valued distributions , one has to be very careful in doing so , and we will see that the full flexibility of the master constraint programme must be exploited in order to arrive at sensible results . story_separator_special_tag this is the fifth and final paper in our series of five in which we test the master constraint programme for solving the hamiltonian constraint in loop quantum gravity . here we consider interacting quantum field theories , specifically the non-abelian gauss constraints of einstein-yang-mills theory and 2+1 gravity . interestingly , while yang-mills theory in 4d is not yet rigorously defined as an ordinary ( wightman ) quantum field theory on minkowski space , in background independent quantum field theories such as loop quantum gravity ( lqg ) this might become possible by working in a new , background independent representation . story_separator_special_tag we derive a spacetime formulation of quantum general relativity from ( hamiltonian ) loop quantum gravity . in particular , we study the quantum propagator that evolves the three-geometry in proper time . we show that the perturbation expansion of this operator is finite and computable order by order . by giving a graphical representation in the manner of feynman of this expansion , we find that the theory can be expressed as a sum over topologically inequivalent ( branched , colored ) two-dimensional ( 2d ) surfaces in 4d . the contribution of one surface to the sum is given by the product of one factor per branching point of the surface . therefore branching points play the role of elementary vertices of the theory . their value is determined by the matrix elements of the hamiltonian constraint , which are known . the formulation we obtain can be viewed as a continuum version of reisenberger 's simplicial quantum gravity . also , it has the same structure as the ooguri-crane-yetter 4d topological field theory , with a few key differences that illuminate the relation between quantum gravity and topological quantum field theory . finally , we suggest that story_separator_special_tag a covariant spin-foam formulation of quantum gravity has been recently developed , characterized by a kinematics which appears to match well the one of canonical loop quantum gravity . in particular , the geometrical observable giving the area of a surface has been shown to be the same as the one in loop quantum gravity . here we discuss the volume observable . we derive the volume operator in the covariant theory , and show that it matches the one of loop quantum gravity , as does the area . we also reconsider the implementation of the constraints that define the model : we derive in a simple way the boundary hilbert space of the theory from a suitable form of the classical constraints , and show directly that all constraints vanish weakly on this space . story_separator_special_tag we introduce a new regularization for thiemann 's hamiltonian constraint .
the resulting constraint can generate the 1-4 pachner moves and is therefore more compatible with the dynamics defined by the spinfoam formalism . we calculate its matrix elements and observe the appearance of the 15j wigner symbol in these . story_separator_special_tag the asymptotics of some spin foam amplitudes for a quantum 4-simplex is known to display rapid oscillations whose frequency is the regge action . in this note , we reformulate this result through a difference equation , asymptotically satisfied by these models , and whose semi-classical solutions are precisely the sine and the cosine of the regge action . this equation is then interpreted as coming from the canonical quantization of a simple constraint in regge calculus . this suggests lifting and generalizing this constraint to the phase space of loop quantum gravity parametrized by twisted geometries . the result is a reformulation of the flat model for topological bf theory from the hamiltonian perspective . the wheeler-dewitt equation in the spin network basis gives difference equations which are exactly recursion relations on the 15j-symbol . moreover , the semiclassical limit is investigated using coherent states , and produces the expected results . it mimics the classical constraint with quantized areas , and for regge geometries it reduces to the semi-classical equation which has been introduced in the beginning . story_separator_special_tag we review the present status of black hole thermodynamics . our review includes discussion of classical black hole thermodynamics , hawking radiation from black holes , the generalized second law , and the issue of entropy bounds . a brief survey is also given of approaches to the calculation of black hole entropy . we conclude with a discussion of some unresolved open issues . story_separator_special_tag quantum black holes have been studied extensively in quantum gravity and string theory , using various semiclassical or background dependent approaches . we explore the possibility of studying black holes in the full non-perturbative quantum theory , without resorting to semiclassical considerations , and in the context of loop quantum gravity . we propose a definition of a quantum black hole as the collection of the quantum degrees of freedom that do not influence observables at infinity . from this definition , it follows that for an observer at infinity a black hole is described by an su ( 2 ) intertwining operator . the dimension of the hilbert space of such intertwiners grows exponentially with the horizon area . these considerations shed some light on the physical nature of the microstates contributing to the black hole entropy . in particular , it can be seen that the microstates being counted for the entropy have the interpretation of describing different horizon shapes . the space of black hole microstates described here is related to the one arrived at recently by engle , noui and perez , and some time ago by smolin , but obtained here directly within the full quantum story_separator_special_tag using the earlier developed classical hamiltonian framework as the point of departure , we carry out a non-perturbative quantization of the sector of general relativity , coupled to matter , admitting non-rotating isolated horizons as inner boundaries . the emphasis is on the quantum geometry of the horizon . polymer excitations of the bulk quantum geometry pierce the horizon , endowing it with area .
the intrinsic geometry of the horizon is then described by the quantum chern-simons theory of a u ( 1 ) connection on a punctured 2-sphere , the horizon . subtle mathematical features of the quantum chern-simons theory turn out to be important for the existence of a coherent quantum theory of the horizon geometry . heuristically , the intrinsic geometry is flat everywhere except at the punctures . the distributional curvature of the u ( 1 ) connection at the punctures gives rise to quantized deficit angles which account for the overall curvature . for macroscopic black holes , the logarithm of the number of these horizon microstates is proportional to the area , irrespective of the values of ( non-gravitational ) charges . thus , the black hole entropy can be accounted for entirely by story_separator_special_tag a `` black hole sector '' of nonperturbative canonical quantum gravity is introduced . the quantum black hole degrees of freedom are shown to be described by a chern-simons field theory on the horizon . it is shown that the entropy of a large nonrotating black hole is proportional to its horizon area . the constant of proportionality depends upon the immirzi parameter , which fixes the spectrum of the area operator in loop quantum gravity ; an appropriate choice of this parameter gives the bekenstein-hawking formula $ s = a / 4 \\ell_p^2 $ . with the same choice of the immirzi parameter , this result also holds for black holes carrying electric or dilatonic charge , which are not necessarily near extremal . story_separator_special_tag quantum geometry ( the modern loop quantum gravity using graphs and spin-networks instead of the loops ) provides microscopic degrees of freedom that account for the black-hole entropy . however , the procedure for state counting used in the literature contains an error and the number of the relevant horizon states is underestimated . in our paper a correct method of counting is presented . our results lead to a revision of the literature of the subject . it turns out that the contribution of spins greater than 1/2 to the entropy is not negligible . hence , the value of the barbero-immirzi parameter involved in the spectra of all the geometric and physical operators in this theory is different from that previously derived . also , the conjectured relation between quantum geometry and the black hole quasi-normal modes should be reconsidered . story_separator_special_tag we calculate the black hole entropy in loop quantum gravity as a function of the horizon area and provide the exact formula for the leading and sub-leading terms . by comparison with the bekenstein-hawking formula we uniquely fix the value of the 'quantum of area ' in the theory . story_separator_special_tag we derive an exact formula for the dimensionality of the hilbert space of the boundary states of su ( 2 ) chern-simons theory , which , according to the recent work of ashtekar et al , leads to the bekenstein-hawking entropy of a four dimensional schwarzschild black hole . our result stems from the relation between the ( boundary ) hilbert space of the chern-simons theory and the space of conformal blocks of the wess-zumino model on the boundary 2-sphere .
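several of the entropy computations in this part of the section reduce , for large chern-simons level , to counting su ( 2 ) intertwiners compatible with the spins labeling the horizon punctures . the following python sketch performs that count by iterated clebsch-gordan decomposition ; it ignores the finite-level ( quantum group ) truncation used in the exact chern-simons formula above , so it gives only the large-level limit , and all names are illustrative .

from collections import Counter

def fuse(spectrum, j):
    # clebsch-gordan rule: spin k tensor spin j decomposes into
    # total spins |k - j|, |k - j| + 1, ..., k + j, each once
    out = Counter()
    for k, mult in spectrum.items():
        s = abs(k - j)
        while s <= k + j:
            out[s] += mult
            s += 1
    return out

def intertwiner_dimension(spins):
    # multiplicity of total spin 0 in the tensor product of the
    # representations with the given (half-)integer spins
    spectrum = Counter({0.0: 1})
    for j in spins:
        spectrum = fuse(spectrum, float(j))
    return spectrum[0.0]

# four spin-1/2 punctures admit exactly two invariant states
print(intertwiner_dimension([0.5, 0.5, 0.5, 0.5]))  # 2

the microcanonical entropy at fixed horizon area is then the logarithm of the number of puncture configurations weighted by these dimensions , which is the quantity the counting papers below estimate .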
story_separator_special_tag the exact formula derived by us earlier for the entropy of a four dimensional nonrotating black hole within the quantum geometry formulation of the event horizon in terms of boundary states of a three dimensional chern-simons theory is reexamined for large horizon areas . in addition to the semiclassical bekenstein-hawking contribution proportional to the area obtained earlier , we find a contribution proportional to the logarithm of the area together with subleading corrections that constitute a series in inverse powers of the area . story_separator_special_tag black holes ( bh 's ) in equilibrium can be defined locally in terms of the so-called isolated horizon boundary condition given on a null surface representing the event horizon . we show that this boundary condition can be treated in a manifestly su ( 2 ) invariant manner . upon quantization , state counting is expressed in terms of the dimension of chern-simons hilbert spaces on a sphere with punctures . remarkably , when considering an ensemble of fixed horizon area a ( h ) , the counting can be mapped to simply counting the number of su ( 2 ) intertwiners compatible with the spins labeling the punctures . the resulting bh entropy is proportional to a ( h ) with logarithmic corrections $ \\delta s = - \\frac { 3 } { 2 } \\log a ( h ) $ . our treatment from first principles settles previous controversies concerning the counting of states . story_separator_special_tag a detailed analysis of the spherically symmetric isolated horizon system is performed in terms of the connection formulation of general relativity . the system is shown to admit a manifestly su ( 2 ) invariant formulation where the ( effective ) horizon degrees of freedom are described by an su ( 2 ) chern-simons theory . this leads to a more transparent description of the quantum theory in the context of loop quantum gravity and modifications of the form of the horizon entropy . story_separator_special_tag loop gravity provides a microscopic derivation of black hole entropy . in this paper , i show that the microstates counted admit a semiclassical description in terms of shapes of a tessellated horizon . the counting of microstates and the computation of the entropy can be done via a mapping to an equivalent statistical mechanical problem : the counting of conformations of a closed polymer chain . this correspondence suggests a number of intriguing relations between the thermodynamics of black holes and the physics of polymers . story_separator_special_tag equilibrium states of black holes can be modelled by isolated horizons . if the intrinsic geometry is spherical , they are called type i while if it is axi-symmetric , they are called type ii . the detailed theory of geometry of quantum type i horizons and the calculation of their entropy can be generalized to type ii , thereby including arbitrary distortions and rotations . the leading term in entropy of large horizons is again given by 1/4th of the horizon area for the same value of the barbero-immirzi parameter as in the type i case . ideas and constructions underlying this extension are summarized . story_separator_special_tag isolated horizons model equilibrium states of classical black holes . a detailed quantization , starting from a classical phase space restricted to spherically symmetric horizons , exists in the literature and has since been extended to axisymmetry . this paper extends the quantum theory to horizons of arbitrary shape .
surprisingly , the hilbert space obtained by quantizing the full phase space of all generic horizons with a fixed area is identical to that originally found in spherical symmetry . the entropy of a large horizon remains one quarter its area , with the barbero-immirzi parameter retaining its value from symmetric analyses . these results suggest a reinterpretation of the intrinsic quantum geometry of the horizon surface . story_separator_special_tag counting of microscopic states of black holes is performed within the framework of loop quantum gravity . this is the first calculation of the pure horizon states using statistical methods , which reveals the possibility of additional states missed in the earlier calculations , leading to an increase of entropy . also for the first time a microcanonical temperature is introduced within the framework . story_separator_special_tag we show that , for space-times with inner boundaries , there exists a natural area operator different from the standard one used in loop quantum gravity . this new flux-area operator has equidistant eigenvalues . we discuss the consequences of substituting the standard area operator in the ashtekar-baez-corichi-krasnov definition of black hole entropy by the new one . our choice simplifies the definition of the entropy and allows us to consider only those areas that coincide with the one defined by the value of the level of the chern-simons theory describing the horizon degrees of freedom . we give a prescription to count the number of relevant horizon states by using spin components and obtain exact expressions for the black hole entropy . finally we derive its asymptotic behavior , discuss several issues related to the compatibility of our results with the bekenstein-hawking area law and the relation with schwarzschild quasinormal modes . story_separator_special_tag we study the classical field theoretical formulation of static generic isolated horizons in a manifestly su ( 2 ) invariant formulation . we show that the usual classical description requires revision in the non-static case due to the breaking of diffeomorphism invariance at the horizon leading to the non-conservation of the usual pre-symplectic structure . we argue how this difficulty could be avoided by a simple enlargement of the field content at the horizon that restores diffeomorphism invariance . restricting our attention to static isolated horizons we study the effective theories describing the boundary degrees of freedom . a quantization of the horizon degrees of freedom is proposed . by defining a statistical mechanical ensemble where only the area $ a_h $ of the horizon is fixed macroscopically ( states with fluctuations away from spherical symmetry are allowed ) , we show that it is possible to obtain agreement with hawking 's area law ( $ s = a_h / ( 4 l_p^2 ) $ ) without fixing the immirzi parameter to any particular value : consistency with the area law only imposes a relationship between the immirzi parameter and the level of the chern-simons theory involved in the effective description of the horizon degrees of freedom story_separator_special_tag ever since the pioneering works of bekenstein and hawking , black hole entropy has been known to have a quantum origin . furthermore , it has long been argued by bekenstein that entropy should be quantized in discrete ( equidistant ) steps given its identification with horizon area in ( semi- ) classical general relativity and the properties of area as an adiabatic invariant .
this led to the suggestion that the black hole area should also be quantized in equidistant steps to account for the discrete black hole entropy . here we shall show that loop quantum gravity , in which area is not quantized in equidistant steps , can nevertheless be consistent with bekenstein 's equidistant entropy proposal in a subtle way . for that we perform a detailed analysis of the number of microstates compatible with a given area and show consistency with the bekenstein framework when an oscillatory behavior in the entropy-area relation is properly interpreted . story_separator_special_tag quantum black holes within the loop quantum gravity ( lqg ) framework are considered . the number of microscopic states that are consistent with a black hole of a given horizon area $ a_0 $ is counted and the statistical entropy , as a function of the area , is obtained for $ a_0 $ up to $ 550 l^2_ { \\rm pl } $ . the results are consistent with an asymptotic linear relation and a logarithmic correction with a coefficient equal to -1/2 . the barbero-immirzi parameter that yields the asymptotic linear relation compatible with the bekenstein-hawking entropy is shown to coincide with a value close to $ \\gamma=0.274 $ , which has been previously obtained analytically . however , a new and oscillatory functional form for the entropy is found for small , planck size , black holes that calls for a physical interpretation . story_separator_special_tag in this paper , we carry out the counting of states for a black hole in loop quantum gravity , assuming , however , an equidistant area spectrum . we find that this toy-model is exactly solvable , and we show that its behavior is very similar to that of the correct model . thus this toy-model can be used as a nice and simplifying 'laboratory ' for questions about the full theory . story_separator_special_tag in a remarkable numerical analysis of the spectrum of states for a spherically symmetric black hole in loop quantum gravity , corichi , diaz-polo and fernandez-borja found that the entropy of the black hole horizon increases in what resembles discrete steps as a function of area . in the present article we reformulate the combinatorial problem of counting horizon states in terms of paths through a certain space . this formulation sheds some light on the origins of this steplike behavior of the entropy . in particular , using a few extra assumptions we arrive at a formula that reproduces the observed step length to a few tenths of a percent accuracy . however , in our reformulation the periodicity ultimately arises as a property of some complicated process , the properties of which , in turn , depend on the properties of the area spectrum in loop quantum gravity in a rather opaque way . thus , in some sense , a deep explanation of the observed periodicity is still lacking . story_separator_special_tag motivated by the analogy proposed by witten between chern-simons and conformal field theories , we explore an alternative way of computing the entropy of a black hole starting from the isolated horizon framework in loop quantum gravity . the consistency of the result opens a window for the interplay between conformal field theory and the description of black holes in loop quantum gravity . story_separator_special_tag we give a complete and detailed description of the computation of black hole entropy in loop quantum gravity by employing the most recently introduced number-theoretic and combinatorial methods .
the use of these techniques allows us to perform a detailed analysis of the precise structure of the entropy spectrum for small black holes , showing some relevant features that were not discernible in previous computations . the ability to manipulate and understand the spectrum up to the level of detail that we describe in the paper is a crucial step toward obtaining the behavior of entropy in the asymptotic ( large horizon area ) regime . story_separator_special_tag we use mathematical methods based on generating functions to study the statistical properties of the black hole degeneracy spectrum in loop quantum gravity . in particular we will study the persistence of the observed effective quantization of the entropy as a function of the horizon area . we will show that this quantization disappears as the area increases despite the existence of black hole configurations with a large degeneracy . the methods that we describe here can be adapted to the study of the statistical properties of the black hole degeneracy spectrum for all the existing proposals to define black hole entropy in loop quantum gravity . story_separator_special_tag we use the combinatorial and number-theoretical methods developed in previous works by the authors to study black hole entropy in the new proposal put forth by engle , noui , and perez . specifically , we give the generating functions relevant for the computation of the entropy and use them to derive its asymptotic behavior , including the value of the immirzi parameter and the coefficient of the logarithmic correction . story_separator_special_tag in this article we outline a rather general construction of diffeomorphism covariant coherent states for quantum gauge theories . by this we mean states $ \\psi_ { ( a , e ) } $ , labelled by a point ( a , e ) in the classical phase space , consisting of canonically conjugate pairs of connections a and electric fields e respectively , such that ( a ) they are eigenstates of a corresponding annihilation operator which is a generalization of $ a - ie $ smeared in a suitable way , ( b ) normal ordered polynomials of generalized annihilation and creation operators have the correct expectation value , ( c ) they saturate the heisenberg uncertainty bound for the fluctuations of $ \\hat { a } , \\hat { e } $ and ( d ) they do not use any background structure for their definition , that is , they are diffeomorphism covariant . this is the first paper in a series of articles entitled `` gauge field theory coherent states ( gcs ) '' which aim at connecting non-perturbative quantum general relativity with the low energy physics of the standard model . in particular , coherent states enable us story_separator_special_tag in this article we apply the methods outlined in the previous paper of this series to the particular set of states obtained by choosing the complexifier to be a laplace operator for each edge of a graph . the corresponding coherent state transform was introduced by hall for one edge and generalized by ashtekar , lewandowski , marolf , mourão and thiemann to arbitrary , finite , piecewise analytic graphs . however , both of these works were incomplete with respect to the following two issues : ( a ) the focus was on the unitarity of the transform and left the properties of the corresponding coherent states themselves untouched .
( b ) while these states depend in some sense on complexified connections , it remained unclear what the complexification was in terms of the coordinates of the underlying real phase space . in this paper we resolve these issues , in particular , we prove that this family of states satisfies all the usual properties : i ) peakedness in the configuration , momentum and phase space ( or bargmann-segal ) representation , ii ) saturation of the unquenched heisenberg uncertainty bound . iii ) ( over ) story_separator_special_tag in the preceding paper of this series of articles we established peakedness properties of a family of coherent states that were introduced by hall for any compact gauge group and were later generalized to gauge field theory by ashtekar , lewandowski , marolf , mourão and thiemann . in this paper we establish the `` ehrenfest property '' of these states which are labelled by a point ( a , e ) , a connection and an electric field , in the classical phase space . by this we mean that i ) the expectation value of all elementary quantum operators $ \\hat { o } $ with respect to the coherent state with label ( a , e ) is given to zeroth order in $ \\hbar $ by the value of the corresponding classical function o evaluated at the phase space point ( a , e ) and ii ) the expectation value of the commutator between two elementary quantum operators , $ [ \\hat { o } _1 , \\hat { o } _2 ] / ( i\\hbar ) $ , with respect to the coherent state with label ( a story_separator_special_tag we summarize a recently proposed concrete programme for investigating the ( semi ) classical limit of canonical , lorentzian , continuum quantum general relativity in four spacetime dimensions . the analysis is based on a novel set of coherent states labelled by graphs . these fit neatly together with an infinite tensor product ( itp ) extension of the currently used hilbert space . the itp construction enables us to give rigorous meaning to the infinite volume ( thermodynamic ) limit of the theory which has been out of reach so far . story_separator_special_tag we study light propagation in the picture of semiclassical space-time that emerges in canonical quantum gravity in the loop representation . in such a picture , where space-time exhibits a polymerlike structure at microscales , it is natural to expect departures from the perfect nondispersiveness of an ordinary vacuum . we evaluate these departures , computing the modifications to maxwell 's equations due to quantum gravity and showing that under certain circumstances nonvanishing corrections appear that depend on the helicity of propagating waves . these effects could lead to observable cosmological predictions of the discrete nature of quantum space-time . in particular , recent observations of nondispersiveness in the spectra of gamma-ray bursts at various energies could be used to constrain the type of semiclassical state that describes the universe . story_separator_special_tag massive spin-1/2 fields are studied in the framework of loop quantum gravity by considering a state approximating , at a length scale l much greater than the planck length lp = 1.2 × 10^-33 cm , a spin-1/2 field in flat spacetime . the discrete structure of spacetime at lp yields corrections to the field propagation at scale l. next , neutrino bursts ( $ \\bar { p } \\sim 10^5 $ gev ) accompanying gamma ray bursts that have travelled cosmological distances , $ l \\sim 10^ { 10 } $ l.y . , are considered .
the dominant correction is helicity independent and leads to a time delay w.r.t . the speed of light , c , of order $ ( \\bar { p } l_p ) l / c \\sim 10^4 $ s . to next order in $ \\bar { p } l_p $ the correction has the form of the gambini and pullin effect for photons . its contribution to time delay is comparable to that caused by the mass term . finally , a dependence $ l_ { \\rm osc } \\sim 1 / ( \\bar { p } ^2 l_p ) $ is found for a two-flavour neutrino oscillation length . story_separator_special_tag in this article and a companion paper we address the question of how one might obtain the semiclassical limit of ordinary matter quantum fields ( qft ) propagating on curved spacetimes ( cst ) from full fledged quantum general relativity ( qgr ) , starting from first principles . we stress that we do not claim to have a satisfactory answer to this question ; rather , our intention is to ignite a discussion by displaying the problems that have to be solved when carrying out such a program . in the present paper we propose a scheme that one might follow in order to arrive at such a limit . we discuss the technical and conceptual problems that arise in doing so and how they can be solved in principle . as is to be expected , completely new issues arise due to the fact that qgr is a background independent theory . for instance , fundamentally the notion of a photon involves not only the maxwell quantum field but also the metric operator - in a sense , there is no photon vacuum state but a `` photon vacuum operator '' ! while in this first paper we focus on story_separator_special_tag the present paper is the companion of [ 1 ] in which we proposed a scheme that tries to derive the quantum field theory ( qft ) on curved spacetimes ( cst ) limit from background independent quantum general relativity ( qgr ) . the constructions of [ 1 ] make heavy use of the notion of semiclassical states for qgr . in the present paper , we employ the complexifier coherent states for qgr recently proposed by thiemann and winkler as semiclassical states , and thus fill the general formulas obtained in [ 1 ] with life . we demonstrate how one can , under some simplifying assumptions , explicitly compute expectation values of the operators relevant for the gravity-matter hamiltonians of [ 1 ] in the complexifier coherent states . these expectation values give rise to effective matter hamiltonians on the background on which the gravitational coherent state is peaked and thus induce approximate notions of n-particle states and matter propagation on fluctuating spacetimes . we display the details for the scalar and the electromagnetic field . the effective theories exhibit two types of corrections as compared to the ordinary qft on cst . the first is story_separator_special_tag motivated by phenomenological questions in quantum gravity , we consider the propagation of a scalar field on a random lattice . we describe a procedure to calculate the dispersion relation for the field by taking a limit of a periodic lattice . we use this to calculate the lowest order coefficients of the dispersion relation for a specific one-dimensional model . story_separator_special_tag there is a unique lorentz-violating modification of the maxwell theory of photons , which maintains gauge invariance , cpt , and renormalizability .
restricting the modified-maxwell theory to the isotropic sector and adding a standard spin-1/2 dirac particle $ p^ { \\pm } $ with minimal coupling to the nonstandard photon $ \\tilde { \\gamma } $ , the resulting modified-quantum-electrodynamics model involves a single dimensionless 'deformation parameter ' , $ \\tilde { \\kappa } _ { \\rm tr } $ . the exact tree-level decay rates for two processes have been calculated : vacuum cherenkov radiation $ p^ { \\pm } \\to p^ { \\pm } \\tilde { \\gamma } $ for the case of positive $ \\tilde { \\kappa } _ { \\rm tr } $ and photon decay $ \\tilde { \\gamma } \\to p^+ p^- $ for the case of negative $ \\tilde { \\kappa } _ { \\rm tr } $ . from the inferred absence of these decays for a particular high-quality ultrahigh-energy-cosmic-ray event detected at the pierre auger observatory and a well-established excess of tev gamma-ray events observed by the high energy stereoscopic system telescopes , story_separator_special_tag a simple model is constructed which allows one to compute modified dispersion relations with effects from loop quantum gravity . different quantization choices can be realized and their effects on the order of corrections studied explicitly . a comparison with more involved semiclassical techniques shows that there is agreement even at a quantitative level . furthermore , by contrasting hamiltonian and lagrangian descriptions we show that possible lorentz symmetry violations may be blurred as an artifact of the approximation scheme . whether this is the case in a purely hamiltonian analysis can be resolved by an improvement in the effective semiclassical analysis .
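as a rough numerical illustration of the phenomenology running through the last few abstracts , a dispersion relation corrected at first order in the planck length , omega = k ( 1 + xi k l_p ) , implies an energy-dependent propagation-time shift delta t = xi ( e / e_planck ) l / c . the python sketch below only evaluates that estimate ; the coefficient xi and its sign are model dependent , and the chosen numbers merely reproduce the orders of magnitude quoted in the neutrino abstract above .

C = 2.998e8             # speed of light in m/s
E_PLANCK_GEV = 1.22e19  # planck energy in gev
LY_IN_M = 9.461e15      # one light year in metres

def time_delay_s(energy_gev, distance_ly, xi=1.0):
    # first-order estimate: delta t = xi * (e / e_planck) * (l / c)
    return xi * (energy_gev / E_PLANCK_GEV) * (distance_ly * LY_IN_M) / C

# 10^5 gev neutrinos travelling ~10^10 light years, as in the abstract above
print(time_delay_s(1e5, 1e10))  # ~2.6e3 s, i.e. of order 10^3-10^4 s

this is also the arithmetic behind using gamma-ray-burst timing to constrain such corrections : the delay grows linearly with both energy and distance .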
story_separator_special_tag the retina is a thin sheet of neural tissue that partially lines the orb of the eye . this tiny outpost of the central nervous system is responsible for collecting all the visual information that reaches the brain . signals from the retina must carry reliable information about properties of objects in the world over many orders of magnitude of illumination . furthermore , these signals are generated by transducers whose characteristics are innately mismatched and must be continuously self-calibrated . some of the mechanisms by which the retina achieves this feat are embodied in a two-dimensional cmos chip , the silicon retina . story_separator_special_tag this paper describes a 128 × 128 pixel cmos vision sensor . each pixel independently and in continuous time quantizes local relative intensity changes to generate spike events . these events appear at the output of the sensor as an asynchronous stream of digital pixel addresses . these address-events signify scene reflectance change and have sub-millisecond timing precision . the output data rate depends on the dynamic content of the scene and is typically orders of magnitude lower than those of conventional frame-based imagers . by combining an active continuous-time front-end logarithmic photoreceptor with a self-timed switched-capacitor differencing circuit , the sensor achieves an array mismatch of 2.1 % in relative intensity event threshold and a pixel bandwidth of 3 khz under 1 klux scene illumination . dynamic range is 120 db and chip power consumption is 23 mw . event latency shows weak light dependency with a minimum of 15 μs at 1 klux pixel illumination . the sensor is built in a 0.35 μm 4m2p process . it has 40 × 40 μm pixels with 9.4 % fill factor . by providing high pixel bandwidth , wide dynamic range , and precisely timed sparse digital output , this silicon retina story_separator_special_tag the biomimetic cmos dynamic vision and image sensor described in this paper is based on a qvga ( 304 × 240 ) array of fully autonomous pixels containing event-based change detection and pulse-width-modulation ( pwm ) imaging circuitry . exposure measurements are initiated and carried out locally by the individual pixel that has detected a change of brightness in its field-of-view . pixels do not rely on external timing signals and independently and asynchronously request access to an ( asynchronous arbitrated ) output channel when they have new grayscale values to communicate . pixels that are not stimulated visually do not produce output . the visual information acquired from the scene , temporal contrast and grayscale data , are communicated in the form of asynchronous address-events ( aer ) , with the grayscale values being encoded in inter-event intervals . the pixel-autonomous and massively parallel operation ideally results in lossless video compression through complete temporal redundancy suppression at the pixel level . compression factors depend on scene activity and peak at ~1000 for static scenes . due to the time-based encoding of the illumination information , very high dynamic range : intra-scene dr of 143 db static and 125 db at 30 story_separator_special_tag fast sensory-motor processing is challenging when using traditional frame-based cameras and computers . here the authors show how a hybrid neuromorphic-procedural system consisting of an address-event silicon retina , a computer , and a servo motor can be used to implement a fast sensory-motor reactive controller to track and block balls shot at a goal .
the system consists of a 128 × 128 retina that asynchronously reports scene reflectance changes , a laptop pc , and a servo motor controller . components are interconnected by usb . the retina looks down onto the field in front of the goal . moving objects are tracked by an event-driven cluster tracker algorithm that detects the ball as the nearest object that is approaching the goal . the ball 's position and velocity are used to control the servo motor . running under windows xp , the reaction latency is 2.8 ± 0.5 ms at a cpu load of 1 million events per second ( meps ) , although fast balls only create ~30 keps . this system demonstrates the advantages of hybrid event-based sensory motor processing story_separator_special_tag event cameras are bio-inspired vision sensors that output pixel-level brightness changes instead of standard intensity frames . these cameras do not suffer from motion blur and have a very high dynamic range , which enables them to provide reliable visual information during high-speed motions or in scenes characterized by high dynamic range . these features , along with a very low power consumption , make event cameras an ideal complement to standard cameras for vr/ar and video game applications . with these applications in mind , this paper tackles the problem of accurate , low-latency tracking of an event camera from an existing photometric depth map ( i.e. , intensity plus depth information ) built via classic dense reconstruction pipelines . our approach tracks the 6-dof pose of the event camera upon the arrival of each event , thus virtually eliminating latency . we successfully evaluate the method in both indoor and outdoor scenes and show that because of the technological advantages of the event camera our pipeline works in scenes characterized by high-speed motion , which are still inaccessible to standard cameras . story_separator_special_tag event cameras are novel sensors that report brightness changes in the form of a stream of asynchronous events instead of intensity frames . they offer significant advantages with respect to conventional cameras : high temporal resolution , high dynamic range , and no motion blur . while the stream of events encodes in principle the complete visual signal , the reconstruction of an intensity image from a stream of events is an ill-posed problem in practice . existing reconstruction approaches are based on hand-crafted priors and strong assumptions about the imaging process as well as the statistics of natural images . in this work we propose to learn to reconstruct intensity images from event streams directly from data instead of relying on any hand-crafted priors . we propose a novel recurrent network to reconstruct videos from a stream of events , and train it on a large amount of simulated event data . during training we propose to use a perceptual loss to encourage reconstructions to follow natural image statistics . we further extend our approach to synthesize color images from color event streams . our quantitative experiments show that our network surpasses state-of-the-art reconstruction methods by a large margin story_separator_special_tag neuromorphic vision sensing ( nvs ) devices represent visual information as sequences of asynchronous discrete events ( a.k.a. , `` spikes '' ) in response to changes in scene reflectance .
unlike conventional active pixel sensing ( aps ) , nvs allows for significantly higher event sampling rates at substantially increased energy efficiency and robustness to illumination changes . however , object classification with nvs streams cannot leverage state-of-the-art convolutional neural networks ( cnns ) , since nvs does not produce frame representations . to circumvent this mismatch between sensing and processing with cnns , we propose a compact graph representation for nvs . we couple this with novel residual graph cnn architectures and show that , when trained on spatio-temporal nvs data for object classification , such residual graph cnns preserve the spatial and temporal coherence of spike events , while requiring less computation and memory . finally , to address the absence of large real-world nvs datasets for complex recognition tasks , we present and make available a 100k dataset of nvs recordings of the american sign language letters , acquired with an inilabs davis240c device under real-world conditions . story_separator_special_tag event sensors implement circuits that capture partial functionality of biological sensors , such as the retina and cochlea . as with their biological counterparts , event sensors are drivers of their own output . that is , they produce dynamically sampled binary events in response to dynamically changing stimuli . algorithms and networks that process this form of output representation are still in their infancy , but they show strong promise . this article illustrates the unique form of the data produced by the sensors and demonstrates how the properties of these sensor outputs make them useful for power-efficient , low-latency systems working in real time . story_separator_special_tag conventional vision-based robotic systems that must operate quickly require high video frame rates and consequently high computational costs . visual response latencies are lower-bounded by the frame period , e.g . , 20 ms for 50 hz frame rate . this paper shows how an asynchronous neuromorphic dynamic vision sensor ( dvs ) silicon retina is used to build a fast self-calibrating robotic goalie , which offers high update rates and low latency at low cpu load . independent and asynchronous per pixel illumination change events from the dvs signify moving objects and are used in software to track multiple balls . motor actions to block the most threatening ball are based on measured ball positions and velocities . the goalie also sees its single-axis goalie arm and calibrates the motor output map during idle periods so that it can plan open-loop arm movements to desired visual locations . blocking capability is about 80 % for balls shot from 1 m from the goal even with the fastest shots , and approaches 100 % accuracy when the ball does not beat the limits of the servo motor to move the arm to the necessary position in time . running with standard usb story_separator_special_tag the fast temporal-dynamics and intrinsic motion segmentation of event-based cameras are beneficial for robotic tasks that require low-latency visual tracking and control , for example a robot catching a ball . when the event-driven icub humanoid robot grasps an object , its head and torso move , inducing camera motion , and tracked objects are no longer trivially segmented amongst the mass of background clutter . current event-based tracking algorithms have mostly considered stationary cameras that have clean event-streams with minimal clutter .
this paper introduces novel methods to extend the hough-based circle detection algorithm using optical flow information that is readily extracted from the spatio-temporal event space . results indicate the proposed directed-hough algorithm is more robust to other moving objects and the background event-clutter . finally , we demonstrate successful on-line robot control and gaze following on the icub robot . story_separator_special_tag this work presents an embedded optical sensory system for traffic monitoring and vehicle speed estimation based on a neuromorphic `` silicon-retina '' image sensor , and the algorithm developed for processing the asynchronous output data delivered by this sensor . the main purpose of these efforts is to provide a flexible , compact , low-power and low-cost traffic monitoring system which is capable of determining the velocity of passing vehicles simultaneously on multiple lanes . the system and algorithm proposed exploit the unique characteristics of the image sensor with focal-plane analog preprocessing . these features include sparse asynchronous data output with high temporal resolution and low latency , high dynamic range and low power consumption . the system is able to measure velocities of vehicles in the range 20 to 300 km/h on up to four lanes simultaneously , day and night and under variable atmospheric conditions , with a resolution of 1 km/h . results of vehicle speed measurements taken from a test installation of the system on a four-lane highway are presented and discussed . the accuracy of the speed estimate has been evaluated on the basis of calibrated light-barrier speed measurements . the speed estimation error has story_separator_special_tag this paper introduces a spiking hierarchical model for object recognition which utilizes the precise timing information inherently present in the output of biologically inspired asynchronous address event representation ( aer ) vision sensors . the asynchronous nature of these systems frees computation and communication from the rigid predetermined timing enforced by system clocks in conventional systems . freedom from rigid timing constraints opens the possibility of using true timing to our advantage in computation . we show not only how timing can be used in object recognition , but also how it can in fact simplify computation . specifically , we rely on a simple temporal-winner-take-all rather than more computationally intensive synchronous operations typically used in biologically inspired neural networks for object recognition . this approach to visual computation represents a major paradigm shift from conventional clocked systems and can find application in other sensory modalities and computational tasks . we showcase the effectiveness of the approach by achieving the highest reported accuracy to date ( 97.5 % ± 3.5 % ) for a previously published four-class card pip recognition task and an accuracy of 84.9 % ± 1.9 % for a new more difficult 36 story_separator_special_tag we propose a real-time hand gesture interface based on combining a stereo pair of biologically inspired event-based dynamic vision sensor ( dvs ) silicon retinas with neuromorphic event-driven postprocessing . compared with conventional vision or 3-d sensors , the use of dvss , which output asynchronous and sparse events in response to motion , eliminates the need to extract movements from sequences of video frames , and allows significantly faster and more energy-efficient processing .
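the directed-hough idea above can be sketched as a restricted vote : instead of voting along the whole circle of radius r for every event , each event votes only for the candidate centres that lie along its optical-flow direction . this is a loose reconstruction , not the authors ' code ; the two-sided vote and all parameters are assumptions .

```python
# illustrative directed hough vote for circle centres (not the authors' code)
import numpy as np

def directed_hough(events, flows, radius, shape):
    """events: (n, 2) int pixel coords; flows: (n, 2) unit flow vectors;
    shape: (height, width) of the accumulator. returns the best centre."""
    acc = np.zeros(shape, dtype=np.int32)
    for (x, y), (fx, fy) in zip(events, flows):
        # a circle's centre lies at distance `radius` from the event, and for
        # a moving circle the flow points towards or away from the centre,
        # so only two candidate centres remain instead of a whole ring
        for s in (+1, -1):
            cx = int(round(x + s * radius * fx))
            cy = int(round(y + s * radius * fy))
            if 0 <= cx < shape[1] and 0 <= cy < shape[0]:
                acc[cy, cx] += 1
    return np.unravel_index(np.argmax(acc), acc.shape)  # (row, col) of best centre
```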
in addition , the rate of input events depends on the observed movements , and thus provides an additional cue for solving the gesture spotting problem , i.e. , finding the onsets and offsets of gestures . we propose a postprocessing framework based on spiking neural networks that can process the events received from the dvss in real time , and provides an architecture for future implementation in neuromorphic hardware devices . the motion trajectories of moving hands are detected by spatiotemporally correlating the stereoscopically verged asynchronous events from the dvss by using leaky integrate-and-fire ( lif ) neurons . adaptive thresholds of the lif neurons achieve the segmentation of trajectories , which are then translated into discrete and finite feature vectors . the feature story_separator_special_tag we present the first gesture recognition system implemented end-to-end on event-based hardware , using a truenorth neurosynaptic processor to recognize hand gestures in real-time at low power from events streamed live by a dynamic vision sensor ( dvs ) . the biologically inspired dvs transmits data only when a pixel detects a change , unlike traditional frame-based cameras which sample every pixel at a fixed frame rate . this sparse , asynchronous data representation lets event-based cameras operate at much lower power than frame-based cameras . however , much of the energy efficiency is lost if , as in previous work , the event stream is interpreted by conventional synchronous processors . here , for the first time , we process a live dvs event stream using truenorth , a natively event-based processor with 1 million spiking neurons . configured here as a convolutional neural network ( cnn ) , the truenorth chip identifies the onset of a gesture with a latency of 105 ms while consuming less than 200 mw . the cnn achieves 96.5 % out-of-sample accuracy on a newly collected dvs dataset ( dvsgesture ) comprising 11 hand gesture categories from 29 subjects under 3 illumination conditions story_separator_special_tag we present a novel event-based stereo matching algorithm that exploits the asynchronous visual events from a pair of silicon retinas . unlike conventional frame-based cameras , recent artificial retinas transmit their outputs as a continuous stream of asynchronous temporal events , in a manner similar to the output cells of the biological retina . our algorithm uses the timing information carried by this representation in addressing the stereo-matching problem on moving objects . using the high temporal resolution of the acquired data stream for the dynamic vision sensor , we show that matching on the timing of the visual events provides a new solution to the real-time computation of 3-d objects when combined with geometric constraints using the distance to the epipolar lines . the proposed algorithm is able to filter out incorrect matches and to accurately reconstruct the depth of moving objects despite the low spatial resolution of the sensor . this brief sets up the principles for further event-based vision processing and demonstrates the importance of dynamic information and spike timing in processing asynchronous streams of visual events . story_separator_special_tag event cameras are bio-inspired vision sensors that output pixel-level brightness changes instead of standard intensity frames . they offer significant advantages over standard cameras , namely a very high dynamic range , no motion blur , and a latency in the order of microseconds . 
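the time-based stereo matching principle set up in the brief above admits a very small sketch . the version below assumes rectified cameras , so the epipolar constraint reduces to a row check , and it uses made-up time and disparity tolerances ; a real implementation would also filter matches for consistency .

```python
# sketch of event-timing stereo matching under a rectified-camera assumption
def match_stereo(left_events, right_events, dt_max=1e-4, row_tol=1, d_max=60):
    """events are (x, y, t, polarity) tuples; returns (left, right, disparity)."""
    matches = []
    for (xl, yl, tl, pl) in left_events:
        best, best_dt = None, dt_max
        for (xr, yr, tr, pr) in right_events:
            if pl != pr or abs(yl - yr) > row_tol:
                continue                      # polarity / epipolar constraint
            if not (0 <= xl - xr <= d_max):
                continue                      # disparity must be non-negative
            dt = abs(tl - tr)
            if dt < best_dt:
                best, best_dt = (xr, yr), dt  # closest in time wins
        if best is not None:
            matches.append(((xl, yl), best, xl - best[0]))
    return matches
```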
however , because the output is composed of a sequence of asynchronous events rather than actual intensity images , traditional vision algorithms cannot be applied , so that a paradigm shift is needed . we introduce the problem of event-based multi-view stereo ( emvs ) for event cameras and propose a solution to it . unlike traditional mvs methods , which address the problem of estimating dense 3d structure from a set of known viewpoints , emvs estimates semi-dense 3d structure from an event camera with known trajectory . our emvs solution elegantly exploits two inherent properties of an event camera : ( 1 ) its ability to respond to scene edges which naturally provide semi-dense geometric information without any pre-processing operation and ( 2 ) the fact that it provides continuous measurements as the sensor moves . despite its simplicity ( it can be implemented in a few lines of code ) , story_separator_special_tag structured light 3d scanning systems are fundamentally constrained by limited sensor bandwidth and light source power , hindering their performance in real-world applications where depth information is essential , such as industrial automation , autonomous transportation , robotic surgery , and entertainment . we present a novel structured light technique called motion contrast 3d scanning ( mc3d ) that maximizes bandwidth and light source power to avoid performance trade-offs . the technique utilizes motion contrast cameras that sense temporal gradients asynchronously , i.e. , independently for each pixel , a property that minimizes redundant sampling . this allows laser scanning resolution with single-shot speed , even in the presence of strong ambient illumination , significant inter-reflections , and highly reflective surfaces . the proposed approach will allow 3d vision systems to be deployed in challenging and hitherto inaccessible real-world scenarios requiring high performance using limited power and bandwidth . story_separator_special_tag this paper introduces a new methodology to compute dense visual flow using the precise timings of spikes from an asynchronous event-based retina . biological retinas , and their artificial counterparts , are totally asynchronous and data-driven and rely on a paradigm of light acquisition radically different from most of the currently used frame-grabber technologies . this paper introduces a framework to estimate visual flow from the local properties of events ' spatiotemporal space . we will show that precise visual flow orientation and amplitude can be estimated using a local differential approach on the surface defined by coactive events . experimental results are presented ; they show the adequacy of the method given the high data sparseness and temporal resolution of event-based acquisition , which allows the computation of motion flow with microsecond accuracy and at very low computational cost . story_separator_special_tag event-based cameras have shown great promise in a variety of situations where frame-based cameras suffer , such as high speed motions and high dynamic range scenes . however , developing algorithms for event measurements requires a new class of hand-crafted algorithms . deep learning has shown great success in providing model-free solutions to many problems in the vision community , but existing networks have been developed with frame-based images in mind , and labeled data for supervised training does not yet exist for events in the abundance that it does for images .
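the local differential approach to visual flow described above is naturally expressed as a plane fit to the spatiotemporal surface of most-recent event timestamps : the fitted slopes give the inverse of the local event velocity . a minimal sketch under the assumption of a dense `` time surface '' array and an illustrative window size :

```python
# sketch of local plane-fit optical flow on an event time surface
import numpy as np

def plane_fit_flow(time_surface, x, y, half=2):
    """time_surface: 2d array of most-recent event times (np.nan if none);
    fits t = a*x + b*y + c in a small window around (x, y)."""
    h, w = time_surface.shape
    if not (half <= x < w - half and half <= y < h - half):
        return None                      # interior pixels only in this sketch
    ys, xs = np.mgrid[y - half:y + half + 1, x - half:x + half + 1]
    ts = time_surface[y - half:y + half + 1, x - half:x + half + 1]
    mask = ~np.isnan(ts)
    if mask.sum() < 3:
        return None                      # not enough events to fit a plane
    A = np.stack([xs[mask], ys[mask], np.ones(mask.sum())], axis=1)
    (a, b, _), *_ = np.linalg.lstsq(A, ts[mask], rcond=None)
    g2 = a * a + b * b
    if g2 < 1e-12:
        return None                      # flat plane: flow undefined
    # velocity is the inverse gradient of the fitted time plane
    return (a / g2, b / g2)              # (vx, vy) in pixels per unit time
```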
to address these points , we present ev-flownet , a novel self-supervised deep learning pipeline for optical flow estimation for event-based cameras . in particular , we introduce an image-based representation of a given event stream , which is fed into a self-supervised neural network as the sole input . the corresponding grayscale images captured from the same camera at the same time as the events are then used as a supervisory signal to provide a loss function at training time , given the estimated flow from the network . we show that the resulting network is able to accurately predict optical story_separator_special_tag biological systems process visual input using a distributed representation , with different areas encoding different aspects of the visual interpretation . while current engineering habits tempt us to think of this processing in terms of a pipelined sequence of filters and other feed-forward processing stages , cortical anatomy suggests quite a different architecture , using strong recurrent connectivity between visual areas . here we design a network to interpret input from a neuromorphic sensor by means of recurrently interconnected areas , each of which encodes a different aspect of the visual interpretation , such as light intensity or optic flow . as each area of the network tries to be consistent with the information in neighboring areas , the visual interpretation converges towards global mutual consistency . rather than applying input in a traditional feed-forward manner , the sensory input is only used to weakly influence the information flowing both ways through the middle of the network . even with this seemingly weak use of input , this network of interacting maps is able to maintain its interpretation of the visual scene in real time , proving the viability of this interacting map approach to computation . story_separator_special_tag an event camera is a silicon retina which outputs not a sequence of video frames like a standard camera , but a stream of asynchronous spikes , each with pixel location , sign and precise timing , indicating when individual pixels record a threshold log intensity change . by encoding only image change , it offers the potential to transmit the information in a standard video but at vastly reduced bitrate , and with huge added advantages of very high dynamic range and temporal resolution . however , event data calls for new algorithms , and in particular we believe that algorithms which incrementally estimate global scene models are best placed to take full advantage of its properties . here , we show for the first time that an event stream , with no additional sensing , can be used to track accurate camera rotation while building a persistent and high quality mosaic of a scene which is super-resolution accurate and has high dynamic range . our method involves parallel camera rotation tracking and template reconstruction from estimated gradients , both operating on an event-by-event basis and based on probabilistic filtering . story_separator_special_tag we propose a method which can perform real-time 3d reconstruction from a single hand-held event camera with no additional sensing , and works in unstructured scenes of which it has no prior knowledge . it is based on three decoupled probabilistic filters , each estimating 6-dof camera motion , scene logarithmic ( log ) intensity gradient and scene inverse depth relative to a keyframe , and we build a real-time graph of these to track and model over an extended local workspace .
we also upgrade the gradient estimate for each keyframe into an intensity image , allowing us to recover a real-time video-like intensity sequence with spatial and temporal super-resolution from the low bit-rate input event stream . to the best of our knowledge , this is the first algorithm provably able to track a general 6d motion , along with reconstruction of arbitrary structure including its intensity and the reconstruction of grayscale video , that exclusively relies on event camera data . story_separator_special_tag we present evo , an event-based visual odometry algorithm . our algorithm successfully leverages the outstanding properties of event cameras to track fast camera motions while recovering a semidense three-dimensional ( 3-d ) map of the environment . the implementation runs in real time on a standard cpu and outputs up to several hundred pose estimates per second . due to the nature of event cameras , our algorithm is unaffected by motion blur and operates very well in challenging , high dynamic range conditions with strong illumination changes . to achieve this , we combine a novel , event-based tracking approach based on image-to-model alignment with a recent event-based 3-d reconstruction algorithm in a parallel fashion . additionally , we show that the output of our pipeline can be used to reconstruct intensity images from the binary event stream , though our algorithm does not require such intensity information . we believe that this work makes significant progress in simultaneous localization and mapping by unlocking the potential of event cameras . this allows us to tackle challenging scenarios that are currently inaccessible to standard cameras . story_separator_special_tag event cameras are bio-inspired vision sensors that output pixel-level brightness changes instead of standard intensity frames . these cameras do not suffer from motion blur and have a very high dynamic range , which enables them to provide reliable visual information during high speed motions or in scenes characterized by high dynamic range . however , event cameras output little information when the amount of motion is limited , such as in the case of almost still motion . conversely , standard cameras provide instant and rich information about the environment most of the time ( in low-speed and good lighting scenarios ) , but they fail severely in the case of fast motions , or difficult lighting such as high dynamic range or low light scenes . in this paper , we present the first state estimation pipeline that leverages the complementary advantages of these two sensors by fusing in a tightly-coupled manner events , standard frames , and inertial measurements . we show on the publicly available event camera dataset that our hybrid pipeline leads to an accuracy improvement of 130 % over event-only pipelines , and 85 % over standard-frames-only visual-inertial systems , while still being computationally story_separator_special_tag event-based cameras can measure intensity changes ( called events ) with microsecond accuracy under high-speed motion and challenging lighting conditions . with the active pixel sensor ( aps ) , the event camera allows simultaneous output of the intensity frames . however , the output images are captured at a relatively low frame-rate and often suffer from motion blur . a blurry image can be regarded as the integral of a sequence of latent images , while the events indicate the changes between the latent images .
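that relation between a blurry frame , the latent images and the events can be written down directly . below is a toy single-pixel sketch of the double-integral idea : the blurry value is the temporal average of the latent intensity , the latent intensity follows the integrated events in log space , and the sharp value at the start of the exposure falls out in closed form . the contrast threshold c and the crude numerical averaging are assumptions , and the sketch ignores the optimization over the threshold that a full method would perform .

```python
# toy single-pixel sketch of the event-based double integral relation
import numpy as np

def latent_from_blur(blur, event_times, polarities, t0, t1, c=0.2, n=100):
    """blur: blurry intensity at one pixel; events: threshold crossings in
    [t0, t1]; c: assumed contrast threshold. returns sharp intensity at t0."""
    ts = np.linspace(t0, t1, n)
    # E(t): integrated log-intensity change since t0, stepped by the events
    E = np.array([c * np.sum(polarities[event_times <= t]) for t in ts])
    # blur = L(t0) * mean(exp(E(t))) over the exposure  =>  solve for L(t0)
    return blur / np.mean(np.exp(E))

# usage: one pixel that brightens mid-exposure
evt_t = np.array([0.4, 0.5])
evt_p = np.array([1, 1])
print(latent_from_blur(0.6, evt_t, evt_p, 0.0, 1.0))
```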
therefore , we are able to model the blur-generation process by associating event data to a latent image . in this paper , we propose a simple and effective approach , the event-based double integral ( edi ) model , to reconstruct a high frame-rate , sharp video from a single blurry frame and its event data . the video generation is based on solving a simple non-convex optimization problem in a single scalar variable . experimental results on both synthetic and real images demonstrate the superiority of our edi model and optimization method in comparison to the state-of-the-art . story_separator_special_tag a revolutionary type of imaging device , known as a silicon retina or event-based sensor , has recently been developed and is gaining in popularity in the field of artificial vision systems . these devices are inspired by a biological retina and operate in a significantly different way to traditional ccd-based imaging sensors . while a ccd produces frames of pixel intensities , an event-based sensor produces a continuous stream of events , each of which is generated when a pixel detects a change in log light intensity . these pixels operate asynchronously and independently , producing an event-based output with high temporal resolution . there are also no fixed exposure times , allowing these devices to offer a very high dynamic range independently for each pixel . additionally , these devices offer high-speed , low power operation and a sparse spatio-temporal output . as a consequence , the data from these sensors must be interpreted in a significantly different way to traditional imaging sensors and this paper explores the advantages this technology provides for space imaging . the applicability and capabilities of event-based sensors for ssa applications are demonstrated through telescope field trials . trial results have confirmed that story_separator_special_tag star trackers are primarily optical devices that are used to estimate the attitude of a spacecraft by recognising and tracking star patterns . currently , most star trackers use conventional optical sensors . in this application paper , we propose the usage of event sensors for star tracking . there are potentially two benefits of using event sensors for star tracking : lower power consumption and higher operating speeds . our main contribution is to formulate an algorithmic pipeline for star tracking from event data that includes novel formulations of rotation averaging and bundle adjustment . in addition , we also release with this paper a dataset for star tracking using event cameras . with this work , we introduce the problem of star tracking using event cameras to the computer vision community , whose expertise in slam and geometric optimisation can be brought to bear on this commercially important application . story_separator_special_tag real time artificial vision is traditionally limited to the frame rate . in many scenarios most frames contain information redundant both within and across frames . here we report on the development of an address-event representation ( aer ) [ 1 ] silicon retina chip `` tmpdiff '' that generates events corresponding to changes in log intensity . the resulting address-events are output asynchronously on a shared digital bus . this chip responds with high temporal and low spatial resolution , analogous to the biological magnocellular pathway . it has 64 × 64 pixels , each with 2 outputs ( on and off ) , which are communicated off-chip on a 13-bit digital bus .
it is fabricated in a 0.35 µm 4m 2p process and occupies an area of ( 3.3 mm ) ² . each ( 40 µm ) ² pixel has 28 transistors and 3 capacitors and uses a self-clocked switched-capacitor design to limit response fpn . dynamic operating range is at least 5 decades and minimum scene illumination with f/1.4 lens is less than 10 lux . chip power consumption is 7mw . story_separator_special_tag outline dear reader , today you have probably been filmed by several electronic cameras . you were in the visual field of a video camera while you stood close to a cash-machine , while you walked through a railway station or an airport , anytime you entered a bank or large public building , while shopping in the supermarket and also in many public spaces . electronic eyes became very abundant in our environment in the last few years . but surveillance is just one of many application fields for electronic vision devices . artificial vision is also used in industrial manufacture , in safety systems in industrial environments , for visual quality control and failure investigation , for visual stock control , for barcode reading , to control automated guided vehicles etc . in these applications , human workmanship is replaced by an electronic camera paired with sophisticated computer vision software running on the computer to which the camera is attached . the complete system , comprising one or more cameras , computers and sometimes also actuators ( i.e. , robots ) , is called a machine vision system . although machine vision systems are widely and story_separator_special_tag a vision sensor responds to temporal contrast with asynchronous output . each pixel independently and continuously quantizes changes in log intensity . the 128 × 128-pixel chip has 120db illumination operating range and consumes 30mw . pixels respond in < 100 µs at 1 klux scene illumination with < 10 % contrast-threshold fpn story_separator_special_tag abstract ( deutsch ) ; 1 an introduction to event-based sensors and machine learning ; 1.1 the amazing progress of deep learning ; 1.2 towards artificial agents that exist in the world ; 1.3 an introduction to event-based sensors ; 1.3.1 dynamic vision sensors ; 1.3.2 dynamic audio sensors ; 1.4 event-based inputs and networks : new challenges and new opportunities ; 2 event-based hardware systems for deep networks ; 2.1 why hardware ? ; 2.2 minitaur ; 2.2.1 prior work
story_separator_special_tag state-of-the-art image sensors suffer from significant limitations imposed by their very principle of operation . these sensors acquire the visual information as a series of snapshot images , recorded at discrete points in time . visual information gets time quantized at a predetermined frame rate which has no relation to the dynamics present in the scene . furthermore , each recorded frame conveys the information from all pixels , regardless of whether this information , or a part of it , has changed since the last frame had been acquired . this acquisition method limits the temporal resolution , potentially missing important information , and leads to redundancy in the recorded image data , unnecessarily inflating data rate and volume . biology is leading the way to a more efficient style of image acquisition . biological vision systems are driven by events happening within the scene in view , and not , like image sensors , by artificially created timing and control signals . translating the frameless paradigm of biological vision to artificial imaging systems implies that control over the acquisition of visual information is no longer being imposed externally to an array of pixels but the decision making is story_separator_special_tag we present a transmitter for a scalable multiple-access inter-chip link that communicates binary activity between two-dimensional arrays fabricated in deep submicrometer cmos . transmission is initiated by active cells but cells are not read individually . an entire row is read in parallel ; this increases communication capacity with integration density . access is random but not inequitable . a row is not reread until all those waiting are serviced ; this increases parallelism as more of its cells become active in the meantime . row and column addresses identify active cells but they are not transmitted simultaneously . the row address is followed sequentially by a column address for each active cell ; this cuts pad count in half without sacrificing capacity . we synthesized an asynchronous implementation by performing a series of program decompositions , starting from a high-level description . links using this design have been implemented successfully in three generations of submicrometer cmos technology . story_separator_special_tag event-based neuromorphic systems are inspired by the brain 's efficient data-driven communication design , which is key to its quick responses and remarkable story_separator_special_tag to help meet the increasing need for dynamic vision sensor ( dvs ) event camera data , we developed the v2e toolbox , which generates synthetic dvs event streams from intensity frame videos . videos can be of any type , either real or synthetic . v2e optionally uses synthetic slow motion to upsample the video frame rate and then generates dvs events from these frames using a realistic pixel model that includes event threshold mismatch , finite illumination-dependent bandwidth , and several types of noise . v2e includes an algorithm that determines the dvs thresholds and bandwidth so that the synthetic event stream statistics match a given reference dvs recording .
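the core frame-to-event conversion in such a toolbox can be sketched as a per-pixel threshold crossing in log intensity . the toy version below ignores the threshold mismatch , finite bandwidth and noise modelling described above ; the threshold value and function names are illustrative .

```python
# toy frame-to-event conversion: per-pixel log-intensity threshold crossings
import numpy as np

def frames_to_events(frames, times, theta=0.2, eps=1e-3):
    """frames: list of 2d intensity arrays; times: matching timestamps.
    returns a list of (x, y, t, polarity) events."""
    ref = np.log(frames[0] + eps)        # per-pixel log level at last event
    events = []
    for frame, t in zip(frames[1:], times[1:]):
        logf = np.log(frame + eps)
        diff = logf - ref
        # number of threshold crossings per pixel since the reference level
        n = np.floor(np.abs(diff) / theta).astype(int)
        for y, x in zip(*np.nonzero(n)):
            pol = 1 if diff[y, x] > 0 else -1
            for _ in range(n[y, x]):
                events.append((x, y, t, pol))
            ref[y, x] += pol * n[y, x] * theta
        # pixels without a crossing keep their old reference level
    return events
```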
v2e is the first toolbox that can synthesize realistic low-light dvs data . this paper also clarifies misleading claims about dvs characteristics in some of the computer vision literature . the v2e website is this https url and code is hosted at this https url . story_separator_special_tag this thesis describes the development and testing of a simple visual system fabricated using complementary metal-oxide-semiconductor ( cmos ) very large scale integration ( vlsi ) technology . this visual system is composed of three subsystems . a silicon retina , fabricated on a single chip , transduces light and performs signal processing in a manner similar to a simple vertebrate retina . a stereocorrespondence chip uses bilateral retinal input to estimate the location of objects in depth . a silicon optic nerve allows communication between chips by a method that preserves the idiom of action potential transmission in the nervous system . each of these subsystems illuminates various aspects of the relationship between vlsi analogs and their neurobiological counterparts . the overall synthetic visual system demonstrates that analog vlsi can capture a significant portion of the function of neural structures at a systems level , and concomitantly , that incorporating neural architectures leads to new engineering approaches to computation in vlsi . the relationship between neural systems and vlsi is rooted in the shared limitations imposed by computing in similar physical media . the systems discussed in this text support the belief that the physical limitations imposed by the story_separator_special_tag bioinspired vision sensors have become very attractive in recent years because of their inherent redundancy suppression , integrated processing , fast sensing capability , wide dynamic range , and low power consumption . these sensors combine functionalities of the biological `` where '' and `` what '' systems of the human visual system and process the visual information using an asynchronous event-driven method . since the emergence of bioinspired vision sensors , various applications based on them have been proposed in the computer vision and robotics fields . in this paper , we review bioinspired vision sensors and their applications . the reviewed sensors include dynamic vision sensors ( dvss ) , asynchronous time-based image sensors ( atiss ) , and dynamic and active pixel vision sensors ( daviss ) . the reviewed applications based on the bioinspired vision sensors include visual tracking , detection , recognition , simultaneous localization and mapping ( slam ) , reconstruction , stereo matching , and control . story_separator_special_tag any visual sensor , whether artificial or biological , maps the 3d-world on a 2d-representation . the missing dimension is depth and most species use stereo vision to recover it . stereo vision implies multiple perspectives and matching , hence it obtains depth from a pair of images . algorithms for stereo vision are also used successfully in robotics . although biological systems seem to compute disparities effortlessly , artificial methods suffer from high energy demands and latency . the crucial part is the correspondence problem ; finding the matching points of two images . the development of event-based cameras , inspired by the retina , enables the exploitation of an additional physical constraint : time . due to their asynchronous course of operation , considering the precise occurrence of spikes , spiking neural networks take advantage of this constraint .
in this work , we investigate sensors and algorithms for event-based stereo vision leading to more biologically plausible robots . here , we focus mainly on binocular stereo vision . story_separator_special_tag we designed and tested a two-dimensional silicon receptor array constructed from pixels that temporally high-pass filter the incident image . there are no surround interactions in the array ; all pixels operate independently except for their correlation due to the input image . the high-pass output signal is computed by sampling the output of an adaptive , high-gain , logarithmic photoreceptor during the scanout of the array . after a pixel is sampled , the output of the pixel is reset to a fixed value . an interesting capacitive coupling mechanism results in a controllable high-pass filtering operation . the resulting array has very low offsets . the computation that the array performs may be useful for time-domain image processing , for example , motion computation . story_separator_special_tag biology provides examples of efficient machines which greatly outperform conventional technology . designers in neuromorphic engineering aim to construct electronic systems with the same efficient style of computation . this task requires a melding of novel engineering principles with knowledge gleaned from neuroscience . we discuss recent progress in realizing neuromorphic sensory systems which mimic the biological retina and cochlea , and subsequent sensor processing . the main trends are the increasing number of sensors and sensory systems that communicate through asynchronous digital signals analogous to neural spikes ; the improved performance and usability of these sensors ; and novel sensory processing methods which capitalize on the timing of spikes from these sensors . experiments using these sensors can impact how we think the brain processes sensory information . story_separator_special_tag the four chips [ 1 ] - [ 4 ] presented in the special session on `` activity-driven , event-based vision sensors '' quickly output compressed digital data in the form of events . these sensors reduce redundancy and latency and increase dynamic range compared with conventional imagers . the digital sensor output is easily interfaced to conventional digital post processing , where it reduces the latency and cost of post processing compared to imagers . the asynchronous data could spawn a new area of dsp that breaks from conventional nyquist rate signal processing . this paper reviews the rationale and history of this event-based approach , introduces sensor functionalities , and gives an overview of the papers in this session . the paper concludes with a brief discussion on open questions . story_separator_special_tag this paper provides a personal perspective on our group 's efforts in building event-based vision sensors , algorithms , and applications over the period 2002-2012 . some recent advances from other groups are also briefly described . story_separator_special_tag conventional image/video sensors acquire visual information from a scene in time-quantized fashion at some predetermined frame rate . each frame carries the information from all pixels , regardless of whether or not this information has changed since the last frame had been acquired , which is usually not long ago . this method obviously leads , depending on the dynamic contents of the scene , to a more or less high degree of redundancy in the image data .
acquisition and handling of these dispensable data consume valuable resources ; sophisticated and resource-hungry video compression methods have been developed to deal with these data . story_separator_special_tag an arbitrated address-event imager has been designed and fabricated in a 0.6 µm cmos process . the imager is composed of 80 × 60 pixels of 32 × 30 µm . the value of the light intensity collected by each photosensitive element is inversely proportional to the pixel 's interspike time interval . the readout of each spike is initiated by the individual pixel ; therefore , the available output bandwidth is allocated according to pixel output demand . this encoding of light intensities favors brighter pixels , equalizes the number of integrated photons across light intensity , and minimizes power consumption . tests conducted on the imager showed a large output dynamic range of 180 db ( under bright local illumination ) for an individual pixel . the array , on the other hand , produced a dynamic range of 120 db ( under uniform bright illumination and when no lower bound was placed on the update rate per pixel ) . the dynamic range is 48.9 db at 30-pixel updates/s . power consumption is 3.4 mw in uniform indoor light and a mean event rate of 200 khz , which updates each pixel 41.6 story_separator_special_tag this paper presents a frame-free time-domain imaging approach designed to alleviate the non-ideality of finite exposure measurement time ( intrinsic to all integrating imagers ) , limiting the temporal resolution of the atis asynchronous time-based image sensor concept . the method uses the time-domain correlated double sampling ( tcds ) and change detection circuitry already present in the data-driven autonomous atis pixels and does not involve any additional data to be transmitted by the sensor , but is entirely based on the data available in normal operation . three consecutive exposure estimation / measurement steps apply different trade-offs between measurement speed , accuracy and noise . the early estimates yield between 10 and 100 times faster pixel updates than the standard full-swing integrating exposure measurement operation . the results from the three individual measurement steps can be used separately or in combination , enabling event-driven asynchronous high-speed imaging at moderate light levels . story_separator_special_tag this paper proposes a cmos vision sensor that combines event-driven asynchronous readout of temporal contrast with synchronous frame-based active pixel sensor ( aps ) readout of intensity . the image frames can be used for scene content analysis and the temporal contrast events can be used to track fast moving objects , to adjust the frame rate , or to guide a region of interest readout ; the sensor is therefore suitable for mobile applications because it allows low latency at low data rate and low system-level power consumption . sharing the photodiode for both readout types allows a compact pixel design that is 60 % smaller than a comparable design . the 240 × 180 sensor has a power consumption of 10mw . it is built in 0.18 µm technology with 18.5 µm pixels . the temporal contrast pathway has a minimum latency of 12 µs , a dynamic range of 120db , 12 % contrast detection threshold and 3.5 % contrast matching . the aps readout has 55db dynamic range with 1 % fpn .
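a hybrid sensor of this kind hands the application two interleaved outputs , so a small helper that merges time-sorted frames and events into one ordered stream lets a hybrid algorithm consume both in a single loop . the api below is assumed for illustration and is not taken from any of the cited works .

```python
# merge time-sorted frame and event streams into one ordered stream
import heapq

def merged_stream(frames, events):
    """frames: iterable of (t, image); events: iterable of (t, x, y, polarity);
    both assumed sorted by timestamp. yields tagged items in time order."""
    f = (('frame', t, img) for (t, img) in frames)
    e = (('event', t, x, y, p) for (t, x, y, p) in events)
    yield from heapq.merge(f, e, key=lambda item: item[1])

# usage with dummy data
frames = [(0.0, 'img0'), (0.033, 'img1')]
events = [(0.010, 5, 7, 1), (0.020, 6, 7, -1)]
for item in merged_stream(frames, events):
    print(item)   # frame at 0.0, then the two events, then frame at 0.033
```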
story_separator_special_tag cmos active pixel sensors ( aps ) have performance competitive with charge-coupled device ( ccd ) technology , and offer advantages in on-chip functionality , system power reduction , cost , and miniaturization . this paper discusses the requirements for cmos image sensors and their historical development . cmos devices and circuits for pixels , the analog signal chain , and on-chip analog-to-digital conversion are reviewed and discussed . story_separator_special_tag this paper describes caviar , a massively parallel hardware implementation of a spike-based sensing-processing-learning-actuating system inspired by the physiology of the nervous system . caviar uses the asynchronous address-event representation ( aer ) communication framework and was developed in the context of a european union funded project . it has four custom mixed-signal aer chips , five custom digital aer interface components , 45 k neurons ( spiking cells ) , up to 5 m synapses , performs 12 g synaptic operations per second , and achieves millisecond object recognition and tracking latencies . story_separator_special_tag balancing a normal pencil on its tip requires rapid feedback control with latencies on the order of milliseconds . this demonstration shows how a pair of spike-based silicon retina dynamic vision sensors ( dvs ) is used to provide fast visual feedback for controlling an actuated table to balance an ordinary pencil . two dvss view the pencil from right angles . movements of the pencil cause spike address-events ( aes ) to be emitted from the dvss . these aes are transmitted to a pc over usb interfaces and are processed procedurally in real time . the pc updates its estimate of the pencil 's location and angle in 3d space upon each incoming ae , applying a novel tracking method based on spike-driven fitting to a model of the vertical shape of the pencil . a pd-controller adjusts x-y-position and velocity of the table to maintain the pencil balanced upright . the controller also minimizes the deviation of the pencil 's base from the center of the table . the actuated table is built using ordinary high-speed hobby servos which have been modified to obtain feedback from linear position encoders via a microcontroller . our system can balance story_separator_special_tag robust perception-action models should be learned from training data with diverse visual appearances and realistic behaviors , yet current approaches to deep visuomotor policy learning have been generally limited to in-situ models learned from a single vehicle or a simulation environment . we advocate learning a generic vehicle motion model from large scale crowd-sourced video data , and develop an end-to-end trainable architecture for learning to predict a distribution over future vehicle egomotion from instantaneous monocular camera observations and previous vehicle state . our model incorporates a novel fcn-lstm architecture , which can be learned from large-scale crowd-sourced vehicle action data , and leverages available scene segmentation side tasks to improve performance under a privileged learning paradigm . story_separator_special_tag the effect of temperature and parasitic photocurrent on event-based dynamic vision sensors ( dvs ) is important because of their application in uncontrolled robotic , automotive , and surveillance applications . this paper considers the temperature dependence of dvs threshold temporal contrast ( tc ) , dark current , and background activity caused by junction leakage .
new theory shows that if bias currents have a constant ratio , then ideally the dvs threshold tc is temperature independent , but the presence of temperature dependent junction leakage currents causes nonideal behavior at elevated temperature . both measured photodiode dark current and leakage-induced event activity follow arrhenius activation . this paper also defines a new metric for parasitic photocurrent quantum efficiency and measures the sensitivity of dvs pixels to parasitic photocurrent . story_separator_special_tag we thank the reviewers for their careful analysis of [ 1 ] , especially for spotting two errors in the formulas for inferring temporal contrast effects of leak and parasitic photocurrent . the revised results increase the values of the inferred parasitic leak currents by a factor of about 11× . story_separator_special_tag a dynamic vision sensor ( dvs ) encodes temporal contrast ( tc ) of light intensity into address-events that are asynchronously transmitted for subsequent processing . this paper describes a dvs with improved tc sensitivity and event encoding . to enhance the tc sensitivity , each pixel employs a common-gate photoreceptor for low output noise and a capacitively-coupled programmable gain amplifier for continuous-time signal amplification without sacrificing the intra-scene dynamic range . a proposed in-pixel asynchronous delta modulator ( adm ) better preserves signal integrity in event encoding compared with self-timed reset ( str ) used in previous dvss . a 60 × 30 prototype sensor array with a 31.2 µm pixel pitch was fabricated in a 1p6m 0.18 µm cmos technology . it consumes 720 µw at a 100k event/s output rate . measurements show that a 1 % tc sensitivity with a 35 % relative standard deviation is achieved and that the in-pixel adm is up to 3.5 times less susceptible to signal loss than str in terms of event number . these improvements can facilitate the application of dvss in areas like optical neuroimaging , which is demonstrated in a simulated experiment . story_separator_special_tag applications requiring detection of small visual contrast require high sensitivity . event cameras can provide higher dynamic range ( dr ) and reduce data rate and latency , but most existing event cameras have limited sensitivity . this paper presents the results of a 180-nm towerjazz cis process vision sensor called sdavis192 . it outputs temporal contrast dynamic vision sensor ( dvs ) events and conventional active pixel sensor frames . the sdavis192 improves on previous davis sensors with higher sensitivity for temporal contrast . the temporal contrast thresholds can be set down to 1 % for negative changes in logarithmic intensity ( off events ) and down to 3.5 % for positive changes ( on events ) . the achievement is possible through the adoption of an in-pixel preamplification stage . this preamplifier reduces the effective intrascene dr of the sensor ( 70 db for off and 50 db for on ) , but an automated operating region control allows up to at least 110-db dr for off events . a second contribution of this paper is the development of characterization methodology for measuring dvs event detection thresholds by incorporating a measure of signal-to-noise ratio ( snr ) . at average story_separator_special_tag the series of lectures on the process of vision in both human and electronic systems was based predominantly on a number of publications in scattered parts of the literature .
several of these papers are reproduced here and serve , at least , the convenience of juxtaposition . story_separator_special_tag event cameras provide asynchronous , data-driven measurements of local temporal contrast over a large dynamic range with extremely high temporal resolution . conventional cameras capture low-frequency reference intensity information . these two sensor modalities provide complementary information . we propose a computationally efficient , asynchronous filter that continuously fuses image frames and events into a single high-temporal-resolution , high-dynamic-range image state . in the absence of conventional image frames , the filter can be run on events only . we present experimental results on high-speed , high-dynamic-range sequences , as well as on new ground truth datasets we generate , to demonstrate that the proposed algorithm outperforms existing state-of-the-art methods . story_separator_special_tag spatial convolution is arguably the most fundamental of two-dimensional image processing operations . conventional spatial image convolution can only be applied to a conventional image , that is , an array of pixel values ( or similar image representation ) that are associated with a single instant in time . event cameras have serial , asynchronous output with no natural notion of an image frame , and each event arrives with a different timestamp . in this letter , we propose a method to compute the convolution of a linear spatial kernel with the output of an event camera . the approach operates on the event stream output of the camera directly without synthesising pseudoimage frames as is common in the literature . the key idea is the introduction of an internal state that directly encodes the convolved image information , which is updated asynchronously as each event arrives from the camera . the state can be read off as often as and whenever required for use in higher level vision algorithms for real-time robotic systems . we demonstrate the application of our method to corner detection , providing an implementation of a harris corner-response state that can be used story_separator_special_tag we present a method that leverages the complementarity of event cameras and standard cameras to track visual features with low latency . event cameras are novel sensors that output pixel-level brightness changes , called events . they offer significant advantages over standard cameras , namely a very high dynamic range , no motion blur , and a latency in the order of microseconds . however , because the same scene pattern can produce different events depending on the motion direction , establishing event correspondences across time is challenging . by contrast , standard cameras provide intensity measurements ( frames ) that do not depend on motion direction . our method extracts features on frames and subsequently tracks them asynchronously using events , thereby exploiting the best of both types of data : the frames provide a photometric representation that does not depend on motion direction and the events provide low-latency updates . in contrast to previous works , which are based on heuristics , this is the first principled method that uses raw intensity measurements directly , based on a generative event model within a maximum-likelihood framework .
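the internal-state convolution described above comes down to a few lines : keep a running convolved-image state and , for each incoming event , add the polarity-scaled kernel at the event location . the sketch below omits the temporal decay a practical filter would need , and the array sizes and kernel are illustrative .

```python
# sketch of asynchronous event-driven spatial convolution via an internal state
import numpy as np

class EventConvolution:
    def __init__(self, shape, kernel):
        self.state = np.zeros(shape)
        self.kernel = np.asarray(kernel, dtype=float)
        self.kh, self.kw = (s // 2 for s in self.kernel.shape)

    def push(self, x, y, polarity):
        """fold one event into the running convolution state."""
        h, w = self.state.shape
        y0, y1 = max(y - self.kh, 0), min(y + self.kh + 1, h)
        x0, x1 = max(x - self.kw, 0), min(x + self.kw + 1, w)
        # crop the kernel to match the clipped image window at the border
        k = self.kernel[self.kh - (y - y0): self.kh + (y1 - y),
                        self.kw - (x - x0): self.kw + (x1 - x)]
        self.state[y0:y1, x0:x1] += polarity * k

# usage: the state can be read at any time, as often as required
sobel_x = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]
conv = EventConvolution((180, 240), sobel_x)
conv.push(10, 10, +1)   # state now holds the running convolution response
```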
as a result , our method produces feature tracks that are both more story_separator_special_tag event cameras are novel bio-inspired vision sensors that output pixel-level intensity changes , called events , instead of traditional video images . these asynchronous sensors naturally respond to motion in the scene with very low latency ( microseconds ) and have a very high dynamic range . these features , along with a very low power consumption , make event cameras an ideal sensor for fast robot localization and wearable applications , such as ar/vr and gaming . considering these applications , we present a method to track the 6-dof pose of an event camera in a known environment , which we assume to be described by a photometric 3d map ( i.e. , intensity plus depth information ) built via classic dense 3d reconstruction algorithms . our approach uses the raw events , directly , without intermediate features , within a maximum-likelihood framework to estimate the camera motion that best explains the events via a generative model . we successfully evaluate the method using both simulated and real data , and show improved results over the state of the art . we release the datasets to the public to foster reproducibility and research on this topic . story_separator_special_tag event-based vision sensors mimic the operation of the biological retina and they represent a major paradigm shift from traditional cameras . instead of providing frames of intensity measurements synchronously , at artificially chosen rates , event-based cameras provide information on brightness changes asynchronously , when they occur . such non-redundant pieces of information are called `` events '' . these sensors overcome some of the limitations of traditional cameras ( response time , bandwidth and dynamic range ) but require new methods to deal with the data they output . we tackle the problem of event-based camera localization in a known environment , without additional sensing , using a probabilistic generative event model in a bayesian filtering framework . our main contribution is the design of the likelihood function used in the filter to process the observed events . based on the physical characteristics of the sensor and on empirical evidence of the gaussian-like distribution of spiked events with respect to the brightness change , we propose to use the contrast residual as a measure of how well the estimated pose of the event-based camera and the environment explain the observed events . the filter allows for localization in the general story_separator_special_tag this paper presents a general overview of substrate integrated transmission lines , from the perspective of historical background and progress of guided-wave structures and their impacts on the development of microwave circuits and integration solutions . this is highlighted through a technology roadmap involving the categorized five generations of microwave circuits . in particular , the substrate integration technologies are reviewed and discussed with focus on technical features , design highlights , component developments , structures evolution , and systems integration . a number of examples are presented to showcase some of the selected milestone research and development activities and accomplishments in connection with substrate integrated transmission line technologies , with particular focus on substrate integrated waveguide ( siw ) techniques .
practical applications and industrial interests are also presented with key references and technical results , which show more and more product developments in the end-user sectors . it can be found that the popularity of siw techniques is closely related to the achieved seamless integration of planar and non-planar structures into a unified design space , thereby allowing the possibility of combining major advantages of all the structures while alleviating their potential drawbacks . the future perspectives of story_separator_special_tag we demonstrate a high resolution dynamic vision sensor ( dvs ) with 768 × 640 pixels , and 200 meps ( million events per second ) high speed readout . the sensor has a dual-channel synchronous interface and can operate at 100 mhz . it has a few unique features , namely a three-in-one ( coordinate , brightness and time stamp ) event packet , the capability of producing full-array picture-on-demand [ 1 ] and on-chip optical flow computation . the sensor will find broad applications in real-time machine vision . story_separator_special_tag we demonstrate a new-generation smart image sensor , celex-v . with 1280 × 800 pixels , 9.8 µm pitch , the sensor integrates several vision functions into one chip , such as full-array-parallel motion detection and on-chip optical flow extraction . celex-v is also capable of producing high-quality full-frame pictures and thus is compatible with traditional picture-based algorithms . the sensor supports both mipi and parallel interfaces , with typical 400mw power consumption . story_separator_special_tag this demonstration shows how an inexpensive high frame-rate usb camera is used to emulate existing and proposed activity-driven event-based vision sensors . a ps3-eye camera which runs at a maximum of 125 frames/second with colour qvga ( 320 × 240 ) resolution is used to emulate several event-based vision sensors , including a dynamic vision sensor ( dvs ) , a colour-change sensitive dvs ( cdvs ) , and a hybrid vision sensor with dvs+cdvs pixels . the emulator is integrated into the jaer software project for event-based real-time vision and is used to study use cases for future vision sensor designs . story_separator_special_tag event cameras are revolutionary sensors that work radically differently from standard cameras . instead of capturing intensity images at a fixed rate , event cameras measure changes of intensity asynchronously , in the form of a stream of events , which encode per-pixel brightness changes . in the last few years , their outstanding properties ( asynchronous sensing , no motion blur , high dynamic range ) have led to exciting vision applications , with very low latency and high robustness . however , these sensors are still scarce and expensive to get , slowing down progress of the research community . to address these issues , there is a huge demand for cheap , high-quality , synthetic , labeled event data for algorithm prototyping , deep learning and algorithm benchmarking . the development of such a simulator , however , is not trivial since event cameras work fundamentally differently from frame-based cameras . we present the first event camera simulator that can generate a large amount of reliable event data .
the key component of our simulator is a theoretically sound , adaptive rendering scheme that only samples frames when necessary , through a tight coupling between the rendering engine and the story_separator_special_tag the agility of a robotic system is ultimately limited by the speed of its processing pipeline . the use of a dynamic vision sensor ( dvs ) , a sensor producing asynchronous events as luminance changes are perceived by its pixels , makes it possible to have a sensing pipeline of a theoretical latency of a few microseconds . however , several challenges must be overcome : a dvs does not provide the grayscale value but only changes in the luminance ; and because the output is composed of a sequence of events , traditional frame-based visual odometry methods are not applicable . this paper presents the first visual odometry system based on a dvs plus a normal cmos camera to provide the absolute brightness values . the two sources of data are automatically spatiotemporally calibrated from logs taken during normal operation . we design a visual odometry method that uses the dvs events to estimate the relative displacement since the previous cmos frame by processing each event individually . experiments show that the rotation can be estimated with surprising accuracy , while the translation can be estimated only very noisily , because it produces few events due to very story_separator_special_tag spiking sensors such as the silicon retina and cochlea encode analog signals into massively parallel asynchronous spike train output where the information is contained in the precise spike timing . the variation of the spike timing that arises from spike transmission degrades signal encoding quality . using the signal-to-distortion ratio ( sdr ) metric with nonlinear spike train decoding based on frame theory , two particular sources of delay variation , comparison delay t_dc and queueing delay t_dq , are evaluated on two encoding mechanisms which have been used for implementations of silicon array spiking sensors : asynchronous delta modulation and self-timed reset . as specific examples , t_dc is obtained from a 2t current-mode comparator , and t_dq is obtained from an m/d/1 queue for 1-d sensors like the silicon cochlea and an m^x/d/1 queue for 2-d sensors like the silicon retina . quantitative relations between the sdr and the circuit and story_separator_special_tag dynamic and active pixel vision sensors ( daviss ) are a new type of sensor that combine a frame-based intensity readout with an event-based temporal contrast readout . this paper demonstrates that these sensors inherently perform high-speed video compression in each pixel by describing the first decompression algorithm for this data . the algorithm performs an online optimization of the event decoding in real time . example scenes were recorded by the 240 × 180 pixel sensor at sub-hz frame rates and successfully decompressed , yielding an equivalent frame rate of 2khz . a quantitative analysis of the compression quality resulted in an average pixel error of 0.5dn intensity resolution for non-saturating stimuli . the system exhibits an adaptive compression ratio which depends on the activity in a scene ; for stationary scenes it can go up to 1862 . the low data rate and power consumption of the proposed video compression system make it suitable for distributed sensor networks .
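the `` per-pixel video compression '' view above suggests a toy decoder : start from a low-rate intensity keyframe and integrate subsequent events , each worth one contrast-threshold step in log intensity , to synthesize intermediate frames at a much higher equivalent frame rate . this sketch ignores the online optimization of the event decoding that the paper performs ; the threshold is an assumed constant .

```python
# toy davis "decompression": keyframe plus integrated events -> dense frames
import numpy as np

def decompress(keyframe, events, sample_times, theta=0.15, eps=1e-3):
    """keyframe: 2d intensity array at t=0; events: time-sorted (x, y, t, pol);
    returns one reconstructed frame per requested sample time."""
    logi = np.log(keyframe + eps)
    out, i = [], 0
    for t_s in sample_times:
        while i < len(events) and events[i][2] <= t_s:
            x, y, _, pol = events[i]
            logi[y, x] += pol * theta     # each event = one threshold step
            i += 1
        out.append(np.exp(logi) - eps)
    return out
```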
story_separator_special_tag two in-pixel encoding mechanisms to convert analog input to spike output for vision sensors are modeled and compared with the consideration of feedback delay : one is feedback and reset ( far ) , and the other is feedback and subtract ( fas ) . matlab simulations of linear signal reconstruction from spike trains generated by the two encoders show that far in general has a lower signal-to-distortion ratio ( sdr ) compared to fas due to signal loss during the reset phase and hold period , and the sdr merit of fas increases as the quantization bit number and input signal frequency increase . a 500 µm2 in-pixel circuit implementation of fas using asynchronous switched capacitors in a umc 0.18 µm 1p6m process is described , and the post-layout simulation results are given to verify the fas encoding mechanism . story_separator_special_tag conventional image sensors produce massive amounts of redundant data and are limited in temporal resolution by the frame rate . this paper reviews our recent breakthrough in the development of a high-performance spike-event based dynamic vision sensor ( dvs ) that discards the frame concept entirely , and then describes novel digital methods for efficient low-level filtering and feature extraction and high-level object tracking that are based on the dvs spike events . these methods filter events , label them , or use them for object tracking . filtering reduces the number of events but improves the ratio of informative events . labeling attaches additional interpretation to the events , e.g . orientation or local optical flow . tracking uses the events to track moving objects . processing occurs on an event-by-event basis and uses the event time and identity as the basis for computation . a common memory object for filtering and labeling is a spatial map of most recent past event times . processing methods typically use these past event times together with the present event in integer branching logic to filter , label , or synthesize new events . these methods are straightforwardly computed on serial story_separator_special_tag neuromorphic vision sensors are an emerging technology inspired by how the retina processes images . a neuromorphic vision sensor only reports when a pixel value changes rather than continuously outputting the value every frame as is done in an `` ordinary '' active pixel sensor ( aps ) . this move from a continuously sampled system to an asynchronous event-driven one effectively allows for much faster sampling rates ; it also fundamentally changes the sensor interface . in particular , these sensors are highly sensitive to noise , as any additional event reduces the bandwidth , and thus effectively lowers the sampling rate . in this work we introduce a novel spatiotemporal filter with o ( n ) memory complexity for reducing background activity noise in neuromorphic vision sensors . our design consumes 10× less memory and has a 100× reduction in error compared to previous designs . our filter is also capable of recovering real events and can pass up to 180 % more real events . story_separator_special_tag bio-inspired address event representation change detection image sensors , also known as silicon retinae , have matured to the point where they can be purchased commercially , and are easily operated by laymen . noise is present in the output of these sensors , and improved noise filtering will enhance performance in many applications .
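the filtering methods above all rely on the same memory object mentioned earlier : a spatial map of the most recent past event times . a minimal sketch of the classic background-activity filter built on such a map ( the window length and event layout are assumed , and this is not the exact o ( n ) design of the paper ) :

```python
import numpy as np

def background_activity_filter(events, width, height, dt=5000):
    """keep a dvs event only if some pixel in its 8-neighbourhood (or the pixel
    itself, via its previous event) fired within the last dt microseconds;
    everything else is treated as background-activity noise.
    events: iterable of (t_us, x, y, polarity) in time order."""
    last_ts = np.full((height, width), -np.inf)        # per-pixel last event time
    kept = []
    for t, x, y, p in events:
        x, y = int(x), int(y)
        window = last_ts[max(0, y - 1):y + 2, max(0, x - 1):x + 2]
        if np.any(t - window <= dt):                   # recent nearby activity?
            kept.append((t, x, y, p))
        last_ts[y, x] = t                              # update the timestamp map
    return kept
```

isolated noise events have no recent neighbours and are dropped , while events on moving edges are supported by their neighbourhood ; the memory cost is one timestamp per pixel .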
a novel approach is proposed for quantifying the quality of data received from a silicon retina , and quantifying the performance of different noise filtering algorithms . we present a test rig which repetitively records printed test patterns , along with a method for averaging over repeated recordings to estimate the likelihood of an event being signal or noise . the calculated signal and noise probabilities are used to quantitatively compare the performance of 8 different filtering algorithms while varying each filter 's parameters . we show how the choice of best filter and parameters varies as a function of the stimulus , particularly the temporal rate of change of intensity for a pixel , especially when the assumption of sharp temporal edges is not valid . story_separator_special_tag asynchronous event-based sensors , or `` silicon retinae , '' are a new class of vision sensors inspired by biological vision systems . the output of these sensors often contains a significant number of noise events along with the signal . filtering these noise events is a common preprocessing step before using the data for tasks such as tracking and classification . this paper presents a novel spiking neural network-based approach to filtering noise events from data captured by an asynchronous time-based image sensor on a neuromorphic processor , the ibm truenorth neurosynaptic system . the significant contribution of this work is that it demonstrates our proposed filtering algorithm outperforms the traditional nearest neighbor noise filter in achieving a higher signal-to-noise ratio ( ~10 db higher ) and retaining the events related to signal ( ~3x more ) . in addition , for our envisioned application of object tracking and classification under some parameter settings , it can also generate some of the missing events in the spatial neighborhood of the signal for all classes of moving objects in the data , which are unattainable using the nearest neighbor filter . story_separator_special_tag neuromorphic spike event-based dynamic vision sensors ( dvs ) offer the possibility of fast , computationally efficient visual processing for navigation in mobile robotics . to extract motion parallax cues relating to 3d scene structure , the uninformative camera rotation must be removed from the visual input to allow the un-blurred features and informative relative optical flow to be analyzed . here we describe the integration of an inertial measurement unit ( imu ) with a 240 × 180 pixel dvs . the algorithm for electronic stabilization of the visual input against camera rotation is described . examples are presented showing the stabilization performance of the system . story_separator_special_tag this paper studies the suitability of neuromorphic event-based vision cameras for spaceflight and the effects of neutron radiation on their performance . neuromorphic event-based vision cameras are novel sensors that implement asynchronous , clockless data acquisition , providing information about the change in illuminance ( ≥ 120 db ) with sub-millisecond temporal precision . these sensors have huge potential for space applications as they provide an extremely sparse representation of visual dynamics while removing redundant information , thereby conforming to low-resource requirements . an event-based sensor was irradiated under wide-spectrum neutrons at los alamos neutron science center and its effects were classified .
radiation-induced damage of the sensor under wide-spectrum neutrons was tested , as was the radiative effect on the signal-to-noise ratio of the output at different angles of incidence from the beam source . we found that the sensor had very fast recovery during radiation , showing high correlation of noise event bursts with respect to source macro-pulses . no statistically significant differences were observed between the number of events induced at different angles of incidence but significant differences were found in the spatial structure of noise events at different angles . the results show that event-based cameras are story_separator_special_tag back side illumination has become standard image sensor technology owing to its superior quantum efficiency and fill factor . a direct comparison of front and back side illumination ( fsi and bsi ) used in event-based dynamic and active pixel vision sensors ( davis ) is interesting because of the potential of bsi to greatly increase the small 20 % fill factor of these complex pixels . this brief compares identically designed front and back illuminated davis silicon retina vision sensors . they are compared in terms of quantum efficiency ( qe ) , leak activity and modulation transfer function ( mtf ) . the bsi davis achieves a peak qe of 93 % compared with the fsi davis peak qe of 24 % , but reduced mtf , due to pixel crosstalk and parasitic photocurrent . significant leak events in the bsi davis limit its use to controlled illumination scenarios without very bright light sources . effects of parasitic photocurrent and modulation transfer functions with and without ir cut filters are also reported . story_separator_special_tag the circuit described in this paper uses a `` verta-color '' stacked two-diode structure to measure relative long and short wavelength spectral content . the p-type source-drain to nwell forms the top diode and the nwell-psubstrate diode forms the bottom diode . the circuit output is a digital pwm signal whose frequency encodes absolute intensity and whose duty cycle encodes the relative photodiode current . this signal is formed by a self-timed circuit that alternately discharges the top and bottom photodiodes . this circuit was fabricated in a standard 3m 2p 0.5 µm cmos process . monochromatic stimulation shows that the duty cycle varies between 50 % and 7 % as the photon wavelength is varied from 400 nm to 750 nm . the output frequency is 150 hz at an incident irradiance of 1.7 w/m2 . chip-to-chip variation of pwm duty cycle and frequency is about 1 % measured over 5 chips . power consumption is 20 µw . a modified version of this circuit could form the basis for simple color vision sensors built in widely-available vanilla cmos . story_separator_special_tag this paper proposes a simple focal plane pattern detector architecture using a novel pixel sensor based on the dichromatic vertacolor structure . additionally , the sensor transfers dichromatic intensity values using a self-timed time-to-first-spike scheme , which provides high dynamic range imaging . the intensity information is transmitted using the address event representation protocol . the spectral information is sampled automatically at each intensity reading in a ratioed way that maintains high dynamic range . a test chip consisting of 20 pixels has been fabricated in 1.5 µm 2p 2m cmos and characterized . the combined pattern detector / imager core consumes 45 µa at a 5 v supply voltage .
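the time-to-first-spike readout used above can be decoded with one division per pixel : an ideal pixel integrating a photocurrent proportional to intensity crosses its threshold at a latency inversely proportional to that intensity . a minimal sketch under that idealized model ( the threshold and timeout values are assumptions ) :

```python
import numpy as np

def decode_ttfs(first_spike_times, threshold=1.0, t_max=1.0):
    """decode intensity from time-to-first-spike latencies: an ideal pixel
    crosses `threshold` at t = threshold / intensity, so intensity =
    threshold / t. pixels that never spiked before t_max are set to zero."""
    t = np.asarray(first_spike_times, dtype=np.float64)
    intensity = np.zeros_like(t)
    valid = (t > 0) & (t <= t_max)
    intensity[valid] = threshold / t[valid]
    return intensity

# latencies spanning four decades map to intensities spanning four decades,
# which is where the high dynamic range of this encoding comes from:
print(decode_ttfs([1e-4, 1e-2, 1.0]))   # -> [10000.   100.     1.]
```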
story_separator_special_tag this article investigates the potential of a bio-inspired vision sensor with pixels that detect transients between three primary colors . the in-pixel color processing is inspired by the retinal color opponency that is found in mammalian retinas . color transitions in a pixel are represented by voltage spikes , which are akin to a neuron 's action potential . these spikes are conveyed off-chip by the address event representation ( aer ) protocol . to achieve sensitivity to three different color spectra within the visual spectrum , each pixel has three stacked photodiodes at different depths in the silicon substrate . the sensor has been fabricated in the standard tsmc 90 nm cmos technology . a post-processing method to decode events into color transitions has been proposed and implemented as a custom interface to display real-time color changes in the visual scene . experimental results are provided . color transitions can be detected at high speed ( up to 2.7 khz ) . the sensor has a dynamic range of 58 db and a power consumption of 22.5 mw . this type of sensor can be of use in industrial , robotics , automotive and other applications where essential information story_separator_special_tag a digital imager apparatus uses the differences in absorption length in silicon of light of different wavelengths for color separation . a preferred imaging array is based upon a three-color pixel sensor using a triple-well structure . the array results in elimination of color aliasing by measuring each of the three primary colors ( rgb ) in each pixel in the same location . story_separator_special_tag in the two centuries of photography , there has been a wealth of invention and innovation aimed at capturing a realistic and pleasing full-color two-dimensional representation of a scene . in this paper , we look back at the historical milestones of color photography and bring into focus a fascinating parallelism between the evolution of chemical-based color imaging starting over a century ago , and the evolution of electronic photography which continues today . the second part of our paper is dedicated to a technical discussion of the new foveon x3 multilayer color image sensor ; what could be described as a new , more advanced species of camera sensor technology . the x3 technology is compared to other competing sensor technologies ; we compare spectral sensitivities using one of many possible figures of merit . finally we show and describe how , like the human visual system , the foveon x3 sensor has an inherent luminance-chrominance behavior which results in higher image quality using fewer image pixels . story_separator_special_tag this paper reports the design of a color dynamic and active-pixel vision sensor ( c-davis ) for robotic vision applications . the c-davis combines monochrome event-generating dynamic vision sensor pixels and 5-transistor active pixel sensor ( aps ) pixels patterned with an rgbw color filter array . the c-davis concurrently outputs rolling or global shutter rgbw coded vga resolution frames and asynchronous monochrome qvga resolution temporal contrast events . hence the c-davis is able to capture spatial details with color and track movements with high temporal resolution while keeping the data output sparse and fast . the c-davis chip is fabricated in towerjazz 0.18 µm cmos image sensor technology . an rgbw 2 × 2 pixel unit measures 20 µm × 20 µm . the chip die measures 8 mm × 6.2 mm .
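the vertacolor , triple-well and foveon-style devices above all separate color by absorption depth , so each stacked diode sees a broad spectral mixture rather than a single primary . recovering rgb is then a linear unmixing step ; a minimal sketch with a purely illustrative 3 × 3 mixing matrix ( real sensors measure this matrix during calibration ) :

```python
import numpy as np

# hypothetical calibration: row i is the response of stacked diode i
# (top / middle / bottom) to pure red, green and blue light.
MIX = np.array([[0.15, 0.35, 0.70],    # top diode: strongest blue response
                [0.30, 0.55, 0.25],    # middle diode: strongest green response
                [0.75, 0.30, 0.10]])   # bottom diode: strongest red response

UNMIX = np.linalg.inv(MIX)             # 3x3 spectral unmixing matrix

def diodes_to_rgb(diode_responses):
    """convert stacked-diode responses of shape (h, w, 3) to rgb estimates by
    inverting the linear mixing model; negatives caused by noise are clipped."""
    rgb = diode_responses @ UNMIX.T
    return np.clip(rgb, 0.0, None)
```

because the spectral curves of stacked diodes overlap strongly , the unmixing matrix has large off-diagonal terms and amplifies noise ; this is the usual price of depth-based color separation compared with color filter arrays .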
story_separator_special_tag this paper introduces the first simulations and measurements of event data obtained from the first dynamic and active vision sensors ( davis ) with rgbw color filters . the absolute quantum efficiency spectral responses of the rgbw photodiodes were measured , the behavior of the color-sensitive dvs pixels was simulated and measured , and reconstruction through color event interpolation was developed . story_separator_special_tag event cameras are novel , bio-inspired visual sensors , whose pixels output asynchronous and independent timestamped spikes at local intensity changes , called `` events '' . event cameras offer advantages over conventional frame-based cameras in terms of latency , high dynamic range ( hdr ) and temporal resolution . until recently , event cameras have been limited to outputting events in the intensity channel ; however , recent advances have resulted in the development of color event cameras , such as the color-davis346 . in this work , we present and release the first color event camera dataset ( ced ) , containing 50 minutes of footage with both color frames and events . ced features a wide variety of indoor and outdoor scenes , which we hope will help drive forward event-based vision research . we also present an extension of the event camera simulator esim that enables simulation of color events . finally , we present an evaluation of three state-of-the-art image reconstruction methods that can be used to convert the color-davis346 into a continuous-time , hdr , color video camera to visualise the event stream , and for use in downstream vision applications . story_separator_special_tag this paper introduces a color asynchronous neuromorphic event-based camera and a methodology to process color output from the device to perform color segmentation and tracking at the native temporal resolution of the sensor ( down to one microsecond ) . our color vision sensor prototype is a combination of three asynchronous time-based image sensors , sensitive to absolute color information . we devise a color processing algorithm leveraging this information . it is designed to be computationally cheap , thus showing how low level processing benefits from asynchronous acquisition and high temporal resolution data . the resulting color segmentation and tracking performance is assessed both with an indoor controlled scene and two outdoor uncontrolled scenes . the tracking 's mean error to the ground truth for the objects of the outdoor scenes ranges from two to twenty pixels . story_separator_special_tag in this paper , a novel event-based dynamic ir vision sensor is presented . the device combines an uncooled microbolometer array with biology-inspired ( `` neuromorphic '' ) readout circuitry to implement an asynchronous , `` spiking '' vision sensor for the 8-15 µm thermal infrared spectral range . the sensor 's autonomous pixels independently respond to changes in thermal ir radiation and communicate detected variations in the form of asynchronous `` address-events '' . the 64 × 64 pixel roic chip has been fabricated in a 0.35 µm 2p4m standard cmos process , covers about 4 × 4 mm2 of silicon area and consumes 8 mw of power . an amorphous silicon ( a-si ) microbolometer array has been processed on top of the roic and contacted to the pixel circuits . we discuss the bolometer detector properties , describe the pixel circuits and the implemented sensor architecture , and show measurement results of the readout circuits .
subsequently , a dft-based approach to the characterization of asynchronous , spiking sensor arrays is discussed and applied . test results and analysis of sensitivity , bandwidth , and noise of the fabricated ir sensor prototype are presented . story_separator_special_tag new vision sensors , such as the dynamic and active-pixel vision sensor ( davis ) , incorporate a conventional global-shutter camera and an event-based sensor in the same pixel array . these sensors have . story_separator_special_tag event cameras are novel vision sensors that output pixel-level brightness changes ( `` events '' ) instead of traditional video frames . these asynchronous sensors offer several advantages over traditional cameras , such as high temporal resolution , very high dynamic range , and no motion blur . to unlock the potential of such sensors , motion compensation methods have been recently proposed . we present a collection and taxonomy of twenty-two objective functions to analyze event alignment in motion compensation approaches . we call them focus loss functions since they have strong connections with functions used in traditional shape-from-focus applications . the proposed loss functions allow bringing mature computer vision tools to the realm of event cameras . we compare the accuracy and runtime performance of all loss functions on a publicly available dataset , and conclude that the variance , the gradient and the laplacian magnitudes are among the best loss functions . the applicability of the loss functions is shown on multiple tasks : rotational motion , depth and optical flow estimation . the proposed focus loss functions allow us to unlock the outstanding properties of event cameras . story_separator_special_tag event cameras are vision sensors that record asynchronous streams of per-pixel brightness changes , referred to as `` events '' . they have appealing advantages over frame-based cameras for computer vision , including high temporal resolution , high dynamic range , and no motion blur . due to the sparse , non-uniform spatio-temporal layout of the event signal , pattern recognition algorithms typically aggregate events into a grid-based representation and subsequently process it by a standard vision pipeline , e.g. , a convolutional neural network ( cnn ) . in this work , we introduce a general framework to convert event streams into grid-based representations by means of strictly differentiable operations . our framework comes with two main advantages : ( i ) it allows learning the input event representation together with the task-dedicated network in an end-to-end manner , and ( ii ) it lays out a taxonomy that unifies the majority of extant event representations in the literature and identifies novel ones . empirically , we show that our approach to learning the event representation end-to-end yields an improvement of approximately 12 % on optical flow estimation and object recognition over state-of-the-art methods . story_separator_special_tag we propose a novel algorithm for robot self-localization using an embedded event-based sensor . this sensor produces a stream of events at microsecond time resolution which only represents pixel-level illumination changes in a scene , e.g . as caused by perceived motion . this is in contrast to classical image sensors , which wastefully transmit redundant information at a much lower frame rate . our method adapts the commonly used condensation particle filter tracker to such event-based sensors .
it works directly with individual , highly ambiguous pixel-events and does not employ event integration over time . the lack of complete discrete sensory measurements is addressed by applying an exponential decay model for hypothesis likelihood computation . the proposed algorithm demonstrates robust performance at low computation requirements , making it suitable for implementation in embedded hardware on small autonomous robots . we evaluate our algorithm in a simulation environment and with experimentally recorded data . story_separator_special_tag the combination of spiking neural networks and event-based vision sensors holds the potential of highly efficient and high-bandwidth optical flow estimation . this paper presents the first hierarchical spiking architecture in which motion ( direction and speed ) selectivity emerges in an unsupervised fashion from the raw stimuli generated with an event-based camera . a novel adaptive neuron model and stable spike-timing-dependent plasticity formulation are at the core of this neural network , governing its spike-based processing and learning , respectively . after convergence , the neural architecture exhibits the main properties of biological visual motion systems , namely feature extraction and local and global motion perception . convolutional layers with input synapses characterized by single and multiple transmission delays are employed for feature and local motion perception , respectively , while global motion selectivity emerges in a final fully-connected layer . the proposed solution is validated using synthetic and real event sequences . along with this paper , we provide the cusnn library , a framework that enables gpu-accelerated simulations of large-scale spiking neural networks . source code and samples are available at https://github.com/tudelft/cusnn . story_separator_special_tag event cameras are a paradigm shift in camera technology . instead of full frames , the sensor captures a sparse set of events caused by intensity changes . since only the changes are transferred , those cameras are able to capture quick movements of objects in the scene or of the camera itself . in this work we propose a novel method to perform camera tracking of event cameras in a panoramic setting with three degrees of freedom . we propose a direct camera tracking formulation , similar to the state of the art in visual odometry . we show that the minimal information needed for simultaneous tracking and mapping is the spatial position of events , without using the appearance of the imaged scene point . we verify the robustness to fast camera movements and dynamic objects in the scene on a recently proposed dataset and self-recorded sequences . story_separator_special_tag event cameras are bio-inspired vision sensors that output pixel-level brightness changes instead of standard intensity frames . they offer significant advantages over standard cameras , namely a very high dynamic range , no motion blur , and a latency in the order of microseconds . however , due to the fundamentally different structure of the sensor 's output , new algorithms that exploit the high temporal resolution and the asynchronous nature of the sensor are required . recent work has shown that a continuous-time representation of the event camera pose can deal with the high temporal resolution and asynchronous nature of this sensor in a principled way . in this paper , we leverage such a continuous-time representation to perform visual-inertial odometry with an event camera .
this representation allows direct integration of the asynchronous events with microsecond accuracy and the inertial measurements at high frequency . the event camera trajectory is approximated by a smooth curve in the space of rigid-body motions using cubic splines . this formulation significantly reduces the number of variables in trajectory estimation problems . we evaluate our method on real data from several scenes and compare the results against ground truth from a motion-capture story_separator_special_tag dynamic vision sensors ( dvs ) output asynchronous log intensity change events . they have potential applications in high-speed robotics , autonomous cars and drones . the precise event timing , sparse output , and wide dynamic range of the events are well suited for optical flow , but conventional optical flow ( of ) algorithms are not well matched to the event stream data . this paper proposes an event-driven of algorithm called adaptive block-matching optical flow ( abmof ) . abmof uses time slices of accumulated dvs events . the time slices are adaptively rotated based on the input events and of results . compared with other methods such as gradient-based of , abmof can efficiently be implemented in compact logic circuits . we developed both abmof and lucas-kanade ( lk ) algorithms using our adapted slices . results show that abmof accuracy is comparable with lk accuracy on natural scene data including sparse and dense texture , high dynamic range , and fast motion exceeding 30,000 pixels per second . story_separator_special_tag convolutional neural networks ( cnns ) have become the dominant neural network architecture for solving many state-of-the-art ( soa ) visual processing tasks . even though graphical processing units are most often used in training and deploying cnns , their power efficiency is less than 10 gop/s/w for single-frame runtime inference . we propose a flexible and efficient cnn accelerator architecture called nullhop that implements soa cnns useful for low-power and low-latency application scenarios . nullhop exploits the sparsity of neuron activations in cnns to accelerate the computation and reduce memory requirements . the flexible architecture allows high utilization of available computing resources across kernel sizes ranging from 1 × 1 to 7 × 7 . nullhop can process up to 128 input and 128 output feature maps per layer in a single pass . we implemented the proposed architecture on a xilinx zynq field-programmable gate array ( fpga ) platform and presented the results showing how our implementation reduces external memory transfers and compute time in five different cnns ranging from small ones up to the widely known large vgg16 and vgg19 cnns . postsynthesis simulations using mentor modelsim in a 28-nm process with a clock frequency story_separator_special_tag this paper presents a silicon retina-based stereo vision system , which is used for a pre-crash warning application for side impacts . we use silicon retina imagers for this task , because the advantages of the camera , derived from the human vision system , are high temporal resolution up to 1 ms and the handling of various lighting conditions with a dynamic range of ~120 db . a silicon retina delivers asynchronous data which are called address events ( ae ) . different stereo matching algorithms are available , but these algorithms normally work with full frame images .
in this paper we evaluate how the ae data from the silicon retina sensors must be adapted to work with full-frame area-based and feature-based stereo matching algorithms . story_separator_special_tag event cameras are bio-inspired vision sensors that naturally capture the dynamics of a scene , filtering out redundant information . this paper presents a deep neural network approach that unlocks the potential of event cameras on a challenging motion-estimation task : prediction of a vehicle 's steering angle . to make the best out of this sensor-algorithm combination , we adapt state-of-the-art convolutional architectures to the output of event sensors and extensively evaluate the performance of our approach on a publicly available large-scale event-camera dataset ( ~1000 km ) . we present qualitative and quantitative explanations of why event cameras allow robust steering prediction even in cases where traditional cameras fail , e.g . challenging illumination conditions and fast motion . finally , we demonstrate the advantages of leveraging transfer learning from traditional to event-based vision , and show that our approach outperforms state-of-the-art algorithms based on standard cameras . story_separator_special_tag this paper describes novel event-based spatio-temporal features called time-surfaces and how they can be used to create a hierarchical event-based pattern recognition architecture . unlike existing hierarchical architectures for pattern recognition , the presented model relies on a time-oriented approach to extract spatio-temporal features from the asynchronously acquired dynamics of a visual scene . these dynamics are acquired using biologically inspired frameless asynchronous event-driven vision sensors . similarly to cortical structures , subsequent layers in our hierarchy extract increasingly abstract features using increasingly large spatio-temporal windows . the central concept is to use the rich temporal information provided by events to create contexts in the form of time-surfaces which represent the recent temporal activity within a local spatial neighborhood . we demonstrate that this concept can robustly be used at all stages of an event-based hierarchical model . first layer feature units operate on groups of pixels , while subsequent layer feature units operate on the output of lower level feature units . we report results on a previously published 36-class character recognition task and a four-class canonical dynamic card pip task , achieving near 100 percent accuracy on each . we introduce a new seven-class story_separator_special_tag the motion history image ( mhi ) approach is a view-based temporal template method which is simple but robust in representing movements and is widely employed by various research groups for action recognition , motion analysis and other related applications . in this paper , we provide an overview of mhi-based human motion recognition techniques and applications . since the inception of the mhi template for motion representation , various approaches have been adopted to improve this basic mhi technique . we present all important variants of the mhi method . this paper also points out some areas for further research based on the mhi method and its variants . story_separator_special_tag the recent emergence of bioinspired event cameras has opened up exciting new possibilities in high-frequency tracking , bringing robustness to common problems in traditional vision , such as lighting changes and motion blur .
in order to leverage these attractive attributes of the event cameras , research has been focusing on understanding how to process their unusual output : an asynchronous stream of events . with the majority of existing techniques discretizing the event stream , essentially forming frames of events grouped according to their timestamps , we have yet to exploit the full power of these cameras . in this spirit , this letter proposes a new , purely event-based corner detector , and a novel corner tracker , demonstrating that it is possible to detect corners and track them directly on the event stream in real time . evaluation on benchmarking datasets reveals a significant boost in the number of detected corners and the repeatability of such detections over the state of the art even in challenging scenarios with the proposed approach , while enabling more than a 4× speed-up when compared to the most efficient algorithm in the literature . the proposed pipeline detects and tracks corners at a rate of story_separator_special_tag we propose a learning approach to corner detection for event-based cameras that is stable even under fast and abrupt motions . event-based cameras offer high temporal resolution , power efficiency , and high dynamic range . however , the properties of event-based data are very different compared to standard intensity images , and simple extensions of corner detection methods designed for these images do not perform well on event-based data . we first introduce an efficient way to compute a time surface that is invariant to the speed of the objects . we then show that we can train a random forest to recognize events generated by a moving corner from our time surface . random forests are also extremely efficient , and therefore a good choice to deal with the high capture frequency of event-based cameras ; our implementation processes up to 1.6 mev/s on a single cpu . thanks to our time surface formulation and this learning approach , our method is significantly more robust to abrupt changes of direction of the corners compared to previous ones . our method also naturally assigns a confidence score for the corners , which can be useful for postprocessing . moreover , story_separator_special_tag event-based cameras have recently drawn the attention of the computer vision community thanks to their advantages in terms of high temporal resolution , low power consumption and high dynamic range , compared to traditional frame-based cameras . these properties make event-based cameras an ideal choice for autonomous vehicles , robot navigation or uav vision , among others . however , the accuracy of event-based object classification algorithms , which is of crucial importance for any reliable system working in real-world conditions , is still far behind their frame-based counterparts . two main reasons for this performance gap are : 1. the lack of effective low-level representations and architectures for event-based object classification and 2. the absence of large real-world event-based datasets . in this paper we address both problems . first , we introduce a novel event-based feature representation together with a new machine learning architecture . compared to previous approaches , we use local memory units to efficiently leverage past temporal information and build a robust event-based representation . second , we release the first large real-world event-based dataset for object classification .
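several of the detectors and classifiers above ( time-surfaces , the speed-invariant surfaces fed to the random forest , and the local memory units ) are built on the same primitive : an exponentially decaying image of the most recent event time at each pixel . a minimal sketch , with the decay constant and event layout assumed :

```python
import numpy as np

def time_surface(events, width, height, t_ref, tau=50e-3):
    """build a time surface at query time t_ref: each pixel holds
    exp(-(t_ref - t_last) / tau), where t_last is the most recent event
    timestamp at that pixel; pixels with no event decay to zero.
    events: iterable of (t, x, y, polarity) sorted by time, t in seconds."""
    t_last = np.full((height, width), -np.inf)
    for t, x, y, p in events:
        if t > t_ref:
            break                       # only events up to the query time
        t_last[int(y), int(x)] = t
    return np.exp(-(t_ref - t_last) / tau)   # exp(-inf) evaluates to 0
```

hierarchical models keep one such surface per polarity and per feature channel , and normalize local patches before matching , but the decaying timestamp map is the common core .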
we compare our method to the state-of-the-art with extensive experiments , showing better classification performance and story_separator_special_tag the detection of consistent feature points in an image is fundamental for various kinds of computer vision techniques , such as stereo matching , object recognition , target tracking and optical flow computation . this paper presents an event-based approach to the detection of corner points , which benefits from the high temporal resolution , compressed visual information and low latency provided by an asynchronous neuromorphic event-based camera . the proposed method adapts the commonly used harris corner detector to the event-based data , in which frames are replaced by a stream of asynchronous events produced in response to local light changes at µs temporal resolution . responding only to changes in its field of view , an event-based camera naturally enhances edges in the scene , simplifying the detection of corner features . we characterised and tested the method on both a controlled pattern and a real scenario , using the dynamic vision sensor ( dvs ) on the neuromorphic icub robot . the method detects corners with a typical error distribution within 2 pixels . the error is constant for different motion velocities and directions , indicating a consistent detection across the scene and over time . we story_separator_special_tag event cameras offer many advantages over standard frame-based cameras , such as low latency , high temporal resolution , and a high dynamic range . they respond to pixel-level brightness changes and , therefore , provide a sparse output . however , in textured scenes with rapid motion , millions of events are generated per second . therefore , state-of-the-art event-based algorithms either require massive parallel computation ( e.g. , a gpu ) or depart from the event-based processing paradigm . inspired by frame-based pre-processing techniques that reduce an image to a set of features , which are typically the input to higher-level algorithms , we propose a method to reduce an event stream to a corner event stream . our goal is twofold : extract relevant tracking information ( corners do not suffer from the aperture problem ) and decrease the event rate for later processing stages . our event-based corner detector is very efficient due to its design principle , which consists of working on the surface of active events ( a map with the timestamp of the latest event at each pixel ) using only comparison operations . our method asynchronously processes event by story_separator_special_tag event cameras are bio-inspired sensors that offer several advantages , such as low latency , high-speed and high dynamic range , to tackle challenging scenarios in computer vision . this paper presents a solution to the problem of 3d reconstruction from data captured by a stereo event-camera rig moving in a static scene , such as in the context of stereo simultaneous localization and mapping . the proposed method consists of the optimization of an energy function designed to exploit small-baseline spatio-temporal consistency of events triggered across both stereo image planes . to improve the density of the reconstruction and to reduce the uncertainty of the estimation , a probabilistic depth-fusion strategy is also developed . the resulting method has no special requirements on either the motion of the stereo event-camera rig or on prior knowledge about the scene .
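the comparison-only corner detector above works directly on this surface of active events . a simplified single-circle sketch of the idea ( the published detector uses two circles and tuned arc lengths , so the parameters here are assumptions ) : a new event is a corner if the newest timestamps on a circle around it form one contiguous arc .

```python
import numpy as np

# offsets of a radius-3 circle around the event, in angular order
CIRCLE3 = [(0, 3), (1, 3), (2, 2), (3, 1), (3, 0), (3, -1), (2, -2), (1, -3),
           (0, -3), (-1, -3), (-2, -2), (-3, -1), (-3, 0), (-3, 1), (-2, 2), (-1, 3)]

def is_corner_event(sae, x, y, arc_min=3, arc_max=6):
    """fast-style corner test on the surface of active events (sae), i.e. the
    per-pixel map of latest event timestamps, already updated at (x, y).
    requires x and y to be at least 3 pixels away from the image border."""
    ts = np.array([sae[y + dy, x + dx] for dx, dy in CIRCLE3])
    for k in range(arc_min, arc_max + 1):
        mask = np.zeros(len(ts), dtype=bool)
        mask[np.argsort(ts)[-k:]] = True            # the k newest circle pixels
        if np.sum(mask & ~np.roll(mask, 1)) == 1:   # exactly one circular arc
            return True
    return False
```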
experiments demonstrate that our method can deal with both texture-rich and sparse scenes , outperforming state-of-the-art stereo methods based on event data image representations . story_separator_special_tag event cameras are bio-inspired vision sensors which mimic retinas to measure per-pixel intensity change rather than outputting an actual intensity image . this proposed paradigm shift away from traditional frame cameras offers significant potential advantages : namely avoiding high data rates , dynamic range limitations and motion blur . unfortunately , however , established computer vision algorithms may not be applied directly to event cameras . methods proposed so far to reconstruct images , estimate optical flow , track a camera and reconstruct a scene come with severe restrictions on the environment or on the motion of the camera , e.g . allowing only rotation . here , we propose , to the best of our knowledge , the first algorithm to simultaneously recover the motion field and brightness image , while the camera undergoes a generic motion through any scene . our approach employs minimisation of a cost function that contains the asynchronous event data as well as spatial and temporal regularisation within a sliding window time interval . our implementation relies on gpu optimisation and runs in near real-time . in a series of examples , we demonstrate the successful operation of our framework , including story_separator_special_tag event cameras have a lot of advantages over traditional cameras , such as low latency , high temporal resolution , and high dynamic range . however , since the outputs of event cameras are sequences of asynchronous events over time rather than actual intensity images , existing algorithms cannot be directly applied . it is therefore desirable to generate intensity images from events for other tasks . in this paper , we unlock the potential of event camera-based conditional generative adversarial networks to create images/videos from an adjustable portion of the event data stream . the stacks of space-time coordinates of events are used as inputs and the network is trained to reproduce images based on the spatio-temporal intensity changes . the usefulness of event cameras to generate high dynamic range ( hdr ) images even in extreme illumination conditions and also non-blurred images under rapid motion is also shown . in addition , the possibility of generating very high frame rate videos is demonstrated , theoretically up to 1 million frames per second ( fps ) , since the temporal resolution of event cameras is about 1 microsecond . proposed methods are evaluated by comparing the story_separator_special_tag in this work , we propose a novel framework for unsupervised learning for event cameras that learns motion information from only the event stream . in particular , we propose an input representation of the events in the form of a discretized volume that maintains the temporal distribution of the events , which we pass through a neural network to predict the motion of the events . this motion is used to attempt to remove any motion blur in the event image . we then propose a loss function applied to the motion compensated event image that measures the motion blur in this image .
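the discretized volume mentioned above is commonly realized as a voxel grid in which every event splits its polarity between the two nearest temporal bins , so the volume preserves the temporal distribution of the events . a minimal sketch ( the bin count and event layout are assumed ) :

```python
import numpy as np

def event_voxel_grid(events, width, height, n_bins=5):
    """accumulate events into a (n_bins, height, width) volume, bilinearly
    interpolating each event's polarity between its two nearest time bins
    so the temporal distribution of the events is preserved.
    events: (n, 4) array of (t, x, y, polarity), polarity in {-1, +1}."""
    vol = np.zeros((n_bins, height, width))
    t = events[:, 0]
    t_norm = (t - t.min()) / max(t.max() - t.min(), 1e-9) * (n_bins - 1)
    for (_, x, y, p), tn in zip(events, t_norm):
        b0 = int(np.floor(tn))
        w1 = tn - b0                               # weight of the later bin
        vol[b0, int(y), int(x)] += p * (1.0 - w1)
        if b0 + 1 < n_bins:
            vol[b0 + 1, int(y), int(x)] += p * w1
    return vol
```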
we train two networks with this framework , one to predict optical flow , and one to predict egomotion and depth , and evaluate these networks on the multi-vehicle stereo event camera dataset , along with qualitative results from a variety of different scenes . story_separator_special_tag event cameras are novel sensors that report brightness changes in the form of asynchronous `` events '' instead of intensity frames . they have significant advantages over conventional cameras : high temporal resolution , high dynamic range , and no motion blur . since the output of event cameras is fundamentally different from conventional cameras , it is commonly accepted that they require the development of specialized algorithms to accommodate the particular nature of events . in this work , we take a different view and propose to apply existing , mature computer vision techniques to videos reconstructed from event data . we propose a novel , recurrent neural network to reconstruct videos from a stream of events and train it on a large amount of simulated event data . our experiments show that our approach surpasses state-of-the-art reconstruction methods by a large margin ( > 20 % ) in terms of image quality . we further apply off-the-shelf computer vision algorithms to videos reconstructed from event data on tasks such as object classification and visual-inertial odometry , and show that this strategy consistently outperforms algorithms that were specifically designed for event data . we believe that our approach opens story_separator_special_tag event cameras are bio-inspired vision sensors that mimic retinas to asynchronously report per-pixel intensity changes rather than outputting an actual intensity image at regular intervals . this new paradigm of image sensor offers significant potential advantages ; namely , sparse and non-redundant data representation . unfortunately , however , most of the existing artificial neural network architectures , such as a cnn , require dense synchronous input data , and therefore cannot make use of the sparseness of the data . we propose eventnet , a neural network designed for real-time processing of asynchronous event streams in a recursive and event-wise manner . eventnet models dependence of the output on tens of thousands of causal events recursively using a novel temporal coding scheme . as a result , at inference time , our network operates in an event-wise manner that is realized with very few sum-of-product operations ( look-up table and temporal feature aggregation ) , which enables processing of one million or more events per second on a standard cpu . in experiments using real data , we demonstrated the real-time performance and robustness of our framework . story_separator_special_tag this paper presents an embedded vision system for object tracking applications based on a 128 × 128 pixel cmos temporal contrast vision sensor . this imager asynchronously responds to relative illumination intensity changes in the visual scene , exhibiting a usable dynamic range of 120 db and a latency of under 100 µs . the information is encoded in the form of address-event representation ( aer ) data . an algorithm for object tracking with 1 millisecond timestamp resolution of the aer data stream is presented . as a real-world application example , vehicle tracking for traffic monitoring is demonstrated in real time . the potential of the proposed algorithm for people tracking is also shown .
due to the efficient data pre-processing in the imager chip focal plane , the embedded vision system can be implemented using a low-cost , low-power digital signal processor story_separator_special_tag micromanipulation systems have recently been receiving increased attention . teleoperated or automated micromanipulation is a challenging task due to the need for high-frequency position or force feedback to guarantee stability . in addition , the integration of sensors within micromanipulation platforms is complex . vision is a commonly used solution for sensing ; unfortunately , the update rate of the frame-based acquisition process of currently available cameras cannot ensure , at reasonable cost , stable automated or teleoperated control at the microscale level , where low inertia produces highly dynamic phenomena . this paper presents a novel vision-based microrobotic system combining both an asynchronous address event representation silicon retina and a conventional frame-based camera . unlike frame-based cameras , recent artificial retinas transmit their outputs as a continuous stream of asynchronous temporal events in a manner similar to the output cells of a biological retina , enabling high update rates . this paper introduces an event-based iterative closest point algorithm to track a microgripper 's position at a frequency of 4 khz . the temporal precision of the asynchronous silicon retina is used to provide haptic feedback to assist users during manipulation tasks , whereas the frame-based camera is used to story_separator_special_tag this letter presents a novel computationally efficient and robust pattern tracking method based on time-encoded , frame-free visual data . recent interdisciplinary developments , combining inputs from engineering and biology , have yielded a novel type of camera that encodes visual information into a continuous stream of asynchronous , temporal events . these events encode temporal contrast and intensity locally in space and time . we show that the sparse yet accurately timed information is well suited as a computational input for object tracking . in this letter , visual data processing is performed for each incoming event at the time it arrives . the method provides a continuous and iterative estimation of the geometric transformation between the model and the events representing the tracked object . it can handle isometry , similarities , and affine distortions and allows for unprecedented real-time performance at equivalent frame rates in the kilohertz range on a standard pc . furthermore , by using the dimension of time that is currently underexploited by most artificial vision systems , the method we present is able to solve ambiguous cases of object occlusions that classical frame-based techniques handle poorly . story_separator_special_tag because standard cameras sample the scene at constant time intervals , they do not provide any information in the blind time between subsequent frames . however , for many high-speed robotic and vision applications , it is crucial to provide high-frequency measurement updates also during this blind time . this can be achieved using a novel vision sensor , called davis , which combines a standard camera and an asynchronous event-based sensor in the same pixel array . the davis encodes the visual content between two subsequent frames by an asynchronous stream of events that convey pixel-level brightness changes at microsecond resolution .
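the event stream of the davis follows a simple generative model : a pixel emits an event whenever its log intensity has changed by a contrast threshold since its last event . a minimal sketch that emulates events between two frames under this idealized model ( the threshold value and linear-in-time interpolation are assumptions , and real pixels add noise and refractory effects ) :

```python
import numpy as np

def events_from_frames(frame0, frame1, t0, t1, c=0.2):
    """emulate dvs events between two intensity frames: a pixel whose log
    intensity changed by delta fires floor(|delta| / c) events of the matching
    polarity, with timestamps spread linearly over [t0, t1].
    returns a time-sorted list of (t, x, y, polarity)."""
    diff = (np.log(frame1.astype(np.float64) + 1e-6)
            - np.log(frame0.astype(np.float64) + 1e-6))
    events = []
    for y, x in zip(*np.nonzero(np.abs(diff) >= c)):
        n = int(abs(diff[y, x]) // c)              # number of threshold crossings
        pol = 1 if diff[y, x] > 0 else -1
        for k in range(1, n + 1):
            events.append((t0 + (t1 - t0) * k / (n + 1), x, y, pol))
    events.sort(key=lambda e: e[0])
    return events
```

this is essentially what the frame-camera emulator described earlier does at 125 fps , and what the event camera simulator refines with adaptive sampling of the rendering engine .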
we present the first algorithm to detect and track visual features using both the frames and the event data provided by the davis . features are first detected in the grayscale frames and then tracked asynchronously in the blind time between frames using the stream of events . to best take into account the hybrid characteristics of the davis , features are built based on large spatial contrast variations ( i.e. , visual edges ) , which are the source of most of the events generated by the sensor . an event-based algorithm is further presented story_separator_special_tag new vision sensors , such as the dynamic and active-pixel vision sensor ( davis ) , incorporate a conventional camera and an event-based sensor in the same pixel array . these sensors have great potential for robotics because they allow us to combine the benefits of conventional cameras with those of event-based sensors : low latency , high temporal resolution , and high dynamic range . however , new algorithms are required to exploit the sensor characteristics and cope with its unconventional output , which consists of a stream of asynchronous brightness changes ( called events ) and synchronous grayscale frames . in this paper , we present a low-latency visual odometry algorithm for the davis sensor using event-based feature tracks . features are first detected in the grayscale frames and then tracked asynchronously using the stream of events . the features are then fed to an event-based visual odometry algorithm that tightly interleaves robust pose optimization and probabilistic mapping . we show that our method successfully tracks the 6-dof motion of the sensor in natural scenes . this is the first work on event-based visual odometry with the davis sensor using feature tracks . story_separator_special_tag we present an algorithm to estimate the rotational motion of an event camera . in contrast to traditional cameras , which produce images at a fixed rate , event cameras have independent pixels that respond asynchronously to brightness changes , with microsecond resolution . our method leverages the type of information conveyed by these novel sensors ( i.e. , edges ) to directly estimate the angular velocity of the camera , without requiring optical flow or image intensity estimation . the core of the method is a contrast maximization design . the method performs favorably against ground truth data and gyroscopic measurements from an inertial measurement unit , even in the presence of very high-speed motions ( close to 1000 deg/s ) . story_separator_special_tag we present a unifying framework to solve several computer vision problems with event cameras : motion , depth and optical flow estimation . the main idea of our framework is to find the point trajectories on the image plane that are best aligned with the event data by maximizing an objective function : the contrast of an image of warped events . our method implicitly handles data association between the events , and therefore , does not rely on additional appearance information about the scene . in addition to accurately recovering the motion parameters of the problem , our framework produces motion-corrected edge-like images with high dynamic range that can be used for further scene analysis . the proposed method is not only simple , but more importantly , it is , to the best of our knowledge , the first method that can be successfully applied to such a diverse set of important vision tasks with event cameras .
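for the rotational case above , contrast maximization fits in a few lines : warp each event back along the flow induced by a candidate angular velocity , accumulate an image of warped events , and score the candidate by the variance of that image . a minimal small-angle sketch ( the focal length , principal point and optimizer choice are assumptions ) :

```python
import numpy as np
from scipy.optimize import minimize

def neg_contrast(omega, events, width, height, focal=200.0):
    """negative variance of the image of warped events for a candidate angular
    velocity omega = (wx, wy, wz) in rad/s; events: (n, 4) array of
    (t, x, y, polarity) with t in seconds. first-order rotational flow model."""
    t, x, y = events[:, 0], events[:, 1], events[:, 2]
    dt = t - t[0]
    u = (x - width / 2.0) / focal               # normalized image coordinates
    v = (y - height / 2.0) / focal
    du = omega[0] * u * v - omega[1] * (1 + u * u) + omega[2] * v
    dv = omega[0] * (1 + v * v) - omega[1] * u * v - omega[2] * u
    xw = np.clip(np.round(x - du * focal * dt), 0, width - 1).astype(int)
    yw = np.clip(np.round(y - dv * focal * dt), 0, height - 1).astype(int)
    img = np.zeros((height, width))
    np.add.at(img, (yw, xw), 1.0)               # image of warped events
    return -np.var(img)                         # sharper image -> higher variance

def estimate_angular_velocity(events, width, height):
    res = minimize(neg_contrast, np.zeros(3), args=(events, width, height),
                   method='Nelder-Mead')
    return res.x
```

the variance used here is exactly one of the focus loss functions catalogued earlier ; the gradient and laplacian magnitudes can be dropped in as alternative objectives without changing the warping step .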
story_separator_special_tag event cameras are bio-inspired vision sensors that output pixel-level brightness changes instead of standard intensity frames . they offer significant advantages over standard cameras , namely a very high dynamic range , no motion blur , and a latency in the order of microseconds . we propose a novel , accurate , tightly-coupled visual-inertial odometry pipeline for such cameras that leverages their outstanding properties to estimate the camera ego-motion in challenging conditions , such as high-speed motion or high dynamic range scenes . the method tracks a set of features ( extracted on the image plane ) through time . to achieve that , we consider events in overlapping spatio-temporal windows and align them using the current camera motion and scene structure , yielding motion-compensated event frames . we then combine these feature tracks in a keyframe-based visual-inertial odometry algorithm based on nonlinear optimization to estimate the camera 's 6-dof pose , velocity , and imu biases . the proposed method is evaluated quantitatively on the public event camera dataset [ 19 ] and significantly outperforms the state-of-the-art [ 28 ] , while being computationally much more efficient : our pipeline can run much faster than real-time on story_separator_special_tag asynchronous event-based sensors present new challenges in basic robot vision problems like feature tracking . the few existing approaches rely on grouping events into models and computing optical flow after assigning future events to those models . such a hard commitment in data association attenuates the optical flow quality and causes shorter flow tracks . in this paper , we introduce a novel soft data association modeled with probabilities . the association probabilities are computed in an intertwined em scheme with the optical flow computation that maximizes the expectation ( marginalization ) over all associations . in addition , to enable longer tracks we compute the affine deformation with respect to the initial point and use the resulting residual as a measure of persistence . the computed optical flow enables a varying temporal integration different for every feature and sized inversely proportional to the length of the flow . we show results in egomotion and very fast vehicle sequences and we show the superiority over standard frame-based cameras . story_separator_special_tag event-based cameras provide a new visual sensing model by detecting changes in image intensity asynchronously across all pixels on the camera . by providing these events at extremely high rates ( up to 1 mhz ) , they allow for sensing in both high speed and high dynamic range situations where traditional cameras may fail . in this paper , we present the first algorithm to fuse a purely event-based tracking algorithm with an inertial measurement unit , to provide accurate metric tracking of a camera 's full 6-dof pose . our algorithm is asynchronous , and provides measurement updates at a rate proportional to the camera velocity . the algorithm selects features in the image plane , and tracks spatiotemporal windows around these features within the event stream . an extended kalman filter with a structureless measurement model then fuses the feature tracks with the output of the imu . the camera poses from the filter are then used to initialize the next step of the tracker and reject failed tracks .
we show that our method successfully tracks camera motion on the event-camera dataset in a number of challenging situations . story_separator_special_tag event-based vision sensors , such as the dynamic vision sensor ( dvs ) , are ideally suited for real-time motion analysis . the unique properties encompassed in the readings of such sensors provide high temporal resolution , superior sensitivity to light and low latency . these properties provide the grounds to estimate motion efficiently and reliably in the most sophisticated scenarios , but these advantages come at a price : modern event-based vision sensors have extremely low resolution , produce a lot of noise and require the development of novel algorithms to handle the asynchronous event stream . this paper presents a new , efficient approach to object tracking with asynchronous cameras . we present a novel event stream representation which enables us to utilize information about the dynamic ( temporal ) component of the event stream . the 3d geometry of the event stream is approximated with a parametric model to motion-compensate for the camera ( without feature tracking or explicit optical flow computation ) , and then moving objects that do n't conform to the model are detected in an iterative process . we demonstrate our framework on the task of independent motion detection and tracking , where story_separator_special_tag we present the first event-based learning approach for motion segmentation in indoor scenes and the first event-based dataset , ev-imo , which includes accurate pixel-wise motion masks , egomotion and ground truth depth . our approach is based on an efficient implementation of the sfm learning pipeline using a low-parameter neural network architecture on event data . in addition to camera egomotion and a dense depth map , the network estimates independently moving object segmentation at the pixel-level and computes per-object 3d translational velocities of moving objects . we also train a shallow network with just 40k parameters , which is able to compute depth and egomotion . our ev-imo dataset features 32 minutes of indoor recording with up to 3 fast-moving objects in the camera field of view . the objects and the camera are tracked using a vicon® motion capture system . by 3d scanning the room and the objects , ground truth of the depth map and pixel-wise object masks are obtained . we then train and evaluate our learning pipeline on ev-imo and demonstrate that it is well suited for scene-constrained robotics applications . supplementary material the supplementary video , code , trained models , appendix and story_separator_special_tag event-based sensing , i.e . the asynchronous detection of luminance changes , promises low-energy , high dynamic range , and sparse sensing . this stands in contrast to whole-image frame-wise acquisition by standard cameras . here , we systematically investigate the implications of event-based sensing in the context of visual motion , or flow , estimation . starting from a common theoretical foundation , we discuss different principal approaches for optical flow detection ranging from gradient-based methods over plane-fitting to filter-based methods and identify strengths and weaknesses of each class . gradient-based methods for local motion integration are shown to suffer from the sparse encoding in address-event representations ( aer ) . approaches exploiting the local plane-like structure of the event cloud , on the other hand , are shown to be well suited .
within this class , filter-based approaches are shown to define a proper detection scheme which can also deal with the problem of representing multiple motions at a single location ( motion transparency ) . a novel biologically inspired efficient motion detector is proposed , analyzed and experimentally validated . furthermore , a stage of surround normalization is incorporated . together with story_separator_special_tag event cameras or neuromorphic cameras mimic the human perception system as they measure the per-pixel intensity change rather than the actual intensity level . in contrast to traditional cameras , such cameras capture new information about the scene at mhz frequency in the form of sparse events . the high temporal resolution comes at the cost of losing the familiar per-pixel intensity information . in this work we propose a variational model that accurately models the behaviour of event cameras , enabling reconstruction of intensity images with arbitrary frame rate in real-time . our method is formulated on a per-event basis , where we explicitly incorporate information about the asynchronous nature of events via an event manifold induced by the relative timestamps of events . in our experiments we verify that solving the variational model on the manifold produces high-quality images without explicitly estimating optical flow . this paper is an extended version of our previous work ( reinbacher et al . in british machine vision conference ( bmvc ) , 2016 ) and contains additional details of the variational model , an investigation of different data terms and a quantitative evaluation of our method against competing methods as well as story_separator_special_tag event-driven vision sensors have the potential to support a new generation of efficient and robust robots . this requires the development of a new computational framework that exploits not only the spatial information , like in the traditional frame-based approach , but also the temporal content of the sensory data . we propose a method for unsupervised learning of filters for the processing of the visual signal from event-driven sensors . this method exploits the temporal coincidence of events generated by each object in a spatial location of the visual field . the approach is based on a modification of spike timing dependent plasticity that takes into account the specific implementation on the robot and the characteristics of the used sensor . it gives rise to oriented spatial filters that are very similar to the receptive fields observed in the primary visual cortex and traditionally used in bio-inspired hierarchical structures for object recognition , as well as to novel curved spatial structures . using a mutual information measure , we provide quantitative evidence that such curved spatial filters provide more information than equivalent oriented gabor filters and can be an important aspect of object recognition in robotic applications . story_separator_special_tag event-driven visual sensors have attracted interest from a number of different research communities . they provide visual information in quite a different way from conventional video systems consisting of sequences of still images rendered at a given `` frame rate . '' event-driven vision sensors take inspiration from biology . each pixel sends out an event ( spike ) when it senses something meaningful is happening , without any notion of a frame .
a special type of event-driven sensor is the so-called dynamic vision sensor ( dvs ) where each pixel computes relative changes of light or `` temporal contrast . '' the sensor output consists of a continuous flow of pixel events that represent the moving objects in the scene . pixel events become available with microsecond delays with respect to `` reality . '' these events can be processed `` as they flow '' by a cascade of event ( convolution ) processors . as a result , input and output event flows are practically coincident in time , and objects can be recognized as soon as the sensor provides enough meaningful events . in this paper , we present a methodology for mapping from a properly story_separator_special_tag deep belief networks ( dbns ) have recently shown impressive performance on a broad range of classification problems . their generative properties allow better understanding of the performance , and provide a simpler solution for sensor fusion tasks . however , because of their inherent need for feedback and parallel update of large numbers of units , dbns are expensive to implement on serial computers . this paper proposes a method based on the siegert approximation for integrate-and-fire neurons to map an offline-trained dbn onto an efficient event-driven spiking neural network suitable for hardware implementation . the method is demonstrated in simulation and by a real-time implementation of a 3-layer network with 2694 neurons used for visual classification of mnist handwritten digits with input from a 128 × 128 dynamic vision sensor ( dvs ) silicon retina , and sensory-fusion using additional input from a 64-channel aer-ear silicon cochlea . the system is implemented through the open-source software in the jaer project and runs in real-time on a laptop computer . it is demonstrated that the system can recognize digits in the presence of distractions , noise , scaling , translation and rotation , and that the degradation of recognition story_separator_special_tag deep neural networks such as convolutional networks ( convnets ) and deep belief networks ( dbns ) represent the state-of-the-art for many machine learning and computer vision classification problems . to overcome the large computational cost of deep networks , spiking deep networks have recently been proposed , given the specialized hardware now available for spiking neural networks ( snns ) . however , this has come at the cost of performance losses due to the conversion from analog neural networks ( anns ) without a notion of time , to sparsely firing , event-driven snns . here we analyze the effects of converting deep anns into snns with respect to the choice of parameters for spiking neurons such as firing rates and thresholds . we present a set of optimization techniques to minimize performance loss in the conversion process for convnets and fully connected deep networks . these techniques yield networks that outperform all previous snns on the mnist database to date , and many networks here are close to maximum performance after only 20 ms of simulated time . the techniques include using rectified linear units ( relus ) with zero bias during training , and using a story_separator_special_tag deep networks are now able to achieve human-level performance on a broad spectrum of recognition tasks .
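the ann-to-snn conversion discussed above rests on a simple correspondence : an integrate-and-fire neuron with reset-by-subtraction , driven by a constant input , fires at a rate proportional to a relu of that input . a toy demonstration ( values illustrative , not taken from any of the cited systems ) :

```python
# a toy illustration of rate-based ann-to-snn conversion: an integrate-and-fire
# neuron driven by a constant input approximates a relu activation, with the
# firing threshold controlling the rate scaling.
import numpy as np

def if_rate(input_current, threshold=1.0, steps=1000):
    """simulate one integrate-and-fire neuron; returns its mean firing rate."""
    v, spikes = 0.0, 0
    for _ in range(steps):
        v += input_current          # integrate the (constant) input
        if v >= threshold:          # fire and reset by subtraction
            spikes += 1
            v -= threshold
    return spikes / steps

for a in [-0.5, 0.0, 0.25, 0.5, 1.0]:
    # the rate closely follows max(0, a) / threshold, i.e. a scaled relu
    print(a, if_rate(a), max(0.0, a))
```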
independently , neuromorphic computing has now demonstrated unprecedented energy-efficiency through a new chip architecture based on spiking neurons , low precision synapses , and a scalable communication network . here , we demonstrate that neuromorphic computing , despite its novel architectural primitives , can implement deep convolution networks that ( i ) approach state-of-the-art classification accuracy across eight standard datasets encompassing vision and speech , ( ii ) perform inference while preserving the hardware 's underlying energy-efficiency and high throughput , running on the aforementioned datasets at between 1,200 and 2,600 frames/s and using between 25 and 275 mw ( effectively > 6,000 frames/s per watt ) , and ( iii ) can be specified and trained using backpropagation with the same ease-of-use as contemporary deep learning . this approach allows the algorithmic power of deep learning to be merged with the efficiency of neuromorphic processors , bringing the promise of embedded , intelligent , brain-inspired computing one step closer . story_separator_special_tag spiking neural networks ( snns ) can potentially offer an efficient way of doing inference because the neurons in the networks are sparsely activated and computations are event-driven . previous work showed that simple continuous-valued deep convolutional neural networks ( cnns ) can be converted into accurate spiking equivalents . these networks did not include certain common operations such as max-pooling , softmax , batch-normalization and inception-modules . this paper presents spiking equivalents of these operations , therefore allowing conversion of nearly arbitrary cnn architectures . we show conversion of popular cnn architectures , including vgg-16 and inception-v3 , into snns that produce the best results reported to date on mnist , cifar-10 and the challenging imagenet dataset . snns can trade off classification error rate against the number of available operations whereas deep continuous-valued neural networks require a fixed number of operations to achieve their classification error rate . from the examples of lenet for mnist and binarynet for cifar-10 , we show that with an increase in error rate of a few percentage points , the snns can achieve more than 2x reductions in operations compared to the original cnns . this highlights the potential of snns in particular story_separator_special_tag configuring deep spiking neural networks ( snns ) is an exciting research avenue for low power spike event based computation . however , the spike generation function is non-differentiable and therefore not directly compatible with the standard error backpropagation algorithm . in this paper , we introduce a new general backpropagation mechanism for learning synaptic weights and axonal delays which overcomes the problem of non-differentiability of the spike function and uses a temporal credit assignment policy for backpropagating error to preceding layers . we describe and release a gpu accelerated software implementation of our method which allows training both fully connected and convolutional neural network ( cnn ) architectures . using our software , we compare our method against existing snn based learning approaches and standard ann to snn conversion techniques and show that our method achieves state of the art performance for an snn on the mnist , nmnist , dvs gesture , and tidigits datasets .
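a common way to state the workaround for the non-differentiable spike function , sketched here in numpy with illustrative values : the forward pass keeps the hard threshold , while the backward pass substitutes a smooth surrogate derivative .

```python
# a minimal numpy sketch of the surrogate-gradient idea used to train snns:
# the forward pass uses a hard threshold, while the backward pass replaces the
# non-differentiable spike derivative with a smooth surrogate.
import numpy as np

def spike_forward(v, threshold=1.0):
    return (v >= threshold).astype(float)          # non-differentiable step

def spike_surrogate_grad(v, threshold=1.0, beta=10.0):
    # derivative of a fast sigmoid centred on the threshold, used in place of
    # the true (zero-almost-everywhere) derivative of the step function
    return beta / (1.0 + beta * np.abs(v - threshold)) ** 2

v = np.array([0.2, 0.9, 1.1, 2.0])
s = spike_forward(v)
grad_upstream = np.ones_like(v)                    # pretend dLoss/dSpike = 1
grad_v = grad_upstream * spike_surrogate_grad(v)   # dLoss/dV via the surrogate
print(s, grad_v)
```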
story_separator_special_tag deep spiking neural networks ( snns ) hold great potential for improving the latency and energy efficiency of deep neural networks through event-based computation . however , training such networks is difficult due to the non-differentiable nature of asynchronous spike events . in this paper , we introduce a novel technique , which treats the membrane potentials of spiking neurons as differentiable signals , where discontinuities at spike times are only considered as noise . this enables an error backpropagation mechanism for deep snns , which works directly on spike signals and membrane potentials . thus , compared with previous methods relying on indirect training and conversion , our technique has the potential to capture the statistics of spikes more precisely . our novel framework outperforms all previously reported results for snns on the permutation invariant mnist benchmark , as well as the n-mnist benchmark recorded with event-based vision sensors . story_separator_special_tag the success of deep networks and recent industry involvement in brain-inspired computing is igniting a widespread interest in neuromorphic hardware that emulates the biological processes of the brain on an electronic substrate . this review explores interdisciplinary approaches anchored in machine learning theory that enable the applicability of neuromorphic technologies to real-world , human-centric tasks . we find that ( 1 ) recent work in binary deep networks and approximate gradient descent learning are strikingly compatible with a neuromorphic substrate ; ( 2 ) where real-time adaptability and autonomy are necessary , neuromorphic technologies can achieve significant advantages over main-stream ones ; and ( 3 ) challenges in memory technologies , compounded by a tradition of bottom-up approaches in the field , block the road to major breakthroughs . we suggest that a neuromorphic learning framework , tuned specifically for the spatial and temporal constraints of the neuromorphic substrate , will help guide hardware-algorithm co-design and the deployment of neuromorphic hardware for proactive learning of real-world data . story_separator_special_tag several industry , home , or automotive applications need 3d or at least range data of the observed environment to operate . such applications are , e.g. , driver assistance systems , home care systems , or 3d sensing and measurement for industrial production . state-of-the-art range sensors are laser range finders or laser scanners ( lidar , light detection and ranging ) , time-of-flight ( tof ) cameras , and ultrasonic sound sensors . all of them are embedded , which means that the sensors operate independently and have an integrated processing unit . this is advantageous because the processing power in the mentioned applications is limited and they are computationally intensive anyway . other benefits of embedded systems are low power consumption and a small form factor . furthermore , embedded systems are fully customizable by the developer and can be adapted to the specific application in an optimal way . a promising alternative to the mentioned sensors is stereo vision . classic stereo vision uses a stereo camera setup , which is built up of two cameras ( stereo camera head ) , mounted in parallel and separated by the baseline .
it captures a synchronized story_separator_special_tag classification of spatiotemporal events captured by neuromorphic vision sensors or event-based cameras , in which each pixel senses the luminance changes of its spatial location and produces a sequence of events , has been of great interest in recent years . in this paper , we find that the classification accuracy can be significantly improved by combining a random forest ( rf ) classifier with pixel-wise features . rf is a statistical framework with high generalization accuracy and fast training time . we uncover that random forests can grow deep and tend to learn highly irregular patterns of spatiotemporal events with low bias , and thus are more suitable for achieving the classification objective . the experimental results on the mnist-dvs dataset and the aer posture dataset show that the rf-based classification approach in this work outperforms the state-of-the-art algorithms in both classification accuracy and computation time cost . story_separator_special_tag we present a new method to relocalize the 6dof pose of an event camera solely based on the event stream . our method first creates the event image from a list of events that occur in a very short time interval , then a stacked spatial lstm network ( sp-lstm ) is used to learn the camera pose . our sp-lstm is composed of a cnn to learn deep features from the event images and a stack of lstm to learn spatial dependencies in the image feature space . we show that the spatial dependency plays an important role in the relocalization task with event images and the sp-lstm can effectively learn this information . the extensive experimental results on a publicly available dataset show that our approach outperforms recent state-of-the-art methods by a substantial margin , as well as generalizes well in challenging training/testing splits . the source code and trained models are available at https://github.com/nqanh/pose_relocalization . story_separator_special_tag we propose an algorithm to estimate the lifetime of events from retinal cameras , such as a dynamic vision sensor ( dvs ) . unlike standard cmos cameras , a dvs only transmits pixel-level brightness changes ( events ) at the time they occur with micro-second resolution . due to its low latency and sparse output , this sensor is very promising for high-speed mobile robotic applications . we develop an algorithm that augments each event with its lifetime , which is computed from the event 's velocity on the image plane . the generated stream of augmented events gives a continuous representation of events in time , hence enabling the design of new algorithms that outperform those based on the accumulation of events over fixed , artificially-chosen time intervals . a direct application of this augmented stream is the construction of sharp gradient ( edge-like ) images at any time instant . we successfully demonstrate our method in different scenarios , including high-speed quadrotor flips , and compare it to standard visualization methods . story_separator_special_tag 3d reconstruction from multiple viewpoints is an important problem in machine vision that allows recovering tridimensional structures from multiple two-dimensional views of a given scene . reconstruction from multiple views is conventionally achieved through a process of pixel luminance-based matching between different views .
unlike conventional machine vision methods that solve matching ambiguities by operating only on spatial constraints and luminance , this paper introduces a full time-based solution to stereovision using the high temporal resolution of neuromorphic asynchronous event-based cameras . these cameras output dynamic visual information and luminance encoded in time . they allow a formulation of stereovision as a pure event coincidence detection problem . we will introduce a methodology for time-based stereovision in the context of binocular and trinocular configurations using a time-based event matching criterion that combines , for the first time , space , time , luminance , and motion . story_separator_special_tag this paper describes the application of a convolutional neural network ( cnn ) in the context of a predator/prey scenario . the cnn is trained and run on data from a dynamic and active pixel sensor ( davis ) mounted on a summit xl robot ( the predator ) , which follows another one ( the prey ) . the cnn is driven by both conventional image frames and dynamic vision sensor frames that consist of a constant number of davis on and off events . the network is thus data driven at a sample rate proportional to the scene activity , so the effective sample rate varies from 15 hz to 240 hz depending on the robot speeds . the network generates four outputs : steer right , left , center and non-visible . after off-line training on labeled data , the network is imported onto the on-board summit xl robot which runs jaer and receives steering directions in real time . successful results on closed-loop trials , with accuracies up to 87 % or 92 % ( depending on evaluation criteria ) are reported . although the proposed approach discards the precise davis event timing , it offers story_separator_special_tag this demonstration presents a convolutional neural network ( cnn ) playing roshambo ( rock-paper-scissors ) against human opponents in real time . the network is driven by dynamic and active-pixel vision sensor ( davis ) events , acquired by accumulating events into fixed event-number frames . story_separator_special_tag event cameras , such as dynamic vision sensors ( dvs ) , and dynamic and active-pixel vision sensors ( davis ) can supplement other autonomous driving sensors by providing a concurrent stream of standard active pixel sensor ( aps ) images and dvs temporal contrast events . the aps stream is a sequence of standard grayscale global-shutter image sensor frames . the dvs events represent brightness changes occurring at a particular moment , with a jitter of about a millisecond under most lighting conditions . they have a dynamic range of > 120 db and effective frame rates > 1 khz at data rates comparable to 30 fps ( frames/second ) image sensors . to overcome some of the limitations of current image acquisition technology , we investigate in this work the use of the combined dvs and aps streams in end-to-end driving applications . the dataset ddd17 accompanying this paper is the first open dataset of annotated davis driving recordings . ddd17 has over 12 h of a 346x260 pixel davis sensor recording highway and city driving in daytime , evening , night , dry and wet weather conditions , along with vehicle speed , gps position , driver story_separator_special_tag in this work we present a lightweight , unsupervised learning pipeline for dense depth , optical flow and egomotion estimation from sparse event output of the dynamic vision sensor ( dvs ) .
to tackle this low-level vision task , we use a novel encoder-decoder neural network architecture - ecn . our work is the first monocular pipeline that generates dense depth and optical flow from sparse event data only . the network works in self-supervised mode and has just 150k parameters . we evaluate our pipeline on the mvsec self driving dataset and present results for depth , optical flow and egomotion estimation . due to the lightweight design , the inference part of the network runs at 250 fps on a single gpu , making the pipeline ready for realtime robotics applications . our experiments demonstrate significant improvements upon previous works that used deep learning on event data , as well as the ability of our pipeline to perform well during both day and night . story_separator_special_tag we present an algorithm ( sofas ) to estimate the optical flow of events generated by a dynamic vision sensor ( dvs ) . where traditional cameras produce frames at a fixed rate , dvss produce asynchronous events in response to intensity changes with a high temporal resolution . our algorithm uses the fact that events are generated by edges in the scene to not only estimate the optical flow but also to simultaneously segment the image into objects which are travelling at the same velocity . this way it is able to avoid the aperture problem which affects other implementations such as lucas-kanade . finally , we show that sofas produces more accurate results than traditional optic flow algorithms . story_separator_special_tag in contrast to traditional cameras , whose pixels have a common exposure time , event-based cameras are novel bio-inspired sensors whose pixels work independently and asynchronously output intensity changes ( called `` events '' ) , with microsecond resolution . since events are caused by the apparent motion of objects , event-based cameras sample visual information based on the scene dynamics and are , therefore , a more natural fit than traditional cameras to acquire motion , especially at high speeds , where traditional cameras suffer from motion blur . however , distinguishing between events caused by different moving objects and by the camera 's ego-motion is a challenging task . we present the first per-event segmentation method for splitting a scene into independently moving objects . our method jointly estimates the event-object associations ( i.e. , segmentation ) and the motion parameters of the objects ( or the background ) by maximization of an objective function , which builds upon recent results on event-based motion-compensation . we provide a thorough evaluation of our method on a public dataset , outperforming the state-of-the-art by as much as 10 % . we also show the first quantitative evaluation of a segmentation story_separator_special_tag communication presented at `` biocas 2014 '' , held in lausanne ( switzerland ) , october 22-24 , 2014 story_separator_special_tag current interest in neuromorphic computing continues to drive development of sensors and hardware for spike-based computation . here we describe a hierarchical architecture for visual motion estimation which uses a spiking neural network to exploit the sparse high temporal resolution data provided by neuromorphic vision sensors . although spike-based computation differs from traditional computer vision approaches , our architecture is similar in principle to the canonical lucas-kanade algorithm .
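for reference , the canonical lucas-kanade step amounts to a small least-squares problem over image gradients ; a compact numpy sketch ( illustrative , not the spiking implementation above ) :

```python
# a compact numpy sketch of the classic lucas-kanade step: solve for the
# displacement that best explains spatial and temporal gradients inside a
# window. a full tracker iterates this in a coarse-to-fine pyramid.
import numpy as np

def lucas_kanade_step(I0, I1):
    """estimate one translation (dx, dy) between two small gray patches."""
    Iy, Ix = np.gradient(I0.astype(float))         # spatial gradients
    It = I1.astype(float) - I0.astype(float)       # temporal gradient
    A = np.column_stack([Ix.ravel(), Iy.ravel()])
    b = -It.ravel()
    # least-squares solution of the brightness-constancy system A @ d = b
    d, *_ = np.linalg.lstsq(A, b, rcond=None)
    return d                                       # (dx, dy) in pixels
```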
output spikes from the architecture represent the direction of motion to the nearest 45 degrees , and the speed within a factor of 2 over the range 0.02 to 0.27 pixels/ms . story_separator_special_tag the growing interest in pulse-mode processing by neural networks is encouraging the development of hardware implementations of massively parallel , distributed networks of integrate-and-fire ( i & f ) neurons . we have developed a reconfigurable multi-chip neuronal system for modeling feature selectivity and applied it to oriented visual stimuli . our system comprises a temporally differentiating imager and a vlsi competitive network of neurons which use an asynchronous address event representation ( aer ) for communication . here we describe the overall system , and present experimental data demonstrating the effect of recurrent connectivity on the pulse-based orientation selectivity . story_separator_special_tag the spatio-temporal receptive fields ( rfs ) of cells in the macaque monkey lateral geniculate nucleus ( lgn ) and striate cortex ( v1 ) have been examined and two distinct sub-populations of non-directional v1 cells have been found : those with a slow largely monophasic temporal rf , and those with a fast very biphasic temporal response . these two sub-populations are in temporal quadrature , the fast biphasic cells crossing over from one response phase to the reverse just as the slow monophasic cells reach their peak response . the two sub-populations also differ in the spatial phases of their rfs . a principal components analysis of the spatio-temporal rfs of directional v1 cells shows that their rfs could be constructed by a linear combination of two components , one of which has the temporal and spatial characteristics of a fast biphasic cell , and the other the temporal and spatial characteristics of a slow monophasic cell . magnocellular lgn cells are fast and biphasic and lead the fast-biphasic v1 subpopulation by 7 ms ; parvocellular lgn cells are slow and largely monophasic and lead the slow monophasic v1 sub-population by 12 ms. we suggest that directional v1 story_separator_special_tag fast reaction to sudden and potentially interesting stimuli is a crucial feature for safe and reliable interaction with the environment . here we present a biologically inspired attention system developed for the humanoid robot icub . it is based on input from unconventional event-driven vision sensors and an efficient computational method . the resulting system shows low-latency and fast determination of the location of the focus of attention . the performance is benchmarked against an instance of the state-of-the-art artificial attention systems used in robotics . results show that the proposed system is two orders of magnitude faster than the benchmark in selecting a new stimulus to attend . story_separator_special_tag five important trends have emerged from recent work on computational models of focal visual attention that emphasize the bottom-up , image-based control of attentional deployment . first , the perceptual saliency of stimuli critically depends on the surrounding context . second , a unique 'saliency map ' that topographically encodes for stimulus conspicuity over the visual scene has proved to be an efficient and plausible bottom-up control strategy . third , inhibition of return , the process by which the currently attended location is prevented from being attended again , is a crucial element of attentional deployment .
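the saliency-map-plus-inhibition-of-return mechanism reduces to a short loop : pick the most salient location , attend it , suppress its neighbourhood , repeat . a toy sketch with arbitrary sizes and radii :

```python
# a toy winner-take-all scan over a saliency map with inhibition of return:
# attend the most salient location, then suppress a disc around it so the
# next-most-salient location can win.
import numpy as np

def attend_sequence(saliency, n_fixations=3, ior_radius=5):
    s = saliency.astype(float).copy()
    ys, xs = np.mgrid[0:s.shape[0], 0:s.shape[1]]
    fixations = []
    for _ in range(n_fixations):
        y, x = np.unravel_index(np.argmax(s), s.shape)
        fixations.append((y, x))
        # inhibition of return: zero out a disc around the attended location
        s[(ys - y) ** 2 + (xs - x) ** 2 <= ior_radius ** 2] = 0.0
    return fixations
```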
fourth , attention and eye movements tightly interplay , posing computational challenges with respect to the coordinate system used to control attention . and last , scene understanding and object recognition strongly constrain the selection of attended locations . insights from these five key areas provide a framework for a computational and neurobiological understanding of visual attention . story_separator_special_tag the extraction of stereo-disparity information from two images depends upon establishing a correspondence between them . in this article we analyze the nature of the correspondence computation and derive a cooperative algorithm that implements it . we show that this algorithm successfully extracts information from random-dot stereograms , and its implications for the psychophysics and neurophysiology of the visual system are briefly discussed . story_separator_special_tag stereo vision is an important feature that enables machine vision systems to perceive their environment in 3d . while machine vision has spawned a variety of software algorithms to solve the stereo-correspondence problem , their implementation and integration in small , fast , and efficient hardware vision systems remains a difficult challenge . recent advances made in neuromorphic engineering offer a possible solution to this problem , with the use of a new class of event-based vision sensors and neural processing devices inspired by the organizing principles of the brain . here we propose a radically novel model that solves the stereo-correspondence problem with a spiking neural network that can be directly implemented with massively parallel , compact , low-latency and low-power neuromorphic engineering devices . we validate the model with experimental results , highlighting features that are in agreement with both computational neuroscience stereo vision theories and experimental findings . we demonstrate its features with a prototype neuromorphic hardware system and provide testable predictions on the role of spike-based representations and temporal dynamics in biological stereo vision processing systems . story_separator_special_tag we demonstrate a spiking neural network that extracts spatial depth information from a stereoscopic visual input stream . the system makes use of a scalable neuromorphic computing platform , spinnaker , and neuromorphic vision sensors , so-called silicon retinas , to solve the stereo matching ( correspondence ) problem in real-time . it dynamically fuses two retinal event streams into a depth-resolved event stream with a fixed latency of 2 ms , even at input rates as high as several 100,000 events per second . the network design is simple and portable so it can run on many types of neuromorphic computing platforms including fpgas and dedicated silicon . story_separator_special_tag this paper presents an adaptive cooperative approach towards the 3d reconstruction tailored for a bio-inspired depth camera : the stereo dynamic vision sensor ( dvs ) . dvs consists of self-spiking pixels that asynchronously generate events upon relative light intensity changes . these sensors have the advantage of simultaneously allowing high temporal resolution ( better than 10 µs ) and wide dynamic range ( > 120 db ) at sparse data representation , which is not possible with frame-based cameras . in order to exploit the potential of dvs and benefit from its features , depth calculation should take into account the spatiotemporal and asynchronous aspect of data provided by the sensor .
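the time-based matching principle used by these stereo systems can be caricatured in a few lines : two events match when they are nearly coincident in time , share polarity , and satisfy the epipolar constraint . the brute-force sketch below ( thresholds illustrative ) ignores the cooperative dynamics but shows the core criterion :

```python
# a schematic event-matching loop for time-based stereo: a left and a right
# event are matched when they are nearly coincident in time, share polarity
# and lie on the same row (rectified epipolar constraint).
def match_stereo_events(left, right, dt_max=5e-4, d_max=40):
    """left/right: lists of (x, y, t, polarity); returns (x, y, disparity)."""
    matches = []
    for xl, yl, tl, pl in left:
        best, best_dt = None, dt_max
        for xr, yr, tr, pr in right:
            if yr == yl and pr == pl and 0 <= xl - xr <= d_max:
                if abs(tl - tr) < best_dt:      # closest in time wins
                    best, best_dt = xl - xr, abs(tl - tr)
        if best is not None:
            matches.append((xl, yl, best))      # disparity -> depth via f*B/d
    return matches
```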
this work deals with developing an appropriate approach for the asynchronous , event-driven stereo algorithm . we propose a modification of the cooperative network in which the history of the recent activity in the scene is stored to serve as spatiotemporal context used in disparity calculation for each incoming event . the network constantly evolves in time - as events are generated . in our work , not only the spatiotemporal aspect of the data is preserved but also the matching is performed asynchronously . the results of story_separator_special_tag vergence control and tracking allow a robot to maintain an accurate estimate of a dynamic object 's three dimensions , improving depth estimation at the fixation point . brain-inspired implementations of vergence control are based on models of complex binocular cells of the visual cortex sensitive to disparity . the energy of the cells ' activation provides a disparity-related signal that can be reliably used for vergence control . we implemented such a model on the neuromorphic icub , equipped with a pair of brain inspired vision sensors . such sensors provide low-latency , compressed and high temporal resolution visual information related to changes in the scene . we demonstrate the feasibility of a fully neuromorphic system for vergence control and show that this implementation works in real-time , providing fast and accurate control for a moving stimulus up to 2 hz , appreciably decreasing the latency associated with frame-based cameras . additionally , thanks to the high dynamic range of the sensor , the control shows the same accuracy under very different illumination . story_separator_special_tag visual processing in cortex is classically modeled as a hierarchy of increasingly sophisticated representations , naturally extending the model of simple to complex cells of hubel and wiesel . surprisingly , little quantitative modeling has been done to explore the biological feasibility of this class of models to explain aspects of higher-level visual processing such as object recognition . we describe a new hierarchical model consistent with physiological data from inferotemporal cortex that accounts for this complex visual task and makes testable predictions . the model is based on a max-like operation applied to inputs to certain cortical neurons that may have a general role in cortical function . story_separator_special_tag this letter introduces a study to precisely measure what an increase in spike timing precision can add to spike-driven pattern recognition algorithms . the concept of generating spikes from images by converting gray levels into spike timings is currently at the basis of almost every spike-based modeling of biological visual systems . the use of images naturally leads to generating incorrect artificial and redundant spike timings and , more important , also contradicts biological findings indicating that visual processing is massively parallel , asynchronous with high temporal resolution . a new concept for acquiring visual information through pixel-individual asynchronous level-crossing sampling has been proposed in a recent generation of asynchronous neuromorphic visual sensors . unlike conventional cameras , these sensors acquire data not at fixed points in time for the entire array but at fixed amplitude changes of their input , resulting in data that are optimally sparse in space and time , pixel-individual and precisely timed , and generated only if new , previously unknown information is available , i.e . event based .
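level-crossing sampling itself is easy to state precisely ; the following sketch ( threshold value illustrative ) emits an on/off event whenever the log intensity at one pixel moves by a fixed contrast step since the last event , which is the idealised dvs pixel model :

```python
# a sketch of pixel-individual level-crossing sampling: a dvs-style pixel
# emits an on/off event whenever log intensity moves by more than a contrast
# threshold since the last event.
import numpy as np

def level_crossing_events(log_intensity, times, theta=0.15):
    """log_intensity, times: 1d arrays for one pixel; returns (t, polarity)."""
    events, ref = [], log_intensity[0]
    for L, t in zip(log_intensity[1:], times[1:]):
        while L - ref >= theta:                # brightness increased: ON event
            ref += theta
            events.append((t, +1))
        while ref - L >= theta:                # brightness decreased: OFF event
            ref -= theta
            events.append((t, -1))
    return events
```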
this letter uses the high temporal resolution spiking output of neuromorphic event-based visual sensors to show that lowering time precision degrades performance on several recognition tasks specifically when reaching the conventional range of machine vision acquisition story_separator_special_tag the recently developed dynamic vision sensors ( dvs ) sense visual information asynchronously and code it into trains of events with sub-microsecond temporal resolution . this high temporal precision makes the output of these sensors especially suited for dynamic 3d visual reconstruction , by matching corresponding events generated by two different sensors in a stereo setup . this paper explores the use of gabor filters to extract information about the orientation of the object edges that produce the events , therefore increasing the number of constraints applied to the matching algorithm . this strategy provides more reliably matched pairs of events , improving the final 3d reconstruction . story_separator_special_tag deep neural networks ( dnns ) and convolutional neural networks ( cnns ) are useful for many practical tasks in machine learning . synaptic weights , as well as neuron activation functions within the deep network are typically stored with high-precision formats , e.g . 32 bit floating point . however , since storage capacity is limited and each memory access consumes power , both storage capacity and memory access are two crucial factors in these networks . here we present a method and an adaptation toolbox that extend the popular deep learning library caffe to support training of deep cnns with reduced numerical precision of weights and activations using fixed point notation . the toolbox includes tools to measure the dynamic range of weights and activations . using these tools , we quantized several cnns including vgg16 down to 16-bit weights and activations with only 0.8 % drop in top-1 accuracy . the quantization , especially of the activations , leads to an increase of up to 50 % in sparsity especially in early and intermediate layers , which we exploit to skip multiplications with zero , thus performing faster and computationally cheaper inference . story_separator_special_tag this paper introduces a novel methodology for training an event-driven classifier within a spiking neural network ( snn ) system capable of yielding good classification results when using both synthetic input data and real data captured from dynamic vision sensor ( dvs ) chips . the proposed supervised method uses the spiking activity provided by an arbitrary topology of prior snn layers to build histograms and train the classifier in the frame domain using the stochastic gradient descent algorithm . in addition , this approach can cope with leaky integrate-and-fire neuron models within the snn , a desirable feature for real-world snn applications , where neural activation must fade away after some time in the absence of inputs . consequently , this way of building histograms captures the dynamics of spikes immediately before the classifier . we tested our method on the mnist data set using different synthetic encodings and real dvs sensory data sets such as n-mnist , mnist-dvs , and poker-dvs using the same network topology and feature maps .
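the histogram construction described above can be reduced to a minimal sketch : count spikes per neuron over a presentation window to obtain a fixed-length feature vector , then train a linear softmax classifier with stochastic gradient descent in the frame domain . names and the learning rate below are illustrative :

```python
# a minimal sketch of training on spike-count histograms: count each neuron's
# spikes over a presentation window to get a fixed-length feature vector, then
# fit a linear softmax classifier in the frame domain.
import numpy as np

def spike_histogram(spikes, n_neurons):
    """spikes: list of (neuron_id, t) for one sample -> spike-count vector."""
    h = np.zeros(n_neurons)
    for neuron_id, _t in spikes:
        h[neuron_id] += 1
    return h / max(1.0, h.sum())               # normalise the histogram

def sgd_softmax_step(W, x, label, lr=0.1):
    """one stochastic-gradient step on cross-entropy for weight matrix W."""
    z = W @ x
    p = np.exp(z - z.max()); p /= p.sum()
    p[label] -= 1.0                             # gradient of softmax + nll
    return W - lr * np.outer(p, x)
```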
we demonstrate the effectiveness of our approach by achieving the highest classification accuracy reported on the n-mnist ( 97.77 % ) and poker-dvs ( 100 % ) real story_separator_special_tag an ongoing challenge in neuromorphic computing is to devise general and computationally efficient models of inference and learning which are compatible with the spatial and temporal constraints of the brain . one increasingly popular and successful approach is to take inspiration from inference and learning algorithms used in deep neural networks . however , the workhorse of deep learning , the gradient descent back propagation ( bp ) rule , often relies on the immediate availability of network-wide information stored with high-precision memory , and precise operations that are difficult to realize in neuromorphic hardware . remarkably , recent work showed that exact backpropagated weights are not essential for learning deep representations . random bp replaces feedback weights with random ones and encourages the network to adjust its feed-forward weights to learn pseudo-inverses of the ( random ) feedback weights . building on these results , we demonstrate an event-driven random bp ( erbp ) rule that uses error-modulated synaptic plasticity for learning deep representations in neuromorphic computing hardware . the rule requires only one addition and two comparisons for each synaptic weight using a two-compartment leaky integrate & fire ( i & f ) neuron , making it story_separator_special_tag apparent motion of the surroundings on an agent 's retina can be used to navigate through cluttered environments , avoid collisions with obstacles , or track targets of interest . the pattern of apparent motion of objects ( i.e. , the optic flow ) contains spatial information about the surrounding environment . for a small , fast-moving agent , as used in search and rescue missions , it is crucial to estimate the distance to close-by objects to avoid collisions quickly . this estimation cannot be done by conventional methods , such as frame-based optic flow estimation , given the size , power , and latency constraints of the necessary hardware . a practical alternative makes use of event-based vision sensors . contrary to the frame-based approach , they produce so-called events only when there are changes in the visual scene . we propose a novel asynchronous circuit , the spiking elementary motion detector ( semd ) , composed of a single silicon neuron and synapse , to detect elementary motion from an event-based vision sensor . the semd encodes the time an object 's image needs to travel across the retina into a burst of spikes story_separator_special_tag neuromorphic electronic systems exhibit advantageous characteristics , in terms of low energy consumption and low response latency , which can be useful in robotic applications that require compact and low power embedded computing resources . however , these neuromorphic circuits still face significant limitations that make their usage challenging : these include low precision , variability of components , sensitivity to noise and temperature drifts , as well as the currently limited number of neurons and synapses that are typically emulated on a single chip . in this paper , we show how it is possible to achieve functional robot control strategies using a mixed signal analog/digital neuromorphic processor interfaced to a mobile robotic platform equipped with an event-based dynamic vision sensor .
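the random-bp idea behind erbp can be illustrated without any spiking machinery : the backward pass uses a fixed random feedback matrix in place of the transpose of the forward weights . a toy numpy version with illustrative shapes :

```python
# a toy feedback-alignment update for a two-layer network: the backward pass
# uses a fixed random matrix B instead of the transpose of the forward
# weights, the key idea behind event-driven random backpropagation.
import numpy as np

rng = np.random.default_rng(0)
W1, W2 = rng.normal(size=(20, 10)), rng.normal(size=(5, 20))
B = rng.normal(size=(20, 5))                   # fixed random feedback weights

def fa_step(x, target, lr=0.01):
    global W1, W2
    h = np.maximum(0.0, W1 @ x)                # relu hidden layer
    y = W2 @ h
    e = y - target                             # output error
    dh = (B @ e) * (h > 0)                     # random feedback, not W2.T @ e
    W2 -= lr * np.outer(e, h)
    W1 -= lr * np.outer(dh, x)
```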
we provide a proof of concept implementation of obstacle avoidance and target acquisition using biologically plausible spiking neural networks directly emulated by the neuromorphic hardware . to our knowledge , this is the first demonstration of a working spike-based neuromorphic robotic controller in this type of hardware which illustrates the feasibility , as well as limitations , of this approach . story_separator_special_tag the lobula giant movement detector ( lgmd ) is an identified neuron of the locust that detects looming objects and triggers its escape responses . understanding the neural principles and networks that lead to these fast and robust responses can facilitate the design of efficient obstacle avoidance strategies in robotic applications . here we present a neuromorphic spiking neural network model of the lgmd driven by the output of a neuromorphic dynamic vision sensor ( dvs ) , which has been optimised to produce robust and reliable responses in the face of the constraints and variability of its mixed signal analogue-digital circuits . as this lgmd model has many parameters , we use the differential evolution ( de ) algorithm to optimise its parameter space . we also investigate the use of self-adaptive differential evolution ( sade ) which has been shown to ameliorate the difficulties of finding appropriate input parameters for de . we explore the use of two biological mechanisms : synaptic plasticity and membrane adaptivity in the lgmd . we apply de and sade to find parameters best suited for an obstacle avoidance system on an unmanned aerial vehicle ( uav ) , and story_separator_special_tag optically based measurements in high reynolds number fluid flows often require high-speed imaging techniques . these cameras typically record data internally and thus are limited by the amount of onboard memory available . a novel camera technology for use in particle tracking velocimetry is presented in this paper . this technology consists of a dynamic vision sensor in which pixels operate in parallel , transmitting asynchronous events only when relative changes in intensity of approximately 10 % are encountered with a temporal resolution of 1 µs . this results in a recording system whose data storage and bandwidth requirements are about 100 times smaller than a typical high-speed image sensor . post-processing of data collected from this sensor also runs about 10 times faster than real time . we present a proof-of-concept study comparing this novel sensor with a high-speed cmos camera capable of recording up to 2,000 fps at 1,024 × 1,024 pixels . comparisons are made in the ability of each system to track dense ( > 1 g/cm3 ) particles in a solid-liquid two-phase pipe flow . reynolds numbers based on the bulk velocity and pipe diameter up to 100,000 are investigated . story_separator_special_tag this paper presents a new high speed vision system using an asynchronous address-event representation camera . within this framework , an asynchronous event-based real-time hough circle transform is developed to track microspheres . the technology presented in this paper allows for a robust real-time event-based multi-object position detection at a frequency of several khz with a low computational cost . brownian motion is also detected within this context with both high speed and precision .
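an event-based hough circle transform of the kind described above admits a very small core : each event votes for all candidate centres at the known radius , and the accumulator decays so that detections track moving spheres . a numpy sketch with illustrative parameters :

```python
# a per-event hough vote for circles of known radius: every incoming event
# votes for all centre positions at distance r, and the accumulator decays
# over time so that only recent activity supports a detection.
import numpy as np

def hough_update(acc, event_xy, radius, decay=0.999, n_angles=64):
    acc *= decay                                # forget stale votes
    angles = np.linspace(0.0, 2 * np.pi, n_angles, endpoint=False)
    cx = np.round(event_xy[0] + radius * np.cos(angles)).astype(int)
    cy = np.round(event_xy[1] + radius * np.sin(angles)).astype(int)
    ok = (cx >= 0) & (cx < acc.shape[1]) & (cy >= 0) & (cy < acc.shape[0])
    np.add.at(acc, (cy[ok], cx[ok]), 1.0)       # vote for candidate centres
    return np.unravel_index(np.argmax(acc), acc.shape)  # current best centre
```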
the carried-out work is adapted to the automated or remote-operated microrobotic systems fulfilling their need of an extremely fast vision feedback . it is also a very promising solution to the microphysical phenomena analysis and particularly for the micro/nanoscale force measurement . story_separator_special_tag although motion analysis has been extensively investigated in the literature and a wide variety of tracking algorithms have been proposed , the problem of tracking objects using the dynamic vision sensor requires a slightly different approach . dynamic vision sensors are biologically inspired vision systems that asynchronously generate events upon relative light intensity changes . unlike conventional vision systems , the output of such a sensor is not an image ( frame ) but an address events stream . therefore , most of the conventional tracking algorithms are not appropriate for the dvs data processing . in this paper , we introduce an algorithm for spatiotemporal tracking that is suitable for the dynamic vision sensor . in particular , we address the problem of multiple persons tracking in the occurrence of high occlusions . we investigate the possibility to apply gaussian mixture models for detection , description , and tracking of objects . preliminary results prove that our approach can successfully track people even when their trajectories are intersecting . story_separator_special_tag this paper presents a number of new methods for visual tracking using the output of an event-based asynchronous neuromorphic dynamic vision sensor . it allows the tracking of multiple visual features in real time , achieving an update rate of several hundred kilohertz on a standard desktop pc . the approach has been specially adapted to take advantage of the event-driven properties of these sensors by combining both spatial and temporal correlations of events in an asynchronous iterative framework . various kernels , such as gaussian , gabor , combinations of gabor functions , and arbitrary user-defined kernels , are used to track features from incoming events . the trackers described in this paper are capable of handling variations in position , scale , and orientation through the use of multiple pools of trackers . this approach avoids the n² operations per event associated with conventional kernel-based convolution operations with n × n kernels . the tracking performance was evaluated experimentally for each type of kernel in order to demonstrate the robustness of the proposed solution . story_separator_special_tag event cameras are a new technology that can enable low-latency , fast visual sensing in dynamic environments towards faster robotic vision as they respond only to changes in the scene and have a very high temporal resolution ( < 1 µs ) . moving targets produce dense spatio-temporal streams of events that do not suffer from information loss between frames , which can occur when traditional cameras are used to track fast-moving targets . event-based tracking algorithms need to be able to follow the target position within the spatio-temporal data , while rejecting clutter events that occur as a robot moves in a typical office setting . we introduce a particle filter with the aim of being robust to temporal variation that occurs as the camera and the target move with different relative velocities , which can lead to a loss in visual information and missed detections .
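a stripped-down version of such an event-driven particle filter ( all constants illustrative ) : particles diffuse under a random-walk motion model , are reweighted by nearby events , and are resampled when the effective sample size collapses .

```python
# a minimal event-driven particle filter update for target tracking:
# random-walk motion model, event-based likelihood, systematic resampling.
import numpy as np

rng = np.random.default_rng(1)

def particle_step(particles, weights, events, sigma_motion=2.0, sigma_obs=4.0):
    """particles: (N, 2) positions; events: (M, 2) recent event positions."""
    particles = particles + rng.normal(0, sigma_motion, particles.shape)
    for ev in events:
        d2 = np.sum((particles - ev) ** 2, axis=1)
        weights *= 1e-6 + np.exp(-d2 / (2 * sigma_obs ** 2))
    weights /= weights.sum()
    estimate = weights @ particles               # posterior mean position
    # resample when the effective sample size collapses
    if 1.0 / np.sum(weights ** 2) < 0.5 * len(weights):
        idx = rng.choice(len(weights), len(weights), p=weights)
        particles = particles[idx]
        weights = np.full(len(weights), 1.0 / len(weights))
    return particles, weights, estimate
```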
the proposed system provides more persistent tracking compared to the prior state-of-the-art , especially when the robot is actively following a target with its gaze . experiments are performed on the icub humanoid robot performing ball tracking and gaze following . story_separator_special_tag object tracking is an important step in many artificial vision tasks . the current state-of-the-art implementations remain too computationally demanding for the problem to be solved in real time with high dynamics . this paper presents a novel real-time method for visual part-based tracking of complex objects from the output of an asynchronous event-based camera . this paper extends the pictorial structures model introduced by fischler and elschlager 40 years ago and introduces a new formulation of the problem , allowing the dynamic processing of visual input in real time at high temporal resolution using a conventional pc . it relies on the concept of representing an object as a set of basic elements linked by springs . these basic elements consist of simple trackers capable of successfully tracking a target with an ellipse-like shape at several kilohertz on a conventional computer . for each incoming event , the method updates the elastic connections established between the trackers and guarantees a desired geometric structure corresponding to the tracked object in real time . this introduces a high temporal elasticity to adapt to projective deformations of the tracked object in the focal plane . the elastic energy of this virtual mechanical story_separator_special_tag the primary problem dealt with in this paper is the following . given some description of a visual object , find that object in an actual photograph . part of the solution to this problem is the specification of a descriptive scheme , and a metric on which to base the decision of `` goodness '' of matching or detection . story_separator_special_tag the problem we are addressing in alvey project mmi149 is that of using computer vision to understand the unconstrained 3d world , in which the viewed scenes will in general contain too wide a diversity of objects for top-down recognition techniques to work . for example , we desire to obtain an understanding of natural scenes , containing roads , buildings , trees , bushes , etc. , as typified by the two frames from a sequence illustrated in figure 1. the solution to this problem that we are pursuing is to use a computer vision system based upon motion analysis of a monocular image sequence from a mobile camera . by extraction and tracking of image features , representations of the 3d analogues of these features can be constructed . story_separator_special_tag image registration finds a variety of applications in computer vision . unfortunately , traditional image registration techniques tend to be costly . we present a new image registration technique that makes use of the spatial intensity gradient of the images to find a good match using a type of newton-raphson iteration . our technique is faster because it examines far fewer potential matches between the images than existing techniques . furthermore , this registration technique can be generalized to handle rotation , scaling and shearing . we show how our technique can be adapted for use in a stereo vision system . story_separator_special_tag this paper introduces an event-based luminance-free method to detect and match corner events from the output of asynchronous event-based neuromorphic retinas .
the method relies on the use of space-time properties of moving edges . asynchronous event-based neuromorphic retinas are composed of autonomous pixels , each of them asynchronously generating `` spiking '' events that encode relative changes in pixels ' illumination at high temporal resolutions . corner events are defined as the spatiotemporal locations where the aperture problem can be solved using the intersection of several geometric constraints in events ' spatiotemporal spaces . a regularization process provides the required constraints , i.e . the motion attributes of the edges with respect to their spatiotemporal locations using local geometric properties of visual events . experimental results are presented on several real scenes showing the stability and robustness of the detection and matching . story_separator_special_tag where feature points are used in real-time frame-rate applications , a high-speed feature detector is necessary . feature detectors such as sift ( dog ) , harris and susan are good methods which yield high quality features , however they are too computationally intensive for use in real-time applications of any complexity . here we show that machine learning can be used to derive a feature detector which can fully process live pal video using less than 7 % of the available processing time . by comparison , neither the harris detector ( 120 % ) nor the detection stage of sift ( 300 % ) can operate at full frame rate . clearly a high-speed detector is of limited use if the features produced are unsuitable for downstream processing . in particular , the same scene viewed from two different positions should yield features which correspond to the same real-world 3d locations [ 1 ] . hence the second contribution of this paper is a comparison of corner detectors based on this criterion applied to 3d scenes . this comparison supports a number of claims made elsewhere concerning existing corner detectors . further , contrary to our initial expectations , we story_separator_special_tag unlike standard cameras that send intensity images at a constant frame rate , event-driven cameras asynchronously report pixel-level brightness changes , offering low latency and high temporal resolution ( both in the order of micro-seconds ) . as such , they have great potential for fast and low power vision algorithms for robots . visual tracking , for example , is easily achieved even for very fast stimuli , as only moving objects cause brightness changes . however , cameras mounted on a moving robot are typically non-stationary and the same tracking problem becomes confounded by background clutter events due to the robot ego-motion . in this paper , we propose a method for segmenting the motion of an independently moving object for event-driven cameras . our method detects and tracks corners in the event stream and learns the statistics of their motion as a function of the robot 's joint velocities when no independently moving objects are present . during robot operation , independently moving objects are identified by discrepancies between the predicted corner velocities from ego-motion and the measured corner velocities . we validate the algorithm on data collected from the neuromorphic icub robot . we achieve a story_separator_special_tag benchmarks have played a vital role in the advancement of visual object recognition and other fields of computer vision ( lecun et al. , 1998 ; deng et al. , 2009 ) .
the challenges posed by these standard datasets have helped identify and overcome the shortcomings of existing approaches , and have led to great advances of the state of the art . even the recent massive increase of interest in deep learning methods can be attributed to their success in difficult benchmarks such as imagenet ( krizhevsky et al. , 2012 ; lecun et al. , 2015 ) . neuromorphic vision uses silicon retina sensors such as the dynamic vision sensor ( dvs ; lichtsteiner et al. , 2008 ) . these sensors and their davis ( dynamic and active-pixel vision sensor ) and atis ( asynchronous time-based image sensor ) derivatives ( brandli et al. , 2014 ; posch et al. , 2014 ) take inspiration from biological vision , generating streams of asynchronous events indicating local log-intensity brightness changes . they thereby greatly reduce the amount of data to be processed , and their dynamic nature makes them a good fit for domains such as story_separator_special_tag in this study we compare nine optical flow algorithms that locally measure the flow normal to edges according to accuracy and computation cost . in contrast to conventional , frame-based motion flow algorithms , our open-source implementations compute optical flow based on address-events from a neuromorphic dynamic vision sensor ( dvs ) . for this benchmarking we created a dataset of two synthesized and three real samples recorded from a 240x180 pixel dynamic and active-pixel vision sensor ( davis ) . this dataset contains events from the dvs as well as conventional frames to support testing state-of-the-art frame-based methods . we introduce a new source for the ground truth : in the special case that the perceived motion stems solely from a rotation of the vision sensor around its three camera axes , the true optical flow can be estimated using gyro data from the inertial measurement unit integrated with the davis camera . this provides a ground-truth to which we can compare algorithms that measure optical flow by means of motion cues . an analysis of error sources led to the use of a refractory period , more accurate numerical derivatives and a savitzky-golay filter to achieve significant improvements story_separator_special_tag this paper introduces a process to compute optical flow using an asynchronous event-based retina at high speed and low computational load . a new generation of artificial vision sensors has now started to rely on biologically inspired designs for light acquisition . biological retinas , and their artificial counterparts , are totally asynchronous and data driven and rely on a paradigm of light acquisition radically different from most of the currently used frame-grabber technologies . this paper introduces a framework for processing visual data using asynchronous event-based acquisition , providing a method for the evaluation of optical flow . the paper shows that current limitations of optical flow computation can be overcome by using event-based visual acquisition , where high data sparseness and high temporal resolution permit the computation of optical flow with micro-second accuracy and at very low computational cost . story_separator_special_tag this paper compares image motion estimation with asynchronous event-based cameras to computer vision approaches that use frame-based video sequences as input . since dynamic events are triggered at significant intensity changes , which often are at the border of objects , we refer to the event-based image motion as contour motion .
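the rotation-only ground truth mentioned above follows from the rotational part of the standard interaction matrix : for a purely rotating camera , the image velocity at normalised coordinates ( x , y ) depends only on the angular rate , not on scene depth . a direct transcription ( calibration and units assumed , names illustrative ) :

```python
# flow from pure rotation: with angular rate (wx, wy, wz) in rad/s, the
# image-plane velocity at normalised coordinates (x, y) is given by the
# rotational part of the interaction matrix. no depth is needed, which is
# what makes gyro-based ground truth possible for rotating cameras.
import numpy as np

def rotational_flow(x, y, omega):
    wx, wy, wz = omega
    u = x * y * wx - (1 + x * x) * wy + y * wz
    v = (1 + y * y) * wx - x * y * wy - x * wz
    return u, v
```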
algorithms are presented for the estimation of accurate contour motion from local spatio-temporal information for two camera models : the dynamic vision sensor ( dvs ) , which asynchronously records temporal changes of the luminance , and a family of new sensors which combine dvs data with intensity signals . these algorithms take advantage of the high temporal resolution of the dvs and achieve robustness using a multiresolution scheme in time . it is shown that , because of the coupling of velocity and luminance information in the event distribution , the image motion estimation problem becomes much easier with the new sensors which provide both events and image intensity than with the dvs alone . experiments on synthesized data from computer vision benchmarks show that our algorithm on combined data outperforms computer vision methods in accuracy and can achieve real-time performance , and experiments on real data confirm story_separator_special_tag this paper presents a method for image motion estimation for event-based sensors . accurate and fast image flow estimation still challenges computer vision . a new paradigm based on asynchronous event-based data provides an interesting alternative and has been shown to provide good estimation at high contrast contours by estimating motion based on very accurate timing . however , these techniques still fail in regions of high-frequency texture . this work presents a simple method for locating those regions , and a novel phase-based method for event sensors that estimates these regions more accurately . finally , we evaluate and compare our results with other state-of-the-art techniques . story_separator_special_tag this paper presents a novel , drastically simplified method to compute optic flow on a miniaturized embedded vision system , suitable for use on-board miniaturized indoor flying robots . estimating optic flow is a common technique for robotic motion stabilization in systems without ground contact , such as unmanned aerial vehicles ( uavs ) . because of high computing power requirements to process video camera data , most optic flow algorithms are implemented off-board on pcs or on dedicated hardware , connected through tethered or wireless links . here , in contrast , we present a miniaturized stand-alone embedded system that utilizes a novel neuro-biologically inspired event-based vision sensor ( dvs ) to extract optic flow on-board in real-time with minimal computing requirements . the dvs provides asynchronous events that resemble temporal contrast changes at individual pixel level , instead of full image frames at regular time intervals . such a representation provides high temporal resolution while simultaneously reducing the amount of data to be processed . we present a simple algorithm to extract optic flow information from such event-based vision data , which is sufficiently efficient in terms of data storage and processing power to be executed on an embedded story_separator_special_tag computational models of visual processing often use frame-based image acquisition techniques to process a temporally changing stimulus . this approach is unlike biological mechanisms that are spike-based and independent of individual frames . the neuromorphic dynamic vision sensor ( dvs ) [ lichtsteiner et al. , 2008 ] provides a stream of independent visual events that indicate local illumination changes , resembling spiking neurons at a retinal level .
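a primitive shared by several of the event-based optical-flow abstracts above : in a small spatiotemporal neighbourhood , the events fired by a moving edge lie approximately on a plane t = a x + b y + c , and the flow normal to the edge follows from the fitted slopes . a minimal least-squares sketch , assuming events arrive as ( x , y , t ) triples ; this illustrates the general plane-fitting idea , not the exact algorithm of any single paper :

import numpy as np

def normal_flow_from_events(events):
    """estimate normal flow from a local batch of events.

    events : (n, 3) array of (x, y, t); all events are assumed to belong to
             one small spatiotemporal neighbourhood around a moving edge.
    returns (vx, vy): velocity perpendicular to the edge, pixels per time unit.
    """
    x, y, t = events[:, 0], events[:, 1], events[:, 2]
    # fit the plane t = a*x + b*y + c by least squares
    A = np.column_stack([x, y, np.ones_like(x)])
    (a, b, _), *_ = np.linalg.lstsq(A, t, rcond=None)
    g2 = a * a + b * b                 # squared gradient of the time surface
    if g2 < 1e-12:                     # (near-)flat surface: flow undefined
        return 0.0, 0.0
    # the velocity normal to the edge is grad(t) / |grad(t)|^2
    return a / g2, b / g2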
we introduce a new approach for the modelling of cortical mechanisms of motion detection along the dorsal pathway using this type of representation . our model combines filters with spatio-temporal tunings also found in visual cortex to yield spatio-temporal and direction specificity . we probe our model with recordings of test stimuli , articulated motion and ego-motion . we show how our approach robustly estimates optic flow and also demonstrate how this output can be used for classification purposes . story_separator_special_tag this paper describes a fully spike-based neural network for optical flow estimation from dynamic vision sensor data . a low power embedded implementation of the method , which combines the asynchronous time-based image sensor with ibm 's truenorth neurosynaptic system , is presented . the sensor generates spikes with submillisecond resolution in response to scene illumination changes . these spikes are processed by a spiking neural network running on truenorth with a 1-ms resolution to accurately determine the order and time difference of spikes from neighbouring pixels , and therefore infer the velocity . the spiking neural network is a variant of the barlow-levick method for optical flow estimation . the system is evaluated on two recordings for which ground truth motion is available , and achieves an average endpoint error of 11 % at an estimated power budget of under 80 mw for the sensor and computation . story_separator_special_tag event-based cameras are a new passive sensing modality with a number of benefits over traditional cameras , including extremely low latency , asynchronous data acquisition , high dynamic range , and very low power consumption . there has been a lot of recent interest and development in applying algorithms to use the events to perform a variety of three-dimensional perception tasks , such as feature tracking , visual odometry , and stereo depth estimation . however , the field currently lacks the wealth of labeled data that exists for traditional cameras for both testing and development . in this letter , we present a large dataset with a synchronized stereo-pair event-based camera system , carried on a handheld rig , flown by a hexacopter , driven on top of a car , and mounted on a motorcycle , in a variety of different illumination levels and environments . from each camera , we provide the event stream , grayscale images , and inertial measurement unit ( imu ) readings . in addition , we utilize a combination of imu , a rigidly mounted lidar system , indoor and outdoor motion capture , and gps to provide story_separator_special_tag we present a technique for the computation of 2d component velocity from image sequences . initially , the image sequence is represented by a family of spatiotemporal velocity-tuned linear filters . component velocity , computed from spatiotemporal responses of identically tuned filters , is expressed in terms of the local first-order behavior of surfaces of constant phase . justification for this definition is discussed from the perspectives of both 2d image translation and deviations from translation that are typical in perspective projections of 3d scenes . the resulting technique is predominantly linear , efficient , and suitable for parallel processing . moreover , it is local in space-time , robust with respect to noise , and permits multiple estimates within a single neighborhood .
promising quantitative results are reported from experiments with realistic image sequences , including cases with sizeable perspective deformation . story_separator_special_tag from the publisher : a basic problem in computer vision is to understand the structure of a real world scene given several images of it . recent major developments in the theory and practice of scene reconstruction are described in detail in a unified framework . the book covers the geometric principles and how to represent objects algebraically so they can be computed and applied . the authors provide comprehensive background material and explain how to apply the methods and implement the algorithms directly . story_separator_special_tag biologically-inspired dynamic vision sensors were introduced in 2002 ; they asynchronously detect the significant relative light intensity changes in a scene and output them in the form of an address-event representation . these vision sensors capture dynamical discontinuities on-chip for a reduced data volume compared to that from intensity images . therefore , they support detection , segmentation and tracking of moving objects in the address-event space by exploiting the generated events , as a reaction to intensity changes , resulting from the scene dynamics . object tracking has been previously demonstrated and reported in scientific publications using monocular dynamic vision sensors . this paper contributes by presenting and demonstrating a tracking algorithm using the 3d sensing technology based on the stereo dynamic vision sensor . this system is capable of detecting and tracking persons within a 4m range at an effective refresh rate of the depth map of up to 200 per second . the 3d system is evaluated for people tracking and the tests showed that up to 60k address-events/s can be processed for real-time tracking . story_separator_special_tag in this paper we present different approaches of 3d stereo matching for bio-inspired image sensors . in contrast to conventional digital cameras , this image sensor , called silicon retina , delivers asynchronous events instead of synchronous intensity or color images . the events represent either an increase ( on-event ) or a decrease ( off-event ) of a pixel 's intensity . the sensor can provide events with a time resolution of up to 1 ms and it operates in a dynamic range of up to 120 db . in this work we use two silicon retina cameras as a stereo sensor setup for 3d reconstruction of the observed scene , as already known from conventional cameras . the polarity , the timestamp , and a history of the events are used for stereo matching . due to the different information content and data type of the events , in comparison to conventional pixels , standard stereo matching approaches cannot directly be used . thus , we developed an area-based , an event-image-based , and a time-based approach and evaluated the results achieving promising results for stereo matching based on events . story_separator_special_tag this demonstration shows a natural gesture interface for console entertainment devices using as input a stereo pair of dynamic vision sensors . the event-based processing of the sparse sensor output allows fluid interaction at a laptop processor load of less than 3 % . story_separator_special_tag dynamic vision sensors ( dvss ) encode visual input as a stream of events generated upon relative light intensity changes in the scene .
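the address-event representation recurring throughout these abstracts is easy to pin down in code : each event is an ( x , y , t , polarity ) tuple , and many of the flow , tracking and stereo methods summarized here work from a per-pixel map of the most recent event times , often called a time surface . a minimal sketch ; the sensor shape and variable names are illustrative :

import numpy as np

def time_surface(events, shape):
    """accumulate address-events into per-pixel last-timestamp maps.

    events : iterable of (x, y, t, polarity), polarity in {-1, +1}
    shape  : (height, width) of the sensor, e.g. (180, 240) for a davis240
    returns two (h, w) maps holding the most recent on/off event time per
    pixel; events are assumed to arrive in time order.
    """
    surface = {+1: np.full(shape, -np.inf), -1: np.full(shape, -np.inf)}
    for x, y, t, p in events:
        surface[p][y, x] = t          # later events simply overwrite earlier ones
    return surface[+1], surface[-1]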
these sensors have the advantage of allowing simultaneously high temporal resolution ( better than 10 µs ) and wide dynamic range ( > 120 db ) with a sparse data representation , which is not possible with clocked vision sensors . in this paper , we focus on the task of stereo reconstruction . the spatiotemporal and asynchronous aspects of data provided by the sensor impose a different stereo reconstruction approach from the one applied for synchronous frame-based cameras . we propose to model the event-driven stereo matching by a cooperative network ( marr and poggio , 1976 , science 194 , 283-7 ) . the history of the recent activity in the scene is stored in the network , which serves as spatiotemporal context used in disparity calculation for each incoming event . the network constantly evolves in time , as events are generated . in our work , not only the spatiotemporal aspect of the data is preserved but also the matching is performed asynchronously . the results of the experiments prove that the proposed approach is story_separator_special_tag epipolar geometry , the cornerstone of perspective stereo vision , has been studied extensively since the advent of computer vision . establishing such a geometric constraint is of primary importance , as it allows the recovery of the 3-d structure of scenes . estimating the epipolar constraints of nonperspective stereo is difficult , as they can no longer be defined because of the complexity of the sensor geometry . this paper will show that these limitations are , to some extent , a consequence of the static image frames commonly used in vision . the conventional frame-based approach suffers from a lack of the dynamics present in natural scenes . we introduce the use of neuromorphic event-based , rather than frame-based , vision sensors for perspective stereo vision . this type of sensor uses the dimension of time as the main conveyor of information . in this paper , we present a model for asynchronous event-based vision , which is then used to derive a general new concept of epipolar geometry linked to the temporal activation of pixels . practical experiments demonstrate the validity of the approach , solving the problem of estimating the fundamental matrix applied , in a first stage , to classic story_separator_special_tag this paper presents a novel n-ocular 3d reconstruction algorithm for event-based vision data from bio-inspired artificial retina sensors . artificial retinas capture visual information asynchronously and encode it into streams of asynchronous spike-like pulse signals carrying information on , e.g. , temporal contrast events in the scene . the precise time of the occurrence of these visual features is implicitly encoded in the spike timings . due to the high temporal resolution of the asynchronous visual information acquisition , the output of these sensors is ideally suited for dynamic 3d reconstruction . the presented technique takes full advantage of the event-driven operation , i.e . events are processed individually at the moment they arrive . this strategy allows us to preserve the original dynamics of the scene , hence allowing for more robust 3d reconstructions . as opposed to existing techniques , this algorithm is based on geometric and time constraints alone , making it particularly simple to implement and largely linear .
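the event-based stereo abstracts above share one matching recipe : a left/right event pair is a candidate correspondence when the two events are nearly coincident in time and their pixels satisfy the epipolar constraint x_r^t f x_l ≈ 0 . a minimal sketch , assuming a fundamental matrix f is already calibrated ; the tolerances and the brute-force search are illustrative simplifications :

import numpy as np

def match_events(left_events, right_events, F, dt_max=1e-3, epi_max=1.0):
    """pair left/right dvs events by temporal coincidence + epipolar distance.

    left_events, right_events : lists of (x, y, t) tuples
    F       : 3x3 fundamental matrix mapping left pixels to right epipolar lines
    dt_max  : time-coincidence window in seconds (illustrative)
    epi_max : max point-to-epipolar-line distance in pixels (illustrative)
    """
    matches = []
    for xl, yl, tl in left_events:
        line = F @ np.array([xl, yl, 1.0])      # epipolar line in the right image
        norm = np.hypot(line[0], line[1]) + 1e-12
        for xr, yr, tr in right_events:
            if abs(tr - tl) > dt_max:           # time constraint first: it is cheap
                continue
            d = abs(line @ np.array([xr, yr, 1.0])) / norm
            if d < epi_max:                     # geometric constraint
                matches.append(((xl, yl, tl), (xr, yr, tr)))
    return matches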
story_separator_special_tag similarity measurement plays an important role in stereo matching , whether for visual data from standard cameras or for that from novel sensors such as dynamic vision sensors ( dvs ) . generally speaking , robust feature descriptors contribute to designing a powerful similarity measurement , as demonstrated by classic stereo matching methods . however , the kinds and representational ability of feature descriptors for dvs data are so limited that achieving accurate stereo matching on dvs data becomes very challenging . in this paper , a novel feature descriptor is proposed to improve the accuracy for dvs stereo matching . our feature descriptor can describe the local context or distribution of the dvs data , contributing to constructing an effective similarity measurement for dvs data matching , yielding an accurate stereo matching result . our method is evaluated on ground-truth data and compared with various standard stereo methods . experiments demonstrate the efficiency and effectiveness of our method . story_separator_special_tag real-world depth perception applications require precise reaction to fast motion , and the ability to operate in scenes which contain large intensity differences or high dynamic range . standard cmos camera-based methods for depth computing , such as stereo matching , run into the problems of huge power consumption at high frame-rates or inaccurate depths with noise or holes . the event camera , dvs ( dynamic vision sensor ) , aims to be robust to fast motion and light change with low power consumption and sparse representation , offering great potential to replace standard cameras for depth perception . however , it is not trivial to directly apply dvs for stereo matching due to its low latency and sparsity , which result in extremely limited available information and imperfect imaging quality . to overcome these problems and make the dvs available for depth perception , this paper introduces a novel method which can enhance the stream of events and estimate the dense depth through event driven stereo matching . our event stream enhancement algorithm efficiently buffers events according to temporal continuity rather than artificially-chosen time intervals , and our stereo matching method can robustly estimate story_separator_special_tag biologically-inspired event-driven silicon retinas , so called dynamic vision sensors ( dvs ) , allow efficient solutions for various visual perception tasks , e.g . surveillance , tracking , or motion detection . similar to retinal photoreceptors , any perceived light intensity change in the dvs generates an event at the corresponding pixel . the dvs thereby emits a stream of spatiotemporal events to encode visually perceived objects that , in contrast to the output of conventional frame-based cameras , is largely free of redundant background information . the dvs offers multiple additional advantages , but requires the development of radically new asynchronous , event-based information processing algorithms . in this paper we present a fully event-based disparity matching algorithm for reliable 3d depth perception using a dynamic cooperative neural network . the interaction between cooperative cells applies cross-disparity uniqueness-constraints and within-disparity continuity-constraints , to asynchronously extract disparity for each new event , without any need of buffering individual events .
we have investigated the algorithm 's performance in several experiments ; our results demonstrate smooth disparity maps computed in a purely event-based manner , even in scenes with temporally-overlapping stimuli . story_separator_special_tag we present two improvement techniques for stereo matching algorithms using silicon retina sensors . we verify the results with ground truth data . in contrast to conventional monochrome/color cameras , silicon retina sensors deliver an asynchronous flow of events instead of common framed and discrete intensity or color images . while using this kind of sensor in a stereo setup enables new fields of application , it also introduces new challenges in terms of stereo image analysis . using this type of sensor , stereo matching algorithms have to deal with sparse event data , thus , less information . this affects the quality of the achievable disparity results and renders improving the stereo matching algorithms a necessary task . for this reason , we introduce two techniques for increasing the accuracy of silicon retina stereo results , in the sense that the average distance error is reduced . the first method is an adapted belief propagation approach optimizing the initial matching cost volume , and the second is an innovative two-stage postfilter for smoothing and outlier rejection . the evaluation shows that the proposed techniques increase the accuracy of the stereo matching and constitute a useful story_separator_special_tag
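the cooperative-network stereo methods above all inherit the marr-poggio update rule : support for a disparity hypothesis is excited by neighbouring hypotheses at the same disparity ( continuity ) and inhibited by competing disparities at the same pixel ( uniqueness ) . a minimal dense sketch of one such update ; the gains , the 3x3 neighbourhood and the wrap-around boundary handling are illustrative simplifications of the published event-driven versions :

import numpy as np

def cooperative_step(C, excite=2.0, inhibit=1.0):
    """one marr-poggio style update of a cooperative disparity network.

    C : (h, w, d) non-negative support for each disparity hypothesis.
    gains and neighbourhood size are illustrative, not from the papers.
    """
    # within-disparity continuity: mean support over a 3x3 spatial window
    # (np.roll wraps around at the image borders; acceptable for a sketch)
    support = sum(np.roll(np.roll(C, dy, axis=0), dx, axis=1)
                  for dy in (-1, 0, 1) for dx in (-1, 0, 1)) / 9.0
    # cross-disparity uniqueness: inhibition from rival disparities per pixel
    rivals = C.sum(axis=2, keepdims=True) - C
    return np.clip(excite * support - inhibit * rivals, 0.0, 1.0)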
as we are moving towards the internet of things ( iot ) , the number of sensors deployed around the world is growing at a rapid pace . market research has shown a significant growth of sensor deployments over the past decade and has predicted a significant increment of the growth rate in the future . these sensors continuously generate enormous amounts of data . however , in order to add value to raw sensor data we need to understand it . collection , modelling , reasoning , and distribution of context in relation to sensor data plays a critical role in this challenge . context-aware computing has proven to be successful in understanding sensor data . in this paper , we survey context awareness from an iot perspective . we present the necessary background by introducing the iot paradigm and context-aware fundamentals at the beginning . then we provide an in-depth analysis of the context life cycle . we evaluate a subset of projects ( 50 ) which represent the majority of research and commercial solutions proposed in the field of context-aware computing conducted over the last decade ( 2001-2011 ) based on our own taxonomy . finally , based on story_separator_special_tag this paper addresses the internet of things . the main enabling factor of this promising paradigm is the integration of several technologies and communications solutions . identification and tracking technologies , wired and wireless sensor and actuator networks , enhanced communication protocols ( shared with the next generation internet ) , and distributed intelligence for smart objects are just the most relevant . as one can easily imagine , any serious contribution to the advance of the internet of things must necessarily be the result of synergetic activities conducted in different fields of knowledge , such as telecommunications , informatics , electronics and social science . in such a complex scenario , this survey is directed to those who want to approach this complex discipline and contribute to its development . different visions of this internet of things paradigm are reported and enabling technologies reviewed . what emerges is that major issues still have to be faced by the research community . the most relevant among them are addressed in detail . story_separator_special_tag the internet of things ( iot ) is the logical further development of today 's internet . technological advancements lead to smart objects being capable of identifying , locating , sensing and connecting , thus leading to new forms of communication between people and things and things themselves . ambient assisted living ( aal ) encompasses technical systems to support elderly people in their daily routine to allow an independent and safe lifestyle as long as possible . keep in touch ( kit ) uses smart objects and technologies ( near field communication and radio frequency identification ) to facilitate telemonitoring processes . closed loop healthcare services make use of kit technology and are capable of processing relevant data and establishing communication channels between elderly people and their environment and different groups of care-givers ( physicians , relatives , mobile care providers ) . the combination of kit technology ( smart objects ) and closed loop healthcare services results in an applied iot infrastructure for aal scenarios .
iot and aal applications already applied in telemonitoring and medication intake compliance projects show that these applications are useful and accepted by the elderly and that the developed infrastructure enables a new story_separator_special_tag internet of things ( iot ) has been recognized as a part of future internet and ubiquitous computing . it creates a true ubiquitous or smart environment . it demands a complex distributed architecture with numerous diverse components , including end devices and applications and their association with context . this article discusses the significance of middleware systems for the iot . the middleware for iot acts as a bond joining the heterogeneous domains of applications communicating over heterogeneous interfaces . first , to enable a better understanding of the current gaps and future directions in this field , a comprehensive review of the existing middleware systems for iot is provided here . second , fundamental functional blocks are proposed for this middleware system , and based on these , a feature-wise classification is performed on the existing iot middleware . third , open issues are analyzed and our vision on the research scope in this area is presented . story_separator_special_tag the internet of things ( iot ) has provided a promising opportunity to build powerful industrial systems and applications by leveraging the growing ubiquity of radio-frequency identification ( rfid ) , and wireless , mobile , and sensor devices . a wide range of industrial iot applications have been developed and deployed in recent years . in an effort to understand the development of iot in industries , this paper reviews the current research of iot , key enabling technologies , major iot applications in industries , and identifies research trends and challenges . a main contribution of this review paper is that it systematically summarizes the current state of the art of iot in industries . story_separator_special_tag it sounds like mission impossible to connect everything on the earth together via the internet , but the internet of things ( iot ) will dramatically change our life in the foreseeable future , by making many `` impossibles '' possible . to many , the massive data generated or captured by iot are considered to contain highly useful and valuable information . data mining will no doubt play a critical role in making this kind of system smart enough to provide more convenient services and environments . this paper begins with a discussion of the iot . then , a brief review of the features of `` data from iot '' and `` data mining for iot '' is given . finally , changes , potentials , open issues , and future trends of this field are addressed . story_separator_special_tag this paper examines the benefits of edge mining : data mining that takes place on the wireless , battery-powered , and smart sensing devices that sit at the edge points of the internet of things . through local data reduction and transformation , edge mining can quantifiably reduce the number of packets that must be sent , reducing energy usage and remote storage requirements . in addition , edge mining has the potential to reduce the risk to personal privacy by embedding information requirements at the sensing point , limiting inappropriate use . the benefits of edge mining are examined with respect to three specific algorithms : linear spanish inquisition protocol ( l-sip ) , classact , and bare necessities ( bn ) , which are all instantiations of general sip .
in general , the benefits provided by edge mining are related to the predictability of data streams and availability of precise information requirements ; results show that l-sip typically reduces packet transmission by around 95 % ( 20-fold ) , bn reduces packet transmission by 99.98 % ( 5000-fold ) , and classact reduces packet transmission by 99.6 % ( 250-fold ) . although energy reduction is story_separator_special_tag the internet of things is a paradigm that allows the interaction of ubiquitous devices through a network to achieve common goals . this paradigm , like any man-made infrastructure , is subject to disasters , outages and other adversarial conditions . in these situations , provisioned communications fail , rendering the paradigm of little or no use . hence , network self-organization among these devices is needed to allow for communication resilience . this paper presents a survey of related work in the area of self-organization and discusses future research opportunities and challenges for self-organization in the internet of things . we begin this paper with a system perspective of the internet of things . we then identify and describe the key components of self-organization in the internet of things and discuss enabling technologies . finally we discuss possible tailoring of prior work of other related applications to suit the needs of self-organization in the internet of things paradigm . story_separator_special_tag social networking concepts have been applied to several communication network settings , which span from delay-tolerant to peer-to-peer networks . more recently , one can observe a flourishing of proposals aimed at giving social-like capabilities to the objects in the internet of things . such proposals address the design of conceptual ( and software ) platforms , which can be exploited to easily develop and implement complex applications that require direct interactions among objects . the major goal is to build techniques that allow the network to enhance the level of trust between objects that are `` friends '' with each other . furthermore , a social paradigm could definitely guarantee network navigability even if the number of nodes becomes orders of magnitude higher than in the traditional internet . objectives of this article are to analyze the major opportunities arising from the integration of social networking concepts into the internet of things , present the major ongoing research activities , and point out the most critical technical challenges . story_separator_special_tag the internet of things ( iot ) shall be able to incorporate transparently and seamlessly a large number of different and heterogeneous end systems , while providing open access to selected subsets of data for the development of a plethora of digital services . building a general architecture for the iot is hence a very complex task , mainly because of the extremely large variety of devices , link layer technologies , and services that may be involved in such a system . in this paper , we focus specifically on urban iot systems which , while still being quite a broad category , are characterized by their specific application domain . urban iots , in fact , are designed to support the smart city vision , which aims at exploiting the most advanced communication technologies to support added-value services for the administration of the city and for the citizens .
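the packet reductions quoted in the edge-mining abstract above come from one principle : sensing node and server share a predictive model of the stream , and a reading is transmitted only when it drifts out of a tolerance band around the shared prediction . a minimal dead-band sketch in the spirit of l-sip ; the linear predictor , state handling and tolerance are illustrative , not the published protocol :

def sip_filter(samples, tol=0.5):
    """yield only the samples that must be sent under a shared linear predictor.

    both sender and receiver extrapolate the last transmitted value linearly,
    so a sample is transmitted only when the true reading drifts out of the
    tolerance band; predictable streams therefore produce very few packets.
    tol is an illustrative tolerance in sensor units.
    """
    last_v, last_i, slope = None, None, 0.0
    for i, v in enumerate(samples):
        predicted = None if last_v is None else last_v + slope * (i - last_i)
        if predicted is None or abs(v - predicted) > tol:
            if last_v is not None:
                slope = (v - last_v) / (i - last_i)   # update the shared model
            yield i, v                                # transmit this reading
            last_v, last_i = v, i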
this paper hence provides a comprehensive survey of the enabling technologies , protocols , and architecture for an urban iot . furthermore , the paper will present and discuss the technical solutions and best-practice guidelines adopted in the padova smart city project , a proof-of-concept deployment of an story_separator_special_tag technologies to support the internet of things are becoming more important as the need to better understand our environments and make them smart increases . as a result it is predicted that intelligent devices and networks , such as wsns , will not be isolated , but connected and integrated , composing computer networks . so far , the ip-based internet is the largest network in the world ; therefore , great strides are being made to connect wsns with the internet . to this end , the ietf has developed a suite of protocols and open standards for accessing applications and services for wireless resource-constrained networks . however , many open challenges remain , mostly due to the complex deployment characteristics of such systems and the stringent requirements imposed by various services wishing to make use of such complex systems . thus , it becomes critically important to study how the current approaches to standardization in this area can be improved , and at the same time better understand the opportunities for the research community to contribute to the iot field . to this end , this article presents an overview of current standards and research activities in both story_separator_special_tag the term internet-of-things is used as an umbrella keyword for covering various aspects related to the extension of the internet and the web into the physical realm , by means of the widespread deployment of spatially distributed devices with embedded identification , sensing and/or actuation capabilities . internet-of-things envisions a future in which digital and physical entities can be linked , by means of appropriate information and communication technologies , to enable a whole new class of applications and services . in this article , we present a survey of technologies , applications and research challenges for the internet-of-things . story_separator_special_tag ubiquitous sensing enabled by wireless sensor network ( wsn ) technologies cuts across many areas of modern day living . this offers the ability to measure , infer and understand environmental indicators , from delicate ecologies and natural resources to urban environments . the proliferation of these devices in a communicating-actuating network creates the internet of things ( iot ) , wherein sensors and actuators blend seamlessly with the environment around us , and the information is shared across platforms in order to develop a common operating picture ( cop ) . fuelled by the recent adaptation of a variety of enabling device technologies such as rfid tags and readers , near field communication ( nfc ) devices and embedded sensor and actuator nodes , the iot has stepped out of its infancy and is the next revolutionary technology in transforming the internet into a fully integrated future internet . as we move from www ( static pages web ) to web2 ( social networking web ) to web3 ( ubiquitous computing web ) , the need for data-on-demand using sophisticated intuitive queries increases significantly . this paper presents a cloud-centric vision for worldwide implementation of internet story_separator_special_tag the world population is growing at a rapid pace .
towns and cities are accommodating half of the world 's population , thereby creating tremendous pressure on every aspect of urban living . cities are known to have a large concentration of resources and facilities . such environments attract people from rural areas . however , this unprecedented attraction has now become an overwhelming issue for city governance and politics . the enormous pressure towards efficient city management has triggered various smart city initiatives by both government and private sector businesses to invest in information and communication technologies to find sustainable solutions to the growing issues . the internet of things ( iot ) has also gained significant attention over the past decade . the iot envisions connecting billions of sensors to the internet and using them for efficient and effective resource management in smart cities . today , infrastructure , platforms and software applications are offered as services using cloud technologies . in this paper , we explore the concept of sensing as a service and how it fits with the iot . our objective is to investigate the concept of the sensing as a service model in technological , economical and social story_separator_special_tag the initial vision of the internet of things was of a world in which all physical objects are tagged and uniquely identified by rfid transponders . however , the concept has grown into multiple dimensions , encompassing sensor networks able to provide real-world intelligence and goal-oriented collaboration of distributed smart objects via local networks or global interconnections such as the internet . despite significant technological advances , difficulties associated with the evaluation of iot solutions under realistic conditions in real-world experimental deployments still hamper their maturation and significant rollout . in this article we identify requirements for the next generation of iot experimental facilities . while providing a taxonomy , we also survey currently available research testbeds , identify existing gaps , and suggest new directions based on experience from recent efforts in this field . story_separator_special_tag we have witnessed the fixed internet emerging with virtually every computer being connected today ; we are currently witnessing the emergence of the mobile internet with the exponential explosion of smart phones , tablets and net-books . however , both will be dwarfed by the anticipated emergence of the internet of things ( iot ) , in which everyday objects are able to connect to the internet , tweet or be queried . whilst the impact on economies and societies around the world is undisputed , the technologies facilitating such a ubiquitous connectivity have struggled so far and only recently commenced to take shape . to this end , this paper introduces in a timely manner and for the first time the wireless communications stack the industry believes to meet the important criteria of power-efficiency , reliability and internet connectivity . industrial applications have been the early adopters of this stack , which has become the de-facto standard , thereby bootstrapping early iot developments with already thousands of wireless nodes deployed .
corroborated throughout this paper and by emerging industry alliances , we believe that a standardized approach , using the latest developments in the ieee 802.15.4 and ietf working groups story_separator_special_tag a proposed internet of things system architecture offers a solution to the broad array of challenges researchers face in terms of general system security , network security , and application security . story_separator_special_tag the security issues of the internet of things ( iot ) are directly related to the wide application of its system . beginning with introducing the architecture and features of iot security , this paper expounds several security issues of iot that exist in the three-layer system structure , and comes up with solutions to the issues above coupled with key technologies involved . among the safety measures concerned , those concerning the perception layer are elaborated in particular , including key management and algorithms , security routing protocols , data fusion technology , as well as authentication and access control , etc . story_separator_special_tag embedding nanosensors in the environment would add a new dimension to the internet of things , but realizing the iont vision will require developing new communication paradigms and overcoming various technical obstacles . story_separator_special_tag tools like microsoft .net gadgeteer offer the ability to quickly prototype , test , and deploy connected devices , providing a key element that will accelerate our understanding of the challenges in realizing the internet of things vision . gadgeteer is a general-purpose device development platform where the key elements include rapid construction and reconfiguration of electronic device hardware , ease of programming and debugging , and the ability to leverage online web services for additional storage , communication , and processing . story_separator_special_tag induction motors are widely used in industry . one example is applied to elevators used in multi-storey buildings . in this application , the elevator uses the principle of forward and reverse circuits because the elevator must be able to move either up or down . based on the description of the above problem , a prototype rotation-reversal control device for a three-phase induction motor was designed , based on an atmega 328p microcontroller board . the entire core system is controlled by arduino uno r3 ( microcontroller atmega 328p ) with visual basic version 6.0 software . the reversal control of a three-phase induction motor based on the atmega 328p microcontroller board simplifies the equipment and can be operated from a sufficiently distant location , making it more efficient and sophisticated than the conventional approach . story_separator_special_tag the combination of the internet and emerging technologies such as nearfield communications , real-time localization , and embedded sensors lets us transform everyday objects into smart objects that can understand and react to their environment . such objects are building blocks for the internet of things and enable novel computing applications . as a step toward design and architectural principles for smart objects , the authors introduce a hierarchy of architectures with increasing levels of real-world awareness and interactivity . in particular , they describe activity- , policy- , and process-aware smart objects and demonstrate how the respective architectural abstractions support increasingly complex applications .
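the hierarchy of smart-object architectures in the closing abstract above can be made concrete as a class hierarchy in which each level adds one capability ; the class and method names below are illustrative inventions , not an api from the paper :

class ActivityAwareObject:
    """level 1 : records what happens to it."""
    def __init__(self):
        self.log = []
    def sense(self, event):
        self.log.append(event)

class PolicyAwareObject(ActivityAwareObject):
    """level 2 : checks sensed activity against application policies."""
    def __init__(self, policies):
        super().__init__()
        self.policies = policies   # callables: event -> violation message or None
    def sense(self, event):
        super().sense(event)
        # return any policy violations raised by this event
        return [v for v in (p(event) for p in self.policies) if v]

class ProcessAwareObject(PolicyAwareObject):
    """level 3 : situates events in a workflow and can advise the user."""
    def __init__(self, policies, process_steps):
        super().__init__(policies)
        self.steps = list(process_steps)   # expected sequence of event names
    def next_step(self):
        return self.steps[len(self.log)] if len(self.log) < len(self.steps) else None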
story_separator_special_tag at the university of washington , the rfid ecosystem creates a microcosm for the internet of things . the authors developed a suite of web-based , user-level tools and applications designed to empower users by facilitating their understanding , management , and control of personal rfid data and privacy settings . they deployed these applications in the rfid ecosystem and conducted a four-week user study to measure trends in adoption and utilization of the tools and applications as well as users ' qualitative reactions . story_separator_special_tag urban performance currently depends not only on a city 's endowment of hard infrastructure ( physical capital ) , but also , and increasingly so , on the availability and quality of knowledge communication and social infrastructure ( human and social capital ) . the latter form of capital is decisive for urban competitiveness . against this background , the concept of the smart city has recently been introduced as a strategic device to encompass modern urban production factors in a common framework and , in particular , to highlight the importance of information and communication technologies ( icts ) in the last 20 years for enhancing the competitive profile of a city . the present paper aims to shed light on the often elusive definition of the concept of the smart city . we provide a focused and operational definition of this construct and present consistent evidence on the geography of smart cities in the eu27 . our statistical and graphical analyses exploit in depth , for the first time to our knowledge , the most re . story_separator_special_tag a landslide early detection system consists of a sensing system and an alarm system that are not in the same location ( remote area ) . the sensing system is located in a vulnerable area , while the alarm system is placed in a residential area . the sensing system uses sensors found in plug and sense from libelium to read rainfall , soil moisture and soil movement and uses a 3g module . the alarm system uses arduino uno r3 as a controller and has a relay and sim800l module . the installation area of the system depends on the availability of an internet network . from the overall test results , the system successfully measured rainfall , soil moisture , and soil movement . the system also managed to activate an alarm when there was a potential landslide and successfully sent measurement data to the website . the activation of the alarm system has a delay of about ± 30 seconds . keywords : landslide early warning , plug and sense smart , 3g , arduino , waspmote . story_separator_special_tag the design and development of wearable biosensor systems for health monitoring has garnered lots of attention in the scientific community and the industry during the last years . mainly motivated by increasing healthcare costs and propelled by recent technological advances in miniature biosensing devices , smart textiles , microelectronics , and wireless communications , the continuous advance of wearable sensor-based systems will potentially transform the future of healthcare by enabling proactive personal health management and ubiquitous monitoring of a patient 's health condition . these systems can comprise various types of small physiological sensors , transmission modules and processing capabilities , and can thus facilitate low-cost wearable unobtrusive solutions for continuous all-day and any-place health , mental and activity status monitoring .
this paper attempts to comprehensively review the current research and development on wearable biosensor systems for health monitoring . a variety of system implementations are compared in an approach to identify the technological shortcomings of the current state-of-the-art in wearable biosensor solutions . an emphasis is given to multiparameter physiological sensing system designs , providing reliable vital signs measurements and incorporating real-time decision support for early detection of symptoms or context awareness . in order to evaluate the story_separator_special_tag providing accurate and opportune information on people 's activities and behaviors is one of the most important tasks in pervasive computing . innumerable applications can be visualized , for instance , in medical , security , entertainment , and tactical scenarios . despite human activity recognition ( har ) being an active field for more than a decade , there are still key aspects that , if addressed , would constitute a significant turn in the way people interact with mobile devices . this paper surveys the state of the art in har based on wearable sensors . a general architecture is first presented along with a description of the main components of any har system . we also propose a two-level taxonomy in accordance with the learning approach ( either supervised or semi-supervised ) and the response time ( either offline or online ) . then , the principal issues and challenges are discussed , as well as the main solutions to each one of them . twenty-eight systems are qualitatively evaluated in terms of recognition performance , energy consumption , obtrusiveness , and flexibility , among others . finally , we present some open problems and ideas story_separator_special_tag wearable computing pursues an interface ideal of a continuously worn , intelligent assistant that augments memory , intellect , creativity , communication , and physical senses and abilities . many challenges await wearable designers as they balance innovative interfaces , power requirements , network resources , and privacy concerns . this survey describes the possibilities offered by wearable systems and , in doing so , demonstrates attributes unique to this class of computing . story_separator_special_tag the invention relates to a rest device , and in particular to a rest device for a hydraulic design engineer . the technical problem to be solved by the invention is to provide the rest device for the hydraulic design engineer , wherein the rest device is simple in structure , high in working efficiency and easy to adjust in a working process . in order to solve the technical problem , the invention provides the rest device for the hydraulic design engineer . the rest device for the hydraulic design engineer comprises a bracket , a convex block , a first slide rail , a first slider , a first rack , a motor , a mobile frame , a first gear , a second rack , a seat , a first elastic piece , a supporting rod , an arc-shaped connecting rod , a massage hammer , a cattail leaf fan , a second elastic piece , a third rack , a second gear , a second slider , a second slide rail , a pull wire , a fixed pulley and a connecting rod ; the first slide rail is arranged at the bottom of the bracket story_separator_special_tag quasi-two-dimensional coalescence of two parallel cylindrical flux ropes and the development of three-dimensional merged structures are observed and studied in the reconnection scaling experiment [ furno et al. , rev . sci . instrum .
74 , 2324 ( 2003 ) ] . these experiments were conducted in a collisional regime with very strong guide magnetic field ( bguide ≫ breconnection ) , which can be adjusted independently of plasma density , current density , and temperature . during initial coalescence , a reconnection current sheet forms between the two flux ropes , and the direction of the current is opposite to the flux rope currents . the measured current sheet thickness is larger than the electron skin depth but smaller than the ion skin depth . furthermore , the thickness does not vary for three different values of the strong external guide field . it is shown that the geometry of the observed current sheet is consistent with the sweet-parker model using a parallel spitzer resistivity . the flux ropes eventually become kink unstable . story_separator_special_tag in this paper i argue that representations of homosexuality in modern arabic literature have tended to isolate it and contain its threat through a conceptual strai(gh)tjacket that i term epistemic closure . i begin by analyzing sa'd allah wannus 's play tuqus al-isharat wa-l-tahawwulat as an essentialist paradigm of closure , where a language of interiority and essence identifies male homosexuality with passivity and femininity , subordinated a priori to a sexually and socially dominant masculinity . then , i examine ala al-aswani 's novel imarat ya'qubyan as a constructionist example of the same closure , in which homosexuality is explained through a narrative of abnormal development that circumscribes its diffuse potential . finally , i read huda barakat 's sayyidi wa-habibi as a queer novel that links homosexuality to the continuum of male homosocial desire , thereby disrupting the normative distribution of center and margin and suggesting a way out of the epistemic closure imposed on homosexuality . story_separator_special_tag how does expansion in the high-tech sector influence the broader economy of a region ? we demonstrate that an infusion of venture capital in a region appears associated with : ( i ) a decline in entrepreneurship , employment , and average incomes in other industries in the tradable sector ; ( ii ) an increase in entrepreneurship and employment in the non-tradable sector ; and ( iii ) an increase in income inequality in the non-tradable sector . an expansion in the high-tech sector therefore appears to lead to a less diverse tradable sector and to increasing inequality in the region . story_separator_special_tag j sports sci & med is published using the open access model . all content is available free of charge without restrictions from the journal 's website at : http://www.jssm.org story_separator_special_tag basis sets are some of the most important input data for computational models in the chemistry , materials , biology , and other science domains that utilize computational quantum mechanics methods . providing a shared , web-accessible environment where researchers can not only download basis sets in their required format but browse the data , contribute new basis sets , and ultimately curate and manage the data as a community will facilitate growth of this resource and encourage sharing both data and knowledge . we describe the basis set exchange ( bse ) , a web portal that provides advanced browsing and download capabilities , facilities for contributing basis set data , and an environment that incorporates tools to foster development and interaction of communities .
the bse leverages and enables continued development of the basis set library originally assembled at the environmental molecular sciences laboratory . story_separator_special_tag this paper is a fast abstract presented in conjunction with nca 06 . it is not a part of the conference proceedings and has not been accepted based on peer review . in this paper , we introduce a novel 3d emergency service that aims to guide people to safe places when emergencies happen . in normal times , the network is responsible for monitoring the environment . when emergency events are detected , the network can adaptively modify its topology to ensure transportation reliability , quickly identify hazardous regions that should be avoided , and find safe navigation paths that can lead people to exits . in particular , the structures of 3d buildings are taken into account in our design . prototyping results will be demonstrated , which show that our protocols can react to emergencies quickly at low message cost and can find safe paths to exits . recently , wireless sensor networks have been widely discussed in many applications , such as habitat monitoring , object tracking , and so on . in this paper , we design a wireless sensor network in a 3d indoor environment for emergency guiding service . story_separator_special_tag service matchmaking among heterogeneous software agents in the internet is usually done dynamically and must be efficient . there is an obvious trade-off between the quality and efficiency of matchmaking on the internet . we define a language called larks for agent advertisements and requests , and present a flexible and efficient matchmaking process that uses larks . the larks matchmaking process performs both syntactic and semantic matching , and in addition allows the specification of concepts ( local ontologies ) via itl , a concept language . the matching process uses five different filters : context matching , profile comparison , similarity matching , signature matching and constraint matching . different degrees of partial matching can result from utilizing different combinations of these filters . we briefly report on our implementation of larks and the matchmaking process in java . fielded applications of matchmaking using larks in several application domains for systems of information agents are ongoing efforts . story_separator_special_tag bipolar ecg signals obtained from sensors in a band on the left upper-arm after signal processing can provide recordings of sufficient quality for long-term ecg monitoring . we present a cable-free , wearable sensor system ( wamecg1 ) for bipolar arm-ecg recording and wireless data transmission over a wi-fi link . the system 's functional blocks were integrated into an ergonomically designed armband ecg device . a retrospective pilot analysis of the wastcard arm-ecg mapping database from our previous work was performed to obtain the optimal axis rotation of the bipolar electrode pair with respect to the frontal ecg plane and the arm axilla point . the optimal signal was found to be at -30° axis rotation . then , signal quality of the recorded far-field bipolar arm-ecg was validated in a pilot trial with 10 volunteer subjects at rest using the prototype device . the overall r peak detection accuracy was 99.67 % . without using any signal enhancement algorithm , the average signal-to-noise-ratio ( snr ) value was 16.73 .
these assessment results validated the performance of the wearable arm-band prototype device . story_separator_special_tag this chapter provides an exploration of the reasons why a canadian federal court refused to compel five internet service providers to disclose the identities of twenty-nine isp subscribers alleged to have been engaged in p2p file-sharing . the authors argue that there are important lessons to be learned from the decision , particularly in the area of online privacy , including the possibility that the decision may lead to powerful though unintended consequences . at the intersection of digital copyright enforcement and privacy , the court 's decision could have the ironic effect of encouraging more powerful private-sector surveillance of our online activities , which would likely result in a technological backlash by some to ensure that internet users have even more impenetrable anonymous places to roam . consequently , the authors encourage the court to further develop its analysis of how , when and why the compelled disclosure of identity by third party intermediaries should be ordered by including as an element in the analysis a broader-based public interest in privacy . story_separator_special_tag the old norse language , dialects of which were spoken across scandinavia in the middle ages , has no equivalent of the modern english umbrella term humour . ( of course , neither do many languages , dead or living : old norse acts as a case study here for a methodology that could be applied to multiple other instances . ) one way , then , to approach a culture 's sense of what for modern english speakers falls under humour , is to examine its own vocabulary of terms relating to phenomena like amusement , entertainment , jokes , and so on , and the contexts in which they appear . this allows for a culturally specific mapping of what forms of humour were prevalent , appropriate , prized , or otherwise . the chapter offers contextual discussion of old norse terms like gaman ( amusement ) , skemmtun ( entertainment ) , leikr ( game , play ) , hlægi ( ridicule ) , glens ( jesting ) , háð ( mockery ) , and others . in doing so it aims to highlight the problems and complexities of translation and interpretation from a linguistic and cultural story_separator_special_tag personal portable information technology is advancing at a breathtaking speed . google has recently introduced glass , a device that is worn like conventional glasses , but that combines a computerized central processing unit , touchpad , display screen , high-definition camera , microphone , bone-conduction transducer , and wireless connectivity . we have obtained a glass device through google 's explorer program and have tested its applicability in our daily pediatric surgical practice and in relevant experimental settings . methods : glass was worn daily for 4 consecutive weeks in a university children 's hospital . a daily log was kept , and activities with a potential applicability were identified . performance of glass was evaluated for such activities . in-vitro experiments were conducted where further testing was indicated . results : wearing glass throughout the day for the study interval was well tolerated . colleagues , staff , families and patients overwhelmingly had a positive response to glass .
useful applications for glass were hands-free photo/videodocumentation , making hands-free telephone calls , looking up billing codes , and internet searches for unfamiliar medical terms or syndromes . drawbacks encountered with the current equipment were low battery endurance , story_separator_special_tag codenamed zeppelin , amd 's next-generation system-on-a-chip ( soc ) was designed for use in multiple products and packages in multiple markets , including server , mainstream pc desktop , and high-end desktop . utilizing globalfoundries ' 14nm lpp finfet process technology , the zeppelin soc has over 4.8b transistors . it contains high-performance amd x86 cores codenamed zen [ 1 ] [ 2 ] , caches , memory controllers , pcie® , sata , and other io controllers , and integrated x86 southbridge chipset capabilities . all these functions are connected on the soc and between multichip packages and multi-socket systems by amd infinity fabric . story_separator_special_tag we shall not spend time on the study of the various post-kantian authors , such as balmes or lotze , who , while they formulated the principle of the causal theory of time quite precisely , did not enrich it with any new ideas . story_separator_special_tag a new species of the genus helius , h. anetae kania & kopec , is described from baltic amber . the distinctive features of this new species are the ratio between the lengths of rostrum , palpus , antenna and head , as well as the character of wing venation , especially the position of the vein rs situated in front of half of the length of the wing , and the cross-vein m-cu situated behind half of the length of d-cell from the bifurcation of mb . a comparison with its closest fossil relatives is provided . morphological patterns and aspects of the evolution of the genus helius are discussed . story_separator_special_tag this paper presents an intelligent health monitoring system for post-management of stroke . a fitbit sensor was used to take readings from four stroke patients at the federal teaching hospital , gombe state , over a period of four weeks ; the vital readings recorded were heartbeat rate , sleeping rate , and the number of steps taken . the developed appfabric , web service and appfeedback synchronized the operation of the sensor , the user mobile device and the medical diagnostic platform . the readings taken by the sensor were made available to the medical experts and the monitoring team using web service . the evaluation of the system in terms of efficiency and reliability using a t-test gave ( 82.3 , 85.9 ) and ( 1.729133 , 2.093024 ) , respectively . the results show that the developed system performed better than the existing manual method for monitoring stroke patients . story_separator_special_tag background : a correct measurement of the qt interval in the out-of-hospital setting is important whenever the long qt syndrome ( lqts ) is suspected or a therapy might lead to drug-induced lqts ( dilqts ) , because qt interval monitoring in the initial days of therapy could alert to dangerous qt prolongation . we explored whether automated qtc measurements ( bgm ) by bodyguardian ( bg ) , a wearable remote monitoring system , are sufficiently reliable compared to our own manual measurements ( mm ) performed on the same beats during 12-lead holter recordings in lqts patients ( pts ) and in healthy controls . methods : we performed 351 measurements in 20 lqts pts and 16 controls . mm and bgm were compared by a bland-altman plot ( bap ) .
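the bland-altman comparison used in the qt-monitoring study above is straightforward to reproduce : the plot is the pairwise differences against the pairwise means , with the bias and the 95 % limits of agreement drawn as horizontal lines . a minimal numpy sketch of those statistics :

import numpy as np

def bland_altman(a, b):
    """bias and 95% limits of agreement between two paired measurement series.

    a, b : paired measurements of the same quantity, e.g. manual vs automated
           qtc in ms. returns bias, lower/upper limits of agreement, and the
           (mean, diff) points that make up the bland-altman plot.
    """
    a, b = np.asarray(a, float), np.asarray(b, float)
    diff, mean = a - b, (a + b) / 2.0
    bias = diff.mean()
    sd = diff.std(ddof=1)                 # sample standard deviation
    return bias, bias - 1.96 * sd, bias + 1.96 * sd, mean, diff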
story_separator_special_tag application of wireless technologies in the smart home is dealt with by pointing out advantages and limitations of available approaches for the solution of heterogeneous and coexisting problems related to the distributed monitoring of the home and the inhabitants . some hot challenges facing the exploitation of noninvasive wireless devices for user behavior monitoring are then addressed and the application fields of smart power management and elderly people monitoring are chosen as representative cases where the estimation of user activities improves the potential of location-aware services in the smart home . the problem of user localization is considered with great care to minimize the invasiveness of the monitoring system . wireless architectures are reviewed and discussed as flexible and transparent tools toward the paradigm of a totally automatic/autonomic environment . with respect to available state-of-the-art solutions , our proposed architecture is based also on existing wireless devices and exploits , in an opportunistic way , the characteristics of wireless signals to estimate the presence , the movements , and the behaviors of inhabitants , reducing the system complexity and costs . selected and representative examples from real implementations are presented to give some insight on state-of-the-art solutions also envisaging possible story_separator_special_tag a new generation of a ubiquitous health smart home is being developed to support the elderly and/or people with chronic diseases in their own home . the goal of the u-health smart home is to help the elderly to continue to live a more independent life as long as possible in their own home while being monitored and assisted in an unobtrusive manner . this concept of a ubiquitous health care ( u-health ) smart home for the elderly has been identified by governments and medical institutions as an important part of the economical , technological , and socially acceptable solution to keep the health welfare system viable for future generations . story_separator_special_tag today , organizations use ieee802.15.4 and zigbee to effectively deliver solutions for a variety of areas including consumer electronic device control , energy management and efficiency , home and commercial building automation , as well as industrial plant management . the smart home energy network has gained widespread attention due to its flexible integration into everyday life . this next generation green home system transparently unifies various home appliances , smart sensors and wireless communication technologies . the green home energy network gradually forms a complex system to process various tasks . developing this trend , we suggest a new smart home energy management system ( shems ) based on ieee802.15.4 and zigbee ( we call it a `` zigbee sensor network '' ) . the proposed smart home energy management system divides and assigns various home network tasks to appropriate components . it can integrate diversified physical sensing information and control various consumer home devices , with the support of active sensor networks having both sensor and actuator components .
we develop a new routing protocol dmpr ( disjoint multi path based routing ) to improve the performance of our zigbee sensor networks . this paper introduces the proposed story_separator_special_tag a main goal of hot water distribution research is to improve the system 's efficiency , i.e. , to fulfill hot water requirements while minimizing energy and water losses . central domestic hot water ( cdhw ) systems represent an important part of current installations worldwide , e.g. , hotels , hospitals , sports centers , social facilities , and multifamily residential or apartment buildings . the optimization of such systems calls for forecasting capabilities and context-aware enhancements based on patterns of use . thus , the level of uncertainty is reduced , and systems are not forced to operate using blind/oversized/generic assumptions . this paper presents a novel control strategy based on habit profiles for the management of a cdhw system . a simulated environment is utilized to compare the introduced strategy with habitual performances . simulations are supported by real databases concerning users ' behavioral patterns . results are promising and point to profile-based strategies as a suitable approach for optimized water and energy management in future buildings . story_separator_special_tag the current smart home is a ubiquitous computing environment consisting of multiple autonomous spaces , and its advantage is that a service interacting with home users can be set with different configurations in space , hardware , software , and quality . besides being technologically smart , a smart home should also retain its home-like nature while serving its users . in this paper , we first analyze the relationship among services , spaces , and users , and then we propose a framework as well as a corresponding algorithm to model their interaction relationship . later , we also realize the human-system interaction framework to implement a smart home system and develop pervasive applications to demonstrate how to utilize our framework to fulfill the human-centric interaction requirement of a smart home . finally , our preliminary evaluations show that our proposed work can enhance the performance of the human-system interaction in a smart home environment . story_separator_special_tag the casas architecture facilitates the development and implementation of future smart home technologies by offering an easy-to-install lightweight design that provides smart home capabilities out of the box with no customization or training . story_separator_special_tag there exists a long tradition of orthography guides or style manuals for slovene dedicated to `` good writing '' ( slo . pravopis , ger . rechtschreibung ) , with the first one published in 1899 and the most recent in 2001 . the new web portal developed within the communication in slovene project is taking the concept originating from the world of print one step further into the digital environment , with a question-answering system which analyses the question entered into a query window in natural language and aims to provide a three-layered answer , from a more condensed and graphical one using data from extensive corpora , lexicons , dictionaries and other online resources , to a more general user-friendly description of the problem , together with links to digitized modern and historical normative resources related to the identified language problem .
the paper describes a demo version of the portal with demonstration data for 15 language problems . story_separator_special_tag network devices for the home such as remotely controllable locks , lights , thermostats , cameras , and motion sensors are now readily available and inexpensive . in theory , this enables scenarios like remotely monitoring cameras from a smartphone or customizing climate control based on occupancy patterns . however , in practice today , such smarthome scenarios are limited to expert hobbyists and the rich because of the high overhead of managing and extending current technology . we present homeos , a platform that bridges this gap by presenting users and developers with a pc-like abstraction for technology in the home . it presents network devices as peripherals with abstract interfaces , enables cross-device tasks via applications written against these interfaces , and gives users a management interface designed for the home environment . homeos already has tens of applications and supports a wide range of devices . it has been running in 12 real homes for 4-8 months , and 42 students have built new applications and added support for additional devices independent of our efforts . story_separator_special_tag researchers who develop new home technologies using connected devices often want to conduct large-scale field studies in homes to evaluate their technology , but conducting such studies today is extremely challenging . inspired by the success of planetlab , which enabled development and evaluation of global network services , we are developing a shared infrastructure for home environments , called lab of things . our goal is to substantially lower the barrier to developing and evaluating new technologies for the home environment . story_separator_special_tag ubiquitin-positive , tau- and alpha-synuclein-negative inclusions are hallmarks of frontotemporal lobar degeneration with ubiquitin-positive inclusions and amyotrophic lateral sclerosis . although the identity of the ubiquitinated protein specific to either disorder was unknown , we showed that tdp-43 is the major disease protein in both disorders . pathologic tdp-43 was hyper-phosphorylated , ubiquitinated , and cleaved to generate c-terminal fragments and was recovered only from affected central nervous system regions , including hippocampus , neocortex , and spinal cord . tdp-43 represents the common pathologic substrate linking these neurodegenerative disorders . story_separator_special_tag crowdsourcing techniques are frequently used across science to supplement traditional means of data collection . although atmospheric science has so far been slow to harness the technology , developments have now reached the point where the benefits of the approaches simply can not be ignored : crowdsourcing has potentially far-reaching consequences for the way in which measurements are collected and used in the discipline . to illustrate this point , this paper uses air temperature data from the prolific , low-cost , netatmo weather station to quantify the urban heat island of london over the summer of 2015 . the results are broadly comparable with previous studies , and indeed standard observations ( albeit with a warm bias , a likely consequence of non-standard site exposure ) , showing a range of magnitudes of between 1 and 6 °c across the city depending on atmospheric stability .
however , not all the results can be easily explained by physical processes ; these therefore highlight quality issues with crowdsourced data that need to be resolved . this paper aims to kickstart a step-change in the use of crowdsourcing in urban meteorology by encouraging atmospheric scientists to more positively engage with the new generation story_separator_special_tag this paper focuses on the development of an energy-efficient street lighting remote management system making use of low-rate wireless personal area networks and the digital addressable lighting interface ( dali ) protocol to get the bidirectional communication necessary for checking lamp parameters like lamp status , current level , etc . because two-thirds of the installed street lighting systems use old and inefficient technologies , there exists a huge potential to renew street lighting and save energy consumption . the proposed system uses the dali protocol in street lighting , increasing the maximum number of ballasts that can be controlled with dali ( originally it can only drive up to 64 ballasts ) . some aspects of the wireless communication system and experimental measurements are presented and discussed . story_separator_special_tag home automation is one of the biggest developments that technology has achieved in recent years . nowadays there are many products that can be controlled automatically , by remote control , or by voice commands . in home control automation , we can control home appliances using a smart phone . the main focus of this automation research is a system that can remotely control the on / off status of the room lights and the house door . in this study , we have designed and built a home control system with voice commands via a smartphone . the system consists of mit app inventor , firebase , a wemos d1 microcontroller equipped with an embedded micro-controller and wi-fi onboard shield , and a 4-channel relay used for three led lights and one door lock for the house . the test results show that the system can remotely control the on / off status of the room lights and the house door well . this home control system is very useful in everyday life because it reduces human workload , saves electricity , and reduces worries about home security for people who work . story_separator_special_tag recent high profile developments of autonomous learning thermostats by companies such as nest labs and honeywell have brought to the fore the possibility of ever greater numbers of intelligent devices permeating our homes and working environments into the future . however , the specific learning approaches and methodologies utilised by these devices have never been made public . in fact little information is known as to the specifics of how these devices operate and learn about their environments or the users who use them . this paper proposes a suitable learning architecture for such an intelligent thermostat in the hope that it will benefit further investigation by the research community . our architecture comprises a number of different learning methods each of which contributes to create a complete autonomous thermostat capable of controlling an hvac system . a novel state-action space formalism is proposed to enable a reinforcement learning agent to successfully control the hvac system by optimising both occupant comfort and energy costs . our results show that the learning thermostat can achieve cost savings of 10 % over a programmable thermostat , whilst maintaining high occupant comfort standards .
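the comfort-versus-cost trade-off described above maps naturally onto a reinforcement-learning reward ; below is a minimal tabular q-learning sketch under assumed toy dynamics ( one-degree temperature bands as states , a heater on/off action , and an illustrative price and setpoint ) . it illustrates the general formalism only , not the paper 's multi-method architecture .

```python
# a minimal tabular q-learning sketch for hvac control; the thermal
# model, price, setpoint and hyperparameters are illustrative assumptions.
import random
from collections import defaultdict

ACTIONS = [0, 1]                 # 0 = heater off, 1 = heater on
ALPHA, GAMMA, EPS = 0.1, 0.95, 0.1
SETPOINT, PRICE = 21.0, 0.5

Q = defaultdict(float)

def reward(temp, action):
    comfort_penalty = abs(temp - SETPOINT)   # discomfort grows with deviation
    energy_cost = PRICE * action             # pay only when heating
    return -(comfort_penalty + energy_cost)

def step(temp, action):
    # toy thermal model: heating warms the room, otherwise it cools
    drift = 0.8 if action else -0.4
    return temp + drift + random.uniform(-0.1, 0.1)

def discretize(temp):
    return int(round(temp))                  # 1-degree temperature bands

temp = 18.0
for _ in range(10_000):
    s = discretize(temp)
    # epsilon-greedy action selection
    if random.random() < EPS:
        a = random.choice(ACTIONS)
    else:
        a = max(ACTIONS, key=lambda x: Q[(s, x)])
    temp2 = step(temp, a)
    r = reward(temp2, a)
    s2 = discretize(temp2)
    best_next = max(Q[(s2, x)] for x in ACTIONS)
    Q[(s, a)] += ALPHA * (r + GAMMA * best_next - Q[(s, a)])  # q-learning update
    temp = temp2
```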
story_separator_special_tag title : diagnostic and statistical manual of mental disorders ( dsm-5 ) . author : american psychiatric association . editors of croatian edition : vlado jukic , goran arbanas . isbn : 978-953-191-787-2 . publisher : naklada slap , jastrebarsko , croatia . number of pages : 936 . diagnostic and statistical manual of mental disorders is a national classification , but since its third edition it became a worldwide used manual . [ 1 ] it has been published by the american psychiatric association and two years ago the fifth edition was released . [ 2 ] croatian was among the first languages this book was translated to . [ 3 ] dsm-5 was translated by psychiatrists and psychologists , mainly from the university psychiatric hospital vrapce , and published by the naklada slap publisher . dsm has always been more publicly debated than the other main classification , the international classification of diseases ( icd ) . [ 4 ] the same happened with this fifth edition . even before it was released , numerous individuals , organizations , groups and associations were publicly speaking about the classification , new diagnostic entities and changing criteria . [ 5 ] although there is a tendency of authors of both story_separator_special_tag this paper proposes a smart water bottle system based on the internet of things ( iot ) using the fuzzy logic method . this system takes input data from water level and temperature sensors . various water levels and room temperatures are used to calculate the level of water consumption . the fuzzy inference system provides three classes , i.e. low , medium and high , which indicate the user 's drinking requirements . fuzzy results are selected according to the input given by the sensors for predicting water consumption ; after generating the output , it is directly sent to the server to be converted into a drink reminder notification . by using the fuzzy logic method , the prediction of the drinking water consumption needed for daily activity is quite accurate , and it can also be observed that the fuzzy prediction model produces the appropriate output . this model is tested with a 3-hour notification period for one day and is able to predict water consumption . the results show the effectiveness of the smart water bottle system in predicting water consumption .
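the three-class fuzzy inference described above can be sketched with triangular membership functions ; the membership shapes , thresholds and rule base below are illustrative assumptions , not the paper 's calibrated system .

```python
# a minimal fuzzy-inference sketch: fraction of daily water consumed and
# room temperature map to a low/medium/high drinking-need class.
def tri(x, a, b, c):
    # triangular membership peaking at b, zero outside (a, c)
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def drinking_need(consumed_pct, temp_c):
    # fuzzify the two inputs (assumed shapes)
    drank_little = tri(consumed_pct, -1, 0, 60)
    drank_plenty = tri(consumed_pct, 40, 100, 101)
    cool = tri(temp_c, 5, 15, 25)
    warm = tri(temp_c, 18, 25, 32)
    hot = tri(temp_c, 28, 38, 48)
    # mamdani-style rules; firing strength = min of the antecedents
    fire = {
        "high": min(drank_little, hot),
        "medium": max(min(drank_little, warm), min(drank_plenty, hot)),
        "low": max(min(drank_plenty, cool), min(drank_plenty, warm)),
    }
    # remaining corner: little drunk in a cool room -> medium need
    fire["medium"] = max(fire["medium"], min(drank_little, cool))
    return max(fire, key=fire.get), fire

label, strengths = drinking_need(consumed_pct=20, temp_c=33)
print(label, strengths)   # prints "high" for a hot room with little drunk
```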
story_separator_special_tag the utility model discloses an led ( light emitting diode ) small night lamp mainly composed of an upper cover , a lower cover and a light-transmittable shade . the upper cover is provided with a light sensitive cap , a switch cap and a control module . the control module comprises a photoresistor . the lower cover is provided with a pressing piece , a plug , a battery cover , an led light source module , a support and a rechargeable battery . the light-transmittable shade is made of a high-diffusion material ; therefore , the shade is environmentally friendly and reduces glare so that the light becomes softer and discomfort to human eyes is avoided . the small night lamp can be automatically turned on and off according to the brightness of the ambient environment , can further be used as an emergency lamp in case of power failure , and can optionally be plugged in , giving good flexibility . story_separator_special_tag based on the digital city , smart city is widely used in daily livelihood , environmental protection , public security , city services and other fields . in this paper , we mainly focus on recent research and the concept of smart city , summarizing the relationship between `` smart city '' and `` digital city '' , putting forward the main content of application systems as well as the importance and difficulty of the construction of smart city , and making a brief statement of the influence of developing smart city in china . story_separator_special_tag information and communication technologies ( ict ) represent a fundamental element in the growth and performance of smart grids . a sophisticated , reliable and fast communication infrastructure is , in fact , necessary for the connection among the huge amount of distributed elements , such as generators , substations , energy storage systems and users , enabling a real time exchange of data and information necessary for the management of the system and for ensuring improvements in terms of efficiency , reliability , flexibility and investment return for all those involved in a smart grid : producers , operators and customers . this paper overviews the issues related to the smart grid architecture from the perspective of potential applications and the communications requirements needed for ensuring performance , flexible operation , reliability and economics . story_separator_special_tag for 100 years , there has been no change in the basic structure of the electrical power grid . experiences have shown that the hierarchical , centrally controlled grid of the 20th century is ill-suited to the needs of the 21st century . to address the challenges of the existing power grid , the new concept of smart grid has emerged . the smart grid can be considered as a modern electric power grid infrastructure for enhanced efficiency and reliability through automated control , high-power converters , modern communications infrastructure , sensing and metering technologies , and modern energy management techniques based on the optimization of demand , energy and network availability , and so on . while current power systems are based on a solid information and communication infrastructure , the new smart grid needs a different and much more complex one , as its dimension is much larger . this paper addresses critical issues on smart grid technologies primarily in terms of information and communication technology ( ict ) issues and opportunities . the main objective of this paper is to provide a contemporary look at the current state of the art in smart grid communications as well story_separator_special_tag according to one embodiment of the invention , an ignition-key radio technology is used in a vehicle in addition to wlan-based communication in order to communicate with other vehicles . here , only selected data that have changed significantly are sent via the ignition-key radio technology . the remaining data are not sent , or are saved for wlan communication . story_separator_special_tag in 1997 the swedish parliament decided , in accordance with the so-called vision zero , that one official goal for the national traffic safety effort is that the number of traffic fatalities in the year 2007 must not exceed 270 . in order to monitor efforts toward this hard-won goal , it is of course of utmost importance that official statistics on traffic deaths are reliable .
in a meticulous analysis of all 580 officially registered traffic deaths in sweden in 1999 , we found that 490 were true accidental deaths , while 18 were suicides , 12 were deaths due to indeterminate causes , 59 were natural deaths and 1 case was not possible to evaluate due to missing data . thus , only 84 % of the officially registered `` accidental traffic deaths '' were bona fide accidents . in order to enhance the reliability of the official statistics , we suggest that regulations concerning police investigation and medicolegal autopsy of all unnatural deaths be adhered to ; that all deaths reported to the swedish national road administration be checked in the database of autopsied cases in the national board of forensic medicine in order to exclude natural deaths ; and that the story_separator_special_tag ( 1 ) background : public sidewalk gis data are essential for smart city development . we developed an automated street-level sidewalk detection method by image-processing google street view data . ( 2 ) methods : street view images were processed to produce graph-based segmentations . image segment regions were manually labeled and a random forest classifier was established . we used multiple aggregation steps to determine street-level sidewalk presence . ( 3 ) results : in total , 2438 gsv street images and 78,255 segmented image regions were examined . the image-level sidewalk classifier had an 87 % accuracy rate . the street-level sidewalk classifier performed with nearly 95 % accuracy in most streets in the study area . ( 4 ) conclusions : highly accurate street-level sidewalk gis data can be successfully developed using street view images .
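the region-then-aggregate pipeline of steps ( 2 ) - ( 3 ) can be sketched as follows ; the features , labels and vote thresholds are synthetic stand-ins for the paper 's street view segmentation features .

```python
# a minimal sketch of the two-stage idea: classify labeled image regions
# with a random forest, then aggregate region votes to image-level and
# street-level sidewalk decisions. data here are synthetic placeholders.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 16))                  # per-region features (e.g. color/texture stats)
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)    # 1 = sidewalk region (synthetic label)

clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

def image_has_sidewalk(region_features, min_votes=3):
    # an image is positive if enough of its segmented regions are positive
    votes = clf.predict(region_features).sum()
    return votes >= min_votes

def street_has_sidewalk(images, min_frac=0.5):
    # a street is positive if a majority of its images are positive
    flags = [image_has_sidewalk(img) for img in images]
    return float(np.mean(flags)) >= min_frac

# toy usage: two "images", each holding eight region feature vectors
print(street_has_sidewalk([X[:8], X[8:16]]))
```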
story_separator_special_tag the term vuca represents a four-letter acronym covering the definitions and contents of volatility , uncertainty , complexity , and ambiguity . these are the features of a modern business environment that managers have to face and deal with . the area of research is designing and developing managers ' training according to the qualification requirements revealed in both the educational and professional standards , according to social and business demands . nowadays , modern , effective managers are known as being highly stress-resistant , flexible , creative , and having available hard and soft skills for managing risks in unstable situations . the smart business environment and a university 's activities are based on the ideas of digitalization , increasing the social demand for employees who are ready to process large amounts of information , perform multi-dimensional analysis of data , and identify intelligent management solutions quickly in situations of uncertainty and risk . the goal of the research is to review some relevant innovative means and organizational methods of training students to be smart and effective managers in conditions of volatility , uncertainty , complexity , and ambiguity . modern means and methods of personnel training in the digitalization age story_separator_special_tag an efficacious implementation of an internet of things ( iot ) based omnipresent sensing and monitoring system for domestic as well as non-domestic environments is presented . the structure of the sensing-monitoring system is based on the combination of ubiquitous sensing units , a controlling system for data acquisition , manipulation and aggregation , and an internet-based platform for efficient monitoring . the proposed model consists of sensing units which perceive environmental values ( such as humidity , temperature , heat index , gas , etc . ) and the voltage and current parameters of various household appliances for monitoring the amount of power consumed . these readings are further calibrated by the controlling system to yield aggregated data , which is finally collected on the internet-based platform . the framework of connecting the smart sensor to the internet is achieved by an iot platform called xively , which provides channel utility to deploy the prototype into an integrated product . story_separator_special_tag the present invention discloses a large-scale industrialized multi-benefit resource recovery system for refuse processing , including a large-scale recycling tank , a control and monitoring system , a grit chamber , an acidification tank , a large fermentation tank for fertilizer and biogas production , an overflow tank , a marsh fluid purification system , a biogas collection pool and a slag pool . in the present invention , organic waste is collected by the collection tank and the garbage collection pool , covering human and animal fecal matter , organic sewage , garbage , and straw , corn stalks , sugar cane leaves and other agricultural waste ; its recycling can solve waste handling problems , turn waste material into a resource , and multiply economic efficiency . the present invention is also provided with a solar warming system to provide heat for the fermentation tank ; by effectively utilizing the solar system , it solves the problem of low fermentation efficiency caused by low internal fermentation tank temperature , mainly in winter . the present invention ensures efficient operation of each step through the intelligent automatic control provided by the control and monitoring system . story_separator_special_tag this paper presents a lorawan based smart street lighting control system which allows night-time street lights to be controlled autonomously with minimum energy consumption . power saving is achieved by using motion detection and an illuminance sensor to measure traffic congestion and illumination level . every street light has been equipped with lorawan communication modules that send and receive data to and from the main server to control the led street light , whose illuminance level is calculated by dialux . when autonomously controlling the street lighting system , the server decides which street light should be dimmed according to the motion sensors . if no vehicle passes the area for an adjusted amount of time , the street light will dim . if any vehicle passes the area , the street lights closest to that area will raise the illumination level back to normal . story_separator_special_tag studying the social dynamics of a city on a large scale has traditionally been a challenging endeavor , often requiring long hours of observation and interviews , usually resulting in only a partial depiction of reality . to address this difficulty , we introduce a clustering model and research methodology for studying the structure and composition of a city on a large scale based on the social media its residents generate . we apply this new methodology to data from approximately 18 million check-ins collected from users of a location-based online social network .
unlike the boundaries of traditional municipal organizational units such as neighborhoods , which do not always reflect the character of life in these areas , our clusters , which we call livehoods , are representations of the dynamic areas that comprise the city . we take a qualitative approach to validating these clusters , interviewing 27 residents of pittsburgh , pa , to see how their perceptions of the city project onto our findings there . our results provide strong support for the discovered clusters , showing how livehoods reveal the distinctly characterized areas of the city and the forces that shape them . story_separator_special_tag abstract the adoption of black , asian and minority ethnic children has long been deeply controversial in the uk , with tensions over racial/ethnic matching and transracial adoption into white families respectively . media organizations have been key participants in these struggles , as commentators but also campaigners , yet there has been negligible research into their framing of the issues . this article explores press coverage in five national newspapers ( plus sunday sister papers ) of the coalition government 's adoption reform programme . in particular , it focuses on patterns of deracialization and racialization of debates as they relate to identities , family dynamics and wider social currents with respect to race and ethnicity . while in some senses adoption represents a complex and atypical case study , coverage nonetheless reveals a powerful combination , simultaneously downplaying the significance of race , while amplifying the threat posed by ethnic matching . findings are discussed in relation to the concept of moral panic . story_separator_special_tag this research aimed to ( 1 ) study patterns from ancient fabrics , ( 2 ) study the design of women 's apparel by digital printing techniques , and ( 3 ) study satisfaction with the women 's apparel in 3 designs , including printed fabric with a four-petaled , diamond design in a trellis pattern and printed fabric with a design in a puttern pattern . the population was 100 assessors within the age range 35-40 years in phra nakhon , bangkok . the research instrument was a questionnaire . data were statistically analyzed for percentage , mean and standard deviation . results showed that most respondents were aged 31-35 years , single , and earning from 15,000-25,000 baht . most held a bachelor 's degree , worked in private business , had used women 's printed-fabric apparel , chose to buy/use their own style , purchased according to opportunities for use , and were motivated by personal preferences and shape selection . the satisfaction of the respondents towards the women 's apparel products had a high average level for all 3 products , according to the following average values : shawls with an average of 4.10 , shoulder story_separator_special_tag free-floating sensor packages that take local measurements and track flows in water systems , known as drifters , are a standard tool in oceanography , but are new to estuarial and riverine studies . a system based on drifters for making estimates on a hydrodynamic system requires the drifters themselves , a communication network , and a method for integrating the gathered data into an estimate of the state of the hydrodynamics . this paper presents a complete drifter system and documents a pilot experiment in a controlled channel . the utility of the system for making measurements in unknown environments is highlighted by a combined parameter estimation and data assimilation algorithm using an extended kalman filter . the performance of the system is illustrated with field data collected at the hydraulic engineering research unit , stillwater , ok .
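the combined parameter estimation and data assimilation idea can be illustrated with a toy extended kalman filter that jointly estimates a drifter 's 1-d position and an unknown advection velocity from noisy position fixes ; the dynamics , noise levels and values below are assumptions for illustration , not the paper 's hydrodynamic model .

```python
# a minimal extended-kalman-filter sketch with an augmented state [x, v]:
# position x is advected by an unknown velocity v; gps-like fixes observe x.
import numpy as np

dt = 1.0
Qn = np.diag([0.01, 1e-4])       # process noise (position, velocity)
Rn = np.array([[0.5]])           # observation noise
H = np.array([[1.0, 0.0]])       # we observe position only

def f(s):                        # dynamics (linear here, ekf-ready shape)
    x, v = s
    return np.array([x + v * dt, v])

def F(s):                        # jacobian of f
    return np.array([[1.0, dt], [0.0, 1.0]])

s = np.array([0.0, 0.0])         # initial guess: velocity unknown
P = np.diag([1.0, 1.0])

true_x, true_v = 0.0, 0.3
for _ in range(50):
    true_x += true_v * dt
    z = true_x + np.random.normal(0, np.sqrt(Rn[0, 0]))
    # predict
    Fk = F(s)
    s = f(s)
    P = Fk @ P @ Fk.T + Qn
    # update with the noisy position fix
    y = np.array([z]) - H @ s
    S = H @ P @ H.T + Rn
    K = P @ H.T @ np.linalg.inv(S)
    s = s + K @ y
    P = (np.eye(2) - K @ H) @ P

print(f"estimated velocity ~ {s[1]:.2f} (true {true_v})")
```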
story_separator_special_tag population growth , energy demand , and climate change are placing an unprecedented strain on water resources , requiring a fundamental shift in how these resources are managed . more precisely , resource management programs must embrace a new paradigm , one with realtime environmental monitoring at its core . the intelligent river© is an environmental and hydrological observation system engineered to support research and management of water resources at watershed scales . the system architecture comprises three primary tiers : ( i ) a field-deployed sensor fabric and uplink infrastructure , ( ii ) real-time data streaming middleware , and ( iii ) repository , presentation , and web services . sensor web enablement ( swe ) adoption decisions revolve around balancing efficiency concerns and implementation time with capability and standards compliance . in this context , our team has examined , applied , and evaluated swe technologies to enable data archival , access , and discovery . we have found varying levels of success with swe adoption across the three tiers . at the fabric layer , platform configurability and ease-of-integration have been important engineering drivers . sensorml arose as a natural candidate solution ; however story_separator_special_tag the composition of an animal group can impact greatly on the survival and success of its individual members . much recent work has concentrated on behavioral variation within animal populations along the bold/shy continuum . here , we screened individual guppies , poecilia reticulata , for boldness using an overhead fright stimulus . we created groups consisting of 4 bold individuals ( bold shoals ) , 4 shy individuals ( shy shoals ) , or 2 bold and 2 shy individuals ( mixed shoals ) . the performance of these different shoal types was then tested in a novel foraging scenario . we found that both bold and mixed shoals approached a novel feeder in less time than shy shoals . interestingly , we found that more fish from mixed shoals fed than in either bold or shy shoals . we suggest that this can be explained by the fact that nearly all the cases where one fish was followed into the feeder by another occurred within mixed shoals and that it was almost always a shy fish following a bold one . these results suggest clear foraging benefits to shy individuals through associating with bold ones . surprisingly , story_separator_special_tag forecasting flood inundation in urban areas is challenging due to the lack of validation data . recent developments have led to new genres of data sources , such as images and videos from smartphones and cctv cameras . if the reference dimensions of objects , such as bridges or buildings , in images are known , the images can be used to estimate water levels using computer vision algorithms . such algorithms employ deep learning and edge detection techniques to identify the water surface in an image , which can be used as additional validation data for forecasting inundation . in this study , a methodology is presented for flood inundation forecasting that integrates validation data generated with the assistance of computer vision . six equifinal models are run simultaneously , one of which is selected for forecasting based on a goodness-of-fit ( least error ) , estimated using the validation data . collection and processing of images is done offline on a regular basis or following a flood event . the results show that the accuracy of inundation forecasting can be improved significantly using additional validation data .
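the least-error selection among equifinal models is straightforward to sketch ; the model runs and image-derived water levels below are placeholders , not the study 's simulations .

```python
# a minimal sketch of selecting one of several "equifinal" forecast models
# by goodness-of-fit against computer-vision-derived water levels.
import numpy as np

def rmse(pred, obs):
    pred, obs = np.asarray(pred, float), np.asarray(obs, float)
    return float(np.sqrt(np.mean((pred - obs) ** 2)))

def select_model(model_runs, cv_levels):
    # model_runs: {name: simulated water levels at the camera locations}
    errors = {name: rmse(levels, cv_levels) for name, levels in model_runs.items()}
    best = min(errors, key=errors.get)   # the least-error model is used to forecast
    return best, errors

runs = {"m1": [1.2, 1.8, 2.4], "m2": [1.0, 1.6, 2.1], "m3": [1.4, 2.0, 2.9]}
obs = [1.1, 1.7, 2.2]                    # water levels estimated from images
best, errs = select_model(runs, obs)
print(best, errs)
```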
story_separator_special_tag in this paper , we present a novel approach to visual smoke detection based on stereo vision . general smoke detection is usually performed by analyzing the images from remote cameras using various computer vision techniques . the literature on smoke detection shows a variety of approaches , and the focus of this paper is the improvement of the general smoke detection process by introducing stereo vision . two cameras are used to estimate the distance and size of the detected phenomena based on stereo triangulation . using this information , the minimum size and overall dynamics of the detected regions are further examined to ensure the elimination of false alarms induced by various phenomena ( such as the movement of objects located at short distances from the camera ) . such false alarms could easily be detected by the proposed stereo system , allowing the increase of the sensitivity and overall performance of the detection . we analyzed the requirements of such a system in terms of precision and robustness to possible error sources , especially when dealing with detection of smoke at various distances from the camera . for evaluation , three existing smoke detection methods were tested and story_separator_special_tag efficient water use assessment and irrigation management is critical for the sustainability of irrigated agriculture , especially under changing climate conditions . due to the impracticality of maintaining ground instrumentation over wide geographic areas , remote sensing and numerical model-based fine-scale mapping of soil water conditions have been applied for water resource applications at a range of spatial scales . here , we present a prototype framework for integrating high-resolution thermal infrared ( tir ) and synthetic aperture radar ( sar ) remote sensing data into a soil-vegetation-atmosphere-transfer ( svat ) model with the aim of providing improved estimates of surface- and root-zone soil moisture that can support optimized irrigation management strategies . specifically , remotely-sensed estimates of water stress ( from tir ) and surface soil moisture retrievals ( from sar ) are assimilated into a 30-m resolution svat model over a vineyard site in the central valley of california , u.s. the efficacy of our data assimilation algorithm is investigated via both synthetic and real data experiments . results demonstrate that a particle filtering approach is superior to an ensemble kalman filter for handling the nonlinear relationship between model states and observations . in addition , biophysical story_separator_special_tag summary the grapple and bumblebee mineralisations at the lake mackay project have been discovered using a combination of routine fine fraction soil sampling , drilling and focused ground electromagnetic methods . soil sampling initially provided the target areas with subsequent em surveys delineating basement conductors .
bumblebee returned sub-economic intersections when drill tested , while the third drill hole at grapple returned a significant cu-au intersection . the methodology has been expanded to use airborne electromagnetic methods to rapidly screen the large tenement holding , assist in the understanding of the soil geochemistry results and plan ground em surveys going forward . the airborne em method also allows us to test areas deeper under cover than soil sampling . story_separator_special_tag the utility model discloses a seed germination platform that is convenient for observation and statistics . it includes : a rotary stage , driven by a rotary mechanism ; a plurality of rings of positioning holes arranged on the rotary stage , the distribution center of each ring of holes being the rotation center of the stage , with a germination pot placed on every positioning hole ; a horizontal track erected above the rotary stage , whose projection on the stage passes through the rotation center ; a camera slide mounted on the horizontal track so that it can glide , driven by a slide mechanism ; and a camera set at the bottom of the camera slide . through the rotation of the rotary stage and the sliding of the camera slide , the camera can observe and collect statistics on the seed germination in every pot . story_separator_special_tag in order to integrally manage computers and peripheral devices at a user site , when a device monitoring server ( 203 a ) receives a trouble message from a device , it sends a message indicating this to a device center server ( 210 ) via a router ( 204 ) . if the received message is a trouble message , the device center server ( 210 ) controls an event adapter to convert the received message into a format that a center server can process , and transfers the converted message to the center server ( 110 ) . upon receiving the message , the center server ( 110 ) displays occurrence of a trouble in an event list using an event monitor ( 110 a ) . the user can simultaneously monitor the versatile computers and peripheral devices by observing the event monitor . story_separator_special_tag purpose : an image monitoring and fire extinguishing system is provided to actively extinguish a fire by operating a fire extinguisher when the fire breaks out . constitution : a plurality of monitoring cameras ( 10 ) is installed in a fire monitoring target area . the plurality of monitoring cameras receives a control signal and takes a photograph through motion control . a plurality of fire extinguishers ( 20 ) is installed in the fire monitoring target area and discharges a fire extinguishing agent after receiving the control signal . the plurality of fire extinguishers transmits state information . a monitoring server ( 30 ) is connected to the plurality of monitoring cameras and the plurality of fire extinguishers by a wire/wireless network . the monitoring server is comprised of an analysis module , a control module , and a disaster preventing module . the analysis module analyzes an image transmitted from the plurality of monitoring cameras . the control module selectively applies the control signal to the plurality of fire extinguishers according to the analysis result .
the disaster preventing module transmits fire information to a prescribed terminal of an administrator managing the fire monitoring target area . story_separator_special_tag the argument for the implementation of smart metering , which is an elastic term , varies according to circumstance and place . in some countries , the business case for establishing an advanced metering infrastructure ( ami ) relies in part on improving consumption feedback to customers and assisting in the transition to lower-impact energy systems . there is an expectation that ami will lead to reductions in both the demand and the cost to serve customers through improved communication , but little evidence exists to show overall demand reduction . to what extent might smart meters improve the prospects for customer engagement ? to assess this question , end-user perceptions and practices must be considered along with metering hardware and economics . using the theory of affordances , qualitative research is examined to understand how householders have used consumption feedback , with and without smart meters . although ami offers possibilities for household energy management and customer utility relations , there is . story_separator_special_tag tef grain yield is low , at 1.75 kg ha-1 in ethiopia . therefore , the objectives of this study were to compare the biological superiority of the technology packages , to conduct a partial budget cost-benefit analysis of the technology , and to improve the full package of recommendations . three intervention packages on the tef production system ( the extension package , the agricultural transformation agency of ethiopia package , and the research package with row and broadcast planting ) were laid out in a randomized complete block design with replication ( farmers/locations as replication ) . the experimental plot size was 500 m2 . the result indicates that the research package with broadcast planting and row planting systems was found to be superior in grain yield , at 1580 kg ha-1 and 1550 kg ha-1 , respectively . similarly , the research row sowing and broadcasting recommendations gave higher above-ground biomass , 10167 kg ha-1 and 10000 kg ha-1 respectively , as compared to the ata and extension package practice . thus , the results revealed that a seed rate of 10-15 kg ha-1 , both broadcast and row sown , gives better grain yield and shoot biomass , providing the highest return with marginal rate of return , whereas the ata package was found to be story_separator_special_tag abstract big data is increasingly available in all areas of manufacturing and operations , which presents an opportunity for better decision making and discovery of the next generation of innovative technologies . recently , there have been substantial developments in the field of patent analytics , which describes the science of analysing large amounts of patent information to discover trends . we define intellectual property analytics ( ipa ) as the data science of analysing large amounts of ip information , to discover relationships , trends and patterns for decision making . in this paper , we contribute to the ongoing discussion on the use of intellectual property analytics methods , i.e. artificial intelligence methods , machine learning and deep learning approaches , to analyse intellectual property data . this literature review follows a narrative approach with a search strategy , where we present the state-of-the-art in intellectual property analytics by reviewing 57 recent articles .
the bibliographic information of the articles is analysed , followed by a discussion of the articles divided into four main categories : knowledge management , technology management , economic value , and extraction and effective management of information . we hope research scholars and industrial story_separator_special_tag in this paper we describe the development of a number of projects which , through the use of new systems and technologies , aim to change the way we perceive and study the urban environment . citydashboard , procedural cities and pigeonsim are some of the projects presented that will attempt to provide an insight into the process of creating , modelling and communicating aspects of a smart city . in this framework , we are leading to the development of a comprehensive system which will aid in the analysis and understanding of the urban environment through urban visualizations , open data platforms , complexity theories and interactive systems . story_separator_special_tag due to recent advances in iot ( internet of things ) related technology , demand for highly accurate indoor positioning systems and services has increased remarkably . the proposed indoor positioning system utilizes 4 estimote ble ( bluetooth low energy ) beacons to measure the user position with a smartphone . each beacon broadcasts a radio signal and a smartphone can receive these signals to estimate the distance by measuring rssi ( received signal strength indicator ) values . in this paper , we propose a new estimation algorithm to robustly calculate a user 's position based on the strength of received signals . the proposed algorithm collects signals at a pre-processing stage to improve the quality of rssi values , calculates the distance according to the signal strength from each beacon , and then estimates the user position with the triangulation of overlapped regions . the experimental result of our proposed indoor positioning system shows approximately 80 % accuracy within a 2 m error bound for three types of room . the proposed indoor position tracking is simple and well suited to low-cost ble beacons .
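the two steps the abstract names , rssi-to-distance conversion and triangulation of the overlapped regions , can be sketched as follows ; the tx power at 1 m and the path-loss exponent are assumed calibration constants , and the linearized least-squares solve stands in for whatever overlap method the authors actually use .

```python
# a minimal sketch: rssi -> distance via a log-distance path-loss model,
# then a position estimate via linearized least squares over beacon circles.
import numpy as np

def rssi_to_distance(rssi, tx_power=-59.0, n=2.0):
    # tx_power: expected rssi at 1 m; n: path-loss exponent (assumptions)
    return 10 ** ((tx_power - rssi) / (10 * n))

def trilaterate(beacons, dists):
    # subtract the first circle equation from the rest -> a linear system
    (x0, y0), d0 = beacons[0], dists[0]
    A, b = [], []
    for (xi, yi), di in zip(beacons[1:], dists[1:]):
        A.append([2 * (xi - x0), 2 * (yi - y0)])
        b.append(d0**2 - di**2 + xi**2 - x0**2 + yi**2 - y0**2)
    sol, *_ = np.linalg.lstsq(np.array(A), np.array(b), rcond=None)
    return sol  # (x, y) estimate

beacons = [(0, 0), (5, 0), (0, 5), (5, 5)]       # assumed beacon layout (m)
rssi = [-65, -70, -72, -78]                      # illustrative readings
pos = trilaterate(beacons, [rssi_to_distance(r) for r in rssi])
print(pos)
```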
story_separator_special_tag the tendency in health promotion to promote only the positive eat more messages is based on the false notion of diet as a fixed pipe whereby pushing more vegetables in one end results in more confectionary and unhealthy snacks falling out the other end . public health should ideally have as detailed and nuanced an appreciation of consumer choice as commercial marketing currently has , but the most pressing task is to persuade politicians that achieving healthy population diets requires multiple strategies , some of which are readily branded by the food industry as nanny state . influencing consumers to choose healthy foods will first require influencing policy-makers to choose effective food policies and actions . the practical questions thus shift from being problem-oriented ( understanding what factors currently influence consumer choices ) to solution-oriented ( understanding what interventions are achievable and what impacts they are likely to have on creating healthier diets ) . thus , from the myriad of influences arise a handful of potential interventions to focus on , and the usual proposals include : restricting unhealthy food marketing to children ; taxes and story_separator_special_tag in 1995 , the american society of human genetics ( ashg ) and american college of medical genetics and genomics ( acmg ) jointly published a statement on genetic testing in children and adolescents . in the past 20 years , much has changed in the field of genetics , including the development of powerful new technologies , new data from genetic research on children and adolescents , and substantial clinical experience . this statement represents current opinion by the ashg on the ethical , legal , and social issues concerning genetic testing in children . these recommendations are relevant to families , clinicians , and investigators . after a brief review of the 1995 statement and major changes in genetic technologies in recent years , this statement offers points to consider on a broad range of test technologies and their applications in clinical medicine and research . recommendations are also made for record and communication issues in this domain and for professional education . story_separator_special_tag this research was carried out to assess the impact of treated wastewater irrigation on soil bacteriological and physicochemical properties and turfgrass bacteriological quality . two golf courses were studied : golf course a , irrigated with freshwater ( fw ) , and golf course b , irrigated with uv-treated wastewater ( uv-tw ) . the physicochemical parameters ( electrical conductivity and ph ) of the soil were determined . fw , uv-tw , lake-stored water ( lsw ) , turfgrass , and soil were collected , and their bacteriological parameters were determined . these parameters include escherichia coli , faecal enterococci , and faecal coliform . the results showed that the soil irrigated with treated wastewater ( s-tw ) showed a significant increase in ph when compared with the soil irrigated with freshwater ( s-fw ) . however , no significant difference was recorded in soil electrical conductivity . the faecal indicator concentrations of the irrigation water samples varied considerably , and the concentrations in lsw frequently exceeded those of the water at the output of the treatment plant ( uv-tw ) . the comparison of the faecal contamination between the two golf courses indicates no significant difference in e. story_separator_special_tag electronic eye describes the design and implementation of a microcontroller-based door image capture security system for homes and offices . it provides the user with an efficient and reliable door image capture security system for homes , offices and industries that uses a sensor at the door to send signals to the control unit of the electronic eye , sounds a buzzer alarm for security purposes , and captures an image as soon as the door opens , with the captured image output to a laptop or pc through a vb application . keywords : microcontroller , electronic eye , security system , control unit , sensor , vb application . introduction : security is a primary concern for day to day life and property in our environment . this paper describes an effective security alarm system that can monitor the image capture system with the help of a vb application . as soon as the door opens the sensor gets activated and an image is captured with the help of a web camera ; in the pc , the captured image gets saved within the vb application .
it also serves the function of sensing and detecting false intrusion ( using an input sensory device , early-warning alarm devices , and a remotely controlled security system ) . the term false story_separator_special_tag the asset recovery center ( ppa ) , as the republic of indonesia general attorney 's unit , is responsible for ensuring that asset recovery in indonesia is conducted with an integrated system that is effective , efficient , transparent and accountable , by tracing , securing , maintaining , seizing , and returning assets of criminal acts of corruption handled by the prosecutor 's office . however , the amount of assets recovered from corruption by the ppa remains small , and the current implementation is only done after a court decision , even though asset tracking should be done before the verdict . in addition , the urgency of its existence remains questionable given its scope is almost equal to the labuksi kpk and rupbasan at the ministry of law and human rights , which indirectly creates a tug of war between the law enforcement units . therefore , using a normative juridical approach and data obtained directly through library research and interview mechanisms , this paper found the importance of establishing a ppa for the prosecutor 's office related to its duties and functions , as described in the law and other regulations in the recovery of assets story_separator_special_tag technological advancements facilitate managers in remote monitoring and control , leading to improved job performance . in this research , an automatic monitoring and control system is presented for fuel management at various generation points across the country . a centralized fuel management system helps in optimizing fuel consumption by controlling generators ' run-time , fuel level , timely fueling , consumption records , refilling alarms , and fuel theft . the fuel management system is primarily used as a solution for the problems that occur during manual fuelling at remote sites . this phenomenon occurs especially for fuel thefts , which represent one of the major issues being faced . automatic alarm generation for theft , unusual opening of the tank , fuel level fall , temperature rise or any abnormal activity may help in better management . the data acquired from sensors and flow meters at the human machine interface ( hmi ) is analyzed to obtain insights for efficient management . the results with and without the fuel management system are compared for average fuel consumption and total cost . as notable improvements have been found at each site of the project , it is concluded that effective implementation of specific technological advancements for story_separator_special_tag smart cargo container system comprising auditable , secure , sealable , stackable , trackable and pollable , universal , pallet boxes used : 1 ) auto-latchingly secured to the under-carriage transverse i-beams of over-the-road semi-trailers by means of a guiding latching system ; and 2 ) stackable , up to three or more high , in the trailers or warehouses . sophisticated battery-powered electronic locks , sensors and alarms are provided , as well as an rf communications and gps locator module that radios to a base station the time , location and status of the inventive smart cargo container , and any anomalous events as they occur , including unauthorized attempts to open or break into the container , or potential damage events .
in addition , both the locks and comm modules are programmable , and provide extensive , selectably pollable and downloadable event , access and transport histories and audit trails . the comm system permits remote tracking and real-time status checks via the internet , lan or wan wireless networks . story_separator_special_tag this book provides a self-contained introduction to the simulation of flow and transport in porous media , written by a developer of numerical methods . the reader will learn how to implement reservoir simulation models and computational algorithms in a robust and efficient manner . the book contains a large number of numerical examples , all fully equipped with online code and data , allowing the reader to reproduce results , and use them as a starting point for their own work . all of the examples in the book are based on the matlab reservoir simulation toolbox ( mrst ) , an open-source toolbox popular in both academic institutions and the petroleum industry . the book can also be seen as a user guide to the mrst software . it will prove invaluable for researchers , professionals and advanced students using reservoir simulation methods . story_separator_special_tag freezing burns are not very frequent , but good initial management by health professionals is no less important because of it . the treatment begins when deciding whether to rewarm or not . it must be ensured that the affected area can be kept defrosted and warmed until the patient arrives at the hospital . the rewarming must be quick and starts by immersion in water and iodinated antiseptic solution at 38 °c for 30 minutes . it is difficult to have a prognosis at the time a freezing burn occurs , because there are many factors that guide us to an unfavorable outcome ( slow or passive rewarming , freezing time , appearance of dark blisters ) . therefore , we will maintain an expectant and conservative attitude , with a waiting period until the necrosis is perfectly defined . dry ice is an element that has many uses . it has a temperature of 40 °c below zero , so it must be handled with insulating gloves ; otherwise it can cause freezing and/or dry ice burns that result in considerable injuries . it is important to take preventive measures and educate the population on the management of story_separator_special_tag radial profiles of electron temperature and density are measured at high spatial ( 1 mm ) and temporal ( 10 s ) resolution using a thermal supersonic helium jet . a highly accurate detection system is applied to well-developed collisional-radiative model codes to produce the profiles . agreement between this measurement and an edge thomson scattering measurement is found to be within the error bars ( 20 % ) . the diagnostic is being used to give profiles near the ion cyclotron resonant heating antenna on textor to better understand rf coupling to the core . story_separator_special_tag increasingly , scholars suggest thinking of the street as a social space , rather than just a channel for movement . studies that address the relationships between social behavior and environmental quality of the street tend to separate the study of physical features from land uses and hence do not address the interrelationships between behavioral patterns and physical features of the street and its sociability . this article is an empirical examination of behavioral responses of people to the environmental quality of neighborhood commercial streets .
structured and semistructured observations are used to study stationary , lingering , and social activities on three neighborhood commercial streets . eleven land use and physical characteristics of buildings and the street are identified based on the literature review and extensive observations . these are measured and tested to understand which characteristics support stationary , lingering , and social activities . the findings reveal that people are equally concerned . story_separator_special_tag commercial noise-cancellation headphones are based on the active noise control ( anc ) algorithm . however , all existing anc headphones are based on a bilateral anc approach , where two independent monaural anc systems are used respectively for the left and the right ear cups . the performance of the bilateral anc approach strictly depends on the number of noise sources , the direction of the noise sources and the noise types , and it decreases when the number of noise sources increases . the human hearing system is binaural in nature , and noise control can take advantage of binaural processing to further enhance noise reduction compared to conventional bilateral anc headphones . in this paper , we first propose a binaural anc algorithm to evaluate its performance over the bilateral anc algorithm and subsequently modify it into a combined bilateral-binaural anc ( cbbanc ) algorithm for headphones , in order to improve noise reduction performance for cases where there is more than one noise source , where the noise sources are situated at different locations , and for diffuse-field noise . experimental results show that the combined bilateral-binaural anc has better performance in all our tests compared to conventional bilateral anc headphones . story_separator_special_tag this paper presents a low-power and miniaturized design for acoustic direction-of-arrival ( doa ) estimation and source localization , called owlet . the required aperture , power consumption , and hardware complexity of the traditional array-based spatial sensing techniques make them unsuitable for small and power-constrained iot devices . aiming to overcome these fundamental limitations , owlet explores acoustic microstructures for extracting spatial information . it uses a carefully designed 3d-printed metamaterial structure that covers the microphone . the structure embeds a direction-specific signature in the recorded sounds . the owlet system learns the directional signatures through a one-time in-lab calibration . the system uses an additional microphone as a reference channel and develops techniques that eliminate environmental variation , making the design robust to noise and multipath in arbitrary locations of operation . the owlet prototype shows 3.6° median error in doa estimation and 10 cm median error in source localization while using a 1.5 cm × 1.3 cm acoustic structure for sensing . the prototype consumes less than 1/100th of the energy required by a traditional microphone array to achieve similar doa estimation accuracy . owlet opens up possibilities of low-power sensing through 3d-printed passive structures . story_separator_special_tag medication adherence is one of the leading factors that can make the difference between life and death , especially for patients managing chronic conditions . these issues have driven a recent wave of research , including the development of smart pill bottles that monitor when a pill is extracted .
the goal of my phd research is to develop systems that can identify who has taken the pill and when . to do so , we have designed different generations of smart pill bottles and associated algorithms for enabling several applications . we use 3d-printed pill bottles equipped with a magnetic switch sensor and an accelerometer . the bottles are carefully designed to minimize power consumption , and we devise new machine learning-based techniques that use the accelerometer data generated during bottle interaction ( pill extraction ) to capture the gesture of the user extracting the pill . our work can be classified into 3 core thrust areas : 1 ) user identification using smart pill bottle systems ; 2 ) adaptive learning techniques for user identification across multiple smart pill bottles ; 3 ) latent condition monitoring using smart pill bottles . story_separator_special_tag life satisfaction is a key indicator of subjective well-being . this article is a review of the multidisciplinary literature on the relationship between life satisfaction and the work domain . a discussion of top-down and bottom-up theories of life satisfaction is included , and the literatures on work-related antecedents of life satisfaction , the proximal mediators ( quality of work life , quality of nonwork life , and feelings of self-worth ) , and consequences of life satisfaction were reviewed . a meta-analysis of life satisfaction with respect to career satisfaction , job performance , turnover intentions , and organizational commitment was performed . each major section of the article concludes with a future opportunities subsection where gaps in the research are discussed . story_separator_special_tag the use of maker community tools and iot technologies inside classrooms is spreading to an ever-increasing number of education and science fields . gaia is a european research project focused on achieving behavior change for sustainability and energy awareness in schools . in this work , we report on how a large iot deployment in a number of educational buildings , and real-world data from this infrastructure , are utilized to support a maker lab kit activity inside the classroom . we also provide some insights into the integration of these activities in the school curriculum , along with a discussion of feedback produced through a series of workshop activities in a number of schools in greece . moreover , we discuss the application of the lab kit framework towards implementing an interactive installation . we also report on how the lab kit is paired with a serious game and an augmented reality app for smartphones and tablets , supporting the in-class activities . our initial evaluation results show a very positive first reaction by the school community . story_separator_special_tag information extraction methods can help discover critical knowledge buried in the vast repositories of unstructured clinical data . however , these methods are underutilized in clinical research , potentially due to the absence of free software geared towards clinicians with little technical expertise . the skills required for developing/using such software constitute a major barrier for medical researchers wishing to employ these methods . to address this , we have developed canary , a free and open-source solution designed for users without natural language processing ( nlp ) or software engineering experience . it was designed to be fast and work out of the box via a user-friendly graphical interface .
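the canary abstract above describes rule-based extraction of concepts from free clinical text . a minimal sketch of that style of matching in python , assuming a hypothetical pattern lexicon ( the patterns , labels and example note below are illustrative assumptions , not canary 's actual rule files ) :

import re

# hypothetical lexicon mapping a concept label to surface patterns ;
# canary users author comparable rules through its graphical interface .
LEXICON = {
    "smoking_status": [r"\b(current|former|never)\s+smoker\b"],
    "medication":     [r"\b(aspirin|metformin|warfarin)\b"],
    "dosage":         [r"\b\d+\s*(mg|mcg|ml)\b"],
}

def extract_concepts(note: str):
    """scan a free-text clinical note and return (concept, matched text) pairs."""
    hits = []
    for concept, patterns in LEXICON.items():
        for pattern in patterns:
            for m in re.finditer(pattern, note, flags=re.IGNORECASE):
                hits.append((concept, m.group(0)))
    return hits

note = "patient is a former smoker , currently on metformin 500 mg daily ."
print(extract_concepts(note))
# [('smoking_status', 'former smoker'), ('medication', 'metformin'), ('dosage', '500 mg')]

real systems layer negation handling and grammar-based rules on top of such lexicons ; the point here is only the shape of the computation .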
story_separator_special_tag coronavirus-19 ( covi-19 ) involves humans as well as animals and may cause serious damage to the respiratory tract , including the lung : coronavirus disease ( covid-19 ) . this pathogenic virus has been identified in swabs performed on the throat and nose of patients who suffer from or are suspected of the disease . when covi-19 infects the upper and lower respiratory tract it can cause mild or highly acute respiratory syndrome with consequent release of pro-inflammatory cytokines , including interleukin ( il ) -1 and il-6 . the binding of covi-19 to the toll-like receptor ( tlr ) causes the release of pro-il-1 , which is cleaved by caspase-1 , followed by inflammasome activation and production of active mature il-1 , which is a mediator of lung inflammation , fever and fibrosis . suppression of pro-inflammatory il-1 family members and il-6 has been shown to have a therapeutic effect in many inflammatory diseases , including viral infections . the cytokine il-37 has the ability to suppress the innate and acquired immune response and also has the capacity to inhibit inflammation by acting on the il-18r receptor . il-37 performs its immunosuppressive activity by acting on mtor and increasing the adenosine monophosphate story_separator_special_tag this paper examines automated iris recognition as a biometrically based technology for personal identification and verification . the motivation for this endeavor stems from the observation that the human iris provides a particularly interesting structure on which to base a technology for noninvasive biometric assessment . in particular , the biomedical literature suggests that irises are as distinct as fingerprints or patterns of retinal blood vessels . further , since the iris is an overt body , its appearance is amenable to remote examination with the aid of a machine vision system . the body of this paper details issues in the design and operation of such systems . for the sake of illustration , extant systems are described in some amount of detail . story_separator_special_tag editorial welcome to the sixth issue of the conet newsletter . conet is the eu fp7 network of excellence on cooperating objects , merging the fields of embedded systems for robotics and control , pervasive computing and wireless sensor networks . conet focuses on establishing the field of cooperating objects within the research and industrial community , thus strengthening the position of europe in the research landscape . in this issue we have an article from eth zürich on real-time plant monitoring that should help people with a lack of green fingers maintain their plants . this issue 's member profile has some information on the delft university of technology and the embedded software group that participates in conet . last but not least , there is an article from our associated member pablo de olavide university on active perception for cooperating objects . if you are interested in obtaining up-to-date information about the conet project , please visit our website at http://www.cooperating-objects.eu/ . we hope you will enjoy this issue . properly taking care of indoor plants can be a quite demanding and challenging task . the provision of ideal environmental conditions for plant growth requires a lot of story_separator_special_tag the number of connected devices is increasing every day , creating smart homes and shaping the era of the internet of things ( iot ) , and most of the time , end-users are unaware of their impact on privacy .
in this work , we analyze the ecosystem around a philips hue smart white bulb in order to assess the privacy risks associated with the use of different devices ( smart speaker or button ) and smartphone applications to control it . we show that using different techniques to switch this bulb on or off has significant consequences regarding the actors involved ( who mechanically gather information on the user 's home ) and the volume of data sent to the internet ( we measured differences up to a factor of 100 , depending on the control technique we used ) . even when the user is at home , these data flows often leave the user 's country , creating a situation that is neither privacy friendly ( and the user is most of the time ignorant of the situation ) , nor sovereign ( the user depends on foreign actors ) , nor sustainable ( the extra energetic story_separator_special_tag the allele frequency net database ( afnd , www.allelefrequencies.net ) provides the scientific community with a freely available repository for the storage of frequency data ( alleles , genes , haplotypes and genotypes ) related to human leukocyte antigens ( hla ) , killer-cell immunoglobulin-like receptors ( kir ) , major histocompatibility complex class i chain related genes ( mic ) and a number of cytokine gene polymorphisms in worldwide populations . in the last five years , afnd has become more popular in terms of clinical and scientific usage , with a recent increase in genotyping data as a necessary component of short population report article submissions to another scientific journal . in addition , we have developed a user-friendly desktop application for hla and kir genotype/population data submissions . we have also focused on classification of existing and new data into gold , silver and bronze criteria , allowing users to filter and query depending on their needs . moreover , we have continued to expand other features , for example those focused on hla associations with adverse drug reactions . at present , afnd contains > 1600 populations from > 10 million healthy individuals , making afnd a story_separator_special_tag lab of things ( lot , lab-of-things.com ) is a research platform for interconnection , programming , and large-scale deployment of devices and sensors . these devices and sensors can then be used for deployment of field studies in a variety of research areas , including elderly care , energy management , and the like . lot is built on top of homeos , a middleware component , making interconnection of a wide range of devices possible . lot also provides cloud storage and remote monitoring capabilities . traditionally , programming on the lot platform has been done using c# in microsoft visual studio . while lot programs developed on the .net framework offer a rich set of functionality , writing programs on lot can be challenging for developers who are not experienced with the technology involved . in this demonstration , we introduce an innovative programming approach on the lot platform by building a generic application and creating corresponding libraries in the user-friendly touchdevelop ( touchdevelop.com ) programming environment . as an example , we implemented the same functionality as the lab of things alerts application using the new generic app .
in addition to a touch-enabled programming environment story_separator_special_tag home and building automation ( hba ) trends toward the ambient intelligence paradigm , which aims to autonomously coordinate and control appliances and subsystems in a given environment . nevertheless , hba is based on explicit user-home interaction and basically enables static and predetermined scenarios . this paper proposes a more flexible multi-agent approach , leveraging semantic-based resource discovery and orchestration for hba applications . backward-compatible enhancements to the eib/knx domotic standard allow it to support the semantic characterization of user profiles and device functionalities , thus enabling : 1 ) negotiation of the most suitable home services/functionalities according to implicit and explicit user needs ; and 2 ) device-driven interaction for adapting the environment to context evolution . a power-management problem in hba is presented as a case study to better clarify the proposal and assess its effectiveness . story_separator_special_tag service-oriented architecture ( soa ) is realized by independent , standardized , and self-describing units known as services . this architecture has been widely used and verified for automatic , dynamic , and self-configuring distributed systems such as in building automation . this paper presents a building automation system adopting the soa paradigm with devices implemented by the devices profile for web services ( dpws ) , in which context information is collected , processed , and sent to a composition engine to coordinate appropriate devices/services based on the context , composition plan , and predefined policy rules . a six-phased composition process is proposed to carry out the task . in addition , two other components are designed to support the composition process : a building ontology as a schema for representing semantic data , and a composition plan description language to describe context-based composite services in the form of composition plans . a prototype consisting of a dpwsim simulator and sambas is developed to illustrate and test the proposed idea . comparison analysis and experimental results imply the feasibility and scalability of the system . story_separator_special_tag we are currently observing emerging solutions to enable the internet of things ( iot ) . efficient and feature-rich iot middleware platforms are key enablers for iot . however , due to complexity , most of these middleware platforms are designed to be used by it experts . in this paper , we propose a semantics-driven model that allows non-it experts ( e.g . plant scientists , city planners ) to configure iot middleware components more easily and quickly . such tools allow them to retrieve the data they want without knowing the underlying technical details of the sensors and the data processing components . we propose a context-aware sensor configuration model ( cascom ) to address the challenge of automated context-aware configuration of filtering , fusion , and reasoning mechanisms in iot middleware according to the problems at hand . we incorporate semantic technologies in solving the above challenges . we demonstrate the feasibility and the scalability of our approach through a prototype implementation based on an iot middleware called global sensor networks ( gsn ) , though our model can be generalized to any other middleware platform .
we evaluate cascom in the agriculture domain and measure both story_separator_special_tag deployment of embedded systems in industrial environments requires preconfiguration for operation , and , in some contexts , easy reconfiguration capabilities are also desirable . it is therefore useful to define a mechanism for embedded devices that will operate in sensor and actuator networks to be remotely ( re ) configured and to have flexible computation capabilities . we propose such a configuration , reconfiguration , and processing mechanism in the form of a software architecture . a node component is deployed in each embedded device and implements the application programming interface ( api ) , configuration , processing , and communication . the resulting system provides remote configuration and processing of data in any node in a most flexible way , since every node has the same uniform api , processing , and access functionalities . the experimental section shows a working deployment of this concept in an industrial refinery setting , as part of the eu fp7 project ginseng . story_separator_special_tag in this paper we discuss a selection of promising and interesting research areas in the design of protocols and systems for wireless industrial communications . we have selected topics that have either emerged as hot topics in the industrial communications community in the last few years ( like wireless sensor networks ) , or which could be worthwhile research topics in the next few years ( for example , cooperative diversity techniques for error control , and cognitive radio/opportunistic spectrum access for mitigation of external interference ) . story_separator_special_tag bluetooth ( over ieee 802.15.1 ) , ultra-wideband ( uwb , over ieee 802.15.3 ) , zigbee ( over ieee 802.15.4 ) , and wi-fi ( over ieee 802.11 ) are four protocol standards for short-range wireless communications with low power consumption . from an application point of view , bluetooth is intended for a cordless mouse , keyboard , and hands-free headset , uwb is oriented to high-bandwidth multimedia links , zigbee is designed for reliable wirelessly networked monitoring and control networks , while wi-fi is directed at computer-to-computer connections as an extension or substitute for cabled networks . in this paper , we provide a study of these popular wireless communication standards , evaluating their main features and behaviors in terms of various metrics , including the transmission time , data coding efficiency , complexity , and power consumption . it is believed that the comparison presented in this paper would benefit application engineers in selecting an appropriate protocol .
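the comparison abstract above evaluates the four standards on metrics such as transmission time and data coding efficiency . a worked sketch of those two metrics in python ; the payload , overhead and bit-rate values are rough assumptions for illustration , not the paper 's measured figures :

import math

# illustrative per-protocol parameters ( assumptions ) : max payload per
# packet ( bytes ) , per-packet protocol overhead ( bytes ) , nominal bit rate ( bit/s )
PROTOCOLS = {
    "bluetooth": {"payload": 339,  "overhead": 19, "rate": 723_200},
    "zigbee":    {"payload": 102,  "overhead": 31, "rate": 250_000},
    "wi-fi":     {"payload": 2304, "overhead": 58, "rate": 54_000_000},
}

def transmission_time(name: str, data_bytes: int) -> float:
    """seconds to ship `data_bytes` of application data, ignoring acks and backoff."""
    p = PROTOCOLS[name]
    packets = math.ceil(data_bytes / p["payload"])
    total_bits = (data_bytes + packets * p["overhead"]) * 8
    return total_bits / p["rate"]

def coding_efficiency(name: str) -> float:
    """fraction of a maximum-size packet that carries application data."""
    p = PROTOCOLS[name]
    return p["payload"] / (p["payload"] + p["overhead"])

for name in PROTOCOLS:
    print(f"{name}: 10 kB in {transmission_time(name, 10_000) * 1e3:.1f} ms , "
          f"coding efficiency {coding_efficiency(name):.0%}")

the qualitative ordering this produces ( wi-fi fastest per byte , zigbee slowest ) is the kind of trade-off the paper quantifies in detail .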
the rising popularity of the sensor-equipped smartphone is changing the possible scale and scope of human activity inference . the diversity in user population seen in large user bases can overwhelm conventional one-size-fits-all classification approaches . although personalized models are better able to handle population diversity , they often require increased effort from the end user during training and are computationally expensive . in this paper , we propose an activity classification framework that is scalable and can tractably handle an increasing number of users . scalability is achieved by maintaining distinct groups of similar users during the training process , which makes it possible to account for the differences between users without resorting to training individualized classifiers . the proposed framework keeps user burden low by leveraging crowd-sourced data labels , where simple natural language processing techniques in combination with multi-instance learning are used to handle labeling errors introduced by low-commitment everyday users . experimental results on a large public dataset demonstrate that the framework can cope with population diversity irrespective of population size . story_separator_special_tag a key challenge of data-driven social science is the gathering of high-quality multi-dimensional datasets . a second challenge relates to the design and execution of structured experimental interventions in situ , in a way comparable to the reliability and intentionality of ex-situ laboratory experiments . in this paper we introduce the friends and family study , in which a young-family residential community is transformed into a living laboratory . we employ a ubiquitous computing approach that combines extremely rich data collection in terms of signals , dimensionality , and throughput , together with the ability to conduct targeted experimental interventions with study populations . we present our mobile-phone-based social and behavioral sensing system , which has been deployed for over a year now . finally , we describe a novel tailored intervention aimed at increasing physical activity in the subject population . results demonstrate the value of social factors for motivation and adherence , and allow us to quantify the contribution of different incentive mechanisms . story_separator_special_tag wearable computers have the potential to act as intelligent agents in everyday life and to assist the user in a variety of tasks , using context to determine how to act . location is the most common form of context used by these agents to determine the user 's task . however , another potential use of location context is the creation of a predictive model of the user 's future movements . we present a system that automatically clusters gps data taken over an extended period of time into meaningful locations at multiple scales . these locations are then incorporated into a markov model that can be consulted for use with a variety of applications in both single-user and collaborative scenarios . story_separator_special_tag this paper envisions a new research direction that we call psychological computing . the key observation is that , even though computing systems are missioned to satisfy human needs , there has been little attempt to bring understandings of human need/psychology into core system design . this paper makes the case that percolating psychological insights deeper into the computing layers is valuable , even essential .
through examples from content caching , vehicular systems , and network scheduling , we argue that psychological awareness can not only offer performance gains on known technological problems , but also spawn new kinds of systems that are difficult to conceive otherwise . story_separator_special_tag machine learning methods extract value from vast data sets quickly and with modest resources . they are established tools in a wide range of industrial applications , including search engines , dna sequencing , stock market analysis , and robot locomotion , and their use is spreading rapidly . people who know the methods have their choice of rewarding jobs . this hands-on text opens these opportunities to computer science students with modest mathematical backgrounds . it is designed for final-year undergraduates and master 's students with limited background in linear algebra and calculus . comprehensive and coherent , it develops everything from basic reasoning to advanced techniques within the framework of graphical models . students learn more than a menu of techniques ; they develop analytical and problem-solving skills that equip them for the real world . numerous examples and exercises , both computer based and theoretical , are included in every chapter . resources for students and instructors , including a matlab toolbox , are available online . story_separator_special_tag the results of three experiments showed that dutch , taiwanese , and japanese adults were able to identify dutch vocal expressions of emotion beyond chance expectancy . inspection of the confusion data further revealed that , in addition to symmetrical confusions , there were quite a few confusions that were asymmetrical . the outcomes of a multidimensional scaling analysis finally suggested that confusions were a function of similarity in levels of activity of the emotions concerned rather than , for example , similarity in evaluative meaning . the conclusion was that there are universally recognizable characteristics of vocal patterns of emotion and that these characteristics are primarily related to the activity dimension of emotional meaning . story_separator_special_tag the complexity of the mobility tracking problem in a cellular environment has been characterized under an information-theoretic framework . shannon 's entropy measure is identified as a basis for comparing user mobility models . by building and maintaining a dictionary of each individual user 's path updates ( as opposed to the widely used location updates ) , the proposed adaptive on-line algorithm can learn subscribers ' profiles . this technique evolves out of the concepts of lossless compression . the compressibility of the variable-to-fixed length encoding of the acclaimed lempel-ziv family of algorithms reduces the update cost , whereas their built-in predictive power can be effectively used to reduce the paging cost . story_separator_special_tag this is the first textbook on pattern recognition to present the bayesian viewpoint . the book presents approximate inference algorithms that permit fast approximate answers in situations where exact answers are not feasible . it uses graphical models to describe probability distributions , at a time when no other books applied graphical models to machine learning . no previous knowledge of pattern recognition or machine learning concepts is assumed .
familiarity with multivariate calculus and basic linear algebra is required , and some experience in the use of probabilities would be helpful though not essential , as the book includes a self-contained introduction to basic probability theory . story_separator_special_tag the smartphone revolution has brought ubiquitous , powerful , and connected sensing hardware to the masses . this holds great promise for a wide range of research fields . however , deployment of experiments onto a large set of mobile devices places technological , organizational , and sometimes financial burdens on researchers , making real-world experimental research cumbersome and difficult . we argue that a research infrastructure in the form of a large-scale mobile phone testbed is required to unlock the potential of this new technology . we aim to facilitate experimentation with mobile phone sensing by providing a pragmatic middleware framework that is easy to use and features fine-grained user-level control to guard the privacy of the volunteer smartphone users . in this paper we describe the challenges and requirements for such a middleware , outline an architecture featuring a flexible , scriptable publish/subscribe framework , and report on our experience with an implementation running on top of the android platform . story_separator_special_tag location prediction enables the next generation of location-based applications . the purpose of this paper is to provide a historical summary of research in personal location prediction . location prediction began as a tool for network management , predicting the load on particular cellular towers or wifi access points . with the increasing popularity of mobile devices , location prediction turned personal , predicting individuals ' next locations given their current locations . this paper includes an overview of prediction techniques and reviews several location prediction projects , comparing the raw location data , feature extraction , choice of prediction algorithms and their results . a new trend has emerged , that of employing additional context to improve or expand predictions . incorporating temporal information enables location predictions farther out into the future . appending place types or place names can improve predictions or enable new prediction applications . story_separator_special_tag the explicit investigation of anticipations in relation to adaptive behavior is a recent approach . this chapter first provides psychological background that motivates and inspires the study of anticipations in the adaptive behavior field . next , a basic framework for the study of anticipations in adaptive behavior is suggested . different anticipatory mechanisms are identified and characterized . first , fundamental distinctions are drawn between implicit anticipatory behavior , payoff anticipatory behavior , sensory anticipatory behavior , and state anticipatory behavior . a case study allows further insights into the drawn distinctions . many future research directions are suggested . story_separator_special_tag as smartphones get smarter by pushing intelligence to the phone and computing cloud , they 'll start to understand our life patterns , reason about our health and wellbeing , help us navigate our day , and intervene on our behalf . here , the authors present various smartphone sensing systems that they 've built , arguing that , eventually , these smartphones will evolve into cognitive phones .
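the gps-clustering abstract above , and several other prediction papers in this section , reduce next-place prediction to a markov model over symbolic places . a minimal first-order sketch in python , assuming significant places have already been extracted from raw traces ( the toy sequence is invented for illustration ) :

from collections import Counter, defaultdict

def fit_markov(places):
    """count first-order transitions between consecutive symbolic places."""
    transitions = defaultdict(Counter)
    for current, nxt in zip(places, places[1:]):
        transitions[current][nxt] += 1
    return transitions

def predict_next(transitions, current):
    """return the most frequent successor of `current`, or None if unseen."""
    if current not in transitions:
        return None
    return transitions[current].most_common(1)[0][0]

# toy sequence of significant places ( an assumption for illustration )
trace = ["home", "work", "gym", "home", "work", "home", "work", "gym", "home"]
model = fit_markov(trace)
print(predict_next(model, "work"))  # 'gym' : work -> gym twice , work -> home once

higher-order variants condition on the last k places instead of one , at the cost of needing more data .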
story_separator_special_tag neural signals are everywhere just like mobile phones . we propose to use neural signals to control mobile phones for hands-free , silent and effortless human-mobile interaction . until recently , devices for detecting neural signals have been costly , bulky and fragile . we present the design , implementation and evaluation of the neurophone system , which allows neural signals to drive mobile phone applications on the iphone using cheap off-the-shelf wireless electroencephalography ( eeg ) headsets . we demonstrate a brain-controlled address book dialing app , which works on similar principles to p300-speller brain-computer interfaces : the phone flashes a sequence of photos of contacts from the address book and a p300 brain potential is elicited when the flashed photo matches the person whom the user wishes to dial . eeg signals from the headset are transmitted wirelessly to an iphone , which natively runs a lightweight classifier to discriminate p300 signals from noise . when a person 's contact-photo triggers a p300 , his/her phone number is automatically dialed . neurophone breaks new ground as a brain-mobile phone interface for ubiquitous pervasive computing . we discuss the challenges in making our initial prototype more practical , robust story_separator_special_tag technological advances in sensing , computation , storage , and communications will turn the near-ubiquitous mobile phone into a global mobile sensing device . people-centric sensing will help drive this trend by enabling a different way to sense , learn , visualize , and share information about ourselves , friends , communities , the way we live , and the world we live in . it juxtaposes the traditional view of mesh sensor networks with one in which people , carrying mobile devices , enable opportunistic sensing coverage . in the metrosense project 's vision of people-centric sensing , users are the key architectural system component , enabling a host of new application areas such as personal , public , and social sensing . story_separator_special_tag how do social networks affect the spread of behavior ? a popular hypothesis states that networks with many clustered ties and a high degree of separation will be less effective for behavioral diffusion than networks in which locally redundant ties are rewired to provide shortcuts across the social space . a competing hypothesis argues that when behaviors require social reinforcement , a network with more clustering may be more advantageous , even if the network as a whole has a larger diameter . i investigated the effects of network structure on diffusion by studying the spread of health behavior through artificially structured online communities . individual adoption was much more likely when participants received social reinforcement from multiple neighbors in the social network . the behavior spread farther and faster across clustered-lattice networks than across corresponding random networks . story_separator_special_tag context-aware computing is a mobile computing paradigm in which applications can discover and take advantage of contextual information ( such as user location , time of day , nearby people and devices , and user activity ) . since it was proposed about a decade ago , many researchers have studied this topic and built several context-aware applications to demonstrate the usefulness of this new technology . 
context-aware applications ( or the system infrastructure to support them ) , however , have never been widely available to everyday users . in this survey of research on context-aware systems and applications , we looked in depth at the types of context used and models of context information , at systems that support collecting and disseminating context , and at applications that adapt to the changing context . through this survey , it is clear that context-aware research is an old but rich area for research . the difficulties and possible solutions we outline serve as guidance for researchers hoping to make context-aware computing a reality . story_separator_special_tag even though human movement and mobility patterns have a high degree of freedom and variation , they also exhibit structural patterns due to geographic and social constraints . using cell phone location data , as well as data from two online location-based social networks , we aim to understand what basic laws govern human motion and dynamics . we find that humans experience a combination of periodic movement that is geographically limited and seemingly random jumps correlated with their social networks . short-ranged travel is periodic both spatially and temporally and not affected by the social network structure , while long-distance travel is more influenced by social network ties . we show that social relationships can explain about 10 % to 30 % of all human movement , while periodic behavior explains 50 % to 70 % . based on our findings , we develop a model of human mobility that combines periodic short-range movements with travel due to the social network structure . we show that our model reliably predicts the locations and dynamics of future human movement and gives an order of magnitude better performance than present models of human mobility . story_separator_special_tag automated and scalable approaches for understanding the semantics of places are critical to improving both existing and emerging mobile services . in this paper , we present crowdsense @ place ( csp ) , a framework that exploits a previously untapped resource -- opportunistically captured images and audio clips from smartphones -- to link place visits with place categories ( e.g. , store , restaurant ) . csp combines signals based on location and user trajectories ( using wifi/gps ) along with various visual and audio place `` hints '' mined from opportunistic sensor data . place hints include words spoken by people , text written on signs or objects recognized in the environment . we evaluate csp with a seven-week , 36-user experiment involving 1,241 places in five locations around the world . our results show that csp can classify places into a variety of categories with an overall accuracy of 69 % , outperforming currently available alternative solutions . story_separator_special_tag monitoring a user 's mobility during daily life is an essential requirement in providing advanced mobile services . while extensive attempts have been made to monitor user mobility , previous work has rarely addressed issues with predictions of temporal behavior in real deployment . in this paper , we introduce smartdc , a mobility prediction-based adaptive duty cycling scheme to provide contextual information about a user 's mobility : time-resolved places and paths .
unlike previous approaches that focused on minimizing energy consumption for tracking raw coordinates , we propose efficient techniques to maximize the accuracy of monitoring meaningful places within a given energy constraint . smartdc comprises an unsupervised mobility learner , a mobility predictor , and markov decision process-based adaptive duty cycling . smartdc estimates the regularity of individual mobility and predicts residence time at places to determine efficient sensing schedules . our experimental results show that smartdc consumes 81 percent less energy than periodic sensing schemes , and 87 percent less energy than a scheme employing context-aware sensing , yet it still correctly monitors 90 percent of a user 's location changes within a 160-second delay . story_separator_special_tag activity-aware systems have inspired novel user interfaces and new applications in smart environments , surveillance , emergency response , and military missions . systems that recognize human activities from body-worn sensors can further open the door to a world of healthcare applications , such as fitness monitoring , eldercare support , long-term preventive and chronic care , and cognitive assistance . wearable systems have the advantage of being with the user continuously . so , for example , a fitness application could use real-time activity information to encourage users to perform opportunistic activities . furthermore , the general public is more likely to accept such activity recognition systems because they are usually easy to turn off or remove . story_separator_special_tag knowledge of how people interact is important in many disciplines , e.g . organizational behavior , social network analysis , information diffusion and knowledge management applications . we are developing methods to automatically and unobtrusively learn the social network structures that arise within human groups based on wearable sensors . at present , researchers mainly have to rely on questionnaires , surveys or diaries in order to obtain data on physical interactions between people . in this paper , we show how sensor measurements from the sociometer can be used to build computational models of group interactions . we present results on how we can learn the structure of face-to-face interactions within groups , detect when members are in face-to-face proximity and also when they are having a conversation . story_separator_special_tag the prevalence of obesity has increased substantially over the past 30 years . we performed a quantitative analysis of the nature and extent of the person-to-person spread of obesity as a possible factor contributing to the obesity epidemic . we evaluated a densely interconnected social network of 12,067 people assessed repeatedly from 1971 to 2003 as part of the framingham heart study . the body-mass index was available for all subjects . we used longitudinal statistical models to examine whether weight gain in one person was associated with weight gain in his or her friends , siblings , spouse , and neighbors . discernible clusters of obese persons ( body-mass index [ the weight in kilograms divided by the square of the height in meters ] ≥ 30 ) were present in the network at all time points , and the clusters extended to three degrees of separation . these clusters did not appear to be solely attributable to the selective formation of social ties among obese persons .
a person 's chances of becoming obese increased by 57 % ( 95 % confidence interval [ ci ] , 6 to 123 ) if he story_separator_special_tag mobile applications are becoming increasingly ubiquitous and provide ever richer functionality on mobile devices . at the same time , such devices often enjoy strong connectivity with more powerful machines ranging from laptops and desktops to commercial clouds . this paper presents the design and implementation of clonecloud , a system that automatically transforms mobile applications to benefit from the cloud . the system is a flexible application partitioner and execution runtime that enables unmodified mobile applications running in an application-level virtual machine to seamlessly off-load part of their execution from mobile devices onto device clones operating in a computational cloud . clonecloud uses a combination of static analysis and dynamic profiling to partition applications automatically at a fine granularity while optimizing execution time and energy use for a target computation and communication environment . at runtime , the application partitioning is effected by migrating a thread from the mobile device at a chosen point to the clone in the cloud , executing there for the remainder of the partition , and re-integrating the migrated thread back to the mobile device . our evaluation shows that clonecloud can adapt application partitioning to different environments , and can help some applications story_separator_special_tag recent advances in small inexpensive sensors , low-power processing , and activity modeling have enabled applications that use on-body sensing and machine learning to infer people 's activities throughout everyday life . to address the growing rate of sedentary lifestyles , we have developed a system , ubifit garden , which uses these technologies and a personal , mobile display to encourage physical activity . we conducted a 3-week field trial in which 12 participants used the system and report findings focusing on their experiences with the sensing and activity inference . we discuss key implications for systems that use on-body sensing and activity inference to encourage physical activity . story_separator_special_tag the goal of the mavhome ( managing an intelligent versatile home ) project is to create a home that acts as an intelligent agent . in this paper we introduce the mavhome architecture . the role of prediction algorithms within the architecture is discussed , and a meta-predictor is presented which combines the strengths of multiple approaches to inhabitant action prediction . we demonstrate the effectiveness of these algorithms on smart home data . story_separator_special_tag personal mobile devices are increasingly equipped with the capability to sense the physical world ( through cameras , microphones , and accelerometers , for example ) and the , network world ( with wi-fi and bluetooth interfaces ) . such devices offer many new opportunities for cooperative sensing applications . for example , users ' mobile phones may contribute data to community-oriented information services , from city-wide pollution monitoring to enterprise-wide detection of unauthorized wi-fi access points . 
this people-centric mobile-sensing model introduces a new security challenge in the design of mobile systems : protecting the privacy of participants while allowing their devices to reliably contribute high-quality data to these large-scale applications . we describe anonysense , a privacy-aware architecture for realizing pervasive applications based on collaborative , opportunistic sensing by personal mobile devices . anonysense allows applications to submit sensing tasks that will be distributed across anonymous participating mobile devices , later receiving verified , yet anonymized , sensor data reports back from the field , thus providing the first secure implementation of this participatory sensing model . we describe our trust model , and the security properties that drove the design of the anonysense system . we evaluate our prototype story_separator_special_tag context is not simply the state of a predefined environment with a fixed set of interaction resources . it 's part of a process of interacting with an ever-changing environment composed of reconfigurable , migratory , distributed , and multiscale resources . story_separator_special_tag only intrusive and expensive ways of precisely expressing emotions have been proposed , and these are not likely to appear soon in everyday ubicomp environments . in this paper , we study to what extent we can identify the emotion a user is explicitly expressing through 2d and 3d gestures . indeed , users already often manipulate mobile devices with touch screens and accelerometers . we conducted a field study where we asked participants to explicitly express their emotion through gestures and to report their affective states . we contribute by ( 1 ) showing a high number of significant correlations between 3d motion descriptors of gestures and the arousal dimension ; ( 2 ) defining a space of affective gestures . we identify ( 3 ) groups of descriptors that structure the space and are related to arousal . finally , we provide ( 4 ) a preliminary model of arousal and identify ( 5 ) interesting patterns in particular classes of gestures . such results are useful for ubicomp application designers in order to envision the use of gestures as a cheap and non-intrusive affective modality . story_separator_special_tag this paper presents maui , a system that enables fine-grained energy-aware offload of mobile code to the infrastructure . previous approaches to these problems either relied heavily on programmer support to partition an application , or they were coarse-grained , requiring full process ( or full vm ) migration . maui uses the benefits of a managed code environment to offer the best of both worlds : it supports fine-grained code offload to maximize energy savings with minimal burden on the programmer . maui decides at run-time which methods should be remotely executed , driven by an optimization engine that achieves the best energy savings possible under the mobile device 's current connectivity constraints . in our evaluation , we show that maui enables : 1 ) a resource-intensive face recognition application that consumes an order of magnitude less energy , 2 ) a latency-sensitive arcade game application that doubles its refresh rate , and 3 ) a voice-based language translation application that bypasses the limitations of the smartphone environment by executing unsupported components remotely .
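maui frames offloading as an optimization : run a method locally , or pay the radio cost of shipping its state and wait for the remote result . the sketch below collapses that decision to a single-method break-even rule ; all of the energy , bandwidth and latency numbers are invented assumptions , not maui 's measurements :

def should_offload(local_energy_j: float, state_bytes: int,
                   bandwidth_bps: float, radio_power_w: float,
                   idle_power_w: float, remote_latency_s: float) -> bool:
    """offload when transfer-plus-wait energy undercuts local execution energy."""
    transfer_s = state_bytes * 8 / bandwidth_bps
    offload_energy_j = transfer_s * radio_power_w + remote_latency_s * idle_power_w
    return offload_energy_j < local_energy_j

# invented example : 5 j local cost , 200 kb of state over a 2 mbit/s link
print(should_offload(local_energy_j=5.0, state_bytes=200_000,
                     bandwidth_bps=2e6, radio_power_w=1.0,
                     idle_power_w=0.3, remote_latency_s=1.5))  # True

the rule makes the abstract 's point concrete : as bandwidth drops , transfer time and hence radio energy grow , and the same method flips back to local execution .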
story_separator_special_tag to realize the potential of opportunistic and participatory sensing using mobile smartphones , a key challenge is ensuring the ease of developing and deploying such applications , without the need for the application writer to reinvent the wheel each time . to this end , we present a platform for remote sensing using smartphones ( prism ) that balances the interconnected goals of generality , security , and scalability . prism allows application writers to package their applications as executable binaries , which offers efficiency and also the flexibility of reusing existing code modules . prism then pushes the application out automatically to an appropriate set of phones based on a specified set of predicates . this push model enables timely and scalable application deployment while still ensuring a good degree of privacy . to safely execute untrusted applications on the smartphones , while allowing them controlled access to sensitive sensor data , we augment standard software sandboxing with several prism-specific elements like resource metering and forced amnesia . we present three applications built on our implementation of prism on windows mobile : citizen journalist , party thermometer , and road bump monitor . these applications vary in the set of sensors story_separator_special_tag this paper uses gps loggers and interviews to measure the time taken to collect water in two kenyan informal settlements . the time devoted to water collection is widely believed to prevent women and girls , who do most of this work , from undertaking more creative tasks , including income generation and education . we studied collection times in two settlements to compare nyalenda in kisumu , where the utility has introduced a new piped water system , with kibera in nairobi , where no such improvement has been made . in addition to the primary results of quantitative collection times , we discuss the use of gps in this context and our finding that the two methods of measurement provide insights which neither would have provided alone . story_separator_special_tag previous studies have shown that human movement is predictable to a certain extent at different geographic scales . existing prediction techniques exploit only the past history of the person taken into consideration as input to the predictors . in this paper , we show that by means of multivariate nonlinear time series prediction techniques it is possible to increase the forecasting accuracy by considering movements of friends , people , or more in general entities , with correlated mobility patterns ( i.e. , characterised by high mutual information ) as inputs . finally , we evaluate the proposed techniques on the nokia mobile data challenge and cabspotting datasets . story_separator_special_tag this paper deals with an introduction to computing anticipatory systems , starting with robert rosen 's definition of anticipatory systems . firstly , the internalist and externalist aspects of anticipation will be explained from an intuitive point of view . secondly , the concepts of incursion and hyperincursion are proposed to model anticipatory systems . thirdly , a simple example of a computing anticipatory system will be simulated on a computer from an incursive harmonic oscillator . this oscillator includes an anticipatory model of itself in view of computing its successive states .
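the incursive harmonic oscillator mentioned in the last abstract is easy to state concretely : at each step the position is advanced first , and the velocity update then consumes that just-computed future position rather than the current one . a minimal sketch ( step size and frequency are arbitrary choices ) :

# incursive discretization of x'' = -w^2 x : the velocity update uses the
# freshly advanced position x(t+1) , an anticipated value , not x(t)
def incursive_oscillator(x0=1.0, v0=0.0, w=1.0, dt=0.1, steps=5):
    x, v = x0, v0
    trajectory = [(x, v)]
    for _ in range(steps):
        x = x + dt * v            # advance position with current velocity
        v = v - dt * w ** 2 * x   # advance velocity with the *future* position
        trajectory.append((x, v))
    return trajectory

for t, (x, v) in enumerate(incursive_oscillator()):
    print(f"t={t}: x={x:+.4f} v={v:+.4f}")

unlike the plain explicit scheme , which spirals outward , this incursive variant keeps the oscillation amplitude bounded , which is the numerical payoff of the anticipatory step .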
story_separator_special_tag poor air quality is a global health issue , causing serious problems like asthma , cancer , and heart disease around the world . earlier this decade , the world health organization estimated that three million people die each year from the effects of air pollution [ 6 ] . unfortunately , while variations in air quality are significant , today 's air quality monitors are very sparsely deployed . to address this visibility gap , the common sense project is developing participatory sensing systems that allow individuals to measure their personal exposure , groups to aggregate their members ' exposure , and activists to mobilize grassroots community action . story_separator_special_tag in 1977 dalenius articulated a desideratum for statistical databases : nothing about an individual should be learnable from the database that cannot be learned without access to the database . we give a general impossibility result showing that a formalization of dalenius ' goal along the lines of semantic security cannot be achieved . contrary to intuition , a variant of the result threatens the privacy even of someone not in the database . this state of affairs suggests a new measure , differential privacy , which , intuitively , captures the increased risk to one 's privacy incurred by participating in a database . the techniques developed in a sequence of papers [ 8 , 13 , 3 ] , culminating in those described in [ 12 ] , can achieve any desired level of privacy under this measure . in many cases , extremely accurate information about the database can be provided while simultaneously ensuring very high levels of privacy story_separator_special_tag this paper presents an analysis of continuous cellular tower data representing five months of movement from 215 randomly sampled subjects in a major urban city . we demonstrate the potential of existing community detection methodologies to identify salient locations based on the network generated by tower transitions . the tower groupings from these unsupervised clustering techniques are subsequently validated using data from bluetooth beacons placed in the homes of the subjects . we then use these inferred locations as states within several dynamic bayesian networks to predict each subject 's subsequent movements with over 90 % accuracy . we also introduce the x-factor model , a dbn with a latent variable corresponding to abnormal behavior . we conclude with a description of extensions for this model , such as incorporating additional contextual and temporal variables already being logged by the story_separator_special_tag we introduce a system for sensing complex social systems with data collected from 100 mobile phones over the course of 9 months . we demonstrate the ability to use standard bluetooth-enabled mobile telephones to measure information access and use in different contexts , recognize social patterns in daily user activity , infer relationships , identify socially significant locations , and model organizational rhythms . story_separator_special_tag longitudinal behavioral data generally contains a significant amount of structure . in this work , we identify the structure inherent in daily behavior with models that can accurately analyze , predict , and cluster multimodal data from individuals and communities within the social network of a population . we represent this behavioral structure by the principal components of the complete behavioral dataset , a set of characteristic vectors we have termed eigenbehaviors .
in our model , an individual 's behavior over a specific day can be approximated by a weighted sum of his or her primary eigenbehaviors . when these weights are calculated halfway through a day , they can be used to predict the day 's remaining behaviors with 79 % accuracy for our test subjects . additionally , we demonstrate the potential for this dimensionality reduction technique to infer community affiliations within the subjects ' social network by clustering individuals into a behavior space spanned by a set of their aggregate eigenbehaviors . these behavior spaces make it possible to determine the behavioral similarity between both individuals and groups , enabling 96 % classification accuracy of community affiliations within the population-level social network . additionally , the distance story_separator_special_tag eston , roger g. , ann v. rowlands , and david k. ingledew . validity of heart rate , pedometry , and accelerometry for predicting the energy cost of children 's activities . j. appl. physiol. 84 ( 1 ) : 362-371 . story_separator_special_tag we present the work that allowed us to win the next-place prediction task of the nokia mobile data challenge . using data collected from the smartphones of 80 users , we explore the characteristics of their mobility traces . we then develop three families of predictors , including tailored models and generic algorithms , to predict , based on instantaneous information only , the next place a user will visit . these predictors are enhanced with aging techniques that allow them to adapt quickly to the users ' changes of habit . finally , we devise various strategies to blend predictors together and take advantage of their diversity , leading to relative improvements of up to 4 % . story_separator_special_tag the mobile phone has transformed life in the city . using them , individuals can both receive information about their surroundings through location-based services and contribute to the city as a system . they can participate by sharing location , text , photos , or video about the conditions of the city . this article explores the literature surrounding mobile phone technology in urban planning and city life . specifically , it explores the potential of mobile phones in sensing , documenting , and exploring the city . this article draws on literature from a wide variety of fields to create an overview of the issues surrounding mobile phones in the city . it reviews what we know and what has been speculated about the influence of mobile phones and similar devices on urban life . there is evidence that they alter our sense of individuality , our mobility , our interactions with others , our capacity to participate in and document public life , and our senses of privacy and publicness . the implications and meanings fo . story_separator_special_tag we investigate whether opportune moments to deliver notifications surface at the endings of episodes of mobile interaction ( making voice calls or receiving sms ) , based on the assumption that the endings collocate with naturally occurring breakpoints in the user 's primary task . testing this with a naturalistic experiment , we find that interruptions ( notifications ) are attended to and dealt with significantly more quickly after a user has finished an episode of mobile interaction compared to a random baseline condition , supporting the potential utility of this notification strategy .
we also find that the workload and situational appropriateness of the secondary interruption task significantly affect the subsequent delay and completion rate of the tasks . in situ self-reports and interviews reveal complexities in the subjective experience of the interruption , which suggest that a more nuanced classification of the particular call or sms and its relationship to the primary task ( s ) would be desirable . story_separator_special_tag a person seeking another person 's attention is normally able to quickly assess how interruptible the other person currently is . such assessments allow behavior that we consider natural , socially appropriate , or simply polite . this is in sharp contrast to current computer and communication systems , which are largely unaware of the social situations surrounding their usage and the impact that their actions have on these situations . if systems could model human interruptibility , they could use this information to negotiate interruptions at appropriate times , thus improving human computer interaction . this article presents a series of studies that quantitatively demonstrate that simple sensors can support the construction of models that estimate human interruptibility as well as people do . these models can be constructed without using complex sensors , such as vision-based techniques , and therefore their use in everyday office environments is both practical and affordable . although currently based on a demographically limited sample , our results indicate a substantial opportunity for future research to validate these results over larger groups of office workers . our results also motivate the development of systems that use these models to negotiate interruptions at socially appropriate times story_separator_special_tag as mobile devices have become powerful sensor platforms , new applications have emerged which continuously stream mobile user context ( location , activities , etc. ) . however , energy is a limited resource on battery-equipped mobile devices . especially frequent transmissions of context updates over energy-expensive wireless channels drain the battery of mobile devices in an uncontrolled manner . it is a fundamental algorithmic challenge to design protocols such that users can control the energy consumption on mobile devices while , at the same time , optimizing the quality of mobile applications . to address this trade-off in the area of context update protocols , we propose a novel protocol that maximizes the context accuracy perceived by a remote consumer while guaranteeing that the consumed energy stays under a given limit . our update protocol exploits predictions about a user 's future behaviour to give priority to the most effective context updates . in our evaluation , we apply our predictive update protocol to a real-world trace of user context and show that the context accuracy is significantly increased compared to an update protocol which operates without predictions under the same energy budget . story_separator_special_tag the ubiquitous presence of cell phones in emerging economies has brought about a wide range of cell phone-based services for low-income groups . oftentimes , the success of such technologies depends highly on their adaptation to the needs and habits of each social group .
in an attempt to understand how cell phones are being used by citizens in an emerging economy , we present a large-scale study to analyze the relationship between specific socio-economic factors and the way people use cell phones in latin america . we propose a novel analytical approach that combines large-scale datasets of cell phone records with countrywide census data to reveal findings at a national level . our main results show correlations between socio-economic levels and , among others , social network and mobility patterns . we also provide analytical models to accurately approximate census variables from cell phone records with an r2 of 0.82 . story_separator_special_tag a new class of visuomotor neuron has recently been discovered in the monkey 's premotor cortex : mirror neurons . these neurons respond both when a particular action is performed by the recorded monkey and when the same action , performed by another individual , is observed . mirror neurons appear to form a cortical system matching observation and execution of goal-related motor actions . experimental evidence suggests that a similar matching system also exists in humans . what might be the functional role of this matching system ? one possible function is to enable an organism to detect certain mental states of observed conspecifics . this function might be part of , or a precursor to , a more general mind-reading ability . two different accounts of mindreading have been suggested . according to theory theory , mental states are represented as inferred posits of a naive theory . according to simulation theory , other people 's mental states are represented by adopting their perspective : by tracking or matching their states with resonant states of one 's own . the activity of mirror neurons , and the fact that observers undergo motor facilitation in the same muscular groups story_separator_special_tag in this paper we demonstrate how smart phone sensors , specifically inertial sensors and gps traces , can be used as an objective `` measurement device '' for aiding psychiatric diagnosis . in a trial with 12 bipolar disorder patients conducted over a total ( summed over all patients ) of over 1000 days ( on average 12 weeks per patient ) , we have achieved state change detection with a precision/recall of 96 % / 94 % and state recognition accuracy of 80 % . the paper describes the data collection , which was conducted as a medical trial in a real-life everyday environment in a rural area , outlines the recognition methods , and discusses the results . story_separator_special_tag during the past decade there has been an explosion in computation and information technology . with it have come vast amounts of data in a variety of fields such as medicine , biology , finance , and marketing . the challenge of understanding these data has led to the development of new tools in the field of statistics , and spawned new areas such as data mining , machine learning , and bioinformatics . many of these tools have common underpinnings but are often expressed with different terminology . this book describes the important ideas in these areas in a common conceptual framework . while the approach is statistical , the emphasis is on concepts rather than mathematics . many examples are given , with a liberal use of color graphics . it is a valuable resource for statisticians and anyone interested in data mining in science or industry .
the book 's coverage is broad , from supervised learning ( prediction ) to unsupervised learning . the many topics include neural networks , support vector machines , classification trees and boosting -- the first comprehensive treatment of this topic in any book . this major new edition features many story_separator_special_tag a new technique for the analysis of speech , the perceptual linear predictive ( plp ) technique , is presented and examined . this technique uses three concepts from the psychophysics of hearing to derive an estimate of the auditory spectrum : ( 1 ) the critical-band spectral resolution , ( 2 ) the equal-loudness curve , and ( 3 ) the intensity-loudness power law . the auditory spectrum is then approximated by an autoregressive all-pole model . a 5th-order all-pole model is effective in suppressing speaker-dependent details of the auditory spectrum . in comparison with conventional linear predictive ( lp ) analysis , plp analysis is more consistent with human hearing . the effective second formant f2 ' and the 3.5-bark spectral-peak integration theories of vowel perception are well accounted for . plp analysis is computationally efficient and yields a low-dimensional representation of speech . these properties are found to be useful in speaker-independent automatic-speech recognition . story_separator_special_tag location-enhanced mobile devices are becoming common , but applications built for these devices find themselves suffering a mismatch between the latitude and longitude that location sensors provide and the colloquial place label that applications need . conveying my location to my spouse , for example as ( 48.13641n , 11.57471e ) , is less informative than saying at home . we introduce an algorithm called beaconprint that uses wifi and gsm radio fingerprints collected by someone 's personal mobile device to automatically learn the places they go and then detect when they return to those places . beaconprint does not automatically assign names or semantics to places . rather , it provides the technological foundation to support this task . we compare beaconprint to three existing algorithms using month-long trace logs from each of three people . algorithmic results are supplemented with a survey study about the places people go . beaconprint is over 90 % accurate in learning and recognizing places . additionally , it improves accuracy in recognizing places visited infrequently or for short durations , a category where previous approaches have fared poorly . beaconprint demonstrates 63 % accuracy for places someone returns to only once or visits story_separator_special_tag the potential for sensor-enabled mobile devices to proactively present information when and where users need it ranks among the greatest promises of ubiquitous computing . unfortunately , mobile phones , pdas , and other computing devices that compete for the user 's attention can contribute to interruption irritability and feelings of information overload . designers of mobile computing interfaces , therefore , require strategies for minimizing the perceived interruption burden of proactively delivered messages . in this work , a context-aware mobile computing device was developed that automatically detects postural and ambulatory activity transitions in real time using wireless accelerometers . this device was used to experimentally measure the receptivity to interruptions delivered at activity transitions relative to those delivered at random times .
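a hedged sketch of the underlying mechanism , detecting an activity transition in a stream of classifier labels and treating it as a candidate delivery moment , could look as follows in python ( the window size , labels , and majority rule are invented for illustration ) :

# hypothetical labels from an activity classifier , one per second
stream = ["sit"] * 30 + ["walk"] * 30 + ["sit"] * 30

def transition_moments(labels, window=5):
    # flag a transition when the majority label of the current
    # window differs from that of the previous window
    moments = []
    prev = None
    for i in range(window, len(labels), window):
        win = labels[i - window:i]
        major = max(set(win), key=win.count)
        if prev is not None and major != prev:
            moments.append(i)   # candidate instant to deliver a message
        prev = major
    return moments

print(transition_moments(stream))   # [35, 65] : the two posture changes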
messages delivered at activity transitions were found to be better received , thereby suggesting a viable strategy for context-aware message delivery in sensor-enabled mobile computing devices . story_separator_special_tag we present methods for inferring the cost of interrupting users based on multiple streams of events , including information generated by interactions with computing devices , visual and acoustical analyses , and data drawn from online calendars . following a review of prior work on techniques for deliberating about the cost of interruption associated with notifications , we introduce methods for learning models from data that can be used to compute the expected cost of interruption for a user . we describe the interruption workbench , a set of event-capture and modeling tools . finally , we review experiments that characterize the accuracy of the models for predicting interruption cost and discuss research directions . story_separator_special_tag we investigate opportunistic routing , centering on the recommendation of ideal diversions on trips to a primary destination when an unplanned waypoint , such as a rest stop or a refueling station , is desired . in the general case , an automated routing assistant may not know the driver 's final destination and may need to consider probabilities over destinations in identifying the ideal waypoint along with the revised route that includes the waypoint . we consider general principles of opportunistic routing and present the results of several studies with a corpus of real-world trips . then , we describe how we can compute the expected value of asking a user about the primary destination so as to remove uncertainty about the goal , and show how this measure can guide an automated system 's engagements with users when making recommendations for navigation and analogous settings in ubiquitous computing . story_separator_special_tag cartel is a mobile sensor computing system designed to collect , process , deliver , and visualize data from sensors located on mobile units such as automobiles . a cartel node is a mobile embedded computer coupled to a set of sensors . each node gathers and processes sensor readings locally before delivering them to a central portal , where the data is stored in a database for further analysis and visualization . in the automotive context , a variety of on-board and external sensors collect data as users drive . cartel provides a simple query-oriented programming interface , handles large amounts of heterogeneous data from sensors , and handles intermittent and variable network connectivity . cartel nodes rely primarily on opportunistic wireless ( e.g. , wi-fi , bluetooth ) connectivity -- to the internet , or to `` data mules '' such as other cartel nodes , mobile phone flash memories , or usb keys -- to communicate with the portal . cartel applications run on the portal , using a delay-tolerant continuous query processor , icedb , to specify how the mobile nodes should summarize , filter , and dynamically prioritize data .
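as a purely illustrative sketch of the delay-tolerant , prioritized delivery idea behind icedb ( not the actual cartel api ; the class and field names are invented ) , a node-side queue could buffer readings while disconnected and flush the highest-priority ones first :

import heapq

class DelayTolerantQueue:
    # buffer sensor readings while disconnected ; deliver the
    # highest priority first when a link to the portal comes up
    def __init__(self):
        self._heap = []
        self._seq = 0
    def put(self, reading, priority):
        # negate priority for a max-heap ; seq breaks ties
        heapq.heappush(self._heap, (-priority, self._seq, reading))
        self._seq += 1
    def flush(self, link_up, budget):
        sent = []
        while link_up and budget > 0 and self._heap:
            _, _, reading = heapq.heappop(self._heap)
            sent.append(reading)
            budget -= 1
        return sent

q = DelayTolerantQueue()
q.put({"road": "i80", "speed": 12}, priority=5)
q.put({"road": "i80", "speed": 55}, priority=1)
print(q.flush(link_up=True, budget=1))   # high-priority reading first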
the portal and the mobile nodes use a delay-tolerant network story_separator_special_tag part one : conceptual and methodological issues the information society as an object of study - frederick williams part two : the information economy the growth of the information sector - heather e hudson and louis leung the economy of the new texas - james smith part three : promoting change the `` technopolis '' concept - raymond w smilor , george kozmetsky and david v gibson the coming of mcc - david v gibson and everett m rogers urban telecommunication investment - sharon strover science and technology policy - brian muller the governor 's science and technology council - larry d browning part four : attitudes toward change modeling change from survey data - james dyer and don haynes gauging public attitudes toward science and technology - james dyer , frederick williams and don haynes part five : media , information technology , and change texas according to the new york times - frederick williams and denise fynmore relations of occupations to uses of information technologies - stephen d reese predicting media uses - pamela j shoemaker computers in texas schools - nolan estes and victoria williams key issues for education in the information age advertising as an index story_separator_special_tag people spend most of their time at a few key locations , such as home and work . being able to identify how the movements of people cluster around these `` important places '' is crucial for a range of technology and policy decisions in areas such as telecommunications and transportation infrastructure deployment . in this paper , we propose new techniques based on clustering and regression for analyzing anonymized cellular network data to identify generally important locations , and to discern semantically meaningful locations such as home and work . starting with temporally sparse and spatially coarse location information , we propose a new algorithm to identify important locations . we test this algorithm on arbitrary cellphone users , including those with low call rates , and find that we are within 3 miles of ground truth for 88 % of volunteer users . further , after locating home and work , we achieve commute distance estimates that are within 1 mile of equivalent estimates derived from government census data . finally , we perform carbon footprint analyses on hundreds of thousands of anonymous users as an example of how our data and algorithms can form an accurate and story_separator_special_tag proactively providing services to mobile individuals is essential for emerging ubiquitous applications . the major challenge in providing users with proactive services lies in continuously monitoring their contexts based on numerous sensors . context monitoring with rich sensors imposes heavy workloads on mobile devices with limited computing and battery power . we present seemon , a scalable and energy-efficient context monitoring framework for sensor-rich , resource-limited mobile environments . running on a personal mobile device , seemon effectively performs context monitoring involving numerous sensors and applications . on top of seemon , multiple applications on the device can proactively understand users ' contexts and react appropriately . this paper proposes a novel context monitoring approach that provides efficient processing and sensor control mechanisms . we implement and test a prototype system on two mobile devices : a umpc and a wearable device with a diverse set of sensors .
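the sensor control idea at the core of seemon can be hedged into a tiny python sketch ( the query names and sensor sets are invented , not seemon 's api ) : power only the sensors that currently registered context queries actually need :

# toy registration of context queries and the sensors each needs
QUERY_SENSORS = {
    "user_is_running": {"accelerometer"},
    "user_at_gym": {"gps", "wifi"},
}

def sensors_to_activate(active_queries):
    # union of the sensor sets required by the active queries ;
    # everything else can stay powered down
    needed = set()
    for q in active_queries:
        needed |= QUERY_SENSORS[q]
    return needed

print(sensors_to_activate(["user_is_running"]))                  # accelerometer only
print(sensors_to_activate(["user_is_running", "user_at_gym"]))   # all three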
example applications are also developed based on the implemented system . experimental results show that seemon achieves a high level of scalability and energy efficiency . story_separator_special_tag a 2001 ibm manifesto observed that a looming software complexity crisis -- caused by applications and environments that number into the tens of millions of lines of code -- threatened to halt progress in computing . the manifesto noted the almost impossible difficulty of managing current and planned computing systems , which require integrating several heterogeneous environments into corporate-wide computing systems that extend into the internet . autonomic computing , perhaps the most attractive approach to solving this problem , creates systems that can manage themselves when given high-level objectives from administrators . systems manage themselves according to an administrator 's goals . new components integrate as effortlessly as a new cell establishes itself in the human body . these ideas are not science fiction , but elements of the grand challenge to create self-managing computing systems . story_separator_special_tag continuously understanding a user 's location context in colloquial terms and the paths that connect the locations unlocks many opportunities for emerging applications . while extensive research effort has been made on efficiently tracking a user 's raw coordinates , few attempts have been made to efficiently provide everyday contextual information about these locations as places and paths . we introduce sensloc , a practical location service to provide such contextual information , abstracting location as place visits and path travels from sensor signals . sensloc comprises a robust place detection algorithm , a sensitive movement detector , and an on-demand path tracker . based on a user 's mobility , sensloc proactively controls the active cycle of a gps receiver , a wifi scanner , and an accelerometer . pilot studies show that sensloc can correctly detect 94 % of the place visits , track 95 % of the total travel distance , and still consume only 13 % of the energy of algorithms that periodically collect coordinates to provide the same information . story_separator_special_tag mobile data usage over cellular networks has been dramatically increasing over the past years . wi-fi based wireless networks offer a high-bandwidth alternative for offloading such data traffic . however , intermittent connectivity , and battery power drain in mobile devices , inhibit always-on connectivity even in areas with good wi-fi coverage . this paper presents wifisense , a system that employs user mobility information retrieved from low-power sensors ( e.g. , accelerometer ) in smartphones , and further includes adaptive wi-fi sensing algorithms , to conserve battery power while improving wi-fi usage . we implement the proposed system in android-based smartphones and evaluate the implementation in both indoor and outdoor wi-fi networks . our evaluation results show that wifisense saves energy consumption for scans by up to 79 % and achieves a considerable increase in wi-fi usage for various scenarios . story_separator_special_tag lifestyle modification is a key facet of the prevention and management of chronic diseases . mobile devices that people already carry provide a promising platform for facilitating these lifestyle changes . this paper describes key lessons learned from the development and evaluation of two mobile systems for encouraging physical activity .
we argue that by supporting persistent cognitive activation of health goals , encouraging an extensive range of relevant healthy behaviors , focusing on long-term patterns of activity , and facilitating social support as an optional but not primary motivator , systems can be developed that effectively motivate behavior change and provide support when and where people make decisions that affect their health . story_separator_special_tag context-aware computing describes the situation where a wearable/mobile computer is aware of its user 's state and surroundings and modifies its behavior based on this information . we designed , implemented , and evaluated a wearable system which can learn context-dependent personal preferences by identifying individual user states and observing how the user interacts with the system in these states . this learning occurs online and does not require external supervision . the system relies on techniques from machine learning and statistical analysis . a case study integrates the approach in a context-aware mobile phone . the results indicate that the method is able to create a meaningful user context model while only requiring data from comfortable wearable sensor devices . story_separator_special_tag a key challenge for mobile health is to develop new technology that can assist individuals in maintaining a healthy lifestyle by keeping track of their everyday behaviors . smartphones embedded with a wide variety of sensors are enabling a new generation of personal health applications that can actively monitor , model and promote wellbeing . automated wellbeing tracking systems available so far have focused on physical fitness and sleep and often require external non-phone based sensors . in this work , we take a step towards a more comprehensive smartphone based system that can track activities that impact physical , social , and mental wellbeing , namely sleep , physical activity , and social interactions , and provides intelligent feedback to promote better health . we present the design , implementation and evaluation of bewell , an automated wellbeing app for android smartphones , and demonstrate its feasibility in monitoring multi-dimensional wellbeing . by providing a more complete picture of health , bewell has the potential to empower individuals to improve their overall wellbeing and identify any early signs of decline . story_separator_special_tag mobile phones or smartphones are rapidly becoming the central computer and communication device in people 's lives . application delivery channels such as the apple appstore are transforming mobile phones into app phones , capable of downloading a myriad of applications in an instant . importantly , today 's smartphones are programmable and come with a growing set of cheap powerful embedded sensors , such as an accelerometer , digital compass , gyroscope , gps , microphone , and camera , which are enabling the emergence of personal , group , and community-scale sensing applications . we believe that sensor-equipped mobile phones will revolutionize many sectors of our economy , including business , healthcare , social networks , environmental monitoring , and transportation . in this article we survey existing mobile phone sensing algorithms , applications , and systems . we discuss the emerging sensing paradigms , and formulate an architectural framework for discussing a number of the open issues and challenges emerging in the new area of mobile phone sensing research .
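as a toy illustration of the multi-dimensional wellbeing tracking described above for bewell ( the scoring rule , weights , and targets here are invented , not the published model ) , a composite daily score could be sketched in python as :

# toy composite wellbeing score across sleep , activity , and
# social interaction ; target values are purely illustrative
TARGETS = {"sleep_h": 8.0, "active_min": 60.0, "conversations": 6.0}

def wellbeing(day):
    # each dimension scored as a fraction of its target , capped at 1
    scores = [min(day[k] / TARGETS[k], 1.0) for k in TARGETS]
    return sum(scores) / len(scores)

print(round(wellbeing({"sleep_h": 7.2, "active_min": 30, "conversations": 4}), 2))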
story_separator_special_tag sensor-enabled smartphones are opening a new frontier in the development of mobile sensing applications . the recognition of human activities and context from sensor-data using classification models underpins these emerging applications . however , conventional approaches to training classifiers struggle to cope with the diverse user populations routinely found in large-scale popular mobile applications . differences between users ( e.g. , age , sex , behavioral patterns , lifestyle ) confuse classifiers , which assume everyone is the same . to address this , we propose community similarity networks ( csn ) , which incorporates inter-person similarity measurements into the classifier training process . under csn , every user has a unique classifier that is tuned to their own characteristics . csn exploits crowd-sourced sensor-data to personalize classifiers with data contributed from other similar users . this process is guided by similarity networks that measure different dimensions of inter-person similarity . our experiments show csn outperforms existing approaches to classifier training under the presence of population diversity . story_separator_special_tag broadly conceived as computational models of cognition and tools for modeling complex adaptive systems , later extended for use in adaptive robotics , and today also applied to effective classification and data-mining : what has happened to learning classifier systems in the last decade ? this paper addresses this question by examining the current state of learning classifier system research . story_separator_special_tag interventions to shift the behaviour of consumers using unsustainable wildlife products are key to threatened species conservation . whether these interventions are effective is largely unknown due to a dearth of detailed evaluations . we previously conducted a country-level online behaviour change intervention targeting consumers of the critically endangered saiga antelope ( saiga tatarica ) horn in singapore . to evaluate intervention impact , we carried out in-person consumer surveys with > 2,000 individuals pre- and post-intervention ( 2017 and 2019 ) , and 93 in-person post-intervention surveys with traditional chinese medicine ( tcm ) shopkeepers ( 2019 ) . the proportion of self-reported high-usage saiga horn consumers in the target audience ( chinese singaporean women aged 35 - 59 ) did not change significantly from pre- to post-intervention ( 24.4 % versus 22.6 % ) . however , post-intervention the target audience was significantly more likely than the non-target audience to accurately recall the intervention message and to report a decrease in saiga horn usage ( 4 % versus 1 % reported a behaviour change ) . within the target audience , high-usage consumers were significantly more likely than lower-usage consumers to recall the message and report a behaviour change story_separator_special_tag a key facet of urban design , planning , and monitoring is measuring communities ' well-being . historically , researchers have established a link between well-being and visibility of city neighbourhoods and have measured visibility via quantitative studies with willing participants , a process that is invariably manual and cumbersome . however , the influx of the world 's population into urban centres now calls for methods that can easily be implemented , scaled , and analysed .
we propose that one such method is offered by pervasive technology : we test whether urban mobility -- as measured by public transport fare collection sensors -- is a viable proxy for the visibility of a city 's communities . we validate this hypothesis by examining the correlation between the urban flow of london 's public transport and census-based indices of well-being for london 's census areas . we find that not only are the two correlated , but a number of insights into the flow between areas of varying social standing can be uncovered with readily available transport data . for example , we find that deprived areas tend to preferentially attract people living in other deprived areas , suggesting a segregation story_separator_special_tag this paper presents an overview of the mobile data challenge ( mdc ) , a large-scale research initiative aimed at generating innovations around smartphone-based research , as well as community-based evaluation of related mobile data analysis methodologies . first we review the lausanne data collection campaign ( ldcc ) , an initiative to collect a unique , longitudinal smartphone data set for the basis of the mdc . then , we introduce the open and dedicated tracks of the mdc ; describe the specific data sets used in each of them ; and discuss some of the key aspects in order to generate privacy-respecting , challenging , and scientifically relevant mobile data resources for wider use of the research community . the concluding remarks will summarize the paper . story_separator_special_tag in this paper , we propose sociophone , a novel initiative to build a mobile platform for face-to-face interaction monitoring . face-to-face interaction , especially conversation , is a fundamental part of everyday life . interaction-aware applications aimed at facilitating group conversations have been proposed , but have not proliferated yet . useful contexts to capture and support face-to-face interactions need to be explored more deeply . more importantly , recognizing delicate conversational contexts with commodity mobile devices requires solving a number of technical challenges . as a first step to address such challenges , we identify useful meta-linguistic contexts of conversation , such as turn-takings , prosodic features , a dominant participant , and pace . these serve as cornerstones for building a variety of interaction-aware applications . sociophone abstracts such useful meta-linguistic contexts as a set of intuitive apis . its runtime efficiently monitors registered contexts during in-progress conversations and notifies applications on-the-fly . importantly , we have noticed that online turn monitoring is the basic building block for extracting diverse meta-linguistic contexts , and have devised a novel volume-topography-based method . we show the usefulness of sociophone with several interesting applications : sociotherapist , sociodigest , and
in this paper we present a hybrid approach to recognizing activities , which combines boosting to discriminatively select useful features and learn an ensemble of static classifiers to recognize different activities , with hidden markov models ( hmms ) to capture the temporal regularities and smoothness of activities . we tested the activity recognition system using over 12 hours of wearable-sensor data collected by volunteers in natural unconstrained environments . the models succeeded in identifying a small set of maximally informative features , and were able to identify ten different human activities with an accuracy of 95 % . story_separator_special_tag learning patterns of human behavior from sensor data is extremely important for high-level activity inference . we show how to extract and label a person 's activities and significant places from traces of gps data . in contrast to existing techniques , our approach simultaneously detects and classifies the significant locations of a person and takes the high-level context into account . our system uses relational markov networks to represent the hierarchical activity model that encodes the complex relations among gps readings , activities and significant places . we apply fft-based message passing to perform efficient summation over large numbers of nodes in the networks . we present experiments that show significant improvements over existing techniques . story_separator_special_tag learning patterns of human behavior from sensor data is extremely important for high-level activity inference . this paper describes how to extract a person 's activities and significant places from traces of gps data . the system uses hierarchically structured conditional random fields to generate a consistent model of a person 's activities and places . in contrast to existing techniques , this approach takes the high-level context into account in order to detect the significant places of a person . experiments show significant improvements over existing techniques . furthermore , they indicate that the proposed system is able to robustly estimate a person 's activities using a model that is trained from data collected by other persons . story_separator_special_tag to accomplish frequent , simple tasks with high efficiency , it is necessary to leverage low-power , microcontroller-like processors that are increasingly available on mobile systems . however , existing solutions require developers to directly program the low-power processors and carefully manage inter-processor communication . we present reflex , a suite of compiler and runtime techniques that significantly lower the barrier for developers to leverage such low-power processors . the heart of reflex is a software distributed shared memory ( dsm ) that enables shared memory objects with release consistency among code running on loosely coupled processors . in order to achieve high energy efficiency without sacrificing much performance , the reflex dsm leverages ( i ) extreme architectural asymmetry between low-power processors and powerful central processors , ( ii ) aggressive compile-time optimization , and ( iii ) a minimalist runtime that supports efficient message passing and event-driven execution . we report a complete realization of reflex that runs on a ti omap4430-based development platform as well as on a custom tri-processor mobile platform .
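release consistency is the one notion above that benefits from a concrete toy ; the following python sketch ( not the reflex api ; all names invented ) buffers local writes and publishes them to other readers only on release :

import threading

class ReleaseConsistentObject:
    # toy release consistency : local writes are buffered and only
    # become visible to other processors on release()
    def __init__(self):
        self._published = {}
        self._local = {}
        self._lock = threading.Lock()
    def write(self, key, value):
        self._local[key] = value          # visible locally only
    def release(self):
        with self._lock:                  # publish buffered writes
            self._published.update(self._local)
            self._local.clear()
    def acquire_read(self, key):
        with self._lock:                  # see last released state
            return self._published.get(key)

obj = ReleaseConsistentObject()
obj.write("steps", 42)
print(obj.acquire_read("steps"))  # None : not yet released
obj.release()
print(obj.acquire_read("steps"))  # 42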
using smartphone sensing applications reported in recent literature , we show that reflex supports a programming style very close to contemporary smartphone programming . story_separator_special_tag in his now well-known stanford university commencement address , delivered on 12 june 2005 , steve jobs , then ceo of apple computer and pixar animation studios , encouraged the graduating class to be innovative by pursuing what you love and staying foolish . the speech has been cited worldwide as it epitomizes the culture of the knowledge economy , whereby what are deemed important for innovation are not just large r & d labs but also a culture of innovation and the ability of key players to change the rules of the game . by emphasizing the foolish part of innovation , jobs highlights the fact that underlying the success of a company like apple at the heart of the silicon valley revolution is not ( just ) the experience and technical expertise of its staff , but ( also ) their ability to be a bit crazy , take risks and give design as much importance as hardcore technology . the fact that jobs dropped out of school , took calligraphy classes and continued to dress all his life like a college student in sneakers is all symbolic of his own style of staying young and foolish . story_separator_special_tag location is a fundamental service for mobile computing . typical gps receivers , although widely available , consume too much energy to be useful for many applications . observing that in many sensing scenarios the location information can be post-processed when the data is uploaded to a server , we design a cloud-offloaded gps ( co-gps ) solution that allows a sensing device to aggressively duty-cycle its gps receiver and log just enough raw gps signal for post-processing . leveraging publicly available information such as gnss satellite ephemeris and an earth elevation database , a cloud service can derive good quality gps locations from a few milliseconds of raw data . using our design of a portable sensing device platform called cleo , we evaluate the accuracy and efficiency of the solution . compared to more than 30 seconds of heavy signal processing on standalone gps receivers , we can achieve three orders of magnitude lower energy consumption per location tagging . story_separator_special_tag stress can have long-term adverse effects on individuals ' physical and mental well-being . changes in the speech production process are among the many physiological changes that happen during stress . microphones , embedded in mobile phones and carried ubiquitously by people , provide the opportunity to continuously and non-invasively monitor stress in real-life situations . we propose stresssense for unobtrusively recognizing stress from human voice using smartphones . we investigate methods for adapting a one-size-fits-all stress model to individual speakers and scenarios . we demonstrate that the stresssense classifier can robustly identify stress across multiple individuals in diverse acoustic environments : using model adaptation , stresssense achieves 81 % and 76 % accuracy for indoor and outdoor environments , respectively . we show that stresssense can be implemented on commodity android phones and run in real-time . to the best of our knowledge , stresssense represents the first system to consider voice-based stress detection and model adaptation in diverse real-life conversational situations using smartphones .
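the abstract does not spell out the adaptation scheme , so as a purely illustrative python sketch , one could nudge a one-size-fits-all decision threshold toward a particular speaker using a few labeled calibration clips ( the feature , threshold , and mixing weight are invented ) :

import numpy as np

# toy " model adaptation " : start from a universal threshold on a
# voice feature ( e.g. pitch variability ) and blend it toward a
# per-speaker midpoint estimated from labeled calibration clips
universal_threshold = 0.5

def adapt(threshold, calib_features, calib_labels, alpha=0.3):
    stressed = calib_features[calib_labels == 1]
    neutral = calib_features[calib_labels == 0]
    personal = (stressed.mean() + neutral.mean()) / 2
    return (1 - alpha) * threshold + alpha * personal

feats = np.array([0.62, 0.70, 0.35, 0.30])
labels = np.array([1, 1, 0, 0])
print(adapt(universal_threshold, feats, labels))  # shifted threshold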
story_separator_special_tag top end mobile phones include a number of specialized ( e.g. , accelerometer , compass , gps ) and general purpose sensors ( e.g. , microphone , camera ) that enable new people-centric sensing applications . perhaps the most ubiquitous and unexploited sensor on mobile phones is the microphone - a powerful sensor that is capable of making sophisticated inferences about human activity , location , and social events from sound . in this paper , we exploit this untapped sensor not in the context of human communications but as an enabler of new sensing applications . we propose soundsense , a scalable framework for modeling sound events on mobile phones . soundsense is implemented on the apple iphone and represents the first general purpose sound sensing system specifically designed to work on resource limited phones . the architecture and algorithms are designed for scalability and soundsense uses a combination of supervised and unsupervised learning techniques to classify both general sound types ( e.g. , music , voice ) and discover novel sound events specific to individual users . the system runs solely on the mobile phone with no back-end interactions . through implementation and evaluation of two proof of story_separator_special_tag supporting continuous sensing applications on mobile phones is challenging because of the resource demands of long-term sensing , inference and communication algorithms . we present the design , implementation and evaluation of the jigsaw continuous sensing engine , which balances the performance needs of the application and the resource demands of continuous sensing on the phone . jigsaw comprises a set of sensing pipelines for the accelerometer , microphone and gps sensors , which are built in a plug and play manner to support : i ) resilient accelerometer data processing , which allows inferences to be robust to different phone hardware , orientation and body positions ; ii ) smart admission control and on-demand processing for the microphone and accelerometer data , which adaptively throttles the depth and sophistication of sensing pipelines when the input data is low quality or uninformative ; and iii ) adaptive pipeline processing , which judiciously triggers power hungry pipeline stages ( e.g. , sampling the gps ) taking into account the mobility and behavioral patterns of the user to drive down energy costs . we implement and evaluate jigsaw on the nokia n95 and the apple iphone , two popular smartphone platforms , story_separator_special_tag an important question in behavioral epidemiology and public health is to understand how individual behavior is affected by illness and stress . although changes in individual behavior are intertwined with contagion , epidemiologists today do not have sensing or modeling tools to quantitatively measure its effects in real-world conditions . in this paper , we propose a novel application of ubiquitous computing . we use mobile phone based co-location and communication sensing to measure characteristic behavior changes in symptomatic individuals , reflected in their total communication , interactions with respect to time of day ( e.g. , late night , early morning ) , diversity and entropy of face-to-face interactions and movement . using these extracted mobile features , it is possible to predict the health status of an individual , without having actual health measurements from the subject . 
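one behavioral feature named above , the entropy of face-to-face interactions , is easy to make concrete ; a minimal python sketch ( the contact lists are invented ) :

import math
from collections import Counter

# entropy of a person 's face-to-face contacts , one of the
# behavioral features the study above extracts from phone data
def interaction_entropy(contacts):
    counts = Counter(contacts)
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

healthy_day = ["alice", "bob", "carol", "dave", "alice"]
symptomatic_day = ["alice", "alice", "alice", "alice"]
print(interaction_entropy(healthy_day))      # diverse contacts , higher entropy
print(interaction_entropy(symptomatic_day))  # withdrawn , entropy collapses to zero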
finally , we estimate the temporal information flux and implied causality between physical symptoms , behavior and mental health . story_separator_special_tag mobile phones are a pervasive platform for opportunistic sensing of behaviors and opinions . three studies use location and communication sensors to model individual behaviors and symptoms , long-term health outcomes , and the diffusion of opinions in a community . these three analyses illustrate how mobile phones can unobtrusively monitor rich social interactions , because the underlying sensing technologies are now commonplace and readily available . story_separator_special_tag exposure and adoption of opinions in social networks are important questions in education , business , and government . we describe a novel application of pervasive computing based on using mobile phone sensors to measure and model the face-to-face interactions and subsequent opinion changes amongst undergraduates , during the 2008 us presidential election campaign . we find that self-reported political discussants have characteristic interaction patterns and can be predicted from sensor data . mobile features can be used to estimate unique individual exposure to different opinions , and help discover surprising patterns of dynamic homophily related to external political events , such as election debates and election day . to our knowledge , this is the first time such dynamic homophily effects have been measured . automatically estimated exposure explains individual opinions on election day . finally , we report statistically significant differences in the daily activities of individuals that change political opinions versus those that do not , by modeling and discovering dominant activities using topic models . we find people who decrease their interest in politics are routinely exposed ( face-to-face ) to friends with little or no interest in politics . story_separator_special_tag if industry visionaries are correct , our lives will soon be full of sensors , connected together in loose conglomerations via wireless networks , each monitoring and collecting data about the environment at large . these sensors behave very differently from traditional database sources : they have intermittent connectivity , are limited by severe power constraints , and typically sample periodically and push immediately , keeping no record of historical information . these limitations make traditional database systems inappropriate for queries over sensors . we present the fjords architecture for managing multiple queries over many sensors , and show how it can be used to limit sensor resource demands while maintaining high query throughput . we evaluate our architecture using traces from a network of traffic sensors deployed on interstate 80 near berkeley and present performance results that show how query throughput , communication costs and power consumption are necessarily coupled in sensor environments . story_separator_special_tag urban street-parking availability statistics are challenging to obtain in real-time but would greatly benefit society by reducing traffic congestion . in this paper we present the design , implementation and evaluation of parknet , a mobile system comprising vehicles that collect parking space occupancy information while driving by . each parknet vehicle is equipped with a gps receiver and a passenger-side-facing ultrasonic range-finder to determine parking spot occupancy . 
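a hedged python sketch of how a passenger-side range-finder could flag open kerbside spots ( the thresholds , speed , and sampling rate are invented , not parknet 's actual pipeline ) :

# a long run of large distances while driving past the curb
# suggests empty kerbside ; count how many spot lengths fit in it
OPEN_THRESHOLD_M = 2.0   # farther than this , nothing parked beside us
SPOT_LEN_M = 5.0         # assumed length of one parking spot

def open_spots(distances_m, speed_mps, hz=10):
    spots, run = 0, 0.0
    for d in distances_m:
        if d > OPEN_THRESHOLD_M:
            run += speed_mps / hz    # metres of open curb so far
        else:
            spots += int(run // SPOT_LEN_M)
            run = 0.0
    return spots + int(run // SPOT_LEN_M)

readings = [0.8] * 20 + [4.5] * 60 + [0.9] * 20   # occupied , gap , occupied
print(open_spots(readings, speed_mps=5.0))        # one 30 m gap -> 6 spot lengths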
the data is aggregated at a central server , which builds a real-time map of parking availability and could provide this information to clients that query the system in search of parking . creating a spot-accurate map of parking availability challenges gps location accuracy limits . to address this need , we have devised an environmental fingerprinting approach to achieve improved location accuracy . based on 500 miles of road-side parking data collected over 2 months , we found that parking spot counts are 95 % accurate and occupancy maps can achieve over 90 % accuracy . finally , we quantify the number of sensors needed to provide adequate coverage in a city . using extensive gps traces from over 500 san francisco taxicabs , we show that if parknet were story_separator_special_tag the ewatch is a wearable sensing , notification , and computing platform built into a wrist watch form factor , making it highly available , instantly viewable , ideally located for sensors , and unobtrusive to users . bluetooth communication provides a wireless link to a cellular phone or stationary computer . ewatch senses light , motion , audio , and temperature and provides visual , audio , and tactile notification . the system provides ample processing capabilities with multiple-day battery life , enabling realistic user studies . this paper provides the motivation for developing a wearable computing platform , a description of the power aware hardware and software architectures , and results showing how online nearest neighbor classification can identify and recognize a set of frequently visited locations . story_separator_special_tag researchers studying daily life mobility patterns have recently shown that humans are typically highly predictable in their movements . however , no existing work has examined the boundaries of this predictability , where human behaviour transitions temporarily from routine patterns to highly unpredictable states . to address this shortcoming , we tackle two interrelated challenges . first , we develop a novel information-theoretic metric , called instantaneous entropy , to analyse an individual 's mobility patterns and identify temporary departures from routine . second , to predict such departures in the future , we propose the first bayesian framework that explicitly models breaks from routine , showing that it outperforms current state-of-the-art predictors . story_separator_special_tag people living in urban areas spend a considerable amount of time on public transport , for example , commuting to/from work . during these periods , opportunities for inter-personal networking present themselves , as many members of the public now carry electronic devices equipped with bluetooth or other wireless technology . using these devices , individuals can share content ( e.g. , music , news and video clips ) with fellow travellers that are on the same train or bus . transferring media content takes time ; in order to maximise the chances of successful downloads , users should identify neighbours that possess desirable content and who will travel with them for long-enough periods . in this paper , we propose a user-centric prediction scheme that collects historical colocation information to determine the best content sources . the scheme works on the assumption that people have a high degree of regularity in their movements . we first validate this assumption on a real dataset that consists of traces of people moving in a large city 's mass transit system .
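the source-selection rule this scheme implies fits in a few lines of python ( the history values and transfer threshold are invented ) : rank fellow travellers by expected colocation time and pick one likely to stay long enough :

from collections import defaultdict

# historical colocation minutes with each previously seen peer
history = defaultdict(list)
history["bob"] += [12, 15, 11]
history["eve"] += [2, 3, 1]

def best_source(candidates, transfer_min):
    # expected colocation time from past encounters
    def expected(peer):
        h = history[peer]
        return sum(h) / len(h) if h else 0.0
    ranked = sorted(candidates, key=expected, reverse=True)
    return ranked[0] if expected(ranked[0]) >= transfer_min else None

print(best_source(["bob", "eve"], transfer_min=10))   # bob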
we then demonstrate experimentally on these traces that our prediction scheme significantly improves communication efficiency , when compared to story_separator_special_tag we present darwin , an enabling technology for mobile phone sensing that combines collaborative sensing and classification techniques to reason about human behavior and context on mobile phones . darwin advances mobile phone sensing through the deployment of efficient but sophisticated machine learning techniques specifically designed to run directly on sensor-enabled mobile phones ( i.e. , smartphones ) . darwin tackles three key sensing and inference challenges that are barriers to mass-scale adoption of mobile phone sensing applications : ( i ) the human-burden of training classifiers , ( ii ) the ability to perform reliably in different environments ( e.g. , indoor , outdoor ) and ( iii ) the ability to scale to a large number of phones without jeopardizing the `` phone experience '' ( e.g. , usability and battery lifetime ) . darwin is a collaborative reasoning framework built on three concepts : classifier/model evolution , model pooling , and collaborative inference . to the best of our knowledge , darwin is the first system that applies distributed machine learning techniques and collaborative inference concepts to mobile phones . we implement the darwin system on the nokia n97 and apple iphone . while darwin represents a general story_separator_special_tag we present the design , implementation , evaluation , and user experiences of the cenceme application , which represents the first system that combines the inference of the presence of individuals using off-the-shelf , sensor-enabled mobile phones with sharing of this information through social networking applications such as facebook and myspace . we discuss the system challenges for the development of software on the nokia n95 mobile phone . we present the design and tradeoffs of split-level classification , whereby personal sensing presence ( e.g. , walking , in conversation , at the gym ) is derived from classifiers which execute in part on the phones and in part on the backend servers to achieve scalable inference . we report performance measurements that characterize the computational requirements of the software and the energy consumption of the cenceme phone client . we validate the system through a user study where twenty two people , including undergraduates , graduates and faculty , used cenceme continuously over a three-week period in a campus town . from this user study we learn how the system performs in a production environment and what uses people find for a personal sensing system . story_separator_special_tag background : emotional awareness and self-regulation are important skills for improving mental health and reducing the risk of cardiovascular disease . cognitive behavioral therapy can teach these skills but is not widely available . objective : this exploratory study examined the potential of mobile phone technologies to broaden access to cognitive behavioral therapy techniques and to provide in-the-moment support . methods : we developed a mobile phone application with touch screen scales for mood reporting and therapeutic exercises for cognitive reappraisal ( i.e. , examination of maladaptive interpretations ) and physical relaxation . the application was deployed in a one-month field study with eight individuals who had reported significant stress during an employee health assessment .
participants were prompted via their mobile phones to report their moods several times a day on a mood map -- a translation of the circumplex model of emotion -- and a series of single-dimension mood scales . using the prototype , participants could also activate mobile therapies as needed . during weekly open-ended interviews , participants discussed their use of the device and responded to longitudinal views of their data . analyses included a thematic review of interview narratives , assessment of mood changes over the story_separator_special_tag the perspective of natural phenomena as computational expression can help us find ways to carry out anticipatory computing . with this goal in mind , we can reach back to feynman 's attempt to define quantum computation . his understanding that space-time states can be defined not only in reference to the past and the present , but also to the future proves significant for showing how anticipatory processes can be computationally simulated . anticipatory computing is embodied in adaptive , non-deterministic , and open-ended information processes . given the realization that failure to acknowledge anticipation results in major breakdowns ( such as the current global financial crisis ) , the need for anticipation-based computational applications is higher than ever . in this article , an anticipatory control mechanism implemented for the automotive industry is presented . story_separator_special_tag we propose an acquisitional context engine ( ace ) , a middleware that supports continuous context-aware applications while mitigating sensing costs for inferring contexts . the ace provides the user 's current context to applications running on it . in addition , it dynamically learns relationships among various context attributes ( e.g. , whenever the user is driving , he is not athome ) . the ace exploits these automatically learned relationships for two powerful optimizations . the first is inference caching , which allows the ace to opportunistically infer one context attribute ( athome ) from another already-known attribute ( driving ) , without acquiring any sensor data . the second optimization is speculative sensing , which enables the ace to occasionally infer the value of an expensive attribute ( e.g. , athome ) by sensing cheaper attributes ( e.g. , driving ) . our experiments with two real context traces of 105 people and a windows phone prototype show that the ace can reduce the sensing costs of three context-aware applications by about 4.2 times , compared to a raw sensor data cache shared across applications , with a very small memory and processing overhead . story_separator_special_tag mobile devices can not rely on a single managed network , but must exploit a wide variety of connectivity options as they travel . we argue that such systems must consider the derivative of connectivity -- the changes inherent in movement between separately managed networks , with widely varying capabilities . with predictive knowledge of such changes , devices can more intelligently schedule network usage . to exploit the derivative of connectivity , we observe that people are creatures of habit ; they take similar paths every day . our system , breadcrumbs , tracks the movement of the device 's owner , and customizes a predictive mobility model for that specific user . combined with past observations of wireless network capabilities , breadcrumbs generates connectivity forecasts .
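a minimal python sketch of a connectivity forecast in the spirit of breadcrumbs ( a first-order markov model over visited cells , with invented transition probabilities and observed bandwidths ) :

# toy connectivity forecast : a first-order markov model over
# visited places , each annotated with observed wifi throughput
transitions = {"home": {"cafe": 0.7, "office": 0.3},
               "cafe": {"office": 1.0}}
bandwidth_mbps = {"home": 20.0, "cafe": 5.0, "office": 50.0}

def forecast(place):
    # expected bandwidth at the next place , given where we are now
    nxt = transitions.get(place, {})
    return sum(p * bandwidth_mbps[c] for c, p in nxt.items())

print(forecast("home"))   # 0.7 * 5 + 0.3 * 50 = 18.5 mbps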
we have built a breadcrumbs prototype , and demonstrated its potential with several weeks of real-world usage . our results show that these forecasts are sufficiently accurate , even with as little as one week of training , to provide improved performance with reduced power consumption for several applications . story_separator_special_tag mobile location-based services are thriving , providing an unprecedented opportunity to collect fine-grained spatio-temporal data about the places users visit . this multi-dimensional source of data offers new possibilities to tackle established research problems on human mobility , but it also opens avenues for the development of novel mobile applications and services . in this work we study the problem of predicting the next venue a mobile user will visit , by exploring the predictive power offered by different facets of user behavior . we first analyze about 35 million check-ins made by about 1 million foursquare users in over 5 million venues across the globe , spanning a period of five months . we then propose a set of features that aim to capture the factors that may drive users ' movements . our features exploit information on transitions between types of places , mobility flows between venues , and spatio-temporal characteristics of user check-in patterns . we further extend our study by combining all individual features in two supervised learning models , based on linear regression and m5 model trees , resulting in a higher overall prediction accuracy . we find that the supervised methodology based on the story_separator_special_tag many emerging smartphone applications require position information to provide location-based or context-aware services . in these applications , gps is often preferred over its alternatives such as gsm/wifi based positioning systems because it is known to be more accurate . however , gps is extremely power hungry . hence a common approach is to periodically duty-cycle gps . however , gps duty-cycling trades off positioning accuracy for lower energy . a key requirement for such applications , then , is a positioning system that provides accurate position information while spending minimal energy . in this paper , we present raps , a rate-adaptive positioning system for smartphone applications . it is based on the observation that gps is generally less accurate in urban areas , so it suffices to turn on gps only as often as necessary to achieve this accuracy . raps uses a collection of techniques to cleverly determine when to turn on gps . it uses the location-time history of the user to estimate user velocity and adaptively turn on gps only if the estimated uncertainty in position exceeds the accuracy threshold . it also efficiently estimates user movement using a duty-cycled accelerometer , and utilizes bluetooth communication to reduce position story_separator_special_tag many emerging location-aware applications require position information . however , these applications rarely use celltower-based localization because of its inaccuracy , preferring instead to use the more energy-hungry gps . in this paper , we present caps , a cell-id aided positioning system . caps leverages near-continuous mobility and the position history of a user to achieve significantly better accuracy than the celltower-based approach , while keeping energy overhead low .
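in the spirit of the rate-adaptive rule raps describes above , a toy duty-cycling predicate in python ( the accuracy target and worst-case drift model are invented ) :

# turn the gps receiver on only when estimated position
# uncertainty exceeds the application 's accuracy target
ACCURACY_TARGET_M = 50.0

def should_sample_gps(seconds_since_fix, est_speed_mps):
    uncertainty = seconds_since_fix * est_speed_mps   # worst-case drift
    return uncertainty > ACCURACY_TARGET_M

print(should_sample_gps(10, 1.4))   # walking , 14 m drift -> keep gps off
print(should_sample_gps(10, 13.0))  # driving , 130 m drift -> sample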
caps is designed based on the insight that users exhibit consistency in routes traveled , and that cell-id transition points that the user experiences can , on a frequently traveled route , uniquely identify position . to this end , caps uses a cell-id sequence matching technique to estimate current position based on the history of cell-id and gps position sequences that match the current cell-id sequence . we have implemented caps on android-based smartphones and have extensively evaluated it at different locations , and for different platforms and carriers . our evaluation results show that caps can save more than 90 % of the energy spent by the positioning system compared to the case where gps is always used , while providing reasonably accurate position information with errors less than story_separator_special_tag behavioural change interventions represent a powerful means for tackling a number of health and well-being issues , from obesity to stress and addiction . in the current medical practice , the change is induced through tailored coaching , support and information delivery . however , with the advent of smartphones , innovative ways of delivering interventions are emerging . indeed , mobile phones , equipped with an array of sensors , and carried by their users at all times , enable therapists to both learn about the user behaviour , and impact the behaviour through the delivery of more relevant and personalised information . in this work we propose harnessing pervasive computing to not only learn from users ' past behaviour , but also predict future actions and emotional states , deliver interventions proactively , evaluate their impact at run-time , and over time learn a personal intervention-effect model of a participant . story_separator_special_tag the mobile phone represents a unique platform for interactive applications that can harness the opportunity of an immediate contact with a user in order to increase the impact of the delivered information . however , this accessibility does not necessarily translate to reachability , as recipients might refuse an initiated contact or disfavor a message that comes at an inappropriate moment . in this paper we seek to answer whether , and how , suitable moments for interruption can be identified and utilized in a mobile system . we gather and analyze a real-world smartphone data trace and show that users ' broader context , including their activity , location , time of day , emotions and engagement , determines different aspects of interruptibility . we then design and implement interruptme , an interruption management library for android smartphones . an extensive experiment shows that , compared to a context-unaware approach , interruptions elicited through our library result in increased user satisfaction and shorter response times . story_separator_special_tag predicting future calls can be the next advanced feature of the next-generation telecommunication networks as the service providers are looking to offer new services to their customers . call prediction can be useful to many applications such as planning daily schedules , avoiding unwanted communications ( e.g . voice spam ) , and resource planning in call centers . predicting calls is a very challenging task . we believe that this is an emerging area of research in ambient intelligence where the electronic devices are sensitive and responsive to people 's needs and behavior .
in particular , we believe that the results of this research will lead to higher productivity and quality of life . in this article , we present a call predictor ( cp ) that offers two new advanced features for the next-generation phones , namely incoming call forecast and intelligent address book . for the incoming call forecast , the cp makes the next-24-hour incoming call prediction based on recent caller 's behavior and reciprocity . for the intelligent address book , the cp generates a list of most likely contacts/numbers to be dialed at any given time based on the user 's behavior and reciprocity story_separator_special_tag mobile instant messaging ( e.g. , via sms or whatsapp ) often goes along with an expectation of high attentiveness , i.e. , that the receiver will notice and read the message within a few minutes . hence , existing instant messaging services for mobile phones share indicators of availability , such as the last time the user has been online . however , in this paper we not only provide evidence that these cues create social pressure , but that they are also weak predictors of attentiveness . as a remedy , we propose to share a machine-computed prediction of whether the user will view a message within the next few minutes or not . for two weeks , we collected behavioral data from 24 users of mobile instant messaging services . by means of machine-learning techniques , we identified that simple features extracted from the phone , such as the user 's interaction with the notification center , the screen activity , the proximity sensor , and the ringer mode , are strong predictors of how quickly the user will attend to the messages . with seven automatically selected features our model predicts whether a phone user will story_separator_special_tag although mobile phones are ideal platforms for continuous human centric sensing , the state of the art phone architectures today have not been designed to support continuous sensing applications . currently , sampling and processing sensor data on the phone requires the main processor and associated components to be continuously on , creating a large energy overhead that can severely impact the battery lifetime of the phone . we will demonstrate little rock , a novel sensing architecture for mobile phones , where sampling and , when possible , processing of sensor data is offloaded to a dedicated low-power processor . this approach enables the phone to perform continuous sensing three orders of magnitude more energy efficiently compared to the normal approaches . story_separator_special_tag bipolar disorder is a severe form of mental illness . it is characterized by alternating episodes of mania and depression , and it is treated typically with a combination of pharmacotherapy and psychotherapy . recognizing early warning signs of upcoming phases of mania or depression would be of great help for a personalized medical treatment . unfortunately , this is a difficult task to be performed for both patients and doctors . in this paper we present the monarca wearable system , which is meant for recognizing early warning signs and predicting manic or depressive episodes . the system is a smartphone-centred and minimally invasive wearable sensor network that is being developed within the framework of the monarca european project . story_separator_special_tag the idea of continuously monitoring well-being using mobile-sensing systems is gaining popularity .
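as an aside , the kind of attentiveness prediction described above can be sketched with a handful of phone-derived features and an off-the-shelf learner ; the feature values and the choice of a shallow decision tree below are illustrative assumptions , not the study 's exact setup .

```python
# Toy sketch of attentiveness prediction from simple phone features.
# Features per observation: [screen_on, ringer_silent, proximity_covered,
# notifications_opened_last_hour]; label: viewed message within minutes.
from sklearn.tree import DecisionTreeClassifier

X = [
    [1, 0, 0, 5], [1, 0, 0, 2], [0, 1, 1, 0],
    [0, 0, 1, 1], [1, 1, 0, 4], [0, 1, 0, 0],
]
y = [1, 1, 0, 0, 1, 0]  # 1 = attended to the message quickly

clf = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)
print(clf.predict([[1, 0, 0, 3]]))  # phone in active use: likely attentive
```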
in-situ measurement of human behavior has the potential to overcome the shortcomings of gold-standard surveys that have been used for decades by the medical community . however , current sensing systems have mainly focused on tracking physical health ; some have approximated aspects of mental health based on proximity measurements but have not been compared against medically accepted screening instruments . in this paper , we show the feasibility of a multi-modal mobile sensing system to simultaneously assess mental and physical health . by continuously capturing fine-grained motion and privacy-sensitive audio data , we are able to derive different metrics that reflect the results of commonly used surveys for assessing well-being by the medical community . in addition , we present a case study that highlights how errors in assessment due to the subjective nature of the responses could potentially be avoided by continuous mobile sensing . story_separator_special_tag the interactions and social relations among users in workplaces have been studied by many generations of social psychologists . there is evidence that groups of users that interact more in workplaces are more productive . however , it is still hard for social scientists to capture fine-grained data about phenomena of this kind and to find the right means to facilitate interaction . it is also difficult for users to keep track of their level of sociability with colleagues . while mobile phones offer a fantastic platform for harvesting long term and fine grained data , they also pose challenges : battery power is limited and needs to be traded off for sensor reading accuracy and data transmission , while energy costs in processing computationally intensive tasks are high . in this paper , we propose sociablesense , a smartphone-based platform that captures user behavior in office environments , while providing the users with a quantitative measure of their sociability and that of colleagues . we tackle the technical challenges of building such a tool : the system provides an adaptive sampling mechanism as well as models to decide whether to perform computation of tasks , such as the story_separator_special_tag today 's mobile phones represent a rich and powerful computing platform , given their sensing , processing and communication capabilities . phones are also part of the everyday life of billions of people , and therefore represent an exceptionally suitable tool for conducting social and psychological experiments in an unobtrusive way . we present a mobile sensing platform whose key characteristics include the ability of sensing individual emotions as well as activities , verbal and proximity interactions among members of social groups . moreover , the system is programmable by means of a declarative language that can be used to express adaptive rules to improve power saving . we evaluate a system prototype on nokia symbian phones by means of several small-scale experiments aimed at testing performance in terms of accuracy and power consumption . finally , we present the results of a real deployment where we study participants ' emotions and interactions . we cross-validate our measurements with the results obtained through questionnaires filled by the users , and the results presented in social psychological studies using traditional methods . in particular , we show how speakers and participants ' emotions can be automatically detected by means of classifiers running locally on off-the-shelf mobile phones , and how speaking and interactions can story_separator_special_tag reynolds , douglas a .
, quatieri , thomas f. , and dunn , robert b. , speaker verification using adapted gaussian mixture models , digital signal processing 10 ( 2000 ) , 19-41 . in this paper we describe the major elements of mit lincoln laboratory 's gaussian mixture model ( gmm ) -based speaker verification system used successfully in several nist speaker recognition evaluations ( sres ) . the system is built around the likelihood ratio test for verification , using simple but effective gmms for likelihood functions , a universal background model ( ubm ) for alternative speaker representation , and a form of bayesian adaptation to derive speaker models from the ubm . the development and use of a handset detector and score normalization to greatly improve verification performance is also described and discussed . finally , representative performance benchmarks and system behavior experiments on nist sre corpora are presented . story_separator_special_tag a first course in machine learning covers the core mathematical and statistical techniques needed to understand some of the most popular machine learning algorithms . the algorithms presented span the main problem areas within machine learning : classification , clustering and projection . the text gives detailed descriptions and derivations for a small number of algorithms rather than cover many algorithms in less detail . referenced throughout the text and available on a supporting website ( http : //bit.ly/firstcourseml ) , an extensive collection of matlab/octave scripts enables students to recreate plots that appear in the book and investigate changing model specifications and parameter values . by experimenting with the various algorithms and concepts , students see how an abstract set of equations can be used to solve real problems . requiring minimal mathematical prerequisites , the classroom-tested material in this text offers a concise , accessible introduction to machine learning . it provides students with the knowledge and confidence to explore the machine learning literature and research specific methods in more detail . story_separator_special_tag foreword.- preface.- introduction.- preliminaries.- natural and formal systems.- the modelling relation.- the encodings of time.- open systems and the modelling relation.- anticipatory systems.- appendix.- addendum : autobiographical reminiscences.- index . story_separator_special_tag much work has been done on predicting where one is going to be in the immediate future , typically within the next hour . by contrast , we address the open problem of predicting human mobility far into the future , a scale of months and years . we propose an efficient nonparametric method that extracts significant and robust patterns in location data , learns their associations with contextual features ( such as day of week ) , and subsequently leverages this information to predict the most likely location at any given time in the future . the entire process is formulated in a principled way as an eigendecomposition problem . evaluation on a massive dataset with more than 32,000 days ' worth of gps data across 703 diverse subjects shows that our model predicts the correct location with high accuracy , even years into the future . this result opens a number of interesting avenues for future research and applications . story_separator_special_tag this article discusses the challenges in computer systems research posed by the emerging field of pervasive computing .
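the gmm-ubm scoring described above boils down to an average log-likelihood ratio between a speaker model and the universal background model ; the sketch below warm-starts a speaker gmm from the ubm parameters as a rough stand-in for the paper 's map adaptation , on synthetic features .

```python
# Rough sketch of GMM-UBM speaker verification scoring with scikit-learn.
# Real systems use MAP adaptation of the UBM; here we approximate it by
# re-fitting a GMM initialized from the UBM parameters. Data is synthetic.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
background = rng.normal(0, 1, size=(2000, 13))      # stand-in for pooled features
speaker_train = rng.normal(0.5, 1, size=(200, 13))  # enrollment features

ubm = GaussianMixture(n_components=8, covariance_type="diag",
                      random_state=0).fit(background)

# "Adapt" the UBM toward the speaker by warm-starting from its parameters.
spk = GaussianMixture(n_components=8, covariance_type="diag",
                      weights_init=ubm.weights_, means_init=ubm.means_,
                      precisions_init=ubm.precisions_,
                      random_state=0).fit(speaker_train)

def llr(utterance):
    # Average log-likelihood ratio: log p(x | speaker) - log p(x | UBM).
    return spk.score(utterance) - ubm.score(utterance)

print(llr(rng.normal(0.5, 1, size=(100, 13))))   # same-speaker trial: score should be positive
print(llr(rng.normal(-1.0, 1, size=(100, 13))))  # impostor trial: score should be negative
```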
it first examines the relationship of this new field to its predecessors : distributed systems and mobile computing . it then identifies four new research thrusts : effective use of smart spaces , invisibility , localized scalability , and masking uneven conditioning . next , it sketches a couple of hypothetical pervasive computing scenarios , and uses them to identify key capabilities missing from today 's systems . the article closes with a discussion of the research necessary to develop these capabilities . story_separator_special_tag accurate and fine-grained prediction of future user location and geographical profile has interesting and promising applications including targeted content service , advertisement dissemination for mobile users , and recreational social networking tools for smart-phones . existing techniques based on linear and probabilistic models are not able to provide accurate prediction of the location patterns from a spatio-temporal perspective , especially for long-term estimation . more specifically , they are able to only forecast the next location of a user , but not his/her arrival time and residence time , i.e. , the interval of time spent in that location . moreover , these techniques are often based on prediction models that are not able to extend predictions further in the future . in this paper we present nextplace , a novel approach to location prediction based on nonlinear time series analysis of the arrival and residence times of users in relevant places . nextplace focuses on the predictability of single users when they visit their most important places , rather than on the transitions between different locations . we report on our evaluation using four different datasets and we compare our forecasting results to those obtained by means of the story_separator_special_tag with each eye fixation , we experience a richly detailed visual world . yet recent work on visual integration and change detection reveals that we are surprisingly unaware of the details of our environment . story_separator_special_tag location is an important feature for many applications , and wireless networks can better serve their clients by anticipating client mobility . as a result , many location predictors have been proposed in the literature , though few have been evaluated with empirical evidence . this paper reports on the results of the first extensive empirical evaluation of location predictors , using a two-year trace of the mobility patterns of over 6,000 users on dartmouth 's campus-wide wi-fi wireless network . we implemented and compared the prediction accuracy of several location predictors drawn from two major families of domain-independent predictors , namely markov-based and compression-based predictors . we found that low-order markov predictors performed as well or better than the more complex and more space-consuming compression-based predictors . predictors of both families fail to make a prediction when the recent context has not been previously seen . to overcome this drawback , we added a simple fallback feature to each predictor and found that it significantly enhanced its accuracy in exchange for modest effort . thus the order-2 markov predictor with fallback was the best predictor we studied , obtaining a median accuracy of about 72 % for users with story_separator_special_tag in cellular networks , qos degradation or forced termination may occur when there are insufficient resources to accommodate handoff requests .
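the order-2 markov predictor with fallback that performed best in the dartmouth evaluation above is simple enough to sketch in a few lines ; the access-point names below are hypothetical .

```python
# Minimal order-2 Markov location predictor with fallback to order-1,
# echoing the Dartmouth evaluation above. Location names are invented.
from collections import Counter, defaultdict

class MarkovFallbackPredictor:
    def __init__(self):
        self.o2 = defaultdict(Counter)  # (loc[-2], loc[-1]) -> next loc counts
        self.o1 = defaultdict(Counter)  # loc[-1] -> next loc counts

    def train(self, trace):
        for a, b, c in zip(trace, trace[1:], trace[2:]):
            self.o2[(a, b)][c] += 1
        for a, b in zip(trace, trace[1:]):
            self.o1[a][b] += 1

    def predict(self, history):
        ctx = tuple(history[-2:])
        if len(ctx) == 2 and self.o2[ctx]:
            return self.o2[ctx].most_common(1)[0][0]
        if history and self.o1[history[-1]]:   # fallback: unseen order-2 context
            return self.o1[history[-1]].most_common(1)[0][0]
        return None  # no prediction possible

p = MarkovFallbackPredictor()
p.train(["dorm", "library", "cafe", "dorm", "library", "gym"])
print(p.predict(["dorm", "library"]))  # ties broken by first-seen order
```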
one solution is to predict the trajectory of mobile terminals so as to perform resource reservations in advance . with the vision that future mobile devices are likely to be equipped with reasonably accurate positioning capability , we investigate how this new feature may be used for mobility predictions . we propose a mobility prediction technique that incorporates road topology information , and describe its use for dynamic resource reservation . simulation results are presented to demonstrate the improvement in reservation efficiency compared with several other schemes . story_separator_special_tag the first decade of the century witnessed a proliferation of devices with sensing and communication capabilities in the possession of the average individual . examples range from camera phones and wireless global positioning system units to sensor-equipped , networked fitness devices and entertainment platforms ( such as wii ) . social networking platforms emerged , such as twitter , that allow sharing information in real time . the unprecedented deployment scale of such sensors and connectivity options ushers in an era of novel data-driven applications that rely on inputs collected by networks of humans or measured by sensors acting on their behalf . these applications will impact domains as diverse as health , transportation , energy , disaster recovery , intelligence and warfare . this paper surveys the important opportunities in human-centric sensing , identifies challenges brought about by such opportunities and describes emerging solutions to these challenges . story_separator_special_tag the anticipatory classifier system ( acs ) is a learning classifier system ( lcs ) that uses a learning process derived from psychology , which is called anticipatory learning process ( alp ) . besides the well known reward learning in lcs , the acs is able to learn a model of its environment by using the alp . the internal model of the environment consists of condition-action-effect rules . a typical question in lcs research is whether the rules are accurate and maximally general , i.e. , whether the rules can be applied in a maximum number of situations . recent research has observed that the acs is not generating accurate , maximally general rules reliably , but sometimes produces over-specialized rules . a genetic algorithm is used to overcome this pressure of over-specialization . this invited paper gives an introduction to the current version of acs . applications are not discussed . they can be found in anticipatory classifier systems : an overview of applications ( this volume ) . story_separator_special_tag reinforcement learning , one of the most active research areas in artificial intelligence , is a computational approach to learning whereby an agent tries to maximize the total amount of reward it receives when interacting with a complex , uncertain environment . in reinforcement learning , richard sutton and andrew barto provide a clear and simple account of the key ideas and algorithms of reinforcement learning . their discussion ranges from the history of the field 's intellectual foundations to the most recent developments and applications . the only necessary mathematical background is familiarity with elementary concepts of probability . the book is divided into three parts . part i defines the reinforcement learning problem in terms of markov decision processes . part ii provides basic solution methods : dynamic programming , monte carlo methods , and temporal-difference learning .
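as a toy illustration of the temporal-difference idea just mentioned , the snippet below runs a td ( 0 ) value update on the classic five-state random walk ; it is a sketch for intuition , not code from the book .

```python
# TD(0) value estimation on a 5-state random walk: reward 1 only when
# exiting on the right. True values are approximately [1/6 .. 5/6].
import random

random.seed(0)
V = [0.0] * 5
alpha, gamma = 0.1, 1.0

for _ in range(2000):
    s = 2                                  # every episode starts in the middle
    while True:
        s2 = s + random.choice([-1, 1])    # unbiased random walk
        if s2 < 0 or s2 > 4:               # terminal state reached
            r, v2 = (1.0 if s2 > 4 else 0.0), 0.0
        else:
            r, v2 = 0.0, V[s2]
        V[s] += alpha * (r + gamma * v2 - V[s])   # the TD(0) update
        if s2 < 0 or s2 > 4:
            break
        s = s2

print([round(v, 2) for v in V])  # roughly [0.17, 0.33, 0.5, 0.67, 0.83]
```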
part iii presents a unified view of the solution methods and incorporates artificial neural networks , eligibility traces , and planning ; the two final chapters present case studies and consider the future of reinforcement learning . story_separator_special_tag activity recognition.- activity recognition from user-annotated acceleration data.- recognizing workshop activity using body worn microphones and accelerometers.- `` are you with me ? '' - using accelerometers to determine if two devices are carried by the same person.- context computing.- context cube : flexible and effective manipulation of sensed context data.- a context-aware communication platform for smart objects.- siren : context-aware computing for firefighting.- near body interfaces.- spectacle-based design of wearable see-through display for accommodation-free viewing.- a compact battery-less information terminal for real world interaction.- software.- inca : a software infrastructure to facilitate the construction and evolution of ubiquitous capture & access applications.- sensors.- activity recognition in the home using simple and ubiquitous sensors.- automatic calibration of body worn acceleration sensors.- reject-optional lvq-based two-level classifier to improve reliability in footstep identification.- issues with rfid usage in ubiquitous computing applications.- security.- a fault-tolerant key-distribution scheme for securing wireless ad hoc networks.- proxnet : secure dynamic wireless connection by proximity sensing.- tackling security and privacy issues in radio frequency identification devices.- architectures and systems.- towards wearable autonomous microsystems.- ubiquitous chip : a rule-based i/o control device for ubiquitous computing.- eseal - a system for enhanced electronic assertion of authenticity and integrity.- algorithms.- story_separator_special_tag in this paper , we present a real-time algorithm for automatic recognition of not only physical activities , but also , in some cases , their intensities , using five triaxial wireless accelerometers and a wireless heart rate monitor . the algorithm has been evaluated using datasets consisting of 30 physical gymnasium activities collected from a total of 21 people at two different labs . on these activities , we have obtained a recognition accuracy performance of 94.6 % using subject-dependent training and 56.3 % using subject-independent training . the addition of heart rate data improves subject-dependent recognition accuracy only by 1.2 % and subject-independent recognition only by 2.1 % . when recognizing activity type without differentiating intensity levels , we obtain a subject-independent performance of 80.6 % . we discuss why heart rate data has such little discriminatory power . story_separator_special_tag mobile phones may interrupt in any place at any time . using the socioxensor research tool on people 's own mobile phones , we conducted an experience sampling study to explore which context information predicts a person 's availability for a phone call , and which context information people wanted to disclose to particular social relations . like other studies , we found that a small set of context information can help initiators of phone calls to improve their ability to know when recipients are receptive to phone calls . 
we also found that if we restrict the information to the information recipients actually want to disclose , which is only a small subset of all information , enough context information is still available for initiators of phone calls to improve their ability to know when recipients are receptive to phone calls . story_separator_special_tag social signal processing is the research domain aimed at bridging the social intelligence gap between humans and machines . this paper is the first survey of the domain that jointly considers its three major aspects , namely , modeling , analysis , and synthesis of social behavior . modeling investigates laws and principles underlying social interaction , analysis explores approaches for automatic understanding of social exchanges recorded with different sensors , and synthesis studies techniques for the generation of social behavior via various forms of embodiment . for each of the above aspects , the paper includes an extensive survey of the literature , points to the most important publicly available resources , and outlines the most fundamental challenges ahead . story_separator_special_tag research in social science has shown that mobile phone conversations distract users , with a significant impact on pedestrian safety ; for example , a mobile phone user deep in conversation while crossing a street is generally more at risk than other pedestrians not engaged in such behavior . we propose walksafe , an android smartphone application that aids people that walk and talk , improving the safety of pedestrian mobile phone users . walksafe uses the back camera of the mobile phone to detect vehicles approaching the user , alerting the user of a potentially unsafe situation ; more specifically , walksafe i ) uses machine learning algorithms implemented on the phone to detect the front views and back views of moving vehicles and ii ) exploits phone apis to save energy by running the vehicle detection algorithm only during active calls . we present our initial design , implementation and evaluation of the walksafe app that is capable of real-time detection of the front and back views of cars , indicating cars are approaching or moving away from the user , respectively . walksafe is implemented on android phones and alerts the user of unsafe conditions using sound and story_separator_special_tag urban sensing , participatory sensing , and user activity recognition can provide rich contextual information for mobile applications such as social networking and location-based services . however , continuously capturing this contextual information on mobile devices consumes a huge amount of energy . in this paper , we present a novel design framework for an energy efficient mobile sensing system ( eemss ) . eemss uses a hierarchical sensor management strategy to recognize user states as well as to detect state transitions . by powering only a minimum set of sensors and using appropriate sensor duty cycles , eemss significantly improves device battery life . we present the design , implementation , and evaluation of eemss that automatically recognizes a set of users ' daily activities in real time using sensors on an off-the-shelf high-end smart phone . evaluation of eemss with 10 users over one week shows that our approach increases the device battery life by more than 75 % while maintaining both high accuracy and low latency in identifying transitions between end-user activities .
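the hierarchical sensor management idea behind eemss can be sketched as a small state machine in which each recognized state names the minimum sensor set to keep powered ; the states , sensor sets and thresholds below are invented for illustration .

```python
# Skeletal sketch of hierarchical sensor management: each recognized state
# maps to a minimum sensor set, and cheap readings drive state transitions.
STATE_SENSORS = {
    "idle":    {"accelerometer"},
    "walking": {"accelerometer", "gps"},
    "driving": {"gps"},
}

def transition(state, accel_var, speed_mps):
    # Crude, invented transition rules driven by the powered sensors.
    if state == "idle" and accel_var > 0.5:
        return "walking"
    if state == "walking" and speed_mps > 8:
        return "driving"
    if state == "driving" and speed_mps < 1:
        return "idle"
    return state

state = "idle"
for accel_var, speed in [(0.1, 0.0), (0.9, 1.2), (0.7, 10.0), (0.0, 0.3)]:
    state = transition(state, accel_var, speed)
    print(state, "-> power only", sorted(STATE_SENSORS[state]))
```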
story_separator_special_tag consider writing , perhaps the first information technology : the ability to capture a symbolic representation of spoken language for long-term storage freed information from the limits of individual memory . today this technology is ubiquitous in industrialized countries . not only do books , magazines and newspapers convey written information , but so do street signs , billboards , shop signs and even graffiti . candy wrappers are covered in writing . the constant background presence of these products of `` literacy technology '' does not require active attention , but the information to be conveyed is ready for use at a glance . it is difficult to imagine modern life otherwise . story_separator_special_tag a fundamental difficulty in recognizing human activities is obtaining the labeled data needed to learn models of those activities . given emerging sensor technology , however , it is possible to view activity data as a stream of natural language terms . activity models are then mappings from such terms to activity names , and may be extracted from text corpora such as the web . we show that models so extracted are sufficient to automatically produce labeled segmentations of activity data with an accuracy of 42 % over 26 activities , well above the 3.89 % baseline . the segmentation so obtained is sufficient to bootstrap learning , with accuracy of learned models increasing to 52 % . to our knowledge , this is the first human activity inferencing system shown to learn from sensed activity data with no human intervention per activity learned , even for labeling . story_separator_special_tag mobile crowdsensing is becoming a vital technique for environment monitoring , infrastructure management , and social computing . however , deploying mobile crowdsensing applications in large-scale environments is not a trivial task . it creates a tremendous burden on application developers as well as mobile users . in this paper we try to reveal the barriers hampering the scale-up of mobile crowdsensing applications , and to offer our initial thoughts on the potential solutions to lowering the barriers . story_separator_special_tag background .
a key question for public health is how best to engage users cost-effectively with digital interventions . two popular methods of encouraging greater engagement are to provide human support or to provide just-in-time mobile intervention components . objective . our aim was to examine effects on uptake and usage of the web-based power ( positive online weight reduction ) intervention a ) when intermittent telephone support was provided and b ) when mobile intervention components were provided . methods . to test the effects of human support we trialed power in a community public health setting , randomising users ( n=786 ) to website only , website plus two telephone support sessions , or an 8 week waiting list . telephone interviews were carried out with purposively sampled participants to elicit views and experiences of those who did and did not receive support . quantitative follow-up finishes in march 2013 and automatic tracking of website usage ensures we will have complete data for our primary outcomes . novel visualisation techniques are being used to identify patterns of usage and relate these to participant baseline characteristics . to test the effects of adding mobile intervention components we developed a story_separator_special_tag this paper explores the potential of fusing social and sensor data in the cloud , presenting a practice -- a travel recommendation system that offers predicted mood information for the places and times users wish to travel . the system is built upon a conceptual framework that allows blending the heterogeneous social and sensor data for integrated analysis , extracting weather-dependent mood information of people from twitter and meteorological sensor data streams . in order to handle massively streaming data , the system employs various cloud-serving systems , such as hadoop , hbase , and gsn . using this scalable system , we performed heavy etl as well as filtering jobs , resulting in 12 million tweets over four months . we then derived a rich set of interesting findings through the data fusion , proving that our approach is effective and scalable , which can serve as an important basis in fusing social and sensor data in the cloud . story_separator_special_tag semi-supervised learning is a learning paradigm concerned with the study of how computers and natural systems such as humans learn in the presence of both labeled and unlabeled data . traditionally , learning has been studied either in the unsupervised paradigm ( e.g. , clustering , outlier detection ) where all the data is unlabeled , or in the supervised paradigm ( e.g. , classification , regression ) where all the data is labeled . the goal of semi-supervised learning is to understand how combining labeled and unlabeled data may change the learning behavior , and design algorithms that take advantage of such a combination . semi-supervised learning is of great interest in machine learning and data mining because it can use readily available unlabeled data to improve supervised learning tasks when the labeled data is scarce or expensive . semi-supervised learning also shows potential as a quantitative tool to understand human category learning , where most of the input is self-evidently unlabeled . in this introductory book , we present some popular semi-supervised learning models , including self-training , mixture models , co-training and multiview learning , graph-based methods , and semi-supervised support vector machines .
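self-training , the first of the models listed above , is easy to sketch : train on the labeled data , adopt only the most confident pseudo-labels for unlabeled points , and repeat ; the synthetic data and the 0.95 confidence threshold below are illustrative choices , not prescriptions from the book .

```python
# Minimal self-training loop on synthetic two-cluster data.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X_lab = np.vstack([rng.normal(-2, 1, (10, 2)), rng.normal(2, 1, (10, 2))])
y_lab = np.array([0] * 10 + [1] * 10)
X_unl = np.vstack([rng.normal(-2, 1, (200, 2)), rng.normal(2, 1, (200, 2))])

for _ in range(5):
    clf = LogisticRegression().fit(X_lab, y_lab)
    if len(X_unl) == 0:
        break
    proba = clf.predict_proba(X_unl)
    pick = proba.max(axis=1) > 0.95       # adopt only confident pseudo-labels
    if not pick.any():
        break
    X_lab = np.vstack([X_lab, X_unl[pick]])
    y_lab = np.concatenate([y_lab, proba[pick].argmax(axis=1)])
    X_unl = X_unl[~pick]

print(len(X_lab), "labeled + pseudo-labeled examples after self-training")
```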
for each model , we story_separator_special_tag we present procab , an efficient method for probabilistically reasoning from observed context-aware behavior . it models the context-dependent utilities and underlying reasons that people take different actions . the model generalizes to unseen situations and scales to incorporate rich contextual information . we train our model using the route preferences of 25 taxi drivers demonstrated in over 100,000 miles of collected data , and demonstrate the performance of our model by inferring : ( 1 ) decision at next intersection , ( 2 ) route to known destination , and ( 3 ) destination given partially traveled route .
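the core modeling idea above can be illustrated as a softmax over context-dependent utilities , with the probability of each action proportional to exp of a weighted feature sum ; the features and weights below are invented for illustration , whereas procab learns the weights from demonstrated routes .

```python
# Illustrative softmax over context-dependent utilities for the choice of
# action at an intersection. Feature names and weights are hypothetical.
import math

theta = {"distance": -1.0, "highway": 0.6, "left_turn": -0.3}

def utility(features):
    return sum(theta[k] * v for k, v in features.items())

def action_probs(actions):
    scores = {a: math.exp(utility(f)) for a, f in actions.items()}
    z = sum(scores.values())
    return {a: s / z for a, s in scores.items()}

print(action_probs({
    "straight": {"distance": 1.0, "highway": 1.0, "left_turn": 0.0},
    "left":     {"distance": 0.8, "highway": 0.0, "left_turn": 1.0},
}))
```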
story_separator_special_tag
life events and psychological functioning - lawrence h cohen theoretical and methodological issues part one : effects of life events on psychological functioning measurement of life events - lawrence h cohen life stress and psychopathology - scott m monroe and amy m peterman life events and adjustment in childhood and adolescence - james h johnson and andrew s bradlyn methodological and conceptual issues life events in older adults - stanley a murrell , fran h norris and christopher grote the contribution of small events to stress and distress - alex j zautra et al direct and stress-moderating effects of positive life experiences - john w reich and alex j zautra part two : role of moderator variables coping with stressful events - arthur a stone , lynn helder and mark s schneider coping dimensions and issues models of social support and life stress - manuel barrera jr beyond the buffering hypothesis a conceptual reorientation to the study of personality and stressful life events - ralph w swindle jr , kenneth heller and brian lakey story_separator_special_tag in agile software development , industries are becoming more dependent on automated test suites . thus , the test code quality is an important factor for the overall system quality and maintainability . we propose a test automation improvement model ( taim ) defining ten key areas and one general area . each area should be based on measurements , to fill the gap of existing assessment models . the main contribution of this paper is to provide the outline of taim and present our intermediate results and some initial metrics to support our model . our initial target has been the key area targeting implementation and structure of test code . we have used common static measurements to compare the test code and the source code of a unit test automation suite being part of a large complex telecom subsystem . our intermediate results show that it is possible to outline such an improvement model and our metrics approach seems promising . however , to get a generic useful model to aid test automation evolution and provide for comparable measurements , many problems still remain to be solved . taim can as such be viewed as a framework to story_separator_special_tag this article studied and compared the two nonprobability sampling techniques , namely convenience sampling and purposive sampling . convenience sampling and purposive sampling are nonprobability sampling techniques that a researcher uses to choose a sample of subjects/units from a population . although nonprobability sampling has many limitations due to the subjective nature of choosing the sample , and thus is not a good representative of the population , it is useful especially when randomization is impossible , such as when the population is very large . it can be useful when the researcher has limited resources , time and workforce . it can also be used when the research does not aim to generate results that will be used to create generalizations pertaining to the entire population . therefore , there is a need to use nonprobability sampling techniques . the aim of this study is to compare the two nonrandom sampling techniques in order to determine whether one technique is better or more useful than the other .
different articles were reviewed to compare convenience sampling and purposive sampling , and it is concluded that the choice of the techniques ( convenience sampling and purposive sampling ) depends story_separator_special_tag the first objective of this article is to propose a conceptual framework of the effects of on-line questionnaire design on the quality of collected responses . secondly , we present the results of an experiment where different protocols have been tested and compared in a randomised design on the basis of several quality indexes . starting from some previous categorizations , and from the main factors identified in the literature , we first propose an initial global framework of the questionnaire and question characteristics in a web survey , divided into five groups of factors . our framework was built to follow the successive stages of the response process , i.e. , the contact between the respondent and the questionnaire itself . then , because it has been studied in the survey methodology literature in a very restricted way , the concept of ` response quality ' is discussed and extended with some more ` qualitative ' criteria that could be helpful for researchers and practitioners , in order to obtain a deeper assessment of the survey output . as an experiment , on the basis of the factors chosen as major characteristics of the questionnaire design , eight versions of a questionnaire related story_separator_special_tag context : software testing practices and processes in many companies are far from being mature and are usually conducted in ad-hoc fashions . such immature practices lead to various negative outcomes , e.g. , ineffectiveness of testing practices in detecting all the defects , and cost and schedule overruns of testing activities . to conduct test maturity assessment ( tma ) and test process improvement ( tpi ) in a systematic manner , various tma/tpi models and approaches have been proposed . objective : it is important to identify the state-of-the-art and the practice in this area to consolidate the list of all various test maturity models proposed by practitioners and researchers , the drivers of tma/tpi , the associated challenges and the benefits and results of tma/tpi . our article aims to benefit the readers ( both practitioners and researchers ) by providing the most comprehensive survey of the area , to this date , in assessing and improving the maturity of test processes . method : to achieve the above objective , we have performed a multivocal literature review ( mlr ) study to find out what we know about tma/tpi . a mlr is a form of a systematic literature review ( slr story_separator_special_tag context : software testing is an important and costly software engineering activity in the industry . despite the efforts of the software testing research community in the last several decades , variou . story_separator_special_tag context : many organizations see software test automation as a solution to decrease testing costs and to reduce cycle time in software development . however , establishment of automated testing may fail if test automation is not applied at the right time , in the right context and with the appropriate approach . objective : the decisions on when and what to automate are important since wrong decisions can lead to disappointments and major wasted expenditures ( resources and efforts )
to support decision making on when and what to automate , researchers and practitioners have proposed various guidelines , heuristics and factors since the early days of test automation technologies . as the number of such sources has increased , it is important to systematically categorize the current state-of-the-art and -practice , and to provide a synthesized overview . method : to achieve the above objective , we have performed a multivocal literature review ( mlr ) study on when and what to automate in software testing . a mlr is a form of a systematic literature review ( slr ) which includes the grey literature ( e.g. , blog posts and white papers ) in addition to the published ( formal story_separator_special_tag background : the need for empirical investigations in software engineering is growing . many researchers nowadays conduct and validate their solutions using empirical research . the survey is an empirical method which enables researchers to collect data from a large population . the main aim of the survey is to generalize the findings . aims : in this study , we aim to identify the problems researchers face during survey design , as well as mitigation strategies . method : a literature review and semi-structured interviews with nine software engineering researchers were conducted to elicit their views on problems and mitigation strategies . the researchers are all focused on empirical software engineering . results : we identified 24 problems and 65 strategies , structured according to the survey research process . the most commonly discussed problem was sampling , in particular , the ability to obtain a sufficiently large sample . to improve survey instrument design , evaluation and execution , recommendations for question formulation and survey pre-testing were given . the importance of involving multiple researchers in the analysis of survey results was stressed . conclusions : the elicited problems and strategies may serve researchers during the design story_separator_special_tag software development is one of the most important worldwide industries and continues to grow . to deal with this challenge , organizations are adopting ever more tools and methodologies . however , software development projects are still failing in meeting time , budget and functional requirements . this study provides insights on the failures faced by software development organizations regarding their processes , the reasons leading to these failures , and initiatives taken to cope with them . a research methodology was used to gather and compare results from a literature review and semi-structured interviews . we learnt that there are more failures in management activities , although they were not often reported , while failures in requirements engineering and software testing are less in number but more frequently reported . lack of communication , lack of time for improvements and appropriate testing , and poor requirements and functionalities specification were the most commonly reported failures . furthermore , we learnt that organizations are not implementing any initiative to address these failures , although they suggested solutions . story_separator_special_tag the objective of this industry study is to shed light on the current situation and improvement needs in software test automation . to this end , 55 industry specialists from 31 organizational units were interviewed . in parallel with the survey , a qualitative study was conducted in 12 selected software development organizations .
the results indicated that the software testing processes usually follow systematic methods to a large degree , and have only limited immediate or critical needs for resources . based on the results , the testing processes have approximately three fourths of the resources they need , and have access to a limited , but usually sufficient , group of testing tools . as for the test automation , the situation is not as straightforward : based on our study , the applicability of test automation is still limited and its adaptation to testing contains practical difficulties in usability . in this study , we analyze and discuss these limitations and difficulties . story_separator_special_tag this article is the fifth installment of our series of articles on survey research . in it , we discuss what we mean by a population and a sample and the implications of each for survey research . we provide examples of correct and incorrect sampling techniques used in software engineering surveys . story_separator_special_tag software testing is an important component that leads to quality software production . this paper presents the results of a framework for assessing the level of maturity in software testing application in the context of small and medium-sized enterprises ( smes ) based on the tmmi model . our framework includes an evaluation questionnaire based on tmmi sub-practices , support tools with examples of artifacts required to ensure that the questionnaire is thoroughly completed , as well as automated tool support for its application , enabling smes to carry out self-assessment . the framework was applied in ten companies and , based on the results presented , it can be concluded that the companies ' maturity in software testing is low and that the companies positively assessed the adequacy of the framework developed for the context of smes . story_separator_special_tag context : many researchers advocate tailoring agile methods to suit a project 's or company 's specific environment and needs . this includes combining agile methods with more traditional plan-driven practices . story_separator_special_tag software testing is considered to be one of the most important processes in software development for it verifies that the system meets the user requirements and specifications . manual testing and automated testing are two ways of conducting software testing . automated testing gives software testers the ability to automate the process of software testing and is thus considered more effective where time , cost and usability are concerned . there are a wide variety of automated testing tools available , either open source or commercial . this paper provides a comparative review of features of open source and commercial testing tools that may help users to select the appropriate software testing tool based on their requirements . story_separator_special_tag we examine the performance of the two rank order correlation coefficients ( spearman 's rho and kendall 's tau ) for describing the strength of association between two continuously measured traits . we begin by discussing when these measures should , and should not , be preferred over pearson 's product moment correlation coefficient on conceptual grounds .
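for intuition , the snippet below compares the three coefficients on a synthetic monotonic but nonlinear relationship ; it illustrates the conceptual point rather than reproducing the simulations that follow .

```python
# Pearson vs. Spearman vs. Kendall on a monotonic, nonlinear relation.
import numpy as np
from scipy.stats import pearsonr, spearmanr, kendalltau

rng = np.random.default_rng(0)
x = rng.uniform(0, 3, 200)
y = np.exp(x) + rng.normal(0, 1, 200)   # monotonic in x, far from linear

print("pearson :", round(pearsonr(x, y)[0], 3))
print("spearman:", round(spearmanr(x, y)[0], 3))
print("kendall :", round(kendalltau(x, y)[0], 3))
# The rank coefficients are invariant to monotonic transformations of
# either variable; Pearson's r is not.
```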
for testing the null hypothesis of no monotonic association , our simulation studies found that both rank coefficients show similar performance to variants of the pearson product moment measure of association , and provide only slightly better performance than pearson 's measure even if the two measured traits are non-normally distributed . where variants of the pearson measure are not appropriate , there was no strong reason ( based on our results ) to select either of our rank-based alternatives over the other for testing the null hypothesis of no monotonic association . further , our simulation studies indicated that for both rank coefficients there exists at least one method for calculating confidence intervals that supplies results close to the desired level if there are no tied values in the data . in this case , kendall 's coefficient produces consistently narrower confidence story_separator_special_tag there is a documented gap between academic and practitioner views on software testing . this paper tries to close the gap by investigating both views regarding the benefits and limits of test automation . the academic views are studied with a systematic literature review while the practitioners ' views are assessed with a survey , where we received responses from 115 software professionals . the results of the systematic literature review show that the source of evidence regarding benefits and limitations is quite shallow as only 25 papers provide the evidence . furthermore , it was found that benefits often originated from stronger sources of evidence ( experiments and case studies ) , while limitations often originated from experience reports . we believe that this is caused by publication bias of positive results . the survey showed that benefits of test automation were related to test reusability , repeatability , test coverage and effort saved in test executions . the limitations were high initial investments in automation setup , tool selection and training . additionally , 45 % of the respondents agreed that available tools in the market offer a poor fit for their needs . finally , it was found story_separator_special_tag context : software has increased in size and complexity . this has increased the amount of time and money required for testing . many organizations have invested in software test automation ( sta ) expecting to reduce costs and improve the testing process . however , less than 50 % of them reach the expected objectives related to software test automation , mainly due to the lack of a clear understanding of what is involved . objective : there is a documented gap between the academic and practitioners ' points of view about software test automation . this paper has two main objectives . first , evaluating the relevance of 12 critical factors of success ( cfs ) in software test automation collected from the technical literature according to researchers ' views . second , evaluating the impact of each of them on a basic software test automation lifecycle ( bstal ) . method : to achieve the above objectives , we have performed a survey with software test practitioners . each participant was invited by e-mail to answer an electronic survey to evaluate the relevance of cfss . a cutoff value was defined to classify factors according to their relevance levels story_separator_special_tag this work examined the effects of operators ' exposure to various types of automation failures in training .
forty-five participants were trained for 3.5 h on a simulated process control environment . during training , participants either experienced a fully reliable , automatic fault repair facility ( i.e . faults detected and correctly diagnosed ) , a misdiagnosis-prone one ( i.e . faults detected but not correctly diagnosed ) or a miss-prone one ( i.e . faults not detected ) . one week after training , participants were tested for 3 h , experiencing two types of automation failures ( misdiagnosis , miss ) . the results showed that automation bias was very high when operators trained on miss-prone automation encountered a failure of the diagnostic system . operator errors resulting from automation bias were much higher when automation misdiagnosed a fault than when it missed one . differences in trust levels that were instilled by the different training experiences disappeared during the testing session . practitioner summary : the experience of automation failures during training has some consequences . a greater potential for operator errors may be expected when an automatic system fails to diagnose a fault than when it story_separator_special_tag the classical method for identifying cause-effect relationships is to conduct controlled experiments . this paper reports upon the present state of how controlled experiments in software engineering are conducted and the extent to which relevant information is reported . among the 5,453 scientific articles published in 12 leading software engineering journals and conferences in the decade from 1993 to 2002 , 103 articles ( 1.9 percent ) reported controlled experiments in which individuals or teams performed one or more software engineering tasks . this survey quantitatively characterizes the topics of the experiments and their subjects ( number of subjects , students versus professionals , recruitment , and rewards for participation ) , tasks ( type of task , duration , and type and size of application ) and environments ( location , development tools ) . furthermore , the survey reports on how internal and external validity is addressed and the extent to which experiments are replicated . the gathered data reflects the relevance of software engineering experiments to industrial practice and the scientific maturity of software engineering research . story_separator_special_tag as part of the agile transformation of the past few years , we have seen it organizations adopting continuous integration principles in their software delivery lifecycle , which has improved the efficiency of development teams . over time it has been realized that this optimization through continuous integration alone does not make the entire delivery lifecycle efficient or drive organizational efficiency . unless all the pieces of the software delivery lifecycle work like a well-oiled machine , organizational efficiency across the delivery lifecycle can not be achieved . this is the problem that devops tries to address . this paper tries to cover all aspects of devops applicable to the various phases of the sdlc , and specifically talks about the business need , ways to move from continuous integration to continuous delivery , and its benefits . the continuous delivery transformation in this paper is explained with a real-life case study showing how infrastructure can be maintained purely as code ( iaac ) . finally , this paper touches upon various considerations one must evaluate before adopting devops and what kind of benefits one can expect .
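the control flow of the continuous delivery pipeline discussed above can be sketched as a sequence of staged gates ; real pipelines live in ci servers and declarative configs , so the stage implementations below are placeholders .

```python
# Toy sketch of a continuous delivery pipeline as staged gates: any
# failing stage stops the release. Stage bodies are placeholders.
def build():     print("compile + unit tests"); return True
def test():      print("integration + acceptance tests"); return True
def deploy(env): print(f"provision infra as code, release to {env}"); return True

PIPELINE = [("build", build), ("test", test),
            ("staging", lambda: deploy("staging")),
            ("production", lambda: deploy("production"))]

for name, stage in PIPELINE:
    if not stage():                 # a failing gate halts the pipeline
        print(f"pipeline halted at {name}")
        break
else:
    print("release candidate delivered to production")
```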
story_separator_special_tag a wide variety of software metrics focusing on various levels of abstraction and attributes have been recommended by the software research community . several organizations have implemented comprehensive metrics programs to strengthen management decision making and enable continuous improvement of their software engineering processes . in spite of the enormous efforts made to bring about advancements in the field , industry 's adoption of various metrics is still at a basic level and has not changed much over the past 20+ years . moreover , the project manager community still struggles to identify the right set of metrics pertaining to their specific needs and to find guidelines on how to make the right use of selected metric sets . there was a dedicated research effort made by the authors over the past 3+ years to bring together various attributes of interest in software testing life cycle phases along the dimensions of effectiveness and efficiency , with a special focus on the associations amongst these attributes . this research effort produced a software test metrics advisory tool , which project managers for software testing projects can depend upon as an advisory aid while making selection , usage and interpretation of various attributes and associated story_separator_special_tag test automation is becoming critical in the software development process . though it has been widely applied , many are not surprised to find there is a long journey to a mature test automation process . to get continuous improvement and achieve or sustain test automation benefits , organizations need to know what factors can lead to mature test automation and how to assess the current maturity level of test automation in order to identify improvement steps . however , contemporary test maturity models tend to emphasize general testing but give fewer details on test automation , and also lack empirical evidence from the industry to validate the statements that indicate maturity levels . to address the above issues , this study aims to examine what factors lead to a mature test automation process and how to assess the maturity level against them . story_separator_special_tag test automation is important in the software industry but self-assessment instruments for assessing its maturity are not sufficient . the two objectives of this study are to synthesize what an organization should focus on to assess its test automation , and to develop a self-assessment instrument ( a survey ) for assessing test automation maturity and scientifically evaluate it . we carried out the study in four stages . first , a literature review of 25 sources was conducted . second , the initial instrument was developed . third , seven experts from five companies evaluated the initial instrument . content validity index and cognitive interview methods were used . fourth , we revised the developed instrument . our contributions are as follows : ( a ) we collected practices mapped into 15 key areas that indicate where an organization should focus to assess its test automation ; ( b ) we developed and evaluated a self-assessment instrument for assessing test automation maturity ; ( c ) we discuss important topics such as response bias that threaten self-assessment instruments . our results help companies and researchers to understand and improve test automation practices and processes .
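the content validity index mentioned in the abstract above reduces to a simple proportion ; the following is a minimal sketch , assuming the common convention ( lynn , 1986 ) of a 4-point relevance scale on which ratings of 3 or 4 count as `` relevant '' . the ratings themselves are hypothetical .

def item_cvi(ratings):
    """item-level content validity index : the proportion of experts
    who rated the item 3 or 4 on a 1-4 relevance scale."""
    if not ratings:
        raise ValueError("no ratings supplied")
    relevant = sum(1 for r in ratings if r >= 3)
    return relevant / len(ratings)

# hypothetical ratings from seven experts for one survey item
ratings_item_1 = [4, 3, 4, 2, 4, 3, 4]
print(f"i-cvi = {item_cvi(ratings_item_1):.2f}")  # prints i-cvi = 0.86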
story_separator_special_tag like other sciences and engineering disciplines , software engineering requires a cycle of model building , experimentation , and learning . experiments are valuable tools for all software engineers who are involved in evaluating and choosing between different methods , techniques , languages and tools . the purpose of experimentation in software engineering is to introduce students , teachers , researchers , and practitioners to empirical studies in software engineering , using controlled experiments . the introduction to experimentation is provided through a process perspective , and the focus is on the steps that we have to go through to perform an experiment . the book is divided into three parts . the first part provides a background of theories and methods used in experimentation . part ii then devotes one chapter to each of the five experiment steps : scoping , planning , execution , analysis , and result presentation . part iii completes the presentation with two examples . assignments and statistical material are provided in appendixes . overall the book provides indispensable information regarding empirical studies in particular for experiments , but also for case studies , systematic literature reviews , and surveys . it is a
this paper describes the concept of sensor networks which has been made viable by the convergence of micro-electro-mechanical systems technology , wireless communications and digital electronics . first , the sensing tasks and the potential sensor networks applications are explored , and a review of factors influencing the design of sensor networks is provided . then , the communication architecture for sensor networks is outlined , and the algorithms and protocols developed for each layer in the literature are explored . open research issues for the realization of sensor networks are also discussed . story_separator_special_tag localization is an important topic in mobile wireless ad hoc and sensor networks , which has received considerable attention from the research community during the past few decades . in many sensor network applications , location awareness is useful or even necessary . however , because of their key role in wireless sensor networks , localization systems can be the target of an attack that could compromise the entire functioning of a wireless sensor network . in this paper , we present a novel defense mechanism against wormhole attacks in the dv-hop localization algorithm . the main idea of our approach is to plug a proactive countermeasure called infection prevention into the basic dv-hop scheme . we choose the wormhole attack as our defending target because it is a particularly challenging attack that can be successfully launched without compromising any nodes or having access to any cryptographic keys . using analysis and simulation , we show that our solution is effective in detecting and defending against wormhole attacks with a high detection rate . copyright © 2011 john wiley & sons , ltd . story_separator_special_tag sensors ' locations play a critical role in many sensor network applications . a number of techniques have been proposed recently to discover the locations of regular sensors based on a few special nodes called beacon nodes , which are assumed to know their locations ( e.g. , through gps receivers or manual configuration ) . however , none of these techniques can work properly when there are malicious attacks , especially when some of the beacon nodes are compromised . this paper introduces a suite of techniques to detect and remove compromised beacon nodes that supply misleading location information to the regular sensors , aiming at providing secure location discovery services in wireless sensor networks . these techniques start with a simple but effective method to detect malicious beacon signals . to identify malicious beacon nodes and avoid false detection , this paper also presents several techniques to detect replayed beacon signals . this paper then proposes a method to reason about the suspiciousness of each beacon node at the base station based on the detection results collected from beacon nodes , and then revoke malicious beacon nodes accordingly . finally , this paper provides detailed analysis and simulation story_separator_special_tag localization is an active field of research in wireless sensor networks ( wsns ) . information about the exact physical location of the sensor nodes in wsns is useful for various applications , e.g . intrusion detection , target tracking , environmental monitoring and network services . in this paper we present a classification and comparative study of localization algorithms .
the goal of our consideration is to analyze how these localization algorithms work in order to increase the life span of network nodes in harsh environments like oil fields , gas fields , forests , chemical factories and underground mines , and how to find the position of a mobile node with distributed , range-based and beacon-based localization techniques in harsh environments . furthermore , this paper also highlights some issues experienced by these localization techniques . story_separator_special_tag localization is one of the key technologies in wireless sensor networks , and the mobile beacon assisted localization method is promising to reduce the cost . the methods with only one beacon introduce a collinearity problem which degrades performance . this paper proposes a weighted centroid localization method using three mobile beacons . these beacons preserve a special formation while traversing the network deployment area , and broadcast their positions periodically . the sensor nodes to be localized estimate the distances to these beacons , and utilize a weighted centroid localization scheme to calculate their positions . simulation shows that this method is superior to trilateration and to the weighted centroid algorithm with a single mobile beacon . story_separator_special_tag instrumenting the physical world through large networks of wireless sensor nodes , particularly for applications like environmental monitoring of water and soil , requires that these nodes be very small , lightweight , untethered , and unobtrusive . the problem of localization , that is , determining where a given node is physically located in a network , is a challenging one , and yet extremely crucial for many of these applications . practical considerations such as the small size , form factor , cost and power constraints of nodes preclude the reliance on gps of all nodes in these networks . we review localization techniques and evaluate the effectiveness of a very simple connectivity metric method for localization in outdoor environments that makes use of the inherent rf communications capabilities of these devices . a fixed number of reference points in the network with overlapping regions of coverage transmit periodic beacon signals . nodes use a simple connectivity metric , which is more robust to environmental vagaries , to infer proximity to a given subset of these reference points . nodes localize themselves to the centroid of their proximate reference points . the accuracy of localization is then dependent story_separator_special_tag since the deployment of base stations ( bs ) is far from optimum in 3d space , i.e. , the length of the vertical baseline between bs is relatively smaller than that of the plane baseline , the geometric dilution of precision of the altitude estimate is larger than that of the plane location . this paper considers the problem of 3d range location and attempts to improve the altitude estimate . we first use a volume formula of the tetrahedron to transform the range measurements to volume measurements ; then a new pseudo-linear solution is proposed based on the linear relationship between the rectangular and volume coordinates . theoretical analysis and numerical examples are included to show the improved accuracy of the altitude estimate of mobile location . finally , an improved estimate of 3d mobile location is given by solving a set of augmented linear equations . story_separator_special_tag supervisor : dr. lee , victor chung sing ; first reader : dr. huang , scott chih-hao ; second reader : dr.
yu , yuen tak story_separator_special_tag many ad hoc network protocols and applications assume knowledge of the geographic location of nodes . the absolute position of each networked node is an assumed fact by most sensor networks , which can then present the sensed information on a geographical map . finding position without the aid of gps in each node of an ad hoc network is important in cases where gps is either not accessible , or not practical to use due to power , form factor or line of sight conditions . position would also enable routing in sufficiently isotropic large networks , without the use of large routing tables . we are proposing aps , a localized , distributed , hop-by-hop positioning algorithm that works as an extension of both distance vector routing and gps positioning in order to provide approximate positions for all nodes in a network where only a limited fraction of nodes have self-positioning capability . story_separator_special_tag node localization is an important problem for location-dependent applications of wireless sensor networks , and the research has drawn wide attention in recent years . localization can be categorized into range-free or range-based schemes based on whether range information is used during the localization process . because of the hardware limitations of the network devices , solutions in range-free localization are being pursued as a cost-effective alternative to more expensive range-based approaches . dv-hop is one of the range-free localization algorithms using hop-distance estimates . in this paper , we develop a new estimation model and improve the dv-hop algorithm by considering the relationships between the communication ranges and the hop-distances . this scheme needs no additional hardware support and can be implemented in a distributed way . simulation results show the performance of the proposed algorithm is superior to that of the dv-hop algorithm story_separator_special_tag wireless sensor networks ( wsn ) are more and more widely used in many different scenarios , and localization information is an important criterion for the capability of a wsn . nowadays , there are many localization algorithms . dv-hop is a classical range-free localization algorithm , by which unknown nodes can obtain anchors ' information within designated hops , and estimate the distances from themselves to the anchors . unknown nodes then use this information to localize themselves . but the estimated distances may incur large errors , which jeopardize the localization precision . in order to solve this problem , we propose a selective anchor node localization algorithm ( sanla ) for wireless sensor networks in this paper ; it makes unknown nodes choose the three most accurate anchors to execute trilateration . the experimental results illustrate that our algorithm is valid and effective . story_separator_special_tag localization is one of the most important issues in wireless sensor networks ( wsns ) , especially for the applications requiring the accurate position of the sensed information . the traditional dv-hop method can be simply implemented in real wireless sensor networks without any range measurement tools , but it has poor localization accuracy . to overcome this shortcoming of the traditional dv-hop method , an improved dv-hop localization method is proposed in this paper .
the proposed method is derived from the dv-hop method , and adds a correction to the distances between anchor nodes and unknown nodes to improve localization accuracy without increasing the hardware cost of sensor nodes . simulation results show that the localization accuracy of the improved dv-hop method significantly outperforms that of the traditional dv-hop method . story_separator_special_tag in many applications of wireless sensor networks ( wsn ) , sensors are deployed un-tethered in hostile environments . for location-aware wsn applications , it is essential to ensure that sensors can determine their location , even in the presence of malicious adversaries . in this paper we address the problem of enabling sensors of wsn to determine their location in an un-trusted environment . since localization schemes based on distance estimation are expensive for the resource constrained sensors , we propose a range-independent localization algorithm called serloc . serloc is a distributed algorithm and does not require any communication among sensors . in addition , we show that serloc is robust against severe wsn attacks , such as the wormhole attack , the sybil attack and compromised sensors . to the best of our knowledge , ours is the first work that provides a security-aware range-independent localization scheme for wsn . we present a threat analysis and comparison of the performance of serloc with state-of-the-art range-independent localization schemes . story_separator_special_tag localization techniques ; range-free localization ; a beacon-less location discovery scheme for wireless sensor networks ; learning sensor location from signal strength and connectivity ; node localization using mobile robots in delay-tolerant sensor networks ; experiences from the empirical evaluation of two physical layers for node localization ; secure localization ; robust wireless localization : attacks and defenses ; secure and resilient localization in wireless sensor networks ; secure localization for wireless sensor networks using range-independent methods ; travarsel - transmission range variation based secure localization ; secure sequence-based localization for wireless networks ; securing localization in wireless networks ( using verifiable multilateration and covert base stations ) ; distance bounding protocols : authentication logic analysis and collusion attacks ; location privacy in wireless lan ; secure time synchronization ; time synchronization attacks in sensor networks ; secure and resilient time synchronization in wireless sensor networks ; securing timing synchronization in sensor networks . story_separator_special_tag considering the shortcomings of the classic mds-map algorithm in localization precision and algorithm complexity , a distributed localization algorithm based on mds is proposed in this paper . a clustering method is used to build the clusters . the mds algorithm is used in every cluster to obtain local relative coordinates , with the euclidean algorithm used to calculate the distance matrix in this step . then the local maps are combined to form a global relative coordinate map based on matrix translation . finally , the relative coordinates are transformed into absolute coordinates using a few beacon nodes . simulation results demonstrate that the new algorithm can improve localization precision and performs well on low anisotropic topology .
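to make the mechanics shared by the dv-hop variants above concrete , here is a minimal sketch of the classic algorithm 's distance-estimation and trilateration steps ; the anchor coordinates and hop counts are made up for illustration and the code is not taken from any of the cited papers .

import numpy as np

# known anchor positions and pairwise hop counts from the flooding phase
anchors = np.array([[0.0, 0.0], [100.0, 0.0], [50.0, 80.0]])
hops_between_anchors = np.array([[0, 4, 4],
                                 [4, 0, 3],
                                 [4, 3, 0]])

def hop_size(i):
    """anchor i's average hop size : sum of true distances to the other
    anchors divided by the sum of hop counts to them."""
    d = np.linalg.norm(anchors - anchors[i], axis=1)
    return d.sum() / hops_between_anchors[i].sum()

# an unknown node estimates its distance to each anchor as
# hop count * hop size , using the hop size of its nearest anchor
hops_to_anchors = np.array([2, 3, 2])     # hypothetical hop counts at the node
nearest = int(np.argmin(hops_to_anchors))
est_dist = hops_to_anchors * hop_size(nearest)

# linearized least-squares trilateration ( subtract the last equation )
A = 2 * (anchors[:-1] - anchors[-1])
b = (est_dist[-1] ** 2 - est_dist[:-1] ** 2
     + np.sum(anchors[:-1] ** 2, axis=1) - np.sum(anchors[-1] ** 2))
pos, *_ = np.linalg.lstsq(A, b, rcond=None)
print("estimated position :", pos)

the improved variants above keep this skeleton and change how the hop-size correction or the anchor selection is done .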
story_separator_special_tag localization has been a major challenge in wireless sensor networks ( wsns ) , especially for the applications requiring the accurate position of the sensed information . in this paper , we propose a new localization algorithm based on the centroid algorithm and the dv-hop algorithm to improve the positioning accuracy without requiring any extra hardware for sensor nodes . this paper first analyzes the advantages and disadvantages of the centroid algorithm and the dv-hop algorithm . then we put forward an iterated hybrid algorithm , which comprises three steps : firstly , obtaining the initial location of each unknown node by using the centroid algorithm ; secondly , computing the distances from each unknown node to the anchor nodes based on the dv-hop algorithm ; finally , using the taylor series expansion ( tse ) algorithm to estimate the coordinate of each unknown node . simulation results show that our iterated hybrid algorithm has better positioning accuracy . story_separator_special_tag the emergence of wireless sensor networks brought many benefits in different application domains such as collaborative tasks , lower costs , equipment autonomy and higher tolerance to failures . these advantages made the number of applications that use this kind of network grow in the past few years . meanwhile , the possibility of employing these systems to trace the movement of an object , which can be part of the network itself , is of great utility . the present work aims at the study and development of a localization system of mobile nodes for wireless sensor networks . different methods to obtain the distances between network nodes are studied , and received signal strength algorithms are developed to synthesize the data and to show the location of the nodes . finally , simulations and experiments are presented in order to analyze the viability of the developed proposal . story_separator_special_tag localization has been an important topic in data-centric wireless sensor networks ( wsns ) due to the special association between location information and the relevance of sensory data . although most proposed approaches have been modeled for 2d space , a gradual but marked shift in the focus of 3d localization has taken place . while the 3d technique brings wsns closer to reality , its complexity in computation and accuracy can be relatively high . in this paper , we propose a complexity-reduced 3d trilateration localization approach ( cola ) based on rssi values . our goal is to lower the complexity by reducing 3d trilateration to 2d trilateration through the use of super anchor nodes , i.e . ones with pairwise positions whose coordinates differ only in the z-axis . in this paper , we conduct several empirical experiments in an indoor office setting , and evaluate our approach by comparison with a typical 3d trilateration algorithm . the results show that our approach outperforms the typical algorithm and yields more accurate results with lower computational cost . story_separator_special_tag radio signal strength ( rss ) is notorious for being a noisy signal that is difficult to use for ranging-based localization . in this study , we demonstrate that rss can be used to localize a multi-hop sensor network , and we quantify the effects of various environmental factors on the resulting localization error .
we achieve 4.1 m error in a 49-node network deployed in a half-football-field-sized area , demonstrating that rss localization can be a feasible alternative to solutions like gps given the right conditions . however , we also show that this result is highly sensitive to subtle environmental factors such as the grass height , radio enclosure , and elevation of the nodes from the ground . story_separator_special_tag we characterize the fundamental limits of localization using signal strength in indoor environments . signal strength approaches are attractive because they are widely applicable to wireless sensor networks and do not require additional localization hardware . we show that although a broad spectrum of algorithms can trade accuracy for precision , none has a significant advantage in localization performance . we found that using commodity 802.11 technology over a range of algorithms , approaches and environments , one can expect a median localization error of 10 ft and a 97th percentile of 30 ft. we present strong evidence that these limitations are fundamental and that they are unlikely to be transcended without fundamentally more complex environmental models or additional localization infrastructure . story_separator_special_tag localization is one of the most challenging and important issues in wireless sensor networks ( wsns ) , especially if cost-effective approaches are demanded . in this paper , we present , intensively discuss and analyze approaches relying on the received signal strength indicator ( rssi ) . the advantage of employing the rssi values is that no extra hardware ( e.g . ultrasonic or infra-red ) is needed for network-centric localization . we studied different factors that affect the measured rssi values . finally , we evaluate two methods to estimate the distance ; the first approach is based on statistical methods , while for the second one , we use an artificial neural network to estimate the distance . story_separator_special_tag we present algorithms for estimating the location of stationary and mobile users based on heterogeneous indoor rf technologies . we propose two location algorithms , selective fusion location estimation ( selfloc ) and region of confidence ( roc ) , which can be used in conjunction with classical location algorithms such as triangulation , or with third-party commercial location estimation systems . the selfloc algorithm infers the user location by selectively fusing location information from multiple wireless technologies and/or multiple classical location algorithms in a theoretically optimal manner . the roc algorithm attempts to overcome the problem of aliasing in the signal domain , where different physical locations have similar rf characteristics , which is particularly acute when users are mobile . we have empirically validated the proposed algorithms using wireless lan and bluetooth technology . our experimental results show that applying selfloc for stationary users when using multiple wireless technologies and multiple classical location algorithms can improve location accuracy significantly , with mean distance errors as low as 1.6 m. for mobile users we find that using roc can allow us to obtain mean errors as low as 3.7 m. both algorithms can be used in conjunction with a story_separator_special_tag with the advances in wireless communications and low-power electronics , accurate position location may now be accomplished by a number of techniques which involve commercial wireless services .
emerging position location systems , when used in conjunction with mobile communications services , will lead to enhanced public safety and revolutionary products and services . the fundamental technical challenges and business motivations behind wireless position location systems are described , and promising techniques for solving the practical position location problem are treated . story_separator_special_tag a mobile ad hoc network ( manet ) is a collection of mobile hosts that form a temporary network on the fly without using any fixed infrastructure . recently , the explosive growth in the use of real-time applications on mobile devices has resulted in new challenges to the design of protocols for manets . chief among these challenges to enabling real-time applications for manets is incorporating support for quality of service ( qos ) , such as bandwidth constraints . however , the high rate of topology change in manets makes routing especially unstable ; ensuring stability is therefore an important challenge , especially for routing with a quality of service provision . in this paper , we propose a reliable multi-path qos routing ( rmqr ) protocol with a slot assignment scheme . in this scheme , we examine the qos routing problem associated with searching for a reliable multi-path ( or uni-path ) qos route from a source node to a destination node in a manet . this route must also satisfy certain bandwidth requirements . we determine the route expiration time between two connected mobile nodes using the global positioning system ( gps ) . then , two story_separator_special_tag awareness of the physical location of each node is required by many wireless sensor network applications . the discovery of the position can be realized utilizing range measurements including received signal strength , time of arrival , time difference of arrival and angle of arrival . in this paper , we focus on localization techniques based on angle of arrival information between neighbor nodes . we propose a new localization and orientation scheme that considers beacon information multiple hops away . the scheme is derived under the assumption of noisy angle measurements . we show that the proposed method achieves very good accuracy and precision despite inaccurate angle measurements and a small number of beacons story_separator_special_tag over the past decade , wireless sensor networks have advanced in terms of hardware design , communication protocols , and resource efficiency . recently , there has been growing interest in mobility , and several small-profile sensing devices that control their own movement have been developed . unfortunately , resource constraints inhibit the use of traditional navigation methods because these typically require bulky , expensive sensors , substantial memory , and a generous power supply . therefore , alternative navigation techniques are required . in this paper , we present a navigation system implemented entirely on resource-constrained sensors . localization is realized using triangulation in conjunction with radio interferometric angle-of-arrival estimation . a digital compass is employed to keep the mobile node on the desired trajectory . we also present a variation of the approach that uses a kalman filter to estimate heading without using the compass . we demonstrate that a resource-constrained mobile sensor can accurately perform waypoint navigation with an average position error of 0.95 m .
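a minimal sketch of bearing-only ( angle-of-arrival ) triangulation in the spirit of the aoa localization schemes above : each anchor measures a noisy bearing to the node , and the node is recovered as the least-squares intersection of the bearing lines . all positions and the noise level are invented for illustration .

import numpy as np

rng = np.random.default_rng(0)
anchors = np.array([[0.0, 0.0], [60.0, 0.0], [30.0, 50.0]])
true_pos = np.array([25.0, 20.0])

# noisy bearings from each anchor to the node ( radians )
diff = true_pos - anchors
bearings = np.arctan2(diff[:, 1], diff[:, 0]) + rng.normal(0.0, 0.02, size=3)

# the node lies on each anchor's bearing line ; stacking the perpendicular
# constraints n_i . ( x - a_i ) = 0 gives a small least-squares problem
n = np.column_stack([-np.sin(bearings), np.cos(bearings)])   # normals to the rays
b = np.sum(n * anchors, axis=1)
est, *_ = np.linalg.lstsq(n, b, rcond=None)
print("true :", true_pos, "estimated :", est)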
story_separator_special_tag with the development of research on wireless sensor networks ( wsns ) , localization in wsns has become a very important research topic . at present , there are mainly two kinds of approaches , range-based and range-free ; the localization precision of range-based approaches is higher than that of range-free approaches . among the range-based approaches , the time difference of arrival ( tdoa ) method places weaker requirements on time synchronization . this paper proposes a high precision localization algorithm based on tdoa , which utilizes a rolling average of the time differences to decrease the measurement error , and adopts an unconstrained least squares ( ls ) estimator to achieve accurate localization . simulation results and error analysis prove its validity . story_separator_special_tag localization in wireless sensor networks is getting more and more important , because many applications need to locate the source of incoming measurements as precisely as possible . weighted centroid localization ( wcl ) provides a fast and easy algorithm to locate devices in wireless sensor networks . the algorithm is derived from a centroid determination which calculates the position of devices by averaging the coordinates of known reference points . to improve the calculated position in real implementations , wcl uses weights to attract the estimated position to close reference points , provided that coarse distances are available . due to the fact that zigbee provides the link quality indication ( lqi ) as a quality indicator of a received packet , it can also be used to estimate a distance from a node to reference points . story_separator_special_tag from the publisher : this invaluable reference offers the most comprehensive introduction available to the concepts of multisensor data fusion . it introduces key algorithms , provides advice on their utilization , and raises issues associated with their implementation . with a diverse set of mathematical and heuristic techniques for combining data from multiple sources , the book shows how to implement a data fusion system , describes the process for algorithm selection , functional architectures and requirements for ancillary software , and illustrates man-machine interface requirements and database issues . story_separator_special_tag this paper proposes a new type of range-free localization method based on affine transformation . nodes extract subgraphs with a grid topology from a sensor network and assign x-y coordinates to themselves in a decentralized manner . the nodes estimate their positions using an affine transformation based on the mapping of the physical positions and the x-y coordinates of three anchors in an extracted graph . in contrast with multilateration-based localization methods , the proposed method works well even in a non-convex hull deployment , such as a terrain with big regions without sensors . we provide a theoretical analysis and simulation results . we also present a strategy for minimizing the position estimation error and maximizing the coverage of the proposed method . in the simulation results , the position estimation error is 0.18 ( normalized by the radio communication range ) and the coverage is almost 100 % in a non-convex hull deployment .
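the weighted centroid localization ( wcl ) scheme described above reduces to a few lines ; the following sketch assumes a log-distance path-loss model to turn received signal strength ( or lqi-derived ) readings into coarse distances , with all constants ( d0 , p0 , eta and the weighting degree g ) chosen purely for illustration .

import numpy as np

beacons = np.array([[0.0, 0.0], [40.0, 0.0], [40.0, 40.0], [0.0, 40.0]])
rssi = np.array([-58.0, -70.0, -81.0, -72.0])   # hypothetical received powers ( dbm )

# log-distance path-loss inversion : d = d0 * 10 ** ( ( p0 - rssi ) / ( 10 * eta ) )
d0, p0, eta = 1.0, -40.0, 2.5
dist = d0 * 10 ** ((p0 - rssi) / (10 * eta))

# weighted centroid : closer beacons pull harder ; g = 0 gives the plain centroid
g = 1.0
w = 1.0 / dist ** g
est = (w[:, None] * beacons).sum(axis=0) / w.sum()
print("wcl estimate :", est)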
in this paper we present results for two tasks : social event detection and social network extraction from a literary text , alice in wonderland . for the first task , our system trained on a news corpus using tree kernels and support vector machines beats the baseline systems by a statistically significant margin . using this system we extract a social network from alice in wonderland . we show that while we achieve an f-measure of about 61 % on social event detection , our extracted unweighted network is not statistically distinguishable from the unweighted gold network according to popularly used network measures . story_separator_special_tag affect is a transient phenomenon , with emotions tending to blend and interact over time [ 4 ] . this paper discusses emotional distributions in child-directed texts . it provides statistical evidence for the relevance of emotional sequencing , and evaluates trends of emotional story development , based on annotation statistics on 22 grimms ' fairy tales which form part of a larger on-going text-annotation project that is also introduced . the study is motivated by the need for exploring features for text-based emotion prediction at the sentence level , for use in expressive text-to-speech synthesis of children 's stories . story_separator_special_tag beatie ( 1979 ) has expressed disappointment that the use of computers in literary scholarship since the nineteen-sixties has not had the positive impact that originally had been predicted . in fact , the main application of computer assistance has been to the rather straightforward tasks of preparing word counts and text concordances . other procedures that have been developed to aid in literary research have suffered from the limitation of difficulty in application to studies other than the one for which the procedure was first developed . thus the procedures have not become standard ones with which others could check the research findings themselves or readily apply to the examination of other materials . an attempt to identify general factors determining the attractiveness of aesthetic objects has been made in the work of berlyne ( 1960 , 1972 ) . the identification of the motivational effects of variables like novelty , surprisingness , uncertainty , conflict , and complexity has permitted comparative studies . the resulting effects are examined in the context of the arousal theory of motivation , according to which organisms strive to attain optimal levels of stimulation . in another paper , dember ( 1965 ) argued story_separator_special_tag the objective of this work is not to replicate subjective impressions and certainly not to supplant them , but to explore means by which the second dimension of literary impact , qualities of emotional expression , can be objectively studied through the collection and display of measures made possible by the computer . with that goal , this paper has illustrated two approaches to the analysis and display of three fundamental emotional tone scores . the first is the production of a combined score , tension , which has been derived from previous studies of literary text and criterion passages . the second approach is the generation of transition graphs which identify the emotional state of passages of text according to the categories proposed in mehrabian 's theoretical system .
both of these approaches to the modeling of emotional tone scores generate meaningful displays of data which can be used in objective comparisons of different stories and which lead to fresh interpretations of the reasons for their impact on a reader . they can be applied to actual samples of the kind of literature that is spontaneously read for pleasure in addition to being of interest for analytic purposes . story_separator_special_tag a revised version of the tale of peter rabbit was published in 1987 by ladybird press with simplified text and alteration of the illustrations of the 1902 original . revisions of two other of the beatrix potter stories were subsequently published . by applying a computer system that detects words with connotative meaning scores and applying mathematical models to the stories , it was possible to make an objective comparison of the emotional tone patterns in the original and the revised versions . as would be expected , the initial editions of the three stories were quite different from each other . the emotional tone pattern of peter rabbit and one of the other stories was shown to have been altered considerably by the simplification of text , which lends objective support to the outcry in the popular press against these changes . story_separator_special_tag predicting the success of literary works is a curious question among publishers and aspiring writers alike . we examine the quantitative connection , if any , between writing style and successful literature . based on novels across several different genres , we probe the predictive power of statistical stylometry in discriminating successful literary works , and identify characteristic stylistic elements that are more prominent in successful writings . our study reports for the first time that statistical stylometry can be surprisingly effective in discriminating highly successful literature from its less successful counterpart , achieving accuracy up to 84 % . closer analyses lead to several new insights into characteristics of the writing style in successful literature , including findings that are contrary to the conventional wisdom with respect to good writing style and readability . story_separator_special_tag in this work we present sentiwordnet 3.0 , a lexical resource explicitly devised for supporting sentiment classification and opinion mining applications . sentiwordnet 3.0 is an improved version of sentiwordnet 1.0 , a lexical resource publicly available for research purposes , currently licensed to more than 300 research groups and used in a variety of research projects worldwide . both sentiwordnet 1.0 and 3.0 are the result of automatically annotating all wordnet synsets according to their degrees of positivity , negativity , and neutrality . sentiwordnet 1.0 and 3.0 differ ( a ) in the versions of wordnet which they annotate ( wordnet 2.0 and 3.0 , respectively ) , and ( b ) in the algorithm used for automatically annotating wordnet , which now includes ( in addition to the previous semi-supervised learning step ) a random-walk step for refining the scores . we here discuss sentiwordnet 3.0 , especially focusing on the improvements concerning aspect ( b ) that it embodies with respect to version 1.0.
we also report the results of evaluating sentiwordnet 3.0 against a fragment of wordnet 3.0 manually annotated for positivity , negativity , and neutrality ; these results indicate accuracy improvements of about 20 story_separator_special_tag the current study investigated whether fiction experiences change the empathy of the reader . based on transportation theory , it was predicted that when people read fiction and are emotionally transported into the story , they become more empathic . two experiments showed that empathy was influenced over a period of one week for people who read a fictional story , but only when they were emotionally transported into the story . no transportation led to lower empathy in both studies , while study 1 showed that high transportation led to higher empathy among fiction readers . these effects were not found for people in the control condition , where people read non-fiction . the study showed that fiction influences the empathy of the reader , but only under the condition of low or high emotional transportation into the story . story_separator_special_tag the present study provides evidence that valence focus and arousal focus are important processes in determining whether a dimensional or a discrete emotion model best captures how people label their emotional experiences . story_separator_special_tag introduction : the two thousand year old assumption ; chapter 1 : the search for emotion 's `` fingerprints '' ; chapter 2 : emotions are constructed ; chapter 3 : the myth of universal emotions ; chapter 4 : the origin of feeling ; chapter 5 : concepts , goals , and words ; chapter 6 : how the brain makes emotions ; chapter 7 : emotions as a social reality ; chapter 8 : a new view of human nature ; chapter 9 : mastering your emotions ; chapter 10 : emotions and illness ; chapter 11 : emotion and the law ; chapter 12 : is a growling dog angry ? ; chapter 13 : from brain to mind : the new frontier ; acknowledgments ; appendix a : brain basics ; appendix b : supplement for chapter 2 ; appendix c : supplement for chapter 3 ; appendix d : evidence for the concept cascade ; bibliography ; notes ; illustration credits ; index story_separator_special_tag in this work , we describe an experiment on the categorization of poems based on their emotional content , which is automatically measured . for that purpose , we center on the poetry of francisco de quevedo and a well-known sentiment categorization of it . thereby , we explore how emotions can help in the classification process . the goal was to verify whether the information about emotional content can be used to build classifiers reproducing that categorization . story_separator_special_tag classroom teachers at all levels and subject areas need effective instructional strategies as they are increasingly encouraged to use literature with their students . this article shares strategies for helping readers to `` make emotional meaning '' with authentic stories . essentially , the strategies help readers become aware of the emotional state of story characters and teach them ways to use this knowledge in their efforts to comprehend the stories they read .
in literature as exploration ( 1983 ) and elsewhere , rosenblatt makes the point that developing instructional strategies to help students `` clarify and enlarge '' their initial responses to stories is one of the central roles of the literature teacher ( p. 76 ) . such strategies require readers to bring something from their own experience to a story ( beach & hynds , 1989 ; many & wiseman , 1992 ) , along with what rosenblatt refers to as `` a keener and more adequate perception of all that the text offers '' ( p. 77 ) . in this case , readers learn to use their own experience with human emotions in combination with ex- story_separator_special_tag across the university , the way in which we pursue research is changing , and digital technology is playing a significant part in that change . indeed , it is becoming more and more evident that research is increasingly being mediated through digital technology . many argue that this mediation is slowly beginning to change what it means to undertake research , affecting both the epistemologies and ontologies that underlie a research programme ( sometimes conceptualised as close versus distant reading , see moretti 2000 ) . of course , this development is variable depending on disciplines and research agendas , with some more reliant on digital technology than others , but it is rare to find an academic today who has had no access to digital technology as part of their research activity . library catalogues are now probably the minimum way in which an academic can access books and research articles without the use of a computer , but , with card indexes dying a slow and certain death ( baker 1996 , 2001 ) , there remain few outputs for the non-digital scholar to undertake research in the modern university . email , google searches story_separator_special_tag the words we use in everyday language reveal our thoughts , feelings , personality , and motivations . linguistic inquiry and word count ( liwc ) is a software program to analyse text by counting words in 66 psychologically meaningful categories that are catalogued in a dictionary of words . this article presents the dutch translation of the dictionary that is part of the liwc 2007 version . it describes and explains the liwc instrument and it compares the dutch and english dictionaries on a corpus of parallel texts . the dutch and english dictionaries were shown to give similar results in both languages , except for a small number of word categories . correlations between word counts in the two languages were high to very high , while effect sizes of the differences between word counts were low to medium . the liwc 2007 categories can now be used to analyse dutch language texts . story_separator_special_tag we address the challenge of sentiment analysis from visual content . in contrast to existing methods which infer sentiment or emotion directly from visual low-level features , we propose a novel approach based on understanding of the visual concepts that are strongly related to sentiments . our key contribution is two-fold : first , we present a method built upon psychological theories and web mining to automatically construct a large-scale visual sentiment ontology ( vso ) consisting of more than 3,000 adjective noun pairs ( anp ) . second , we propose sentibank , a novel visual concept detector library that can be used to detect the presence of 1,200 anps in an image .
the vso and sentibank are distinct from existing work and will open a gate towards various applications enabled by automatic sentiment analysis . experiments on detecting sentiment of image tweets demonstrate significant improvement in detection accuracy when comparing the proposed sentibank-based predictors with the text-based approaches . the effort also leads to a large publicly available resource consisting of a visual sentiment ontology , a large detector library , and the training/testing benchmark for visual sentiment analysis . story_separator_special_tag the amount of digital text data available in online libraries has risen dramatically in recent years . googlebooks or the universal digital library ( udl ) initiatives illustrate this impressively . the rapid evolution of vast digital text data archives has spurred the growth of an interdisciplinary digital humanities ( dh ) community ; as [ 1 ] puts it , the once inaccessible has suddenly become accessible . researchers in the humanities and social sciences have recognized the big potential digital text archives might offer to gain new insights on long-standing research questions . especially interesting are unstructured or semi-structured digital libraries in this context , as text documents have been central to the humanities and social sciences long before digitization . alongside these developments , the need for automatically extracting new knowledge from text corpora using advanced data mining methods and information reduction techniques has risen as well . text corpora also bear exciting research avenues for spatially aware disciplines and research fields , including geography , giscience , and the interdisciplinary geographic information retrieval ( gir ) community . this is because text documents often contain explicit and implicit spatio-temporal and thematic information , which story_separator_special_tag bored versus stressed subjects were provided with opportunities to watch television . bored subjects more frequently selected exciting than relaxing programs , while stressed subjects selected similar quantities of each program type . story_separator_special_tag we here describe a novel methodology for measuring affective language in historical text by expanding an affective lexicon and jointly adapting it to prior language stages . we automatically construct a lexicon for word-emotion association of 18th and 19th century german which is then validated against expert ratings . subsequently , this resource is used to identify distinct emotional patterns and trace long-term emotional trends in different genres of writing spanning several centuries . story_separator_special_tag similarly , in natural language processing ( nlp ) , emotion analytics have developed into an active area of research ( liu , 2015 ) . nevertheless , there is little previous work explicitly addressing emotion in historical language and the specific methodological problems this raises . hamilton et al . ( 2016 ) as well as cook and stevenson ( 2010 ) presented methods for identifying amelioration and pejoration of words . acerbi et al . ( 2013 ) and bentley et al . ( 2014 ) demonstrated the potential of emotion analysis for the digital humanities ( dh ) by linking temporal emotion patterns in texts to major sociopolitical events and trends in the 20th century . story_separator_special_tag human emotions and their modelling are increasingly understood to be a crucial aspect in the development of intelligent systems .
over the past years , in fact , the adoption of psychological models of emotions has become a common trend among researchers and engineers working in the sphere of affective computing . because of the elusive nature of emotions and the ambiguity of natural language , however , psychologists have developed many different affect models , which often are not suitable for the design of applications in fields such as affective hci , social data mining , and sentiment analysis . to this end , we propose a novel biologically-inspired and psychologically-motivated emotion categorisation model that goes beyond mere categorical and dimensional approaches . such a model represents affective states both through labels and through four independent but concomitant affective dimensions , which can potentially describe the full range of emotional experiences that are rooted in any of us . story_separator_special_tag this paper presents visualizations to facilitate users ' ability to understand personal narratives in the historical and sociolinguistic context in which they occurred . the visualizations focus on several elements of narrative time , space , and emotion to explore oral testimonies of korean comfort women , women who were forced into sexual slavery by the japanese military during world war ii . the visualizations were designed to enable viewers to easily spot similarities and differences in life paths among individuals and also form an integrated view of spatial , temporal and emotional aspects of narrative . by exploring the narratives through the interactive interfaces , these visualizations facilitate users ' understanding of the unique identities and experiences of the comfort women , in addition to their collective and shared story . visualizations of this kind could be integrated into a toolkit for humanities scholars to facilitate exploration and analysis of other historical narratives , and thus serve as windows to intimate aspects of the past . story_separator_special_tag this essay works at the empirical level to isolate a series of technical problems , logical fallacies , and conceptual flaws in an increasingly popular subfield in literary studies variously known as cultural analytics , literary data mining , quantitative formalism , literary text mining , computational textual analysis , computational criticism , algorithmic literary studies , social computing for literary studies , and computational literary studies ( the phrase i use here ) . in a nutshell , the problem with computational literary analysis as it stands is that what is robust is obvious ( in the empirical sense ) and what is not obvious is not robust , a situation not easily overcome given the nature of literary data and the nature of statistical inquiry . there is a fundamental mismatch between the statistical tools that are used and the objects to which they are applied . digital humanities ( dh ) , a field of study which can encompass subjects as diverse as histories of media and early computational practices , the digitization of texts for open access , digital inscription and mediation , and computational linguistics and lexicology , and technical papers on data mining , story_separator_special_tag one reason why the debate about emotion runs into a dead end is that the second historical source which considered emotion as a cognitive function in the late 19th century was forgotten in anglo-american psychology .
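several of the abstracts above ( liwc , the historical affective lexicon , sentiwordnet ) rest on the same basic operation : counting the words of a passage that fall into predefined emotion categories . a minimal sketch follows , with a tiny invented lexicon standing in for the real resources .

from collections import Counter
import re

# toy lexicon ; real resources such as liwc map thousands of words to categories
lexicon = {
    "joy": {"happy", "delight", "merry"},
    "fear": {"afraid", "dread", "terror"},
    "sadness": {"weep", "sorrow", "grief"},
}

def emotion_profile(text):
    """normalized per-category word counts for one passage."""
    tokens = re.findall(r"[a-z']+", text.lower())
    counts = Counter()
    for tok in tokens:
        for category, words in lexicon.items():
            if tok in words:
                counts[category] += 1
    total = max(len(tokens), 1)
    return {c: counts[c] / total for c in lexicon}

print(emotion_profile("she could not help but weep , full of sorrow and dread ."))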
story_separator_special_tag an experiment tested the hypothesis that art can cause significant changes in the experience of one 's own personality traits under laboratory conditions . after completing a set of questionnaires , including the big-five inventory ( bfi ) and an emotion checklist , the experimental group read the short story the lady with the toy dog by chekhov , while the control group read a comparison text that had the same content as the story , but was documentary in form . the comparison text was controlled for length , readability , complexity , and interest level . participants then completed the bfi and emotion checklist again , randomly placed within a larger set of questionnaires . the results show the experimental group experienced significantly greater change in self-reported experience of personality traits than the control group , and that emotion change mediated the effect of art on traits . further consideration should be given to the role of art in the facilitation of processes of personality growth and maturation . story_separator_special_tag the potential of literature to increase empathy was investigated in an experiment . participants ( n = 100 , 69 women ) completed a package of questionnaires that measured lifelong exposure to fiction and nonfiction , personality traits , and affective and cognitive empathy . they read either an essay or a short story that were equivalent in length and complexity , were tested again for cognitive and affective empathy , and were finally given a non-self-report measure of empathy . participants who read the short story and who were also low in openness experienced significant increases in self-reported cognitive empathy ( p < .05 ) . no increases in affective empathy were found . participants who were frequent fiction-readers had higher scores on the non-self-report measure of empathy . our results suggest a role for fictional literature in facilitating development of empathy . story_separator_special_tag cross-cultural research on facial expression and the development of methods to measure facial expression are briefly summarized . what has been learned about emotion from this work on the face is then elucidated . four questions about facial expression and emotion are discussed . what information does an expression typically convey ? can there be emotion without facial expression ? can there be a facial expression of emotion without emotion ? how do individuals differ in their facial expressions of emotion ? story_separator_special_tag observers in both literate and preliterate cultures chose the predicted emotion for photographs of the face , although agreement was higher in the literate samples . these findings suggest that the pan-cultural element in facial displays of emotion is the association between facial muscular movements and discrete primary emotions , although cultures may still differ in what evokes an emotion , in rules for controlling the display of emotion , and in behavioral consequences . story_separator_special_tag better representations of plot structure could greatly improve computational methods for summarizing and generating stories . current representations lack abstraction , focusing too closely on events . we present a kernel for comparing novelistic plots at a higher level , in terms of the cast of characters they depict and the social relationships between them .
our kernel compares the characters of different novels to one another by measuring their frequency of occurrence over time and the descriptive and emotional language associated with them . given a corpus of 19th-century novels as training data , our method can accurately distinguish held-out novels in their original form from artificially disordered or reversed surrogates , demonstrating its ability to robustly represent important aspects of plot structure . story_separator_special_tag we present a method for extracting social networks from literature , namely , nineteenth-century british novels and serials . we derive the networks from dialogue interactions , and thus our method depends on the ability to determine when two characters are in conversation . our approach involves character name chunking , quoted speech attribution and conversation detection given the set of quotes . we extract features from the social networks and examine their correlation with one another , as well as with metadata such as the novel 's setting . our results provide evidence that the majority of novels in this time period do not fit two characterizations provided by literary scholars . instead , our results suggest an alternative explanation for differences in social networks . story_separator_special_tag along with the discovery of new antibiotics came the development of new chromatographic analytical techniques . one of these that has been refined for rapid and precise detection and identification is high-performance liquid chromatography ( hplc ) . journal of chromatography library , volume 26 contains a significant amount of methodology and data based upon hplc . columns are packed with particles having an average diameter of less than 50 μm , the velocity of the mobile phase is increased by means of a high inlet pressure , and sensitive sample detectors are applied to the column effluent . this valuable tool is included in this latest volume for numerous antibiotics for which it has been reported . likewise , there is a significant increase in the text of data obtained by thin-layer chromatography , whenever it has been reported . modifications of methods to increase the speed of paper or thin-layer separations have also resulted in new data being presented in this volume . new chromatographic methods that have been developed are reported for the numerous penicillins and cephalosporins that have appeared . story_separator_special_tag recent work in literary sentiment analysis has suggested that shifts in emotional valence may serve as a reliable proxy for plot movement in novels . the raw sentiment time series of a novel can now be extracted using a variety of different methods , and after extraction , filtering is commonly used to smooth the irregular sentiment time series . using an adaptive filter , which is among the most effective in determining trends of a signal , reducing noise , and performing fractal and multifractal analysis , we show that the energy of the smoothed sentiment signals decays with the smoothing parameter as a power law , characterized by a hurst parameter h of 1/2 < h < 1 , which signifies long-range correlations . we further show that a smoothed sentiment arc corresponds to the sentiment of fast playing mode or sentiment retained in one 's memory , and that for a novel to be both captivating and rich , h has to be larger than 1/2 but cannot be too close to 1 .
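the smoothing step in the sentiment-arc abstract above can be illustrated with a plain moving average ; the paper itself uses an adaptive filter , so this stand-in only shows how the energy of the smoothed signal falls as the smoothing window grows , on a toy long-range-correlated series rather than a real novel 's sentiment scores .

import numpy as np

rng = np.random.default_rng(1)
raw = np.cumsum(rng.normal(0.0, 1.0, 2000))   # toy long-range-correlated series
raw = raw - raw.mean()

def smooth(x, w):
    """moving-average smoothing with window w."""
    return np.convolve(x, np.ones(w) / w, mode="valid")

for w in (1, 10, 50, 200):
    s = smooth(raw, w)
    print(f"window {w:>3} : energy per sample = {np.mean(s ** 2):.1f}")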
story_separator_special_tag within the discipline of psychology , the conventional history outlines the development of two fundamental approaches to the scientific study of emotion -- the basic emotion and appraisal traditions . in this article , we outline the development of a third approach to emotion that exists in the psychological literature -- the psychological constructionist tradition . in the process , we discuss a number of works that have virtually disappeared from the citation trail in psychological discussions of emotion . we also correct some misconceptions about early sources , such as work by darwin and james . taken together , these three contributions make for a fuller and more accurate account of ideas about emotion during the century stretching from 1855 to just before 1960 . story_separator_special_tag it is widely believed that certain emotions are universally recognized in facial expressions . recent evidence indicates that western perceptions ( e.g. , scowls as anger ) depend on cues to u.s. emotion concepts embedded in experiments . because such cues are standard features in methods used in cross-cultural experiments , we hypothesized that evidence of universality depends on this conceptual context . in our study , participants from the united states and the himba ethnic group from the kunene region of northwestern namibia sorted images of posed facial expressions into piles by emotion type . without cues to emotion concepts , himba participants did not show the presumed `` universal '' pattern , whereas u.s. participants produced a pattern with presumed universal features . with cues to emotion concepts , participants in both cultures produced sorts that were closer to the presumed `` universal '' pattern , although substantial cultural variation persisted . our findings indicate that perceptions of emotion are not universal , but depend on cultural and conceptual contexts . story_separator_special_tag abela , joan . hospitaller malta and the mediterranean economy in the sixteenth century . woodbridge : boydell press , 2018 , 266 pp. , £75.00 ( hardback ) , isbn 978183272112 . akın , yiğit . when the war came home : the ottomans ' great war and the devastation of an empire . stanford : stanford university press , 2018 , 288 pp. , us $ 27.95 ( paperback ) , isbn 9781503603639 . ashbrook harvey , susan and margaret mullett , eds . knowing bodies , passionate souls : sense perceptions in byzantium . cambridge , mass : harvard university press , 2017 , 342 pp. , 59 colour illus. , 3 halftones , 9 line illus. , 1 table . us $ 70.00 / £55.95 / €63.00 ( hardback ) , isbn 9780884024217 . astor , avi . rebuilding islam in contemporary spain : the politics of mosque establishment , 1976-2013 . eastbourne-portland : sussex academic press , 2017 , 256 pp. , £65.00 / us $ 79.95 , isbn 978-1-84519-894-7 . bartal , renana , neta bodner , and bianca kühnel , eds . natural materials of the holy land and the visual translation of place , 500-1500 . abingdon : routledge , 2017 story_separator_special_tag london , 2-4 july 2015 . the 41st annual conference was held in london at the polish social and cultural association ( posk ) and the university women 's club , over july 2-4 , 2015 .
participants presented thirty-three papers on topics such as the windows in heart of darkness , the volcanoes in victory , the wolf in `` prince roman , '' meteorological informatics in lord jim , food as cultural narrative in almayer 's folly , maupassant translations , and video game adaptation of conrad ( `` conrad on consoles '' ) . a highlight of the conference was the philip conrad memorial lecture , chaired by keith carabine and given by laurence davies on the first day . the speech , titled `` conrad and contingency , or 'the incomprehensible logic of accident , ' '' explored the selected letters of joseph conrad ( 2015 ) , a summarizing volume following the nine volumes of the collected letters of joseph conrad ( 1983-2008 ) . this entertaining and thoughtful lecture connected well with saturday 's after-dinner speech , by rick gekoski , who revealed the anonymous source of many of these story_separator_special_tag explanatory accounts of emotion require , among other things , theoretically tractable representations of emotional experience . common methods for producing such representations have well-known drawbacks . story_separator_special_tag literature provides us with otherwise unavailable insights into the ways emotions are produced , experienced and enacted in human social life . it is particularly valuable because it deepens our comprehension of the mutual relations between emotional response and ethical judgment . these are the central claims of hogan 's study , which carefully examines a range of highly esteemed literary works in the context of current neurobiological , psychological , sociological and other empirical research . in this work , he explains the value of literary study for a cognitive science of emotion and outlines the emotional organization of the human mind . he explores the emotions of romantic love , grief , mirth , guilt , shame , jealousy , attachment , compassion and pity -- in each case drawing on one work by shakespeare and one or more works by writers from different historical periods or different cultural backgrounds , such as the eleventh-century chinese poet li ch'ing-chao and the contemporary nigerian playwright wole soyinka .
story_separator_special_tag we present emofiel , a system that identifies characters and scenes in a story from a fictional narrative summary , generates appropriate scene descriptions , identifies the emotion flow between a given directed pair of story characters in each interaction , and organizes them along the story timeline . these emotions are identified using two emotion modelling approaches : categorical and dimensional emotion models . the generated plots show that in a particular scene , two characters can share multiple emotions together with different intensities . furthermore , the directionality of the emotion can be captured as well , depending on which character is more dominant in each interaction . emofiel provides a web-based gui that allows users to query the annotated stories to explore the emotion mapping of a given character pair throughout a given story , and to explore scenes for which a certain emotion peaks . story_separator_special_tag abstract theorists from diverse disciplines purport that narrative fiction serves to foster empathic development and growth . in two studies , participants ' subjective , behavioral , and perceptual responses were observed after reading a short fictional story . in study 1 , participants who were more transported into the story exhibited higher affective empathy and were more likely to engage in prosocial behavior . in study 2 , reading-induced affective empathy was related to greater bias toward subtle , fearful facial expressions , decreased perceptual accuracy of fearful expressions , and a higher likelihood of engaging in prosocial behavior . these effects persisted after controlling for an individual 's dispositional empathy and general tendency to become absorbed in a story . this study provides an important initial step in empirically demonstrating the influence of reading fiction on empathy , emotional perception , and prosocial behavior . story_separator_special_tag andrew ortony & gerald l. clore ( 1989 ) , emotions , moods , and conscious awareness : comment on johnson-laird and oatley 's the language of emotions : an analysis of a semantic field , cognition and emotion , 3:2 , 125-137 , doi : 10.1080/02699938908408076 . story_separator_special_tag the literature review reveals different conceptual and methodological challenges in the field of music and emotion , such as the lack of agreement in terms of standardized datasets , and the need for . story_separator_special_tag we introduce the concept of sentiment profiles , representations of emotional content in texts and the sentiprofiler system for creating and visualizing such profiles . we also demonstrate the practical applicability of the system in literary research by describing its use in analyzing novels in the gothic fiction genre . our results indicate that the system is able to support literary research by providing valuable insights into the emotional content of gothic novels . story_separator_special_tag most approaches to emotion analysis in fictional texts focus on detecting the emotion expressed in text . we argue that this is a simplification which leads to an overgeneralized interpretation of the results , as it does not take into account who experiences an emotion and why .
emotions play a crucial role in the interaction between characters and the events they are involved in . to date , no corpora that capture such an interaction have been available for literature . we aim at filling this gap and present a publicly available corpus based on project gutenberg , reman ( relational emotion annotation ) , manually annotated for spans which correspond to emotion trigger phrases and entities/events in the roles of experiencers , targets , and causes of the emotion . we provide baseline results for the automatic prediction of these relational structures and show that emotion lexicons are not able to encompass the high variability of emotion expressions and demonstrate that statistical models benefit from joint modeling of emotions with their roles in all subtasks . the corpus that we provide enables future research on the recognition of emotions and associated entities in text . it supports qualitative story_separator_special_tag we present a computational framework for understanding the social aspects of emotions in twitter conversations . using unannotated data and semi-supervised machine learning , we look at emotional transitions , emotional influences among the conversation partners , and patterns in the overall emotional exchanges . we find that conversational partners usually express the same emotion , which we name emotion accommodation , but when they do not , one of the conversational partners tends to respond with a positive emotion . we also show that tweets containing sympathy , apology , and complaint are significant emotion influencers . we verify the emotion classification part of our framework against a human-annotated corpus . story_separator_special_tag across a variety of cultural fields , researchers have identified a near ubiquitous underrepresentation and decentralization of women . this occurs both at the level of who is able to produce cultural works and who is depicted within them . women are less likely to be directors of hollywood films and also less likely to have starring roles . story_separator_special_tag whether our students will be reading great traditional books or relevant modern ones in the future , but whether they will be reading books at all . our first round of technological perturbation , which pitted the codex book and culture as we know it against commercial television , did n't turn out so badly as we feared . the print media continued to thrive during tv 's great expansion period . and literature continued to be taught in american schools and colleges much as before ; students read books and wrote papers and exams about them , which the professor then read , marked up ( time and zeal permitting ) , and returned to the student . compared to other areas of textual informing in the society around us , literary study has felt almost no pressure from changing technology . this grace period has now been ended by the personal computer and its electronic display of what , until a new word is invented , we must call `` text . '' the literary world , having gingerly learned to manipulate pixeled print ( `` pixels '' are `` picture elements , '' the dots which story_separator_special_tag sentiment analysis is the computational study of people 's opinions , sentiments , emotions , moods , and attitudes . this fascinating problem offers numerous research challenges , but promises insight useful to anyone interested in opinion analysis and social media analysis .
this comprehensive introduction to the topic takes a natural-language-processing point of view to help readers understand the underlying structure of the problem and the language constructs commonly used to express opinions , sentiments , and emotions . the book covers core areas of sentiment analysis and also includes related topics such as debate analysis , intention mining , and fake-opinion detection . it will be a valuable resource for researchers and practitioners in natural language processing , computer science , management sciences , and the social sciences . in addition to traditional computational methods , this second edition includes recent deep learning methods to analyze and summarize sentiments and opinions , and also new material on emotion and mood analysis techniques , emotion-enhanced dialogues , and multimodal emotion analysis . story_separator_special_tag textual information in the world can be broadly categorized into two main types : facts and opinions . facts are objective expressions about entities , events and their properties . opinions are usually subjective expressions that describe people 's sentiments , appraisals or feelings toward entities , events and their properties . the concept of opinion is very broad . in this chapter , we only focus on opinion expressions that convey people 's positive or negative sentiments . much of the existing research on textual information processing has been focused on mining and retrieval of factual information , e.g. , information retrieval , web search , text classification , text clustering and many other text mining and natural language processing tasks . little work had been done on the processing of opinions until only recently . yet , opinions are so important that whenever we need to make a decision we want to hear others ' opinions . this is not only true for individuals but also true for organizations . one of the main reasons for the lack of study on opinions is the fact that there was little opinionated text available before the world wide web . before story_separator_special_tag sentiment analysis is one of the fastest growing research areas in computer science , making it challenging to keep track of all the activities in the area . we present a computer-assisted literature review , where we utilize both text mining and qualitative coding , and analyze 6,996 papers from scopus . we find that the roots of sentiment analysis are in the studies on public opinion analysis at the beginning of the 20th century and in the text subjectivity analysis performed by the computational linguistics community in the 1990s . however , the outbreak of computer-based sentiment analysis only occurred with the availability of subjective texts on the web . consequently , 99 % of the papers have been published after 2004 . sentiment analysis papers are scattered across multiple publication venues , and the combined number of papers in the top-15 venues only represents ca . 30 % of the papers in total . we present the top-20 cited papers from google scholar and scopus and a taxonomy of research topics . in recent years , sentiment analysis has shifted from analyzing online product reviews to social media texts from twitter and facebook . many topics beyond product reviews like story_separator_special_tag emotions are central to the experience of literary narrative fiction . affect and mood can influence what book people choose , based partly on whether their goal is to change or maintain their current emotional state .
once having chosen a book , the narrative itself acts to evoke and transform emotions , both directly through the events and characters depicted and through the cueing of emotionally valenced memories . once evoked by the story , these emotions can in turn influence a person 's experience of the narrative . lastly , emotions experienced during reading may have consequences after closing the covers of a book . this article reviews the current state of empirical research for each of these stages , providing a snapshot of what is known about the interaction between emotions and literary narrative fiction . with this , we can begin to sketch the outlines of what remains to be discovered . story_separator_special_tag it is not uncommon for certain social networks to divide into two opposing camps in response to stress . this happens , for example , in networks of political parties during winner-takes-all elections , in networks of companies competing to establish technical standards , and in networks of nations faced with mounting threats of war . a simple model for these two-sided separations is the dynamical system dx/dt = x^2 , where x is a matrix of the friendliness or unfriendliness between pairs of nodes in the network . previous simulations suggested that only two types of behavior were possible for this system : either all relationships become friendly or two hostile factions emerge . here we prove that for generic initial conditions , these are indeed the only possible outcomes . our analysis yields a closed-form expression for faction membership as a function of the initial conditions and implies that the initial amount of friendliness in large social networks ( started from random initial conditions ) determines whether they will end up in intractable conflict or global harmony . ( a numerical sketch of this model appears below . ) story_separator_special_tag a consensual , componential model of emotions conceptualises them as experiential , physiological , and behavioural responses to personally meaningful stimuli . the present review examines this model in terms of whether different types of emotion-evocative stimuli are associated with discrete and invariant patterns of responding in each response system , how such responses are structured , and if such responses converge across different response systems . across response systems , the bulk of the available evidence favours the idea that measures of emotional responding reflect dimensions rather than discrete states . in addition , experiential , physiological , and behavioural response systems are associated with unique sources of variance , which in turn limits the magnitude of convergence across measures . accordingly , the authors suggest that there is no gold standard measure of emotional responding . rather , experiential , physiological , and behavioural measures are all relevant to understanding emotion and can not . story_separator_special_tag emotional intelligence ( ei ) involves the ability to carry out accurate reasoning about emotions and the ability to use emotions and emotional knowledge to enhance thought . we discuss the origins of the ei concept , define ei , and describe the scope of the field today . we review three approaches taken to date from both a theoretical and methodological perspective . we find that specific-ability and integrative-model approaches adequately conceptualize and measure ei .
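the two-faction model above , dx/dt = x^2 with x the matrix of pairwise friendliness , can be simulated in a few lines . the sketch below is a minimal forward-euler integration on a random symmetric start ; the network size , noise scale , and step size are arbitrary choices , not values from the paper .

```python
import numpy as np

rng = np.random.default_rng(2)
n = 30
X = rng.normal(scale=0.1, size=(n, n))
X = (X + X.T) / 2                      # friendliness is assumed mutual

dt = 1e-3
for _ in range(100_000):
    X = X + dt * (X @ X)               # forward-Euler step of dX/dt = X^2
    if np.abs(X).max() > 1e6:          # finite-time blow-up marks the split
        break

# near blow-up X is dominated by a single eigen-direction, so the sign
# pattern of any row reveals the two camps
faction = np.sign(X[0])
print(f"faction sizes: {(faction > 0).sum()} vs {(faction < 0).sum()}")
```

if the initial matrix is friendly enough on average , every entry blows up positive and one of the two counts is zero -- the global-harmony outcome the abstract describes .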
pivotal in this review are those studies that address the relation between ei measures and meaningful criteria including social outcomes , performance , and psychological and physical well-being . the discussion section is followed by a list of summary points and recommended issues for future research . story_separator_special_tag today we have access to unprecedented amounts of literary text . however , search still relies heavily on keywords . in this paper , we show how sentiment analysis can be used in tandem with effective visualizations to quantify and track emotions in both individual books and across very large collections . we introduce the concept of emotion word density , and using the brothers grimm fairy tales as an example , we show how collections of text can be organized for better search . using the google books corpus we show how to determine an entity 's emotion associations from co-occurring words . finally , we compare emotion words in fairy tales and novels , to show that fairy tales have a much wider range of emotion word densities than novels . story_separator_special_tag in this paper , we show how sentiment analysis can be used in tandem with effective visualizations to quantify and track emotions in mail and books . we study a number of specific datasets and show , among other things , how collections of texts can be organized for affect-based search and how books portray different entities through co-occurring emotion words . analysis of the enron email corpus reveals that there are marked differences across genders in how they use emotion words in work-place email . finally , we show that fairy tales have more extreme emotion densities than novels . story_separator_special_tag even though considerable attention has been given to the polarity of words ( positive and negative ) and the creation of large polarity lexicons , research in emotion analysis has had to rely on limited and small emotion lexicons . in this paper , we show how the combined strength and wisdom of the crowds can be used to generate a large , high quality , word emotion and word polarity association lexicon quickly and inexpensively . we enumerate the challenges in emotion annotation in a crowdsourcing scenario and propose solutions to address them . most notably , in addition to questions about emotions associated with terms , we show how the inclusion of a word choice question can discourage malicious data entry , help to identify instances where the annotator may not be familiar with the target term ( allowing us to reject such annotations ) , and help to obtain annotations at sense level ( rather than at word level ) . we conducted experiments on how to formulate the emotion annotation questions , and show that asking if a term is associated with an emotion leads to markedly higher interannotator agreement than that obtained by asking if story_separator_special_tag `` in this groundbreaking book , franco moretti argues that literature scholars should stop reading books and start counting , graphing , and mapping them instead . in place of the traditionally selective literary canon of a few hundred texts , moretti offers charts , maps and time lines , developing the idea of distant reading into a full-blown experiment in literary historiography , in which the canon disappears into the larger literary system . '' -- publisher 's website . story_separator_special_tag we present an automatic method for analyzing sentiment dynamics between characters in plays .
this literary format 's structured dialogue allows us to make assumptions about who is participating in a conversation . once we have an idea of who a character is speaking to , the sentiment in his or her speech can be attributed accordingly , allowing us to generate lists of a character 's enemies and allies as well as pinpoint scenes critical to a character 's emotional development . results of experiments on shakespeare 's plays are presented along with discussion of how this work can be extended to unstructured texts ( i.e . novels ) . story_separator_special_tag automatic methods for analyzing sentiment and its movement through a play 's social network are investigated . from structured dialogue we can algorithmically determine who is speaking and guess at who is listening or being directly addressed . knowing who is speaking to whom allows the flow of sentiment to be tracked between characters and , within plays with clear time-lines , permits tracking the development of emotional relationships . we hypothesize that changing polarities between characters can be modeled as edge weights in a dynamic social network -- a `` sentiment network '' -- which can be used to distinguish a document 's genre ( tragedies versus comedies ) , detect a given character 's enemies and allies , and model the overall emotional development of a play . experiments on shakespeare 's plays are presented along with discussion of improvements and further applications . story_separator_special_tag the job lexicon for sentiment analysis of slovenian texts contains a list of 25,524 headwords from the list of slovenian headwords 1.1 ( http://hdl.handle.net/11356/1038 ) extended with sentiment ratings based on the afinn model with an integer between -5 ( very negative ) and +5 ( very positive ) . the ratings are derived from the lemmatized version of the manually sentiment-annotated slovenian ( sentence-based ) news corpus sentinews 1.0 ( http://hdl.handle.net/11356/1110 ) . story_separator_special_tag four studies were conducted to explore how tender affective states ( e.g. , warmth , sympathy , understanding ) predict attraction to entertainment that features poignant , dramatic , or tragic portrayals . studies 1 and 2 found that tenderness was associated with greater interest in viewing sad films . studies 3 and 4 found that tender affective states were associated with preferences for entertainment featuring not only sad portrayals but also drama and human connection . results are discussed in terms of how these forms of entertainment may provide viewers the opportunity to contemplate the poignancies of human life -- an activity that may reflect motivations of media use related to meaningfulness or insight rather than only the experience of pleasure . story_separator_special_tag purpose the purpose of this systematic review was to determine evidence of a cognate effect for young multilingual children ( ages 3;0 to 8;11 [ years ; months ] , preschool to second grade ) in terms of task-level and child-level factors that may influence cognate performance . cognates are pairs of vocabulary words that share meaning with similar phonology and/or orthography in more than one language , such as rose-rosa ( english-spanish ) or carrot-carotte ( english-french ) . despite the cognate advantage noted with older bilingual children and bilingual adults , there has been no systematic examination of the cognate research in young multilingual children .
method we conducted searches of multiple electronic databases and hand-searched article bibliographies for studies that examined young multilingual children 's performance with cognates based on study inclusion criteria aligned to the research questions . results the review yielded 16 articles . the majority of the studies ( 12/16 , 75 % ) demonstrated a positive cognate effect for young multilingual children ( measured in higher accuracy , faster reaction times , and doublet translation equivalents on cognates as compared to noncognates ) . however , not all bilingual children demonstrated story_separator_special_tag advances in computing power , natural language processing , and digitization of text now make it possible to study a culture 's evolution through its texts using a big data lens . our ability to communicate relies in part upon a shared emotional experience , with stories often following distinct emotional trajectories and forming patterns that are meaningful to us . here , by classifying the emotional arcs for a filtered subset of 1,327 stories from project gutenberg 's fiction collection , we find a set of six core emotional arcs which form the essential building blocks of complex emotional trajectories . we strengthen our findings by separately applying matrix decomposition , supervised learning , and unsupervised learning ( the decomposition step is sketched below ) . for each of these six core emotional arcs , we examine the closest characteristic stories in publication today and find that particular emotional arcs enjoy greater success , as measured by downloads . story_separator_special_tag although consumption-related emotions have been studied with increasing frequency in consumer behavior , issues concerning the appropriate way to measure these emotions remain unresolved . this article reviews the emotion measures currently used in consumer research and the theories on which they are based ; it concludes that the existing measures are unsuited for the purpose of measuring consumption-related emotions . the article describes six empirical studies that assess the domain of consumption-related emotions , that identify an appropriate set of consumption emotion descriptors ( the ces ) , and that compare the usefulness of this descriptor set with the usefulness of other measures in assessing consumption-related emotions . story_separator_special_tag a mathematical model is proposed for interpreting the love story portrayed by walt disney in the film `` beauty and the beast '' . the analysis shows that the story is characterized by a sudden explosion of sentimental involvements , revealed by the existence of a saddle-node bifurcation in the model . the paper is interesting not only because it deals for the first time with catastrophic bifurcations in specific romantic relationships , but also because it enriches the list of examples in which love stories are satisfactorily described through ordinary differential equations . story_separator_special_tag i. what are emotions and how do they operate ? ii . emotion in literature iii . expressing emotions in the arts iv . music and the emotions story_separator_special_tag abstract this paper examines non-goal-oriented transactions with texts in order to investigate the information encounter in the context of daily living . findings are reported from a larger research project based on intensive interviews with 194 committed readers who read for pleasure .
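returning to the emotional-arcs abstract above : one of its three analyses is a matrix decomposition over many per-book arcs , and the sketch below shows the svd step on toy data . the random-walk `` arcs '' and the choice of six modes are placeholders , not the paper 's data or pipeline .

```python
import numpy as np

rng = np.random.default_rng(3)
n_books, arc_len = 200, 100
arcs = rng.normal(size=(n_books, arc_len)).cumsum(axis=1)  # toy sentiment arcs
arcs -= arcs.mean(axis=1, keepdims=True)                   # center each arc

U, S, Vt = np.linalg.svd(arcs, full_matrices=False)
modes = Vt[:6]                                             # candidate core arcs
explained = (S[:6] ** 2) / (S ** 2).sum()
print("variance explained by the first six modes:", explained.round(3))
```

on real arcs the leading right-singular vectors are the candidate core emotional arcs , and the corresponding rows of u ( scaled by s ) say how strongly each book follows each arc .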
the paper analyses interview responses that illuminate two aspects of the readers ' experience of reading for pleasure : ( 1 ) how readers choose books to read for pleasure ; and ( 2 ) books that have made a significant difference in readers ' lives . the paper concludes with five themes emerging from this analysis that have implications for the information search process : the active engagement of the reader/searcher in constructing meaning from texts ; the role of the affective dimension ; trustworthiness ; the social context of information seeking ; and the meta-knowledge used by experienced readers in making judgments about texts . story_separator_special_tag emotions are universally recognized from facial expressions -- or so it has been claimed . to support that claim , research has been carried out in various modern cultures and in cultures relatively isolated from western influence . a review of the methods used in that research raises questions of its ecological , convergent , and internal validity . forced-choice response format , within-subject design , preselected photographs of posed facial expressions , and other features of method are each problematic . when they are altered , less supportive or nonsupportive results occur . when they are combined , these method factors may help to shape the results . facial expressions and emotion labels are probably associated , but the association may vary with culture and is loose enough to be consistent with various alternative accounts , 8 of which are discussed . story_separator_special_tag what is the structure of emotion ? emotion is too broad a class of events to be a single scientific category , and no one structure suffices . as an illustration , core affect is distinguished from prototypical emotional episode . core affect refers to consciously accessible elemental processes of pleasure and activation , has many causes , and is always present . its structure involves two bipolar dimensions . prototypical emotional episode refers to a complex process that unfolds over time , involves causally connected subevents ( antecedent ; appraisal ; physiological , affective , and cognitive changes ; behavioral response ; self-categorization ) , has one perceived cause , and is rare . its structure involves categories ( anger , fear , shame , jealousy , etc . ) vertically organized as a fuzzy hierarchy and horizontally organized as part of a circumplex . story_separator_special_tag we present a preliminary study on predicting news values from headline text and emotions . we perform a multivariate analysis on a dataset manually annotated with news values and emotions , discovering interesting correlations among them . we then train two competitive machine learning models -- an svm and a cnn -- to predict news values from headline text and emotions as features . we find that , while both models yield a satisfactory performance , some news values are more difficult to detect than others , while some profit more from including emotion information . story_separator_special_tag fiction , a prime form of entertainment , has evolved into multiple genres which one can broadly attribute to different forms of stories . in this paper , we examine the hypothesis that works of fiction can be characterised by the emotions they portray . to investigate this hypothesis , we use the works of fiction in project gutenberg and attribute basic emotional content to each individual sentence using ekman 's model ( sketched below ) .
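as an aside , the per-sentence attribution just described , together with the time-smoothing step mentioned next , can be sketched as follows . the six-word lexicon is hypothetical , and sklearn 's extratreesclassifier stands in for the extremely randomized trees used in the paper .

```python
import numpy as np
from sklearn.ensemble import ExtraTreesClassifier

EKMAN = ["anger", "disgust", "fear", "joy", "sadness", "surprise"]
LEXICON = {"rage": "anger", "corpse": "disgust", "scream": "fear",
           "smile": "joy", "grave": "sadness", "sudden": "surprise"}

def emotion_arc(sentences, window=5):
    """Per-sentence Ekman counts followed by a moving-average smoothing."""
    counts = np.zeros((len(sentences), len(EKMAN)))
    for i, sent in enumerate(sentences):
        for word in sent.lower().split():
            if word in LEXICON:
                counts[i, EKMAN.index(LEXICON[word])] += 1
    kernel = np.ones(window) / window
    return np.stack([np.convolve(counts[:, j], kernel, mode="same")
                     for j in range(len(EKMAN))], axis=1)

def features(sentences):
    """Summary statistics of each smoothed emotion channel for one book."""
    arc = emotion_arc(sentences)
    return np.concatenate([arc.mean(axis=0), arc.std(axis=0)])

# with a list of books (each a list of sentences) and genre labels:
# X = np.array([features(book) for book in books])
# ExtraTreesClassifier(n_estimators=300).fit(X, genres)
```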
a time-smoothed version of the emotional content for each basic emotion is used to train extremely randomized trees . we show through 10-fold cross-validation that the emotional content of each work of fiction can help identify its genre with significantly higher probability than random . we also show that the most important differentiator between genre novels is fear . story_separator_special_tag prior experiments indicated that reading literary fiction improves mentalising performance relative to reading popular fiction , non-fiction , or not reading . however , the experiments had relatively small sample sizes and hence low statistical power . to address this limitation , the present authors conducted four high-powered replication experiments ( combined n = 1006 ) testing the causal impact of reading literary fiction on mentalising . relative to the original research , the present experiments used the same literary texts in the reading manipulation ; the same mentalising task ; and the same kind of participant samples . moreover , one experiment was pre-registered as a direct replication . in none of the experiments did reading literary fiction have any effect on mentalising relative to control conditions . the results replicate earlier findings that familiarity with fiction is positively correlated with mentalising . taken together , the present findings call into question whether a single session of reading fiction leads to immediate improvements in mentalising . story_separator_special_tag we present results from a project on sentiment analysis of drama texts , more concretely the plays of gotthold ephraim lessing . we conducted an annotation study to create a gold standard for a systematic evaluation . the gold standard consists of 200 speeches from lessing 's plays and was manually annotated with sentiment information by five annotators . we use the gold standard data to evaluate the performance of different german sentiment lexicons and processing configurations like lemmatization , the extension of lexicons with historical linguistic variants , and stop-word elimination , to explore the influence of these parameters and to find best practices for our domain of application . the best-performing configuration achieves an accuracy of 70 % . we discuss the problems and challenges for sentiment analysis in this area and describe our next steps toward further research . story_separator_special_tag background and motivation the exploration of new ways of interaction between society and information and communication technologies ( ict ) , with a focus on the humanities , has the potential to become a key success factor for the values and competitiveness of the nordic region , having in mind recent eu and regional political discussions in the field of digital humanities ( european commission , 2016 ; vetenskapsrådet 's rådet för forskningens infrastrukturer , 2014 ) . digital humanities ( dh ) is a diverse and still emerging field that lies at the intersection of ict and humanities , which is being continually formulated by scholars and practitioners in a range of disciplines ( see , for example , svensson & goldberg , 2015 ; gardiner & musto , 2015 ; schreibman , siemens , & unsworth , 2016 ) .
the following are examples of current fields and topics : text-analytic techniques , categorization , data mining ; social network analysis ( sna ) and bibliometrics ; metadata and tagging ; geographic information systems ( gis ) ; multimedia and interactive games ; visualisation ; media . dariah-eu ( http://dariah.eu ) is europe 's largest story_separator_special_tag ( 2000 ) . emotion , cognition , and decision making . cognition and emotion , vol . 14 , no . 4 , pp . 433-440 . story_separator_special_tag numerous approaches have already been employed to sense affective information from text ; but none of those ever employed the occ emotion model , an influential theory of the cognitive and appraisal structure of emotion . the occ model derives 22 emotion types and two cognitive states as consequences of several cognitive variables . in this chapter , we propose to relate cognitive variables of the emotion model to linguistic components in text , in order to achieve emotion recognition for a much larger set of emotions than handled in comparable approaches . in particular , we provide tailored rules for textual emotion recognition , which are inspired by the rules of the occ emotion model . hereby , we clarify how text components can be mapped to specific values of the cognitive variables of the emotion model . the resulting linguistics-based rule set for the occ emotion types and cognitive states allows us to determine a broad class of emotions conveyed by text . story_separator_special_tag there has long been interest in describing emotional experience in terms of underlying dimensions , but traditionally only two dimensions , pleasantness and arousal , have been reliably found . the reasons for these findings are reviewed , and integrating this review with two recent theories of emotions ( roseman , 1984 ; scherer , 1982 ) , we propose eight cognitive appraisal dimensions to differentiate emotional experience . in an investigation of this model , subjects recalled past experiences associated with each of 15 emotions , and rated them along the proposed dimensions . six orthogonal dimensions , pleasantness , anticipated effort , certainty , attentional activity , self-other responsibility/control , and situational control , were recovered , and the emotions varied systematically along each of these dimensions , indicating a strong relation between the appraisal of one 's circumstances and one 's emotional state . the patterns of appraisal for the different emotions , and the role of each of the dimensions in differentiating emotional experience are discussed . most people think of emotions in categorical terms : `` i was scared , '' or `` i was sad , '' or `` i was frustrated . story_separator_special_tag this article provides the first systematic empirical examination of four major genres of theories concerning the nature and rise of the corpus of human emotions with more than 2,000 statistical tests of five hypotheses . the distinction between evolutionary-universal and other `` secondary '' emotions is empirically uninformative for all five cultures . next , the emotion-wheel theory of plutchik receives no empirical support . all palette theories fail four empirical tests . more than 90 empirical tests fail to support kemper and turner in assuming that many secondary emotions arise through complex combinations of primary emotions due to socialization . the johnson-laird and oatley hypothesis of five universal clusters of emotions is also tested and rejected .
researchers need to rethink the heuristic value of dichotomizing and lumping emotions in categories such as universal , primary , basic , secondary , tertiary , and so forth . there are clear empirical advantages to differentiating between emotions with three dimensions . story_separator_special_tag abstract sentiment analysis aims to automatically uncover the underlying attitude that we hold towards an entity . the aggregation of these sentiments over a population represents opinion polling and has numerous applications . current text-based sentiment analysis relies on the construction of dictionaries and machine learning models that learn sentiment from large text corpora . sentiment analysis from text is currently widely used for customer satisfaction assessment and brand perception analysis , among others . with the proliferation of social media , multimodal sentiment analysis is set to bring new opportunities with the arrival of complementary data streams for improving and going beyond text-based sentiment analysis . since sentiment can be detected through affective traces it leaves , such as facial and vocal displays , multimodal sentiment analysis offers promising avenues for analyzing facial and vocal expressions in addition to the transcript or textual content . these approaches leverage emotion recognition and context inference to determine the underlying polarity and scope of an individual 's sentiment . in this survey , we define sentiment and the problem of multimodal sentiment analysis and review recent developments in multimodal sentiment analysis in different domains , including spoken reviews , images , video story_separator_special_tag this article presents the integration of sentiment analysis in alcide , an online platform for historical content analysis . a prior polarity approach has been applied to a corpus of italian historical texts , and a new lexical resource has been developed with a semi-automatic mapping starting from two english lexica . this article also reports on a first experiment on contextual polarity using both expert annotators and crowdsourced contributors . the long-term goal of our research is to create a system to support historical studies , which is able to analyse the sentiment in historical texts and to discover the opinion about a topic and its change over time . story_separator_special_tag in this paper we present a linguistic resource for the lexical representation of affective knowledge . this resource ( named wordnetaffect ) was developed starting from wordnet , through a selection and tagging of a subset of synsets representing the affective story_separator_special_tag in this paper , we present an experiment to identify emotions in tweets . unlike previous studies , which typically use the six basic emotion classes defined by ekman , we classify emotions according to a set of eight basic bipolar emotions defined by plutchik ( plutchik 's `` wheel of emotions '' ) . this allows us to treat the inherently multi-class problem of emotion classification as a binary problem for four opposing emotion pairs . our approach applies distant supervision , which has been shown to be an effective way to overcome the need for a large set of manually labeled data to produce accurate classifiers . we build on previous work by treating not only emoticons and hashtags but also emoji , which are increasingly used in social media , as an alternative for explicit , manual labels ( a toy version of this setup is sketched below ) .
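a minimal sketch of that distant-supervision setup for one opposing pair ( joy vs. sadness ) : emoji act as noisy labels , ambiguous tweets are dropped , and a standard text classifier is trained on what remains . the emoji sets , example tweets , and sklearn pipeline are illustrative assumptions , not the paper 's configuration .

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

JOY, SADNESS = {"😀", "😂"}, {"😢", "😭"}

def weak_label(tweet):
    """Return a noisy label derived from emoji, or None if ambiguous."""
    joy = any(e in tweet for e in JOY)
    sad = any(e in tweet for e in SADNESS)
    if joy == sad:              # neither or both -> ambiguous, drop the tweet
        return None
    return "joy" if joy else "sadness"

tweets = ["best day ever 😀", "i miss you so much 😢",
          "this made me laugh 😂", "crying again 😭"]
pairs = [(t, weak_label(t)) for t in tweets]
X, y = zip(*[(t, l) for t, l in pairs if l is not None])

clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
clf.fit(X, y)                   # the labels came for free from the emoji
print(clf.predict(["what a wonderful surprise"]))
```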
since these labels may be noisy , we first perform an experiment to investigate the correspondence among particular labels of different types assumed to be indicative of the same emotion . we then test and compare the accuracy of independent binary classifiers for each of plutchik 's four binary emotion pairs trained with different combinations of label types story_separator_special_tag the initial stages of a project tracking the literary reputation of authors are described . the critical reviews of six authors who either rose to fame or fell to obscurity between 1900 and 1950 will be examined and we hope to demonstrate the contribution of each text to the evolving reputations of the authors . we provide an initial report on the use of the semantic orientation of adjectives and their rough position in the text to calculate the overall orientation of the text and suggest ways in which this calculation can be improved . improvements include further development of adjective lists , expansion of these lists and the consequent algorithms for calculating orientation to include other parts of speech , and the use of rhetorical structure theory to differentiate units that make a direct contribution to the intended orientation from those that are contrastive or otherwise make an indirect contribution . in proceedings of lrec 2006 workshop towards computational models of literary analysis , pp . 36-43 . story_separator_special_tag emotion analysis ( ea ) is a rapidly developing area in computational linguistics . an ea system can be extremely useful in fields such as information retrieval and emotion-driven computer animation . for most ea systems , the number of emotion classes is very limited and the text units the classes are assigned to are discrete and predefined . the question we address in this paper is whether the set of emotion categories can be enriched and whether the units to which the categories are assigned can be more flexibly defined . we present an experiment showing how an annotation task can be set up so that untrained participants can perform emotion analysis with high agreement even when not restricted to a predetermined annotation unit and using a rich set of emotion categories . as such it sets the stage for the development of more complex ea systems which are closer to the actual human emotional perception of text . story_separator_special_tag abstract art has to do with emotions in different ways . the article starts with a discussion about the relationship between the emotions represented in art and the personal feelings of the creator . both in literature and in painting human beings are the predominant theme . in order to present them as veridical and credible , the artist has to have a knowledge of emotions and their expressions . three levels of representing emotions in art are discerned , implying increasing distance from biologically programmed expressions as they exist in real life . emotional expressions are predominantly nonverbal . both painting and literature are discussed in terms of the means they have at their disposal to render these nonverbal expressions . painting can use nonverbal channels such as outer appearance , gaze , posture and gesture . literature has to describe these nonverbal behaviors in language . some examples are given and a method is presented for analysing the use of nonverbal categories by an author . such a type of analysis can uncover the implicit psychology of the writer . 
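the nonverbal-category analysis described in the last abstract can be approximated by simple counting . a sketch , assuming hypothetical term lists for three of the channels the abstract names :

```python
from collections import Counter
import re

# hypothetical term lists; a real study would use curated category lexicons
CATEGORIES = {
    "gaze":    {"gaze", "stare", "glance", "eyes"},
    "posture": {"slumped", "upright", "leaned", "stooped"},
    "gesture": {"waved", "shrugged", "pointed", "clenched"},
}

def nonverbal_profile(text):
    """Relative frequency of each nonverbal-expression category in a text."""
    words = re.findall(r"[a-z']+", text.lower())
    counts = Counter()
    for w in words:
        for cat, terms in CATEGORIES.items():
            if w in terms:
                counts[cat] += 1
    total = sum(counts.values()) or 1
    return {cat: counts[cat] / total for cat in CATEGORIES}

print(nonverbal_profile("she leaned forward, eyes fixed in a stare, and shrugged."))
```

comparing such profiles across an author 's works is the kind of evidence the abstract has in mind when it speaks of uncovering the implicit psychology of the writer .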
story_separator_special_tag text classification methods have been evaluated on topic classification tasks . this thesis extends the empirical evaluation to emotion classification tasks in the literary domain . this study selects two literary text classification problems -- the eroticism classification in dickinson 's poems and the sentimentalism classification in early american novels -- as two cases for this evaluation . both problems focus on identifying certain kinds of emotion -- a document property other than topic . this study chooses two popular text classification algorithms -- naive bayes and support vector machines ( svm ) -- and three feature engineering options -- stemming , stopword removal and statistical feature selection ( odds ratio and svm ) -- as the subjects of evaluation . this study aims to examine the effects of the chosen classifiers and feature engineering options on the two emotion classification problems , and the interaction between the classifiers and the feature engineering options . this thesis seeks empirical answers to the following research questions : ( 1 ) is svm a better classifier than naive bayes regarding classification accuracy , new literary knowledge discovery and potential for example-based retrieval ? ( 2 ) is svm a better feature selection story_separator_special_tag identifying plot structure in novels is a valuable step towards automatic processing of literary corpora . we present an approach to classify novels as either having a happy ending or not . to achieve this , we use features based on different sentiment lexica as input for an svm classifier , which yields an average f1-score of about 73 % . story_separator_special_tag we investigate the love dynamics in ivan turgenev 's pre-masochistic novella torrents of spring , using a system of ordinary differential equations ( a minimal sketch of this kind of model appears below ) . unlike previous authors , we base our analysis not only on psychological credibility ; we also relate our model to the ideas of literary criticism . we compare turgenev 's text with the most famous masochistic novella venus in furs by l. von sacher-masoch . story_separator_special_tag subjects were placed into a negative , neutral , or positive affective state and then , ostensibly in a waiting period , provided with an opportunity to watch television . they were free to choose among situation comedy , game show , action drama , and not watching . time of selective exposure was measured unobtrusively . compared to other affect conditions , subjects in the condition of negative affect avoided comedy . time of exposure in this condition was significantly below that in the condition of neutral affect . the tendency to avoid comedy was stable over the 10-minute test period . compared to other affect conditions , subjects in the condition of positive affect preferred action drama . time of exposure in this condition was significantly above that in the condition of neutral affect . subjects experiencing negative affect watched less action drama over time ; in contrast , subjects experiencing positive affect watched more action drama , but less game show . a subsequent investigation showed that hostile comedy tended to be avoided by provoked subjects specifically . merely frustrated subjects were found to actually prefer such comedy . furthermore , neither provoked nor frustrated subjects avoided nonhostile comedy .
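the two romantic-dynamics abstracts above ( beauty and the beast , torrents of spring ) model mutual feelings with coupled odes . below is a minimal linear sketch in that spirit , using scipy ; the parameters ( forgetting rates a_i , reactiveness b_i , and response c_i to the partner 's fixed appeal ) are made up for illustration .

```python
import numpy as np
from scipy.integrate import solve_ivp

def love(t, x, a1, b1, c1, A2, a2, b2, c2, A1):
    """x1, x2 are the two partners' feelings; each decays, reacts to the
    other's feelings, and responds to the other's fixed appeal."""
    x1, x2 = x
    return [-a1 * x1 + b1 * x2 + c1 * A2,
            -a2 * x2 + b2 * x1 + c2 * A1]

sol = solve_ivp(love, (0, 50), [0.0, 0.0],
                args=(0.3, 0.2, 0.5, 1.0, 0.4, 0.2, 0.6, 0.8),
                dense_output=True)
t = np.linspace(0, 50, 200)
x1, x2 = sol.sol(t)
print(f"steady-state feelings: x1={x1[-1]:.2f}, x2={x2[-1]:.2f}")
```

richer variants add nonlinear reaction terms , which is where bifurcations such as the saddle-node in the beauty-and-the-beast model come from .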
story_separator_special_tag among the realistic ingredients to be considered in the computational modeling of infectious diseases , human mobility represents a crucial challenge both on the theoretical side and in view of the limited availability of empirical data . to study the interplay between short-scale commuting flows and long-range airline traffic in shaping the spatiotemporal pattern of a global epidemic we ( i ) analyze mobility data from 29 countries around the world and find a gravity model able to provide a global description of commuting patterns up to 300 km and ( ii ) integrate in a worldwide-structured metapopulation epidemic model a timescale-separation technique for evaluating the force of infection due to multiscale mobility processes in the disease dynamics . commuting flows are found , on average , to be one order of magnitude larger than airline flows . however , their introduction into the worldwide model shows that the large-scale pattern of the simulated epidemic exhibits only small variations with respect to the baseline case where only airline traffic is considered . the presence of short-range mobility increases , however , the synchronization of subpopulations in close proximity and affects the epidemic behavior at the periphery of the airline transportation story_separator_special_tag the individual movements of large numbers of people are important in many contexts , from urban planning to disease spreading . datasets that capture human mobility are now available and many interesting features have been discovered , including the ultra-slow spatial growth of individual mobility . however , the detailed substructures and spatiotemporal flows of mobility -- the sets and sequences of visited locations -- have not been well studied . we show that individual mobility is dominated by small groups of frequently visited , dynamically close locations , forming primary habitats capturing typical daily activity , along with subsidiary habitats representing additional travel . these habitats do not correspond to typical contexts such as home or work . the temporal evolution of mobility within habitats , which constitutes most motion , is universal across habitats and exhibits scaling patterns both distinct from all previous observations and unpredicted by current models . the delay to enter subsidiary habitats is a primary factor in the spatiotemporal growth of human travel . interestingly , habitats correlate with non-mobility dynamics such as communication activity , implying that habitats may influence processes such as information spreading and revealing new connections between human mobility and social networks story_separator_special_tag the use of mobile information and communication technologies ( micts ) in enterprises is a slowly emerging reality . while the significance of mobility is understood , little theoretical understanding of the potential value and impact of these technologies exists . this research fills the gap by defining the general concept of mobility , discussing the unique characteristics of micts , and identifying key value propositions for enterprises . the paper concludes with a discussion of future research opportunities . copyright © 2004 ifac story_separator_special_tag sumo is an open source traffic simulation package including net import and demand modeling components . we describe the current state of the package as well as future developments and extensions . sumo helps to investigate several research topics , e.g .
route choice and traffic-light algorithms , or simulating vehicular communication . therefore the framework is used in different projects to simulate automatic driving or traffic management strategies . story_separator_special_tag during the last two decades of the twentieth century we have seen various transformations in our society as a whole . in particular , information and communication technologies ( icts ) have played a critical role in this transformation process . because of their pervasiveness and our intensive use of them , icts have changed our ways of living in virtually all realms of our social lives . ict is of course not the sole factor of this transformation ; various `` old '' technologies have also played a significant part . modern transportation technologies , for example , have become dramatically sophisticated in terms of effectiveness and usefulness since the early twentieth century . the train and airline infrastructures are highly integrated with icts such as electronic reservation systems and traffic control systems . it is therefore important to recognize that the fundamental nature of technological revolution in the late twentieth century is the dynamic and complex interplay between old and new technologies and between the reconfiguration of the technological fabric and its domestication [ 6 , 27 , 32 , 40 ] . this paper concerns the concept of mobility , which manifests such a transformation of our social story_separator_special_tag this paper describes a simple and general framework for synchronisation of non-linearly coupled dynamical systems interconnected to constitute a complex network . the proposed methodology of attaining synchronisation of networks is based on contraction strategy . the paper introduces a systematic control procedure to achieve synchronisation of a coupled dynamical network of a proposed strict-feedback-like class of nonlinear systems . the non-linear coupling function between different systems of the network is assumed to be in the form of bidirectional links . the proposed methodology can be applicable to any arbitrary structure of linear/non-linear , bidirectional or unidirectional n-coupled systems in a network . the general results have been derived for coupled systems interacting through a nonlinear coupling function which are interconnected in different networked topologies including ring , global , star , arbitrary , etc . the analytical conditions for synchronisation are expressed in terms of bounds on coupling strength which are derived using partial contraction concepts blended with graph theory results . the proposed approach is straightforwardly applied to high-dimensional non-linearly coupled chaotic systems which are common in many applications . a set of representative examples of coupled chaotic systems story_separator_special_tag we study fifteen months of human mobility data for one and a half million individuals and find that human mobility traces are highly unique . in fact , in a dataset where the location of an individual is specified hourly , and with a spatial resolution equal to that given by the carrier 's antennas , four spatio-temporal points are enough to uniquely identify 95 % of the individuals . we coarsen the data spatially and temporally to find a formula for the uniqueness of human mobility traces given their resolution and the available outside information . this formula shows that the uniqueness of mobility traces decays approximately as the 1/10 power of their resolution ( a toy version of the underlying unicity computation is sketched below ) .
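the unicity measurement above can be sketched directly : draw p spatio-temporal points from one user 's trace and count how many traces in the dataset match all of them . the toy version below uses uniformly random synthetic traces , which make users far easier to tell apart than real , highly regular traces ; it is meant only to show the computation .

```python
import numpy as np

rng = np.random.default_rng(1)
n_users, n_hours, n_cells = 10_000, 24 * 7, 500
traces = rng.integers(0, n_cells, size=(n_users, n_hours))  # hourly cell ids

def unicity(traces, p, trials=1_000):
    """Fraction of sampled users uniquely identified by p random points."""
    hits = 0
    for _ in range(trials):
        u = rng.integers(len(traces))
        hours = rng.choice(traces.shape[1], size=p, replace=False)
        pattern = traces[u, hours]
        matches = np.all(traces[:, hours] == pattern, axis=1).sum()
        hits += matches == 1            # only the user themself matches
    return hits / trials

for p in (2, 3, 4):
    print(f"p={p}: {unicity(traces, p):.1%} of sampled users are unique")
```

coarsening the grid ( fewer cells , longer time bins ) and re-running the same measurement is how the resolution dependence in the abstract 's formula is traced out .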
hence , even coarse datasets provide little anonymity . these findings represent fundamental constraints to an individual 's privacy and have important implications for the design of frameworks and institutions dedicated to protecting the privacy of individuals . story_separator_special_tag the technologies of mobile communications pervade our society and wireless networks sense the movement of people , generating large volumes of mobility data , such as mobile phone call records and global positioning system ( gps ) tracks . in this work , we illustrate the striking analytical power of massive collections of trajectory data in unveiling the complexity of human mobility . we present the results of a large-scale experiment , based on the detailed trajectories of tens of thousands of private cars with on-board gps receivers , tracked during weeks of ordinary mobile activity . we illustrate the knowledge discovery process that , based on these data , addresses some fundamental questions of mobility analysts : what are the frequent patterns of people 's travels ? how do big attractors and extraordinary events influence mobility ? how to predict areas of dense traffic in the near future ? how to characterize traffic jams and congestions ? we also describe m-atlas , the querying and mining language and system that makes this analytical process possible , providing the mechanisms to master the complexity of transforming raw gps tracks into mobility knowledge . m-atlas is centered onto the concept of a trajectory story_separator_special_tag traffic delays and congestion are a major source of inefficiency , wasted fuel , and commuter frustration . measuring and localizing these delays , and routing users around them , is an important step towards reducing the time people spend stuck in traffic . as others have noted , the proliferation of commodity smartphones that can provide location estimates using a variety of sensors -- gps , wifi , and/or cellular triangulation -- opens up the attractive possibility of using position samples from drivers ' phones to monitor traffic delays at a fine spatiotemporal granularity . this paper presents vtrack , a system for travel time estimation using this sensor data that addresses two key challenges : energy consumption and sensor unreliability . while gps provides highly accurate location estimates , it has several limitations : some phones do n't have gps at all , the gps sensor does n't work in `` urban canyons '' ( tall buildings and tunnels ) or when the phone is inside a pocket , and the gps on many phones is power-hungry and drains the battery quickly . in these cases , vtrack can use alternative , less energy-hungry but noisier sensors like story_separator_special_tag the website wheresgeorge.com invites its users to enter the serial numbers of their us dollar bills and track them across america and beyond . why ? for fun and because it had not been done yet , they say . but the dataset accumulated since december 1998 has provided the ideal raw material to test the mathematical laws underlying human travel , and that has important implications for the epidemiology of infectious diseases . analysis of the trajectories of over half a million dollar bills shows that human dispersal is described by a two-parameter continuous-time random walk model : our travel habits conform to a type of random proliferation known as superdiffusion .
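a toy illustration of superdiffusive spreading of the kind invoked here , assuming pareto-distributed jump lengths in a 2-d random walk ; the tail exponent below is illustrative , not the value fitted to the dollar-bill data . a log-log slope above 1 for the mean squared displacement signals superdiffusion .

import numpy as np

rng = np.random.default_rng(0)

def mean_squared_displacement(n_walkers=2000, n_steps=500, alpha=1.6):
    # heavy-tailed ( pareto ) jump lengths with tail exponent alpha < 2
    # produce levy-flight-like , superdiffusive walks
    theta = rng.uniform(0.0, 2.0 * np.pi, (n_walkers, n_steps))
    step = rng.pareto(alpha, (n_walkers, n_steps)) + 1.0
    x = np.cumsum(step * np.cos(theta), axis=1)
    y = np.cumsum(step * np.sin(theta), axis=1)
    return (x ** 2 + y ** 2).mean(axis=0)

msd = mean_squared_displacement()
steps = np.arange(1, msd.size + 1)
slope = np.polyfit(np.log(steps), np.log(msd), 1)[0]
print(round(slope, 2))  # > 1 indicates superdiffusion ; 1 would be ordinary diffusion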
and with that much established , it should soon be possible to develop a new class of models to account for the spread of human disease . the dynamic spatial redistribution of individuals is a key driving force of various spatiotemporal phenomena on geographical scales . it can synchronize populations of interacting species , stabilize them , and diversify gene pools . human travel , for example , is responsible for the geographical spread of human infectious disease . in the light of increasing international trade , story_separator_special_tag a socio-spatial graph is a combination of a social network with a spatial network . in such a graph , the social network contains information on users and about the social relationships among these users . the spatial network contains information on geographic entities and about the spatial relationships among these entities . users are associated with geographic locations using life-pattern edges . the life-pattern edges synopsize the location history of people , and accordingly , connect individuals to places they frequently visit . such graphs are used to provide information on people , while taking into account the spatial whereabouts of individuals , and to provide information on geographical entities , in correspondence with their social aspects , i.e. , according to the people who visited these entities . thus , socio-spatial graphs are important analytic tools ; however , when they combine a large social network with a large spatial network , the result is a large graph . in this paper we show how to efficiently build such large graphs and how to query them effectively . story_separator_special_tag models of human mobility have broad applicability in fields such as mobile computing , urban planning , and ecology . this paper proposes and evaluates where , a novel approach to modeling how large populations move within different metropolitan areas . where takes as input spatial and temporal probability distributions drawn from empirical data , such as call detail records ( cdrs ) from a cellular telephone network , and produces synthetic cdrs for a synthetic population . we have validated where against billions of anonymous location samples for hundreds of thousands of phones in the new york and los angeles metropolitan areas . we found that where offers significantly higher fidelity than other modeling approaches . for example , daily range of travel statistics fall within one mile of their true values , an improvement of more than 14 times over a weighted random waypoint model . our modeling techniques and synthetic cdrs can be applied to a wide range of problems while avoiding many of the privacy concerns surrounding real cdrs . story_separator_special_tag the development of a city gradually fosters different functional regions , such as educational areas and business districts . in this paper , we propose a framework ( titled drof ) that discovers regions of different functions in a city using both human mobility among regions and points of interests ( pois ) located in a region . specifically , we segment a city into disjointed regions according to major roads , such as highways and urban express ways . we infer the functions of each region using a topic-based inference model , which regards a region as a document , a function as a topic , categories of pois ( e.g.
, restaurants and shopping malls ) as metadata ( like authors , affiliations , and key words ) , and human mobility patterns ( when people reach/leave a region and where people come from and leave for ) as words . as a result , a region is represented by a distribution of functions , and a function is featured by a distribution of mobility patterns . we further identify the intensity of each function in different locations . the results generated by our framework can benefit a story_separator_special_tag connections established by users of online social networks are influenced by mechanisms such as preferential attachment and triadic closure . yet , recent research has found that geographic factors also constrain users : spatial proximity fosters the creation of online social ties . while the effect of space might need to be incorporated into these social mechanisms , it is not clear to what extent this is true and in which way this is best achieved . to address these questions , we present a measurement study of the temporal evolution of an online location-based social network . we have collected longitudinal traces over 4 months , including information about when social links are created and which places are visited by users , as revealed by their mobile check-ins . thanks to this fine-grained temporal information , we test and compare whether different probabilistic models can explain the observed data , adopting an approach based on likelihood estimation and quantitatively comparing their statistical power to reproduce real events . we demonstrate that geographic distance plays an important role in the creation of new social connections : node degree and spatial distance can be combined in a gravitational attachment process that reproduces story_separator_special_tag human mobility states , such as dwelling , walking or driving , are a valuable primary and meta data type for transportation studies , urban planning , health monitoring and epidemiology . previous work focuses on fine-grained location-based mobility inference using global positioning system ( gps ) data and external geo-indexes such as map information . gps-based mobility characterization raises practical issues related to spotty coverage and battery drain , but the more fundamental concern addressed in this paper is that of privacy . for some applications and usage models we contend that it is desirable to adopt a more parsimonious approach to mobility characterization ; one that avoids the collection and use of fine-grained location information by relying instead on gsm and wifi connectivity data . building upon previous work that demonstrated the utility of using gsm and wifi beacons for localization applications , we demonstrate that this approach to mobility classification achieves promising results ( with an accuracy of 88 % ) using a sample data set collected across several populated areas in los angeles and is worthy of further research . story_separator_special_tag the knowledge of human mobility is essential to routing design and service planning regarding both civilian and military applications in mobile wireless networks . in this paper , we study the inherent properties of human mobility using our collected gps moving traces . we found that power laws characterize human mobility in both the spatial and temporal domains .
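in formulas , the transition this study reports ( power-law head , exponential tail ) amounts to a truncated power law for the complementary cumulative distribution of , e.g. , trip displacement ; the symbols below ( exponent $\beta$ , characteristic distance $\Delta_0$ ) are our notation for illustration , not the paper 's fitted values :

$$ P(\Delta r > x) \;\propto\; x^{-\beta} \, \exp\!\left( -\frac{x}{\Delta_0} \right) , $$

so the power law $x^{-\beta}$ dominates for $x \ll \Delta_0$ and the exponential cutoff takes over for $x \gg \Delta_0$ ; the same functional form with a characteristic time describes the pause and return-time distributions .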
in particular , because of the diurnal cycle patterns of human daily activities in associated social territories with limited size , there always exists a characteristic distance in the power law distributions of trip displacement and distance between site locations and a characteristic time in the power law distributions of pause and site return time , respectively . thus , the ccdf of human movement metrics in the spatial and temporal domains always has a transition from a power-law head to an exponential tail delimited by the associated characteristic distance and characteristic time , respectively . furthermore , we found that either a human random moving direction process without pause or the power law distribution of trip displacement leads to a superdiffusive human mobility pattern , while the power law distribution of pause time causes a subdiffusive human movement pattern . story_separator_special_tag many natural and technological applications generate time-ordered sequences of networks , defined over a fixed set of nodes ; for example , time-stamped information about `` who phoned who '' or `` who came into contact with who '' arises naturally in studies of communication and the spread of disease . concepts and algorithms for static networks do not immediately carry through to this dynamic setting . for example , suppose a and b interact in the morning , and then b and c interact in the afternoon . information , or disease , may then pass from a to c , but not vice versa . this subtlety is lost if we simply summarize using the daily aggregate network given by the chain a-b-c. however , using a natural definition of a walk on an evolving network , we show that classic centrality measures from the static setting can be extended in a computationally convenient manner . in particular , communicability indices can be computed to summarize the ability of each node to broadcast and receive information . the computations involve basic operations in linear algebra , and the asymmetry caused by time 's arrow is captured naturally through story_separator_special_tag the central points of communication network flow have often been identified using graph theoretical centrality measures . in real networks , the state of traffic density arises from an interplay between the dynamics of the flow and the underlying network structure . in this work we investigate the relationship between centrality measures and the density of traffic for some simple particle hopping models on networks with emerging scale-free degree distributions . we also study how the speed of the dynamics is affected by the underlying network structure . among other conclusions , we find that , even at low traffic densities , the dynamical measure of traffic density ( the occupation ratio ) has a non-trivial dependence on the static centrality ( quantified by `` betweenness centrality '' ) , with non-central vertices getting a comparatively large portion of the traffic . story_separator_special_tag in this paper , we combine the most complete record of daily mobility , based on large-scale mobile phone data , with detailed geographic information system ( gis ) data , uncovering previously hidden patterns in urban road usage . we find that the major usage of each road segment can be traced to its own - surprisingly few - driver sources .
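both of the last two abstracts lean on betweenness centrality , the fraction of all-pairs shortest paths that pass through a vertex . a minimal computation on an invented toy road graph ( networkx is an assumed dependency , and the edges are illustrative , not data from either study ) :

import networkx as nx

# toy road network : nodes are intersections , edges are road segments
g = nx.Graph()
g.add_edges_from([(0, 1), (1, 2), (2, 3), (1, 4), (4, 3), (3, 5)])

# fraction of all-pairs shortest paths passing through each node
bc = nx.betweenness_centrality(g, normalized=True)
print(max(bc, key=bc.get))  # the most structurally central intersection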
based on this finding we propose a network of road usage by defining a bipartite network framework , demonstrating that in contrast to traditional approaches , which define road importance solely by topological measures , the role of a road segment depends on both its betweenness and its degree in the road usage network . moreover , our ability to pinpoint the few driver sources contributing to the major traffic flow allows us to create a strategy that achieves a significant reduction of the travel time across the entire road system , compared to a benchmark approach . story_separator_special_tag anonymous location data from cellular phone networks sheds light on how people move around on a large scale . story_separator_special_tag we present analytically the relation functions between the degrees or clustering coefficients of a common station in both the space-l and space-p layers of transportation systems . a good agreement between the analytical results and the empirical investigations in a railway system and three bus systems in china is observed . story_separator_special_tag we propose a modeling framework for growing multiplexes where a node can belong to different networks . we define new measures for multiplexes and we identify a number of relevant ingredients for modeling their evolution , such as the coupling between the different layers and the distribution of node arrival times . the topology of the multiplex changes significantly in the different cases under consideration , with effects of the arrival time of nodes on the degree distribution , average shortest path length , and interdependence . story_separator_special_tag uncovering human mobility patterns is of fundamental importance to the understanding of epidemic spreading , urban transportation and other socioeconomic dynamics embodying spatiality and human travel . according to the direct travel diaries of volunteers , we show the absence of scaling properties in the displacement distribution at the individual level , while the aggregated displacement distribution follows a power law with an exponential cutoff . given the constraint on total travelling cost , this aggregated scaling law can be analytically predicted by the mixture nature of human travel under the principle of maximum entropy . a direct corollary of such theory is that the displacement distribution of a single mode of transportation should follow an exponential law , which also finds supporting evidence in known data . we thus conclude that the travelling cost shapes the displacement distribution at the aggregated level . story_separator_special_tag purpose : in this chapter , we will review several alternative methods of collecting data from mobile phones for human mobility analysis . we propose considering cellular network location data as a useful complementary source for human mobility research and provide case studies to illustrate the advantages and disadvantages of each method . methodology/approach : we briefly describe cellular phone network architecture and the location data it can provide , and discuss two types of data collection : active and passive localization . active localization is something like a personal travel diary . it provides a tool for recording positioning data on a survey sample over a long period of time . passive localization , on the other hand , is based on phone network data that are automatically recorded for technical or billing purposes .
it offers the advantage of access to very large user populations for mobility flow analysis of a broad area . findings : we review several alternative methods of collecting data from mobile phones for human mobility analysis to show that cellular network data , although limited in terms of location precision and recording frequency , offer two major advantages for studying human mobility . first , story_separator_special_tag modern technologies not only provide a variety of communication modes ( e.g. , texting , cell phone conversation , and online instant messaging ) , but also detailed electronic traces of these communications between individuals . these electronic traces indicate that the interactions occur in temporal bursts . here , we study the intercall durations of the communications of the 100,000 most active cell phone users of a chinese mobile phone operator . we confirm that the intercall durations follow a power-law distribution with an exponential cutoff at the population level but find differences when focusing on individual users . we apply statistical tests at the individual level and find that the intercall durations follow a power-law distribution for only 3,460 individuals ( 3.46 % ) . the intercall durations for the majority ( 73.34 % ) follow a weibull distribution . we quantify individual users using three measures : out-degree , percentage of outgoing calls , and communication diversity . we find that the cell phone users with a power-law duration distribution fall into three anomalous clusters : robot-based callers , telecom fraud , and telephone sales . this information is of interest to both academics and practitioners , mobile telecom story_separator_special_tag the availability of massive network and mobility data from diverse domains has fostered the analysis of human behaviors and interactions . this data availability leads to challenges in the knowledge discovery community . several different analyses have been performed on the traces of human trajectories , such as understanding the real borders of human mobility or mining social interactions derived from mobility and vice versa . however , the data quality of the digital traces of human mobility has a dramatic impact on the knowledge that it is possible to mine , and this issue has not been thoroughly tackled so far in the literature . in this paper , we mine and analyze with complex network techniques a large dataset of human trajectories , a gps dataset from more than 150k vehicles in italy . we build a multi-resolution grid and we map the trajectories onto several complex networks , by connecting the different areas of our region of interest . then we analyze the structural properties of these networks and the quality of the borders it is possible to infer from them . the result is a significant advancement in our understanding of the data transformation process that story_separator_special_tag the technologies of mobile communications and ubiquitous computing pervade our society , and wireless networks sense the movement of people and vehicles , generating large volumes of mobility data . this is a scenario of great opportunities and risks : on one side , mining this data can produce useful knowledge , supporting sustainable mobility and intelligent transportation systems ; on the other side , individual privacy is at risk , as the mobility data contain sensitive personal information . a new multidisciplinary research area is emerging at this crossroads of mobility , data mining , and privacy .
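a sketch of the per-user distribution comparison run in the intercall-duration study above , using maximum-likelihood fits and a kolmogorov-smirnov statistic to pick between the two candidate families ; the synthetic sample is a stand-in for one user 's call record , and scipy is an assumed dependency :

import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# synthetic inter-call durations for one user , in seconds
durations = stats.weibull_min.rvs(0.7, scale=600.0, size=2000, random_state=rng)

# maximum-likelihood fits , with location pinned at 0 for waiting times
wb = stats.weibull_min.fit(durations, floc=0)
pl = stats.pareto.fit(durations, floc=0)

# smaller ks statistic = closer agreement with the empirical distribution
ks_wb = stats.kstest(durations, 'weibull_min', args=wb).statistic
ks_pl = stats.kstest(durations, 'pareto', args=pl).statistic
print('weibull' if ks_wb < ks_pl else 'power law')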
this book assesses this research frontier from a computer science perspective , investigating the various scientific and technological issues , open problems , and roadmap . the editors manage a research project called geopkdd , geographic privacy-aware knowledge discovery and delivery , funded by the eu commission and involving 40 researchers from 7 countries , and this book tightly integrates and relates their findings in 13 chapters covering all related subjects , including the concepts of movement data and knowledge discovery from movement data ; privacy-aware geographic knowledge discovery ; wireless network and next-generation mobile technologies ; trajectory data models , story_separator_special_tag technological changes and educational expansion have created heterogeneity in the human species . clearly , this heterogeneity generates a structure in the population dynamics , namely : citizen , permanent resident , visitor , etc . furthermore , as the heterogeneity in the population increases , the human mobility between meta-population patches also increases . depending on spatial scales , a meta-population patch can be decomposed into sub-patches , for example : homes , neighborhoods , towns , etc . members of the population can move between the sub-patches . the study of the dynamics of human mobility in a heterogeneous and scaled structured population is still in its infancy . in this work , an attempt is made to investigate the human mobility dynamics of a heterogeneous and scaled structured population . we present a two-scale human mobility model for a meta-population . the sub-regions and regions are interlinked via intra- and inter-regional transport network systems . under various types of growth order assumptions on the intra- and inter-regional residence times of the residents of a sub-region , different patterns of static behavior of the mobility process are studied . in addition , the results reveal that story_separator_special_tag 1 the chaos computing paradigm ( w.l . ditto , a. miliotis , k. murali , and s. sinha ) 2 how does god play dice ? ( j. nagler and p.h . richter ) 3 phase reduction of stochastic limit-cycle oscillators ( k. yoshimura ) 4 complex systems , numbers and number theory ( l. lacasa , b. luque , and o. miramontes ) 5 wave localization transitions in complex systems ( j.w . kantelhardt , l. jahnke , and r. berkovits ) 6 from deterministic chaos to anomalous diffusion ( r. klages ) story_separator_special_tag add home aims to reduce transport needs and is fostering a modal shift from car-trips to more energy efficient modes , especially starting from residential areas . in 4 out of 5 cases one 's own front door is the place where modal choices are taken . the choice is often influenced by owning a private car , which is considered the most easily accessible mode in daily life . the approach of add home includes three levels : 1 ) legal and regulatory settings will be reshaped to enable sustainable mobility before planning new residential areas 2 ) the accessibility of new residential areas and each household will be refocused from private car parking lots to more energy efficient modes of transport 3 ) mobility patterns and habits will be reorganised by mobility services that bundle trips , shift trips and substitute them . the cooperation of municipalities and housing companies/neighbourhood administrations will create liveable housing areas that enable residents to freely choose their transport mode .
the project will lead to a more ecological , economical and social way of living . story_separator_special_tag we report on our experience scaling up the mobile millennium traffic information system using cloud computing and the spark cluster computing framework . mobile millennium uses machine learning to infer traffic conditions for large metropolitan areas from crowdsourced data , and spark was specifically designed to support such applications . many studies of cloud computing frameworks have demonstrated scalability and performance improvements for simple machine learning algorithms . our experience implementing a real-world machine learning-based application corroborates such benefits , but we also encountered several challenges that have not been widely reported . these include : managing large parameter vectors , using memory efficiently , and integrating with the application 's existing storage infrastructure . this paper describes these challenges and the changes they required in both the spark framework and the mobile millennium software . while we focus on a system for traffic estimation , we believe that the lessons learned are applicable to other machine learning-based applications . story_separator_special_tag mobile ad hoc networks enable communications between clouds of mobile devices without the need for a preexisting infrastructure . one of their most interesting evolutions is opportunistic networks , whose goal is to also enable communication in disconnected environments , where the general absence of an end-to-end path between the sender and the receiver impairs communication when legacy manet networking protocols are used . the key idea of oppnets is that the mobility of nodes helps the delivery of messages , because it may connect , asynchronously in time , otherwise disconnected subnetworks . this is especially true for networks whose nodes are mobile devices ( e.g. , smartphones and tablets ) carried by human users , which is the typical oppnets scenario . in such a network , where the movements of the communicating devices mirror those of their owners , finding a route between two disconnected devices implies uncovering habits in human movements and patterns in their connectivity ( frequencies of meetings , average duration of a contact , etc . ) , and exploiting them to predict future encounters . therefore , there is a challenge in studying human mobility , specifically in its application to oppnets research story_separator_special_tag in temporal networks , where nodes interact via sequences of temporary events , information or resources can only flow through paths that follow the time ordering of events . such temporal paths play a crucial role in dynamic processes . however , since networks have so far usually been considered static or quasi-static , the properties of temporal paths are not yet well understood . building on a definition and algorithmic implementation of the average temporal distance between nodes , we study temporal paths in empirical networks of human communication and air transport . although temporal distances correlate with static graph distances , there is a large spread , and nodes that appear close from the static network view may be connected via slow paths or not at all . differences between static and temporal properties are further highlighted in studies of the temporal closeness centrality .
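a minimal sketch of the time-respecting reachability these temporal-distance measures are built on : earliest-arrival times from a time-ordered contact list , from which a temporal closeness score can be formed . the event format and the closeness normalization are illustrative assumptions :

import math

def earliest_arrival(events, source, t0=0):
    # events : time-sorted list of ( t , u , v ) contacts , assumed undirected ;
    # returns the earliest time each node is reachable from source via a
    # time-respecting path ( morning a-b then afternoon b-c reaches c , not the reverse )
    arrival = {source: t0}
    for t, u, v in sorted(events):
        for a, b in ((u, v), (v, u)):
            if arrival.get(a, math.inf) <= t and t < arrival.get(b, math.inf):
                arrival[b] = t
    return arrival

events = [(1, 'a', 'b'), (2, 'b', 'c'), (3, 'c', 'd')]
arr = earliest_arrival(events, 'a')
nodes = {n for e in events for n in e[1:]}
# temporal closeness of the source : mean inverse latency to every other node
closeness = sum(1.0 / arr[n] for n in nodes if n != 'a' and n in arr) / (len(nodes) - 1)
print(arr, round(closeness, 2))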
in addition , correlations and heterogeneities in the underlying event sequences affect temporal path lengths , increasing temporal distances in communication networks and decreasing them in the air transport network . story_separator_special_tag the study of influential members of human networks is an important research question in social network analysis . however , the current state-of-the-art is based on static or aggregated representations of the network topology . we argue that dynamically evolving network topologies are inherent in many systems , including real online social and technological networks : fortunately the nature of these systems is such that they allow the gathering of large quantities of fine-grained temporal data on interactions amongst the network members . in this paper we propose novel temporal centrality metrics which take into account such dynamic interactions over time . using a real corporate email dataset , we evaluate the important individuals selected by means of static and temporal analysis , taking two perspectives : firstly , from a semantic level , we investigate their corporate role in the organisation ; and secondly , from a dynamic process point of view , we measure information dissemination and the role of information mediators . we find that temporal analysis provides a better understanding of dynamic processes and a more accurate identification of important people compared to traditional static methods . story_separator_special_tag in this paper , we seek to improve understanding of the structure of human mobility , with a view to using this for designing algorithms for the dissemination of data among mobile users . we analyse community structures and node centrality from the human mobility traces and use these two metrics to design efficient forwarding algorithms in terms of delivery ratio and delivery cost for mobile networks . this is the first empirical study of community and centrality using real human mobility datasets . story_separator_special_tag centrality is an important notion in network analysis and is used to measure the degree to which network structure contributes to the importance of a node in a network . while many different centrality measures exist , most of them apply to static networks . most networks , on the other hand , are dynamic in nature , evolving over time through the addition or deletion of nodes and edges . a popular approach to analyzing such networks represents them by a static network that aggregates all edges observed over some time period . this approach , however , under- or overestimates the centrality of some nodes . we address this problem by introducing a novel centrality metric for dynamic network analysis . this metric exploits the intuition that in order for one node in a dynamic network to influence another over some period of time , there must exist a path that connects the source and destination nodes through intermediaries at different times . we demonstrate on an example network that the proposed metric leads to a very different ranking than analysis of an equivalent static network . we use dynamic centrality to study a dynamic citations network and contrast story_separator_special_tag a parameter-free model predicts patterns of commuting , phone calls and trade using only population density at all intermediate points . since the 1940s , planners needing to predict population movement , transport-network usage and even epidemics have turned to a model based on the 'gravity law ' .
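for reference , the two models being contrasted here can be written out ; the symbols ( origin population $m_i$ , destination population $n_j$ , distance $r_{ij}$ , and $s_{ij}$ , the total population in a circle of radius $r_{ij}$ centred on the origin , excluding origin and destination ) follow the usual mobility-literature conventions rather than anything defined in this excerpt :

$$ T_{ij} \propto \frac{m_i^{\alpha} n_j^{\beta}}{f(r_{ij})} \quad \text{(gravity law)} , \qquad T_{ij} = T_i \, \frac{m_i n_j}{(m_i + s_{ij})(m_i + n_j + s_{ij})} \quad \text{(radiation model)} , $$

where $T_i$ is the total number of travellers leaving location $i$ and $f$ is a decaying function of distance ; the radiation model contains no adjustable constants , which is the `` parameter-free '' claim above .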
this assumes that the number of individuals travelling between two locations is proportional to the population at the source and destination , and decays with distance . this approach has its limitations , because it looks at the flow between two specific points only . here , albert-laszlo barabasi and colleagues present an alternative model that takes into account population density at all intermediate points . their parameter-free radiation model predicts a range of phenomena from commuting and migrations to phone calls much more accurately than the gravity model . needing only data on population densities , which are easy to measure , the system can be used to predict commuting and transport patterns even in areas where data are not collected systematically . introduced in its contemporary form in 1946 ( ref . 1 ) , but with roots that go back to the eighteenth century , the gravity law story_separator_special_tag despite their importance for urban planning , traffic forecasting and the spread of biological and mobile viruses , our understanding of the basic laws governing human motion remains limited owing to the lack of tools to monitor the time-resolved location of individuals . here we study the trajectory of 100,000 anonymized mobile phone users whose position is tracked for a six-month period . we find that , in contrast with the random trajectories predicted by the prevailing levy flight and random walk models , human trajectories show a high degree of temporal and spatial regularity , each individual being characterized by a time-independent characteristic travel distance and a significant probability to return to a few highly frequented locations . after correcting for differences in travel distances and the inherent anisotropy of each trajectory , the individual travel patterns collapse into a single spatial probability distribution , indicating that , despite the diversity of their travel history , humans follow simple reproducible patterns . this inherent similarity in travel patterns could impact all phenomena driven by human mobility , from epidemic prevention to emergency response , urban planning and agent-based modelling . story_separator_special_tag the increasing pervasiveness of location-acquisition technologies ( gps , gsm networks , etc . ) is leading to the collection of large spatio-temporal datasets and to the opportunity of discovering usable knowledge about movement behaviour , which fosters novel applications and services . in this paper , we move towards this direction and develop an extension of the sequential pattern mining paradigm that analyzes the trajectories of moving objects . we introduce trajectory patterns as concise descriptions of frequent behaviours , in terms of both space ( i.e. , the regions of space visited during movements ) and time ( i.e. , the duration of movements ) . in this setting , we provide a general formal statement of the novel mining problem and then study several different instantiations of different complexity . the various approaches are then empirically evaluated over real data and synthetic benchmarks , comparing their strengths and weaknesses . story_separator_special_tag inspired by empirical studies of networked systems such as the internet , social networks , and biological networks , researchers have in recent years developed a variety of techniques and models to help us understand or predict the behavior of these systems .
here we review developments in this field , including such concepts as the small-world effect , degree distributions , clustering , network correlations , random graph models , models of network growth and preferential attachment , and dynamical processes taking place on networks . story_separator_special_tag understanding of urban mobility dynamics benefits both the study of aggregate human mobility in wireless communications and the planning and provision of urban facilities and services . due to the high penetration of cell phones , cellular networks provide information for urban dynamics with large spatial extent and continuous temporal coverage . in this paper , a novel approach is proposed to explore the space-time structure of urban dynamics , based on original data collected by cellular networks in a southern city of china , recording population distribution by dividing the city into thousands of pixels . by applying principal component analysis , the intrinsic dimensionality is revealed . the structure of all the pixel population variations could be well captured by a small set of eigen pixel population variations . according to the classification of eigen pixel population variations , each pixel population variation can be decomposed into three constitutions : deterministic trends , short-lived spikes , and noise . moreover , the most significant eigen pixel population variations are utilized in the applications of forecasting and anomaly detection . story_separator_special_tag opportunistic networks use human mobility , and the consequent wireless contacts between mobile devices , to disseminate data in a peer-to-peer manner . to grasp the potential and limitations of such networks , as well as to design appropriate algorithms and protocols , it is key to understand the statistics of contacts . to date , contact analysis has mainly focused on statistics such as inter-contact and contact distributions . while these pair-wise properties are important , we argue that structural properties of contacts need more thorough analysis . for example , communities of tightly connected nodes have a great impact on the performance of opportunistic networks and the design of algorithms and protocols .
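in the simplest reading , the principal-component decomposition used in the urban-dynamics study above is an svd of the centered pixel-by-time population matrix ; a sketch on synthetic data ( the diurnal signal and matrix sizes are invented for illustration ) :

import numpy as np

rng = np.random.default_rng(2)

# rows = pixels , columns = hourly population counts ( synthetic stand-in )
hours = np.arange(24 * 7)
diurnal = np.cos(2 * np.pi * hours / 24)  # one shared daily rhythm
pop = 1000 + 200 * rng.uniform(0.5, 1.5, (500, 1)) * diurnal \
      + rng.normal(0, 20, (500, hours.size))

centered = pop - pop.mean(axis=1, keepdims=True)
u, s, vt = np.linalg.svd(centered, full_matrices=False)

# variance captured by the leading "eigen pixel population variations"
explained = s ** 2 / (s ** 2).sum()
print(explained[:3].round(3))  # a handful of components dominate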
significance : the unpredictable recurrences of prion epidemics , their incurable lethality , and the capacity of animal prions to infect humans provide significant motivation to ascertain the parameters governing disease transmission . the unprecedented spread and uncertain zoonotic potential of chronic wasting disease , a contagious epidemic among deer , elk , and other cervids , is of particular concern . here we demonstrate that naturally occurring primary structural differences in cervid prps differentially impact the efficiency of intra- and interspecies prion transmission . our results not only deliver information about the role of primary structural variation on prion susceptibility , but also provide functional support to a mechanism in which plasticity of a tertiary structural epitope governs prion protein conversion and intra- and interspecies susceptibility to prions . understanding the molecular parameters governing prion propagation is crucial for controlling these lethal , proteinaceous , and infectious neurodegenerative diseases . to explore the effects of prion protein ( prp ) sequence and structural variations on intra- and interspecies transmission , we integrated studies in deer , a species naturally susceptible to chronic wasting disease ( cwd ) , a burgeoning , contagious epidemic of uncertain origin and zoonotic potential , story_separator_special_tag a major focus in prion structural biology studies is unraveling the molecular mechanism leading to the structural conversion of prp ( c ) to its pathological form , prp ( sc ) . in our recent studies , we attempted to understand the early events of the conformational changes leading to prp ( sc ) using as investigative tools point mutations clustered in the open reading frame of the human prp gene and linked to genetic forms of human prion diseases . in the work presented here , we investigate the effect of ph on the nuclear magnetic resonance ( nmr ) structure of recombinant human prp ( huprp ) carrying the pathological v210i mutation responsible for familial creutzfeldt-jakob disease . the nmr structure of huprp ( v210i ) determined at ph 7.2 shows the same overall fold as the previously determined structure of huprp ( v210i ) at ph 5.5. it consists of a disordered n-terminal tail ( residues 90-124 ) and a globular c-terminal domain ( residues 125-231 ) comprising three α-helices and a short antiparallel β-sheet . detailed comparison of the three-dimensional structures of huprp ( v210i ) at ph 7.2 and 5.5 revealed significant local structural differences story_separator_special_tag the development of transmissible spongiform encephalopathies ( tses ) is associated with the conversion of the cellular prion protein ( prp ( c ) ) into a misfolded , pathogenic isoform ( prp ( sc ) ) . spontaneous generation of prp ( sc ) in inherited forms of disease is caused by mutations in the gene coding for prp ( prnp ) . in this work , we describe the nmr solution-state structure of the truncated recombinant human prp ( huprp ) carrying the pathological v210i mutation linked to genetic creutzfeldt-jakob disease . the three-dimensional structure of the v210i mutant consists of an unstructured n-terminal part ( residues 90-124 ) and a well-defined c-terminal domain ( residues 125-228 ) . the c-terminal domain contains three α-helices ( residues 144-156 , 170-194 and 200-228 ) and a short antiparallel β-sheet ( residues 129-130 and 162-163 ) .
comparison with the structure of the wild-type huprp revealed that although the two structures share a similar global architecture , the mutation introduces some local structural differences . the observed variations are mostly clustered in the α2-α3 inter-helical interface and in the β2-α2 loop region . story_separator_special_tag most thermophilic proteins tend to have more salt bridges , and achieve higher thermostability by up-shifting and broadening their protein stability curves . while the stabilizing effect of salt bridges has been extensively studied , experimental data on how salt bridges influence protein stability curves are scarce . here , we used double mutant cycles to determine the temperature dependency of the pair-wise interaction energy and the contribution of salt bridges to ΔCp in the thermophilic ribosomal protein l30e . our results showed that the pair-wise interaction energies for the salt bridges e6/r92 and e62/k46 were stabilizing and insensitive to temperature changes from 298 to 348 k. on the other hand , the pair-wise interaction energies for the control long-range ion-pair e90/r92 were negligible . the ΔCp of all single and double mutants was determined by gibbs-helmholtz and kirchhoff analyses . we showed that the two stabilizing salt bridges contributed to a reduction of ΔCp by 0.8-1.0 kj mol^-1 k^-1 . taken together , our results suggest that the extra salt bridges found in thermophilic proteins enhance the thermostability of proteins by reducing ΔCp , leading to the up-shifting and broadening of the protein stability curves . story_separator_special_tag the ability of prions to infect some species and not others is determined by the transmission barrier . this unexplained phenomenon has led to the belief that certain species were not susceptible to transmissible spongiform encephalopathies ( tses ) and therefore represented negligible risk to human health if consumed . using the protein misfolding cyclic amplification ( pmca ) technique , we were able to overcome the species barrier in rabbits , which have been classified as tse resistant for four decades . rabbit brain homogenate , either unseeded or seeded in vitro with disease-related prions obtained from different species , was subjected to serial rounds of pmca . de novo rabbit prions produced in vitro from unseeded material were tested for infectivity in rabbits , with one of three intracerebrally challenged animals succumbing to disease at 766 d and displaying all of the characteristics of a tse , thereby demonstrating that leporids are not resistant to prion infection . material from the brain of the clinically affected rabbit containing abnormal prion protein resulted in a 100 % attack rate after its inoculation in transgenic mice overexpressing rabbit prp . transmissibility to rabbits ( > 470 d ) has been story_separator_special_tag nmr structures are presented for the recombinant construct of residues 121-230 from the tammar wallaby ( macropus eugenii ) prion protein ( prp ) , twprp ( 121-230 ) , and for the variant mouse prps mprp [ y225a , y226a ] ( 121-231 ) and mprp [ v166a ] ( 121-231 ) at 20 degrees c and ph 4.5. all three proteins exhibit the same global architecture as seen in other recombinant prp ( c ) s ( cellular isoforms of prp ) and shown to prevail in natural bovine prp ( c ) .
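two standard relations sit behind the salt-bridge measurements reported above ; written out in our notation ( not the paper 's ) , the double mutant cycle coupling energy and the gibbs-helmholtz stability curve are

$$ \Delta\Delta G_{\mathrm{int}} = \Delta G_{\mathrm{wt}} - \Delta G_{\mathrm{mut1}} - \Delta G_{\mathrm{mut2}} + \Delta G_{\mathrm{double}} , $$

$$ \Delta G(T) = \Delta H_m \left( 1 - \frac{T}{T_m} \right) - \Delta C_p \left[ ( T_m - T ) + T \ln \frac{T}{T_m} \right] , $$

where $T_m$ is the midpoint unfolding temperature and $\Delta H_m$ the unfolding enthalpy at $T_m$ ; a smaller $\Delta C_p$ flattens the curvature of $\Delta G(T)$ , which is how a reduced heat capacity change broadens and up-shifts the stability curve .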
special interest was focused on a loop that connects the beta2-strand with helix alpha2 in the prp ( c ) fold , since there are indications from in vivo experiments that this local structural feature affects the susceptibility of transgenic mice to transmissible spongiform encephalopathies . this beta2-alpha2 loop and helix alpha3 form a solvent-accessible contiguous epitope , which has been proposed to be the recognition area for a hypothetical chaperone , the `` protein x '' . this hypothetical chaperone would affect the conversion of prp ( c ) into the disease-related scrapie form ( prp ( sc ) ) by moderating intermolecular interactions related story_separator_special_tag the three-dimensional structures of prion proteins ( prps ) in the cellular form ( prp ( c ) ) include a stacking interaction between the aromatic rings of the residues y169 and f175 , where f175 is conserved in all but two so far analyzed mammalian prp sequences and where y169 is strictly conserved . to investigate the structural role of f175 , we characterized the variant mouse prion protein mprp [ f175a ] ( 121-231 ) . the nmr solution structure represents a typical prp ( c ) fold , and it contains a 3(10)-helical β2-α2 loop conformation , which is well defined because all amide group signals in this loop are observed at 20 °c . with this `` rigid-loop prp ( c ) '' behavior , mprp [ f175a ] ( 121-231 ) differs from the previously studied mprp [ y169a ] ( 121-231 ) , which contains a type i β-turn β2-α2 loop structure . when compared to other rigid-loop variants of mprp ( 121-231 ) , mprp [ f175a ] ( 121-231 ) is unique in that the thermal unfolding temperature is lowered by 8 °c . these observations enable further story_separator_special_tag mutations in the prion protein ( prp ) can cause spontaneous prion diseases in humans ( hu ) and animals . in transgenic mice , mutations can determine the susceptibility to the infection of different prion strains . some of these mutations also show a dominant-negative effect , thus halting the replication process by which wild-type mouse ( mo ) prp is converted into mo scrapie . using all-atom molecular dynamics ( md ) simulations , here we studied the structure of huprp , moprp , 10 hu/moprp chimeras , and 1 mo/sheepprp chimera in explicit solvent . overall , 2 μs of md were collected . our findings suggest that the interactions between the α1 helix and the n-terminal part of the α3 helix are critical in prion propagation , whereas the β2-α2 loop conformation plays a role in the dominant-negative effect . an animated interactive 3d complement ( i3dc ) is available in proteopedia at http://proteopedia.org/w/journal:jbsd:4 . story_separator_special_tag prion-related protein ( prp ) , a cell-surface copper-binding glycoprotein , is considered to be responsible for a number of transmissible spongiform encephalopathies ( tses ) . the structural conversion of prp from the normal cellular isoform ( prpc ) to the post-translationally modified form ( prpsc ) is thought to be relevant to cu2+ binding to histidine residues . rabbits are one of the few mammalian species that appear to be resistant to tses , because of the structural characteristics of the rabbit prion protein ( raprpc ) itself . here we determined the three-dimensional local structure around the c-terminal high-affinity copper-binding sites using x-ray absorption near-edge structure combined with ab initio calculations in the framework of the multiple-scattering ( ms ) theory .
the results show that two amino acid residues , gln97 and met108 , and two histidine residues , his95 and his110 , are involved in binding this copper ( ii ) ion . it might help us understand the roles of copper in prion conformation conversions , and the molecular mechanisms of prion-involved diseases . story_separator_special_tag each known abnormal prion protein ( prpsc ) is considered to have a specific host range and therefore the ability to infect some species and not others . consequently , some species have been assumed to be prion disease resistant as no successful natural or experimental challenge infections have been reported . this assumption suggested that , independent of the virulence of the prpsc strain , normal prion protein ( prpc ) from these resistant species could not be induced to misfold . numerous in vitro and in vivo studies trying to corroborate the unique properties of prpsc have been undertaken . the results presented in the article `` rabbits are not resistant to prion infection '' demonstrated that normal rabbit prpc , which was considered to be resistant to prion disease , can be misfolded to prpsc and subsequently used to infect and transmit a standard prion disease to leporids . using the concept of species resistance to prion disease , we will discuss the mistake of attributing species-specific prion disease resistance based purely on the absence of natural cases and incomplete in vivo challenges . the bse epidemic was partially due to an underestimation of species barriers . to repeat story_separator_special_tag the self-perpetuating conversion of cellular prion proteins ( prpc ) into an aggregated β-sheet-rich conformation is associated with transmissible spongiform encephalopathies ( tses ) . the loop 166-175 ( l1 ) in prpc , which displays sequence and structural variation among species , has been suggested to play a role in the species barrier , in particular against transmission of tses from cervids to domestic and laboratory animals . l1 is ordered in elk prp , as well as in a mouse/elk hybrid ( in which l1 of mouse is replaced by that of elk ) , but not in other species such as mice , humans , and cattle . to investigate the source and significance of l1 dynamics , we carried out explicit solvent molecular dynamics simulations ( 0.5 μs in total ) of the mouse prion protein , the mouse/elk hybrid , and control simulations , in which the mouse sequence is reintroduced into the structure of the mouse/elk hybrid . we found that the flexibility of l1 correlates with the backbone dynamics of ser170 . furthermore , l1 mobility promotes a substantial displacement of tyr169 , rupture of the asp178-tyr128 and asp178-tyr169 side chain story_separator_special_tag the nmr structure of the recombinant elk prion protein ( eprp ) , which represents the cellular isoform ( eprpc ) in the healthy organism , is described here . as anticipated from the highly conserved amino acid sequence , eprpc has the same global fold as other mammalian prion proteins ( prps ) , with a flexibly disordered `` tail '' of residues 23-124 and a globular domain 125-226 with three alpha-helices and a short antiparallel beta-sheet . however , eprpc shows a striking local structure variation when compared with most other mammalian prps , in particular human , bovine , and mouse prpc .
a loop of residues 166-175 , which links the beta-sheet with the alpha2-helix and is part of a hypothetical `` protein x '' epitope , is outstandingly well defined , whereas this loop is disordered in the other species . based on nmr structure determinations of two mouse prp variants , mprp [ n174t ] and mprp [ s170n , n174t ] , this study shows that the structured loop in eprpc relates to these two local amino acid exchanges , so that mprp [ s170n , n174t ] exactly mimics eprpc . these story_separator_special_tag using a recently developed mesoscopic theory of protein dielectrics , we have calculated the salt bridge energies , total residue electrostatic potential energies , and transfer energies into a low dielectric amyloid-like phase for 12 species and mutants of the prion protein . salt bridges and self energies play key roles in stabilizing secondary and tertiary structural elements of the prion protein . the total electrostatic potential energy of each residue was found to be invariably stabilizing . residues frequently found to be mutated in familial prion disease were among those with the largest electrostatic energies . the large barrier to charged group desolvation imposes regional constraints on involvement of the prion protein in an amyloid aggregate , resulting in an electrostatic amyloid recruitment profile that favours regions of sequence between alpha helix 1 and beta strand 2 , the middles of helices 2 and 3 , and the region n-terminal to alpha helix 1. we found that the stabilization due to salt bridges is minimal among the proteins studied for disease-susceptible human mutants of prion protein . story_separator_special_tag prion diseases are fatal neurodegenerative disorders caused by an aberrant accumulation of the misfolded cellular prion protein ( prpc ) conformer , denoted as the infectious scrapie isoform or prpsc . in inherited human prion diseases , mutations in the open reading frame of the prp gene ( prnp ) are hypothesized to favor spontaneous generation of prpsc in specific brain regions , leading to neuronal cell degeneration and death . here , we describe the nmr solution structure of the truncated recombinant human prp from residue 90 to 231 carrying the q212p mutation , which is believed to cause gerstmann-sträussler-scheinker ( gss ) syndrome , a familial prion disease . the secondary structure of the q212p mutant consists of a flexible disordered tail ( residues 90-124 ) and a globular domain ( residues 125-231 ) . the substitution of a glutamine by a proline at position 212 introduces novel structural differences in comparison to the known wild-type prp structures . the most remarkable differences involve the c-terminal end of the protein and the β2-α2 loop region . this structure might provide new insights into the early events of the conformational transition of prpc into prpsc . indeed , story_separator_special_tag for a successful analysis of the relation between amino acid sequence and protein structure , an unambiguous and physically meaningful definition of secondary structure is essential . we have developed a set of simple and physically motivated criteria for secondary structure , programmed as a pattern-recognition process of hydrogen-bonded and geometrical features extracted from x-ray coordinates . cooperative secondary structure is recognized as repeats of the elementary hydrogen-bonding patterns turn and bridge . repeating turns are helices , repeating bridges are ladders , connected ladders are sheets .
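the hydrogen-bond patterns at the heart of this scheme are assigned with a simple electrostatic energy test ( the kabsch-sander criterion : partial charges on the backbone c=o and n-h groups , with a bond assigned when the energy falls below -0.5 kcal/mol ) ; a sketch , with illustrative distances :

def hbond_energy(r_on, r_ch, r_oh, r_cn):
    # kabsch-sander electrostatic energy of a candidate backbone
    # c=o ... h-n hydrogen bond ; distances in angstroms , energy in kcal/mol
    q1, q2, f = 0.42, 0.20, 332.0  # partial charges ( in e ) and dimensional factor
    return q1 * q2 * f * (1.0 / r_on + 1.0 / r_ch - 1.0 / r_oh - 1.0 / r_cn)

def is_hbond(r_on, r_ch, r_oh, r_cn, cutoff=-0.5):
    # a hydrogen bond is assigned when the energy is below the cutoff
    return hbond_energy(r_on, r_ch, r_oh, r_cn) < cutoff

print(is_hbond(2.9, 3.5, 1.9, 3.9))  # distances typical of a helical h-bond -> true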
geometric structure is defined in terms of the concepts torsion and curvature of differential geometry . local chain chirality is the torsional handedness of four consecutive cα positions and is positive for right-handed helices and negative for ideal twisted β-sheets . curved pieces are defined as bends . solvent exposure is given as the number of water molecules in possible contact with a residue . the end result is a compilation of the primary structure , including ss bonds , secondary structure , and solvent exposure of 62 different globular proteins . the presentation is in linear form : strip graphs for an overall view and strip tables for the details of story_separator_special_tag transmissible spongiform encephalopathies ( tses ) are associated with the accumulation of deposits of an abnormal form , prpsc , of the host-encoded prion protein , prpc . amino acid substitutions in prpc have long been known to affect tse disease outcome . in extreme cases in humans , various mutations appear to cause disease . in animals , polymorphisms are associated with variations in disease susceptibility and , in sheep , several polymorphisms have been identified that are known to affect the susceptibility of carriers to disease . the mechanisms of polymorphism-mediated modulation of disease susceptibility remain elusive , and we have been studying the effect of various amino acid substitutions at prp codon 164 ( mouse numbering ) , in the β2-α2 loop region of the prion protein , to attempt to decipher how polymorphisms may affect disease susceptibility . combined in vitro approaches suggest that there exists a correlation between the ability of protein variants to convert to abnormal isoforms in seeded conversion assays and the thermal stability of the protein variants , as judged by both thermal denaturation and an unseeded in vitro oligomerization assay . we have performed molecular dynamics simulations to give an indication of story_separator_special_tag prion diseases , or transmissible spongiform encephalopathies ( tses ) , are associated with the conformational conversion of the cellular prion protein , prp ( c ) , into a protease-resistant form , prp ( sc ) . here , we show that mutation-induced thermodynamic stabilization of the folded , α-helical domain of prp ( c ) has a dramatic inhibitory effect on the conformational conversion of prion protein in vitro , as well as on the propagation of tse disease in vivo . transgenic mice expressing a human prion protein variant with increased thermodynamic stability were found to be much more resistant to infection with the tse agent than those expressing wild-type human prion protein , in both the primary passage and three subsequent subpassages . these findings not only provide a line of evidence in support of the protein-only model of tses but also yield insight into the molecular nature of the prp ( c ) to prp ( sc ) conformational transition , and they suggest an approach to the treatment of prion diseases . story_separator_special_tag prions are infectious particles causing transmissible spongiform encephalopathies ( tses ) . they consist , at least in part , of an isoform ( prpsc ) of the ubiquitous cellular prion protein ( prpc ) . conformational differences between prpc and prpsc are evident from increased β-sheet content and protease resistance in prpsc ( refs 1,2,3 ) . here we describe a monoclonal antibody , 15b3 , that can discriminate between the normal and disease-specific forms of prp .
such an antibody has been long sought as it should be invaluable for characterizing the infectious particle as well as for diagnosis of tses such as bovine spongiform encephalopathy ( bse ) or creutzfeldt-jakob disease ( cjd ) in humans . 15b3 specifically precipitates bovine , murine or human prpsc , but not prpc , suggesting that it recognizes an epitope common to prions from different species . using immobilized synthetic peptides , we mapped three polypeptide segments in prp as the 15b3 epitope . in the nmr structure of recombinant mouse prp , segments 2 and 3 of the 15b3 epitope are near neighbours in space , and segment 1 is located in a different part of the molecule . story_separator_special_tag zoonotic prion transmission was reported after the bovine spongiform encephalopathy ( bse ) epidemic , when > 200 cases of prion disease in humans were diagnosed as variant creutzfeldt-jakob disease . assessing the risk of cross-species prion transmission remains challenging . we and others have studied how specific amino acid residue differences between species impact prion conversion and have found that the β2-α2 loop region of the mouse prion protein ( residues 165-175 ) markedly influences infection by sheep scrapie , bse , mouse-adapted scrapie , deer chronic wasting disease , and hamster-adapted scrapie prions . the tyrosine residue at position 169 is strictly conserved among mammals and an aromatic side chain in this position is essential to maintain a 3(10)-helical turn in the β2-α2 loop . here we examined the impact of the y169g substitution together with the previously described s170n , n174t rigid loop substitutions on cross-species prion transmission in vivo and in vitro . we found that transgenic mice expressing mouse prp containing the triple-amino acid substitution completely resisted infection with two strains of mouse prions and with deer chronic wasting disease prions . these studies indicate that y169 is important for prion formation story_separator_special_tag prion proteins are key molecules in transmissible spongiform encephalopathies ( tses ) , but the precise mechanism of the conversion from the cellular form ( prpc ) to the scrapie form ( prpsc ) is still unknown . here we discovered a chemical chaperone to stabilize the prpc conformation and identified the hot spots to stop the pathogenic conversion . we conducted in silico screening to find compounds that fitted into a pocket created by residues undergoing the conformational rearrangements between the native and the sparsely populated high-energy states ( prp * ) and that directly bind to those residues . forty-four selected compounds were tested in a tse-infected cell culture model , among which one , 2-pyrrolidin-1-yl-n- [ 4- [ 4- ( 2-pyrrolidin-1-yl-acetylamino ) -benzyl ] -phenyl ] -acetamide , termed gn8 , efficiently reduced prpsc . subsequently , administration of gn8 was found to prolong the survival of tse-infected mice . heteronuclear nmr and computer simulation showed that the specific binding sites are the a-s2 loop ( n159 ) and the region from helix b ( v189 , t192 , and k194 ) to the b-c loop ( e196 ) , indicating that the intercalation of these distant regions story_separator_special_tag a conformational transition of normal cellular prion protein ( prpc ) to its pathogenic form ( prpsc ) is believed to be a central event in the transmission of the devastating neurological diseases known as spongiform encephalopathies .
the common methionine/valine polymorphism at residue 129 in the prp influences disease susceptibility and phenotype . we report here seven crystal structures of human prp variants : three of wild type ( wt ) prp containing v129 , and four of the familial variants d178n and f198s , containing either m129 or v129 . comparison of these structures with each other and with previously published wt prp structures containing m129 revealed that only wt prps were found to crystallize as domain-swapped dimers or closed monomers ; the four mutant prps crystallized as non-swapped dimers . three of the four mutant prps aligned to form intermolecular β-sheets . several regions of structural variability were identified , and analysis of their conformations provides an explanation for the structural features which can influence the formation and conformation of intermolecular β-sheets involving the m/v129 polymorphic residue . story_separator_special_tag background amyloid fibrils associated with neurodegenerative diseases can be considered biologically relevant failures of cellular quality control mechanisms . it is known that in vivo human tau protein , human prion protein , and human copper , zinc superoxide dismutase ( sod1 ) have the tendency to form fibril deposits in a variety of tissues and they are associated with different neurodegenerative diseases , while rabbit prion protein and hen egg white lysozyme do not readily form fibrils and are unlikely to cause neurodegenerative diseases . in this study , we have investigated the contrasting effect of macromolecular crowding on fibril formation of different proteins . methodology/principal findings as revealed by assays based on thioflavin t binding and turbidity , human tau fragments , when phosphorylated by glycogen synthase kinase-3β , do not form filaments in the absence of a crowding agent but do form fibrils in the presence of a crowding agent , and the presence of a strong crowding agent dramatically promotes amyloid fibril formation of human prion protein and its two pathogenic mutants e196k and d178n . such an enhancing effect of macromolecular crowding on fibril formation is also observed for a pathological human sod1 mutant a4v story_separator_special_tag prion diseases are a group of transmissible , invariably fatal neurodegenerative diseases that affect both humans and animals . according to the protein-only hypothesis , the infectious agent is a prion ( proteinaceous infectious particle ) that is composed primarily of prpsc , the disease-associated isoform of the cellular prion protein , prp . prpsc arises from the conformational change of the normal , glycosylphosphatidylinositol ( gpi ) -anchored protein , prpc . the mechanism by which this process occurs , however , remains enigmatic . rabbits are one of a small number of mammalian species reported to be resistant to prion infection . sequence analysis of rabbit prp revealed that its c-terminal amino acids differ from those of prp from other mammals and may affect the anchoring of rabbit prp through its gpi anchor . using a cell culture model , this study investigated the effect of the rabbit prp-specific c-terminal amino acids on the addition of the gpi anchor to prpc , prpc localization , and prpsc formation . the incorporation of rabbit-specific c-terminal prp residues into mouse prp did not affect the addition of a gpi anchor or the localization of prp .
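fibril-formation kinetics measured by the thioflavin t assays mentioned above are commonly summarized by fitting a sigmoidal growth curve and reading off a lag time and an apparent growth rate . a hedged sketch ( the functional form is a standard empirical model , not necessarily the analysis used in the cited study ; the data here are simulated ) :

```python
import numpy as np
from scipy.optimize import curve_fit

def tht_sigmoid(t, f0, a, k, t50):
    """empirical sigmoidal tht curve: baseline f0, amplitude a,
    apparent growth rate k, midpoint time t50."""
    return f0 + a / (1.0 + np.exp(-k * (t - t50)))

t = np.linspace(0, 48, 97)                      # hours (simulated experiment)
f = tht_sigmoid(t, 1.0, 10.0, 0.5, 20.0)
f += np.random.normal(scale=0.2, size=t.size)   # measurement noise

(f0, a, k, t50), _ = curve_fit(tht_sigmoid, t, f, p0=[1, 10, 0.3, 15])
lag_time = t50 - 2.0 / k   # a common operational definition of the lag phase
print(f"lag time = {lag_time:.1f} h, apparent rate = {k:.2f} 1/h")
```

a crowding agent that promotes fibrillization would show up as a shorter lag time and/or a larger apparent rate under this parameterization .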
story_separator_special_tag the recent introduction of bank vole ( clethrionomys glareolus ) as an additional laboratory animal for research on prion diseases revealed an important difference when compared to the mouse and the syrian hamster , since bank voles show a high susceptibility to infection by brain homogenates from a wide range of diseased species such as sheep , goats , and humans . in this context , we determined the nmr structure of the c-terminal globular domain of the recombinant bank vole prion protein ( bvprp ) [ bvprp ( 121-231 ) ] at 20 degrees c . bvprp ( 121-231 ) has the same overall architecture as other mammalian prps , with three alpha-helices and an antiparallel beta-sheet , but it differs from prp of the mouse and most other mammalian species in that the loop connecting the second beta-strand and helix alpha2 is precisely defined at 20 degrees c . this is similar to the previously described structures of elk prp and the designed mouse prp ( mprp ) variant mprp [ s170n , n174t ] ( 121-231 ) , whereas syrian hamster prp displays a structure that is in-between these limiting cases . studies with the newly designed variant mprp story_separator_special_tag the present invention relates to a method for preparing a bone graft substitute using bovine bone , and more particularly to a method for preparing a safe bone graft substitute which does not have the risk of infection with bovine spongiform encephalopathy , the method comprising treating bovine bone with sodium hypochlorite and treating the treated bone at a high temperature of more than 600 °c . the bone graft substitute does not cause an immune response , because it is prepared by effectively removing lipids and organic substances from bovine bone having a structure very similar to that of the human bone . also , it has excellent osteoconductivity , and is free of prion , and thus it does not have the risk of infection with bovine spongiform encephalopathy . according to the disclosed invention , the bone graft substitute having such advantages can be prepared in a simple manner . story_separator_special_tag human ( hu ) familial prion diseases are associated with about 40 point mutations of the gene coding for the prion protein ( prp ) . most of the variants associated with these mutations are located in the globular domain of the protein . we performed 50 ns of molecular dynamics for each of these mutants to investigate their structure in aqueous solution . overall , 1.6 μs of molecular dynamics data is presented . the calculations are based on the amber ( parm99 ) force field , which has been shown to reproduce very accurately the structural features of the huprp wild type and a few variants for which experimental structural information is available . the variants present structural determinants different from those of wild-type huprp and the protective mutation huprp ( e219k-129m ) . these include the loss of salt bridges in the α2-α3 regions and the loss of π-stacking interactions in the β2-α2 loop . in addition , in the majority of the mutants , the α3 helix is more flexible and y169 is more solvent exposed . the presence of similar traits in this large spectrum of mutations hints at a role of these fingerprints story_separator_special_tag a central theme in prion protein research is the detection of the process that underlies the conformational transition from the normal cellular prion form ( prpc ) to its pathogenic isoform ( prpsc ) .
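the loss of salt bridges reported in such simulations is typically diagnosed by tracking donor-acceptor distances along the trajectory . a minimal sketch with mdanalysis ; the topology/trajectory file names and the residue numbering are hypothetical :

```python
import numpy as np
import MDAnalysis as mda

# hypothetical input files from a prion protein simulation
u = mda.Universe("huprp.prmtop", "huprp.nc")

# e.g. an aspartate carboxylate paired with an arginine guanidinium
acid = u.select_atoms("resid 177 and name OD1 OD2")
base = u.select_atoms("resid 163 and name NE NH1 NH2")

dists = []
for ts in u.trajectory:
    # minimum heavy-atom distance between the two charged groups
    d = np.linalg.norm(
        acid.positions[:, None, :] - base.positions[None, :, :], axis=-1
    ).min()
    dists.append(d)

occupancy = np.mean(np.array(dists) < 3.5)  # 3.5 angstrom contact criterion
print(f"salt bridge formed in {occupancy:.0%} of frames")
```

a mutant in which this occupancy collapses relative to wild type is the kind of fingerprint the study above associates with disease-linked variants .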
although the three-dimensional structures of monomeric and dimeric human prion protein ( huprp ) have been revealed by nmr spectroscopy and x-ray crystallography , the process underlying the conformational change from prpc to prpsc and the dynamics and functions of prpc remain unknown . the dimeric form is thought to play an important role in the conformational transition . in this study , we performed molecular dynamics ( md ) simulations on monomeric and dimeric huprp at 300 k and 500 k for 10 ns to investigate the differences in the properties of the monomer and the dimer from the perspective of dynamic and structural behaviors . simulations were also undertaken with asp178asn and acidic ph , both known disease-associated factors . our results indicate that the dynamics of the dimer and monomer were similar ( e.g. , denaturation of helices and elongation of the β-sheet ) . however , additional secondary structure elements formed in the dimer might result in showing story_separator_special_tag transmissible spongiform encephalopathies are fatal neurodegenerative diseases attributed to misfolding of the cellular prion protein , prpc , into a β-sheet-rich , aggregated isoform , prpsc . we previously found that expression of mouse prp with the two amino acid substitutions s170n and n174t , which result in high structural order of the β2-α2 loop in the nmr structure at ph 4.5 and 20 °c , caused transmissible de novo prion disease in transgenic mice . here we report that expression of mouse prp with the single-residue substitution d167s , which also results in a structurally well-ordered β2-α2 loop at 20 °c , elicits spontaneous prp aggregation in vivo . transgenic mice expressing prp d167s developed a progressive encephalopathy characterized by abundant prp plaque formation , spongiform change , and gliosis . these results add to the evidence that the β2-α2 loop has an important role in intermolecular interactions , including that it may be a key determinant of prion protein aggregation . story_separator_special_tag most transmissible spongiform encephalopathies arise either spontaneously or by infection . mutations of prnp , which encodes the prion protein , prp , segregate with phenotypically similar diseases . here we report that moderate overexpression in transgenic mice of mprp ( 170n,174t ) , a mouse prp with two point mutations that subtly affect the structure of its globular domain , causes a fully penetrant lethal spongiform encephalopathy with cerebral prp plaques . this genetic disease was reproduced with 100 % attack rate by intracerebral inoculation of brain homogenate to tga20 mice overexpressing wt prp , and from the latter to wt mice , but not to prp-deficient mice . upon successive transmissions , the incubation periods decreased and prp became more protease-resistant , indicating the presence of a strain barrier that was gradually overcome by repeated passaging . this shows that expression of a subtly altered prion protein , with known 3d structure , efficiently generates a prion disease . story_separator_special_tag muscular contraction dynamics depends on active and passive muscle properties ( e.g. , the force-velocity relation ) as well as on the three-dimensional ( 3d ) muscle structure ( e.g. , the muscle fascicle architecture and aponeurosis dimensions ) . much is known about active muscle force generation and the muscle architecture at a particular age ( mostly for adult specimens ) , but less is known about changes in muscle structure during growth .
the present study analyzed growth-related changes in the muscle structure of rabbit gastrocnemius lateralis ( gl ) , gastrocnemius medialis ( gm ) , flexor digitorum longus ( fdl ) , and tibialis anterior ( ta ) . changes in tendon length , muscle belly dimensions ( length , width , thickness ) , as well as aponeurosis length , width , and area were determined using 55 rabbits between 18 and 108 days old . additionally , the 3d muscle fascicle architecture of five rabbits of different ages ( 21 , 37 , 50 , 70 , 100 days ) was determined using a manual digitizer . we found an almost linear increase over time in most of the geometrical parameters observed . story_separator_special_tag research in the past 15 years has provided a wealth of new information on the tse diseases . particularly noteworthy has been the identification of prp as a factor in disease susceptibility , the species barrier , and many aspects of disease pathogenesis . further understanding of the structure of prp-res and of amyloids in similar diseases , such as alzheimer 's disease , may provide clues as to the differences and similarities among these protein folding diseases . the problem of the nature of the tse agents remains an enigma . proof of the protein-only hypothesis may require generation of biologically active transmissible agent in a cell-free environment where a virus can not replicate . conversely , proof of a viral etiology will require identification and isolation of a candidate virus . future efforts should not neglect this fascinating and important area . story_separator_special_tag prion diseases are fatal transmissible neurodegenerative diseases that result from structural conversion of the prion protein into a disease-associated isoform . the prion protein contains a single disulfide bond . our analysis of all nmr structures of the prion protein ( total of 440 structures over nine species ) containing an explicit disulfide bond reveals that the bond exists predominantly in a stable low-energy state , but can also adopt a high-energy configuration . the side chains of two tyrosine residues and one phenylalanine residue control access of solvent to the disulfide bond . notably , the side chains rotate away from the disulfide bond in the high-energy state , exposing the disulfide bond to solvent . the importance of these aromatic residues for protein function was analysed by mutating them to alanine residues and analysing the properties of the mutant proteins using biophysical and cell biological approaches . whereas the mutant protein behaved similarly to wild-type prion protein in recombinant systems , the mutants were retained in the endoplasmic reticulum of mammalian cells and degraded by the proteasomal system . the cellular behaviour of the aromatic residue mutants was similar to the cellular behaviour of a disulfide bond mutant story_separator_special_tag bovine spongiform encephalopathy ( bse ) prions were responsible for an unforeseen epizootic in cattle which had a vast social , economic , and public health impact . this was primarily because bse prions were found to be transmissible to humans . other species were also susceptible to bse either by natural infection ( e.g . , felids , caprids ) or in experimental settings ( e.g . , sheep , mice ) . however , certain species closely related to humans , such as canids and leporids , were apparently resistant to bse .
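the low- and high-energy disulfide configurations described above are usually distinguished by the chi3 dihedral about the s-s bond , which sits near +90 or -90 degrees in relaxed disulfides . a hedged biopython sketch for measuring it from a deposited structure ( the file name and residue numbers are hypothetical ) :

```python
import math
from Bio.PDB import PDBParser
from Bio.PDB.vectors import calc_dihedral

parser = PDBParser(QUIET=True)
structure = parser.get_structure("prp", "prp_model.pdb")  # hypothetical file
chain = structure[0]["A"]

# the single prp disulfide, e.g. cys179-cys214 in human numbering
cys1, cys2 = chain[179], chain[214]
chi3 = calc_dihedral(
    cys1["CB"].get_vector(), cys1["SG"].get_vector(),
    cys2["SG"].get_vector(), cys2["CB"].get_vector(),
)
print(f"chi3 = {math.degrees(chi3):.1f} degrees")
```

scanning this angle across the 440 nmr models mentioned above is one way to separate the predominant low-energy state from the solvent-exposed high-energy configuration .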
in vitro prion amplification techniques ( sapmca ) were used to successfully misfold the cellular prion protein ( prpc ) of these allegedly resistant species into a bse-type prion protein . the biochemical and biological properties of the new prions generated in vitro after seeding rabbit and dog brain homogenates with classical bse were studied . pathobiological features of the resultant prion strains were determined after their inoculation into transgenic mice expressing bovine and human prpc . strain characteristics of the in vitro-adapted rabbit and dog bse agent remained invariable with respect to the original cattle bse prion , suggesting that the naturally low susceptibility of story_separator_special_tag biophysical responses of proteins to stress : much recent work has focused on liquid-liquid phase separation as a cellular response to changing physicochemical conditions . because phase separation responds critically to small changes in conditions such as ph , temperature , or salt , it is in principle an ideal way for a cell to measure and respond to changes in the environment . small ph changes could , for instance , induce phase separation of compartments that store , protect , or inactivate proteins . franzmann et al . used the yeast translation termination factor sup35 as a model for a phase separation-induced stress response . lowering the ph induced liquid-liquid phase separation of sup35 . the resulting liquid compartments subsequently hardened into gels , which sequestered the termination factor . raising the ph triggered dissolution of the gels , concomitant with translation restart . protecting sup35 in gels could provide a fitness advantage to recovering yeast cells that must restart the translation machinery after stress . the prion domain of sup35 is a ph sensor that promotes stress survival by phase separation . the formation of dynamic , membraneless compartments using story_separator_special_tag background the conformational conversion of the host-derived cellular prion protein ( prpc ) into the disease-associated scrapie isoform ( prpsc ) is responsible for the pathogenesis of transmissible spongiform encephalopathies ( tses ) . various single-point mutations in prpcs could cause structural changes and thereby distinctly influence the conformational conversion . elucidation of the differences between the wild-type rabbit prpc ( raprpc ) and various mutants would be of great help to understand the ability of raprpc to be resistant to tse agents . methodology/principal findings we determined the solution structure of the i214v mutant of raprpc ( 91-228 ) and detected the backbone dynamics of its structured c-terminal domain ( 121-228 ) . the i214v mutant displays a visible shift of surface charge distribution that may have a potential effect on the binding specificity and affinity with other chaperones . the number of hydrogen bonds declines dramatically . urea-induced transition experiments reveal an obvious decrease in the conformational stability . furthermore , the nmr dynamics analysis discloses a significant increase in the backbone flexibility on the pico- to nanosecond time scale , indicative of a lower energy barrier for structural rearrangement .
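urea-induced transitions like the one above are conventionally analyzed with a two-state linear extrapolation model , delta_g ( [urea] ) = delta_g_h2o - m * [urea] . a sketch of extracting the zero-denaturant stability from a normalized unfolding curve ( the data here are simulated , not the published measurements ) :

```python
import numpy as np
from scipy.optimize import curve_fit

R, T = 1.987e-3, 298.15  # gas constant in kcal/(mol K), temperature in K

def frac_unfolded(c, dg_h2o, m):
    """two-state linear extrapolation model for chemical denaturation."""
    dg = dg_h2o - m * c            # stability (kcal/mol) at denaturant conc. c (M)
    k_eq = np.exp(-dg / (R * T))   # unfolding equilibrium constant
    return k_eq / (1.0 + k_eq)

urea = np.linspace(0, 8, 17)                        # M (simulated titration)
fu = frac_unfolded(urea, 4.0, 1.0)
fu += np.random.normal(scale=0.02, size=urea.size)

(dg_h2o, m), _ = curve_fit(frac_unfolded, urea, fu, p0=[3.0, 0.8])
print(f"delta_g_h2o = {dg_h2o:.1f} kcal/mol, m-value = {m:.2f} kcal/(mol M)")
```

a destabilized mutant such as the one described above would show up as a lower delta_g_h2o than wild type under this model .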
conclusions/significance our results suggest that both story_separator_special_tag the nuclear magnetic resonance structure of the globular domain with residues 121-230 of a variant human prion protein with two disulfide bonds , hprp ( m166c/e221c ) , shows the same global fold as wild-type hprp ( 121-230 ) . it contains three alpha-helices of residues 144-154 , 173-194 and 200-228 , an anti-parallel beta-sheet of residues 128-131 and 161-164 , and the disulfides cys166-cys221 and cys179-cys214 . the engineered extra disulfide bond in the presumed `` protein x '' -binding site is accommodated with slight , strictly localized conformational changes . high compatibility of hprp with insertion of a second disulfide bridge in the protein x epitope was further substantiated by model calculations with additional variant structures . the ease with which the hprp structure can accommodate a variety of locations for a second disulfide bond within the presumed protein x-binding epitope suggests a functional role for the extensive perturbation by a natural second disulfide bond of the corresponding region in the human doppel protein . story_separator_special_tag background : prion diseases are fatal and infectious neurodegenerative diseases affecting humans and animals . rabbits are one of the few mammalian species reported to be resistant to infection from prion diseases isolated from other species ( i. vorberg et al . , journal of virology 77 ( 3 ) ( 2003 ) 2003-2009 ) . thus the study of rabbit prion protein structure to obtain insight into the immunity of rabbits to prion diseases is very important . findings : the paper is a straightforward molecular dynamics simulation study of wild-type rabbit prion protein ( monomer cellular form ) , which apparently resists the formation of the scrapie form . the comparison analyses with human and mouse prion proteins done so far show that the rabbit prion protein has a stable structure . the main point is that the enhanced stability of the c-terminal ordered region , especially helix 2 , through the d177-r163 salt-bridge formation renders the rabbit prion protein stable . the salt bridge d201-r155 linking helices 3 and 1 also contributes to the structural stability of rabbit prion protein . the hydrogen bond h186-r155 partially contributes to the structural stability of rabbit prion protein story_separator_special_tag prion diseases such as creutzfeldt-jakob disease , variant creutzfeldt-jakob disease , gerstmann-sträussler-scheinker syndrome , fatal familial insomnia and kuru in humans , scrapie in sheep , bovine spongiform encephalopathy ( or mad-cow disease ) in cattle , and chronic wasting disease in deer and elk are invariably fatal and highly infectious neurodegenerative diseases affecting humans and animals . however , to date there are no effective therapeutic approaches to treat these prion diseases . in 2008 , canine mammals including dogs ( canis familiaris ) were for the first time reported to be resistant to prion diseases ( vaccine 26 : 2601-2614 ( 2008 ) ) . thus , it is well worth studying the molecular structures of dog prion protein to obtain insights into the immunity of dogs to prion diseases . this paper studies the molecular structural dynamics of wild-type dog prion protein . the comparison analyses with rabbit prion protein show that the dog prion protein has stable molecular structures .
story_separator_special_tag prion diseases are serious neurodegenerative diseases that affect humans and animals . unlike many other neurodegenerative diseases associated with amyloid , prion diseases can be highly infectious . prion diseases occur in many species . in humans , prion diseases include the fatal neurodegenerative diseases creutzfeldt-jakob disease ( cjd ) , fatal familial insomnia ( ffi ) , gerstmann-sträussler-scheinker syndrome ( gss ) , and kuru . in animals , prion diseases include bovine spongiform encephalopathy ( bse , or 'mad-cow ' disease ) in cattle , chronic wasting disease ( cwd ) in deer and elk , and scrapie in sheep and goats . more seriously , transmission of prion diseases across the species barrier to other species , including humans , has caused major public health concern worldwide : for example , bse in europe , cwd in north america , and variant cjd ( vcjd ) in young people of the uk . fortunately , it has been discovered that the hydrophobic region of prion proteins ( prp ) controls the formation of diseased prions ( prp story_separator_special_tag part i : modelling and computer simulations . the physical chemistry of specific recognition j. janin . approaches to protein-ligand binding from computer simulations w.l . jorgensen , et al . dynamics of biomolecules : simulation versus x-ray and far-infrared experiments s. hery , et al . semiempirical and ab initio modeling of chemical processes : from aqueous solution to enzymes r.p . muller , et al . professional gambling r. rodriguez , g. vriend . molecular modeling of globular proteins : strategy 1d-3d : secondary structures and epitopes a.j.p . alix . physicochemical properties in vacuo and in solution of some molecules with biological significance from density functional computations t. marino , et al . gmmx conformation searching and prediction of nmr proton-proton coupling constants f.l . tobiason , g. vergoten . part ii : proteins and lipids . biomolecular structure and dynamics : recent experimental and theoretical advances r. kaptein , et al . what drives associations of α-helical peptides in membrane domains of proteins ? role of hydrophobic interactions r.g . efremov , g. vergoten . infrared spectroscopic studies of membrane lipids j.l.r . arrondo , f.m . goni . time-resolved infrared spectroscopy of biomolecules h. story_separator_special_tag prion diseases are infectious fatal neurodegenerative diseases including creutzfeldt-jakob disease in humans and bovine spongiform encephalopathy in cattle . the misfolding and conversion of cellular prp in such mammals into pathogenic prp is believed to be the key procedure . rabbits are among the few mammalian species that exhibit resistance to prion diseases , but little is known about the molecular mechanism underlying such resistance . here , we report that the crowding agents ficoll 70 and dextran 70 have different effects on fibrillization of the recombinant full-length prps from different species : although these agents dramatically promote fibril formation of the proteins from human and cow , they significantly inhibit fibrillization of the rabbit protein by stabilizing its native state . we also find that fibrils formed by the rabbit protein contain less β-sheet structure and more α-helix structure than those formed by the proteins from human and cow .
in addition , amyloid fibrils formed by the rabbit protein do not generate a proteinase k-resistant fragment of 15-16 kda , but those formed by the proteins from human and cow generate such proteinase k-resistant fragments . together , these results suggest that the strong inhibition of fibrillization of the
mobile computing continuously evolves through the sustained effort of many researchers . it seamlessly augments users ' cognitive abilities via compute-intensive capabilities such as speech recognition and natural language processing . by thus empowering mobile users , we could transform many areas of human activity . this article discusses the technical obstacles to these transformations and proposes a new architecture for overcoming them . in this architecture , a mobile user exploits virtual machine ( vm ) technology to rapidly instantiate customized service software on a nearby cloudlet and then uses that service over a wireless lan ; the mobile device typically functions as a thin client with respect to the service . a cloudlet is a trusted , resource-rich computer or cluster of computers that 's well-connected to the internet and available for use by nearby mobile devices . our strategy of leveraging transiently customized proximate infrastructure as a mobile device moves with its user through the physical world is called cloudlet-based , resource-rich , mobile computing . crisp interactive response , which is essential for seamless augmentation of human cognition , is easily achieved in this architecture because of the cloudlet 's physical proximity and one-hop story_separator_special_tag we consider handheld computing devices which are connected to a server ( or a powerful desktop machine ) via a wireless lan . on such devices , it is often possible to save energy on the handheld by offloading its computation to the server . in this work , based on profiling information on computation time and data sharing at the level of procedure calls , we construct a cost graph for a given application program . we then apply a partition scheme to statically divide the program into server tasks and client tasks such that the energy consumed by the program is minimized ( a sketch of this idea follows below ) . experiments are performed on a suite of multimedia benchmarks . results show considerable energy saving for several programs through offloading . story_separator_special_tag the notion of the cloudlet is gaining wide acceptance in both academia and industry . however , the development of the cloudlet faces a classic bootstrapping problem . it needs practical applications to incentivize cloudlet deployment , while developers can not rely heavily on a cloudlet infrastructure until cloudlets are widely deployed . to provide a systematic way to expedite cloudlet deployment , i implemented openstack++ , which extends openstack , an open source ecosystem for cloud computing . in this work , i present design decisions to efficiently integrate cloudlets with openstack and show implementation details for vm provisioning and handoff . finally , i explain how i resolved practical challenges for the cloudlet porting . story_separator_special_tag cloud offload is an important technique in mobile computing . vm-based cloudlets have been proposed as offload sites for the resource-intensive and latency-sensitive computations typically associated with mobile multimedia applications .
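the cost-graph partitioning described above can be phrased as a minimum cut between a 'client' terminal and a 'server' terminal : per-procedure capacities encode local versus remote execution energy , and data edges encode the cost of shipping shared state across the split . a toy sketch with networkx ( all procedures and numbers are hypothetical , not the cited scheme 's profiles ) :

```python
import networkx as nx

# energy to run each procedure on the handheld vs. on the server;
# render drives the display, so it must stay local (infinite remote cost)
local_cost = {"parse": 2.0, "decode": 8.0, "render": 3.0}
remote_cost = {"parse": 1.0, "decode": 0.5, "render": float("inf")}
# energy to transfer shared data if the two procedures end up split
transfer = [("parse", "decode", 1.0), ("decode", "render", 4.0)]

g = nx.DiGraph()
for p in local_cost:
    g.add_edge("client", p, capacity=remote_cost[p])  # paid if p is offloaded
    g.add_edge(p, "server", capacity=local_cost[p])   # paid if p stays local
for u, v, w in transfer:
    g.add_edge(u, v, capacity=w)
    g.add_edge(v, u, capacity=w)

energy, (client_side, server_side) = nx.minimum_cut(g, "client", "server")
print(f"minimum total energy = {energy}")
print("run on handheld:", sorted(client_side - {"client"}))
print("offload to server:", sorted(server_side - {"server"}))
```

with these numbers the cut offloads parse and decode but keeps render on the device , which is exactly the kind of static client/server split the scheme above computes .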
since cloud offload relies on precisely-configured back-end software , it is difficult to support at global scale across cloudlets in multiple domains . to address this problem , we describe just-in-time ( jit ) provisioning of cloudlets under the control of an associated mobile device . using a suite of five representative mobile applications , we demonstrate a prototype system that is capable of provisioning a cloudlet with a non-trivial vm image in 10 seconds . this speed is achieved through dynamic vm synthesis and a series of optimizations to aggressively reduce transfer costs and startup latency . story_separator_special_tag we describe the architecture and prototype implementation of an assistive system based on google glass devices for users in cognitive decline . it combines the first-person image capture and sensing capabilities of glass with remote processing to perform real-time scene interpretation . the system architecture is multi-tiered . it offers tight end-to-end latency bounds on compute-intensive operations , while addressing concerns such as limited battery capacity and limited processing capability of wearable devices . the system gracefully degrades services in the face of network failures and unavailability of distant architectural tiers . story_separator_special_tag high-data-rate sensors , such as video cameras , are becoming ubiquitous in the internet of things . this article describes gigasight , an internet-scale repository of crowd-sourced video content , with strong enforcement of privacy preferences and access controls . the gigasight architecture is a federated system of vm-based cloudlets that perform video analytics at the edge of the internet , thus reducing the demand for ingress bandwidth into the cloud . denaturing , which is an owner-specific reduction in fidelity of video content to preserve privacy , is one form of analytics on cloudlets . content-based indexing for search is another form of cloudlet-based analytics . story_separator_special_tag the convergence of mobile computing and cloud computing is predicated on a reliable , high-bandwidth end-to-end network . this basic requirement is hard to guarantee in hostile environments such as military operations and disaster recovery . in this article , the authors examine how vm-based cloudlets that are located in close proximity to associated mobile devices can overcome this challenge . story_separator_special_tag path computing is a new paradigm that generalizes the edge computing vision into a multi-tier cloud architecture deployed over the geographic span of the network . path computing supports scalable and localized processing by providing storage and computation along a succession of datacenters of increasing sizes , positioned between the client device and the traditional wide-area cloud data-center . cloudpath is a platform that implements the path computing paradigm . cloudpath consists of an execution environment that enables the dynamic installation of light-weight stateless event handlers , and a distributed eventually consistent storage system that replicates application data on-demand . cloudpath handlers are small , allowing them to be rapidly instantiated on demand on any server that runs the cloudpath execution framework .
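dynamic vm synthesis , as used above , ships only a compressed overlay : the delta between the customized vm image and a widely cached base image . the core idea can be sketched as chunk-level deduplication ( the chunk size , encoding , and compression are illustrative assumptions , not the actual cloudlet implementation ) :

```python
import hashlib
import zlib

CHUNK = 4096  # bytes; illustrative granularity

def chunks(data: bytes):
    return [data[i:i + CHUNK] for i in range(0, len(data), CHUNK)]

def make_overlay(base_img: bytes, custom_img: bytes) -> bytes:
    """keep only the chunks of the customized image that differ from the base."""
    base_hashes = {i: hashlib.sha256(c).digest()
                   for i, c in enumerate(chunks(base_img))}
    delta = [(i, c) for i, c in enumerate(chunks(custom_img))
             if base_hashes.get(i) != hashlib.sha256(c).digest()]
    blob = b"".join(i.to_bytes(8, "big") + len(c).to_bytes(4, "big") + c
                    for i, c in delta)
    return zlib.compress(blob)  # the overlay shipped to the cloudlet
```

at the cloudlet , the overlay is decompressed and its chunks patched into the cached base image to reconstruct the launch vm , which is why provisioning can finish in seconds rather than the minutes a full image transfer would take .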
in turn , cloudpath automatically migrates application data across the multiple datacenter tiers to optimize access latency and reduce bandwidth consumption . story_separator_special_tag end user experiences on mobile devices with their rich sets of sensors are constrained by limited device battery lives and restricted form factors , as well as by the 'scope ' of the data available locally . the 'personal cloud ' distributed software abstractions address these issues by enhancing the capabilities of a mobile device via seamless use of both nearby and remote cloud resources . in contrast to vendor-specific , middleware-based cloud solutions , personal cloud instances are created at hypervisor level , to create for each end user the federation of networked resources best suited for the current environment and use . specifically , the cirrostratus extensions of the xen hypervisor can federate a user 's networked resources to establish a personal execution environment , governed by policies that go beyond evaluating network connectivity to also consider device ownership and access rights , the latter managed in a secure fashion via standard social network services . experimental evaluations with both linux- and android-based devices , and using facebook as the sns , show the approach capable of substantially augmenting a device 's innate capabilities , improving application performance and the effective functionality seen by end users . story_separator_special_tag there is an increasing number of network-enabled computing devices in homes and offices , driven by continued improvements in device capabilities and network connectivity . by exploiting the virtualization technologies that have begun to pervade even the mobile domain , these devices ' hardware components -- such as displays , input devices , disks , or processors -- can be decoupled from the physical platforms on which they reside to form a resource pool or device cloud . by drawing on the composite resources of device clouds , applications can leverage the heterogeneity present in the cloud to exploit hardware/device differences in terms of power consumption , computational speeds , display sizes , or the presence of certain accelerators , and can take advantage of software diversity in terms of the different operating environments and applications that efficiently operate on individual devices . this paper implements and evaluates the concept of device clouds , in which virtual execution platforms dynamically composed from sets of devices are built for applications , using automated methods that are based on simple policies . experimental results identify the basic overheads associated with device clouds and their use , and demonstrate the advantages of dynamically story_separator_special_tag we introduce paradrop , a specific edge computing platform that provides computing and storage resources at the `` extreme '' edge of the network allowing third-party developers to flexibly create new types of services . this extreme edge of the network is the wifi access point ( ap ) or the wireless gateway through which all end-device traffic ( personal devices , sensors , etc . ) passes . paradrop 's focus on wifi aps also stems from the fact that the wifi ap has unique contextual knowledge of its end-devices ( e.g . , proximity , channel characteristics ) that is lost as we get deeper into the network .
while different variations and implementations of edge computing platforms have been created over the last decade , paradrop focuses on specific design issues around how to structure an architecture , a programming interface , and an orchestration framework through which such edge computing services can be dynamically created , installed , and revoked . paradrop consists of the following three main components : a flexible hosting substrate in the wifi aps that supports multi-tenancy , a cloud-based backend through which such computations are orchestrated across many paradrop aps , and story_separator_special_tag in stream processing , data is streamed as a continuous flow of data items , which are generated from multiple sources and geographical locations . the common approach for stream processing is to transfer raw data streams to a central data center , which entails communication over the wide-area network ( wan ) . however , this approach is inefficient and falls short for two main reasons : ( i ) the burst in the amount of data generated at the network edge by an increasing number of connected devices , and ( ii ) the emergence of applications with predictable and low latency requirements . in this paper , we propose spanedge , a novel approach that unifies stream processing across a geo-distributed infrastructure , including the central and near-the-edge data centers . spanedge reduces or eliminates the latency incurred by wan links by distributing stream processing applications across the central and the near-the-edge data centers . furthermore , spanedge provides a programming environment , which allows programmers to specify parts of their applications that need to be close to the data source . programmers can develop a stream processing application , regardless of the number of data sources and their story_separator_special_tag we are entering a new era of computing , characterized by the need to handle over one zettabyte ( 10^21 bytes , or zb ) of data . the world 's capacities to sense , transmit , store , and process information need to grow by three orders of magnitude , while maintaining an energy consumption level similar to that of the year 2010 . in other words , we need to produce a thousand-fold improvement in performance per watt . to face this challenge , in 2012 the chinese academy of sciences launched a 10-year strategic priority research initiative called the next generation information and communication technology initiative ( the nict initiative ) . a research thrust of the nict program is the cloud-sea computing systems project . the main idea is to augment conventional cloud computing by cooperation and integration of the cloud-side systems and the sea-side systems , where the `` sea-side '' refers to an augmented client side consisting of human-facing and physical-world-facing devices and subsystems . the cloud-sea computing systems project consists of four research tasks : a new computing model called rest 2.0 , which extends the rest ( representational state transfer ) architectural style story_separator_special_tag recognition- and perception-based mobile applications , such as image recognition , are on the rise . these applications recognize the user 's surroundings and augment them with information and/or media . these applications are latency-sensitive . they have a soft real-time nature : late results are potentially meaningless . on the one hand , given the compute-intensive nature of the tasks performed by such applications , execution is typically offloaded to the cloud .
on the other hand , offloading such applications to the cloud incurs network latency , which can increase the user-perceived latency . consequently , edge computing has been proposed to let devices offload intensive tasks to edge servers instead of the cloud , to reduce latency . in this paper , we propose a different model for using edge servers . we propose to use the edge as a specialized cache for recognition applications and formulate the expected latency for such a cache ( a sketch of this formulation follows below ) . we show that using an edge server like a typical web cache , for recognition applications , can lead to higher latencies . we propose cachier , a system that uses the caching model along with novel optimizations to minimize latency by story_separator_special_tag image recognition applications are on the rise . increasingly , applications on edge devices such as mobile smartphones , drones , and cars are relying on recognition techniques to provide interactive and intelligent functionality . given the complexity of these techniques and the resource-constrained nature of edge devices , applications rely on offloading compute-intensive recognition tasks to the cloud . this has also led to the rise of cloud-based recognition services . this involves sending captured images to remote servers across the internet , which leads to slower responses . with the rising numbers of edge devices , both the network and such centralized cloud-based solutions are likely to be under stress , leading to even slower responses . to reduce the recognition latency and provide better scalability to the cloud-based solutions , we propose precog . precog employs selective computation on the devices to reduce the need to offload images to the cloud . in coordination with edge servers , it uses prediction to prefetch parts of the trained classifiers used for recognition onto the devices , and uses these smaller models to accelerate recognition on devices . our evaluation shows that precog story_separator_special_tag allocating and managing resources in the internet of things ( iot ) presents many new challenges , including massive scale , new security issues , and new resource types that become critical in making orchestration decisions . in this paper , we investigate whether clouds of edge devices can be managed as infrastructure-as-a-service clouds . we describe our approach , focusstack , which uses location-based situational awareness , implemented over a multi-tier geographic addressing network , to solve the problems of inefficient awareness messaging and mixed-initiative control that iot device clouds raise for traditional cloud management tools . we provide an extended case study of a shared video application as an initial demonstration and evaluation of the work and show that we effectively solve the two key problems above . story_separator_special_tag this paper argues for the utility of back-end driven onloading to the edge as a way to address bandwidth use and latency challenges for future device-cloud interactions . supporting such edge functions ( efs ) requires solutions that can provide ( i ) fast and scalable ef provisioning and ( ii ) strong guarantees for the integrity of the ef execution and confidentiality of the state stored at the edge .
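the expected-latency formulation mentioned above for an edge recognition cache can be written as e [ l ] = l_edge + ( 1 - p_hit ) * l_cloud : every request pays the edge lookup , and misses additionally pay the round trip to the cloud . a toy sketch ( all timings are hypothetical ) :

```python
def expected_latency(p_hit, l_edge, l_cloud):
    """expected response time of an edge recognition cache:
    every request pays the edge lookup; misses also pay the cloud trip."""
    return l_edge + (1.0 - p_hit) * l_cloud

# hypothetical timings: 30 ms edge lookup/recognition, 200 ms cloud recognition
for p_hit in (0.0, 0.5, 0.9):
    print(f"hit rate {p_hit:.0%}: {expected_latency(p_hit, 30.0, 200.0):.0f} ms")

# the edge cache pays off whenever l_edge + (1 - p_hit) * l_cloud < l_cloud,
# i.e. whenever p_hit > l_edge / l_cloud
```

this is also why a naive web-cache policy can hurt : if p_hit is too low , every request still pays l_edge on top of the cloud trip , ending up slower than going to the cloud directly .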
in response to these goals , we ( i ) present a detailed design space exploration of the current technologies that can be leveraged in the design of edge function platforms ( efps ) , ( ii ) develop a solution to address security concerns of efs that leverages emerging hardware support for os-agnostic trusted execution environments such as intel sgx enclaves , and ( iii ) propose and evaluate airbox , a platform for fast , scalable and secure onloading of edge functions . story_separator_special_tag cloud computing , arguably , has become the de facto computing platform for big data processing by researchers and practitioners over the last decade , and has enabled different stakeholders to discover valuable information from large-scale data . at the same time , in the past decade , we have witnessed the fast-growing deployment of billions of sensors and actuators in multiple application domains , such as transportation , manufacturing , connected/wearable health care , and smart cities , stimulating the emergence of edge computing ( a.k.a . fog computing , cloudlet ) . however , data , as the core of both cloud computing and edge computing , is still owned by each stakeholder and rarely shared due to privacy concerns and the formidable cost of data transportation , which significantly limits internet of things ( iot ) applications that need data input from multiple stakeholders ( e.g . , video analytics collects data from cameras owned by police departments , transportation departments , retail stores , etc . ) . in this paper , we envision that in the era of iot the demand for distributed big data sharing and processing applications will dramatically increase since the data story_separator_special_tag in this paper , we envision future connected and autonomous vehicles ( cavs ) as sophisticated computers on wheels , with substantial on-board sensors as data sources and a variety of services running on top to support autonomous driving or other functions . in general , these services are computationally expensive , especially machine learning-based applications ( e.g . , cnn-based object detection ) . nevertheless , the on-board computation unit possesses limited compute resources , making it challenging to deploy these computation-intensive services on the vehicle . in contrast , the cloud-based architecture , though conceptually unconstrained in resources , suffers from unexpectedly long latency attributable to large-scale internet data transmission , adversely affecting the services ' real-time performance , quality of service , and user experience . to address this dilemma , inspired by the promising edge computing paradigm , we propose to build an open vehicular data analytics platform ( openvdap ) for cavs , which is a full-stack edge-based platform including an on-board computing/communication unit , an isolation-supported and security- and privacy-preserving vehicle operating system , an edge-aware application library , as well as an optimal workload story_separator_special_tag ridesharing services , such as uber and didi , are enjoying great popularity ; however , a big challenge remains in guaranteeing the safety of passengers and drivers . state-of-the-art work has primarily adopted the cloud model , where data collected through end devices on vehicles are uploaded to and processed in the cloud . however , data such as video can be too large to be uploaded onto the cloud in real time .
when a vehicle is moving , the network communication can become unstable , leading to high latency for data uploading . in addition , the cost of huge data transfer and storage is a big concern from a business point of view . as edge computing enables more powerful computing end devices , it is possible to design a latency-guaranteed framework to ensure in-vehicle safety . in this paper , we propose an edge-based attack detection system for ridesharing services , named safeshareride , which can detect dangerous events happening in the vehicle in near real time . safeshareride is implemented on both drivers ' and passengers ' smartphones . the detection of safeshareride consists of three stages : speech recognition , driving behavior detection , story_separator_special_tag smart home iot devices are becoming increasingly popular . modern programmable smart home hubs such as smartthings enable homeowners to manage devices in sophisticated ways to save energy , improve security , and provide convenience . unfortunately , many smart home systems contain vulnerabilities , potentially impacting home security and privacy . this paper presents vigilia , a system that shrinks the attack surface of smart home iot systems by restricting the network access of devices . as existing smart home systems are closed , we have created an open implementation of a similar programming and configuration model in vigilia and extended the execution environment to maximally restrict communications by instantiating device-based network permissions . we have implemented and compared vigilia with forefront iot-defense systems ; our results demonstrate that vigilia outperforms these systems and incurs negligible overhead . story_separator_special_tag the adoption of smart home devices is hindered today by the privacy concerns users have regarding their personal data . since these devices depend on remote service providers , users remain oblivious to how and when their data is disclosed and processed . in this paper we present homepad , a privacy-aware smart hub for home environments . our system aims to empower users with the ability to determine how applications can access and process sensitive data collected by smart devices ( e.g . , web cams ) and to prevent applications from executing unless they abide by the privacy restrictions specified by the users . to achieve this goal , homepad applications are implemented as directed graphs of elements , which consist of instances of functions that process data in isolation . by modeling elements and the flow graph using prolog rules , homepad allows for automatic verification of the application 's flow graph against user-defined privacy policies . homepad incurs a negligible performance overhead , requires a modest programming effort , and provides flexible policy support to address the privacy concerns most commonly expressed by potential smart device consumers . story_separator_special_tag along the trend pushing computation from the network core to the edge , where most of the data are generated , edge computing has shown its potential in reducing response time , lowering bandwidth usage , improving energy efficiency , and so on . at the same time , low-latency video analytics is becoming more and more important for applications in public safety , counter-terrorism , self-driving cars , vr/ar , etc .
as those tasks are either computation-intensive or bandwidth-hungry , edge computing fits in well here with its ability to flexibly utilize computation and bandwidth from and between each layer . in this paper , we present lavea , a system built on top of an edge computing platform , which offloads computation between clients and edge nodes and collaborates with nearby edge nodes to provide low-latency video analytics at places closer to the users . we have utilized an edge-first design and formulated an optimization problem for offloading task selection , and we prioritized offloading requests received at the edge node to minimize the response time . in the case of a saturating workload on the front edge node , we have designed and compared various task placement schemes that story_separator_special_tag organizations deploy a hierarchy of clusters - cameras , private clusters , public clouds - for analyzing live video feeds from their cameras . video analytics queries have many implementation options which impact their resource demands and accuracy of outputs . our objective is to select the `` query plan '' - implementations ( and their knobs ) - and place it across the hierarchy of clusters , and merge common components across queries to maximize the average query accuracy . this is a challenging task , because we have to consider multi-resource ( network and compute ) demands and constraints in the hierarchical cluster and search in an exponentially large search space for plans , placements , and merging . we propose videoedge , a system that introduces dominant demand to identify the best tradeoff between multiple resources and accuracy , and narrows the search space by identifying a `` pareto band '' of promising configurations . videoedge also balances the resource benefits and accuracy penalty of merging queries . deployment results show that videoedge improves accuracy by 25.4x and 5.4x compared to fair allocation of resources and a recent solution for video query planning ( videostorm ) , story_separator_special_tag the performance of mobile computing would be significantly improved by leveraging cloud computing and migrating mobile workloads for remote execution at the cloud . in this paper , to efficiently handle the peak load and satisfy the requirements of remote program execution , we propose to deploy cloud servers at the network edge and design the edge cloud as a tree hierarchy of geo-distributed servers , so as to efficiently utilize the cloud resources to serve the peak loads from mobile users . the hierarchical architecture of edge cloud enables aggregation of the peak loads across different tiers of cloud servers to maximize the amount of mobile workloads being served . to ensure efficient utilization of cloud resources , we further propose a workload placement algorithm that decides which edge cloud servers mobile programs are placed on and how much computational capacity is provisioned to execute each program ( a sketch of this kind of placement decision follows below ) . the performance of our proposed hierarchical edge cloud architecture on serving mobile workloads is evaluated by formal analysis , small-scale system experimentation , and large-scale trace-based simulations . story_separator_special_tag real-time video analytics on small autonomous drones poses several difficult challenges at the intersection of wireless bandwidth , processing capacity , energy consumption , result accuracy , and timeliness of results .
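placement decisions like those in lavea and the hierarchical edge cloud above reduce , in the simplest case , to picking the site that minimizes transfer plus queueing plus compute time . a toy greedy sketch ( the sites , capacities , and timings are hypothetical ) :

```python
from dataclasses import dataclass

@dataclass
class Site:
    name: str
    uplink_mbps: float  # bandwidth from the client up to this site
    queue_s: float      # current queueing delay at the site
    speedup: float      # compute speed relative to the client (1.0 = client)

def response_time(task_mbits: float, compute_s: float, site: Site) -> float:
    transfer = task_mbits / site.uplink_mbps
    return transfer + site.queue_s + compute_s / site.speedup

sites = [
    Site("client", float("inf"), 0.00, 1.0),   # no transfer, slow compute
    Site("edge",   50.0,         0.05, 4.0),   # one hop away, moderate compute
    Site("cloud",  10.0,         0.02, 20.0),  # fast compute, slow wan uplink
]

task_mbits, compute_s = 8.0, 1.2  # hypothetical video frame + analysis cost
best = min(sites, key=lambda s: response_time(task_mbits, compute_s, s))
print("place task on:", best.name)  # 'edge' for these numbers
```

systems like those above refine this with prioritized request queues and multi-resource constraints , but the transfer-queue-compute trade-off is the core of the decision .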
in response to these challenges , we describe four strategies to build an adaptive computer vision pipeline for search tasks in domains such as search-and-rescue , surveillance , and wildlife conservation . our experimental results show that a judicious combination of drone-based processing and edge-based processing can save substantial wireless bandwidth and thus improve scalability , without compromising result accuracy or result latency . story_separator_special_tag virtual reality ( vr ) fundamentally improves the user 's experience when interacting with the virtual world , and could revolutionize the design of many interactive systems . to provide vr from untethered mobile devices , a viable solution is to remotely render vr frames from the edge cloud , but this approach encounters challenges from the limited computation and communication capacities of the edge cloud when serving multiple mobile vr users at the same time . in this paper , we attribute these challenges to ignoring the redundancy across vr frames being rendered , and we aim to fundamentally remove this performance constraint on highly dynamic vr applications by adaptively reusing the redundant vr frames being rendered for different vr users . such redundancy in each frame is decided at run-time by the edge cloud , which is then able to memoize the previous results of vr frame rendering for future reuse by other users ( a sketch of this memoization follows below ) . after a vr frame is generated , the edge cloud further reuses its redundant pixels compared with other frames , and only transmits the distinct portion of this frame to mobile devices . we have implemented our design over android os story_separator_special_tag the idea of programmable networks has recently re-gained considerable momentum due to the emergence of the software-defined networking ( sdn ) paradigm . sdn , often referred to as a `` radical new idea in networking '' , promises to dramatically simplify network management and enable innovation through network programmability . this paper surveys the state-of-the-art in programmable networks with an emphasis on sdn . we provide a historic perspective of programmable networks from early ideas to recent developments . then we present the sdn architecture and the openflow standard in particular , discuss current alternatives for implementing and testing sdn-based protocols and services , examine current and future sdn applications , and explore promising research directions based on the sdn paradigm . story_separator_special_tag as mobile network users look forward to the connectivity speeds of 5g networks , service providers are facing challenges in complying with connectivity demands without substantial financial investments . network function virtualization ( nfv ) is introduced as a new methodology that offers a way out of this bottleneck . nfv is poised to change the core structure of telecommunications infrastructure to be more cost-efficient . in this article , we introduce an nfv framework , and discuss the challenges and requirements of its use in mobile networks . in particular , an nfv framework in the virtual environment is proposed . moreover , in order to reduce signaling traffic and achieve better performance , this article proposes a criterion to bundle multiple functions of a virtualized evolved packet core in a single physical device or a group of adjacent devices . the analysis shows that the proposed grouping can reduce the network control traffic by 70 percent .
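the frame-reuse idea in the vr work above amounts to memoizing rendered frames keyed by a quantized viewpoint , so that nearby poses , possibly from different users , hit the same entry . a toy sketch ( the quantization step and the renderer are hypothetical ) :

```python
from functools import lru_cache

QUANT = 0.05  # pose grid step; coarser means more reuse, more approximation

def render(qpose):
    # placeholder for the expensive gpu rendering of one frame
    return f"frame@{qpose}"

def quantize(pose):
    # snap an (x, y, z, yaw, pitch) pose onto a grid so nearby poses share a key
    return tuple(round(v / QUANT) for v in pose)

@lru_cache(maxsize=4096)
def rendered_frame(qpose):
    return render(qpose)  # executed once per grid cell, then served from cache

def serve(pose):
    return rendered_frame(quantize(pose))
```

on top of such memoization , the system above additionally diffs the served frame against what the client already has and transmits only the distinct pixels .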
story_separator_special_tag complexity and flexibility are the main problems facing future networks . to be able to answer these problems , a method called software defined networking ( sdn ) is being developed . the sdn concept is to separate the network 's control plane from the hardware 's forwarding plane . in this research , an sdn component , a white box switch , was built . to test its performance , rtt and throughput of several configurations were measured . the result is compared to a conventional switch , the cisco catalyst 2950 . the white box switch was built from a computer with a 2.50 ghz processor and 32 gbytes of memory . the results show that the performance of the cisco catalyst 2950 is superior to the white box switch because of its asic , which allows it to forward data in hardware . furthermore , the minimum specification of a white box switch can be calculated : a 1.225 ghz processor and 1 gbyte of memory . to get the best performance , a single-board computer can be used as the white box switch . a single-board computer contains all of its hardware on a single board and it will give the best performance of a story_separator_special_tag with the internet of everything ( ioe ) paradigm that gathers almost every object online , huge traffic workload , bandwidth , security , and latency issues remain a concern for iot users in today 's world . besides , the scalability requirements found in current iot data processing ( in the cloud ) can hardly support applications such as assisted living systems , big data analytic solutions , and smart embedded applications . this paper proposes an extended cloud iot model that optimizes bandwidth while allowing edge devices ( internet-connected objects/devices ) to smartly process data without relying on a cloud network . its integration with a massively scaled spine-leaf ( sl ) network topology is highlighted . this is contrasted with a legacy multitier layered architecture housing network services and routing policies . the perspective offered in this paper explains how low-latency and bandwidth-intensive applications can transfer data to the cloud ( and then back to the edge application ) without impacting qos performance . consequently , a spine-leaf fog computing network ( sl-fcn ) is presented for reducing latency and network congestion issues in a highly distributed and multilayer virtualized iot datacenter environment story_separator_special_tag lightweight virtualization technologies have revolutionized the world of software development by introducing flexibility and innovation to this domain . although the benefits introduced by these emerging solutions have been widely acknowledged in cloud computing , recent advances have led to the spread of such technologies in different contexts . as an example , the internet of things ( iot ) and mobile edge computing benefit from container virtualization by exploiting the possibility of using these technologies not only in data centers but also on devices , which are characterized by fewer computational resources , such as single-board computers . this has led to a growing trend to more efficiently redesign the critical components of iot/edge scenarios ( e.g. , gateways ) to enable the concept of device virtualization .
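the white box switch study above compares switches by measured rtt and throughput . a rough software-level stand-in for such an rtt benchmark , timing tcp handshakes from python ( the target host and port are placeholders , and handshake timing only approximates wire-level rtt ) :

```python
import socket
import statistics
import time

def tcp_rtt_samples(host, port, n=5):
    """Approximate round-trip time by timing TCP connection handshakes."""
    samples = []
    for _ in range(n):
        t0 = time.perf_counter()
        with socket.create_connection((host, port), timeout=2.0):
            pass                                     # handshake done
        samples.append((time.perf_counter() - t0) * 1000.0)  # ms
    return samples

# Placeholder target: point this at a host behind the switch under test.
rtts = tcp_rtt_samples("example.com", 80)
print(f"min={min(rtts):.2f} ms  median={statistics.median(rtts):.2f} ms")
```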
the possibility of efficiently deploying virtualized instances on single-board computers has already been addressed in recent studies ; however , these studies considered only a limited number of devices and omitted important performance metrics from their empirical assessments . this paper seeks to fill this gap and to provide insights for future deployments through a comprehensive performance evaluation that aims to show the strengths and weaknesses of several low-power devices when story_separator_special_tag cloud computing is today 's most prominent information and communications technology ( ict ) paradigm , directly or indirectly used by almost every online user . however , such great significance comes with the support of a great infrastructure that includes large data centers comprising thousands of server units and other supporting equipment . their share in power consumption accounts for between 1.1 % and 1.5 % of total electricity use worldwide and is projected to rise even more . such alarming numbers demand rethinking the energy efficiency of such infrastructures . however , before making any changes to infrastructure , an analysis of the current status is required . in this article , we perform a comprehensive analysis of an infrastructure supporting the cloud computing paradigm with regard to energy efficiency . first , we define a systematic approach for analyzing the energy efficiency of the most important data center domains , including server and network equipment , as well as cloud management systems and appliances consisting of software utilized by end users . second , we utilize this approach for analyzing available scientific and industrial literature on state-of-the-art practices in data centers and their equipment . finally , we story_separator_special_tag motivated by increased concern over energy consumption in modern data centers , we propose a new , distributed computing platform called nano data centers ( nada ) . nada uses isp-controlled home gateways to provide computing and storage services and adopts a managed peer-to-peer model to form a distributed data center infrastructure . to evaluate the potential for energy savings in the nada platform , we pick video-on-demand ( vod ) services . we develop an energy consumption model for vod in traditional and in nada data centers and evaluate this model using a large set of empirical vod access data . we find that even under the most pessimistic scenarios , nada saves at least 20 % to 30 % of the energy compared to traditional data centers . these savings stem from energy-preserving properties inherent to nada such as the reuse of already committed baseline power on underutilized gateways , the avoidance of cooling costs , and the reduction of network energy consumption as a result of demand and service co-localization in nada . story_separator_special_tag in this study , the authors focus on theoretical modelling of the fog computing architecture and compare its performance with the traditional cloud computing model . existing research works on fog computing have primarily focused on the principles and concepts of fog computing and its significance in the context of the internet of things ( iot ) . this work , one of the first attempts in its domain , proposes a mathematical formulation for this new computational paradigm by defining its individual components and presents a comparative study with cloud computing in terms of service latency and energy consumption .
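the nano data center abstract above rests on an energy accounting argument : gateways reuse already committed baseline power and avoid cooling . a toy version of that comparison , with all constants invented for illustration rather than taken from the paper :

```python
# Toy energy comparison: serving video-on-demand from a centralized
# data center versus from nano data centers (home gateways).
def dc_energy(hours_streamed, server_w=200.0, cooling_overhead=0.5,
              network_w=30.0):
    # The data center pays for servers, cooling (a PUE-like overhead),
    # and long-haul network transport.
    return hours_streamed * (server_w * (1 + cooling_overhead) + network_w)

def nada_energy(hours_streamed, gateway_marginal_w=15.0, network_w=10.0):
    # Gateways are already powered on (baseline power is sunk), so only
    # the marginal serving power and a shorter network path count.
    return hours_streamed * (gateway_marginal_w + network_w)

hours = 1000.0
dc, nada = dc_energy(hours), nada_energy(hours)
print(f"DC: {dc / 1000:.1f} kWh, NaDa: {nada / 1000:.1f} kWh, "
      f"saving: {100 * (1 - nada / dc):.0f}%")
```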
from the performance analysis , the work establishes fog computing , in collaboration with the traditional cloud computing platform , as an efficient green computing platform to support the demands of next generation iot applications . results show that for a scenario where 25 % of the iot applications demand real-time , low-latency services , the mean energy expenditure in fog computing is 40.48 % less than the conventional cloud computing model . story_separator_special_tag it is in vogue to consider how to incorporate various home devices such as set-top boxes into content delivery architectures using the peer-to-peer ( p2p ) paradigm . the hope is to enhance the efficiency of content delivery , e.g. , in terms of reliability , availability , throughput , or to reduce the cost of the content delivery platform or to improve the end user experience . while it is easy to point out the benefits of such proposals , they usually do not consider the implications with regard to energy costs . in this paper we explore the energy trade-offs of such p2p architectures , data center architectures , and content distribution networks ( cdns ) by building upon an energy consumption model of the transport network and datacenters developed in the context of internet tv ( iptv ) . our results show that a cdn within an isp is able to minimize the overall power consumption . while a p2p architecture may reduce the power consumption of the service provider , it increases the overall energy consumption . story_separator_special_tag tiny computers located in end-user premises are becoming popular as local servers for internet of things ( iot ) and fog computing services . these highly distributed servers that can host and distribute content and applications in a peer-to-peer ( p2p ) fashion are known as nano data centers ( ndcs ) . despite the growing popularity of nano servers , their energy consumption is not well-investigated . to study the energy consumption of ndcs , we propose and use flow-based and time-based energy consumption models for shared and unshared network equipment , respectively . to apply and validate these models , a set of measurements and experiments are performed to compare the energy consumption of a service provided by ndcs and centralized data centers ( dcs ) . a number of findings emerge from our study , including the factors in the system design that allow ndcs to consume less energy than their centralized counterpart . these include the type of access network attached to nano servers and a nano server 's time utilization ( the ratio of the idle time to active time ) . additionally , the type of applications running on ndcs and factors such as number of downloads story_separator_special_tag the content delivery network ( cdn ) intensively uses cache to push the content close to end users . over both the traditional internet architecture and the emerging cloud-based framework , cache allocation has been the core problem that any cdn operator needs to address . as the first step for cache deployment , cdn operators need to discover or estimate the distribution of user requests in different geographic areas . this step results in a statistical spatial model for the user requests , which is used as the key input to solve the optimal cache deployment problem . more often than not , the temporal information in user requests is omitted to simplify the cdn design .
in this paper , we disclose that the spatial request model alone may not lead to truly optimal cache deployment and revisit the problem by taking the dynamic traffic demands into consideration . specifically , we model the time-varying traffic demands and formulate the distributed cache deployment optimization problem with an integer linear program ( ilp ) . to solve the problem efficiently , we transform the ilp problem into a scalable form and propose a greedy algorithm to tackle it . via experiments story_separator_special_tag the content delivery network ( cdn ) intensively uses cache to push the content close to end users . over both the traditional internet architecture and the emerging cloud-based framework , cache allocation has been the core problem that any cdn operator needs to address . as the first step for cache deployment , cdn operators need to discover or estimate the distribution of user requests in different geographic areas . this step results in a statistical spatial model for the user requests , which is used as the key input to solve the optimal cache deployment problem . more often than not , the temporal information in user requests is omitted to simplify the cdn design . in this paper , we disclose that the spatial request model alone may not lead to truly optimal cache deployment . by considering the temporal information in user requests , we provide a dynamic-traffic-based solution to this broadly studied problem . via experiments over the north american isps ' points of presence ( pops ) network , our new solution outperforms the traditional cdn design method and saves the overall delivery cost by 16 % to 20 % . story_separator_special_tag selecting suitable content delivery networks ( cdns ) is critical for any content provider ( cp ) . normally , a cp chooses the cdn that has a larger point of presence ( pop ) footprint and thus is believed to have better performance . in this paper , we investigate if smaller , cheaper cdns could have a performance comparable to that of big ones . we develop a method for cdn selection and perform a case study using real-world cdn demand trace data over a synthetic pop network . our study would help the cp better understand the intricate relationship between the number of pops and the performance of a cdn . story_separator_special_tag soldiers and front-line personnel operating in tactical environments increasingly make use of handheld devices to help with tasks such as face recognition , language translation , decision-making , and mission planning . these resource-constrained edge environments are characterized by dynamic context , limited computing resources , high levels of stress , and intermittent network connectivity . cyber-foraging is the leveraging of external resource-rich surrogates to augment the capabilities of resource-limited devices . in cloudlet-based cyber-foraging , resource-intensive computation and data are offloaded to cloudlets . forward-deployed , discoverable , virtual-machine-based tactical cloudlets can be hosted on vehicles or other platforms to provide infrastructure to offload computation , provide forward data staging for a mission , perform data filtering to remove unnecessary data from streams intended for dismounted users , and serve as collection points for data heading for enterprise repositories . this paper describes tactical cloudlets and presents experimentation results for five different cloudlet provisioning mechanisms .
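the two cache deployment abstracts above formulate placement as an ilp over time-varying demand and then fall back to a greedy method for scale . a toy greedy placement over per-slot demand , not the papers ' exact formulation :

```python
# Greedy cache deployment under time-varying demand: repeatedly place
# the next cache unit at the PoP where it absorbs the most remote
# traffic, summed over all time slots. All numbers are illustrative.
def greedy_cache_placement(demand, budget, unit_capacity):
    """demand[pop][t]: requests at PoP `pop` in time slot `t`.
    Returns {pop: cache_units}."""
    placement = {pop: 0 for pop in demand}
    for _ in range(budget):
        def gain(pop):
            served = placement[pop] * unit_capacity
            # Traffic newly absorbed by one more unit, per time slot.
            return sum(min(max(d - served, 0), unit_capacity)
                       for d in demand[pop])
        best = max(placement, key=gain)
        placement[best] += 1
    return placement

demand = {"nyc": [90, 120, 60], "sfo": [40, 50, 45], "chi": [70, 30, 80]}
print(greedy_cache_placement(demand, budget=4, unit_capacity=50))
```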
the goal is to demonstrate that cyber-foraging in tactical environments is possible by moving cloud computing concepts and technologies closer to the edge so that tactical cloudlets , even if disconnected from the enterprise , can provide capabilities that can lead to story_separator_special_tag the size of multi-modal , heterogeneous data collected through various sensors is growing exponentially . it demands intelligent data reduction , data mining and analytics at edge devices . data compression can reduce the network bandwidth and transmission power consumed by edge devices . this paper proposes , validates and evaluates fog data , a service-oriented architecture for fog computing . the centerpiece of the proposed architecture is a low-power embedded computer that carries out data mining and data analytics on raw data collected from various wearable sensors used for telehealth applications . the embedded computer collects the sensed data as time series , analyzes it , and finds the similar patterns present . patterns are stored , and unique patterns are transmitted . also , the embedded computer extracts clinically relevant information that is sent to the cloud . a working prototype of the proposed architecture was built and used to carry out case studies on telehealth big data applications . specifically , our case studies used the data from sensors worn by patients with either speech motor disorders or cardiovascular problems . we implemented and evaluated both generic and application-specific data mining techniques to show story_separator_special_tag the internet of things ( iot ) is experiencing huge hype these days , thanks to the increasing capabilities of embedded devices that enable their adoption in new fields of application ( e.g . wireless sensor networks , connected cars , health care , etc. ) . on the one hand , this is leading to an increasing adoption of multi-tenancy solutions for cloud and fog computing , to analyze and store the data produced . on the other hand , power consumption has become a major concern for almost every digital system , from the smallest embedded circuits to the biggest computer clusters , with all the shades in between . fine-grain control mechanisms are then needed to cap power consumption at each level of the stack , while still guaranteeing service level agreements ( slas ) to the hosted applications . in this work , we propose dockercap , a software-level power capping orchestrator for docker containers that follows an observe-decide-act loop structure : this allows it to react quickly to changes that impact power consumption by managing the resources of each container at run-time to ensure the desired power cap . we show how we are able story_separator_special_tag computational resources distributed at the edge of the network are the fundamental infrastructural component of edge computing . the operational scale of edge computing introduces new challenges for building and operating suitable computation platforms . many application scenarios require edge computing resources to provide reliable response times while operating in dynamic and resource-constrained environments . in this paper , we present a novel architecture for energy-aware , cluster-based edge computers that are designed to be portable and usable in fieldwork scenarios . we use compact general-purpose commodity hardware to build a high-density cluster prototype , and implement a power-management runtime to enable real-time energy-awareness .
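the fog data abstract above reduces bandwidth by storing patterns and transmitting only unique ones . a minimal sketch of that deduplication step over a sensor time series ; real systems would match similar rather than strictly identical windows :

```python
import hashlib

def summarize_stream(samples, window=8):
    """Split a sensor time series into fixed-length windows, keep one
    copy of each distinct pattern, and return only what must be sent
    upstream. Window length is an arbitrary illustration value."""
    seen, to_transmit = set(), []
    for i in range(0, len(samples) - window + 1, window):
        pattern = tuple(samples[i:i + window])
        digest = hashlib.sha1(repr(pattern).encode()).hexdigest()
        if digest not in seen:      # only unique patterns leave the edge
            seen.add(digest)
            to_transmit.append(pattern)
    return to_transmit

signal = [0, 1, 0, 1, 0, 1, 0, 1] * 3 + [5, 9, 5, 9, 5, 9, 5, 9]
unique = summarize_stream(signal)
print(f"windows: {len(signal) // 8}, transmitted: {len(unique)}")  # 4 -> 2
```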
furthermore , we present an experimental analysis of the energy and resource-consumption characteristics of our prototype in the context of a data analytics application . the results show the feasibility of our prototype for the presented scenarios , but also reveal the intricacies of power-management approaches already built into modern cpus . we show that different load balancing policies and cluster configurations have a significant impact on energy consumption and system responsiveness . our insights lay the groundwork for future research on energy-consumption optimization approaches for cluster-based edge computers . story_separator_special_tag cloud computing has become the de facto computing platform for application processing in the era of the internet of things ( iot ) . however , limitations of the cloud model , such as high transmission latency and high costs , are giving birth to a new computing paradigm called edge computing ( a.k.a. fog computing ) . fog computing aims to move the data processing close to the network edge so as to reduce internet traffic . however , since the servers at the fog layer are not as powerful as the ones in the cloud , there is a need to balance the data processing between the fog and the cloud . moreover , besides the data offloading issue , the energy efficiency of fog computing nodes has become an increasing concern . densely deployed fog nodes are a major source of the carbon footprint in iot systems . to reduce the usage of brown energy resources ( e.g. , powered by energy produced through fossil fuels ) , green energy is an alternative option . in this paper , we propose employing dual energy sources for supporting the fog nodes , where solar power is the story_separator_special_tag recent years have seen an explosion of data volumes from a myriad of distributed sources such as ubiquitous cameras and various sensors . the challenges of analyzing these geographically dispersed datasets are increasing due to the significant data movement overhead , time-consuming data aggregation , and escalating energy needs . rather than constantly moving a tremendous amount of raw data to remote warehouse-scale computing systems for processing , it would be beneficial to leverage in-situ server systems ( ins ) to pre-process data , i.e. , bringing computation to where the data is located . this paper takes the first step towards designing server clusters for data processing in the field . we investigate two representative in-situ computing applications , where data is normally generated from environmentally sensitive areas or remote places that lack established utility infrastructure . these very special operating environments of in-situ servers urge us to explore standalone ( i.e. , off-grid ) systems that offer the opportunity to benefit from local , self-generated energy sources . in this work we implement a heavily instrumented proof-of-concept prototype called insure : in-situ server systems using renewable energy . we develop a novel energy buffering mechanism and a unique story_separator_special_tag driven by the visions of the internet of things and 5g communications , recent years have seen a paradigm shift in mobile computing , from centralized mobile cloud computing towards mobile edge computing ( mec ) . the main feature of mec is to push mobile computing , network control and storage to the network edges ( e.g.
, base stations and access points ) so as to enable computation-intensive and latency-critical applications at the resource-limited mobile devices . mec promises dramatic reduction in latency and mobile energy consumption , tackling the key challenges for materializing the 5g vision . the promised gains of mec have motivated extensive efforts in both academia and industry on developing the technology . a main thrust of mec research is to seamlessly merge the two disciplines of wireless communications and mobile computing , resulting in a wide range of new designs ranging from techniques for computation offloading to network architectures . this paper provides a comprehensive survey of the state-of-the-art mec research with a focus on joint radio-and-computational resource management . we also present a research outlook consisting of a set of promising directions for mec research , including mec system deployment , cache-enabled mec , story_separator_special_tag we describe a new approach to power saving and battery life extension on an untethered laptop through wireless remote processing of power-costly tasks . we ran a series of experiments comparing the power consumption of processes run locally with that of the same processes run remotely . we examined the trade-off between communication power expenditures and the power cost of local processing . this paper describes our methodology and the results of our experiments . we suggest ways to further improve this approach , and outline a software design to support remote process execution . story_separator_special_tag the recent introduction of smartphones has resulted in an explosion of innovative mobile applications . the computational requirements of many of these applications , however , cannot be met by the smartphone itself . the compute power of the smartphone can be enhanced by distributing the application over other compute resources . existing solutions comprise a lightweight client running on the smartphone and a heavyweight compute server running on , for example , a cloud . this places the user in a dependent position , however , because the user only controls the client application . in this paper , we follow a different model , called cyber foraging , that gives users full control over all parts of the application . we have implemented the model using the ibis middleware . we evaluate the model using an innovative application in the domain of multimedia computing , and show that cyber foraging increases the application 's responsiveness and accuracy whilst decreasing its energy usage . story_separator_special_tag this paper presents maui , a system that enables fine-grained energy-aware offload of mobile code to the infrastructure . previous approaches to these problems either relied heavily on programmer support to partition an application , or they were coarse-grained , requiring full process ( or full vm ) migration . maui uses the benefits of a managed code environment to offer the best of both worlds : it supports fine-grained code offload to maximize energy savings with minimal burden on the programmer . maui decides at run-time which methods should be remotely executed , driven by an optimization engine that achieves the best energy savings possible under the mobile device 's current connectivity constraints .
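the remote processing and maui abstracts above rest on the same break-even test : offload when the energy to ship state over the network is lower than the energy to compute locally . a toy decision rule with placeholder device and radio constants , not maui 's actual optimizer :

```python
def should_offload(cycles, state_bytes,
                   cpu_joule_per_cycle=1.0e-9,    # local compute cost
                   radio_joule_per_byte=5.0e-6,   # uplink+downlink cost
                   bandwidth_bytes_s=1.0e6,
                   radio_idle_w=0.5):
    """Offload iff the network energy beats the local compute energy.
    All constants are illustrative placeholder estimates."""
    e_local = cycles * cpu_joule_per_cycle
    transfer_s = state_bytes / bandwidth_bytes_s
    e_remote = state_bytes * radio_joule_per_byte + transfer_s * radio_idle_w
    return e_remote < e_local

# A compute-heavy face-recognition call with little state: offload.
print(should_offload(cycles=5e9, state_bytes=50_000))      # True
# A tiny method with a big payload: keep it local.
print(should_offload(cycles=1e6, state_bytes=2_000_000))   # False
```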
in our evaluation , we show that maui enables : 1 ) a resource-intensive face recognition application that consumes an order of magnitude less energy , 2 ) a latency-sensitive arcade game application that doubles its refresh rate , and 3 ) a voice-based language translation application that bypasses the limitations of the smartphone environment by executing unsupported components remotely . story_separator_special_tag the convergence of mobile computing and cloud computing enables new mobile applications that are both resource-intensive and interactive . for these applications , end-to-end network bandwidth and latency matter greatly when cloud resources are used to augment the computational power and battery life of a mobile device . this dissertation designs and implements a new architectural element called a cloudlet that arises from the convergence of mobile computing and cloud computing . cloudlets represent the middle tier of a 3-tier hierarchy , mobile device - cloudlet - cloud , to achieve the right balance between cloud consolidation and network responsiveness . we first present quantitative evidence that shows cloud location can affect the performance of mobile applications and cloud consolidation . we then describe an architectural solution using cloudlets that are a seamless extension of today 's cloud computing infrastructure . finally , we define minimal functionalities that cloudlets must offer above/beyond standard cloud computing , and address corresponding technical challenges . story_separator_special_tag mobile applications are becoming increasingly ubiquitous and provide ever richer functionality on mobile devices . at the same time , such devices often enjoy strong connectivity with more powerful machines ranging from laptops and desktops to commercial clouds . this paper presents the design and implementation of clonecloud , a system that automatically transforms mobile applications to benefit from the cloud . the system is a flexible application partitioner and execution runtime that enables unmodified mobile applications running in an application-level virtual machine to seamlessly offload part of their execution from mobile devices onto device clones operating in a computational cloud . clonecloud uses a combination of static analysis and dynamic profiling to partition applications automatically at a fine granularity while optimizing execution time and energy use for a target computation and communication environment . at runtime , the application partitioning is effected by migrating a thread from the mobile device at a chosen point to the clone in the cloud , executing there for the remainder of the partition , and re-integrating the migrated thread back to the mobile device . our evaluation shows that clonecloud can adapt application partitioning to different environments , and can help some applications story_separator_special_tag the cloud seems to be an excellent companion of mobile systems , to alleviate battery consumption on smartphones and to back up the user 's data on the fly . indeed , many recent works focus on frameworks that enable mobile computation offloading to software clones of smartphones on the cloud and on designing cloud-based backup systems for the data stored in our devices . both mobile computation offloading and data backup involve communication between the real devices and the cloud . this communication certainly does not come for free .
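clonecloud-style partitioning can be reduced , for illustration , to picking a split point in a profiled linear pipeline of stages ; the real system partitions arbitrary call graphs via static analysis and dynamic profiling , and all profile numbers below are hypothetical :

```python
def best_split(device_s, clone_s, migrate_bytes, bandwidth_bytes_s):
    """Pick the split that minimizes total latency: stages [0, k) run
    on the device, stages [k, n) on the clone, with one thread
    migration at the boundary."""
    n = len(device_s)
    best = (float("inf"), n)               # (cost, split index)
    for k in range(n + 1):
        cost = sum(device_s[:k]) + sum(clone_s[k:])
        if k < n:                          # migration cost at boundary k
            cost += migrate_bytes[k] / bandwidth_bytes_s
        best = min(best, (cost, k))
    return best

device_s = [0.05, 0.40, 1.20, 0.10]        # stage times on the phone (s)
clone_s  = [0.02, 0.05, 0.15, 0.02]        # same stages on the clone (s)
migrate  = [5e6, 1e5, 2e6, 5e4]            # thread state at each boundary
cost, k = best_split(device_s, clone_s, migrate, bandwidth_bytes_s=1e6)
print(f"split at stage {k}: run {k} stage(s) locally, cost {cost:.2f} s")
```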
it costs in terms of bandwidth ( the traffic overhead to communicate with the cloud ) and in terms of energy ( computation and use of network interfaces on the device ) . in this work we study the feasibility of both mobile computation offloading and mobile software/data backup in real-life scenarios . in our study we assume an architecture where each real device is associated with a software clone on the cloud . we consider two types of clones : the off-clone , whose purpose is to support computation offloading , and the back-clone , which comes into use when a restore of the user 's data and apps is needed story_separator_special_tag in this paper , we consider an environment in which computational offloading is adopted amongst mobile devices . we call such an environment a mobile device cloud ( mdc ) . in this work , we highlight via emulation , experimentation and real measurements , the potential gain in computation time and energy consumption that can be achieved by offloading tasks within an mdc . we also propose and develop an experimental platform to enable researchers to create and experiment with novel offloading algorithms in mdcs . story_separator_special_tag mobile cloud computing ( mcc ) is aimed at integrating mobile devices with cloud computing . it is one of the most important concepts that have emerged in the last few years . mobile devices , in the traditional agent-client architecture of mcc , only utilize resources in the cloud to enhance their functionalities . however , modern mobile devices have many more resources than before . as a result , researchers have begun to consider the possibility of mobile devices themselves sharing resources . this is called the cooperation-based architecture of mcc . resource discovery is one of the most important issues that need to be solved to achieve this goal . most of the existing work on resource discovery has adopted a fixed choice of centralized or flooding strategies . many improved versions of energy-efficient methods based on both strategies have been proposed by researchers due to the limited battery life of mobile devices . this paper proposes a novel adaptive method of resource discovery from a different point of view to distinguish it from existing work . the proposed method automatically switches between centralized and flooding strategies to save energy according to different network environments . theoretical story_separator_special_tag augmenting the long-term evolution ( lte ) evolved nodeb ( enb ) with cloud resources offers a low-latency , resilient , and lte-aware environment for offloading internet of things ( iot ) services and applications . by means of device memory replication , the iot applications deployed at an lte-integrated edge cloud can scale their computing and storage requirements to support different resource-intensive service offerings . despite this potential , the massive number of iot devices limits the lte edge cloud responsiveness as the lte radio interface becomes the major bottleneck , given the unscalability of its uplink access and data transfer procedures to support a large number of devices that simultaneously replicate their memory objects with the lte edge cloud .
we propose replisom , an lte-aware edge cloud architecture and an lte-optimized memory replication protocol that relaxes the lte bottlenecks through a delay- and radio-resource-efficient replication scheme based on device-to-device communication technology and sparse recovery from the theory of compressed sampling . replisom effectively schedules the memory replication occasions to resolve contentions for the radio resources as a large number of devices simultaneously transmit their memory replicas . our analysis and numerical evaluation suggest story_separator_special_tag edge computing in the internet of things enhances application execution by bringing cloud resources into the close proximity of resource-constrained end devices at the edge and by enabling task offloading from end devices to the edge . in this paper , edge computing platforms are extended into the data-producing end devices , including wireless sensor network nodes and smartphones , with mobile agents . mobile agents operate as a multi-agent system on the opportunistic network of heterogeneous end devices , where the benefits include autonomous , asynchronous and adaptive execution and relocation of application-specific tasks , while taking into account local resource availability . in addition to the vertical edge connectivity , mobile agents enable horizontal sharing of information between end devices . use cases are presented , where mobile agents address challenges in current edge computing platforms . an edge application is evaluated , where mobile agents as a multi-agent system process sensor data in a heterogeneous set of end devices , control the operation of the devices and share their results with system components . mobile agents operate atop a rest-based mobile agent software framework that relies on embedded web services for interoperability . a real-world story_separator_special_tag distributed online data analytics has attracted significant research interest in recent years with the advent of fog and cloud computing . the popularity of novel distributed applications such as crowdsourcing and crowdsensing has fostered the need for scalable energy-efficient platforms that can enable distributed data analytics . in this paper , we propose cardap , a ( c ) ontext ( a ) ware ( r ) eal-time ( d ) ata ( a ) nalytics ( p ) latform . cardap is a generic , flexible and extensible , component-based platform that can be deployed in complex distributed mobile analytics applications , e.g . sensing activity of citizens in smart cities . cardap incorporates a number of energy-efficient data delivery strategies using real-time mobile data stream mining for data reduction and thus less data transmission . extensive experimental evaluations indicate the cardap platform can deliver significant benefits in energy efficiency over naive approaches . lessons learnt and future work conclude the paper . story_separator_special_tag in the last five years , edge computing has attracted tremendous attention from industry and academia due to its promise to reduce latency , save bandwidth , improve availability , and protect data privacy . at the same time , we have witnessed the proliferation of ai algorithms and models which accelerate the successful deployment of intelligence mainly in cloud services . these two trends , combined together , have created a new horizon : edge intelligence ( ei ) .
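cardap-style energy-efficient data delivery , as described above , boils down to mining the stream on the device and uploading summaries instead of raw readings . a minimal sketch , with an arbitrary 2.5-sigma anomaly rule standing in for the platform 's actual mining strategies :

```python
import statistics

def reduce_for_upload(window, z_thresh=2.5):
    """Transmit a compact summary of a sensing window plus only the
    anomalous raw readings, instead of the whole stream."""
    mean = statistics.mean(window)
    std = statistics.pstdev(window) or 1.0
    anomalies = [x for x in window if abs(x - mean) / std > z_thresh]
    return {"n": len(window), "mean": round(mean, 2),
            "std": round(std, 2), "anomalies": anomalies}

window = [20.1, 20.3, 19.9, 20.0, 20.2, 35.7, 20.1, 19.8]
print(reduce_for_upload(window))  # 8 readings -> 4 fields + 1 outlier
```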
the development of ei requires much attention from both the computer systems research community and the ai community to meet these demands . however , existing computing techniques used in the cloud are not directly applicable to edge computing due to the diversity of computing sources and the distribution of data sources . we envision that a framework that can be rapidly deployed at the edge and enable edge ai capabilities is missing . to address this challenge , in this paper we first present the definition and a systematic review of ei . then , we introduce an open framework for edge intelligence ( openei ) , which is a lightweight software platform to equip story_separator_special_tag tensorflow is a machine learning system that operates at large scale and in heterogeneous environments . tensorflow uses dataflow graphs to represent computation , shared state , and the operations that mutate that state . it maps the nodes of a dataflow graph across many machines in a cluster , and within a machine across multiple computational devices , including multicore cpus , general-purpose gpus , and custom-designed asics known as tensor processing units ( tpus ) . this architecture gives flexibility to the application developer : whereas in previous `` parameter server '' designs the management of shared state is built into the system , tensorflow enables developers to experiment with novel optimizations and training algorithms . tensorflow supports a variety of applications , with particularly strong support for training and inference on deep neural networks . several google services use tensorflow in production ; we have released it as an open-source project , and it has become widely used for machine learning research . in this paper , we describe the tensorflow dataflow model in contrast to existing systems , and demonstrate the compelling performance that tensorflow achieves for several real-world applications . story_separator_special_tag caffe provides multimedia scientists and practitioners with a clean and modifiable framework for state-of-the-art deep learning algorithms and a collection of reference models . the framework is a bsd-licensed c++ library with python and matlab bindings for training and deploying general-purpose convolutional neural networks and other deep models efficiently on commodity architectures . caffe fits industry and internet-scale media needs by cuda gpu computation , processing over 40 million images a day on a single k40 or titan gpu ( approximately 2.5 ms per image ) . by separating model representation from actual implementation , caffe allows experimentation and seamless switching among platforms for ease of development and deployment from prototyping machines to cloud environments . caffe is maintained and developed by the berkeley vision and learning center ( bvlc ) with the help of an active community of contributors on github . it powers ongoing research projects , large-scale industrial applications , and startup prototypes in vision , speech , and multimedia . story_separator_special_tag mxnet is a multi-language machine learning ( ml ) library to ease the development of ml algorithms , especially for deep neural networks . embedded in the host language , it blends declarative symbolic expression with imperative tensor computation . it offers auto differentiation to derive gradients . mxnet is computation- and memory-efficient and runs on various heterogeneous systems , ranging from mobile devices to distributed gpu clusters .
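the tensorflow abstract above describes computation as a dataflow graph whose nodes the runtime places across devices . assuming tensorflow 2.x , tf.function makes that graph construction visible in a few lines :

```python
import tensorflow as tf  # assumes TensorFlow 2.x

# tf.function traces the Python body into a dataflow graph -- the node
# representation described above -- which the runtime can then place
# across CPUs, GPUs and TPUs.
@tf.function
def affine(x, w, b):
    return tf.matmul(x, w) + b

x = tf.random.normal([4, 3])
w = tf.random.normal([3, 2])
b = tf.zeros([2])
print(affine(x, w, b).shape)  # (4, 2), computed by executing the graph

graph = affine.get_concrete_function(x, w, b).graph
print([op.type for op in graph.get_operations()])  # MatMul, AddV2, ...
```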
this paper describes both the api design and the system implementation of mxnet , and explains how the embedding of both symbolic expressions and tensor operations is handled in a unified fashion . our preliminary experiments reveal promising results on large-scale deep neural network applications using multiple gpu machines . story_separator_special_tag machine learning has changed the computing paradigm . products today are built with machine intelligence as a central attribute , and consumers are beginning to expect near-human interaction with the appliances they use . however , much of the deep learning revolution has been limited to the cloud . recently , several machine learning packages based on edge devices have been announced which aim to offload the computing to the edges . however , little research has been done to evaluate these packages on the edges , making it difficult for end users to select an appropriate pair of software and hardware . in this paper , we make a performance comparison of several state-of-the-art machine learning packages on the edges , including tensorflow , caffe2 , mxnet , pytorch , and tensorflow lite . we focus on evaluating the latency , memory footprint , and energy of these tools with two popular types of neural networks on different edge devices . this evaluation not only provides a reference to select appropriate combinations of hardware and software packages for end users but also points out possible future directions to optimize packages for developers . story_separator_special_tag we propose distributed deep neural networks ( ddnns ) over distributed computing hierarchies , consisting of the cloud , the edge ( fog ) and end devices . while being able to accommodate inference of a deep neural network ( dnn ) in the cloud , a ddnn also allows fast and localized inference using shallow portions of the neural network at the edge and end devices . when supported by a scalable distributed computing hierarchy , a ddnn can scale up in neural network size and scale out in geographical span . due to its distributed nature , ddnns enhance sensor fusion , system fault tolerance and data privacy for dnn applications . in implementing a ddnn , we map sections of a dnn onto a distributed computing hierarchy . by jointly training these sections , we minimize communication and resource usage for devices and maximize the usefulness of extracted features which are utilized in the cloud . the resulting system has built-in support for automatic sensor fusion and fault tolerance . as a proof of concept , we show a ddnn can exploit geographical diversity of sensors to improve object recognition accuracy and reduce communication cost . in our story_separator_special_tag the computation for today 's intelligent personal assistants such as apple siri , google now , and microsoft cortana is performed in the cloud . this cloud-only approach requires significant amounts of data to be sent to the cloud over the wireless network and puts significant computational pressure on the datacenter .
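the ddnn abstract above allows fast localized inference via shallow exits at the edge . a minimal early-exit control flow , with placeholder model callables and an arbitrary confidence threshold , rather than the paper 's joint-training setup :

```python
import math

def softmax(logits):
    m = max(logits)
    exps = [math.exp(v - m) for v in logits]
    s = sum(exps)
    return [e / s for e in exps]

def hierarchical_infer(x, local_model, cloud_model, threshold=0.9):
    """Answer at the edge when the shallow local exit is confident,
    otherwise escalate to the cloud model."""
    probs = softmax(local_model(x))
    conf = max(probs)
    if conf >= threshold:
        return probs.index(conf), "edge"     # fast, local exit
    return cloud_model(x), "cloud"           # rarer, expensive path

# Toy stand-ins: the local model is unsure, so the request escalates.
local = lambda x: [0.2, 0.3, 0.1]            # near-uniform logits
cloud = lambda x: 1                          # pretend cloud answer
print(hierarchical_infer([0.5], local, cloud))  # (1, 'cloud')
```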
however , as the computational resources in mobile devices become more powerful and energy-efficient , questions arise as to whether this cloud-only processing is desirable moving forward , and what the implications are of pushing some or all of this compute to the mobile devices on the edge . in this paper , we examine the status quo approach of cloud-only processing and investigate computation partitioning strategies that effectively leverage both the cycles in the cloud and on the mobile device to achieve low latency , low energy consumption , and high datacenter throughput for this class of intelligent applications . our study uses 8 intelligent applications spanning computer vision , speech , and natural language domains , all employing state-of-the-art deep neural networks ( dnns ) as the core machine learning technique . we find that given the characteristics of dnn algorithms , a fine-grained , layer-level story_separator_special_tag with the rapid development of deep learning , neural network and deep learning algorithms have been widely used in various fields , e.g. , image , video and voice processing . however , the neural network model is getting larger and larger , which is reflected in the number of model parameters . although a wealth of existing efforts exploit the gpu platforms currently used by researchers to improve computing performance , dedicated hardware solutions are essential and emerging to provide advantages over pure software solutions . in this paper , we systematically investigate neural network accelerators based on fpga . specifically , we respectively review the accelerators designed for specific problems , specific algorithms , algorithm features , and general templates . we also compared the design and implementation of accelerators based on fpga under different devices and network models and compared them with cpu and gpu versions . finally , we discuss the advantages and disadvantages of accelerators on fpga platforms and further explore opportunities for future research . story_separator_special_tag the convolutional neural network ( cnn ) has been widely employed for image recognition because it can achieve high accuracy by emulating the behavior of optic nerves in living creatures . recently , rapid growth of modern applications based on deep learning algorithms has further improved research and implementations . especially , various accelerators for deep cnn have been proposed based on the fpga platform because it has advantages of high performance , reconfigurability , and a fast development cycle , etc . although current fpga accelerators have demonstrated better performance over generic processors , the accelerator design space has not been well exploited . one critical problem is that the computation throughput may not well match the memory bandwidth provided by an fpga platform . consequently , existing approaches cannot achieve the best performance due to under-utilization of either logic resources or memory bandwidth . at the same time , the increasing complexity and scalability of deep learning applications aggravate this problem . in order to overcome this problem , we propose an analytical design scheme using the roofline model .
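the roofline analysis mentioned above bounds attainable throughput by the lesser of peak compute and what memory bandwidth can feed . a small worked version of that standard formula , with hypothetical fpga-like platform numbers :

```python
# Roofline model: attainable GFLOP/s =
#   min(peak_gflops, bandwidth_gb_s * operational_intensity).
def roofline(op_intensity_flop_per_byte,
             peak_gflops=100.0, bandwidth_gb_s=12.8):
    return min(peak_gflops, bandwidth_gb_s * op_intensity_flop_per_byte)

for oi in (1, 4, 8, 16):  # sweep a CNN design's operational intensity
    print(f"OI={oi:>2} flop/byte -> {roofline(oi):6.1f} GFLOP/s")
# Designs left of the ridge point (peak/bandwidth ~= 7.8 flop/byte) are
# memory-bound; loop tiling and data reuse move a design to the right.
```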
for any solution of a cnn design , we quantitatively analyze its computing throughput and required memory bandwidth using various optimization techniques , story_separator_special_tag in recent years , convolutional neural network ( cnn ) based methods have achieved great success in a large number of applications and have been among the most powerful and widely used techniques in computer vision . however , cnn-based methods are computation-intensive and resource-consuming , and thus are hard to integrate into embedded systems such as smartphones , smart glasses , and robots . fpga is one of the most promising platforms for accelerating cnn , but the limited bandwidth and on-chip memory size limit the performance of fpga accelerators for cnn . in this paper , we go deeper with the embedded fpga platform on accelerating cnns and propose a cnn accelerator design on embedded fpga for imagenet large-scale image classification . we first present an in-depth analysis of state-of-the-art cnn models and show that convolutional layers are computation-centric and fully-connected layers are memory-centric . then a dynamic-precision data quantization method and a convolver design that is efficient for all layer types in cnn are proposed to improve the bandwidth and resource utilization . results show that only 0.4 % accuracy loss is introduced by our data quantization flow for the very deep vgg16 model when 8/4-bit quantization is used story_separator_special_tag machine learning ( ml ) tasks are becoming pervasive in a broad range of applications , and in a broad range of systems ( from embedded systems to data centers ) . as computer architectures evolve toward heterogeneous multi-cores composed of a mix of cores and hardware accelerators , designing hardware accelerators for ml techniques can simultaneously achieve high efficiency and broad application scope . while efficient computational primitives are important for a hardware accelerator , inefficient memory transfers can potentially void the throughput , energy , or cost advantages of accelerators , that is , an amdahl 's law effect , and thus , they should become a first-order concern , just like in processors , rather than an element factored into accelerator design in a second step . in this article , we introduce a series of hardware accelerators ( i.e. , the diannao family ) designed for ml ( especially neural networks ) , with a special emphasis on the impact of memory on accelerator design , performance , and energy . we show that , on a number of representative neural network layers , it is possible to achieve a speedup of 450.65x over a gpu , and story_separator_special_tag in recent years , neural network accelerators have been shown to achieve both high energy efficiency and high performance for a broad application scope within the important category of recognition and mining applications . still , both the energy efficiency and performance of such accelerators remain limited by memory accesses . in this paper , we focus on image applications , arguably the most important category among recognition and mining applications . the neural networks which are state-of-the-art for these applications are convolutional neural networks ( cnn ) , and they have an important property : weights are shared among many neurons , considerably reducing the neural network memory footprint . this property allows a cnn to be mapped entirely within an sram , eliminating all dram accesses for weights .
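the weight-sharing property cited in the last abstract above is easy to check with back-of-the-envelope arithmetic : conv weights are small and heavily reused , so whole cnns fit in on-chip sram . the layer shape below is an arbitrary example :

```python
def conv_layer_stats(h, w, c_in, c_out, k):
    """Weights vs. work for one conv layer."""
    weights = k * k * c_in * c_out            # shared across positions
    outputs = h * w * c_out
    macs = outputs * k * k * c_in             # multiply-accumulates
    return weights, macs

weights, macs = conv_layer_stats(h=56, w=56, c_in=64, c_out=64, k=3)
print(f"weights: {weights * 2 / 1024:.0f} KiB (16-bit)")   # ~72 KiB
print(f"reuse:   each weight used {macs // weights:,} times")  # 3,136
```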
by further hoisting this accelerator next to the image sensor , it is possible to eliminate all remaining dram accesses , i.e. , for inputs and outputs . in this paper , we propose such a cnn accelerator , placed next to a cmos or ccd sensor . the absence of dram accesses combined with a careful exploitation of the specific data access patterns within cnns allows us to design
( 1.1 ) definition let $ 1 \ne g $ be a group . a subgroup $ m $ of $ g $ is said to be maximal if $ m \ne g $ and there exists no subgroup $ h $ such that $ m < h < g $ . if $ g $ is finite , by order reasons every subgroup $ h \ne g $ is contained in a maximal subgroup . if $ m $ is maximal in $ g $ , then also every conjugate $ g m g^{-1} $ of $ m $ in $ g $ is maximal . indeed $ g m g^{-1} < k < g \implies m < g^{-1} k g < g $ . for this reason the maximal subgroups are studied up to conjugation . ( 1.2 ) lemma let $ g $ be a group and let $ m $ be a maximal subgroup of $ g $ . then : story_separator_special_tag this is the softcover reprint of the english translation of bourbaki 's text groupes et algèbres de lie , chapters 7 to 9 . it completes the previously published translations of chapters 1 to 3 ( 3-540-64242-0 ) and 4 to 6 ( 978-3-540-69171-6 ) by covering the structure and representation theory of semi-simple lie algebras and compact lie groups . chapter 7 deals with cartan subalgebras of lie algebras , regular elements and conjugacy theorems . chapter 8 begins with the structure of split semi-simple lie algebras and their root systems . it goes on to describe the finite-dimensional modules for such algebras , including the character formula of hermann weyl . it concludes with the theory of chevalley orders . chapter 9 is devoted to the theory of compact lie groups , beginning with a discussion of their maximal tori , root systems and weyl groups . it goes on to describe the representation theory of compact lie groups , including the application of integration to establish weyl 's formula in this context . the chapter concludes with a discussion of the actions of compact lie groups on manifolds . the nine chapters together form the most comprehensive text story_separator_special_tag let g be a simple classical algebraic group over an algebraically closed field k of characteristic $ p \ge 0 $ with natural module w . let h be a closed subgroup of g and let v be a nontrivial p-restricted irreducible tensor indecomposable rational kg-module such that the restriction of v to h is irreducible . in this paper we classify the triples ( g , h , v ) of this form , where $ v \ne w , w^{*} $ and h is a disconnected almost simple positive-dimensional closed subgroup of g acting irreducibly on w . moreover , by combining this result with earlier work , we complete the classification of the irreducible triples ( g , h , v ) where g is a simple algebraic group over k , and h is a maximal closed subgroup of positive dimension . story_separator_special_tag let $ g $ be a simple classical algebraic group over an algebraically closed field $ k $ of characteristic $ p \ge 0 $ with natural module $ w $ . let $ h $ be a closed subgroup of $ g $ and let $ v $ be a non-trivial irreducible tensor-indecomposable $ p $ -restricted rational $ kg $ -module such that the restriction of $ v $ to $ h $ is irreducible . in this paper we classify all such triples $ ( g , h , v ) $ , where $ h $ is a maximal closed disconnected positive-dimensional subgroup of $ g $ , and $ h $ preserves a natural geometric structure on $ w $ . story_separator_special_tag let g be a simple algebraic group over an algebraically closed field k of characteristic $ p \geqslant 0 $ , let h be a proper closed subgroup of g and let v be a nontrivial irreducible kg-module , which is p-restricted , tensor indecomposable and rational . assume that the restriction of v to h is irreducible . in this paper , we study the triples ( g , h , v ) of this form when g is a classical group and h is positive-dimensional .
combined with earlier work of dynkin , seitz , testerman and others , our main theorem reduces the problem of classifying the triples ( g , h , v ) to the case where g is an orthogonal group , v is a spin module and h normalizes an orthogonal decomposition of the natural kg-module . story_separator_special_tag in work spread over several decades , dynkin ( [ 4 , 3 ] ) , seitz ( [ 10 , 11 ] ) , and testerman ( [ 16 ] ) classified the maximal closed connected subgroups of simple algebraic groups . their analyses for the classical group cases were based primarily on a striking result : if g is a simple algebraic group and $ \phi : g \to sl ( v ) $ is a tensor indecomposable irreducible rational representation , then with specified exceptions the image of g is maximal among closed connected subgroups of one of the classical groups sl ( v ) , sp ( v ) , or so ( v ) . from a slightly different perspective , the question they answered was : given an irreducible , closed , connected subgroup g of sl ( v ) for some vector space v , find all possibilities for closed , connected overgroups y of g in sl ( v ) . this question of irreducible overgroups , or the restriction of irreducible modules to subgroups , appears in other contexts as well . in this paper we present some results in the absence of the connectedness requirement for the subgroup : the eventual goal is to classify all possible triples ( g , y , v ) with $ g \le \mathrm{aut} ( y ) $ both closed irreducible subgroups of sl ( v ) , $ y \ne sl ( v ) $ , so ( v ) , or sp ( v ) , and y a simple group of classical type . in this paper and [ 5 ] , we give complete results for the case when g is not connected but has simple identity component x , and the $ t_y $ -high weight and $ t_x $ -high weights of v are restricted . specifically , the papers are concerned with the proof of theorem 1 below . let g be a non-connected algebraic group with simple identity component x . let v be an irreducible kg-module with restricted x-high weight story_separator_special_tag a partition of a positive integer $ n $ is a sequence $ \lambda_1 \ge \lambda_2 \ge \dots \ge \lambda_m > 0 $ of integers such that $ \sum_i \lambda_i = n $ . for a positive integer $ p $ , a partition $ \lambda_1 \ge \lambda_2 \ge \dots \ge \lambda_m $ ( or its young diagram ) is called $ p $ -regular if it does not have $ p $ or more equal parts , i.e . if there does not exist $ t \le m - p + 1 $ with $ \lambda_t = \lambda_{t+1} = \dots = \lambda_{t+p-1} $ . let $ f $ be a field of characteristic $ p > 0 $ . it is well known that irreducible representations of the symmetric group $ s_n $ over $ f $ are naturally parametrized by $ p $ -regular partitions of $ n $ ( cf . for example [ 9 , 12 ] ) . if $ \lambda $ is such a partition we denote the corresponding irreducible module by $ d^{\lambda} $ . let $ \mathrm{sgn}_n $ be the one-dimensional sign representation of $ s_n $ over $ f $ ; i.e. , $ \mathrm{sgn}_n = f $ as a vector space and $ g \cdot x = \mathrm{sgn} ( g ) \, x $ for any $ g \in s_n $ and any vector $ x $ . here $ \mathrm{sgn} ( g ) $ is just the sign of the permutation $ g $ . it is clear that for any irreducible $ d^{\lambda} $ , the tensor product $ d^{\lambda} \otimes \mathrm{sgn}_n $ is also irreducible . the problem , usually called the problem of mullineux , is to story_separator_special_tag the representation theory of semisimple algebraic groups over the complex numbers ( equivalently , semisimple complex lie algebras or lie groups , or real compact lie groups ) and the questions of whether a given complex representation is symplectic or orthogonal have been solved since at least the 1950s . similar results for weyl modules of split reductive groups over fields of characteristic different from 2 hold by using similar proofs . this paper considers analogues of these results for simple , induced , and tilting modules of split reductive groups over fields of prime characteristic as well as a complete answer for weyl modules over fields of characteristic 2 .
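the last abstract above asks when a self-dual representation is orthogonal or symplectic ; over the complex numbers this is answered by the classical frobenius-schur indicator , recalled here as standard background rather than as the paper 's own result :

```latex
\[
  \nu_2(\chi) \;=\; \frac{1}{|G|} \sum_{g \in G} \chi(g^2)
  \;=\;
  \begin{cases}
    \;\;\,1 & \text{$V$ self-dual with an invariant symmetric form (orthogonal type),}\\
    \;-1 & \text{$V$ self-dual with an invariant alternating form (symplectic type),}\\
    \;\;\,0 & \text{$V$ not self-dual (complex type),}
  \end{cases}
\]
```

here $ \chi $ is the character of an irreducible complex representation $ v $ of a finite group $ g $ .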
story_separator_special_tag this dissertation is concerned with the study of irreducible embeddings of simple algebraic groups of exceptional type . it is motivated by the role of such embeddings in the study of positive dimensional closed subgroups of classical algebraic groups . the classification of the maximal closed connected subgroups of simple algebraic groups was carried out by e. b. dynkin , g. m. seitz and d. m. testerman . their analysis for the classical groups was based primarily on a striking result : if g is a simple algebraic group and $ \phi : g \to sl ( v ) $ is a tensor indecomposable irreducible rational representation then , with specified exceptions , the image of g is maximal among closed connected subgroups of one of the classical groups sl ( v ) , sp ( v ) or so ( v ) . in the case of closed , not necessarily connected , subgroups of the classical groups , one is interested in considering irreducible embeddings of simple algebraic groups and their automorphism groups : given a simple algebraic group y defined over an algebraically closed field k , one is led to study the embeddings g 0 , g is a story_separator_special_tag let g be the lie algebra of a semi-simple algebraic group g over an algebraically closed field k of characteristic p > 0 . in the first part of this paper we study the structure of g , and focus on the deviations from the characteristic 0 case . in the second part we determine , for almost-simple g , the automorphism group of g . story_separator_special_tag part i. general theory : schemes group schemes and representations induction and injective modules cohomology quotients and associated sheaves factor groups algebras of distributions representations of finite algebraic groups representations of frobenius kernels reduction mod $ p $ part ii . representations of reductive groups : reductive groups simple $ g $ -modules irreducible representations of the frobenius kernels kempf 's vanishing theorem the borel-bott-weil theorem and weyl 's character formula the linkage principle the translation functors filtrations of weyl modules representations of $ g_r t $ and $ g_r b $ geometric reductivity and other applications of the steinberg modules injective $ g_r $ -modules cohomology of the frobenius kernels schubert schemes line bundles on schubert schemes truncated categories and schur algebras results over the integers lusztig 's conjecture and some consequences radical filtrations and kazhdan-lusztig polynomials tilting modules frobenius splitting frobenius splitting and good filtrations representations of quantum groups references list of notations index . story_separator_special_tag introduction preliminaries maximal subgroups of type $ a_1 $ maximal subgroups of type $ a_2 $ maximal subgroups of type $ b_2 $ maximal subgroups of type $ g_2 $ maximal subgroups $ x $ with rank $ ( x ) \geq 3 $ proofs of corollaries 2 and 3 restrictions of small $ g $ -modules to maximal subgroups the tables for theorem 1 and corollary 2 appendix : $ e_8 $ structure constants references .
story_separator_special_tag the author has determined , for all simple simply connected reductive linear algebraic groups defined over a finite field , all the irreducible representations in their defining characteristic of degree below some bound . these also give the small degree projective representations in defining characteristic for the corresponding finite simple groups . for large rank l , this bound is proportional to $ l^3 $ , and for rank less than or equal to 11 much higher . the small rank cases are based on extensive computer calculations . story_separator_special_tag originating from a summer school taught by the authors , this concise treatment includes many of the main results in the area . an introductory chapter describes the fundamental results on linear algebraic groups , culminating in the classification of semisimple groups . the second chapter introduces more specialized topics in the subgroup structure of semisimple groups and describes the classification of the maximal subgroups of the simple algebraic groups . the authors then systematically develop the subgroup structure of finite groups of lie type as a consequence of the structural results on algebraic groups . this approach will help students to understand the relationship between these two classes of groups . the book covers many topics that are central to the subject , but missing from existing textbooks . the authors provide numerous instructive exercises and examples for those who are learning the subject as well as more advanced topics for research students working in related areas . story_separator_special_tag ( 1983 ) . the weyl modules and the irreducible representations of the symplectic group with the fundamental highest weights . communications in algebra : vol . 11 , no . 12 , pp . 1309-1342 . story_separator_special_tag the frobenius-schur indicator ( [ 4 ] , chap . xi , § 8 ) tells us whether a self-dual complex representation of a finite group is an orthogonal or a symplectic one . in the p-modular theory , there is an algorithm derived from this criterion for determining the type of a g-invariant form on a self-dual , simple module as long as p is odd ( see [ 5 ] ) . in characteristic two , the problem appears to be subtle and has not yet found a satisfactory answer . we therefore aim in this paper to investigate systematically modules with g-invariant quadratic forms , paying particular attention to fields of characteristic two . our main result gives a simple way to compute the witt index of a g-invariant quadratic form when g is a finite solvable group and the field is finite of characteristic two . our methods allow us to simplify and unify some known results along the way ( 2.3 , 2.4 , 3.4 ) . story_separator_special_tag summary it seems desirable , from the point of view of finite group theorists , to develop further the representation theory of finite chevalley groups in the characteristic of their definition ( say , p ) by focusing attention on the full array of p-local subgroups ( that is , the parabolics ) rather than the usual single borel or cartan subgroup . the following observation may be regarded as a generalization to arbitrary parabolic subgroups of the standard result [ 3 , theorem 39 ( d ) ] for a borel subgroup , namely , that in an irreducible module , the subspace fixed by a maximal unipotent subgroup is 1-dimensional , and so affords an irreducible module for a levi complement , which is an ( abelian ) cartan subgroup .
it seems natural to state the result first for algebraic groups . story_separator_special_tag keywords : maximal closed connected subgroups ; simply connected simple algebraic group of exceptional type ; semisimple closed connected subgroups ; maximal tori ; root systems ; fundamental weights ; exceptional groups ; irreducible rational tensor indecomposable representation ; embeddings ; maximal parabolic subgroups story_separator_special_tag keywords : maximal closed connected subgroup ; exceptional algebraic groups ; irreducible embeddings of semisimple algebraic groups ; rational modules ; irreducible representations ; embeddings of parabolic subgroups . doi:10.1016/0021-8693(89)90218-4
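for reference , the frobenius-schur indicator invoked in the abstract above has the classical closed form for an irreducible complex character $ \chi $ of a finite group $ G $ ; this is the standard textbook formula , not a result of the cited papers :

$$ \nu_2 ( \chi ) \, = \, \frac{1}{|G|} \sum_{g \in G} \chi ( g^2 ) \, = \, \begin{cases} +1 & \text{if } \chi \text{ is afforded by an orthogonal ( real ) representation ,} \\ -1 & \text{if } \chi \text{ is afforded by a symplectic ( quaternionic ) representation ,} \\ 0 & \text{if } \chi \text{ is not self-dual .} \end{cases} $$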
we present a new algorithm for inferring the home location of twitter users at different granularities , including city , state , time zone or geographic region , using the content of users ' tweets and their tweeting behavior . unlike existing approaches , our algorithm uses an ensemble of statistical and heuristic classifiers to predict locations and makes use of a geographic gazetteer dictionary to identify place-name entities . we find that a hierarchical classification approach , where time zone , state or geographic region is predicted first and city is predicted next , can improve prediction accuracy . we have also analyzed movement variations of twitter users , built a classifier to predict whether a user was travelling in a certain period of time and used that to further improve the location detection accuracy . experimental evidence suggests that our algorithm works well in practice and outperforms the best existing algorithms for predicting the home location of twitter users . story_separator_special_tag we propose a novel network-based approach for location estimation in social media that integrates evidence of the social tie strength between users for improved location estimation . concretely , we propose a location estimator -- friendlylocation -- that leverages the relationship between the strength of the tie between a pair of users , and the distance between the pair . based on an examination of over 100 million geo-encoded tweets and 73 million twitter user profiles , we identify several factors such as the number of followers and how the users interact that can strongly reveal the distance between a pair of users . we use these factors to train a decision tree to distinguish between pairs of users who are likely to live nearby and pairs of users who are likely to live in different areas . we use the results of this decision tree as the input to a maximum likelihood estimator to predict a user 's location . we find that this proposed method significantly improves the results of location estimation relative to a state-of-the-art technique . our system reduces the average error distance for 80 % of twitter users from 40 miles to 21 miles using story_separator_special_tag little research exists on one of the most common , oldest , and most utilized forms of online social geographic information : the 'location ' field found in most virtual community user profiles . we performed the first in-depth study of user behavior with regard to the location field in twitter user profiles . we found that 34 % of users did not provide real location information , frequently incorporating fake locations or sarcastic comments that can fool traditional geographic information tools . when users did input their location , they almost never specified it at a scale any more detailed than their city . in order to determine whether or not natural user behaviors have a real effect on the 'locatability ' of users , we performed a simple machine learning experiment to determine whether we can identify a user 's location by only looking at what that user tweets . we found that a user 's country and state can in fact be determined easily with decent accuracy , indicating that users implicitly reveal location information , with or without realizing it . implications for location-based services and privacy are discussed . story_separator_special_tag users ' locations are important for many applications such as personalized search and localized content delivery .
in this paper , we study the problem of profiling twitter users ' locations with their following network and tweets . we propose a multiple location profiling model ( mlp ) , which has three key features : 1 ) it formally models how likely a user follows another user given their locations and how likely a user tweets a venue given his location , 2 ) it fundamentally captures that a user has multiple locations and his following relationships and tweeted venues can be related to any of his locations , and some of them are even noisy , and 3 ) it novelly utilizes the home locations of some users as partial supervision . as a result , mlp not only discovers users ' locations accurately and completely , but also `` explains '' each following relationship by revealing users ' true locations in the relationship . experiments on a large-scale data set demonstrate those advantages . particularly , 1 ) for predicting users ' home locations , mlp successfully places 62 % of users and outperforms two state-of-the-art methods by 10 story_separator_special_tag mobile applications often need location data , to update locally relevant information and adapt the device context . while most smartphones do include a gps receiver , its frequent use is restricted due to high battery drain . we design and prototype an adaptive location service for mobile devices , a-loc , that helps reduce this battery drain . our design is based on the observation that the required location accuracy varies with location , and hence lower energy and lower accuracy localization methods , such as those based on wifi and cell-tower triangulation , can sometimes be used . our method automatically determines the dynamic accuracy requirement for mobile search-based applications . as the user moves , both the accuracy requirements and the location sensor errors change . a-loc continually tunes the energy expenditure to meet the changing accuracy requirements using the available sensors . a bayesian estimation framework is used to model user location and sensor errors . experiments are performed with android g1 and at & t tilt phones , on paths that include outdoor and indoor locations , using war-driving data from google and microsoft . the experiments show that a-loc not only provides significant story_separator_special_tag twitter is a widely-used social networking service which enables its users to post text-based messages , so-called tweets . poi tags on tweets can show more human-readable , high-level information about a place than just a pair of coordinates . in this paper , we attempt to predict the poi tag of a tweet based on its textual content and time of posting . potential applications include accurate positioning when gps devices fail and disambiguating places located near each other . we consider this task as a ranking problem , i.e. , we try to rank a set of candidate pois according to a tweet by using language and time models . to tackle the sparsity of tweets tagged with pois , we use web pages retrieved by search engines as an additional source of evidence . from our experiments , we find that users indeed leak some information about their accurate locations in their tweets . story_separator_special_tag we propose and evaluate a probabilistic framework for estimating a twitter user 's city-level location based purely on the content of the user 's tweets , even in the absence of any other geospatial cues .
by augmenting the massive human-powered sensing capabilities of twitter and related microblogging services with content-derived location information , this framework can overcome the sparsity of geo-enabled features in these services and enable new location-based personalized information services , the targeting of regional advertisements , and so on . three of the key features of the proposed approach are : ( i ) its reliance purely on tweet content , meaning no need for user ip information , private login information , or external knowledge bases ; ( ii ) a classification component for automatically identifying words in tweets with a strong local geo-scope ; and ( iii ) a lattice-based neighborhood smoothing model for refining a user 's location estimate . the system estimates k possible locations for each user in descending order of confidence . on average we find that the location estimates converge quickly ( needing just 100s of tweets ) , placing 51 % of twitter users within 100 miles of story_separator_special_tag according to a recent report by research firm abi research , location-based social networks could reach revenues as high as $ 13.3 billion by 2014 [ 1 ] . social networks like foursquare and gowalla are in a dead heat in the location war . having said that , it is important to understand that , for privacy and security reasons , most of the people on social networking sites like twitter are unwilling to specify their locations explicitly . this creates a need for software that mines the location of the user based on the implicit attributes associated with him . in this paper , we propose the development of a tool , tweethood , that predicts the location of the user on the basis of his social network . we show the evolution of the algorithm , highlighting the drawbacks of the different approaches and our methodology to overcome them . we perform extensive experiments to show the validity of our system in terms of both accuracy and running time . the experiments performed demonstrate that our system achieves an accuracy of 72.1 % at the city level and 80.1 % at the country level . experimental results show that tweethood story_separator_special_tag geography and social relationships are inextricably intertwined ; the people we interact with on a daily basis almost always live near us . as people spend more time online , data regarding these two dimensions -- geography and social relationships -- are becoming increasingly precise , allowing us to build reliable models to describe their interaction . these models have important implications in the design of location-based services , security intrusion detection , and social media supporting local communities . using user-supplied address data and the network of associations between members of the facebook social network , we can directly observe and measure the relationship between geography and friendship . using these measurements , we introduce an algorithm that predicts the location of an individual from a sparse set of located users with performance that exceeds ip-based geolocation . this algorithm is efficient and scalable , and could be run on a network containing hundreds of millions of users . story_separator_special_tag social networks are often grounded in spatial locality where individuals form relationships with those they meet nearby . however , the location of individuals in online social networking platforms is often unknown .
prior approaches have tried to infer individuals ' locations from the content they produce online or their online relations , but often are limited by the available location-related data . we propose a new method for social networks that accurately infers locations for nearly all individuals by spatially propagating location assignments through the social network , using only a small number of initial locations . in five experiments , we demonstrate its effectiveness on multiple social networking platforms , using both precise and noisy data to start the inference , and present heuristics for improving performance . in one experiment , we demonstrate the ability to infer the locations of a group of users who generate over 74 % of the daily twitter message volume with an estimated median location error of 10km . our results open the possibility of gathering large quantities of location-annotated data from social media platforms . story_separator_special_tag twitter is a popular platform for sharing activities , plans , and opinions . through tweets , users often reveal their location information and short term visiting plans . in this paper , we are interested in extracting fine-grained locations mentioned in tweets with temporal awareness . more specifically , we would like to extract each point-of-interest ( poi ) mention in a tweet and predict whether the user has visited , is currently at , or will soon visit this poi . our proposed solution , named petar , consists of two main components : a poi inventory and a time-aware poi tagger . the poi inventory is built by exploiting the crowd wisdom of the foursquare community . it contains not only the formal names of pois but also the informal abbreviations . the poi tagger , based on a conditional random field ( crf ) model , is designed to simultaneously identify the pois and resolve their associated temporal awareness . in our experiments , we investigated four types of features ( i.e. , lexical , grammatical , geographical , and bilou schema features ) for time-aware poi extraction . with the four types of features , petar achieves promising story_separator_special_tag real-time information from microblogs like twitter is useful for different applications such as market research , opinion mining , and crisis management . for many of those messages , location information is required to derive useful insights . today , however , only around 1 % of all tweets are explicitly geotagged . we propose the first multi-indicator method for determining ( 1 ) the location where a tweet was created as well as ( 2 ) the location of the user 's residence . our method is based on various weighted indicators , including the names of places that appear in the text message , dedicated location entries , and additional information from the user profile . an evaluation shows that our method is capable of locating 92 % of all tweets with a median accuracy of below 30km , as well as predicting the user 's residence with a median accuracy of below 5.1km . with that level of accuracy , our approach significantly outperforms existing work . story_separator_special_tag in order to sense and analyze disaster information from social media , microblogs as sources of social data have recently attracted attention . in this paper , we attempt to discover geolocation information from microblog messages to assess disasters .
since microblog services are more timely compared to other social media , understanding the geolocation information of each microblog message is useful for quickly responding to sudden disasters . some microblog services provide a function for adding geolocation information to messages from mobile devices equipped with gps detectors . however , few users use this function , so most messages do not have geolocation information . therefore , we attempt to discover the location where a message was generated by using its textual content . the proposed method learns associations between a location and its relevant keywords from past messages , and guesses where a new message came from . story_separator_special_tag location information is critical to understanding the impact of a disaster , including where the damage is , where people need assistance and where help is available . we investigate the feasibility of applying named entity recognizers to extract locations from microblogs , at the level of both geo-location and point-of-interest . our experimental results show that such tools , once retrained on microblog data , have great potential to detect the 'where ' information , even at the granularity of point-of-interest . story_separator_special_tag introduction this paper explores the relation between an increasing place-independence of labour in creative industries and the persisting necessity of local embeddedness . the creative industries are predestined to display new trends of structural change in labour organisation ( manske/schnell 2010 ) . because of dynamic developments of new forms of labour and labour organisation caused by developments in icts , as well as the potential for economic growth , the creative industries are an interesting field of research . though these changes of labour organisation were in recent years more common in low-skill and highly standardised areas commonly referred to as crowdwork or crowdsourcing ( howe 2006 ) , these labour practices increasingly spread to high-skilled labour , with the creative industries at the forefront . methods this paper is the first output of an ongoing research project from the university of vienna ( department of sociology ) and forba ( working life research centre , vienna ) . to provide a brief overview of current discussions about place and virtual work , the paper sums up noteworthy contributions found in the literature . in addition to the literature review , first insights into our empirical research and preliminary results will be presented story_separator_special_tag in this paper , we investigate the interplay of distance and tie strength through an examination of 20 million geo-encoded tweets collected from twitter and 6 million user profiles . concretely , we investigate the relationship between the strength of the tie between a pair of users , and the distance between the pair . we identify several factors -- including following , mentioning , and actively engaging in conversations with another user -- that can strongly reveal the distance between a pair of users . we find a bimodal distribution in twitter , with one peak around 10 miles from people who live nearby , and another peak around 2500 miles , further validating twitter 's use as both a social network ( with geographically nearby friends ) and as a news distribution network ( with very distant relationships ) . story_separator_special_tag a point of interest ( poi ) is a focused geographic entity such as a landmark , a school , an historical building , or a business .
points of interest are the basis for most of the data supporting location-based applications . in this paper we propose to curate pois from online sources by bootstrapping training data from web snippets , seeded by pois gathered from social media . this large corpus is used to train a sequential tagger to recognize mentions of pois in text . using wikipedia data as the training data , we can identify pois in free text with an accuracy that is 116 % better than the state-of-the-art poi identifier in terms of precision , and 50 % better in terms of recall . we show that using foursquare and gowalla checkins as seeds to bootstrap training data from web snippets , we can improve precision between 16 % and 52 % , and recall between 48 % and 187 % over the state-of-the-art . the name of a poi is not sufficient , as the poi must also be associated with a set of geographic coordinates . our method increases story_separator_special_tag harnessing rich , but unstructured information on social networks in real-time and showing it to a relevant audience based on its geographic location is a major challenge . the system developed , twittertagger , geotags tweets and shows them to users based on their current physical location . experimental validation shows a performance improvement of three orders of magnitude by twittertagger compared to that of the baseline model . story_separator_special_tag this paper addresses the task of user classification in social media , with an application to twitter . we automatically infer the values of user attributes such as political orientation or ethnicity by leveraging observable information such as the user behavior , network structure and the linguistic content of the user 's twitter feed . we employ a machine learning approach which relies on a comprehensive set of features derived from such user information . we report encouraging experimental results on 3 tasks with different characteristics : political affiliation detection , ethnicity identification and detecting affinity for a particular business . finally , our analysis shows that rich linguistic features prove consistently valuable across the 3 tasks and show great promise for additional user classification needs . story_separator_special_tag the rapid growth of geotagged social media raises new computational possibilities for investigating geographic linguistic variation . in this paper , we present a multi-level generative model that reasons jointly about latent topics and geographical regions . high-level topics such as `` sports '' or `` entertainment '' are rendered differently in each geographic region , revealing topic-specific regional distinctions . applied to a new dataset of geotagged microblogs , our model recovers coherent topics and their regional variants , while identifying geographic areas of linguistic consistency . the model also enables prediction of an author 's geographic location from raw text , outperforming both text regression and supervised topic models . story_separator_special_tag we present a new algorithm for inferring the home locations of twitter users at different granularities , such as city , state , or time zone , using the content of their tweets and their tweeting behavior . unlike existing approaches , our algorithm uses an ensemble of statistical and heuristic classifiers to predict locations . we find that a hierarchical classification approach can improve prediction accuracy .
experimental evidence suggests that our algorithm works well in practice and outperforms the best existing algorithms for predicting the location of twitter users . story_separator_special_tag with a large amount of geotagged resources from smart devices , it is important to provide users with intelligent location-based services . particularly , in this work , we focus on a spatial ranking service , which can retrieve a set of relevant resources with a certain tag . this paper designs a ranking algorithm to find a list of locations which are collected from geotagged resources on snss . extending the hits algorithm [ 13 ] , we propose a novel method ( called geohits ) that can analyze an undirected 2-mode graph composed of a set of tags and a set of locations . thereby , meaningful relationships between the locations and a set of tags are discovered by integrating several weighting schemes and the hits algorithm . to evaluate the proposed spatial ranking approach , we show experimental results from the recommendation applications . story_separator_special_tag twitter , a popular microblogging service , has received much attention recently . an important characteristic of twitter is its real-time nature . for example , when an earthquake occurs , people make many twitter posts ( tweets ) related to the earthquake , which enables detection of earthquake occurrence promptly , simply by observing the tweets . as described in this paper , we investigate the real-time interaction of events such as earthquakes in twitter and propose an algorithm to monitor tweets and to detect a target event . to detect a target event , we devise a classifier of tweets based on features such as the keywords in a tweet , the number of words , and their context . subsequently , we produce a probabilistic spatiotemporal model for the target event that can find the center and the trajectory of the event location . we consider each twitter user as a sensor and apply kalman filtering and particle filtering , which are widely used for location estimation in ubiquitous/pervasive computing . the particle filter works better than other comparable methods for estimating the centers of earthquakes and the trajectories of typhoons . as an application , we story_separator_special_tag empirical studies and some high-profile anecdotal cases have demonstrated a link between suicidal ideation and experiences with bullying victimization or offending . the current study examines the extent to which a nontraditional form of peer aggression -- cyberbullying -- is also related to suicidal ideation among adolescents . in 2007 , a random sample of 1,963 middle-schoolers from one of the largest school districts in the united states completed a survey of internet use and experiences . youth who experienced traditional bullying or cyberbullying , as either an offender or a victim , had more suicidal thoughts and were more likely to attempt suicide than those who had not experienced such forms of peer aggression . also , victimization was more strongly related to suicidal thoughts and behaviors than offending . the findings provide further evidence that adolescent peer aggression must be taken seriously both at school and at home , and suggest that a suicide prevention and intervention component is essential within comprehensive bullying response programs implemented in schools .
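several of the preceding abstracts infer user locations from the social graph rather than from text ; as a minimal sketch of the spatial label propagation idea ( seed a few users with known coordinates , then let unlabeled users repeatedly adopt a representative location of their located neighbors ) , the following python fragment uses our own illustrative data structures , and the "most common neighbor location" update rule is a simplifying assumption of ours rather than the exact estimator of any one cited paper .

from collections import Counter

def propagate_locations(edges, seeds, n_iters=5):
    """Spatial label propagation sketch. edges maps each user to a
    list of neighbor users; seeds maps a few users to known
    (lat, lon) tuples. Unlabeled users repeatedly adopt the most
    common location among their located neighbors."""
    locs = dict(seeds)
    for _ in range(n_iters):
        updates = {}
        for user, nbrs in edges.items():
            if user in seeds:
                continue  # never overwrite a ground-truth seed
            located = [locs[n] for n in nbrs if n in locs]
            if located:
                updates[user] = Counter(located).most_common(1)[0][0]
        locs.update(updates)
    return locs

# toy usage: two seeded users pull their mutual contact to their city
edges = {"a": ["b", "c"], "b": ["a", "c"], "c": ["a", "b"]}
seeds = {"a": (40.7, -74.0), "b": (40.7, -74.0)}
print(propagate_locations(edges, seeds)["c"])  # (40.7, -74.0)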
story_separator_special_tag geographically-grounded situational awareness ( sa ) is critical to crisis management and is essential in many other decision making domains that range from infectious disease monitoring , through regional planning , to political campaigning . social media are becoming an important information input to support situational assessment ( to produce awareness ) in all domains . here , we present a geovisual analytics approach to supporting sa for crisis events using one source of social media , twitter . specifically , we focus on leveraging explicit and implicit geographic information for tweets , on developing place-time-theme indexing schemes that support overview+detail methods and that scale analytical capabilities to relatively large tweet volumes , and on providing visual interface methods to enable understanding of place , time , and theme components of evolving situations . our approach is user-centered , using scenario-based design methods that include formal scenarios to guide design and validate implementation as well as a systematic claims analysis to justify design choices and provide a framework for future testing . the work is informed by a structured survey of practitioners and the end product of phase-i development is demonstrated / validated through implementation in senseplace2 , a map-based story_separator_special_tag witnessing the emergence of twitter , we propose a twitter-based event detection and analysis system ( tedas ) , which helps to ( 1 ) detect new events , ( 2 ) analyze the spatial and temporal patterns of events , and ( 3 ) identify the importance of events . in this demonstration , we show the overall system architecture , explain in detail the implementation of the components that crawl , classify , and rank tweets and extract location from tweets , and present some interesting results of our system .
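the content-only probabilistic framework described a few abstracts earlier ranks candidate cities for a user from tweet text alone ; the following python sketch captures that idea with a naive-bayes-style scorer , where the word-probability tables , the uniform prior , and the smoothing floor ( a crude stand-in for the paper 's lattice-based neighborhood smoothing ) are all illustrative assumptions of ours .

import math

def rank_cities(words, city_word_probs, city_prior):
    """Score each candidate city by log prior plus summed log
    probabilities of the observed words under that city's word
    distribution; return cities in descending order of confidence."""
    scores = {}
    for city, prior in city_prior.items():
        s = math.log(prior)
        for w in words:
            # tiny floor for unseen words; a simplistic stand-in
            # for the neighborhood smoothing in the cited abstract
            s += math.log(city_word_probs[city].get(w, 1e-6))
        scores[city] = s
    return sorted(scores, key=scores.get, reverse=True)

# toy usage with hypothetical "local" words for two cities
probs = {"houston": {"rockets": 0.01, "rodeo": 0.02},
         "boston": {"sox": 0.02, "chowder": 0.01}}
prior = {"houston": 0.5, "boston": 0.5}
print(rank_cities(["rodeo", "rockets"], probs, prior)[0])  # houston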
the effect of interfaces on mechanical properties is considered . elastic and plastic compatibilities at boundaries are treated . specific influences at both low and high temperatures are discussed , with emphasis on dislocation mechanisms and the atomic scale structure of boundaries . story_separator_special_tag abstract the passage of dislocations through grain boundaries in face centered cubic and body centered cubic polycrystalline metals was studied using dynamic in situ high voltage electron microscopy ( hvem ) , static transmission electron microscopy ( tem ) , and anisotropic elastic stress analysis . several conclusions were reached : ( 1 ) when dislocations propagate across grain boundaries , the activated slip system can be predicted from pile-up properties and grain boundary orientation using a combined criterion based on boundary geometric factors and internal stresses ; ( 2 ) different grain boundaries impede dislocation slip propagation to different degrees , the calculated value of the pile-up obstacle stress varying from 280 to 870 mpa for dislocation transmission through a grain boundary in 304 stainless steel ; ( 3 ) dynamic in situ straining of miniature tensile specimens reveals additional modes of dislocation and grain boundary interactions that were hidden from static tem observations . in connection with the last conclusion , simultaneous dislocation transmission and reflection was activated by a stressed pile-up and a complex mechanism involving coordinated movements of four sets of dislocations in and near a grain boundary was observed . story_separator_special_tag abstract a study has been made of the continuity of slip bands across grain boundaries in aluminum bicrystals . orientations favouring continuity have been determined . for continuity , the active slip planes in the component crystals had to intersect the boundary in lines that diverged by no more than approximately 15° . for bicrystals in which the divergence between the intersection lines was small , continuity was interpreted in terms of activation of slip sources by dislocations piled up against the boundary , using stroh 's analysis of the stresses around the head of a pile-up . story_separator_special_tag this paper is a brief review of the relationship between the structure and the mechanical properties of grain interfaces . early work had inferred that such a correlation exists from experiments with bicrystals of different orientations . more recently , the advances in the understanding of grain boundary structure have allowed specifying the structure and then studying how it relates to mechanical behavior . the rudiments of grain boundary structure characterization are discussed . the mechanical properties of interfaces are described in terms of slip induced intergranular cavitation , continuity of slip bands across interfaces and the `` yield strength '' of interfaces . the need for much more work in this important area is emphasized . story_separator_special_tag abstract the uniaxial tensile straining of stainless steel 304 sheet material and transmission electron microscope observations of representative electron-transparent thin sections prepared from variously strained samples showed that dislocation profiles extending from the grain boundaries , and associated with ledges on the boundary plane , increase in frequency ( the number of profiles per unit length of grain boundary plane ) with increasing strain .
because of the nature of these profiles , gleaned from numerous observations , the majority are considered to be emission profiles , particularly at low plastic strains ( $ \epsilon_p \leq 2 \% $ ) . consequently , grain boundaries in stainless steel are concluded to be the principal sources for initial dislocations . these conclusions were supported by the in situ straining of thin microtensile specimens of stainless steel 304 and direct observations of dislocation emission from grain boundaries in a high voltage electron microscope . in these observations , dislocation profiles resembling dislocation pile-ups were observed to form at grain boundary ledges , and ledges were observed to form by the glide motion of dislocations in the grain boundary plane . grain boundary dislocations moving in the interface plane were observed in some cases to dissociate story_separator_special_tag abstract during in-situ tem deformation of 310 stainless steel , it has been observed that $ \frac{a}{2} \langle 110 \rangle $ dislocations are glissile on the { 121 } planes as well as on the more common { 111 } planes . the { 121 } slip plane was initiated by a grain-boundary dislocation source to relieve the stress concentrated on a grain boundary by a dislocation pile-up . this preference of slip on a { 121 } plane over a { 111 } plane can be understood by considering , in addition to the magnitude of the resolved shear stress on the plane , the angle between the lines of intersection of the slip systems in the boundary plane and the magnitude of the burgers vector of the residual dislocation left in the grain boundary by the emission process . story_separator_special_tag abstract high-resolution transmission electron microscopy observations are presented of the motion of grain-boundary dislocations within an aluminium $ \Sigma = 3 $ [ 011 ] bicrystal . the observations show the glide of dissociated a/3 [ 111 ] grain-boundary dislocations on the incoherent $ \Sigma = 3 $ ( 211 ) boundary . these dislocations are subsequently incorporated into a growing coherent twin lamella where further motion occurs by climb . analysis of the tip of the advancing twin suggests that nucleation of coherent twin segments may be initiated by the absorption of a lattice dislocation in the boundary with the subsequent dissociation to form a/3 [ 111 ] and a/6 [ 211 ] grain-boundary dislocations . story_separator_special_tag abstract the transfer mechanisms of the deformation at $ \gamma $ / $ \gamma $ twin interfaces in a polysynthetically twinned tial alloy are studied by in-situ straining experiments performed at room temperature in a transmission electron microscope . several situations which differ in the nature of the incident dislocations and in the orientation of the applied stress are analysed . some cases for which no dislocations are emitted in the second lamella are also presented . the crossing of the interface by ordinary dislocations occurs by cross-slip if they are of screw character when lying in the interface plane . if they are not of screw character when lying in the interface plane , these ordinary dislocations are locked at the interface and no emission of dislocations is observed in the neighbouring lamella . for the case of incident twinning dislocations , twins and ordinary dislocations are emitted in the mirror plane of the neighbouring lamella . all the results of this paper and of the companion paper are then discussed with the aim of understanding the role of these interfaces on the mechanical properties of lamellar tial alloys .
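the transfer criteria recurring in these abstracts , alignment of the slip-plane traces in the boundary plane and the magnitude of the residual burgers vector left behind , can be made concrete with a small numerical sketch ; the vectors below are an illustrative fcc-like example of ours , not data from any one cited study .

import numpy as np

def residual_burgers(b_in, b_out):
    """Magnitude |b_r| = |b_in - b_out| of the residual dislocation
    left in the boundary when b_in transmits as b_out."""
    return np.linalg.norm(np.asarray(b_in, float) - np.asarray(b_out, float))

def trace_angle(n_gb, n_in, n_out):
    """Angle (degrees) between the lines where the incoming and
    outgoing slip planes intersect the grain-boundary plane."""
    l_in = np.cross(n_gb, n_in)
    l_out = np.cross(n_gb, n_out)
    c = np.dot(l_in, l_out) / (np.linalg.norm(l_in) * np.linalg.norm(l_out))
    return np.degrees(np.arccos(np.clip(abs(c), 0.0, 1.0)))

# candidate outgoing systems are commonly ranked by a small |b_r|
# together with a small misalignment of the slip-plane traces
b_in, b_out = [0.5, 0.5, 0.0], [0.5, 0.0, 0.5]   # a/2[110] -> a/2[101]
n_gb = [1.0, 1.0, 1.0]
print(residual_burgers(b_in, b_out))              # ~0.707 (units of a)
print(trace_angle(n_gb, [1, -1, 1], [1, 1, -1]))  # 60.0 degrees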
story_separator_special_tag abstract iso-axial symmetrical tilt bicrystals of pure nickel oriented for either screw or mixed crystal slip were deformed in simple compression at 573 k ( 0.33 $ t_m $ ) at a strain rate of $ 3 \times 10^{-4} \, \mathrm{s}^{-1} $ . observation of slip traces revealed that screw bands were generally continuous across the interface . the degree of slip continuity , however , depended on boundary structure . boundaries related to low $ \Sigma $ values exhibited a higher degree of slip continuity than high $ \Sigma $ boundaries . dislocations with mixed character were discontinuous regardless of boundary . when the results are interpreted in terms of reactions between lattice dislocations at the grain boundary plane , it emerges that slip continuity is related to ( a ) the magnitude of the residual dislocation at the boundary and ( b ) the dissociation of the residual into smaller grain boundary dislocations . a smaller residual and a larger burgers vector of the grain boundary dislocation enhances the probability of slip continuity across the interface . the reasons why grain boundaries are weak barriers to passage of screw dislocations but good traps for mixed dislocations are discussed . story_separator_special_tag abstract in situ x-ray diffraction topography using synchrotron radiation ( srxrt ) observation of deformation of fe-4 at. % si bicrystals $ \Sigma 3 $ , $ \Sigma 9 $ and $ \Sigma 15 $ was complemented by tem observation of deformed samples . the results were discussed from the viewpoint of the criteria allowing prediction of slip accommodation at the grain boundary . the common slip system in a $ \Sigma 3 $ bicrystal is impeded by dissociation of slip dislocations into grain boundary dislocations . in $ \Sigma 9 $ bicrystals , the transfer with highest probability appears simultaneously with independent deformation in both grains . in $ \Sigma 15 $ bicrystals , no residual dislocations were observed in the boundary . the slip dislocations contribute to creation of dislocation sources in the boundary . story_separator_special_tag specimens for in situ tem straining were prepared from fe-5.5 at. % si $ \Sigma 3 $ bicrystals with { 112 } grain boundary plane . they were strained under three different directions of the stress at the boundary with respect to the orientation of the grains . transfer of slip across the boundary was analysed . in one case , the transfer of slip was realized by a transformation of the slip dislocation in one grain into the slip dislocation in the other grain . a low energy dislocation was created in the gb in accordance with general transfer criteria . in the second case , the incoming and outgoing slip systems were in direct contradiction to the general transfer criteria . in the third case , oriented for a common slip system in both grains , the trapped incoming slip dislocations dissociated into twinning dislocations which created twins on the other side of the boundary . story_separator_special_tag the structure of a $ \langle 110 \rangle $ 90° grain boundary in au is investigated using high-resolution transmission electron microscopy ( hrtem ) and atomistic simulation . it consists of coherent segments , exhibiting the extended 9r configuration described by medlin et al . [ 10 ] , with superimposed line defects to accommodate the coherency strain . two types of defects are observed , crystal dislocations and disconnections , where the latter exhibit step nature in addition to dislocation character .
both types of defect are identified by hrtem in combination with circuit mapping , and their parameters are shown to be consistent with the topological theory of interfacial defects [ 7 ] . moreover , the misfit-relieving function of observed defect arrays , their influence on interface orientation and the relative rotation of the adjacent crystals is elucidated . during observation , defect decomposition is observed in a manner which conserves burgers vector and step height . one of the decomposition products is glissile , consistent with the . story_separator_special_tag abstract $ \alpha $ - $ \beta $ brass two-phase bicrystals , consisting of fcc ( $ \alpha $ ) single crystals and bcc ( $ \beta $ ) single crystals , which were made by the solid state diffusion couple technique , were tensile-tested at room temperature in order to clarify the role of the phase-interface on the deformation . the two-phase bicrystals had small concentration gradients in the $ \alpha $ - and $ \beta $ -phases and satisfied the kurdjumov-sachs orientation relationships , i.e . { 1 1 1 } $ \parallel $ { 1 1 0 } and [ 1 1 0 ] $ \parallel $ [ 1 1 1 ] at the interface . the slip traces observed in bicrystals deformed to about 3 % plastic strain showed a striking contrast between the $ \alpha $ - and $ \beta $ -phases ; the slip traces in the $ \alpha $ -phase were clear and straight , while those in the $ \beta $ -phase were fine and wavy . the slip systems in the bicrystals were attributed to those observed in $ \alpha $ and $ \beta $ single crystals , and were explained by a plastic strain incompatibility mechanism . the slip systems , originating at the interface or propagating from another phase , were observed on matching planes . story_separator_special_tag abstract glide on prismatic planes has been observed in the neighbourhood of a high-angle grain boundary in stainless steel , where the misorientation between neighbouring grains is about a common axis close to [ 110 ] . detailed analysis , of the senses as well as the magnitudes of the burgers vectors involved , indicates that a continuity of slip can occur across a high-angle boundary by the activation of prismatic glide . story_separator_special_tag as a tribute to the scientific work of professor david brandon , this paper delineates the possibilities of utilizing in situ transmission electron microscopy to unravel dislocation-grain boundary interactions . in particular , we have focused on the deformation characteristics of al-mg films . to this end , in situ nanoindentation experiments have been conducted in tem on ultrafine-grained al and al-mg films with varying mg contents . the observed propagation of dislocations is markedly different between al and al-mg films , i.e . the presence of solute mg results in solute drag , evidenced by a jerky-type dislocation motion with a mean jump distance that compares well to earlier theoretical and experimental results . it is proposed that this solute drag accounts for the difference between the load-controlled indentation responses of al and al-mg alloys . in contrast to al-mg alloys , several yield excursions are observed during initial indentation of pure al , which are commonly attributed to the collective motion of dislocations nucleated under the indenter . displacement-controlled indentation does not result in a qualitative difference between al and al-mg , which can be explained by the specific feedback characteristics providing story_separator_special_tag abstract experiments with bicrystals of nickel were designed to study whether different reactions between dislocations at grain boundaries can lead to different effects .
the experiments were carried out at 573 k in low cycle fatigue . the large majority of the bicrystals were $ \Sigma 9 $ , and a few were $ \Sigma 21 $ . the results can be broadly classified into two groups . in one , the dislocations activated in the adjacent crystals were symmetrical , and in the other they were nonsymmetrical . the symmetrical case consisted of two types , screw-screw ( s-s ) where one of the dislocations was of opposite sign , and symmetrical-mixed ( m-m ) where both had an edge component similarly tilted with respect to the boundary plane . the s-s case left no residuals in the boundary plane and produced no effects at the boundary . the m-m case gave rise to a low energy array of edge residuals in the boundary , which led to the onset of dynamic recrystallization , but produced neither cavitation nor boundary migration . in the nonsymmetrical group three cases were examined , screw-mixed ( s-m ) , edge sessile-rigid ( es-r , where edge dislocations with their burgers vector normal story_separator_special_tag abstract a lattice dislocation may lower its elastic energy by dissociation in a high angle grain boundary . the absorption process involves the separation of the product grain boundary dislocations at a rate limited in general by climb and their interaction with any pre-existing network . this description of the absorption process is extended to random boundaries with the help of a new model for their structure in which the existence of an irregular arrangement of certain low energy groups of atoms is postulated . experimental results are analysed in support of the model for the absorption process . story_separator_special_tag some recent discussions have centred on the mechanism by which lattice dislocations are absorbed by grain boundaries at high temperatures . boundaries in two well-characterized coincidence site lattice systems have been studied using an electron microscope hot stage . the dislocation behaviour was shown to be consistent with a description based on dissociation and reactions involving grain boundary dislocations . a model of grain boundary migration based on the motion of grain boundary dislocations depends on both the boundary crystallography and the temperature , and may account for experimental measurements of grain boundary migration activation volumes . story_separator_special_tag abstract the early stages of yielding in high purity copper ( plastic strains of $ 1 - 10 \times 10^{-4} $ ) have been studied by both quantitative transmission electron microscopy ( tem ) and tensile data analysis . the tem observations , in the form of comparative lattice defect densities , and the tensile data , in the form of stress vs square root of plastic strain plots , both indicate a two stage deformation process separated by a transition region . each stage is characterized by a dominant deformation mode . in the first stage , this mode is the generation of lattice dislocations at grain boundaries , with the dislocations remaining at or near the boundaries . in the second stage , these dislocations are emitted into the grain interiors , where the grown-in dislocations are moving and multiplying as well . these two stages are separated by a transition region where both modes are competitive . story_separator_special_tag abstract studies were made on a high purity iron , an fe-3 % si alloy and commercial ni . the microyield at $ 10^{-6} $ in./in . strain was measured as a function of grain size . a grain size dependence was found for fe-3 % si and ni but not for fe .
it is shown that the inclusion population is probably responsible for the latter . supplementary studies of microyield by etch pit examination reveal that slip activity can be observed below the $ 10^{-6} $ strain level but that the grain size trends are unchanged . evidence is shown to support the view that grain boundary intersections are primary points of slip initiation . story_separator_special_tag the flow stress of ni3 ( al , nb ) single crystals has been measured as a function of orientation in the temperature range 77 to 910 k. while the increasing flow stress behavior is similar to that observed in other ni3al-based alloys , the absolute value of the stress was found to be much higher . also , the effect of orientation changes was to produce much greater changes in the temperature at which the peak flow stress occurs than has been previously observed . the operative slip systems were analyzed by two surface slip trace analysis . primary octahedral slip was found to be predominant at temperatures below the peak stress temperature , while primary cube slip is prevalent above the peak temperature . the anomalous increase in the flow stress of ni3 ( al , nb ) with increasing temperature is generally consistent with the thermally activated cross-slip of a/2 < 110 > dislocations from { 111 } planes onto { 100 } planes . the cross-slip is shown to be aided not only by the resolved shear stress on the { 100 } cross-slip plane but also by the stress tending to constrict the a/ < story_separator_special_tag many aspects of the hepatitis c virus ( hcv ) life cycle have not been reproduced in cell culture , which has slowed research progress on this important human pathogen . here , we describe a full-length hcv genome that replicates and produces virus particles that are infectious in cell culture ( hcvcc ) . replication of hcvcc was robust , producing nearly $ 10^5 $ infectious units per milliliter within 48 hours . virus particles were filterable and neutralized with a monoclonal antibody against the viral glycoprotein e2 . viral entry was dependent on cellular expression of a putative hcv receptor , cd81 . hcvcc replication was inhibited by interferon- $ \alpha $ and by several hcv-specific antiviral compounds , suggesting that this in vitro system will aid in the search for improved antivirals . story_separator_special_tag abstract this study analyzes the influence of microstructural parameters on strain incompatibilities that develop in irradiated 316l stainless steel and evaluates the influence of the incompatibilities on intergranular cracking . tensile specimens were proton irradiated to 7 dpa at a temperature of 400 °c and then strained to 5 % in a 400 °c supercritical water environment . the surface oxides were removed through an oxide stripping treatment to reveal the dislocation channels for analysis of their interactions with grain boundaries . it was observed that grains with high schmid factors and low taylor factors were more likely to have multiple active slip planes , and less likely to have discontinuous slip across grain boundaries . it was also observed that the propensity for slip discontinuity was greater along boundaries that had surface trace inclinations of 50° or higher to the tensile axis .
the similar dependencies of slip discontinuity and intergranular cracking on trace inclination , schmid factor , and taylor factor and the observation that intergranular cracks occur primarily at grain boundary sites that are most susceptible to slip discontinuity suggest that such strain incompatibilities promote intergranular cracking . story_separator_special_tag the gliding modes of a duplex ti-6al-4v titanium alloy were investigated through in situ ( scanning electron microscopy ) tensile tests . a method based on electron back-scattering diffraction ( ebsd ) measurements was used to identify activated slip systems . the approach applied to a large number of grains allowed a statistical analysis of the nature ( basal , prismatic , pyramidal ) and distribution of the slip systems according to the crystallographic texture . a discussion concerning the pertinence of schmid 's law to explain the occurrence and succession of slip events is then proposed . the domain in favor of each type of slip system is finally presented by using inverse pole figures mapped with schmid 's factor iso-curves . story_separator_special_tag abstract in this study , high resolution ex situ digital image correlation ( dic ) was used to measure plastic strain accumulation with sub-grain level spatial resolution in uniaxial tension of a nickel-based superalloy , hastelloy x. in addition , the underlying microstructure was characterized with similar spatial resolution using electron backscatter diffraction ( ebsd ) . with this combination of crystallographic orientation data and plastic strain measurements , the resolved shear strains on individual slip systems were spatially calculated across a substantial region of interest , i.e . , we determined the local slip system activity in an aggregate of 600 grains and annealing twins . the full-field dic measurements show a high level of heterogeneity in the plastic response with large variations in strain magnitudes within grains and across grain boundaries ( gbs ) . we used the experimental results to study these variations in strain , focusing in particular on the role of slip transmission across gbs in the development of strain heterogeneities . for every gb in the polycrystalline aggregate , we have established the most likely dislocation reaction and used that information to calculate the residual burgers vector and plastic strain magnitudes due to slip story_separator_special_tag abstract the effect of grain boundaries on the plastic properties of symmetrical $ \langle 100 \rangle $ aluminum bicrystals having various angles of tilt boundaries was studied . it was found that these bicrystals deform by a mode similar to that of the $ \langle 100 \rangle $ -oriented aluminum single crystal . fine multiple slip which appeared in the early stage of the deformation is suppressed at the boundary . afterward , clustered slip accompanying the prominent cross slip can pass the 4° and 14° boundaries which have a small difference in orientation . for the 37° boundary , which has a large misorientation , clustered slip cannot pass the boundary and multiple slip is introduced at the boundary . it was found that the flow stresses of these bicrystals are almost equal to those of the component single crystals . therefore , it is considered that the formation energy of grain boundary dislocations formed after the passage of dislocations from one grain to another does not contribute to an increase in flow stress after multiple slip has occurred .
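since several of the abstracts above rationalize slip activity through schmid factors , a small self-contained computation is worth recording ; the conventions and the example slip system are our own illustrative choices , not taken from any one cited study .

import numpy as np

def schmid_factor(load_axis, plane_normal, slip_direction):
    """Schmid factor m = |cos(phi)| * |cos(lambda)|, where phi is the
    angle between the loading axis and the slip-plane normal and
    lambda the angle between the loading axis and the slip direction."""
    t, n, d = (np.asarray(v, float) for v in (load_axis, plane_normal, slip_direction))
    t, n, d = (v / np.linalg.norm(v) for v in (t, n, d))
    return abs(np.dot(t, n)) * abs(np.dot(t, d))

# fcc example: tension along [001], slip system (111)[-101]
print(schmid_factor([0, 0, 1], [1, 1, 1], [-1, 0, 1]))  # ~0.408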
story_separator_special_tag abstract the interaction between slip bands and grain boundaries in commercial-purity titanium was examined using cross-correlation-based electron backscatter diffraction . at a low strain level , three types of interactions were observed : blocked slip band with stress concentration ; slip transfer ; and blocked slip band with no stress concentration . the stress concentration induced by the blocked slip band was fitted with eshelby 's theoretical model , from which a hall-petch coefficient was deduced . it was found that the hall-petch coefficient varies with the individual grain boundary . we investigated the geometric alignment between the slip band and various slip systems in the neighbouring grain . stress concentration can be induced by the blocked slip band if the slip system is poorly aligned with a prismatic , pyramidal or basal slip system in the neighbouring grain . transfer of slip across the boundary occurs when there is good alignment on a prismatic or a pyramidal slip system . other stress-relieving mechanisms are possible when the best alignment is not with the slip system that has the lower critical resolved shear stress . story_separator_special_tag study of the interactions between dislocations and grain boundaries in a stainless steel deformed in situ in a transmission electron microscope . determination , from these observations , of the conditions for predicting the slip systems . story_separator_special_tag abstract the application of diffraction contrast electron tomography to dynamic experiments involving dislocation interactions with grain boundaries is demonstrated for the first time . two applications are shown : the first is concerned with post-mortem analysis of dislocation interactions with grain boundaries and illustrates the usefulness of the tomography technique for defect analysis ; the second is in conjunction with in situ straining experiments in which the dynamics of dislocation interactions with grain boundaries are observed directly and the resulting structure visualized three-dimensionally . the in situ straining experiments were conducted at room and elevated temperatures to determine the influence , if any , of thermal processes on the slip transfer mechanism . it was found that increasing the temperature lowers the barrier for dislocation absorption and emission from the boundary and increases the complexity of the interactions , but does not change the fundamental mechanisms governing slip transmission . previous experimentally determined criteria for slip transmission across boundaries were extended to interactions involving partial dislocations , where it was found that the reaction continues to be governed predominantly by reduction of the burgers vector of the residual grain boundary dislocation left after slip transfer . story_separator_special_tag abstract 3-d discrete dislocation dynamics simulations were used to investigate the size-dependent plasticity in polycrystalline , free-standing , thin films . a simple line-tension model was used to model the dislocation transmission across grain boundaries . at a constant film thickness , the total dislocation density and the strength increase as grain size decreases . the yield stress scales with grain diameter with a power law , with an exponent that varies with both film thickness and grain size for thicker films .
in addition, the yield strength of the films scales in proportion to the reciprocal of the thickness and matches experimental results well. a spiral source model was developed that relates the strength of the films to the statistical variation of the spiral source length, and it accurately predicts the size-dependent strength of polycrystalline thin films. story_separator_special_tag the interaction of lattice dislocations with symmetrical and asymmetrical tilt grain boundaries in ⟨111⟩-textured thin nickel films was investigated using atomistic simulation methods. it was found that the misorientation angle of the grain boundary, the sign of the burgers vector of the incoming dislocation and the exact site where the dislocation meets the grain boundary are all important parameters determining the ability of the dislocation to penetrate the boundary. the inclination angle, however, does not make an important difference to the transmission scenario of full dislocations. only limited partial dislocation nucleation was observed for the investigated high-angle grain boundary. the peculiarities of the nucleation of embryonic dislocations and their emission from tilt grain boundaries are discussed. story_separator_special_tag grain boundaries (gbs) provide a strengthening mechanism in engineering materials by impeding dislocation motion. in a polycrystalline material there is a wide distribution of gb types with characteristic slip transmission and nucleation behaviors. slip-gb reactions are not easy to establish analytically or from experiments; furthermore, there is a strong need to quantify the energy barriers of individual gbs. we introduce a methodology to calculate the energy barriers during slip-gb interaction, in concurrence with the generalized stacking fault energy curve for slip in a perfect face-centered cubic material. by doing so, the energy barriers are obtained for various classifications of gbs, for dislocation transmission through the gb and dislocation nucleation from the gb. the character and structure of the gb play an important role in impeding slip within the material. the coherent twin (Σ3) boundary provides the highest barrier to slip transmission. from this analysis, we show that there is a strong correlation between the energy barrier and the interfacial boundary energy: gbs with lower static interfacial energy offer a stronger barrier against slip transmission and nucleation at the gb. story_separator_special_tag we investigate the resistance to the glide of lattice dislocations between adjacent crystal grains due to the presence of a grain boundary (gb). applying a combination of molecular dynamics (md) simulations and a line tension (lt) model, we identify the geometrical parameters that are relevant in the description of this process. in the md simulations we observe slip transmission of dislocation loops nucleated from a crack tip near a series of pure tilt gbs in ni. the results are rationalized in terms of a lt model for the activation of a frank-read source in the presence of a gb. it is found that the slip transmission resistance is a function of only three variables: firstly, the ratio of the resolved stress on the incoming slip system to that on the outgoing slip system; secondly, the magnitude of any residual burgers vector content left in the gb; and thirdly, the angle between the traces of the incoming and outgoing slip planes in the gb plane. a comparison with the md simulations and experimental data is given.
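two of the three variables identified by the line-tension analysis above involve the residual burgers vector left in the boundary, b_r = b_in − b_out, with both slip-system burgers vectors expressed in one common frame. a minimal sketch, assuming ebsd-style orientation matrices g for each grain; the function and variable names are hypothetical:

```python
import numpy as np

def residual_burgers(b_in_crystal, b_out_crystal, g_in, g_out):
    """b_r = b_in - b_out, with both vectors expressed in the sample frame.

    g_in, g_out: 3x3 rotation matrices taking crystal coordinates of the
    incoming/outgoing grain into a common sample frame (e.g. from ebsd).
    """
    b_in = g_in @ np.asarray(b_in_crystal, dtype=float)
    b_out = g_out @ np.asarray(b_out_crystal, dtype=float)
    return b_in - b_out

# illustrative: a 5 degree tilt about z between the two grains
theta = np.deg2rad(5.0)
rot_z = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                  [np.sin(theta),  np.cos(theta), 0.0],
                  [0.0, 0.0, 1.0]])
b_r = residual_burgers([1, -1, 0], [1, -1, 0], np.eye(3), rot_z)
print("|b_r| =", np.linalg.norm(b_r))  # small residual for near-parallel systems
```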
story_separator_special_tag the atomistic details of slip transfer through a general high-angle grain boundary in three-dimensional nanocrystalline al are reported and discussed in terms of possible implications for mesoscopic simulation models. story_separator_special_tag an overview is given of the deformation mechanisms in nanotwinned copper, as studied by recent molecular dynamics, dislocation mechanics and crystal plasticity modeling. we highlight the unique role of nanoscale twin lamellae in producing the hard and soft modes of dislocation glide, as well as how the coherent twin boundaries affect slip transfer, dislocation nucleation, twinning and detwinning. these twin-boundary-mediated deformation mechanisms have been mechanistically linked to the mechanical properties of strength, ductility, strain hardening, activation volume, rate sensitivity, and size-dependent strengthening and softening in nanotwinned metals. finally, discussions are dedicated to identifying important unresolved issues for future research. story_separator_special_tag in situ experiments using synchrotron x-ray topography and high-voltage electron microscopy (hvem) have been carried out in order to study dislocation transmission through (122), Σ=9 coincidence tilt boundaries in elemental semiconductors. both techniques proved that this boundary greatly hinders dislocation motion. however, dislocations with the common ½[011] burgers vector, parallel to the tilt axis, can be transmitted from one crystal to the other. the x-ray experiments brought several other types of transmission to light, but these were not confirmed by the hvem observations. this suggests that they were not true transmission reactions but resulted from the activation of pre-existing nearby dislocation sources in the opposite crystal by internal stresses arising from dislocation pile-ups against the grain boundary. story_separator_special_tag a complex dislocation configuration resulting from the interactions between dissociated lattice dislocations and a Σ3 grain boundary is analyzed in pure copper, on the basis of transmission electron microscopy observations coupled with image contrast simulation. the paper focuses on the different mechanisms which may operate to allow the entrance of the shockley partial dislocations into the grain boundary. story_separator_special_tag the interaction between screw dislocations and coherent twin boundaries has been studied by means of molecular dynamics simulations for al, cu and ni. depending on the material and the applied strain, a screw dislocation approaching the coherent twin boundary from one side may either propagate into the adjacent twin grain by cutting through the boundary or it may dissociate within the boundary plane. which of these two interaction modes applies seems to depend on the material-dependent energy barrier for the nucleation of shockley partial dislocations. story_separator_special_tag dislocation and grain-boundary processes contribute significantly to plastic behaviour in polycrystalline metals, but a full understanding of the interaction between these processes and their influence on plastic response has yet to be achieved. the coupled atomistic discrete-dislocation method is used to study edge dislocation pile-ups interacting with a Σ11 (113) symmetric tilt boundary in al at zero temperature under various loading conditions.
nucleation of grain-boundary dislocations (gbds) at the dislocation/grain-boundary intersection is the dominant mechanism of deformation. dislocation pile-ups modify both the stress state and the residual defects at the intersection, the latter due to multiple dislocation absorption into the boundary, and so change the local grain-boundary/dislocation interaction phenomena as compared with cases with a single dislocation. the deformation is irreversible upon unloading and reverse loading if multiple lattice dislocations are absorbed into the boundary, and damage in the form of microvoids and loss of crystalline structure accumulates around the intersection. based on these results, the criteria for dislocation transmission formulated by lee, robertson and birnbaum are extended to include the influences of grain-boundary normal stress, shear stress on the leading pile-up dislocation, and minimization story_separator_special_tag the interaction of dislocations with low-angle grain boundaries (lagbs) is considered one important contribution to the mechanical strength of metals. although lagbs have been frequently observed in metals, little is known about how they interact with the free dislocations that mainly carry the plastic deformation. using discrete dislocation dynamics simulations, we are able to quantify the resistance of a lagb, idealized as three sets of dislocations that form a hexagonal dislocation network, against lattice dislocation penetration, and to examine the associated dislocation processes. our results reveal that such a coherent internal boundary can massively obstruct and even terminate dislocation transmission and thus make a substantial contribution to material strength. story_separator_special_tag in situ straining in the transmission electron microscope and diffraction-contrast electron tomography have been applied to the investigation of dislocation/grain boundary and dislocation/twin boundary interactions in α-ti. it was found that, similar to fcc materials, the transfer of dislocations across grain boundaries is governed primarily by the minimization of the magnitude of the burgers vector of the residual grain boundary dislocation; that is, grain boundary strain energy density minimization determines the selection of the emitted slip system. story_separator_special_tag nanoindentation measurements near a high-angle grain boundary in a fe-14% si bicrystal showed dislocation pile-up and transmission across the boundary. the latter is observed as a characteristic displacement jump, from which the hall-petch slope can be calculated as a measure of the slip transmission properties of the boundary. story_separator_special_tag nanoindentation was undertaken near grain boundaries to increase understanding of their individual contributions to the material's macroscopic mechanical properties. prior work with nanoindentation in body-centered cubic (bcc) materials has shown that some grain boundaries produce a pop-in event, an excursion in the load-displacement curve. in the current work, grain-boundary-associated pop-in events were observed in a fe-0.01 wt% c polycrystal (bcc), which is characteristic of high resistance to intergranular slip transfer. grain boundaries with greater misalignment of slip systems tended to exhibit greater resistance to slip transfer. grain-boundary-associated pop-ins were not observed in pure copper (face-centered cubic) or in interstitial-free steel (~0.002 wt% c, bcc).
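the lee-robertson-birnbaum criteria referenced above combine the alignment of slip-plane traces in the boundary plane, the resolved shear stress on the outgoing system, and minimization of the residual burgers vector. the sketch below scores candidate outgoing systems in that spirit; folding the three criteria into a single score, and normalizing the burgers vectors, are illustrative assumptions, not taken from the cited papers.

```python
import numpy as np

def unit(v):
    v = np.asarray(v, dtype=float)
    return v / np.linalg.norm(v)

def lrb_rank(gb_normal, incoming, candidates, stress):
    """rank outgoing slip systems (n, b) by a schematic lrb-style score.

    incoming/candidates: (plane_normal, burgers) pairs in one common frame;
    stress: cauchy stress tensor in the same frame.
    """
    n_gb = unit(gb_normal)
    n_in, b_in = map(unit, incoming)
    l_in = unit(np.cross(n_in, n_gb))        # trace of incoming plane in gb
    scores = []
    for n_out, b_out in candidates:
        n_o, b_o = unit(n_out), unit(b_out)
        l_out = unit(np.cross(n_o, n_gb))    # trace of outgoing plane in gb
        trace_align = abs(l_in @ l_out)                 # criterion (i)
        rss = abs(n_o @ stress @ b_o)                   # criterion (ii)
        b_res = np.linalg.norm(b_in - b_o)              # criterion (iii)
        scores.append(trace_align * rss / (1e-6 + b_res))
    return int(np.argmax(scores)), scores

sigma = np.diag([100.0, 0.0, 0.0])  # uniaxial stress, illustrative
best, s = lrb_rank([0, 0, 1], ([1, 1, 1], [1, -1, 0]),
                   [([1, 1, -1], [1, -1, 0]), ([1, -1, 1], [1, 0, -1])], sigma)
print("preferred outgoing system:", best)  # residual-b term dominates here
```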
additionally, it was found that cold work of the fe-0.01 wt% c polycrystal immediately prior to indentation completely suppressed the grain-boundary-associated pop-in events. it is concluded that the grain-boundary-associated pop-in events are directly linked to interstitials pinning dislocations on or near the boundary. this links well with macroscopic hall-petch effect observations. story_separator_special_tag recent progress in understanding dislocation interactions with grain boundaries and interfaces in metallic systems via static and in situ dynamic experimental approaches is reviewed. story_separator_special_tag directionally solidified (ds) alloys with the nominal composition ni-30 at. pct fe-20 at. pct al, having eutectic microstructures, were used to study slip transfer across interphase boundaries and dislocation nucleation at interfacial steps. the slip transfer from the ductile second phase (fcc, containing ordered l12 precipitates) to the ordered b2 phase, and the generation of dislocations at the interface steps, were interpreted using the mechanisms proposed for similar processes involving grain boundaries in polycrystalline single-phase materials. the criteria for predicting the slip systems activated as a result of slip transfer across grain boundaries were found to be applicable to interphase boundaries in the multiphase ordered ni-fe-al alloys. the potential of tailoring microstructures and interfaces to promote slip transfer, and thereby enhance the intrinsic ductility of dislocation-density-limited intermetallic alloys, is discussed. story_separator_special_tag interfaces between two dissimilar metals have been observed to exhibit a range of atomic structures, from atomically flat to atomically stepped. using atomic-scale simulation and theory, we study the influence of the intrinsic bimetal interface structure on the nucleation of lattice dislocations. interface structure is found to have a strong effect on which dislocations are nucleated and on the type of nucleation site. we develop a theoretical model that provides criteria for predicting these effects based on key structural relationships between the interface and the adjoining crystals. in recognition of these critical conditions, we construct a map that identifies the most likely nucleation site for a given interface. the theory and map developed here can guide efforts to tune interface structures for controlling the strength and deformation of heterogeneous materials. story_separator_special_tag in this work, we examine the microstructural development of a bimetal multilayered composite over a broad range of individual layer thicknesses h, from microns to nanometers, during deformation. we observe two microstructural transitions, one at the submicron scale and another at the nanoscale. remarkably, each transition is associated with the development of a preferred interface character. we show that the characteristics of these prevailing interfaces are strongly influenced by whether the adjoining crystals are deforming by slip only or by slip and twinning. we present a generalized theory that suggests that, in spite of their different origins, the crystallographic stability of their interface character with respect to deformation depends on the same few basic variables.
story_separator_special_tag interfaces, such as grain boundaries, phase boundaries, and surfaces, are important in materials of any microstructural size scale, whether the microstructure is coarse-grained, ultrafine-grained, or nano-grained. in nanostructured materials, however, they dominate the material response and, as has been seen many times over, can lead to extraordinary and unusual properties that far exceed those of their coarse-grained counterparts. in this article, we focus on bimetal interfaces. to best elucidate interface structure-property-functionality relationships, we focus our studies on simple layered composites composed of an alternating stack of two metals with bimetal interfaces spaced less than 100 nm apart. we fabricate these nanocomposites by either a bottom-up method (physical vapor deposition) or a top-down method (accumulative roll bonding) to produce two distinct interface types. atomic-scale differences in interface structure are shown to result in profound effects on bulk-scale properties. story_separator_special_tag nano-indentation hardness as a function of bilayer period has been measured for sputter-deposited cu/nb multilayers. for this face-centered cubic/body-centered cubic system with incoherent interfaces, we develop dislocation models for the multilayer flow strength as a function of length scale, from greater than a micrometer to less than a nanometer. a dislocation pile-up-based hall-petch model is found applicable at sub-micrometer length scales, and the hall-petch slope is used to estimate the peak strength of the multilayers. at length scales of a few to a few tens of nanometers, confined layer slip of single dislocations is treated as the operative mechanism. the effects of dislocation core spreading along the interface, interface stress and interface dislocation arrays on the confined layer slip stress are incorporated in the model to correctly predict the strength increase with decreasing layer thickness. at layer thicknesses of a few nanometers or less, the strength reaches a peak. we postulate that this peak strength is set by the interface resistance to single dislocation transmission, and calculate the transition from confined layer slip to an interface cutting mechanism. story_separator_special_tag enhancement of toughness is currently a critical engineering issue in tungsten metallurgy. the inherent toughness of tungsten single crystals is closely related to the capacity for local plastic slip. in this study we have investigated the plastic behavior of tungsten single crystals by means of micro-indentation experiments performed on specimens exposing (100), (110), and (111) surfaces. in parallel, fem simulations were carried out with the peirce-asaro-needleman crystal plasticity model considering both {110}⟨111⟩ and {112}⟨111⟩ slip systems. plastic material parameters were identified by comparing the measured and predicted load-displacement curves as well as pile-up profiles. it is found that both the measured and simulated plastic pile-up patterns on the indented surfaces exhibit significant anisotropy and orientation dependence, although the measured and simulated load-displacement curves manifest no such orientation dependence. the height and extension of the pile-ups differ strongly as a function of surface orientation. the fem simulations are able to reproduce the observed features of spherical indentation both qualitatively and quantitatively.
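the confined-layer-slip regime described above is often estimated with a line-energy expression of the form τ ≈ (μb/8πh′)·((4−ν)/(1−ν))·ln(αh′/b), with h′ the layer thickness resolved on the glide plane. a hedged sketch with illustrative cu-like constants; the prefactor and all parameter values are assumptions for the example, not fitted to the cited data:

```python
import numpy as np

def cls_stress(h, mu=48e9, b=0.256e-9, nu=0.34, alpha=0.2, phi_deg=70.5):
    """confined-layer-slip stress estimate (illustrative constants).

    tau = mu*b / (8*pi*h') * (4 - nu)/(1 - nu) * ln(alpha*h'/b),
    with h' = h / sin(phi) the thickness resolved on the glide plane.
    """
    h_prime = h / np.sin(np.deg2rad(phi_deg))
    prefactor = mu * b / (8 * np.pi * h_prime)
    return prefactor * (4 - nu) / (1 - nu) * np.log(alpha * h_prime / b)

for h in (50e-9, 20e-9, 5e-9):
    print(f"h = {h*1e9:4.0f} nm -> tau_cls ~ {cls_stress(h)/1e6:7.1f} MPa")
```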
story_separator_special_tag in order to investigate an almost pure extrinsic size effect, we propose an experimental approach to investigate the deformation structure within single-crystalline cross-sections of twisted bamboo-structured au microwires. the cross-sections of individual ⟨100⟩-oriented grains of 25 μm thick au microwires have been characterized by laue microdiffraction. the diffraction data were used to calculate the misorientation of each data point with respect to the neutral fiber in the center of the cross-section, as well as the kernel average misorientation, to map the global and local deformation structure as a function of the imposed maximum plastic shear strain. the study is accompanied by crystal plasticity simulations which yield the equivalent plastic strain distributions in the cross-section of the wire. the global deformation structures are directly related to the activated slip systems resulting from the real orientations of the investigated grains. when averaging the degree of deformation along ring segments, an almost continuous but non-linear increase of misorientation from the center toward the surface is observed, reflecting the overall strain gradient imposed by torsion. for the local deformation structure, pronounced and graded deformation traces are observed which often pass over the story_separator_special_tag the mechanical response of as-processed equal channel angular extrusion materials is anisotropic, depending on both the direction and the sense of straining. the stress-strain curves exhibit hardening characteristics different from the usual work hardening responses, e.g., stages i-iv, expected in annealed fcc metals under monotonic loading. in this work, the anisotropic flow responses of two pure fcc metals, al and cu, processed by route bc are evaluated and compared based on pre-strain level (number of passes), direction of reloading, sense of straining (i.e., compression versus tension), and their propensity to generate subgrain microstructures and to rearrange, should the slip activity change. in most cases, either macroscopic work softening or strain intervals with little to no work hardening are observed. application of a crystallographically based single-crystal hardening law for strain-path changes [beyerlein and tome, int. j. plasticity (2007)] incorporated into a visco-plastic self-consistent (vpsc) model supports the hypothesis that the suppression of work hardening is due to reversal or cross effects operating at the grain level. story_separator_special_tag a new constitutive framework, together with an efficient time-integration scheme, is presented for incorporating the crystallography of deformation twinning in polycrystal plasticity models. previous approaches to this problem have required the generation of new crystal orientations to reflect the orientations in the twinned regions, or the implementation of volume fraction transfer schemes, both of which require an update of the crystal orientations at the end of each time step in the simulation of the deformation process. in the present formulation, all calculations are performed in a relaxed configuration in which the lattice orientations of the twinned and untwinned regions are pre-defined based on the initial lattice orientation of the crystal.
the validity of the proposed constitutive framework and the time-integration procedures has been demonstrated through comparisons of predicted rolling textures in low stacking fault energy fcc metals and in hcp metals with the corresponding predictions from the earlier approaches, as well as through qualitative comparisons with previously reported measurements. story_separator_special_tag classical crystal plasticity can account for slip, kink and shear band formation in metal single crystals exhibiting a softening behavior. in the case of multiple slip, the stability of symmetric multi-slip configurations depending on the self/latent hardening ratio is investigated. finite element simulations of symmetric and symmetry-breaking localization modes in single crystals oriented for double slip in tension are presented. some shortcomings of classical crystal plasticity are then pointed out that can be solved using a cosserat crystal plasticity model that explicitly takes elastoplastic lattice torsion-curvature into account. the classical theory predicts, for instance, the same critical hardening modulus for the onset of slip and kink banding. this is no longer the case in generalized crystal plasticity, as shown by a bifurcation analysis and finite element simulations of cosserat crystals. story_separator_special_tag this paper reports a three-dimensional (3d) study of the microstructure and texture below a conical nanoindent in a (111) cu single crystal at nanometer-scale resolution. the experiments are conducted using a joint high-resolution field emission scanning electron microscopy/electron backscatter diffraction (ebsd) set-up coupled with serial sectioning in a focused ion beam system, in the form of a cross-beam 3d crystal orientation microscope (3d ebsd). the experiments (conducted in sets of subsequent (110)-type cross-section planes) reveal a pronounced deformation-induced 3d patterning of the lattice rotations below the indent. in the cross-section planes perpendicular to the (111) surface plane below the indenter tip, the observed deformation-induced rotation pattern is characterized by an outer tangent zone with large absolute values of the rotations and an inner zone closer to the indenter axis with small rotations. the mapping of the rotation directions reveals multiple transition regimes with steep orientation gradients and frequent changes in sign. the experiments are compared to 3d elastic-viscoplastic crystal plasticity finite element simulations adopting the geometry and boundary conditions of the experiments. the simulations show story_separator_special_tag the mechanical anisotropy of an aa1050 aluminium plate is studied by the use of five crystal plasticity models and two advanced yield functions. in-plane uniaxial tension properties of the plate were predicted by the full-constraint taylor model, the advanced lamel model (van houtte et al., 2005) and a modified version of this model (manik and holmedal, 2013), the viscoplastic self-consistent model and a crystal plasticity finite element method (cpfem). results are compared with data from tensile tests performed every 15° from the rolling direction (rd) to the transverse direction (td) of the plate. furthermore, all the models except cpfem were used to provide stress points in the five-dimensional deviatoric stress space at yielding for 201 plastic strain-rate directions.
the facet yield surface was calibrated using these 201 stress points and compared to the in-plane yield loci and the planar anisotropy calculated by the crystal plasticity models. the anisotropic yield function yld2004-18p (barlat et al., 2005) was calibrated by three methods: using uniaxial tension data, using the 201 virtual yield points in story_separator_special_tag deformation processed metal-metal composites (dmmcs) are high-strength, high-electrical-conductivity composites developed by severe plastic deformation of two ductile metal phases. the extraordinarily high strength of dmmcs is underestimated by the rule of mixtures (volumetric weighted average) of conventionally work-hardened metals. in this article, a dislocation-density-based strain gradient plasticity model is proposed to relate the strain-gradient effect to the geometrically necessary dislocations emanating from the interfaces, to better predict the strength of dmmcs. the model prediction was compared with the experimental findings for cu-nb, cu-ta, and al-ti dmmc systems to verify the applicability of the new model. the results show that this model predicts the strength of dmmcs better than the rule-of-mixtures model. the strain-gradient effect, responsible for the exceptionally high strength of heavily cold-worked dmmcs, is dominant at large deformation strains, since its characteristic microstructural length is comparable with the intrinsic material length. story_separator_special_tag this work presents a crystal plasticity modeling framework that accounts for the influence of material interfaces on the plastic behavior of the two crystals on either side of the interface. within an interface-affected zone (iaz) extending from both sides of the interface, slip system activity is presumed to be biased towards systems that permit slip transfer across the interface. the preferred slip transfer pathways are determined from the geometric alignment of the slip systems and the stress state within each crystal. the iaz model is applied to study the plastic stability of cu-nb bicrystals under plane strain compression. our results show that the additional constraints imposed through the enforcement of slip continuity across the interface lead to reduced plastic stability, as compared to the case without an iaz, for several of the interfaces studied. story_separator_special_tag this work is an attempt to answer the question: is there a physically natural method of characterizing the possible interactions between the slip systems of two grains that meet at a grain boundary, a method that could form the basis for the formulation of grain-boundary conditions? here we give a positive answer to this question based on the notion of a burgers vector as described by a tensor field g on the grain boundary [gurtin, m.e., needleman, a., 2005. boundary conditions in small-deformation single-crystal plasticity that account for the burgers vector. j. mech. phys. solids 53, 1-31]. we show that the magnitude of g can be expressed in terms of two types of moduli: inter-grain moduli that characterize slip-system interactions between the two grains, and intra-grain moduli that, for each grain, characterize interactions between any two slip systems of that grain.
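the dmmc model above ties strength to geometrically necessary dislocations at interfaces; a common way to sketch this is a taylor-type flow stress σ = σ_0 + mαμb·sqrt(ρ_s + ρ_g), with ρ_g scaling inversely with a microstructural length. all constants below are illustrative assumptions, not fitted to the cited experiments:

```python
import numpy as np

def flow_stress(lam, rho_s=1e14, eta=0.3, sigma0=50e6,
                M=3.06, alpha=0.3, mu=45e9, b=0.25e-9):
    """taylor-type flow stress with a gnd density that grows as the
    microstructural length lam shrinks; parameter values are illustrative."""
    rho_g = eta / (b * lam)        # gnd density from strain-gradient scaling
    return sigma0 + M * alpha * mu * b * np.sqrt(rho_s + rho_g)

for lam in (1e-6, 200e-9, 50e-9):
    print(f"lambda = {lam*1e9:6.0f} nm -> sigma ~ {flow_stress(lam)/1e6:6.0f} MPa")
```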
we base the theory on microscopic force balances derived using the principle of virtual power, a version of the second law in the form of a free-energy imbalance, and thermodynamically compatible constitutive relations dependent on g and its rate. story_separator_special_tag a strain gradient crystal plasticity theory is presented that accounts for the resistance of grain boundaries against plastic flow based on an interface yield condition. this theory incorporates the numerically efficient visco-plastic treatment via the gradient of an equivalent plastic strain measure presented previously in wulfinghoff and bohlke (2012). the finite element implementation is discussed, and the three-dimensional numerical model is fitted to experimental data from polycrystalline copper micro-tensile tests. the size-dependent yield strength is reproduced notably well. story_separator_special_tag interactions between dislocations and grain boundaries play an important role in the plastic deformation of polycrystalline metals. capturing accurately the behaviour of these internal interfaces is particularly important for applications where the relative grain boundary fraction is significant, such as ultrafine-grained metals, thin films and micro-devices. incorporating these micro-scale interactions (which are sensitive to a number of dislocation, interface and crystallographic parameters) within a macro-scale crystal plasticity model poses a challenge. the innovative features of the present paper include (i) the formulation of a thermodynamically consistent grain boundary interface model within a microstructurally motivated strain gradient crystal plasticity framework, (ii) the presence of intra-grain slip system coupling through a microstructurally derived internal stress, (iii) the incorporation of inter-grain slip system coupling via an interface energy accounting for both the magnitude and direction of contributions to the residual defect from all slip systems in the two neighbouring grains, and (iv) the numerical implementation of the grain boundary model to directly investigate the influence of the interface constitutive parameters on plastic deformation. the model problem of a bicrystal deforming in plane story_separator_special_tag the misorientation phase space for symmetrical grain boundaries is explored by means of atomistic computer simulations, and the relationship between the tilt and twist boundaries in this three-parameter phase space is elucidated. the so-called random-boundary model (in which the interactions of atoms across the interface are assumed to be entirely random) is further developed to include relaxation of the interplanar spacings away from the grain boundary. this model is shown to include fully relaxed free surfaces naturally, thus permitting a direct comparison of the physical properties of grain boundaries and free surfaces, and hence the determination of ideal cleavage-fracture energies of grain boundaries. an extensive comparison with computer-simulation results for symmetrical tilt and twist boundaries shows that the random-boundary model also provides a good description of the overall structure-energy correlation for both low- and high-angle tilt and twist boundaries. finally, the role of the interplanar spacing parallel to the grain boundary in both the grain-boundary and cleavage-fracture energies is elucidated.
story_separator_special_tag dislocation theory is used to invoke a strain gradient theory of rate-independent plasticity. hardening is assumed to result from the accumulation of both randomly stored and geometrically necessary dislocations. the density of the geometrically necessary dislocations scales with the gradient of plastic strain. a deformation theory of plasticity is introduced to represent, in a phenomenological manner, the relative roles of strain hardening and strain gradient hardening. the theory is a non-linear generalization of cosserat couple stress theory. tension and torsion experiments on thin copper wires confirm the presence of strain gradient hardening. the experiments are interpreted in the light of the new theory. story_separator_special_tag size effects are widely observed in the mechanical behavior of materials at the micron scale. however, the underlying deformation mechanisms often remain ambiguous, particularly in the presence of strain gradients. here, combined microstructural investigations and mechanical testing in tension and torsion on annealed polycrystalline gold microwires with diameters of 12.5, 15, 17.5, 25, 40 and 60 μm were performed to investigate the influence of specimen size, grain size, strain rate and loading conditions on the deformation behavior of the wires. the studies focused on samples with fully recrystallized microstructures in order to minimize the influence of deformation related to the fabrication process. in particular, we have prepared a set of wires with different diameters, selected such that they have comparable stress-strain behavior in tension. in contrast to tensile loading, a systematic 'smaller is stronger' sample size effect was observed for torsional loading. since a grain size effect as well as diameter-dependent texture variations were found in this study, it is argued that the determined size effect is related to the graded loading and story_separator_special_tag this study develops a general theory of crystalline plasticity based on classical crystalline kinematics; classical macroforces; microforces for each slip system consistent with a microforce balance; a mechanical version of the second law that includes, via the microforces, work performed during slip; and a rate-independent constitutive theory that includes dependences on plastic strain gradients. the microforce balances are shown to be equivalent to yield conditions for the individual slip systems, conditions that account for variations in free energy due to slip. when this energy is the sum of an elastic strain energy and a defect energy quadratic in the plastic-strain gradients, the resulting theory has a form identical to classical crystalline plasticity, except that the yield conditions contain an additional term involving the laplacian of the plastic strain. the field equations consist of a system of pdes that represent the nonlocal yield conditions coupled to the classical pde that represents the standard force balance. these are supplemented by classical macroscopic boundary conditions in conjunction with nonstandard boundary conditions associated with slip. a viscoplastic regularization of the basic equations that obviates the need to determine the active slip systems is story_separator_special_tag this study develops a gradient theory of single-crystal plasticity that accounts for geometrically necessary dislocations.
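the scaling invoked in the first abstract above, with the gnd density set by the gradient of plastic strain, is commonly written as follows (a schematic statement, not the paper's exact equations; b is the burgers vector magnitude, γ^p the plastic slip, and α a constant of order 0.3):

```latex
\rho_{\mathrm{G}} \sim \frac{1}{b}\left|\nabla \gamma^{\mathrm{p}}\right|,
\qquad
\tau = \alpha\,\mu\,b\,\sqrt{\rho_{\mathrm{S}} + \rho_{\mathrm{G}}}
```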
the theory is based on classical crystalline kinematics; classical macroforces; microforces for each slip system consistent with a microforce balance; a mechanical version of the second law that includes, via the microforces, work performed during slip; and a rate-independent constitutive theory that includes dependences on a tensorial measure of geometrically necessary dislocations. the microforce balances are shown to be equivalent to nonlocal yield conditions for the individual slip systems. the field equations consist of the yield conditions coupled to the standard macroscopic force balance; these are supplemented by classical macroscopic boundary conditions in conjunction with nonstandard boundary conditions associated with slip. as an aid to solution, a weak (virtual power) formulation of the nonlocal yield conditions is derived. to make contact with classical dislocation theory, the microstresses are shown to represent counterparts of the peach-koehler force on a single dislocation. story_separator_special_tag the mechanical behavior of automotive dual-phase (dp) steel is modeled by two different approaches: with a full-field representative volume element (rve) and with a mean-field model. in the first part of this work, the full-field rve is constituted by a crystal-plasticity-based ferrite matrix with von mises-type martensite inclusions. to isolate the martensite influence, the full-field dp results were compared to a full-field comparison rve in which all martensite inclusions were replaced by a phase that exhibits the average ferrite behavior. a higher relative martensite grain boundary coverage facilitates an increased average dislocation density after quenching. however, for uniaxial deformations above 10%, the grain-size-dependent relation reverses and exhibits slowed-down hardening. in the second part, we incorporate the main findings from the full-field simulations into a nonlinear mean-field model of hashin-shtrikman type. the dislocation density production parameter and the saturated dislocation density are modeled based on grain size and martensite coverage. the comparison of both approaches shows good agreement for both the overall and the constituent-averaged behavior. story_separator_special_tag in this paper, it is shown that the occurrence of dislocation pile-ups at grain boundaries, as well as subsequent emission into the adjacent grains, is captured theoretically by gradient plasticity and confirmed experimentally by nanoindentation. from a theoretical point of view, this is accomplished (within a deformation theory framework applicable to continued loading) by accounting for a specific interfacial term in the overall potential of the material, in terms of which its response, taken to conform to strain gradient plasticity, is defined. the main features that result from the addition of this interfacial term are (i) significant size effects of hall-petch type in the overall stress-strain response of polycrystals and (ii) the determination of an analytical expression for the stress corresponding to the onset of dislocation transfer across interfaces. from an experimental point of view, the effective stress at which dislocation transfer takes place across an interface can be obtained from nanoindentations performed in close proximity to an fe-2.2 wt.
% si grain boundary, since they exhibit a distinct strain burst that is related to the presence of the story_separator_special_tag background: high-throughput profiling of the dna methylation status of cpg islands is crucial to understand the epigenetic regulation of genes. the microarray-based infinium methylation assay by illumina is one platform for low-cost high-throughput methylation profiling. both beta-value and m-value statistics have been used as metrics to measure methylation levels. however, there are no detailed studies of their relations and their strengths and limitations. results: we demonstrate that the relationship between the beta-value and m-value methods is a logit transformation, and show that the beta-value method has severe heteroscedasticity for highly methylated or unmethylated cpg sites. in order to evaluate the performance of the beta-value and m-value methods for identifying differentially methylated cpg sites, we designed a methylation titration experiment. the evaluation results show that the m-value method provides much better performance in terms of detection rate (dr) and true positive rate (tpr) for both highly methylated and unmethylated cpg sites. imposing a minimum threshold of difference can improve the performance of the m-value method but not the beta-value method. we also provide guidance for how to select the threshold of methylation differences. conclusions: the beta-value has a more intuitive biological interpretation, story_separator_special_tag this paper presents a fully coupled glide-climb crystal plasticity model, whereby climb is controlled by the diffusion of vacancies. an extended strain gradient crystal plasticity model is therefore proposed, which incorporates the climbing of dislocations in the governing transport equations. a global-local approach is adopted to separate the scales and assess the influence of local diffusion on the global plasticity problem. the kinematics of the crystal plasticity model is enriched by incorporating the climb kinematics in the crystallographic split of the plastic strain rate tensor. the potential of the fully coupled theory is illustrated by means of two single-slip examples that illustrate the interaction between glide and climb in either bypassing a precipitate or destroying a dislocation pile-up. story_separator_special_tag the mechanical response of polycrystalline metals is significantly affected by the behaviour of grain boundaries, in particular when these interfaces constitute a relatively large fraction of the material volume. one of the current challenges in the modelling of grain boundaries at a continuum (polycrystalline) scale is the incorporation of the many different interaction mechanisms between dislocations and grain boundaries, as identified from fine-scale experiments and simulations. in this paper, the objective is to develop a model that accounts for the redistribution of defects along the grain boundary in the context of gradient crystal plasticity. the proposed model incorporates the nonlocal relaxation of the grain boundary net defect density. a numerical study on a bicrystal specimen in simple shear is carried out, showing that the spreading of the defect content has a clear influence on the macroscopic response, as well as on the microscopic fields. this work provides a basis that enables a more thorough analysis of the plasticity of polycrystalline metals at the continuum level, where the plasticity at grain boundaries matters.
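the beta/m-value relationship stated in the methylation abstract above is an explicit logit transformation, m = log2(β/(1−β)); a small self-contained sketch of the forward and inverse maps (the clipping epsilon is an implementation choice, not from the paper):

```python
import numpy as np

def beta_to_m(beta, eps=1e-6):
    """m-value as the base-2 logit of the beta-value: m = log2(b / (1 - b))."""
    beta = np.clip(np.asarray(beta, dtype=float), eps, 1 - eps)
    return np.log2(beta / (1 - beta))

def m_to_beta(m):
    """inverse transform: beta = 2**m / (2**m + 1)."""
    return 2.0 ** np.asarray(m, dtype=float) / (2.0 ** np.asarray(m, dtype=float) + 1.0)

betas = np.array([0.05, 0.5, 0.95])
ms = beta_to_m(betas)
print(ms)             # approx [-4.25, 0.0, 4.25]
print(m_to_beta(ms))  # recovers the original beta-values
```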
story_separator_special_tag we present a physics-based constitutive model of dislocation glide in metals that explicitly accounts for the redistribution of dislocations due to their motion. the model parameterizes the complex microstructure by dislocation densities of edge and screw character, which occur either with monopolar properties, i.e., a single dislocation with positive or negative line sense, or with dipolar properties, i.e., two dislocations of opposite line sense combined. the advantage of the model lies in the description of the dislocation density evolution, which comprises the usual rate equations for dislocation multiplication and annihilation, and for the formation and dissociation of dislocation dipoles. additionally, the spatial redistribution of dislocations by slip is explicitly accounted for. this is achieved by introducing an advection term for the dislocation density that turns the evolution equations for the dislocation density from ordinary into partial differential equations. the associated spatial gradients of the dislocation slip render the model nonlocal. the model is applied to wedge indentation in single-crystalline nickel. the simulation results are compared to published experiments (kysar et al., 2010) in terms of the spatial distribution of lattice rotations and geometrically story_separator_special_tag the interaction of dislocations with grain boundary junctions plays an important role during plastic deformation and stress relaxation in polycrystalline thin films. in the present work, arrays of secondary grain boundary dislocations (sgbds) and their behavior at junctions between orthogonal Σ3 {111} and Σ3 {112} grain boundaries in au thin films have been studied using room-temperature and in situ transmission electron microscopy (tem). through diffraction contrast experiments, we find that these dislocations have burgers vectors of the type a/6 ⟨112⟩. in situ tem experiments conducted at elevated temperature show that the arrays of sgbds on {111} twin planes originate in the {112} boundaries, where they accommodate a small rotational misorientation from the exact coincident-site-lattice (csl) orientation.
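the advection term described above turns the density evolution into a transport pde, ∂ρ/∂t + ∂(ρv)/∂x = 0 in the simplest one-dimensional, source-free case. a minimal first-order upwind discretization follows; the grid, velocity and initial profile are illustrative assumptions, and the actual model couples several density fields with source terms:

```python
import numpy as np

# 1d upwind scheme for d(rho)/dt + d(rho*v)/dx = 0 (no source terms)
nx, dx, dt, v = 200, 1e-6, 1e-9, 100.0   # grid spacing [m], step [s], v [m/s]
x = np.arange(nx) * dx
rho = 1e12 * np.exp(-((x - 50e-6) / 10e-6) ** 2)  # initial density bump [1/m^2]

assert v * dt / dx <= 1.0, "cfl condition violated"
for _ in range(500):
    flux = rho * v
    rho[1:] -= dt / dx * (flux[1:] - flux[:-1])  # first-order upwind (v > 0)
    rho[0] = 0.0                                  # inflow boundary: no supply

# the bump has advected by v * t = 50 um (with some numerical smearing)
print("density peak has moved to x =", x[np.argmax(rho)])
```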
we propose that the discontinuous distribution of sgbds in the {112} boundary produces a climb stress that drives the dislocation motion. as the dislocations in the grain boundary increase their separation, the climb stress and the misorientation between the grains are reduced. to test the plausibility of this explanation, we consider the balance between the reduction story_separator_special_tag this work aims at the formulation of a gradient crystal plasticity model which incorporates some of the latest developments in continuum dislocation theory and is, at the same time, well suited for a three-dimensional numerical implementation. specifically, a classical continuum crystal plasticity framework is extended by taking into account continuous dislocation density and curvature field variables which evolve according to partial differential equations (hochrainer et al., 2014; ebrahimi et al., 2014). these account for dislocation transport and curvature-induced line-length production, and have been derived from a higher-dimensional continuum dislocation theory. the dislocation density information is used to model work hardening as a consequence of dislocation entanglement. a composite microstructure is simulated, consisting of a soft elasto-plastic matrix and hard elastic inclusions. the particles are assumed to act as obstacles to dislocation motion, leading to pile-ups forming at the matrix-inclusion interface. this effect is modeled using gradient plasticity with a simplified equivalent plastic strain gradient approach (wulfinghoff et al., 2013), which is used here in order to allow for an efficient numerical treatment of the three-dimensional numerical model. a regularized logarithmic energy is applied story_separator_special_tag the plastic deformation of metals is the result of the motion and interaction of dislocations, line defects of the crystalline structure. continuum models of plasticity, however, remain largely phenomenological to date, usually do not consider dislocation motion, and fail when material behavior becomes size dependent. in this work we present a novel plasticity theory based on systematic physical averages of the kinematics and dynamics of dislocation systems. we demonstrate that this theory can predict microstructure evolution and size effects in accordance with experiments and discrete dislocation simulations. the theory is based on only four internal variables per slip system and features physical boundary conditions, dislocation pile-ups, dislocation curvature, dislocation multiplication and dislocation loss. the presented theory therefore marks a major step towards a physically based theory of crystal plasticity. story_separator_special_tag the geometrically non-linear, scale-dependent response of polycrystalline fcc metals is modelled by an enhanced crystal plasticity framework based on the evolution of several dislocation density types and their distinct physical influence on the mechanical behaviour. the isotropic hardening contribution follows from the evolution of statistically stored dislocation (ssd) densities during plastic deformation, where the determination of the slip resistance is based on the mutual short-range interactions between all dislocation types, i.e., including the geometrically necessary dislocation (gnd) densities. moreover, the gnds introduce long-range interactions by means of a back-stress measure, opposite to the slip system resolved shear stress.
the grain-size-dependent mechanical behaviour of a limited collection of grains under plane stress loading conditions is determined using the finite element method. each grain is subdivided into finite elements, and an additional expression, coupling the gnd densities to spatial crystallographic slip gradients, allows the gnd densities to be taken as supplemental nodal degrees of freedom. consequently, these densities can be uncoupled at the grain boundary nodes, allowing for the introduction of grain boundary dislocations (gbds) story_separator_special_tag this paper focuses on the continuum-scale modeling of dislocation-grain boundary interactions and enriches a particular strain gradient crystal plasticity formulation (the convex counterpart of yalçinkaya et al., j mech phys solids 59:1-17, 2011; int j solids struct 49:2625-2636, 2012) by incorporating explicitly the effect of grain boundaries on the plastic slip evolution. within the framework of continuum thermodynamics, a consistent extension of the model is presented, and a potential-type non-dissipative grain boundary description in terms of the grain boundary burgers tensor (see e.g. gurtin, j mech phys solids 56:640-662, 2008) is proposed. a fully coupled finite element solution algorithm is built up, in which both the displacement $\mathbf{u}$ and the plastic slips $\gamma^{\alpha}$ are considered as primary variables. for the treatment of grain boundaries within the solution algorithm, an interface element is formulated. the proposed formulation is capable of capturing the effect of the misorientation of neighboring grains and the orientation of the grain boundaries on slip evolution in a natural way story_separator_special_tag the tensile deformation of bicrystal specimens with longitudinal grain boundaries has been considered, both from the point of view of macroscopic plasticity and from that of dislocation theory. emphasis has been on the multiple slip associated with the interaction between the two crystals at the boundary. it has been shown that macroscopic continuity at the boundary will in general require the crystals of a bicrystal to deform on at least four slip systems between them, distributed either with two in each crystal or with three in one crystal and one in the other. a model employing the pile-up of dislocations at a grain boundary has led to a method of predicting which additional slip systems will operate in a given bicrystal. experimental observations of slip lines on twenty-four aluminum bicrystals deformed in tension have supported the predictions made by this method. story_separator_special_tag a two-phase alloy of composition ti-47.5al-2.5cr has been studied under two heat-treated conditions in order to obtain different microstructures. these consisted of lamellar and equiaxed distributions of γ grains in which the α2 phase was distributed as long lamellae or smaller globules, respectively. the specific rotation relationships between γ/γ and γ/α2 grains have been measured, and these have been used to understand their effect on the compatibility of deformation across adjacent grains. for this, detailed analysis of the active slip systems has been carried out by transmission electron microscopy (tem) observations of deformed samples.
a theoretical calculation of a geometric compatibility factor characterizing the best slip transfer across adjacent grains has been used in such a way that it has been possible to deduce the role played by the type of orientation relationship between grains in producing active deformation systems that allow the maximum compatibility of deformation. story_separator_special_tag the effect of slip transfer on the heterogeneous deformation of polycrystals has been a topic of recurring interest, as this process can either lead to the nucleation of damage or prevent the nucleation of damage. this paper examines recent experimental characterization of slip transfer in tantalum, tial, and ti alloys. the methods used to analyze and assess evidence for the occurrence of slip transfer are discussed. comparisons between a characterized and a simulated patch of microstructure are used to illustrate a synergy that leads to new insights that cannot arise with either approach alone. story_separator_special_tag a detailed theoretical and numerical investigation of the infinitesimal single-crystal gradient-plasticity and grain-boundary theory of gurtin (2008) is performed. the governing equations and flow laws are recast in variational form. the associated incremental problem is formulated in minimisation form and provides the basis for the subsequent finite element formulation. various choices of the kinematic measure used to characterise the ability of the grain boundary to impede the flow of dislocations are compared. an alternative measure is also suggested. a series of three-dimensional numerical examples serve to elucidate the theory. story_separator_special_tag zn-li alloys have been shown to be promising for biodegradable vascular stent applications due to their favourable biocompatibility and superior strength. this work presents a thorough evaluation of the microstructure, room-temperature mechanical properties and human-body-temperature creep behaviour of hot-extruded zn-xli (x = 0.1, 0.3 and 0.4) alloys. all alloys show a typical basal texture after extrusion, but the recrystallized grains are much finer with increasing li content. consequently, not only the room-temperature yield and tensile strengths but also the elongation to fracture is significantly increased with increasing li content. however, increasing li content has an adverse effect on the creep resistance at human body temperature. moreover, there is a transition in the operative creep mechanism from dislocation creep in the zn-0.1li alloy to grain boundary sliding in the zn-0.3li and zn-0.4li alloys. the observed mechanical behaviour of these alloys can be well related to the grain size effect, i.e., a transition from strengthening to softening by grain boundaries with decreasing strain rate. this work suggests that the grain size of biodegradable zinc alloys should be optimized in order to achieve a balance between room story_separator_special_tag the influence of grain boundaries on material deformation in ni3al was investigated by relating the material pile-up at grain boundaries and the propagation of slip across grain boundaries to the misorientation between the corresponding grains. indentation tests were carried out using micro- and nanoindentation at distances shorter than the radius of the indent from a grain boundary in ni3al. the indents were observed using scanning electron microscopy and non-contact-mode atomic force microscopy.
repeated experimentation did not reveal a rising trend of hardness near grain boundaries, indicating that hardness is not a sensitive parameter for measuring grain boundary strengthening effects. however, it was observed that the slip transfer behavior across a grain boundary has a strong dependence on a local misorientation factor m' relating the misorientation of the slip planes and slip directions on either side of the grain boundary. this result agrees with the fundamental assumption in the physical explanation of the hall-petch effect. story_separator_special_tag the passage of dislocations across grain boundaries in metals has been studied using the in situ tem deformation technique. a detailed analysis of the interaction of glissile matrix dislocations with grain-boundary dislocations has been performed. the results show that dislocations piled up at the grain boundary can: (1) be transferred directly through the grain boundary into the adjoining grain; (2) be absorbed and transformed into extrinsic grain-boundary dislocations; (3) be accommodated in the grain boundary, followed by the emission from the grain boundary of a matrix dislocation; or (4) be ejected back into their original grain. to predict which slip system is favourable for slip transfer, three criteria have been considered, namely: (1) the angle between the lines of intersection of the incoming and outgoing slip planes with the grain boundary, which should be as small as possible; (2) the resolved shear stress acting on the possible slip systems in the adjoining grain, which should... story_separator_special_tag presentation of a model explaining the significance of large values of the residual burgers vector in inhibiting slip propagation. the model also considers the thermodynamics associated with the nucleation and growth of dislocation loops, observed by transmission electron microscopy in brass. story_separator_special_tag incompatibility stresses can develop in bicrystals due to elastic and plastic material anisotropies, owing to the different crystal orientations separated by grain boundaries. here, these stresses are investigated by combining experimental and theoretical studies on 10 μm diameter ni bicrystalline micropillars. throughout stepwise compression tests, slip traces are analyzed by scanning electron microscopy to identify the active slip planes and directions in both crystals. an analytical model is presented accounting for the effects of heterogeneous elasticity coupled to heterogeneous plasticity on the internal mechanical fields. this model provides explicit expressions for the stresses in both crystals, considering the experimentally observed non-equal crystal volume fractions and inclined grain boundaries. it is used to predict the resolved shear stresses on the possible slip systems in each crystal. the predictions of the onset of plasticity given by the present model in pure elasticity are compared with those given by the classical schmid's law. in contrast with schmid's law, the predictions of the analytical model are in full agreement with the experimental observations regarding the most highly stressed crystal and the active slip systems. the effects of plastic incompatibilities are also considered in story_separator_special_tag grain boundaries induce heterogeneities in the deformation response of polycrystals.
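the local misorientation factor m' used in the ni3al study above is commonly defined (following luster and morris) as m' = cos(φ)·cos(κ), with φ the angle between the slip plane normals and κ the angle between the slip directions; a short sketch, with all vectors assumed to be expressed in one common sample frame:

```python
import numpy as np

def unit(v):
    v = np.asarray(v, dtype=float)
    return v / np.linalg.norm(v)

def m_prime(n_in, d_in, n_out, d_out):
    """luster-morris style transfer factor m' = cos(phi) * cos(kappa):
    phi between slip plane normals, kappa between slip directions."""
    cos_phi = abs(unit(n_in) @ unit(n_out))
    cos_kappa = abs(unit(d_in) @ unit(d_out))
    return cos_phi * cos_kappa

# illustrative: identical systems give m' = 1 (perfect alignment)
print(m_prime([1, 1, 1], [1, -1, 0], [1, 1, 1], [1, -1, 0]))  # -> 1.0
```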
studying these local variations in response , measured through high resolution strain measurement techniques , is important and can improve our understanding of fatigue damage initiation in the vicinity of grain boundaries and material hardening . in this work , strain fields across grain boundaries were measured using advanced digital image correlation techniques . in conjunction with strain measurements , grain orientations from electron back-scattered diffraction were used to establish the dislocation reactions at each boundary , providing the corresponding residual burgers vectors due to slip transmission across the interfaces . a close correlation was found between the magnitude of the residual burgers vector and the local strain change across the boundary . when the residual burgers vector magnitude ( with respect to the lattice spacing ) exceeds 1.0 , the high strains on one side of the boundary are paired with low strains across the boundary , indicating the difficulties for slip dislocations to penetrate the grain interfaces . when the residual burgers vector approaches zero , the strain fields vary smoothly across the boundary due to limited resistance to slip transmission . the results suggest that story_separator_special_tag abstract a simple geometrical model is presented with the aim to connect measured distributions of orientation relationships between adjacent grains in single and dual-phase brasses and the efficiency of grain and phase boundaries as dislocation obstacles . from the orientation relationships a slip transfer number is evaluated from the angle between the slip plane normals and the angle between the slip directions of the grain neighbors . taking into account the different stacking fault energy and the different hardness of α-phase brass and β-phase brass , and using reasonable limiting conditions for the above angles , the result of the calculations is the same as obtained from the hall-petch analysis of the yield stress : phase boundaries are stronger dislocation obstacles than grain boundaries . story_separator_special_tag the role of cold rolling texture on the tensile properties of the cold rolled and the cold rolled and annealed aisi 316l austenitic stainless steel is described here . the solution-annealed stainless steel plates were unidirectionally cold rolled to 50 , 70 and 90 % reduction in thickness . the cold rolled material was annealed at 500-900 °c annealing temperatures . the x-ray diffraction technique was employed to study the texture evolution in the cold rolled as well as the cold rolled and annealed conditions . the texture components that evolved were translated into slip transmission number and schmid factor . these two parameters were correlated with the tensile properties of the material . the tensile properties were evaluated under all processing conditions . softening of the cold rolled material was observed after annealing with increasing annealing temperatures . from the stress-strain curves , the strain hardening coefficient n and the strain hardening rate were determined . it was found that the effect of texture on tensile behaviour could be understood clearly by the strain hardening rate . out of the two parameters , the strain hardening rate was found to be more sensitive to the type of texture in the material . story_separator_special_tag abstract atomistic modeling is used to investigate the shear resistance and interaction with point defects of a cu-nb interface found in nanocomposites synthesized by severe plastic deformation .
the shear resistance of this interface is highly anisotropic : in one direction shearing occurs at stresses story_separator_special_tag abstract interfaces with relatively low shear strengths can be strong barriers to glide dislocations due to dislocation core spreading within the interface plane . using atomistic modeling we have studied the influence of interface shear strength on the interaction of lattice glide dislocations with fcc/bcc interfaces . tunable interatomic potentials are employed to vary the interface shear strength for the same interface crystallography . the results show that : ( 1 ) the interface shear strength increases as the dilute heat of mixing decreases ; ( 2 ) the interface shear mechanism involves the nucleation and glide of interfacial dislocations , which is dominated by the atomic structures of interfaces , regardless of the interface shear strength ; ( 3 ) weak interfaces entrap lattice glide dislocations due to the interface shear and core spreading of dislocations within interfaces . reverse shear displacement is needed to enable collapse of the spread core for slip transmission . this study provides an insight into the correlation between interface shear strength and glide dislocation trapping at the interface , which is a crucial unit mechanism in understanding the ultra-high strengths observed in nanoscale fcc/bcc multilayers . story_separator_special_tag bulk cu/nb multilayered composites with high interfacial content have been synthesized via the accumulative roll bonding ( arb ) method . experimental characterization shows that these multilayers with submicrometer and nanometer individual layer thicknesses contain a predominant , steady-state interface with the kurdjumov-sachs orientation relationship joining the mutual { 112 } planes of cu and nb . in this article , we overview microscopy and simulation results on the structure of this interface at an atomic level and its influence on interface properties , such as interface shear resistance and its ability to absorb point defects and nucleate dislocations . story_separator_special_tag abstract to fully understand the plastic deformation of metallic polycrystalline materials , the physical mechanisms by which a dislocation interacts with a grain boundary must be identified . recent atomistic simulations have focused on the discrete atomic scale motions that lead to either dislocation obstruction , dislocation absorption into the grain boundary with subsequent emission at a different site along the grain boundary , or direct dislocation transmission through the grain boundary into the opposing lattice . these atomistic simulations , coupled with foundational experiments performed to study dislocation pile-ups and slip transfer through a grain boundary , have facilitated the development and refinement of a set of criteria for predicting if dislocation transmission will occur and which slip systems will be activated in the adjacent grain by the stress concentration resulting from the dislocation pile-up . this article provides a concise review of both experimental and atomistic simulation efforts focused on the details of slip transmission at grain boundaries in metallic materials and provides a discussion of outstanding challenges for atomistic simulations to advance this field .
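as an aside for the reader : the geometric slip-transfer criteria reviewed above reduce to simple vector algebra . the following python sketch is illustrative only , not code from any cited paper ; it evaluates the commonly used luster-morris compatibility factor m ' = ( n1 · n2 ) ( d1 · d2 ) for slip-plane normals and slip directions expressed in a common sample frame , with a hypothetical pair of fcc slip systems as input .

import numpy as np

def m_prime(n1, d1, n2, d2):
    # Luster-Morris slip-transfer factor m' = (n1.n2)(d1.d2);
    # values near 1 indicate well-aligned systems (easy transfer).
    n1, d1, n2, d2 = (np.asarray(v, float) / np.linalg.norm(v)
                      for v in (n1, d1, n2, d2))
    return float(np.dot(n1, n2) * np.dot(d1, d2))

# hypothetical example: the same {111}<110> system in two grains,
# grain 2 rotated 15 degrees about the sample z axis
theta = np.radians(15.0)
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0,            0.0,           1.0]])
n1, d1 = np.array([1.0, 1.0, 1.0]), np.array([1.0, -1.0, 0.0])
print(m_prime(n1, d1, R @ n1, R @ d1))  # -> close to 1.0 here

in this convention m ' = 1 corresponds to perfectly aligned slip systems and m ' = 0 to fully incompatible ones , which is why low-m ' boundaries act as the stronger dislocation obstacles .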
story_separator_special_tag abstract the mechanical response of engineering materials evaluated through continuum fracture mechanics typically assumes that a crack or void initially exists , but it does not provide information about the nucleation of such flaws in an otherwise flawless microstructure . how such flaws originate , particularly at grain ( or phase ) boundaries , is less clear . experimentally , good vs. bad grain boundaries are often invoked as the reasons for critical damage nucleation , but without any quantification . the state of knowledge about deformation at or near grain boundaries , including slip transfer and heterogeneous deformation , is reviewed to show that little work has been done to examine how slip interactions can lead to damage nucleation . a fracture initiation parameter developed recently for a low ductility model material with limited slip systems provides a new definition of grain boundary character based upon operating slip and twin systems ( rather than an interfacial energy based definition ) . this provides a way to predict damage nucleation density on a physical and local ( rather than a statistical ) basis . the parameter assesses the way that highly activated twin systems are aligned with principal stresses and slip story_separator_special_tag abstract during direct slip transmission of a dislocation through a twin or grain boundary , typically a residual dislocation remains in the boundary plane . through atomistic simulations , we show systematic cases of slip transmission through various types of ⟨1 1 0⟩ tilt and ⟨1 1 1⟩ twist grain boundaries ( gbs ) . additionally , one specific type of gb , the coherent twin boundary ( ctb ) , is viewed to investigate the effects of orientation and dislocation type on the slip transmission process . in every case , we measure the residual burgers vector within the boundary and the energy barrier for slip to transmit through the ctb or gb . there exists a direct correlation between the magnitude of the residual burgers vector and the energy barrier for slip transmission . hence , in cases of easy slip transmission ( i.e . low energy barrier ) , a small residual dislocation is left in the gb ; meanwhile in cases where it is difficult for slip to transmit past the ctb or gb ( i.e . high energy barrier ) , a large residual burgers vector remains within the boundary . story_separator_special_tag the interactions between 60° dislocation pile-ups and grain boundaries ( gbs ) are studied using multiscale modeling . careful quantitative analyses of complex processes associated with 60° dislocation absorption and transmission phenomena at Σ3 , Σ9 and Σ11 symmetric tilt boundaries in al are interpreted in terms of a set of modified lee-robertson-birnbaum ( mlrb ) criteria . our results and the mlrb criteria ( i ) explain experimental observations , ( ii ) rationalize new mechanisms such as deformation twinning and formation of extended stacking faults , ( iii ) show that reactions can be controlled more strongly by the leading partial of an incoming dislocation rather than the full burgers vector and ( iv ) demonstrate that non-schmid stresses , e.g . shear and compressive stresses along the gb , gb dislocation processes and step-height changes on the gb all influence the critical nucleation stress , but to differing degrees among different tilt boundaries . the mlrb criteria do not capture the effects of local gb structure that can also influence behavior .
quantitative metrics based on the mlrb criteria are formulated , using the simulation results , for various absorption and transmission phenomena . these story_separator_special_tag the energetics of slip-coherent twin boundary ( ctb ) interactions are established under tensile deformation in face centered cubic ( fcc ) copper with molecular dynamics simulations , exploring the entire stereographic triangle . the ctbs serve as effective barriers in some crystal orientations more than others , consistent with experimental observations . the resulting dislocation structures upon slip-twin reactions are identified in terms of burgers vector analysis . visualization of the dislocation transmission , lock formation , dislocation incorporation into twin boundaries , dislocation multiplication at the matrix-twin interface and twin translation , growth , and contraction behaviors covers the most significant reactions that can physically occur , providing a deeper understanding of the mechanical behavior of fcc alloys in the presence of twin boundaries . the results make a distinction between deformation and annealing twins interacting with incident dislocations and point to the considerable role both types of twins can play in strengthening of fcc metals . story_separator_special_tag abstract microstructurally-induced failure mechanisms in crystalline materials with coincident site-lattice ( csl ) high angle grain boundaries ( gbs ) have been investigated . a multiple-slip rate-dependent crystalline constitutive formulation that is coupled to the evolution of mobile and immobile dislocation densities and specialized computational schemes have been developed to obtain a detailed understanding of the interrelated physical mechanisms that result in material failure . a transmission scalar has also been introduced to investigate slip-rate transmission , blockage and incompatibility at the gb . the combined effects of high angle gb misorientation , mobile and immobile dislocation densities , strain hardening , geometrical softening , localized plastic strains , and slip-rate transmission and blockage on failure evolution in face centered cubic ( f.c.c . ) crystalline materials have been studied . results from the present study are consistent with experimental observations that single dislocation pile-ups result in a transgranular failure mode for the Σ9 csl gb , and that symmetric double dislocation pile-ups result in an intergranular failure mode for the Σ17b csl gb . story_separator_special_tag in this paper , a gradient crystal plasticity model in a polycrystalline grain structure is investigated . here , the focus is on the influence of the grain boundary conditions . a new type of grain boundary condition is introduced , the so-called micro-flexible boundary condition . in particular , it is compared to existing grain boundary conditions of plastic slip . numerical results are given for the stress-strain response as well as for the plastic slip field in the grain structure . story_separator_special_tag abstract a dislocation-density grain boundary interaction scheme has been developed to account for the interrelated dislocation-density interactions of emission , absorption and transmission in gb regions . the gb scheme is based on slip-system compatibility , local resolved shear stresses , and immobile and mobile dislocation-density accumulation at critical gb locations .
to accurately represent dislocation-density evolution , a conservation law for dislocation-densities is used to balance dislocation-density absorption , transmission and emission from the gb . the behavior of f.c.c . polycrystalline copper , with different random low and high angle gbs , is investigated for different crack lengths . for aggregates with random low angle gbs , dislocation-density transmission dominates at the gbs , which can indicate that the low angle gb will not significantly change crack growth directions . for aggregates with random high angle gbs , extensive dislocation-density absorption and pile-ups occur . the high stresses associated with this behavior , along the gbs , can result in intergranular crack growth due to potential crack nucleation sites in the gb . story_separator_special_tag we suggest a dislocation-based constitutive model to incorporate the mechanical interaction between mobile dislocations and grain boundaries into a crystal plasticity finite element framework . the approach is based on the introduction of an additional activation energy into the rate equation for mobile dislocations in the vicinity of grain boundaries . the energy barrier is derived by using a geometrical model for thermally activated dislocation penetration events through grain boundaries . the model takes full account of the geometry of the grain boundaries and of the schmid factors of the critically stressed incoming and outgoing slip systems and is formulated as a vectorial conservation law . the new model is applied to the case of 50 % ( frictionless ) simple shear deformation of al bicrystals with either a small , medium , or large angle grain boundary parallel to the shear plane . the simulations are in excellent agreement with the experiments in terms of the von mises equivalent strain distributions and textures . the study reveals that the incorporation of the misorientation alone is not sufficient to describe the influence of grain boundaries on polycrystal micro-mechanics . we observe three mechanisms which jointly entail pronounced local hardening story_separator_special_tag a multiple slip dislocation-density based crystalline formulation has been coupled to a kinematically based scheme that accounts for grain-boundary ( gb ) interfacial interactions with dislocation densities . specialized finite-element formulations have been used to gain detailed understanding of the initiation and evolution of large inelastic deformation modes due to mechanisms that can result from dislocation-density pile-ups at gb interfaces , partial and total dislocation-density transmission from one grain to neighboring grains , and dislocation density absorption within gbs . these formulations provide a methodology that can be used to understand how interactions at the gb interface scale affect overall macroscopic behavior at different inelastic stages of deformation for polycrystalline aggregates due to the interrelated effects of gb orientations , the evolution of mobile and immobile dislocation-densities , slip system orientation , strain hardening , geometrical softening , geometric slip compatibility , and localized plastic strains . criteria have been developed to identify and monitor the initiation and evolution of multiple regions where dislocation pile-ups at gbs , or partial and total dislocation density transmission through the gb , or absorption within the gb can occur .
it is shown that the accurate prediction of these mechanisms is essential to story_separator_special_tag in this communication , we summarize the current advances in size-dependent continuum plasticity of crystals , specifically , the rate-independent ( quasistatic ) formulation , on the basis of dislocation mechanics . a particular emphasis is placed on relaxation of slip at interfaces . this unsolved problem is the current frontier of research in plasticity of crystalline materials . we outline a framework for further investigation , based on the developed theory for the bulk crystal . the bulk theory is based on the concept of geometrically necessary dislocations , specifically , on configurations where dislocations pile-up against interfaces . the average spacing of slip planes provides a characteristic length for the theory . the physical interpretation of the free energy includes the error in elastic interaction energies resulting from coarse representation of dislocation density fields . continuum kinematics is determined by the fact that dislocation pile-ups have singular distribution , which allows us to represent the dense dislocation field at the boundary as a superdislocation , i.e. , the jump in the slip field . associated with this jump is story_separator_special_tag this paper discusses boundary conditions appropriate to a theory of single-crystal plasticity ( gurtin , j. mech . phys . solids 50 ( 2002 ) 5 ) that includes an accounting for the burgers vector through energetic and dissipative dependences on the tensor $G = \operatorname{curl} H^p$ , with $H^p$ the plastic part in the additive decomposition of the displacement gradient into elastic and plastic parts . this theory results in a flow rule in the form of $N$ coupled second-order partial differential equations for the slip-rates $\nu^\alpha$ ( $\alpha = 1 , 2 , \ldots , N$ ) , and , consequently , requires higher-order boundary conditions . motivated by the virtual-power principle in which the external power contains a boundary-integral linear in the slip-rates , hard-slip conditions in which ( a ) $\nu^\alpha = 0$ on a subsurface $\mathcal{S}_{\mathrm{hard}}$ of the boundary for all slip systems are proposed . in this paper we develop a theory that is consistent with that of ( gurtin , 2002 ) , but that leads to an external power containing a boundary-integral linear in the tensor $H^p_{ij} \epsilon_{jrl} n_r$ , a result that motivates replacing ( a ) with the microhard condition ( b ) $H^p_{ij} \epsilon_{jrl} n_r = 0$ on the subsurface $\mathcal{S}_{\mathrm{hard}}$ story_separator_special_tag in a first report [ jin zh , gumbsch p , ma e , albe k , lu k , hahn h , et al . scripta mater 2006 ; 54:1163 ] , interactions between a screw dislocation and a coherent twin boundary ( ctb ) were studied via molecular dynamics simulations for three face-centered cubic ( fcc ) metals , cu , ni and al . to complement those preliminary results , purely stress-driven interactions between a 60° non-screw lattice dislocation and a ctb are considered in this paper . depending on the material and the applied strain , slip has been observed to interact with the boundary in different ways . if a 60° dislocation is forced by an external stress into a ctb , it dissociates into different partial dislocations gliding into the twin as well as along the twin boundary . a sessile dislocation lock may be generated at the ctb if the transited slip is incomplete . the details of the interaction are controlled by the material-dependent energy barriers for the formation of shockley partial dislocations from the site where the lattice dislocation impinges upon the boundary .
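several abstracts above correlate the difficulty of transmission with the magnitude of the residual burgers vector left in the boundary , $b_r = b_{out} - b_{in}$ , with the incoming vector rotated into the outgoing grain 's frame . purely as an illustration of that bookkeeping — the rotation and burgers vectors below are hypothetical , not values from the cited simulations — a minimal python sketch :

import numpy as np

def residual_burgers(b_in, b_out, R12):
    # |b_r| with b_r = b_out - R12 @ b_in, where R12 maps grain-1
    # crystal coordinates into grain-2 crystal coordinates; the
    # result is usually reported normalised by the lattice spacing.
    b_r = np.asarray(b_out, float) - np.asarray(R12, float) @ np.asarray(b_in, float)
    return float(np.linalg.norm(b_r))

# hypothetical fcc example: a/2<110> dislocations across a 30-degree tilt
a = 1.0  # lattice parameter, arbitrary units
t = np.radians(30.0)
R12 = np.array([[np.cos(t), -np.sin(t), 0.0],
                [np.sin(t),  np.cos(t), 0.0],
                [0.0,        0.0,       1.0]])
b = (a / 2.0) * np.array([1.0, 1.0, 0.0])
print(residual_burgers(b, b, R12))  # larger |b_r| -> stronger barrier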
story_separator_special_tag the strength of polycrystals is largely controlled by the interaction between lattice dislocations and grain boundaries . the atomistic details of these interactions are difficult to discern even by advanced high-resolution microscopy methods . in this paper we present results of atomistic simulations of interactions between an edge dislocation and three symmetric tilt grain boundaries in body-centred cubic tungsten . our simulations reveal that the outcome of the dislocation-grain-boundary interaction depends sensitively on the grain boundary structure , the geometry of the slip systems in neighbouring grains , and the precise location of the interaction within the grain boundary . a detailed analysis of the evolution of the grain boundary structures and local stress fields during dislocation absorption and transmission is provided . story_separator_special_tag abstract the interactions of 1/6 ⟨112 ] { 111 } twinning dislocations with large-angle grain boundaries , in duplex ti-al alloys after room-temperature deformation in tension , have been investigated by transmission electron microscopy . the dislocation reactions that describe the slip transfer processes have been identified using image matching between experimental and computed images and are discussed and interpreted in terms of the direction of the applied tensile stress . the results demonstrate that , at a general large-angle grain boundary , slip transfer of incoming 1/6 ⟨112 ] { 111 } twinning dislocations can be accommodated by the generation of glide in both grains by the movement of ½ ⟨110 ] -type dislocations on prismatic glide planes , defined by the operative ½ ⟨110 ] burgers vector of the outgoing glide dislocations and the line of intersection of the incoming deformation twin with the grain boundary . this mechanism therefore represents a generalization of the dislocation interaction observed for edge-type deform . story_separator_special_tag the mechanical response to nanoindentation near grain boundaries has been investigated in an fe-14 % si bicrystal with a general grain boundary and two mo bicrystals with symmetric tilt boundaries . in particular , the indentations performed on the fe-14 % si show that as the grain boundary is approached , in addition to the occurrence of a first plateau in the load versus depth nanoindentation curve , which indicates grain interior yielding , a second plateau is observed , which is believed to indicate dislocation transfer across the boundary . it is noted that the hardness at the onset of these yield excursions increases as the distance of the tip to the boundary decreases , thus providing a new type of size effect , which can be obtained through nanoindentation . the energy released during an excursion compares well to the calculated interaction energy of the piled-up dislocations . hall-petch slope values calculated from the excursions are consistent with macroscopically determined properties , suggesting that the hall-petch slope may be used to predict whether slip transmission occurs during indentation . no slip transmission was observed in the mo bicrystals ; however , the staircase yielding story_separator_special_tag abstract dislocation-grain-boundary ( gb ) interactions in polycrystalline ice ih during creep have been studied in situ using synchrotron x-ray topography .
the basal slip system with the highest schmid factor was found to be the most active in polycrystalline ice whereas the gb orientation relative to the loading direction seemed unimportant . gbs act both as effective sources of lattice dislocations and as strong obstacles to dislocation motion . the observations revealed pile-up formation upon loading and pile-up relaxation after unloading . non-basal segments of lattice dislocations can be generated from gbs in ice . however , they neither noticeably decrease stress concentrations nor contribute significantly to the overall plastic deformation . it was found that dislocations can be generated from both free-surface-gb intersections and from the interiors of gbs , indicating that the dislocation generation mechanism presented in our 1993 paper is not a surface artefact . evidence is also presented that , because .
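the schmid factor invoked in the ice experiments above is likewise a one-line computation ; the sketch below is illustrative only ( the loading axis and the basal-type slip system are hypothetical ) and returns m = cos φ cos λ for uniaxial loading .

import numpy as np

def schmid_factor(load_axis, plane_normal, slip_dir):
    # m = cos(phi) * cos(lambda) for a uniaxial load along load_axis
    t, n, d = (np.asarray(v, float) / np.linalg.norm(v)
               for v in (load_axis, plane_normal, slip_dir))
    return abs(np.dot(t, n)) * abs(np.dot(t, d))

# hypothetical basal slip with the plane normal at 45 degrees to the load
c, s = np.cos(np.radians(45.0)), np.sin(np.radians(45.0))
print(schmid_factor([0, 0, 1], [0, s, c], [0, c, -s]))  # -> 0.5 (the maximum)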
preface.- 1. basic analysis.- 2. littlewood-paley theory.- 3. transport and transport-diffusion equations.- 4. quasilinear symmetric systems.- 5. incompressible navier-stokes system.- 6. anisotropic viscosity.- 7. euler system for perfect incompressible fluids.- 8. strichartz estimates and applications to semilinear dispersive equations.- 9. smoothing effect in quasilinear wave equations.- 10. the compressible navier-stokes system.- references.- list of notations.- index . story_separator_special_tag in the present paper , we prove the existence of global solutions for the navier-stokes equations in $\mathbb{R}^n$ when the initial velocity belongs to the weighted weak lorentz space $L^{n,\infty}(u)$ with a sufficiently small norm under a certain restriction on the weight $u$ . at the same time , self-similar solutions are induced if the initial velocity is , besides , a homogeneous function of degree $-1$ . also the uniqueness is discussed . story_separator_special_tag we prove that the cauchy problem for the three dimensional navier-stokes equations is ill posed in $\dot B^{-1,\infty}_{\infty}$ in the sense that a " norm inflation " happens in finite time . more precisely , we show that initial data in the schwartz class $\mathcal{S}$ that are arbitrarily small in $\dot B^{-1,\infty}_{\infty}$ can produce solutions arbitrarily large in $\dot B^{-1,\infty}_{\infty}$ after an arbitrarily short time . such a result implies that the solution map itself is discontinuous in $\dot B^{-1,\infty}_{\infty}$ at the origin . story_separator_special_tag for any discretely self-similar , incompressible initial data which are arbitrarily large in weak $L^3$ , we construct a forward discretely self-similar solution to the 3d navier-stokes equations in the whole space . this also gives a third construction of self-similar solutions for any $-1$-homogeneous initial data in weak $L^3$ , improving those in jia and sverak ( invent . math . 196 ( 1 ) : 233-265 , 2014 ) and korobkov and tsai ( forward self-similar solutions of the navier-stokes equations in the half space , arxiv:1409.2516 , 2016 ) for hölder continuous data . our method is based on a new , explicit a priori bound for the leray equations . story_separator_special_tag we study the solutions of the nonstationary incompressible navier-stokes equations in $\mathbb{R}^d$ , $d \ge 2$ , of self-similar form $u ( x , t ) = \frac{1}{\sqrt{t}} U ( \frac{x}{\sqrt{t}} )$ , obtained from small and homogeneous initial data $a ( x )$ . we construct an explicit asymptotic formula relating the self-similar profile $U ( x )$ of the velocity field to its corresponding initial datum $a ( x )$ . story_separator_special_tag contents : introduction . 1. preliminaries . 1.1. the navier-stokes equations . 1.2. classical , mild and weak solutions . 1.3. navier meets fourier . 2. functional setting of the equations . 2.1. the littlewood-paley decomposition . 2.2. the besov spaces . 2.3. the paraproduct rule . 2.4. the wavelet decomposition . 2.5. other useful function spaces . 3. existence theorems . 3.1. the fixed point theorem . 3.2. scaling invariance . 3.3. supercritical case . 3.4. critical case . 4. highly oscillating data . 4.1. a remarkable property of besov spaces . 4.2. oscillations without besov norms . 4.3. the result of koch and tataru .
5. uniqueness theorems . 5.1. weak solutions . 5.2. supercritical mild solutions . 5.3. critical mild solutions . 6. self-similar solutions . 6.1. backward : singular . 6.2. forward story_separator_special_tag we prove the existence of forward discretely self-similar solutions to the navier-stokes equations in $\mathbb{R}^3 \times ( 0 , +\infty )$ for a discretely self-similar initial velocity belonging to $L^2_{loc} ( \mathbb{R}^3 )$ . story_separator_special_tag we prove liouville type theorems for the self-similar solutions to the navier-stokes equations . one of our results generalizes the previous ones by necas , ruzicka and sverak and by tsai . using a liouville type theorem , we also remove a scenario of asymptotically self-similar blow-up for the navier-stokes equations with the profile belonging to $L^{p,\infty} ( \mathbb{R}^3 )$ with $p > \frac{3}{2}$ . story_separator_special_tag steady-state solutions of the navier-stokes equations : statement of the problem and open questions.- basic function spaces and related inequalities.- the function spaces of hydrodynamics.- steady stokes flow in bounded domains.- steady stokes flow in exterior domains.- steady stokes flow in domains with unbounded boundaries.- steady oseen flow in exterior domains.- steady generalized oseen flow in exterior domains.- steady navier-stokes flow in bounded domains.- steady navier-stokes flow in three-dimensional exterior domains . irrotational case.- steady navier-stokes flow in three-dimensional exterior domains . rotational case.- steady navier-stokes flow in two-dimensional exterior domains.- steady navier-stokes flow in domains with unbounded boundaries.- bibliography.- index . story_separator_special_tag we demonstrate the existence of time-periodic motions of an incompressible navier-stokes fluid subject to a time-periodic body force , occupying the region exterior to a body that performs a periodic rigid motion of the same period . story_separator_special_tag in 2001 , h. koch and d. tataru proved the existence of global in time solutions to the incompressible navier-stokes equations in $\mathbb{R}^d$ for initial data small enough in $BMO^{-1}$ . we show in this article that the koch and tataru solution has higher regularity . as a consequence , we get a decay estimate in time for any space derivative , and space analyticity of the solution . also as an application of our regularity theorem , we prove a regularity result for self-similar solutions . story_separator_special_tag ( 1989 ) . navier-stokes flow in $\mathbb{R}^3$ with measures as initial vorticity and morrey spaces . communications in partial differential equations : vol . 14 , no . 5 , pp . 577-618 . story_separator_special_tag any forward-in-time self-similar ( localized-in-space ) suitable weak solution to the 3d navier-stokes equations is shown to be infinitely smooth in both space and time variables . as an application , a proof of infinite space and time regularity of a class of a priori singular small self-similar solutions in the critical weak lebesgue space $L^{3,\infty}$ is given . story_separator_special_tag it is shown that the $L^{3,\infty}$-solutions of the cauchy problem for the three-dimensional navier-stokes equations are smooth .
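as standard background to the notions of criticality and self-similarity recurring in these abstracts ( stated here for reference , not quoted from any one of them ) : the navier-stokes equations are invariant under the scaling
\[ u_\lambda(x,t) = \lambda\, u(\lambda x, \lambda^2 t), \qquad p_\lambda(x,t) = \lambda^2\, p(\lambda x, \lambda^2 t), \qquad \lambda > 0, \]
and a space $X$ of initial data is called critical when $\| \lambda a(\lambda \cdot) \|_X = \| a \|_X$ , as holds along the classical chain
\[ L^3(\mathbb{R}^3) \hookrightarrow \dot B^{-1+3/p}_{p,\infty}(\mathbb{R}^3) \hookrightarrow BMO^{-1}(\mathbb{R}^3) \hookrightarrow \dot B^{-1,\infty}_{\infty}(\mathbb{R}^3), \qquad 3 < p < \infty, \]
the last space being the one in which the norm inflation above takes place .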
story_separator_special_tag we show that the classical cauchy problem for the incompressible 3d navier-stokes equations with $(-1)$-homogeneous initial data has a global scale-invariant solution which is smooth for positive times . our main technical tools are local-in-space regularity estimates near the initial time , which are of independent interest . story_separator_special_tag an important open problem in the theory of the navier-stokes equations is the uniqueness of the leray-hopf weak solutions with $L^2$ initial data . in this paper we give sufficient conditions for non-uniqueness in terms of spectral properties of a natural linear operator associated to scale-invariant solutions recently constructed by jia and sverak . if the spectral conditions are satisfied , non-uniqueness and ill-posedness can appear for quite benign compactly supported data , just at the borderline of applicability of the classical perturbation theory . the verification of the spectral conditions seems to be approachable by relatively straightforward numerical simulations which involve only smooth functions . story_separator_special_tag it is shown that the nonstationary navier-stokes equation ( ns ) in $\mathbb{R}_+ \times \mathbb{R}^m$ is well posed in certain morrey spaces $M^{p,\lambda} ( \mathbb{R}^m )$ ( see the text for the definition : in particular $M^{p,0} = L^p$ if $p > 1$ and $M^{1,0}$ is the space of finite measures ) , in the following sense . given a vector $a \in M^{p,m-p}$ with $\operatorname{div} a = 0$ and with certain supplementary conditions , there is a unique local ( in time ) solution ( velocity field ) $u ( t , \cdot ) \in M^{p,m-p}$ , which is smooth for $t > 0$ and takes the initial value $a$ at least in a weak sense . $u$ is a global solution if $a$ is sufficiently small . of particular interest is the space $M^{1,m-1}$ , which admits certain measures ; thus $a$ may be a surface measure on a smooth $( m-1 )$-dimensional surface in $\mathbb{R}^m$ . the regularity of solutions and the decay of global solutions are also considered . the associated vorticity equation ( for the vorticity $\omega = \operatorname{curl} u$ ) can similarly be solved in ( tensor-valued ) $M^{1,m-2}$ , which is story_separator_special_tag $\partial_t u - \Delta u + ( u \cdot \nabla ) u + \nabla p = 0 , \quad \operatorname{div} u = 0 ,$ where $u$ is the velocity and $p$ is the pressure . it is well known that the navier-stokes equations are locally well-posed for smooth enough initial data as long as one imposes appropriate boundary conditions on the pressure at infinity . for instance it is easy to see ( see [ 9 ] for much more general results ) that if $s > \frac{n}{2}$ then for any $H^s$ initial data there exists a unique $C ( [ 0 , T ] ; H^s ( \mathbb{R}^n ) )$ local solution with a pressure $p \in C ( [ 0 , T ] ; H^s ( \mathbb{R}^n ) )$ . in the sequel we consider solutions for less regular initial data . this has to be understood in the sense that the map from the initial data to the solution extends continuously to rougher function spaces . the question we are interested in is the global well-posedness for small data and local well-posedness for large data , with respect to a certain space of story_separator_special_tag for the incompressible navier-stokes equations in the 3d half space , we show the existence of forward self-similar solutions for arbitrarily large self-similar initial data .
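for reference , a standard derivation ( not taken verbatim from the papers above ) : a forward self-similar solution has the form
\[ u(x,t) = \frac{1}{\sqrt{t}}\, U\!\left(\frac{x}{\sqrt{t}}\right), \]
and substituting this ansatz into the navier-stokes system yields the ( forward ) leray equations for the profile $U$ and its pressure $P$ , with $y = x/\sqrt{t}$ :
\[ -\Delta U - \tfrac{1}{2} U - \tfrac{1}{2}\,( y \cdot \nabla ) U + ( U \cdot \nabla ) U + \nabla P = 0, \qquad \nabla \cdot U = 0 . \]
the "explicit a priori bound for the leray equations" invoked above is a bound for solutions of exactly this profile system .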
story_separator_special_tag presentation of the clay millennium prizes.- regularity of the three-dimensional fluid flows : a mathematical challenge for the 21st century.- the clay millennium prizes.- the clay millennium prize for the navier-stokes equations.- boundaries and the navier-stokes clay millennium problem.- the physical meaning of the navier-stokes equations.- frames of references.- the convection theorem.- conservation of mass.- newton 's second law.- pressure.- strain.- stress.- the equations of hydrodynamics.- the navier-stokes equations.- vorticity.- boundary terms.- blow up.- turbulence.- history of the equation.- mechanics in the scientific revolution era.- bernoulli 's hydrodynamica.- d'alembert.- euler.- laplacian physics.- navier , cauchy , poisson , saint-venant , and stokes.- reynolds.- oseen , leray , hopf , and ladyzhenskaya.- turbulence models.- classical solutions.- the heat kernel.- the poisson equation.- the helmholtz decomposition.- the stokes equation.- the oseen tensor.- classical solutions for the navier-stokes problem.- small data and global solutions.- time asymptotics for global solutions.- steady solutions.- spatial asymptotics.- spatial asymptotics for the vorticity.- intermediate conclusion.- a capacitary approach of the navier-stokes integral equations.- the integral navier-stokes problem.- quadratic equations in banach spaces.- a capacitary approach of quadratic integral equations.- generalized riesz potentials on spaces of homogeneous type.- dominating functions for the navier-stokes integral equations.- a proof of oseen story_separator_special_tag analytical and numerical studies are developed concerning the expanding sphere problem in a non-viscous gas . it is shown that the non-linear effects cannot be completely neglected even at low expanding velocity . a comparison of numerical and analytical results is carried out for a large range of the expanding sphere velocity . introduction . the study presented below is concerned with the analytical determination of the flow fields generated during the constant-velocity expansion of a spherical piston in an unbounded , non-viscous perfect gas . one of the motivations for this study currently lies in the prediction of the dynamic effects generated by a diverging spherical deflagration , which can occur , for example , after the accidental ignition of a combustible cloud in open space . for safety reasons relating to the storage of combustible gases , this kind of study has recently seen renewed interest , as evidenced by the numerous works published on this subject in recent years , [ 1 ] , [ 2 ] , [ 3 ] , [ 4 ] , works which are for the most part semi-phenomenological or numerical in nature . the story_separator_special_tag abstract . this paper proves that leray 's self-similar solutions of the three dimensional navier-stokes equations must be trivial under very general assumptions , for example , if they satisfy local energy estimates . story_separator_special_tag the authors present a unified treatment of basic topics that arise in fourier analysis . their intention is to illustrate the role played by the structure of euclidean spaces , particularly the action of translations , dilatations , and rotations , and to motivate the study of harmonic analysis on more general spaces having an analogous structure , e.g. , symmetric spaces . story_separator_special_tag i. the steady-state stokes equations . 1. some function spaces . 2.
existence and uniqueness for the stokes equations . 3. discretization of the stokes equations ( i ) . 4. discretization of the stokes equations ( ii ) . 5. numerical algorithms . 6. the penalty method . ii . the steady-state navier-stokes equations . 1. existence and uniqueness theorems . 2. discrete inequalities and compactness theorems . 3. approximation of the stationary navier-stokes equations . 4. bifurcation theory and non-uniqueness results . iii . the evolution navier-stokes equations . 1. the linear case . 2. compactness theorems . 3. existence and uniqueness theorems . ( n < 4 ) . 4. alternate proof of existence by semi-discretization . 5. discretization of the navier-stokes equations : general stability and convergence theorems . 6. discretization of the navier-stokes equations : application of the general results . 7. approximation of the navier-stokes equations by the projection method . 8. approximation of the navier-stokes equations by the artificial compressibility method . appendix i : properties of the curl operator and application to the steady-state navier-stokes equations . appendix ii . ( by f. thomasset ) : implementation of non-conforming linear finite elements story_separator_special_tag extending the work of jia and sverak on self-similar solutions of the navier stokes equations , we show the existence of large , forward , discretely self-similar solutions .
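the " discretely self-similar " ( dss ) solutions appearing repeatedly above relax exact scale invariance to invariance under one fixed factor ; the standard definition , recalled here for the reader , is
\[ u(x,t) = \lambda\, u(\lambda x, \lambda^2 t) \quad \text{for one fixed } \lambda > 1, \qquad a(x) = \lambda\, a(\lambda x) \ \text{for the initial data}; \]
requiring the relation for every $\lambda > 0$ recovers the self-similar , $(-1)$-homogeneous case .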
nowadays , mobile devices are an important part of our everyday lives since they enable us to access a large variety of ubiquitous services . in recent years , the availability of these ubiquitous and mobile services has significantly increased due to the different forms of connectivity provided by mobile devices , such as gsm , gprs , bluetooth and wi-fi . in the same trend , the number and typologies of vulnerabilities exploiting these services and communication channels have increased as well . therefore , smartphones may now represent an ideal target for malware writers . as the number of vulnerabilities and , hence , of attacks increase , there has been a corresponding rise of security solutions proposed by researchers . due to the fact that this research field is immature and still unexplored in depth , with this paper we aim to provide a structured and comprehensive overview of the research on security solutions for mobile devices . this paper surveys the state of the art on threats , vulnerabilities and security solutions over the period 2004-2011 , by focusing on high-level attacks , such as those to user applications . we group existing approaches aimed at protecting story_separator_special_tag mobile networks are vulnerable to signaling attacks and storms that are caused by traffic patterns that overload the control plane , and differ from distributed denial of service ( ddos ) attacks in the internet since they directly attack the control plane , and also reserve wireless bandwidth without actually using it . such attacks can result from malware and mobile botnets , as well as from poorly designed applications , and can cause service outages in 3g and 4g networks which have been experienced by mobile operators . since the radio resource control ( rrc ) protocol in 3g and 4g networks is particularly susceptible to such attacks , we analyze their effect with a mathematical model that helps to predict the congestion that is caused by an attack . a detailed simulation model of a mobile network is used to better understand the temporal dynamics of user behavior and signaling in the network and to show how rrc based signaling attacks and storms cause significant problems in the control plane and the user plane of the network . our analysis also serves to identify how storms can be detected , and to propose how system parameters can be story_separator_special_tag nowadays , mobile devices like smartphones , tablets and personal digital assistants play an essential part in our daily lives . a high-end mobile device performs the same functionality as a computer . android-based smartphones have become more vulnerable because of the open source operating system . anyone can develop a new application and post it to the android market . these types of applications are not verified by an authorized company , so the market may include malevolent applications such as viruses , spyware , worms , etc . , which can cause system failure , waste memory resources , corrupt data , steal personal information and also increase the maintenance cost . due to these reasons , mobile security is an essential concern in mobile computing . the existing systems are not able to detect new viruses due to the limitation of updated signatures .
the proposed system aims to motivate static code analysis based malware detection using a search-based machine learning algorithm called n-gram analysis , which detects unnoticed malicious characteristics or vulnerabilities in mobile applications . story_separator_special_tag the proportion of bank lending to the agricultural sector is generally low across the globe , and the situation is no different in kenya . this is despite the fact that commercial banks have continued to launch tailor-made loan products that target specific groups in the sector . studies acknowledging low credit volume in the sector have mostly focused on supply side factors that account for the status . this paper investigated mobile banking technology adoption as a factor influencing the level of agricultural credit demand by agricultural households . using data from dairy farmers , the study explored the relationship between an individual's espousal of mobile banking technology and the likelihood to access a commercial bank loan through the mobile-banking platform . specific socio-demographic factors were hypothesized to moderate the relationship between mobile-banking technology adoption and credit access . the study was anchored on the fact that the world is swiftly transiting from an industrial to a knowledge-based technological environment for sustainable development . in line with this , commercial banks have been in the forefront in substituting traditional banking models with innovative technology based models in offering banking services including credit an individual's espousal and frequency story_separator_special_tag portable devices are today used in all areas of life thanks to their ease of use as well as their applications with unique features . the increase in the number of users , however , also leads to an increase in security threats . this study examines the threats to mobile operating systems . addressing the four mobile operating systems ( android , apple os ( ios ) , symbian and java me ) with the highest number of users , the study provides statistical information about the features of the corresponding operating systems and their areas of use . in the study , the most important threats faced by the mobile operating systems ( malware , vulnerabilities , attacks ) and the risks posed by these threats were analyzed in chronological order and a future-oriented security perspective was suggested . story_separator_special_tag around the globe , mobile devices like smartphones , pdas & tablets are playing an essential role in every person's day-to-day life . various operating systems such as android , ios , blackberry etc . provide a platform for smart devices . google 's android is one of the most popular and user-friendly open source software platforms for mobile devices . along with its convenience , people are likely to download and install malicious applications developed to misguide the user , which creates a security gap for the attacker to penetrate . hackers are inclined to discover and exploit the new vulnerabilities which come forth with the latest versions of android . in this paper , we concentrate on examining and understanding the vulnerabilities existing in the android operating system . we will also suggest a metadata model which gives the information on all the related terms required for vulnerability assessment . here , the analyzed data are extracted from the open source vulnerability database ( osvdb ) and the national vulnerability database ( nvd ) .
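the n-gram based static analysis described earlier in this section is straightforward to prototype . the sketch below is a toy illustration under assumed data — the hex-token corpus , the labels and the choice of a logistic-regression classifier are hypothetical stand-ins for the general technique , not the cited authors ' pipeline :

from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# hypothetical corpus: hex-encoded byte streams of apps, with labels
samples = ["6a 02 58 cd 80 6a 01", "55 8b ec 83 ec 10", "6a 02 58 cd 80 90"]
labels = [1, 0, 1]  # 1 = malware, 0 = benign

# count 3-grams over the byte tokens and fit a linear classifier
model = make_pipeline(
    CountVectorizer(analyzer="word", ngram_range=(3, 3)),
    LogisticRegression(max_iter=1000),
)
model.fit(samples, labels)
print(model.predict(["6a 02 58 cd 80 00"]))  # score an unseen byte stream

in a real system the same feature extraction would be applied to disassembled opcode sequences rather than toy strings , and the classifier choice would be tuned on a held-out set .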
story_separator_special_tag hadoop is a very efficient distributed processing framework . it is based on the map-reduce approach , where the application is divided into small fragments of work , each of which may be executed on any node in the cluster . hadoop is a very efficient tool for storing and processing unstructured , semi-structured and structured data . unstructured data usually refers to data stored in files rather than in the traditional row and column way . examples of unstructured data are e-mail messages , videos , audio files , photos , web-pages , and many other kinds of business documents . our work primarily focuses on detecting malware for unstructured data stored in a hadoop distributed file system environment . here we use clamav 's updated free virus signature database . we also propose a fast string search algorithm based on the map-reduce approach . story_separator_special_tag malware is a computer program or a piece of software that is designed to penetrate and damage computers without the owner 's permission . there are different malware types such as viruses , rootkits , keyloggers , worms , trojans , spywares , ransomware , backdoors , bots , logic bombs , etc . the volume , variants and speed of propagation of malware are increasing every year . antivirus companies are receiving thousands of malware samples on a daily basis , so detection of malware is a complex and time-consuming task . there are many malware detection techniques like signature-based detection , behavior-based detection and machine learning based techniques , etc . the signature-based detection system fails for new unknown malware . in the case of behavior-based detection , if the antivirus program identifies an attempt to change or alter a file or communication over the internet then it will generate an alarm signal , but there is still a chance of false positives . the obfuscation and polymorphism techniques also hinder the malware detection process . in this paper we propose a new method to detect malware based on the frequency of opcodes in the portable executable file . this story_separator_special_tag as smartphones and mobile devices are rapidly becoming indispensable for many network users , mobile malware has become a serious threat to network security and privacy . especially on the popular android platform , many malicious apps are hiding in a large number of normal apps , which makes malware detection more challenging . in this paper , we propose an ml-based method that utilizes more than 200 features extracted from both static analysis and dynamic analysis of android apps for malware detection . the comparison of modeling results demonstrates that the deep learning technique is especially suitable for android malware detection and can achieve a high level of 96 % accuracy with real-world android application sets . story_separator_special_tag the use of smartphones ( sps ) with the android operating system ( aos ) has reached unprecedented popularity . this is due to the many features that these devices offer , such as internet connection , storage of information as well as the ability to perform diverse online transactions . as a result , these devices have become the main target of malware attacks that try to exploit the security vulnerabilities of aos . therefore , in order to mitigate these attacks , methods for malware analysis and detection are needed . in this work a method for analysis and detection of malware , which can run natively on the device , is proposed .
the approach can analyze applications already installed on the device and monitor new app installations or updates . static analysis is used to determine the permissions , hardware and software features requested by applications . an application being analyzed is classified as malware or benign using a model based on ensemble machine learning classifiers and feature selection algorithms . to validate the proposed method , 1377 malware samples and 1377 benign samples , collected from different sources , were used . results show that the proposed approach detects malware with 96.26 % of story_separator_special_tag google 's android platform is a widely anticipated open source operating system for mobile phones . this article describes android 's security model and attempts to unmask the complexity of secure application development . the authors conclude by identifying lessons and opportunities for future enhancements . story_separator_special_tag the most popular smartphone platforms , i.e . android and ios , are equipped with built-in security features to safeguard their end users . android , being an open source mobile operating system , has some security vulnerabilities . such limitations are also present in ios , which is a proprietary platform with some open source components . in this paper we will compare in detail the security features of android and ios , with the intent to integrate the need based security ( nbs ) model in android , which selectively grants permission to access resources on a smartphone at run time . this paper proposes the implementation of a reverse engineering process which restricts an app 's permissions and provides a need based mechanism to access resources . the repackaged app with need based security will run on all devices that were supported by the original application . story_separator_special_tag we live in the era of mobile computing . mobile devices have more sensors and more capabilities than desktop computers . for any computing device that contains sensitive information and accesses the internet , security is a major concern for both enterprises and end-users . of the mobile devices commonly in use , ios and android are the prevalent platforms ; each platform has a unique architecture and security policy relating to how they handle these sensitive permissions ; due to these differences one platform is likely more secure than the other . a deep static and dynamic analysis of the applications available for each platform was conducted in order to determine on which platform overprivileged applications were more prevalent . story_separator_special_tag the massive adoption of mobile devices by individuals as well as by organizations has brought forth many security concerns . their significant abilities have resulted in their permeating use while correspondingly increasing their attractiveness as targets for cybercriminals . consequently , mobile device vendors have increasingly focused on security in their design efforts . however , present security features might still be insufficient to protect users ' assets . in this paper , factors that influence security within the two leading mobile platforms , android and ios , are presented and examined to promote discussion while studying them under one umbrella . we consider various factors that influence security on both platforms , such as application provenance , application permissions , application isolation , and encryption mechanisms . story_separator_special_tag a smart phone is a mobile phone with highly advanced features .
a smart phone has a high resolution touch screen display , wi-fi connectivity , web browsing capabilities and the ability to accept sophisticated applications . the majority of these devices run on one of these popular mobile operating systems , such as android , ios , the blackberry operating system and the windows operating system . today the smart phone world is categorized into three camps depending upon the mobile operating system which is used in a particular smart phone . these three major mobile operating systems are android from google , ios from apple and windows from microsoft . technology and features may vary from one type of mobile operating system to another . this paper produces a comparative study on the smart phone operating systems android , ios and windows . big differences are highlighted in how ios and android are developed . at the present time we can see that ios provides much security for the user , but we can also see that android ( google ) has given security features such as the security patch system , which comes inbuilt in new mobile story_separator_special_tag in this paper smartphones are discussed . today 's smartphones are more common than computers . in fact , smart phones are simply computers with extra hardware , namely , a gsm ( global system for mobile communications ) radio and a baseband processor to control it . these extra features are great , but with the power they provide , there 's also a threat . today , smartphones are becoming targets of attackers in the same way pcs have been for many years . this paper focuses on the security models of two smart phone operating systems : apple 's ios and google 's android . these two have a special place in my heart because i was the first to publicly exploit both of them . story_separator_special_tag modern smartphones have a rich spectrum of increasingly sophisticated features , opening opportunities for software-led innovation . of the large number of platforms to develop new software on , in this paper we look closely at three platforms identified as market leaders for the smartphone market by gartner group in 2013 and one platform , firefox os , representing a new paradigm for operating systems based on web technologies . we compare the platforms in several different categories , such as software architecture , application development , platform capabilities and constraints , and , finally , developer support . using the implementation of a mobile version of the tic-tac-toe game on all the four platforms , we seek to investigate strengths , weaknesses and challenges of mobile application development on these platforms . big differences are highlighted when inspecting community environments , hardware abilities and platform maturity . these inevitably impact upon developer choices when deciding on mobile platform development strategies . story_separator_special_tag the operating system platforms of current mainstream intelligent equipment and traditional pcs have gradually evolved into the windows , ios and android camps . among intelligent equipment , android and ios have gained the initiative due to their openness and flexibility , while windows has been in a worse situation . but with microsoft 's windows 8 and windows phone 8 , the pattern of the future may change dramatically . microsoft is expecting a dominant position in this intelligent device operating system competition via announcing these new members . nokia hopes the windows system could bring it back to its former glory .
this thesis conducts a detailed comparison of these two brand new operating systems , and tries to identify their respective strengths in intelligent-device applications . at the same time , the author hopes to help clarify the operating-system direction for both companies in this increasingly serious ecosystem competition . story_separator_special_tag the technological advancements in mobile connectivity services such as gprs , gsm , 3g , 4g , blue-tooth , wimax , and wi-fi made mobile phones a necessary component of our daily lives . also , mobile phones have become smart , which lets users perform routine tasks on the go . however , this rapid increase in technology and tremendous usage of smartphones make them vulnerable to malware and other security breaching attacks . this diverse range of mobile connectivity services , device software platforms , and standards makes it critical to look at the holistic picture of the current developments in smartphone security research . in this paper , our contribution is twofold . firstly , we review the threats , vulnerabilities , attacks and their solutions over the period of 2010-2015 with a special focus on smartphones . attacks are categorized into two types , i.e. , old attacks and new attacks . with this categorization , we aim to provide an easy and concise view of different attacks and the possible solutions to improve smartphone security . secondly , we critically analyze our findings and estimate the market growth of different operating systems for the story_separator_special_tag windows phone 7 is a new smartphone operating system with the potential to become one of the major smartphone platforms in the near future . phones based on windows phone 7 have only been available for a few months , so digital forensics of the new system is still in its infancy . this paper is a first look at windows phone 7 from a forensics perspective . it explains the main characteristics of the platform , the problems that forensic investigators face , methods to circumvent those problems and a set of tools to get data from the phone . data that can be acquired include the file system , the registry , and active tasks . based on the file system , further information like sms messages , emails and facebook data can be extracted . story_separator_special_tag two years ago , the authors assessed 20 mobile applications that worked with ics software and hardware . at that time , mobile technologies were widespread , but iot mania was only beginning . in that paper , the authors stated , convenience often wins over security . nowadays , you can monitor ( or even control ! ) your ics from a brand-new android [ device ] . today , the idea of putting logging , monitoring , and even supervisory/control functions in the cloud is not so farfetched . the purpose of this paper is to discuss how the landscape has evolved over the past two years and assess the security posture of scada systems and mobile applications in this new iot era . story_separator_special_tag mobile security draws more attention while the mobile device gains its popularity .
malware such as viruses , botnets and worms has become a concern owing to the frequent leakage of personal information . this paper investigates malicious attacks through bluetooth and malware in different operating systems of mobile devices such as blackberry os , ios , android os and windows phone . besides , countermeasures against vulnerabilities are also discussed to protect the security and privacy of mobile devices . story_separator_special_tag abstract scholars have not regarded somalia as a place of relevance to thinking about nuclear security . this article gives four reasons why this perspective is not well founded . first , as the state strengthens it needs an international atomic energy agency ( iaea ) nuclear security regime for the control of nuclear materials . second , it has unsecured uranium reserves that could be smuggled abroad . third , those unsecured uranium reserves could be accessed by terrorists for use in a dirty bomb . fourth , there is evidence of past ecomafia intent and planning , and possible success , in dumping radioactive waste on land in somalia or in its territorial waters . the article proposes an innovative system of uranium ore fingerprinting , covert sensors , mobile phone reporting and surveying and evaluation capabilities that would address all four issues . the proposed system would include a low-cost method for turning any smart phone into a radiation detector to crowdsource reporting of possible nuclear materials , plus aerial and underwater drones with low cost radiation sensors . story_separator_special_tag abstract as the number of android phones increases , there is a simultaneous increase in mobile malware apps that perform malicious activities such as misusing users ' private information , sending messages ( i.e . sms ) , reading users ' contact information , and harming users by exploiting the confidential data stored on the device . malware spreads not only by infecting users ' data but also by harming several organizations through the theft of private and confidential data . hence malware classification and identification is a critical issue . android users are often unaware whether the apps they use are infected with malware or not . android applications rely on a permission mechanism to show which permissions apps use to access information on the device . android apps installed on smart phones get access to all the required permissions during installation . google assures its customers of the security of the apps available for download from its play store . the android operating system is an open system that allows users to install applications downloaded from any unsafe site . however , the permission mechanism is story_separator_special_tag smartphones are becoming a vehicle to provide an efficient and convenient way to access , find and share information ; however , the availability of this information has caused an increase in cyber attacks . currently , cyber threats range from trojans and viruses to botnets and toolkits . presently , 96 % of smartphones do not have pre-installed security software . this lack of security is an opportunity for malicious cyber attackers to hack into the various devices that are popular ( i.e . android , iphone and blackberry ) . traditional security software found in personal computers ( pcs ) , such as firewalls , antivirus , and encryption , is not currently available in smartphones .
moreover , smartphones are even more vulnerable than personal computers because more people are using smartphones to do personal tasks . nowadays , smartphone users can email , use social networking applications ( facebook and twitter ) , buy and download various applications and shop . furthermore , users can now conduct monetary transactions , such as buying goods , redeeming coupons and tickets , banking and processing point-of-sale payments . monetary transactions are especially attractive to cyber attackers because they story_separator_special_tag application phishing attacks are rooted in users ' inability to distinguish legitimate applications from malicious ones . previous work has shown that personalized security indicators can help users in detecting application phishing attacks in mobile platforms . a personalized security indicator is a visual secret , shared between the user and a security-sensitive application ( e.g. , mobile banking ) . the user sets up the indicator when the application is started for the first time . later on , the application displays the indicator to authenticate itself to the user . despite their potential , no previous work has addressed the problem of how to securely set up a personalized security indicator -- a procedure that can itself be the target of phishing attacks . in this paper , we propose a setup scheme for personalized security indicators . our solution allows a user to identify the legitimate application at the time she sets up the indicator , even in the presence of malicious applications . we implement and evaluate a prototype of the proposed solution for the android platform . we also provide the results of a small-scale user study aimed at evaluating the usability and security of our solution story_separator_special_tag new techniques for detecting the presence of mobile malware can help protect smartphones from potential security threats . story_separator_special_tag the popularity of android os has dramatically increased malware apps targeting this mobile os . the daily amount of malware has overwhelmed the detection process . this fact has motivated the need for developing malware detection and family attribution solutions with the least manual intervention . in response , we propose the cypider framework , a set of techniques and tools aiming to perform a systematic detection of mobile malware by building an efficient and scalable similarity network infrastructure of malicious apps . our detection method is based on a novel concept , namely malicious community , in which we consider , for a given family , the instances that share common features . under this concept , we assume that multiple similar android apps with different authors are most likely to be malicious . cypider leverages this assumption for the detection of variants of known malware families and zero-day malware . it is important to mention that cypider does not rely on signature-based or learning-based patterns . instead , it applies community detection algorithms on the similarity network , which extracts sub-graphs considered as suspicious and most likely malicious communities . furthermore , we propose a novel fingerprinting technique , story_separator_special_tag purpose this paper aims to report on the information security behaviors of smartphone users in an affluent economy of the middle east .
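a toy sketch of the cypider idea above : build a similarity network over app fingerprints and extract dense communities as candidate malicious families . the fingerprints, the jaccard threshold, and the particular community-detection algorithm are assumptions for illustration ; cypider's actual fingerprinting and graph construction may differ .

```python
# sketch: similarity-network malware-family detection in the spirit of cypider
# (synthetic fingerprints; four planted "families" of near-duplicate apps).
import networkx as nx
import numpy as np
from networkx.algorithms.community import greedy_modularity_communities

rng = np.random.default_rng(1)
base = rng.integers(0, 2, size=(4, 64)).astype(bool)    # four seed family fingerprints
apps = []
for fam in base:
    for _ in range(10):
        noise = rng.random(64) < 0.05                   # small per-variant mutation
        apps.append(np.logical_xor(fam, noise))
fingerprints = np.array(apps)

def jaccard(a: np.ndarray, b: np.ndarray) -> float:
    union = np.logical_or(a, b).sum()
    return np.logical_and(a, b).sum() / union if union else 0.0

G = nx.Graph()
G.add_nodes_from(range(len(fingerprints)))
for i in range(len(fingerprints)):
    for j in range(i + 1, len(fingerprints)):
        s = jaccard(fingerprints[i], fingerprints[j])
        if s >= 0.6:                                    # assumed similarity threshold
            G.add_edge(i, j, weight=s)

# communities of mutually similar apps; sizable ones are suspicious families
communities = greedy_modularity_communities(G)
suspicious = [c for c in communities if len(c) >= 3]
print(f"{len(suspicious)} suspicious communities found")  # 4 planted families
```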
design/methodology/approach a model based on prior research , synthesized from a thorough literature review , is tested using survey data from 500 smartphone users representing three major mobile operating systems . findings the overall level of security behaviors is low . regression coefficients indicate that the efficacy of security measures and the cost of adopting them are the main factors influencing smartphone security behaviors . at present , smartphone users are more worried about malware and data leakage than targeted information theft . research limitations/implications threats and counter-measures co-evolve over time , and our findings , which describe the state of smartphone security at the current time , will need to be updated in the future . practical implications measures to improve security practices of smartphone users are needed urgently . the findings ind . story_separator_special_tag viruses and malware can spread from computer networks into mobile networks with the rapid growth of smart cellphone users . in a mobile network , viruses and malware can cause privacy data leakage , extra charges , and remote listening . furthermore , they can jam wireless servers by sending thousands of spam messages or track user positions through gps . because of the potential damage of mobile viruses , it is important for us to gain a deep understanding of the propagation mechanisms of mobile viruses . in this paper , we propose a two-layer network model for simulating virus propagation through both bluetooth and sms . different from previous work , our work addresses the impacts of human behaviors , i.e. , operational behavior and mobile behavior , on virus propagation . our simulation results provide further insights into the determining factors of virus propagation in mobile networks . moreover , we examine two strategies for restraining mobile virus propagation , i.e. , preimmunization and adaptive dissemination strategies drawing on the methodology of autonomy-oriented computing ( aoc ) . the experimental results show that our strategies can effectively protect large-scale and/or highly dynamic mobile networks . story_separator_special_tag abstract current mobile authentication solutions put a cognitive burden on users to detect and avoid man-in-the-middle attacks . in this paper , we present a mobile authentication protocol named mobile-id which prevents man-in-the-middle attacks without relying on a human in the loop . with mobile-id , the message signed by the secure element on the mobile device incorporates the context information of the connected service provider . hence , upon receiving the signed message the mobile-id server could easily identify the existence of an on-going attack and notify the genuine service provider . story_separator_special_tag abstract ssl/tls ( secure socket layer/transport layer security ) -enabled web applications aim to provide public key certificate based authentication , secure session key establishment , and symmetric key based traffic confidentiality . a large number of electronic commerce applications , such as stock trading , banking , shopping , and gaming rely on the security strength of the ssl/tls protocol . in recent times , a potential threat , known as the man-in-the-middle ( mitm ) attack , has been exploited by attackers of ssl/tls-enabled web applications , particularly when naive users want to connect to an ssl/tls-enabled web server . in this paper , we discuss the mitm threat to ssl/tls-enabled web applications .
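a minimal simulation in the spirit of the two-layer bluetooth/sms propagation model above : a short-range proximity layer coupled with a long-range contact layer , spread as a simple si process . the graph generators, sizes, and infection rates are invented for illustration and do not reproduce the paper's behavioral modeling .

```python
# sketch: SI spread over two coupled contact layers ("bluetooth" proximity
# graph + "sms" address-book graph); all parameters are assumptions.
import random

import networkx as nx

random.seed(2)
n = 500
bt = nx.watts_strogatz_graph(n, k=6, p=0.05, seed=2)    # proximity contacts
sms = nx.barabasi_albert_graph(n, m=2, seed=2)          # address-book contacts

infected = {0}                                          # patient-zero device
beta_bt, beta_sms = 0.15, 0.03                          # assumed per-contact rates
for step in range(30):
    new = set()
    for u in infected:
        for layer, beta in ((bt, beta_bt), (sms, beta_sms)):
            for v in layer.neighbors(u):
                if v not in infected and random.random() < beta:
                    new.add(v)
    infected |= new
    if step % 10 == 0:
        print(f"step {step:2d}: {len(infected)} infected")
print(f"final: {len(infected)} of {n} devices infected")
```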
we review the existing space of solutions to counter the mitm attack on ssl/tls-enabled applications , and then , we provide an effective solution which can resist the mitm attack on ssl/tls-enabled applications . the proposed solution uses a soft-token based approach for user authentication on top of ssl/tls 's security features . we show that the proposed solution is secure , efficient and user friendly in comparison to other similar approaches . story_separator_special_tag botnets are prevailing mechanisms for the facilitation of distributed denial of service ( ddos ) attacks on computer networks or applications . currently , botnet-based ddos attacks on the application layer are the latest and most problematic trends in network security threats . botnet-based ddos attacks on the application layer limit resources , curtail revenue , and yield customer dissatisfaction , among other effects . ddos attacks are among the most difficult problems to resolve online , especially when the target is a web server . in this paper , we present a comprehensive study to show the danger of botnet-based ddos attacks on the application layer , especially on web servers , and the incidents of such attacks , which have evidently increased recently . botnet-based ddos attack incidents and revenue losses of famous companies and government websites are also described . this provides a better understanding of the problem , current solution space , and future research scope to defend against such attacks efficiently . story_separator_special_tag the rapid development of smartphone technologies has resulted in the evolution of mobile botnets . the implications of botnets have inspired attention from academia and industry alike , including vendors , investors , hackers , and the researcher community . above all , the capability of botnets is uncovered through a wide range of malicious activities , such as distributed denial of service ( ddos ) , theft of business information , remote access , online or click fraud , phishing , malware distribution , spam emails , and building mobile devices for the illegitimate exchange of information and materials . in this study , we investigate mobile botnet attacks by exploring attack vectors and subsequently present a well-defined thematic taxonomy . by identifying the significant parameters from the taxonomy , we compared the effects of existing mobile botnets on commercial platforms as well as open source mobile operating system platforms . the parameters for review include mobile botnet architecture , platform , target audience , vulnerabilities or loopholes , operational impact , and detection approaches . in relation to our findings , research challenges are then presented in this domain . story_separator_special_tag a backdoor is a mechanism surreptitiously introduced into a computer system that is widely used in performing network attacks ; it helps an attacker bypass a computer 's normal authentication methods and maintain the access gained . in this article , we consider how to detect its presence . the latest research in this field has emphasized analyzing only the behavior of backdoors . however , in this paper we propose a novel approach combining systemic and behavioral features , focusing on the `` cmd '' phase in which the attacker sends commands to the victim .
in the detection method presented in this article , we first gather the systemic and behavioral alerts produced while the attacker is installing and interactively using the backdoor , and then categorize them by selected features in order to score both aspects . scores are given in two steps . the first step is based on prominent systemic alerts that are specific to backdoors , and in the second step we score the behavior in the command phase by creating and running a markov model . literally , the scores story_separator_special_tag abstract the ios operating system has long been a subject of interest among the forensics and law enforcement communities . with a large base of interest among consumers , it has become the target of many hackers and criminals alike , with many celebrity thefts ( for example , the recent article how did scarlett johansson 's phone get hacked ? ) of data raising awareness of personal privacy . recent revelations ( privacy scandal : nsa can spy on smart phone data , 2013 , how the nsa spies on smartphones including the blackberry ) exposed the use ( or abuse ) of operating system features in the surveillance of targeted individuals by the national security agency ( nsa ) , of whom some subjects appear to be american citizens . this paper identifies the most probable techniques that were used , based on the descriptions provided by the media , and today 's possible techniques that could be exploited in the future , based on what may be back doors , bypass switches , general weaknesses , or surveillance mechanisms intended for enterprise use in current release versions of ios . more importantly , i will identify several
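a toy illustration of the markov-model scoring step in the backdoor-detection approach above : train transition probabilities on benign command sessions, then flag sessions whose average log-likelihood under the chain is low . the command vocabulary, sessions, and smoothing are invented ; the paper's actual features and scoring differ in detail .

```python
# sketch: first-order markov chain over shell commands; low likelihood in
# the "cmd" phase suggests backdoor-like behavior (all data hypothetical).
import math
from collections import Counter, defaultdict

benign_sessions = [
    ["ls", "cd", "ls", "cat", "ls", "cd", "vim", "ls"],
    ["cd", "ls", "cat", "grep", "ls", "cd", "ls"],
]
counts = defaultdict(Counter)
for s in benign_sessions:
    for a, b in zip(s, s[1:]):
        counts[a][b] += 1

def avg_loglik(session, alpha=0.1, vocab_size=50):
    """mean log p(next | current) with add-alpha smoothing."""
    total = 0.0
    for a, b in zip(session, session[1:]):
        row = counts[a]
        p = (row[b] + alpha) / (sum(row.values()) + alpha * vocab_size)
        total += math.log(p)
    return total / (len(session) - 1)

print(avg_loglik(["ls", "cd", "ls", "cat"]))            # benign-looking: higher
print(avg_loglik(["whoami", "ifconfig", "nc", "wget"])) # backdoor-looking: much lower
```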
we report a measurement of the b-mode polarization power spectrum in the cosmic microwave background ( cmb ) using the polarbear experiment in chile . the faint b-mode polarization signature carries information about the universe 's entire history of gravitational structure formation , and the cosmic inflation that may have occurred in the very early universe . our measurement covers the angular multipole range $ 500 < \ell < 2100 $ and is based on observations of an effective sky area of 25 square degrees with 3.5 arcmin resolution at 150 ghz . on these angular scales , gravitational lensing of the cmb by intervening structure in the universe is expected to be the dominant source of b-mode polarization . including both systematic and statistical uncertainties , the hypothesis of no b-mode polarization power from gravitational lensing is rejected at 97.1 % confidence . the band powers are consistent with the standard cosmological model . fitting a single lensing amplitude parameter $ a_{bb} $ to the measured band powers , $ a_{bb} = 1.12 \pm 0.61 \, \mathrm{(stat)} \, ^{+0.04}_{-0.12} \, \mathrm{(sys)} \pm 0.07 \, \mathrm{(multi)} $ , where $ a_{bb} = 1 $ is the fiducial wmap-9 lcdm value . in this expression , story_separator_special_tag we present a measurement of the $ b $ -mode polarization power spectrum ( the $ bb $ spectrum ) from 100 $ \mathrm{deg}^2 $ of sky observed with sptpol , a polarization-sensitive receiver currently installed on the south pole telescope . the observations used in this work were taken during 2012 and early 2013 and include data in spectral bands centered at 95 and 150 ghz . we report the $ bb $ spectrum in five bins in multipole space , spanning the range $ 300 \le \ell \le 2300 $ , and for three spectral combinations : 95 ghz $ \times $ 95 ghz , 95 ghz $ \times $ 150 ghz , and 150 ghz $ \times $ 150 ghz . we subtract small ( $ < 0.5 \sigma $ in units of statistical uncertainty ) biases from these spectra and account for the uncertainty in those biases . the resulting power spectra are inconsistent with zero power but consistent with predictions for the $ bb $ spectrum arising from the gravitational lensing of $ e $ -mode polarization . if we assume no other source of $ bb $ power besides lensed story_separator_special_tag we report results from the bicep2 experiment , a cosmic microwave background ( cmb ) polarimeter specifically designed to search for the signal of inflationary gravitational waves in the b-mode power spectrum around $ \ell \sim 80 $ . the telescope comprised a 26 cm aperture all-cold refracting optical system equipped with a focal plane of 512 antenna coupled transition edge sensor 150 ghz bolometers each with temperature sensitivity of 300 $ \mu \mathrm{k}_{\mathrm{cmb}} \sqrt{\mathrm{s}} $ . bicep2 observed from the south pole for three seasons from 2010 to 2012. a low-foreground region of sky with an effective area of 380 square deg was observed to a depth of 87 nk deg in stokes q and u. in this paper we describe the observations , data reduction , maps , simulations , and results . we find an excess of b-mode power over the base lensed-lcdm expectation in the range $ 30 < \ell < 150 $ , inconsistent with the null hypothesis at a significance of $ > 5 \sigma $ . through jackknife tests and simulations based on detailed calibration measurements we show that systematic contamination is much smaller than the observed excess .
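the single-amplitude fit quoted in the polarbear abstract above reduces to one-parameter weighted least squares : scale a fiducial lensing template to the measured band powers . the band powers, errors, and template below are invented numbers for illustration, not polarbear data .

```python
# sketch: fit a single lensing amplitude a_bb to bb band powers by
# analytic chi^2 minimization against a fiducial template (toy numbers).
import numpy as np

template = np.array([0.05, 0.09, 0.12, 0.10, 0.07])   # fiducial lensed-lcdm bb band powers
obs      = np.array([0.07, 0.10, 0.15, 0.08, 0.05])   # measured band powers
sigma    = np.array([0.04, 0.04, 0.05, 0.05, 0.06])   # 1-sigma band-power errors

w = template / sigma**2
a_hat = np.sum(w * obs) / np.sum(w * template)          # chi^2 minimum in closed form
a_err = np.sum(template**2 / sigma**2) ** -0.5          # gaussian 1-sigma uncertainty
print(f"a_bb = {a_hat:.2f} +/- {a_err:.2f}")
```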
cross correlating against wmap 23 ghz maps we find that galactic synchrotron makes a negligible story_separator_special_tag we present results from an analysis of all data taken by the bicep2 and keck array cosmic microwave background ( cmb ) polarization experiments up to and including the 2014 observing season . this includes the first keck array observations at 95 ghz . the maps reach a depth of 50 nk deg in stokes q and u in the 150 ghz band and 127 nk deg in the 95 ghz band . we take auto- and cross-spectra between these maps and publicly available maps from wmap and planck at frequencies from 23 to 353 ghz . an excess over lensed lcdm is detected at modest significance in the 95×150 bb spectrum , and is consistent with the dust contribution expected from our previous work . no significant evidence for synchrotron emission is found in spectra such as 23×95 , or for correlation between the dust and synchrotron sky patterns in spectra such as 23×353 . we take the likelihood of all the spectra for a multicomponent model including lensed lcdm , dust , synchrotron , and a possible contribution from inflationary gravitational waves ( as parametrized by the tensor-to-scalar ratio r ) using priors on the frequency spectral behaviors story_separator_special_tag the search for the curl component ( b mode ) in the cosmic microwave background ( cmb ) polarization induced by inflationary gravitational waves is described . the canonical single-field slow-roll model of inflation is presented , and we explain the quantum production of primordial density perturbations and gravitational waves . it is shown how these gravitational waves then give rise to polarization in the cmb . we then describe the geometric decomposition of the cmb polarization pattern into a curl-free component ( e mode ) and curl component ( b mode ) and show explicitly that gravitational waves induce b modes . we discuss the b modes induced by gravitational lensing and by galactic foregrounds and show how both are distinguished from those induced by inflationary gravitational waves . issues involved in the experimental pursuit of these b modes are described , and we summarize some of the strategies being pursued . we close with a brief discussion of some other avenues toward detecting/characterizing the inflationar . story_separator_special_tag bicep1 is a millimeter-wavelength telescope designed specifically to measure the inflationary b-mode polarization of the cosmic microwave background ( cmb ) at degree angular scales . we present results from an analysis of the data acquired during three seasons of observations at the south pole ( 2006 to 2008 ) . this work extends the two-year result published in chiang et al . ( 2010 ) , with additional data from the third season and relaxed detector-selection criteria . this analysis also introduces a more comprehensive estimation of band-power window functions , improved likelihood estimation methods and a new technique for deprojecting monopole temperature-to-polarization leakage which reduces this class of systematic uncertainty to a negligible level . we present maps of temperature , e- and b-mode polarization , and their associated angular power spectra . the improvement in the map noise level and polarization spectra error bars are consistent with the 52 % increase in integration time relative to chiang et al . ( 2010 ) . we confirm both self-consistency of the polarization data and consistency with the two-year results .
we measure the angular power spectra at $ 21 \le \ell \le 335 $ and find that the ee spectrum is story_separator_special_tag we report the results of a joint analysis of data from bicep2/keck array and planck . bicep2 and keck array have observed the same approximately 400 $ \mathrm{deg}^2 $ patch of sky centered on ra 0 h , dec. -57.5° . the combined maps reach a depth of 57 nk deg in stokes q and u in a band centered at 150 ghz . planck has observed the full sky in polarization at seven frequencies from 30 to 353 ghz , but much less deeply in any given region ( 1.2 $ \mu $ k deg in q and u at 143 ghz ) . we detect 150×353 cross-correlation in b modes at high significance . we fit the single- and cross-frequency power spectra at frequencies $ \ge $ 150 ghz to a lensed-lcdm model that includes dust and a possible contribution from inflationary gravitational waves ( as parametrized by the tensor-to-scalar ratio r ) , using a prior on the frequency spectral behavior of polarized dust emission from previous planck analysis of other regions of the sky . we find strong evidence for dust and no statistically significant evidence for tensor modes . we probe various model variations and extensions , including adding story_separator_special_tag bicep3 is a 550 mm-aperture refracting telescope for polarimetry of radiation in the cosmic microwave background at 95 ghz . it adopts the methodology of bicep1 , bicep2 and the keck array experiments ; it possesses sufficient resolution to search for signatures of the inflation-induced cosmic gravitational-wave background while utilizing a compact design for ease of construction and to facilitate the characterization and mitigation of systematics . however , bicep3 represents a significant breakthrough in per-receiver sensitivity , with a focal plane area 5x larger than a bicep2/keck array receiver and faster optics ( f/1.6 vs. f/2.4 ) . large-aperture infrared-reflective metal-mesh filters and infrared-absorptive cold alumina filters and lenses were developed and implemented for its optics . the camera consists of 1280 dual-polarization pixels ; each is a pair of orthogonal antenna arrays coupled to transition-edge sensor bolometers and read out by multiplexed squids . upon deployment at the south pole during the 2014-15 season , bicep3 will have survey speed comparable to keck array 150 ghz ( 2013 ) , and will significantly enhance spectral separation of primordial b-mode power from that of possible galactic dust contamination in the bicep2 observation patch story_separator_special_tag we have developed antenna-coupled transition-edge sensor bolometers for a wide range of cosmic microwave background ( cmb ) polarimetry experiments , including bicep2 , keck array , and the balloon borne spider . these detectors have reached maturity and this paper reports on their design principles , overall performance , and key challenges associated with design and production . our detector arrays repeatedly produce spectral bands with 20 % - 30 % bandwidth at 95 , 150 , or 230 ghz . the integrated antenna arrays synthesize symmetric co-aligned beams with controlled side-lobe levels . cross-polarized response on boresight is typically $ \sim 0.5 \% $ , consistent with cross-talk in our multiplexed readout system . end-to-end optical efficiencies in our cameras are routinely 35 % or higher , with per detector sensitivities of net $ \sim 300 \ \mu \mathrm{k}_{\mathrm{cmb}} \sqrt{\mathrm{s}} $ .
thanks to the scalability of this design , we have deployed 2560 detectors as 1280 matched pairs in keck array with a combined instantaneous sensitivity of $ \sim 9 \ \mu \mathrm{k}_{\mathrm{cmb}} \sqrt{\mathrm{s}} $ story_separator_special_tag between the bicep2 and keck array experiments , we have deployed over 1500 dual polarized antenna coupled bolometers to map the cosmic microwave background 's polarization . we have been able to rapidly deploy these detectors because they are completely planar with an integrated phased-array antenna . through our experience in these experiments , we have learned of several challenges with this technology - specifically the beam synthesis in the antenna - and in this paper we report on how we have modified our designs to mitigate these challenges . in particular , we discuss differential steering errors between the polarization pairs ' beam centroids due to microstrip cross talk and gradients of penetration depth in the niobium thin films of our millimeter wave circuits . we also discuss how we have suppressed side lobe response with a gaussian taper of our antenna illumination pattern . these improvements will be used in spider , polar-1 , and this season 's retrofit of keck array . story_separator_special_tag bicep3 is a 520 mm aperture , compact two-lens refractor designed to observe the polarization of the cosmic microwave background ( cmb ) at 95 ghz . its focal plane consists of modularized tiles of antenna-coupled transition edge sensors ( tess ) , similar to those used in bicep2 and the keck array . the increased per-receiver optical throughput compared to bicep2/keck array , due to both its faster f/1.7 optics and the larger aperture , more than doubles the combined mapping speed of the bicep/keck program . the bicep3 receiver was recently upgraded to a full complement of 20 tiles of detectors ( 2560 tess ) and is now beginning its second year of observation ( and first science season ) at the south pole . we report on its current performance and observing plans . given its high per-receiver throughput while maintaining the advantages of a compact design , bicep3-class receivers are ideally suited as building blocks for a 3rd-generation cmb experiment , consisting of multiple receivers spanning 35 ghz to 270 ghz with total detector count in the tens of thousands . we present plans for such an array , the new `` bicep array '' that story_separator_special_tag we report on the design and performance of our second-generation 32-channel time-division multiplexer developed for the readout of large-format arrays of superconducting transition-edge sensors . we present design issues and measurement results on its gain , bandwidth , noise , and cross talk . in particular , we discuss noise performance at low frequency , important for long uninterrupted submillimeter/far-infrared observations , and present a scheme for mitigation of low-frequency noise . also , results are presented on the decoupling of the input circuit from the first-stage feedback signal by means of a balanced superconducting quantum interference device pair . finally , the first results of multiplexing several input channels in a switched , digital flux-lock loop are shown . story_separator_special_tag we have developed multi-channel electronics ( mce ) which work in concert with time-domain multiplexors developed at nist , to control and read signals from large format bolometer arrays of superconducting transition edge sensors ( tess ) .
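a back-of-envelope check on the array-sensitivity numbers quoted above : under ideal uncorrelated-detector scaling, array net goes as the per-detector net divided by the square root of the detector count . with the ~300 μk√s per-detector figure and 2560 detectors the ideal value is about 5.9 μk√s ; the quoted ~9 plausibly reflects detector yield and observing efficiency, which is my assumption here rather than a statement from the papers .

```python
# ideal array-sensitivity scaling: net_array = net_det / sqrt(n_det)
net_det = 300.0              # per-detector net, uK_cmb * sqrt(s), from the abstract above
n_det = 2560                 # deployed detector count
net_array = net_det / n_det ** 0.5
print(f"ideal array net: {net_array:.1f} uK*sqrt(s)")   # ~5.9; quoted ~9 after real-world losses
```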
these electronics were developed as part of the submillimeter common-user bolometer array-2 ( scuba2 ) camera , but are now used in several other instruments . the main advantages of these electronics compared to earlier versions are that they are multi-channel , fully programmable , suited for remote operations and provide a clean geometry , with no electrical cabling outside of the faraday cage formed by the cryostat and the electronics chassis . story_separator_special_tag ground-based millimeter and sub-millimeter telescopes are attempting to image the sky with ever-larger cryogenically-cooled bolometer arrays , but face challenges in mitigating the infrared loading accompanying large apertures . absorptive infrared filters supported by mechanical coolers scale insufficiently with aperture size . reflective metal-mesh filters placed behind the telescope window provide a scalable solution in principle , but have been limited by photolithography constraints to diameters under 300 mm . we present laser etching as an alternate technique to photolithography for fabrication of large-area reflective filters , and show results from lab tests of 500-mm-diameter filters . filters with up to 700-mm diameter can be fabricated using laser etching with existing capability . story_separator_special_tag the astronomical instrumentation group at cardiff university has been developing metal mesh optical filters for more than 30 years , which are currently in use in many ground- , balloon- and space-based instruments . here we review the current state of the art with respect to these quasi-optical components ( low-pass , high-pass and band-pass filters , dichroics and beam-dividers ) as developed for the fir and sub-millimetre wavelength region . we compare performance data with various modelling tools ( hfss , transmission line theory or floquet mode analysis ) . these models assist with our understanding of the behaviour of these filters when used at non-normal incidence or in the diffraction region of the grid structures . interesting artefacts , such as the wood anomalies and behaviour with s and p polarisations , which dictate the usage of these components in polarisation sensitive instruments , will be discussed . story_separator_special_tag bicep3 is a 550-mm aperture telescope with cold , on-axis , refractive optics designed to observe at the 95-ghz band from the south pole . it is the newest member of the bicep/keck family of inflationary probes specifically designed to measure the polarization of the cosmic microwave background ( cmb ) at degree angular scales . bicep3 is designed to house 1280 dual-polarization pixels , which , when fully populated , totals to $ \sim 9 \times $ the number of pixels in a single keck 95-ghz receiver , thus further advancing the bicep/keck program 's 95 ghz mapping speed . bicep3 was deployed during the austral summer of 2014 - 2015 with nine detector tiles , to be increased to its full capacity of 20 in the second season . after instrument characterization , measurements were taken , and cmb observation commenced in april 2015. together with multi-frequency observation data from planck , bicep2 , and the keck array , bicep3 is projected to set upper limits on the tensor-to-scalar ratio to $ r \lesssim 0.03 $ at 95 % c.l .
story_separator_special_tag the inflationary paradigm of the early universe predicts a stochastic background of gravitational waves which would generate a b-mode polarization pattern in the cosmic microwave background ( cmb ) at degree angular scales . precise measurement of b-modes is one of the most compelling observational goals in modern cosmology . since 2011 , the keck array has deployed over 2500 transition edge sensor ( tes ) bolometer detectors at 100 and 150 ghz to the south pole in pursuit of degree-scale b-modes , and bicep3 will follow in 2015 with 2500 more at 100 ghz . characterizing the spectral response of these detectors is important for controlling systematic effects that could lead to leakage from the temperature to polarization signal , and for understanding potential coupling to atmospheric and astrophysical emission lines . we present complete spectral characterization of the keck array detectors , made with a martin-puplett fourier transform spectrometer at the south pole , and preliminary spectra of bicep3 detectors taken in lab . we show band centers and effective bandwidths for both keck array bands , and use models of the atmosphere at the south pole to cross check our absolute calibration . our procedure for obtaining story_separator_special_tag bicep3 is a small-aperture refracting cosmic microwave background ( cmb ) telescope designed to make sensitive polarization maps in pursuit of a potential b-mode signal from inflationary gravitational waves . it is the latest in the bicep/keck array series of cmb experiments located at the south pole , which has provided the most stringent constraints on inflation to date . for the 2016 observing season , bicep3 was outfitted with a full suite of 2400 optically coupled detectors operating at 95 ghz . in these proceedings we report on the far field beam performance using calibration data taken during the 2015-2016 summer deployment season in situ with a thermal chopped source . we generate high-fidelity per-detector beam maps , show the array-averaged beam profile , and characterize the differential beam response between co-located , orthogonally polarized detectors which contributes to the leading instrumental systematic in pair differencing experiments . we find that the levels of differential pointing , beamwidth , and ellipticity are similar to or lower than those measured for bicep2 and keck array . the magnitude and distribution of bicep3 s differential beam mismatch and the level to which temperature-to-polarization leakage may be marginalized over or subtracted in
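the spectral characterization reported above reduces an fts spectrum to summary numbers ; one common convention takes the spectrum-weighted mean as the band center and $ (\int s \, d\nu)^2 / \int s^2 \, d\nu $ as the effective bandwidth . the synthetic band below and the exact estimator conventions are assumptions ; the keck array / bicep3 analysis may define these quantities differently .

```python
# sketch: band center and effective bandwidth from a (synthetic) fts spectrum.
import numpy as np

nu = np.linspace(70, 120, 501)                               # frequency grid, ghz
s = 1.0 / (1 + np.exp(-(nu - 82))) / (1 + np.exp(nu - 108))  # smooth ~82-108 ghz band

nu_c = np.trapz(nu * s, nu) / np.trapz(s, nu)     # spectrum-weighted band center
dnu = np.trapz(s, nu) ** 2 / np.trapz(s**2, nu)   # effective bandwidth
print(f"band center {nu_c:.1f} ghz, effective bandwidth {dnu:.1f} ghz ({dnu/nu_c:.0%})")
# a ~26 ghz band at ~95 ghz is ~27%, consistent with the 20%-30% figure quoted earlier
```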
this study investigated the effect of an innovative chilling device that intends to make subjects more alert and less sleepy . tests were conducted using a variety of methods including electroencephalography ( eeg ) brain tomography . a series of behavioral tests showed an increase in alertness , changes of body temperatures , and performance indicators after usage of this device . the device chills specific areas of the body and disrupts the body 's ability to self-regulate core body temperature . the induced temperature shifts may reduce the body 's capability to go to sleep . physiological changes and brain wave indicators of alertness were also reviewed in this paper . a full study of alertness indicators in expanded driver simulations is recommended . as for future application of this device to human factors aspects , with further improvement this device may have the potential to enhance alertness in the human dimension of machine operation of manned and unmanned assets . introduction this study has investigated a device that makes people more alert while driving and prevents them from falling asleep at the wheel . a prototype has been tested with alertness and behavioral performance measures as well as story_separator_special_tag a novel low-computation discriminative feature space is introduced for facial expression recognition capable of robust performance over a range of image resolutions . our approach is based on the simple local binary patterns ( lbp ) for representing salient micro-patterns of face images . compared to gabor wavelets , the lbp features can be extracted faster in a single scan through the raw image and lie in a lower dimensional space , whilst still retaining facial information efficiently . template matching with a weighted chi square statistic and support vector machines are adopted to classify facial expressions . extensive experiments on the cohn-kanade database illustrate that the lbp features are effective and efficient for facial expression discrimination . additionally , experiments on face images with different resolutions show that the lbp features are robust to low-resolution images , which is critical in real-world applications where only low-resolution video input is available . story_separator_special_tag computer animated agents and robots bring a social dimension to human computer interaction and force us to think in new ways about how computers could be used in daily life . face to face communication is a real-time process operating at a time scale in the order of 40 milliseconds . the level of uncertainty at this time scale is considerable , making it necessary for humans and machines to rely on sensory rich perceptual primitives rather than slow symbolic inference processes . in this paper we present progress on one such perceptual primitive . the system automatically detects frontal faces in the video stream and codes them with respect to 7 dimensions in real time : neutral , anger , disgust , fear , joy , sadness , surprise . the face finder employs a cascade of feature detectors trained with boosting techniques [ 15 , 2 ] . the expression recognizer receives image patches located by the face detector . a gabor representation of the patch is formed and then processed by a bank of svm classifiers . a novel combination of adaboost and svms enhances performance .
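a minimal numpy sketch of the local binary patterns ( lbp ) features described above : each pixel is coded by thresholding its 8 neighbours against the center value, and the face is summarized by a histogram of those codes . this is the basic 8-neighbour operator ; the paper's exact variant ( e.g. uniform patterns, region grids ) may differ .

```python
# sketch: 8-neighbour lbp codes of a grayscale image as a 256-bin histogram.
import numpy as np

def lbp_histogram(img: np.ndarray) -> np.ndarray:
    """lbp code per interior pixel, returned as a normalized histogram."""
    c = img[1:-1, 1:-1].astype(np.int32)
    neighbours = [                                  # 8 neighbours, clockwise
        img[:-2, :-2], img[:-2, 1:-1], img[:-2, 2:],
        img[1:-1, 2:], img[2:, 2:], img[2:, 1:-1],
        img[2:, :-2], img[1:-1, :-2],
    ]
    code = np.zeros_like(c)
    for bit, n in enumerate(neighbours):
        # set this bit where the neighbour is at least as bright as the center
        code += (n.astype(np.int32) >= c).astype(np.int32) << bit
    hist, _ = np.histogram(code, bins=256, range=(0, 256))
    return hist / hist.sum()

face = (np.random.default_rng(3).random((64, 64)) * 255).astype(np.uint8)
h = lbp_histogram(face)
print(h.shape, h.sum())   # (256,) 1.0 -- a feature vector for an svm or template match
```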
the system was tested on the cohn-kanade dataset story_separator_special_tag within the past decade , significant effort has occurred in developing methods of facial expression analysis . because most investigators have used relatively limited data sets , the generalizability of these various methods remains unknown . we describe the problem space for facial expression analysis , which includes level of description , transitions among expressions , eliciting conditions , reliability and validity of training and test data , individual differences in subjects , head orientation and scene complexity , image characteristics , and relation to non-verbal behavior . we then present the cmu-pittsburgh au-coded face expression image database , which currently includes 2105 digitized image sequences from 182 adult subjects of varying ethnicity , performing multiple tokens of most primary facs action units . this database is the most comprehensive testbed to date for comparative studies of facial expression analysis . story_separator_special_tag driver behavior plays a critical role in driving safety . besides alcohol and fatigue , emotion is another factor influencing driver behavior . thus , the detection of driver emotion can contribute to improving driving safety . in this paper , we use bayesian networks ( bns ) to develop a detection model of driver emotion with electroencephalogram ( eeg ) , which considers the two factors of driver personality and traffic situation . the preliminary experiment results suggest that this method is feasible and therefore can be used to provide adaptive aiding . story_separator_special_tag in this paper , we present a methodology and a wearable system for the evaluation of the emotional states of car-racing drivers . the proposed approach performs an assessment of the emotional states using facial electromyograms , electrocardiogram , respiration , and electrodermal activity . the system consists of the following : 1 ) the multisensorial wearable module ; 2 ) the centralized computing module ; and 3 ) the system 's interface . the system has been preliminarily validated by using data obtained from ten subjects in simulated racing conditions . the emotional classes identified are high stress , low stress , disappointment , and euphoria . support vector machines ( svms ) and adaptive neuro-fuzzy inference system ( anfis ) have been used for the classification . the overall classification rates achieved by using tenfold cross validation are 79.3 % and 76.7 % for the svm and the anfis , respectively . story_separator_special_tag supporting drivers by advanced driver assistance systems ( adas ) significantly increases road safety . driver 's emotion recognition is a building block of advanced systems for monitoring the driver 's comfort and driving ergonomics , in addition to driver fatigue and drowsiness forecasting . this paper presents an approach for driver emotion recognition involving a set of three physiological signals ( electrodermal activity , skin temperature and the electrocardiogram ) . additionally , we propose a cnn ( cellular neural network ) based classifier to classify each signal into four emotional states . moreover , the subject-independent classification results of all signals are fused using dempster-shafer evidence theory in order to obtain a more robust detection of the true emotional state .
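a toy illustration of the dempster-shafer fusion step mentioned above : combine per-signal mass assignments over the emotion classes with dempster's rule . the masses below are invented, and for simplicity this sketch restricts the rule to singleton hypotheses ( the general rule also handles composite hypothesis sets ) .

```python
# sketch: dempster's rule over singleton emotion hypotheses (toy masses).
def dempster_combine(m1: dict, m2: dict) -> dict:
    """combine two mass functions defined on singleton hypotheses only."""
    joint = {h: m1.get(h, 0.0) * m2.get(h, 0.0) for h in set(m1) | set(m2)}
    agreement = sum(joint.values())          # equals 1 - k, with k the conflict mass
    if agreement == 0.0:
        raise ValueError("total conflict: sources fully disagree")
    return {h: v / agreement for h, v in joint.items()}

ecg_masses = {"high stress": 0.6, "low stress": 0.2, "euphoria": 0.1, "disappointment": 0.1}
eda_masses = {"high stress": 0.5, "low stress": 0.3, "euphoria": 0.1, "disappointment": 0.1}
fused = dempster_combine(ecg_masses, eda_masses)
print(max(fused, key=fused.get), fused)      # agreement sharpens "high stress"
```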
the new system is tested using the benchmarked mahnob hci dataset and the results show a relatively high performance compared to existing competing algorithms from the recent relevant literature . story_separator_special_tag the paper attempted the recognition of multiple drivers ' emotional states from physiological signals . the major challenge of the research is due to the severe inter-driver variation , such that the features of different emotional states are highly correlated , and it is found that a simple decorrelation method cannot normalize the features well enough to achieve acceptable classification accuracy . hence , in this paper , we propose to apply a latent variable to represent the hidden attribute of an individual driver and use statistical training . in addition , we applied temporal constraints to the inference process to improve the recognition accuracy . experimental results show that the proposed method outperforms existing algorithms used for emotional state recognition . story_separator_special_tag in this paper , we uncover a new potential application for multi-media technologies : affective intelligent car interfaces for enhanced driving safety . we also describe the experiment we conducted in order to map certain physiological signals ( galvanic skin response , heart beat , and temperature ) to certain driving-related emotions and states ( frustration/anger , panic/fear , and boredom/sleepiness ) . we demonstrate the results we obtained and describe how we use these results to facilitate a more natural human-computer interaction in our multimodal affective car interface for the drivers of the future cars . story_separator_special_tag automated analysis of human affective behavior has attracted increasing attention in recent years . driver 's emotion often influences driving performance , which can be improved if the car actively responds to the emotional state of the driver . it is important for an intelligent driver support system to accurately monitor the driver 's state in an unobtrusive and robust manner . the ever-changing environment while driving poses a serious challenge to existing techniques for speech emotion recognition . in this paper , we utilize contextual information about the outside environment as well as the in-car user to improve the emotion recognition accuracy . in particular , a noise cancellation technique is used to suppress the noise adaptively based on the driving context , and gender-based context information is analyzed for developing the classifier . experimental analyses show promising results . story_separator_special_tag this paper presents a real-time emotion recognition concept for voice streams . a comprehensive solution based on a bayesian quadratic discriminant classifier ( qdc ) is developed . the developed system supports advanced driver assistance systems ( adas ) to detect the mood of the driver , based on the fact that aggressive behavior on the road leads to traffic accidents . we use only 12 features to classify between 5 different classes of emotions . we illustrate that the extracted emotion features are highly overlapped and how each emotion class is affecting the recognition ratio . finally , we show that the bayesian quadratic discriminant classifier is an appropriate solution for emotion detection systems , where real-time detection is deeply needed with a low number of features . story_separator_special_tag the article describes a database of emotional speech .
ten actors ( 5 female and 5 male ) simulated the emotions , producing 10 german utterances ( 5 short and 5 longer sentences ) which could be used in everyday communication and are interpretable in all applied emotions . the recordings were taken in an anechoic chamber with high-quality recording equipment . in addition to the sound , electro-glottograms were recorded . the speech material comprises about 800 sentences ( seven emotions * ten actors * ten sentences + some second versions ) . the complete database was evaluated in a perception test regarding the recognisability of emotions and their naturalness . utterances recognised better than 80 % and judged as natural by more than 60 % of the listeners were phonetically labelled in a narrow transcription with special markers for voice-quality , phonatory and articulatory settings and articulatory features . the database can be accessed by the public via the internet ( http : //www.expressive-speech.net/emodb/ ) . story_separator_special_tag a non-intrusive fatigue detection system based on the video analysis of drivers . eye closure duration measured through eye state information and yawning analyzed through mouth state information . lips are searched through spatial fuzzy c-means ( s-fcm ) clustering . pupils are also detected in the upper part of the face window on the basis of radii , inter-pupil distance and angle . the monitored information of eyes and mouth are further passed to a fuzzy expert system ( fes ) that classifies the true state of the driver . this paper presents a non-intrusive fatigue detection system based on the video analysis of drivers . the system relies on multiple visual cues to characterize the level of alertness of the driver . the parameters used for detecting fatigue are : eye closure duration measured through eye state information and yawning analyzed through mouth state information . initially , the face is located through the viola-jones face detection method to ensure the presence of the driver in the video frame . then , a mouth window is extracted from the face region , in which lips are searched through spatial fuzzy c-means ( s-fcm ) clustering . simultaneously , the pupils are also detected in the upper part of the face story_separator_special_tag driver 's fatigue is one of the major causes of traffic accidents , particularly for drivers of large vehicles ( such as buses and heavy trucks ) due to prolonged driving periods and boredom in working conditions . in this paper , we propose a vision-based fatigue detection system for bus driver monitoring , which is easy and flexible for deployment in buses and large vehicles . the system consists of modules of head-shoulder detection , face detection , eye detection , eye openness estimation , fusion , drowsiness measure percentage of eyelid closure ( perclos ) estimation , and fatigue level classification . the core innovative techniques are as follows : 1 ) an approach to estimate the continuous level of eye openness based on spectral regression ; and 2 ) a fusion algorithm to estimate the eye state based on adaptive integration on the multimodel detections of both eyes . a robust measure of perclos on the continuous level of eye openness is defined , and the driver states are classified on it .
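a minimal sketch of the perclos drowsiness measure used above : the fraction of frames in a sliding window in which the eyes are at least 80 % closed . the synthetic openness series, frame rate, and window length are assumptions ; a real system would take per-frame openness from an eye-state estimator like the one described in the paper .

```python
# sketch: sliding-window perclos from a per-frame eye-openness signal.
import numpy as np

rng = np.random.default_rng(4)
openness = rng.random(1800)          # openness in [0, 1]; 60 s of video at 30 fps

def perclos(openness: np.ndarray, window: int = 900, closed_thresh: float = 0.2) -> np.ndarray:
    """fraction of frames with openness below closed_thresh in each 30 s window."""
    closed = (openness < closed_thresh).astype(float)
    kernel = np.ones(window) / window
    return np.convolve(closed, kernel, mode="valid")

p = perclos(openness)
print(f"max perclos over 30 s windows: {p.max():.2f}")  # a high value would flag fatigue
```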
in experiments , systematic evaluations and analysis of proposed algorithms , as well as comparison with ground truth on perclos measurements , are performed story_separator_special_tag this report summarizes the results of a 3-year research project to develop reliable algorithms for the detection of motor vehicle driver impairment due to drowsiness . these algorithms are based on driving performance measures that can potentially be computed on-board a vehicle during highway driving , such as measures of steering wheel movements and lane tracking . a principal objective of such algorithms is that they correlate highly with , and thus are indicative of , psychophysiological measures of driver alertness/fatigue . additional objectives are that developed algorithms produce low false alarm rates , that there should be minimal encumbering of ( interference with ) the driver , and that the algorithms should be suitable for later field testing . this report describes driving simulation and other studies performed to develop , validate , and refine such algorithms . story_separator_special_tag abstract objectives drowsy driving is a serious highway safety problem . if drivers could be warned before they became too drowsy to drive safely , some drowsiness-related crashes could be prevented . the presentation of timely warnings , however , depends on reliable detection . to date , the effectiveness of drowsiness detection methods has been limited by their failure to consider individual differences . the present study sought to develop a drowsiness detection model that accommodates the varying individual effects of drowsiness on driving performance . methods nineteen driving behavior variables and four eye feature variables were measured as participants drove a fixed road course in a high fidelity motion-based driving simulator after having worked an 8-h night shift . during the test , participants were asked to report their drowsiness level using the karolinska sleepiness scale at the midpoint of each of the six rounds through the road course . a multilevel ordered logit ( mol ) model , an ordered logit model , and an artificial neural network model were used to determine drowsiness . results the mol had the highest drowsiness detection accuracy , which shows that consideration of individual differences improves the model 's ability to story_separator_special_tag distracted driving is one of the main causes of vehicle collisions in the united states . passively monitoring a driver 's activities constitutes the basis of an automobile safety system that can potentially reduce the number of accidents by estimating the driver 's focus of attention . this paper proposes an inexpensive vision-based system to accurately detect eyes off the road ( eor ) . the system has three main components : 1 ) robust facial feature tracking ; 2 ) head pose and gaze estimation ; and 3 ) 3-d geometric reasoning to detect eor . from the video stream of a camera installed on the steering wheel column , our system tracks facial features from the driver 's face . using the tracked landmarks and a 3-d face model , the system computes head pose and gaze direction . the head pose estimation algorithm is robust to nonrigid face deformations due to changes in expressions . finally , using a 3-d geometric analysis , the system reliably detects eor . story_separator_special_tag abstract automated estimation of the allocation of a driver 's visual attention may be a critical component of future advanced driver assistance systems .
in theory , vision-based tracking of the eye can provide a good estimate of gaze location . in practice , eye tracking from video is challenging because of sunglasses , eyeglass reflections , lighting conditions , occlusions , motion blur , and other factors . estimation of head pose , on the other hand , is robust to many of these effects , but cannot provide as fine-grained a resolution in localizing the gaze . however , for the purpose of keeping the driver safe , it is sufficient to partition gaze into regions . in this effort , we propose a system that extracts facial features and classifies their spatial configuration into six regions in real-time . our proposed method achieves an average accuracy of 91.4 % at an average decision rate of 11 hz on a dataset of 50 drivers from an on-road study . story_separator_special_tag fast and accurate upper-body and head pose estimation is a key task for automatic monitoring of driver attention , a challenging context characterized by severe illumination changes , occlusions and extreme poses . in this work , we present a new deep learning framework for head localization and pose estimation on depth images . the core of the proposal is a regressive neural network , called poseidon , which is composed of three independent convolutional nets followed by a fusion layer , specially conceived for understanding the pose by depth . in addition , to recover the intrinsic value of face appearance for understanding head position and orientation , we propose a new face-from-depth model for learning image faces from depth . results in face reconstruction are qualitatively impressive . we test the proposed framework on two public datasets , namely biwi kinect head pose and ict-3dhp , and on pandora , a new challenging dataset mainly inspired by the automotive setup . results show that our method outperforms all recent state-of-the-art works , running in real time at more than 30 frames per second . story_separator_special_tag a driver 's behaviors can be affected by visual , cognitive , auditory , and manual distractions . while it is important to identify the patterns associated with particular secondary tasks , it is more general and useful to define distraction modes that capture the general behaviors induced by various sources of distractions . by explicitly modeling the distinction between types of distractions , we can assess the detrimental effects induced by new in-vehicle technology . this study investigates drivers ' behaviors associated with visual and cognitive distractions , both separately and jointly . external observers assessed the perceived cognitive and visual distractions from real-world driving recordings , showing high interevaluator agreement in both dimensions . the scores from the perceptual evaluation are used to define regression models with elastic net regularization and binary classifiers to separately estimate the cognitive and visual distraction levels . the analysis reveals multimodal features that are discriminative of cognitive and visual distractions . furthermore , the study proposes a novel joint visual cognitive distraction space to characterize driver behaviors . a data-driven clustering approach identifies four distraction modes that provide insights to better understand the deviation in driving behaviors induced by secondary tasks .
story_separator_special_tag measuring driver workload is of great significance for improving the understanding of driver behaviours and supporting the improvement of advanced driver assistance systems technologies . in this paper , a novel hybrid method for driver workload estimation from real-world driving data is proposed . error reduction ratio causality , a new nonlinear causality detection approach , is proposed in order to assess the correlation of each measured variable to the variation of workload . a full model describing the relationship between the workload and the selected important measurements is then trained via a support vector regression model . real driving data of 10 participants , comprising 15 measured physiological and vehicle-state variables , are used for the purpose of validation . test results show that the developed error reduction ratio causality method can effectively identify the important variables that relate to the variation of driver workload , and the support vector regression based model can successfully and robustly estimate workload . story_separator_special_tag as use of in-vehicle information systems ( iviss ) such as cell phones , navigation systems , and satellite radios has increased , driver distraction has become an important and growing safety concern . a promising way to overcome this problem is to detect driver distraction and adapt in-vehicle systems accordingly to mitigate such distractions . to realize this strategy , this paper applied support vector machines ( svms ) , a data mining method , to develop a real-time approach for detecting cognitive distraction using drivers ' eye movements and driving performance data . data were collected in a simulator experiment in which ten participants interacted with an ivis while driving . the data were used to train and test both svm and logistic regression models , and three different model characteristics were investigated : how distraction was defined , which data were input to the model , and how the input data were summarized . the results show that the svm models were able to detect driver distraction with an average accuracy of 81.1 % , outperforming more traditional logistic regression models . the best performing model ( 96.1 % accuracy ) resulted when distraction was story_separator_special_tag real-time driver distraction detection is the core of many distraction countermeasures and fundamental for constructing a driver-centered driver assistance system . while data-driven methods demonstrate promising detection performance , a particular challenge is how to reduce the considerable cost for collecting labeled data . this paper explored semi-supervised methods for driver distraction detection in real driving conditions to alleviate the cost of labeling training data . laplacian support vector machine and semi-supervised extreme learning machine were evaluated using eye and head movements to classify two driver states : attentive and cognitively distracted . with the additional unlabeled data , the semi-supervised learning methods improved the detection performance ( g-mean ) by 0.0245 , on average , over all subjects , as compared with the traditional supervised methods . as unlabeled training data can be collected from drivers ' naturalistic driving records with little extra resource , semi-supervised methods , which utilize both labeled and unlabeled data , can enhance the efficiency of model development in terms of time and cost .
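the labeled-plus-unlabeled training pattern described above can be sketched in a few lines ; laplacian svm itself is not in scikit-learn , so graph-based label spreading stands in here purely to illustrate the setup ( unlabeled samples are marked with -1 ) .

```python
# a minimal sketch of graph-based semi-supervised distraction detection;
# laplacian svm is not available in scikit-learn, so LabelSpreading stands in.
import numpy as np
from sklearn.semi_supervised import LabelSpreading

rng = np.random.default_rng(2)
# hypothetical eye / head movement features: 40 labeled + 400 unlabeled frames
X_lab = rng.normal(size=(40, 6))
y_lab = rng.integers(2, size=40)        # 0 = attentive, 1 = distracted
X_unlab = rng.normal(size=(400, 6))

X = np.vstack([X_lab, X_unlab])
y = np.concatenate([y_lab, -np.ones(400, dtype=int)])  # -1 marks unlabeled

model = LabelSpreading(kernel="rbf", gamma=0.5, alpha=0.2).fit(X, y)
print("inferred labels for unlabeled frames:", model.transduction_[40:60])
```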
story_separator_special_tag many road accidents occur due to distracted drivers . today , driver monitoring is essential even for the latest autonomous vehicles to alert distracted drivers in order to take over control of the vehicle in case of emergency . in this paper , a spatio-temporal approach is applied to classify drivers ' distraction level and movement decisions using convolutional neural networks ( cnns ) . we approach this problem as action recognition to benefit from temporal information in addition to spatial information . our approach relies on features extracted from sparsely selected frames of an action using a pre-trained bn-inception network . experiments show that our approach outperforms the state-of-the-art results on the distracted driver dataset ( 96.31 % ) , with an accuracy of 99.10 % for 10-class classification while providing real-time performance . we also analyzed the impact of fusion using rgb and optical flow modalities with a very recent data level fusion strategy . the results on the distracted driver and brain4cars datasets show that fusion of these modalities further increases the accuracy . story_separator_special_tag driver decisions and behaviors are essential factors that can affect driving safety . to understand driver behaviors , a driver activity recognition system is designed based on deep convolutional neural networks ( cnn ) in this paper . specifically , seven common driving activities are identified , namely normal driving , right mirror checking , rear mirror checking , left mirror checking , using an in-vehicle radio device , texting , and answering the mobile phone . among these activities , the first four are regarded as normal driving tasks , while the remaining three are classified into the distraction group . the experimental images are collected using a low-cost camera , and ten drivers are involved in the naturalistic data collection . the raw images are segmented using the gaussian mixture model to extract the driver body from the background before training the behavior recognition cnn model . to reduce the training cost , a transfer learning method is applied to fine tune the pre-trained cnn models . three different pre-trained cnn models , namely , alexnet , googlenet , and resnet50 are adopted and evaluated . the detection results for the seven tasks story_separator_special_tag driver decisions and behaviors regarding the surrounding traffic are critical to traffic safety . it is important for an intelligent vehicle to understand driver behavior and assist in driving tasks according to their status . in this paper , the consumer range camera kinect is used to monitor drivers and identify driving tasks in a real vehicle . specifically , seven common tasks performed by multiple drivers during driving are identified in this paper . the tasks include normal driving , left- , right- , and rear-mirror checking , mobile phone answering , texting using a mobile phone with one or both hands , and the setup of in-vehicle video devices . the first four tasks are considered safe driving tasks , while the other three tasks are regarded as dangerous and distracting tasks . the driver behavior signals collected from the kinect consist of a color and depth image of the driver inside the vehicle cabin . in addition , 3-d head rotation angles and the upper body ( hand and arm at both sides ) joint positions are recorded .
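the transfer-learning recipe described above ( freeze a pretrained backbone , retrain a new head for the seven activity classes ) can be sketched as follows ; the class count follows the study , while the optimizer , learning rate , and dummy batch are assumptions , and data loading is omitted .

```python
# a minimal sketch of fine-tuning a pretrained resnet50 for the seven
# driving-activity classes described above; dataset loading is omitted.
import torch
import torch.nn as nn
from torchvision import models

NUM_ACTIVITIES = 7  # normal driving, 3x mirror checks, radio, texting, phone

model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
for p in model.parameters():          # freeze the pretrained backbone
    p.requires_grad = False
model.fc = nn.Linear(model.fc.in_features, NUM_ACTIVITIES)  # new head

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

images = torch.randn(8, 3, 224, 224)          # one dummy mini-batch
labels = torch.randint(NUM_ACTIVITIES, (8,))
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()
```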
then , the importance of these features for behavior recognition is evaluated using random forests and maximal information story_separator_special_tag driving a car in an urban setting is an extremely difficult problem , incorporating a large number of complex visual tasks ; however , this problem is solved daily by most adults with little apparent effort . this paper proposes a novel vision-based approach to autonomous driving that can predict and even anticipate a driver 's behavior in real time , using preattentive vision only . experiments on three large datasets totaling over 200,000 frames show that our preattentive model can 1 ) detect a wide range of driving-critical context such as crossroads , city center , and road type ; however , more surprisingly , it can 2 ) detect the driver 's actions ( over 80 % of braking and turning actions ) and 3 ) estimate the driver 's steering angle accurately . additionally , our model is consistent with human data : first , the best steering prediction is obtained for a perception-to-action delay consistent with psychological experiments . importantly , this prediction can be made before the driver 's action . second , the regions of the visual field used by the computational model strongly correlate with the driver 's gaze locations story_separator_special_tag predicting driver behavior is a key component for advanced driver assistance systems ( adas ) . in this paper , a novel approach based on support vector machine and bayesian filtering is proposed for online lane change intention prediction . the approach uses the multiclass probabilistic outputs of the support vector machine as an input to the bayesian filter , and the output of the bayesian filter is used for the final prediction of lane changes . a lane tracker integrated in a passenger vehicle is used for real-world data collection for the purpose of training and testing . data from different drivers on different highways were used to evaluate the robustness of the approach . the results demonstrate that the proposed approach is able to predict driver intention to change lanes on average 1.3 seconds in advance , with a maximum prediction horizon of 3.29 seconds . story_separator_special_tag lane changes are stressful maneuvers for drivers , particularly during high-speed traffic flows . advanced driver-assistance systems ( adass ) aim to assist drivers during lane change maneuvers . a system that is developed for an average driver or all drivers will have to be conservative for safety reasons to cover all driver/vehicle types . such a conservative system may not be acceptable to aggressive drivers and could be perceived as too aggressive by the more passive drivers . an adas that takes into account the dynamics and characteristics of each individual vehicle/driver system during lane change maneuvers will be more effective and more acceptable to drivers without sacrificing safety . in this paper , we develop a methodology that learns the characteristics of an individual driver/vehicle response before and during lane changes and under different driving environments . these characteristics are captured by a set of models whose parameters are adjusted online to fit the individual vehicle/driver response during lane changes . we develop a two-layer model to describe the maneuver kinematics . the lower layer describes lane change as a kinematic model .
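the svm-plus-bayesian-filter pattern for lane change intention described above can be sketched as a recursive bayes update over per-frame svm class probabilities ; the transition matrix , feature set , and synthetic data are assumptions for illustration , not the paper 's fitted model .

```python
# a minimal sketch of the svm + bayesian-filter pattern for lane-change
# intention: per-frame svm class probabilities are fused over time with a
# recursive bayes update. the transition matrix and features are assumed.
import numpy as np
from sklearn.svm import SVC

CLASSES = ["keep_lane", "change_left", "change_right"]
T = np.array([[0.90, 0.05, 0.05],      # assumed maneuver transition model
              [0.20, 0.80, 0.00],
              [0.20, 0.00, 0.80]])

rng = np.random.default_rng(3)
X_train = rng.normal(size=(300, 5))    # lane offset, yaw rate, head yaw, ...
y_train = rng.integers(3, size=300)
svm = SVC(probability=True).fit(X_train, y_train)

belief = np.full(3, 1 / 3)             # uniform prior over maneuvers
for frame in rng.normal(size=(10, 5)): # stream of incoming feature frames
    belief = T.T @ belief                             # predict step
    belief *= svm.predict_proba(frame[None, :])[0]    # measurement update
    belief /= belief.sum()
print("maneuver belief:", dict(zip(CLASSES, belief.round(3))))
```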
the higher layer model establishes the kinematic model parameter values for the particular driver and story_separator_special_tag advanced driver assistance systems ( adas ) have made driving safer over the last decade . they prepare vehicles for unsafe road conditions and alert drivers if they perform a dangerous maneuver . however , many accidents are unavoidable because by the time drivers are alerted , it is already too late . anticipating maneuvers beforehand can alert drivers before they perform the maneuver and also give adas more time to avoid or prepare for the danger . in this work we propose a vehicular sensor-rich platform and learning algorithms for maneuver anticipation . for this purpose we equip a car with cameras , global positioning system ( gps ) , and a computing device to capture the driving context from both inside and outside of the car . in order to anticipate maneuvers , we propose a sensory-fusion deep learning architecture which jointly learns to anticipate and fuse multiple sensory streams . our architecture consists of recurrent neural networks ( rnns ) that use long short-term memory ( lstm ) units to capture long temporal dependencies . we propose a novel training procedure which allows the network to predict the future given only a partial temporal context . we story_separator_special_tag anticipating the future actions of a human is a widely studied problem in robotics that requires spatio-temporal reasoning . in this work we propose a deep learning approach for anticipation in sensory-rich robotics applications . we introduce a sensory-fusion architecture which jointly learns to anticipate and fuse information from multiple sensory streams . our architecture consists of recurrent neural networks ( rnns ) that use long short-term memory ( lstm ) units to capture long temporal dependencies . we train our architecture in a sequence-to-sequence prediction manner , and it explicitly learns to predict the future given only a partial temporal context . we further introduce a novel loss layer for anticipation which prevents over-fitting and encourages early anticipation . we use our architecture to anticipate driving maneuvers several seconds before they happen on a natural driving data set of 1180 miles . the context for maneuver anticipation comes from multiple sensors installed on the vehicle . our approach shows significant improvement over the state-of-the-art in maneuver anticipation by increasing the precision from 77.4 % to 90.5 % and recall from 71.2 % to 87.4 % . story_separator_special_tag despite extraordinary progress of advanced driver assistance systems ( adas ) , an alarming number of over 1.2 million people are still fatally injured in traffic accidents every year . human error is mostly responsible for such casualties , as by the time the adas system has alerted the driver , it is often too late . we present a vision-based system based on deep neural networks with 3d convolutions and residual learning for anticipating the future maneuver based on driver observation . while previous work focuses on hand-crafted features ( e.g . head pose ) , our model predicts the intention directly from video in an end-to-end fashion . our architecture consists of three components : a neural network for extraction of optical flow , a 3d residual network for maneuver classification and a long short-term memory network ( lstm ) for handling temporal data of varying length .
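the lstm-based anticipation idea above , including a loss that encourages early predictions , can be sketched as follows ; the network emits a maneuver distribution at every time step and later steps are weighted more heavily . all sizes and the exponential weighting constant are assumptions , not the published anticipation loss in exact form .

```python
# a minimal sketch of lstm-based maneuver anticipation: predictions at every
# time step, with later steps weighted more heavily to encourage (but not
# force) early anticipation. all dimensions are illustrative assumptions.
import torch
import torch.nn as nn

class AnticipationLSTM(nn.Module):
    def __init__(self, n_features=16, n_maneuvers=5, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_maneuvers)

    def forward(self, x):                  # x: (batch, time, features)
        out, _ = self.lstm(x)
        return self.head(out)              # logits at every time step

model = AnticipationLSTM()
x = torch.randn(4, 30, 16)                 # 4 sequences of 30 frames
y = torch.randint(5, (4,))                 # final maneuver label per sequence

logits = model(x)                          # (4, 30, 5)
steps = torch.arange(30, dtype=torch.float32)
w = torch.exp(-0.05 * (29 - steps))        # earlier frames get smaller weight
ce = nn.functional.cross_entropy(
    logits.reshape(-1, 5), y.repeat_interleave(30), reduction="none")
loss = (ce.reshape(4, 30) * w).mean()
loss.backward()
```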
to evaluate our idea , we conduct thorough experiments on the publicly available brain4cars benchmark , which covers both inside and outside views for future maneuver anticipation . our model is able to predict driver intention with an accuracy of 83.12 % and 4.07 s before the beginning of the maneuver story_separator_special_tag in this paper , we demonstrate a driver intent inference system that is based on lane positional information , vehicle parameters , and driver head motion . we present robust computer vision methods for identifying and tracking freeway lanes and driver head motion . these algorithms are then applied and evaluated on real-world data that are collected in a modular intelligent vehicle test bed . analysis of the data for lane change intent is performed using a sparse bayesian learning methodology . finally , the system as a whole is evaluated using a novel metric and real-world data of vehicle parameters , lane position , and driver head motion . story_separator_special_tag intelligent vehicles and advanced driver assistance systems ( adas ) need to have proper awareness of the traffic context , as well as the driver status since adas share the vehicle control authorities with the human driver . this paper provides an overview of the ego-vehicle driver intention inference ( dii ) , which mainly focuses on the lane change intention on highways . first , the human intention mechanism is discussed to gain an overall understanding of driver intention . next , the ego-vehicle driver intention is classified into different categories based on various criteria . a complete dii system can be separated into different modules , which consist of traffic context awareness , driver states monitoring , and the vehicle dynamic measurement module . the relationship between these modules and the corresponding impacts on the dii are analyzed . then , the lane change intention inference system is reviewed from the perspective of input signals , algorithms , and evaluation . finally , future concerns and emerging trends in this area are highlighted . story_separator_special_tag the paper proposes an advanced driver-assistance system that correlates the driver 's head pose to road hazards by analyzing both simultaneously . in particular , we aim at the prevention of rear-end crashes due to driver fatigue or distraction . we contribute by three novel ideas : asymmetric appearance-modeling , 2d to 3d pose estimation enhanced by the introduced fermat-point transform , and adaptation of global haar ( ghaar ) classifiers for vehicle detection under challenging lighting conditions . the system defines the driver 's direction of attention ( in 6 degrees of freedom ) , yawning and head-nodding detection , as well as vehicle detection , and distance estimation . having both road and driver 's behaviour information , and implementing a fuzzy fusion system , we develop an integrated framework to cover all of the above subjects . we provide real-time performance analysis for real-world driving scenarios . story_separator_special_tag current pedestrian collision warning systems use either auditory alarms or visual symbols to inform drivers . these traditional approaches can not tell the driver where the detected pedestrians are located , which is critical for the driver to respond appropriately . to address this problem , we introduce a new driver interface taking advantage of a volumetric head-up display ( hud ) .
in our experimental user study , sixteen participants drove a test vehicle in a parking lot while braking for crossing pedestrians using different interface designs on the hud . our results showed that spatial information provided by conformal graphics on the hud resulted in not only better driver performance but also smoother braking behavior as compared to the baseline . story_separator_special_tag in conditional driving automation , drivers can occasionally disengage from driving to undertake non-driving related tasks . however , when driving situations that may not be manageable by the automated system are encountered , situation awareness of drivers is required to avoid accidents . recent studies have revealed that surrounding traffic conditions , complexity of the driving scenario , secondary tasks , speed of ego vehicle , and takeover request experience affect takeover performance . however , neither the scope nor the variety of the dependencies between these variables and the human cognitive abilities underlying situation awareness is known in detail . this contribution discusses the dependencies between the reaction of humans ( takeover time ) and the complexity of the driving task ( working task ) and properties of the secondary task ( non-driving-related task ) . the effects of the variables are systematically varied to generate different driving situations to better understand their scope and interaction . afterwards , experimental results under different variable combinations are discussed . an initial formulation is established to describe the effects . story_separator_special_tag in the development of autonomous vehicles , the main focus of sensor research has been in relation to environmental perception , and only minimal work has focused on the human-vehicle interaction perspective . however , human factors need to be considered to ensure the safe operation of partially autonomous vehicles . this study briefly introduces a design methodology for the takeover request ( tor ) time in national highway traffic safety administration level 3 vehicles and compares four different tors in a simulated environment based on human-in-the-loop experiments with various driving scenarios . a total of 30 drivers participated in the study , and the quantitative/qualitative data obtained show statistically significant differences between the four tor thresholds . this study shows that the timing involved in the takeover can be obtained by using a performance-based approach considering human factors . story_separator_special_tag human drivers in autonomous vehicles will monitor the system and be ready to resume control in ambiguous or emergency situations . as a driver 's reaction time to intervene after having realized a problem has occurred can be critical , we present the interactive automation control system ( iacs ) to assist the driver when their takeover is required . the system displays manual or automated mode in an unobtrusive location in the vehicle , signaling when a tor is necessary . we evaluate the system 's performance during a situation in which the automation has not been defined to operate and study its impact on the overall driving performance , specifically the driver 's reaction time to a take-over request ( tor ) . results showed significant improvements in driving performance with the proposed system . both the response time to the tor and the number of collisions decreased when the iacs was activated .
subjective ratings of the system regarding its performance showed high satisfaction levels . story_separator_special_tag recent studies analyzing driver behavior report that various factors may influence a driver 's take-over readiness when resuming control after an automated driving section . however , there has been little effort made to transfer and integrate these findings into an automated system which classifies the driver 's take-over readiness and derives the expected take-over quality . this study now introduces a new advanced driver assistance system to classify the driver 's takeover readiness in conditionally automated driving scenarios . the proposed system works preemptively , i.e. , the driver is warned in advance if a low take-over readiness is to be expected . the classification of the take-over readiness is based on three information sources : ( i ) the complexity of the traffic situation , ( ii ) the current secondary task of the driver , and ( iii ) the gazes at the road . an evaluation based on a driving simulator study with 81 subjects showed that the proposed system can detect the take-over readiness with an accuracy of 79 % . moreover , the impact of the character of the take-over intervention on the classification result is investigated . finally , a proof of concept story_separator_special_tag complex and hazardous driving situations often arise with the delayed perception of traffic objects . to automatically detect whether such objects have been perceived by the driver , there is a need for techniques that can reliably recognize whether the driver 's eyes have fixated or are pursuing the hazardous object . a prerequisite for such techniques is the reliable recognition of fixations , saccades , and smooth pursuits from raw eye tracking data . this chapter addresses the challenge of analyzing the driver 's visual behavior in an adaptive and online fashion to automatically distinguish between fixation clusters , saccades , and smooth pursuits . story_separator_special_tag today 's driving assistance systems build on numerous sensors to provide assistance for specific tasks . in order to not patronize the driver , intensity and timing of critical responses by such systems is determined based on parameters derived from vehicle dynamics and scene recognition . however , to date , information on object perception by the driver is not considered by such systems . with advances in eye-tracking technology , a powerful tool to assess the driver 's visual perception has become available , which , in many studies , has been integrated with physiological signals , i.e. , galvanic skin response and eeg , for reliable prediction of object perception . we address the problem of aggregating binary signals from physiological sensors and eye tracking to predict a driver 's visual perception of scene hazards . in the absence of ground truth , it is crucial to use an aggregation scheme that estimates the reliability of each signal source and thus reliably aggregates signals to predict whether an object has been perceived . to this end , we apply state-of-the-art methods for response aggregation on data obtained from simulated driving sessions with 30 subjects . our results story_separator_special_tag this paper presents a novel approach to automated recognition of the driver 's activity , which is a crucial factor for determining the take-over readiness in conditionally autonomous driving scenarios .
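a first cut at the fixation/saccade recognition problem above is a velocity-threshold ( i-vt ) classifier ; the 30 deg/s threshold is a common heuristic assumed here , not the adaptive online method the chapter describes , and smooth pursuits would need an additional ( lower ) velocity band .

```python
# a minimal velocity-threshold (i-vt) sketch for splitting raw gaze samples
# into fixation and saccade samples; the threshold is an assumed heuristic.
import numpy as np

def classify_ivt(gaze_deg, hz=60.0, saccade_thresh_deg_s=30.0):
    """gaze_deg: (n, 2) gaze angles in degrees; returns per-sample labels."""
    vel = np.linalg.norm(np.diff(gaze_deg, axis=0), axis=1) * hz  # deg / s
    labels = np.where(vel > saccade_thresh_deg_s, "saccade", "fixation")
    return np.concatenate([[labels[0]], labels])  # pad the first sample

# synthetic gaze trace: a slow random walk (mostly fixation-like samples)
gaze = np.cumsum(np.random.default_rng(4).normal(0, 0.1, size=(300, 2)), axis=0)
print(dict(zip(*np.unique(classify_ivt(gaze), return_counts=True))))
```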
therefore , an architecture based on head- and eye-tracking data is introduced in this study and several features are analyzed . the proposed approach is evaluated on data recorded during a driving simulator study with 73 subjects performing different secondary tasks while driving in an autonomous setting . the proposed architecture shows promising results towards in-vehicle driver-activity recognition . furthermore , a significant improvement in the classification performance is demonstrated due to the consideration of novel features derived especially for the autonomous driving context . story_separator_special_tag we introduce the novel domain-specific drive & act benchmark for fine-grained categorization of driver behavior . our dataset features twelve hours and over 9.6 million frames of people engaged in distractive activities during both manual and automated driving . we capture color , infrared , depth and 3d body pose information from six views and densely label the videos with a hierarchical annotation scheme , resulting in 83 categories . the key challenges of our dataset are : ( 1 ) recognition of fine-grained behavior inside the vehicle cabin ; ( 2 ) multi-modal activity recognition , focusing on diverse data streams ; and ( 3 ) a cross-view recognition benchmark , where a model handles data from an unfamiliar domain , as sensor type and placement in the cabin can change between vehicles . finally , we provide challenging benchmarks by adopting prominent methods for video- and body pose-based action recognition . story_separator_special_tag as long as vehicles do not provide full automation , the design and function of the human machine interface ( hmi ) is crucial for ensuring that the human driver and the vehicle-based automated systems collaborate in a safe manner . when the driver is decoupled from active control , the design of the hmi becomes even more critical . without mutual understanding , the two agents ( human and vehicle ) will fail to accurately comprehend each other 's intentions and actions . this paper proposes a set of design principles for in-vehicle hmi and reviews some current hmi designs in the light of those principles . we argue that in many respects , the current designs fall short of best practice and have the potential to confuse the driver . this can lead to a mismatch between the operation of the automation in the light of the current external situation and the driver 's awareness of how well the automation is currently handling that situation . a model to illustrate how the various principles are interrelated is proposed . finally , recommendations are made on how , building on each principle , hmi design solutions can be adopted story_separator_special_tag this paper provides an in-depth description of the best-rated human-machine interface that was presented during the 2016 grand cooperative driving challenge . it was demonstrated by the chalmers truck team as the envisioned interface to their open source software framework opendlv , which is used to power chalmers ' fleet of self-driving vehicles . the design originates from the postulate that the vehicle is fully autonomous and able to handle even complex traffic scenarios . thus , by including external and internal interfaces , and introducing a show , don 't tell principle , it aims at fulfilling the needs of the vehicle occupants as well as other participants in the traffic environment .
the design also attempts to comply with , and slightly extend , the current traffic rules and legislation for the purpose of being realistic for full-scale implementation . story_separator_special_tag integrated multimodal systems are one promising direction to improve human-vehicle interaction . in order to create intelligent human-vehicle interfaces and reduce visual load during secondary tasks , combining a haptic rotary device and a graphic display will provide one practical solution . however , in the literature , the proper display position for the haptic rotary device has not been fully investigated . in this paper , one experimental infotainment system is studied ( including a haptic rotary control device and a graphic display ) to evaluate the proper display position . measurements used include task completion time , reaction to road events , lane/velocity keeping during secondary tasks , and user preference . three display positions are considered : high-mounted position , cluster position , and center stack position . the results show that , with increased on-road and off-road visual loads , the cluster display position can reduce lane position deviation significantly compared to high-mounted and center stack positions . in addition , the high-mounted and cluster display positions perform better for two different road events , namely a strong wind gust and extreme deceleration of the lead car . story_separator_special_tag while automated vehicle technology progresses , potentially leading to a safer and more efficient traffic environment , many challenges remain within the area of human factors , such as user trust for automated driving ( ad ) vehicle systems . the aim of this paper is to investigate how an appropriate level of user trust for ad vehicle systems can be created via human machine interaction ( hmi ) . a guiding framework for implementing trust-related factors into the hmi interface is presented . this trust-based framework incorporates usage phases , ad events , trust-affecting factors , and levels explaining each event from a trust perspective . based on the research findings , the authors recommend that hmi designers and automated vehicle manufacturers take a more holistic perspective on trust rather than focusing on single , isolated events , for example understanding that trust formation is a dynamic process that starts long before a user 's first contact with the system , and continues long thereafter . furthermore , factors affecting trust change , both during user interactions with the system and over time ; thus , hmi concepts need to be able to adapt . future work should be dedicated story_separator_special_tag in this article an automation system for human-machine interfaces ( hmi ) for setpoint adjustment using supervised learning is presented . we use hmis of multi-modal thermal conditioning systems in passenger cars as an example of a complex setpoint selection system . the goal is the reduction of interaction complexity up to full automation . the approach is not limited to climate control applications but can be extended to other setpoint-based hmis . story_separator_special_tag vehicle climate control systems aim to keep passengers thermally comfortable . however , current systems control temperature rather than thermal comfort and tend to be energy-hungry , which is of particular concern when considering electric vehicles .
this paper poses energy-efficient vehicle comfort control as a markov decision process , which is then solved numerically using sarsa ( λ ) and an empirically validated , single-zone , 1d thermal model of the cabin . the resulting controller was tested in simulation using 200 randomly selected scenarios and found to exceed the performance of bang-bang , proportional , simple fuzzy logic , and commercial controllers with increases of 23 % , 43 % , 40 % , and 56 % , respectively . compared to the next best performing controller , energy consumption is reduced by 13 % while the proportion of time spent thermally comfortable is increased by 23 % . these results indicate that this is a viable approach that promises to translate into substantial comfort and energy improvements in the car . story_separator_special_tag this article details the development of a gesture recognition technique using a mm-wave radar sensor for in-car infotainment control . gesture recognition is becoming a more prominent form of human-computer interaction and can be used in the automotive industry to provide a safe and intuitive control interface that will limit driver distraction . we use a 60 ghz mm-wave radar sensor to detect precise features of fine motion . specific gesture features are extracted and used to build a machine learning engine that can perform real-time gesture recognition . this article discusses the user requirements and in-car environmental constraints that influenced design decisions . accuracy results of the technique are presented , and recommendations for further research and improvements are made . story_separator_special_tag natural user interfaces can be an effective way to reduce driver 's inattention during the driving activity . to this end , in this paper we propose a new dataset , called briareo , specifically collected for the hand gesture recognition task in the automotive context . the dataset is acquired from an innovative point of view , exploiting different kinds of cameras , i.e . rgb , infrared stereo , and depth , that provide various types of images and 3d hand joints . moreover , the dataset contains a significant amount of hand gesture samples , performed by several subjects , allowing the use of deep learning-based approaches . finally , a framework for hand gesture segmentation and classification is presented , exploiting a method introduced to assess the quality of the proposed dataset . story_separator_special_tag understanding passenger intents and extracting relevant slots are important building blocks towards developing contextual dialogue systems for natural interactions in autonomous vehicles ( av ) . in this work , we explored amie ( automated-vehicle multi-modal in-cabin experience ) , the in-cabin agent responsible for handling certain passenger-vehicle interactions . when the passengers give instructions to amie , the agent should parse such commands properly and trigger the appropriate functionality of the av system . in our current explorations , we focused on amie scenarios describing usages around setting or changing the destination and route , updating driving behavior or speed , finishing the trip and other use-cases to support various natural commands . we collected a multi-modal in-cabin dataset with multi-turn dialogues between the passengers and amie using a wizard-of-oz scheme via a realistic scavenger hunt game activity .
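the sarsa ( λ ) learner behind the comfort controller above can be sketched in tabular form with accumulating eligibility traces ; the toy state coding , dynamics , and reward here are assumptions standing in for the validated 1d cabin thermal model , not the paper 's environment .

```python
# a minimal tabular sarsa(lambda) sketch with accumulating eligibility traces;
# the toy environment is an illustrative assumption, not the cabin model.
import numpy as np

n_states, n_actions = 20, 3               # e.g., discretized cabin temp x hvac
Q = np.zeros((n_states, n_actions))
alpha, gamma, lam, eps = 0.1, 0.95, 0.9, 0.1
rng = np.random.default_rng(5)

def policy(s):
    # epsilon-greedy action selection
    return rng.integers(n_actions) if rng.random() < eps else int(Q[s].argmax())

for episode in range(200):
    E = np.zeros_like(Q)                   # eligibility traces
    s = int(rng.integers(n_states))
    a = policy(s)
    for t in range(50):
        s2 = min(n_states - 1, max(0, s + a - 1))      # toy dynamics
        r = -abs(s2 - n_states // 2) - 0.1 * (a != 1)  # comfort - energy cost
        a2 = policy(s2)
        delta = r + gamma * Q[s2, a2] - Q[s, a]        # td error
        E[s, a] += 1.0
        Q += alpha * delta * E                         # update all traced pairs
        E *= gamma * lam                               # decay traces
        s, a = s2, a2
```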
after exploring various recent recurrent neural network ( rnn ) -based techniques , we introduced our own hierarchical joint models to recognize passenger intents along with relevant slots associated with the action to be performed in av scenarios . our experimental results outperformed certain competitive baselines and achieved overall f1 scores of 0.91 for utterance-level intent detection and 0.96 for story_separator_special_tag autonomous driving is becoming one of the most popular applications of ai . meanwhile , the advances in deep learning have promoted the rapid development of voice controllable systems ( vcss ) , which have almost reached the maturity stage . before autonomous driving cars reach the highest level of automation , intelligent voice interaction remains the primary approach for human-vehicle interaction . recent works show that such intelligent systems are vulnerable to hidden voice commands that are unnoticed or unintelligible to humans . in particular , an adversary utilizing hidden voice commands is able to control autonomous driving cars . for example , malicious voice commands embedded into the sound of online shared videos can stealthily control the vehicle when people watch the videos in the car . in this article , we investigate the potential perniciousness of hidden voice commands on the vcs of autonomous driving cars , and then discuss feasible defense strategies . we finally propose a pop-noise-based general defense strategy that can resist various kinds of attacks . story_separator_special_tag driving is an integral part of our everyday lives , but it is also a time when people are uniquely vulnerable . previous research has demonstrated that not only does listening to suitable music while driving not impair driving performance , but it could lead to an improved mood and a more relaxed body state , which could improve driving performance and promote safe driving significantly . in this article , we propose safedj , a smartphone-based situation-aware music recommendation system , which is designed to turn driving into a safe and enjoyable experience . safedj aims at helping drivers to diminish fatigue and negative emotion . its design is based on novel interactive methods , which enable in-car smartphones to orchestrate multiple sources of sensing data and the drivers ' social context , in collaboration with cloud computing to form a seamless crowdsensing solution . this solution enables different smartphones to collaboratively recommend preferable music to drivers according to each driver 's specific situations in an automated and intelligent manner . practical experiments of safedj have proved its effectiveness in music-mood analysis , and mood-fatigue detection of drivers with reasonable computation and communication overheads on smartphones . also , story_separator_special_tag in this paper , we propose a floating , multi-layered , wide field-of-view user interface for car drivers . it utilizes stereoscopic depth and focus blurring to highlight items with high priority or urgency . individual layers are additionally used to separate groups of ui elements according to importance or context . our work is motivated by two main prospects : a fundamentally changing driver-car interaction and ongoing technology advancements for mixed reality devices . a working prototype has been implemented as part of a custom driving simulation and will be further extended .
we plan evaluations in contexts ranging from manual to fully automated driving , providing context-specific suggestions . we want to determine user preferences for layout and prioritization of the ui elements , perceived quality of the interface and effects on driving performance . story_separator_special_tag situation awareness in highly automated vehicles can help the driver to get back in the loop during a take-over request ( tor ) . we propose to present the driver with a detailed digital representation of situations causing a tor via a scaled-down digital twin of the highway inside the car . the digital twin virtualizes real-time traffic information and is displayed before the actual tor . in the car cockpit an augmented reality headset or a stereoscopic 3d ( s3d ) interface can realize the augmentation . as today 's hardware has technical limitations , we build an hmd-based mock-up . we conducted a user study ( n=20 ) to assess the driver behavior during a tor . we found that workload decreases and steering performance rises significantly with the proposed system . we argue that the augmentation of the surrounding world in the car helps to improve performance during tor due to better awareness of the upcoming situation . story_separator_special_tag in order to improve driving safety and minimize driving workload , the information provided should be represented in such a way that it is more easily understood and imposes less cognitive load onto the driver . augmented reality head-up display ( ar-hud ) can facilitate a new form of dialogue between the vehicle and the driver , and enhance intelligent transportation systems by superimposing surrounding traffic information on the user 's view while keeping the driver 's view on the road . in this paper , we investigated the potential costs and benefits of using ar cues to improve driving safety as a new form of dialogue between the vehicle and the driver . we present a new approach for a marker-less ar traffic sign recognition system that superimposes augmented virtual objects onto a real scene under all types of driving situations , including unfavorable weather conditions . our method uses two steps : hypothesis generation and hypothesis verification . in the first step , a region of interest ( roi ) is extracted using a scanning window with a haar cascade detector and an adaboost classifier to reduce the computational region in the hypothesis generation step . the second step verifies whether a given candidate and story_separator_special_tag using landmark-based navigation can greatly improve drivers ' route-finding performance . previous research in this area has tended to focus on the inclusion of text or icon-based landmark information utilising dashboard-mounted displays . in contrast , we present landmark-based navigation information using a head-up display ( hud ) . a major issue with using landmarks for navigation is their inherent variability in quality , with many 'poor ' candidates that are not easily identifiable or communicable . a proposed solution to improve the usefulness and utility of such landmarks is to highlight/enhance them using augmented reality ( ar ) . twenty participants undertook four drives in a driving simulator utilising an ar navigation system presented on a hud . participants were provided navigational instructions presented as either conventional distance-to-turn information , on-road arrows or augmented landmark information ( arrow highlighting or box enclosing landmark adjacent to the required turning ) .
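the two-step hypothesis generation/verification pattern above maps directly onto opencv 's cascade api ; in this sketch 'sign_cascade.xml' and the input frame are hypothetical ( opencv ships face cascades , not traffic-sign ones ) , and the verification stage is stubbed out .

```python
# a minimal sketch of the hypothesis-generation step with an opencv haar
# cascade; 'sign_cascade.xml' is a hypothetical trained cascade file and
# the hypothesis-verification stage is only stubbed out below.
import cv2

cascade = cv2.CascadeClassifier("sign_cascade.xml")  # hypothetical model file
frame = cv2.imread("dashcam_frame.jpg")              # hypothetical input frame
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

# hypothesis generation: scan for candidate regions of interest
candidates = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

for (x, y, w, h) in candidates:
    roi = gray[y:y + h, x:x + w]
    # hypothesis verification would run a stronger classifier on `roi` here
    cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
```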
participants demonstrated significant performance improvements while using the ar landmark 'box ' presentations compared to conventional distance-to-turn information , with response times and success rates enhanced by 43.1 % and 26.2 % , respectively . moreover , drivers reported a significant reduction in workload when using the ar landmark
automatic segmentation of multiple sclerosis ( ms ) lesions from magnetic resonance imaging ( mri ) images is essential for clinical assessment and treatment planning of ms . recent years have seen an increasing use of convolutional neural networks ( cnns ) for this task . although these methods provide accurate segmentation , their applicability in clinical settings remains limited due to a reproducibility issue across different image domains . ms images can have highly variable characteristics across patients , mri scanners and imaging protocols ; retraining a supervised model with data from each new domain is not a feasible solution because it requires manual annotation from expert radiologists . in this work , we explore an unsupervised solution to the problem of domain shift . we present a framework , seg-jdot , which adapts a deep model so that samples from a source domain and samples from a target domain sharing similar representations will be similarly segmented . we evaluated the framework on a multi-site dataset , miccai 2016 , and showed that the adaptation towards a target site can bring remarkable improvements in model performance over standard training . story_separator_special_tag pelvic floor dysfunction is common in women after childbirth and precise segmentation of magnetic resonance images ( mri ) of the pelvic floor may facilitate diagnosis and treatment of patients . however , because of the complexity of its structures , manual segmentation of the pelvic floor is challenging and suffers from high inter- and intra-rater variability of expert raters . multiple template fusion algorithms are promising segmentation techniques for these types of applications , but they have been limited by imperfections in the alignment of templates to the target , and by template segmentation errors . a number of algorithms sought to improve segmentation performance by combining image intensities and template labels as two independent sources of information , carrying out fusion through local intensity weighted voting schemes . this class of approach is a form of linear opinion pooling , and achieves unsatisfactory performance for this application . we hypothesized that better decision fusion could be achieved by assessing the contribution of each template in comparison to a reference standard segmentation of the target image and developed a novel segmentation algorithm to enable automatic segmentation of mri of the female pelvic floor . the algorithm achieves high performance story_separator_special_tag in this paper , we propose an automated segmentation approach based on a deep two-dimensional fully convolutional neural network to segment brain multiple sclerosis lesions from multimodal magnetic resonance images . the proposed model combines two deep subnetworks . an encoding network extracts different feature maps at various resolutions . a decoding part upconvolves the feature maps , combining them through shortcut connections during an upsampling procedure . to the best of our knowledge , the proposed model is the first slice-based fully convolutional neural network for the purpose of multiple sclerosis lesion segmentation . we evaluated our network on a freely available dataset from the isbi ms challenge with encouraging results from a clinical perspective . story_separator_special_tag in this paper , we present an automated approach for segmenting multiple sclerosis ( ms ) lesions from multi-modal brain magnetic resonance images .
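the encoder-decoder-with-shortcuts pattern used by the slice-based networks above can be sketched at toy scale ; channel counts and depth here are assumptions , far smaller than the published models .

```python
# a minimal 2d encoder-decoder sketch with one shortcut connection, echoing
# the slice-based segmentation networks above; sizes are illustrative only.
import torch
import torch.nn as nn

class TinySegNet(nn.Module):
    def __init__(self, in_ch=2, n_classes=2):        # e.g., flair + t1 inputs
        super().__init__()
        self.enc = nn.Sequential(nn.Conv2d(in_ch, 16, 3, padding=1), nn.ReLU())
        self.down = nn.Sequential(nn.MaxPool2d(2),
                                  nn.Conv2d(16, 32, 3, padding=1), nn.ReLU())
        self.up = nn.ConvTranspose2d(32, 16, 2, stride=2)
        self.out = nn.Conv2d(32, n_classes, 1)        # 32 = upsampled + skip

    def forward(self, x):
        skip = self.enc(x)                             # full-resolution features
        deep = self.down(skip)                         # coarser features
        up = self.up(deep)                             # upconvolve back
        return self.out(torch.cat([up, skip], dim=1))  # shortcut connection

logits = TinySegNet()(torch.randn(1, 2, 128, 128))     # one 2-modality slice
print(logits.shape)                                    # per-pixel class scores
```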
our method is based on a deep end-to-end 2d convolutional neural network ( cnn ) for slice-based segmentation of 3d volumetric data . the proposed cnn includes a multi-branch downsampling path , which enables the network to encode information from multiple modalities separately . multi-scale feature fusion blocks are proposed to combine feature maps from different modalities at different stages of the network . then , multi-scale feature upsampling blocks are introduced to upsample combined feature maps to leverage information from lesion shape and location . we trained and tested the proposed model using orthogonal plane orientations of each 3d modality to exploit the contextual information in all directions . the proposed pipeline is evaluated on two different datasets : a private dataset including 37 ms patients and a publicly available dataset known as the isbi 2015 longitudinal ms lesion segmentation challenge dataset , consisting of 14 ms patients . considering the isbi challenge , at the time of submission , our method was amongst the top-performing solutions . on the private dataset , using story_separator_special_tag this paper presents a simple and effective generalization method for magnetic resonance imaging ( mri ) segmentation when data is collected from multiple mri scanning sites and as a consequence is affected by ( site- ) domain shifts . we propose to integrate a traditional encoder-decoder network with a regularization network . this added network includes an auxiliary loss term which is responsible for the reduction of the domain shift problem and for the resulting improved generalization . the proposed method was evaluated on multiple sclerosis lesion segmentation from mri data . we tested the proposed model on an in-house clinical dataset including 117 patients from 56 different scanning sites . in the experiments , our method showed better generalization performance than other baseline networks . story_separator_special_tag deep learning usually requires large amounts of labeled training data , but annotating data is costly and tedious . the framework of semi-supervised learning provides the means to use both labeled data and arbitrary amounts of unlabeled data for training . recently , semi-supervised deep learning has been intensively studied for standard cnn architectures . however , fully convolutional networks ( fcns ) set the state-of-the-art for many image segmentation tasks . to the best of our knowledge , there is no existing semi-supervised learning method for such fcns yet . we lift the concept of auxiliary manifold embedding for semi-supervised learning to fcns with the help of random feature embedding . in our experiments on the challenging task of ms lesion segmentation , we leverage the proposed framework for the purpose of domain adaptation and report substantial improvements over the baseline model . story_separator_special_tag unsupervised deep learning for medical image analysis is increasingly gaining attention , since it relieves us of the need to annotate training data . recently , deep generative models and representation learning have led to new , exciting ways for unsupervised detection and delineation of biomarkers in medical images , such as lesions in brain mr . yet , supervised deep learning methods usually still perform better in these tasks , due to an optimization for explicit objectives .
we aim to combine the advantages of both worlds into a novel framework for learning from both labeled & unlabeled data , and validate our method on the challenging task of white matter lesion segmentation in brain mr images . the proposed framework relies on modeling normality with deep representation learning for unsupervised anomaly detection , which in turn provides optimization targets for training a supervised segmentation model from unlabeled data . in our experiments we successfully use the method in a semi-supervised setting for tackling domain shift , a well-known problem in mr image analysis , showing dramatically improved generalization . additionally , our experiments reveal that in a completely unsupervised setting , the proposed pipeline even outperforms the deep learning story_separator_special_tag deep unsupervised representation learning has recently led to new approaches in the field of unsupervised anomaly detection ( uad ) in brain mri . the main principle behind these works is to learn a model of normal anatomy by learning to compress and recover healthy data . this makes it possible to spot abnormal structures from erroneous recoveries of compressed , potentially anomalous samples . the concept is of great interest to the medical image analysis community as it i ) relieves us of the need for vast amounts of manually segmented training data ( a necessity for , and pitfall of , current supervised deep learning ) and ii ) theoretically allows the detection of arbitrary , even rare pathologies which supervised approaches might fail to find . to date , the experimental design of most works hinders a valid comparison , because i ) they are evaluated against different datasets and different pathologies , ii ) use different image resolutions and iii ) use different model architectures of varying complexity . the intent of this work is to establish comparability among recent methods by utilizing a single architecture , a single resolution and the same dataset ( s ) . besides providing a ranking of the story_separator_special_tag the evaluation of white matter lesion progression is an important biomarker in the follow-up of ms patients and plays a crucial role when deciding the course of treatment . current automated lesion segmentation algorithms are susceptible to variability in image characteristics related to mri scanner or protocol differences . we propose a model that improves the consistency of ms lesion segmentations in inter-scanner studies . first , we train a cnn base model to approximate the performance of icobrain , an fda-approved clinically available lesion segmentation software . a discriminator model is then trained to predict if two lesion segmentations are based on scans acquired using the same scanner type or not , achieving a 78 % accuracy in this task . finally , the base model and the discriminator are trained adversarially on multi-scanner longitudinal data to improve the inter-scanner consistency of the base model . the performance of the models is evaluated on an unseen dataset containing manual delineations . the inter-scanner variability is evaluated on test-retest data , where the adversarial network produces improved results over the base model and the fda-approved solution . story_separator_special_tag automatic segmentation of multiple sclerosis ( ms ) lesions is a challenging task due to their variability in shape , size , location and texture in magnetic resonance ( mr ) images .
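the uad principle above ( learn to compress and recover healthy data , then flag erroneous recoveries ) can be sketched with a tiny convolutional autoencoder ; the architecture , training budget , and residual-thresholding rule are all assumptions for illustration .

```python
# a minimal sketch of the uad principle: an autoencoder trained only on
# healthy slices flags anomalies as high reconstruction error.
import torch
import torch.nn as nn

ae = nn.Sequential(                       # tiny convolutional autoencoder
    nn.Conv2d(1, 8, 3, stride=2, padding=1), nn.ReLU(),
    nn.Conv2d(8, 16, 3, stride=2, padding=1), nn.ReLU(),
    nn.ConvTranspose2d(16, 8, 2, stride=2), nn.ReLU(),
    nn.ConvTranspose2d(8, 1, 2, stride=2),
)
opt = torch.optim.Adam(ae.parameters(), lr=1e-3)

healthy = torch.randn(16, 1, 64, 64)      # stand-in for healthy mr slices
for _ in range(10):                       # learn to reconstruct normality
    loss = nn.functional.mse_loss(ae(healthy), healthy)
    opt.zero_grad()
    loss.backward()
    opt.step()

test_slice = torch.randn(1, 1, 64, 64)    # potentially anomalous input
residual = (ae(test_slice) - test_slice).abs()
anomaly_map = residual > residual.mean() + 2 * residual.std()  # assumed rule
```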
a reliable , automatic segmentation method can help diagnosis and patient follow-up while reducing the time-consuming need for manual segmentation . in this paper , we present a fully automated method for ms lesion segmentation . the proposed method uses mr intensities and white matter ( wm ) priors for extraction of candidate lesion voxels and uses convolutional neural networks for false positive reduction . our networks process longitudinal data , a novel contribution in the domain of ms lesion analysis . the method was tested on the isbi 2015 dataset and obtained state-of-the-art dice results with the performance level of a trained human rater . story_separator_special_tag we propose a novel segmentation approach based on deep 3d convolutional encoder networks with shortcut connections and apply it to the segmentation of multiple sclerosis ( ms ) lesions in magnetic resonance images . our model is a neural network that consists of two interconnected pathways , a convolutional pathway , which learns increasingly more abstract and higher-level image features , and a deconvolutional pathway , which predicts the final segmentation at the voxel level . the joint training of the feature extraction and prediction pathways allows for the automatic learning of features at different scales that are optimized for accuracy for any given combination of image types and segmentation task . in addition , shortcut connections between the two pathways allow high- and low-level features to be integrated , which enables the segmentation of lesions across a wide range of sizes . we have evaluated our method on two publicly available data sets ( miccai 2008 and isbi 2015 challenges ) with the results showing that our method performs comparably to the top-ranked state-of-the-art methods , even when only relatively small data sets are available for training . in addition , we have compared our method with five freely story_separator_special_tag patients with multiple sclerosis ( ms ) regularly undergo mri for assessment of disease burden . however , interpretation may be time-consuming and prone to intra- and interobserver variability . here , we evaluate the potential of artificial neural networks ( ann ) for automated volumetric assessment of ms disease burden and activity on mri . a single-institutional dataset with 334 ms patients ( 334 mri exams ) was used to develop and train an ann for automated identification and volumetric segmentation of t2/flair-hyperintense and contrast-enhancing ( ce ) lesions . independent testing was performed in a single-institutional longitudinal dataset with 82 patients ( 266 mri exams ) . we evaluated lesion detection performance ( f1 scores ) , lesion segmentation agreement ( dice coefficients ) , and lesion volume agreement ( concordance correlation coefficients [ ccc ] ) . independent evaluation was performed on the public isbi-2015 challenge dataset . the f1 score was maximized in the training set at a detection threshold of 7 mm3 for t2/flair lesions and 14 mm3 for ce lesions . in the training set , mean f1 scores were 0.867 for t2/flair lesions and 0.636 for ce lesions , as compared to 0.878 for story_separator_special_tag background : neuropsychological deficits in patients with multiple sclerosis ( ms ) have been shown to be associated with the major pathological substrates of the disease , i.e. , inflammatory demyelination and neurodegeneration . double inversion recovery sequences allow cortical lesions ( cls ) to be detected in the brain of patients with ms .
modern postprocessing techniques allow cortical atrophy to be assessed reliably . objective : to investigate the contribution of cortical gray matter lesions and tissue loss to cognitive impairment in patients with relapsing-remitting ms . design : cross-sectional survey . setting : referral , hospital-based ms clinic . patients : seventy patients with relapsing-remitting ms . main outcome measures : neuropsychological performance was tested using the rao brief repeatable battery of neuropsychological tests , version a . patients who scored 2 sds below the mean normative values on at least 1 test of the rao brief repeatable battery of neuropsychological tests , version a , were considered to be cognitively impaired . a composite cognitive score ( the cognitive impairment index ) was computed . t2 hyperintense white matter lesion volume , contrast-enhancing lesion number , cl number and volume , normalized brain volume , and normalized neocortical gray matter volume were also assessed . results story_separator_special_tag in conjunction with the isbi 2015 conference , we organized a longitudinal lesion segmentation challenge providing training and test data to registered participants . the training data consisted of five subjects with a mean of 4.4 time-points , and test data of fourteen subjects with a mean of 4.4 time-points . all 82 data sets had the white matter lesions associated with multiple sclerosis delineated by two human expert raters . eleven teams submitted results using state-of-the-art lesion segmentation algorithms to the challenge , with ten teams presenting their results at the conference . we present a quantitative evaluation comparing the consistency of the two raters as well as exploring the performance of the eleven submitted results in addition to three other lesion segmentation algorithms . the challenge presented three unique opportunities : ( 1 ) the sharing of a rich data set ; ( 2 ) collaboration and comparison of the various avenues of research being pursued in the community ; and ( 3 ) a review and refinement of the evaluation metrics currently in use . we report on the performance of the challenge participants , as well as the construction and evaluation of a consensus delineation . story_separator_special_tag the sørensen-dice index ( sdi ) is a widely used measure for evaluating medical image segmentation algorithms . it offers a standardized measure of segmentation accuracy which has proven useful . however , it offers diminishing insight when the number of objects is unknown , such as in white matter lesion segmentation of multiple sclerosis ( ms ) patients . we present a refinement for finer-grained parsing of sdi results in situations where the number of objects is unknown . we explore these ideas with two case studies . our first study explores an inter-rater comparison , showing that smaller lesions can not be reliably identified . in our second case study , we demonstrate fusing multiple ms lesion segmentation algorithms based on the insights provided by our analysis to generate a segmentation that exhibits improved performance . this work demonstrates the wealth of information that can be learned from refined analysis of medical image segmentations . story_separator_special_tag supervised machine learning algorithms , especially in the medical domain , are affected by considerable ambiguity in expert markings , primarily in proximity to lesion contours .
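for reference , the sdi over two binary masks is dice = 2 | a ∩ b | / ( | a | + | b | ) ; a minimal computation is sketched below on synthetic masks . the per-lesion refinement discussed above would additionally split the masks into connected components before scoring , which is not shown here .

```python
# a minimal sorensen-dice (sdi) computation over binary masks; the per-lesion
# refinement would first split masks into connected components.
import numpy as np

def dice(pred, truth, eps=1e-8):
    pred, truth = pred.astype(bool), truth.astype(bool)
    inter = np.logical_and(pred, truth).sum()
    return (2.0 * inter + eps) / (pred.sum() + truth.sum() + eps)

rng = np.random.default_rng(6)
truth = rng.random((64, 64, 64)) > 0.98   # sparse synthetic lesion mask
pred = truth.copy()
pred[32:] = rng.random((32, 64, 64)) > 0.98  # degrade half of the volume
print(f"dice = {dice(pred, truth):.3f}")
```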
in this study we address the case where the experts ' opinion for those ambiguous areas is considered as a distribution over the possible values . we propose a novel method that modifies the experts ' distributional opinion at ambiguous areas by fusing their markings based on their sensitivity and specificity . the algorithm can be applied at the end of any label fusion algorithm that can handle soft values . the algorithm was applied to obtain consensus from soft multiple sclerosis ( ms ) segmentation masks . soft ms segmentations are constructed from manual binary delineations by including lesion surrounding voxels in the segmentation mask with a reduced confidence weight . the method was evaluated on the miccai 2016 challenge dataset , and outperformed previous methods . story_separator_special_tag we present a study of multiple sclerosis segmentation algorithms conducted at the international miccai 2016 challenge . this challenge was operated using a new open-science computing infrastructure . this allowed for the automatic and independent evaluation of a large range of algorithms in a fair and completely automatic manner . this computing infrastructure was used to evaluate thirteen methods of ms lesion segmentation , exploring a broad range of state-of-the-art algorithms , against a high-quality database of 53 ms cases coming from four centers following a common definition of the acquisition protocol . each case was annotated manually by an unprecedented number of seven different experts . results of the challenge highlighted that automatic algorithms , including the recent machine learning methods ( random forests , deep learning , etc . ) , are still trailing human expertise on both detection and delineation criteria . in addition , we demonstrate that computing a statistically robust consensus of the algorithms performs closer to human expertise on one score ( segmentation ) although still trailing on detection scores . story_separator_special_tag objective : the aim of this study is to assess the performance of deep learning convolutional neural networks ( cnns ) in segmenting gadolinium-enhancing lesions using a large cohort of multiple sclerosis patients . story_separator_special_tag multiple sclerosis ( ms ) is a chronic disease . it affects the central nervous system and its clinical manifestation can vary . magnetic resonance imaging ( mri ) is often used to detect , characterize and quantify ms lesions in the brain , due to the detailed structural information that it can provide . manual detection and measurement of ms lesions in mri data is time-consuming , subjective and prone to errors . therefore , multiple automated methodologies for mri-based ms lesion segmentation have been proposed . here , a review of the state-of-the-art of automatic methods available in the literature is presented . the current survey provides a categorization of the methodologies in existence in terms of their input data handling , their main strategy of segmentation and their type of supervision . the strengths and weaknesses of each category are analyzed and explicitly discussed . the positive and negative aspects of the methods are highlighted , pointing out the future trends and , thus , leading to possible promising directions for future research . in addition , a further clustering of the methods , based on the databases used for their evaluation , is provided .
story_separator_special_tag segmentation of multiple sclerosis ( ms ) lesions in longitudinal brain mr scans is performed for monitoring the progression of ms lesions . we hypothesize that the spatio-temporal cues in longitudinal data can aid the segmentation algorithm . therefore , we propose a multi-task learning approach by defining an auxiliary self-supervised task of deformable registration between two time-points to guide the neural network toward learning from spatio-temporal changes . we show the efficacy of our method on a clinical dataset comprised of 70 patients with one follow-up study for each patient . our results show that spatio-temporal information in longitudinal data is a beneficial cue for improving segmentation . we improve the result of the current state-of-the-art by 2.6 % in terms of overall score ( p < 0.05 ) . code is publicly available ( https : //github.com/stefandenn3r/spatio-temporal-ms-lesion-segmentation ) . story_separator_special_tag we present a new family of subgradient methods that dynamically incorporate knowledge of the geometry of the data observed in earlier iterations to perform more informative gradient-based learning . metaphorically , the adaptation allows us to find needles in haystacks in the form of very predictive but rarely seen features . our paradigm stems from recent advances in stochastic optimization and online learning which employ proximal functions to control the gradient steps of the algorithm . we describe and analyze an apparatus for adaptively modifying the proximal function , which significantly simplifies setting a learning rate and results in regret guarantees that are provably as good as the best proximal function that can be chosen in hindsight . we give several efficient algorithms for empirical risk minimization problems with common and important regularization functions and domain constraints . we experimentally study our theoretical analysis and show that adaptive subgradient methods outperform state-of-the-art , yet non-adaptive , subgradient algorithms . story_separator_special_tag background and purpose : most brain lesions are characterized by hyperintense signal on flair . we sought to develop an automated deep learning-based method for segmentation of abnormalities on flair and volumetric quantification on clinical brain mris across many pathologic entities and scanning parameters . we evaluated the performance of the algorithm compared with manual segmentation and existing automated methods . materials and methods : we adapted a u-net convolutional neural network architecture for brain mris using 3d volumes . this network was retrospectively trained on 295 brain mris to perform automated flair lesion segmentation . performance was evaluated on 92 validation cases using dice scores and voxelwise sensitivity and specificity , compared with radiologists ' manual segmentations . the algorithm was also evaluated on measuring total lesion volume . results : our model demonstrated accurate flair lesion segmentation performance ( median dice score , 0.79 ) on the validation dataset across a large range of lesion characteristics . across 19 neurologic diseases , performance was significantly higher than existing methods ( dice , 0.56 and 0.41 ) and approached human performance ( dice , 0.81 ) . there was a strong correlation between the predictions of lesion volume of story_separator_special_tag the appearance of contrast-enhanced pathologies ( e.g .
lesion , cancer ) is an important marker of disease activity , stage and treatment efficacy in clinical trials . the automatic detection and segmentation of these enhanced pathologies remains a difficult challenge , as they can be very small and visibly similar to other non-pathological enhancements ( e.g . blood vessels ) . in this paper , we propose a deep neural network classifier for the detection and segmentation of gadolinium-enhancing lesions in brain mri of patients with multiple sclerosis ( ms ) . to avoid false positive and false negative assertions , the proposed end-to-end network uses an enhancement-based attention mechanism which assigns saliency based on the differences between the t1-weighted images before and after injection of gadolinium , and works to first identify candidate lesions and then to remove the false positives . the effect of the saliency map is evaluated on 2293 patient multi-channel mri scans acquired during two proprietary , multi-center clinical trials for ms treatments . inclusion of the attention mechanism results in a decrease in false positive lesion voxels over a basic u-net [ 2 ] and deepmedic [ 6 ] . in terms story_separator_special_tag we propose a novel method to automatically detect and segment multiple sclerosis lesions , located both in white matter and in the cortex . the algorithm consists of two main steps : ( i ) a supervised approach that outputs an initial bitmap locating candidates of lesional tissue and ( ii ) a bayesian partial volume estimation framework that estimates the lesion concentration in each voxel . by using a mixel approach , potential partial volume effects especially affecting small lesions can be modeled , thus yielding improved lesion segmentation . the proposed method is tested on multiple mr image sequences including 3d mp2rage , 3d flair , and 3d dir . quantitative evaluation is done by comparison with manual segmentations on a cohort of 39 multiple sclerosis early-stage patients . story_separator_special_tag deep neural networks have shown promise in the lesion segmentation of multiple sclerosis ( ms ) from multi-contrast mri including t1 , t2 , pd and flair sequences . however , one challenge in deploying such networks into clinical practice is missing mri sequences due to the variability of image acquisition protocols . therefore , trained networks need to adapt to practical situations where specific mri sequences are unavailable . in this paper , we propose a dnn-based ms lesion segmentation framework with a novel technique called sequence dropout . without altering network architecture , our method ensured the robustness of the network to missing sequences and could achieve its maximal possible performance from a given set of input sequences . experiments were performed on the ieee isbi 2015 longitudinal ms lesion challenge dataset , and our method is currently ranked 2nd with a dice similarity coefficient of 0.684 . experiments also showed our network achieved its maximal performance with one missing sequence during deployment by comparing with separate networks of the same architecture but trained using the corresponding set of input sequences . our network achieved a non-inferior performance without re-training . experiments with multiple missing sequences further showed the robustness of our story_separator_special_tag multiple sclerosis ( ms ) is a chronic , often disabling , autoimmune disease affecting the central nervous system and characterized by demyelination and neuropathic alterations .
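a minimal sketch of the sequence dropout idea described above , assuming input tensors of shape ( batch , sequence , depth , height , width ) ; the drop probability and the rule that at least one sequence survives are illustrative choices , not the paper 's exact scheme .

import torch

def sequence_dropout(x, p=0.25):
    # randomly zero whole mri sequences ( input channels ) during training
    keep = torch.rand(x.shape[0], x.shape[1], device=x.device) > p
    keep[keep.sum(dim=1) == 0, 0] = True  # keep at least one sequence per sample
    return x * keep[:, :, None, None, None].float()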
magnetic resonance ( mr ) images play a pivotal role in the diagnosis and screening of ms. mr images identify and localize demyelinating lesions ( or plaques ) and possible associated atrophic lesions whose mr appearance reflects the evolution of the disease . we propose a novel ms lesion segmentation method for mr images , based on convolutional neural networks ( cnns ) and partial self-supervision , and study the pros and cons of using self-supervision for the current segmentation task . investigating the transferability by freezing the first convolutional layers , we discovered that improvements are obtained when the cnn is retrained from the first layers . we believe such results suggest that mri segmentation is a singular task needing high-level analysis from the very first stages of the vision process , as opposed to vision tasks aimed at day-to-day life such as face recognition or traffic sign classification . the evaluation of segmentation quality has been performed on full image size binary maps assembled from predictions on story_separator_special_tag multiple sclerosis lesion activity segmentation is the task of detecting new and enlarging lesions that appeared between a baseline and a follow-up brain mri scan . while deep learning methods for single-scan lesion segmentation are common , deep learning approaches for lesion activity have only been proposed recently . here , a two-path architecture processes two 3d mri volumes from two time points . in this work , we investigate whether extending this problem to full 4d deep learning using a history of mri volumes and thus an extended baseline can improve performance . for this purpose , we design a recurrent multi-encoder-decoder architecture for processing 4d data . we find that adding more temporal information is beneficial and our proposed architecture outperforms previous approaches with a lesion-wise true positive rate of 0.84 at a lesion-wise false positive rate of 0.19 . story_separator_special_tag multiple sclerosis is an inflammatory autoimmune demyelinating disease that is characterized by lesions in the central nervous system . typically , magnetic resonance imaging ( mri ) is used for tracking disease progression . automatic image processing methods can be used to segment lesions and derive quantitative lesion parameters . so far , methods have focused on lesion segmentation for individual mri scans . however , for monitoring disease progression , lesion activity in terms of new and enlarging lesions between two time points is a crucial biomarker . for this problem , several classic methods have been proposed , e.g. , using difference volumes . despite their success for single-volume lesion segmentation , deep learning approaches are still rare for lesion activity segmentation . in this work , convolutional neural networks ( cnns ) are studied for lesion activity segmentation from two time points . for this task , cnns are designed and evaluated that combine the information from two points in different ways . in particular , two-path architectures with attention-guided interactions are proposed that enable effective information exchange between the two time points ' processing paths . it is demonstrated that deep learning-based methods outperform classic approaches story_separator_special_tag the anatomical location of imaging features is of crucial importance for accurate diagnosis in many medical tasks .
convolutional neural networks ( cnn ) have had huge successes in computer vision , but they lack the natural ability to incorporate the anatomical location in their decision-making process , hindering success in some medical image analysis tasks . in this paper , to integrate the anatomical location information into the network , we propose several deep cnn architectures that consider multi-scale patches or take explicit location features while training . we apply and compare the proposed architectures for segmentation of white matter hyperintensities in brain mr images on a large dataset . as a result , we observe that the cnns that incorporate location information substantially outperform a conventional segmentation method with hand-crafted features as well as cnns that do not integrate location information . on a test set of 46 scans , the best configuration of our networks obtained a dice score of 0.791 , compared to 0.797 for an independent human observer . performance levels of the machine and the independent human observer were not statistically significantly different ( p-value=0.17 ) . story_separator_special_tag magnetic resonance imaging ( mri ) is widely used in routine clinical diagnosis and treatment . however , variations in mri acquisition protocols result in different appearances of normal and diseased tissue in the images . convolutional neural networks ( cnns ) , which have been shown to be successful in many medical image analysis tasks , are typically sensitive to the variations in imaging protocols . therefore , in many cases , networks trained on data acquired with one mri protocol do not perform satisfactorily on data acquired with different protocols . this limits the use of models trained with large annotated legacy datasets on a new dataset with a different domain , which is often a recurring situation in clinical settings . in this study , we aim to answer the following central questions regarding domain adaptation in medical image analysis : given a fitted legacy model , 1 ) how much data from the new domain is required for a decent adaptation of the original network ? ; and , 2 ) what portion of the pre-trained model parameters should be retrained given a certain number of the new domain training samples ? to address these questions , story_separator_special_tag fully convolutional deep neural networks have been asserted to be fast and precise frameworks with great potential in image segmentation . one of the major challenges in training such networks arises when the data are unbalanced , which is common in many medical imaging applications , such as lesion segmentation , where lesion class voxels are often much lower in numbers than non-lesion voxels . a trained network with unbalanced data may make predictions with high precision and low recall , being severely biased toward the non-lesion class , which is particularly undesired in most medical applications where false negatives are actually more important than false positives . various methods have been proposed to address this problem , including two-step training , sample re-weighting , balanced sampling , and more recently , similarity loss functions and focal loss . in this paper , we fully trained convolutional deep neural networks using an asymmetric similarity loss function to mitigate the issue of data imbalance and achieve a much better tradeoff between precision and recall .
to this end , we developed a 3d fully convolutional densely connected network ( fc-densenet ) with large overlapping image patches as input and an asymmetric similarity loss story_separator_special_tag we introduce a deep learning image segmentation framework that is extremely robust to missing imaging modalities . instead of attempting to impute or synthesize missing data , the proposed approach learns , for each modality , an embedding of the input image into a single latent vector space for which arithmetic operations ( such as taking the mean ) are well defined . points in that space , which are averaged over modalities available at inference time , can then be further processed to yield the desired segmentation . as such , any combinatorial subset of available modalities can be provided as input , without having to learn a combinatorial number of imputation models . evaluated on two neurological mri datasets ( brain tumors and ms lesions ) , the approach yields state-of-the-art segmentation results when provided with all modalities ; moreover , its performance degrades remarkably gracefully when modalities are removed , significantly more so than alternative mean-filling or other synthesis approaches . story_separator_special_tag automatic lesion segmentation on conventional magnetic resonance imaging is an essential component in disease diagnosis , assessment , and follow-up . recently , extensive deep neural networks have been designed for automatic lesion segmentation . however , these approaches are not easy to further optimize owing to their poor interpretability . in this paper , we present a novel cross attention densely-connected network ( ca-dcn ) for multiple sclerosis lesion segmentation , which integrates an attention mechanism into the encoder-decoder architecture . aiming to further improve the performance of the model , we propose a comprehensive cross attention mechanism module by combining the characteristics of spatial and channel domains . our method is evaluated on the public international symposium on biomedical imaging ( isbi ) 2015 multiple sclerosis segmentation challenge . at the time of submission , our method was amongst the top-performing solutions . story_separator_special_tag multiple sclerosis ( ms ) lesion segmentation from mr images is important for neuroimaging analysis . ms is diffuse , multifocal , and tends to involve peripheral brain structures such as the white matter , corpus callosum , and brainstem . recently , u-net has made great achievements in the medical image segmentation area . however , the insufficient use of context information and feature representation makes it fail to segment ms lesions accurately . to solve the problem , 3d attention context u-net ( acu-net ) is proposed for ms lesion segmentation in this paper . the proposed acu-net includes a 3d spatial attention block , which is used to enrich spatial details and feature representation of lesions in the decoding stage . furthermore , in the encoding and decoding stage of the network , a 3d context-guided module is designed for guiding local information and surrounding information . the proposed acu-net was evaluated on the isbi 2015 longitudinal ms lesion segmentation challenge dataset , and it achieved superior performance compared to the latest approaches . story_separator_special_tag fueled by the diversity of datasets , semantic segmentation is a popular subfield in medical image analysis with a vast number of new methods being proposed each year .
this ever-growing jungle of methodologies , however , becomes increasingly impenetrable . at the same time , many proposed methods fail to generalize beyond the experiments they were demonstrated on , thus hampering the process of developing a segmentation algorithm on a new dataset . here we present nnu-net ( 'no-new-net ' ) , a framework that automatically adapts itself to any given new dataset . while this process has been completely human-driven so far , we make a first attempt to automate necessary adaptations such as preprocessing , the exact patch size , batch size , and inference settings based on the properties of a given dataset . remarkably , nnu-net strips away the architectural bells and whistles that are typically proposed in the literature and relies on just a simple u-net architecture embedded in a robust training scheme . out of the box , nnu-net achieves state of the art performance on six well-established segmentation challenges . source code is available at https : //github.com/mic-dkfz/nnunet . story_separator_special_tag highlights : an efficient 11-layer-deep , multi-scale , 3d cnn architecture ; a novel training strategy that significantly boosts performance ; the first employment of a 3d fully connected crf for post-processing ; state-of-the-art performance on three challenging lesion segmentation tasks ; new insights into the automatically learned intermediate representations . we propose a dual-pathway , 11-layer-deep , three-dimensional convolutional neural network for the challenging task of brain lesion segmentation . the devised architecture is the result of an in-depth analysis of the limitations of current networks proposed for similar applications . to overcome the computational burden of processing 3d medical scans , we have devised an efficient and effective dense training scheme which joins the processing of adjacent image patches into one pass through the network while automatically adapting to the inherent class imbalance present in the data . further , we analyze the development of deeper , thus more discriminative 3d cnns . in order to incorporate both local and larger contextual information , we employ a dual pathway architecture that processes the input images at multiple scales simultaneously . for post processing of the network 's soft segmentation , we use a 3d fully story_separator_special_tag in this paper , we propose a fast fully convolutional neural network ( fcnn ) for crowd segmentation . by replacing the fully connected layers in cnn with 1 by 1 convolution kernels , fcnn takes whole images as inputs and directly outputs segmentation maps by one pass of forward propagation . it has the property of translation invariance like patch-by-patch scanning but with much lower computation cost . once fcnn is learned , it can process input images of any size without warping them to a standard size . these attractive properties make it extendable to other general image segmentation problems . based on fcnn , a multi-stage deep learning approach is proposed to integrate appearance and motion cues for crowd segmentation . both appearance filters and motion filters are pretrained stage-by-stage and then jointly optimized . different combination methods are investigated . the effectiveness of our approach and component-wise analysis are evaluated on two crowd segmentation datasets created by us , which include image frames from 235 and 11 scenes , respectively . they are currently the largest crowd segmentation datasets and will be released to the public .
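the fully convolutional trick described above , replacing fully connected layers with 1-by-1 convolutions so arbitrary-sized inputs yield dense output maps , can be sketched as follows ; the layer sizes are illustrative .

import torch.nn as nn

fc_head = nn.Linear(512, 2)                   # per-patch classifier head
conv_head = nn.Conv2d(512, 2, kernel_size=1)  # equivalent 1x1-conv head
# copy the fc weights so whole images can be processed in one forward pass
conv_head.weight.data = fc_head.weight.data.view(2, 512, 1, 1).clone()
conv_head.bias.data = fc_head.bias.data.clone()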
story_separator_special_tag this paper explores the use of a soft ground-truth mask ( `` soft mask '' ) to train a fully convolutional neural network ( fcnn ) for segmentation of multiple sclerosis ( ms ) lesions . detection and segmentation of ms lesions is a complex task largely due to the extremely unbalanced data , with a very small number of lesion pixels that can be used for training . utilizing the anatomical knowledge that the lesion surrounding pixels may also include some lesion-level information , we suggest increasing the data set of the lesion class with neighboring pixel data - with a reduced confidence weight . a soft mask is constructed by morphological dilation of the binary segmentation mask provided by a given expert , where expert-marked voxels receive label 1 and voxels of the dilated region are assigned a soft label . in the methodology proposed , the fcnn is trained using the soft mask . on the isbi 2015 challenge dataset , this is shown to provide a better precision-recall tradeoff and to achieve a higher average dice similarity coefficient . we also show that by using this soft mask scheme we can improve the network segmentation story_separator_special_tag supervised machine learning algorithms , especially in the medical domain , are affected by considerable ambiguity in expert markings . in this study we address the case where the experts ' opinion is obtained as a distribution over the possible values . we propose a soft version of the staple algorithm for experts ' markings fusion that can handle soft values . the algorithm was applied to obtain consensus from soft multiple sclerosis ( ms ) segmentation masks . soft ms segmentations are constructed from manual binary delineations by including lesion surrounding voxels in the segmentation mask with a reduced confidence weight . we suggest that these voxels contain additional anatomical information about the lesion structure . the fused masks are utilized as ground-truth masks to train a fully convolutional neural network ( fcnn ) . the proposed method was evaluated on the miccai 2016 challenge dataset , and yields an improved precision-recall tradeoff and a higher average dice similarity coefficient . story_separator_special_tag manual segmentation of multiple sclerosis ( ms ) in brain imaging is a challenging task due to intra- and inter-observer variability resulting in poor reproducibility . to overcome the limitations of manual assessment various automatic segmentation techniques have been proposed in the literature . this paper presents a systematic review of the literature in automated multiple sclerosis lesion segmentation , the lesions ' complexity and classification of various existing automated methods . a comparative analysis of the various ms segmentation techniques is also presented and future directions are identified to carry out research work further in this field . story_separator_special_tag multiple sclerosis ( ms ) lesion segmentation is critical for the diagnosis , treatment and follow-up of ms patients . nowadays , the ms lesion segmentation in magnetic resonance image ( mri ) is a time-consuming manual process carried out by medical experts , which is subject to intra- and inter-expert variability . machine learning methods including deep learning have been applied to this problem , obtaining solutions that outperformed other conventional automatic methods . deep learning methods have especially turned out to be promising , attaining human expert performance levels .
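the soft-mask construction described above can be sketched with a morphological dilation ; the rim confidence of 0.5 and a single dilation step are illustrative assumptions rather than the values used in the paper .

import numpy as np
from scipy.ndimage import binary_dilation

def soft_mask(binary, rim_weight=0.5, iterations=1):
    # expert-marked voxels keep label 1 ; dilated rim voxels get a soft label
    binary = binary.astype(bool)
    dilated = binary_dilation(binary, iterations=iterations)
    soft = binary.astype(float)
    soft[dilated & ~binary] = rim_weight
    return soft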
our aim is to develop a fully automatic method that will help experts in their task and reduce the necessary time and effort in the process . in this paper , we propose a new approach based on convolutional neural networks ( cnn ) to the ms lesion segmentation problem . we study different cnn approaches and compare their segmentation performance . we obtain an average dice score of 57.5 % and a true positive rate of 59.7 % for a real dataset of 59 patients with a specific cnn approach , outperforming the other cnn approaches and a commonly used automatic tool for ms lesion story_separator_special_tag we introduce adam , an algorithm for first-order gradient-based optimization of stochastic objective functions , based on adaptive estimates of lower-order moments . the method is straightforward to implement , is computationally efficient , has low memory requirements , is invariant to diagonal rescaling of the gradients , and is well suited for problems that are large in terms of data and/or parameters . the method is also appropriate for non-stationary objectives and problems with very noisy and/or sparse gradients . the hyper-parameters have intuitive interpretations and typically require little tuning . some connections to related algorithms , by which adam was inspired , are discussed . we also analyze the theoretical convergence properties of the algorithm and provide a regret bound on the convergence rate that is comparable to the best known results under the online convex optimization framework . empirical results demonstrate that adam works well in practice and compares favorably to other stochastic optimization methods . finally , we discuss adamax , a variant of adam based on the infinity norm . story_separator_special_tag we trained a large , deep convolutional neural network to classify the 1.2 million high-resolution images in the imagenet lsvrc-2010 contest into the 1000 different classes . on the test data , we achieved top-1 and top-5 error rates of 37.5 % and 17.0 % , respectively , which is considerably better than the previous state-of-the-art . the neural network , which has 60 million parameters and 650,000 neurons , consists of five convolutional layers , some of which are followed by max-pooling layers , and three fully connected layers with a final 1000-way softmax . to make training faster , we used non-saturating neurons and a very efficient gpu implementation of the convolution operation . to reduce overfitting in the fully connected layers we employed a recently developed regularization method called `` dropout '' that proved to be very effective . we also entered a variant of this model in the ilsvrc-2012 competition and achieved a winning top-5 test error rate of 15.3 % , compared to 26.2 % achieved by the second-best entry . story_separator_special_tag in this work , we present a comparison of a shallow and a deep learning architecture for the automated segmentation of white matter lesions in mr images of multiple sclerosis patients . in particular , we train and test both methods on early-stage disease patients , to verify their performance in challenging conditions , more similar to a clinical setting than what is typically provided in multiple sclerosis segmentation challenges . furthermore , we evaluate a prototype naive combination of the two methods , which refines the final segmentation . all methods were trained on 32 patients , and the evaluation was performed on a pure test set of 73 cases .
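for reference , the adam update referenced above can be written out ; this restates the standard formulation with step size $\alpha$ , decay rates $\beta_1 , \beta_2$ , parameters $\theta_t$ and gradient $g_t$ .

$$
\begin{aligned}
m_t &= \beta_1 m_{t-1} + (1-\beta_1)\,g_t , \qquad
v_t = \beta_2 v_{t-1} + (1-\beta_2)\,g_t^2 ,\\
\hat{m}_t &= m_t/(1-\beta_1^t) , \qquad
\hat{v}_t = v_t/(1-\beta_2^t) ,\\
\theta_t &= \theta_{t-1} - \alpha\,\hat{m}_t/\big(\sqrt{\hat{v}_t}+\epsilon\big) .
\end{aligned}
$$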
results show low lesion-wise false positives ( 30 % ) for the deep learning architecture , whereas the shallow architecture yields the best dice coefficient ( 63 % ) and volume difference ( 19 % ) . combining both shallow and deep architectures further improves the lesion-wise metrics ( 69 % and 26 % lesion-wise true and false positive rate , respectively ) . story_separator_special_tag the automated detection of cortical lesions ( cls ) in patients with multiple sclerosis ( ms ) is a challenging task that , despite its clinical relevance , has received very little attention . accurate detection of the small and scarce lesions requires specialized sequences and high or ultra-high field mri . for supervised training based on multimodal structural mri at 7t , two experts generated ground truth segmentation masks of 60 patients with 2014 cls . we implemented a simplified 3d u-net with three resolution levels ( 3d u-net- ) . by increasing the complexity of the task ( adding brain tissue segmentation ) , while randomly dropping input channels during training , we improved the performance compared to the baseline . considering a minimum lesion size of 0.75 µl , we achieved a lesion-wise cortical lesion detection rate of 67 % and a false positive rate of 42 % . however , 393 ( 24 % ) of the lesions reported as false positives were post-hoc confirmed as potential or definite lesions by an expert . this indicates the potential of the proposed method to support experts in the tedious process of cl manual segmentation story_separator_special_tag quantified volume and count of white-matter lesions based on magnetic resonance ( mr ) images are important biomarkers in several neurodegenerative diseases . for a routine extraction of these biomarkers an accurate and reliable automated lesion segmentation is required . to objectively and reliably determine a standard automated method , however , creation of standard validation datasets is of extremely high importance . ideally , these datasets should be publicly available in conjunction with standardized evaluation methodology to enable objective validation of novel and existing methods . for validation purposes , we present a novel mr dataset of 30 multiple sclerosis patients and a novel protocol for creating reference white-matter lesion segmentations based on multi-rater consensus . on these datasets three expert raters individually segmented white-matter lesions , using in-house developed semi-automated lesion contouring tools . later , the raters revised the segmentations in several joint sessions to reach a consensus on segmentation of lesions . to evaluate the variability , and as quality assurance , the protocol was executed twice on the same mr images , with a six-month break . the obtained intra-consensus variability was substantially lower compared to the intra- and inter-rater variabilities , showing improved story_separator_special_tag the highest accuracy object detectors to date are based on a two-stage approach popularized by r-cnn , where a classifier is applied to a sparse set of candidate object locations . in contrast , one-stage detectors that are applied over a regular , dense sampling of possible object locations have the potential to be faster and simpler , but have trailed the accuracy of two-stage detectors thus far . in this paper , we investigate why this is the case . we discover that the extreme foreground-background class imbalance encountered during training of dense detectors is the central cause .
we propose to address this class imbalance by reshaping the standard cross entropy loss such that it down-weights the loss assigned to well-classified examples . our novel focal loss focuses training on a sparse set of hard examples and prevents the vast number of easy negatives from overwhelming the detector during training . to evaluate the effectiveness of our loss , we design and train a simple dense detector we call retinanet . our results show that when trained with the focal loss , retinanet is able to match the speed of previous one-stage detectors while surpassing the accuracy of story_separator_special_tag convolutional networks are powerful visual models that yield hierarchies of features . we show that convolutional networks by themselves , trained end-to-end , pixels-to-pixels , exceed the state-of-the-art in semantic segmentation . our key insight is to build `` fully convolutional '' networks that take input of arbitrary size and produce correspondingly-sized output with efficient inference and learning . we define and detail the space of fully convolutional networks , explain their application to spatially dense prediction tasks , and draw connections to prior models . we adapt contemporary classification networks ( alexnet , the vgg net , and googlenet ) into fully convolutional networks and transfer their learned representations by fine-tuning to the segmentation task . we then define a novel architecture that combines semantic information from a deep , coarse layer with appearance information from a shallow , fine layer to produce accurate and detailed segmentations . our fully convolutional network achieves state-of-the-art segmentation of pascal voc ( 20 % relative improvement to 62.2 % mean iu on 2012 ) , nyudv2 , and sift flow , while inference takes one third of a second for a typical image . story_separator_special_tag segmentation of anatomical regions of interest such as vessels or small lesions in medical images is still a difficult problem that is often tackled with manual input by an expert . one of the major challenges for this task is that the appearance of foreground ( positive ) regions can be similar to background ( negative ) regions . as a result , many automatic segmentation algorithms tend to exhibit asymmetric errors , typically producing more false positives than false negatives . in this paper , we aim to leverage this asymmetry and train a diverse ensemble of models with very high recall , while sacrificing their precision . our core idea is straightforward : a diverse ensemble of low-precision and high-recall models is likely to make different false positive errors ( classifying background as foreground in different parts of the image ) , but the true positives will tend to be consistent . thus , in aggregate the false positive errors will cancel out , yielding high performance for the ensemble . our strategy is general and can be applied with any segmentation model . in three different applications ( carotid artery segmentation in a neck ct story_separator_special_tag biomedical image segmentation requires both voxel-level information and global context . we report on a deep convolutional architecture which combines a fully-convolutional network for local features and an encoder-decoder network in which convolutional layers and maxpooling compute high-level features , which are then upsampled to the resolution of the initial image using further convolutional layers and tied unpooling .
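the reshaped cross entropy described above has a simple closed form ; with $p_t$ the model 's probability for the true class , $\gamma \ge 0$ the focusing parameter and $\alpha_t$ an optional class weight , the focal loss is

$$ \mathrm{FL}(p_t) = -\,\alpha_t\,(1-p_t)^{\gamma}\,\log(p_t) , $$

which reduces to the standard ( weighted ) cross entropy at $\gamma = 0$ and shrinks the loss of well-classified examples as $\gamma$ grows .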
we apply the method to segmenting multiple sclerosis lesions and gliomas . story_separator_special_tag segmentation of white matter lesions and deep grey matter structures is an important task in the quantification of magnetic resonance imaging in multiple sclerosis . in this paper we explore segmentation solutions based on convolutional neural networks ( cnns ) for providing fast , reliable segmentations of lesions and grey-matter structures in multi-modal mr imaging , and the performance of these methods when applied to out-of-centre data . we trained two state-of-the-art fully convolutional cnn architectures on the 2016 msseg training dataset , which was annotated by seven independent human raters : a reference implementation of a 3d unet , and a more recently proposed 3d-to-2d architecture ( deepscan ) . we then retrained those methods on a larger dataset from a single centre , with and without labels for other brain structures . we quantified changes in performance owing to dataset shift , and changes in performance by adding the additional brain-structure labels . we also compared performance with freely available reference methods . both fully-convolutional cnn methods substantially outperform other approaches in the literature when trained and evaluated in cross-validation on the msseg dataset , showing agreement with human raters in the range of human inter-rater variability . story_separator_special_tag the detection of new or enlarged white-matter lesions is a vital task in the monitoring of patients undergoing disease-modifying treatment for multiple sclerosis . however , the definition of 'new or enlarged ' is not fixed , and it is known that lesion-counting is highly subjective , with a high degree of inter- and intra-rater variability . automated methods for lesion quantification , if accurate enough , hold the potential to make the detection of new and enlarged lesions consistent and repeatable . however , the majority of lesion segmentation algorithms are not evaluated for their ability to separate radiologically progressive from radiologically stable patients , despite this being a pressing clinical use-case . in this paper , we explore the ability of a deep learning segmentation classifier to separate stable from progressive patients by lesion volume and lesion count , and find that neither measure provides a good separation . instead , we propose a method for identifying lesion changes of high certainty , and establish on an internal dataset of longitudinal multiple sclerosis cases that this method is able to separate progressive from stable time-points with a very high level of discrimination ( auc = 0.999 ) , while story_separator_special_tag convolutional neural networks ( cnns ) have been recently employed to solve problems from both the computer vision and medical image analysis fields . despite their popularity , most approaches are only able to process 2d images while most medical data used in clinical practice consists of 3d volumes . in this work we propose an approach to 3d image segmentation based on a volumetric , fully convolutional , neural network . our cnn is trained end-to-end on mri volumes depicting the prostate , and learns to predict segmentation for the whole volume at once . we introduce a novel objective function , that we optimise during training , based on the dice coefficient . in this way we can deal with situations where there is a strong imbalance between the number of foreground and background voxels .
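a minimal sketch of a dice-based training objective in the spirit of the one described above , written for flattened probability maps ; the smoothing constant is an illustrative convention .

import torch

def soft_dice_loss(probs, target, eps=1e-6):
    # probs , target : tensors of shape ( batch , voxels ) with values in [0 , 1]
    inter = (probs * target).sum(dim=1)
    denom = probs.sum(dim=1) + target.sum(dim=1)
    return 1.0 - ((2.0 * inter + eps) / (denom + eps)).mean()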
to cope with the limited number of annotated volumes available for training , we augment the data by applying random non-linear transformations and histogram matching . we show in our experimental evaluation that our approach achieves good performance on challenging test data while requiring only a fraction of the processing time needed by other previous methods . story_separator_special_tag deep learning ( dl ) networks have recently been shown to outperform other segmentation methods on various public , medical-image challenge datasets [ 3 , 11 , 16 ] , especially for large pathologies . however , in the context of diseases such as multiple sclerosis ( ms ) , monitoring all the focal lesions visible on mri sequences , even very small ones , is essential for disease staging , prognosis , and evaluating treatment efficacy . moreover , producing deterministic outputs hinders dl adoption into clinical routines . uncertainty estimates for the predictions would permit subsequent revision by clinicians . we present the first exploration of multiple uncertainty estimates based on monte carlo ( mc ) dropout [ 4 ] in the context of deep networks for lesion detection and segmentation in medical images . specifically , we develop a 3d ms lesion segmentation cnn , augmented to provide four different voxel-based uncertainty measures based on mc dropout . we train the network on a proprietary , large-scale , multi-site , multi-scanner , clinical ms dataset , and compute lesion-wise uncertainties by accumulating evidence from voxel-wise uncertainties within detected lesions . we analyze the performance of voxel-based segmentation story_separator_special_tag multiple sclerosis ( ms ) is a demyelinating disease that affects the central nervous system ( cns ) and is characterized by the presence of cns lesions . volumetric measures of tissues , including lesions , on magnetic resonance imaging ( mri ) play key roles in the clinical management and treatment evaluation of ms patients . recent advances in deep learning ( dl ) show promising results for automated medical image segmentation . in this work , we used deep convolutional neural networks ( cnns ) for brain tissue classification on mri acquired from ms patients in a large multi-center clinical trial . multi-channel mri data that included t1-weighted , dual-echo fast spin echo , and fluid-attenuated inversion recovery images were acquired on these patients . the pre-processed images ( following co-registration , skull stripping , bias field correction , intensity normalization , and de-noising ) served as the input to the cnn for tissue classification . the network was trained using expert-validated segmentation . quantitative assessment showed high dice similarity coefficients between the cnn and the validated segmentation , with dsc values of 0.94 for white matter and grey matter , 0.97 for cerebrospinal fluid , and 0.85 story_separator_special_tag background magnetic resonance images with multiple contrasts or sequences are commonly used for segmenting brain tissues , including lesions , in multiple sclerosis ( ms ) . however , acquisition of images with multiple contrasts increases the scan time and complexity of the analysis , possibly introducing factors that could compromise segmentation quality . objective to investigate the effect of various combinations of multi-contrast images as input on the segmented volumes of gray ( gm ) and white matter ( wm ) , cerebrospinal fluid ( csf ) , and lesions using a deep neural network .
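a minimal sketch of the monte carlo dropout procedure referenced above : dropout is kept active at inference and t stochastic forward passes are aggregated ; t = 20 and the use of voxel-wise variance as the uncertainty measure are illustrative choices , not the paper 's exact measures .

import torch

def mc_dropout_predict(model, x, t=20):
    model.train()  # keeps dropout active ; batch-norm layers should ideally stay in eval mode
    with torch.no_grad():
        samples = torch.stack([torch.sigmoid(model(x)) for _ in range(t)])
    return samples.mean(dim=0), samples.var(dim=0)  # prediction and uncertainty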
methods u-net , a fully convolutional neural network , was used to automatically segment gm , wm , csf , and lesions in 1000 ms patients . the input to the network consisted of 15 combinations of flair , t1- , t2- , and proton density-weighted images . the dice similarity coefficient ( dsc ) was evaluated to assess the segmentation performance . for lesions , true positive rate ( tpr ) and false positive rate ( fpr ) were also evaluated . in addition , the effect of lesion size on lesion segmentation was investigated . results highest dsc was observed for story_separator_special_tag background the dependence of deep-learning ( dl ) -based segmentation accuracy of brain mri on the training size is not known . purpose to determine the required training size for a desired accuracy in brain mri segmentation in multiple sclerosis ( ms ) using dl . study type retrospective analysis of mri data acquired as part of a multicenter clinical trial . study population in all , 1008 patients with clinically definite ms. field strength/sequence mris were acquired on 1.5t and 3t scanners manufactured by ge , philips , and siemens with dual turbo spin echo , flair , and t1-weighted turbo spin echo sequences . assessment segmentation results using an automated analysis pipeline and validated by two neuroimaging experts served as the ground truth . a dl model , based on a fully convolutional neural network , was trained separately using 16 different training sizes . the segmentation accuracy as a function of the training size was determined . these data were fitted to the learning curve for estimating the required training size for desired accuracy . statistical tests the performance of the network was evaluated by calculating the dice similarity coefficient ( dsc ) , and lesion story_separator_special_tag the dice overlap ratio is commonly used to evaluate the performance of image segmentation algorithms . while dice overlap is very useful as a standardized quantitative measure of segmentation accuracy in many applications , it offers a very limited picture of segmentation quality in complex segmentation tasks where the number of target objects is not known a priori , such as the segmentation of white matter lesions or lung nodules . while dice overlap can still be used in these applications , segmentation algorithms may perform quite differently in ways not reflected by differences in their dice score . here we propose a new set of evaluation techniques that offer new insights into the behavior of segmentation algorithms . we illustrate these techniques with a case study comparing two popular multiple sclerosis ( ms ) lesion segmentation algorithms : oasis and lesiontoads . story_separator_special_tag an automatic framework for multiple sclerosis ( ms ) follow-up by magnetic resonance imaging ( mri ) is presented . it is based on the identification and segmentation of lesions by using a convolutional neural network ( cnn ) architecture applied to the volumes collected by different imaging modalities and on the registration of the volumes obtained by two consecutive examinations . the resulting binary masks obtained from the identification/segmentation strategy on each examination are used to calculate the volume of each lesion , their status ( chronic or active ) and , hence , to estimate the progression of the disease .
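the learning-curve fit mentioned above can be sketched with an inverse power law ; the functional form dsc(n) = a - b * n^(-c) and the sample values below are illustrative assumptions , not the study 's data or necessarily its exact model .

import numpy as np
from scipy.optimize import curve_fit

def learning_curve(n, a, b, c):
    return a - b * np.power(n, -c)  # accuracy rises and saturates with size n

n_train = np.array([25, 50, 100, 200, 400, 800])       # training set sizes
dsc = np.array([0.60, 0.68, 0.74, 0.78, 0.81, 0.83])   # illustrative dice values
params, _ = curve_fit(learning_curve, n_train, dsc, p0=(0.9, 1.0, 0.5))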
preliminary results are reported demonstrating that the calculations performed by the proposed framework are capable , when the disease is stable , of gathering the same information obtainable when the contrast agent ( ca ) is administered to the patient . story_separator_special_tag performance of a convolutional neural network ( cnn ) based white-matter lesion segmentation in magnetic resonance ( mr ) brain images was evaluated under various conditions involving different levels of image preprocessing and augmentation applied and different compositions of the training dataset . on images of sixty multiple sclerosis patients , half acquired on one and half on another scanner of a different vendor , we first created highly accurate multi-rater consensus-based lesion segmentations , which were used in several experiments to evaluate the cnn segmentation result . first , the cnn was trained and tested without preprocessing the images and by using various combinations of preprocessing techniques , namely histogram-based intensity standardization , normalization by whitening , and training dataset augmentation by flipping the images across the midsagittal plane . then , the cnn was trained and tested on images of the same , different or interleaved scanner datasets using a cross-validation approach . the results indicate that image preprocessing has little impact on performance in a same-scanner situation , while between-scanner performance benefits most from intensity standardization and normalization , but also further by incorporating heterogeneous multi-scanner datasets in the training phase . under such conditions the story_separator_special_tag several recently proposed stochastic optimization methods that have been successfully used in training deep networks such as rmsprop , adam , adadelta , nadam are based on using gradient updates scaled by square roots of exponential moving averages of squared past gradients . in many applications , e.g . learning with large output spaces , it has been empirically observed that these algorithms fail to converge to an optimal solution ( or a critical point in nonconvex settings ) . we show that one cause for such failures is the exponential moving average used in the algorithms . we provide an explicit example of a simple convex optimization setting where adam does not converge to the optimal solution , and describe the precise problems with the previous analysis of the adam algorithm . our analysis suggests that the convergence issues can be fixed by endowing such algorithms with ` long-term memory ' of past gradients , and we propose new variants of the adam algorithm which not only fix the convergence issues but often also lead to improved empirical performance . story_separator_special_tag there is broad consensus that successful training of deep networks requires many thousand annotated training samples . in this paper , we present a network and training strategy that relies on the strong use of data augmentation to use the available annotated samples more efficiently . the architecture consists of a contracting path to capture context and a symmetric expanding path that enables precise localization . we show that such a network can be trained end-to-end from very few images and outperforms the prior best method ( a sliding-window convolutional network ) on the isbi challenge for segmentation of neuronal structures in electron microscopic stacks .
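a minimal sketch of one contracting and one expanding step of the architecture described above , with the copy-and-concatenate skip connection ; the channel sizes and the 2d setting are illustrative .

import torch
import torch.nn as nn

def block(c_in, c_out):
    return nn.Sequential(nn.Conv2d(c_in, c_out, 3, padding=1), nn.ReLU(),
                         nn.Conv2d(c_out, c_out, 3, padding=1), nn.ReLU())

enc, down = block(1, 64), nn.MaxPool2d(2)                  # contracting path
bottom = block(64, 128)
up, dec = nn.ConvTranspose2d(128, 64, 2, stride=2), block(128, 64)

x = torch.randn(1, 1, 64, 64)
e = enc(x)
d = up(bottom(down(e)))
y = dec(torch.cat([e, d], dim=1))   # skip connection by channel concatenation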
using the same network trained on transmitted light microscopy images ( phase contrast and dic ) we won the isbi cell tracking challenge 2015 in these categories by a large margin . moreover , the network is fast . segmentation of a 512x512 image takes less than a second on a recent gpu . the full implementation ( based on caffe ) and the trained networks are available at this http url . story_separator_special_tag multiple sclerosis ( ms ) is an autoimmune disease that leads to lesions in the central nervous system . magnetic resonance ( mr ) images provide sufficient imaging contrast to visualize and detect lesions , particularly those in the white matter . quantitative measures based on various features of lesions have been shown to be useful in clinical trials for evaluating therapies . therefore , robust and accurate segmentation of white matter lesions from mr images can provide important information about the disease status and progression . in this paper , we propose a fully convolutional neural network ( cnn ) based method to segment white matter lesions from multi-contrast mr images . the proposed cnn-based method contains two convolutional pathways . the first pathway consists of multiple parallel convolutional filter banks catering to multiple mr modalities . in the second pathway , the outputs of the first one are concatenated and another set of convolutional filters is applied . the output of this last pathway produces a membership function for lesions that may be thresholded to obtain a binary segmentation . the proposed method is evaluated on a dataset of 100 ms patients , as well as the isbi story_separator_special_tag fully convolutional deep neural networks hold excellent potential for fast and accurate image segmentation . one of the main challenges in training these networks is data imbalance , which is particularly problematic in medical imaging applications such as lesion segmentation where the number of lesion voxels is often much lower than the number of non-lesion voxels . training with unbalanced data can lead to predictions that are severely biased towards high precision but low recall ( sensitivity ) , which is undesired especially in medical applications where false negatives are much less tolerable than false positives . several methods have been proposed to deal with this problem including balanced sampling , two-step training , sample re-weighting , and similarity loss functions . in this paper , we propose a generalized loss function based on the tversky index to address the issue of data imbalance and achieve a much better trade-off between precision and recall in training 3d fully convolutional deep neural networks . experimental results in multiple sclerosis lesion segmentation on magnetic resonance images show improved f2 score , dice coefficient , and the area under the precision-recall curve in test data . based on these results we suggest story_separator_special_tag magnetic resonance imaging ( mri ) synthesis has attracted attention due to its various applications in the medical imaging domain . in this paper , we propose generating synthetic multiple sclerosis ( ms ) lesions on mri images with the final aim of improving the performance of supervised machine learning algorithms , therefore , avoiding the problem of the lack of available ground truth . we propose a two-input two-output fully convolutional neural network model for ms lesion synthesis in mri images .
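for reference , the tversky index mentioned above , over predicted probabilities $p_i$ and ground-truth labels $g_i$ , is

$$
T_{\alpha,\beta} \;=\; \frac{\sum_i p_i g_i}{\sum_i p_i g_i + \alpha \sum_i p_i (1-g_i) + \beta \sum_i (1-p_i)\, g_i} ,
$$

with the loss taken as $1 - T_{\alpha,\beta}$ ; $\alpha$ penalizes false positives and $\beta$ false negatives , and $\alpha = \beta = 0.5$ recovers the dice coefficient .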
the lesion information is encoded as discrete binary intensity level masks passed to the model and stacked with the input images . the model is trained end-to-end without the need for manually annotating the lesions in the training set . we then perform the generation of synthetic lesions on healthy images via registration of patient images , which are subsequently used for data augmentation to increase the performance for supervised ms lesion detection algorithms . our pipeline is evaluated on ms patient data from an in-house clinical dataset and the public isbi2015 challenge dataset . the evaluation is based on measuring the similarities between the real and the synthetic images as well as in terms of lesion story_separator_special_tag highlights : a deep learning model for new t2-w lesion detection in multiple sclerosis is presented ; combining a learning-based registration network with a segmentation one increases the performance ; the proposed model decreases false positives while increasing true positives ; better performance compared to other supervised and unsupervised state-of-the-art approaches . story_separator_special_tag accurate detection and segmentation of new lesional activity in longitudinal magnetic resonance images ( mris ) of patients with multiple sclerosis ( ms ) is important for monitoring disease activity , as well as for assessing treatment effects . in this work , we present the first deep learning framework to automatically detect and segment new and enlarging ( ne ) t2w lesions from longitudinal brain mris acquired from relapsing-remitting ms ( rrms ) patients . the proposed framework is an adapted 3d u-net [ 1 ] which includes as inputs the reference multi-modal mri and t2-weighted lesion maps , as well as an attention mechanism based on the subtraction mri ( between the two timepoints ) which serves to assist the network in learning to differentiate between real anatomical change and artifactual change , while constraining the search space for small lesions . experiments on a large , proprietary , multi-center , multi-modal , clinical trial dataset consisting of 1677 multi-modal scans illustrate that the network achieves high overall detection accuracy ( detection auc=.95 ) , outperforming ( 1 ) a u-net without an attention mechanism ( detection auc=.93 ) , ( 2 ) a framework based on subtracting independent story_separator_special_tag this paper examines data fusion methods for multi-view data classification . we present a decision concept that explicitly takes into account the input multi-view structure , where for each case there is a different subset of relevant views . this data fusion concept , which we dub mixture of views , is implemented by a special-purpose neural network architecture . the single-view decisions are combined , by a data-driven decision , into a global decision according to the relevance of each view in a given case . the method was applied to two challenging computer-aided diagnosis ( cadx ) tasks : the task of classifying breast microcalcifications as benign or malignant based on craniocaudal ( cc ) and mediolateral oblique ( mlo ) mammography views and segmenting multiple sclerosis ( ms ) white matter lesions . the experimental results show that our method outperforms previously suggested fusion methods .
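lesion-wise detection scores such as those reported above can be sketched with connected components ; the 26-connectivity structure and the any-overlap matching rule are common conventions , not necessarily the papers ' exact protocols .

import numpy as np
from scipy import ndimage

def lesion_wise_rates(pred, truth):
    struct = np.ones((3, 3, 3))                       # 26-connectivity in 3d
    t_lab, n_t = ndimage.label(truth, structure=struct)
    p_lab, n_p = ndimage.label(pred, structure=struct)
    tp = sum(1 for i in range(1, n_t + 1) if pred[t_lab == i].any())
    fp = sum(1 for j in range(1, n_p + 1) if not truth[p_lab == j].any())
    return tp / max(n_t, 1), fp / max(n_p, 1)         # lesion-wise tpr , fpr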
story_separator_special_tag while computed tomography and other imaging techniques are measured in absolute units with physical meaning , magnetic resonance images are expressed in arbitrary units that are difficult to interpret and differ between study visits and subjects . much work in the image processing literature on intensity normalization has focused on histogram matching and other histogram mapping techniques , with little emphasis on normalizing images to have biologically interpretable units . furthermore , there are no formalized principles or goals for the crucial comparability of image intensities within and across subjects . to address this , we propose a set of criteria necessary for the normalization of images . we further propose simple and robust biologically motivated normalization techniques for multisequence brain imaging that have the same interpretation across acquisitions and satisfy the proposed criteria . we compare the performance of different normalization methods in thousands of images of patients with alzheimer 's disease , hundreds of patients with multiple sclerosis , and hundreds of healthy subjects obtained in several different studies at dozens of imaging centers . story_separator_special_tag an automated method for segmenting magnetic resonance head images into brain and non-brain has been developed . it is very robust and accurate and has been tested on thousands of data sets from a wide variety of scanners and taken with a wide variety of mr sequences . the method , brain extraction tool ( bet ) , uses a deformable model that evolves to fit the brain 's surface by the application of a set of locally adaptive model forces . the method is very fast and requires no preregistration or other pre-processing before being applied . we describe the new method and give examples of results and the results of extensive quantitative testing against `` gold-standard '' hand segmentations , and two other popular automated methods . story_separator_special_tag in this paper the coronary artery tracking competition , which was part of the workshop : `` 3d segmentation in the clinic : a grand challenge ii '' is described . this workshop was held during the 2008 medical image computing and computer assisted intervention ( miccai ) conference . an introduction is given to underline the importance of ( semi- ) automatic coronary artery centerline extraction methods and the advantages of an online framework facilitating a fair comparison of these methods . furthermore , information is provided about the set-up of the workshop , the evaluation measures used and the online framework . results for the algorithms , submitted by both industrial and academic research institutes , are presented as well . story_separator_special_tag a variant of the popular nonparametric nonuniform intensity normalization ( n3 ) algorithm is proposed for bias field correction . given the superb performance of n3 and its public availability , it has been the subject of several evaluation studies . these studies have demonstrated the importance of certain parameters associated with the b-spline least-squares fitting . we propose the substitution of a recently developed fast and robust b-spline approximation routine and a modified hierarchical optimization scheme for improved bias field correction over the original n3 algorithm . 
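the n4 algorithm described here ships with the insight toolkit ; one common way to run it from python is through simpleitk , roughly as below ( file names are placeholders , and the otsu mask is just a crude foreground estimate ) .

```python
import SimpleITK as sitk

img = sitk.ReadImage("input.nii.gz", sitk.sitkFloat32)
mask = sitk.OtsuThreshold(img, 0, 1, 200)  # rough head/foreground mask
n4 = sitk.N4BiasFieldCorrectionImageFilter()
n4.SetMaximumNumberOfIterations([50, 50, 50, 50])  # one entry per fitting level
corrected = n4.Execute(img, mask)
sitk.WriteImage(corrected, "corrected.nii.gz")
```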
similar to the n3 algorithm , we also make the source code , testing , and technical documentation of our contribution , which we denote as `` n4itk , '' available to the public through the insight toolkit of the national institutes of health . performance assessment is demonstrated using simulated data from the publicly available brainweb database , hyperpolarized 3he lung image data , and 9.4t postmortem hippocampus data . story_separator_special_tag convolutional neural networks ( cnn ) have been obtaining successful results in the task of image segmentation in recent years . these methods use as input the sampling obtained using square uniform patches centered on each voxel of the image , which may not be the optimal approach since there is a very limited use of global context . in this work we present a new construction method for the patches by means of a circular non-uniform sampling of the neighborhood of the voxels . this allows a greater global context with a radial extension with respect to the central voxel . this approach was applied on the 2015 longitudinal ms lesion segmentation challenge dataset , obtaining better results than approaches using square uniform and non-uniform patches with the same computational cost of the cnn models . story_separator_special_tag multiple sclerosis lesion segmentation is an important step in the diagnosis and tracking of the evolution of the disease . convolutional neural networks ( cnn ) have been obtaining successful results in the task of lesion segmentation in recent years , but still present problems segmenting the boundaries of the lesions . in this work we focus the learning process on hard voxels close to the boundaries of the lesions by means of a stratified sampling and the use of a focal loss function that dynamically increases the penalization on this kind of voxels . this approach was applied on the 2015 longitudinal ms lesion segmentation challenge dataset ( isbi2015 ( https://smart-stats-tools.org/lesion-challenge ) ) , obtaining better results than approaches using binary cross entropy loss and focal loss functions with uniform sampling . story_separator_special_tag we present our entry for the longitudinal multiple sclerosis challenge 2015 using 3d convolutional neural networks ( cnn ) . we model a voxel-wise classifier using multi-channel 3d patches of mri volumes as input . for each ground truth , a cnn is trained and the final segmentation is obtained by combining the probability outputs of these cnns . efficient training is achieved by using sub-sampling methods and sparse convolutions . we obtain accurate results with dice scores comparable to the inter-rater variability . story_separator_special_tag abstract in this paper , we present a novel automated method for white matter ( wm ) lesion segmentation of multiple sclerosis ( ms ) patient images . our approach is based on a cascade of two 3d patch-wise convolutional neural networks ( cnn ) . the first network is trained to be more sensitive , revealing possible candidate lesion voxels , while the second network is trained to reduce the number of misclassified voxels coming from the first network . this cascaded cnn architecture tends to learn well from a small set of labeled data of the same mri contrast , which can be very interesting in practice , given the difficulty to obtain manual label annotations and the large amount of available unlabeled magnetic resonance imaging ( mri ) data .
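for reference , the focal loss used in the boundary-focused abstract above has the standard form $ -\\alpha_t ( 1 - p_t ) ^ \\gamma \\log p_t $ ; a small numpy sketch follows ( gamma and alpha are the common defaults from the focal loss literature , not necessarily the values used in that work ) .

```python
import numpy as np

def focal_loss(pred, target, gamma=2.0, alpha=0.25, eps=1e-7):
    # binary focal loss: the (1 - p_t)**gamma factor down-weights easy voxels
    # so hard ones (e.g. voxels near lesion boundaries) dominate the gradient.
    pred = np.clip(pred.ravel(), eps, 1.0 - eps)   # predicted probabilities
    target = target.ravel().astype(np.float64)     # binary labels
    p_t = np.where(target == 1, pred, 1.0 - pred)
    alpha_t = np.where(target == 1, alpha, 1.0 - alpha)
    return float(np.mean(-alpha_t * (1.0 - p_t) ** gamma * np.log(p_t)))
```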
we evaluate the accuracy of the proposed method on the public ms lesion segmentation challenge miccai2008 dataset , comparing it with other state-of-the-art ms lesion segmentation tools . furthermore , the proposed method is also evaluated on two private ms clinical datasets , where the performance of our method is also compared with different recent publicly available state of the story_separator_special_tag abstract in recent years , several convolutional neural network ( cnn ) methods have been proposed for the automated white matter lesion segmentation of multiple sclerosis ( ms ) patient images , due to their superior performance compared with those of other state-of-the-art methods . however , the accuracies of cnn methods tend to decrease significantly when evaluated on different image domains compared with those used for training , which demonstrates the lack of adaptability of cnns to unseen imaging data . in this study , we analyzed the effect of intensity domain adaptation on our recently proposed cnn-based ms lesion segmentation method . given a source model trained on two public ms datasets , we investigated the transferability of the cnn model when applied to other mri scanners and protocols , evaluating the minimum number of annotated images needed from the new domain and the minimum number of layers needed to re-train to obtain comparable accuracy . our analysis comprised ms patient data from both a clinical center and the public isbi2015 challenge database , which permitted us to compare the domain adaptation capability of our model to that of other state-of-the-art methods . in both datasets , our story_separator_special_tag the high irregularity of multiple sclerosis ( ms ) lesions in sizes and numbers often proves difficult for automated systems on the task of ms lesion segmentation . current state-of-the-art ms segmentation algorithms employ either only global perspective or just patch-based local perspective segmentation approaches . although global image segmentation can obtain good segmentation for medium to large lesions , its performance on smaller lesions lags behind . on the other hand , patch-based local segmentation disregards spatial information of the brain . in this work , we propose synergynet , a network segmenting ms lesions by fusing data from both global and local perspectives to improve segmentation across different lesion sizes . we achieve global segmentation by leveraging the u-net architecture and implement the local segmentation by augmenting u-net with the mask r-cnn framework . the sharing of lower layers between these two branches benefits end-to-end training and proves advantageous over a simple ensemble of the two frameworks . we evaluated our method on two separate datasets containing 765 and 21 volumes respectively . our proposed method can improve 2.55 % and 5.0 % for dice score and lesion true positive rates respectively while reducing over 20 % in false story_separator_special_tag convolutional neural networks trained on publicly available medical imaging datasets ( source domain ) rarely generalise to different scanners or acquisition protocols ( target domain ) . this motivates the active field of domain adaptation . while some approaches to the problem require labelled data from the target domain , others adopt an unsupervised approach to domain adaptation ( uda ) . evaluating uda methods consists of measuring the model 's ability to generalise to unseen data in the target domain .
in this work , we argue that this is not as useful as adapting to the test set directly . we therefore propose an evaluation framework where we perform test-time uda on each subject separately . we show that models adapted to a specific target subject from the target domain outperform a domain adaptation method which has seen more data of the target domain but not this specific target subject . this result supports the thesis that unsupervised domain adaptation should be used at test-time , even if only using a single target-domain subject . story_separator_special_tag in this paper , we develop a two-stage neural network solution for the challenging task of white-matter lesion segmentation . to cope with the vast variability in lesion sizes , we sample brain mr scans with patches at three different dimensions and feed them into separate fully convolutional neural networks ( fcns ) . in the second stage , we process large and small lesions separately , and use ensemble-nets to combine the segmentation results generated from the fcns . a novel activation function is adopted in the ensemble-nets to improve the segmentation accuracy measured by dice similarity coefficient . experiments on miccai 2017 white matter hyperintensities ( wmh ) segmentation challenge data demonstrate that our two-stage-multi-sized fcn approach , as well as the new activation function , are effective in capturing white-matter lesions in mr images . story_separator_special_tag characterizing the performance of image segmentation approaches has been a persistent challenge . performance analysis is important since segmentation algorithms often have limited accuracy and precision . interactive drawing of the desired segmentation by human raters has often been the only acceptable approach , and yet suffers from intra-rater and inter-rater variability . automated algorithms have been sought in order to remove the variability introduced by raters , but such algorithms must be assessed to ensure they are suitable for the task . the performance of raters ( human or algorithmic ) generating segmentations of medical images has been difficult to quantify because of the difficulty of obtaining or estimating a known true segmentation for clinical data . although physical and digital phantoms can be constructed for which ground truth is known or readily estimated , such phantoms do not fully reflect clinical images due to the difficulty of constructing phantoms which reproduce the full range of imaging characteristics and normal and pathological anatomical variability observed in clinical data . comparison to a collection of segmentations by raters is an attractive alternative since it can be carried out directly on the relevant clinical imaging data . however , the most story_separator_special_tag abstract purpose accurate lesion segmentation is important for measurements of lesion load and atrophy in subjects with multiple sclerosis ( ms ) . international ms lesion challenges show a preference for convolutional neural network ( cnn ) strategies , such as nicmslesions . however , since the software is trained on fairly homogeneous training data , we aimed to test the performance of nicmslesions in an independent dataset with manual and other automatic lesion segmentations to determine whether this method is suitable for larger , multi-center studies . methods manual lesion segmentation was performed in fourteen subjects with ms on sagittal 3d flair images from a 3t ge whole-body scanner with an 8-channel head coil .
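since dice scores and volumetric/spatial agreement come up repeatedly in these abstracts , here is the standard dice similarity coefficient as a small function ( a plain definition , not code from any of the cited works ) .

```python
import numpy as np

def dice(seg, gt, eps=1e-7):
    # dice similarity coefficient: 2|A & B| / (|A| + |B|), in [0, 1]
    seg = seg.astype(bool).ravel()
    gt = gt.astype(bool).ravel()
    return 2.0 * np.logical_and(seg, gt).sum() / (seg.sum() + gt.sum() + eps)
```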
we compared five different categories of automated lesion segmentation methods for their volumetric and spatial agreement with manual segmentation : ( i ) unsupervised , untrained ( lesiontoads ) ; ( ii ) supervised , untrained ( lst-lpa and nicmslesions with default settings ) ; ( iii ) supervised , untrained with threshold adjustment ( lst-lpa optimized for current data ) ; ( iv ) supervised , trained with leave-one-out cross-validation on fourteen subjects with ms ( nicmslesions and bianca ) ; and ( v story_separator_special_tag histopathology image segmentation is an important area in the field of computer aided diagnosis using image processing . the segmentation of multiple sclerosis ( ms ) lesions from mr images can establish the basis for subsequent lesion reconstruction , volume estimation , and course evaluation . this study proposes a method for automatically segmenting ms lesions based on a 3d convolutional neural network ( cnn ) . the method is divided into two stages , each of which includes two convolution layers and two pooling layers . candidate lesion voxels are selected in the first stage , while in the second stage , the final lesion voxels are segmented from the candidate voxels obtained in the first stage by applying stricter conditions . the method has been tested on the miccai 2008 and 2016 datasets and compared to the other baseline methods . the experimental results show that the method has better performance than the other baseline methods on different evaluation indicators , including dice similarity coefficient , absolute difference in lesion volume , true positive rate , false positive rate , and positive predictive value . story_separator_special_tag we present a novel per-dimension learning rate method for gradient descent called adadelta . the method dynamically adapts over time using only first order information and has minimal computational overhead beyond vanilla stochastic gradient descent . the method requires no manual tuning of a learning rate and appears robust to noisy gradient information , different model architecture choices , various data modalities and selection of hyperparameters . we show promising results compared to other methods on the mnist digit classification task using a single machine and on a large scale voice dataset in a distributed cluster environment . story_separator_special_tag automated segmentation of multiple sclerosis ( ms ) lesions in brain imaging is challenging due to the high variability in lesion characteristics . based on the generative adversarial network ( gan ) , we propose a semantic segmentation framework ms-gan to localize ms lesions in multimodal brain magnetic resonance imaging ( mri ) , which consists of one multimodal encoder-decoder generator g and multiple discriminators d corresponding to the multiple input modalities . for the design of the generator , we adopt an encoder-decoder deep learning architecture with bypass of spatial information from encoder to the corresponding decoder , which helps to reduce the network parameters while improving the localization performance . our generator is also designed to integrate multimodal imaging data in end-to-end learning with multi-path encoding and cross-modality fusion . an additional classification-related constraint is proposed for the adversarial training process of the gan model , with the aim of alleviating the hard-to-converge issue in classification-based image-to-image translation problems .
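the generator design described above ( an encoder-decoder with a bypass of spatial information from encoder to decoder ) follows the familiar u-net pattern ; a deliberately tiny pytorch sketch of that pattern , with invented channel counts , is shown below .

```python
import torch
import torch.nn as nn

class TinyEncoderDecoder(nn.Module):
    # illustrative only: encoder features are concatenated onto the decoder
    # path (the "bypass"/skip connection) so spatial detail is not lost.
    def __init__(self, in_ch=2, n_classes=1):
        super().__init__()
        self.enc = nn.Sequential(nn.Conv2d(in_ch, 16, 3, padding=1), nn.ReLU())
        self.down = nn.MaxPool2d(2)
        self.mid = nn.Sequential(nn.Conv2d(16, 32, 3, padding=1), nn.ReLU())
        self.up = nn.ConvTranspose2d(32, 16, 2, stride=2)
        self.dec = nn.Conv2d(32, n_classes, 3, padding=1)  # 16 skip + 16 up

    def forward(self, x):
        e = self.enc(x)              # full-resolution encoder features
        m = self.mid(self.down(e))   # bottleneck at half resolution
        u = self.up(m)               # back to full resolution
        return self.dec(torch.cat([u, e], dim=1))
```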
for evaluation , we collected a database of 126 cases from patients with relapsing ms. we also experimented with other semantic segmentation models as well as patch-based deep learning methods for performance comparison . the results show that story_separator_special_tag brain lesion volume measured on t2 weighted mri images is a clinically important disease marker in multiple sclerosis ( ms ) . manual delineation of ms lesions is a time-consuming and highly operator-dependent task , which is influenced by lesion size , shape and conspicuity . recently , automated lesion segmentation algorithms based on deep neural networks have been developed with promising results . in this paper , we propose a novel recurrent slice-wise attention network ( rsanet ) , which models 3d mri images as sequences of slices and captures long-range dependencies through a recurrent manner to utilize contextual information of ms lesions . experiments on a dataset with 43 patients show that the proposed method outperforms the state-of-the-art approaches . our implementation is available online at this https url . story_separator_special_tag recently , 3d medical image reconstruction ( mir ) and segmentation ( mis ) based on deep neural networks have been developed with promising results , and attention mechanism has been further designed to capture global contextual information for performance enhancement . however , the large size of 3d volume images poses a great computational challenge to traditional attention methods . in this paper , we propose a folded attention ( fa ) approach to improve the computational efficiency of traditional attention methods on 3d medical images . the main idea is that we apply tensor folding and unfolding operations with four permutations to build four small sub-affinity matrices to approximate the original affinity matrix . through four consecutive sub-attention modules of fa , each element in the feature tensor can aggregate spatial-channel information from all other elements . compared to traditional attention methods , with moderate improvement of accuracy , fa can substantially reduce the computational complexity and gpu memory consumption . we demonstrate the superiority of our method on two challenging tasks for 3d mir and mis , which are quantitative susceptibility mapping and multiple sclerosis lesion segmentation . story_separator_special_tag inpainting lesions is an important preprocessing task for algorithms analyzing brain mris of multiple sclerosis ( ms ) patients , such as tissue segmentation and cortical surface reconstruction . we propose a new deep learning approach for this task . unlike existing inpainting approaches which ignore the lesion areas of the input image , we leverage the edge information around the lesions as a prior to help the inpainting process . thus , the input of this network includes the t1-w image , lesion mask and the edge map computed from the t1-w image , and the output is the lesion-free image . the introduction of the edge prior is based on our observation that the edge detection results of the mri scans will usually contain the contour of white matter ( wm ) and grey matter ( gm ) , even though some undesired edges appear near the lesions . instead of losing all the information around the neighborhood of lesions , our approach preserves the local tissue shape ( brain/wm/gm ) with the guidance of the input edges . the qualitative results show that our pipeline inpaints the lesion areas in a realistic and shape-consistent way . 
our story_separator_special_tag recent years have seen an increasing use of supervised learning methods for segmentation tasks . however , the predictive performance of these algorithms depends on the quality of labels , especially in the medical imaging domain , where both the annotation cost and inter-observer variability are high . in a typical annotation collection process , different clinical experts provide their estimates of the true segmentation labels under the influence of their levels of expertise and biases . treating these noisy labels blindly as the ground truth can adversely affect the performance of supervised segmentation models . in this work , we present a neural network architecture for jointly learning , from noisy observations alone , both the reliability of individual annotators and the true segmentation label distributions . the separation of the annotators ' characteristics and the true segmentation label is achieved by encouraging the estimated annotators to be maximally unreliable while achieving high fidelity with the training data . our method can also be viewed as a translation of staple , an established label aggregation framework proposed in warfield et al . [ 1 ] , to the supervised learning paradigm . we demonstrate first on a generic segmentation task using mnist data story_separator_special_tag
the performance of superconducting qubits has improved by several orders of magnitude in the past decade . these circuits benefit from the robustness of superconductivity and the josephson effect , and at present they have not encountered any hard physical limits . however , building an error-corrected information processor with many such qubits will require solving specific architecture problems that constitute a new field of research . for the first time , physicists will have to master quantum error correction to design and operate complex active systems that are dissipative in nature , yet remain coherent indefinitely . we offer a view on some directions for the field and speculate on its future . story_separator_special_tag during the last ten years , superconducting circuits have passed from being interesting physical devices to becoming contenders for near-future useful and scalable quantum information processing ( qip ) . advanced quantum simulation experiments have been shown with up to nine qubits , while a demonstration of quantum supremacy with fifty qubits is anticipated in just a few years . quantum supremacy means that the quantum system can no longer be simulated by the most powerful classical supercomputers . integrated classical-quantum computing systems are already emerging that can be used for software development and experimentation , even via web interfaces . therefore , the time is ripe for describing some of the recent development of superconducting devices , systems and applications . as such , the discussion of superconducting qubits and circuits is limited to devices that are proven useful for current or near future applications . consequently , the centre of interest is the practical applications of qip , such as computation and simulation in physics and chemistry . story_separator_special_tag we propose an implementation of a universal set of one- and two-quantum-bit gates for quantum computation using the spin states of coupled single-electron quantum dots . desired operations are effected by the gating of the tunneling barrier between neighboring dots . several measures of the gate quality are computed within a recently derived spin master equation incorporating decoherence caused by a prototypical magnetic environment . dot-array experiments that would provide an initial demonstration of the desired nonequilibrium spin dynamics are proposed . story_separator_special_tag semiconductor spins are one of the few qubit realizations that remain a serious candidate for the implementation of large-scale quantum circuits . excellent scalability is often argued for spin qubits defined by lithography and controlled via electrical signals , based on the success of conventional semiconductor integrated circuits . however , the wiring and interconnect requirements for quantum circuits are completely different from those for classical circuits , as individual direct current , pulsed and in some cases microwave control signals need to be routed from external sources to every qubit . this is further complicated by the requirement that these spin qubits currently operate at temperatures below 100 mk . here , we review several strategies that are considered to address this crucial challenge in scaling quantum circuits based on electron spin qubits . 
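the coupled-dot proposal summarized earlier in this passage relies on the exchange interaction ; in textbook form ( standard in the literature , not quoted from these abstracts , with dimensionless spin operators and equalities up to a global phase ) the gate is

```latex
H(t) = J(t)\,\mathbf{S}_1 \cdot \mathbf{S}_2 ,
\qquad
\frac{1}{\hbar}\int J(t)\,dt = \pi \;\Rightarrow\; \mathrm{SWAP},
\qquad
\frac{1}{\hbar}\int J(t)\,dt = \frac{\pi}{2} \;\Rightarrow\; \sqrt{\mathrm{SWAP}} ,
```

where the entangling $ \\sqrt { \\mathrm { swap } } $ combined with single-qubit rotations suffices for a cnot gate .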
key assets of spin qubits include the potential to operate at 1 to 4 k , the high density of quantum dots or donors combined with possibilities to space them apart as needed , the extremely long spin coherence times , and the rich options for integration with classical electronics based on the same technology . story_separator_special_tag we investigate coherent time evolution of charge states ( pseudospin qubit ) in a semiconductor double quantum dot . this fully tunable qubit is manipulated with a high-speed voltage pulse that controls the energy and decoherence of the system . coherent oscillations of the qubit are observed for several combinations of many-body ground and excited states of the quantum dots . possible decoherence mechanisms in the present device are also discussed . story_separator_special_tag the field of solid-state quantum computation is expanding rapidly , initiated by our original charge qubit demonstrations . various types of solid-state qubits are being studied , and their coherent properties are improving . the goal of this review is to summarize achievements on josephson charge qubits . we cover the results obtained in our joint group of nec nano electronics research laboratories and riken advanced science institute , also referring to the works done by other groups . starting from a short introduction , we describe the principle of the josephson charge qubit , its manipulation and readout . we proceed with coupling of two charge qubits and implementation of a logic gate . we also discuss decoherence issues . finally , we show how a charge qubit can be used as an artificial atom coupled to a resonator to demonstrate lasing action . story_separator_special_tag quantum networks provide opportunities and challenges across a range of intellectual and technical frontiers , including quantum computation , communication and metrology . the realization of quantum networks composed of many nodes and channels requires new scientific capabilities for generating and characterizing quantum coherence and entanglement . fundamental to this endeavour are quantum interconnects , which convert quantum states from one physical system to those of another in a reversible manner . such quantum connectivity in networks can be achieved by the optical interactions of single photons and atoms , allowing the distribution of entanglement across the network and the teleportation of quantum states between nodes . story_separator_special_tag we propose a scheme to utilize photons for ideal quantum transmission between atoms located at spatially separated nodes of a quantum network . the transmission protocol employs special laser pulses that excite an atom inside an optical cavity at the sending node so that its state is mapped into a time-symmetric photon wave packet that will enter a cavity at the receiving node and be absorbed by an atom there with unit probability . implementation of our scheme would enable reliable transfer or sharing of entanglement among spatially distant atoms . story_separator_special_tag quantum key distribution networks have become a reality in practical environments . quantum repeaters have been explored in various physical systems and their combinations . for practical use , these new paradigms must be combined with existing or emerging infrastructures of communication and security systems .
in this article , we discussed how quantum networks can be combined with modern cryptographic technologies in fibre networks and with emerging mobile terminals in wireless networks , creating new solutions for the future cryptographic and communication systems . our discussions are summarised in a roadmap . story_separator_special_tag a proposed network of atomic clocks using non-local entangled states could achieve unprecedented stability and accuracy in time-keeping , as well as being secure against internal or external attack . story_separator_special_tag sharing information coherently between nodes of a quantum network is fundamental to distributed quantum information processing . in this scheme , the computation is divided into subroutines and performed on several smaller quantum registers that are connected by classical and quantum channels [ 1 ] . a direct quantum channel , which connects nodes deterministically rather than probabilistically , achieves larger entanglement rates between nodes and is advantageous for distributed fault-tolerant quantum computation [ 2 ] . here we implement deterministic state-transfer and entanglement protocols between two superconducting qubits fabricated on separate chips . superconducting circuits [ 3 ] constitute a universal quantum node [ 4 ] that is capable of sending , receiving , storing and processing quantum information [ 5-8 ] . our implementation is based on an all-microwave cavity-assisted raman process [ 9 ] , which entangles or transfers the qubit state of a transmon-type artificial atom [ 10 ] with a time-symmetric itinerant single photon . we transfer qubit states by absorbing these itinerant photons at the receiving node , with a probability of 98.1 ± 0.1 per cent , achieving a transfer-process fidelity of 80.02 ± 0.07 per cent for a protocol duration of only 180 nanoseconds . we also prepare remote entanglement on demand with a fidelity as high story_separator_special_tag we report an experimental quantum key distribution that utilizes pulsed homodyne detection , instead of photon counting , to detect weak pulses of coherent light . although our scheme inherently has a finite error rate , homodyne detection allows high-efficiency detection and quantum state measurement of the transmitted light using only conventional devices at room temperature . our prototype system works at $ 1.55 \\mu \\mathrm { m } $ wavelength and the quantum channel is a 1-km standard optical fiber . the probability distribution of the measured electric-field amplitude has a gaussian shape . the effect of experimental imperfections such as optical loss and detector noise can be parametrized by the variance and the mean value of the gaussian distribution . story_separator_special_tag quantum continuous variables are being explored as an alternative means to implement quantum key distribution , which is usually based on single photon counting . the former approach is potentially advantageous because it should enable higher key distribution rates . here we propose and experimentally demonstrate a quantum key distribution protocol based on the transmission of gaussian-modulated coherent states ( consisting of laser pulses containing a few hundred photons ) and shot-noise-limited homodyne detection ; squeezed or entangled beams are not required . complete secret key extraction is achieved using a reverse reconciliation technique followed by privacy amplification .
the reverse reconciliation technique is in principle secure for any value of the line transmission , against gaussian individual attacks based on entanglement and quantum memories . our table-top experiment yields a net key transmission rate of about 1.7 megabits per second for a loss-free line , and 75 kilobits per second for a line with losses of 3.1 db . we anticipate that the scheme should remain effective for lines with higher losses , particularly because the present limitations are essentially technical , so that significant margin for improvement is available on both the hardware and software . story_separator_special_tag we explore the intimate relationship between quantum lithography , heisenberg-limited parameter estimation and the rate of dynamical evolution of quantum states . we show how both the enhanced accuracy in measurements and the increased resolution in quantum lithography follow from the use of entanglement . mathematically , the hyperresolution of quantum lithography appears naturally in the derivation of heisenberg-limited parameter estimation . we also review recent experiments offering a proof of principle of quantum lithography , and we address the question of state preparation and the fabrication of suitable photoresists . story_separator_special_tag quantum strategies can help to make parameter-estimation schemes more precise , but for noisy processes it is typically not known how large that improvement may be . here , a universal quantum bound is derived for the error in the estimation of parameters that characterize dynamical processes . story_separator_special_tag we provide algorithms for efficiently addressing quantum memory in parallel . these imply that the standard circuit model can be simulated with low overhead by the more realistic model of a distributed quantum computer . as a result , the circuit model can be used by algorithm designers without worrying whether the underlying architecture supports the connectivity of the circuit . in addition , we apply our results to existing memory intensive quantum algorithms . we present a parallel quantum search algorithm and improve the time-space trade-off for the element distinctness and collision problems . story_separator_special_tag we report on the demonstration of light storage for times greater than a second in praseodymium doped y2sio5 using electromagnetically induced transparency . the long storage times were enabled by the long coherence times possible for the hyperfine transitions in this material . the use of a solid-state system also enabled operation with the probe and coupling beam counter-propagating , allowing easy separation of the two beams . the efficiency of the storage was low because of the low optical thickness of the sample ; as is discussed , this deficiency should be easy to rectify . story_separator_special_tag quantum memory is important to quantum information processing in many ways : a synchronization device to match various processes within a quantum computer , an identity quantum gate that leaves any state unchanged , and a tool to convert heralded photons to photons-on-demand . in addition to quantum computing , quantum memory would be instrumental for the implementation of long-distance quantum communication using quantum repeaters . 
the importance of this basic quantum gate is exemplified by the multitude of optical quantum memory mechanisms being studied : optical delay lines , cavities , electromagnetically-induced transparency , photon-echo , and off-resonant faraday interaction . here we report on the state-of-the-art in the field of optical quantum memory , including criteria for successful quantum memory and current performance levels . story_separator_special_tag a room-temperature nanomechanical transducer that couples efficiently to both radio waves and light allows radio-frequency signals to be detected as an optical phase shift with quantum-limited sensitivity . many applications , from medical imaging and radio astronomy to navigation and wireless communication , depend on the faithful transmission and detection of weak radio-frequency microwaves . here eugene polzik and co-workers demonstrate a completely new capability in this area the conversion of weak radio waves into laser signals using a nanomechanical oscillator . the oscillator , a membrane made from silicon nitride , can couple simultaneously to radio signals and light reflected off its surface and this feature can be used to measure the radio signals as optical phase shifts , with quantum-limited sensitivity . compared to existing detectors , this approach has the advantage of working at room temperature , and the signals produced can be readily transferred into standard optical fibres . low-loss transmission and sensitive recovery of weak radio-frequency and microwave signals is a ubiquitous challenge , crucial in radio astronomy , medical imaging , navigation , and classical and quantum communication . efficient up-conversion of radio-frequency signals to an optical carrier would enable their transmission through optical story_separator_special_tag in this introductory article on the subject of quantum error correction and fault-tolerant quantum computation , we review three important ingredients that enter known constructions for fault-tolerant quantum computation , namely quantum codes , error discretization and transversal quantum gates . taken together , they provide a ground on which the theory of quantum error correction can be developed and fault-tolerant quantum information protocols can be built . story_separator_special_tag a practical quantum computer must not merely store information , but also process it . to prevent errors introduced by noise from multiplying and spreading , a fault-tolerant computational architecture is required . current experiments are taking the first steps toward noise-resilient logical qubits . but to convert these quantum devices from memories to processors , it is necessary to specify how a universal set of gates is performed on them . the leading proposals for doing so , such as magic-state distillation and colour-code techniques , have high resource demands . alternative schemes , such as those that use high-dimensional quantum codes in a modular architecture , have potential benefits , but need to be explored further . story_separator_special_tag we present a comprehensive and self-contained simplified review of the quantum computing scheme of raussendorf et al . [ phys . rev . lett . 98 , 190504 ( 2007 ) ; n. j. phys . 9 , 199 ( 2007 ) ] , which features a two-dimensional nearest-neighbor coupled lattice of qubits , a threshold error rate approaching 1 % , natural asymmetric and adjustable strength error correction , and low overhead arbitrarily long-range logical gates . 
these features make it one of the best and most practical quantum computing schemes devised to date . we restrict the discussion to direct manipulation of the surface code using the stabilizer formalism , both of which we also briefly review , to make the scheme accessible to a broad audience . story_separator_special_tag we investigate the capacity of bosonic quantum channels for the transmission of quantum information . we calculate the quantum capacity for a class of gaussian channels , including channels describing optical fibers with photon losses , by proving that gaussian encodings are optimal . for arbitrary channels we show that achievable rates can be determined from few measurable parameters by proving that every channel can asymptotically simulate a gaussian channel which is characterized by second moments of the initial channel . along the way we provide a complete characterization of degradable gaussian channels and those arising from teleportation protocols . story_separator_special_tag quantum state transfer between microwave and optical frequencies is essential for connecting superconducting quantum circuits to optical systems and extending microwave quantum networks over long distances . however , establishing such a quantum interface is extremely challenging because the standard direct quantum transduction requires both high coupling efficiency and small added noise . we propose an entanglement-based scheme , generating microwave-optical entanglement and using it to transfer quantum states via quantum teleportation , which can bypass the stringent requirements in direct quantum transduction and is robust against loss errors . in addition , we propose and analyze a counterintuitive design , suppressing the added noise by placing the device in a higher-temperature environment , which can improve both the device quality factor and power handling capability . we systematically analyze the generation and verification of entangled microwave-optical-photon pairs . the parameter for entanglement verification favors the regime of cooperativity mismatch and can tolerate certain thermal noises . our scheme is feasible given the latest advances in electro-optomechanics , and can be generalized to various physical systems . story_separator_special_tag we report a superconducting artificial atom with a coherence time of $ t_2^* = 92\\ , \\mu \\mathrm { s } $ and energy relaxation time $ t_1 = 70\\ , \\mu \\mathrm { s } $ . the system consists of a single josephson junction transmon qubit on a sapphire substrate embedded in an otherwise empty copper waveguide cavity whose lowest eigenmode is dispersively coupled to the qubit transition . we attribute the factor of four increase in the coherence quality factor relative to previous reports to device modifications aimed at reducing qubit dephasing from residual cavity photons . this simple device holds promise as a robust and easily produced artificial quantum system whose intrinsic coherence properties are sufficient to allow tests of quantum error correction . story_separator_special_tag superconducting microwave resonators are reliable circuits widely used for detection and as test devices for material research . a reliable determination of their external and internal quality factors is crucial for many modern applications , which either require fast measurements or operate in the single photon regime with small signal to noise ratios .
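a minimal sketch of fitting the notch-type resonator model that such circle-fit analyses are built on ; the model form below is the one commonly used in this context , the starting guesses are crude , and environment factors ( cable delay , global amplitude and phase ) are omitted for brevity .

```python
import numpy as np
from scipy.optimize import least_squares

def s21_notch(f, fr, ql, abs_qc, phi):
    # ideal notch-port resonance; the impedance-mismatch angle phi is what
    # the diameter correction of the circle fit accounts for
    return 1.0 - (ql / abs_qc) * np.exp(1j * phi) / (1.0 + 2j * ql * (f / fr - 1.0))

def fit_s21(f, s21):
    def residuals(p):
        r = s21_notch(f, *p) - s21
        return np.concatenate([r.real, r.imag])
    p0 = [f[np.argmin(np.abs(s21))], 1e4, 2e4, 0.0]   # crude initial guesses
    fr, ql, abs_qc, phi = least_squares(residuals, p0).x
    qi = 1.0 / (1.0 / ql - np.cos(phi) / abs_qc)       # internal quality factor
    return fr, ql, abs_qc, qi
```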
here , we use the circle fit technique with diameter correction and provide a step by step guide for implementing an algorithm for robust fitting and calibration of complex resonator scattering data in the presence of noise . the speedup and robustness of the analysis are achieved by employing an algebraic rather than an iterative fit technique for the resonance circle . story_separator_special_tag we provide a self-consistent electromagnetic theory of the coupling between dipole emitters and dissipative nanoresonators . the theory that relies on the concept of quasinormal modes with complex frequencies provides an accurate closed-form expression for the electromagnetic local density of states of any photonic or plasmonic resonator with strong radiation leakage , absorption , and material dispersion . it represents a powerful tool to calculate and conceptualize the electromagnetic response of systems that are governed by a small number of resonance modes . we use the formalism to revisit purcell 's factor . the new formula substantially differs from the usual one ; in particular , it predicts that a spectral detuning between the emitter and the resonance does not necessarily result in a lorentzian response in the presence of dissipation . comparisons with fully vectorial numerical calculations for plasmonic nanoresonators made of gold nanorods evidence the high accuracy of the predictions achieved by our semianalytical treatment . story_separator_special_tag we show explicitly how the commonly adopted prescription for calculating effective mode volumes is wrong and leads to uncontrolled errors . instead , we introduce a generalized mode volume that can be easily evaluated based on the mode calculation methods typically applied in the literature , and which allows one to compute the purcell effect and other interesting optical phenomena in a rigorous and unambiguous way . story_separator_special_tag the purcell factor quantifies the change of the radiative decay of a dipole in an electromagnetic environment relative to free space . designing this factor is at the heart of photonics technology , striving to develop ever smaller or less lossy optical resonators . the purcell factor can be expressed using the electromagnetic eigenmodes of the resonators , introducing the notion of a mode volume for each mode . this approach allows an analytic treatment , reducing the purcell factor and other observables to sums over eigenmode resonances . calculating the mode volumes requires a correct normalization of the modes . we introduce an exact normalization of modes , not relying on perfectly matched layers . we present an analytic theory of the purcell effect based on this exact mode normalization and the resulting effective mode volume . we use a homogeneous dielectric sphere in vacuum , which is analytically solvable , to exemplify these findings . we furthermore verify the applicability of the normalization to numerically determined modes of a finite dielectric cylinder . story_separator_special_tag we discuss three formally different formulas for normalization of quasinormal modes currently in use for modeling optical cavities and plasmonic resonators and show that they are complementary and provide the same result . regardless of the formula used for normalization , one can use the norm to define an effective mode volume for use in purcell factor calculations . story_separator_special_tag recently , kristensen , ge and hughes have compared [ phys . rev . 
a 92 , 053810 ( 2015 ) ] three different methods for normalization of quasinormal modes in open optical systems , and concluded that they all provide the same result . we show here that this conclusion is incorrect and illustrate that the normalization of [ opt . lett . 37 , 1649 ( 2012 ) ] is divergent for any optical mode having a finite quality factor , and that the silver-müller radiation condition is not fulfilled for quasinormal modes . story_separator_special_tag we refute all claims of the `` comment on ` normalization of quasinormal modes in leaky optical cavities and plasmonic resonators ' '' by e. a. muljarov and w. langbein ( arxiv:1602.07278v1 ) . based entirely on information already contained in our original article ( p. t. kristensen , r.-c. ge and s. hughes , physical review a 92 , 053810 ( 2015 ) ) , we dismiss every point of criticism as being completely unjustified and point out how important parts of our argumentation appear to have been overlooked by the comment authors . in addition , we provide additional calculations showing directly the link between the normalizations by sauvan et al . and muljarov et al. , which were not included in our original article . story_separator_special_tag it is shown that for the scalar analog of electrodynamics in one dimension , the quasinormal modes of a leaky cavity form a complete set inside the cavity , provided the cavity is defined by a discontinuity in the refractive index . this condition is sufficiently general to apply to a number of interesting examples . the quasinormal modes are also orthogonal under a modified definition of the inner product . the completeness and orthogonality hold even though the cavity is not a hermitian system by itself . these properties allow the discrete quasinormal modes to be used as the basis for dynamics of the scalar wave in the cavity . story_separator_special_tag modern cavity quantum electrodynamics ( cavity qed ) illuminates the most fundamental aspects of coherence and decoherence in quantum mechanics . experiments on atoms in cavities can be described by elementary models but reveal intriguing subtleties of the interplay of coherent dynamics with external couplings . recent activity in this area has pioneered powerful new approaches to the study of quantum coherence and has fueled the growth of quantum information science . in years to come , the purview of cavity qed will continue to grow as researchers build on a rich infrastructure to attack some of the most pressing open questions in micro- and mesoscopic physics . story_separator_special_tag this paper reviews the work on cavity quantum electrodynamics of free atoms . in recent years , cavity experiments have also been conducted on a variety of solid-state systems resulting in many interesting applications , of which microlasers , photon bandgap structures and quantum dot structures in cavities are outstanding examples . although these phenomena and systems are very interesting , discussion is limited here to free atoms and mostly single atoms because these systems exhibit clean quantum phenomena and are not disturbed by a variety of other effects . at the centre of our review is the work on the one-atom maser , but we also give a survey of the entire field , using free atoms in order to show the large variety of problems dealt with .
the cavity interaction can be separated into two main regimes : the weak coupling in cavity or cavity-like structures with low quality factors q and the strong coupling when high-q cavities are involved . the weak coupling leads to modification of spontaneous transitions and level shifts , whereas the strong coupling enables one to observe a periodic exchange of photons between atoms and the radiation field . in this case story_separator_special_tag fast , high-fidelity single and two-qubit gates are essential to building a viable quantum information processor , but achieving both in the same system has proved challenging for spin qubits . we propose and analyze an approach to perform a long-distance two-qubit controlled phase ( cphase ) gate between two singlet-triplet qubits using an electromagnetic resonator to mediate their interaction . the qubits couple longitudinally to the resonator , and by driving the qubits near the resonator 's frequency they can be made to acquire a state-dependent geometric phase that leads to a cphase gate independent of the initial state of the resonator . using high impedance resonators enables gate times of order 10 ns while maintaining long coherence times . simulations show average gate fidelities of over 96 % using currently achievable experimental parameters and over 99 % using state-of-the-art resonator technology . after optimizing the gate fidelity in terms of parameters tuneable in-situ , we find it takes a simple power-law form in terms of the resonator 's impedance and quality and the qubits ' noise bath . story_separator_special_tag in this experiment , we couple a superconducting transmon qubit to a high-impedance 645 $ \\Omega $ microwave resonator . doing so leads to a large qubit-resonator coupling rate g , measured through a large vacuum rabi splitting of $ 2g \\simeq 910 $ mhz . the coupling is a significant fraction of the qubit and resonator oscillation frequencies , placing our system close to the ultrastrong coupling regime ( $ \\bar { g } = g/\\omega = 0.071 $ on resonance ) . combining this setup with a vacuum-gap transmon architecture shows the potential of reaching deep into the ultrastrong coupling regime ( $ \\bar { g } \\sim 0.45 $ ) with transmon qubits . story_separator_special_tag the strong coupling limit of cavity quantum electrodynamics ( qed ) implies the capability of a matter-like quantum system to coherently transform an individual excitation into a single photon within a resonant structure . this not only enables essential processes required for quantum information processing but also allows for fundamental studies of matter-light interaction . in this work we demonstrate strong coupling between the charge degree of freedom in a gate-detuned gaas double quantum dot ( dqd ) and a frequency-tunable high impedance resonator realized using an array of superconducting quantum interference devices ( squids ) . in the resonant regime , we resolve the vacuum rabi mode splitting of size $ 2g/2\\pi = 238 $ mhz at a resonator linewidth $ \\kappa/2\\pi = 12 $ mhz and a dqd charge qubit dephasing rate of $ \\gamma_2/2\\pi = 80 $ mhz extracted independently from microwave spectroscopy in the dispersive regime . our measurements indicate a viable path towards using circuit based cavity qed for quantum information processing in semiconductor nano-structures .
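as a quick sanity check on the numbers quoted in the last abstract , one common form of the strong-coupling criterion is $ g > ( \\kappa + \\gamma_2 ) / 2 $ ( conventions for the cooperativity differ between groups by factors of 2 or 4 , so the second number below is indicative only ) .

```python
g = 238 / 2   # mhz, g/2pi from the vacuum rabi splitting 2g/2pi = 238 mhz
kappa = 12    # mhz, resonator linewidth kappa/2pi
gamma2 = 80   # mhz, charge-qubit dephasing rate gamma2/2pi

print(g > (kappa + gamma2) / 2)   # True: 119 mhz vs 46 mhz
print(g**2 / (kappa * gamma2))    # cooperativity ~ 14.8 in one convention
```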
story_separator_special_tag after reviewing the limitation by the fine structure constant of the dimensionless coupling constant of a hydrogenic atom with a mode of the electromagnetic field in a cavity , we show that the situation presents itself differently for an artificial josephson atom coupled to a transmission line resonator . whereas the coupling constant for the case where such an atom is placed inside the dielectric of the resonator is proportional to $ \\alpha^ { 1/2 } $ , the coupling of the josephson atom when it is placed in series with the conducting elements of the resonator is proportional to $ \\alpha^ { -1/2 } $ and can reach values greater than 1 . story_separator_special_tag we propose a method for detecting the presence of a single spin in a crystal by coupling it to a high-quality factor superconducting planar resonator . by confining the microwave field in the vicinity of a constriction of nanometric dimensions , the coupling constant can be as high as 5-10 khz . this coupling affects the amplitude of the field reflected by the resonator and the integrated homodyne signal allows detection of a single spin with unit signal-to-noise ratio within a few milliseconds . we further show that a stochastic master equation approach and a bayesian analysis of the full time-dependent homodyne signal improves this figure by $ \\sim 30 $ % for typical parameters . story_separator_special_tag we report on electron spin resonance measurements of phosphorus donors localized in a 200 $ \\mu \\mathrm { m } ^ { 2 } $ area below the inductive wire of a lumped element superconducting resonator . by combining quantum limited parametric amplification with a low impedance microwave resonator design , we are able to detect around $ 2 \\times 10^ { 4 } $ spins with a signal-to-noise ratio of 1 in a single shot . the 150 hz coupling strength between the resonator field and individual spins is significantly larger than the 1-10 hz coupling rates obtained with typical coplanar waveguide resonator designs . because of the larger coupling rate , we find that spin relaxation is dominated by radiative decay into the resonator and dependent upon the spin-resonator detuning , as predicted by purcell . story_separator_special_tag recent experiments on strongly coupled microwave and ferromagnetic resonance modes have focused on large volume bulk crystals such as yttrium iron garnet , typically of millimeter-scale dimensions . we extend these experiments to lower volumes of magnetic material by exploiting low-impedance lumped-element microwave resonators . the low impedance equates to a smaller magnetic mode volume , which allows us to couple to a smaller number of spins in the ferromagnet . compared to previous experiments , we reduce the number of participating spins by two orders of magnitude , while maintaining the strength of the coupling rate . strongly coupled devices with small volumes of magnetic material may allow the use of spin orbit torques , which require high current densities incompatible with existing structures . story_separator_special_tag the speed of quantum gates and measurements is a decisive factor for the overall fidelity of quantum protocols when performed on physical qubits with finite coherence time . reducing the time required to distinguish qubit states with high fidelity is therefore a critical goal in quantum information science . the state-of-the-art readout of superconducting qubits is based on the dispersive interaction with a readout resonator .
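the dispersive readout mentioned here rests on the textbook circuit-qed hamiltonian ( a standard result , not quoted from the abstract ) :

```latex
H/\hbar = \omega_r\, a^\dagger a + \frac{\omega_q}{2}\,\sigma_z
        + \chi\, a^\dagger a\, \sigma_z ,
\qquad
\chi \approx \frac{g^2}{\Delta}, \quad \Delta = \omega_q - \omega_r ,
```

so the resonator frequency is pulled by $ \\pm\\chi $ depending on the qubit state , which is what the homodyne measurement distinguishes ( for transmons $ \\chi $ also involves the anharmonicity ) .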
here , we bring this technique to its current limit and demonstrate how the careful design of system parameters leads to fast and high-fidelity measurements without affecting qubit coherence . we achieve this result by increasing the dispersive interaction strength , by choosing an optimal linewidth of the readout resonator , by employing a purcell filter , and by utilizing phase-sensitive parametric amplification . in our experiment , we measure 98.25 % readout fidelity in only 48 ns , when minimizing read-out time , and 99.2 % in 88 ns , when maximizing the fidelity , limited predominantly by the qubit lifetime of 7.6 us . the presented scheme is also expected to be suitable for integration into a multiplexed readout architecture . story_separator_special_tag faster and more accurate state measurement is required for progress in superconducting qubit experiments with greater numbers of qubits and advanced techniques such as feedback . we have designed a multiplexed measurement system with a bandpass filter that allows fast measurement without increasing environmental damping of the qubits . we use this to demonstrate simultaneous measurement of four qubits on a single superconducting integrated circuit , the fastest of which can be measured to 99.8 % accuracy in 140 ns . this accuracy and speed is suitable for advanced multiqubit experiments including surface-code error correction . story_separator_special_tag we present a superconducting qubit for the circuit quantum electrodynamics architecture that has a tunable qubit-resonator coupling strength $ g $ . this coupling can be tuned from zero to values that are comparable with other superconducting qubits . at $ g=0 $ , the qubit is in a decoherence-free subspace with respect to spontaneous emission induced by the purcell effect . furthermore , we show that in this decoherence-free subspace , the state of the qubit can still be measured by either a dispersive shift on the resonance frequency of the resonator or by a cycling-type measurement . story_separator_special_tag physical implementations of qubits can be extremely sensitive to environmental coupling , which can result in decoherence . while efforts are made for protection , coupling to the environment is necessary to measure and manipulate the state of the qubit . as such , the goal of having long qubit energy relaxation times is in competition with that of achieving high-fidelity qubit control and measurement . here , we propose a method that integrates filtering techniques for preserving superconducting qubit lifetimes together with the dispersive coupling of the qubit to a microwave resonator for control and measurement . the result is a compact circuit that protects qubits from spontaneous loss to the environment , while also retaining the ability to perform fast , high-fidelity readout . importantly , we show the device operates in a regime that is attainable with current experimental parameters and provide a specific example for superconducting qubits in circuit quantum electrodynamics . story_separator_special_tag spontaneous emission of radiation is one of the fundamental mechanisms by which an excited quantum system returns to equilibrium . for spins , however , spontaneous emission is generally negligible compared to other non-radiative relaxation processes because of the weak coupling between the magnetic dipole and the electromagnetic field . 
story_separator_special_tag spontaneous emission of radiation is one of the fundamental mechanisms by which an excited quantum system returns to equilibrium . for spins , however , spontaneous emission is generally negligible compared to other non-radiative relaxation processes because of the weak coupling between the magnetic dipole and the electromagnetic field . in 1946 , purcell realized that the rate of spontaneous emission can be greatly enhanced by placing the quantum system in a resonant cavity . this effect has since been used extensively to control the lifetime of atoms and semiconducting heterostructures coupled to microwave or optical cavities , and is essential for the realization of high-efficiency single-photon sources . here we report the application of this idea to spins in solids . by coupling donor spins in silicon to a superconducting microwave cavity with a high quality factor and a small mode volume , we reach the regime in which spontaneous emission constitutes the dominant mechanism of spin relaxation . the relaxation rate is increased by three orders of magnitude as the spins are tuned to the cavity resonance , demonstrating that energy relaxation can be controlled on demand . our results provide a general way to initialize spin systems story_separator_special_tag we observe large spontaneous emission rate modification of individual inas quantum dots ( qds ) in a 2d photonic crystal with a modified , high- $ q $ single-defect cavity . compared to qds in a bulk semiconductor , qds that are resonant with the cavity show an emission rate increase of up to a factor of 8 . in contrast , off-resonant qds indicate up to fivefold rate quenching as the local density of optical states is diminished in the photonic crystal . in both cases , we demonstrate photon antibunching , showing that the structure represents an on-demand single photon source with a pulse duration from 210 ps to 8 ns . we explain the suppression of qd emission rate using finite difference time domain simulations and find good agreement with experiment . story_separator_special_tag an efficient single-photon source based on low-density ingaas quantum dots in a photonic-crystal nanocavity is demonstrated . the single-photon source features the effects of a photonic band gap , yielding a single-mode spontaneous emission coupling efficiency as high as β = 92 % and a linear polarization degree up to p = 95 % . this appealing performance makes it well suited for practical implementation of polarization-encoded schemes in quantum cryptography . story_separator_special_tag we report on the control of the spontaneous emission rates in inas self-assembled quantum dots weakly coupled to the mode of a modified h1 defect cavity in a two-dimensional photonic crystal slab . changes in sample temperature are used to spectrally tune the exciton emission from a single quantum dot to the monopole mode of the microcavity . a purcell enhancement of the spontaneous emission rate of up to a factor of 11.4 is seen on-resonance , while suppression by up to a factor of 4.4 is seen off-resonance . also , a two orders of magnitude increase in the intensity of light detected from the exciton is measured when compared to a quantum dot in bulk gaas .
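a minimal sketch , under assumed cavity parameters , of the ideal purcell factor f = ( 3/4π^2 ) ( λ/n ) ^3 q/v that underlies the enhancements reported above ; real devices fall short of this ideal because of spectral detuning and imperfect spatial overlap :

import math

# ideal purcell factor for an emitter perfectly matched to the cavity mode ;
# q and the mode volume ( expressed in units of ( lambda/n )^3 ) are assumed values .
def purcell_factor(q, v_in_cubic_wavelengths):
    return (3.0 / (4.0 * math.pi**2)) * q / v_in_cubic_wavelengths

# e.g. a modest photonic-crystal cavity : q ~ 1000 , v ~ 0.5 ( lambda/n )^3 ( assumed )
print(f"ideal f ~ {purcell_factor(1000, 0.5):.0f}")  # ~150 in the fully matched case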
story_separator_special_tag on-chip single-photon sources are key components for integrated photonic quantum technologies . semiconductor quantum dots can exhibit near-ideal single-photon emission , but this can be significantly degraded in on-chip geometries owing to nearby etched surfaces . a long-proposed solution to improve the indistinguishability is to use the purcell effect to reduce the radiative lifetime . however , until now only modest purcell enhancements have been observed . here we use pulsed resonant excitation to eliminate slow relaxation paths , revealing a highly purcell-shortened radiative lifetime ( 22.7 ps ) in a waveguide-coupled quantum dot photonic crystal cavity system . this leads to near-lifetime-limited single-photon emission that retains high indistinguishability ( 93.9 % ) on a timescale in which 20 photons may be emitted . nearly background-free pulsed resonance fluorescence is achieved under π-pulse excitation , enabling demonstration of an on-chip , on-demand single-photon source with very high potential repetition rates . story_separator_special_tag we report a sixfold purcell broadening of a resonance line of a 87rb atom , by strongly coupling it to a single-sided fiber-based fabry-perot cavity which collects 90 % of the enhanced single photon emission . story_separator_special_tag recent experiments have demonstrated that light and matter can mix together to an extreme degree , and previously uncharted regimes of light-matter interactions are currently being explored in a variety of settings . the so-called ultrastrong coupling ( usc ) regime is established when the light-matter interaction energy is a comparable fraction of the bare frequencies of the uncoupled systems . furthermore , when the interaction strengths become larger than the bare frequencies , the deep-strong coupling ( dsc ) regime emerges . this article reviews advances in the field of the usc and dsc regimes , in particular , for light modes confined in cavities interacting with two-level systems . an overview is first provided on the theoretical progress since the origins from the semiclassical rabi model until recent developments of the quantum rabi model . next , several key experimental results from a variety of quantum platforms are described , including superconducting circuits , semiconductor quantum wells , and other hybrid quantum systems . finally , anticipated applications are highlighted utilizing usc and dsc regimes , including novel quantum optical phenomena , quantum simulation , and quantum computation . story_separator_special_tag ultrastrong coupling between light and matter has , in the past decade , transitioned from theoretical idea to experimental reality . it is a new regime of quantum light-matter interaction , going beyond weak and strong coupling to make the coupling strength comparable to the transition frequencies in the system . the achievement of weak and strong coupling has led to increased control of quantum systems and applications like lasers , quantum sensing , and quantum information processing . here we review the theory of quantum systems with ultrastrong coupling , which includes entangled ground states with virtual excitations , new avenues for nonlinear optics , and connections to several important physical models . we also review the multitude of experimental setups , including superconducting circuits , organic molecules , semiconductor polaritons , and optomechanics , that now have achieved ultrastrong coupling . we then discuss the many potential applications that these achievements enable in physics and chemistry .
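before the microwave-to-optical conversion abstracts that follow , a hedged numeric aside on why such links are demanding : the bose-einstein occupation n = 1/ ( exp ( hf/kt ) - 1 ) makes a 10 ghz mode thermally noisy at room temperature but essentially dark both at millikelvin temperatures and at optical frequencies . only standard physical constants are used here ; the frequencies and temperatures are illustrative choices :

import math

h = 6.626e-34   # planck constant , j s
kb = 1.381e-23  # boltzmann constant , j/k

def n_thermal(f_hz, t_k):
    # mean thermal photon number of a single mode at frequency f and temperature t
    return 1.0 / (math.exp(h * f_hz / (kb * t_k)) - 1.0)

print(n_thermal(10e9, 300))    # ~6e2 thermal photons : a 10 ghz mode at 300 k is noisy
print(n_thermal(10e9, 0.01))   # ~1e-21 : the same mode at 10 mk is essentially vacuum
print(n_thermal(193e12, 300))  # ~4e-14 : a telecom-band mode is thermally dark even at 300 k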
story_separator_special_tag linking classical microwave electrical circuits to the optical telecommunication band is at the core of modern communication . future quantum information networks will require coherent microwave-to-optical conversion to link electronic quantum processors and memories via low-loss optical telecommunication networks . efficient conversion can be achieved with electro-optical modulators operating at the single microwave photon level . in the standard electro-optic modulation scheme , this is impossible because both up- and down-converted sidebands are necessarily present . here , we demonstrate true single-sideband up- or down-conversion in a triply resonant whispering gallery mode resonator by explicitly addressing modes with asymmetric free spectral range . compared to previous experiments , we show a 3 orders of magnitude improvement of the electro-optical conversion efficiency , reaching 0.1 % photon number conversion for a 10 ghz microwave tone at 0.42 mw of optical pump power . the presented scheme is fully compatible with existing superconducting 3d circuit quantum electrodynamics technology and can be used for nonclassical state conversion and communication . our conversion bandwidth is larger than 1 mhz and is not fundamentally limited . story_separator_special_tag leveraging the quantum information-processing ability of superconducting circuits and long-distance distribution ability of optical photons promises the realization of complex and large-scale quantum networks . in such a scheme , a coherent and efficient quantum transducer between superconducting and photonic circuits is critical . however , this quantum transducer is still challenging because the use of intermediate excitations in current schemes introduces extra noise and limits bandwidth . we realize direct and coherent transduction between superconducting and photonic circuits based on the triple-resonance electro-optic principle , with integrated devices incorporating both superconducting and optical cavities on the same chip . electromagnetically induced transparency is observed , indicating the coherent interaction between microwave and optical photons . internal conversion efficiency of 25.9 ± 0.3 % has been achieved , with 2.05 ± 0.04 % total efficiency . superconducting cavity electro-optics offers broad transduction bandwidth and high scalability and represents a significant step toward integrated hybrid quantum circuits and distributed quantum computation . story_separator_special_tag a simple theoretical study of the linear electro-optic effect is presented . this semiclassical approach is based on the single-energy-gap model , the dielectric theory and the concepts of bond charge and effective ionic charge . a general expression is obtained for the electro-optic coefficient of a crystal and is applied to a wide variety of diatomic and ternary compounds including zincblende ( gaas , gap , znse , zns , znte , cucl ) , wurtzite ( zns , cds , cdse ) , quartz ( sio2 ) , lithium niobate ( linbo3 , litao3 ) , kdp ( kh2po4 , kd2po4 , nh4h2po4 ) , chalcopyrite ( aggas2 , cugas2 ) and proustite ( ag3ass3 ) . the calculated results are generally in good agreement with experiment . story_separator_special_tag the bond-charge dielectric theory of phillips and van vechten is applied to the calculation of the electro-optic tensor coefficients . the agreement of the theoretical predictions with experimental values in the case of zinc blende and wurtzite crystals is very good . story_separator_special_tag in the previous paper [ m. tsang , phys . rev .
a 81 , 063837 ( 2010 ) , e-print arxiv:1003.0116 ] , i proposed a quantum model of a cavity electro-optic modulator , which can coherently couple an optical cavity mode to a microwave resonator mode and enable novel quantum operations on the two modes , including laser cooling of the microwave mode , electro-optic entanglement , and backaction-evading optical measurement of a microwave quadrature . in this sequel , i focus on the quantum input-output relations between traveling optical and microwave fields coupled to a cavity electro-optic modulator . with red-sideband optical pumping , the relations are shown to resemble those of a beam splitter for the traveling fields , so that in the ideal case of zero parasitic loss and critical coupling , microwave photons can be coherently up-converted to `` flying '' optical photons with unit efficiency , and vice versa . with blue-sideband pumping , the modulator acts as a nondegenerate parametric amplifier , which can generate two-mode squeezing and hybrid entangled photon pairs at optical and microwave frequencies . these fundamental operations provide a potential bridge between circuit quantum electrodynamics and quantum optics story_separator_special_tag we demonstrate strongly nondegenerate optical continuous-wave parametric oscillations in crystalline whispering gallery mode resonators fabricated from linbo3 . the required phase matching is achieved by geometrical confinement of the modes in the resonator . story_separator_special_tag we describe and demonstrate sensitive room-temperature detection of terahertz ( thz ) radiation by nonlinearly upconverting terahertz to the near-infrared regime , relying on telecommunications components . thz radiation at 700 ghz is mixed with pump light at 1550 nm in a bulk gaas crystal to generate an idler wave at 1555.6 nm , which is separated and detected by using a commercial p-i-n diode . the thz detector operates at room temperature and has an intrinsic thz-to-optical photon conversion efficiency of 0.001 % . story_separator_special_tag we report on the experimental observation of efficient all-resonant three-wave mixing using high-q whispering-gallery modes . the modes were excited in a millimeter-size toroidal cavity fabricated from linbo3 . we implemented a low-noise resonant electro-optic modulator based on this wave mixing process . we observe an efficient modulation of light with coherent microwave pumping at 9 ghz with applied power of approximately 10 mw . used as a receiver , the modulator allows us to detect nanowatt microwave radiation . preliminary results with a 33-ghz modulator prototype are also reported . we present a theoretical interpretation of the experimental results and discuss possible applications of the device . story_separator_special_tag we demonstrate efficient upconversion of subterahertz radiation into the optical domain in a high-q whispering gallery mode resonator with quadratic optical nonlinearity . the 5×10^ { -3 } power conversion efficiency of a cw 100 ghz signal is achieved with only 16 mw of optical pump .
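a quick hedged check of the sideband arithmetic in the upconversion experiments above : mixing a 1550 nm pump with a 700 ghz wave should give sidebands at f_pump ± f_thz , and the lower sideband indeed lands at the reported 1555.6 nm :

# sum/difference-frequency sidebands of a 1550 nm pump mixed with a 700 ghz wave
c = 2.998e8                    # speed of light , m/s
f_pump = c / 1550e-9           # ~193.4 thz
f_thz = 0.7e12                 # 700 ghz

lower = c / (f_pump - f_thz)   # difference-frequency sideband , ~1555.6 nm
upper = c / (f_pump + f_thz)   # sum-frequency sideband , ~1544.4 nm
print(f"lower sideband {lower*1e9:.1f} nm, upper sideband {upper*1e9:.1f} nm")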
story_separator_special_tag a microwave photon counter can be based on upconversion of the microwave radiation to the optical domain , followed by the optical photon counting . the former process sets the detection bandwidth , the latter sets the time resolution sufficient for sub-thz photon counting at room temperature . we report our progress in developing an efficient and intrinsically noiseless microwaves-to-optics converter , which is based on a high-q whispering gallery resonator with quadratic nonlinearity . story_separator_special_tag optical whispering gallery modes ( wgms ) derive their name from a famous acoustic phenomenon of guiding a wave by a curved boundary observed nearly a century ago . this phenomenon has a rather general nature , equally applicable to sound and all other waves . it enables resonators of unique properties attractive both in science and engineering . very high quality factors of optical wgm resonators persisting in a wide wavelength range spanning from radio frequencies to ultraviolet light , their small mode volume , and tunable in- and out-coupling make them exceptionally efficient for nonlinear optical applications . nonlinear optics facilitates interaction of photons with each other and with other physical systems , and is of prime importance in quantum optics . in this paper we review numerous applications of wgm resonators in nonlinear and quantum optics . we outline the current areas of interest , summarize progress , highlight difficulties , and discuss possible future development trends in these areas . story_separator_special_tag we present an experimental study of the variation of quality factor ( q-factor ) of wgm resonators as a function of surface roughness . we consider mm-size whispering-gallery mode resonators manufactured with fluoride crystals , featuring q-factors of the order of 1 billion at 1550 nm . the experimental procedure consists of repeated polishing steps , after which the surface roughness is evaluated using profilometry by white-light phase-shifting interferometry , while the q-factors are determined using the cavity-ring-down method . this protocol permits us to establish an explicit curve linking the q-factor of the disk-resonator to the surface roughness of the rim . we have performed measurements with four different crystals , namely , magnesium , calcium , strontium , and lithium fluoride . we have thereby found that the variations of q-factor as a function of surface roughness are universal , in the sense that they are globally independent of the bulk material under consideration . we also discuss our experimental results in the light of theoretical estimates of surface scattering q-factors already published in the literature . story_separator_special_tag we demonstrate second harmonic generation ( shg ) in an x-cut congruent lithium niobate ( ln ) whispering gallery mode resonator . we first show theoretically that independent control of the coupling of the pump and signal modes is optimal for high conversion rates . a scheme based on our earlier work in ref . [ 1 ] is then implemented experimentally to verify this . thereby we are able to improve on the efficiency of shg by more than an order of magnitude by selectively out-coupling using a ln prism , utilizing the birefringence of both the prism and the resonator . we report 5.28 %/mw efficiency for shg from 1555.4 nm to 777.7 nm . story_separator_special_tag the control of dispersion in fibre optical waveguides is of critical importance to optical fibre communications systems and more recently for continuum generation from the ultraviolet to the mid-infrared . the wavelength at which the group velocity dispersion crosses zero can be set by varying the fibre core diameter or index step .
moreover , sophisticated methods to manipulate higher-order dispersion so as to shape and even flatten the dispersion over wide bandwidths are possible using multi-cladding fibres . here we introduce design and fabrication techniques that allow analogous dispersion control in chip-integrated optical microresonators , and thereby demonstrate higher-order , wide-bandwidth dispersion control over an octave of spectrum . importantly , the fabrication method we employ for dispersion control simultaneously permits optical q factors above 100 million , which is critical for the efficient operation of nonlinear optical oscillators . dispersion control in high-q systems has become of great importance in recent years with increased interest in chip-integrable optical frequency combs . story_separator_special_tag dispersion engineering of microresonators is very important for applications such as optical comb generation , which calls for dispersion that is small and flat over a wide band . in this paper , dispersion investigation has been carried out for whispering gallery mode ( wgm ) microresonators fabricated by laser micromachining . the dispersion properties of wgm microresonators fabricated by co2 and femtosecond lasers have been comprehensively studied by finite element method for different geometries . significantly flattened dispersion curves have been obtained for both microresonators fabricated by co2 laser and femtosecond laser with optimized geometries . the belt-like resonator fabricated by femtosecond laser has a flatter , smaller dispersion , between only 0 and 4 ps/ ( nm·km ) within the wavelength range from 1300 to 1800 nm , comparable to that of a zero dispersion flattened fiber . the results are of great significance for guiding the wgm microresonator fabrication for optical comb generation applications . story_separator_special_tag high speed optical telecommunication is enabled by wavelength division multiplexing , whereby hundreds of individually stabilized lasers encode the information within a single mode optical fiber . in the search for larger bandwidth , the optical power sent into the fiber is limited by optical non-linearities within the fiber and energy consumption of the light sources starts to become a significant cost factor . optical frequency combs have been suggested to remedy this problem by generating multiple laser lines within a monolithic device , but their current stability and coherence let them operate only in small parameter ranges . here we show that a broadband frequency comb realized through the electro-optic effect within a high quality whispering gallery mode resonator can operate at low microwave and optical powers . contrary to the usual third order kerr non-linear optical frequency combs we rely on the second order non-linear effect which is much more efficient . our result uses a fixed microwave signal which is mixed with an optical pump signal to generate a coherent frequency comb with a precisely determined carrier separation . the resonant enhancement enables us to operate with microwave powers three orders of magnitude smaller than in commercially available devices . story_separator_special_tag the quantum dynamics of the coupling between a cavity optical field and a resonator microwave field via the electro-optic effect is studied . this coupling has the same form as the optomechanical coupling via radiation pressure , so all previously considered optomechanical effects can in principle be observed in electro-optic systems as well .
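( as a hedged aside on the preceding sentence : the shared interaction form is h = g0 a†a ( b + b† ) , the standard radiation-pressure-type hamiltonian ; the minimal qutip sketch below uses arbitrary hilbert-space truncations and an illustrative g0 , not parameters from the paper . )

from qutip import destroy, qeye, tensor

# cavity electro-optic / optomechanical coupling , h = g0 * a_dag a ( b + b_dag )
n_opt, n_mw = 5, 5                        # truncations ( assumed , illustrative )
a = tensor(destroy(n_opt), qeye(n_mw))    # optical cavity mode
b = tensor(qeye(n_opt), destroy(n_mw))    # microwave resonator mode
g0 = 1.0                                  # vacuum coupling rate ( arbitrary units )

h = g0 * a.dag() * a * (b + b.dag())
print(h.eigenenergies()[:3])              # same spectrum structure as optomechanics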
in particular , i point out the possibilities of laser cooling of the microwave mode , entanglement between the optical mode and the microwave mode via electro-optic parametric amplification , and back-action-evading optical measurements of a microwave quadrature . story_separator_special_tag the electrooptic response of crystals becomes attenuated at megahertz and higher frequencies , where it is of the most use for communication systems . this research explores new possibilities of improved electrooptic interaction at high frequencies , discovered as a result of coupled electrooptic effects near selected piezoelectric resonances . results suggest that for electrooptics the key to a large interaction at high frequencies is the gradient of the strain in a modulated crystal and the acceleration of the accompanying lattice waves . while strains tend to be damped , acceleration of the lattice wave retains its amplitude at high frequencies . this interaction is studied by a high frequency laser doppler vibrometer and by numerical finite element analysis modeling using comsol . pmn-pt crystal was the primary material studied due to its large piezoelectric coupling and electrooptic coefficients . the dynamic displacement of the samples was measured over a broad range of frequencies , including the fundamental resonant modes and higher order harmonics where the mode structure becomes complex and not well described by existing analytical models . story_separator_special_tag in this paper , we propose a novel quantum approach for microwave-to-optical conversion in a multilayer graphene structure . the graphene layers are electrically connected and pumped by an optical field . the physical concept is based on using a driving microwave signal to modulate the optical input pump by controlling graphene conductivity . consequently , upper and lower optical sidebands are generated . to achieve low noise conversion , the lower sideband is suppressed by the multilayer graphene destruction resonance . a perturbation approach is implemented to model the effective permittivity of the electrically driven multilayer graphene . subsequently , a quantum mechanical analysis is carried out to describe the evolution of the interacting fields . it is shown that a quantum microwave-to-optical conversion is achieved for multilayer graphene of the proper length ( i.e. , number of layers ) . the conversion rate and the number of converted photons are evaluated according to several parameters . these include the microwave signal frequency , the microwave driving voltages , the graphene intrinsic electron density , and the number of graphene layers . owing to multilayer dispersion and to the properties of graphene , it is shown that a significant story_separator_special_tag we propose a low noise , triply-resonant , electro-optic ( eo ) scheme for quantum microwave-to-optical conversion based on coupled nanophotonic resonators integrated with a superconducting qubit . our optical system features a split resonance - a doublet - with a tunable frequency splitting that matches the microwave resonance frequency of the superconducting qubit . this is in contrast to conventional approaches where large optical resonators with free-spectral range comparable to the qubit microwave frequency are used . in our system , eo mixing between the optical pump coupled into the low-frequency doublet mode and a resonant microwave photon results in an up-converted optical photon on resonance with the high-frequency doublet mode .
importantly , the down-conversion process , which is the source of noise , is suppressed in our scheme as the coupled-resonator system does not support modes at that frequency . our device has at least an order of magnitude smaller footprint than the conventional devices , resulting in large overlap between optical and microwave fields and a large photon conversion rate ( g/2π ) in the range of ~5–15 khz . owing to the large g factor and doubly-resonant nature of story_separator_special_tag coherent conversion of microwave and optical photons at the single-quantum level can significantly expand our ability to process signals in various fields . efficient up-conversion of a feeble signal in the microwave domain to the optical domain will lead to quantum-noise-limited microwave amplifiers . coherent exchange between optical photons and microwave photons will also be a stepping stone to realize long-distance quantum communication . here we demonstrate bidirectional and coherent conversion between microwave and light using collective spin excitations in a ferromagnet . the converter consists of two harmonic oscillator modes , a microwave cavity mode and a magnetostatic mode called the kittel mode , where microwave photons and magnons in the respective modes are strongly coupled and hybridized . an itinerant microwave field and a traveling optical field can be coupled through the hybrid system , where the microwave field is coupled to the hybrid system through the cavity mode , while the optical field addresses the hybrid system through the kittel mode via faraday and inverse faraday effects . the conversion efficiency is theoretically analyzed and experimentally evaluated . the possible schemes for improving the efficiency are also discussed . story_separator_special_tag includes a full mathematical treatment of the interaction of an electromagnetic wave with a gyromagnetic ferrite material . story_separator_special_tag a detailed comparison between the magnetostatic theory and experimental observation is given . both the resonant field and the intensity of the magnetostatic modes are compared . the disagreement between observation of resonant fields and the static theory is about 50 gauss for an yttrium iron garnet sphere 1.3 mm in diameter . inclusion of the first-order propagation corrections reduces this disagreement to less than two gauss for 29 of the 31 modes that were compared . this increased accuracy allows both a more positive identification and a more accurate determination of g-factors . a comparison of the static theory of intensities with observation has likewise been made . the observed intensities are close to the predicted intensities for some lines and from 4 to 100 times greater for others . story_separator_special_tag it has been found recently that in ferromagnetic resonance experiments performed in inhomogeneous rf exciting fields at a fixed frequency , absorption of power takes place at a number of distinct magnetic fields . this is ascribed to the existence of long-wavelength modes of oscillation of the ferromagnetic sample . the mode spectrum of spheroids is examined for the case , which may often hold in practice , where exchange and electromagnetic propagation can be ignored simultaneously . story_separator_special_tag magnons in ferrimagnetic insulators such as yttrium iron garnet ( yig ) have recently emerged as promising candidates for coherent information processing in microwave circuits .
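( an aside on the frequency scale of these magnons : for a uniformly magnetized sphere the kittel-mode frequency is f = ( γ/2π ) b0 , with γ/2π ≈ 28 ghz/t for electron spins ; the bias fields below are assumed examples , chosen only to show that typical fields put the mode squarely in the microwave band . )

# kittel-mode frequency of a ferromagnetic sphere , f = (gamma/2pi) * b0
gamma_over_2pi = 28.0e9   # hz per tesla , electron gyromagnetic ratio
for b0 in (0.1, 0.3, 0.36):  # bias fields in tesla ( assumed examples )
    print(f"b0 = {b0} t -> f = {gamma_over_2pi * b0 / 1e9:.1f} ghz")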
here we demonstrate optical whispering gallery modes of a yig sphere interrogated by a silicon nitride photonic waveguide , with quality factors approaching 10^ { 6 } in the telecom c band after surface treatments . moreover , in contrast to conventional faraday setups , this implement allows an input photon polarized colinearly to the magnetization to be scattered to a sideband mode of orthogonal polarization . this brillouin scattering process is enhanced through triply resonant magnon , pump , and signal photon modes within an `` optomagnonic cavity . '' our results show the potential use of magnons for mediating microwave-to-optical carrier conversion . story_separator_special_tag we experimentally implement a system of cavity optomagnonics , where a sphere of ferromagnetic material supports whispering gallery modes ( wgms ) for photons and the magnetostatic mode for magnons . we observe pronounced nonreciprocity and asymmetry in the sideband signals generated by the magnon-induced brillouin scattering of light . the spin-orbit coupled nature of the wgm photons , their geometrical birefringence , and the time-reversal symmetry breaking in the magnon dynamics impose the angular-momentum selection rules in the scattering process and account for the observed phenomena . the unique features of the system may find interesting applications at the crossroad between quantum optics and spintronics . story_separator_special_tag we demonstrate that yttrium iron garnet microspheres support optical whispering gallery modes similar to those in non-magnetic dielectric materials . the direction of the ferromagnetic moment tunes both the resonant frequency via the voigt effect as well as the degree of polarization rotation via the faraday effect . an understanding of the magneto-optical coupling in whispering gallery modes , where the propagation direction rotates with respect to the magnetization , is fundamental to the emerging field of cavity optomagnonics . story_separator_special_tag an enhancement in brillouin light scattering of optical photons with magnons is demonstrated in magneto-optical whispering gallery mode resonators tuned to a triple-resonance point . this occurs when both the input and output optical modes are resonant with those of the whispering gallery resonator , with a separation given by the ferromagnetic resonance frequency . the identification and excitation of specific optical modes allows us to gain a clear understanding of the mode-matching conditions . a selection rule due to wave vector matching leads to an intrinsic single-sideband excitation . strong suppression of one sideband is essential for one-to-one frequency mapping in coherent optical-to-microwave conversion . story_separator_special_tag a ferromagnetic sphere can support optical vortices in the form of whispering gallery modes and magnetic quasivortices in the form of magnetostatic modes with nontrivial spin textures . these vortices can be characterized by their orbital angular momenta . we experimentally investigate brillouin scattering of photons in the whispering gallery modes by magnons in the magnetostatic modes , zeroing in on the exchange of the orbital angular momenta between the optical vortices and magnetic quasivortices . we find that the conservation of the orbital angular momentum results in different nonreciprocal behavior in the brillouin light scattering . 
new avenues for chiral optics and optospintronics can be opened up by taking the orbital angular momenta as a new degree of freedom for cavity optomagnonics . story_separator_special_tag we report the observation of strong coupling between the exchange-coupled spins in a gallium-doped yttrium iron garnet and a superconducting coplanar microwave resonator made from nb . the measured coupling rate of 450 mhz is proportional to the square root of the number of exchange-coupled spins and well exceeds the loss rate of 50 mhz of the spin system . this demonstrates that exchange-coupled systems are suitable for cavity quantum electrodynamics experiments , while allowing high integration densities due to their spin densities of the order of one bohr magneton per atom . our results furthermore show , that experiments with multiple exchange-coupled spin systems interacting via a single resonator are within reach . story_separator_special_tag we realize a cavity magnon-microwave photon system in which a magnetic dipole interaction mediates strong coupling between the collective motion of a large number of spins in a ferrimagnet and the microwave field in a three-dimensional cavity . by scaling down the cavity size and increasing the number of spins , an ultrastrong coupling regime is achieved with a cooperativity reaching 12 600. interesting dynamic features including classical rabi-like oscillation , magnetically induced transparency , and the purcell effect are demonstrated in this highly versatile platform , highlighting its great potential for coherent information processing . story_separator_special_tag magnons are quantized quasiparticles that can in principle be used in quantum computation . to implement such computations in practice , magnons must be strongly coupled with photons , which transfer information between them . in this work , the authors demonstrate extremely strong couplings using a type of multipost microwave cavity that can focus a magnetic field into submillimeter-sized samples . this ultrastrong coupling of magnons and photons can be a building block in the architecture of high-fidelity hybrid quantum systems for the processors of the future . story_separator_special_tag we demonstrate large normal-mode splitting between a magnetostatic mode ( the kittel mode ) in a ferromagnetic sphere of yttrium iron garnet and a microwave cavity mode . strong coupling is achieved in the quantum regime where the average number of thermally or externally excited magnons and photons is less than one . we also confirm that the coupling strength is proportional to the square root of the number of spins . a nonmonotonic temperature dependence of the kittel-mode linewidth is observed below 1 k and is attributed to the dissipation due to the coupling with a bath of two-level systems . story_separator_special_tag abstract the techniques of microwave quantum optics are applied to collective spin excitations in a macroscopic sphere of a ferromagnetic insulator . we demonstrate , in the single-magnon limit , strong coupling between a magnetostatic mode in the sphere and a microwave cavity mode . moreover , we introduce a superconducting qubit in the cavity and couple the qubit with the magnon excitation via the virtual photon excitation . we observe the magnon vacuum-induced rabi splitting . the hybrid quantum system enables generation and characterization of non-classical quantum states of magnons . 
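several of the abstracts above quote the same collective-enhancement law , g_n = g_1 √n : the ensemble coupling grows as the square root of the number of participating spins . a hedged numeric sketch , with an assumed single-spin coupling chosen only for illustration :

import math

# collective spin-photon coupling , g_n = g_1 * sqrt(n)
g1 = 0.1  # single-spin coupling in hz ( assumed order of magnitude )
for n in (1e12, 1e16, 1e18):  # number of spins ( illustrative )
    print(f"n = {n:.0e} -> g_n = {g1 * math.sqrt(n) / 1e6:.2f} mhz")
# even a sub-hz single-spin rate reaches the mhz scale for macroscopic ensembles ,
# which is why millimeter yig spheres can reach strong and ultrastrong coupling .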
story_separator_special_tag we have fabricated and measured a high-q josephson junction resonator with a tunable resonance frequency . a dc magnetic flux allows the resonance frequency to be changed by over 10 % . weak coupling to the environment allows a quality factor of ~7000 when on average less than one photon is stored in the resonator . at large photon numbers , the nonlinearity of the josephson junction creates two stable oscillation states . this resonator can be used as a tool for investigating the quality of josephson junctions in qubits below the single photon limit , and can be used as a microwave qubit readout at high photon numbers . story_separator_special_tag we have fabricated and characterized tunable superconducting transmission line resonators . to change the resonance frequency , we modify the boundary condition at one end of the resonator through the tunable josephson inductance of a superconducting quantum interference device . we demonstrate a large tuning range ( several hundred megahertz ) , high quality factors ( ~10^ { 4 } ) , and that we can change the frequency of a few-photon field on a time scale orders of magnitude faster than the photon lifetime of the resonator . this demonstration has implications in a variety of applications . story_separator_special_tag brillouin light scattering is an established technique to study magnons , the elementary excitations of a magnet . its efficiency can be enhanced by cavities that concentrate the light intensity . here , we theoretically study inelastic scattering of photons by a magnetic sphere that supports optical whispering gallery modes in a plane normal to the magnetization . magnons with low angular momenta scatter the light in the forward direction with a pronounced asymmetry in the stokes and the anti-stokes scattering strength , consistent with earlier studies . magnons with large angular momenta constitute damon-eshbach modes which are shown to inelastically reflect light . the reflection spectrum contains either a stokes or anti-stokes peak , depending on the direction of the magnetization , a selection rule that can be explained by the chirality of the damon-eshbach magnons . the controllable energy transfer can be used to manage the thermodynamics of the magnet by light . story_separator_special_tag we demonstrate , at room temperature , the strong coupling of the fundamental and non-uniform magnetostatic modes of an yttrium iron garnet ferrimagnetic sphere to the electromagnetic modes of a co-axial cavity . the well-defined field profile within the cavity yields a specific coupling strength for each magnetostatic mode . we experimentally measure the coupling strength for the different magnetostatic modes and , by calculating the expected coupling strengths , we are able to identify the modes themselves . story_separator_special_tag we report measurements made at millikelvin temperatures of a superconducting coplanar waveguide resonator ( cpwr ) coupled to a sphere of yttrium-iron garnet . systems hybridising collective spin excitations with microwave photons have recently attracted interest for their potential quantum information applications . in this experiment the non-uniform microwave field of the cpwr allows coupling to be achieved to many different magnon modes in the sphere . calculations of the relative coupling strength of different mode families in the sphere to the cpwr are used to successfully identify the magnon modes and their frequencies .
the measurements are extended to the quantum limit by reducing the drive power until , on average , less than one photon is present in the cpwr . investigating the time-dependent response of the system to square pulses , oscillations in the output signal at the mode splitting frequency are observed . these results demonstrate the feasibility of future experiments combining magnonic elements with planar superconducting quantum devices . story_separator_special_tag we identify experimentally the magnetostatic modes active for brillouin light scattering in the optical whispering gallery modes of a yttrium iron garnet sphere . each mode is identified by magnetic-field dispersion of ferromagnetic-resonance spectroscopy and coupling strength to the known field distribution of the microwave drive antenna . our optical measurements confirm recent predictions that higher-order magnetostatic modes can also generate optical scattering , according to the selection rules derived from the axial symmetry . from this we summarize the selection rules for brillouin light scattering . we give experimental evidence that the optomagnonic coupling to nonuniform magnons can be higher than that of the uniform kittel mode . story_separator_special_tag magnetostatic modes supported by a ferromagnetic sphere have been known as the walker modes , each of which possesses an orbital angular momentum as well as a spin angular momentum along a static magnetic field . the walker modes with non-zero orbital angular momenta exhibit topologically non-trivial spin textures , which we call magnetic quasi-vortices . photons in optical whispering gallery modes supported by a dielectric sphere possess orbital and spin angular momenta forming optical vortices . within a ferromagnetic , as well as dielectric , sphere , two forms of vortices interact in the process of brillouin light scattering . we argue that in the scattering there is a selection rule that dictates the exchange of orbital angular momenta between the vortices . the selection rule is shown to be responsible for the experimentally observed nonreciprocal brillouin light scattering . story_separator_special_tag we propose a device for the reversible and quiet conversion of microwave photons to optical sideband photons that can reach 100 % quantum efficiency . the device is based on an erbium-doped crystal placed in both an optical and microwave resonator . we show that efficient conversion can be achieved so long as the product of the optical and microwave cooperativity factors can be made large . we argue that achieving this regime is feasible with current technology and we discuss a possible implementation . story_separator_special_tag we present an experimental demonstration of converting a microwave field to an optical field via frequency mixing in a cloud of cold 87rb atoms , where the microwave field strongly couples to an electric dipole transition between rydberg states . we show that the conversion allows the phase information of the microwave field to be coherently transferred to the optical field . with the current energy level scheme and experimental geometry , we achieve a photon-conversion efficiency of ~0.3 % at low microwave intensities and a broad conversion bandwidth of more than 4 mhz . theoretical simulations agree well with the experimental data , and they indicate that near-unit efficiency is possible in future experiments .
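a hedged sketch of the cooperativity expression commonly used for such transducers , matching the erbium proposal 's requirement above that the product of cooperativities be made large : on resonance , η = 4 c_o c_m / ( 1 + c_o + c_m ) ^2 , which approaches unity for large , matched cooperativities . this closed form is the standard input-output result for a doubly coupled converter , quoted here as an assumption rather than from any one abstract :

# on-resonance conversion efficiency versus optical and microwave cooperativities
def eta(c_opt, c_mw):
    return 4.0 * c_opt * c_mw / (1.0 + c_opt + c_mw)**2

for c in (0.1, 1, 10, 100):   # matched cooperativities ( illustrative )
    print(f"c = {c} -> eta = {eta(c, c):.3f}")
# eta = 0.444 at c = 1 and already 0.990 at c = 100 , showing why the proposals
# above chase large cooperativity products rather than large couplings alone .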
story_separator_special_tag a candidate for converting quantum information from microwave to optical frequencies is the use of a single atom that interacts with a superconducting microwave resonator on one hand and an optical cavity on the other . the large electric dipole moments and microwave transition frequencies possessed by rydberg states allow them to couple strongly to superconducting devices . lasers can then be used to connect a rydberg transition to an optical transition to realize the conversion . since the fundamental source of noise in this process is spontaneous emission from the atomic levels , the resulting control problem involves choosing the pulse shapes of the driving lasers so as to maximize the transfer rate while minimizing this loss . here we consider the concrete example of a cesium atom , along with two specific choices for the levels to be used in the conversion cycle . under the assumption that spontaneous emission is the only significant source of errors , we use numerical optimization to determine the likely rates for reliable quantum communication that could be achieved with this device . these rates are on the order of a few megaqubits per second . story_separator_special_tag electromagnetically induced transparency is a technique for eliminating the effect of a medium on a propagating beam of electromagnetic radiation . eit may also be used , but under more limited conditions , to eliminate optical self focusing and defocusing and to improve the transmission of laser beams through inhomogeneous refracting gases and metal vapors , as figure 1 illustrates . the technique may be used to create large populations of coherently driven uniformly phased atoms , thereby making possible new types of optoelectronic devices . story_separator_special_tag techniques that use quantum interference effects are being actively investigated to manipulate the optical properties of quantum systems . one such example is electromagnetically induced transparency , a quantum effect that permits the propagation of light pulses through an otherwise opaque medium . here we report an experimental demonstration of electromagnetically induced transparency in an ultracold gas of sodium atoms , in which the optical pulses propagate at twenty million times slower than the speed of light in a vacuum . the gas is cooled to nanokelvin temperatures by laser and evaporative cooling . the quantum interference controlling the optical properties of the medium is set up by a coupling laser beam propagating at a right angle to the pulsed probe beam . at nanokelvin temperatures , the variation of refractive index with probe frequency can be made very steep . in conjunction with the high atomic density , this results in the exceptionally low light speeds observed . by cooling the cloud below the transition temperature for bose einstein condensation ( causing a macroscopic population of alkali atoms in the quantum ground state of the confining potential ) , we observe even lower pulse propagation velocities ( 17 m story_separator_special_tag the dynamics of resonant light propagation in rubidium vapor in a cell with antirelaxation wall coating are investigated . we change the polarization of the input light and measure the time dependence of the polarization after the cell . the observed dynamics are shown to be analogous to those in electromagnetically induced transparency . spectral dependence of light pulse delays is found to be similar to that of nonlinear magneto-optic rotation . 
delays up to ≈ 13 ms are observed , corresponding to an 8 m/s group velocity . fields of a few microgauss are used to control the group velocity . story_separator_special_tag electromagnetically induced transparency [ 1,2,3 ] is a quantum interference effect that permits the propagation of light through an otherwise opaque atomic medium ; a coupling laser is used to create the interference necessary to allow the transmission of resonant pulses from a probe laser . this technique has been used [ 4,5,6 ] to slow and spatially compress light pulses by seven orders of magnitude , resulting in their complete localization and containment within an atomic cloud [ 4 ] . here we use electromagnetically induced transparency to bring laser pulses to a complete stop in a magnetically trapped , cold cloud of sodium atoms . within the spatially localized pulse region , the atoms are in a superposition state determined by the amplitudes and phases of the coupling and probe laser fields . upon sudden turn-off of the coupling laser , the compressed probe pulse is effectively stopped ; coherent information initially contained in the laser fields is frozen in the atomic medium for up to 1 ms . the coupling laser is turned back on at a later time and the probe pulse is regenerated : the stored coherence is read out and transferred back into the radiation field . we present a theoretical model that reveals story_separator_special_tag physical processes that could facilitate coherent control of light propagation are under active exploration [ 1,2,3,4,5 ] . in addition to their fundamental interest , these efforts are stimulated by practical possibilities , such as the development of a quantum memory for photonic states [ 6,7,8 ] . controlled localization and storage of photonic pulses may also allow novel approaches to manipulation of light via enhanced nonlinear optical processes [ 9 ] . recently , electromagnetically induced transparency [ 10 ] was used to reduce the group velocity of propagating light pulses [ 11,12 ] and to reversibly map propagating light pulses into stationary spin excitations in atomic media [ 13,14,15,16 ] . here we describe and experimentally demonstrate a technique in which light propagating in a medium of rb atoms is converted into an excitation with localized , stationary electromagnetic energy , which can be held and released after a controllable interval . our method creates pulses of light with stationary envelopes bound to an atomic spin coherence , offering new possibilities for photon state manipulation and nonlinear optical processes at low light levels . story_separator_special_tag we propose a scheme to couple short single photon pulses to superconducting qubits . an optical photon is first absorbed into an inhomogeneously broadened rare-earth doped crystal using controlled reversible inhomogeneous broadening . the optical excitation is then mapped into a spin state using a series of pulses and subsequently transferred to a superconducting qubit via a microwave cavity . to overcome the intrinsic and engineered inhomogeneous broadening of the optical and spin transitions in rare-earth doped crystals , we make use of a special transfer protocol using staggered pulses . we predict total transfer efficiencies on the order of 90 % . story_separator_special_tag the ability to convert quantum states from microwave photons to optical photons is important for hybrid system approaches to quantum information processing .
in this paper we report the up-conversion of a microwave signal into the optical telecommunications wavelength band using erbium dopants in a yttrium orthosilicate crystal via stimulated raman scattering . the microwaves were applied to the sample using a 3d copper loop-gap resonator and the coupling and signal optical fields were single passed . the conversion efficiency was low , in agreement with a theoretical analysis , but can be significantly enhanced with an optical resonator . story_separator_special_tag quantum light matter interfaces connecting stationary qubits to photons will enable optical networks for quantum communications , precise global time keeping , photon switching and studies of fundamental physics . rare-earth-ion-doped crystals are state-of-the-art materials for optical quantum memories and quantum transducers between optical photons , microwave photons and spin waves . here we demonstrate coupling of an ensemble of neodymium rare-earth-ions to photonic nanocavities fabricated in the yttrium orthosilicate host crystal . cavity quantum electrodynamics effects including purcell enhancement ( f=42 ) and dipole-induced transparency are observed on the highly coherent ^4i_ { 9/2 } → ^4f_ { 3/2 } optical transition . fluctuations in the cavity transmission due to statistical fine structure of the atomic density are measured , indicating operation at the quantum level . coherent optical control of cavity-coupled rare-earth ions is performed via photon echoes . long optical coherence times ( t_2 ~ 100 μs ) and small inhomogeneous broadening are measured for the cavity-coupled rare-earth ions , thus demonstrating their potential for on-chip scalable quantum light matter interfaces . story_separator_special_tag rydberg atoms with principal quantum number n ≫ 1 have exaggerated atomic properties including dipole-dipole interactions that scale as n^ { 4 } and radiative lifetimes that scale as n^ { 3 } . it was proposed a decade ago to take advantage of these properties to implement quantum gates between neutral atom qubits . the availability of a strong long-range interaction that can be coherently turned on and off is an enabling resource for a wide range of quantum information tasks stretching far beyond the original gate proposal . rydberg-enabled capabilities include long-range two-qubit gates , collective encoding of multiqubit registers , implementation of robust light-atom quantum interfaces , and the potential for simulating quantum many-body physics . the advances of the last decade are reviewed , covering both theoretical and experimental aspects of rydberg-mediated quantum information processing . story_separator_special_tag a light , compact optical isolator using an atomic vapor in the hyperfine paschen-back regime is presented . absolute transmission spectra for experiment and theory through an isotopically pure 87rb vapor cell show excellent agreement for fields of 0.6 t . we show π/4 rotation for a linearly polarized beam in the vicinity of the d2 line and achieve an isolation of 30 db with a transmission > 95 % . story_separator_special_tag the electro-optic effect , where the refractive index of a medium is modified by an electric field , is of central importance in nonlinear optics , laser technology , quantum optics and optical communications .
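to put a number on this , a hedged pockels-effect estimate , Δn ≈ ( 1/2 ) n^3 r e , with textbook lithium-niobate values ( n_e ≈ 2.2 , r33 ≈ 31 pm/v ) and an assumed field of 1 v/μm ; the tiny result illustrates the weakness of conventional coefficients noted in the next sentence :

# linear ( pockels ) electro-optic index shift , delta_n ~ 0.5 * n^3 * r * e
n = 2.2          # extraordinary index of lithium niobate ( textbook value )
r33 = 31e-12     # electro-optic coefficient , m/v ( textbook value )
e = 1e6          # applied field , v/m , i.e. 1 v across 1 um ( assumed )

delta_n = 0.5 * n**3 * r33 * e
print(f"delta_n ~ {delta_n:.1e}")  # ~1.6e-4 even at a strong field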
in general , electro-optic coefficients are very weak and a medium with a giant electro-optic coefficient could have profound implications for precision electrometry and nonlinear optics at the single-photon level . here we propose and demonstrate a giant d.c. electro-optic effect on the basis of polarizable ( rydberg ) dark states . when a medium is prepared in a dark state consisting of a superposition of ground and rydberg energy levels , it becomes transparent and acquires a refractive index that is dependent on the energy of the highly polarizable rydberg state . we demonstrate phase modulation of the light field in the rydberg-dark-state medium and measure an electro-optic coefficient that is more than six orders of magnitude larger than in usual kerr media . coupling of the rydberg states of an ensemble of rubidium atoms gives rise to a d.c. kerr effect that is six orders of magnitude greater than in conventional kerr media . such phenomena could enable the development of high-precision electric story_separator_special_tag a complete physical approach to quantum information requires a robust interface among flying qubits , long-lifetime memory , and computational qubits . here we present a unified interface for microwave and optical photons , potentially connecting engineerable quantum devices such as superconducting qubits at long distances through optical photons . our approach uses an ultracold ensemble of atoms for two purposes : quantum memory and to transduce excitations between the two frequency domains . using coherent control techniques , we examine an approach for converting and storing quantum information between microwave photons in superconducting resonators , ensembles of ultracold atoms , and optical photons , as well as a method for transferring information between two resonators . story_separator_special_tag deterministic quantum information processing will require hybrid quantum systems like an interface between microwave and optical photons . we propose a scheme for efficient , multimode and coherent microwave-optical conversion based on frequency mixing in rydberg atoms . story_separator_special_tag we show that cold rydberg gases enable an efficient six-wave mixing process where terahertz or microwave fields are coherently converted into optical fields and vice versa . this process is made possible by the long lifetime of rydberg states , the strong coupling of millimeter waves to rydberg transitions and by a quantum interference effect related to electromagnetically induced transparency ( eit ) . our frequency conversion scheme applies to a broad spectrum of millimeter waves due to the abundance of transitions within the rydberg manifold , and we discuss two possible implementations based on focussed terahertz beams and millimeter wave fields confined by a waveguide , respectively . we analyse a realistic example for the interconversion of terahertz and optical fields in rubidium atoms and find that the conversion efficiency can in principle exceed 90 % . story_separator_special_tag interfacing superconducting qubits with optical photons requires noise-free microwave-to-optical transducers , a technology currently not realized at the single-photon level . we propose to use four-wave mixing in an ensemble of cold ytterbium ( yb ) atoms prepared in the metastable `` clock '' state . the parametric process uses two high-lying rydberg states for bidirectional conversion between a 10 ghz microwave photon and an optical photon in the telecommunication e-band .
to avoid noise photons due to spontaneous emission , we consider continuous operation far detuned from the intermediate states . we use an input-output formalism to predict conversion efficiencies of ≈ 50 % with bandwidths of ≈ 100 khz . story_separator_special_tag placing an ensemble of 10^ { 6 } ultracold atoms in the near field of a superconducting coplanar waveguide resonator with a quality factor q ~ 10^ { 6 } , one can achieve strong coupling between a single microwave photon in the coplanar waveguide resonator and a collective hyperfine qubit state in the ensemble with g_eff/2π ~ 40 khz , larger than the cavity linewidth of κ/2π ~ 7 khz . integrated on an atomchip , such a system constitutes a hybrid quantum device , which also can be used to interconnect solid-state and atomic qubits , study and control atomic motion via the microwave field , observe microwave superradiance , build an integrated micromaser , or even cool the resonator field via the atoms . story_separator_special_tag we demonstrate microwave-to-optical conversion using six-wave mixing in cold 87rb atoms where the microwave field couples to two rydberg states and propagates collinearly with the converted optical field . our experiment is performed with a free-space microwave field , and we achieve a conversion efficiency of about 5 % for the microwave photons entering the conversion medium . in addition , we theoretically investigate all-resonant six-wave mixing and outline a realistic experimental scheme for reaching an efficiency close to 70 % . story_separator_special_tag we describe a scheme to coherently convert a microwave photon of a superconducting co-planar waveguide resonator to an optical photon emitted into a well-defined temporal and spatial mode . the conversion is realized by a cold atomic ensemble trapped close to the surface of the superconducting atom chip , near the antinode of the microwave cavity . the microwave photon couples to a strong rydberg transition of the atoms that are also driven by a pair of laser fields with appropriate frequencies and wavevectors for an efficient wave-mixing process . with only several thousand atoms in an ensemble of moderate density , the microwave photon can be completely converted into an optical photon emitted with high probability into the phase-matched direction and , e.g. , fed into a fiber waveguide . this scheme operates in a free-space configuration , without requiring strong coupling of the atoms to a resonant optical cavity . story_separator_special_tag an optomechanical system that converts microwaves to optical frequency light and vice versa is demonstrated . the technique achieves a conversion efficiency of approximately 10 % . the results indicate that the device could work at the quantum level , up- and down-converting individual photons , if it were cooled to millikelvin temperatures . it could , therefore , form an integral part of quantum-processor networks . story_separator_special_tag a nanomechanical interface between optical photons and microwave electrical signals is now demonstrated .
coherent transfer between microwave and optical fields is achieved by parametric electro-optical coupling in a piezoelectric optomechanical crystal , and this on-chip technology could form the basis of photonic networks of superconducting quantum bits . story_separator_special_tag we present an overview of experimental work to embed high-q mesoscopic mechanical oscillators in microwave and optical cavities . based upon recent progress , the prospect for a broad field of `` cavity quantum mechanics '' is very real . these systems introduce mesoscopic mechanical oscillators as a new quantum resource and also inherently couple their motion to photons throughout the electromagnetic spectrum . story_separator_special_tag we describe schemes for transferring quantum states between light fields and the motion of a trapped atom . coupling between the motion and the light is achieved via raman transitions driven by a laser field and the quantized field of a high-finesse microscopic cavity mode . by cascading two such systems and tailoring laser field pulses , we show that it is possible to transfer an arbitrary motional state of one atom to a second atom at a spatially distant site . story_separator_special_tag in this paper , we describe a general optomechanical system for converting photons to phonons in an efficient and reversible manner . we analyze classically and quantum mechanically the conversion process and proceed to a more concrete description of a phonon-photon translator ( ppt ) formed from coupled photonic and phononic crystal planar circuits . the application of the ppt to rf-microwave photonics and circuit qed , including proposals utilizing this system for optical wavelength conversion , long-lived quantum memory and state transfer from optical to superconducting qubits , is considered . story_separator_special_tag we review the field of cavity optomechanics , which explores the interaction between electromagnetic radiation and nano- or micromechanical motion . this review covers the basics of optical cavities and mechanical resonators , their mutual optomechanical interaction mediated by the radiation pressure force , the large variety of experimental systems which exhibit this interaction , optical measurements of mechanical motion , dynamical backaction amplification and cooling , nonlinear dynamics , multimode optomechanics , and proposals for future cavity quantum optomechanics experiments . in addition , we describe the perspectives for fundamental quantum physics and for possible applications of optomechanical devices . story_separator_special_tag we propose a scheme for transferring quantum states from the propagating light fields to macroscopic , collective vibrational degree of freedom of a massive mirror by exploiting radiation pressure effects . this scheme may prepare an einstein-podolsky-rosen state in position and momentum of a pair of distantly separated movable mirrors by utilizing the entangled light fields produced from a nondegenerate optical parametric amplifier . story_separator_special_tag an optomechanical interface that converts quantum states between optical fields with distinct wavelengths is proposed . a mechanical mode couples to two optical modes via radiation pressure and mediates the quantum state mapping between the two optical modes . a sequence of optomechanical $\pi/2$ pulses enables state-swapping between optical and mechanical states , as well as the cooling of the mechanical mode .
theoretical analysis shows that high-fidelity conversion can be realized for states with small photon numbers in systems with experimentally achievable parameters . the pulsed conversion process also makes it possible to maintain high conversion fidelity at elevated bath temperatures . story_separator_special_tag we report the experimental demonstration of storing optical information as a mechanical excitation in a silica optomechanical resonator . we use writing and readout laser pulses tuned to one mechanical frequency below an optical cavity resonance to control the coupling between the mechanical displacement and the optical field at the cavity resonance . the writing pulse maps a signal pulse at the cavity resonance to a mechanical excitation . the readout pulse later converts the mechanical excitation back to an optical pulse . the storage lifetime is determined by the relatively long damping time of the mechanical excitation . story_separator_special_tag we revisit the problem of using a mechanical resonator to perform the transfer of a quantum state between two electromagnetic cavities ( e.g . , optical and microwave ) . we show that this system possesses an effective mechanically dark mode which is immune to mechanical dissipation ; utilizing this feature allows highly efficient transfer of intracavity states , as well as of itinerant photon states . we provide simple analytic expressions for the fidelity for transferring both gaussian and non-gaussian states . story_separator_special_tag optomechanical systems with strong coupling can be a powerful medium for quantum state engineering of the cavity modes . here , we show that quantum state conversion between cavity modes of distinctively different wavelengths can be realized with high fidelity by adiabatically varying the effective optomechanical couplings . the conversion fidelity for gaussian states is derived by solving the langevin equation in the adiabatic limit . meanwhile , we also show that traveling photon pulses can be transmitted between different input and output channels with high fidelity and the output pulse can be engineered via the optomechanical couplings . story_separator_special_tag microfabricated superconducting circuit elements can harness the power of quantum behaviour for information processing . unlike classical information bits , quantum information bits ( qubits ) can form superpositions or mixture states of on and off , offering a faster , natural form of parallel processing . previously , direct qubit-qubit coupling has been achieved for up to four qubits , but now two independent groups demonstrate the next crucial step : communication and exchange of quantum information between two superconducting qubits via a quantum bus , in the form of a resonant cavity formed by a superconducting transmission line a few millimetres long . using this microwave cavity it is possible to store , transfer and exchange quantum information between two quantum bits . it can also perform multiplexed qubit readout . this basic architecture lends itself to expansion , offering the possibility for the coherent interaction of many superconducting qubits . the cover illustrates a zig-zag-shaped resonant cavity or quantum bus linking two superconducting phase qubits . one of two papers that demonstrate the communication of individual quantum states between superconducting qubits via a quantum bus .
this quantum bus is a resonant cavity formed by a story_separator_special_tag we describe the principles of design , fabrication , and operation of a piezoelectric optomechanical crystal with which we demonstrate bi-directional conversion of energy between microwave and optical frequencies . the optomechanical crystal has an optical mode at 1523 nm co-located with a mechanical breathing mode at 3.8 ghz , with a measured optomechanical coupling strength $g_{\mathrm{om}}/2\pi$ of 115 khz . the breathing mode is driven and detected by curved interdigitated transducers that couple to a lamb mode in suspended membranes on either end of the optomechanical crystal , allowing the external piezoelectric modulation of the optical signal as well as the converse , the detection of microwave electrical signals generated by a modulated optical signal . we compare measurements to theory where appropriate . story_separator_special_tag we report on the development of a new class of widely tunable resonant single-sideband electro-optical modulators based on interaction of different mode families of a crystalline whispering gallery mode resonator with an externally applied rf field . the tunability comes from the different response of mode families to either the temperature change or the voltage applied to the resonator . story_separator_special_tag the efficiency of the frequency conversion process at the heart of raman heterodyne spectroscopy was improved by nearly four orders of magnitude by resonant enhancement of both the pump and signal optical fields . our results using an erbium-doped y$_2$sio$_5$ crystal at temperatures near 4 k suggest that such an approach is promising for the quantum conversion of microwave to optical photons . story_separator_special_tag most investigations of rare-earth ions in solids for quantum information have used crystals where the rare-earth ion is a dopant . here , we analyze the conversion of quantum information from microwave photons to optical frequencies using crystals where the rare-earth ions , rather than being dopants , are part of the host crystal . these concentrated crystals are attractive for frequency conversion because of their large ion densities and small linewidths . we show that conversion with both high efficiency and large bandwidth is possible in these crystals . in fact , the collective coupling between the rare-earth ions and the optical and microwave cavities is large enough that the limitation on the bandwidth of the devices will instead be the spacing between magnon modes in the crystal . story_separator_special_tag a transducer capable of converting quantum information stored as microwaves into telecom-wavelength signals is a critical piece of future quantum technology as it promises to enable the networking of quantum processors . cavity optomechanical devices that are simultaneously coupled to microwave fields and optical resonances are being pursued in this regard . yet even in the classical regime , developing optical modulators based on cavity optomechanics could provide lower power or higher bandwidth alternatives to current technology . here we demonstrate a magnetically-mediated wavelength conversion technique , based on mixing high frequency tones with an optomechanical torsional resonator . this process can act either as an optical phase or amplitude modulator depending on the experimental configuration , and the carrier modulation is always coherent with the input tone .
such coherence allows classical information transduction and transmission via the technique of phase-shift keying . we demonstrate that we can encode up to eight bins of information , corresponding to three bits , simultaneously and demonstrate the transmission of a 52,500-pixel image over 6 km of optical fiber with just 0.67 % error . furthermore , we show that magneto-optomechanical transduction can be described in a fully quantum manner , story_separator_special_tag in this work , we propose a concept of a microwave to optical photon converter for applications in quantum information ( qi ) that is based on travelling magnons in a thin magnetic film . the converter employs an epitaxially grown bi-substituted yttrium iron garnet ( bi-yig ) film as the medium for propagation of travelling magnons ( spin waves ) . the conversion is achieved through coupling of magnons to guided optical modes of the film . we evaluate the conversion efficiency for this device theoretically . our prediction is that it will be larger by at least four orders of magnitude than experimentally obtained in a similar process exploiting a uniform magnetization precession mode in a yig sphere . by creating an optical resonator of a large length from the film ( such that the traveling magnon decays before forming a standing wave over the resonator length ) one will be able to further increase the efficiency by several orders of magnitude , potentially reaching a value similar to that achieved with opto-mechanical resonators . an important advantage of the suggested concept of the qi devices based on travelling spin waves is a perfectly planar geometry compatible with story_separator_special_tag conversion between signals in the microwave and optical domains is of great interest both for classical telecommunication , as well as for connecting future superconducting quantum computers into a global quantum network . for quantum applications , the conversion has to be both efficient , as well as operate in a regime of minimal added classical noise . while efficient conversion has been demonstrated using mechanical transducers , they have so far all operated with a substantial thermal noise background . here , we overcome this limitation and demonstrate coherent conversion between ghz microwave signals and the optical telecom band with a thermal background of less than one phonon . we use an integrated , on-chip electro-opto-mechanical device that couples surface acoustic waves driven by a resonant microwave signal to an optomechanical crystal featuring a 2.7 ghz mechanical mode . we initialize the mechanical mode in its quantum ground state , which allows us to perform the transduction process with minimal added thermal noise , while maintaining an optomechanical cooperativity > 1 , so that microwave photons mapped into the mechanical resonator are effectively upconverted to the optical domain . we further verify the preservation of the coherence of the microwave
in this paper , an imaging system simulation tool is presented . with the tool , it is possible to simulate the performance ( quality ) of an imaging system . furthermore , the system allows optimization of the lens system for a given image sensor . experiments have shown that the tool is useful in actual lens design . story_separator_special_tag understanding signal and noise quantities in any practical computational imaging system is critical . knowledge of the imaging environment , optical parameters , and detector sensitivity determine the signal quantities but often noise quantities are assumed to be independent of the signal and either uniform or gaussian additive . these simplistic noise models do not accurately model actual detectors . accurate noise models are needed in order to design optimal systems . we describe a noise model for a modern aps cmos detector and a number of noise sources that we will be measuring . a method for characterizing the noise sources given a set of dark images and a set of flat field images is outlined . the noise characterization data is then used to simulate dark images and flat field images . the simulated data is a very good match to the real data thus validating the model and characterization procedure . story_separator_special_tag the class of cameras that are based on ionization sensors , which includes the most common charge-coupled device ( ccd ) and vidicon cameras , is examined . camera signals are shown to be corrupted by direction-dependent stationary electronic noise sources and fluctuations due to the statistical nature of the sensing process . the authors develop and test a model of the inherent noises in cameras . these results are confirmed by measurement , and they suggest a locally stationary model of noise for adaptive signal processing . story_separator_special_tag the sampling error of a shack-hartmann wavefront sensor with variable subaperture pixels is analysed under the consideration of various threshold values and detecting dynamic ranges . a generalized expression , which is used for fitting the sampling error of a shack-hartmann wavefront sensor with variable subaperture pixels , is presented . the computational results of the sampling error of a shack-hartmann wavefront sensor with different pixel numbers per subaperture , different detecting dynamic ranges , different atmospheric coherence length , different extended degree of the object and the different threshold values are also given . the results indicate that the sampling error of the shack-hartmann wavefront sensor is sensitive to the dynamic range of the subaperture , the pixel numbers per subaperture , the extended degree of the object and the coherent length of atmosphere , but not sensitive to the threshold value . story_separator_special_tag the image systems evaluation toolkit ( iset ) is an integrated suite of software routines that simulate the capture and processing of visual scenes . iset includes a graphical user interface ( gui ) for users to control the physical characteristics of the scene and many parameters of the optics , sensor electronics and image-processing pipeline . iset also includes color tools and metrics based on international standards ( chromaticity coordinates , cielab and others ) that assist the engineer in evaluating the color accuracy and quality of the rendered image .
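the simulation tools described above share a common core : a physical model of how photon counts become digital numbers , with signal-dependent shot noise and signal-independent electronic noise added at the appropriate stages . the sketch below illustrates that core in python ; it is a minimal illustration , not the iset implementation , and every parameter value ( quantum efficiency , dark current , read noise , prnu , full well , conversion gain , bit depth ) is an assumed placeholder .

# minimal sketch of a pixel-level sensor noise model; illustrative values only,
# not the parameters of any specific sensor or of the iset toolbox
import numpy as np

rng = np.random.default_rng(0)

def simulate_sensor(photons, qe=0.5, dark_e=10.0, read_noise_e=5.0,
                    prnu_sigma=0.01, full_well=20000.0, gain_e_per_dn=4.0,
                    bits=12):
    """map a mean photon image (2-d array) to raw digital numbers."""
    # fixed-pattern gain map: one multiplicative deviation per pixel (prnu)
    prnu = 1.0 + prnu_sigma * rng.standard_normal(photons.shape)

    # shot noise: photo-electrons and dark electrons are both poisson processes
    lam = np.maximum(photons * qe * prnu, 0.0)
    signal_e = rng.poisson(lam).astype(float)
    dark_e_sample = rng.poisson(dark_e, size=photons.shape).astype(float)

    # read noise: additive gaussian contribution from the readout chain
    electrons = signal_e + dark_e_sample \
        + read_noise_e * rng.standard_normal(photons.shape)

    # full-well clipping and quantization by the adc
    electrons = np.clip(electrons, 0.0, full_well)
    return np.clip(np.round(electrons / gain_e_per_dn), 0, 2**bits - 1)

# flat field at 1000 mean photons/pixel: variance is shot-noise dominated
flat = simulate_sensor(np.full((256, 256), 1000.0))
print(flat.mean(), flat.std())

a flat field simulated this way reproduces the qualitative behavior the abstracts describe : at low signal the output variance is dominated by read noise and dark current , while at high signal it grows linearly with the mean , the signature of shot noise .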
story_separator_special_tag in this work , a semi-analytical model , based on a thorough analysis of experimental data , is developed for photoresponse estimation of a photodiode-based cmos active pixel sensor ( aps ) . the model covers the substrate diffusion effect together with the influence of the photodiode active-area geometrical shape and size . it describes the pixel response dependence on integration photocarriers and conversion gain and demonstrates that the tradeoff between these two conflicting factors gives an optimum geometry enabling extraction of maximum photoresponse . the parameter dependence on the process and design data and the degree of accuracy for the photoresponse modeling are discussed . comparison of the derived expression with the measurement results obtained from a 256 x 256 cmos aps image sensor fabricated via hp in a standard 0.5-µm cmos process exhibits excellent agreement . the simplicity and the accuracy of the model make it a suitable candidate for implementation in photoresponse simulation of cmos photodiode arrays . story_separator_special_tag the thermally induced charge ( dark current ) mechanism in ccd 's gives rise to a poisson distribution of random charge values in each pixel over the device . in the case of low radiant flux and/or quantum efficiency coupled with long integration times this may produce a large number of pixels with values significantly above or below the expected 'average ' values . such pixels in isolation usually pose no significant problem , but may be subject to misinterpretation if randomly aggregated ( clustered ) . in many cases this is of little concern since large quantities of data are captured for subsequent analysis and these random occurrences will be recognized as such . but in the case where cost or complexity mandate 'one-shot ' data capture , the question of how often such occurrences may be expected is altogether reasonable . a probabilistic model of such clustering is developed and several scenarios evaluated . story_separator_special_tag in this paper we present methods for characterizing ccd cameras . interesting properties are linearity of photometric response , signal-to-noise ratio , sensitivity , dark current , and spatial frequency response . the techniques to characterize ccd cameras are carefully designed to assist one in selecting a camera to solve a certain problem . the methods described were applied to a variety of cameras : an astromed te3/a with p86000 chip , a photometrics cc200 series with thompson chip th7882 , a photometrics cc200 series with kodak chip kaf1400 , a xillix micro imager 1400 with kodak chip kaf1400 , an hcs mxr ccd with a philips chip and a sony xc-77rrce . story_separator_special_tag the poisson and normal probability distributions poorly match the dark current histogram of a typical image sensor . the histogram has only positive values , and is positively skewed ( with a long tail ) . the normal distribution is symmetric ( and possesses negative values ) , while the poisson distribution is discrete . image sensor characterization and simulation would benefit from a different distribution function , which matches the experimental observations better . dark current fixed pattern noise is caused by discrete randomly-distributed charge generation centers .
if these centers shared a common charge-generation rate , and were distributed uniformly , the poisson distribution would result . the fact that it does not indicates that the generation rates vary , a spatially non-uniform amplification is applied to the centers , or that the spatial distribution of centers is non-uniform . monte carlo simulations have been used to examine these hypotheses . the log-normal , gamma and inverse gamma distributions have been evaluated as empirical models for characterization and simulation . these models can accurately match the histograms of specific image sensors . they can also be used to synthesize the dark current images required in the development of story_separator_special_tag we describe a method for simulating the output of an image sensor to a broad array of test targets . the method uses a modest set of sensor calibration measurements to define the sensor parameters ; these parameters are used by an integrated suite of matlab software routines that simulate the sensor and create output images . we compare the simulations of specific targets to measured data for several different imaging sensors with very different imaging properties . the simulation captures the essential features of the images created by these different sensors . finally , we show that by specifying the sensor properties the simulations can predict sensor performance to natural scenes that are difficult to measure with a laboratory apparatus , such as natural scenes with high dynamic range or low light levels . story_separator_special_tag this paper describes the design and performance of an image capture simulator . the general model underlying the simulator assumes that the image capture device contains multiple classes of sensors with different spectral sensitivities and that each sensor responds in a known way to irradiance over most of its operating range . the input to the simulator is a set of narrow-band images of the scene taken with a custom-designed hyperspectral camera system . the parameters for the simulator are the number of sensor classes , the sensor spectral sensitivities , the noise statistics and number of quantization levels for each sensor class , the spatial arrangement of the sensors and the exposure duration . the output of the simulator is the raw image data that would have been acquired by the simulated image capture device . to test the simulator , we acquired images of the same scene both with the hyperspectral camera and with a calibrated kodak dcs-200 digital color camera . we used the simulator to predict the dcs-200 output from the hyperspectral data . the agreement between simulated and acquired images validated the image capture response model and our simulator implementation . we believe the simulator story_separator_special_tag as cmos technology scales , the effect of 1/f noise on low frequency analog circuits such as cmos image sensors becomes more pronounced , and therefore must be more accurately estimated . analysis of 1/f noise is typically performed in the frequency domain even though the process is nonstationary . to find out if the frequency domain analysis produces acceptable results , the paper introduces a time domain method based on a nonstationary extension of a recently developed , and generally agreed upon physical model for 1/f noise in mos transistors . the time domain method is used to analyze the effect of 1/f noise due to pixel level transistors in a cmos aps . 
the results show that the frequency domain results can be quite inaccurate especially in estimating the 1/f noise effect of the reset transistor . it is also shown that cds does not in general reduce the effect of the 1/f noise . story_separator_special_tag radiometry and photometry . solid state arrays . array performance . camera performance . crt-based displays . sampling theory . linear system theory . system mtf . image quality . minimum resolvable contrast . story_separator_special_tag in this paper the lateral photoresponse and crosstalk ( ctk ) in complementary metal-oxide-semiconductor ( cmos ) photodiodes is investigated by means of a unique sub-micron scanning system ( s-cube system ) and numerical device simulation . an improved semi-analytical model developed for photoresponse estimation of a photodiode-based cmos active pixel sensor reveals the photosignal and the ctk dependence on the pixels geometrical shape and arrangement within the array . the trends that promise to increase cmos image sensor performance are presented and design tradeoffs intended to optimize the photoresponse and minimize ctk are discussed . story_separator_special_tag for pt.i see ibid. , vol.50 , no.5 , p.1233-38 ( 2003 ) . in part i of this paper , an improved one-dimensional ( 1-d ) analysis and a semiempirical model of quantum efficiency for cmos photodiode was illustrated . in this part of the paper , the lateral photoresponse in cmos photodiode arrays is investigated with test linear photodiode arrays and numerical device simulations . it is shown that the surface recombination and mobility degradation along the si-sio$_2$ interface are important factors in determining the lateral photoresponse of cmos photodiodes . the limitations of traditional analytical approaches are briefly discussed in this context , and a novel three-dimensional ( 3-d ) analysis of lateral photoresponse is presented . given the significant dependence of lateral photoresponse on the si-sio$_2$ interface quality , an empirical characterization method is proposed as a more reliable solution to modeling lateral photoresponse . story_separator_special_tag temporal noise sets the fundamental limit on image sensor performance , especially under low illumination and in video applications . in a ccd image sensor , temporal noise is primarily due to the photodetector shot noise and the output amplifier thermal and 1/f noise . cmos image sensors suffer from higher noise than ccds due to the additional pixel and column amplifier transistor thermal and 1/f noise . noise analysis is further complicated by the time-varying circuit models , the fact that the reset transistor operates in subthreshold during reset , and the nonlinearity of the charge to voltage conversion , which is becoming more pronounced as cmos technology scales . the paper presents a detailed and rigorous analysis of temporal noise due to thermal and shot noise sources in cmos active pixel sensor ( aps ) that takes into consideration these complicating factors . performing time-domain analysis , instead of the more traditional frequency-domain analysis , we find that the reset noise power due to thermal noise is at most half of its commonly quoted kt/c value . this result is corroborated by several published experimental data including data presented in this paper . the lower reset noise , story_separator_special_tag fixed pattern noise ( fpn ) for a ccd sensor is modeled as a sample of a spatial white noise process .
this model is , however , not adequate for characterizing fpn in cmos sensors , since the readout circuitry of cmos sensors and ccds are very different . the paper presents a model for cmos fpn as the sum of two components : a column and a pixel component . each component is modeled by a first order isotropic autoregressive random process , and each component is assumed to be uncorrelated with the other . the parameters of the processes characterize each component of the fpn and the correlations between neighboring pixels and neighboring columns for a batch of sensors . we show how to estimate the model parameters from a set of measurements , and report estimates for 64 x 64 passive pixel sensor ( pps ) and active pixel sensor ( aps ) test structures implemented in a 0.35 micron cmos process . high spatial correlations between pixel components were measured for the pps structures , and between the story_separator_special_tag cmos imagers can possess higher levels of imager noise than their predecessors , ccds . this noise can be of the form of temporal variation and fixed pattern . the fixed pattern component of this noise can be removed , which is known already in the art . the invention in this disclosure is that proper correction can be developed for all imager conditions ( imager integration time and imager temperature ) using a single fpn ( fixed pattern noise ) dark map , a single fpn prnu ( pixel response nonuniformity ) map , imager integration time and imager temperature . without this invention , a dark frame capture and a flat field capture ( integrating sphere ) are required before every image capture , a practical impossibility in typical picture taking . further , the estimates of both fpn maps ( dark and prnu ) in this invention are improved estimates relative to such captured directly preceding image capture since such have been formed with multiple frame averaging at calibration time , thus removing any temporal noise from these map estimates . these dark fpn and prnu fpn maps are modified by a scaling and biasing functional story_separator_special_tag the standard method for measuring qe for a ccd sensor is not adequate for cmos aps since it does not take into consideration the random offset , gain variations , and nonlinearity introduced by the aps readout circuits . the paper presents a new method to accurately estimate qe of an aps . instead of varying illumination as in the ccd method , illumination is kept constant and the pixel output is continuously observed - sampling at regular intervals . this makes it possible to eliminate random offset . the experiment is repeated multiple times to obtain good estimates of the pixel output mean and variance at each sample time . the sensor response is approximated by a piecewise linear function and , using the poisson statistics of shot noise , gain , charge and read noise are estimated for each line segment . this procedure is repeated at no illumination so that dark charge may be estimated and subtracted from the total charge estimates . the method can also be used to estimate readout noise and gain fpn . results from 64 x 64 pixel aps test structures implemented in a 0.35 micrometers cmos process are reported .
using 6 different story_separator_special_tag accurate modeling of image noise is important in understanding the relative contributions of multiple-noise mechanisms in the sensing , readout , and reconstruction phases of image formation . there is a lack of high-level image-sensor system modeling tools that enable engineers to see realistic visual effects of noise and change-specific design or process parameters to quickly see the resulting effects on image quality . this paper reports a comprehensive tool , written in matlab , for modeling noise in cmos image sensors and showing the effect in images . the tool uses accepted theoretical/empirical noise models with parameters from measured process-data distributions . output images from the tool are used to demonstrate the effectiveness of this approach in determining the effects of various noise sources on image quality story_separator_special_tag this study presents a comprehensive measurement of ccd digital-video camera noise . knowledge of noise detail within images or video streams allows for the development of more sophisticated algorithms for separating true image content from the noise generated in an image sensor . the robustness and performance of an image-processing algorithm is fundamentally limited by sensor noise . the individual noise sources present in ccd sensors are well understood , but there has been little literature on the development of a complete noise model for ccd digital-video cameras , incorporating the effects of quantization and demosaicing . story_separator_special_tag this paper presents a technique to identify and measure the prominent sources of sensor noise in commercially available charge-coupled device ( ccd ) video cameras by analysis of the output images . noise fundamentally limits the distinguishable content in an image and can significantly reduce the robustness of an image processing application . although sources of image sensor noise are well documented , there has been little work on the development of techniques to identify and quantify the types of noise present in ccd video-camera images . a comprehensive noise model for ccd cameras was used to evaluate the technique on a commercially available ccd video camera . story_separator_special_tag this survey includes a description of all types of two-dimensional image sensors in current use in television . television is loosely defined as the acquisition , transmission , and display of moving pictures by electronic means . the technology of image acquisition is greatly complicated by the requirement in many parts of the industry that the images be in natural color . some imagers include the color analysis means as an inherent part of their makeup . however , many color cameras use two or more monochrome sensors and a substantial peripheral system of optical , control , and signal-processing functions . both of these major classes of color imaging systems are described . an outline is included of major categories of camera users , their equipment , and numbers . finally , a tabulation is presented , giving the physical and electrooptic properties of a variety of cameras . story_separator_special_tag changes in measured image irradiance have many physical causes and are the primary cue for several visual processes , such as edge detection and shape from shading .
using physical models for charge-coupled device ( ccd ) video cameras and material reflectance , we quantify the variation in digitized pixel values that is due to sensor noise and scene variation . this analysis forms the basis of algorithms for camera characterization and calibration and for scene description . specifically , algorithms are developed for estimating the parameters of camera noise and for calibrating a camera to remove the effects of fixed pattern nonuniformity and spatial variation in dark current . while these techniques have many potential uses , we describe in particular how they can be used to estimate a measure of scene variation . this measure is independent of image irradiance and can be used to identify a surface from a single sensor band over a range of situations . experimental results confirm that the models presented in this paper are useful for modeling the different sources of variation in real images obtained from video cameras . story_separator_special_tag numerical simulators for adaptive optics systems have become an essential tool for the research and development of the future advanced astronomical instruments . however , growing software code of the numerical simulator makes it difficult to continue to support the code itself . the problem of adequate documentation of the astronomical software for adaptive optics simulators may complicate the development since the documentation must contain up-to-date schemes and mathematical descriptions implemented in the software code . although most modern programming environments like matlab or octave have in-built documentation abilities , they are often insufficient for the description of a typical adaptive optics simulator code . this paper describes a general cross-platform framework for the documentation of scientific software using open-source tools such as latex , mercurial , doxygen , and perl . using the perl script that translates matlab m-file comments into c-like comments , one can use doxygen to generate and update the documentation for the scientific source code . the documentation generated by this framework contains the current code description with mathematical formulas , images , and bibliographical references . a detailed description of the framework components is presented as well as the guidelines for the framework deployment .
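several of the characterization methods above ( the aps qe estimation procedure , the ccd camera characterization , and the noise-parameter estimation ) rest on the same mean-variance idea : for a shot-noise-limited sensor the temporal variance of the output is linear in its mean , so the slope of that line gives the conversion gain and the intercept gives the read noise . a minimal sketch in python , with synthetic flat fields standing in for real measurements ( every number is made up ) :

# minimal sketch of photon-transfer (mean-variance) estimation of conversion
# gain and read noise; synthetic flat fields replace real captures
import numpy as np

rng = np.random.default_rng(1)
gain_true, read_noise_true = 0.25, 2.0   # dn per electron, electrons rms (assumed)

means, variances = [], []
for mean_electrons in [200, 500, 1000, 2000, 5000]:
    # a pair of flat fields at each level; differencing removes fixed pattern noise
    a = gain_true * (rng.poisson(mean_electrons, 10000)
                     + read_noise_true * rng.standard_normal(10000))
    b = gain_true * (rng.poisson(mean_electrons, 10000)
                     + read_noise_true * rng.standard_normal(10000))
    means.append((a.mean() + b.mean()) / 2.0)
    variances.append((a - b).var() / 2.0)   # temporal variance only

# var(dn) = gain * mean(dn) + (gain * read_noise)^2  =>  straight-line fit
slope, intercept = np.polyfit(means, variances, 1)
print("gain [dn/e-]:", slope)
print("read noise [e- rms]:", np.sqrt(max(intercept, 0.0)) / slope)

differencing two flat fields at the same exposure , as done here , removes the fixed pattern component so that only temporal noise enters the variance , which is the same reason the methods above either subtract dark frames or observe a single pixel repeatedly .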
story_separator_special_tag chapter 1 : introduction 1.1 statistics : the science of data 1.2 fundamental elements of statistics 1.3 types of data 1.4 the role of statistics in critical thinking 1.5 a guide to statistical methods presented in this text statistics in action : contamination of fish in the tennessee river collecting the data chapter 2 : descriptive statistics 2.1 graphical and numerical methods for describing qualitative data 2.2 graphical methods for describing quantitative data 2.3 numerical methods for describing quantitative data 2.4 measures of central tendency 2.5 measures of variation 2.6 measures of relative standing 2.7 methods for detecting outliers 2.8 distorting the truth with descriptive statistics statistics in action : characteristics of contaminated fish in the tennessee river chapter 3 : probability 3.1 the role of probability in statistics 3.2 events , sample spaces , and probability 3.3 compound events 3.4 complementary events 3.5 conditional probability 3.6 probability rules for unions and intersections 3.7 bayes ' rule ( optional ) 3.8 some counting rules 3.9 probability and statistics : an example 3.10 random sampling statistics in action : assessing predictors of software defects chapter 4 : discrete random variables 4.1 discrete random variables 4.2 the probability distribution for a discrete random story_separator_special_tag this letter describes a modified gaussian approximation to a poisson distribution which , unlike the usual gaussian approximation , gives good agreement on the tails of the distribution . it is therefore useful in error-rate calculations where the usual gaussian approximation often is not . story_separator_special_tag photoelectronic image acquisition components ( x-ray sensors , optics , and video tubes ) are examined for their ability to reproduce the x-ray image obtained in diagnostic radiology . essential characteristics in terms of efficiency , signal , noise , dynamic range , contrast resolution , spatial resolution , and speed are examined for components and systems suited to clinical procedures . analysis for performance is managed on the basis of calculations using photons , flux , and flux rate with their counterparts in electron numbers for charge distribution and current flow . this avoids the difficulty of working with mixed units . static imaging for the chest , abdomen , and bone as well as dynamic imaging using intravenous angiography are discussed for their individual requirements . story_separator_special_tag the charge-coupled device dominates an ever-increasing variety of scientific imaging and spectroscopy applications . recent experience indicates , however , that the full potential of ccd performance lies well beyond that realized in devices currently available . test data suggest that major improvements are feasible in spectral response , charge collection , charge transfer , and readout noise . these properties , their measurement in existing ccds , and their potential for future improvement are discussed in this paper . story_separator_special_tag photo-response non-uniformity ( prnu ) of digital sensors was recently proposed [ 1 ] as a unique identification fingerprint for digital cameras . the prnu extracted from a specific image can be used to link it to the digital camera that took the image . because digital camcorders use the same imaging sensors , in this paper , we extend this technique for identification of digital camcorders from video clips .
we also investigate the problem of determining whether two video clips came from the same camcorder and the problem of whether two differently transcoded versions of one movie came from the same camcorder . the identification technique is a joint estimation and detection procedure consisting of two steps : ( 1 ) estimation of prnus from video clips using the maximum likelihood estimator and ( 2 ) detecting the presence of prnu using normalized cross-correlation . we anticipate this technology to be an essential tool for fighting piracy of motion pictures . experimental results demonstrate the reliability and generality of our approach . story_separator_special_tag this paper presents a qualification methodology on imaging sensors . in addition to overall chip reliability characterization based on sensor 's overall figure of merit , such as dark rate , linearity , dark current non-uniformity , fixed pattern noise and photon response non-uniformity , a simulation technique is proposed and used to project pixel reliability . the projected pixel reliability is directly related to imaging quality and provides additional sensor reliability information and performance control . story_separator_special_tag lithium-ion transport in cathodes , anodes , solid electrolytes , and through their interfaces plays a crucial role in the electrochemical performance of solid-state lithium-ion batteries . direct visualization of the lithium-ion dynamics at the nanoscale provides valuable insight for understanding the fundamental ion behaviour in batteries . here , we report the dynamic changes of lithium-ion movement in a solid-state battery under charge and discharge reactions by time-resolved operando electron energy-loss spectroscopy with scanning transmission electron microscopy . applying image denoising and super-resolution via sparse coding drastically improves the temporal and spatial resolution of lithium imaging . dynamic observation reveals that the lithium ions in the lithium cobaltite cathode are extracted in a complex manner , diffusing through the lithium cobaltite domain boundaries during charging . even in the open-circuit state , they move inside the cathode . operando electron energy-loss spectroscopy with sparse coding is a promising combination to visualize the ion dynamics and clarify the fundamentals of solid-state electrochemistry . understanding lithium ion dynamics holds the key to unlocking better battery materials and devices . here , by combining electron energy-loss spectroscopy and machine learning , the authors reveal how lithium is extracted from licoo2 cathode used in a solid-state story_separator_special_tag in this paper , we present an extensive study of leakage current mechanisms in diodes to model the dark current of various pixel architectures for active pixel cmos image sensors . dedicated test structures made in 0.35-µm cmos have been investigated to determine the various contributions to the leakage current . three pixel variants with different photodiodes - n+/p-well , n+/n-well/p-substrate and p+/n-well/p-substrate - are described . we found that the main part of the total dark current comes from the depletion of the photodiode edge at the surface . furthermore , the source of the reset transistor contributes significantly to the total leakage current of a pixel .
from the investigation of reverse current-voltage ( i-v ) characteristics , temperature dependencies of leakage current , and device simulations we found that for a wide depletion , such as n-well/p-well , thermal shockley-read-hall generation is the main leakage mechanism , while for a junction with higher dopant concentrations , such as n+/p-well or p+/n-well , tunneling and impact ionization are the dominant mechanisms . story_separator_special_tag the statistics of the recombination of holes and electrons in semiconductors is analyzed on the basis of a model in which the recombination occurs through the mechanism of trapping . a trap is assumed to have an energy level in the energy gap so that its charge may have either of two values differing by one electronic charge . the dependence of lifetime of injected carriers upon initial conductivity and upon injected carrier density is discussed . story_separator_special_tag a stacked cmos-active pixel sensor ( aps ) with a newly devised pixel structure for charged particle detection has been developed . at low operation temperatures ( < 200 k ) , the dark current of the cmos-aps is determined by the hot carrier effect . a twin well cmos pixel with a p-mos readout and n-mos reset circuit achieves leakage current as low as $5 \times 10^{-8}$ v/s at the pixel electrode under liquid nitrogen temperature of 77 k. the total read noise floor of 0.1 mv rms at the pixel electrode was obtained by nondestructive readout correlated double sampling ( cds ) with the cds interval of 21 s . story_separator_special_tag we present data for dark current of a back-illuminated ccd over the temperature range of 222 to 291 k. using an arrhenius law , we found that the analysis of the data leads to the relation between the prefactor and the apparent activation energy as described by the meyer-neldel rule . however , a more detailed analysis shows that the activation energy for the dark current changes in the temperature range investigated . this transition can be explained by the larger relative importance at high temperatures of the diffusion dark current and at low temperatures by the depletion dark current . the diffusion dark current , characterized by the band gap of silicon , is uniform for all pixels . at low temperatures , the depletion dark current , characterized by half the band gap , prevails , but it varies for different pixels . dark current spikes are pronounced at low temperatures and can be explained by large concentrations of deep level impurities in those particular pixels . we show that fitting the data with the impurity concentration as the only variable can explain the dark current characteristics of all the pixels on the chip . story_separator_special_tag 1. energy band theory . 2. theory of electrical conduction . 3. generation/recombination phenomena . 4. the pn junction diode . 5. metal-semiconductor contacts . 6. jfet and mesfet . 7. the mos transistor . 8. the bipolar transistor . 9. heterojunction devices . 10. quantum-effect devices . 11. semiconductor processing . story_separator_special_tag we present the results of a systematic study of the dark current in each pixel of a charge-coupled device chip . it was found that the arrhenius plot , at temperatures between 222 and 291 k , deviated from a linear behavior in the form of continuous bending .
however , as a first approximation , the dark current , d , can be expressed as $d = d_0 \exp ( -e/kt )$ , where e is the activation energy , k is boltzmann 's constant , and t the absolute temperature . it was found that e and the exponential prefactor $d_0$ follow the meyer-neldel rule ( mnr ) for all of the more than 222,000 investigated pixels . the isokinetic temperature , $t_0$ , for the process was found as 294 k. however , measurements at 313 k did not show the predicted inversion in the dark current . it was found that the dark current for different pixels merged at temperatures higher than $t_0$ . a model is presented which explains the nonlinearity and the merging of the dark current for different pixels with increasing temperature . possible implications of this finding re . story_separator_special_tag plasma doping ( plad ) was applied to reduce the dark current of cmos image sensor ( cis ) , for the first time . plad was employed around shallow trench isolation ( sti ) to screen the defective sidewalls and edges of sti from the depletion region of photodiode . this technique can provide not only shallow but also conformal doping around the sti , making it a suitable doping technique for pinning purposes for ciss with sub-2-µm pixel pitch . the measured results show that temporal noise and dark signal deviation as well as dark level decrease story_separator_special_tag recent developments of backside treatment for the backside-illuminated scientific charge-coupled device ( ccd ) imagers have shown near-theoretical efficiency even at the short wavelength region of the spectrum . by using a scanning electron microscope ( sem ) , we report here , for the first time , performance comparisons of backside-treated and untreated ccds to an electron flux varying from 1 to 100 pa and beam energy ranging from less than 1 kev up to 20 kev . we describe the theoretical analysis , the sem testing procedure , and the quantum efficiency measurement results . it is shown , for example , that the average quantum efficiency increases from less than 1 % for an untreated ccd to nearly 40 % for a backside-treated ccd at a beam energy of 1 kev . story_separator_special_tag the fixed pattern noise reduction methods , surrounding channel stop structure and the hole accumulation operation , are proposed and evaluated for the 2/3-in two-million pixel stack-ccd hdtv imager . the surrounding channel stop structure is surrounded by the channel stop region to suppress the fluctuation of the mean dark current from the si-sio$_2$ interface and the depletion layer of the p-n junction . the measured fixed pattern noise ( fpn ) and signal-to-noise ( s/n ) ratio are improved from 45 electrons down to 19 electrons and from 49 db up to 54 db under the condition of f/8 and 2000 lux at 333 k , respectively . therefore , the 2/3-in two-million pixel hdtv handy-type color camera with high s/n ratio and low fpn can be obtained . story_separator_special_tag quantization in dark current generation has been observed for the first time through the use of a virtual-phase charge-coupled device . two sites for bulk silicon dark current have been identified with capture cross sections of $1.8 \times 10^{-15}$ cm$^2$ and $5.4 \times 10^{-16}$ cm$^2$ , and concentrations of $1.3 \times 10^{9}$ cm$^{-3}$ and $1.5 \times 10^{8}$ cm$^{-3}$ , respectively .
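the arrhenius analysis quoted above reduces to a linear fit : taking the logarithm of $d = d_0 \exp ( -e/kt )$ gives $\ln d = \ln d_0 - e/(kt)$ , so plotting $\ln d$ against $1/(kt)$ yields the activation energy from the slope and the prefactor from the intercept , and the meyer-neldel rule is the observed correlation between those two fitted quantities across pixels . a minimal sketch in python , with fabricated dark-current values standing in for measurements :

# minimal sketch: fit activation energy e and prefactor d0 from dark current
# measured at several temperatures, assuming d = d0 * exp(-e / (k * t));
# the data points below are fabricated for illustration
import numpy as np

k_ev = 8.617e-5                                    # boltzmann constant, ev/k
t = np.array([222.0, 240.0, 260.0, 291.0])         # temperature, kelvin
d = np.array([0.02, 0.11, 0.65, 6.0])              # dark current, e-/pixel/s (made up)

# ln d = ln d0 - e * (1 / (k * t))  =>  linear in 1/(k*t)
slope, intercept = np.polyfit(1.0 / (k_ev * t), np.log(d), 1)
e_act, d0 = -slope, np.exp(intercept)
print("activation energy [ev]:", e_act)
print("prefactor d0 [e-/s]:", d0)

an activation energy near the silicon band gap ( about 1.1 ev ) points to diffusion dark current , while a value near half the gap points to depletion-region generation , which is the diagnostic used in the ccd studies above .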
story_separator_special_tag the study of dark current in ccd 's is difficult because of the complexity of the process and because generation can come from a variety of sources . for scaled devices , generation from channel-stop sidewalls is particularly important , since the sidewall scales as a perimeter . relatively little attention has been paid in the past to this source of generation . in this paper , analytical techniques are described for profiling interface states along channel-stop sidewalls . these techniques rely on changes in generation current caused by expansion of the surface depletion region . two methods of determining the extent of surface depletion are discussed . the first relies heavily on two-dimensional modeling , while the second uses experimental measurements of interelectrode capacitance to avoid certain limitations associated with model parameters . these techniques are used to show that the generation-current density peaks strongly in the birdsbeak region of locos isolation , resulting in a significant contribution to dark current during ccd operation , even for channel-stop spacings as far apart as 12 µm . these techniques are also used to compare the behavior of different regions of the oxide interface of ccd imagers as a function of story_separator_special_tag existing experimental data on the bulk conductivity of ta$_2$o$_5$ and sio films are shown to be consistent with the schottky effect rather than the poole-frenkel effect . a discussion of the physical properties of vacuum-deposited insulators has led to a simple model in which the insulator is proposed to contain neutral traps and donor centers . this model is shown to resolve the above-mentioned `` anomalous '' poole-frenkel effect . other simple models are discussed , but they do not exhibit the anomalous poole-frenkel effect . story_separator_special_tag raman spectroscopy has been widely used to characterize the physical properties of two-dimensional materials ( 2dms ) . the signal-to-noise ratio ( snr or s/n ratio ) of raman signal usually serves as an important indicator to evaluate the instrumental performance rather than raman intensity itself . multichannel detectors with outstanding sensitivity , rapid acquisition speed and low noise level have been widely equipped in raman instruments for the measurement of raman signal . in this mini-review , we first introduce the recent advances of raman spectroscopy of 2dms . then we take the most commonly used ccd detector and ingaas array detector as examples to overview the various noise sources in raman measurements and analyze their potential influences on snr of raman signal in experiments . this overview can contribute to a better understanding on the snr of raman signal and the performance of multichannel detector for numerous researchers and instrumental design for industry , as well as offer practical strategies for improving spectral quality in routine measurement . story_separator_special_tag this updated and expanded version of the very successful first edition offers new chapters on controlling the emission from electronic systems , especially digital systems , and on low-cost techniques for providing electromagnetic compatibility ( emc ) for consumer products sold in a competitive market . there is also a new chapter on the susceptibility of electronic systems to electrostatic discharge .
there is more material on fcc regulations , digital circuit noise and layout , and digital circuit radiation . virtually all the material in the first edition has been retained . contains a new appendix on fcc emc test procedures . story_separator_special_tag preface digital still cameras at a glance kenji toyoda what is a digital still camera ? history of digital still cameras variations of digital still cameras basic structure of digital still cameras applications of digital still cameras optics in digital still cameras takeshi koyama optical system fundamentals and standards for evaluating optical performance characteristics of dsc imaging optics important aspects of imaging optics design for dscs dsc imaging lens zoom types and their applications conclusion references basics of image sensors junichi nakamura functions of an image sensor photodetector in a pixel noise photoconversion characteristics array performance optical format and pixel size ccd image sensor vs. cmos image sensor references ccd image sensors tetsuo yamada basics of ccds structures and characteristics of ccd image sensor dsc applications future prospects references cmos image sensors isao takayanagi introduction to cmos image sensors cmos active pixel technology signal processing and noise behavior cmos image sensors for dsc applications future prospects of cmos image sensors for dsc applications references evaluation of image sensors toyokazu mizoguchi what is evaluation of image sensors ? evaluation environment evaluation methods color theory and its application to digital still cameras po-chieh hung color theory camera spectral sensitivity characterization of a story_separator_special_tag analysis of 1/f noise in mosfet circuits is typically performed in the frequency domain using the standard stationary 1/f noise model . recent experimental results , however , have shown that the estimates using this model can be quite inaccurate especially for switched circuits . in the case of a periodically switched transistor , measured 1/f noise power spectral density ( psd ) was shown to be significantly lower than the estimate using the standard 1/f noise model . for a ring oscillator , measured 1/f-induced phase noise psd was shown to be significantly lower than the estimate using the standard 1/f noise model . for a source follower reset circuit , measured 1/f noise power was also shown to be lower than the estimate using the standard 1/f model . in analyzing noise in the follower reset circuit using frequency-domain analysis , a low cutoff frequency that is inversely proportional to the circuit on-time is assumed . the choice of this low cutoff frequency is quite arbitrary and can cause significant inaccuracy in estimating noise power . moreover , during reset , the circuit is not in steady state , and thus frequency-domain analysis does not apply . this story_separator_special_tag in this paper , we present a performance summary of cmos imager pixels from 5.2 µm to 4.2 µm using 0.18 µm imager design rules , then to 3.2 µm using 0.15 µm imager design rules . these pixels support 1.3-megapixel , 2.0-megapixel , and 3.1-megapixel cmos image sensors for digital still camera ( dsc ) applications at 3.3 v , respectively .
story_separator_special_tag in this paper , we present a performance summary of cmos imager pixels from 5.2 µm to 4.2 µm using 0.18 µm imager design rules , then to 3.2 µm using 0.15 µm imager design rules . these pixels support 1.3-megapixel , 2.0-megapixel , and 3.1-megapixel cmos image sensors for digital still camera ( dsc ) applications at 3.3 v , respectively . the 4tc pixels are all based on technology shrinks of micron 's 2p3m imager process , and each of the technology nodes reports excellent cmos imager low-noise , high-sensitivity , low-lag , and low-light performance , matching that of state-of-the-art charge-coupled device ( ccd ) imagers . we have put a model in place to provide the predictive performance of smaller pixels , and then use that model to discuss performance expectations down to 2.0 µm pixels . with the combination of imager design rules , pixel architecture , and process technology tailored for cmos imagers , we see no fundamental reason that cmos imagers should not be able to continue matching ccd performance as pixel sizes shrink . story_separator_special_tag image sensors for digital cameras are built with ever decreasing pixel sizes . the size of the pixels seems to be limited by technology only . however , there is also a hard theoretical limit for classical video camera systems : during a certain exposure time only a certain number of photons will reach the sensor . the resulting shot noise thus limits the signal-to-noise ratio . in this letter we show that current sensors are already surprisingly close to this limit . story_separator_special_tag the low-frequency noise power spectrum of small dimension mosfets is dominated by lorentzians arising from random telegraph signals ( rts ) . the low-frequency noise is observed to decrease when the devices are periodically switched ` off ' . the technique of determining the statistical lifetimes and amplitudes of the rts by fitting the signal level histogram of the time-domain record to two-gaussian histograms has been reported in the literature . this procedure is then used for analysing the ` noisy ' rts along with the device background noise , which turned out to be 1/f noise . the 1/f noise of the device can then be separated from the rts using this procedure . in this work , rts observed in mosfets under both constant and switched bias conditions have been investigated in the time domain ; further , the 1/f noise in both the constant and the switched bias conditions is investigated . story_separator_special_tag we study finite-size scaling of the roughness of signals in systems displaying gaussian 1/f power spectra . it is found that one of the extreme value distributions , the fisher-tippett-gumbel ( ftg ) distribution , emerges as the scaling function when boundary conditions are periodic . we provide a realistic example of periodic 1/f noise , and demonstrate by simulations that the ftg distribution is a good approximation for the case of nonperiodic boundary conditions as well . experiments on voltage fluctuations in gaas films are analyzed and excellent agreement is found with the theory . story_separator_special_tag the random telegraph signal ( rts ) behavior of the dark current has been studied in a radiation-hardened cmos active pixel sensor ( aps ) . several devices have been irradiated with protons of different energies and up to different fluences . the influence of the proton energy , fluence , and operating temperature on the amplitude , time constants , and occurrence of the rts is investigated . mechanisms for this behavior are discussed and several suggestions are made for possible defect types . story_separator_special_tag it has been shown that proton-induced defects in charge-coupled devices ( ccds ) can demonstrate the classic phenomenology of random telegraph signals ( rtss ) .
these fluctuations take the form of rtss with well-defined amplitudes and time constants for the high and low dark current states . the time constants are strongly temperature-activated and the evidence suggests the presence of a bistable defect whose structural reconfigurations cause changes in the dark current . though an important noise source for room-temperature systems , the rts pulse width is increased on cooling and the effect is not likely to be important below approximately -20 degrees c. annealing of the rts defect was found to occur at approximately 100 degrees c . story_separator_special_tag pixel reset noise sets the fundamental detection limit on photodiode based cmos image sensors . reset noise in the standard active pixel sensor ( aps ) is well understood and is of order kT/C . in this paper we present a new technique for resetting photodiodes , called active reset , which reduces reset noise without adding lag . active reset can be applied to standard aps . active reset uses bandlimiting and capacitive feedback to reduce reset noise . this paper discusses the operation of an active reset pixel , and presents an analysis of lag and noise . measured results from a 6-transistor-per-pixel 0.35 µm cmos implementation are presented . measured results show that reset noise can be reduced to less than kT/18C using active reset . we find that theory , simulation and measured results all match closely . story_separator_special_tag a monolithic active pixel sensor ( maps ) for charged particle tracking based on a novel detector structure was proposed , simulated , fabricated and tested . the detector designed according to this idea is inseparable from the readout electronics , since both of them are integrated onto the same low-resistivity silicon wafer , standard for a cmos process . the individual pixel is comprised of only 3 mos transistors and a photodiode collecting the charge created in a thin undepleted epitaxial layer . this approach makes the whole detector surface sensitive to radiation ( 100 % fill factor ) with reduced pixel pitch ( very high spatial resolution ) . this yields a low cost , high resolution and thin detecting device . detailed device simulations using an ise-tcad package have been carried out in order to study the charge collection mechanism and to validate the proposed idea . consequently , two prototype chips have been fabricated using 0.6 µm and 0.35 µm cmos processes . special radiation tolerant layout techniques were used in the second chip design . both chips were tested and fully characterised . the pixel conversion gain was calibrated using 5.9 kev x-rays from a 55fe source . story_separator_special_tag recent advancements in cmos image sensor technology are reviewed , including both passive pixel sensors and active pixel sensors . on-chip analog to digital converters and on-chip timing and control circuits permit realization of an electronic camera-on-a-chip .
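the kT/C figure quoted in the active-reset abstract above is easy to put into numbers . a small sketch ; the 5 ff sense-node capacitance is an assumed , typical value , not one from the paper :

```python
import numpy as np
from scipy.constants import k, e  # boltzmann constant, elementary charge

def ktc_noise_electrons(C, T=300.0):
    """rms reset noise in electrons for capacitance C at temperature T."""
    return np.sqrt(k * T * C) / e

C = 5e-15                             # assumed 5 fF photodiode capacitance
n_hard = ktc_noise_electrons(C)       # conventional hard reset: kT/C
n_active = n_hard / np.sqrt(18.0)     # active-reset claim: kT/18C
print(f"hard reset        : {n_hard:.1f} e- rms")
print(f"active reset kT/18C: {n_active:.1f} e- rms")
```

with these assumptions a plain kT/C reset sits near 28 e- rms , and the kT/18C active-reset figure brings it down to roughly 7 e- rms , a factor of sqrt ( 18 ) .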
story_separator_special_tag a family of cmos-based active pixel image sensors ( apss ) that are inherently compatible with the integration of on-chip signal processing circuitry is reported . the image sensors were fabricated using commercially available 2-µm cmos processes and both p-well and n-well implementations were explored . the arrays feature random access , 5-v operation and transistor-transistor logic ( ttl ) compatible control signals . methods of on-chip suppression of fixed pattern noise to less than 0.1 % saturation are demonstrated . the baseline design achieved a pixel size of 40 µm × 40 µm with 26 % fill-factor . array sizes of 28 × 28 elements and 128 × 128 elements have been fabricated and characterized . typical output conversion gain is 3.7 µV/e- for the p-well devices and 6.5 µV/e- for the n-well devices . input referred read noise of 28 e- rms , corresponding to a dynamic range of 76 db , was achieved . characterization of various photogate pixel designs and a photodiode design is reported . photoresponse variations for different pixel designs are discussed . story_separator_special_tag the correlated double sampling ( cds ) signal processing method used in processing of video signals from ccd image sensors is theoretically analyzed . cds signal processing is frequently used to remove noise , which is generated by the reset operation of the floating diffusion charge detection node , from the signal . the derived formulas for the noise power spectral density provide an invaluable insight into the choice of the circuit parameters affecting the noise spectrum . the obtained results are useful for determining the optimum cutoff frequency of the low-pass filter which precedes the sample-and-hold circuit and for finding the optimum size of the input transistor in the first amplifier stage . once the optimum parameters are determined it is possible to find the minimum electron equivalent noise and the maximum signal-to-noise ratio achievable with this signal processing method . the validity of the derived theoretical results is confirmed by making comparisons with the experimental data . story_separator_special_tag the characterization of surface channel charge-coupled device line imagers with front-surface imaging , interline transfer , and 2-phase stepped oxide , silicon-gate ccd registers is presented . the analysis , design , and evaluation of 1 × 64 ccd line arrays are described in terms of their performance at low light levels . the authors describe the responsivity , resolution , spectral , and noise measurements on silicon-gate ccd sensors and ccd interline shift-registers . the influence of transfer inefficiency and electrical fat-zero insertion on resolution and noise is described at low light levels . story_separator_special_tag in this paper , a new correlated double sampling ( cds ) technique based on fixed voltage difference ( fvd ) is introduced . compared with the traditional cds technique with voltage sampling for a/d conversion , this method has the advantage of low voltage capability , which relieves the high resolution requirement of the subsequent a/d converter as a result of the limited voltage swing in advanced deep-submicron cmos technologies . the new technique also allows the use of reference voltages to control the dynamic range of the circuit . the fvd cds technique has been applied to the readout circuit of low voltage cmos active pixel sensor ( aps ) circuits with an array size of 128 × 128 , fabricated in a 0.25 µm cmos process from tsmc . the circuit is proven to be functional at extremely low vdd with added dynamic range .
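the noise-cancelling idea behind correlated double sampling is compact enough to simulate directly : the reset sample and the signal sample share the same frozen kT/C noise , so their difference removes it and leaves only the uncorrelated read noise . a minimal monte-carlo sketch with made-up noise levels :

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
signal = 400.0   # electrons, assumed
ktc = 28.0       # rms reset noise in electrons, assumed
read = 5.0       # rms read noise per sample in electrons, assumed

reset_noise = rng.normal(0, ktc, n)            # frozen at reset, common to both samples
s_reset = reset_noise + rng.normal(0, read, n)
s_video = signal + reset_noise + rng.normal(0, read, n)

cds = s_video - s_reset                        # correlated double sampling
print("raw sample noise :", s_video.std())     # ~ sqrt(ktc^2 + read^2) ~ 28.4 e-
print("cds noise        :", cds.std())         # ~ read * sqrt(2) ~ 7.1 e-
```

the reset ( kT/C ) component cancels exactly , at the cost of a sqrt ( 2 ) penalty on the uncorrelated read noise , which is why the choice of low-pass cutoff ahead of the sampler , discussed above , matters .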
story_separator_special_tag complementary metal oxide semiconductor ( cmos ) image sensors are more compatible than charge-coupled devices ( ccds ) for lab-on-a-chip platforms due to their inherent advantages . however , without noise reduction circuits , cmos technology would not be able to compete with ccds . today , correlated double sampling ( cds ) circuits are used in all cmos imagers in order to remove the reset noise and the fixed pattern noise . however , these circuits immensely decrease the fill factor of image sensors because of their large area and their requirement of extra circuitry to convert their single-ended outputs to differential outputs . in this paper , we propose a cds architecture convenient for cmos imagers that uses a switched capacitor fully differential configuration , which reduces the noise in the same way as conventional cds architectures while decreasing the area and increasing the fill factor . story_separator_special_tag the performance of a color cmos photogate image sensor is reported . it is shown that by using two levels of correlated double sampling it is possible to effectively cancel all fixed-pattern noise due to read-out circuit mismatch . instead the fixed-pattern noise performance of the sensor is limited by dark current nonuniformity at low signal levels , and conversion gain nonuniformity at high signal levels . it is further shown that the imaging performance of the sensor is comparable to low-end ccd sensors but inferior to that reported for high-end ccd sensors due to low quantum efficiency , high dark current , and pixel cross-talk . as such the performance of cmos sensors is limited at the device level rather than at the architectural level . if the imaging performance issues can be addressed at the fabrication process level without increasing cost or degrading transistor performance , cmos has the potential to seriously challenge ccd as the solid-state imaging technology of choice due to low power dissipation and compatibility with camera system integration . story_separator_special_tag this advanced text and reference covers the design and implementation of integrated circuits for analog-to-digital and digital-to-analog conversion . it begins with basic concepts and systematically leads the reader to advanced topics , describing design issues and techniques at both circuit and system level . gain a system-level perspective of data conversion units and their trade-offs with this state-of-the-art book . topics covered include : sampling circuits and architectures , d/a and a/d architectures ; comparator and op amp design ; calibration techniques ; testing and characterization ; and more ! story_separator_special_tag this book highlights various theoretical developments on the logistic distribution , illustrates the practical utility of these results , and describes univariate and multivariate generalizations of the distribution . it is useful for researchers , practicing statisticians , and graduate students . story_separator_special_tag 1. introduction 2. properties of the inverse gaussian distribution 3. genesis 4. certain useful transformations and characterizations 5. sampling and estimation of parameters 6. significance tests 7. bayesian inference 8. regression analysis 9. life testing and reliability 10. applications 11. additional topics story_separator_special_tag lloyds bank has its main root in a substantial private bank founded in birmingham nearly two centuries ago ; one hundred years ago this bank still had only the one office in birmingham , with a related private banking house in lombard street . but by amalgamation it has absorbed scores of other eighteenth and nineteenth century banks , both private and joint-stock , and at least two of the former reach back into restoration london , perhaps cromwellian london .
although the records of these historic businesses have been gravely impaired , especially by bombing in 1940-41 , the bank still possesses rich material for the economic and social historian . this material has been at the disposal of professor r. s. sayers and , though the gaps in the records have prevented him from presenting a full and systematic history of the bank , he presents in this book a picture of english banking development as illustrated in the records of lloyds bank . story_separator_special_tag re-invented in the early 1990s on both sides of the atlantic , monolithic active pixel sensors ( maps ) in a cmos technology have slowly invaded the world of consumer imaging and are now on the edge of becoming the leading technology in this field , previously dominated by charge-coupled devices ( ccds ) . thanks to the advantages brought by the use of standard cmos technology , maps have great potential in many areas , including function integration , leading to the concept of a camera-on-a-chip , small pixel size , random access to selected regions-of-interest , low power , higher speed and radiation resistance . in many ways , maps have introduced a new way of doing imaging . despite their success in the consumer arena , maps are still to make a definitive impact in the world of scientific imaging . this paper first briefly reviews the way radiation is detected by a cmos sensor , before analysing the main noise source and its relationship with the full well capacity and the dynamic range . this paper will also show first examples of scientific results , obtained in the detection of low-energy electrons . story_separator_special_tag the dynamic range of an image sensor is often not wide enough to capture scenes with both high lights and dark shadows . a 640 × 512 image sensor with nyquist rate pixel level adc implemented in a 0.35 µm cmos technology shows how a pixel level adc enables flexible , efficient implementation of multiple sampling . since pixel values are available to the adcs at all times , the number and timing of the samples as well as the number of bits obtained from each sample can be freely selected without the long readout time of aps . typically , hundreds of nanoseconds of settling time per row are required for aps readout . in contrast , using pixel level adc , digital data is read out at fast sram speeds . this demonstrates another fundamental advantage of pixel level adc : the ability to programmably widen dynamic range with no loss in snr . story_separator_special_tag charge-coupled device ( ccd ) based star trackers provide reliable attitude estimation onboard most 3-axis stabilized spacecraft . the spacecraft attitude is calculated based on observed positions of stars , which are located and identified in a ccd image of the sky . a new photon sensitive imaging array , the active pixel sensor ( aps ) , has emerged as a potential replacement for ccds . the aps chips utilize existing complementary metal oxide semiconductor ( cmos ) production facilities , and the technology has several advantages over ccd technology . these include : lower power consumption , higher dynamic range , higher blooming threshold , individual pixel readout , single 3.3 or 5 volt operation , and the capability to integrate on-chip timing , control , windowing , analog to digital ( a/d ) conversion and centroiding operations . however , because the photosensitivity of an aps pixel is non-homogeneous , its suitability as a star tracker imager has been unknown .
this paper reports test results of a 256 × 256-pixel aps chip for star tracker applications . using photon transfer curves , a system read-out noise of 7 electrons has been determined under laboratory conditions . story_separator_special_tag we present an analysis of dark current from a complementary metal-oxide-semiconductor ( cmos ) active pixel sensor with global shutter . the presence of two sources of dark current , one within the collection area of the pixel and another within the sense node , presents complications for correction of the dark current . the two sources are shown to generate unique and characteristic dark current behavior with respect to varying exposure time , temperature , and/or frame rate . in particular , a pixel with storage time in the sense node will show a dark current dependence on frame rate and the appearance of being a `` stuck pixel '' with values independent of exposure time . on the other hand , a pixel with an impurity located within the collection area will show no frame rate dependence , but rather a linear dependence on exposure time . a method of computing dark frames based on past dark current behavior of the sensor is presented and shown to intrinsically compensate for the two different and unique sources .
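the two dark-current sources above suggest a simple separable model : per-pixel dark signal = a * exposure time + b * sense-node storage time , with the coefficients fit from calibration frames . a sketch of that fit ; all array sizes , rates and settings are illustrative , not values from the paper :

```python
import numpy as np

rng = np.random.default_rng(1)

# calibration conditions: (exposure time, sense-node storage time) in seconds
conditions = np.array([[0.1, 0.03], [0.5, 0.03], [1.0, 0.03],
                       [0.1, 0.30], [1.0, 0.30]])

# toy 4-pixel sensor with per-pixel rates a (collection area) and b (sense node)
true_a = np.array([10.0, 12.0, 11.0, 300.0])   # e-/s; last pixel is a "hot" one
true_b = np.array([40.0, 38.0, 42.0, 39.0])    # e-/s in the sense node
darks = (conditions[:, :1] * true_a + conditions[:, 1:] * true_b
         + rng.normal(0, 0.5, (5, 4)))         # simulated calibration frames

# per-pixel least-squares fit of [a, b] from the calibration frames
coef, *_ = np.linalg.lstsq(conditions, darks, rcond=None)
a_fit, b_fit = coef

def dark_frame(t_exp, t_store):
    """synthesize a dark frame for an arbitrary new setting."""
    return a_fit * t_exp + b_fit * t_store

print(dark_frame(0.25, 0.10))
```

a pixel dominated by the sense-node term looks `` stuck '' when only exposure time is varied , exactly the behaviour described above ; fitting both terms lets a single dark model cover both populations .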
olap systems support data analysis through a multidimensional data model , according to which data facts are viewed as points in a space of application-related `` dimensions '' , organized into levels which conform to a hierarchy . the usual assumption is that the data points reflect the dynamic aspect of the data warehouse , while dimensions are relatively static . however , in practice , dimension updates are often necessary to adapt the multidimensional database to changing requirements . structural updates can also take place , like the addition of categories or the modification of the hierarchical structure . when these updates are performed , the materialized aggregate views that are typically stored in olap systems must be efficiently maintained . these updates are poorly supported ( or not supported at all ) in current commercial systems , and have received little attention in the research literature . we present a formal model of dimension updates in a multidimensional model , a collection of primitive operators to perform them , and a study of the effect of these updates on a class of materialized views , giving an algorithm to efficiently maintain them . story_separator_special_tag database systems offering a multidimensional schema on a logical level ( e.g . olap systems ) are often used in data warehouse environments . the user requirements in these dynamic application areas are subject to frequent changes . this implies frequent structural changes of the database schema . in this paper , we present a formal framework to describe evolutions of multidimensional schemas and their effects on the schema and on the instances . the framework is based on a formal conceptual description of a multidimensional schema and a corresponding schema evolution algebra . thus , the approach is independent of the actual implementation ( e.g . molap or rolap ) . we also describe how the algebra enables a tool-supported environment for schema evolution . story_separator_special_tag a data warehouse ( dw ) is fed with data that come from external data sources that are production systems . external data sources , which are usually autonomous , often change not only their content but also their structure . the evolution of external data sources has to be reflected in a dw that uses the sources . traditional dw systems offer limited support for handling dynamics in their structure and content . a promising approach to handling changes in dw structure and content is based on a multiversion data warehouse . in such a dw , each dw version describes a schema and data at a certain period of time , or for a given business scenario created for simulation purposes . in order to appropriately analyze multiversion data , an extension to the traditional sql language is required . in this paper we propose an approach to querying a multiversion dw . to this end , we extended the sql language and built a multiversion query language interface with functionality that allows : ( 1 ) expressing queries that address several dw versions and ( 2 ) presenting their results annotated with metadata information . story_separator_special_tag a data warehouse ( dw ) provides information for analytical processing , decision making , and data mining tools . on the one hand , the structure and content of a data warehouse reflect a real world , i.e . data stored in a dw come from real production systems . on the other hand , a dw and its tools may be used for predicting trends and simulating virtual business scenarios .
this activity is often called what-if analysis . traditional dw systems have a static structure of their schemas and relationships between data , and therefore they are not able to support any dynamics in their structure and content . for these purposes , multiversion data warehouses seem to be very promising . in this paper we present a concept and an ongoing implementation of a multiversion data warehouse that is capable of handling changes in the structure of its schema as well as simulating alternative business scenarios . story_separator_special_tag we consider a variant of the view maintenance problem : how does one keep a materialized view up-to-date when the view definition itself changes ? can one do better than recomputing the view from the base relations ? traditional view maintenance tries to maintain the materialized view in response to modifications to the base relations ; we try to `` adapt '' the view in response to changes in the view definition . such techniques are needed for applications where the user can change queries dynamically and see the changes in the results fast . data archaeology , data visualization , and dynamic queries are examples of such applications . we consider all possible redefinitions of sql select-from-where-groupby , union , and except views , and show how these views can be adapted using the old materialization for the cases where it is possible to do so . we identify extra information that can be kept with a materialization to facilitate redefinition . multiple simultaneous changes to a view can be handled without necessarily materializing intermediate results . we identify guidelines for users and database administrators that can be used to facilitate efficient view adaptation . story_separator_special_tag while current view technology assumes that information systems ( iss ) do not change their schemas , our evolvable view environment ( eve ) project addresses this problem by evolving the view definitions affected by is schema changes , which we call view synchronization . in eve , the view synchronizer rewrites the view definitions by replacing view components with suitable components from other iss . however , after such a view redefinition process , the view extents , if materialized , must also be brought up to date . in this paper , we propose strategies to address this incremental adaptation of the view extent after view synchronization . one key idea of our approach is to regard the complex changes done to a view definition after synchronization as an atomic unit ; another is to exploit knowledge of how the view definition was synchronized , especially the containment information between the old and new views . our techniques would successfully adapt views under the unavailability of base relations , while currently known maintenance strategies from the literature would fail . story_separator_special_tag the construction and maintenance of data warehouses ( views ) in large-scale environments composed of numerous distributed information sources ( iss ) such as the www has received great attention recently . such environments are plagued with continuously changing information because iss tend to continuously evolve by modifying not only their content but also their query capabilities and interface , and by joining or leaving the environment at any time . in this paper , we outline our position on issues related to the challenging new problem of how to adapt views in such evolving environments .
we first present a taxonomy of view adaptation problems by describing the dimensions along which view adaptation problems can be classified . based on this taxonomy , we identify a new view adaptation problem for view evolution in the context of iss capability changes , which we call view synchronization . we also outline the evolvable view environment ( eve ) that we propose as a framework for solving the view synchronization problem , along with our decisions concerning some of the key design issues surrounding eve . we would also like to thank our industrial sponsors , in particular , ibm and informix . story_separator_special_tag supporting independent iss and integrating them in distributed data warehouses ( materialized views ) is becoming more important with the growth of the www . however , views defined over autonomous iss are susceptible to schema changes . in the eve project we are developing techniques to support the maintenance of data warehouses defined over distributed dynamic iss [ 6 , 7 , 8 ] . the eve system is the first to allow views to survive schema changes of their underlying iss while also adapting to changing data in those sources . eve achieves this in two steps : applying view query rewriting algorithms that exploit information about alternative iss and the information they contain , and incrementally adapting the view extent to the view definition changes . those processes are referred to as view synchronization and view adaptation , respectively . they increase the survivability of materialized views in changing environments and reduce the necessity of human interaction in system maintenance . rundensteiner would like to thank our industrial sponsors , in particular , ibm for the ibm partnership award and for the ibm corporate fellowship for one of her graduate students . story_separator_special_tag when integrating heterogeneous information resources , it is often the case that the source is rather limited in the kinds of queries it can answer . if a query is asked of the entire system , we have a new kind of optimization problem , in which we must try to express the given query in terms of the limited query templates that this source can answer . for the case of conjunctive queries , we show how to decide with a nondeterministic polynomial-time algorithm whether the given query can be answered . we then extend our results to allow arithmetic comparisons in the given query and in the templates .
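a toy flavour of the capability-matching question above : a source advertises parameterized query templates , and we ask whether a user query 's bound attributes and requested outputs fit some template . this is a drastic simplification of the conjunctive-query case treated above ( which needs the nondeterministic polynomial-time machinery ) ; the attribute names and templates are made up :

```python
# each template: attributes the source *requires* bindings for,
# and attributes it is able to return.
TEMPLATES = [
    {"requires": {"author"}, "returns": {"title", "year", "author"}},
    {"requires": {"title", "year"}, "returns": {"price"}},
]

def answerable(bound, wanted):
    """can some template answer a query binding `bound` and asking for `wanted`?"""
    return any(t["requires"] <= bound and wanted <= t["returns"]
               for t in TEMPLATES)

print(answerable({"author"}, {"title", "year"}))   # True: first template fits
print(answerable({"year"}, {"title"}))             # False: no template fits
```

the real problem also allows composing several template calls and adding arithmetic comparisons , which is where the hardness comes from .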
story_separator_special_tag we provide a principled extension of sql , called schemasql , that offers the capability of uniform manipulation of data and meta-data in relational multi-database systems . we develop a precise syntax and semantics of schemasql in a manner that extends traditional sql syntax and semantics , and demonstrate the following . ( 1 ) schemasql retains the flavour of sql while supporting querying of both data and meta-data . ( 2 ) it can be used to represent data in a database in a structure substantially different from the original database , in which data and meta-data may be interchanged . ( 3 ) it also permits the creation of views whose schema is dynamically dependent on the contents of the input instance . ( 4 ) while aggregation in sql is restricted to values occurring in one column at a time , schemasql permits horizontal aggregation and even aggregation over more general blocks of information . ( 5 ) schemasql provides a great facility for interoperability and data/meta-data management in relational multi-database systems . we provide many examples to illustrate our claims . we outline an architecture for the implementation of schemasql and discuss implementation algorithms . story_separator_special_tag data warehouses are complex systems consisting of many components which store highly aggregated data for decision support . due to the role of data warehouses in the daily business work of an enterprise , the requirements for the design and the implementation are dynamic and subjective . therefore , data warehouse design is a continuous process which has to reflect the changing environment of a data warehouse , i.e . the data warehouse must evolve in reaction to the enterprise 's evolution . based on existing meta models for the architecture and quality of a data warehouse , we propose in this paper a data warehouse process model to capture the dynamics of a data warehouse . the evolution of a data warehouse is represented as a special process and the evolution operators are linked to the corresponding architecture components and quality factors they affect . we show the application of our model on schema evolution in data warehouses and its consequences on data warehouse views . the models have been implemented in the metadata repository conceptbase , which can be used to analyze the result of evolution operations and to monitor the quality of a data warehouse .
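the hallmark of schemasql above , treating column names as data and vice versa , corresponds to the melt / pivot pair familiar from dataframe libraries . a small pandas illustration of the restructuring idea ( the table and column names are invented for the example , not schemasql syntax ) :

```python
import pandas as pd

# one relation of per-university salaries, with job categories as *columns*
wide = pd.DataFrame({"univ": ["a", "b"],
                     "prof": [90, 95], "assocprof": [70, 72]})

# fold column names into data (meta-data -> data)
long = wide.melt(id_vars="univ", var_name="category", value_name="salary")
print(long)

# and back: promote data values to column names (data -> meta-data)
wide_again = long.pivot(index="univ", columns="category", values="salary")
print(wide_again)
```

horizontal aggregation in the sense above then becomes an ordinary aggregation over the melted rows , e.g . long.groupby("univ")["salary"].mean() .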
an emotion monitoring system for a call-center is proposed . it aims to simplify the tracking and management of emotions extracted from call center employee-customer conversations . the system is composed of four modules : emotion detection , emotion analysis and report generation , database manager , and user interface . the emotion detection module uses a tone analyzer to extract emotions reliably ; it also performs utterance analysis for detecting emotion . the 14 emotions detected by the tone analyzer include happy , joy , anger , sad and neutral . the emotion analysis module performs classification into 3 categories : neutral , anger and joy . using these categories , it applies a point-scoring technique for calculating the employee score . this module also polishes the output of the emotion detection module to provide a more presentable output of the sequence of emotions of the employee and the customer . the database manager is responsible for the management of the database , handling the creation and update of data . the interface module serves as the view and user interface for the whole system . story_separator_special_tag we offer an analytical solution for the optimal frame size of the non-muting version of the basic frame slotted aloha collision resolution protocol for rfid networks . previous investigations of rfid frame size have been empirical and have not yielded a general result . our theoretical analysis provides a generalized result . our solution can be used to determine the optimal frame size for any given number of rfid tags . suboptimal selection of the frame size can result in substantially longer than minimum census delays and can unnecessarily increase energy consumption . we were able to demonstrate about 20 % performance improvement in reduced census delay for a given range of values . our results can help speed up reader-side processing times , lower the implementation complexity of rfid readers , and increase their energy efficiency . story_separator_special_tag in this paper , a novel data hiding technique is proposed as an improvement over the fibonacci lsb data-hiding technique proposed by battisti et al . first we mathematically model and generalize our approach . then we propose our novel technique , based on decomposition of a number ( pixel-value ) into a sum of prime numbers . the particular representation generates a different set of ( virtual ) bit-planes altogether , suitable for embedding purposes . they not only allow one to embed a secret message in higher bit-planes but also do so without much distortion , with a much better stego-image quality , in a reliable and secured manner , guaranteeing efficient retrieval of the secret message . a comparative performance study between the classical least significant bit ( lsb ) method , the fibonacci lsb data-hiding technique and our proposed schemes has been done . analysis indicates that the image quality of the stego-image hidden by the technique using fibonacci decomposition improves against that using the simple lsb substitution method , while the same using the prime decomposition method improves drastically against that using the fibonacci decomposition technique . experimental results show that the stego-image is visually indistinguishable from the cover image .
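the frame-sizing result for frame slotted aloha above can be reproduced numerically : with n tags and l slots , the expected number of singleton ( successful ) slots is n ( 1 - 1/l ) ^ ( n - 1 ) , so the per-slot efficiency peaks at l close to n . this is the standard slotted-aloha analysis , not necessarily the paper 's exact model ; a short sketch :

```python
def efficiency(n, L):
    """expected fraction of slots holding exactly one of n tags."""
    return (n / L) * (1 - 1 / L) ** (n - 1)

def best_frame(n, L_max=1024):
    """brute-force search for the throughput-maximizing frame size."""
    return max(range(1, L_max + 1), key=lambda L: efficiency(n, L))

for n in (10, 50, 100, 200):
    L = best_frame(n)
    print(n, L, round(efficiency(n, L), 3))
```

the optimum sits at l ≈ n , with peak efficiency approaching 1/e ≈ 0.368 for large n , which is why a badly chosen frame size inflates census delay and energy use as noted above .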
story_separator_special_tag wireless sensor network is a collection of sensor nodes with limited processor and limited memory unit embedded in it . sensor networks are used in a wide range of applications such as environment monitoring , health , industrial control units , military applications and many more . this paper defines the security requirements and various attacks on sensor networks . it also reviews proposed security mechanisms for wsns . story_separator_special_tag wireless sensor networks ( wsns ) have attracted a lot of interest in the research community due to their potential applicability in a wide range of real-world practical applications . however , due to the distributed nature and their deployments in critical applications without human interventions , and the sensitivity and criticality of the data communicated , these networks are vulnerable to numerous security and privacy threats that can adversely affect their performance . these issues become even more critical in cognitive wireless sensor networks ( cwsns ) , in which the sensor nodes have the capabilities of changing their transmission and reception parameters according to the radio environment under which they operate , in order to achieve reliable and efficient communication and optimum utilization of the network resources . this chapter presents a comprehensive discussion on the security and privacy issues in cwsns by identifying various security threats in these networks and various defense mechanisms to counter these vulnerabilities . various types of attacks on cwsns are categorized under different classes based on their natures and targets , and corresponding to each attack class , appropriate security mechanisms are also discussed . some critical research issues on security and privacy in cwsns are also identified . story_separator_special_tag the security in wireless sensor networks ( wsns ) is a critical issue due to the inherent limitations of computational capacity and power usage . while a variety of security techniques are being developed and a lot of research is going on in the security field at a brisk pace , the field lacks a common integrated platform which provides a comprehensive comparison of the seemingly unconnected but linked issues . in this paper we attempt to comparatively analyse the various available security approaches , highlighting their advantages and weaknesses . this will surely ease the implementer 's burden of choosing between various available modes of defence . story_separator_special_tag wireless sensor networks are a result of developments in micro electro mechanical systems and wireless networks . these networks are made of tiny nodes which are becoming the future of many applications , where sensor networks are deployed in hostile environments . the deployment nature , where sensor networks are prone to physical interaction with the environment , together with resource limitations , raises some serious questions about securing these nodes against adversaries . the traditional security measures are not enough to overcome these weaknesses . to address the special security needs of tiny sensor nodes and sensor networks as a whole we introduce a security framework . in our framework we emphasize three areas : ( 1 ) cluster formation , ( 2 ) a secure key management scheme , and ( 3 ) a secure routing algorithm . our security analysis shows that the framework presented in this paper meets the unique security needs of sensor networks . story_separator_special_tag the deployment of sensor networks in hostile environments makes them mainly vulnerable to battery drainage attacks , because it is impossible to recharge or replace the battery power of sensor nodes .
among different types of security threats , low-power sensor nodes are immensely affected by attacks which cause random drainage of the energy level of sensors , leading to the death of nodes . the most dangerous type of attack in this category is sleep deprivation , where the target of the intruder is to maximize the power consumption of sensor nodes so that their lifetime is minimized . most of the existing works on sleep deprivation attack detection involve a lot of overhead , leading to poor throughput . the need of the day is to design a model for detecting intrusions accurately in an energy-efficient manner . this paper proposes a hierarchical framework based on a distributed collaborative mechanism for detecting sleep deprivation torture in wireless sensor networks efficiently . the proposed model uses an anomaly detection technique in two steps to reduce the probability of false intrusion detection . story_separator_special_tag wireless sensor networks ( wsns ) use small nodes with constrained capabilities to sense , collect , and disseminate information in many types of applications . as sensor networks become widespread , security issues become a central concern , especially in mission-critical tasks . in this paper , we identify the threats and vulnerabilities to wsns and summarize the defense methods based on a networking protocol layer analysis first . then we give a holistic overview of security issues . these issues are divided into seven categories : cryptography , key management , attack detection and prevention , secure routing , secure location , secure data fusion , and other security issues . along the way we analyze the advantages and disadvantages of current secure schemes in each category . in addition , we also summarize the techniques and methods used in these categories , and point out the open research issues and directions in each area . story_separator_special_tag mobile ad hoc network ( manet ) is a dynamic multihop wireless network which is established by a set of mobile nodes on a shared wireless channel . one of the major issues in manet is routing , due to the mobility of the nodes . routing means the act of moving information across an internetwork from a source to a destination . when it comes to manet , the complexity increases due to various characteristics like dynamic topology , time-varying qos requirements , limited resources and energy , etc . qos routing plays an important role in providing qos in wireless ad hoc networks . the biggest challenge in this kind of network is to find a path between the communication end points satisfying the user 's qos requirements . nature-inspired algorithms ( swarm intelligence ) such as ant colony optimization ( aco ) algorithms have been shown to be a good technique for developing routing algorithms for manets . in this paper , a new qos algorithm for mobile ad hoc networks has been proposed . the proposed algorithm combines the idea of ant colony optimization ( aco ) with the optimized link state routing ( olsr ) protocol . story_separator_special_tag with a widespread growth in the potential applications of wireless sensor networks ( wsn ) , the need for reliable security mechanisms for them has increased manifold . security protocols in wsns , unlike traditional mechanisms , require special effort and raise issues of their own . this is attributed to the inherent computational and communication constraints in these tiny embedded-system devices .
another reason which distinguishes them from traditional network security mechanisms is their usage in extremely hostile and unattended environments . the sensitivity of the data sensed by these devices also poses ever-increasing challenges . we present a layer-based classification of wsn security threats and defenses proposed in the literature , with special focus on physical , link and network layer issues . story_separator_special_tag wireless sensor networks ( wsns ) have recently attracted a lot of interest in the research community due to their wide range of applications . due to the distributed nature of these networks and their deployment in remote areas , these networks are vulnerable to numerous security threats that can adversely affect their proper functioning . this problem is more critical if the network is deployed for some mission-critical application such as in a tactical battlefield . random failure of nodes is also very likely in real-life deployment scenarios . due to resource constraints in the sensor nodes , traditional security mechanisms with a large overhead of computation and communication are infeasible in wsns . security in sensor networks is , therefore , a particularly challenging task . this paper discusses the current state of the art in security mechanisms for wsns . various types of attacks are discussed and their countermeasures presented . a brief discussion on the future direction of research in wsn security is also included . story_separator_special_tag wireless sensor networks ( wsn ) are a challenging and emerging technology for research due to their vital scope in the field , coupled with their low processing power and associated low energy . today wireless sensor networks are broadly used in environmental control , surveillance tasks , monitoring , tracking and control , etc . on top of all this , wireless sensor networks need very secure communication , since they operate in the open field and are based on broadcast technology . in this paper we deal with the security of wireless sensor networks . starting with a brief overview of sensor networks , a review is made of how to provide security in wireless sensor networks . story_separator_special_tag we present a new icmp message and an automatic process capable of tracing reflective dos attacks back to the attack agents . the newly designed icmp message carries the packet routing history and is signed by each forwarding router . after receiving the loaded icmp messages , attack targets can identify the border routers of reflectors in the first flooding path and then use an icmp message to inform accountable border routers to continue the traceback process to find the attack agents . in this paper , we propose an automatic , efficient , and secure traceback process across domains and discuss some limitations of the protocol . story_separator_special_tag wireless sensor network ( wsn ) , composed of a huge number of resource-constrained sensors , can be used for a large number of security-sensitive applications . regardless of the type of application , smooth collection and delivery of data from this type of network is one of the critical requirements . if the data supply process is hampered and thus the expected services become unavailable due to the intentional attempts of adversaries , we consider this a denial of service ( dos ) attack . as a dos attack aims to jeopardize the usual services , it can often drastically curtail the utility of a wireless sensor network .
in this short communication , we explore the meaning of dos in wsn , its effective mitigation techniques , and recent issues and challenges in this research area . story_separator_special_tag rfid is one of the enabling technologies of the internet of things . rfid has the potential to enable machines to identify objects , understand their status , and communicate and take action if necessary , to create `` real time awareness . '' the pervasiveness of rfid technology has given rise to a number of serious issues , including security and privacy concerns . this paper will discuss current rfid usage issues and conduct a threat analysis of the rfid system components , then identify issues/risks and elucidate how these issues can be resolved or risks can be mitigated . story_separator_special_tag we consider routing security in wireless sensor networks . many sensor network routing protocols have been proposed , but none of them have been designed with security as a goal . we propose security goals for routing in sensor networks , show how attacks against ad-hoc and peer-to-peer networks can be adapted into powerful attacks against sensor networks , introduce two classes of novel attacks against sensor networks , sinkholes and hello floods , and analyze the security of all the major sensor network routing protocols . we describe crippling attacks against all of them and suggest countermeasures and design considerations . this is the first such analysis of secure routing in sensor networks . story_separator_special_tag as wearable fitness trackers gain widespread acceptance among the general population , there is a concomitant need to ensure that associated privacy and security vulnerabilities are kept to a minimum . we discuss potential vulnerabilities of these trackers , in general , and specific vulnerabilities in one such tracker - fitbit - identified by rahman et al . ( 2013 ) , who then proposed means to address the identified vulnerabilities . however , the ` fix ' has its own vulnerabilities . we discuss possible means to alleviate the related issues . story_separator_special_tag a number of sensor applications in recent years collect data which can be directly associated with human interactions . some examples of such applications include gps applications on mobile devices , accelerometers , or location sensors designed to track human and vehicular traffic . such data lends itself to a variety of rich applications in which one can use the sensor data in order to model the underlying relationships and interactions . it also leads to a number of challenges , since such data may often be private , and it is important to be able to perform the mining process without violating the privacy of the users . in this chapter , we provide a broad survey of the work in this important and rapidly emerging field . we also discuss the key problems which arise in the context of this important field and the corresponding solutions . story_separator_special_tag we provide an exposition and proof of renault 's equivalence theorem for crossed products by locally hausdorff , locally compact groupoids . our approach stresses the bundle approach and concrete imprimitivity bimodules , and is a preamble to a detailed treatment of the brauer semigroup for a locally hausdorff , locally compact groupoid . story_separator_special_tag the need for sustainable catalysts for an efficient hydrogen evolution reaction is of significant interest for modern society .
inspired by comparable structural properties of [ feni ] -hydrogenase , here we present the natural ore pentlandite ( fe4.5ni4.5s8 ) as a direct ` rock ' electrode material for hydrogen evolution under acidic conditions , with an overpotential of 280 mv at 10 ma cm⁻² . furthermore , it reaches a value as low as 190 mv after 96 h of electrolysis due to surface sulfur depletion , which may change the electronic structure of the catalytically active nickel-iron centres . the ` rock ' material shows an unexpected catalytic activity , with an overpotential and tafel slope comparable to some well-developed metallic or nanostructured catalysts . notably , the ` rock ' material offers high current densities ( 650 ma cm⁻² ) without any loss in activity for approximately 170 h. the superior hydrogen evolution performance of pentlandites as ` rock ' electrodes labels this ore as a promising electrocatalyst for a future hydrogen-based economy . story_separator_special_tag the fusion of social networks and wearable sensors is becoming increasingly popular , with systems like fitbit automating the process of reporting and sharing user fitness data . in this paper we show that while compelling , the integration of health data into social networks is fraught with privacy and security vulnerabilities . case in point , by reverse engineering the communication protocol , storage details and operation codes , we identified several vulnerabilities in fitbit . we have built fitbite , a suite of tools that exploit these vulnerabilities to launch a wide range of attacks against fitbit . besides eavesdropping , injection and denial of service , several attacks can lead to rewards and financial gains . we have built fitlock , a lightweight defense system that protects fitbit while imposing only a small overhead . our experiments on beagleboard and xperia devices show that fitlock 's end-to-end overhead over fitbit is only 2.4 % . story_separator_special_tag this paper proposes a combination of an intrusion detection system with a routing protocol to strengthen the defense of a mobile ad hoc network . our system is socially inspired , since we use the new paradigm of reputation inherited from human behavior . the proposed ids also has the unique characteristic of being semi-distributed , since it neither distributes its observation results globally nor keeps them entirely locally , yet manages to communicate this vital information without adding to the network traffic . this innovative approach also avoids unfounded assumptions and complex calculations for computing and maintaining the trust values used to estimate the reliability of other nodes ' observations . a robust path manager and monitor system and redemption and fading concepts are other salient features of this design . the design has been shown to outperform normal dsr in terms of packet delivery ratio and routing overhead , even when up to half of the nodes in the network behave maliciously . story_separator_special_tag scikit-learn is a python module integrating a wide range of state-of-the-art machine learning algorithms for medium-scale supervised and unsupervised problems . this package focuses on bringing machine learning to non-specialists using a general-purpose high-level language . emphasis is put on ease of use , performance , documentation , and api consistency . it has minimal dependencies and is distributed under the simplified bsd license , encouraging its use in both academic and commercial settings .
source code , binaries , and documentation can be downloaded from http://scikit-learn.sourceforge.net . story_separator_special_tag smart contracts are computer programs that can be correctly executed by a network of mutually distrusting nodes , without the need of an external trusted authority . since smart contracts handle and transfer assets of considerable value , besides their correct execution it is also crucial that their implementation is secure against attacks which aim at stealing or tampering with the assets . we study this problem in ethereum , the most well-known and used framework for smart contracts so far . we analyse the security vulnerabilities of ethereum smart contracts , providing a taxonomy of common programming pitfalls which may lead to vulnerabilities . we show a series of attacks which exploit these vulnerabilities , allowing an adversary to steal money or cause other damage .
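one of the best-known pitfalls in that taxonomy is reentrancy : state is updated only after an external call , so a malicious callee can re-enter and drain funds . a language-neutral toy of the pattern in python ; the contract and attacker classes are invented for illustration and are not ethereum code :

```python
class VulnerableBank:
    """pays out before updating the balance: the classic reentrancy bug."""
    def __init__(self):
        self.balances = {}
        self.pot = 100          # other users' deposits at risk

    def withdraw(self, user):
        amount = self.balances.get(user, 0)
        if amount > 0:
            user.receive(self, amount)      # external call happens first ...
            self.pot -= amount
            self.balances[user] = 0         # ... state update happens last

class Attacker:
    def __init__(self):
        self.loot, self.depth = 0, 0
    def receive(self, bank, amount):
        self.loot += amount
        self.depth += 1
        if self.depth < 5:                  # re-enter before balance is zeroed
            bank.withdraw(self)

bank, thief = VulnerableBank(), Attacker()
bank.balances[thief] = 10
bank.withdraw(thief)
print(thief.loot)   # 50: five withdrawals of a 10-unit balance
```

the standard fix follows the checks-effects-interactions order : zero the balance before making the external call .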
principal component analysis ( pca ) is a multivariate technique that analyzes a data table in which observations are described by several inter-correlated quantitative dependent variables . its goal is to extract the important information from the table , to represent it as a set of new orthogonal variables called principal components , and to display the pattern of similarity of the observations and of the variables as points in maps . the quality of the pca model can be evaluated using cross-validation techniques such as the bootstrap and the jackknife . pca can be generalized as correspondence analysis ( ca ) in order to handle qualitative variables , and as multiple factor analysis ( mfa ) in order to handle heterogeneous sets of variables . mathematically , pca depends upon the eigen-decomposition of positive semi-definite matrices and upon the singular value decomposition ( svd ) of rectangular matrices . story_separator_special_tag let $A$ be a real $m \times n$ matrix with $m \ge n$ . it is well known ( cf . [ 4 ] ) that $$ A = U \Sigma V^T \qquad ( 1 ) $$ where $U^T U = V^T V = V V^T = I_n$ and $\Sigma = \operatorname{diag}( \sigma_1 , \ldots , \sigma_n )$ . the matrix $U$ consists of $n$ orthonormalized eigenvectors associated with the $n$ largest eigenvalues of $A A^T$ , and the matrix $V$ consists of the orthonormalized eigenvectors of $A^T A$ . the diagonal elements of $\Sigma$ are the non-negative square roots of the eigenvalues of $A^T A$ ; they are called singular values . we shall assume that $$ \sigma_1 \ge \sigma_2 \ge \cdots \ge \sigma_n \ge 0 , $$ so that if $\operatorname{rank}( A ) = r$ , then $\sigma_{r+1} = \cdots = \sigma_n = 0$ . story_separator_special_tag the independent component analysis ( ica ) of a random vector consists of searching for a linear transformation that minimizes the statistical dependence between its components . in order to define suitable search criteria , the expansion of mutual information is utilized as a function of cumulants of increasing orders . an efficient algorithm is proposed , which allows the computation of the ica of a data matrix within a polynomial time . the concept of ica may actually be seen as an extension of principal component analysis ( pca ) , which can only impose independence up to the second order and , consequently , defines directions that are orthogonal . potential applications of ica include data analysis and compression , bayesian detection , localization of sources , and blind identification and deconvolution .
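the pca / svd connection above is direct to express in code : center the data , take the svd , and the right singular vectors are the principal axes while $U \Sigma$ gives the component scores . a minimal numpy sketch with random data :

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))          # 200 observations, 5 variables
Xc = X - X.mean(axis=0)                # column-center: pca works on centered data

U, s, Vt = np.linalg.svd(Xc, full_matrices=False)   # Xc = U @ diag(s) @ Vt
scores = U * s                         # principal component scores
explained = s**2 / np.sum(s**2)        # fraction of variance per component
print(explained)

# sanity check: the factorization reproduces the centered data
assert np.allclose(Xc, (U * s) @ Vt)
```

keeping only the first r components gives the best rank-r approximation in the least-squares sense , which is the dimensionality-reduction step that the quantization and factorization methods below build on as well .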
even though information theory implies that one can always obtain better performance by coding vectors instead of scalars , scalar quantizers have remained by far the most common data compression system because of their simplicity and good performance when the communication rate is sufficiently large . in addition , relatively few design techniques have existed for vector quantizers . during the past few years several design algorithms have been developed for a variety of vector quantizers and the performance of these codes has been studied for speech waveforms story_separator_special_tag is perception of the whole based on perception of its parts ? there is psychological and physiological evidence for parts-based representations in the brain , and certain computational theories of object recognition rely on such representations . but little is known about how brains or computers might learn the parts of objects . here we demonstrate an algorithm for non-negative matrix factorization that is able to learn parts of faces and semantic features of text . this is in contrast to other methods , such as principal components analysis and vector quantization , that learn holistic , not parts-based , representations . non-negative matrix factorization is distinguished from the other methods by its use of non-negativity constraints . these constraints lead to a parts-based representation because they allow only additive , not subtractive , combinations . when non-negative matrix factorization is implemented as a neural network , parts-based representations emerge by virtue of two properties : the firing rates of neurons are never negative and synaptic strengths do not change sign . story_separator_special_tag in this paper , we propose a new data clustering method called concept factorization that models each concept as a linear combination of the data points , and each data point as a linear combination of the concepts . with this model , the data clustering task is accomplished by computing the two sets of linear coefficients , and this linear coefficients computation is carried out by finding the non-negative solution that minimizes the reconstruction error of the data points . the cluster label of each data point can be easily derived from the obtained linear coefficients . this method differs from clustering based on non-negative matrix factorization ( nmf ) in that it can be applied to data containing negative values and the method can be implemented in the kernel space . our experimental results show that the proposed data clustering method and its variations perform best among 11 algorithms and their variations that we have evaluated on both the tdt2 and reuters-21578 corpora . in addition to its good performance , the new method also has the merit of easy and reliable derivation of the clustering results . story_separator_special_tag previous studies have demonstrated that document clustering performance can be improved significantly in lower dimensional linear subspaces . recently , matrix factorization-based techniques , such as nonnegative matrix factorization ( nmf ) and concept factorization ( cf ) , have yielded impressive results . however , both of them effectively see only the global euclidean geometry , whereas the local manifold geometry is not fully considered . in this paper , we propose a new approach to extract the document concepts which are consistent with the manifold geometry such that each concept corresponds to a connected component .
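the additive , parts-based factorization described above is usually fit with the well-known multiplicative update rules ; below is a minimal numpy sketch for $x \approx w h$ with nonnegative factors ( dimensions , iteration count and the small stabilizing constant are illustrative ) .

import numpy as np

# minimal sketch of the multiplicative update rules for nmf ( x ~ w @ h ,
# with w >= 0 and h >= 0 ) on a synthetic nonnegative matrix .
rng = np.random.default_rng(0)
x = rng.random((50, 30))                 # nonnegative data matrix
k, eps = 5, 1e-9                         # rank and a small constant for stability
w = rng.random((50, k))
h = rng.random((k, 30))
for _ in range(200):
    h *= (w.T @ x) / (w.T @ w @ h + eps)  # update codes
    w *= (x @ h.T) / (w @ h @ h.T + eps)  # update basis
print(np.linalg.norm(x - w @ h))         # reconstruction error decreases monotonically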
central to our approach is a graph model which captures the local geometry of the document submanifold . thus , we call it locally consistent concept factorization ( lccf ) . by using the graph laplacian to smooth the document-to-concept mapping , lccf can extract concepts with respect to the intrinsic manifold structure and thus documents associated with the same concept can be well clustered . the experimental results on tdt2 and reuters-21578 have shown that the proposed approach provides a better representation and achieves better clustering results in terms of accuracy and mutual information . story_separator_special_tag in past decades , tremendous growth in the amount of text documents and images has become omnipresent , and it is very important to group them into clusters as desired . recently , matrix factorization based techniques , such as non-negative matrix factorization ( nmf ) and concept factorization ( cf ) , have yielded impressive results for clustering . however , both of them effectively see only the global euclidean geometry , whereas the local manifold geometry is not fully considered . recent research has shown that not only the observed data are found to lie on a nonlinear low dimensional manifold , namely the data manifold , but also the features lie on a manifold , namely the feature manifold . in this paper , we propose a novel algorithm , called dual-graph regularized concept factorization for clustering ( gcf ) , which simultaneously considers the geometric structures of both the data manifold and the feature manifold . as an extension of gcf , we show that our proposed method can also be applied to datasets containing negative values . moreover , we develop the iterative updating optimization schemes for gcf , and provide the convergence proof of our optimization scheme story_separator_special_tag existing matrix factorization based techniques , such as nonnegative matrix factorization and concept factorization , have been widely applied for data representation . in order to make the obtained concepts as close to the original data points as possible , one state-of-the-art method called locality constraint concept factorization is put forward , which represents the data by a linear combination of only a few nearby basis concepts . but its locality constraint does not well reveal the intrinsic data structure since it only requires the concepts to be as close to the original data points as possible . to address these problems , by considering the manifold geometrical structure in local concept factorization via graph-based learning , we propose a novel algorithm , called graph-regularized local coordinate concept factorization ( grlcf ) . by constructing a parameter-free graph using the constrained laplacian rank ( clr ) algorithm , we also present an extension of the grlcf algorithm as $\mathrm{grlcf}_{\mathrm{clr}}$ . moreover , we develop the iterative updating optimization schemes , and provide the convergence proof of our optimization scheme . since grlcf simultaneously considers the geometric story_separator_special_tag abstract concept factorization ( cf ) has been a powerful data representation method , which has been widely applied in image processing and document clustering . however , traditional cf can not guarantee the decomposition results of cf to be sparse in theory and does not consider the geometric structure of the databases .
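the graph-laplacian smoothing used by lccf and the dual-graph methods above rests on a simple construction ; the sketch below ( illustrative sizes , unnormalized laplacian ) builds a k-nearest-neighbor affinity matrix and evaluates the smoothness penalty $\operatorname{tr} ( v^t l v )$ that such regularizers add to the cf objective .

import numpy as np

# minimal sketch of the graph-regularization term : a knn affinity matrix a ,
# its laplacian l = d - a , and the penalty tr ( v.t @ l @ v ) , which is
# small when neighboring points receive similar concept representations .
rng = np.random.default_rng(0)
x = rng.normal(size=(40, 8))             # 40 points
k = 5
d2 = ((x[:, None, :] - x[None, :, :]) ** 2).sum(-1)  # pairwise squared distances
a = np.zeros((40, 40))
for i in range(40):
    nn = np.argsort(d2[i])[1:k + 1]      # nearest neighbors , skipping the point itself
    a[i, nn] = 1.0
a = np.maximum(a, a.T)                   # symmetrize the graph
l = np.diag(a.sum(1)) - a                # unnormalized graph laplacian
v = rng.random((40, 3))                  # a candidate data-to-concept mapping
print(np.trace(v.T @ l @ v))             # the smoothness penalty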
in this paper , we propose a graph-regularized cf with local coordinate ( lgcf ) method , which enforces the learned coefficients to be sparse by using the local coordinate constraint while preserving the intrinsic geometric structure of the data space by incorporating graph regularization . an iterative optimization method is also proposed to solve the objective function of lgcf . by comparing with the state-of-the-art algorithms ( kmeans , nmf , cf , lccf , lcf ) , experimental results on four popular databases show that the proposed lgcf method has better performance in terms of average accuracy and mutual information . story_separator_special_tag many traditional concept factorization methods employ a single graph to approximate the manifold structure of data . therefore , they can not capture the underlying geometric structure hidden in data effectively . in this paper , we propose a novel method , called multiple graph regularized concept factorization with adaptive weights ( mcfaw ) , for data representation . it exploits the intrinsic geometric manifold of the data by a parameter-free linear combination of multiple graphs . therefore , our proposed mcfaw method can be applied to many real problems . besides , an efficient optimization algorithm is presented to solve the proposed model . some experimental results on the benchmarks show that the proposed mcfaw method outperforms the state-of-the-art methods . story_separator_special_tag in this paper , a novel concept factorization ( cf ) method , called cf with adaptive neighbors ( cfan ) , is proposed . the idea of cfan is to integrate an ans regularization constraint into the cf decomposition . the goal of cfan is to extract the representation space that maintains the geometrical neighborhood structure of the data . similar to the existing graph-regularized cf , cfan builds a neighbor graph weights matrix . the key difference is that cfan performs dimensionality reduction and finds the neighbor graph weights matrix simultaneously . an efficient algorithm is also derived to solve the proposed problem . we apply the proposed method to the problem of document clustering on the 20 newsgroups , reuters-21578 , and tdt2 document data sets . our experiments demonstrate the effectiveness of the method . story_separator_special_tag recently , manifold regularization with the affinity graph in matrix factorization-related studies , such as dual-graph regularized concept factorization ( gcf ) , has yielded impressive results for clustering . however , due to the noisy and irrelevant features of the data samples , the affinity graph constructed directly from the original feature space is not necessarily a reliable reflection of the intrinsic manifold of the data samples . to overcome this problem , we integrate feature selection into the construction of the data ( feature ) graph and propose a novel algorithm called adaptive dual-graph regularized cf with feature selection ( $\mathrm{adgcf}_{\mathrm{fs}}$ ) , which simultaneously considers the geometric structures of both the data manifold and the feature manifold . we unify feature selection and dual-graph regularized cf into a joint objective function and minimize this objective function with iterative and alternating updating optimization schemes . moreover , we provide the convergence proof of our optimization scheme .
experimental results on the tdt2 and reuters document datasets , and the coil20 and pie image datasets , demonstrate the effectiveness of our proposed method . story_separator_special_tag we investigate the high-dimensional data clustering problem by proposing a novel and unsupervised representation learning model called robust flexible auto-weighted local-coordinate concept factorization ( rfa-lcf ) . rfa-lcf integrates the robust flexible cf , robust sparse local-coordinate coding and adaptive reconstruction weighting learning into a unified model . the adaptive weighting is driven by including the joint manifold preserving constraints on the recovered clean data , basis concepts and new representation . specifically , our rfa-lcf uses an l2,1-norm based flexible residue to encode the mismatch between the clean data and its reconstruction , and also applies robust adaptive sparse local-coordinate coding to represent the data using a few nearby basis concepts , which can make the factorization more accurate and robust to noise . the robust flexible factorization is also performed in the recovered clean data space for enhancing representations . rfa-lcf also considers preserving the local manifold structures of the clean data space , the basis concept space and the new coordinate space jointly in an adaptive manner . extensive comparisons show that rfa-lcf can deliver enhanced clustering results . story_separator_special_tag concept factorization ( cf ) and its variants may produce inaccurate representation and clustering results due to the sensitivity to noise , the hard constraint on the reconstruction error , and pre-obtained approximate similarities . to improve the representation ability , a novel unsupervised robust flexible auto-weighted local-coordinate concept factorization ( rfa-lcf ) framework is proposed for clustering high-dimensional data . specifically , rfa-lcf integrates the robust flexible cf by clean data space recovery , robust sparse local-coordinate coding , and adaptive weighting into a unified model . rfa-lcf improves the representations by enhancing the robustness of cf to noise and errors , providing a flexible constraint on the reconstruction error and optimizing the locality jointly . for robust learning , rfa-lcf clearly learns a sparse projection to recover the underlying clean data space , and then the flexible cf is performed in the projected feature space . rfa-lcf also uses an l2,1-norm based flexible residue to encode the mismatch between the recovered data and its reconstruction , and uses robust sparse local-coordinate coding to represent data using a few nearby basis concepts . for auto-weighting , rfa-lcf jointly preserves the manifold structures in the basis concept space and new story_separator_special_tag matrix factorization based techniques , such as non-negative matrix factorization ( nmf ) and concept factorization ( cf ) , have attracted great attention in dimension reduction and data clustering . both of them are linear learning problems and lead to a sparse representation of the data . however , the sparsity obtained by these methods does not always satisfy locality conditions ; thus , the obtained data representation is not the best . this paper introduces a locality-constrained concept factorization method which imposes a locality constraint onto the traditional concept factorization .
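the locality constraint discussed above can be read as a weighted penalty that charges each coding coefficient by the distance between its point and the corresponding basis concept ; a minimal sketch of such a local-coordinate penalty ( synthetic data , illustrative weighting ) follows .

import numpy as np

# minimal sketch of a local-coordinate penalty : each point x_i should be
# coded mostly by concepts u_k that lie near it , so every coefficient is
# weighted by the squared distance to its concept .
rng = np.random.default_rng(0)
x = rng.normal(size=(40, 8))             # data points ( rows )
u = x[rng.choice(40, 4, replace=False)]  # 4 concepts anchored on data points
v = rng.random((40, 4))                  # nonnegative codes , one row per point
dist2 = ((x[:, None, :] - u[None, :, :]) ** 2).sum(-1)  # ||x_i - u_k||^2
locality_penalty = (np.abs(v) * dist2).sum()
print(locality_penalty)                  # adding this term to the cf objective
                                         # favors sparse codes on nearby concepts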
by requiring the concepts ( basis vectors ) to be as close to the original data points as possible , each data point can be represented by a linear combination of only a few basis concepts . thus our method is able to achieve sparsity and locality at the same time . we demonstrate the effectiveness of this novel algorithm through a set of evaluations on real world applications . story_separator_special_tag learning sparse representations of high-dimensional data is a state-of-the-art method for modeling data . matrix factorization-based techniques , such as nonnegative matrix factorization and concept factorization ( cf ) , have shown great advantages in this area , and are especially useful for image representation . both of them are linear learning problems and lead to a sparse representation of the images . however , the sparsity obtained by these methods does not always satisfy locality conditions . for example , the learned new basis vectors may be relatively far away from the original data . thus , we may not be able to achieve the optimal performance when using the new representation for other learning tasks , such as classification and clustering . in this paper , we introduce a locality constraint into the traditional cf . by requiring the concepts ( basis vectors ) to be as close to the original data points as possible , each datum can be represented by a linear combination of only a few basis concepts . thus , our method is able to achieve sparsity and locality simultaneously . we analyze the complexity of our novel algorithm and demonstrate the effectiveness in comparison with story_separator_special_tag ubiquitous data are increasingly expanding in large volumes due to human activities , and grouping them into appropriate clusters is an important and yet challenging problem . existing matrix factorization techniques have shown their significant power in solving this problem , e.g. , nonnegative matrix factorization and concept factorization . recently , one state-of-the-art method called locality-constrained concept factorization was put forward , but its locality constraint does not well reveal the intrinsic data structure since it only requires the concepts to be as close to the original data points as possible . to address this issue , we present a graph-based local concept coordinate factorization ( glcf ) method , which respects the intrinsic structure of the data through manifold kernel learning in the warped reproducing kernel hilbert space . besides , a generalized update algorithm is developed to handle data matrices containing both positive and negative entries . since glcf is essentially based on local coordinate coding and concept factorization , it inherits many advantageous properties , such as the locality and sparsity of the data representation . moreover , it can better encode the locally geometrical structure via the graph laplacian in the manifold adaptive kernel . story_separator_special_tag abstract exponentially growing collections of documents and images have become omnipresent in past decades , and it is of vital importance to group them into clusters as desired . matrix factorization has been shown to yield encouraging clustering results in previous works , whereas the data manifold structure , which holds plentiful spatial model information , is not fully respected by most existing techniques . and kernel learning is advantageous for unfolding nonlinear structure .
therefore , in this paper we propose a novel clustering approach called manifold kernel concept factorization ( mkcf ) that incorporates manifold kernel learning in concept factorization , which encodes the local geometrical structure in the kernel space . this method efficiently preserves the data semantic structure using the graph laplacian , and the nonlinear manifold learning in the warped rkhs potentially reflects the underlying local geometry of the data . thus , the concepts consistent with the intrinsic manifold structure are well extracted , and this greatly benefits aggregating documents and images within the same concept into the same cluster . extensive empirical studies demonstrate that mkcf achieves more satisfactory clustering performance as well as a better-represented lower-dimensional data space story_separator_special_tag matrix factorization based methods , e.g. , concept factorization ( cf ) and nonnegative matrix factorization ( nmf ) , have been proved to be efficient and effective for data clustering tasks . in recent years , various graph extensions of cf and nmf have been proposed to explore the intrinsic geometrical structure of data for the purpose of better clustering performance . however , many methods build the affinity matrix used in the manifold structure directly based on the input data . therefore , the clustering results are highly sensitive to the input data . to further improve the clustering performance , we propose a novel manifold concept factorization model with adaptive neighbor structure to learn a better affinity matrix and clustering indicator matrix at the same time . technically , the proposed model constructs the affinity matrix by assigning the adaptive and optimal neighbors to each point based on the local distance of the learned new representation of the original data with itself as a dictionary . our experimental results present superior performance over the state-of-the-art alternatives on numerous datasets . story_separator_special_tag as one of the matrix factorization models , concept factorization ( cf ) achieved promising performance in learning data representations in both the original feature space and the reproducible kernel hilbert space ( rkhs ) . based on the consensuses that 1 ) the learning performance of models can be enhanced by exploiting the geometrical structure of data and 2 ) jointly performing structured graph learning and clustering can avoid the suboptimal solutions caused by the two-stage strategy in graph-based learning , we developed a new cf model with self-expression . our model has a combined coefficient matrix which is able to learn more efficiently . in other words , we propose a cf-based joint structured graph learning and clustering model ( jsgcf ) . a new efficient iterative method is developed to optimize the jsgcf objective function . experimental results on representative data sets demonstrate the effectiveness of our new jsgcf algorithm . story_separator_special_tag concept factorization ( cf ) decomposes a matrix into the product of three matrices . it is considered a variant of non-negative matrix factorization ( nmf ) . the biggest difference between the two methods is that cf can be executed in a kernel space . because of this characteristic , many schemes based on cf have been proposed in computer vision and pattern recognition fields .
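because the cf updates touch the data only through the gram matrix $k = x^t x$ , the factorization can be executed in a kernel space by substituting any nonnegative kernel for $k$ ; the sketch below uses an rbf kernel with the standard cf multiplicative updates for $\| x - x w v^t \|^2$ ( sizes and bandwidth are illustrative ) .

import numpy as np

# minimal sketch of concept factorization run entirely through the gram
# matrix : swapping an rbf kernel in for x.t @ x gives a kernel-space cf .
rng = np.random.default_rng(0)
x = rng.normal(size=(8, 60))             # 60 points with 8 features ( columns )
d2 = ((x.T[:, None, :] - x.T[None, :, :]) ** 2).sum(-1)
k_mat = np.exp(-d2 / d2.mean())          # nonnegative rbf kernel replacing x.t @ x
n, r, eps = 60, 3, 1e-9
w = rng.random((n, r))                   # concepts as combinations of data points
v = rng.random((n, r))                   # codes for each data point
for _ in range(200):                     # multiplicative updates for the cf objective
    w *= (k_mat @ v) / (k_mat @ w @ (v.T @ v) + eps)
    v *= (k_mat @ w) / (v @ (w.T @ k_mat @ w) + eps)
print(np.argmax(v, axis=1)[:10])         # cluster labels read off the codes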
recent studies have shown that high dimensional data are often located in a low dimensional manifold space ; in order to improve the performance and reduce the storage space , how to find the mapping function is particularly important . in addition , the development of supervised learning methods shows that label information is critical to enhance the model's ability . in this paper , a supervised graph regularized discriminative concept factorization ( sgdcf ) method is presented for image clustering . in the sgdcf , we make use of the local manifold geometry structure and label information . the corresponding multiplicative update solutions and convergence verification are given . clustering results on four image data sets reveal that the sgdcf outperforms the state-of-the-art algorithms in terms of accuracy and normalized mutual information story_separator_special_tag non-negative matrix factorization ( nmf ) has become a popular technique for finding low-dimensional representations of data . while the standard nmf can only be performed in the original feature space , one variant of nmf , named concept factorization , can be naturally kernelized and inherits all the strengths of nmf . to make use of label information , we propose a semi-supervised concept factorization technique called discriminative concept factorization ( dcf ) for data representation in this paper . dcf adopts a unified objective to combine the task of data reconstruction with the task of classification . these two tasks have mutual impacts on each other , which results in a concept factorization adapted to the classification task and a classifier built on the low-dimensional representations . furthermore , we develop an iterative algorithm to solve the optimization problem through alternating convex programming . experimental results on three real-world classification tasks demonstrate the effectiveness of dcf . story_separator_special_tag for the tasks of pattern analysis and recognition , nonnegative matrix factorization and concept factorization ( cf ) have attracted much attention due to their effective application in finding meaningful low-dimensional representations of data . however , they neglect the geometry information embedded in the local neighborhoods of the data and fail to exploit prior knowledge . in this paper , a novel semi-supervised learning algorithm named hyper-graph regularized discriminative concept factorization ( hdcf ) is proposed . for the sake of exploring the intrinsic geometrical structure of the data and making use of label information , hdcf incorporates a hyper-graph regularizer into the cf framework and uses the label information to train a classifier for the classification task . hdcf can learn a new concept factorization with respect to the intrinsic manifold structure of the data that is also simultaneously adapted to the classification task , with a classifier built on the low-dimensional representations . moreover , an iterative updating optimization scheme is developed to solve the objective function of the proposed hdcf and the convergence proof of our optimization scheme is also provided . experimental results on the orl , yale and usps image databases demonstrate the effectiveness of our proposed algorithm story_separator_special_tag concept factorization ( cf ) is a variant of non-negative matrix factorization ( nmf ) . in cf , each concept is represented by a linear combination of data points , and each data point is represented by a linear combination of concepts .
more specifically , each concept is represented by more than one data point with different weights , and each data point carries various weights , called memberships , to represent its degree of belonging to each concept . however , cf is actually an unsupervised method that does not make use of prior information about the data . in this paper , we propose a novel semi-supervised concept factorization method , called pairwise constrained concept factorization ( pccf ) , which incorporates pairwise constraints into the cf framework . we expect that data points which have pairwise must-link constraints should have the same class label as much as possible , while data points with pairwise cannot-link constraints will have different class labels as much as possible . due to the incorporation of the pairwise constraints , the learning quality of the cf has been significantly enhanced . experimental results show the effectiveness of our proposed novel method in comparison story_separator_special_tag nonnegative matrix factorization ( nmf ) and concept factorization ( cf ) are two popular methods for finding low-rank approximations of nonnegative matrices . different from nmf , cf can be applied not only to matrices containing negative values but also in the kernel space . based on nmf and cf , many methods , such as graph regularized nonnegative matrix factorization ( gnmf ) and locally consistent concept factorization ( lccf ) , can significantly improve the performance of clustering . unfortunately , these are unsupervised learning methods . in order to enhance the clustering performance with supervisory information , a semi-supervised concept factorization ( sscf ) is proposed in this paper by incorporating pairwise constraints into cf as reward and penalty terms , which can guarantee that the data points belonging to a cluster in the original space are still in the same cluster in the transformed space . by comparing with the state-of-the-art algorithms ( km , nmf , cf , lccf , gnmf , pccf ) , experimental results on document clustering show that the proposed algorithm has better performance in terms of accuracy and mutual information . story_separator_special_tag matrix factorization based techniques , such as nonnegative matrix factorization ( nmf ) and concept factorization ( cf ) , have attracted a great deal of attention in recent years , mainly due to their ability of dimension reduction and sparse data representation . both techniques are of unsupervised nature and thus do not make use of a priori knowledge to guide the clustering process . this could lead to inferior performance in some scenarios . as a remedy to this , a semi-supervised learning method called pairwise constrained concept factorization ( pccf ) was introduced to incorporate some pairwise constraints into the cf framework . despite its improved performance , pccf uses only a priori knowledge and neglects the proximity information of the whole data distribution ; this could lead to rather poor performance ( although slightly improved compared to cf ) when only limited a priori information is available . to address this issue , we propose in this paper a novel method called constrained neighborhood preserving concept factorization ( cnpcf ) . cnpcf utilizes both a priori knowledge and the local geometric structure of the dataset to guide its clustering .
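a minimal sketch of how must-link / cannot-link side information can be turned into the reward and penalty terms mentioned above ( the constraint lists and base similarities are illustrative , not from any cited system ) :

import numpy as np

# minimal sketch : encode pairwise supervision directly into a similarity
# matrix , rewarding must-link pairs and penalizing cannot-link pairs .
n = 6
s = np.full((n, n), 0.5)                 # a base similarity matrix
np.fill_diagonal(s, 1.0)
must_link = [(0, 1), (2, 3)]             # pairs known to share a class
cannot_link = [(0, 4), (1, 5)]           # pairs known to differ
for i, j in must_link:                   # reward : force high similarity
    s[i, j] = s[j, i] = 1.0
for i, j in cannot_link:                 # penalty : force zero similarity
    s[i, j] = s[j, i] = 0.0
print(s)                                 # this matrix then drives the graph term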
experimental studies on three real-world clustering tasks story_separator_special_tag abstract document clustering is an important tool for text mining , with the goal of grouping similar documents into a single cluster . as typical clustering methods , concept factorization ( cf ) and its variants have gained attention in recent studies . to improve the clustering performance , most of the cf methods use additional supervisory information to guide the clustering process . when the amount of supervisory information is scarce , the improved performance of cf methods will be limited . to overcome this limitation , this paper proposes a novel regularized concept factorization ( rcf ) algorithm with dual connected constraints , which focuses on whether two documents belong to the same class ( must-connected constraint ) or different classes ( cannot-connected constraint ) . rcf propagates the limited constraint information from constrained samples to unconstrained samples , allowing the collection of constraint information from the entire data set . this information is used to construct a new data similarity matrix that concentrates on the local discriminative structure of data . the similarity matrix is incorporated as a regularization term in the cf objective function . by doing so , rcf is able to make full story_separator_special_tag matrix factorization based techniques , such as nonnegative matrix factorization and concept factorization , have attracted great attention in dimensionality reduction and data clustering . previous studies show that both of them yield impressive results on image processing and document clustering . however , both of them are essentially unsupervised methods and can not incorporate label information . in this paper , we propose a novel semi-supervised matrix decomposition method for extracting the image concepts that are consistent with the known label information . with this constraint , we call the new approach constrained concept factorization . by requiring that the data points sharing the same label have the same coordinates in the new representation space , this approach has more discriminating power . the experimental results on several corpora show the good performance of our novel algorithm in terms of clustering accuracy and mutual information . story_separator_special_tag matrix factorization methods have been widely applied for data representation . traditional concept factorization , however , fails to utilize the discriminative structure information and the geometric structure information that can improve the performance in clustering . in this paper , we propose a novel matrix factorization method , called local regularization concept factorization ( lrcf ) , for image representation and clustering tasks . in lrcf , according to the local learning assumption , the label of each sample can be predicted by the samples in its neighborhood . the new representation of our proposed lrcf can encode the intrinsic geometric structure and discriminative structure of the high-dimensional data . furthermore , in order to utilize the label information of labeled data , we propose a semi-supervised version of lrcf , namely local regularization constrained concept factorization ( lrccf ) , which incorporates the label information as additional constraints . moreover , we develop the corresponding optimization schemes for our proposed methods , and provide the convergence proofs of the optimization schemes .
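the hard label constraint of constrained concept factorization is usually implemented by parameterizing the representation as $v = a z$ with an indicator matrix $a$ that maps all labeled points of one class onto a single shared row ; a minimal sketch of building such a matrix ( illustrative labels , with -1 marking unlabeled points ) follows .

import numpy as np

# minimal sketch of the label-constraint matrix : same-label points share one
# column of a , so v = a @ z gives them identical coordinates , while each
# unlabeled point keeps a free column of its own .
labels = np.array([0, 0, 1, -1, 1, -1])  # 6 points , 2 known classes
n_class = 2
n = len(labels)
n_unlabeled = int((labels == -1).sum())
a = np.zeros((n, n_class + n_unlabeled))
u_col = n_class                          # next free column for unlabeled points
for i, y in enumerate(labels):
    if y >= 0:
        a[i, y] = 1.0                    # shared coordinate per class
    else:
        a[i, u_col] = 1.0                # an unconstrained coordinate
        u_col += 1
print(a)                                 # substitute v = a @ z into the cf objective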
various experiments on real databases show that our proposed lrcf and lrccf are able to capture the intrinsic latent structure of data and achieve state-of-the-art performance story_separator_special_tag a robust semi-supervised concept factorization ( rsscf ) method is proposed in this paper , which not only makes good use of the available label information , but also addresses noise and extracts meaningful information simultaneously . in the proposed method , a constraint matrix is embedded into the basic concept factorization model to guarantee that data with the same label share the same new representation . we utilize the l2,1-norm on both the loss function and the regularization ; thus , this new model is not sensitive to outliers , and the l2,1-norm regularization helps select useful information with joint sparsity . an efficient and elegant iterative updating scheme is also introduced with convergence and correctness analysis . simulations are given to illustrate the effectiveness of our proposed method . story_separator_special_tag concept factorization ( cf ) is a modified version of nonnegative matrix factorization ( nmf ) and both of them have been proved to be effective matrix factorization methods for dimensionality reduction and data clustering . however , cf is essentially an unsupervised method which can not utilize any prior knowledge of the data . in this paper , we propose a new semi-supervised concept factorization method , called constrained concept factorization with graph laplacian ( ccf-gl ) , which not only incorporates the geometrical information of the data , but also utilizes the prior label information to enhance the accuracy of cf . specifically , we expect that the graph laplacian could preserve the intrinsic manifold structure of the original data . meanwhile , in the low-dimensional space , we hope that data points sharing the same label will have the same coordinates , while the coordinates of data points possessing different labels will be as dissimilar as possible . as a result , the learning quality of this semi-supervised cf method has been significantly enhanced . the experimental results on image clustering show the good performance of our algorithm . story_separator_special_tag constrained concept factorization ( ccf ) yields enhanced representation ability over cf by incorporating label information as additional constraints , but it can not classify and group unlabeled data appropriately . minimizing the difference between the original data and its reconstruction directly can enable ccf to model a small noisy perturbation , but is not robust to gross sparse errors . besides , ccf can not preserve the manifold structures in the new representation space explicitly , especially in an adaptive manner . in this paper , we propose a joint label prediction based robust semi-supervised adaptive concept factorization ( rs2acf ) framework . to obtain robust representation , rs2acf relaxes the factorization to make it simultaneously stable to small entrywise noise and robust to sparse errors . to enrich the prior knowledge to enhance the discrimination , rs2acf clearly uses the class information of labeled data and , more importantly , propagates it to unlabeled data by jointly learning an explicit label indicator for unlabeled data . by the label indicator , rs2acf can ensure the unlabeled data of the same predicted label to be mapped into the same class in feature space .
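the l2,1-norm used above is simply the sum of the l2 norms of the rows , so a corrupted sample contributes only its own row term rather than a squared one ; a minimal sketch :

import numpy as np

# minimal sketch of the l2,1-norm : sum of row-wise l2 norms . an outlier row
# inflates the l2,1 value linearly , while the squared frobenius loss grows
# quadratically , which is why the l2,1 loss is less sensitive to outliers .
def l21_norm(m):
    return np.sqrt((m ** 2).sum(axis=1)).sum()

rng = np.random.default_rng(0)
e = rng.normal(size=(20, 5))
e[3] += 50.0                             # one grossly corrupted sample ( row )
print(l21_norm(e), np.linalg.norm(e) ** 2)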
besides , rs2acf incorporates the joint neighborhood reconstruction error story_separator_special_tag recently , concept factorization ( cf ) , which is a variant of nonnegative matrix factorization , has attracted great attention in image representation . in cf , each concept is modeled as a nonnegative linear combination of the data points , and each data point as a linear combination of the concepts . cf has shown impressive performance in data representation . however , it is an unsupervised learning method that does not consider the label information of the data points . in this paper , we propose a novel semi-supervised cf method , called class-driven concept factorization ( cdcf ) , which associates the class labels of data points with their representations by introducing a class-driven constraint . this constraint forces the representations of data points to be more similar within the same class and more different between classes . thus , the discriminative abilities of the representations are enhanced in the image representation . experimental results on several databases have shown the effectiveness of our proposed method in terms of clustering accuracy and mutual information . story_separator_special_tag previous studies have demonstrated that concept factorization ( cf ) has yielded impressive results for dimensionality reduction and data representation . however , it is difficult to get a desired result by using single-layer concept factorization for some complex data , especially for ill-conditioned and badly scaled data . to improve the performance of existing cf algorithms , in this paper we propose a novel clustering approach , called multilayer concept factorization ( mcf ) , for data representation . mcf is a cascade connection of l mixing subsystems that decomposes the observation matrix iteratively in a number of layers . meanwhile , we explore the corresponding update solutions of the mcf method to reduce the risk of getting stuck in local minima in non-convex alternating computations . experimental results on document and face datasets demonstrate that our proposed method achieves better clustering performance in terms of accuracy and normalized mutual information . story_separator_special_tag previous studies have demonstrated that matrix factorization techniques , such as nonnegative matrix factorization ( nmf ) and concept factorization ( cf ) , have yielded impressive results in image processing and data representation . however , conventional cf and its variants with single-layer factorization fail to capture the intrinsic structure of data . in this paper , we propose a novel sequential factorization method , namely graph regularized multilayer concept factorization ( gmcf ) , for clustering . gmcf is a multi-stage procedure , which decomposes the observation matrix iteratively in a number of layers . in addition , gmcf further incorporates graph laplacian regularization in each layer to efficiently preserve the manifold structure of data . an efficient iterative updating scheme is developed for optimizing gmcf . the convergence of this algorithm is strictly proved , and the computational complexity is analyzed in detail . extensive experiments demonstrate that gmcf is superior in terms of data representation and clustering performance .
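the multilayer scheme described above factorizes the coefficient matrix of each layer again in the next layer , giving $x \approx w_1 w_2 \cdots w_l v_l$ ; the sketch below uses a plain nmf multiplicative-update solver per layer ( layer widths and iteration counts are illustrative , and the graph term of gmcf is omitted ) .

import numpy as np

def nmf(x, k, iters=200, eps=1e-9, seed=0):
    # plain multiplicative-update nmf used as the per-layer solver
    rng = np.random.default_rng(seed)
    w, h = rng.random((x.shape[0], k)), rng.random((k, x.shape[1]))
    for _ in range(iters):
        h *= (w.T @ x) / (w.T @ w @ h + eps)
        w *= (x @ h.T) / (w @ h @ h.T + eps)
    return w, h

rng = np.random.default_rng(0)
x = rng.random((60, 40))
layer_sizes = [20, 10, 5]                # illustrative layer widths
v, ws = x, []
for k in layer_sizes:                    # peel off one layer at a time
    w, v = nmf(v, k)
    ws.append(w)
recon = ws[0] @ ws[1] @ ws[2] @ v        # x ~ w1 @ w2 @ w3 @ v3
print(np.linalg.norm(x - recon))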
story_separator_special_tag in this paper , we investigate the unsupervised deep representation learning issue and technically propose a novel framework called deep self-representative concept factorization network ( dscf-net ) for clustering deep features . to improve the representation and clustering abilities , dscf-net explicitly considers discovering hidden deep semantic features , enhancing the robustness properties of the deep factorization to noise and preserving the local manifold structures of deep features . specifically , dscf-net seamlessly integrates robust deep concept factorization , deep self-expressive representation and adaptive locality preserving feature learning into a unified framework . to discover hidden deep representations , dscf-net designs a hierarchical factorization architecture using multiple layers of linear transformations , where the hierarchical representation is performed by formulating the problem as optimizing the basis concepts in each layer to improve the representation indirectly . dscf-net also improves the robustness by first performing subspace recovery for sparse error correction and then performing the deep factorization in the recovered visual subspace . to obtain locality-preserving representations , we also present an adaptive deep self-representative weighting strategy that uses the coefficient matrix as the adaptive reconstruction weights to keep the locality of representations . extensive comparison results with several other story_separator_special_tag in this paper , we technically propose an enriched prior guided framework , called dual-constrained deep semi-supervised coupled factorization network ( dscf-net ) , for discovering hierarchical coupled data representations . to extract hidden deep features , dscf-net is formulated as a partial-label and geometrical structure-constrained framework . specifically , dscf-net designs a deep factorization architecture using multiple layers of linear transformations , which can update both the basis vectors and the new representations in each layer in a coupled manner . to enable the learned deep representations and coefficients to be discriminative , we also consider enriching the supervised prior by joint deep coefficients-based label prediction , and then incorporate the enriched prior information as additional label and structure constraints . the label constraint can enable the intra-class samples to have the same coordinates in feature space , and the structure constraint forces the coefficients in each layer to be block-diagonal so that the enriched prior obtained by self-expressive label propagation is more accurate . our network also integrates adaptive dual-graph learning to retain the local structures of both the data and feature manifolds in each layer . extensive experiments on image datasets demonstrate the effectiveness of dscf-net for representation learning and clustering . story_separator_special_tag the most popular algorithms for nonnegative matrix factorization ( nmf ) belong to a class of multiplicative lee-seung algorithms which usually have relatively low complexity but are characterized by slow convergence and the risk of getting stuck in local minima . in this paper , we present and compare the performance of additive algorithms based on three different variations of a projected gradient approach .
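a minimal sketch of one such additive projected-gradient variant : take a gradient step on each factor of $0.5 \| x - w h \|^2$ and project back onto the nonnegative orthant ( the fixed step size is illustrative ; practical variants use line search ) .

import numpy as np

# minimal sketch of projected-gradient nmf : an additive gradient step per
# factor followed by projection onto the nonnegative orthant via np.maximum .
rng = np.random.default_rng(0)
x = rng.random((50, 30))
k, lr = 5, 1e-3                          # rank and an illustrative step size
w, h = rng.random((50, k)), rng.random((k, 30))
for _ in range(500):
    grad_w = (w @ h - x) @ h.T           # gradient of 0.5 * ||x - w h||^2 w.r.t. w
    w = np.maximum(w - lr * grad_w, 0.0)
    grad_h = w.T @ (w @ h - x)           # gradient w.r.t. h
    h = np.maximum(h - lr * grad_h, 0.0)
print(np.linalg.norm(x - w @ h))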
additionally , we discuss a novel multilayer approach to nmf algorithms combined with a multi-start initialization procedure , which , in general , considerably improves the performance of all the nmf algorithms . we demonstrate that this approach ( the multilayer system with projected gradient algorithms ) can usually give much better performance than standard multiplicative algorithms , especially if the data are ill-conditioned , badly scaled , and/or the number of observations is only slightly greater than the number of nonnegative hidden components . our new implementations of nmf are demonstrated with simulations performed on blind source separation ( bss ) data . story_separator_special_tag semi-non-negative matrix factorization is a technique that learns a low-dimensional representation of a dataset that lends itself to a clustering interpretation . it is possible that the mapping between this new representation and our original data matrix contains rather complex hierarchical information with implicit lower-level hidden attributes , which classical one-level clustering methodologies can not interpret . in this work we propose a novel model , deep semi-nmf , that is able to learn such hidden representations that lend themselves to a clustering interpretation according to different , unknown attributes of a given dataset . we also present a semi-supervised version of the algorithm , named deep wsf , which allows the use of ( partial ) prior information for each of the known attributes of a dataset , so that the model can be used on datasets with mixed attribute knowledge . finally , we show that our models are able to learn low-dimensional representations that are better suited for clustering , but also for classification , outperforming semi-non-negative matrix factorization as well as other state-of-the-art methodologies and variants . story_separator_special_tag the number of images associated with weakly supervised user-provided tags has increased dramatically in recent years . user-provided tags are incomplete , subjective and noisy . in this paper , we focus on the problem of social image understanding , i.e. , tag refinement , tag assignment , and image retrieval . different from previous work , we propose a novel weakly supervised deep matrix factorization algorithm , which uncovers the latent image representations and tag representations embedded in the latent subspace by collaboratively exploring the weakly supervised tagging information , the visual structure , and the semantic structure . due to the well-known semantic gap , the hidden representations of images are learned by a hierarchical model , and are progressively transformed from the visual feature space . it can naturally embed new images into the subspace using the learned deep architecture . the semantic and visual structures are jointly incorporated to learn a semantic subspace without overfitting the noisy , incomplete , or subjective tags . besides , to remove noisy or redundant visual features , a sparse model is imposed on the transformation matrix of the first layer in the deep architecture . finally , a story_separator_special_tag many clustering methods partition the data groups based on the input data similarity matrix . thus , the clustering results highly depend on the data similarity learning .
because the similarity measurement and data clustering are often conducted in two separate steps , the learned data similarity may not be the optimal one for data clustering and may lead to suboptimal results . in this paper , we propose a novel clustering model to learn the data similarity matrix and clustering structure simultaneously . our new model learns the data similarity matrix by assigning the adaptive and optimal neighbors for each data point based on the local distances . meanwhile , a new rank constraint is imposed on the laplacian matrix of the data similarity matrix , such that the number of connected components in the resulting similarity matrix is exactly equal to the cluster number . we derive an efficient algorithm to optimize the proposed challenging problem , and present a theoretical analysis of the connections between our method and k-means clustering and spectral clustering . we also further extend the new clustering model to projected clustering to handle high-dimensional data . extensive empirical results on both story_separator_special_tag graph-based clustering methods perform clustering on a fixed input data graph . if this initial construction is of low quality then the resulting clustering may also be of low quality . moreover , existing graph-based clustering methods require post-processing on the data graph to extract the clustering indicators . we address both of these drawbacks by allowing the data graph itself to be adjusted as part of the clustering procedure . in particular , our constrained laplacian rank ( clr ) method learns a graph with exactly k connected components ( where k is the number of clusters ) . we develop two versions of this method , based upon the l1-norm and the l2-norm , which yield two new graph-based clustering objectives . we derive optimization algorithms to solve these objectives . experimental results on synthetic datasets and real-world benchmark datasets exhibit the effectiveness of this new graph-based clustering method . story_separator_special_tag nonnegative matrix factorization ( nmf ) and concept factorization ( cf ) have been widely used for different purposes such as feature learning , dimensionality reduction and image clustering in data representation . however , cf is a variant of nmf and is an unsupervised learning method that does not make use of the available label information to guide the clustering process . in this paper , we put forward a semi-supervised discriminative concept factorization ( sdcf ) method , which utilizes the limited label information of the data as a discriminative constraint . this constraint forces the representations of data points within the same class to be very close together or aligned on the same axis in the new representation . furthermore , in order to utilize local manifold regularization , we propose a novel semi-supervised graph-based discriminative concept factorization ( gdcf ) method , which incorporates the local manifold regularization and the label information of the data into the cf to improve the performance of cf . gdcf not only encodes the local geometrical structure of the data space by constructing a k-nearest-neighbor graph , but also takes into account the available label information .
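the rank constraint behind clr exploits a classical fact : the multiplicity of the zero eigenvalue of a graph laplacian equals the number of connected components , so forcing $\operatorname{rank} ( l ) = n - k$ yields exactly k clusters ; the sketch below verifies this on a toy graph .

import numpy as np

# minimal sketch : count connected components as the number of ( numerically )
# zero eigenvalues of the graph laplacian .
a = np.zeros((6, 6))
for i, j in [(0, 1), (1, 2), (3, 4)]:    # components {0,1,2} , {3,4} , {5}
    a[i, j] = a[j, i] = 1.0
l = np.diag(a.sum(1)) - a                # unnormalized laplacian
eigvals = np.linalg.eigvalsh(l)
n_components = int((eigvals < 1e-10).sum())
print(n_components)                      # -> 3 , matching the three components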
thus , the discriminative story_separator_special_tag the success of machine learning algorithms generally depends on data representation , and we hypothesize that this is because different representations can entangle and hide more or less the different explanatory factors of variation behind the data . although specific domain knowledge can be used to help design representations , learning with generic priors can also be used , and the quest for ai is motivating the design of more powerful representation-learning algorithms implementing such priors . this paper reviews recent work in the area of unsupervised feature learning and deep learning , covering advances in probabilistic models , auto-encoders , manifold learning , and deep networks . this motivates longer-term unanswered questions about the appropriate objectives for learning good representations , for computing representations ( i.e. , inference ) , and the geometrical connections between representation learning , density estimation and manifold learning . story_separator_special_tag recently , multi-view representation learning has become a rapidly growing direction in machine learning and data mining areas . this paper introduces two categories for multi-view representation learning : multi-view representation alignment and multi-view representation fusion . accordingly , we first review the representative methods and theories of multi-view representation learning based on the perspective of alignment , such as correlation-based alignment . representative examples are canonical correlation analysis ( cca ) and its several extensions . then , from the perspective of representation fusion , we investigate the advancement of multi-view representation learning that ranges from generative methods including multi-modal topic learning , multi-view sparse coding , and multi-view latent space markov networks , to neural network-based methods including multi-modal autoencoders , multi-view convolutional neural networks , and multi-modal recurrent neural networks . further , we also investigate several important applications of multi-view representation learning . overall , this survey aims to provide an insightful overview of the theoretical foundation and state-of-the-art developments in the field of multi-view representation learning and to help researchers find the most appropriate tools for particular applications . story_separator_special_tag in this technology-based era , network-based systems are facing new cyber-attacks on a daily basis . traditional cybersecurity approaches are based on old threat-knowledge databases and need to be updated daily to stand against the new generation of cyber-threats and protect the underlying network-based systems . along with updating threat-knowledge databases , there is a need for proper management and processing of data generated by sensitive real-time applications . in recent years , various computing platforms based on representation learning algorithms have emerged as a useful resource to manage and exploit the generated data to extract meaningful information . if these platforms are properly utilized , then strong cybersecurity systems can be developed to protect the underlying network-based systems and support sensitive real-time applications . in this survey , we highlight various cyber-threats , real-life examples , and initiatives taken by various international organizations .
we discuss various computing platforms based on representation learning algorithms to process and analyze the generated data . we highlight various popular datasets introduced by well-known global organizations that can be used to train representation learning algorithms to predict and detect threats . we also provide an in-depth analysis of research efforts based on story_separator_special_tag graphs arise naturally in many real-world applications including social networks , recommender systems , ontologies , biology , and computational finance . traditionally , machine learning models for graphs have been mostly designed for static graphs . however , many applications involve evolving graphs . this introduces important challenges for learning and inference since nodes , attributes , and edges change over time . in this survey , we review the recent advances in representation learning for dynamic graphs , including dynamic knowledge graphs . we describe existing models from an encoder-decoder perspective , categorize these encoders and decoders based on the techniques they employ , and analyze the approaches in each category . we also review several prominent applications and widely used datasets and highlight directions for future research . story_separator_special_tag with the widespread use of information technologies , information networks are becoming increasingly popular for capturing complex relationships across various disciplines , such as social networks , citation networks , telecommunication networks , and biological networks . analyzing these networks sheds light on different aspects of social life such as the structure of societies , information diffusion , and communication patterns . in reality , however , the large scale of information networks often makes network analytic tasks computationally expensive or intractable . network representation learning has been recently proposed as a new learning paradigm to embed network vertices into a low-dimensional vector space , by preserving network topology structure , vertex content , and other side information . this allows the original network to be easily handled in the new vector space for further analysis . in this survey , we perform a comprehensive review of the current literature on network representation learning in the data mining and machine learning field . we propose new taxonomies to categorize and summarize the state-of-the-art network representation learning techniques according to the underlying learning mechanisms , the network information intended to be preserved , as well as the algorithmic designs and methodologies . story_separator_special_tag as a paradigm to recover unknown entries of a matrix from partial observations , low-rank matrix completion ( lrmc ) has generated a great deal of interest . over the years , there have been many works on this topic , but it might not be easy to grasp the essential knowledge from these studies . this is mainly because many of these works are highly theoretical or are proposals of new lrmc techniques . in this paper , we give a contemporary survey on lrmc . in order to provide a better view , insight , and understanding of the potentials and limitations of lrmc , we present early scattered results in a structured and accessible way . specifically , we classify the state-of-the-art lrmc techniques into two main categories and then explain each category in detail . we next discuss issues to be considered when applying lrmc techniques .
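a minimal sketch of one simple lrmc heuristic , an svd-based iterative imputation ( not any specific algorithm from the survey ; rank , observation rate and iteration count are illustrative ) : alternately fill the missing entries from the current estimate and project onto the set of rank-r matrices with a truncated svd .

import numpy as np

# minimal sketch of iterative low-rank imputation for matrix completion .
rng = np.random.default_rng(0)
m = rng.normal(size=(30, 3)) @ rng.normal(size=(3, 20))  # ground truth , rank 3
mask = rng.random(m.shape) < 0.6                         # observed entries
z = np.where(mask, m, 0.0)                               # initial fill with zeros
for _ in range(100):
    u, s, vt = np.linalg.svd(z, full_matrices=False)
    low_rank = (u[:, :3] * s[:3]) @ vt[:3]               # rank-3 projection
    z = np.where(mask, m, low_rank)                      # keep observed entries fixed
print(np.abs((low_rank - m)[~mask]).max())               # error on the missing entries
                                                         # should shrink over iterations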
these include the intrinsic properties required for matrix recovery and how to exploit a special structure in lrmc design . we also discuss the convolutional neural network ( cnn ) based lrmc algorithms exploiting the graph structure of a low-rank matrix . further , we present the recovery performance story_separator_special_tag during the past several years , as one of the most successful applications of sparse coding and dictionary learning , dictionary-based face recognition has received significant attention . although some surveys of sparse coding and dictionary learning have been reported , there is no specialized survey concerning dictionary learning algorithms for face recognition . this paper provides a survey of dictionary learning algorithms for face recognition . to provide a comprehensive overview , we not only categorize existing dictionary learning algorithms for face recognition but also present the details of each category . since the number of atoms has an important impact on classification performance , we also review the algorithms for selecting the number of atoms . specifically , we select six typical dictionary learning algorithms with different numbers of atoms to perform experiments on face databases . in summary , this paper provides a broad view of dictionary learning algorithms for face recognition and advances study in this field . it is very useful for readers to understand the profiles of this subject and to grasp the theoretical rationales and potentials as well as their applicability to different cases of face recognition . story_separator_special_tag sparse representation has attracted much attention from researchers in the fields of signal processing , image processing , computer vision , and pattern recognition . sparse representation also has a good reputation in both theoretical research and practical applications . many different algorithms have been proposed for sparse representation . the main purpose of this paper is to provide a comprehensive study and an updated review on sparse representation and to supply guidance for researchers . the taxonomy of sparse representation methods can be studied from various viewpoints . for example , in terms of the different norm minimizations used in the sparsity constraints , the methods can be roughly categorized into five groups : 1 ) sparse representation with $l_0$-norm minimization ; 2 ) sparse representation with $l_p$-norm ( $0 < p < 1$ ) minimization ; 3 ) sparse representation with $l_1$-norm minimization ; 4 ) sparse representation with $l_{2,1}$-norm minimization ; and 5 ) sparse representation with $l_2$-norm minimization . in this paper , a comprehensive overview of sparse representation is provided . story_separator_special_tag nonnegative matrix factorization ( nmf ) was first introduced as a low-rank matrix approximation technique , and has enjoyed a wide range of applications . although nmf does not seem related to the clustering problem at first , it was shown that they are closely linked . in this report , we provide a gentle introduction to clustering and nmf before reviewing the theoretical relationship between them . we then explore several nmf variants , namely sparse nmf , projective nmf , nonnegative spectral clustering and cluster-nmf , along with their clustering interpretations . story_separator_special_tag nonnegative matrix factorization ( nmf ) , a relatively novel paradigm for dimensionality reduction , has been in the ascendant since its inception .
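for the $l_1$-norm case above , a standard solver is ista : a gradient step on the least-squares term followed by soft thresholding , the proximal operator of the $l_1$ penalty ; a minimal sketch on a synthetic dictionary ( all sizes and the penalty weight are illustrative ) follows .

import numpy as np

# minimal sketch of l1 sparse coding via ista on a synthetic problem .
rng = np.random.default_rng(0)
d = rng.normal(size=(20, 50))
d /= np.linalg.norm(d, axis=0)           # unit-norm dictionary atoms
a_true = np.zeros(50)
a_true[[3, 17, 42]] = [1.0, -2.0, 1.5]   # a 3-sparse ground-truth code
x = d @ a_true
lam = 0.05
t = 1.0 / np.linalg.norm(d, 2) ** 2      # step size from the lipschitz constant
a = np.zeros(50)
for _ in range(500):
    g = a - t * d.T @ (d @ a - x)        # gradient step on 0.5 * ||x - d a||^2
    a = np.sign(g) * np.maximum(np.abs(g) - t * lam, 0.0)  # soft thresholding
print(np.flatnonzero(np.abs(a) > 1e-3))  # typically recovers the support {3, 17, 42}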
story_separator_special_tag nonnegative matrix factorization ( nmf ) , a relatively novel paradigm for dimensionality reduction , has been in the ascendant since its inception . it incorporates the nonnegativity constraint and thus obtains a parts-based representation while correspondingly enhancing interpretability . this survey paper mainly focuses on the theoretical research into nmf over the last 5 years , where the principles , basic models , properties , and algorithms of nmf along with its various modifications , extensions , and generalizations are summarized systematically . the existing nmf algorithms are divided into four categories : basic nmf ( bnmf ) , constrained nmf ( cnmf ) , structured nmf ( snmf ) , and generalized nmf ( gnmf ) , upon which the design principles , characteristics , problems , relationships , and evolution of these algorithms are presented and analyzed comprehensively . some related work not on nmf that nmf should learn from or has connections with is involved too . moreover , some open issues remaining to be solved are discussed . several relevant application areas of nmf are also briefly described . this survey aims to construct an integrated , state-of-the-art framework story_separator_special_tag we survey the techniques for image-based rendering ( ibr ) and for compressing image-based representations . unlike traditional three-dimensional ( 3-d ) computer graphics , in which the 3-d geometry of the scene is known , ibr techniques render novel views directly from input images . ibr techniques can be classified into three categories according to how much geometric information is used : rendering without geometry , rendering with implicit geometry ( i.e. , correspondence ) , and rendering with explicit geometry ( either with approximate or accurate geometry ) . we discuss the characteristics of these categories and their representative techniques . ibr techniques demonstrate a surprisingly diverse range in their extent of use of images and geometry in representing 3-d scenes . we explore the issues in trading off the use of images and geometry by revisiting plenoptic-sampling analysis and the notions of view dependency and geometric proxies . finally , we highlight compression techniques specifically designed for image-based representations . such compression techniques are important in making ibr techniques practical . story_separator_special_tag in a sparse-representation-based face recognition scheme , the desired dictionary should have good representational power ( i.e. , being able to span the subspace of all faces ) while supporting optimal discrimination of the classes ( i.e. , different human subjects ) . we propose a method to learn an over-complete dictionary that attempts to simultaneously achieve the above two goals . the proposed method , discriminative k-svd ( d-ksvd ) , is based on extending the k-svd algorithm by incorporating the classification error into the objective function , thus allowing the performance of a linear classifier and the representational power of the dictionary to be considered at the same time by the same optimization procedure . the d-ksvd algorithm finds the dictionary and solves for the classifier using a procedure derived from the k-svd algorithm , which has proven efficiency and performance . this is in contrast to most existing work that relies on iteratively solving sub-problems with the hope of achieving the global optimum through iterative approximation . we evaluate the proposed method using two commonly-used face databases , the extended yaleb database and the ar database , with detailed comparison to 3 alternative approaches , including the leading
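the d-ksvd construction above admits a compact implementation trick : the joint objective min || y - dx ||^2 + gamma * || h - wx ||^2 is equivalent to running a standard dictionary learner on a stacked data matrix . the sketch below only builds the stacked problem and splits the result ; the weight gamma and the post-hoc atom renormalization are illustrative assumptions , not the authors' exact settings .

```python
import numpy as np

def stack_for_dksvd(Y, H, gamma):
    # Y: (d, n) training signals; H: (c, n) one-hot label matrix.
    # k-svd on this stacked matrix optimizes both terms at once.
    return np.vstack([Y, np.sqrt(gamma) * H])

def unstack_dictionary(D_stacked, d, gamma):
    # split the learned stacked dictionary into (D, W); one common
    # post-processing renormalizes the atoms of D to unit norm and
    # rescales W with the same factors.
    D, W = D_stacked[:d], D_stacked[d:] / np.sqrt(gamma)
    norms = np.maximum(np.linalg.norm(D, axis=0, keepdims=True), 1e-12)
    return D / norms, W / norms
```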
story_separator_special_tag in this paper , we propose an analysis mechanism based structured analysis discriminative dictionary learning ( addl ) framework . addl seamlessly integrates the analysis discriminative dictionary learning , analysis representation and analysis classifier training into a unified model . the applied analysis mechanism can make sure that the learnt dictionaries , representations and linear classifiers over different classes are independent and discriminating as much as possible . the dictionary is obtained by minimizing a reconstruction error and an analytical incoherence promoting term that encourages the sub-dictionaries associated with different classes to be independent . to obtain the representation coefficients , addl imposes a sparse l2,1-norm constraint on the coding coefficients instead of using the l0 or l1-norm , since the l0 or l1-norm constraint applied in most existing dl criteria makes the training phase time consuming . the codes-extraction projection that bridges data with the sparse codes by extracting special features from the given samples is calculated via minimizing a sparse codes approximation term . then we compute a linear classifier based on the approximated sparse codes by an analysis mechanism to simultaneously consider the classification and representation powers . thus , the classification approach of our model is very story_separator_special_tag discriminative dictionary learning ( dl ) has been widely studied in various pattern classification problems . most of the existing dl methods aim to learn a synthesis dictionary to represent the input signal while enforcing the representation coefficients and/or representation residual to be discriminative . however , the l0 or l1-norm sparsity constraint on the representation coefficients adopted in most dl methods makes the training and testing phases time consuming . we propose a new discriminative dl framework , namely projective dictionary pair learning ( dpl ) , which learns a synthesis dictionary and an analysis dictionary jointly to achieve the goal of signal representation and discrimination . compared with conventional dl methods , the proposed dpl method can not only greatly reduce the time complexity in the training and testing phases , but also lead to very competitive accuracies in a variety of visual classification tasks .
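a rough numpy rendering of the dictionary pair idea described above , using a relaxed objective min || x - da ||^2 + tau * || a - px ||^2 + lam * || p ||^2 in which every block update is a closed-form least-squares solve ; the weights tau and lam , the atom renormalization and the random initialization are illustrative , and the class-wise structure of the published dpl model is omitted for brevity .

```python
import numpy as np

def dpl_sketch(X, k, tau=0.05, lam=0.01, n_iter=30):
    # relaxed dictionary pair learning:
    #   min_{D,P,A} ||X - D A||_F^2 + tau ||A - P X||_F^2 + lam ||P||_F^2
    rng = np.random.default_rng(0)
    d, n = X.shape
    D = rng.standard_normal((d, k))
    P = rng.standard_normal((k, d))
    I_k = np.eye(k)
    for _ in range(n_iter):
        # code update: ridge solve coupling synthesis and analysis views
        A = np.linalg.solve(D.T @ D + tau * I_k, D.T @ X + tau * (P @ X))
        # analysis dictionary: P = A X^T (X X^T + (lam/tau) I)^{-1}
        M = X @ X.T + (lam / tau) * np.eye(d)
        P = np.linalg.solve(M, X @ A.T).T
        # synthesis dictionary: D = X A^T (A A^T + eps I)^{-1}, atoms renormalized
        D = np.linalg.solve(A @ A.T + 1e-6 * I_k, A @ X.T).T
        D /= np.maximum(np.linalg.norm(D, axis=0, keepdims=True), 1e-12)
    return D, P, A
```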
story_separator_special_tag the projective dictionary pair learning ( dpl ) model jointly seeks a synthesis dictionary and an analysis dictionary by extracting the block-diagonal coefficients with an incoherence-constrained analysis dictionary . however , dpl fails to discover the underlying subspaces and salient features at the same time , and it cannot encode the neighborhood information of the embedded coding coefficients , especially adaptively . in addition , although the data can be well reconstructed via the minimization of the reconstruction error , useful distinguishing salient feature information may be lost and incorporated into the noise term . in this article , we propose a novel self-expressive adaptive locality-preserving framework : twin-incoherent self-expressive latent dpl ( slatdpl ) . to capture the salient features from the samples , slatdpl minimizes a latent reconstruction error by integrating the coefficient learning and salient feature extraction into a unified model , which can also be used to simultaneously discover the underlying subspaces and salient features . to make the coefficients block diagonal and ensure that the salient features are discriminative , our slatdpl regularizes them by imposing a twin-incoherence constraint . moreover , slatdpl utilizes a self-expressive adaptive weighting strategy that uses normalized block-diagonal coefficients story_separator_special_tag we propose a novel structured discriminative block-diagonal dictionary learning method , referred to as scalable locality-constrained projective dictionary learning ( lc-pdl ) , for efficient representation and classification . to improve the scalability by saving both training and testing time , our lc-pdl aims at learning a structured discriminative dictionary and a block-diagonal representation without using the costly l0/l1-norm . besides , it avoids the extra time-consuming sparse reconstruction process with the well-trained dictionary for each new sample , as many existing models require . more importantly , lc-pdl avoids using the complementary data matrix to learn the sub-dictionary over each class . to enhance the performance , we incorporate a locality constraint of atoms into the dl procedures to keep local information and obtain the codes of samples over each class separately . a block-diagonal discriminative approximation term is also derived to learn a discriminative projection to bridge data with their codes by extracting the special block-diagonal features from data , which can ensure the approximate coefficients to associate with their label information clearly . then , a robust multiclass classifier is trained over the extracted block-diagonal codes for accurate label predictions . experimental results verify the effectiveness of our algorithm . story_separator_special_tag in recent years there has been a growing interest in the study of sparse representation of signals . using an overcomplete dictionary that contains prototype signal-atoms , signals are described by sparse linear combinations of these atoms . applications that use sparse representation are many and include compression , regularization in inverse problems , feature extraction , and more . recent activity in this field has concentrated mainly on the study of pursuit algorithms that decompose signals with respect to a given dictionary . designing dictionaries to better fit the above model can be done by either selecting one from a prespecified set of linear transforms or adapting the dictionary to a set of training signals . both of these techniques have been considered , but this topic is largely still open . in this paper we propose a novel algorithm for adapting dictionaries in order to achieve sparse signal representations . given a set of training signals , we seek the dictionary that leads to the best representation for each member in this set , under strict sparsity constraints . we present a new method , the k-svd algorithm , generalizing the k-means clustering process . k-svd is an iterative method that alternates
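the k-svd abstract above is cut off , but the alternation it names is standard : a sparse coding step with the dictionary fixed , followed by atom-by-atom rank-1 updates via the svd . a minimal sketch , assuming omp for the coding step and a fixed sparsity level per signal :

```python
import numpy as np
from sklearn.linear_model import orthogonal_mp

def ksvd(Y, n_atoms, sparsity, n_iter=10):
    # Y: (d, n) training signals; alternates sparse coding and atom updates
    rng = np.random.default_rng(0)
    D = rng.standard_normal((Y.shape[0], n_atoms))
    D /= np.linalg.norm(D, axis=0)
    for _ in range(n_iter):
        X = orthogonal_mp(D, Y, n_nonzero_coefs=sparsity)   # sparse coding step
        for k in range(n_atoms):
            users = np.flatnonzero(X[k])                    # signals using atom k
            if users.size == 0:
                continue
            # residual without atom k's contribution, restricted to its users
            E = Y[:, users] - D @ X[:, users] + np.outer(D[:, k], X[k, users])
            U, s, Vt = np.linalg.svd(E, full_matrices=False)
            D[:, k] = U[:, 0]                               # rank-1 atom update
            X[k, users] = s[0] * Vt[0]                      # matching coefficients
    return D, X
```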
story_separator_special_tag both dictionary learning ( dl ) and convolutional neural networks ( cnn ) are powerful image representation learning systems based on different mechanisms and principles , so whether we can seamlessly integrate them to improve performance is worth exploring . to address this issue , we propose a novel generalized end-to-end representation learning architecture , dubbed convolutional dictionary pair learning network ( cdpl-net ) in this paper , which integrates the learning schemes of the cnn and dictionary pair learning into a unified framework . generally , the architecture of cdpl-net includes two convolutional/pooling layers and two dictionary pair learning ( dpl ) layers in the representation learning module . besides , it uses two fully-connected layers as the multi-layer perceptron layer in the nonlinear classification module . in particular , the dpl layer can jointly formulate the discriminative synthesis and analysis representations driven by minimizing the batch based reconstruction error over the flattened feature maps from the convolution/pooling layer . moreover , the dpl layer uses the l1-norm on the analysis dictionary so that sparse representation can be delivered , and the embedding process will also be robust to noise . to speed up the training process of the dpl story_separator_special_tag in this article , we propose a structured robust adaptive dictionary pair learning ( ra-dpl ) framework for discriminative sparse representation ( sr ) learning . to achieve a powerful representation ability for the available samples , the setting of ra-dpl seamlessly integrates robust projective dpl , locality-adaptive srs , and discriminative coding coefficient learning into a unified learning framework . specifically , ra-dpl improves existing projective dpl in four perspectives . first , it applies a sparse $l_{2,1}$-norm-based metric to encode the reconstruction error to deliver robust projective dictionary pairs , and the $l_{2,1}$-norm has the potential to minimize the error . second , it imposes the robust $l_{2,1}$-norm clearly on the analysis dictionary to ensure the sparse property of the coding coefficients rather than using the costly $l_0/l_1$-norm . as such , the robustness of the data representation and the efficiency of the learning process are jointly considered to guarantee the efficacy of our ra-dpl . third , ra-dpl conceives a structured reconstruction weight learning paradigm to preserve story_separator_special_tag we propose a joint subspace recovery and enhanced locality-based robust flexible label consistent dictionary learning method called robust flexible discriminative dictionary learning ( rfddl ) . the rfddl mainly improves the data representation and classification abilities by enhancing the robustness to sparse errors and encoding the locality , reconstruction error , and label consistency more accurately . first , for robustness to noise and sparse errors in data and atoms , the rfddl aims at recovering the underlying clean data and clean atom subspaces jointly , and then performs dl and encodes the locality in the recovered subspaces . second , to enable data sampled from a nonlinear manifold to be handled potentially and to obtain an accurate reconstruction by avoiding overfitting , the rfddl minimizes the reconstruction error in a flexible manner . third , to encode the label consistency accurately , the rfddl involves a discriminative flexible sparse code error to encourage the coefficients to be soft . fourth , to encode the locality well , the rfddl defines the laplacian matrix over recovered atoms , includes label information of atoms in terms of intra-class compactness and inter-class separation , and associates with group sparse
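several of the models above ( addl , ra-dpl , rfddl ) lean on the l2,1-norm to obtain row-sparse errors or coefficients . the quantity itself and its proximal operator are simple to state ; this numpy sketch is generic and not tied to any one of those models :

```python
import numpy as np

def l21_norm(E):
    # sum of the l2 norms of the rows: sum_i ||E[i, :]||_2
    return np.linalg.norm(E, axis=1).sum()

def prox_l21(E, tau):
    # row-wise shrinkage: proximal operator of tau * ||.||_{2,1}.
    # it zeroes out entire rows, which is why the norm is a natural
    # model for sample-wise (row-wise) corruption.
    norms = np.linalg.norm(E, axis=1, keepdims=True)
    scale = np.maximum(1.0 - tau / np.maximum(norms, 1e-12), 0.0)
    return scale * E
```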
story_separator_special_tag a label consistent k-svd ( lc-ksvd ) algorithm to learn a discriminative dictionary for sparse coding is presented . in addition to using class labels of training data , we also associate label information with each dictionary item ( columns of the dictionary matrix ) to enforce discriminability in sparse codes during the dictionary learning process . more specifically , we introduce a new label consistency constraint called `` discriminative sparse-code error '' and combine it with the reconstruction error and the classification error to form a unified objective function . the optimal solution is efficiently obtained using the k-svd algorithm . our algorithm learns a single overcomplete dictionary and an optimal linear classifier jointly . an incremental dictionary learning algorithm is presented for the situation of limited memory resources . it yields dictionaries so that feature points with the same class labels have similar sparse codes . experimental results demonstrate that our algorithm outperforms many recently proposed sparse-coding techniques for face , action , scene , and object category recognition under the same learning conditions . story_separator_special_tag in this paper , we discuss sparse codes auto-extractor based classification . a joint label consistent embedding and dictionary learning approach is proposed for delivering a linear sparse codes auto-extractor and a multi-class classifier by simultaneously minimizing the sparse reconstruction , discriminative sparse-code , code approximation and classification errors . the auto-extractor is characterized by a projection that bridges signals with sparse codes by learning special features from input signals for characterizing sparse codes . the classifier is trained based on the extracted sparse codes directly . in our setting , the performance of the classifier depends on the discriminability of the sparse codes , and the representation power of the extractor depends on the discriminability of the input sparse codes , so we incorporate label information into the dictionary learning to enhance the discriminability of the sparse codes . thus , for inductive classification , our model forms an integrated process from test signals to sparse codes and finally to assigned labels , which is essentially different from existing sparse coding based approaches that involve an extra sparse reconstruction with the trained dictionary for each test signal . remarkable results are obtained by our model compared with other state-of-the-art methods .
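the two label-consistent models above share the same computational trick as d-ksvd : the discriminative sparse-code error and the classification error are folded into one stacked reconstruction problem . a sketch of the stacking step , with illustrative weights alpha and beta and a uniform allocation of atoms per class assumed for simplicity :

```python
import numpy as np

def lcksvd_stack(Y, labels, n_atoms_per_class, alpha, beta):
    # fold ||Q - A X||^2 (discriminative sparse-code error) and
    # ||H - W X||^2 (classification error) into one k-svd style problem.
    # Y: (d, n) signals; labels: (n,) integer class labels.
    labels = np.asarray(labels)
    classes = np.unique(labels)
    n = Y.shape[1]
    K = n_atoms_per_class * classes.size
    Q = np.zeros((K, n))                      # ideal discriminative codes
    H = np.zeros((classes.size, n))           # one-hot label matrix
    for ci, c in enumerate(classes):
        idx = np.flatnonzero(labels == c)
        Q[ci * n_atoms_per_class:(ci + 1) * n_atoms_per_class, idx] = 1.0
        H[ci, idx] = 1.0
    # run any k-svd style learner on this stacked matrix, then split
    # the learned dictionary back into its (D, A, W) blocks.
    return np.vstack([Y, np.sqrt(alpha) * Q, np.sqrt(beta) * H])
```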
story_separator_special_tag terrain scene classification plays an important role in various synthetic aperture radar ( sar ) image understanding and interpretation tasks . this paper presents a novel approach to characterize sar image content by addressing category with a limited number of labeled samples . in the proposed approach , each sar image patch is characterized by a discriminant feature which is generated in a semisupervised manner by utilizing a sparse ensemble learning procedure . in particular , a nonnegative sparse coding procedure is applied on the given sar image patch set to generate the feature descriptors first . the set is combined with a limited number of labeled sar image patches and an abundant number of unlabeled ones . then , a semisupervised sampling approach is proposed to construct a set of weak learners , in which each one is modeled by a logistic regression procedure . the discriminant information can be introduced by projecting each sar image patch on each weak learner . finally , the features of sar image patches are produced by a sparse ensemble procedure which can reduce the redundancy of multiple weak learners . experimental results show that the proposed discriminant feature learning approach can achieve a higher story_separator_special_tag data may often contain noise or irrelevant information , which negatively affects the generalization capability of machine learning algorithms . the objective of dimension reduction algorithms , such as principal component analysis ( pca ) , non-negative matrix factorization ( nmf ) , random projection ( rp ) , and auto-encoder ( ae ) , is to reduce the noise or irrelevant information in the data . the features of pca ( eigenvectors ) and linear ae are not able to represent data as parts ( e.g . the nose in a face image ) . on the other hand , nmf and non-linear ae are hampered by slow learning speed , and rp only represents a subspace of the original data . this paper introduces a dimension reduction framework which to some extent represents data as parts , has fast learning speed , and learns the between-class scatter subspace . to this end , this paper investigates a linear and non-linear dimension reduction framework referred to as extreme learning machine ae ( elm-ae ) and sparse elm-ae ( selm-ae ) . in contrast to tied weight ae , the hidden neurons in elm-ae and selm-ae need not be tuned , and their
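a minimal sketch of the elm-ae recipe summarized above : random ( orthogonalized ) input weights that are never tuned , and a single closed-form ridge solve for the output weights ; the tanh activation , the ridge constant c and the assumption n_hidden <= n_features are illustrative choices .

```python
import numpy as np

def elm_ae(X, n_hidden, C=1e3):
    # X: (n_samples, n_features); assumes n_hidden <= n_features so that the
    # random input weights can be orthogonalized by a thin qr factorization.
    rng = np.random.default_rng(0)
    d = X.shape[1]
    A, _ = np.linalg.qr(rng.standard_normal((d, n_hidden)))  # random orthogonal weights
    b = rng.standard_normal(n_hidden)
    H = np.tanh(X @ A + b)                     # untuned hidden representation
    # closed-form ridge solve: beta maps H back to X (auto-encoding)
    beta = np.linalg.solve(H.T @ H + np.eye(n_hidden) / C, H.T @ X)
    return X @ beta.T                          # (n_samples, n_hidden) features
```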
story_separator_special_tag we propose two nuclear- and l2,1-norm regularized 2d neighborhood preserving projection ( 2dnpp ) methods for extracting representative 2d image features . 2dnpp extracts neighborhood preserving features by minimizing a frobenius norm-based reconstruction error that is very sensitive to noise and outliers in the given data . to make the distance metric more reliable and robust , and to encode the neighborhood reconstruction error more accurately , we minimize the nuclear- and l2,1-norm-based reconstruction error , respectively , and measure it over each image . technically , we propose two enhanced variants of 2dnpp : nuclear-norm-based 2dnpp and sparse reconstruction-based 2dnpp . besides , to optimize the projection for more promising feature extraction , we also add the nuclear- and sparse l2,1-norm constraints on it accordingly , where the l2,1-norm ensures the projection to be sparse in rows so that discriminative features are learnt in the latent subspace and the nuclear-norm ensures the low-rank property of features by projecting data into their respective subspaces . by fully considering the neighborhood preserving power , using a more reliable and robust distance metric , and imposing the low-rank or sparse constraints on projections at the same time , our methods can outperform related state-of-the-art methods in a variety of story_separator_special_tag a large family of algorithms , supervised or unsupervised , stemming from statistics or geometry theory , has been designed to provide different solutions to the problem of dimensionality reduction . despite the different motivations of these algorithms , we present in this paper a general formulation known as graph embedding to unify them within a common framework . in graph embedding , each algorithm can be considered as the direct graph embedding or its linear/kernel/tensor extension of a specific intrinsic graph that describes certain desired statistical or geometric properties of a data set , with constraints from scale normalization or a penalty graph that characterizes a statistical or geometric property that should be avoided . furthermore , the graph embedding framework can be used as a general platform for developing new dimensionality reduction algorithms . by utilizing this framework as a tool , we propose a new supervised dimensionality reduction algorithm called marginal fisher analysis ( mfa ) in which the intrinsic graph characterizes the intraclass compactness and connects each data point with its neighboring points of the same class , while the penalty graph connects the marginal points and characterizes the interclass separability . we show that mfa effectively overcomes the
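the linearization of the graph embedding framework described above reduces to a single generalized eigenproblem : minimize the intrinsic graph's weighted distances subject to a penalty-graph ( or scale-normalization ) constraint . a sketch , assuming dense symmetric affinity matrices and a small ridge term for numerical stability :

```python
import numpy as np
from scipy.linalg import eigh

def linear_graph_embedding(X, W, W_pen, n_dim, ridge=1e-6):
    # X: (d, n) data; W, W_pen: (n, n) symmetric affinities for the intrinsic
    # and penalty graphs. solve X L X^T w = lambda * X L_p X^T w and keep
    # the eigenvectors with the smallest eigenvalues.
    L = np.diag(W.sum(1)) - W                      # intrinsic graph laplacian
    Lp = np.diag(W_pen.sum(1)) - W_pen             # penalty graph laplacian
    A = X @ L @ X.T
    B = X @ Lp @ X.T + ridge * np.eye(X.shape[0])  # keep B positive definite
    _, vecs = eigh(A, B)                           # generalized eigenproblem
    return vecs[:, :n_dim]                         # projection directions
```

instantiating W and W_pen with class-aware neighborhoods recovers mfa-style methods ; other choices recover many of the algorithms the framework unifies .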
story_separator_special_tag many feature extraction methods reduce the dimensionality of data based on the input graph matrix . the graph construction which reflects relationships among raw data points is crucial to the quality of the resulting low-dimensional representations . to improve the quality of the graph and make it more suitable for feature extraction tasks , we incorporate a new graph learning mechanism into feature extraction and add an interaction between the learned graph and the low-dimensional representations . based on this learning mechanism , we propose a novel framework , termed unsupervised single view feature extraction with structured graph ( fesg ) , which learns both a transformation matrix and an ideal structured graph containing the clustering information . moreover , we propose a novel way to extend the fesg framework to multi-view learning tasks . the extension is named unsupervised multiple views feature extraction with structured graph ( mfesg ) , which learns an optimal weight for each view automatically without requiring an additional parameter . to show the effectiveness of the framework , we design two concrete formulations within fesg and mfesg , together with two efficient solving algorithms . promising experimental results on plenty of real-world datasets have validated story_separator_special_tag in this paper , we propose a novel unsupervised nonnegative adaptive feature extraction ( nafe ) algorithm for data representation and classification . the formulation of nafe integrates sparsity constrained nonnegative matrix factorization ( nmf ) , representation learning , and adaptive reconstruction weight learning into a unified model . specifically , nafe performs feature and weight learning over the new robust representations of nmf for more accurate measurement and representation . for nonnegative adaptive feature extraction , our nafe first utilizes sparsity constrained nmf to obtain new and robust representations of the original data . to preserve the manifold structures of the learnt new representations , we also incorporate a neighborhood reconstruction error over the weight matrix for joint minimization . note that to further improve the representation power , the weights are jointly shared in the new low-dimensional nonnegative representation space , the low-dimensional nonlinear manifold space , and the low-dimensional projective subspace , i.e. , local neighborhood information is clearly preserved in different feature spaces so that informative representations and features can be jointly obtained . to enable nafe to extract features from new data , we also include a feature approximation error by a linear story_separator_special_tag we explore the discriminative feature extraction problem . a semisupervised local multimanifold isomap by linear embedding is proposed . our model can use labeled and unlabeled data to deliver manifold features , and it aims to minimize pairwise intraclass distances in the same manifold and to maximize the distances between different manifolds . in this paper , we mainly propose a semi-supervised local multi-manifold isomap learning framework by linear embedding , termed ssmm-isomap , that can apply the labeled and unlabeled training samples to perform the joint learning of neighborhood preserving local nonlinear manifold features and a linear feature extractor . the formulation of ssmm-isomap aims at minimizing the pairwise distances of intra-class points in the same manifold and maximizing the distances over different manifolds . to enhance the performance of nonlinear manifold feature learning , we also incorporate the neighborhood reconstruction error to preserve local topology structures between both labeled and unlabeled samples . to enable our ssmm-isomap to extract local manifold features from outside new data , we also add a feature approximation error that correlates manifold features with embedded features by the jointly learnt feature extractor . thus , the learnt linear extractor can extract the local manifold features from the new story_separator_special_tag a new graph based constrained semi-supervised learning ( g-cssl ) framework is proposed . pairwise constraints ( pc ) are used to specify the types ( intra- or inter-class ) of points with labels . since the number of labeled data is typically small in the ssl setting , the core idea of this framework is to create and enrich the pc sets using the propagated soft labels from both labeled and unlabeled data by special label propagation ( slp ) , and hence to obtain more supervised information for delivering enhanced performance . we also propose a two-stage sparse coding , termed tsc , for achieving an adaptive neighborhood for slp . the first stage aims at correcting the possible corruptions in data and training an informative dictionary , and the second stage focuses on sparse coding . to deliver enhanced inter-class separation and intra-class compactness , we also present a mixed soft-similarity measure to evaluate the similarity/dissimilarity of constrained pairs using the sparse codes and the probabilistic values output by slp . simulations on synthetic and real datasets demonstrated the validity of our algorithms for data representation and image recognition , compared with other related state-of-the-art graph based semi-supervised techniques .
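both g-cssl above and the sl-lda entries further below rely on propagating soft labels over a graph . a minimal closed-form sketch in the style of zhou et al.'s label propagation ; the affinity construction , alpha and the row normalization are illustrative choices rather than the slp variant used in g-cssl :

```python
import numpy as np

def propagate_labels(W, Y, alpha=0.99):
    # W: (n, n) symmetric affinity; Y: (n, c) one-hot rows for labeled
    # points, zero rows for unlabeled ones.
    # closed form: F = (I - alpha * S)^{-1} Y with S = D^{-1/2} W D^{-1/2}.
    d = W.sum(1)
    S = W / np.sqrt(np.outer(d, d) + 1e-12)    # symmetric normalization
    F = np.linalg.solve(np.eye(len(W)) - alpha * S, Y)
    return F / np.maximum(F.sum(1, keepdims=True), 1e-12)   # soft labels
```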
story_separator_special_tag of late , there have been many studies on robust discriminant analysis which adopt the l1-norm as the distance metric , but their results are not robust enough to gain universal acceptance . to overcome this problem , the authors of this article present a nonpeaked discriminant analysis ( npda ) technique , in which the cutting l1-norm is adopted as the distance metric . as this kind of norm can better eliminate heavy outliers in learning models , the proposed algorithm is expected to be stronger in performing feature extraction tasks for data representation than the existing robust discriminant analysis techniques , which are based on the l1-norm distance metric . the authors also present a comprehensive analysis to show that the cutting l1-norm distance can be computed equally well , using the difference between two special convex functions . against this background , an efficient iterative algorithm is designed for the optimization of the proposed objective . theoretical proofs on the convergence of the algorithm are also presented . theoretical insights and the effectiveness of the proposed method are validated by experimental tests on several real data sets . story_separator_special_tag recently , l1-norm distance measure based linear discriminant analysis ( lda ) techniques have been shown to be robust against outliers . however , these methods have no guarantee of obtaining a satisfactory-enough performance due to the insufficient robustness of the l1-norm measure . to mitigate this problem , inspired by recent works on lp-norm based learning , this paper proposes a new discriminant method , called lp- and ls-norm distance based robust linear discriminant analysis ( flda-lsp ) . the proposed method achieves robustness by replacing the l2-norm within- and between-class distances in conventional lda with lp- and ls-norm ones . by specifying the values of p and s , many previous efforts can be naturally expressed by our objective . the requirement of simultaneously maximizing and minimizing a number of lp- and ls-norm terms makes the optimization of the formulated objective difficult . as one of the important contributions of this paper , we design an efficient iterative algorithm to address this problem , and also conduct some insightful analysis on the existence of local minima and the convergence of the proposed algorithm . theoretical insights of our method are further supported by promising story_separator_special_tag scientists working with large volumes of high-dimensional data , such as global climate patterns , stellar spectra , or human gene distributions , regularly confront the problem of dimensionality reduction : finding meaningful low-dimensional structures hidden in their high-dimensional observations . the human brain confronts the same problem in everyday perception , extracting from its high-dimensional sensory inputs ( 30,000 auditory nerve fibers or 10^6 optic nerve fibers ) a manageably small number of perceptually relevant features . here we describe an approach to solving dimensionality reduction problems that uses easily measured local metric information to learn the underlying global geometry of a data set . unlike classical techniques such as principal component analysis ( pca ) and multidimensional scaling ( mds ) , our approach is capable of discovering the nonlinear degrees of freedom that underlie complex natural observations , such as human handwriting or images of a face under different viewing conditions . in contrast to previous algorithms for nonlinear dimensionality reduction , ours efficiently computes a globally optimal solution , and , for an important class of data manifolds , is guaranteed to converge asymptotically to the true structure .
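the isomap pipeline sketched in the abstract above is short enough to write out directly : a knn graph , geodesic ( shortest-path ) distances , then classical mds on the doubly centered squared-distance matrix . this sketch assumes the knn graph is connected ; the neighborhood size is illustrative .

```python
import numpy as np
from sklearn.neighbors import kneighbors_graph
from scipy.sparse.csgraph import shortest_path

def isomap(X, n_neighbors=8, n_dim=2):
    # knn graph -> geodesic distances -> classical mds; a disconnected
    # graph would yield infinite distances, so connectivity is assumed.
    G = kneighbors_graph(X, n_neighbors, mode='distance')
    D = shortest_path(G, method='D', directed=False)   # geodesic distances
    n = len(D)
    J = np.eye(n) - np.ones((n, n)) / n
    B = -0.5 * J @ (D ** 2) @ J                        # doubly centered gram matrix
    vals, vecs = np.linalg.eigh(B)
    idx = np.argsort(vals)[::-1][:n_dim]               # largest eigenvalues first
    return vecs[:, idx] * np.sqrt(np.maximum(vals[idx], 0.0))
```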
story_separator_special_tag dealing with high-dimensional data has always been a major problem in research on pattern recognition and machine learning , and linear discriminant analysis ( lda ) is one of the most popular methods for dimension reduction . however , it only uses labeled samples while neglecting unlabeled samples , which are abundant and can be easily obtained in the real world . in this paper , we propose a new dimension reduction method , called `` sl-lda '' , by using unlabeled samples to enhance the performance of lda . the new method first propagates label information from the labeled set to the unlabeled set via a label propagation process , where the predicted labels of unlabeled samples , called `` soft labels '' , can be obtained . it then incorporates the soft labels into the construction of scatter matrices to find a transformed matrix for dimension reduction . in this way , the proposed method can preserve more discriminative information , which is preferable when solving the classification problem . we further propose an efficient approach for solving sl-lda under a least squares framework , and a flexible method of sl-lda ( fsl-lda ) to better cope with story_separator_special_tag in the earth observation technical literature , several methods have been proposed and implemented to efficiently extract a proper set of features for classification and segmentation purposes . however , these architectures show drawbacks when the considered datasets are characterized by complex interactions among the samples , especially when they rely on strong assumptions on the noise and label domains . in this paper , a new unsupervised approach for feature extraction , based on data driven discovery , is introduced for accurate classification of remotely sensed data . specifically , the proposed architecture exploits mutual information maximization in order to retrieve the most relevant features with respect to information measures . experimental results on real datasets show that the proposed approach represents a valid framework for feature extraction from remote sensing images . story_separator_special_tag dealing with high-dimensional data has always been a major problem in the research of pattern recognition and machine learning . among all the dimensionality reduction techniques , linear discriminant analysis ( lda ) is one of the most popular methods that have been widely used in many classification applications . but lda can only utilize labeled samples while neglecting the unlabeled samples , which are abundant and can be easily obtained in the real world . in this paper , we propose a new dimensionality reduction method by using unlabeled samples to enhance the performance of lda . the new method first propagates the label information from the labeled set to the unlabeled set via a label propagation process , where the predicted labels of unlabeled samples , called soft labels , can be obtained . it then incorporates the soft labels into the construction of scatter matrices to find a transformed matrix for dimensionality reduction . in this way , the proposed method can preserve more discriminative information , which is preferable when solving the classification problem . extensive simulations are conducted on several datasets and the results show the effectiveness of the proposed method .
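reading the two sl-lda abstracts above operationally , the soft labels simply re-weight the usual lda scatter matrices . the construction below is one plausible rendering of that recipe , not the authors' exact formulation ; a generalized eigensolver on ( sb , sw ) would then give the projection .

```python
import numpy as np

def soft_label_scatters(X, F):
    # X: (d, n) data; F: (n, c) soft labels with rows summing to one,
    # e.g. the output of a label propagation step (an assumption here).
    mu = X.mean(1, keepdims=True)
    d, c = X.shape[0], F.shape[1]
    Sw = np.zeros((d, d))
    Sb = np.zeros((d, d))
    for j in range(c):
        w = F[:, j]
        n_j = w.sum()
        mu_j = (X @ w / n_j)[:, None]                # soft class mean
        Xc = X - mu_j
        Sw += (Xc * w) @ Xc.T                        # soft within-class scatter
        Sb += n_j * (mu_j - mu) @ (mu_j - mu).T      # soft between-class scatter
    return Sw, Sb
```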
story_separator_special_tag two novel unsupervised dimensionality reduction techniques , termed sparse distance preserving embedding ( sdpe ) and sparse proximity preserving embedding ( sppe ) , are proposed for feature extraction and classification . sdpe and sppe perform in the clean data space recovered by sparse representation , and enhanced euclidean distances over the noise-removed data are employed to measure the pairwise similarities of points . in extracting informative features , sdpe and sppe aim at preserving pairwise similarities between data points in addition to preserving the sparse characteristics . this paper calculates the sparsest representation of all vectors jointly by a convex optimization . the sparsest codes enable certain local information of the data to be preserved , and can endow sdpe and sppe with a natural discriminating power , adaptive neighborhood and a robust characteristic against noise and errors in delivering low-dimensional embeddings . we also mathematically show that sdpe and sppe can be effectively extended for discriminant learning in a supervised manner . the validity of sdpe and sppe is examined by extensive simulations . comparisons with other related state-of-the-art unsupervised algorithms show that promising results are delivered by our techniques . story_separator_special_tag isomap is a well-known nonlinear dimensionality reduction ( dr ) method , aiming at preserving geodesic distances of all similarity pairs for delivering highly nonlinear manifolds . isomap is efficient in visualizing synthetic data sets , but it usually delivers unsatisfactory results in benchmark cases . this paper incorporates the pairwise constraints into isomap and proposes a marginal isomap ( m-isomap ) for manifold learning . the pairwise cannot-link and must-link constraints are used to specify the types of neighborhoods . m-isomap computes the shortest path distances over constrained neighborhood graphs and guides the nonlinear dr through separating the interclass neighbors . as a result , large margins between both inter- and intraclass clusters are delivered and enhanced compactness of intracluster points is achieved at the same time . the validity of m-isomap is examined by extensive simulations over synthetic , university of california , irvine ( uci ) , and benchmark real olivetti research library ( orl ) , yale , and cmu pose , illumination , and expression ( pie ) databases . the data visualization and clustering power of m-isomap are compared with those of six related dr methods . the visualization results show that m-isomap is able to deliver more separate clusters . clustering story_separator_special_tag many manifold learning procedures try to embed a given feature data set into a flat space of low dimensionality while preserving as much as possible the metric in the natural feature space . the embedding process usually relies on distances between neighboring features , mainly since distances between features that are far apart from each other often provide an unreliable estimation of the true distance on the feature manifold due to its non-convexity . distortions resulting from using long geodesics indiscriminately lead to a known limitation of the isomap algorithm when used to map non-convex manifolds . presented is a framework for nonlinear dimensionality reduction that uses both local and global distances in order to learn the intrinsic geometry of flat manifolds with boundaries . the resulting algorithm filters out potentially problematic distances between distant feature points based on the properties of the geodesics connecting those points and their relative distance to the boundary of the feature manifold , thus avoiding an inherent limitation of the isomap algorithm . since the proposed algorithm matches non-local structures , it is robust to strong noise . we show experimental results demonstrating the advantages of the proposed approach over conventional dimensionality reduction techniques ,
story_separator_special_tag this paper incorporates the group sparse representation into the well-known canonical correlation analysis ( cca ) framework and proposes a novel discriminant feature extraction technique named group sparse canonical correlation analysis ( gscca ) . gscca uses two sets of variables and aims at preserving the group sparse ( gs ) characteristics of data within each set in addition to maximizing the global inter-set covariance . with gs weights computed prior to feature extraction , the locality , sparsity and discriminant information of data can be adaptively determined . the gs weights are obtained from an np-hard group-sparsity promoting problem that considers all highly correlated data within a group . by defining one of the two variable sets as the class label matrix , gscca is effectively extended to multiclass scenarios . then gscca is theoretically formulated as a least-squares problem as cca is . comparative analysis between this work and the related studies demonstrates that our algorithm is more general , exhibiting attractive properties . the projection matrix of gscca is analytically solved by applying eigen-decomposition and trace ratio ( tr ) optimization . extensive benchmark simulations are conducted to examine gscca . results show that our approach delivers promising
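gscca above builds on classical cca , which is itself a short computation : whiten each view and take the top singular directions of the cross-covariance . a minimal sketch with a small ridge term for stability ( the group-sparse weighting of gscca is not reproduced here ) :

```python
import numpy as np

def cca(X, Y, n_dim, reg=1e-4):
    # X: (n, dx), Y: (n, dy), both assumed centered column-wise.
    Cxx = X.T @ X / len(X) + reg * np.eye(X.shape[1])
    Cyy = Y.T @ Y / len(Y) + reg * np.eye(Y.shape[1])
    Cxy = X.T @ Y / len(X)

    def inv_sqrt(C):
        # inverse square root of a symmetric positive definite matrix
        vals, vecs = np.linalg.eigh(C)
        return vecs @ np.diag(vals ** -0.5) @ vecs.T

    Kx, Ky = inv_sqrt(Cxx), inv_sqrt(Cyy)
    U, s, Vt = np.linalg.svd(Kx @ Cxy @ Ky)
    # canonical projections for each view plus the canonical correlations
    return Kx @ U[:, :n_dim], Ky @ Vt[:n_dim].T, s[:n_dim]
```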
story_separator_special_tag when performing visualization and classification , people often confront the problem of dimensionality reduction . isomap is one of the most promising nonlinear dimensionality reduction techniques . however , when isomap is applied to real-world data , it shows some limitations , such as being sensitive to noise . in this paper , an improved version of isomap , namely s-isomap , is proposed . s-isomap utilizes class information to guide the procedure of nonlinear dimensionality reduction . such a kind of procedure is called supervised nonlinear dimensionality reduction . in s-isomap , the neighborhood graph of the input data is constructed according to a certain kind of dissimilarity between data points , which is specially designed to integrate the class information . the dissimilarity has several good properties which help to discover the true neighborhood of the data and , thus , make s-isomap a robust technique for both visualization and classification , especially for real-world problems . in the visualization experiments , s-isomap is compared with isomap , lle , and weightediso . the results show that s-isomap performs the best . in the classification experiments , s-isomap is used as a preprocessing step for classification and compared with story_separator_special_tag visualizing similarity data of different objects by exhibiting more separate organizations with local and multimodal characteristics preserved is important in multivariate data analysis . laplacian eigenmaps ( lae ) and locally linear embedding ( lle ) aim at preserving the embeddings of all similarity pairs in the close vicinity of the reduced output space , but they are unable to identify and separate interclass neighbors . this paper considers the semi-supervised manifold learning problems . we apply the pairwise cannot-link and must-link constraints induced by the neighborhood graph to specify the types of neighboring pairs . more flexible regulation on the supervised information is provided . two novel multimodal nonlinear techniques , which we call trace ratio ( tr ) criterion-based semi-supervised lae ( s2lae ) and lle ( s2lle ) , are then proposed for marginal manifold visualization . we also present the kernelized s2lae and s2lle . we verify the feasibility of s2lae and s2lle through extensive simulations over the benchmark real-world mit cbcl , cmu pie , mnist , and usps data sets . manifold visualizations show that s2lae and s2lle are able to deliver large margins between different clusters or classes with multimodal distributions preserved . story_separator_special_tag drawing on the correspondence between the graph laplacian , the laplace-beltrami operator on a manifold , and the connections to the heat equation , we propose a geometrically motivated algorithm for constructing a representation for data sampled from a low dimensional manifold embedded in a higher dimensional space . the algorithm provides a computationally efficient approach to nonlinear dimensionality reduction that has locality preserving properties and a natural connection to clustering . several applications are considered .
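the laplacian eigenmaps construction in the last abstract fits in a few lines : build a neighborhood graph , form the graph laplacian , and keep the bottom generalized eigenvectors while discarding the trivial constant one . binary affinities and the neighborhood size below are illustrative simplifications of the heat-kernel weights .

```python
import numpy as np
from sklearn.neighbors import kneighbors_graph
from scipy.linalg import eigh

def laplacian_eigenmaps(X, n_neighbors=8, n_dim=2):
    # binary symmetric affinities stand in for heat-kernel weights here
    W = kneighbors_graph(X, n_neighbors, mode='connectivity').toarray()
    W = np.maximum(W, W.T)                      # symmetrize the knn graph
    D = np.diag(W.sum(1))
    L = D - W                                   # unnormalized graph laplacian
    _, vecs = eigh(L, D)                        # generalized problem L y = lam D y
    return vecs[:, 1:n_dim + 1]                 # drop the constant eigenvector
```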
story_separator_special_tag we consider the general problem of utilizing both labeled and unlabeled data to improve classification accuracy . under the assumption that the data lie on a submanifold in a high dimensional space , we develop an algorithmic framework to classify a partially labeled data set in a principled manner . the central idea of our approach is that classification functions are naturally defined only on the submanifold in question rather than the total ambient space . using the laplace-beltrami operator one produces a basis ( the laplacian eigenmaps ) for a hilbert space of square integrable functions on the submanifold . to recover such a basis , only unlabeled examples are required . once such a basis is obtained , training can be performed using the labeled data set . our algorithm models the manifold using the adjacency graph for the data and approximates the laplace-beltrami operator by the graph laplacian . we provide details of the algorithm , its theoretical justification , and several practical applications for image , speech , and text classification . story_separator_special_tag in this paper we present the methodology of multidimensional scaling problems ( mds ) solved by means of the majorization algorithm . the objective function to be minimized is known as stress , and functions which majorize stress are elaborated . this strategy to solve mds problems is called smacof , and it is implemented in an r package of the same name which is presented in this article . we extend the basic smacof theory in terms of configuration constraints , three-way data , unfolding models , and projection of the resulting configurations onto spheres and other quadratic surfaces . various examples are presented to show the possibilities of the smacof approach offered by the corresponding package .
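the majorization strategy behind smacof can be sketched directly : each iteration applies the guttman transform , which solves the majorizing quadratic in closed form and therefore decreases stress monotonically . uniform weights and a random start are assumed here ; the smacof r package itself offers far more ( constraints , three-way data , unfolding ) .

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform

def smacof_sketch(Delta, n_dim=2, n_iter=200):
    # Delta: (n, n) symmetric target dissimilarities; uniform weights assumed.
    n = len(Delta)
    rng = np.random.default_rng(0)
    X = rng.standard_normal((n, n_dim))          # random initial configuration
    for _ in range(n_iter):
        D = squareform(pdist(X))                 # current embedding distances
        ratio = np.where(D > 0, Delta / D, 0.0)  # guard the zero diagonal
        B = -ratio
        B[np.diag_indices(n)] = ratio.sum(1)     # row sums on the diagonal
        X = B @ X / n                            # guttman transform update
    return X
```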
story_separator_special_tag recovering low-rank and sparse subspaces jointly for enhanced robust representation and classification is discussed . technically , we first propose a transductive low-rank and sparse principal feature coding ( lspfc ) formulation that decomposes given data into a component part that encodes low-rank sparse principal features and a noise-fitting error part . to handle outside data well , we then present an inductive lspfc ( i-lspfc ) . i-lspfc incorporates the embedded low-rank and sparse principal features by a projection into one problem for direct minimization , so that the projection can effectively map both inside and outside data into the underlying subspaces to learn more powerful and informative features for representation . to ensure that the features learned by i-lspfc are optimal for classification , we further combine the classification error with the feature coding error to form a unified model , discriminative lspfc ( d-lspfc ) , to boost performance . the model of d-lspfc seamlessly integrates feature coding and discriminative classification , so the representation and classification powers can be enhanced . the proposed approaches are more general , and several recent existing low-rank or sparse coding algorithms can be embedded into our problems as special cases . story_separator_special_tag low-rank representation ( lrr ) has recently attracted a great deal of attention due to its pleasing efficacy in exploring low-dimensional subspace structures embedded in data . for a given set of observed data corrupted with sparse errors , lrr aims at learning a lowest-rank representation of all data jointly . lrr has broad applications in pattern recognition , computer vision and signal processing . in the real world , data often reside on low-dimensional manifolds embedded in a high-dimensional ambient space . however , the lrr method does not take into account the non-linear geometric structures within data , thus the locality and similarity information among data may be missing in the learning process . to improve lrr in this regard , we propose a general laplacian regularized low-rank representation framework for data representation into which a hypergraph laplacian regularizer can be readily introduced , i.e. , a non-negative sparse hyper-laplacian regularized lrr model ( nshlrr ) . by taking advantage of the graph regularizer , our proposed method not only can represent the global low-dimensional structures , but also capture the intrinsic non-linear geometric information in data . the extensive experimental results on image clustering , semi-supervised image story_separator_special_tag low-rank coding-based representation learning is powerful for discovering and recovering the subspace structures in data , and it has obtained impressive performance ; however , it still cannot obtain deep hidden information due to the essence of single-layer structures . in this article , we investigate the deep low-rank representation of images in a progressive way by presenting a novel strategy that can extend existing single-layer latent low-rank models into multiple layers . technically , we propose a new progressive deep latent low-rank fusion network ( dlrf-net ) to uncover deep features and the clustering structures embedded in latent subspaces . the basic idea of dlrf-net is to progressively refine the principal and salient features in each layer from previous layers by fusing the clustering and projective subspaces , respectively , which can potentially learn more accurate features and subspaces . to obtain deep hidden information , dlrf-net inputs shallow features from the last layer into subsequent layers . then , it aims at recovering the hierarchical information and deeper features by respectively congregating the subspaces in each layer of the network . as such , one can also ensure the representation learning of deeper layers to remove the story_separator_special_tag in this paper , we address the subspace clustering problem . given a set of data samples ( vectors ) approximately drawn from a union of multiple subspaces , our goal is to cluster the samples into their respective subspaces and remove possible outliers as well . to this end , we propose a novel objective function named low-rank representation ( lrr ) , which seeks the lowest rank representation among all the candidates that can represent the data samples as linear combinations of the bases in a given dictionary . it is shown that the convex program associated with lrr solves the subspace clustering problem in the following sense : when the data is clean , we prove that lrr exactly recovers the true subspace structures ; when the data are contaminated by outliers , we prove that under certain conditions lrr can exactly recover the row space of the original data and detect the outliers as well ; for data corrupted by arbitrary sparse errors , lrr can also approximately recover the row space with theoretical guarantees . since the subspace membership is provably determined by the row space , these further imply that lrr can perform robust
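two facts make a toy version of lrr-based subspace clustering easy to write down : in the noiseless case min || z ||_* s.t. x = xz has the closed-form solution z = v v^t from the skinny svd x = u s v^t ( the shape interaction matrix ) , and clustering then runs on the symmetrized affinity | z | + | z |^t . the rank truncation below is an illustrative stand-in for the error handling of the full convex program .

```python
import numpy as np
from sklearn.cluster import SpectralClustering

def lrr_cluster(X, rank, n_clusters):
    # noiseless lrr closed form: Z = V V^T from the skinny svd of X;
    # truncating to `rank` crudely mimics the robustness of the full program.
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    V = Vt[:rank].T
    Z = V @ V.T                                 # shape interaction matrix
    A = np.abs(Z) + np.abs(Z).T                 # symmetric nonnegative affinity
    return SpectralClustering(n_clusters=n_clusters,
                              affinity='precomputed').fit_predict(A)
```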
story_separator_special_tag we propose a novel and unsupervised representation learning model , i.e. , robust block-diagonal adaptive locality-constrained latent representation ( rbdlr ) . rbdlr is able to recover multi-subspace structures and extract the adaptive locality-preserving salient features jointly . leveraging on the frobenius-norm based latent low-rank representation model , rbdlr jointly learns the coding coefficients and salient features , and improves the results by enhancing the robustness to outliers and errors in given data , preserving local information of salient features adaptively and ensuring the block-diagonal structures of the coefficients . to improve the robustness , we perform the latent representation and adaptive weighting in a recovered clean data space . to force the coefficients to be block-diagonal , we perform auto-weighting by minimizing the reconstruction error based on salient features , constrained using a block-diagonal regularizer . this ensures that a strict block-diagonal weight matrix can be obtained and salient features will possess the adaptive locality preserving ability . by minimizing the difference between the coefficient and weight matrices , we can obtain a block-diagonal coefficient matrix , and useful information can also be propagated and exchanged between the salient features and the coefficients . extensive results demonstrate the superiority of rbdlr over story_separator_special_tag in this paper , we investigate robust dictionary learning ( dl ) to discover the hybrid salient low-rank and sparse representation in a factorized compressed space . a joint robust factorization and projective dictionary learning ( j-rfdl ) model is presented . the setting of j-rfdl aims at improving the data representations by enhancing the robustness to outliers and noise in data , encoding the reconstruction error more accurately and obtaining hybrid salient coefficients with accurate reconstruction ability . specifically , j-rfdl performs the robust representation by dl in a factorized compressed space to eliminate the negative effects of noise and outliers on the results , which can also make the dl process efficient . to make the encoding process robust to noise in data , j-rfdl clearly uses a sparse l2,1-norm that can potentially minimize the factorization and reconstruction errors jointly by forcing rows of the reconstruction errors to be zeros . to deliver salient coefficients with good structures to reconstruct given data well , j-rfdl imposes joint low-rank and sparse constraints on the embedded coefficients with a synthesis dictionary . based on the hybrid salient coefficients , we also extend j-rfdl for joint classification story_separator_special_tag low-rank representation is powerful for recovering and clustering the subspace structures , but it cannot obtain deep hierarchical information due to the single-layer mode . in this paper , we present a new and effective strategy to extend the single-layer latent low-rank models into multiple layers , and propose a new and progressive deep latent low-rank fusion network ( dlrf-net ) to uncover deep features and structures embedded in input data . the basic idea of dlrf-net is to refine features progressively from the previous layers by fusing the subspaces in each layer , which can potentially obtain accurate features and subspaces for representation . to learn deep information , dlrf-net inputs shallow features of the last layers into subsequent layers . then , it recovers the deeper features and hierarchical information by congregating the projective subspaces and clustering subspaces respectively in each layer . thus , one can learn hierarchical subspaces , remove noise and discover the underlying clean subspaces . note that most existing latent low-rank coding models can be extended to multilayers using dlrf-net . extensive results show that our network can deliver enhanced performance over other related frameworks . story_separator_special_tag for subspace recovery , most existing low-rank representation ( lrr ) models perform in the original space in single-layer mode . as such , the deep hierarchical information cannot be learned , which may result in inaccurate recoveries for complex real data . in this paper , we explore the deep multi-subspace recovery problem by designing a multilayer architecture for latent lrr . technically , we propose a new multilayer collaborative low-rank representation network model termed deeplrr to discover deep features and deep subspaces . in each layer ( > 2 ) , deeplrr bilinearly reconstructs the data matrix by the collaborative representation with low-rank coefficients and projection matrices in the previous layer . the bilinear low-rank reconstruction of the previous layer is directly fed into the next layer as the input and low-rank dictionary for representation learning , and is further decomposed into a deep principal feature part , a deep salient feature part and a deep sparse error . as such , the coherence issue can also be resolved due to the low-rank dictionary , and the robustness against noise can also be enhanced in the feature subspace . to recover the sparse errors in layers accurately ,
story_separator_special_tag low-rank representation ( lrr ) [ 16 , 17 ] is an effective method for exploring the multiple subspace structures of data . usually , the observed data matrix itself is chosen as the dictionary , which is a key aspect of lrr . however , such a strategy may depress the performance , especially when the observations are insufficient and/or grossly corrupted . in this paper we therefore propose to construct the dictionary by using both observed and unobserved , hidden data . we show that the effects of the hidden data can be approximately recovered by solving a nuclear norm minimization problem , which is convex and can be solved efficiently . the formulation of the proposed method , called latent low-rank representation ( latlrr ) , seamlessly integrates subspace segmentation and feature extraction into a unified framework , and thus provides us with a solution for both subspace segmentation and feature extraction . as a subspace segmentation algorithm , latlrr is an enhanced version of lrr and outperforms the state-of-the-art algorithms . being an unsupervised feature extraction algorithm , latlrr is able to robustly extract salient features from corrupted data , and thus can work much better story_separator_special_tag most existing low-rank and sparse representation models cannot preserve the local manifold structures of samples adaptively , or they separate the locality preservation from the coding process , which may result in decreased performance . in this paper , we propose an inductive robust auto-weighted low-rank and sparse representation ( ralsr ) framework by joint feature embedding for the salient feature extraction of high-dimensional data . technically , the model of our ralsr seamlessly integrates the joint low-rank and sparse recovery with robust salient feature extraction . specifically , ralsr integrates the adaptive locality preserving weighting , joint low-rank/sparse representation and the robustness-promoting representation into a unified model . for an accurate similarity measure , ralsr computes the adaptive weights by minimizing the joint reconstruction errors over the recovered clean data and salient features simultaneously , where the l1-norm is also applied to ensure the sparse properties of the learnt weights . the joint minimization can also potentially enable the weight matrix to have the power to remove noise and unfavorable features by reconstruction adaptively . the underlying projection is encoded by a joint low-rank and sparse regularization , which can ensure it to be powerful for salient feature extraction . story_separator_special_tag principal component analysis is a fundamental operation in computational data analysis , with myriad applications ranging from web search to bioinformatics to computer vision and image analysis . however , its performance and applicability in real scenarios are limited by a lack of robustness to outlying or corrupted observations . this paper considers the idealized `` robust principal component analysis '' problem of recovering a low rank matrix a from corrupted observations d = a + e . here , the corrupted entries e are unknown and the errors can be arbitrarily large ( modeling grossly corrupted observations common in visual and bioinformatic data ) , but are assumed to be sparse . we prove that most matrices a can be efficiently and exactly recovered from most error sign-and-support patterns by solving a simple convex program , for which we give a fast and provably convergent algorithm . our result holds even when the rank of a grows nearly proportionally ( up to a logarithmic factor ) to the dimensionality of the observation space and the number of errors e grows in proportion to the total number of entries in the matrix . a by-product of our analysis is the first
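the rpca decomposition d = a + e described above can be approximated with a simple block-coordinate scheme on a penalized relaxation , since both proximal subproblems have closed forms ; this is a sketch of that relaxation , not the provably convergent algorithm referenced in the abstract , and the default weights follow the usual 1/sqrt(max(m , n)) heuristic as an illustrative choice .

```python
import numpy as np

def soft(M, tau):
    # entrywise soft thresholding: proximal operator of the l1 norm
    return np.sign(M) * np.maximum(np.abs(M) - tau, 0.0)

def svd_shrink(M, tau):
    # singular value thresholding: proximal operator of the nuclear norm
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def rpca_sketch(D, tau=None, lam=None, n_iter=100):
    # block-coordinate descent on the penalized relaxation
    #   min 0.5 ||D - L - S||_F^2 + tau ||L||_* + lam ||S||_1
    m, n = D.shape
    tau = 0.1 * np.linalg.norm(D, 2) if tau is None else tau
    lam = tau / np.sqrt(max(m, n)) if lam is None else lam
    L = np.zeros_like(D)
    S = np.zeros_like(D)
    for _ in range(n_iter):
        L = svd_shrink(D - S, tau)              # low-rank block update
        S = soft(D - L, lam)                    # sparse block update
    return L, S
```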
however , the global mechanism also means that the lrr model is not suitable for handling large-scale data or dynamic data . for large-scale data , the lrr method suffers from high time complexity , and for dynamic data , it has to recompute a complex rank minimization for the entire data set whenever new samples are dynamically added , making it prohibitively expensive . existing attempts to online lrr either take a stochastic approach or build the representation purely based on a small sample set and treat new input as out-of-sample data . the former often requires multiple runs for good performance and thus takes a longer time to run , and the latter formulates online lrr as an out-of-sample classification problem and is less robust to noise . in this paper , a novel online lrr subspace learning method is proposed for both large-scale and dynamic data . the proposed algorithm is composed of two stages : static learning and dynamic updating . in the first stage , the subspace structure is learned from story_separator_special_tag in this article , we consider the problem of simultaneous low-rank recovery and sparse projection . more specifically , a new robust principal component analysis ( rpca ) -based framework called sparse projection and low-rank recovery ( splrr ) is proposed for handwriting representation and salient stroke feature extraction . in addition to achieving a low-rank component encoding principal features and identifying errors or missing values from a given data matrix as rpca , splrr also learns a similarity-preserving sparse projection for extracting salient stroke features and embedding new inputs for classification . these properties make splrr applicable for handwriting recognition and stroke correction and enable online computation . a cosine-similarity-style regularization term is incorporated into the splrr formulation for encoding the similarities of local handwriting features . the sparse projection and low-rank recovery are calculated from a convex minimization problem that can be efficiently solved in polynomial time . besides , the supervised extension of splrr is also elaborated . the effectiveness of our splrr is examined by extensive handwritten digit repairing , stroke correction , and recognition based on benchmark problems . compared with other related techniques , splrr delivers strong generalization capability and state-of-the-art performance for handwriting story_separator_special_tag latent low-rank representation ( latlrr ) delivers robust and promising results for subspace recovery and feature extraction through mining the so-called hidden effects , but the locality of both similar principal and salient features can not be preserved in the optimizations . to solve this issue for achieving enhanced performance , a boosted version of latlrr , referred to as regularized low-rank representation ( rlrr ) , is proposed through explicitly including an appropriate laplacian regularization that can maximally preserve the similarity among local features . resembling latlrr , rlrr decomposes a given data matrix from two directions by seeking a pair of low-rank matrices . but the similarities of principal and salient features can be effectively preserved by rlrr . as a result , the correlated features are well grouped and the robustness of representations is also enhanced .
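the laplacian regularization used by rlrr above , and by several graph-regularized factorization models later in this section , is built from a k-nearest-neighbor affinity graph ; the term tr ( f l f^t ) then penalizes embeddings that assign dissimilar codes to strongly connected neighbors . a minimal numpy construction follows , where the neighborhood size k and the heat-kernel bandwidth sigma are our own assumptions :

```python
import numpy as np

def knn_laplacian(X, k=5, sigma=1.0):
    """build a k-nearest-neighbor affinity graph with heat-kernel
    weights and return it with its unnormalized laplacian L = D - W.
    X is d x n (columns are samples); k and sigma are our choices."""
    n = X.shape[1]
    sq = np.sum(X**2, axis=0)
    D2 = np.maximum(sq[:, None] + sq[None, :] - 2.0 * (X.T @ X), 0.0)
    W = np.zeros((n, n))
    for i in range(n):
        idx = np.argsort(D2[i])[1:k + 1]   # nearest neighbors, self excluded
        W[i, idx] = np.exp(-D2[i, idx] / (2.0 * sigma**2))
    W = np.maximum(W, W.T)                 # symmetrize the knn graph
    L = np.diag(W.sum(axis=1)) - W
    return W, L

# the regularizer tr(F L F^T) is small when connected samples receive
# similar codes, which is exactly the locality-preservation idea
X = np.random.default_rng(1).standard_normal((10, 50))
W, L = knn_laplacian(X)
F = np.random.default_rng(2).standard_normal((3, 50))
print(np.trace(F @ L @ F.T))
```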
based on the bi-directional low-rank codes outputted by rlrr , an unsupervised subspace learning framework termed low-rank similarity preserving projections ( lspp ) is also derived for feature learning . the supervised extension of lspp is also discussed for discriminant subspace learning . the validity of rlrr is examined by robust representation and decomposition of real images . results demonstrated the story_separator_special_tag convex formulations of low-rank matrix factorization problems have received considerable attention in machine learning . however , such formulations often require solving for a matrix of the size of the data matrix , making it challenging to apply them to large-scale datasets . moreover , in many applications the data can display structures beyond simply being low-rank , e.g. , images and videos present complex spatio-temporal structures that are largely ignored by standard low-rank methods . in this paper we study a matrix factorization technique that is suitable for large datasets and captures additional structure in the factors by using a particular form of regularization that includes well-known regularizers such as total variation and the nuclear norm as particular cases . although the resulting optimization problem is non-convex , we show that if the size of the factors is large enough , under certain conditions , any local minimizer for the factors yields a global minimizer . a few practical algorithms are also provided to solve the matrix factorization problem , and bounds on the distance from a given approximate solution of the optimization problem to the global optimum are derived . examples in neural calcium imaging video segmentation story_separator_special_tag matrix factorization techniques have been frequently applied in information retrieval , computer vision , and pattern recognition . among them , nonnegative matrix factorization ( nmf ) has received considerable attention due to its psychological and physiological interpretation of naturally occurring data whose representation may be parts-based in the human brain . on the other hand , from the geometric perspective , the data is usually sampled from a low-dimensional manifold embedded in a high-dimensional ambient space . one then hopes to find a compact representation , which uncovers the hidden semantics and simultaneously respects the intrinsic geometric structure . in this paper , we propose a novel algorithm , called graph regularized nonnegative matrix factorization ( gnmf ) , for this purpose . in gnmf , an affinity graph is constructed to encode the geometrical information and we seek a matrix factorization , which respects the graph structure . our empirical study shows encouraging results of the proposed algorithm in comparison to the state-of-the-art algorithms on real-world problems . story_separator_special_tag low-rank matrix factorization is one of the most useful tools in scientific computing , data mining and computer vision . among its techniques , non-negative matrix factorization ( nmf ) has received considerable attention due to producing a parts-based representation of the data . recent research has shown that not only do the observed data lie on a nonlinear low-dimensional manifold , namely the data manifold , but the features also lie on a manifold , namely the feature manifold .
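the gnmf model just described augments the nmf objective ||x - u v^t||_f^2 with a graph term lambda tr ( v^t l v ) , and its standard multiplicative updates keep both factors nonnegative . the sketch below restates those updates in numpy ; initialization , rank and iteration count are our own choices :

```python
import numpy as np

def gnmf(X, W, r=10, lam=1.0, n_iter=300, eps=1e-9):
    """graph regularized nmf: min ||X - U V^T||_F^2 + lam*tr(V^T L V)
    with U, V >= 0, X (m x n) nonnegative, W (n x n) a nonnegative
    affinity graph. multiplicative updates in the style of the gnmf
    paper; this is a didactic sketch, not the authors' code."""
    m, n = X.shape
    rng = np.random.default_rng(0)
    U, V = rng.random((m, r)), rng.random((n, r))
    D = np.diag(W.sum(axis=1))             # degree matrix, L = D - W
    for _ in range(n_iter):
        U *= (X @ V) / (U @ (V.T @ V) + eps)
        V *= (X.T @ U + lam * (W @ V)) / (V @ (U.T @ U) + lam * (D @ V) + eps)
    return U, V
```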
in this paper , we propose a novel algorithm , called graph dual regularization non-negative matrix factorization ( dnmf ) , which simultaneously considers the geometric structures of both the data manifold and the feature manifold . we also present a graph dual regularization non-negative matrix tri-factorization algorithm ( dnmtf ) as an extension of dnmf . moreover , we develop two iterative updating optimization schemes for dnmf and dnmtf , respectively , and provide the convergence proofs of our two optimization schemes . experimental results on uci benchmark data sets , several image data sets and a radar hrrp data set demonstrate the effectiveness of both dnmf and dnmtf . story_separator_special_tag nonnegative matrix factorization ( nmf ) is a popular technique for finding parts-based , linear representations of nonnegative data . it has been successfully applied in a wide range of applications such as pattern recognition , information retrieval , and computer vision . however , nmf is essentially an unsupervised method and can not make use of label information . in this paper , we propose a novel semi-supervised matrix decomposition method , called constrained nonnegative matrix factorization ( cnmf ) , which incorporates the label information as additional constraints . specifically , we show how explicitly combining label information improves the discriminating power of the resulting matrix decomposition . we explore the proposed cnmf method with two cost function formulations and provide the corresponding update solutions for the optimization problems . empirical experiments demonstrate the effectiveness of our novel algorithm in comparison to the state-of-the-art approaches through a set of evaluations based on real-world applications . story_separator_special_tag as an effective technique to learn low-dimensional node features in complicated network environments , network embedding has become a promising research direction in the field of network analysis . due to the virtues of better interpretability and flexibility , matrix factorization based methods for network embedding have received increasing attention . however , most of them are inadequate to learn more complicated hierarchical features hidden in complex networks because of their single-layer factorization structures . besides , their original feature matrices used for factorization and their robustness against noise also need to be further improved . to solve these problems , we propose a novel network embedding method named drnmf ( deep robust nonnegative matrix factorization ) , which is formed by a multi-layer nmf learning structure . meanwhile , drnmf employs the combination of high-order proximity matrices of the network as the original feature matrix for the factorization . to improve the robustness against noise , we use the $\ell_{2,1}$-norm to devise the objective function for the drnmf network embedding model . effective iterative update rules are derived to resolve the model , and the convergence of these rules is strictly proved story_separator_special_tag hyperspectral images contain mixed pixels due to the low spatial resolution of hyperspectral sensors . the spectral unmixing problem refers to decomposing mixed pixels into a set of endmembers and abundance fractions . due to the nonnegativity constraint on abundance fractions , nonnegative matrix factorization ( nmf ) methods have been widely used for solving the spectral unmixing problem .
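the multi-layer nmf structure used by drnmf above ( and by the hyperspectral multilayer nmf discussed next ) can be illustrated by sequentially factorizing the coefficient matrix layer by layer . the following sketch stacks plain lee-seung nmf ; the per-layer ranks are illustrative assumptions , and any regularized nmf variant could replace the inner solver :

```python
import numpy as np

def nmf(X, r, n_iter=200, eps=1e-9, seed=0):
    """plain nmf with lee-seung multiplicative updates (frobenius loss)."""
    rng = np.random.default_rng(seed)
    W, H = rng.random((X.shape[0], r)), rng.random((r, X.shape[1]))
    for _ in range(n_iter):
        W *= (X @ H.T) / (W @ (H @ H.T) + eps)
        H *= (W.T @ X) / ((W.T @ W) @ H + eps)
    return W, H

def multilayer_nmf(X, ranks=(64, 32, 16)):
    """factorize X ~ W1 H1, then H1 ~ W2 H2, and so on, in the spirit
    of the multi-layer nmf structures described above; the deepest H
    serves as the hierarchical representation."""
    layers, H = [], X
    for r in ranks:
        W, H = nmf(H, r)
        layers.append(W)
    return layers, H
```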
in this letter we propose using multilayer nmf ( mlnmf ) for the purpose of hyperspectral unmixing . in this approach , the spectral signature matrix can be modeled as a product of sparse matrices . in fact mlnmf decomposes the observation matrix iteratively in a number of layers . in each layer , we apply a sparseness constraint on the spectral signature matrix as well as on the abundance fractions matrix . in this way the signature matrix can be sparsely decomposed despite the fact that it is not generally a sparse matrix . the proposed algorithm is applied on synthetic and real data sets . synthetic data is generated based on endmembers from the u.s. geological survey spectral library . the aviris cuprite data set has been used as a real data set for evaluation of the proposed method . results of experiments are quantified based on sad and aad story_separator_special_tag we propose an effective online background subtraction method , which can be robustly applied to practical videos that have variations in both foreground and background . different from previous methods which often model the foreground as gaussian or laplacian distributions , we model the foreground for each frame with a specific mixture of gaussians ( mog ) distribution , which is updated online frame by frame . particularly , our mog model in each frame is regularized by the learned foreground/background knowledge in previous frames . this makes our online mog model highly robust , stable and adaptive to practical foreground and background variations . the proposed model can be formulated as a concise probabilistic map model , which can be readily solved by the em algorithm . we further embed an affine transformation operator into the proposed model , which can be automatically adjusted to fit a wide range of video background transformations and make the method more robust to camera movements . by using the sub-sampling technique , the proposed method can be accelerated to execute more than 250 frames per second on average , meeting the requirement of real-time background subtraction for practical video processing tasks . the story_separator_special_tag recommendation from implicit feedback is a highly challenging task due to the lack of reliable observed negative data . a popular and effective approach for implicit recommendation is to treat unobserved data as negative but downweight their confidence . naturally , how to assign confidence weights and how to handle the large number of unobserved data are two key problems for implicit recommendation models . however , existing methods either pursue fast learning by manually assigning simple confidence weights , which lacks flexibility and may create empirical bias in evaluating user's preference ; or adaptively infer personalized confidence weights but suffer from low efficiency . to achieve both adaptive weight assignment and efficient model learning , we propose a fast adaptively weighted matrix factorization ( fawmf ) based on a variational auto-encoder . the personalized data confidence weights are adaptively assigned with a parameterized neural network ( function ) and the network can be inferred from the data . further , to support fast and stable learning of fawmf , a new specific batch-based learning algorithm fbgd has been developed , which trains on all feedback data but its complexity is linear in the number of observed data . extensive story_separator_special_tag a variant of nonnegative matrix factorization ( nmf ) which was proposed earlier is analyzed here .
it is called projective nonnegative matrix factorization ( pnmf ) . the new method approximately factorizes a projection matrix , minimizing the reconstruction error , into a positive low-rank matrix and its transpose . the dissimilarity between the original data matrix and its approximation can be measured by the frobenius matrix norm or the modified kullback-leibler divergence . both measures are minimized by multiplicative update rules , whose convergence is proven for the first time . enforcing orthonormality on the basic objective is shown to lead to an even more efficient update rule , which is also readily extended to nonlinear cases . the formulation of the pnmf objective is shown to be connected to a variety of existing nmf methods and clustering approaches . in addition , the derivation using lagrangian multipliers reveals the relation between reconstruction and sparseness . for kernel principal component analysis ( pca ) with the binary constraint , useful in graph partitioning problems , the nonlinear kernel pnmf provides a good approximation which outperforms an existing discretization approach . an empirical study on three real-world databases shows that story_separator_special_tag deep convolutional neural networks ( cnns ) have achieved great success in pattern recognition , such as recognizing the texts in images . but existing cnn-based frameworks still have several drawbacks : 1 ) the traditional pooling operation may lose important feature information and is unlearnable ; 2 ) the traditional convolution operation optimizes slowly and the hierarchical features from different layers are not fully utilized . in this work , we address these problems by developing a novel deep network model called fully-convolutional intensive feature flow neural network ( intensivenet ) . specifically , we design a further dense block called intensive block to extract the feature information , where the original inputs and two dense blocks are connected tightly . to encode data appropriately , we present the concepts of dense fusion block and further dense fusion operations for our new intensive block . by adding short connections to different layers , the feature flow and coupling between layers are enhanced . we also replace the traditional convolution by depthwise separable convolution to make the operation efficient . to prevent important feature information from being lost to a certain extent , we use a convolution operation story_separator_special_tag single image deraining is still a very challenging task due to its ill-posed nature in reality . recently , researchers have tried to fix this issue by training cnn-based end-to-end models , but they still can not extract the negative rain streaks from rainy images precisely , which usually leads to an over de-rained or under de-rained result . to handle this issue , this paper proposes a new coarse-to-fine single image deraining framework termed multi-stream hybrid deraining network ( shortly , mh-derainnet ) . to obtain the negative rain streaks during the training process more accurately , we present a new module named dual path residual dense block , i.e. , residual path and dense path . the residual path is used to reuse common features from the previous layers while the dense path can explore new features . in addition , to concatenate different scaled features , we also apply the idea of multi-stream with shortcuts between cascaded dual path residual dense block based streams .
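the dual path residual dense block described above combines a residual path ( feature reuse via addition ) with a dense path ( new features via concatenation ) . since the exact wiring is not given here , the following pytorch sketch is only a rough guess at such a unit ; channel counts , kernel sizes and activations are all assumptions rather than the mh-derainnet specification :

```python
import torch
import torch.nn as nn

class DualPathUnit(nn.Module):
    """one illustrative unit: a residual path that reuses features via
    addition, plus a dense path that explores new features and appends
    them via concatenation. hyperparameters are guesses."""
    def __init__(self, channels=64, growth=16):
        super().__init__()
        self.res_conv = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1))
        self.dense_conv = nn.Sequential(
            nn.Conv2d(channels, growth, 3, padding=1),
            nn.ReLU(inplace=True))

    def forward(self, x):
        res = x + self.res_conv(x)           # residual path: reuse
        new = self.dense_conv(x)             # dense path: explore
        return torch.cat([res, new], dim=1)  # channels grow by `growth`

x = torch.randn(1, 64, 32, 32)
print(DualPathUnit()(x).shape)  # torch.Size([1, 80, 32, 32])
```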
to obtain more distinct derained images , we combine the ssim loss and perceptual loss to preserve the per-pixel similarity as well as the global structures so that the deraining story_separator_special_tag similarity learning is a kind of machine learning algorithm that aims to measure the relevance between given objects . however , conventional similarity learning algorithms usually measure the distance between the entire given objects in the latent feature space . consequently , the obtained similarity scores only represent how close the entire given objects are , but are incapable of demonstrating which parts of them are similar to each other and how semantically similar they are . to address the above problems , in this paper , we propose a self-attention driven adversarial similarity learning network . discriminative self-attention weights are firstly assigned to different regions of the given objects . the similarity learning step measures the relevance between these self-attention weighted feature maps of given objects under various topic vectors . the topic vectors are conditioned to capture and preserve hidden semantic information within the data distribution by a generator-discriminator model with adversarial loss . this model aims to generate objects from topic vectors and propagates the difference between the generated and the real objects back to the similarity learning step , which forces the topic vectors to not only assign discriminative similarity scores to different object pairs but story_separator_special_tag zero-shot learning ( zsl ) is a challenging task due to the lack of unseen class data during training . existing works attempt to establish a mapping between the visual and class spaces through a common intermediate semantic space . the main limitation of existing methods is the strong bias towards seen classes , known as the domain shift problem , which leads to unsatisfactory performance in both conventional and generalized zsl tasks . to tackle this challenge , we propose to convert zsl to conventional supervised learning by generating features for unseen classes . to this end , a joint generative model that couples variational autoencoder ( vae ) and generative adversarial network ( gan ) , called zero-vae-gan , is proposed to generate high-quality unseen features . to enhance the class-level discriminability , an adversarial categorization network is incorporated into the joint framework . besides , we propose two self-training strategies to augment unlabeled unseen features for the transductive extension of our model , addressing the domain shift problem to a large extent . experimental results on five standard benchmarks and a large-scale dataset demonstrate the superiority of our generative model over the state-of-the-art methods for conventional story_separator_special_tag the choice of which clothes to wear affects how one is perceived , as well as constituting an expression of one's personal style . based on the recent advances in image-to-image translation by the conditional generative adversarial network ( cgan ) , we propose a new framework with a multidiscriminator by incorporating different types of conditional information into the discriminator of cgan for clothing matching . in contrast with most extant frameworks under cgan , with one generator and one discriminator , the proposed framework investigates the potential of utilizing conditional information delivered by multidiscriminators to guide the generator .
under this framework , we propose an attribute-gan with two discriminators and a category-attribute gan ( ca-gan ) with three discriminators . in order to evaluate the performance of our proposed models , we built a large-scale data set that consists of 19,081 pairs of collocation clothing images with 90 manually labeled attributes . experimental results demonstrate that with supervision of the additional attribute discriminator or category discriminator , the quality of the clothing images generated by gans is consistently improved in comparison with the state-of-the-art methods . story_separator_special_tag multi-view clustering aims at integrating complementary information from multiple heterogeneous views to improve clustering results . existing multi-view clustering solutions can only output a single clustering of the data . due to their multiplicity , multi-view data can have different groupings that are reasonable and interesting from different perspectives . however , how to find multiple , meaningful , and diverse clustering results from multi-view data is still a rarely studied and challenging topic in multi-view clustering and multiple clusterings . in this paper , we introduce a deep matrix factorization based solution ( dmclusts ) to discover multiple clusterings . dmclusts gradually factorizes multi-view data matrices into representational subspaces layer-by-layer and generates one clustering in each layer . to enforce the diversity between generated clusterings , it minimizes a new redundancy quantification term derived from the proximity between samples in these subspaces . we further introduce an iterative optimization procedure to simultaneously seek multiple clusterings with quality and diversity . experimental results on benchmark datasets confirm that dmclusts outperforms state-of-the-art multiple clustering solutions . story_separator_special_tag low-rank matrix factorization is one of the most useful tools in image representation and computer vision . among its techniques , concept factorization ( cf ) is a new matrix decomposition technique for data representation . a modified cf algorithm called sparse dual regularized concept factorization ( sdrcf ) is proposed for addressing the limitations of cf and local consistent concept factorization ( lccf ) , which did not consider the geometric structure or the label information of the data . sdrcf simultaneously preserves the intrinsic geometry of the data and the features as regularized terms , and preserves the sparse reconstructive relationship of the data . we also present sdrcf as an extension of cf and lccf . compared with non-negative matrix factorization ( nmf ) , graph nmf ( gnmf ) , cf and lccf , experimental results on the orl face database and the coil20 image database have shown that the proposed method achieves better clustering results . story_separator_special_tag concept factorization ( cf ) , as a variant of nonnegative matrix factorization ( nmf ) , has been widely used for learning compact representations of images because of its psychological and physiological interpretation of naturally occurring data . and graph regularization has been incorporated into the objective function of cf to exploit the intrinsic low-dimensional manifold structure , leading to better performance . but some shortcomings are shared by existing cf methods . 1 ) the squared loss used to measure the data reconstruction quality is sensitive to noise in image data .
2 ) the graph regularization may lead to trivial solutions and scale transfer problems for cf such that the learned representation is meaningless . 3 ) existing methods mostly ignore the discriminative information in image data . in this paper , we propose a novel method , called robust and discriminative concept factorization ( rdcf ) , for image representation . specifically , rdcf explicitly considers the influence of noise by imposing a sparse error matrix , and exploits the discriminative information by approximate orthogonal constraints which can also lead to a nontrivial solution . we propose an iterative multiplicative updating rule for the optimization of rdcf and story_separator_special_tag real-world datasets often have representations in multiple views or come from multiple sources . exploiting consistent or complementary information from multi-view data , multi-view clustering aims to get better clustering quality rather than relying on an individual view . in this paper , we propose a novel multi-view clustering method called multi-view concept clustering based on concept factorization with local manifold regularization , which drives a common consensus representation for multiple views . the local manifold regularization is incorporated into concept factorization to preserve the locally geometrical structure of the data space . moreover , the weight of each view is learnt automatically and a co-normalized approach is designed to make fusion meaningful in terms of driving the common consensus representation . an iterative optimization algorithm based on the multiplicative rules is developed to minimize the objective function . experimental results on nine real-world datasets involving different fields demonstrate that the proposed method performs better than several state-of-the-art multi-view clustering methods . story_separator_special_tag most existing multiview clustering methods require that graph matrices in different views are computed beforehand and that each graph is obtained independently . however , this requirement ignores the correlation between multiple views . in this letter , we tackle the problem of multiview clustering by jointly optimizing the graph matrix to make full use of the data correlation between views . with the inter-view correlation , a concept factorization-based multiview clustering method is developed for data integration , and the adaptive method correlates the affinity weights of all views . this method differs from nonnegative matrix factorization-based clustering methods in that it is applicable to data sets containing negative values . experiments are conducted to demonstrate the effectiveness of the proposed method in comparison with state-of-the-art approaches in terms of accuracy , normalized mutual information , and purity . story_separator_special_tag the concept factorization ( cf ) technique is one of the most powerful approaches for feature learning , and has been successfully adopted in a wide range of practical applications such as data mining , computer vision , and information retrieval . most existing concept factorization methods mainly minimize the squared euclidean distance , which is highly sensitive to non-gaussian noise or outliers in the data . to alleviate the adverse influence of this limitation , in this paper , a robust graph regularized concept factorization method , called correntropy based graph regularized concept factorization ( gccf ) , is proposed for clustering tasks .
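a useful reference point for the cf variants in this section is plain concept factorization , x ≈ x w v^t , whose multiplicative updates touch the data only through k = x^t x ; this is why cf admits kernelization and , with suitable handling of negative kernel entries , data containing negative values . a minimal numpy sketch , where initialization and rank are our own choices and a nonnegative kernel is assumed :

```python
import numpy as np

def concept_factorization(K, r=10, n_iter=300, eps=1e-9, seed=0):
    """plain cf: min ||X - X W V^T||_F^2 with W, V >= 0, written purely
    in terms of the gram/kernel matrix K = X^T X (n x n). these
    multiplicative updates assume K is nonnegative; kernels with
    negative entries need a positive/negative split, omitted here."""
    n = K.shape[0]
    rng = np.random.default_rng(seed)
    W, V = rng.random((n, r)), rng.random((n, r))
    for _ in range(n_iter):
        W *= (K @ V) / (K @ W @ (V.T @ V) + eps)
        V *= (K @ W) / (V @ (W.T @ K @ W) + eps)
    return W, V  # concept bases are X @ W; rows of V are the new codes

# usage on nonnegative toy data, where K = X^T X is itself nonnegative
X = np.random.default_rng(1).random((50, 80))
W, V = concept_factorization(X.T @ X, r=5)
```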
specifically , based on the maximum correntropy criterion ( mcc ) , gccf is derived by incorporating the graph structure information into our proposed objective function . a half-quadratic optimization technique is adopted to solve the non-convex objective function of the gccf method effectively . in addition , an algorithmic analysis of gccf is studied . extensive experiments on real world datasets demonstrate that the proposed gccf method outperforms seven competing methods for clustering applications . story_separator_special_tag concept factorization ( cf ) , as a matrix factorization method , has been applied widely in obtaining an optimal data representation and has yielded impressive results . however , some shortcomings exist in the existing cf method . 1 ) the standard concept factorization uses the squared loss function that is sensitive to outlier points and noise . 2 ) the graph generated by the original data does not reflect the real geometric structure of the data distribution . 3 ) the discriminant information is ignored . herein , we propose a novel method , called robust local learning and discriminative concept factorization ( rlldcf ) , for data representation . specifically , rlldcf adopts the $\ell_{2,1}$-norm-based loss function to improve its robustness against noise and outliers , and exploits the discriminative information by local linear regression constraints . in addition , the method obtains the topology structure of the data distribution during learning rather than assuming it is known a priori and fixed . a new iterative multiplicative updating rule is derived to solve rlldcf's objective function . the convergence of the optimization algorithm is proved both theoretically and empirically . numerous experiments on story_separator_special_tag in recent years , concept factorization methods have become a popular data representation technique in many real applications . however , conventional concept factorization methods can not capture the intrinsic geometric structure embedded in data using a fixed nearest neighbor graph . to overcome this problem , we propose a novel method , called concept factorization with optimal graph learning ( cf_ogl ) , for data representation . in cf_ogl , a novel rank constraint is imposed on the laplacian matrix of the initial graph model , which encourages the learned graph to have exactly c connected components for data with c clusters . then the learned optimal graph regularizer is integrated into the model of concept factorization . therefore , this learned structure is beneficial to the clustering analysis . in addition , we develop an efficient and effective iterative optimization algorithm to solve our proposed model . extensive experimental results on three benchmark datasets have demonstrated that our proposed method can effectively improve the performance of clustering . story_separator_special_tag non-negative matrix factorization ( nmf ) , known for learning parts-based representations , has become a data analysis tool for clustering tasks . it provides an alternative learning paradigm to cope with non-negative data clustering . in this paradigm , concept factorization ( cf ) and symmetric non-negative factorization ( symnmf ) are two important representative models . in general , they have distinct behaviors : in cf , each cluster is modeled as a linear combination of samples , and vice versa , i.e .
, sample reconstruction , while symnmf , built on a pair-wise sample similarity measure , aims to preserve the similarity of samples in a low-dimensional subspace , namely similarity reconstruction . in this paper , we propose a similarity-based concept factorization ( scf ) as a synthesis of the two behaviors . this design can be formulated as follows : the similarity of samples reconstructed by cf is close to that of the original samples . to optimize it , we develop an optimization algorithm which leverages the alternating direction method of multipliers ( admm ) to solve each sub-problem of scf . besides , we take a further step to consider the robustness issue of similarity reconstruction and explore a story_separator_special_tag concept factorization has attracted much attention in the past few years . to consider the manifold structure embedded in data , the graph regularizer is incorporated into the model of concept factorization . however , a single graph can not effectively model the intrinsic structure information of data . to solve this problem , a novel method , called structured discriminative concept factorization ( sdcf ) , is proposed to explore the intrinsic structure information of data . specifically , the proposed sdcf method incorporates both the local affinity and the distant repulsion constraints into the model of cf . moreover , an efficient optimization scheme based on a multiplicative update algorithm for the proposed sdcf method is developed . experimental results on benchmark datasets have validated the effectiveness of the proposed method . story_separator_special_tag matrix factorization techniques have been frequently applied in data representation and pattern recognition . one of them is concept factorization ( cf ) , which is a new matrix decomposition technique for data representation . in this paper , we propose a novel semi-supervised matrix factorization algorithm , called constrained graph concept factorization ( cgcf ) , which incorporates the label information as additional constraints . specifically , cgcf preserves the intrinsic geometry of data as a regularized term and uses the label information for semi-supervised learning ; it makes nearby samples with the same class label more compact , while nearby classes are separated . an efficient multiplicative updating procedure is provided along with a theoretical justification of the algorithmic convergence . compared with nmf , gnmf , cf , lccf and kmeans , experimental results on the orl and yale face databases have shown that the proposed method achieves better clustering results . story_separator_special_tag non-negative matrix factorization ( nmf ) and concept factorization ( cf ) have attracted great attention in pattern recognition and machine learning . however , nmf and cf do not explore the local manifold structure among high-dimensional data . in this paper , a novel semi-supervised matrix factorization method , called neighborhood preserving concept factorization ( npcf ) , is proposed . the npcf algorithm exploits local manifold structure information among the data by adding a neighborhood preserving regularization . this method makes full use of the local geometric structure information in concept factorization . therefore , the npcf algorithm has more discriminative ability than traditional concept factorization . meanwhile , the updating rules and the convergence proof of the npcf algorithm are provided in this paper .
experiments on benchmark data sets demonstrate that the proposed approach is more effective than other algorithms . story_separator_special_tag concept factorization ( cf ) has shown its great advantage for both clustering and data representation and is particularly useful for image representation . compared with nonnegative matrix factorization ( nmf ) , cf can be applied to data containing negative values . however , the performance of the cf method and its extensions degenerates considerably due to the negative effects of outliers , and cf is an unsupervised method that can not incorporate label information . in this article , we propose a novel cf method , with a novel model built on the maximum correntropy criterion ( mcc ) . in order to capture the local geometry information of data , our method integrates the robust adaptive embedding and cf into a unified framework . the label information is utilized in the adaptive learning process . furthermore , an iterative strategy based on the accelerated block coordinate update is proposed . the convergence property of the proposed method is analyzed to ensure that the algorithm converges to a reliable solution . the experimental results on four real-world image data sets show that the new method can almost always filter out the negative effects of the outliers story_separator_special_tag non-negative matrix factorization ( nmf ) is a dimensionality reduction approach for learning a parts-based and linear representation of non-negative data . it has attracted much attention for this reason . in practice , nmf not only neglects the manifold structure of data samples , but also overlooks the prior label information of different classes . in this paper , a novel matrix decomposition method called hyper-graph regularized constrained non-negative matrix factorization ( hcnmf ) is proposed for selecting differentially expressed genes and tumor sample classification . the advantage of hyper-graph learning is to capture local spatial information in high-dimensional data . this method incorporates a hyper-graph regularization constraint to consider the higher-order data sample relationships . the application of hyper-graph theory can effectively find pathogenic genes in cancer datasets . besides , the label information is further incorporated in the objective function to improve the discriminative ability of the decomposition matrix . supervised learning with label information greatly improves the classification performance . we also provide the iterative update rules and convergence proofs for the optimization problems of hcnmf . experiments on the cancer genome atlas ( tcga ) datasets confirm the superiority of the hcnmf algorithm compared story_separator_special_tag concept factorization ( cf ) improves nonnegative matrix factorization ( nmf ) , which can only be performed in the original data space , by conducting factorization within a proper kernel space where the structure of the data becomes much clearer than in the original data space . cf-based methods have been widely applied and yielded impressive results in optimal data representation and clustering tasks . however , cf methods still face the problem of proper kernel function design or selection in practice . most existing multiple kernel clustering ( mkc ) algorithms do not sufficiently consider the intrinsic neighborhood structure of base kernels . in this paper , we propose a novel discriminative multiple kernel concept factorization method for data representation and clustering .
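multiple kernel methods such as the one just introduced sidestep kernel selection by learning a combination of base kernels . the weight-learning step is method-specific and omitted here ; the sketch below only shows the generic scaffolding of building trace-normalized base kernels and combining them , with rbf bandwidths and uniform initial weights as assumptions :

```python
import numpy as np

def rbf_kernel(X, gamma):
    """gaussian (rbf) kernel on the columns of X (d x n)."""
    sq = np.sum(X**2, axis=0)
    D2 = np.maximum(sq[:, None] + sq[None, :] - 2.0 * (X.T @ X), 0.0)
    return np.exp(-gamma * D2)

def combine_kernels(kernels, weights):
    """convex combination of trace-normalized base kernels; learning
    the weights (e.g., alternating with the clustering objective) is
    method-specific and not shown."""
    combined = np.zeros_like(kernels[0])
    for K, w in zip(kernels, weights):
        combined += w * (K / np.trace(K))   # trace normalization
    return combined

X = np.random.default_rng(3).standard_normal((8, 40))
base = [rbf_kernel(X, g) for g in (0.01, 0.1, 1.0)]
K = combine_kernels(base, np.ones(3) / 3)   # uniform starting weights
```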
we first extend the original kernel concept factorization with the integration of a multiple kernel clustering framework to alleviate the problem of kernel selection . for each base kernel , we extract the local discriminant structure of data via the local discriminant models with global integration . moreover , we further linearly combine all these kernel-level local discriminant models to obtain an integrated consensus characterization of the intrinsic structure across base kernels . in this way , it story_separator_special_tag many areas of science depend on exploratory data analysis and visualization . the need to analyze large amounts of multivariate data raises the fundamental problem of dimensionality reduction : how to discover compact representations of high-dimensional data . here , we introduce locally linear embedding ( lle ) , an unsupervised learning algorithm that computes low-dimensional , neighborhood-preserving embeddings of high-dimensional inputs . unlike clustering methods for local dimensionality reduction , lle maps its inputs into a single global coordinate system of lower dimensionality , and its optimizations do not involve local minima . by exploiting the local symmetries of linear reconstructions , lle is able to learn the global structure of nonlinear manifolds , such as those generated by images of faces or documents of text . story_separator_special_tag concept factorization ( cf ) , as a popular matrix factorization technique , has recently attracted increasing attention in image clustering , due to the strong ability of dimension reduction and data representation . existing cf variants only consider the local structure of data , but ignore the global structure information embedded in data , which is crucial for data representation . to address the above issue , we propose an improved cf method , namely local and global regularized concept factorization ( lgcf ) , by considering the local and global structures simultaneously . specifically , the local geometric structure is depicted in lgcf via a hypergraph , which is capable of precisely capturing high-order geometrical information . in addition , to discover the global structure , we establish an unsupervised discriminant criterion , which characterizes the between-class scatter and the total scatter of the data with the help of latent features in lgcf . for the formulated lgcf , a multiplicative update rule is developed , and the convergence is rigorously proved . extensive experiments on several real image datasets demonstrate the superiority of the proposed method over the state-of-the-art methods in terms of clustering accuracy and story_separator_special_tag we study a number of open issues in spectral clustering : ( i ) selecting the appropriate scale of analysis , ( ii ) handling multi-scale data , ( iii ) clustering with irregular background clutter , and ( iv ) automatically finding the number of groups . we first propose that a 'local' scale should be used to compute the affinity between each pair of points . this local scaling leads to better clustering especially when the data includes multiple scales and when the clusters are placed within a cluttered background . we further suggest exploiting the structure of the eigenvectors to automatically infer the number of groups . this leads to a new algorithm in which the final randomly initialized k-means stage is eliminated . story_separator_special_tag
automatic facial beauty scoring in images is an emerging research topic in face-based biometrics . all existing methods adopt fully supervised schemes . we introduce the use of semisupervised learning schemes for solving the problem of face beauty scoring . the paper has two main contributions . first , instead of using fully supervised techniques , we show that graph-based score propagation methods can enrich model learning without the need for additional labeled face images . second , we propose a nonlinear flexible manifold embedding for solving the score propagation . this model can be used for transductive and inductive settings . the proposed semisupervised schemes were tested on three recent public datasets for face beauty analysis : scut-fbp , m2b , and scut-fbp5500 . these experiments , as well as many comparisons with supervised schemes , show that the nonlinear semisupervised scheme compares favorably with many supervised schemes . they also show that its performance in terms of prediction error and pearson correlation is better than that reported for the used datasets . story_separator_special_tag this paper is about a curious phenomenon . suppose we have a data matrix , which is the superposition of a low-rank component and a sparse component . can we recover each component individually ? we prove that under some suitable assumptions , it is possible to recover both the low-rank and the sparse components exactly by solving a very convenient convex program called principal component pursuit ; among all feasible decompositions , simply minimize a weighted combination of the nuclear norm and of the l1 norm . this suggests the possibility of a principled approach to robust principal component analysis since our methodology and results assert that one can recover the principal components of a data matrix even though a positive fraction of its entries are arbitrarily corrupted . this extends to the situation where a fraction of the entries are missing as well . we discuss an algorithm for solving this optimization problem , and present applications in the area of video surveillance , where our methodology allows for the detection of objects in a cluttered background , and in the area of face recognition , where it offers a principled way of removing shadows and specularities in story_separator_special_tag low crest-factor of excitation and response signals is desirable in transfer function measurements , since this allows the maximization of the signal-to-noise ratios ( snrs ) for given allowable amplitude ranges of the signals . the authors present a new crest-factor minimization algorithm for periodic signals with prescribed power spectrum . the algorithm is based on approximation of the nondifferentiable chebyshev ( minimax ) norm by l_p-norms with increasing values of p , and the calculations are accelerated by using ffts . several signals related by linear systems can also be compressed simultaneously . the resulting crest-factors are significantly better than those provided by earlier methods . it is shown that the peak value of a signal can be further decreased by allowing some extra energy at additional frequencies . story_separator_special_tag incomplete multi-view clustering ( imvc ) optimally fuses multiple pre-specified incomplete views to improve clustering performance .
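the principal component pursuit program discussed above , min ||a||_* + lambda ||e||_1 s.t. d = a + e , has a compact augmented-lagrangian solver built from two proximal operators . the sketch below follows the usual heuristics from the rpca literature for lambda and mu , but it is a didactic implementation rather than a tuned solver :

```python
import numpy as np

def soft(M, tau):
    """entrywise soft-thresholding: proximal operator of the l1 norm."""
    return np.sign(M) * np.maximum(np.abs(M) - tau, 0.0)

def svt(M, tau):
    """singular value thresholding: proximal operator of the nuclear norm."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ (np.maximum(s - tau, 0.0)[:, None] * Vt)

def pcp(D, n_iter=500):
    """principal component pursuit for D = A + E via an augmented-
    lagrangian iteration; lam = 1/sqrt(max(m, n)) and the mu heuristic
    follow common choices in the rpca literature."""
    m, n = D.shape
    lam = 1.0 / np.sqrt(max(m, n))
    mu = 0.25 * m * n / max(np.abs(D).sum(), 1e-12)
    A = np.zeros_like(D); E = np.zeros_like(D); Y = np.zeros_like(D)
    for _ in range(n_iter):
        A = svt(D - E + Y / mu, 1.0 / mu)      # low-rank update
        E = soft(D - A + Y / mu, lam / mu)     # sparse update
        Y = Y + mu * (D - A - E)               # multiplier update
    return A, E
```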
among various excellent solutions , the recently proposed multiple kernel k-means with incomplete kernels ( mkkm-ik ) forms a benchmark , which redefines imvc as a joint optimization problem where the clustering and kernel matrix imputation tasks are alternately performed until convergence . though it demonstrates promising performance in various applications , we observe that the manner of kernel matrix imputation in mkkm-ik would incur intensive computational and storage complexities , overcomplicated optimization and limited improvement in clustering performance . in this paper , we propose an efficient and effective incomplete multi-view clustering ( ee-imvc ) algorithm to address these issues . instead of completing the incomplete kernel matrices , ee-imvc proposes to impute each incomplete base matrix generated by incomplete views with a learned consensus clustering matrix . we carefully develop a three-step iterative algorithm to solve the resultant optimization problem with linear computational complexity and theoretically prove its convergence . further , we conduct comprehensive experiments to study the proposed ee-imvc in terms of clustering accuracy , running time , evolution of the learned consensus clustering matrix and the convergence . as indicated story_separator_special_tag with the popularity of multimedia technology , information is always represented or transmitted from multiple views . most of the existing algorithms are graph-based ones that learn the complex structures within multiview data but overlook the information within data representations . furthermore , many existing works treat multiple views discriminatively by introducing some hyperparameters , which is undesirable in practice . to this end , abundant multiview based methods have been proposed for dimension reduction . however , there is still no research that leverages the existing work into a unified framework . to address this issue , in this paper , we propose a general framework for multiview data dimension reduction , named kernelized multiview subspace analysis ( kmsa ) . it directly handles the multi-view feature representation in the kernel space , which provides a feasible channel for direct manipulations on multiview data with different dimensions . meanwhile , compared with those graph-based methods , kmsa can fully exploit information from multiview data with nothing to lose . furthermore , since different views have different influences on kmsa , we propose a self-weighted strategy to treat different views discriminatively according to their contributions . a co-regularized term is story_separator_special_tag an important underlying assumption that guides the success of the existing multiview learning algorithms is the full observation of the multiview data . however , such a rigorous precondition clearly violates common-sense knowledge in practical applications , where in most cases , only incomplete fractions of the multiview data are given . the presence of incomplete settings generally disables conventional multiview clustering methods . in this article , we propose a simple but effective incomplete multiview clustering ( imc ) framework , which simultaneously considers the local geometric information and the unbalanced discriminating powers of these incomplete multiview observations .
specifically , a novel graph-regularized matrix factorization model , on the one hand , is developed to preserve the local geometric similarities of the learned common representations from different views . on the other hand , the semantic consistency constraint is introduced to stimulate these view-specific representations toward a unified discriminative representation . moreover , the importance of different views is adaptively determined to reduce the negative influence of the unbalanced incomplete views . furthermore , an efficient learning algorithm is proposed to solve the resulting optimization problem . extensive experimental results performed on several incomplete multiview datasets story_separator_special_tag an approach to semi-supervised learning is proposed that is based on a gaussian random field model . labeled and unlabeled data are represented as vertices in a weighted graph , with edge weights encoding the similarity between instances . the learning problem is then formulated in terms of a gaussian random field on this graph , where the mean of the field is characterized in terms of harmonic functions , and is efficiently obtained using matrix methods or belief propagation . the resulting learning algorithms have intimate connections with random walks , electric networks , and spectral graph theory . we discuss methods to incorporate class priors and the predictions of classifiers obtained by supervised learning . we also propose a method of parameter learning by entropy minimization , and show the algorithm's ability to perform feature selection . promising experimental results are presented for synthetic data , digit classification , and text classification tasks . story_separator_special_tag we propose a robust inductive semi-supervised label prediction model over the embedded representation , termed adaptive embedded label propagation with weight learning ( aelp-wl ) , for classification . aelp-wl offers several properties . first , our method seamlessly integrates the robust adaptive embedded label propagation with adaptive weight learning into a unified framework . by minimizing the reconstruction errors over embedded features and embedded soft labels jointly , our aelp-wl can explicitly ensure that the learned weights are jointly optimal for representation and classification , which differs from most existing lp models that perform weight learning separately by an independent step before label prediction . second , existing models usually precalculate the weights over the original samples that may contain unfavorable features and noise that decrease performance . to this end , our model adds a constraint that decomposes the original data into a sparse component encoding embedded noise-removed sparse representations of samples and a sparse error part fitting noise , and then performs the adaptive weight learning over the embedded sparse representations . third , our aelp-wl computes the projected soft labels by trading off the manifold smoothness and label fitness errors over the adaptive weights and the embedded representations for story_separator_special_tag the graph-based semisupervised label propagation ( lp ) algorithm has delivered impressive classification results . however , the estimated soft labels typically contain mixed signs and noise , which cause inaccurate predictions due to the lack of suitable constraints .
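the gaussian random field formulation summarized earlier in this passage has a closed-form solution : the harmonic function on the graph interpolates the labeled values , giving f_u = ( d_uu - w_uu )^-1 w_ul y_l for the unlabeled points . a minimal numpy sketch , assuming the affinity matrix w is precomputed , the labeled points are ordered first , and every unlabeled point is connected to the labeled set :

```python
import numpy as np

def harmonic_label_propagation(W, y_l):
    """harmonic-function solution in the style of gaussian random
    fields. W is an (n x n) symmetric affinity matrix with the l
    labeled points ordered first; y_l is (l x c) one-hot labels.
    returns soft labels f_u for the n - l unlabeled points."""
    l = y_l.shape[0]
    L = np.diag(W.sum(axis=1)) - W     # unnormalized graph laplacian
    L_uu = L[l:, l:]                   # unlabeled-unlabeled block
    W_ul = W[l:, :l]                   # unlabeled-labeled block
    f_u = np.linalg.solve(L_uu, W_ul @ y_l)
    return f_u                         # row-argmax gives the classes
```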
moreover , the available methods typically calculate the weights and estimate the labels in the original input space , which typically contains noise and corruption . thus , the encoded similarities and manifold smoothness may be inaccurate for label estimation . in this article , we present effective schemes for resolving these issues and propose a novel and robust semisupervised classification algorithm , namely the triple matrix recovery-based robust auto-weighted label propagation framework ( alp-tmr ) . our alp-tmr introduces a tmr mechanism to remove noise or mixed signs from the estimated soft labels and improve the robustness to noise and outliers in the steps of assigning weights and predicting the labels simultaneously . our method can jointly recover the underlying clean data , clean labels , and clean weighting spaces by decomposing the original data , predicted soft labels , or weights into a clean part plus an error part by fitting noise . in addition , alp-tmr integrates story_separator_special_tag kernel methods have been successfully applied to the areas of pattern recognition and data mining . in this paper , we mainly discuss the issue of propagating labels in kernel space . a kernel-induced label propagation ( kernel-lp ) framework by mapping is proposed for high-dimensional data classification using the most informative patterns of data in kernel space . the essence of kernel-lp is to perform joint label propagation and adaptive weight learning in a transformed kernel space . that is , our kernel-lp changes the task of label propagation from the commonly-used euclidean space in most existing work to kernel space . the motivation of our kernel-lp is to propagate labels and learn the adaptive weights jointly under the assumption of an inner product space of inputs , i.e. , the original linearly inseparable inputs may be mapped to be separable in kernel space . kernel-lp is based on the existing positive and negative lp model , i.e. , the effects of negative label information are integrated to improve the label prediction power . also , kernel-lp performs adaptive weight construction over the same kernel space , so it can avoid the tricky process of choosing the optimal neighborhood size suffered story_separator_special_tag a novel semi-supervised learning approach is proposed based on a linear neighborhood model , which assumes that each data point can be linearly reconstructed from its neighborhood . our algorithm , named linear neighborhood propagation ( lnp ) , can propagate the labels from the labeled points to the whole dataset using these linear neighborhoods with sufficient smoothness . we also derive an easy way to extend lnp to out-of-sample data . promising experimental results are presented for synthetic data , digit and text classification tasks . story_separator_special_tag compared with supervised learning for feature selection , it is much more difficult to select the discriminative features in unsupervised learning due to the lack of label information . traditional unsupervised feature selection algorithms usually select the features which best preserve the data distribution , e.g. , manifold structure , of the whole feature set . under the assumption that the class label of input data can be predicted by a linear classifier , we incorporate discriminative analysis and l2,1-norm minimization into a joint framework for unsupervised feature selection .
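the l2,1-norm minimization mentioned above ties feature selection to row sparsity : rows of the learned projection that the penalty drives toward zero correspond to discarded features . as a generic stand-in for such models ( not the specific joint framework above ) , the sketch below solves an l2,1-regularized regression by iteratively reweighted least squares and ranks features by row norms :

```python
import numpy as np

def l21_regression(X, Y, gamma=1.0, n_iter=50, eps=1e-8):
    """iteratively reweighted least squares for
    min_W ||X^T W - Y||_F^2 + gamma * ||W||_{2,1},
    with X (d x n) and Y (n x c); a generic solver pattern for
    row-sparse projections, not a specific paper's algorithm."""
    d = X.shape[0]
    g = np.ones(d)                       # reweighting diagonal entries
    W = np.zeros((d, Y.shape[1]))
    for _ in range(n_iter):
        W = np.linalg.solve(X @ X.T + gamma * np.diag(g), X @ Y)
        g = 1.0 / (2.0 * np.maximum(np.linalg.norm(W, axis=1), eps))
    return W

def select_features(W, k):
    """keep the k features whose rows of W have the largest l2 norms;
    rows shrunk to zero by the l2,1 penalty are discarded."""
    return np.argsort(-np.linalg.norm(W, axis=1))[:k]
```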
different from existing unsupervised feature selection algorithms , our algorithm selects the most discriminative feature subset from the whole feature set in batch mode . extensive experiments on different data types demonstrate the effectiveness of our algorithm . story_separator_special_tag this paper proposes an enhanced semi-supervised classification approach termed nonnegative sparse neighborhood propagation ( sparsenp ) that improves the existing neighborhood propagation , whose outputted soft labels can not be ensured to be sufficiently sparse , discriminative , robust to noise and probabilistic . note that the sparse property and strong discriminating ability of predicted labels are important , since ideally the soft label of each sample should have only one or a few positive elements ( that is , fewer unfavorable mixed signs are included ) deciding its class assignment . to reduce the negative effects of unfavorable mixed signs on the learning performance , we impose l2,1-norm regularization on the soft labels during optimization for enhancing the prediction results . the non-negativity and sum-to-one constraints are also included to ensure the outputted labels are probabilistic values . the proposed framework is solved in an alternating manner for delivering a more reliable solution so that the accuracy can be improved . simulations show that satisfactory results can be obtained by the proposed sparsenp compared with other related approaches . story_separator_special_tag feature selection has aroused considerable research interest during the last few decades . traditional learning-based feature selection methods separate embedding learning and feature ranking . in this paper , we propose a novel unsupervised feature selection framework , termed joint embedding learning and sparse regression ( jelsr ) , in which the embedding learning and sparse regression are jointly performed . specifically , the proposed jelsr joins embedding learning with sparse regression to perform feature selection . to show the effectiveness of the proposed framework , we also provide a method using the weights via local linear approximation and adding the $\ell_{2,1}$-norm regularization , and design an effective algorithm to solve the corresponding optimization problem . furthermore , we also conduct some insightful discussion on the proposed feature selection approach , including the convergence analysis , computational complexity , and parameter determination . in all , the proposed framework not only provides a new perspective to view traditional methods but also evokes further research on feature selection . compared with traditional unsupervised feature selection methods , our approach could integrate the merits of embedding learning and sparse regression . promising story_separator_special_tag while labeled data is expensive to prepare , ever-increasing amounts of unlabeled data are becoming widely available . in order to adapt to this phenomenon , several semi-supervised learning ( ssl ) algorithms , which learn from labeled as well as unlabeled data , have been developed . in a separate line of work , researchers have started to realize that graphs provide a natural way to represent data in a variety of domains .
graph-based ssl algorithms , which bring together these two lines of work , have been shown to outperform the state-of-the-art in many applications in speech processing , computer vision , natural language processing , and other areas of artificial intelligence . recognizing this promising and emerging area of research , this synthesis lecture focuses on graph-based ssl algorithms ( e.g. , label propagation methods ) . our hope is that after reading this book , the reader will walk away with the following : ( 1 ) an in-depth knowledge of the current state-of-the-art in graph-based ssl algorithms , and the ability to implement them ; ( 2 ) the ability to decide on the suitability of graph-based ssl methods for a problem ; story_separator_special_tag in the field of machine learning , semi-supervised learning ( ssl ) occupies the middle ground , between supervised learning ( in which all training examples are labeled ) and unsupervised learning ( in which no label data are given ) . interest in ssl has increased in recent years , particularly because of application domains in which unlabeled data are plentiful , such as images , text , and bioinformatics . this first comprehensive overview of ssl presents state-of-the-art algorithms , a taxonomy of the field , selected applications , benchmark experiments , and perspectives on ongoing and future research . semi-supervised learning first presents the key assumptions and ideas underlying the field : smoothness , cluster or low-density separation , manifold structure , and transduction . the core of the book is the presentation of ssl methods , organized according to algorithmic strategies . after an examination of generative models , the book describes algorithms that implement the low-density separation assumption , graph-based methods , and algorithms that perform two-step learning . the book then discusses ssl applications and offers guidelines for ssl practitioners by analyzing the results of extensive benchmark experiments . finally , the book looks story_separator_special_tag hashing has recently sparked a great revolution in cross-modal retrieval because of its low storage cost and high query speed . recent cross-modal hashing methods often learn unified or equal-length hash codes to represent the multi-modal data and make them intuitively comparable . however , such unified or equal-length hash representations could inherently sacrifice their representation scalability because the data from different modalities may not have one-to-one correspondence and could be encoded more efficiently by different hash codes of unequal lengths . to mitigate these problems , this paper explores a related and relatively unexplored problem : encode the heterogeneous data with varying hash lengths and generalize the cross-modal retrieval in various challenging scenarios . to this end , a generalized and flexible cross-modal hashing framework , termed matrix tri-factorization hashing ( mtfh ) , is proposed to work seamlessly in various settings including paired or unpaired multi-modal data , and equal or varying hash length encoding scenarios . more specifically , mtfh exploits an efficient objective function to flexibly learn the modality-specific hash codes with different length settings , while synchronously learning two semantic correlation matrices to semantically correlate the different hash representations and make the heterogeneous data comparable .
as story_separator_special_tag learning graphs from data automatically has shown encouraging performance on clustering and semisupervised learning tasks . however , real data are often corrupted , which may cause the learned graph to be inexact or unreliable . in this paper , we propose a novel robust graph learning scheme to learn reliable graphs from the real-world noisy data by adaptively removing noise and errors in the raw data . we show that our proposed model can also be viewed as a robust version of manifold regularized robust principal component analysis ( rpca ) , where the quality of the graph plays a critical role . the proposed model is able to boost the performance of data clustering , semisupervised classification , and data recovery significantly , primarily due to two key factors : 1 ) enhanced low-rank recovery by exploiting the graph smoothness assumption and 2 ) improved graph construction by exploiting clean data recovered by rpca . thus , it boosts the clustering , semisupervised classification , and data recovery performance overall . extensive experiments on image/document clustering , object recognition , image shadow removal , and video background subtraction reveal that our model outperforms the previous state-of-the-art methods . story_separator_special_tag the use of multiple features has been shown to be an effective strategy for visual tracking because of their complementary contributions to appearance modeling . the key problem is how to learn a fused representation from multiple features for appearance modeling . different features extracted from the same object should share some commonalities in their representations while each feature should also have some feature-specific representation patterns which reflect its complementarity in appearance modeling . different from existing multi-feature sparse trackers which only consider the commonalities among the sparsity patterns of multiple features , this paper proposes a novel multiple sparse representation framework for visual tracking which jointly exploits the shared and feature-specific properties of different features by decomposing multiple sparsity patterns . moreover , we introduce a novel online multiple metric learning to efficiently and adaptively incorporate the appearance proximity constraint , which ensures that the learned commonalities of multiple features are more representative . experimental results on tracking benchmark videos and other challenging videos demonstrate the effectiveness of the proposed tracker . story_separator_special_tag visual tracking using multiple features has been proved to be a robust approach because features can complement each other . since different types of variations such as illumination , occlusion , and pose may occur in a video sequence , especially long sequence videos , how to properly select and fuse appropriate features has become one of the key problems in this approach . to address this issue , this paper proposes a new joint sparse representation model for robust feature-level fusion . the proposed method dynamically removes unreliable features to be fused for tracking by using the advantages of sparse representation . in order to capture the non-linear similarity of features , we extend the proposed method into a general kernelized framework , which is able to perform feature fusion in various kernel spaces . as a result , robust tracking performance is obtained .
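an aside on the rpca-style decomposition used by the robust graph learning model above : its inner updates reduce to two proximal operators , singular-value thresholding for the low-rank part and elementwise soft-thresholding for the sparse error . the following numpy sketch is a stripped-down alternating scheme ( no lagrange multiplier updates ) , for illustration only .

```python
import numpy as np

def soft_threshold(X, tau):
    """Elementwise shrinkage: prox of tau * ||X||_1 (sparse error update)."""
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

def svd_threshold(X, tau):
    """Singular-value thresholding: prox of tau * ||X||_* (low-rank update)."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def rpca_sketch(M, lam=None, mu=1.0, iters=100):
    """Tiny alternating sketch of M ~ L (low rank) + S (sparse); the full
    method uses augmented Lagrangian updates omitted here for brevity."""
    lam = lam or 1.0 / np.sqrt(max(M.shape))
    L = np.zeros_like(M)
    S = np.zeros_like(M)
    for _ in range(iters):
        L = svd_threshold(M - S, 1.0 / mu)
        S = soft_threshold(M - L, lam / mu)
    return L, S
```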
both the qualitative and quantitative experimental results on publicly available videos show that the proposed fusion tracker outperforms both sparse representation-based and fusion-based trackers . story_separator_special_tag consensus clustering provides a framework to ensemble multiple clustering results to obtain a consensus and robust result . most existing consensus clustering methods usually apply all data to ensemble learning , while ignoring the side effects caused by some difficult or unreliable instances . to tackle this problem , we propose a novel self-paced consensus clustering method to gradually involve instances from more reliable to less reliable ones into the ensemble learning . we first construct an initial bipartite graph from the multiple base clustering results , where the nodes represent the instances and clusters and the edges indicate that an instance belongs to a cluster . then , we learn a structured bipartite graph from the initial one by self-paced learning , i.e. , we automatically decide the reliability of each edge and involve the edges in graph learning in order of their reliability . at last , we obtain the final consensus clustering result from the learned bipartite graph . the extensive experimental results demonstrate the effectiveness and superiority of the proposed method . story_separator_special_tag recent work has shown that convolutional networks can be substantially deeper , more accurate , and efficient to train if they contain shorter connections between layers close to the input and those close to the output . in this paper , we embrace this observation and introduce the dense convolutional network ( densenet ) , which connects each layer to every other layer in a feed-forward fashion . whereas traditional convolutional networks with l layers have l connections , one between each layer and its subsequent layer , our network has l ( l+1 ) /2 direct connections . for each layer , the feature-maps of all preceding layers are used as inputs , and its own feature-maps are used as inputs into all subsequent layers . densenets have several compelling advantages : they alleviate the vanishing-gradient problem , strengthen feature propagation , encourage feature reuse , and substantially reduce the number of parameters . we evaluate our proposed architecture on four highly competitive object recognition benchmark tasks ( cifar-10 , cifar-100 , svhn , and imagenet ) . densenets obtain significant improvements over the state-of-the-art on most of them , whilst requiring less story_separator_special_tag deeper neural networks are more difficult to train . we present a residual learning framework to ease the training of networks that are substantially deeper than those used previously . we explicitly reformulate the layers as learning residual functions with reference to the layer inputs , instead of learning unreferenced functions . we provide comprehensive empirical evidence showing that these residual networks are easier to optimize , and can gain accuracy from considerably increased depth . on the imagenet dataset we evaluate residual nets with a depth of up to 152 layers , 8x deeper than vgg nets but still having lower complexity . an ensemble of these residual nets achieves 3.57 % error on the imagenet test set . this result won the 1st place on the ilsvrc 2015 classification task . we also present analysis on cifar-10 with 100 and 1000 layers .
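the residual reformulation just described is compact in code : a block computes f ( x ) and adds the identity shortcut back in . below is a minimal pytorch sketch ( assuming torch is available ; channel counts and the toy input are illustrative ) .

```python
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    """Basic block: out = relu(F(x) + x), with F = conv-bn-relu-conv-bn."""
    def __init__(self, channels):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(channels)

    def forward(self, x):
        out = torch.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return torch.relu(out + x)   # identity shortcut carries the input

x = torch.randn(2, 64, 32, 32)
print(ResidualBlock(64)(x).shape)    # torch.Size([2, 64, 32, 32])
```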
the depth of representations is of central importance for many visual recognition tasks . solely due to our extremely deep representations , we obtain a 28 % relative improvement on the coco object detection dataset . deep residual nets are foundations of our submissions to ilsvrc & coco 2015 competitions , where we also won story_separator_special_tag convolutional neural network ( cnn ) -based encoder-decoder models have profoundly inspired recent works in the field of salient object detection ( sod ) . with the rapid development of encoder-decoder models with respect to most pixel-level dense prediction tasks , an empirical study still does not exist that evaluates performance by applying a large body of encoder-decoder models to sod tasks . in this paper , instead of limiting our survey to sod methods , a broader view is further presented from the perspective of fundamental architectures of key modules and structures in cnn-based encoder-decoder models for pixel-level dense prediction tasks . moreover , we focus on performing sod by leveraging deep encoder-decoder models , and present an extensive empirical study on baseline encoder-decoder models in terms of different encoder backbones , loss functions , training batch sizes , and attention structures . furthermore , state-of-the-art encoder-decoder models adopted from semantic segmentation and deep cnn-based sod models are also investigated . new baseline models that outperform the state of the art were discovered . in addition , these newly discovered baseline models were further evaluated on three video-based sod benchmark datasets . experimental results demonstrate the effectiveness of these baseline story_separator_special_tag deep neural networks ( dnn ) have achieved state-of-the-art results in a wide range of tasks , with the best results obtained with large training sets and large models . in the past , gpus enabled these breakthroughs because of their greater computational speed . in the future , faster computation at both training and test time is likely to be crucial for further progress and for consumer applications on low-power devices . as a result , there is much interest in research and development of dedicated hardware for deep learning ( dl ) . binary weights , i.e. , weights which are constrained to only two possible values ( e.g . -1 or 1 ) , would bring great benefits to specialized dl hardware by replacing many multiply-accumulate operations by simple accumulations , as multipliers are the most space and power-hungry components of the digital implementation of neural networks . we introduce binaryconnect , a method which consists in training a dnn with binary weights during the forward and backward propagations , while retaining precision of the stored weights in which gradients are accumulated . like other dropout schemes , we show that binaryconnect acts as a regularizer and we story_separator_special_tag a very deep convolutional neural network ( cnn ) has recently achieved great success for image super-resolution ( sr ) and offered hierarchical features as well . however , most deep cnn based sr models do not make full use of the hierarchical features from the original low-resolution ( lr ) images , thereby achieving relatively low performance . in this paper , we propose a novel residual dense network ( rdn ) to address this problem in image sr. we fully exploit the hierarchical features from all the convolutional layers .
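a short aside on the binaryconnect scheme described above : binary weights are used only inside the forward and backward passes , while gradients accumulate in real-valued weights that are clipped to [ -1 , 1 ] . the toy numpy sketch below applies the idea to a linear model ; the data , learning rate , and squared loss are illustrative assumptions .

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(256, 20))
w_true = rng.choice([-1.0, 1.0], size=20)
y = X @ w_true + 0.1 * rng.normal(size=256)

w_real = rng.normal(scale=0.1, size=20)       # real-valued accumulator
lr = 0.01
for _ in range(500):
    w_bin = np.sign(w_real)                   # binarize for forward/backward
    grad = X.T @ (X @ w_bin - y) / len(y)     # gradient through binary weights
    w_real -= lr * grad                       # ...applied to the real weights
    w_real = np.clip(w_real, -1.0, 1.0)       # keep the accumulator bounded

print(np.mean(np.sign(w_real) == w_true))     # recovered sign pattern
```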
specifically , for rdn we propose the residual dense block ( rdb ) to extract abundant local features via densely connected convolutional layers . rdb further allows direct connections from the state of the preceding rdb to all the layers of the current rdb , leading to a contiguous memory ( cm ) mechanism . local feature fusion in rdb is then used to adaptively learn more effective features from preceding and current local features and stabilize the training of the wider network . after fully obtaining dense local features , we use global feature fusion to jointly and adaptively learn global hierarchical features in a holistic way . experiments on benchmark datasets with different story_separator_special_tag this paper tackles the problem of recognizing characters in images of natural scenes . in particular , we focus on recognizing characters in situations that would traditionally not be handled well by ocr techniques . we present an annotated database of images containing english and kannada characters . the database comprises images of street scenes taken in bangalore , india using a standard camera . the problem is addressed in an object categorization framework based on a bag-of-visual-words representation . we assess the performance of various features based on nearest neighbour and svm classification . it is demonstrated that the performance of the proposed method , using as few as 15 training images , can be far superior to that of commercial ocr systems . furthermore , the method can benefit from synthetically generated training data obviating the need for expensive data collection and annotation . story_separator_special_tag we present a neural network-based face detection system . a retinally connected neural network examines small windows of an image and decides whether each window contains a face . the system arbitrates between multiple networks to improve performance over a single network . we use a bootstrap algorithm for training the networks , which adds false detections into the training set as training progresses . this eliminates the difficult task of manually selecting non-face training examples , which must be chosen to span the entire space of non-face images . comparisons with other state-of-the-art face detection systems are presented ; our system has better performance in terms of detection and false-positive rates . story_separator_special_tag content-based image retrieval ( cbir ) is the challenging task of retrieving similar images from large databases . in this work , we present experimental analysis based on scale invariant feature transform ( sift ) and speeded up robust features ( surf ) as local features for multi-object image retrieval . the experiments are conducted on a database consisting of groups of images obtained by aggregating two objects , three objects , four objects and five objects together from the columbia object image library ( coil-100 ) image dataset of ten categories . for the query image , features are extracted using sift or surf . these features of the query image are compared with corresponding features stored in the database for similarity retrieval . the precision , recall and f-measure values are computed for each group and results are tabulated . the experimental results show that the accuracy of retrieval with surf is better than with the sift method . story_separator_special_tag reducing the dimensionality of data without losing intrinsic information is an important preprocessing step in high-dimensional data analysis .
fisher discriminant analysis ( fda ) is a traditional technique for supervised dimensionality reduction , but it tends to give undesired results if samples in a class are multimodal . an unsupervised dimensionality reduction method called locality-preserving projection ( lpp ) can work well with multimodal data due to its locality preserving property . however , since lpp does not take the label information into account , it is not necessarily useful in supervised learning scenarios . in this paper , we propose a new linear supervised dimensionality reduction method called local fisher discriminant analysis ( lfda ) , which effectively combines the ideas of fda and lpp . lfda has an analytic form of the embedding transformation and the solution can be easily computed just by solving a generalized eigenvalue problem . we demonstrate the practical usefulness and high scalability of the lfda method in data visualization and classification tasks through extensive simulation studies . we also show that lfda can be extended to non-linear dimensionality reduction scenarios by applying the kernel trick . story_separator_special_tag the problem of distributed representation learning is one in which multiple sources of information $x_1, \ldots, x_k$ are processed separately so as to learn as much information as possible about some ground truth $y$ . we investigate this problem from information-theoretic grounds , through a generalization of tishby 's centralized information bottleneck ( ib ) method to the distributed setting . specifically , $k$ encoders , $k \geq 2$ , compress their observations $x_1, \ldots, x_k$ separately in a manner such that , collectively , the produced representations preserve as much information as possible about $y$ . we study both discrete memoryless ( dm ) and memoryless vector gaussian data models . for the discrete model , we establish a single-letter characterization of the optimal tradeoff between complexity ( or rate ) and relevance ( or information ) for a class of memoryless sources ( the observations $x_1, \ldots, x_k$ being conditionally independent given story_separator_special_tag object recognition has reached a level where we can identify a large number of previously seen and known objects . however , the more challenging and important task of categorizing previously unseen objects remains largely unsolved . traditionally , contour and shape based methods are regarded most adequate for handling the generalization requirements needed for this task . appearance based methods , on the other hand , have been successful in object identification and detection scenarios . today little work is done to systematically compare existing methods and characterize their relative capabilities for categorizing objects . in order to compare different methods we present a new database specifically tailored to the task of object categorization . it contains high-resolution color images of 80 objects from 8 different categories , for a total of 3280 images . it is used to analyze the performance of several appearance and contour based methods . the best categorization result is obtained by an appropriate combination of different methods . story_separator_special_tag multiview representation learning ( mvrl ) leverages information from multiple views to obtain a common representation summarizing the consistency and complementarity in multiview data .
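circling back to lfda above : the solution is obtained from a generalized eigenvalue problem on locality-weighted scatter matrices , which scipy solves directly . the sketch below is a simplified lfda-flavored variant ; the knn indicator affinity and the small ridge regularizer are assumptions of this sketch , and the original method uses a more careful class-size-aware weighting .

```python
import numpy as np
from scipy.linalg import eigh

def lfda_like(X, y, dim=2, knn=5):
    """Simplified LFDA-style embedding: locality-weighted within/between
    scatters, then the generalized eigenproblem Sb v = lam Sw v."""
    n = len(X)
    d2 = ((X[:, None] - X[None]) ** 2).sum(-1)
    nn = np.argsort(d2, 1)[:, 1:knn + 1]
    A = np.zeros((n, n))
    for i, js in enumerate(nn):
        A[i, js] = 1.0                       # kNN indicator affinity
    A = np.maximum(A, A.T)
    same = (y[:, None] == y[None, :]).astype(float)
    Ww = A * same                            # local, same-class pairs
    Wb = 1.0 - same                          # different-class pairs

    def scatter(W):
        D = np.diag(W.sum(1))
        return X.T @ (D - W) @ X             # graph-Laplacian scatter

    Sw, Sb = scatter(Ww), scatter(Wb)
    _, vecs = eigh(Sb, Sw + 1e-6 * np.eye(X.shape[1]))
    return X @ vecs[:, ::-1][:, :dim]        # top generalized eigenvectors
```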
most previous matrix factorization-based mvrl methods are shallow models that neglect the complex hierarchical information . the recently proposed deep multiview factorization models can not explicitly capture consistency and complementarity in multiview data . we present the deep multiview concept learning ( dmcl ) method , which hierarchically factorizes the multiview data , and tries to explicitly model consistent and complementary information and capture semantic structures at the highest abstraction level . we explore two variants of the dmcl framework , dmcl-l and dmcl-n , with respectively linear/nonlinear transformations between adjacent layers . we propose two block coordinate descent-based optimization methods for dmcl-l and dmcl-n. we verify the effectiveness of dmcl on three real-world data sets for both clustering and classification tasks . story_separator_special_tag between october 2000 and december 2000 , we collected a database of over 40,000 facial images of 68 people . using the cmu ( carnegie mellon university ) 3d room , we imaged each person across 13 different poses , under 43 different illumination conditions , and with four different expressions . we call this database the cmu pose , illumination and expression ( pie ) database . in this paper , we describe the imaging hardware , the collection procedure , the organization of the database , several potential uses of the database , and how to obtain the database . story_separator_special_tag we propose a novel document clustering method which aims to cluster the documents into different semantic classes . the document space is generally of high dimensionality and clustering in such a high dimensional space is often infeasible due to the curse of dimensionality . by using locality preserving indexing ( lpi ) , the documents can be projected into a lower-dimensional semantic space in which the documents related to the same semantics are close to each other . different from previous document clustering methods based on latent semantic indexing ( lsi ) or nonnegative matrix factorization ( nmf ) , our method tries to discover both the geometric and discriminating structures of the document space . theoretical analysis of our method shows that lpi is an unsupervised approximation of the supervised linear discriminant analysis ( lda ) method , which gives the intuitive motivation of our method . extensive experimental evaluations are performed on the reuters-21578 and tdt2 data sets . story_separator_special_tag in supervised learning scenarios , feature selection has been studied widely in the literature . selecting features in unsupervised learning scenarios is a much harder problem , due to the absence of class labels that would guide the search for relevant information . and , almost all of previous unsupervised feature selection methods are `` wrapper '' techniques that require a learning algorithm to evaluate the candidate feature subsets . in this paper , we propose a `` filter '' method for feature selection which is independent of any learning algorithm . our method can be performed in either supervised or unsupervised fashion . the proposed method is based on the observation that , in many real world classification problems , data from the same class are often close to each other . the importance of a feature is evaluated by its power of locality preserving , or , laplacian score . we compare our method with data variance ( unsupervised ) and fisher score ( supervised ) on two data sets . 
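before the final remark of this abstract , here is how the laplacian score criterion can be computed in a few lines : build a knn heat-kernel graph , then score each feature by how smoothly it varies over the graph ( lower is better ) . the graph-construction choices below ( knn , sigma ) are the usual illustrative defaults .

```python
import numpy as np

def laplacian_score(X, knn=5, sigma=1.0):
    """Score each feature by its locality-preserving power on a kNN
    heat-kernel graph; lower scores indicate better features."""
    n = len(X)
    d2 = ((X[:, None] - X[None]) ** 2).sum(-1)
    nn = np.argsort(d2, 1)[:, 1:knn + 1]
    W = np.zeros((n, n))
    for i, js in enumerate(nn):
        W[i, js] = np.exp(-d2[i, js] / sigma)
    W = np.maximum(W, W.T)
    d = W.sum(1)
    L = np.diag(d) - W                        # graph Laplacian
    scores = []
    for r in range(X.shape[1]):
        f = X[:, r]
        f = f - (f @ d) / d.sum()             # remove the trivial component
        scores.append((f @ L @ f) / (f @ (d * f) + 1e-12))
    return np.array(scores)
```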
experimental results demonstrate the effectiveness and efficiency of our algorithm . story_separator_special_tag both interclass variances and intraclass similarities are crucial for improving the classification performance of discriminative dictionary learning ( ddl ) algorithms . however , existing ddl methods often ignore the combination between the interclass and intraclass properties of dictionary atoms and coding coefficients . to address this problem , in this paper , we propose a discriminative fisher embedding dictionary learning ( dfedl ) algorithm that simultaneously establishes fisher embedding models on learned atoms and coefficients . specifically , we first construct a discriminative fisher atom embedding model by exploring the fisher criterion of the atoms , which encourages the atoms of the same class to reconstruct the corresponding training samples as much as possible . at the same time , a discriminative fisher coefficient embedding model is formulated by imposing the fisher criterion on the profiles ( row vectors of the coding coefficient matrix ) and coding coefficients , which forces the coding coefficient matrix to become a block-diagonal matrix . since the profiles can indicate which training samples are represented by the corresponding atoms , the proposed two discriminative fisher embedding models can alternatively and interactively promote the discriminative capabilities of the learned dictionary and coding coefficients . story_separator_special_tag in this paper , we present a novel local sensitive dual concept learning ( lsdcl ) method for the task of unsupervised feature selection . we first reconstruct the original data matrix by the proposed dual concept learning model , which inherits the merit of the co-clustering based dual learning mechanism for more interpretable and compact data reconstruction . we then adopt the local sensitive loss function , which emphasizes the most similar pairs with small errors to better characterize the local structure of data . in this way , our method can select features with better clustering results by more compact data reconstruction and more faithful local structure preserving . an iterative algorithm with convergence guarantee is also developed to find the optimal solution . we fully investigate the performance improvement by the newly developed terms , individually and simultaneously . extensive experiments on benchmark datasets further show that lsdcl outperforms many state-of-the-art unsupervised feature selection algorithms . story_separator_special_tag growth of the internet and web applications has led to a vast amount of information on the web . information filtering systems such as recommenders have become potential tools to deal with such a plethora of information and help users select relevant information . collaborative filtering is the popular approach to recommendation systems . collaborative filtering works on the fact that users with similar behavior will have similar interests in the future , and using this notion collaborative filtering recommends items to users . however , the sparseness in data and high dimensionality have become a challenge . to resolve such issues , model-based matrix factorization techniques have emerged . these techniques have evolved from using simple user-item rating information to auxiliary information such as time and trust . in this paper , we present a comprehensive review on such matrix factorization techniques and their usage in recommenders .
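the basic building block behind the techniques reviewed above is easy to state in code : factor the rating matrix as r_ui ~ p_u . q_i and fit the factors by sgd over the observed entries only . a minimal sketch with toy ratings follows ; the hyperparameters are illustrative .

```python
import numpy as np

def mf_sgd(ratings, n_users, n_items, k=8, lr=0.01, reg=0.05, epochs=50):
    """Plain matrix factorization trained by SGD on observed
    (user, item, rating) triples, with l2 regularization."""
    rng = np.random.default_rng(0)
    P = 0.1 * rng.normal(size=(n_users, k))
    Q = 0.1 * rng.normal(size=(n_items, k))
    for _ in range(epochs):
        for u, i, r in ratings:
            e = r - P[u] @ Q[i]                   # prediction error
            P[u] += lr * (e * Q[i] - reg * P[u])
            Q[i] += lr * (e * P[u] - reg * Q[i])
    return P, Q

ratings = [(0, 0, 5.0), (0, 1, 3.0), (1, 0, 4.0), (2, 1, 1.0)]
P, Q = mf_sgd(ratings, n_users=3, n_items=2)
print(P @ Q.T)   # reconstructed rating matrix (unobserved cells = predictions)
```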
story_separator_special_tag the fundamentals and properties of non-negative matrix factorization ( nmf ) are introduced , and available nmf algorithms are classified into two categories : basic nmf model-based algorithms and improved nmf model-based algorithms . based on these , the design principles , application characteristics , and existing problems of the algorithms are systematically discussed . besides , some open problems in the development of nmf algorithms are presented and analyzed . story_separator_special_tag data clustering is a fundamental problem in the field of machine learning . among the numerous clustering techniques , matrix factorization-based methods have achieved impressive performances because they are able to provide a compact and interpretable representation of the input data . however , most of the existing works assume that each class has a global centroid , which does not hold for data with complicated structures . besides , they can not guarantee that the sample is associated with the nearest centroid . in this work , we present a concept factorization with the local centroids ( cflc ) approach for data clustering . the proposed model has the following advantages : 1 ) the samples from the same class are allowed to connect with multiple local centroids such that the manifold structure is captured ; 2 ) the pairwise relationship between the samples and centroids is modeled to produce a reasonable label assignment ; and 3 ) the clustering problem is formulated as a bipartite graph partitioning task , and an efficient algorithm is designed for optimization . experiments on several data sets validate the effectiveness of the cflc model and demonstrate its superior performance over the state story_separator_special_tag an image database for handwritten text recognition research is described . digital images of approximately 5000 city names , 5000 state names , 10000 zip codes , and 50000 alphanumeric characters are included . each image was scanned from mail in a working post office at 300 pixels/in in 8-bit gray scale on a high-quality flat bed digitizer . the data were unconstrained for the writer , style , and method of preparation . these characteristics help overcome the limitations of earlier databases that contained only isolated characters or were prepared in a laboratory setting under prescribed circumstances . also , the database is divided into explicit training and testing sets to facilitate the sharing of results among researchers as well as performance comparisons . story_separator_special_tag we present fashion-mnist , a new dataset comprising 28x28 grayscale images of 70,000 fashion products from 10 categories , with 7,000 images per category . the training set has 60,000 images and the test set has 10,000 images . fashion-mnist is intended to serve as a direct drop-in replacement for the original mnist dataset for benchmarking machine learning algorithms , as it shares the same image size , data format and the structure of training and testing splits . the dataset is freely available at this https url story_separator_special_tag the trace ratio criterion based lda method is utilized for fault diagnosis of rolling element bearings . tr-lda is also extended to handle the nonlinear datasets confronted in real-world fault diagnosis . we evaluate the proposed method by visualizing and classifying the rolling element bearing fault data . simulation results show the superiority of the method in fault diagnosis of rolling element bearings .
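stepping back to the basic nmf model mentioned in the survey above , the classic lee-seung multiplicative updates for the frobenius objective are the standard reference point ; the sketch below is a generic textbook version , not tied to any particular variant discussed here .

```python
import numpy as np

def nmf(V, k=10, iters=200, eps=1e-9):
    """Lee-Seung multiplicative updates for V ~ W @ H under the
    Frobenius objective; factors stay nonnegative by construction."""
    rng = np.random.default_rng(0)
    n, m = V.shape
    W = rng.random((n, k))
    H = rng.random((k, m))
    for _ in range(iters):
        H *= (W.T @ V) / (W.T @ W @ H + eps)   # update coefficients
        W *= (V @ H.T) / (W @ H @ H.T + eps)   # update basis
    return W, H

V = np.abs(np.random.default_rng(1).normal(size=(40, 60)))
W, H = nmf(V, k=5)
print(np.linalg.norm(V - W @ H) / np.linalg.norm(V))  # relative residual
```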
rolling element bearings play an important role in ensuring the availability of industrial machines . unexpected bearing failures in such machines during field operation can lead to machine breakdown , which may have some pretty severe implications . to address such concerns , we extend our algorithm for solving the trace ratio problem in linear discriminant analysis to diagnose faulty bearings in this paper . our algorithm is validated by comparison with other state-of-the-art methods based on a uci data set , and then extended to rolling element bearing data . through the construction of a feature data set from sensor-based vibration signals of the bearing , the fault diagnosis problem is solved as a pattern classification and recognition task . the two-dimensional visualization and classification accuracy of bearing data show that our algorithm is able to recognize different bearing fault categories effectively story_separator_special_tag control charts as used in statistical process control can exhibit six principal types of patterns : normal , cyclic , increasing trend , decreasing trend , upward shift and downward shift . apart from normal patterns , all the other patterns indicate abnormalities in the process that must be corrected . accurate and speedy detection of such patterns is important to achieving tight control of the process and ensuring good product quality . this paper describes a new type of neural network for control chart pattern recognition . the neural network is self-organizing and can learn to recognize new patterns in an on-line incremental manner . the key feature of the proposed neural network is the criterion employed to select the firing neuron , i.e . the neuron indicating the pattern class . the paper gives a comparison of the results obtained using the proposed network and those for other self-organizing networks employing a different firing criterion . story_separator_special_tag this paper presents a general inductive graph representation learning framework called deepgl for learning deep node and edge features that generalize across networks . in particular , deepgl begins by deriving a set of base features from the graph ( e.g. , graphlet features ) and automatically learns a multi-layered hierarchical graph representation where each successive layer leverages the output from the previous layer to learn features of a higher order . contrary to previous work , deepgl learns relational functions ( each representing a feature ) that naturally generalize across networks and are therefore useful for graph-based transfer learning tasks . moreover , deepgl naturally supports attributed graphs , learns interpretable inductive graph representations , and is space-efficient ( by learning sparse feature vectors ) . in addition , deepgl is expressive , flexible with many interchangeable components , efficient with a time complexity of $\mathcal{o}(|e|)$ , and scalable for large networks via an efficient parallel implementation . story_separator_special_tag machine learning on graphs is an important and ubiquitous task with applications ranging from drug design to friendship recommendation in social networks . the primary challenge in this domain is finding a way to represent , or encode , graph structure so that it can be easily exploited by machine learning models .
traditionally , machine learning approaches relied on user-defined heuristics to extract features encoding structural information about a graph ( e.g. , degree statistics or kernel functions ) . however , recent years have seen a surge in approaches that automatically learn to encode graph structure into low-dimensional embeddings , using techniques based on deep learning and nonlinear dimensionality reduction . here we provide a conceptual review of key advancements in this area of representation learning on graphs , including matrix factorization-based methods , random-walk based algorithms , and graph neural networks . we review methods to embed individual nodes as well as approaches to embed entire ( sub ) graphs . in doing so , we develop a unified framework to describe these recent approaches , and we highlight a number of important applications and directions for future work . story_separator_special_tag topic modeling has been a key problem for document analysis . one of the canonical approaches for topic modeling is probabilistic latent semantic indexing , which maximizes the joint probability of documents and terms in the corpus . the major disadvantage of plsi is that it estimates the probability distribution of each document on the hidden topics independently and the number of parameters in the model grows linearly with the size of the corpus , which leads to serious problems with overfitting . latent dirichlet allocation ( lda ) is proposed to overcome this problem by treating the probability distribution of each document over topics as a hidden random variable . both of these two methods discover the hidden topics in the euclidean space . however , there is no convincing evidence that the document space is euclidean , or flat . therefore , it is more natural and reasonable to assume that the document space is a manifold , either linear or nonlinear . in this paper , we consider the problem of topic modeling on intrinsic document manifold . specifically , we propose a novel algorithm called laplacian probabilistic latent semantic indexing ( lapplsi ) for topic modeling story_separator_special_tag clustering is a long-standing important research problem , however , remains challenging when handling large-scale image data from diverse sources . in this paper , we present a novel binary multi-view clustering ( bmvc ) framework , which can dexterously manipulate multi-view image data and easily scale to large data . to achieve this goal , we formulate bmvc by two key components : compact collaborative discrete representation learning and binary clustering structure learning , in a joint learning framework . specifically , bmvc collaboratively encodes the multi-view image descriptors into a compact common binary code space by considering their complementary information ; the collaborative binary representations are meanwhile clustered by a binary matrix factorization model , such that the cluster structures are optimized in the hamming space by pure , extremely fast bit-operations . for efficiency , the code balance constraints are imposed on both binary data representations and cluster centroids . finally , the resulting optimization problem is solved by an alternating optimization scheme with guaranteed fast convergence . 
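as a pointer back to the random-walk based embedding algorithms covered in the review above : the corpus-generation step common to deepwalk-style methods is just uniform random walks over the adjacency structure , whose node sequences are then fed to any skip-gram implementation . a minimal numpy sketch of the walk generator ( walk length and count are illustrative choices ) .

```python
import numpy as np

def random_walks(adj, walk_len=10, walks_per_node=5, rng=None):
    """DeepWalk-style corpus generation: uniform random walks over a
    dense 0/1 adjacency matrix; the walks then play the role of
    'sentences' for a skip-gram (word2vec-style) embedding model."""
    rng = rng or np.random.default_rng(0)
    n = len(adj)
    walks = []
    for _ in range(walks_per_node):
        for start in range(n):
            walk = [start]
            for _ in range(walk_len - 1):
                nbrs = np.flatnonzero(adj[walk[-1]])
                if len(nbrs) == 0:         # dead end: stop this walk
                    break
                walk.append(int(rng.choice(nbrs)))
            walks.append(walk)
    return walks

adj = np.array([[0, 1, 1], [1, 0, 0], [1, 0, 0]])
print(random_walks(adj, walk_len=4, walks_per_node=1))
```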
extensive experiments on four large-scale multi-view image datasets demonstrate that the proposed bmvc method enjoys a significant reduction in both computation and memory footprint , while observing superior ( in story_separator_special_tag clustering with incomplete views is a challenge in multi-view clustering . in this paper , we provide a novel and simple method to address this issue . specifically , the proposed method simultaneously exploits the local information of each view and the complementary information among views to learn the common latent representation for all samples , which can greatly improve the compactness and discriminability of the obtained representation . compared with the conventional graph embedding methods , the proposed method does not introduce any extra regularization term and corresponding penalty parameter to preserve the local structure of data , and thus does not increase the burden of extra parameter selection . by imposing the orthogonal constraint on the basis matrix of each view , the proposed method is able to handle out-of-sample data . moreover , the proposed method can be viewed as a unified framework for multi-view learning since it can handle both incomplete and complete multi-view clustering and classification tasks . extensive experiments conducted on several multi-view datasets prove that the proposed method can significantly improve the clustering performance . story_separator_special_tag in recent years , incomplete multi-view clustering , which studies the challenging multi-view clustering problem on missing views , has received growing research interest . although a series of methods have been proposed to address this issue , the following problems still exist : 1 ) almost all of the existing methods are based on shallow models , which makes it difficult to obtain discriminative common representations . 2 ) these methods are generally sensitive to noise or outliers since the negative samples are treated the same as the important samples . in this paper , we propose a novel incomplete multi-view clustering network , called cognitive deep incomplete multi-view clustering network ( cdimc-net ) , to address these issues . specifically , it captures the high-level features and local structure of each view by incorporating the view-specific deep encoders and graph embedding strategy into a framework . moreover , based on human cognition , i.e. , learning from easy to hard , it introduces a self-paced strategy to select the most confident samples for model training , which can reduce the negative influence of outliers . experimental results on several incomplete datasets show that cdimc-net outperforms the state-of-the-art incomplete multi-view
nitrogen ( n ) is one of the most important limiting nutrients for sugarcane production . conventionally , sugarcane n concentration is examined using direct methods such as collecting leaf samples from the field followed by analytical assays in the laboratory . these methods do not offer real-time , quick , and non-destructive strategies for estimating sugarcane n concentration . methods that take advantage of remote sensing , particularly hyperspectral data , can present reliable techniques for predicting sugarcane leaf n concentration . hyperspectral data are extremely large and of high dimensionality . many hyperspectral features are redundant due to the strong correlation between wavebands that are adjacent . hence , the analysis of hyperspectral data is complex and needs to be simplified by selecting the most relevant spectral features . the aim of this study was to explore the potential of a random forest ( rf ) regression algorithm for selecting spectral features in hyperspectral data necessary for predicting sugarcane leaf n concentration . to achieve this , two hyperion images were captured from fields of 6-7 month-old sugarcane , variety n19 . the machine-learning rf algorithm was used as a feature-selection and regression method to analyse the story_separator_special_tag folivorous primate biomass has been shown to positively correlate with the average protein-to-fiber ratio in mature leaves of tropical forests . however , studies have failed to explain the mismatch between dietary selection and the role of the protein-to-fiber ratio on primate biomass ; why do folivores not always favor mature leaves or leaves with the highest protein-to-fiber ratio ? we examined the effect of leaf chemical characteristics and plant abundance ( using transect censuses ; 0.37 ha , 233 trees ) on food choices and nutrient/toxin consumption in a folivorous lemur ( propithecus verreauxi ) in a gallery forest in southern madagascar . to assess the nutritional quality of the habitat , we calculated an abundance-weighted chemical index for each chemical variable . food intake was quantified using a continuous count of mouthfuls during individual full-day follows across three seasons . we found a significant positive correlation between food ranking in the diet and plant abundance . the protein-to-fiber ratio and most other chemical variables tested had no statistical effect on dietary selection . numerous chemical characteristics of the sifaka 's diet were essentially by-products of generalist feeding and a `` low energy input/low energy crop '' strategy . the story_separator_special_tag accurate estimates of papyrus ( cyperus papyrus ) biomass are critical for an efficient papyrus swamp monitoring and management system . the objective of this study was to test the utility of random forest ( rf ) regression and two narrow-band vegetation indices in estimating above-ground biomass ( agb ) for complex and densely vegetated swamp canopies . the normalized difference vegetation index ( ndvi ) and enhanced vegetation index ( evi ) were calculated from field spectrometry data and fresh agb was measured in 82 quadrats at three different areas in the isimangaliso wetland park , south africa . ndvi was calculated from all possible band combinations of the electromagnetic spectrum between 350 and 2500 nm , while evi was calculated from possible band combinations in the blue , red , and near infrared of the spectrum .
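an aside linking the two hyperspectral studies here : both lean on random forest regression for waveband selection , and with scikit-learn the core pattern is fitting a forest and reading feature_importances_ . the sketch below uses synthetic reflectance in place of the real imagery ; the band indices and fake response variable are illustrative .

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
X = rng.random((120, 200))            # 120 samples x 200 wavebands (synthetic)
y = 3 * X[:, 40] - 2 * X[:, 120] + 0.1 * rng.normal(size=120)  # fake leaf N

rf = RandomForestRegressor(n_estimators=500, random_state=0)
rf.fit(X, y)

# rank wavebands by impurity-based importance; the informative
# synthetic bands (40 and 120) should surface near the top
top_bands = np.argsort(rf.feature_importances_)[::-1][:10]
print("most informative bands:", top_bands)
```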
backward feature elimination and rf regression were used as variable selection and modelling techniques to predict papyrus agb . results showed that the effective portions of the electromagnetic spectrum for estimating agb of papyrus swamp were located within the blue , red , red-edge , and near-infrared regions . the three best selected evis were computed from bands located at ( i ) 445 , 682 , and 829 nm , ( ii ) 497 , 676 , and story_separator_special_tag space technology has become increasingly important after the great development and rapid progress in information and communication technology as well as the technology of space exploration . this book deals with the latest and most prominent research in space technology . the first part of the book ( first six chapters ) deals with the algorithms and software used in information processing , communications and control of spacecraft . the story_separator_special_tag this paper presents an unsupervised algorithm for nonlinear unmixing of hyperspectral images . the proposed model assumes that the pixel reflectances result from a nonlinear function of the abundance vectors associated with the pure spectral components . we assume that the spectral signatures of the pure components and the nonlinear function are unknown . the first step of the proposed method estimates the abundance vectors for all the image pixels using a bayesian approach and a gaussian process latent variable model for the nonlinear function ( relating the abundance vectors to the observations ) . the endmembers are subsequently estimated using gaussian process regression . the performance of the unmixing strategy is first evaluated on synthetic data . the proposed method provides accurate abundance and endmember estimations when compared to other linear and nonlinear unmixing strategies . an interesting property is its robustness to the absence of pure pixels in the image . the analysis of a real hyperspectral image shows results that are in good agreement with state of the art unmixing strategies and with a recent classification method . story_separator_special_tag this paper presents a nonlinear mixing model for hyperspectral image unmixing . the proposed model assumes that the pixel reflectances are post-nonlinear functions of unknown pure spectral components contaminated by an additive white gaussian noise . these nonlinear functions are approximated using polynomials leading to a polynomial post-nonlinear mixing model . a bayesian algorithm is proposed to estimate the parameters involved in the model yielding an unsupervised nonlinear unmixing algorithm . due to the large number of parameters to be estimated , an efficient hamiltonian monte carlo algorithm is investigated .
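the hmc machinery invoked here rests on the leapfrog integrator ( whose constrained modification is discussed next ) . below is the unconstrained textbook version in numpy , with a toy gaussian target as the example ; the step size and step count are illustrative .

```python
import numpy as np

def leapfrog(theta, p, grad_logpost, step, n_steps):
    """Standard leapfrog integration of Hamiltonian dynamics.
    grad_logpost returns the gradient of the log-posterior."""
    p = p + 0.5 * step * grad_logpost(theta)      # half-step momentum
    for _ in range(n_steps - 1):
        theta = theta + step * p                  # full-step position
        p = p + step * grad_logpost(theta)        # full-step momentum
    theta = theta + step * p
    p = p + 0.5 * step * grad_logpost(theta)      # final half step
    return theta, p

# toy example: standard normal target, log pi(x) = -x^2 / 2
grad = lambda x: -x
theta, p = leapfrog(np.array([2.0]), np.array([-1.0]), grad, 0.1, 20)
print(theta, p)
```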
the classical leapfrog steps of the hmc algorithm are modified to handle the parameter constraints . the performance of the unmixing strategy , including convergence and parameter tuning , is first evaluated on synthetic data . simulations conducted with real data finally show the accuracy of the proposed unmixing strategy for the analysis of hyperspectral images . story_separator_special_tag this paper presents a nonlinear mixing model for hyperspectral image unmixing . the proposed model assumes that the pixel reflectances are nonlinear functions of pure spectral components contaminated by an additive white gaussian noise . these nonlinear functions are approximated using polynomial functions leading to a polynomial post-nonlinear mixing model . a bayesian algorithm and optimization methods are proposed to estimate the parameters involved in the model . the performance of the unmixing strategies is evaluated by simulations conducted on synthetic and real data . story_separator_special_tag this paper presents a new bayesian model and algorithm for nonlinear unmixing of hyperspectral images . the model proposed represents the pixel reflectances as linear combinations of the endmembers , corrupted by nonlinear ( with respect to the endmembers ) terms and additive gaussian noise . prior knowledge about the problem is embedded in a hierarchical model that describes the dependence structure between the model parameters and their constraints . in particular , a gamma markov random field is used to model the joint distribution of the nonlinear terms , which are expected to exhibit significant spatial correlations . an adaptive markov chain monte carlo algorithm is then proposed to compute the bayesian estimates of interest and perform bayesian inference . this algorithm is equipped with a stochastic optimisation adaptation mechanism that automatically adjusts the parameters of the gamma markov random field by maximum marginal likelihood estimation . finally , the proposed methodology is demonstrated through a series of experiments with comparisons using synthetic and real data and with competing state-of-the-art approaches . story_separator_special_tag this paper presents a nonlinear mixing model for joint hyperspectral image unmixing and nonlinearity detection . the proposed model assumes that the pixel reflectances are linear combinations of known pure spectral components corrupted by an additional nonlinear term affecting the endmembers , and contaminated by additive gaussian noise . a markov random field is considered for nonlinearity detection based on the spatial structure of the nonlinear terms . the observed image is segmented into regions where nonlinear terms , if present , share similar statistical properties . a bayesian algorithm is proposed to estimate the parameters involved in the model yielding a joint nonlinear unmixing and nonlinearity detection algorithm . the performance of the proposed strategy is first evaluated on synthetic data . simulations conducted with real data show the accuracy of the proposed unmixing and nonlinearity detection strategy for the analysis of hyperspectral images . story_separator_special_tag the purpose of this introductory paper is threefold . first , it introduces the monte carlo method with emphasis on probabilistic machine learning . second , it reviews the main building blocks of modern markov chain monte carlo simulation , thereby providing an introduction to the remaining papers of this special issue .
lastly , it discusses interesting new research horizons . story_separator_special_tag developing spectral models of soil properties is an important frontier in remote sensing and soil science . several studies have focused on modeling soil properties such as total pools of soil organic matter and carbon in bare soils . we extended this effort to model soil parameters in areas densely covered with coastal vegetation . moreover , we investigated soil properties indicative of soil functions such as nutrient and organic matter turnover and storage . these properties include the partitioning of mineral and organic soil between particulate ( > 53 µm ) and fine size classes , and the partitioning of soil carbon and nitrogen pools between stable and labile fractions . soil samples were obtained from avicennia germinans mangrove forest and juncus roemerianus salt marsh plots on the west coast of central florida . spectra corresponding to field plot locations from a hyperion hyperspectral image were extracted and analyzed . the spectral information was regressed against the soil variables to determine the best single bands and optimal band combinations for the simple ratio ( sr ) and normalized difference index ( ndi ) indices . the regression analysis yielded levels of correlation for soil variables with r2 values ranging story_separator_special_tag effective spatial-spectral pixel description is of crucial significance for the classification of hyperspectral remote sensing images . attribute profiles are considered as one of the most prominent approaches in this regard , since they can capture efficiently arbitrary geometric and spectral properties . lately though , the advent of deep learning in its various forms has also led to remarkable classification performances by operating directly on hyperspectral input . in this letter , we explore the collaboration potential of these two powerful feature extraction approaches . specifically , we propose a new strategy for hyperspectral image classification , where attribute filtered images are stacked and provided as input to convolutional neural networks . our experiments with two real hyperspectral remote sensing data sets show that the proposed strategy leads to a performance improvement , as opposed to using each of the involved approaches individually . story_separator_special_tag the k-means method is a widely used clustering technique that seeks to minimize the average squared distance between points in the same cluster . although it offers no accuracy guarantees , its simplicity and speed are very appealing in practice . by augmenting k-means with a very simple , randomized seeding technique , we obtain an algorithm that is o ( log k ) -competitive with the optimal clustering . preliminary experiments show that our augmentation improves both the speed and the accuracy of k-means , often quite dramatically . story_separator_special_tag analyses of various biophysical and biochemical factors affecting plant canopy reflectance have been carried out over the past few decades , yet the relative importance of these factors has not been adequately addressed . a combination of field and modeling techniques was used to quantify the relative contribution of leaf , stem , and litter optical properties ( incorporating known variation in foliar biochemical properties ) and canopy structural attributes to nadir-viewed vegetation reflectance data . variability in tissue optical properties was wavelength-dependent .
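a brief aside before the canopy results continue : the k-means++ seeding described above is tiny in code , since each new center is sampled with probability proportional to the squared distance to the nearest center already chosen . a numpy sketch follows .

```python
import numpy as np

def kmeanspp_init(X, k, rng=None):
    """k-means++ seeding: each new center is drawn with probability
    proportional to the squared distance to the nearest chosen center."""
    rng = rng or np.random.default_rng(0)
    centers = [X[rng.integers(len(X))]]           # first center: uniform
    for _ in range(k - 1):
        d2 = np.min([((X - c) ** 2).sum(1) for c in centers], axis=0)
        probs = d2 / d2.sum()                     # D^2 weighting
        centers.append(X[rng.choice(len(X), p=probs)])
    return np.array(centers)

X = np.random.default_rng(1).random((200, 2))
print(kmeanspp_init(X, k=4))                      # seeds for standard k-means
```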
for green foliage , the lowest variation was in the visible ( vis ) spectral region and the highest in the near-infrared ( nir ) . for standing litter material , minimum variation occurred in the vis/nir , while the largest differences were observed in the shortwave-ir ( swir ) . woody stem material showed opposite trends , with lowest variation in the swir and highest in the nir . leaf area index ( lai ) and leaf angle distribution ( lad ) were the dominant controls on canopy reflectance data with the exception of soil reflectance and vegetation cover in sparse canopies . leaf optical properties ( and thus foliar chemistry ) were expressed story_separator_special_tag while hyperspectral data are very rich in information , processing the hyperspectral data poses several challenges regarding computational requirements , information redundancy removal , relevant information identification , and modeling accuracy . in this paper we present a new methodology for combining unsupervised and supervised methods under classification accuracy and computational requirement constraints that is designed to perform hyperspectral band ( wavelength range ) selection and statistical modeling method selection . the band and method selections are utilized for prediction of continuous ground variables using airborne hyperspectral measurements . the novelty of the proposed work is in combining strengths of unsupervised and supervised band selection methods to build a computationally efficient and accurate band selection system . the unsupervised methods are used to rank hyperspectral bands while the accuracy of the predictions of supervised methods are used to score those rankings . we conducted experiments with seven unsupervised and three supervised methods . the list of unsupervised methods includes information entropy , first and second spectral derivative , spatial contrast , spectral ratio , correlation , and principal component analysis ranking combined with regression , regression tree , and instance-based supervised methods . these methods were applied to a data story_separator_special_tag airborne remote sensing has an important role to play in mapping and monitoring biodiversity over large spatial scales . techniques for applying this technology to biodiversity mapping have focused on remote species identification of individual crowns ; however , this requires collection of a large number of crowns to train a classifier , which may limit the usefulness of this approach in many study regions . based on the premise that the spectral variation among sites is related to their ecological dissimilarity , we asked whether it is possible to estimate the beta diversity , or turnover in species composition , among sites without the use of training data . we evaluated alternative methods using simulated communities constructed from the spectra of field-identified tree and shrub crowns from an african savanna . a method based on the k-means clustering of crown spectra produced beta diversity estimates ( measured as bray-curtis dissimilarity ) among sites with an average pairwise correlation of ~0.5 with the true beta diversity , compared to an average correlation of ~0.8 obtained by a supervised species classification approach . when applied to savanna landscapes , the unsupervised clustering method produced beta diversity estimates similar to those obtained story_separator_special_tag ultimagoldtm ab and optiphasetrisafe are two liquid scintillators made by perkin elmer and eg & g company respectively . 
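an aside making the unsupervised beta-diversity recipe above concrete : cluster crown spectra with k-means , treat clusters as pseudo-species , tabulate per-site composition , and compare sites with bray-curtis dissimilarity . the sketch below uses synthetic spectra , and the cluster count and site assignment are illustrative .

```python
import numpy as np
from scipy.spatial.distance import braycurtis
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
spectra = rng.random((300, 50))          # 300 crowns x 50 bands (synthetic)
site = rng.integers(0, 3, size=300)      # which of 3 sites each crown is in

labels = KMeans(n_clusters=10, n_init=10, random_state=0).fit_predict(spectra)
comp = np.zeros((3, 10))
for s, c in zip(site, labels):           # per-site pseudo-species counts
    comp[s, c] += 1

print(braycurtis(comp[0], comp[1]))      # estimated turnover between sites
```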
both are commercially promoted as scintillation detectors for α and β particles . in this work , the responses to γ-rays and neutrons of ultimagoldtm ab and optiphasetrisafe liquid scintillators , without and with reflector , have been measured aiming to use these scintillators as γ-ray and neutron detectors . responses to γ-rays and neutrons were measured as pulse shape spectra in a multichannel analyzer . scintillators were exposed to gamma rays produced by 137cs , 54mn , 22na and 60co sources . the response to neutrons was obtained with a 241ambe neutron source that was measured to 25 and 50 cm from the scintillators . the pulse height spectra due to gamma rays are shifted to larger channels as the photon energy increases and these responses are different from the response due to neutrons . thus , ultimagoldtm ab and optiphasetrisafe can be used to detect γ-rays and neutrons . story_separator_special_tag abstract : isodata , a novel method of data analysis and pattern classification , is described in verbal and pictorial terms , in terms of a two-dimensional example , and by giving the mathematical calculations that the method uses . the technique clusters many-variable data around points in the data 's original high-dimensional space and by doing so provides a useful description of the data . a brief summary of results from analyzing alphanumeric , gaussian , sociological and meteorological data is given . in the appendix , generalizations of the existing technique to clustering around lines and planes are discussed and a tentative algorithm for clustering around lines is given . story_separator_special_tag this paper analyzes the classification of hyperspectral remote sensing images with linear discriminant analysis ( lda ) in the presence of a small ratio between the number of training samples and the number of spectral features . in these particular ill-posed problems , a reliable lda requires one to introduce regularization for problem solving . nonetheless , in such a challenging scenario , the resulting regularized lda ( rlda ) is highly sensitive to the tuning of the regularization parameter . in this context , we introduce in the remote sensing community an efficient version of the rlda recently presented by ye to cope with critical ill-posed problems . in addition , several lda-based classifiers ( i.e. , penalized lda , orthogonal lda , and uncorrelated lda ) are compared theoretically and experimentally with the standard lda and the rlda . method differences are highlighted through toy examples and are exhaustively tested on several ill-posed problems related to the classification of hyperspectral remote sensing images . experimental results confirm the effectiveness of the presented rlda technique and point out the main properties of other analyzed lda techniques in critical ill-posed hyperspectral image classification problems . story_separator_special_tag this paper presents a method for anomaly detection in hyperspectral images based on the support vector data description ( svdd ) , a kernel method for modeling the support of a distribution . conventional anomaly-detection algorithms are based upon the popular reed-xiaoli detector .
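the reed-xiaoli detector just mentioned scores each pixel by its mahalanobis distance to a gaussian background model . a minimal global-background sketch follows ; the cited work uses local backgrounds and then replaces the gaussian assumption with the svdd , neither of which is reproduced here .

    import numpy as np

    def rx_scores(cube, eps=1e-6):
        # squared mahalanobis distance of every pixel spectrum to the scene
        # mean under a single gaussian background; returns a (rows, cols) map.
        X = cube.reshape(-1, cube.shape[-1]).astype(float)
        mu = X.mean(axis=0)
        cov = np.cov(X, rowvar=False) + eps * np.eye(X.shape[1])  # regularized
        diff = X - mu
        scores = np.einsum('ij,jk,ik->i', diff, np.linalg.inv(cov), diff)
        return scores.reshape(cube.shape[:2])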
however , these algorithms typically suffer from large numbers of false alarms due to the assumptions that the local background is gaussian and homogeneous . in practice , these assumptions are often violated , especially when the neighborhood of a pixel contains multiple types of terrain . to remove these assumptions , a novel anomaly detector that incorporates a nonparametric background model based on the svdd is derived . expanding on prior svdd work , a geometric interpretation of the svdd is used to propose a decision rule that utilizes a new test statistic and shares some of the properties of constant false-alarm rate detectors . using receiver operating characteristic curves , the authors report results that demonstrate the improved performance and reduction in the false-alarm rate when using the svdd-based detector on wide-area airborne mine detection ( waamd ) and hyperspectral digital imagery collection experiment ( hydice ) imagery story_separator_special_tag recent remote sensing literature has shown that support vector machine ( svm ) methods generally outperform traditional statistical and neural methods in classification problems involving hyperspectral images . however , there are still open issues that , if suitably addressed , could allow further improvement of their performances in terms of classification accuracy . two especially critical issues are : 1 ) the determination of the most appropriate feature subspace where to carry out the classification task and 2 ) model selection . in this paper , these two issues are addressed through a classification system that optimizes the svm classifier accuracy for this kind of imagery . this system is based on a genetic optimization framework formulated in such a way as to detect the best discriminative features without requiring the a priori setting of their number by the user and to estimate the best svm parameters ( i.e. , regularization and kernel parameters ) in a completely automatic way . for these purposes , it exploits fitness criteria intrinsically related to the generalization capabilities of svm classifiers . in particular , two criteria are explored , namely : 1 ) the simple support vector count and 2 ) story_separator_special_tag in this paper , a novel semisupervised regression approach is proposed to tackle the problem of biophysical parameter estimation that is constrained by a limited availability of training ( labeled ) samples . the main objective of this approach is to increase the accuracy of the estimation process based on the support vector machine ( svm ) technique by exploiting unlabeled samples that are available from the image under analysis at zero cost . the integration of such samples in the regression process is controlled through a particle swarm optimization ( pso ) framework that is defined by considering separately or jointly two different optimization criteria , thus leading to the implementation of three different inflation strategies . these two criteria are empirical and structural expressions of the generalization capability of the resulting semisupervised pso-svm regression system . the conducted experiments were focused on the problem of estimating chlorophyll concentrations in coastal waters from multispectral remote sensing images . 
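as a simple point of reference for the svm model-selection problem discussed in the genetic-optimization abstract above , here is a plain cross-validated grid search over the same regularization and rbf kernel parameters ; it is a baseline stand-in on synthetic data , not the genetic method of the paper .

    import numpy as np
    from sklearn.model_selection import GridSearchCV
    from sklearn.svm import SVC

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 50))                 # stand-in: 200 pixels x 50 bands
    y = (X[:, 0] + X[:, 1] > 0).astype(int)        # synthetic two-class labels

    grid = {'C': [1, 10, 100, 1000], 'gamma': [1e-3, 1e-2, 1e-1, 1]}
    search = GridSearchCV(SVC(kernel='rbf'), grid, cv=5).fit(X, y)
    print(search.best_params_, round(search.best_score_, 3))

the genetic framework in the abstract replaces this exhaustive search with fitness criteria such as the support vector count , and also selects the feature subspace jointly .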
in particular , we report and discuss results of experiments that are designed in such a way as to test the proposed approach in terms of : 1 ) capability to capture useful information from a set of unlabeled samples for story_separator_special_tag gaussian processes ( gps ) represent a powerful and interesting theoretical framework for bayesian classification . despite having gained prominence in recent years , they remain an approach whose potentialities are not yet sufficiently known . in this paper , we propose a thorough investigation of the gp approach for classifying multisource and hyperspectral remote sensing images . to this end , we explore two analytical approximation methods for gp classification , namely , the laplace and expectation-propagation methods , which are implemented with two different covariance functions , i.e. , the squared exponential and neural-network covariance functions . moreover , we analyze how the computational burden of gp classifiers ( gpcs ) can be drastically reduced without significant losses in terms of discrimination power through a fast sparse-approximation method like the informative vector machine . experiments were designed aiming also at testing the sensitivity of gpcs to the number of training samples and to the curse of dimensionality . in general , the obtained classification results show clearly that the gpc can compete seriously with the state-of-the-art support vector machine classifier . story_separator_special_tag abstract a random forest ( rf ) classifier is an ensemble classifier that produces multiple decision trees , using a randomly selected subset of training samples and variables . this classifier has become popular within the remote sensing community due to the accuracy of its classifications . the overall objective of this work was to review the utilization of rf classifier in remote sensing . this review has revealed that rf classifier can successfully handle high data dimensionality and multicolinearity , being both fast and insensitive to overfitting . it is , however , sensitive to the sampling design . the variable importance ( vi ) measurement provided by the rf classifier has been extensively exploited in different scenarios , for example to reduce the number of dimensions of hyperspectral data , to identify the most relevant multisource remote sensing and geographic data , and to select the most suitable season to classify particular target classes . further investigations are required into less commonly exploited uses of this classifier , such as for sample proximity analysis to detect and remove outliers in the training samples . story_separator_special_tag classification of hyperspectral data with high spatial resolution from urban areas is investigated . a method based on mathematical morphology for preprocessing of the hyperspectral data is proposed . in this approach , opening and closing morphological transforms are used in order to isolate bright ( opening ) and dark ( closing ) structures in images , where bright/dark means brighter/darker than the surrounding features in the images . a morphological profile is constructed based on the repeated use of openings and closings with a structuring element of increasing size , starting with one original image . in order to apply the morphological approach to hyperspectral data , principal components of the hyperspectral imagery are computed . the most significant principal components are used as base images for an extended morphological profile , i.e. 
, a profile based on more than one original image . in experiments , two hyperspectral urban datasets are classified . the proposed method is used as a preprocessing method for a neural network classifier and compared to more conventional classification methods with different types of statistical computations and feature extraction . story_separator_special_tag the success of machine learning algorithms generally depends on data representation , and we hypothesize that this is because different representations can entangle and hide more or less the different explanatory factors of variation behind the data . although specific domain knowledge can be used to help design representations , learning with generic priors can also be used , and the quest for ai is motivating the design of more powerful representation-learning algorithms implementing such priors . this paper reviews recent work in the area of unsupervised feature learning and deep learning , covering advances in probabilistic models , auto-encoders , manifold learning , and deep networks . this motivates longer-term unanswered questions about the appropriate objectives for learning good representations , for computing representations ( i.e. , inference ) , and the geometrical connections between representation learning , density estimation and manifold learning . story_separator_special_tag abstract : the lowtran computer codes , with lowtran 7 being the most current version , are widely used to calculate atmospheric transmittance and/or radiance in the infrared , visible and near ultraviolet spectral regions ; lowtran 7 has been extended to include the microwave spectral region . the code is easily used , runs quickly and provides the user with a wide variety of atmospheric models and options . its spectral resolution is 20 1/cm full width/half maximum ( fwhm ) with calculations being done in 5 1/cm increments . this report describes work done to increase lowtran 's spectral resolution from 20 to 2 1/cm ( fwhm ) . specifically , the technical objectives for this program were : 1 ) to develop algorithms providing 2 1/cm resolution ( fwhm ) ; 2 ) to model molecular absorption of atmospheric molecules as a function of temperature and pressure ; 3 ) to calculate band model parameters for twelve lowtran molecular species ; and 4 ) to integrate the lowtran 7 capabilities into the new algorithms , maintaining compatibility with the multiple scattering option . modtran , the final product of this effort , is a moderate resolution lowtran story_separator_special_tag we describe the maximum-likelihood parameter estimation problem and how the expectation-maximization ( em ) algorithm can be used for its solution . we first describe the abstract form of the em algorithm as it is often given in the literature . we then develop the em parameter estimation procedure for two applications : 1 ) finding the parameters of a mixture of gaussian densities , and 2 ) finding the parameters of a hidden markov model ( hmm ) ( i.e. , the baum-welch algorithm ) for both discrete and gaussian mixture observation models . we derive the update equations in fairly explicit detail but we do not prove any convergence properties . we try to emphasize intuition rather than mathematical rigor . story_separator_special_tag imaging spectrometers measure electromagnetic energy scattered in their instantaneous field view in hundreds or thousands of spectral channels with higher spectral resolution than multispectral cameras .
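the em tutorial summarized above derives closed-form updates for gaussian mixtures ; the following self-contained numpy sketch of those updates for a two-component 1-d mixture is illustrative ( names and the variance floor are assumptions , not from the cited tutorial ) .

    import numpy as np

    def em_gmm_1d(x, n_iter=50):
        # two-component 1-d gaussian mixture fitted with em.
        pi = np.array([0.5, 0.5])
        mu = np.array([x.min(), x.max()])
        var = np.array([x.var(), x.var()])
        for _ in range(n_iter):
            # e-step: responsibilities r[i, k] ~ pi_k * N(x_i | mu_k, var_k)
            dens = np.exp(-(x[:, None] - mu) ** 2 / (2 * var)) / np.sqrt(2 * np.pi * var)
            r = pi * dens
            r /= r.sum(axis=1, keepdims=True)
            # m-step: closed-form re-estimates from the responsibilities
            nk = r.sum(axis=0)
            pi = nk / len(x)
            mu = (r * x[:, None]).sum(axis=0) / nk
            var = (r * (x[:, None] - mu) ** 2).sum(axis=0) / nk + 1e-9
        return pi, mu, var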
imaging spectrometers are therefore often referred to as hyperspectral cameras ( hscs ) . higher spectral resolution enables material identification via spectroscopic analysis , which facilitates countless applications that require identifying materials in scenarios unsuitable for classical spectroscopic analysis . due to low spatial resolution of hscs , microscopic material mixing , and multiple scattering , spectra measured by hscs are mixtures of spectra of materials in a scene . thus , accurate estimation requires unmixing . pixels are assumed to be mixtures of a few materials , called endmembers . unmixing involves estimating all or some of : the number of endmembers , their spectral signatures , and their abundances at each pixel . unmixing is a challenging , ill-posed inverse problem because of model inaccuracies , observation noise , environmental conditions , endmember variability , and data set size . researchers have devised and investigated many models searching for robust , stable , tractable , and accurate unmixing algorithms . this paper presents an overview of unmixing methods from the time of story_separator_special_tag this volume demonstrates the power of the markov random field ( mrf ) in vision , treating the mrf both as a tool for modeling image data and , utilizing recently developed algorithms , as a means of making inferences about images . these inferences concern underlying image and scene structure as well as solutions to such problems as image reconstruction , image segmentation , 3d vision , and object labeling . it offers key findings and state-of-the-art research on both algorithms and applications . after an introduction to the fundamental concepts used in mrfs , the book reviews some of the main algorithms for performing inference with mrfs ; presents successful applications of mrfs , including segmentation , super-resolution , and image restoration , along with a comparison of various optimization methods ; discusses advanced algorithmic topics ; addresses limitations of the strong locality assumptions in the mrfs discussed in earlier chapters ; and showcases applications that use mrfs in more complex ways , as components in bigger systems or with multiterm energy functions . the book will be an essential guide to current research on these powerful mathematical tools . story_separator_special_tag this paper presents a new bayesian approach to hyperspectral image segmentation that boosts the performance of the discriminative classifiers . this is achieved by combining class densities based on discriminative classifiers with a multi-level logistic markov-gibbs prior . this density favors neighbouring labels of the same class . the adopted discriminative classifier is the fast sparse multinomial regression . the discrete optimization problem one is led to is solved efficiently via graph cut tools .
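to make the linear mixing model from the unmixing overview above concrete , here is a minimal abundance-estimation sketch using nonnegative least squares ; it assumes the endmember spectra are already known and does not enforce the sum-to-one constraint that fully constrained variants add .

    import numpy as np
    from scipy.optimize import nnls

    E = np.array([[0.8, 0.1],                  # (bands, endmembers) signatures
                  [0.5, 0.4],
                  [0.2, 0.9]])
    y = E @ np.array([0.7, 0.3])               # toy pixel mixed 70/30

    abundances, residual = nnls(E, y)          # nonnegative least squares
    print(abundances)                          # recovers roughly [0.7, 0.3]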
the effectiveness of the proposed method is evaluated , with simulated and real aviris images , in two directions : 1 ) to improve the classification performance and 2 ) to decrease the size of the training sets . story_separator_special_tag support vector machines ( svm ) are increasingly used in methodological as well as application oriented research throughout the remote sensing community . their classification accuracy and the fact that they can be applied on virtually any kind of remote sensing data set are their key advantages . especially researchers working with hyperspectral or other high dimensional datasets tend to favor svms as they suffer far less from the hughes phenomenon than classifiers designed for multispectral datasets do . due to these issues , numerous researchers have published a broad range of enhancements on svm . many of these enhancements aim at introducing probability distributions and the bayes theorem . within this paper , we present an assessment and comparison of classification results of the svm and two enhancements , import vector machines ( ivm ) and relevance vector machines ( rvm ) , on simulated datasets of the environmental mapping and analysis program enmap . story_separator_special_tag an approach based on multiple estimator systems ( mess ) for the estimation of biophysical parameters from remotely sensed data is proposed . the rationale behind the proposed approach is to exploit the peculiarities of an ensemble of different estimators in order to improve the robustness ( and in some cases the accuracy ) of the estimation process . the proposed mess can be implemented in two conceptually different ways . one extends the use of an approach previously proposed in the regression literature to the estimation of biophysical parameters from remote sensing data . this approach integrates the estimates obtained from the different regression algorithms making up the ensemble by a direct linear combination ( combination-based approach ) . the other consists of a novel approach that provides as output the estimate obtained by the regression algorithm ( included in the ensemble ) characterized by the highest expected accuracy in the region of the feature space associated with the considered pattern ( selection-based approach ) . this estimator is identified based on a proper partition of the feature space . the effectiveness of the proposed approach has been assessed on the problem of estimating water quality parameters from multispectral story_separator_special_tag remote sensing data processing deals with real-life applications with great societal values . for instance urban monitoring , fire detection or flood prediction from remotely sensed multispectral or radar images have a great impact on economical and environmental issues . to treat efficiently the acquired data and provide accurate products , remote sensing has evolved into a multidisciplinary field , where machine learning and signal processing algorithms play an important role nowadays . this paper serves as a survey of methods and applications , and reviews the latest methodological advances in machine learning for remote sensing data analysis . story_separator_special_tag machine learning has become a standard paradigm for the analysis of remote sensing and geoscience data at both local and global scales . in the upcoming years , with the advent of new satellite constellations , machine learning will have a fundamental role in processing large and heterogeneous data sources .
machine learning will move from mere statistical data processing to actual learning , understanding , and knowledge extraction . the ambitious goal is to provide responses to the challenging scientific questions about the earth system . this special issue aims at providing an updated , refreshing view of current developments in the field . for this special issue , we have collected five articles that present snapshots of the recent advances in machine-learning methodologies for remote sensing and geosciences . story_separator_special_tag this letter introduces the ε-huber loss function in the support vector regression ( svr ) formulation for the estimation of biophysical parameters extracted from remotely sensed data . this cost function can handle the different types of noise contained in the dataset . the method is successfully compared to other cost functions in the svr framework , neural networks and classical bio-optical models for the particular case of the estimation of ocean chlorophyll concentration from satellite remote sensing data . the proposed model provides more accurate , less biased , and improved robust estimation results on the considered case study , especially significant when few in situ measurements are available story_separator_special_tag this letter presents two kernel-based methods for semisupervised regression . the methods rely on building a graph or hypergraph laplacian with both the available labeled and unlabeled data , which is further used to deform the training kernel matrix . the deformed kernel is then used for support vector regression ( svr ) . given the high computational burden involved , we present two alternative formulations based on the nystrom method and the incomplete cholesky factorization to achieve operational processing times . the semisupervised svr algorithms are successfully tested in multiplatform leaf area index estimation and oceanic chlorophyll concentration prediction . experiments are carried out with both multispectral and hyperspectral data , demonstrating good generalization capabilities when a low number of labeled samples are available , which is usually the case in biophysical parameter retrieval . story_separator_special_tag hyperspectral images show similar statistical properties to natural grayscale or color photographic images . however , the classification of hyperspectral images is more challenging because of the very high dimensionality of the pixels and the small number of labeled examples typically available for learning . these peculiarities lead to particular signal processing problems , mainly characterized by indetermination and complex manifolds . the framework of statistical learning has gained popularity in the last decade . new methods have been presented to account for the spatial homogeneity of images , to include user 's interaction via active learning , to take advantage of the manifold structure with semisupervised learning , to extract and encode invariances , or to adapt classifiers and image representations to unseen yet similar scenes . this tutorial reviews the main advances for hyperspectral remote sensing image classification through illustrative examples . story_separator_special_tag gaussian processes ( gps ) have experienced tremendous success in biogeophysical parameter retrieval in the last few years . gps constitute a solid bayesian framework to consistently formulate many function approximation problems .
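for reference , one common parametrization of the ε-huber cost introduced in the svr letter above : zero inside an ε-insensitive zone , quadratic for moderate residuals , and linear for large ones . the exact form and constants used in the letter may differ , so treat this as a sketch .

    import numpy as np

    def eps_huber(residual, eps=0.1, delta=1.0):
        # zero inside the eps-insensitive zone, quadratic up to eps + delta,
        # linear beyond; continuous at both break points.
        e = np.abs(residual)
        quad = (e - eps) ** 2 / (2 * delta)
        lin = (e - eps) - delta / 2
        return np.where(e <= eps, 0.0, np.where(e <= eps + delta, quad, lin))

the insensitive zone ignores small measurement noise , the quadratic region handles gaussian-like errors , and the linear tail keeps outliers from dominating the fit .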
this article reviews the main theoretical gp developments in the field , considering new algorithms that respect signal and noise characteristics , extract knowledge via automatic relevance kernels to yield feature rankings automatically , and allow applicability of associated uncertainty intervals to transport gp models in space and time that can be used to uncover causal relations between variables and can encode physically meaningful prior knowledge via radiative transfer model ( rtm ) emulation . the important issue of computational efficiency will also be addressed . these developments are illustrated in the field of geosciences and remote sensing at local and global scales through a set of illustrative examples . in particular , important problems for land , ocean , and atmosphere monitoring are considered , from accurately estimating oceanic chlorophyll content and pigments to retrieving vegetation properties from multi- and hyperspectral sensors as well as estimating atmospheric parameters ( e.g. , temperature , moisture , and ozone ) from infrared sounders . story_separator_special_tag this letter presents a framework of composite kernel machines for enhanced classification of hyperspectral images . this novel method exploits the properties of mercer 's kernels to construct a family of composite kernels that easily combine spatial and spectral information . this framework of composite kernels demonstrates : 1 ) enhanced classification accuracy as compared to traditional approaches that take into account the spectral information only ; 2 ) flexibility to balance between the spatial and spectral information in the classifier ; and 3 ) computational efficiency . in addition , the proposed family of kernel classifiers opens a wide field for future developments in which spatial and spectral information can be easily integrated . story_separator_special_tag abstract in this article , a comprehensive review of the state-of-the-art graph-based learning methods for classification of the hyperspectral images ( hsi ) is provided , including a spectral information based graph semi-supervised classification and a spectral-spatial information based graph semi-supervised classification . in addition , related techniques are categorized into the following sub-types : ( 1 ) manifold representation based graph semi-supervised learning for hsi classification ( 2 ) sparse representation based graph semi-supervised learning for hsi classification . for each technique , methodologies , training and testing samples , various technical difficulties , as well as performances , are discussed . additionally , future research challenges imposed by the graph-based model are indicated . story_separator_special_tag this paper briefly presents the aims , requirements and results of partial least squares regression analysis ( plsr ) , and its potential utility in ecological studies . this statistical technique is particularly well suited to analyzing a large array of related predictor variables ( i.e . not truly independent ) , with a sample size not large enough compared to the number of independent variables , and in cases in which an attempt is made to approach complex phenomena or syndromes that must be defined as a combination of several variables obtained independently .
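the composite-kernel letter above combines spectral and spatial information at the kernel level ; a minimal sketch of the weighted summation kernel on synthetic data follows ( the mixing weight mu and the use of rbf kernels for both parts are illustrative choices ) .

    import numpy as np
    from sklearn.metrics.pairwise import rbf_kernel
    from sklearn.svm import SVC

    def composite_kernel(X_spec, X_spat, mu=0.5, gamma=1.0):
        # a weighted sum of two valid mercer kernels is itself a valid kernel
        return mu * rbf_kernel(X_spec, gamma=gamma) \
            + (1 - mu) * rbf_kernel(X_spat, gamma=gamma)

    rng = np.random.default_rng(0)
    Xs = rng.normal(size=(100, 30))            # per-pixel spectra
    Xp = rng.normal(size=(100, 30))            # e.g. neighborhood-mean features
    y = (Xs[:, 0] > 0).astype(int)
    clf = SVC(kernel='precomputed').fit(composite_kernel(Xs, Xp), y)

varying mu trades off the spectral and spatial contributions , which is the flexibility point 2 ) of the abstract refers to .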
a simulation experiment is carried out to compare this technique with multiple regression ( mr ) and with a combination of principal component analysis and multiple regression ( pca+mr ) , varying the number of predictor variables and sample sizes . plsr models explained a similar amount of variance to those results obtained by mr and pca+mr . however , plsr was more reliable than other techniques when identifying relevant variables and their magnitudes of influence , especially in cases of small sample size and low tolerance . finally , we present one example of plsr to illustrate its application and interpretation in ecology . story_separator_special_tag a method is presented for subpixel modeling , mapping , and classification in hyperspectral imagery using learned block-structured discriminative dictionaries , where each block is adapted and optimized to represent a material in a compact and sparse manner . the spectral pixels are modeled by linear combinations of subspaces defined by the learned dictionary atoms , allowing for linear mixture analysis . this model provides flexibility in source representation and selection , thus accounting for spectral variability , small-magnitude errors , and noise . a spatial-spectral coherence regularizer in the optimization allows pixel classification to be influenced by similar neighbors . we extend the proposed approach for cases for which there is no knowledge of the materials in the scene , unsupervised classification , and provide experiments and comparisons with simulated and real data . we also present results when the data have been significantly undersampled and then reconstructed , still retaining high-performance classification , showing the potential role of compressive sensing and sparse modeling techniques in efficient acquisition/transmission missions for hyperspectral imagery . story_separator_special_tag this study aims at comparing the capability of different sensors to detect land cover materials within an historical urban center . the main objective is to evaluate the added value of hyperspectral sensors in mapping a complex urban context . in this study we used : ( a ) the ali and hyperion satellite data , ( b ) the landsat etm+ satellite data , ( c ) mivis airborne data and ( d ) the high spatial resolution ikonos imagery as reference . the venice city center shows a complex urban land cover and therefore was chosen for testing the spectral and spatial characteristics of different sensors in mapping the urban tissue . for this purpose , an object-oriented approach and different common classification methods were used . moreover , spectra of the main anthropogenic surfaces ( i.e . roofing and paving materials ) were collected during the field campaigns conducted on the study area . they were exploited for applying band-depth and sub-pixel analyses to subsets of hyperion and mivis hyperspectral imagery . the results show that satellite data with a 30m spatial resolution ( ali , landsat etm+ and hyperion ) are able to identify only the story_separator_special_tag hyperspectral data processing : algorithm design and analysis is a culmination of the research conducted in the remote sensing signal and image processing laboratory ( rssipl ) at the university of maryland , baltimore county . specifically , it treats hyperspectral image processing and hyperspectral signal processing as separate subjects in two different categories . 
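as a quick illustration of the plsr setting described above ( many correlated predictors , few samples ) , a scikit-learn sketch on synthetic data ; the component count and data shapes are assumptions .

    import numpy as np
    from sklearn.cross_decomposition import PLSRegression

    rng = np.random.default_rng(0)
    X = rng.normal(size=(40, 200))             # 40 samples, 200 predictors (p >> n)
    y = X[:, :5].sum(axis=1) + 0.1 * rng.normal(size=40)

    pls = PLSRegression(n_components=3).fit(X, y)  # few latent components guard
    print(round(pls.score(X, y), 3))               # against overfitting when n << p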
most materials covered in this book can be used in conjunction with the author 's first book , hyperspectral imaging : techniques for spectral detection and classification , without much overlap . story_separator_special_tag virtual dimensionality ( vd ) is originally defined as the number of spectrally distinct signatures in hyperspectral data . unfortunately , there is no provided specific definition of what spectrally distinct signatures are . as a result , many techniques developed to estimate vd have produced various values for vd with different interpretations . this paper revisits vd and interprets vd in the context of neyman-pearson detection theory where a vd estimation is formulated as a binary composite hypothesis testing problem with targets of interest considered as signal sources under the alternative hypothesis , and the null hypothesis representing the absence of targets . in particular , the signal sources under both hypotheses are specified by three aspects . one is signal sources completely characterized by data statistics via eigenanalysis , which yields harsanyi-farrand-chang method and maximum orthogonal complement algorithm . another one is signal sources obtained by a linear mixing model fitting error analysis . a third one is signal sources specified by inter-band spectral information statistics which derives a new concept , called target-specified vd . a comparative analysis among these three aspects is also conducted by synthetic and real image experiments . story_separator_special_tag anomaly detection becomes increasingly important in hyperspectral image analysis , since hyperspectral imagers can now uncover many material substances which were previously unresolved by multispectral sensors . two types of anomaly detection are of interest and considered in this paper . one was previously developed by reed and yu to detect targets whose signatures are distinct from their surroundings . another was designed to detect targets with low probabilities in an unknown image scene . interestingly , they both operate the same form as does a matched filter . moreover , they can be implemented in real-time processing , provided that the sample covariance matrix is replaced by the sample correlation matrix . one disadvantage of an anomaly detector is the lack of ability to discriminate the detected targets from another . in order to resolve this problem , the concept of target discrimination measures is introduced to cluster different types of anomalies into separate target classes . by using these class means as target information , the detected anomalies can be further classified . with inclusion of target discrimination in anomaly detection , anomaly classification can be implemented in a three-stage process , first by anomaly detection to find story_separator_special_tag the pixel purity index ( ppi ) has been widely used in hyperspectral image analysis for endmember extraction due to its publicity and availability in the environment for visualizing images ( envi ) software . unfortunately , its detailed implementation has never been made available in the literature . this paper investigates the ppi based on limited published results and proposes a fast iterative algorithm to implement the ppi , referred to as fast iterative ppi ( fippi ) . it improves the ppi in several aspects . instead of using randomly generated vectors as initial endmembers , the fippi produces an appropriate initial set of endmembers to speed up its process .
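for context on the ppi computation that fippi accelerates , the basic randomized skewer-counting idea can be sketched as follows ; fippi replaces the random initial endmembers and adds an iterative rule , which is not reproduced here .

    import numpy as np

    def ppi_counts(X, n_skewers=500, seed=0):
        # project pixels onto random unit vectors ('skewers') and count how
        # often each pixel is an extreme point; high counts suggest endmembers.
        rng = np.random.default_rng(seed)
        counts = np.zeros(len(X), dtype=int)
        for _ in range(n_skewers):
            v = rng.normal(size=X.shape[1])
            proj = X @ (v / np.linalg.norm(v))
            counts[proj.argmin()] += 1
            counts[proj.argmax()] += 1
        return counts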
additionally , it estimates the number of endmembers required to be generated by a recently developed concept , virtual dimensionality ( vd ) which is one of the most crucial issues in the implementation of ppi . furthermore , it is an iterative algorithm , where an iterative rule is developed to improve each of the iterations until it reaches a final set of endmembers . most importantly , it is an unsupervised algorithm as opposed to the ppi , which requires human intervention to manually select story_separator_special_tag the spectral features in hyperspectral imagery ( hsi ) contain significant structure that , if properly characterized , could enable more efficient data acquisition and improved data analysis . because most pixels contain reflectances of just a few materials , we propose that a sparse coding model is well-matched to hsi data . sparsity models consider each pixel as a combination of just a few elements from a larger dictionary , and this approach has proven effective in a wide range of applications . furthermore , previous work has shown that optimal sparse coding dictionaries can be learned from a dataset with no other a priori information ( in contrast to many hsi endmember discovery algorithms that assume the presence of pure spectra or side information ) . we modified an existing unsupervised learning approach and applied it to hsi data ( with significant ground truth labeling ) to learn an optimal sparse coding dictionary . using this learned dictionary , we demonstrate three main findings : 1 ) the sparse coding model learns spectral signatures of materials in the scene and locally approximates nonlinear manifolds for individual materials ; 2 ) this learned dictionary can be used to infer story_separator_special_tag a new sparsity-based algorithm for the classification of hyperspectral imagery is proposed in this paper . the proposed algorithm relies on the observation that a hyperspectral pixel can be sparsely represented by a linear combination of a few training samples from a structured dictionary . the sparse representation of an unknown pixel is expressed as a sparse vector whose nonzero entries correspond to the weights of the selected training samples . the sparse vector is recovered by solving a sparsity-constrained optimization problem , and it can directly determine the class label of the test sample . two different approaches are proposed to incorporate the contextual information into the sparse recovery optimization problem in order to improve the classification performance . in the first approach , an explicit smoothing constraint is imposed on the problem formulation by forcing the vector laplacian of the reconstructed image to become zero . in this approach , the reconstructed pixel of interest has similar spectral characteristics to its four nearest neighbors . the second approach is via a joint sparsity model where hyperspectral pixels in a small neighborhood around the test pixel are simultaneously represented by linear combinations of a few common training samples , story_separator_special_tag in this paper , we propose a new sparsity-based algorithm for automatic target detection in hyperspectral imagery ( hsi ) . this algorithm is based on the concept that a pixel in hsi lies in a low-dimensional subspace and thus can be represented as a sparse linear combination of the training samples . 
the sparse representation ( a sparse vector corresponding to the linear combination of a few selected training samples ) of a test sample can be recovered by solving an l0-norm minimization problem . with the recent development of the compressed sensing theory , such minimization problem can be recast as a standard linear programming problem or efficiently approximated by greedy pursuit algorithms . once the sparse vector is obtained , the class of the test sample can be determined by the characteristics of the sparse vector on reconstruction . in addition to the constraints on sparsity and reconstruction accuracy , we also exploit the fact that in hsi the neighboring pixels have a similar spectral characteristic ( smoothness ) . in our proposed algorithm , a smoothness constraint is also imposed by forcing the vector laplacian at each reconstructed pixel to be minimum all the time within story_separator_special_tag in this paper , a novel nonlinear technique for hyperspectral image ( hsi ) classification is proposed . our approach relies on sparsely representing a test sample in terms of all of the training samples in a feature space induced by a kernel function . for each test pixel in the feature space , a sparse representation vector is obtained by decomposing the test pixel over a training dictionary , also in the same feature space , by using a kernel-based greedy pursuit algorithm . the recovered sparse representation vector is then used directly to determine the class label of the test pixel . projecting the samples into a high-dimensional feature space and kernelizing the sparse representation improve the data separability between different classes , providing a higher classification accuracy compared to the more conventional linear sparsity-based classification algorithms . moreover , the spatial coherency across neighboring pixels is also incorporated through a kernelized joint sparsity model , where all of the pixels within a small neighborhood are jointly represented in the feature space by selecting a few common training samples . kernel greedy optimization algorithms are suggested in this paper to solve the kernel versions of the single-pixel and story_separator_special_tag due to the advantages of deep learning , in this paper , a regularized deep feature extraction ( fe ) method is presented for hyperspectral image ( hsi ) classification using a convolutional neural network ( cnn ) . the proposed approach employs several convolutional and pooling layers to extract deep features from hsis , which are nonlinear , discriminant , and invariant . these features are useful for image classification and target detection . furthermore , in order to address the common issue of imbalance between high dimensionality and limited availability of training samples for the classification of hsi , a few strategies such as l2 regularization and dropout are investigated to avoid overfitting in class data modeling . more importantly , we propose a 3-d cnn-based fe model with combined regularization to extract effective spectral-spatial features of hyperspectral imagery . finally , in order to further improve the performance , a virtual sample enhanced method is proposed . the proposed approaches are carried out on three widely used hyperspectral data sets : indian pines , university of pavia , and kennedy space center . 
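tying together the sparsity-based classification abstracts above , a minimal sparse-representation classifier : recover a sparse code with a greedy pursuit and assign the class whose atoms give the smallest reconstruction residual . the orthogonal matching pursuit solver and the parameter values are illustrative choices , and the smoothness and joint-sparsity constraints of the cited papers are omitted .

    import numpy as np
    from sklearn.linear_model import OrthogonalMatchingPursuit

    def src_classify(x, D, labels, n_nonzero=5):
        # recover a sparse code for pixel x over the training dictionary D
        # (bands x n_train), then pick the class that reconstructs x best.
        omp = OrthogonalMatchingPursuit(n_nonzero_coefs=n_nonzero,
                                        fit_intercept=False).fit(D, x)
        alpha = omp.coef_
        best, best_res = None, np.inf
        for c in np.unique(labels):
            a_c = np.where(labels == c, alpha, 0.0)   # keep class-c coefficients only
            res = np.linalg.norm(x - D @ a_c)
            if res < best_res:
                best, best_res = c, res
        return best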
the obtained results reveal that the proposed models with sparse constraints provide competitive results story_separator_special_tag classification is one of the most popular topics in hyperspectral remote sensing . in the last two decades , a huge number of methods were proposed to deal with the hyperspectral data classification problem . however , most of them do not hierarchically extract deep features . in this paper , the concept of deep learning is introduced into hyperspectral data classification for the first time . first , we verify the eligibility of stacked autoencoders by following classical spectral information-based classification . second , a new way of classifying with spatial-dominated information is proposed . we then propose a novel deep learning framework to merge the two features , from which we can get the highest classification accuracy . the framework is a hybrid of principle component analysis ( pca ) , deep learning architecture , and logistic regression . specifically , as a deep learning architecture , stacked autoencoders are aimed to get useful high-level features . experimental results with widely-used hyperspectral data indicate that classifiers built in this deep learning-based framework provide competitive performance . in addition , the proposed joint spectral-spatial deep neural network opens a new window for future research , showcasing the deep learning-based story_separator_special_tag hyperspectral data classification is a hot topic in remote sensing community . in recent years , significant effort has been focused on this issue . however , most of the methods extract the features of original data in a shallow manner . in this paper , we introduce a deep learning approach into hyperspectral image classification . a new feature extraction ( fe ) and image classification framework are proposed for hyperspectral data analysis based on deep belief network ( dbn ) . first , we verify the eligibility of restricted boltzmann machine ( rbm ) and dbn by the following spectral information-based classification . then , we propose a novel deep architecture , which combines the spectral spatial fe and classification together to get high classification accuracy . the framework is a hybrid of principal component analysis ( pca ) , hierarchical learning-based fe , and logistic regression ( lr ) . experimental results with hyperspectral data indicate that the classifier provide competitive solution with the state-of-the-art methods . in addition , this paper reveals that deep learning system has huge potential for hyperspectral data classification . story_separator_special_tag in hyperspectral remote sensing image classification , ensemble systems with support vector machine ( svm ) , such as the random subspace svm ensemble ( rsse ) , have significantly outperformed single svm on the robustness and overall accuracy . in this paper , we introduce a novel subspace mechanism , the optimizing subspace svm ensemble ( osse ) , to improve rsse by selecting discriminating subspaces for individual svms . the framework is based on genetic algorithm ( ga ) , adopting the jeffries-matusita ( jm ) distance as a criterion , to optimize the selected subspaces . the combination of optimizing subspaces is more suitable for classification than the random one , at the same time having the ability to accommodate requisite diversity within the ensemble . the modifications have improved the accuracies of individual classifiers ; as a result , better overall accuracies are present . 
experiments on the classification of two hyperspectral datasets reveal that our proposed osse obtains sound performances compared with rsse , single svm , and other ensemble with ga to optimize svm . story_separator_special_tag object detection in optical remote sensing images , being a fundamental but challenging problem in the field of aerial and satellite image analysis , plays an important role for a wide range of applications and is receiving significant attention in recent years . while enormous methods exist , a deep review of the literature concerning generic object detection is still lacking . this paper aims to provide a review of the recent progress in this field . different from several previously published surveys that focus on a specific object class such as building and road , we concentrate on more generic object categories including , but are not limited to , road , building , tree , vehicle , ship , airport , urban-area . covering about 270 publications we survey ( 1 ) template matching-based object detection methods , ( 2 ) knowledge-based object detection methods , ( 3 ) object-based image analysis ( obia ) -based object detection methods , ( 4 ) machine learning-based object detection methods , and ( 5 ) five publicly available datasets and three standard evaluation metrics . we also discuss the challenges of current studies and propose two promising research directions , story_separator_special_tag feature selection is a key task in remote sensing data processing , particularly in case of classification from hyperspectral images . a logistic regression ( lr ) model may be used to predict the probabilities of the classes on the basis of the input features , after ranking them according to their relative importance . in this letter , the lr model is applied for both the feature selection and the classification of remotely sensed images , where more informative soft classifications are produced naturally . the results indicate that , with fewer restrictive assumptions , the lr model is able to reduce the features substantially without any significant decrease in the classification accuracy of both the soft and hard classifications story_separator_special_tag mean shift , a simple iterative procedure that shifts each data point to the average of data points in its neighborhood is generalized and analyzed in the paper . this generalization makes some k-means like clustering algorithms its special cases . it is shown that mean shift is a mode-seeking process on the surface constructed with a `` shadow '' kernel . for gaussian kernels , mean shift is a gradient mapping . convergence is studied for mean shift iterations . cluster analysis is treated as a deterministic problem of finding a fixed point of mean shift that characterizes the data . applications in clustering and hough transform are demonstrated . mean shift is also considered as an evolutionary strategy that performs multistart global optimization . story_separator_special_tag the papers in this special issue focus on the deployment of big data applications for use in remote sensing . this issue is intended to introduce the latest techniques to manage , exploit , process , and analyze big data in remote sensing applications . it contains 11 papers that exhibit the latest advances in big data in remote sensing .
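returning to the mean shift abstract above , the basic iteration with a flat kernel is only a few lines ; the bandwidth and iteration count here are illustrative .

    import numpy as np

    def mean_shift(X, bandwidth=1.0, n_iter=30):
        # flat-kernel mean shift: move each point to the mean of all original
        # points within the bandwidth, so points climb toward local modes.
        Y = X.astype(float).copy()
        for _ in range(n_iter):
            for i in range(len(Y)):
                near = X[np.linalg.norm(X - Y[i], axis=1) <= bandwidth]
                Y[i] = near.mean(axis=0)
        return Y   # near-identical rows end up at the same mode/cluster

with a gaussian instead of a flat kernel the update becomes a distance-weighted average , which is the gradient-mapping view the abstract mentions .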
to understand big data , usually three facets should be taken into account from owning data , data methods , and data applications , which contribute together to a single big data life cycle , including identification of applications , data collections , data processing , data analysis , data visualization , data evaluation , and so on . story_separator_special_tag this paper addresses classification of hyperspectral remote sensing images with kernel-based methods defined in the framework of semisupervised support vector machines ( s3vms ) . in particular , we analyzed the critical problem of the nonconvexity of the cost function associated with the learning phase of s3vms by considering different ( s3vms ) techniques that solve optimization directly in the primal formulation of the objective function . as the nonconvex cost function can be characterized by many local minima , different optimization techniques may lead to different classification results . here , we present two implementations , which are based on different rationales and optimization methods . the presented techniques are compared with s3vms implemented in the dual formulation in the context of classification of real hyperspectral remote sensing images . experimental results point out the effectiveness of the techniques based on the optimization of the primal formulation , which provided higher accuracy and better generalization ability than the s3vms optimized in the dual formulation story_separator_special_tag in real applications , it is difficult to obtain a sufficient number of training samples in supervised classification of hyperspectral remote sensing images . furthermore , the training samples may not represent the real distribution of the whole space . to attack these problems , an ensemble algorithm which combines generative ( mixture of gaussians ) and discriminative ( support cluster machine ) models for classification is proposed . experimental results carried out on a hyperspectral data set collected by the reflective optics system imaging spectrometer sensor validate the effectiveness of the proposed approach . story_separator_special_tag neural machine translation is a relatively new approach to statistical machine translation based purely on neural networks . the neural machine translation models often consist of an encoder and a decoder . the encoder extracts a fixed-length representation from a variable-length input sentence , and the decoder generates a correct translation from this representation . in this paper , we focus on analyzing the properties of the neural machine translation using two models ; rnn encoder-decoder and a newly proposed gated recursive convolutional neural network . we show that the neural machine translation performs relatively well on short sentences without unknown words , but its performance degrades rapidly as the length of the sentence and the number of unknown words increase . furthermore , we find that the proposed gated recursive convolutional network learns a grammatical structure of a sentence automatically . story_separator_special_tag abstract the main objective was to determine whether partial least squares ( pls ) regression improves grass/herb biomass estimation when compared with hyperspectral indices , that is normalised difference vegetation index ( ndvi ) and red-edge position ( rep ) . to achieve this objective , fresh green grass/herb biomass and airborne images ( hymap ) were collected in the majella national park , italy in the summer of 2005.
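the narrow-band ndvi used in the biomass study above is a two-band normalized ratio ; a small sketch that picks the bands nearest the requested wavelengths ( the 740/771 nm pairing follows the abstract , while the helper itself is hypothetical ) .

    import numpy as np

    def narrow_band_ndvi(cube, wavelengths, wl_a=771.0, wl_b=740.0):
        # ndvi = (r_a - r_b) / (r_a + r_b) from the two bands nearest the
        # requested wavelengths in nm; defaults follow the cited study.
        w = np.asarray(wavelengths, dtype=float)
        ia, ib = np.abs(w - wl_a).argmin(), np.abs(w - wl_b).argmin()
        ra = cube[..., ia].astype(float)
        rb = cube[..., ib].astype(float)
        return (ra - rb) / (ra + rb + 1e-12)

a linear regression of field biomass on this index gives the kind of single-index model whose prediction error the study compares against pls regression .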
the predictive performances of hyperspectral indices and pls regression models were then determined and compared using calibration ( n = 30 ) and test ( n = 12 ) data sets . the regression model derived from ndvi computed from bands at 740 and 771 nm produced a lower standard error of prediction ( sep = 264 g m-2 ) on the test data compared with the standard ndvi involving bands at 665 and 801 nm ( sep = 331 g m-2 ) , but comparable results with reps determined by various methods ( sep = 261 to 295 g m-2 ) . pls regression models based on original , derivative and continuum-removed spectra produced lower prediction errors ( sep = 149 to 256 g m-2 ) compared with ndvi and rep models . the lowest prediction error ( sep = 149 g m-2 , 19 % of mean story_separator_special_tag hyperspectral data are a challenge for data compression . several factors make the constraints particularly stringent and the challenge exciting . first is the size of the data : as a third dimension is added , the amount of data increases dramatically making the compression necessary at different steps of the processing chain . also different properties are required at different stages of the processing chain with variable tradeoff . second , the differences in spatial and spectral relation between values make the more traditional 3d compression algorithms obsolete . and finally , the high expectations from the scientists using hyperspectral data require the assurance that the compression will not degrade the data quality . all these aspects are investigated in the present chapter and the different possible tradeoffs are explored . in conclusion , we see that a number of challenges remain , of which the most important is to find an easier way to qualify the different algorithm proposals . story_separator_special_tag imaging spectroscopy is a tool that can be used to spectrally identify and spatially map materials based on their specific chemical bonds . spectroscopic analysis requires significantly more sophistication than has been employed in conventional broadband remote sensing analysis . we describe a new system that is effective at material identification and mapping : a set of algorithms within an expert system decision-making framework that we call tetracorder . the expertise in the system has been derived from scientific knowledge of spectral identification . the expert system rules are implemented in a decision tree where multiple algorithms are applied to spectral analysis , additional expert rules and algorithms can be applied based on initial results , and more decisions are made until spectral analysis is complete . because certain spectral features are indicative of specific chemical bonds in materials , the system can accurately identify and map those materials . in this paper we describe the framework of the decision making process used for spectral identification , describe specific spectral feature analysis algorithms , and give examples of what analyses and types of maps are possible with imaging spectroscopy data . we also present the expert system
to enhance separation of species-of-interest spectra from the other spectra in the data , the variation in reflectance values for the species-of-interest were used to create a characteristic spectral shape . with a simple algorithm , the resultant shape-space was used as a data filter that correctly discriminated against 94 % of the non-species-of-interest trees . story_separator_special_tag traditional machine learning makes a basic assumption : the training and test data should be under the same distribution . however , in many cases , this identical-distribution assumption does not hold . the assumption might be violated when a task from one new domain comes , while there are only labeled data from a similar old domain . labeling the new data can be costly and it would also be a waste to throw away all the old data . in this paper , we present a novel transfer learning framework called tradaboost , which extends boosting-based learning algorithms ( freund & schapire , 1997 ) . tradaboost allows users to utilize a small amount of newly labeled data to leverage the old data to construct a high-quality classification model for the new data . we show that this method can allow us to learn an accurate model using only a tiny amount of new data and a large amount of old data , even when the new data are not sufficient to train a model alone . we show that tradaboost allows knowledge to be effectively transferred from the old data to the new . the effectiveness of story_separator_special_tag abstract in this review , various applications of near-infrared hyperspectral imaging ( nir-hsi ) in agriculture and in the quality control of agro-food products are presented . nir-hsi is an emerging technique that combines classical nir spectroscopy and imaging techniques in order to simultaneously obtain spectral and spatial information from a field or a sample . the technique is nondestructive , nonpolluting , fast , and relatively inexpensive per analysis . currently , its applications in agriculture include vegetation mapping , crop disease , stress and yield detection , component identification in plants , and detection of impurities . there is growing interest in hsi for safety and quality assessments of agro-food products . the applications have been classified from the level of satellite images to the macroscopic or molecular level . story_separator_special_tag in this letter , a technique based on independent component analysis ( ica ) and extended morphological attribute profiles ( eaps ) is presented for the classification of hyperspectral images . the ica maps the data into a subspace in which the components are as independent as possible . aps , which are extracted by using several attributes , are applied to each image associated with an extracted independent component , leading to a set of extended eaps . two approaches are presented for including the computed profiles in the analysis . the features extracted by the morphological processing are then classified with an svm . the experiments carried out on two hyperspectral images proved the effectiveness of the proposed technique . story_separator_special_tag tree species mapping in forest areas is an important topic in forest inventory . in recent years , several studies have been carried out using different types of hyperspectral sensors under various forest conditions . 
the aim of this work was to evaluate the potential of two high spectral and spatial resolution hyperspectral sensors ( hyspex-vnir 1600 and hyspex-swir 320i ) , operating at different wavelengths , for tree species classification of boreal forests . to address this objective , many experiments were carried out , taking into consideration : 1 ) three classifiers ( support vector machines ( svm ) , random forest ( rf ) , and gaussian maximum likelihood ) ; 2 ) two spatial resolutions ( 1.5 m and 0.4 m pixel sizes ) ; 3 ) two subsets of spectral bands ( all and a selection ) ; and 4 ) two spatial levels ( pixel and tree levels ) . the study area is characterized by the presence of four classes : 1 ) norway spruce , 2 ) scots pine , together with 3 ) scattered birch and 4 ) other broadleaves . our results showed that : 1 ) the hyspex vnir 1600 story_separator_special_tag accurate generation of a land cover map using hyperspectral data is an important application of remote sensing . a multiple classifier system ( mcs ) is an effective tool for hyperspectral image classification . however , most of the research in mcs addressed the problem of classifier combination , while the potential of selecting classifiers dynamically is least explored for hyperspectral image classification . the goal of this paper is to assess the potential of dynamic classifier selection/dynamic ensemble selection ( dcs/des ) for classification of hyperspectral images , which consists in selecting the best ( subset of ) optimal classifier ( s ) relative to each input pixel by exploiting the local information content of the image pixel . in order to have an accurate as well as computationally fast dcs/des , we propose a new dcs/des framework based on extreme learning machine ( elm ) regression and a new spectral-spatial classification model , which incorporates the spatial contextual information by using the markov random field ( mrf ) with the proposed des method . the proposed classification framework can be considered as a unified model to exploit the full spectral and spatial information . classification experiments carried out story_separator_special_tag statistical and physical models have seldom been compared in studying grasslands . in this paper , both modeling approaches are investigated for mapping leaf area index ( lai ) in a mediterranean grassland ( majella national park , italy ) using hymap airborne hyperspectral images . we compared inversion of the prosail radiative transfer model with narrow band vegetation indices ( ndvi-like and savi2-like ) and partial least squares regression ( pls ) . to assess the performance of the investigated models , the normalized rmse ( nrmse ) and r2 between in situ measurements of leaf area index and estimated parameter values are reported . the results of the study demonstrate that lai can be estimated through prosail inversion with accuracies comparable to those of statistical approaches ( r2 = 0.89 , nrmse = 0.22 ) . the accuracy of the radiative transfer model inversion was further increased by using only a spectral subset of the data ( r2 = 0.91 , nrmse = 0.18 ) . for the feature selection , wavebands not well simulated by prosail were sequentially discarded until all bands fulfilled the imposed accuracy requirements .
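as a concrete illustration , the biomass and lai studies above all rest on two-band indices of the form ( r_nir - r_red ) / ( r_nir + r_red ) computed from narrow hyperspectral bands . the following minimal python sketch shows one way to compute such a narrow-band ndvi from a hyperspectral cube ; the cube layout ( rows , cols , bands ) , the helper name narrowband_ndvi , and the default 740 / 771 nm band pair ( taken from the biomass study above ) are illustrative assumptions , not code from any of the cited papers .

    import numpy as np

    def narrowband_ndvi(cube, wavelengths, lam_nir=771.0, lam_red=740.0):
        # pick the cube bands whose center wavelengths are nearest the request
        w = np.asarray(wavelengths, dtype=float)
        i_nir = int(np.argmin(np.abs(w - lam_nir)))
        i_red = int(np.argmin(np.abs(w - lam_red)))
        nir = cube[..., i_nir].astype(float)
        red = cube[..., i_red].astype(float)
        # normalized difference; the small epsilon guards against zero sums
        return (nir - red) / (nir + red + 1e-12)

regressing such an index against field-measured biomass or lai with an ordinary least-squares fit ( e.g. , numpy.polyfit on the calibration samples ) reproduces the kind of index-based regression model that the pls models above are compared against .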
story_separator_special_tag background and aims : near infrared ( nir ) spectroscopy techniques are not only used for a variety of physical and chemical analyses in the food industry , but also in remote sensing studies as tools to predict plant water status . in this study , nir spectroscopy was evaluated as a method to estimate water potential of grapevines . methods and results : cabernet sauvignon , chardonnay and shiraz leaves were scanned using an integrated spectronic ( 300 to 1100 nm ) or an asd fieldspec 3 ( analytical spectral devices , boulder , colorado , usa ) ( 350 to 1850 nm ) spectrophotometer and then measured to obtain midday leaf water potential using a pressure chamber . on the same shoot , the leaf adjacent to the one used for midday leaf water potential measurement was used to measure midday stem water potential . calibrations were built and nir showed good prediction ability ( standard error in cross validation ( secv ) < 0.24 mpa ) for stem water potential for each of the three grapevine varieties . the best calibration was obtained for the prediction of stem water potential in shiraz ( r = 0.92 and a secv = story_separator_special_tag the 2013 data fusion contest organized by the data fusion technical committee ( dftc ) of the ieee geoscience and remote sensing society aimed at investigating the synergistic use of hyperspectral and light detection and ranging ( lidar ) data . the data sets distributed to the participants during the contest , a hyperspectral image and the corresponding lidar-derived digital surface model ( dsm ) , were acquired by the nsf-funded center for airborne laser mapping over the university of houston campus and its neighboring area in the summer of 2012 . this paper highlights the two awarded research contributions , which investigated different approaches for the fusion of hyperspectral and lidar data , including a combined unsupervised and supervised classification scheme , and a graph-based method for the fusion of spectral , spatial , and elevation information . story_separator_special_tag among the techniques that have been developed in spectroscopy , derivative analysis is particularly promising for use with remote sensing data . in the first step of this research we apply the derivative spectrum to a real hyperspectral image and introduce a new target detection approach called dcem . for this purpose , 1st to 5th orders of the derivative spectrum were applied to the dcem . the outcome of this research has shown that the application of the derivative spectrum in target detection is perfectly advisable at a specific derivative order for each target . this order can be introduced as an optimized order or the best dcem . the spectrum differentiation eliminates low frequency components of the spectrum . despite the little information included in those low frequency components of a signal or spectrum , their complete elimination causes an information loss problem . hence , in the second step of this research an ensemble classifier approach was employed for the combined use of both spectra and the best derivative order . this simultaneous use of the derivative and zero order spectra is introduced as ecem . experiments were conducted via a hymap hyperspectral airborne image in eastern iran story_separator_special_tag very high resolution hyperspectral data should be very useful to provide detailed maps of urban land cover . in order to provide such maps , both accurate and precise classification tools need , however , to be developed .
in this letter , new methods for classification of hyperspectral remote sensing data are investigated , with the primary focus on multiple classifications and spatial analysis to improve mapping accuracy in urban areas . in particular , we compare spatial reclassification and mathematical morphology approaches . we show results for classification of dais data over the town of pavia , in northern italy . classification maps of two test areas are given , and the overall and individual class accuracies are analyzed with respect to the parameters of the proposed classification procedures . story_separator_special_tag in order to ensure homogeneity in performance assessment of proposed algorithms for information extraction in the earth observation ( eo ) domain , standardized remotely sensed datasets are particularly useful and welcome . fully aware of this principle , the ieee geoscience and remote sensing society ( grss ) and especially its image analysis and data fusion technical committee ( iadf ) , has been organizing for some years now the data fusion contest ( dfc ) . in the dfc , one specific dataset is made available to the scientific community , which can download it and use it to test its newly developed algorithms . the consistency of the starting dataset across participating groups ensures the significance of assessing and ranking results , to finally proclaim the winner who scored the highest . more recently , the ieee grss has provided one more contribution to the standardization effort by building the data and algorithm standard evaluation ( dase ) website . dase can distribute to registered users a limited set of possible standard open datasets , together with some ground truth info , and automatically assess the processing results provided by the users . in this paper we story_separator_special_tag this letter presents a hyperspectral image classification method based on relevance vector machines ( rvms ) . support vector machine ( svm ) -based approaches have been recently proposed for hyperspectral image classification and have raised important interest . in this letter , it is proposed to use an rvm-based approach for the classification of hyperspectral images . it is shown that approximately the same classification accuracy is obtained using rvm-based classification , with a significantly smaller relevance vector rate and , therefore , much faster testing time , compared with svm-based classification . this feature makes the rvm-based hyperspectral classification approach more suitable for applications that require low complexity and , possibly , real-time classification .
story_separator_special_tag this paper studies a fully bayesian algorithm for endmember extraction and abundance estimation for hyperspectral imagery . each pixel of the hyperspectral image is decomposed as a linear combination of pure endmember spectra following the linear mixing model . the estimation of the unknown endmember spectra is conducted in a unified manner by generating the posterior distribution of abundances and endmember parameters under a hierarchical bayesian model . this model assumes conjugate prior distributions for these parameters , accounts for nonnegativity and full-additivity constraints , and exploits the fact that the endmember proportions lie on a lower dimensional simplex . a gibbs sampler is proposed to overcome the complexity of evaluating the resulting posterior distribution . this sampler generates samples distributed according to the posterior distribution and estimates the unknown parameters using these generated samples . the accuracy of the joint bayesian estimator is illustrated by simulations conducted on synthetic and real aviris images . story_separator_special_tag this paper proposes a hierarchical bayesian model that can be used for semi-supervised hyperspectral image unmixing . the model assumes that the pixel reflectances result from linear combinations of pure component spectra contaminated by an additive gaussian noise . the abundance parameters appearing in this model satisfy positivity and additivity constraints . these constraints are naturally expressed in a bayesian context by using appropriate abundance prior distributions . the posterior distributions of the unknown model parameters are then derived . a gibbs sampler allows one to draw samples distributed according to the posteriors of interest and to estimate the unknown abundances . an extension of the algorithm is finally studied for mixtures with unknown numbers of spectral components belonging to a known library . the performance of the different unmixing strategies is evaluated via simulations conducted on synthetic and real data . story_separator_special_tag spectral unmixing and classification have been widely used in the recent literature to analyze remotely sensed hyperspectral data . however , few strategies have combined these two approaches in the analysis . in this work , we propose a new hybrid strategy for semisupervised classification of hyperspectral data which exploits both spectral unmixing and classification in a synergetic fashion . during the process , the most informative unlabeled samples are automatically selected from the pool of candidates , thus reducing the computational cost of the process by including only the most informative unlabeled samples . our approach integrates a well-established discriminative probabilistic classifier , the multinomial logistic regression ( mlr ) , with different spectral unmixing chains , thus bridging the gap between spectral unmixing and classification and exploiting them together for the analysis of hyperspectral data .
the effectiveness of the proposed method is evaluated using two real hyperspectral data sets , collected by the nasa jet propulsion laboratory 's airborne visible infrared imaging spectrometer ( aviris ) over the indian pines region , indiana , and by the reflective optics spectrographic imaging system ( rosis ) over the university of pavia , italy . story_separator_special_tag in sparse representation ( sr ) driven hyperspectral image classification , signal-to-reconstruction rule-based classification may lack generalization performance . in order to overcome this limitation , we present a new method for discriminative sparse representation of hyperspectral data by learning a reconstructive dictionary and a discriminative classifier in an sr model regularized with total variation ( tv ) . the proposed method features the following components . first , we adopt a spectral unmixing by variable splitting augmented lagrangian and tv method to guarantee the spatial homogeneity of sparse representations . second , we embed dictionary learning in the method to enhance the representative power of sparse representations via gradient descent in a class-wise manner . finally , we adopt a sparse multinomial logistic regression ( smlr ) model and design a class-oriented optimization strategy to obtain a powerful classifier , which improves the performance of the learnt model for specific classes . the first two components are beneficial to produce discriminative sparse representations , whereas adopting smlr allows for effectively modeling the discriminative information . experimental results with both simulated and real hyperspectral data sets in a number of experimental comparisons with other related approaches demonstrate the superiority story_separator_special_tag we investigate the application of independent-component analysis ( ica ) to remotely sensed hyperspectral image classification . we focus on the performance of two well-known and frequently used ica algorithms : joint approximate diagonalization of eigenmatrices ( jade ) and fastica ; but the proposed method is applicable to other ica algorithms . the major advantage of using ica is its ability to classify objects with unknown spectral signatures in an unknown image scene , i.e. , unsupervised classification . however , ica suffers from computational expensiveness , which limits its application to high-dimensional data analysis . in order to make it applicable or reduce the computation time in hyperspectral image classification , a data-preprocessing procedure is employed to reduce the data dimensionality . instead of using principal-component analysis ( pca ) , a noise-adjusted principal-components ( napc ) transform is employed for this purpose , which can reorganize the original data with respect to the signal-to-noise ratio , a more appropriate image-ranking criterion than variance in pca . the experimental results demonstrate that the major principal components from the napc transform can better maintain the object information in the original data than those from pca . as a result , an story_separator_special_tag it is well known that there is a strong relation between class definition precision and classification accuracy in pattern classification applications . in hyperspectral data analysis , usually classes of interest contain one or more components and may not be well represented by a single gaussian density function .
in this paper , a model-based mixture classifier , which uses mixture models to characterize class densities , is discussed . however , a key outstanding problem of this approach is how to choose the number of components and determine their parameters for such models in practice , and to do so in the face of limited training sets where estimation error becomes a significant factor . the proposed classifier estimates the number of subclasses and class statistics simultaneously by choosing the best model . the structure of class covariances is also addressed through a model-based covariance estimation technique introduced in this paper . story_separator_special_tag we propose a computationally efficient method for determining anomalies in hyperspectral data . in the first stage of the algorithm , the background classes , which are the dominant classes in the image , are found . the method consists of robust clustering of a randomly chosen small percentage of the image pixels . the clusters are the representatives of the background classes . by using a subset of the pixels instead of the whole image , the computation is sped up , and the probability of including outliers in the background model is reduced . anomalous pixels are the pixels with spectra that have large relative distances from the cluster centers . several clustering techniques are investigated , and experimental results using realistic hyperspectral data are presented . a self-organizing map clustered using the local minima of the u-matrix ( unified distance matrix ) is identified as the most reliable method for background class extraction . the proposed algorithm for anomaly detection is evaluated using realistic hyperspectral data , is compared with a state-of-the-art anomaly detection algorithm , and is shown to perform significantly better . story_separator_special_tag this paper studies a new bayesian unmixing algorithm for hyperspectral images . each pixel of the image is modeled as a linear combination of so-called endmembers . these endmembers are supposed to be random in order to model uncertainties regarding their knowledge . more precisely , we model endmembers as gaussian vectors whose means have been determined using an endmember extraction algorithm such as the famous n-finder ( n-findr ) or vertex component analysis ( vca ) algorithms . this paper proposes to estimate the mixture coefficients ( referred to as abundances ) using a bayesian algorithm . suitable priors are assigned to the abundances in order to satisfy positivity and additivity constraints whereas conjugate priors are chosen for the remaining parameters . a hybrid gibbs sampler is then constructed to generate abundance and variance samples distributed according to the joint posterior of the abundances and noise variances . the performance of the proposed methodology is evaluated by comparison with other unmixing algorithms on synthetic and real images . story_separator_special_tag linear spectral unmixing is a challenging problem in hyperspectral imaging that consists of decomposing an observed pixel into a linear combination of pure spectra ( or endmembers ) with their corresponding proportions ( or abundances ) . endmember extraction algorithms can be employed for recovering the spectral signatures while abundances are estimated using an inversion step . recent works have shown that exploiting spatial dependencies between image pixels can improve spectral unmixing . 
markov random fields ( mrf ) are classically used to model these spatial correlations and partition the image into multiple classes with homogeneous abundances . this paper proposes to define the mrf sites using similarity regions . these regions are built using a self-complementary area filter that stems from the morphological theory . this kind of filter divides the original image into flat zones where the underlying pixels have the same spectral values . once the mrf has been clearly established , a hierarchical bayesian algorithm is proposed to estimate the abundances , the class labels , the noise variance , and the corresponding hyperparameters . a hybrid gibbs sampler is constructed to generate samples according to the corresponding posterior distribution of the unknown parameters and hyperparameters story_separator_special_tag this paper describes a new algorithm for hyperspectral image unmixing . most of the unmixing algorithms proposed in the literature do not take into account the possible spatial correlations between the pixels . in this work , a bayesian model is introduced to exploit these correlations . the image to be unmixed is assumed to be partitioned into regions ( or classes ) where the statistical properties of the abundance coefficients are homogeneous . a markov random field is then proposed to model the spatial dependency of the pixels within any class . conditionally upon a given class , each pixel is modeled by using the classical linear mixing model with additive white gaussian noise . this strategy is investigated for the well-known linear mixing model . for this model , the posterior distributions of the unknown parameters and hyperparameters allow one to infer the parameters of interest . these parameters include the abundances for each pixel , the means and variances of the abundances for each class , as well as a classification map indicating the classes of all pixels in the image . to overcome the complexity of the posterior distribution of interest , we consider markov chain story_separator_special_tag hyperspectral remote sensing is an emerging , multidisciplinary field with diverse applications that builds on the principles of material spectroscopy , radiative transfer , imaging spectrometry , and hyperspectral data processing . while there are many resources that suitably cover these areas individually and focus on specific aspects of the hyperspectral remote sensing field , this book provides a holistic treatment that thoroughly captures its multidisciplinary nature . the content is oriented toward the physical principles of hyperspectral remote sensing as opposed to applications of hyperspectral technology . readers can expect to finish the book armed with the required knowledge to understand the immense literature available in this technology area and apply their knowledge to the understanding of material spectral properties , the design of hyperspectral systems , the analysis of hyperspectral imagery , and the application of the technology to specific problems . story_separator_special_tag traditional machine learning ( ml ) techniques are often employed to perform complex pattern recognition tasks for remote sensing images , such as land-use classification . in order to obtain acceptable classification results , these techniques require there to be sufficient training data available for every particular image . obtaining training samples is challenging , particularly for near real-time applications .
therefore , past knowledge must be utilized to overcome the lack of training data in the current regime . this challenge is known as domain adaptation ( da ) , and one of the common approaches to this problem is based on finding invariant representations for both the training and test data , which are often assumed to come from different domains . in this study , we consider two deep learning techniques for learning domain-invariant representations : denoising autoencoders ( dae ) and domain-adversarial neural networks ( dann ) . while the dae is a typical two-stage da technique ( unsupervised invariant representation learning followed by supervised classification ) , dann is an end-to-end approach where invariant representation learning and classification are considered jointly during training . the proposed techniques are applied to both hyperspectral and multispectral images story_separator_special_tag this paper presents a new technique for hyperspectral image ( hsi ) classification by using superpixel guided deep-sparse-representation learning . the proposed technique constructs a hierarchical architecture by exploiting the sparse coding to learn the hsi representation . specifically , a multiple-layer architecture using different superpixel maps is designed , where each superpixel map is generated by downsampling the superpixels gradually along with enlarged spatial regions for labeled samples . in each layer , sparse representation of pixels within every spatial region is computed to construct a histogram via sum-pooling with l1 normalization . finally , the representations ( features ) learned from the multiple-layer network are aggregated and trained by a support vector machine classifier . the proposed technique has been evaluated over three public hsi data sets , including the indian pines image set , the salinas image set , and the university of pavia image set . experiments show superior performance compared with the state-of-the-art methods . story_separator_special_tag for the classification of hyperspectral images ( hsis ) , this paper presents a novel framework to effectively utilize the spectral-spatial information of superpixels via multiple kernels , which is termed superpixel-based classification via multiple kernels ( sc-mk ) . in the hsi , each superpixel can be regarded as a shape-adaptive region , which consists of a number of spatial neighboring pixels with very similar spectral characteristics . first , the proposed sc-mk method adopts an oversegmentation algorithm to cluster the hsi into many superpixels . then , three kernels are separately employed for the utilization of the spectral information , as well as spatial information , within and among superpixels . finally , the three kernels are combined together and incorporated into a support vector machine classifier . experimental results on three widely used real hsis indicate that the proposed sc-mk approach outperforms several well-known classification methods . story_separator_special_tag due to constraints both at the sensor and on the ground , dimension reduction is a common preprocessing step performed on many hyperspectral imaging datasets . however , this transformation is not necessarily done with the ultimate data exploitation task in mind , for example , target detection or ground cover classification . indeed , theoretically speaking it is possible that a lossy operation such as dimension reduction might have a negative impact on detection performance .
this notion is investigated experimentally using real-world hyperspectral imaging data . the popular principal components transform [ a.k.a. principal components analysis ( pca ) ] is used to explore the impact that dimension reduction has on adaptive detection of difficult targets in both the reflective and emissive regimes . using seven state-of-the-art algorithms , it is shown that in many cases pca can have a minimal impact on the detection statistic value for a target that is spectrally similar to the background against which it is sought . story_separator_special_tag a method is proposed for the classification of urban hyperspectral data with high spatial resolution . the approach is an extension of previous approaches and uses both the spatial and spectral information for classification . one previous approach is based on using several principal components ( pcs ) from the hyperspectral data and building several morphological profiles ( mps ) . these profiles can be used all together in one extended mp . a shortcoming of that approach is that it was primarily designed for classification of urban structures and it does not fully utilize the spectral information in the data . similarly , the commonly used pixelwise classification of hyperspectral data is solely based on the spectral content and lacks information on the structure of the features in the image . the proposed method overcomes these problems and is based on the fusion of the morphological information and the original hyperspectral data , i.e. , the two vectors of attributes are concatenated into one feature vector . after a reduction of the dimensionality , the final classification is achieved by using a support vector machine classifier . the proposed approach is tested in experiments on rosis data from urban story_separator_special_tag a family of parsimonious gaussian process models for classification is proposed in this letter . a subspace assumption is used to build these models in the kernel feature space . by constraining some parameters of the models to be common between classes , parsimony is controlled . experimental results are given for three real hyperspectral data sets , and comparisons are done with three other classifiers . the proposed models show good results in terms of classification accuracy and processing time . story_separator_special_tag classification of hyperspectral remote sensing data with support vector machines ( svms ) is investigated . svms have been introduced recently in the field of remote sensing image processing . using the kernel method , svms map the data into a higher dimensional space to increase the separability and then fit an optimal hyperplane to separate the data . in this paper , two kernels have been considered . the generalization capability of svms as well as the ability of svms to deal with high dimensional feature spaces have been tested in the situation of a very limited training set . svms have been tested on real hyperspectral data . the experimental results show that svms used with the two kernels are appropriate for remote sensing classification problems . story_separator_special_tag kernel principal component analysis ( kpca ) is investigated for feature extraction from hyperspectral remote sensing data . features extracted using kpca are classified using linear support vector machines . in one experiment , it is shown that kernel principal component features are more linearly separable than features extracted with conventional principal component analysis .
in a second experiment , kernel principal components are used to construct the extended morphological profile ( emp ) . classification results , in terms of accuracy , are improved in comparison to the original approach , which used conventional principal component analysis for constructing the emp . experimental results presented in this paper confirm the usefulness of the kpca for the analysis of hyperspectral data . for one data set , the overall classification accuracy increases from 79 % to 96 % with the proposed approach . story_separator_special_tag recent advances in spectral-spatial classification of hyperspectral images are presented in this paper . several techniques are investigated for combining both spatial and spectral information . spatial information is extracted at the object ( set of pixels ) level rather than at the conventional pixel level . mathematical morphology is first used to derive the morphological profile of the image , which includes characteristics about the size , orientation , and contrast of the spatial structures present in the image . then , the morphological neighborhood is defined and used to derive additional features for classification . classification is performed with support vector machines ( svms ) using the available spectral information and the extracted spatial information . spatial postprocessing is next investigated to build more homogeneous and spatially consistent thematic maps . to that end , three presegmentation techniques are applied to define regions that are used to regularize the preliminary pixel-wise thematic map . finally , a multiple-classifier ( mc ) system is defined to produce relevant markers that are exploited to segment the hyperspectral image with the minimum spanning forest algorithm . experimental results conducted on three real hyperspectral images with different spatial and spectral resolutions and story_separator_special_tag the prospect leaf optical model has , to date , combined the effects of photosynthetic pigments , but a finer discrimination among the key pigments is important for physiological and ecological applications of remote sensing . here we present a new calibration and validation of prospect that separates plant pigment contributions to the visible spectrum using several comprehensive datasets containing hundreds of leaves collected in a wide range of ecosystem types . these data include leaf biochemical ( chlorophyll a , chlorophyll b , carotenoids , water , and dry matter ) and optical properties ( directional hemispherical reflectance and transmittance measured from 400 nm to 2450 nm ) . we first provide distinct in vivo specific absorption coefficients for each biochemical constituent and determine an average refractive index of the leaf interior . then we invert the model on independent datasets to check the prediction of the biochemical content of intact leaves . the main result of this study is that the new chlorophyll and carotenoid specific absorption coefficients agree well with available in vitro absorption spectra , and that the new refractive index displays interesting spectral features in the visible , in accordance with physical principles . moreover , story_separator_special_tag in the first part of the paper we consider the problem of dynamically apportioning resources among a set of options in a worst-case on-line framework .
the model we study can be interpreted as a broad , abstract extension of the well-studied on-line prediction model to a general decision-theoretic setting . we show that the multiplicative weight-update littlestone-warmuth rule can be adapted to this model , yielding bounds that are slightly weaker in some cases , but applicable to a considerably more general class of learning problems . we show how the resulting learning algorithm can be applied to a variety of problems , including gambling , multiple-outcome prediction , repeated games , and prediction of points in r^n . in the second part of the paper we apply the multiplicative weight-update technique to derive a new boosting algorithm . this boosting algorithm does not require any prior knowledge about the performance of the weak learning algorithm . we also study generalizations of the new boosting algorithm to the problem of learning functions whose range , rather than being binary , is an arbitrary finite set or a bounded segment of the real line . story_separator_special_tag clustering data by identifying a subset of representative examples is important for processing sensory signals and detecting patterns in data . such `` exemplars '' can be found by randomly choosing an initial subset of data points and then iteratively refining it , but this works well only if that initial choice is close to a good solution . we devised a method called `` affinity propagation , '' which takes as input measures of similarity between pairs of data points . real-valued messages are exchanged between data points until a high-quality set of exemplars and corresponding clusters gradually emerges . we used affinity propagation to cluster images of faces , detect genes in microarray data , identify representative sentences in this manuscript , and identify cities that are efficiently accessed by airline travel . affinity propagation found clusters with much lower error than other methods , and it did so in less than one-hundredth the amount of time . story_separator_special_tag solar zenith and view angle effects on the normalized difference vegetation index ( ndvi ) of land cover types in the brazilian amazon region were analysed . airborne hyperspectral mapper ( hymap ) data were collected in 126 narrow bands ( 450 to 2500 nm ) with a field of view ( fov ) of ±30° from nadir . data collection was performed initially in two flight lines with solar zenith angles of 29° and 53° . in a third flight line , view angles were up to +60° from nadir by airplane banking . surface reflectance spectra representative of major land cover types were selected , and principal component analysis was applied to indicate their spectral similarity relationships in response to solar zenith angle variations . reflectance and ndvi differences between pairs of land cover types were plotted for a variable band positioning . atmospheric and coupled directional effects were analysed for variations in apparent and surface reflectance values , in the depth of the major water vapour absorption bands , and in the ndvi values . story_separator_special_tag we introduce a new representation learning approach for domain adaptation , in which data at training and test time come from similar but different distributions .
our approach is directly inspired by the theory on domain adaptation suggesting that , for effective domain transfer to be achieved , predictions must be made based on features that cannot discriminate between the training ( source ) and test ( target ) domains . the approach implements this idea in the context of neural network architectures that are trained on labeled data from the source domain and unlabeled data from the target domain ( no labeled target-domain data is necessary ) . as the training progresses , the approach promotes the emergence of features that are ( i ) discriminative for the main learning task on the source domain and ( ii ) indiscriminate with respect to the shift between the domains . we show that this adaptation behaviour can be achieved in almost any feed-forward model by augmenting it with a few standard layers and a new gradient reversal layer . the resulting augmented architecture can be trained using standard backpropagation and stochastic gradient descent , and can thus be implemented with story_separator_special_tag for a better evaluation of the accuracy of vis in estimating biophysical parameters , a true vi value attributed only to the vegetation signal and free of any contamination is needed . in this article , pure vegetation spectra were extracted from a set of open and closed canopies by unmixing the green vegetation signal from the background component . canopy model-simulation and reflectances derived from graph-based linear extrapolation were used to unmix and derive a true vegetation signal , equivalent to a perfect absorber ( free boundary ) canopy background reflectance condition . optical biophysical relationships were then derived for a variety of canopy structures with differences in foliage clumping , horizontal heterogeneity , and leaf type . a 3-dimensional canopy radiative transfer model and a hybrid geometric optical-radiative transfer model ( gort ) were used to simulate the directional-hemispherical reflectances from agricultural , grassland , and forested canopies ( cereal and broadleaf crop , grass , needleleaf , and broadleaf forest ) . the relationships of the extracted red and near-infrared reflectances and derived vegetation indices ( vis ) to various biophysical parameters ( leaf area index , fraction of absorbed photosynthetically active radiation , and percent story_separator_special_tag remotely extracting information about the biochemical properties of the materials in an environment from an airborne- or satellite-based hyperspectral sensor has a variety of applications in forestry , agriculture , mining , environmental monitoring and space exploration . in this paper , we propose a new non-stationary covariance function , called exponential spectral angle mapper ( esam ) , for predicting the biochemistry of vegetation from hyperspectral imagery using gaussian processes . the proposed covariance function is based on the angle between the spectra , which is known to be a better measure of similarity for hyperspectral data due to its robustness to illumination variations . we demonstrate the efficacy of the proposed method with experiments on a real-world hyperspectral dataset . story_separator_special_tag undirected graphical models have been successfully used to jointly model the spatial and the spectral dependencies in earth observing hyperspectral images . they produce less noisy , smooth ,
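the exponential spectral angle mapper ( esam ) covariance described above is built on the spectral angle sam ( x , y ) = arccos ( x · y / ( ||x|| ||y|| ) ) , which is unchanged by a global scaling of either spectrum and hence robust to illumination variations . a minimal python sketch of an angle-based exponential kernel follows ; the decay parameter gamma and the exact functional form exp ( -sam / gamma ) are assumptions chosen for illustration , not the authors' published covariance .

    import numpy as np

    def spectral_angle(x, y):
        # angle between two spectra in radians; scaling either spectrum
        # leaves the angle unchanged, hence the robustness to illumination
        cos = np.dot(x, y) / (np.linalg.norm(x) * np.linalg.norm(y) + 1e-12)
        return np.arccos(np.clip(cos, -1.0, 1.0))

    def esam_like_kernel(X, Y, gamma=0.1):
        # gram matrix K[i, j] = exp(-angle(X[i], Y[j]) / gamma)
        K = np.empty((len(X), len(Y)))
        for i, x in enumerate(X):
            for j, y in enumerate(Y):
                K[i, j] = np.exp(-spectral_angle(x, y) / gamma)
        return K

such a gram matrix can be dropped into any kernel method that accepts precomputed kernels , for example an svm with kernel='precomputed' or a gaussian process built on a custom covariance .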
story_separator_special_tag in this letter , a self-improving convolutional neural network ( cnn ) based method is proposed for the classification of hyperspectral data . this approach solves the so-called curse of dimensionality and the lack of available training samples by iteratively selecting the most informative bands suitable for the designed network via fractional order darwinian particle swarm optimization . the selected bands are then fed to the classification system to produce the final classification map . experimental results have been conducted with two well-known hyperspectral data sets : indian pines and pavia university . results indicate that the proposed approach significantly improves a cnn-based classification method in terms of classification accuracy . in addition , this letter uses the concept of dither for the first time in the remote sensing community to tackle overfitting . story_separator_special_tag this review paper evaluates the potential of hyperspectral remote sensing for assessing species diversity in homogeneous ( non-tropical ) and heterogeneous ( tropical ) forest , an increasingly urgent task . existing studies of species distribution patterns using hyperspectral remote sensing have used different techniques to discriminate different species , of which wavelet transforms , derivative analysis and red edge positions are the most important . the wavelet transform is used on account of its effectiveness and is regarded as the most powerful technique to identify species . furthermore , estimations of relationships between spectral values and species distributions using chemical composition of foliage , tree phenology , selection of signature training sites based on field measured canopy composition , selection of the best wavelet coefficient and waveband regions may be useful to identify different plant species . this paper presents a summary on the feasibility , operational applications and possible strategies of hyperspectral remote sensing in forestry , especially in assessing its biodiversity . the paper also reviews the processing and analysis techniques for hyperspectral data in discriminating different forest tree species . story_separator_special_tag visible and near-infrared ( vnir ) through short wavelength infrared ( swir ) ( 0.4 to 2.5 μm ) aviris data , along with laboratory spectral measurements and analyses of field samples , were used to characterize grain size variations in aeolian gypsum deposits across barchan-transverse , parabolic , and barchan dunes at white sands , new mexico , usa . all field samples contained a mineralogy of ~100 % gypsum . in order to document grain size variations at white sands , surficial gypsum samples were collected along three transects parallel to the prevailing downwind direction . grain size analyses were carried out on the samples by sieving them into seven size fractions ranging from 45 to 621 μm , which were subjected to spectral measurements . absorption band depths of the size fractions were determined after applying an automated continuum-removal procedure to each spectrum . then , the relationship between absorption band depth and gypsum size fraction was established using a linear regression . three software processing steps were carried out to measure the grain size variations of gypsum in the dune area using aviris data .
aviris mapping results , field work and laboratory analysis all show story_separator_special_tag this paper presents a new spectral-spatial classification method for hyperspectral ( hs ) images . the proposed method is based on integrating hierarchical segmentation results into a markov random field ( mrf ) spatial prior in the bayesian framework . this work includes two main contributions . first , the statistical region merging ( srm ) segmentation algorithm is extended to a hierarchical version , hsrm . second , a method for extracting a multilevel fuzzy no-border/border map from the hsrm segmentation hierarchy is proposed ; the extracted maps are then exploited as weighting coefficients to modify the spatial prior of the mrf-based multilevel logistic ( mll ) model . the proposed method , named mrf + hsrm , addresses the common problem of mrf-based methods , i.e. , over-smoothing of the classification result . several experiments are conducted using real hs images to evaluate the performance of the proposed method in comparison with conventional mrf , and some state-of-the-art weighted mrf and object-based classifiers . to estimate the class conditional probability distribution in the bayesian framework , probabilistic support vector machine ( svm ) and subspace multinomial logistic regression ( mlrsub ) classifiers are used . the experimental results demonstrate that the proposed method is able story_separator_special_tag this paper compares predictions of soil organic carbon ( soc ) using visible and near infrared reflectance ( vis-nir ) hyperspectral proximal and remote sensing data . soil samples were collected in the narrabri region , dominated by vertisols , in north western new south wales ( nsw ) , australia . vis-nir spectra were collected over this region proximally with an agrispec portable spectrometer ( 350 to 2500 nm ) and remotely from the hyperion hyperspectral sensor onboard a satellite ( 400 to 2500 nm ) . soc contents were predicted by partial least-squares regression ( plsr ) using both the proximal and remote sensing spectra . the spectral resolution of the proximal and remote sensing data did not affect prediction accuracy . however , predictions of soc using the hyperion spectra were less accurate than those of the agrispec data resampled to a similar resolution as the hyperion spectra . finally , the soc map predicted using hyperion data shows similarity with field observations . there is potential for the use of hyperspectral remote sensing for predictions of soil organic carbon . the use of these techniques will facilitate the implementation of digital soil mapping . story_separator_special_tag deep learning allows computational models that are composed of multiple processing layers to learn representations of data with multiple levels of abstraction . these methods have dramatically improved the state-of-the-art in speech recognition , visual object recognition , object detection and many other domains such as drug discovery and genomics . deep learning discovers intricate structure in large data sets by using the backpropagation algorithm to indicate how a machine should change its internal parameters that are used to compute the representation in each layer from the representation in the previous layer . deep convolutional nets have brought about breakthroughs in processing images , video , speech and audio , whereas recurrent nets have shone light on sequential data such as text and speech .
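partial least-squares regression recurs throughout these abstracts ( canopy biomass , lai , leaf chemistry , and the soil organic carbon study above ) . a minimal python sketch of that calibration / validation workflow with scikit-learn follows ; the synthetic arrays , the eight latent components , and the five-fold cross-validation are placeholder assumptions , not settings from the cited studies .

    import numpy as np
    from sklearn.cross_decomposition import PLSRegression
    from sklearn.model_selection import cross_val_predict

    rng = np.random.default_rng(0)
    X = rng.random((42, 200))   # placeholder spectra: 42 samples x 200 bands
    y = rng.random(42)          # placeholder property, e.g., soc content

    pls = PLSRegression(n_components=8)
    y_hat = cross_val_predict(pls, X, y, cv=5).ravel()
    # standard error of prediction, reported as sep in the studies above
    sep = np.sqrt(np.mean((y - y_hat) ** 2))
    print(f"sep = {sep:.3f}")

in practice the number of latent components is itself selected by cross-validation , and the spectra are often preprocessed ( derivatives , continuum removal ) exactly as the abstracts above describe .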
story_separator_special_tag despite achieving remarkable performance , deep graph learning models , such as node classification and network embedding , suffer from harassment caused by small adversarial perturbations . however , the vulnerability analysis of graph matching under adversarial attacks has not been fully investigated yet . this paper proposes an adversarial attack model with two novel attack techniques to perturb the graph structure and degrade the quality of deep graph matching : ( 1 ) a kernel density estimation approach is utilized to estimate and maximize node densities to derive imperceptible perturbations , by pushing attacked nodes to dense regions in two graphs , such that they are indistinguishable from many neighbors ; and ( 2 ) a meta learning-based projected gradient descent method is developed to choose attack starting points well and to improve the search performance for producing effective perturbations . we evaluate the effectiveness of the attack model on real datasets and validate that the attacks can be transferable to other graph learning models . story_separator_special_tag in this paper , we present bidirectional long short term memory ( lstm ) networks , and a modified , full gradient version of the lstm learning algorithm . we evaluate bidirectional lstm ( blstm ) and several other network architectures on the benchmark task of framewise phoneme classification , using the timit database . our main findings are that bidirectional networks outperform unidirectional ones , and long short term memory ( lstm ) is much faster and also more accurate than both standard recurrent neural nets ( rnns ) and time-windowed multilayer perceptrons ( mlps ) . our results support the view that contextual information is crucial to speech processing , and suggest that blstm is an effective architecture with which to exploit it . story_separator_special_tag in the lmm for hyperspectral images , all the image spectra lie on a high-dimensional simplex with corners called endmembers . given a set of endmembers , the standard calculation of fractional abundances with constrained least squares typically identifies the spectra as combinations of most , if not all , endmembers . we assume instead that pixels are combinations of only a few endmembers , yielding abundance vectors that are sparse . we introduce sparse demixing ( sd ) , which is a method that is similar to orthogonal matching pursuit , for calculating these sparse abundances . we demonstrate that sd outperforms an existing l1 demixing algorithm , which we prove to depend adversely on the angles between endmembers . we combine sd with dictionary learning methods to automatically calculate endmembers for a provided set of spectra . applying it to an airborne visible/infrared imaging spectrometer image of cuprite , nv , yields endmembers that compare favorably with signatures from the usgs spectral library . story_separator_special_tag the problem of limited training samples is always a major concern in hyperspectral remote sensing image classification . in this paper , a sample-screening multiple kernel learning ( s2mkl ) method is proposed for hyperspectral image classification with limited training samples . the core idea of the proposed method is to employ a boosting strategy for screening the limited training samples under the mkl framework .
different from existing methods , the proposed mkl method exploits the boosting trick to try different combinations of the limited training samples and adaptively determine the optimal weights of base kernels in the linear combination . morphological profiles are first extracted as both spatial and spectral features for classification instead of the original spectra . with the morphological profiles , the adaboost strategy is then introduced to guide the construction of the multiple kernel learning machine . by means of the boosting strategy , the limited samples are effectively screened and used for classification . meanwhile , the weights of base kernels in the linear combination are automatically determined in the process of screening samples . three real hyperspectral data sets are used to evaluate the proposed method . the experimental results show that the proposed boosting-based multiple story_separator_special_tag in this paper , we propose a novel multiple kernel learning ( mkl ) framework to incorporate both spectral and spatial features for hyperspectral image classification , which is called multiple-structure-element nonlinear mkl ( multise-nmkl ) . in the proposed framework , multiple structure elements ( multises ) are employed to generate extended morphological profiles ( emps ) to present spatial-spectral information . in order to better mine interscale and interstructure similarity among emps , a nonlinear mkl ( nmkl ) is introduced to learn an optimal combined kernel from the predefined linear base kernels . we integrate this nmkl with support vector machines ( svms ) and reduce the min-max problem to a simple minimization problem . the optimal weight for each kernel matrix is then solved by a projection-based gradient descent algorithm . the advantages of using a nonlinear combination of base kernels and multise-based emps are that similarity information generated from the nonlinear interaction of different kernels is fully exploited , and the discriminability of the classes of interest is deeply enhanced . experiments are conducted on three real hyperspectral data sets . the experimental results show that the proposed method achieves better performance for hyperspectral story_separator_special_tag recently , multiple kernel learning ( mkl ) methods have been developed to improve the flexibility of kernel-based learning machines . the mkl methods generally focus on determining key kernels to be preserved and their significance in optimal kernel combination . unfortunately , the computational demand of finding the optimal combination is prohibitive when the number of training samples and kernels increase rapidly , particularly for hyperspectral remote sensing data . in this paper , we address mkl for classification in hyperspectral images by extracting the most variation from the space spanned by multiple kernels and propose a representative mkl ( rmkl ) algorithm . the core idea embedded in the algorithm is to determine the kernels to be preserved and their weights according to statistical significance instead of a time-consuming search for the optimal kernel combination . the noticeable merits of rmkl are that it greatly reduces the computational load of searching for the optimal combination of basis kernels and is not limited by a strict selection of basis kernels as most mkl algorithms are ; meanwhile , rmkl keeps the excellent properties of mkl in terms of both good classification accuracy and interpretability .
experiments are conducted on different real hyperspectral data , and story_separator_special_tag in this paper , we address a spectral unmixing problem for hyperspectral images by introducing multiple-kernel learning ( mkl ) coupled with support vector machines . to effectively solve issues of spectral unmixing , an mkl method is explored to build new boundaries and distances between classes in multiple-kernel hilbert space ( mkhs ) . integrating reproducing kernel hilbert spaces ( rkhss ) spanned by a series of different basis kernels in mkhs is able to provide more power in handling general nonlinear problems than traditional single-kernel learning in rkhs . the proposed method is developed to solve multiclass unmixing problems . to validate the proposed mkl-based algorithm , both synthetic data and real hyperspectral image data were used in our experiments . the experimental results demonstrate that the proposed algorithm has a strong ability to capture interclass spectral differences and improve unmixing accuracy , compared to the state-of-the-art algorithms tested . story_separator_special_tag in this letter , a method to optimally determine the kernel bandwidth of the gaussian radial basis function ( rbf ) kernel for support vector ( sv ) -based hyperspectral anomaly detection is presented . in this method , the support of a local background distribution is first nonparametrically learned by a technique called sv data description ( svdd ) . the svdd optimally models an enclosing hypersphere around the local background data in a high-dimensional feature space associated with the gaussian rbf kernel . any test pixel that lies outside this hypersphere surrounding the local background is considered an anomaly and , hence , a possible target pixel . considerable improvement in detection performance due to kernel parameter optimization can be seen in the simulation results when the algorithm is applied to hyperspectral images . story_separator_special_tag recently , a kernel-based ensemble learning technique for hyperspectral detection/classification problems has been introduced by the authors , to provide robust classification over hyperspectral data with a relatively high level of noise and background clutter . the kernel-based ensemble technique first randomly selects spectral feature subspaces from the input data . each individual classifier , which is in fact a support vector machine ( svm ) , then independently conducts its own learning within its corresponding spectral feature subspace and hence constitutes a weak classifier . the decisions from these weak classifiers are equally or adaptively combined to generate the final ensemble decision . however , in such ensemble learning , little attempt has been previously made to jointly optimize the weak classifiers and the aggregating process for combining the subdecisions . the main goal of this paper is to achieve an optimal sparse combination of the subdecisions by jointly optimizing the separating hyperplane obtained by optimally combining the kernel matrices of the svm classifiers and the corresponding weights of the subdecisions required for the aggregation process . sparsity is induced by applying an l1 norm constraint on the weighting coefficients . consequently , the weights of most of the subclassifiers story_separator_special_tag this paper studies a generalized bilinear model and a hierarchical bayesian algorithm for unmixing hyperspectral images .
the proposed model is a generalization of the accepted linear mixing model but also of a bilinear model recently introduced in the literature . appropriate priors are chosen for its parameters in particular to satisfy the positivity and sum-to-one constraints for the abundances . the joint posterior distribution of the unknown parameter vector is then derived . a metropolis-within-gibbs algorithm is proposed which allows samples distributed according to the posterior of interest to be generated and to estimate the unknown model parameters . the performance of the resulting unmixing strategy is evaluated via simulations conducted on synthetic and real data . story_separator_special_tag statistical classification of hyperspectral data is challenging because the inputs are high in dimension and represent multiple classes that are sometimes quite mixed , while the amount and quality of ground truth in the form of labeled data is typically limited . the resulting classifiers are often unstable and have poor generalization . this work investigates two approaches based on the concept of random forests of classifiers implemented within a binary hierarchical multiclassifier system , with the goal of achieving improved generalization of the classifier in analysis of hyperspectral data , particularly when the quantity of training data is limited . a new classifier is proposed that incorporates bagging of training samples and adaptive random subspace feature selection within a binary hierarchical classifier ( bhc ) , such that the number of features that is selected at each node of the tree is dependent on the quantity of associated training data . results are compared to a random forest implementation based on the framework of classification and regression trees . for both methods , classification results obtained from experiments on data acquired by the national aeronautics and space administration ( nasa ) airborne visible/infrared imaging spectrometer instrument over the kennedy story_separator_special_tag abstract hyperspectral reflectance ( 438 to 884 nm ) data were recorded at five different growth stages of winter wheat in a field experiment including two cultivars , three plant densities , and four levels of n application . all two-band combinations in the normalized difference vegetation index ( ρ1 - ρ2 ) / ( ρ1 + ρ2 ) , with ρ1 and ρ2 the reflectances in the two wavebands , were subsequently used in a linear regression analysis against green biomass ( gbm , g fresh weight m-2 soil ) , leaf area index ( lai , m2 green leaf m-2 soil ) , leaf chlorophyll concentration ( chl conc , mg chlorophyll g-1 leaf fresh weight ) , leaf chlorophyll density ( chl density , mg chlorophyll m-2 soil ) , leaf nitrogen concentration ( n conc , mg nitrogen g-1 leaf dry weight ) , and leaf nitrogen density ( n density , g nitrogen m-2 soil ) . a number of grouped wavebands with high correlation ( r2 > 95 % ) were revealed . for the crop variables based on quantity per unit surface area , i.e . gbm , lai , chl density , and n density , these wavebands story_separator_special_tag nowadays , hyperspectral remote sensors are readily available for monitoring the earth 's surface with high spectral resolution . the high-dimensional nature of the data collected by such sensors not only increases computational complexity but also can degrade classification accuracy . to address this issue , dimensionality reduction ( dr ) has become an important aid to improving classifier efficiency on these images .
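the exhaustive two-band screening used in the wheat study above is easy to reproduce : form the normalized difference index for every band pair , regress it against the crop variable , and keep the pair with the highest r2 . the sketch below uses synthetic data ( numpy assumed ) ; band indices and noise levels are placeholders .

```python
import numpy as np

# R: (n_samples, n_bands) canopy reflectance; target: e.g. leaf N density.
rng = np.random.default_rng(2)
R = rng.uniform(0.05, 0.6, size=(120, 40))
target = 2.0 * (R[:, 30] - R[:, 10]) / (R[:, 30] + R[:, 10]) \
         + rng.normal(0, 0.05, 120)

def r2(x, y):
    # R^2 of a univariate linear regression of y on x (with intercept).
    X = np.column_stack([np.ones_like(x), x])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1.0 - resid.var() / y.var()

best = (-np.inf, None)
for i in range(R.shape[1]):
    for j in range(i + 1, R.shape[1]):
        ndi = (R[:, i] - R[:, j]) / (R[:, i] + R[:, j])
        score = r2(ndi, target)
        if score > best[0]:
            best = (score, (i, j))

print("best band pair:", best[1], "R^2 = %.3f" % best[0])
```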
the common approach to decreasing dimensionality is feature extraction by considering the intrinsic dimensionality ( id ) of the data . a wide range of techniques for id estimation ( ide ) and dr for hyperspectral images have been presented in the literature . however , the most effective and optimum methods for ide and dr have not been determined for hyperspectral sensors , and this causes ambiguity in selecting the appropriate techniques for processing hyperspectral images . in this letter , we discuss and compare ten ide and six dr methods in order to investigate and compare their performance for the purpose of supervised hyperspectral image classification by using k-nearest neighbor ( k-nn ) . due to the nature of k-nn classifier that uses different distance metrics , a variety of distance metrics were used story_separator_special_tag deeper neural networks are more difficult to train . we present a residual learning framework to ease the training of networks that are substantially deeper than those used previously . we explicitly reformulate the layers as learning residual functions with reference to the layer inputs , instead of learning unreferenced functions . we provide comprehensive empirical evidence showing that these residual networks are easier to optimize , and can gain accuracy from considerably increased depth . on the imagenet dataset we evaluate residual nets with a depth of up to 152 layers -- -8x deeper than vgg nets but still having lower complexity . an ensemble of these residual nets achieves 3.57 % error on the imagenet test set . this result won the 1st place on the ilsvrc 2015 classification task . we also present analysis on cifar-10 with 100 and 1000 layers . the depth of representations is of central importance for many visual recognition tasks . solely due to our extremely deep representations , we obtain a 28 % relative improvement on the coco object detection dataset . deep residual nets are foundations of our submissions to ilsvrc & coco 2015 competitions , where we also won story_separator_special_tag classification of hyperspectral image ( hsi ) is an important research topic in the remote sensing community . significant efforts ( e.g. , deep learning ) have been concentrated on this task . however , it is still an open issue to classify the high-dimensional hsi with a limited number of training samples . in this paper , we propose a semi-supervised hsi classification method inspired by the generative adversarial networks ( gans ) . unlike the supervised methods , the proposed hsi classification method is semi-supervised , which can make full use of the limited labeled samples as well as the sufficient unlabeled samples . core ideas of the proposed method are twofold . first , the three-dimensional bilateral filter ( 3dbf ) is adopted to extract the spectral-spatial features by naturally treating the hsi as a volumetric dataset . the spatial information is integrated into the extracted features by 3dbf , which is propitious to the subsequent classification step . second , gans are trained on the spectral-spatial features for semi-supervised learning . a gan contains two neural networks ( i.e. , generator and discriminator ) trained in opposition to one another . the semi-supervised learning is achieved story_separator_special_tag linear spectral mixture analysis ( lsma ) is a widely used technique in remote sensing to estimate abundance fractions of materials present in an image pixel . 
in order for an lsma-based estimator to produce accurate amounts of material abundance , it generally requires two constraints imposed on the linear mixture model used in lsma , which are the abundance sum-to-one constraint and the abundance nonnegativity constraint . the first constraint requires the sum of the abundance fractions of materials present in an image pixel to be one and the second imposes a constraint that these abundance fractions be nonnegative . while the first constraint is easy to deal with , the second constraint is difficult to implement since it results in a set of inequalities and can only be solved by numerical methods . consequently , most lsma-based methods are unconstrained and produce solutions that do not necessarily reflect the true abundance fractions of materials . in this case , they can only be used for the purposes of material detection , discrimination , and classification , but not for material quantification . the authors present a fully constrained least squares ( fcls ) linear spectral mixture analysis method story_separator_special_tag in hyperspectral unmixing , the prevalent model used is the linear mixing model , and a large variety of techniques based on this model has been proposed to obtain endmembers and their abundances in hyperspectral imagery . however , it has been known for some time that nonlinear spectral mixing effects can be a crucial component in many real-world scenarios , such as planetary remote sensing , intimate mineral mixtures , vegetation canopies , or urban scenes . while several nonlinear mixing models have been proposed decades ago , only recently there has been a proliferation of nonlinear unmixing models and techniques in the signal processing literature . this paper aims to give an historical overview of the majority of nonlinear mixing models and nonlinear unmixing methods , and to explain some of the more popular techniques in detail . the main models and techniques treated are bilinear models , models for intimate mineral mixtures , radiosity-based approaches , ray tracing , neural networks , kernel methods , support vector machine techniques , manifold learning methods , piece-wise linear techniques , and detection methods for nonlinearity . furthermore , we provide an overview of several recent developments in the nonlinear story_separator_special_tag several popular endmember extraction and unmixing algorithms are based on the geometrical interpretation of the linear mixing model , and assume the presence of pure pixels in the data . these endmembers can be identified by maximizing a simplex volume , or finding maximal distances in subsequent subspace projections , while unmixing can be considered a simplex projection problem . since many of these algorithms can be written in terms of distance geometry , where mutual distances are the properties of interest instead of euclidean coordinates , one can design an unmixing chain where other distance metrics are used . many preprocessing steps such as ( nonlinear ) dimensionality reduction or data whitening , and several nonlinear unmixing models such as the hapke and bilinear models , can be considered as transformations to a different data space , with a corresponding metric . 
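the two abundance constraints described above , nonnegativity and sum-to-one , turn unmixing into a constrained least-squares problem . a minimal fcls-style sketch ( scipy assumed ) solves it with SLSQP rather than the specialized algorithm of the paper ; endmember matrix and abundances below are synthetic .

```python
import numpy as np
from scipy.optimize import minimize

def fcls(y, E):
    """Fully constrained least squares: minimize ||E a - y||^2
    subject to a >= 0 (nonnegativity) and sum(a) == 1 (sum-to-one)."""
    m = E.shape[1]
    res = minimize(
        lambda a: 0.5 * np.sum((E @ a - y) ** 2),
        x0=np.full(m, 1.0 / m),
        jac=lambda a: E.T @ (E @ a - y),
        bounds=[(0.0, None)] * m,
        constraints=[{"type": "eq", "fun": lambda a: a.sum() - 1.0}],
        method="SLSQP",
    )
    return res.x

# Synthetic check: 50 bands, 3 endmembers with known abundances.
rng = np.random.default_rng(3)
E = rng.uniform(0, 1, size=(50, 3))
a_true = np.array([0.6, 0.3, 0.1])
y = E @ a_true + rng.normal(0, 0.001, 50)
print("estimated abundances:", np.round(fcls(y, E), 3))
```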
in this paper , we show how one can use different metrics in geometry-based endmember extraction and unmixing algorithms , and demonstrate the results for some well-known metrics , such as the mahalanobis distance , the hapke model for intimate mixing , the polynomial post-nonlinear model , and graph-geodesic distances . this offers a flexible processing chain story_separator_special_tag recently , convolutional neural networks have demonstrated excellent performance on various visual tasks , including the classification of common two-dimensional images . in this paper , deep convolutional neural networks are employed to classify hyperspectral images directly in spectral domain . more specifically , the architecture of the proposed classifier contains five layers with weights which are the input layer , the convolutional layer , the max pooling layer , the full connection layer , and the output layer . these five layers are implemented on each spectral signature to discriminate against others . experimental results based on several hyperspectral image data sets demonstrate that the proposed method can achieve better classification performance than some traditional methods , such as support vector machines and the conventional deep learning-based methods . story_separator_special_tag the support vector machine ( svm ) is a group of theoretically superior machine learning algorithms . it was found competitive with the best available machine learning algorithms in classifying high-dimensional data sets . this paper gives an introduction to the theoretical development of the svm and an experimental evaluation of its accuracy , stability and training speed in deriving land cover classifications from satellite images . the svm was compared to three other popular classifiers , including the maximum likelihood classifier ( mlc ) , neural network classifiers ( nnc ) and decision tree classifiers ( dtc ) . the impacts of kernel configuration on the performance of the svm and of the selection of training data and input variables on the four classifiers were also evaluated in this experiment . story_separator_special_tag computational intelligence techniques have been used in wide applications . out of numerous computational intelligence techniques , neural networks and support vector machines ( svms ) have been playing the dominant roles . however , it is known that both neural networks and svms face some challenging issues such as : ( 1 ) slow learning speed , ( 2 ) trivial human intervene , and/or ( 3 ) poor computational scalability . extreme learning machine ( elm ) as emergent technology which overcomes some challenges faced by other techniques has recently attracted the attention from more and more researchers . elm works for generalized single-hidden layer feedforward networks ( slfns ) . the essence of elm is that the hidden layer of slfns need not be tuned . compared with those traditional computational intelligence techniques , elm provides better generalization performance at a much faster learning speed and with least human intervene . this paper gives a survey on elm and its variants , especially on ( 1 ) batch learning mode of elm , ( 2 ) fully complex elm , ( 3 ) online sequential elm , ( 4 ) incremental elm , and ( 5 ) story_separator_special_tag morphological profiles ( mps ) are a useful tool for remotely sensed image classification . these profiles are constructed on a base image that can be a single band of a multicomponent remote sensing image . 
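the five-layer spectral-domain cnn described above ( input , convolution , max pooling , full connection , output , applied per spectral signature ) can be sketched in pytorch . filter counts , kernel sizes , and the tanh activation are our placeholders , not the paper's exact configuration .

```python
import torch
import torch.nn as nn

class SpectralCNN(nn.Module):
    # input -> conv -> max pool -> fully connected -> output,
    # applied to each pixel's spectral signature independently.
    def __init__(self, n_bands=200, n_classes=16):
        super().__init__()
        self.conv = nn.Conv1d(1, 20, kernel_size=11)   # 20 spectral filters
        self.pool = nn.MaxPool1d(3)
        n_feat = 20 * ((n_bands - 11 + 1) // 3)
        self.fc = nn.Linear(n_feat, 100)
        self.out = nn.Linear(100, n_classes)

    def forward(self, x):                  # x: (batch, n_bands)
        x = torch.tanh(self.conv(x.unsqueeze(1)))
        x = self.pool(x).flatten(1)
        x = torch.tanh(self.fc(x))
        return self.out(x)                 # logits for CrossEntropyLoss

model = SpectralCNN()
logits = model(torch.randn(8, 200))
print(logits.shape)                        # torch.Size([8, 16])
```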
principal component analysis ( pca ) has been used to provide other base images to construct mps in high-dimensional remote sensing scenes such as hyperspectral images [ e.g . , by deriving the first principal components ( pcs ) and building the mps on the first few components ] . in this paper , we discuss several strategies for producing the base images for mps , and further categorize the considered methods into four classes : 1 ) linear , 2 ) nonlinear , 3 ) manifold learning-based , and 4 ) multilinear transformation-based . it is found that the multilinear pca ( mpca ) is a powerful approach for base image extraction . that is because it is a tensor-based feature representation approach , which is able to simultaneously exploit the spectral-spatial correlation between neighboring pixels . we also show that independent component analysis ( ica ) is more effective for constructing base images than pca . another important contribution of this paper story_separator_special_tag in this paper , an adaptive mean-shift ( ms ) analysis framework is proposed for object extraction and classification of hyperspectral imagery over urban areas . the basic idea is to apply an ms to obtain an object-oriented representation of hyperspectral data and then use support vector machine to interpret the feature set . in order to employ ms for hyperspectral data effectively , a feature-extraction algorithm , nonnegative matrix factorization , is utilized to reduce the high-dimensional feature space . furthermore , two bandwidth-selection algorithms are proposed for the ms procedure . one is based on the local structures , and the other exploits separability analysis . experiments are conducted on two hyperspectral data sets , the dc mall hyperspectral digital-imagery collection experiment and the purdue campus hyperspectral mapper images . we evaluate and compare the proposed approach with the well-known commercial software ecognition ( object-based analysis approach ) and an effective spectral/spatial classifier for hyperspectral data , namely , the derivative of the morphological profile . experimental results show that the proposed ms-based analysis system is robust and obviously outperforms the other methods . story_separator_special_tag the appetite for up-to-date information about earth 's surface is ever increasing , as such information provides a base for a large number of applications , including local , regional and global resources monitoring , land-cover and land-use change monitoring , and environmental studies . the data from remote sensing satellites provide opportunities to acquire information about land at varying resolutions and has been widely used for change detection studies . a large number of change detection methodologies and techniques , utilizing remotely sensed data , have been developed , and newer techniques are still emerging . this paper begins with a discussion of the traditionally pixel-based and ( mostly ) statistics-oriented change detection techniques which focus mainly on the spectral values and mostly ignore the spatial context . this is succeeded by a review of object-based change detection techniques . finally there is a brief discussion of spatial data mining techniques in image processing and change detection from remote sensing data . the merits and issues of different techniques are compared .
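building morphological profiles on pca base images , as discussed above , amounts to stacking openings and closings of the first few principal components at increasing structuring-element sizes . the sketch below ( scikit-learn and scikit-image assumed ) uses plain openings and closings for brevity , whereas classical mps use opening and closing by reconstruction ; sizes and component counts are placeholders .

```python
import numpy as np
from sklearn.decomposition import PCA
from skimage.morphology import opening, closing, disk

def extended_morphological_profile(cube, n_pcs=3, radii=(1, 2, 3)):
    """cube: (rows, cols, bands). Returns an EMP feature cube built from
    openings/closings of the first principal components (the base images)."""
    r, c, b = cube.shape
    pcs = PCA(n_components=n_pcs).fit_transform(cube.reshape(-1, b))
    pcs = pcs.reshape(r, c, n_pcs)
    profiles = []
    for k in range(n_pcs):
        base = pcs[:, :, k]
        for rad in radii:                         # opening profile
            profiles.append(opening(base, disk(rad)))
        profiles.append(base)                     # the base image itself
        for rad in radii:                         # closing profile
            profiles.append(closing(base, disk(rad)))
    return np.stack(profiles, axis=-1)

cube = np.random.rand(64, 64, 100)
emp = extended_morphological_profile(cube)
print(emp.shape)  # (64, 64, 21): 3 PCs x (3 openings + base + 3 closings)
```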
the importance of the exponential increase in the image data volume and multiple sensors and associated challenges on the development of change detection techniques are highlighted . story_separator_special_tag multivariate analysis and statistical mixture designs were used for chromatographic fingerprint preparation and authentication of the plant material of three species of the genus bauhinia . the extracts were analysed by reversed-phase high-performance liquid chromatography . mixture design gave an optimum solvent composition for extracting components from the plants of 36 % dichloromethane , 17 % ethanol and 47 % ethyl acetate ( by volume ) , while an optimum mobile phase for chromatographic analyses was found to be 27 % methanol , 27 % acetonitrile and 46 % of water ( by volume ) . results from principal component analysis , hierarchical analysis and soft independent modelling by class analogy showed that bauhinia candicans can not be synonymous with b. forficata link . it was also possible to trace the metabolic profile without identifying its chemical constituents and to determine a chromatographic discriminating region . the characteristics responsible for discrimination between b. candicans and b. forficata were more polar substances that presented peaks with retention times around 1.65 and 1.81 min . story_separator_special_tag linear spectral unmixing is a popular tool in remotely sensed hyperspectral data interpretation . it aims at estimating the fractional abundances of pure spectral signatures ( also called as endmembers ) in each mixed pixel collected by an imaging spectrometer . in many situations , the identification of the end-member signatures in the original data set may be challenging due to insufficient spatial resolution , mixtures happening at different scales , and unavailability of completely pure spectral signatures in the scene . however , the unmixing problem can also be approached in semisupervised fashion , i.e. , by assuming that the observed image signatures can be expressed in the form of linear combinations of a number of pure spectral signatures known in advance ( e.g. , spectra collected on the ground by a field spectroradiometer ) . unmixing then amounts to finding the optimal subset of signatures in a ( potentially very large ) spectral library that can best model each mixed pixel in the scene . in practice , this is a combinatorial problem which calls for efficient linear sparse regression ( sr ) techniques based on sparsity-inducing regularizers , since the number of endmembers participating in a mixed story_separator_special_tag spectral unmixing aims at estimating the fractional abundances of pure spectral signatures ( also called endmembers ) in each mixed pixel collected by a remote sensing hyperspectral imaging instrument . in recent work , the linear spectral unmixing problem has been approached in semisupervised fashion as a sparse regression one , under the assumption that the observed image signatures can be expressed as linear combinations of pure spectra , known a priori and available in a library . it happens , however , that sparse unmixing focuses on analyzing the hyperspectral data without incorporating spatial information . in this paper , we include the total variation ( tv ) regularization to the classical sparse regression formulation , thus exploiting the spatial-contextual information present in the hyperspectral images and developing a new algorithm called sparse unmixing via variable splitting augmented lagrangian and tv . 
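the core of sparse unmixing against a spectral library , before any spatial regularizer is added , is an l1-regularized nonnegative regression per pixel . the sketch below ( scikit-learn assumed ) illustrates that core with a per-pixel lasso ; the total variation term of the algorithm described above is omitted , and the library and abundances are synthetic .

```python
import numpy as np
from sklearn.linear_model import Lasso

# A: (n_bands, n_library) spectral library; Y: (n_bands, n_pixels) pixels.
rng = np.random.default_rng(4)
A = rng.uniform(0, 1, size=(100, 60))
X_true = np.zeros((60, 5))
X_true[[3, 17], :] = rng.uniform(0.2, 0.8, size=(2, 5))  # 2 active members
Y = A @ X_true + rng.normal(0, 0.001, size=(100, 5))

# l1-regularized, nonnegative regression, solved independently per pixel.
reg = Lasso(alpha=1e-3, positive=True, max_iter=5000)
X_hat = np.column_stack([reg.fit(A, Y[:, p]).coef_ for p in range(Y.shape[1])])
print("recovered support (pixel 0):", np.where(X_hat[:, 0] > 1e-3)[0])
```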
our experimental results , conducted with both simulated and real hyperspectral data sets , indicate the potential of including spatial information ( through the tv term ) on sparse unmixing formulations for improved characterization of mixed pixels in hyperspectral imagery . story_separator_special_tag sparse unmixing has been recently introduced in hyperspectral imaging as a framework to characterize mixed pixels . it assumes that the observed image signatures can be expressed in the form of linear combinations of a number of pure spectral signatures known in advance ( e.g. , spectra collected on the ground by a field spectroradiometer ) . unmixing then amounts to finding the optimal subset of signatures in a ( potentially very large ) spectral library that can best model each mixed pixel in the scene . in this paper , we present a refinement of the sparse unmixing methodology recently introduced which exploits the usual very low number of endmembers present in real images , out of a very large library . specifically , we adopt the collaborative ( also called multitask or simultaneous ) sparse regression framework that improves the unmixing results by solving a joint sparse regression problem , where the sparsity is simultaneously imposed to all pixels in the data set . our experimental results with both synthetic and real hyperspectral data sets show clearly the advantages obtained using the new joint sparse regression strategy , compared with the pixelwise independent approach . story_separator_special_tag spectral unmixing aims at finding the spectrally pure constituent materials ( also called endmembers ) and their respective fractional abundances in each pixel of a hyperspectral image scene . in recent years , sparse unmixing has been widely used as a reliable spectral unmixing methodology . in this approach , the observed spectral vectors are expressed as linear combinations of spectral signatures assumed to be known a priori and presented in a large collection , termed spectral library or dictionary , usually acquired in laboratory . sparse unmixing has attracted much attention as it sidesteps two common limitations of classic spectral unmixing approaches , namely , the lack of pure pixels in hyperspectral scenes and the need to estimate the number of endmembers in a given scene , which are very difficult tasks . however , the high mutual coherence of spectral libraries , jointly with their ever-growing dimensionality , strongly limits the operational applicability of sparse unmixing . in this paper , we introduce a two-step algorithm aimed at mitigating the aforementioned limitations . the algorithm exploits the usual low dimensionality of the hyperspectral data sets . the first step , which is similar to the multiple signal classification story_separator_special_tag in this study we compared the performance of regression tree ensembles using hyperspectral data . more specifically , we compared the performance of bagging , boosting and random forest to predict sirex noctilio induced water stress in pinus patula trees using nine spectral parameters derived from hyperspectral data . results from the study show that the random forest ensemble achieved the best overall performance ( r 2 = 0.73 ) and that the predictive accuracy of the ensemble was statistically different ( p < 0.001 ) from bagging and boosting . 
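the collaborative variant described above imposes sparsity jointly on all pixels , i.e . row sparsity of the abundance matrix . scikit-learn's MultiTaskLasso applies exactly that l2,1 penalty , so it serves as a compact stand-in ; note it does not enforce nonnegativity , unlike proper unmixing solvers . data are synthetic .

```python
import numpy as np
from sklearn.linear_model import MultiTaskLasso

# Same setup as per-pixel sparse unmixing, but the l2,1 penalty forces all
# pixels (tasks) to share a single set of active library members.
rng = np.random.default_rng(5)
A = rng.uniform(0, 1, size=(100, 60))                   # spectral library
X_true = np.zeros((60, 20))
X_true[[3, 17], :] = rng.uniform(0.2, 0.8, size=(2, 20))
Y = A @ X_true + rng.normal(0, 0.001, size=(100, 20))   # 20 pixels

mtl = MultiTaskLasso(alpha=1e-3, max_iter=5000).fit(A, Y)
X_hat = mtl.coef_.T                                     # (n_library, n_pixels)
active = np.where(np.linalg.norm(X_hat, axis=1) > 1e-3)[0]
print("jointly recovered support:", active)
```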
additionally , by using random forest as a wrapper we simplified the modeling process and identified the minimum number ( n = 2 ) of spectral parameters that offered the best overall predictive accuracy ( r 2 = 0.76 ) . the water index and ratio975 had the best ability to assay the water status of s. noctilio infested trees thus making it possible to remotely predict and quantify the severity of damage caused by the wasp . story_separator_special_tag an adaptive bayesian contextual classification procedure that utilizes both spectral and spatial interpixel dependency contexts in estimation of statistics and classification is proposed . essentially , this classifier is the constructive coupling of an adaptive classification procedure and a bayesian contextual classification procedure . in this classifier , the joint prior probabilities of the classes of each pixel and its spatial neighbors are modeled by the markov random field . the estimation of statistics and classification are performed in a recursive manner to allow the establishment of the positive-feedback process in a computationally efficient manner . experiments with real hyperspectral data show that , starting with a small training sample set , this classifier can reach classification accuracies similar to that obtained by a pixelwise maximum likelihood pixel classifier with a very large training sample set . additionally , classification maps are produced that have significantly less speckle error . story_separator_special_tag the practice of classifying objects according to perceived similarities is the basis for much of science . organizing data into sensible groupings is one of the most fundamental modes of understanding and learning . as an example , a common scheme of scientific classification puts organisms in to taxonomic ranks : domain , kingdom , phylum , class , etc. ) . cluster analysis is the formal study of algorithms and methods for grouping objects according to measured or perceived intrinsic characteristics . cluster analysis does not use category labels that tag objects with prior identifiers , i.e. , class labels . the absence of category information distinguishes cluster analysis ( unsupervised learning ) from discriminant analysis ( supervised learning ) . the objective of cluster analysis is to simply find a convenient and valid organization of the data , not to establish rules for separating future data into categories . story_separator_special_tag the rich information available in hyperspectral imagery has provided significant opportunities for material classification and identification . due to the problem of the curse of dimensionality ( called hughes phenomenon ) posed by the high number of spectral channels along with small amounts of labeled training samples , dimensionality reduction is a necessary preprocessing step for hyperspectral data . generally , in order to improve the classification accuracy , noise bands generated by various sources ( primarily the sensor and the atmosphere ) are often manually removed in advance . however , the removal of these bands may discard some important discriminative information , eventually degrading the classification accuracy . in this paper , we propose a new strategy to automatically select bands without manual band removal . firstly , wavelet shrinkage is applied to denoise the spatial images of the whole data cube . 
then affinity propagation , which is a recently proposed feature selection approach , is used to choose representative bands from the noise-reduced data . experimental results on three real hyperspectral data collected by two different sensors demonstrate that the bands selected by our approach on the whole data ( containing noise bands ) could achieve story_separator_special_tag most of the existing spatial-spectral-based hyperspectral image classification ( hsic ) methods mainly extract the spatial-spectral information by combining the pixels in a small neighborhood or aggregating the statistical and morphological characteristics . however , those strategies can only generate shallow appearance features with limited representative ability for classes with high interclass similarity and spatial diversity and therefore reduce the classification accuracy . to this end , we present a novel hsic framework , named deep multiscale spatial-spectral feature extraction algorithm , which focuses on learning effective discriminant features for hsic . first , the well pretrained deep fully convolutional network based on vgg-verydeep-16 is introduced to excavate the potential deep multiscale spatial structural information in the proposed hyperspectral imaging framework . then , the spectral feature and the deep multiscale spatial feature are fused by adopting the weighted fusion method . finally , the fusion feature is put into a generic classifier to obtain the pixelwise classification . compared with the existing spectral-spatial-based classification techniques , the proposed method provides the state-of-the-art performance and is much more effective , especially for images with high nonlinear distribution and spatial diversity . story_separator_special_tag principal component analysis pca is a multivariate technique that analyzes a data table in which observations are described by several inter-correlated quantitative dependent variables . its goal is to extract the important information from the table , to represent it as a set of new orthogonal variables called principal components , and to display the pattern of similarity of the observations and of the variables as points in maps . the quality of the pca model can be evaluated using cross-validation techniques such as the bootstrap and the jackknife . pca can be generalized as correspondence analysis ca in order to handle qualitative variables and as multiple factor analysis mfa in order to handle heterogeneous sets of variables . mathematically , pca depends upon the eigen-decomposition of positive semi-definite matrices and upon the singular value decomposition svd of rectangular matrices . copyright \xa9 2010 john wiley & sons , inc . story_separator_special_tag this paper proposes a novel framework called gaussian process maximum likelihood for spatially adaptive classification of hyperspectral data . in hyperspectral images , spectral responses of land covers vary over space , and conventional classification algorithms that result in spatially invariant solutions are fundamentally limited . in the proposed framework , each band of a given class is modeled by a gaussian random process indexed by spatial coordinates . these models are then used to characterize each land cover class at a given location by a multivariate gaussian distribution with parameters adapted for that location . 
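the affinity-propagation band selection described at the start of this passage clusters spectral bands by their mutual similarity and keeps the exemplars as representative bands . a minimal sketch ( scikit-learn assumed ) using absolute band-to-band correlation as the precomputed affinity ; in practice the cube would first be wavelet-denoised as described above .

```python
import numpy as np
from sklearn.cluster import AffinityPropagation

# cube: (rows, cols, bands), ideally denoised beforehand.
cube = np.random.rand(32, 32, 100)
bands = cube.reshape(-1, cube.shape[-1]).T       # (n_bands, n_pixels)

# Similarity between bands: absolute correlation, used as a precomputed
# affinity matrix; exemplars chosen by AP are the selected bands.
S = np.abs(np.corrcoef(bands))
ap = AffinityPropagation(affinity="precomputed", random_state=0).fit(S)
print("representative bands:", ap.cluster_centers_indices_)
```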
experimental results show that the proposed method effectively captures the spatial variations of hyperspectral data , significantly outperforming a variety of other classification algorithms on three different hyperspectral data sets . story_separator_special_tag both supervised and semisupervised algorithms for hyperspectral data analysis typically assume that all unlabeled data belong to the same set of land-cover classes that is represented by labeled data . this is not true in general , however , since there may be new classes in the unexplored regions within an image or in areas that are geographically near but topographically distinct . this problem is more likely to occur when one attempts to build classifiers that cover wider areas ; such classifiers also need to address spatial variations in acquired spectral signatures if they are to be accurate and robust . this paper presents a semisupervised spatially adaptive mixture model ( sessamm ) to identify land covers from hyperspectral images in the presence of previously unknown land-cover classes and spatial variation of spectral responses . sessamm uses a nonparametric bayesian framework to apply spatially adaptive mechanisms to the mixture model with ( potentially ) infinitely many components . in this method , each component in the mixture has spatially adapted parameters estimated by gaussian process regression , and spatial correlations between indicator variables are also considered . the proposed sessamm algorithm is applied to hyperspectral data from botswana and story_separator_special_tag we consider a supervised classification of hyperspectral data using adaboost with stump functions as base classifiers . we used the bootstrap method without replacement to improve stability and accuracy and to reduce overtraining . we randomly split a data set into two subsets : one for training and the other one for validation . subsampling and training/validation steps were repeated to derive the final classifier by the majority vote of the classifiers . this method enabled us to estimate variable relevance to the classification . the relevance measure was used to estimate prior probabilities of the variables for random combinations . in numerical experiments with multispectral and hyperspectral data , the proposed method performed extremely well and showed itself to be superior to support vector machines , artificial neural networks , and other well-known classification methods . story_separator_special_tag in this paper , we study self-taught learning for hyperspectral image ( hsi ) classification . supervised deep learning methods are currently state of the art for many machine learning problems , but these methods require large quantities of labeled data to be effective . unfortunately , existing labeled hsi benchmarks are too small to directly train a deep supervised network . alternatively , we used self-taught learning , which is an unsupervised method to learn feature extracting frameworks from unlabeled hyperspectral imagery . these models learn how to extract generalizable features by training on sufficiently large quantities of unlabeled data that are distinct from the target data set . once trained , these models can extract features from smaller labeled target data sets . we studied two self-taught learning frameworks for hsi classification . the first is a shallow approach that uses independent component analysis and the second is a three-layer stacked convolutional autoencoder . 
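the adaboost-with-stumps scheme above , with repeated subsampling without replacement and a final majority vote , is straightforward to sketch with scikit-learn . split sizes, round count, and the toy labels are placeholders ; the `estimator` keyword assumes scikit-learn 1.2 or newer ( older releases call it `base_estimator` ) .

```python
import numpy as np
from sklearn.ensemble import AdaBoostClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(6)
X = rng.normal(size=(400, 50))                      # spectra
y = (X[:, 5] + X[:, 20] > 0).astype(int)            # toy labels
X_pool, X_test, y_pool, y_test = train_test_split(
    X, y, test_size=100, random_state=0)

# Repeated subsampling without replacement; each round trains AdaBoost with
# depth-1 trees (stumps) as base classifiers, then the rounds vote.
preds = []
for seed in range(11):
    X_tr, _, y_tr, _ = train_test_split(
        X_pool, y_pool, train_size=0.5, random_state=seed)
    clf = AdaBoostClassifier(
        estimator=DecisionTreeClassifier(max_depth=1),  # stump
        n_estimators=100, random_state=seed,
    ).fit(X_tr, y_tr)
    preds.append(clf.predict(X_test))

majority = (np.mean(preds, axis=0) > 0.5).astype(int)
print("voted accuracy:", (majority == y_test).mean())
```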
our models are applied to the indian pines , salinas valley , and pavia university data sets , which were captured by two separate sensors at different altitudes . despite large variation in scene type , our algorithms achieve state-of-the-art results across all the story_separator_special_tag recently , anomaly detection ( ad ) has attracted considerable interest in a wide variety of hyperspectral remote sensing applications . the goal of this unsupervised technique of target detection is to identify the pixels with significantly different spectral signatures from the neighboring background . kernel methods , such as kernel-based support vector data description ( svdd ) ( k-svdd ) , have been presented as a successful approach to ad problems . the most commonly used kernel is the gaussian kernel function . the main problem in using gaussian kernel-based ad methods is the optimal setting of sigma . in an attempt to address this problem , this paper proposes a direct and adaptive measure for the gaussian k-svdd ( gk-svdd ) . the proposed measure is based on a geometric interpretation of the gk-svdd . experimental results are presented on real and synthetically implanted targets of the target detection blind-test data sets . compared to previous measures , the results demonstrate better performance , particularly for subpixel anomalies . story_separator_special_tag in this letter , we propose a multinomial-logistic-regression method for pixelwise hyperspectral classification . the feature vectors are formed by the energy of the spectral vectors projected on class-indexed subspaces . in this way , we model not only the linear mixing process that is often present in the hyperspectral measurement process but also the nonlinearities that are separable in the feature space defined by the aforementioned feature vectors . our experimental results have been conducted using both simulated and real hyperspectral data sets , which are collected using nasa 's airborne visible/infrared imaging spectrometer ( aviris ) and the reflective optics system imaging spectrographic ( rosis ) system . these results indicate that the proposed method provides competitive results in comparison with other state-of-the-art approaches . story_separator_special_tag in this research we address the problem of high dimensionality in hyperspectral images , which may contain rare/anomaly vectors in the observation subspace that we wish to preserve . a linear technique , principal component analysis ( pca ) , and nonlinear techniques , kernel pca , isomap , multidimensional scaling ( mds ) , local tangent space alignment ( ltsa ) , diffusion maps , sammon mapping , symmetric stochastic neighbor embedding ( symsne ) , stochastic neighbor embedding ( sne ) , locally linear embedding ( lle ) , locality preserving projection ( lpp ) , neighborhood preserving embedding ( npe ) , and linear local tangent space alignment ( lltsa ) , were presented . classical criteria , based on the ld norm , spectral derivatives , nearest neighbors , and quality measures , are used for obtaining good preservation of these vectors in the reduced dimension . we have observed from the results obtained that sammon and isomap are less sensitive to these rare vectors compared to the other presented methods . story_separator_special_tag this paper proposes a novel framework for labelling problems which is able to combine multiple segmentations in a principled manner .
our method is based on higher order conditional random fields and uses potentials defined on sets of pixels ( image segments ) generated using unsupervised segmentation algorithms . these potentials enforce label consistency in image regions and can be seen as a strict generalization of the commonly used pairwise contrast-sensitive smoothness potentials . the higher order potential functions used in our framework take the form of the robust p^n model . this enables the use of powerful graph-cut-based move-making algorithms for performing inference in the framework [ 14 ] . we test our method on the problem of multi-class object segmentation by augmenting the conventional crf used for object segmentation with higher order potentials defined on image regions . experiments on challenging data sets show that integration of higher order potentials quantitatively and qualitatively improves results , leading to much better definition of object boundaries . we believe that this method can be used to yield similar improvements for many other labelling problems . story_separator_special_tag for two decades , remotely sensed data from imaging spectrometers have been used to estimate non-pigment biochemical constituents of vegetation , including water , nitrogen , cellulose , and lignin . this interest has been motivated by the important role that these substances play in physiological processes such as photosynthesis , their relationships with ecosystem processes such as litter decomposition and nutrient cycling , and their use in identifying key plant species and functional groups . this paper reviews three areas of research to improve the application of imaging spectrometers to quantify non-pigment biochemical constituents of plants . first , we examine recent empirical and modeling studies that have advanced our understanding of leaf and canopy reflectance spectra in relation to plant biochemistry . next , we present recent examples of how spectroscopic remote sensing methods are applied to characterize vegetation canopies , communities and ecosystems . third , we highlight the latest developments in using imaging spectrometer data to quantify net primary production ( npp ) over large geographic areas . finally , we discuss the major challenges in quantifying non-pigment biochemical constituents of plant canopies from remotely sensed spectra . story_separator_special_tag abstract we develop a new method for estimating the biochemistry of plant material using spectroscopy . normalized band depths calculated from the continuum-removed reflectance spectra of dried and ground leaves were used to estimate their concentrations of nitrogen , lignin , and cellulose . stepwise multiple linear regression was used to select wavelengths in the broad absorption features centered at 1.73 µm , 2.10 µm , and 2.30 µm that were highly correlated with the chemistry of samples from eastern u.s. forests . band depths of absorption features at these wavelengths were found to also be highly correlated with the chemistry of four other sites . a subset of data from the eastern u.s. forest sites was used to derive linear equations that were applied to the remaining data to successfully estimate their nitrogen , lignin , and cellulose concentrations . correlations were highest for nitrogen ( r2 from 0.75 to 0.94 ) . the consistent results indicate the possibility of establishing a single equation capable of estimating the chemical concentrations in a wide variety of species from the reflectance spectra of dried leaves .
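continuum removal , the preprocessing step behind the band depths described above , divides a reflectance spectrum by its upper convex hull ; band depth is then one minus the continuum-removed reflectance . a minimal numpy sketch , with a monotone-chain upper hull and a synthetic absorption feature ( the gaussian dip and wavelength grid are placeholders ) :

```python
import numpy as np

def continuum_removed(wl, refl):
    """Divide a reflectance spectrum by its upper convex hull (continuum)."""
    hull = []
    for p in zip(wl, refl):
        while len(hull) >= 2:
            (x1, y1), (x2, y2) = hull[-2], hull[-1]
            # Pop while the last hull point lies on or below the chord.
            if (x2 - x1) * (p[1] - y1) - (y2 - y1) * (p[0] - x1) >= 0:
                hull.pop()
            else:
                break
        hull.append(p)
    hx, hy = zip(*hull)
    continuum = np.interp(wl, hx, hy)     # piecewise-linear upper hull
    return refl / continuum

wl = np.linspace(1.4, 2.5, 200)                          # micrometers
refl = 0.5 - 0.15 * np.exp(-((wl - 2.10) / 0.05) ** 2)   # dip near 2.10 um
band_depth = 1.0 - continuum_removed(wl, refl)
print("max band depth %.3f at %.2f um"
      % (band_depth.max(), wl[band_depth.argmax()]))
```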
the extension of this method to remote sensing was investigated . the effects of leaf story_separator_special_tag deep learning ( dl ) methods have gained considerable attention since 2014. in this chapter we briefly review the state of the art in dl and then give several examples of applications from diverse areas of application . we will focus on convolutional neural networks ( cnns ) , which have since the seminal work of krizhevsky et al . ( imagenet classification with deep convolutional neural networks . advances in neural information processing systems 25 , pp . 1097 1105 , 2012 ) revolutionized image classification and even started surpassing human performance on some benchmark data sets ( ciresan et al. , multi-column deep neural network for traffic sign classification , 2012a ; he et al. , delving deep into rectifiers : surpassing human-level performance on imagenet classification . corr , vol . 1502.01852 , 2015a ) . while deep neural networks have become popular primarily for image classification tasks , they can also be successfully applied to other areas and problems with some local structure in the data . we will first present a classical application of cnns on image-like data , in particular , phenotype classification of cells based on their morphology , and then extend the story_separator_special_tag the fundamental basis for space-based remote sensing is that information is potentially available from the electromagnetic energy field arising from the earth 's surface and , in particular , from the spatial , spectral , and temporal variations in that field . rather than focusing on the spatial variations , which imagery perhaps best conveys , why not move on to look at how the spectral variations might be used . the idea was to enlarge the size of a pixel until it includes an area that is characteristic from a spectral response standpoint for the surface cover to be discriminated . the article includes an example of an image space representation , using three bands to simulate a color ir photograph of an airborne hyperspectral data set over the washington , dc , mall . story_separator_special_tag abstract learning incorporates a broad range of complex procedures . machine learning ( ml ) is a subdivision of artificial intelligence based on the biological learning process . the ml approach deals with the design of algorithms to learn from machine readable data . ml covers main domains such as data mining , difficult-to-program applications , and software applications . it is a collection of a variety of algorithms ( e.g . neural networks , support vector machines , self-organizing map , decision trees , random forests , case-based reasoning , genetic programming , etc . ) that can provide multivariate , nonlinear , nonparametric regression or classification . the modeling capabilities of the ml-based methods have resulted in their extensive applications in science and engineering . herein , the role of ml as an effective approach for solving problems in geosciences and remote sensing will be highlighted . the unique features of some of the ml techniques will be outlined with a specific attention to genetic programming paradigm . furthermore , nonparametric regression and classification illustrative examples are presented to demonstrate the efficiency of ml for tackling the geosciences and remote sensing problems . 
story_separator_special_tag [ conference proceedings front matter : table of contents only , no abstract recoverable ] story_separator_special_tag an accurate estimation of biophysical variables is the key to monitor our planet . leaf chlorophyll content helps in interpreting the chlorophyll fluorescence signal from space , whereas oceanic chlorophyll concentration allows us to quantify the healthiness of the oceans . recently , the family of bayesian nonparametric methods has provided excellent results in these situations . a particularly useful method in this framework is the gaussian process regression ( gpr ) . however , standard gpr assumes that the variance of the noise process is independent of the signal , which does not hold in most of the problems . in this letter , we propose a nonstandard variational approximation that allows accurate inference in signal-dependent noise scenarios . we show that the so-called variational heteroscedastic gpr ( vhgpr ) is an excellent alternative to standard gpr in two relevant earth observation examples , namely , chl vegetation retrieval from hyperspectral images and oceanic chl concentration estimation from in situ measured reflectances . the proposed vhgpr outperforms the tested empirical approaches , as well as statistical linear regression ( both least squares and least absolute shrinkage and selection operator ) , neural nets , and kernel ridge regression , story_separator_special_tag deep belief networks ( dbn ) are generative neural network models with many layers of hidden explanatory factors , recently introduced by hinton , osindero , and teh ( 2006 ) along with a greedy layer-wise unsupervised learning algorithm . the building block of a dbn is a probabilistic model called a restricted boltzmann machine ( rbm ) , used to represent one layer of the model . restricted boltzmann machines are interesting because inference is easy in them and because they have been successfully used as building blocks for training deeper models .
we first prove that adding hidden units yields strictly improved modeling power , while a second theorem shows that rbms are universal approximators of discrete distributions . we then study the question of whether dbns with more layers are strictly more powerful in terms of representational power . this suggests a new and less greedy criterion for training rbms within dbns . story_separator_special_tag this paper compares the performance of several classifier algorithms on a standard database of handwritten digits . we consider not only raw accuracy , but also training time , recognition time , and memory requirements . when available , we report measurements of the fraction of patterns that must be rejected so that the remaining patterns have misclassification rates less than a given threshold . story_separator_special_tag in this paper , we describe a novel deep convolutional neural network ( cnn ) that is deeper and wider than other existing deep networks for hyperspectral image classification . unlike current state-of-the-art approaches in cnn-based hyperspectral image classification , the proposed network , called contextual deep cnn , can optimally explore local contextual interactions by jointly exploiting local spatio-spectral relationships of neighboring individual pixel vectors . the joint exploitation of the spatio-spectral information is achieved by a multi-scale convolutional filter bank used as an initial component of the proposed cnn pipeline . the initial spatial and spectral feature maps obtained from the multi-scale filter bank are then combined together to form a joint spatio-spectral feature map . the joint feature map representing rich spectral and spatial properties of the hyperspectral image is then fed through a fully convolutional network that eventually predicts the corresponding label of each pixel vector . the proposed approach is tested on three benchmark data sets : the indian pines data set , the salinas data set , and the university of pavia data set . performance comparison shows enhanced classification performance of the proposed approach over the current state-of-the-art on the three data sets story_separator_special_tag high-spectral-resolution remote-sensing data are first transformed so that the noise covariance matrix becomes the identity matrix . then the principal components transform is applied . this transform is equivalent to the maximum noise fraction transform and is optimal in the sense that it maximizes the signal-to-noise ratio ( snr ) in each successive transform component , just as the principal component transform maximizes the data variance in successive components . application of this transform requires knowledge or an estimate of the noise covariance matrix of the data . the effectiveness of this transform for noise removal is demonstrated in both the spatial and spectral domains . results that demonstrate the enhancement of geological mapping and detection of alteration mineralogy in data from the pilbara region of western australia , including mapping of the occurrence of pyrophyllite over an extended area , are presented . story_separator_special_tag classification of hyperspectral imagery using few labeled samples is a challenging problem , considering the high dimensionality of hyperspectral imagery . classifiers trained on limited samples with abundant spectral bands tend to overfit , leading to weak generalization capability .
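the noise-whitening-then-pca recipe of the maximum noise fraction transform described above is compact to sketch in numpy . the shift-difference noise estimate below is a common heuristic , not the only option , and the noise covariance would normally come from a better estimator on real data .

```python
import numpy as np

def mnf(cube, n_components=10):
    """Maximum noise fraction: whiten by an estimated noise covariance so it
    becomes the identity, then apply PCA; components are ordered by SNR."""
    r, c, b = cube.shape
    X = cube.reshape(-1, b)
    X = X - X.mean(0)
    # Noise estimate from horizontal shift differences (heuristic assumption).
    N = (cube[:, 1:, :] - cube[:, :-1, :]).reshape(-1, b) / np.sqrt(2)
    Cn = np.cov(N, rowvar=False)
    Cx = np.cov(X, rowvar=False)
    # Noise-whitening matrix W so that W.T @ Cn @ W = I.
    wn, Vn = np.linalg.eigh(Cn)
    W = Vn / np.sqrt(np.maximum(wn, 1e-12))
    # PCA of the whitened signal covariance, descending eigenvalue = SNR.
    ws, Vs = np.linalg.eigh(W.T @ Cx @ W)
    order = np.argsort(ws)[::-1]
    T = W @ Vs[:, order[:n_components]]
    return (X @ T).reshape(r, c, n_components)

cube = np.random.rand(40, 40, 60)
print(mnf(cube).shape)  # (40, 40, 10)
```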
to address this problem , we have developed an enhanced ensemble method called multiclass boosted rotation forest ( mbrf ) , which combines the rotation forest algorithm and a multiclass adaboost algorithm . the benefit of this combination can be explained by bias-variance analysis , especially in the situation of inadequate training samples and high dimensionality . furthermore , mbrf innately produces posterior probabilities inherited from adaboost , which serve as the unary potentials of the conditional random field ( crf ) model to incorporate spatial context information . experimental results show that the classification accuracy by mbrf as well as its integration with crf consistently outperforms the other referenced state-of-the-art classification methods when limited labeled samples are available for training . story_separator_special_tag this paper presents a new semisupervised segmentation algorithm , suited to high-dimensional data , of which remotely sensed hyperspectral image data sets are an example . the algorithm implements two main steps : 1 ) semisupervised learning of the posterior class distributions followed by 2 ) segmentation , which infers an image of class labels from a posterior distribution built on the learned class distributions and on a markov random field . the posterior class distributions are modeled using multinomial logistic regression , where the regressors are learned using both labeled and , through a graph-based technique , unlabeled samples . such unlabeled samples are actively selected based on the entropy of the corresponding class label . the prior on the image of labels is a multilevel logistic model , which enforces segmentation results in which neighboring labels belong to the same class . the maximum a posteriori segmentation is computed by the α-expansion min-cut-based integer optimization algorithm . our experimental results , conducted using synthetic and real hyperspectral image data sets collected by the airborne visible/infrared imaging spectrometer system of the national aeronautics and space administration jet propulsion laboratory over the regions of indian pines , in , and story_separator_special_tag this paper introduces a new supervised bayesian approach to hyperspectral image segmentation with active learning , which consists of two main steps . first , we use a multinomial logistic regression ( mlr ) model to learn the class posterior probability distributions . this is done by using a recently introduced logistic regression via splitting and augmented lagrangian algorithm . second , we use the information acquired in the previous step to segment the hyperspectral image using a multilevel logistic prior that encodes the spatial information . in order to reduce the cost of acquiring large training sets , active learning is performed based on the mlr posterior probabilities . another contribution of this paper is the introduction of a new active sampling approach , called modified breaking ties , which is able to provide an unbiased sampling . furthermore , we have implemented our proposed method in an efficient way . for instance , in order to obtain the time-consuming maximum a posteriori segmentation , we use the α-expansion min-cut-based integer optimization algorithm .
the state-of-the-art performance of the proposed approach is illustrated using both simulated and real hyperspectral data sets in a number of experimental comparisons with recently story_separator_special_tag in this letter , we propose a new semisupervised learning ( ssl ) algorithm for remotely sensed hyperspectral image classification . our main contribution is the development of a new soft sparse multinomial logistic regression model which exploits both hard and soft labels . in our terminology , these labels respectively correspond to labeled and unlabeled training samples . the proposed algorithm represents an innovative contribution with regard to conventional ssl algorithms that only assign hard labels to unlabeled samples . the effectiveness of our proposed method is evaluated via experiments with real hyperspectral images , in which comparisons with conventional semisupervised self-learning algorithms with hard labels are carried out . in such comparisons , our method exhibits state-of-the-art performance . story_separator_special_tag this paper introduces a new supervised segmentation algorithm for remotely sensed hyperspectral image data which integrates the spectral and spatial information in a bayesian framework . a multinomial logistic regression ( mlr ) algorithm is first used to learn the posterior probability distributions from the spectral information , using a subspace projection method to better characterize noise and highly mixed pixels . then , contextual information is included using a multilevel logistic markov-gibbs markov random field prior . finally , a maximum a posteriori segmentation is efficiently computed by the min-cut-based integer optimization algorithm . the proposed segmentation approach is experimentally evaluated using both simulated and real hyperspectral data sets , exhibiting state-of-the-art performance when compared with recently introduced hyperspectral image classification methods . the integration of subspace projection methods with the mlr algorithm , combined with the use of spatial-contextual information , represents an innovative contribution in the literature . this approach is shown to provide accurate characterization of hyperspectral imagery in both the spectral and the spatial domain . story_separator_special_tag in this paper , we propose a new framework for spectral-spatial classification of hyperspectral image data . the proposed approach serves as an engine in the context of which active learning algorithms can exploit both spatial and spectral information simultaneously . an important contribution of our paper is the fact that we exploit the marginal probability distribution which uses the whole information in the hyperspectral data . we learn such distributions from both the spectral and spatial information contained in the original hyperspectral data using loopy belief propagation . the adopted probabilistic model is a discriminative random field in which the association potential is a multinomial logistic regression classifier and the interaction potential is a markov random field multilevel logistic prior . our experimental results with hyperspectral data sets collected using the national aeronautics and space administration 's airborne visible infrared imaging spectrometer and the reflective optics system imaging spectrometer system indicate that the proposed framework provides state-of-the-art performance when compared to other similar developments . 
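the breaking-ties active sampling used with the mlr posteriors above queries the samples whose two largest class posteriors are closest , i.e . the most ambiguous pixels . a minimal sketch ( scikit-learn assumed ) with a plain multinomial logistic regression standing in for the splitting-and-augmented-lagrangian solver of the papers above ; data and batch size are placeholders .

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(7)
X_lab = rng.normal(size=(30, 20))          # small labeled set
y_lab = rng.integers(0, 3, 30)
X_unl = rng.normal(size=(500, 20))         # unlabeled pool

# Multinomial logistic regression posteriors over the unlabeled pool.
mlr = LogisticRegression(max_iter=1000).fit(X_lab, y_lab)
P = mlr.predict_proba(X_unl)

# Breaking ties: smallest gap between the two largest class posteriors.
top2 = np.sort(P, axis=1)[:, -2:]
gap = top2[:, 1] - top2[:, 0]
query = np.argsort(gap)[:10]               # 10 most ambiguous samples
print("indices to label next:", query)
```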
story_separator_special_tag hyperspectral image classification has been an active topic of research in recent years . in the past , many different types of features have been extracted ( using both linear and nonlinear strategies ) for classification problems . on the one hand , some approaches have exploited the original spectral information or other features linearly derived from such information in order to have classes which are linearly separable . on the other hand , other techniques have exploited features obtained through nonlinear transformations intended to reduce data dimensionality , to better model the inherent nonlinearity of the original data ( e.g. , kernels ) or to adequately exploit the spatial information contained in the scene ( e.g. , using morphological analysis ) . special attention has been given to techniques able to exploit a single kind of features , such as composite kernel learning or multiple kernel learning , developed in order to deal with multiple kernels . however , few approaches have been designed to integrate multiple types of features extracted from both linear and nonlinear transformations . in this paper , we develop a new framework for the classification of hyperspectral scenes that pursues the combination of multiple story_separator_special_tag in this letter , a novel anomaly detection framework with transferred deep convolutional neural network ( cnn ) is proposed . the framework is designed by considering the following facts : 1 ) reference data with labeled samples are utilized , because no prior information is available about the image scene for anomaly detection and 2 ) pixel pairs are generated to enlarge the sample size , since the advantage of cnn can be realized only if the number of training samples is sufficient . a multilayer cnn is trained by using differences between pixel pairs generated from the reference image scene . then , for each pixel in the image for anomaly detection , the difference between pixel pairs , constructed by combining the center pixel and its surrounding pixels , is classified by the trained cnn , with the result serving as a similarity measurement . the detection output is simply generated by averaging these similarity scores . experimental performance demonstrates that the proposed algorithm outperforms the classic reed-xiaoli and the state-of-the-art representation-based detectors , such as sparse representation-based detector ( srd ) and collaborative representation-based detector . story_separator_special_tag the deep convolutional neural network ( cnn ) has attracted great interest recently . it can provide excellent performance in hyperspectral image classification when the number of training samples is sufficiently large . in this paper , a novel pixel-pair method is proposed to significantly increase such a number , ensuring that the advantage of cnn can actually be realized . for a testing pixel , pixel-pairs , constructed by combining the center pixel and each of the surrounding pixels , are classified by the trained cnn , and the final label is then determined by a voting strategy . the proposed method utilizing deep cnn to learn pixel-pair features is expected to have more discriminative power . experimental results based on several hyperspectral image data sets demonstrate that the proposed method can achieve better classification performance than the conventional deep learning-based method .
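the pixel-pair idea above turns n labeled pixels into on the order of n^2 training samples . a minimal sketch of the pair-construction step follows ; the labeling convention here ( same-class pairs keep their class , cross-class pairs fall into a catch-all label ) is one common variant , and the exact scheme in the cited papers differs in detail . at test time , the center pixel is paired with each of its neighbors and the pair predictions are combined by voting .

```python
import numpy as np
from itertools import combinations

def make_pixel_pairs(spectra, labels, mixed_label=-1):
    """spectra: (n, B) labeled pixel spectra; labels: (n,) class labels.
    returns concatenated pixel pairs and their pair-level labels."""
    pairs, pair_labels = [], []
    for i, j in combinations(range(len(spectra)), 2):
        pairs.append(np.concatenate([spectra[i], spectra[j]]))
        # same-class pairs keep the class; every cross-class pair shares one
        # catch-all label, so the classifier learns pairwise similarity.
        pair_labels.append(labels[i] if labels[i] == labels[j] else mixed_label)
    return np.asarray(pairs), np.asarray(pair_labels)
```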
story_separator_special_tag linear discriminant analysis ( lda ) has been widely applied for hyperspectral image ( hsi ) analysis as a popular method for feature extraction and dimensionality reduction . linear methods such as lda work well for unimodal gaussian class-conditional distributions . however , when data samples between classes are nonlinearly separated in the input space , linear methods such as lda are expected to fail . the kernel discriminant analysis ( kda ) attempts to address this issue by mapping data in the input space onto a subspace such that fisher 's ratio in an intermediate ( higher-dimensional ) kernel-induced space is maximized . in recent studies with hsi data , kda has been shown to outperform lda , particularly when the data distributions are non-gaussian and multimodal , such as when pixels represent target classes severely mixed with background classes . in this letter , a modified kda algorithm , i.e. , kernel local fisher discriminant analysis ( klfda ) , is studied for hsi analysis . unlike kda , klfda imposes an additional constraint on the mapping : it ensures that neighboring points in the input space stay close-by in the projected subspace and vice versa . classification experiments with story_separator_special_tag the gaussian mixture model is a well-known classification tool that captures non-gaussian statistics of multivariate data . however , the impractically large size of the resulting parameter space has hindered widespread adoption of gaussian mixture models for hyperspectral imagery . to counter this parameter-space issue , dimensionality reduction targeting the preservation of multimodal structures is proposed . specifically , locality-preserving nonnegative matrix factorization , as well as local fisher 's discriminant analysis , is deployed as preprocessing to reduce the dimensionality of data for the gaussian-mixture-model classifier , while preserving multimodal structures within the data . in addition , the pixel-wise classification results from the gaussian mixture model are combined with spatial-context information resulting from a markov random field . experimental results demonstrate that the proposed classification system significantly outperforms other approaches even under limited training data . story_separator_special_tag hyperspectral imagery typically provides a wealth of information captured in a wide range of the electromagnetic spectrum for each pixel in the image ; however , when used in statistical pattern-classification tasks , the resulting high-dimensional feature spaces often tend to result in ill-conditioned formulations . popular dimensionality-reduction techniques such as principal component analysis , linear discriminant analysis , and their variants typically assume a gaussian distribution . the quadratic maximum-likelihood classifier commonly employed for hyperspectral analysis also assumes single-gaussian class-conditional distributions . departing from this single-gaussian assumption , a classification paradigm designed to exploit the rich statistical structure of the data is proposed . the proposed framework employs local fisher 's discriminant analysis to reduce the dimensionality of the data while preserving its multimodal structure , while a subsequent gaussian mixture model or support vector machine provides effective classification of the reduced-dimension multimodal data .
experimental results on several different multiple-class hyperspectral-classification tasks demonstrate that the proposed approach significantly outperforms several traditional alternatives . story_separator_special_tag recent research has shown that using spectral spatial information can considerably improve the performance of hyperspectral image ( hsi ) classification . hsi data is typically presented in the format of 3d cubes . thus , 3d spatial filtering naturally offers a simple and effective method for simultaneously extracting the spectral spatial features within such images . in this paper , a 3d convolutional neural network ( 3d-cnn ) framework is proposed for accurate hsi classification . the proposed method views the hsi cube data altogether without relying on any preprocessing or post-processing , extracting the deep spectral spatial-combined features effectively . in addition , it requires fewer parameters than other deep learning-based methods . thus , the model is lighter , less likely to over-fit , and easier to train . for comparison and validation , we test the proposed method along with three other deep learning-based hsi classification methods , namely stacked autoencoder ( sae ) , deep belief network ( dbn ) , and 2d-cnn-based methods on three real-world hsi datasets captured by different sensors . experimental results demonstrate that our 3d-cnn-based method outperforms these state-of-the-art methods and sets a new record . story_separator_special_tag grouping cues can affect the performance of segmentation greatly . in this paper , we show that superpixels ( image segments ) can provide powerful grouping cues to guide segmentation , where superpixels can be collected easily by ( over- ) segmenting the image using any reasonable existing segmentation algorithms . generated by different algorithms with varying parameters , superpixels can capture diverse and multi-scale visual patterns of a natural image . successful integration of the cues from a large multitude of superpixels presents a promising yet not fully explored direction . in this paper , we propose a novel segmentation framework based on bipartite graph partitioning , which is able to aggregate multi-layer superpixels in a principled and very effective manner . computationally , it is tailored to unbalanced bipartite graph structure and leads to a highly efficient , linear-time spectral algorithm . our method achieves significantly better performance on the berkeley segmentation database compared to state-of-the-art techniques . story_separator_special_tag in recent years , deep learning has been widely studied for remote sensing image analysis . in this paper , we propose a method for remotely-sensed image classification by using sparse representation of deep learning features . specifically , we use convolutional neural networks ( cnn ) to extract deep features from high levels of the image data . deep features provide high level spatial information created by hierarchical structures . although the deep features may have high dimensionality , they lie in class-dependent sub-spaces or sub-manifolds . we investigate the characteristics of deep features by using a sparse representation classification framework .
the experimental results reveal that the proposed method exploits the inherent low-dimensional structure of the deep features to provide better classification results as compared to the results obtained by widely-used feature exploration algorithms , such as the extended morphological attribute profiles ( emaps ) and sparse coding ( sc ) . story_separator_special_tag this paper reports the outcomes of the 2014 data fusion contest organized by the image analysis and data fusion technical committee ( iadf tc ) of the ieee geoscience and remote sensing society ( ieee grss ) . as for previous years , the iadf tc organized a data fusion contest aiming at fostering new ideas and solutions for multisource remote sensing studies . in the 2014 edition , participants considered multiresolution and multisensor fusion between optical data acquired at 20-cm resolution and long-wave ( thermal ) infrared hyperspectral data at 1-m resolution . the contest was proposed as a double-track competition : one aiming at accurate landcover classification and the other seeking innovation in the fusion of thermal hyperspectral and color data . in this paper , the results obtained by the winners of both tracks are presented and discussed . story_separator_special_tag kernel sparse representation classification ( ksrc ) , a nonlinear extension of sparse representation classification , shows good performance for hyperspectral image classification . however , ksrc only considers the spectra of unordered pixels , without incorporating information on the spatially adjacent data . this paper proposes a neighboring filtering kernel to spatial-spectral kernel sparse representation for enhanced classification of hyperspectral images . the novelty of this work consists in : 1 ) presenting a framework of spatial-spectral ksrc ; and 2 ) measuring the spatial similarity by means of neighborhood filtering in the kernel feature space . experiments on several hyperspectral images demonstrate the effectiveness of the presented method , and the proposed neighboring filtering kernel outperforms the existing spatial-spectral kernels . in addition , the proposed spatial-spectral ksrc opens a wide field for future developments in which filtering methods can be easily incorporated . story_separator_special_tag in recent years , many studies on hyperspectral image classification have shown that using multiple features can effectively improve the classification accuracy . as a very powerful means of learning , multiple kernel learning ( mkl ) can conveniently be embedded in a variety of characteristics . this paper proposes a class-specific sparse mkl ( cs-smkl ) framework to improve the capability of hyperspectral image classification . in terms of the features , extended multiattribute profiles are adopted because they can effectively represent the spatial and spectral information of hyperspectral images . cs-smkl classifies the hyperspectral images , simultaneously learns class-specific significant features , and selects class-specific weights . using an $ l_1 $ -norm constraint ( i.e. , group lasso ) as the regularizer , we can enforce the sparsity at the group/feature level and automatically learn a compact feature set for the classification of any two classes . more precisely , our cs-smkl determines the associated weights of optimal base kernels for any two classes and results in improved classification performance .
the advantage of the proposed method is that only the features useful for the classification of any two classes can be retained , story_separator_special_tag this article presents a new hyperspectral image classification method , which is capable of automatic feature learning while achieving high classification accuracy . the method contains the following two major modules : the spectral classification module and the spatial constraints module . the spectral classification module uses a deep network , called stacked denoising autoencoders ( sda ) , to learn feature representation of the data . through sda , the data are projected non-linearly from their original hyperspectral space to some higher-dimensional space , where a more compact distribution is obtained . an interesting aspect of this method is that it does not need any prior feature design/extraction process guided by humans . the suitable feature for the classification is learnt by the deep network itself . superpixels are utilized to generate the spatial constraints for the refinement of the spectral classification results . by exploiting the spatial consistency of neighbourhood pixels , the accuracy of classification is further improved by a large margin . experiments on the public data sets have revealed the superior performance of the proposed method . story_separator_special_tag pansharpening aims at fusing a panchromatic image with a multispectral one , to generate an image with the high spatial resolution of the former and the high spectral resolution of the latter . in the last decade , many algorithms have been presented in the literature for pansharpening using multispectral data . with the increasing availability of hyperspectral systems , these methods are now being adapted to hyperspectral images . in this work , we compare new pansharpening techniques designed for hyperspectral data with some of the state-of-the-art methods for multispectral pansharpening , which have been adapted for hyperspectral data . eleven methods from different classes ( component substitution , multiresolution analysis , hybrid , bayesian and matrix factorization ) are analyzed . these methods are applied to three datasets and their effectiveness and robustness are evaluated with widely used performance indicators . in addition , all the pansharpening techniques considered in this paper have been implemented in a matlab toolbox that is made available to the community . story_separator_special_tag image classification is a complex process that may be affected by many factors . this paper examines current practices , problems , and prospects of image classification . the emphasis is placed on the summarization of major advanced classification approaches and the techniques used for improving classification accuracy . in addition , some important issues affecting classification performance are discussed . this literature review suggests that designing a suitable image-processing procedure is a prerequisite for a successful classification of remotely sensed data into a thematic map . effective use of multiple features of remotely sensed data and the selection of a suitable classification method are especially significant for improving classification accuracy . non-parametric classifiers such as neural network , decision tree classifier , and knowledge-based classification have increasingly become important approaches for multisource data classification .
integration of remote sensing , geographical information systems ( gis ) , and expert systems emerges as a new research frontier . more research , however , is needed to identify and reduce uncertainties in the image-processing chain to improve classification accuracy . story_separator_special_tag hyperspectral unmixing is one of the most important techniques in analyzing hyperspectral images , which decomposes a mixed pixel into a collection of constituent materials weighted by their proportions . recently , many sparse nonnegative matrix factorization ( nmf ) algorithms have achieved advanced performance for hyperspectral unmixing because they overcome the difficulty of the absence of pure pixels and sufficiently utilize the sparse characteristic of the data . however , most existing sparse nmf algorithms for hyperspectral unmixing only consider the euclidean structure of the hyperspectral data space . in fact , hyperspectral data are more likely to lie on a low-dimensional submanifold embedded in the high-dimensional ambient space . thus , it is necessary to consider the intrinsic manifold structure for hyperspectral unmixing . in order to exploit the latent manifold structure of the data during the decomposition , manifold regularization is incorporated into sparsity-constrained nmf for unmixing in this paper . since the additional manifold regularization term can keep the close link between the original image and the material abundance maps , the proposed approach leads to a more desirable unmixing performance . the experimental results on synthetic and real hyperspectral data both illustrate the superiority of the story_separator_special_tag adversarial training has been shown to produce state-of-the-art results for generative image modeling . in this paper we propose an adversarial training approach to train semantic segmentation models . we train a convolutional semantic segmentation network along with an adversarial network that discriminates segmentation maps coming either from the ground truth or from the segmentation network . the motivation for our approach is that it can detect and correct higher-order inconsistencies between ground truth segmentation maps and the ones produced by the segmentation net . our experiments show that our adversarial training approach leads to improved accuracy on the stanford background and pascal voc 2012 datasets . story_separator_special_tag advances in hyperspectral sensing provide new capability for characterizing spectral signatures in a wide range of physical and biological systems , while inspiring new methods for extracting information from these data . hsi data often lie on sparse , nonlinear manifolds whose geometric and topological structures can be exploited via manifold-learning techniques . in this article , we focused on demonstrating the opportunities provided by manifold learning for classification of remotely sensed data . however , limitations and opportunities remain both for research and applications . although these methods have been demonstrated to mitigate the impact of physical effects that affect electromagnetic energy traversing the atmosphere and reflecting from a target , nonlinearities are not always exhibited in the data , particularly at lower spatial resolutions , so users should always evaluate the inherent nonlinearity in the data . manifold learning is data driven , and as such , results are strongly dependent on the characteristics of the data , and one method will not consistently provide the best results .
nonlinear manifold-learning methods require parameter tuning , although experimental results are typically stable over a range of values , and have higher computational overhead than linear methods , which story_separator_special_tag because the reliability of the features for every pixel determines the accuracy of classification , it is important to design a specialized feature mining algorithm for hyperspectral image classification . we propose a feature learning algorithm , contextual deep learning , which is extremely effective for hyperspectral image classification . on the one hand , the learning-based feature extraction algorithm can characterize information better than the pre-defined feature extraction algorithm . on the other hand , spatial contextual information is effective for hyperspectral image classification . contextual deep learning explicitly learns spectral and spatial features via a deep learning architecture and promotes the feature extractor using a supervised fine-tuning strategy . extensive experiments show that the proposed contextual deep learning algorithm is an excellent feature learning algorithm and can achieve good performance with only a simple classifier . story_separator_special_tag deep learning , which represents data by a hierarchical network , has proven to be efficient in computer vision . to investigate the effect of deep features in hyperspectral image ( hsi ) classification , this paper focuses on how to extract and utilize deep features in an hsi classification framework . first , in order to extract spectral-spatial information , an improved deep network , spatial updated deep auto-encoder ( sdae ) , is proposed . sdae , which is an improved deep auto-encoder ( dae ) , considers sample similarity by adding a regularization term in the energy function , and updates features by integrating contextual information . second , in order to deal with the small training set using deep features , a collaborative representation-based classification is applied . moreover , in order to suppress salt-and-pepper noise and smooth the result , we compute the residual of collaborative representation of all samples as a residual matrix , which can be effectively used in a graph-cut-based spatial regularization . the proposed method inherits the advantages of deep learning and has solutions to add spatial information of hsi in the learning network . using collaborative representation-based classification with deep features story_separator_special_tag in recent years , a large amount of multi-disciplinary research has been conducted on sparse models and their applications . in statistics and machine learning , the sparsity principle is used to perform model selection - that is , automatically selecting a simple model among a large collection of them . in signal processing , sparse coding consists of representing data with linear combinations of a few dictionary elements . subsequently , the corresponding tools have been widely adopted by several scientific communities such as neuroscience , bioinformatics , or computer vision . the goal of this monograph is to offer a self-contained view of sparse modeling for visual recognition and image processing . more specifically , we focus on applications where the dictionary is learned and adapted to data , yielding a compact representation that has been successful in various contexts .
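several of the abstracts above classify spectral or deep features by sparse representation : code a test sample over a dictionary of labeled training samples , then assign the class whose atoms reconstruct it with the smallest residual . a minimal sketch follows , using orthogonal matching pursuit as the sparse solver in place of the l1 solvers more common in this literature ; the function names and parameter values are assumptions for illustration .

```python
import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

def src_predict(train_X, train_y, x, n_nonzero=10):
    """train_X: (n, d) training features (one dictionary atom per row);
    train_y: (n,) labels; x: (d,) test feature. returns predicted class."""
    D = train_X.T  # columns are dictionary atoms
    omp = OrthogonalMatchingPursuit(n_nonzero_coefs=n_nonzero,
                                    fit_intercept=False)
    omp.fit(D, x)          # sparse code alpha with x ~= D @ alpha
    alpha = omp.coef_
    residuals = {}
    for c in np.unique(train_y):
        mask = train_y == c
        # reconstruct with this class's atoms only; a smaller residual
        # means the class explains the sample better.
        residuals[c] = np.linalg.norm(x - D[:, mask] @ alpha[mask])
    return min(residuals, key=residuals.get)
```

in practice the atoms are usually l2-normalized first , and the same residual rule applies unchanged when `train_X` holds cnn features rather than raw spectra .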
story_separator_special_tag hyperspectral imaging is a trending technique in remote sensing that finds its application in many different areas , such as agriculture , mapping , target detection , food quality monitoring , etc . this technique gives the ability to remotely identify the composition of each pixel of the image . therefore , it is a natural candidate for the purpose of landmine detection , thanks to its inherent safety and fast response time . in this paper , we will present the results of several studies that employed hyperspectral imaging for the purpose of landmine detection , discussing the different signal processing techniques used in this framework for hyperspectral image processing and target detection . our purpose is to highlight the progress attained in the detection of landmines using hyperspectral imaging and to identify possible perspectives for future work , in order to achieve better detection in real-time operation mode . story_separator_special_tag we introduce key concepts and issues including the effects of atmospheric propagation upon the data , spectral variability , mixed pixels , and the distinction between classification and detection algorithms . detection algorithms for full pixel targets are developed using the likelihood ratio approach . subpixel target detection , which is more challenging due to background interference , is pursued using both statistical and subspace models for the description of spectral variability . finally , we provide some results which illustrate the performance of some detection algorithms using real hyperspectral imaging ( hsi ) data . furthermore , we illustrate the potential deviation of hsi data from normality and point to some distributions that may serve in the development of algorithms with better or more robust performance . we therefore focus on detection algorithms that assume multivariate normal distribution models for hsi data . story_separator_special_tag this article presents an overview of the theoretical and practical issues associated with the development , analysis , and application of detection algorithms to exploit hyperspectral imaging data . we focus on techniques that exploit spectral information exclusively to make decisions regarding the type of each pixel ( target or nontarget ) on a pixel-by-pixel basis in an image . first we describe the fundamental structure of the hyperspectral data and explain how these data influence the signal models used for the development and theoretical analysis of detection algorithms . next we discuss the approach used to derive detection algorithms , the performance metrics necessary for the evaluation of these algorithms , and a taxonomy that presents the various algorithms in a systematic manner . we derive the basic algorithms in each family , explain how they work , and provide results for their theoretical performance . we conclude with empirical results that use hyperspectral imaging data from the hydice and hyperion sensors to illustrate the operation and performance of various detectors .
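the classic reed-xiaoli ( rx ) detector used as a baseline in the letters above scores each pixel by its mahalanobis distance from the background statistics ; under the multivariate-normal background model discussed above , large scores flag anomalies . a compact global-rx sketch follows ( scene-wide mean and covariance ; local-window variants differ only in where the statistics come from , and the regularization constant is an assumption for numerical stability ) .

```python
import numpy as np

def rx_scores(cube):
    """cube: (H, W, B) hyperspectral image -> (H, W) anomaly scores."""
    H, W, B = cube.shape
    X = cube.reshape(-1, B).astype(float)
    mu = X.mean(axis=0)
    # regularize the covariance slightly so its inverse is stable.
    cov = np.cov(X, rowvar=False) + 1e-6 * np.eye(B)
    cov_inv = np.linalg.inv(cov)
    centered = X - mu
    # per-pixel squared Mahalanobis distance: x^T C^{-1} x.
    scores = np.einsum('ij,jk,ik->i', centered, cov_inv, centered)
    return scores.reshape(H, W)
```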
story_separator_special_tag an autonomous unmanned aircraft requires a collision avoidance capability to automatically sense and avoid conflicts , that is , to address safety and flexibility issues in an environment of increasing air traffic densities . in this paper , the problem of solving conflicts among unmanned aircraft agents , that are assumed to fly at the same altitude and time instances , is considered . this paper presents a new functional architecture for an unmanned aircraft collision avoidance system with an approach utilized for deciding the collision criteria upon flight plan sharing , and cooperatively avoids potential conflicts through multi-agent peer-to-peer aircraft negotiation and predefined maneuvering in heading and speed changes . the designed collision avoidance system will allow each aircraft to negotiate with each other to determine a safe and acceptable solution when a potential conflict is detected . story_separator_special_tag complex event recognition applications exhibit various types of uncertainty , ranging from incomplete and erroneous data streams to imperfect complex event patterns . we review complex event recognition techniques that handle , to some extent , uncertainty . we examine techniques based on automata , probabilistic graphical models and first-order logic , which are the most common ones , and approaches based on petri nets and grammars , which are less frequently used . a number of limitations are identified with respect to the employed languages , their probabilistic models and their performance , as compared to the purely deterministic cases . based on those limitations , we highlight promising directions for future work . story_separator_special_tag wearable computers have the potential to act as intelligent agents in everyday life and to assist the user in a variety of tasks , using context to determine how to act . location is the most common form of context used by these agents to determine the user 's task . however , another potential use of location context is the creation of a predictive model of the user 's future movements . we present a system that automatically clusters gps data taken over an extended period of time into meaningful locations at multiple scales . these locations are then incorporated into a markov model that can be consulted for use with a variety of applications in both single-user and collaborative scenarios . story_separator_special_tag at the heart of air traffic management ( atm ) lie the decision support systems ( dst ) that rely upon accurate trajectory prediction to determine what the airspace will look like in the future to make better decisions and advisories . dealing with airspace that is prone to congestion due to environmental factors still remains a challenge , especially when a deterministic approach is used in the trajectory prediction process . in this paper , we describe a novel stochastic trajectory prediction approach for atm that can be used for more efficient and realistic flight planning and to assist airspace flow management , potentially resulting in higher safety , capacity , and efficiency , commensurate with fuel savings , thereby reducing emissions for a better environment . our approach considers airspace as a 3d grid network , where each grid point is a location of a weather observation . we hypothetically build cubes around these grid points , so the entire airspace can be considered as a set of cubes .
each cube is defined by its centroid , the original grid point , and associated weather parameters that remain homogeneous within the cube during a period of time . then , story_separator_special_tag reliable trajectory prediction is paramount in air traffic management ( atm ) as it can increase safety , capacity , and efficiency , and lead to commensurate fuel savings and emission reductions . inherent inaccuracies in forecasting winds and temperatures often result in large prediction errors when a deterministic approach is used . a stochastic approach can address the trajectory prediction problem by taking environmental uncertainties into account and training a model using historical trajectory data along with weather observations . with this approach , weather observations are assumed to be realizations of hidden aircraft positions and the transitions between the hidden segments follow a markov model . however , this approach requires input observations , which are unknown , although the weather parameters overall are known for the pertinent airspace . we address this problem by performing time series clustering on the current weather observations for the relevant sections of the airspace . in this paper , we present a novel time series clustering algorithm that generates an optimal sequence of weather observations used for accurate trajectory prediction in the climb phase of the flight . our experiments use a real trajectory dataset with pertinent weather observations and demonstrate the effectiveness story_separator_special_tag the automatic dependent surveillance broadcast ( ads-b ) system is a key component of cns/atm recommended by the international civil aviation organization ( icao ) as the next generation air traffic control system . ads-b broadcasts identification , positional data , and operation information of an aircraft to other aircraft , ground vehicles and ground stations in the nearby region . this paper explores the ads-b based trajectory prediction and the conflict detection algorithm . the multiple-model based trajectory prediction algorithm leads to an accurate predicted conflict probability at a future forecast time . we propose an efficient and accurate algorithm to calculate conflict probability based on approximation of the conflict zone by a set of blocks . the performance of the proposed algorithms is demonstrated by a numerical simulation of two aircraft encounter scenarios . story_separator_special_tag systems as diverse as genetic networks or the world wide web are best described as networks with complex topology . a common property of many large networks is that the vertex connectivities follow a scale-free power-law distribution . this feature was found to be a consequence of two generic mechanisms : ( i ) networks expand continuously by the addition of new vertices , and ( ii ) new vertices attach preferentially to sites that are already well connected . a model based on these two ingredients reproduces the observed stationary scale-free distributions , which indicates that the development of large networks is governed by robust self-organizing phenomena that go beyond the particulars of the individual systems . story_separator_special_tag the r-tree , one of the most popular access methods for rectangles , is based on the heuristic optimization of the area of the enclosing rectangle in each inner node .
by running numerous experiments in a standardized testbed under highly varying data , queries and operations , we were able to design the r * -tree which incorporates a combined optimization of area , margin and overlap of each enclosing rectangle in the directory . using our standardized testbed in an exhaustive performance comparison , it turned out that the r * -tree clearly outperforms the existing r-tree variants : guttman 's linear and quadratic r-tree and greene 's variant of the r-tree . this superiority of the r * -tree holds for different types of queries and operations , such as map overlay , for both rectangles and multidimensional points in all experiments . from a practical point of view the r * -tree is very attractive because of the following two reasons : ( 1 ) it efficiently supports point and spatial data at the same time and ( 2 ) its implementation cost is only slightly higher than that of other r-trees . story_separator_special_tag this paper develops the multidimensional binary search tree ( or k-d tree , where k is the dimensionality of the search space ) as a data structure for storage of information to be retrieved by associative searches . the k-d tree is defined and examples are given . it is shown to be quite efficient in its storage requirements . a significant advantage of this structure is that a single data structure can handle many types of queries very efficiently . various utility algorithms are developed ; their proven average running times in an n record file are : insertion , o ( log n ) ; deletion of the root , $o(n^{(k-1)/k})$ ; deletion of a random node , o ( log n ) ; and optimization ( guarantees logarithmic performance of searches ) , o ( n log n ) . search algorithms are given for partial match queries with t keys specified [ proven maximum running time of $o(n^{(k-t)/k})$ ] and for nearest neighbor queries [ empirically observed average running time of o ( log n ) . ] these performances far surpass story_separator_special_tag the complexity of the mobility tracking problem in a cellular environment has been characterized under an information-theoretic framework . shannon 's entropy measure is identified as a basis for comparing user mobility models . by building and maintaining a dictionary of individual user 's path updates ( as opposed to the widely used location updates ) , the proposed adaptive on-line algorithm can learn subscribers ' profiles . this technique evolves out of the concepts of lossless compression . the compressibility of the variable-to-fixed length encoding of the acclaimed lempel-ziv family of algorithms reduces the update cost , whereas their built-in predictive power can be effectively used to reduce paging cost . story_separator_special_tag the evolution of many complex systems , including the world wide web , business , and citation networks , is encoded in the dynamic web describing the interactions between the system 's constituents . despite their irreversible and nonequilibrium nature these networks follow bose statistics and can undergo bose-einstein condensation . addressing the dynamical properties of these nonequilibrium systems within the framework of equilibrium quantum gases predicts that the `` first-mover-advantage , '' `` fit-get-rich , '' and `` winner-takes-all '' phenomena observed in competitive systems are thermodynamically distinct phases of the underlying evolving networks .
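the scale-free network abstract above attributes power-law degree distributions to exactly two ingredients , growth and preferential attachment . a minimal generator sketch of that mechanism follows ; the seed graph and the sampling details are implementation choices for illustration , not taken from the paper .

```python
import random

def preferential_attachment(n, m, seed=0):
    """grow an n-vertex graph, attaching each new vertex to m existing
    vertices chosen with probability proportional to their degree."""
    rng = random.Random(seed)
    # start from a small clique on m + 1 vertices.
    edges = [(i, j) for i in range(m + 1) for j in range(i)]
    # a vertex appears in this list once per incident edge, so uniform
    # sampling from it is exactly degree-proportional sampling.
    endpoints = [v for e in edges for v in e]
    for new in range(m + 1, n):
        targets = set()
        while len(targets) < m:          # m distinct attachment targets
            targets.add(rng.choice(endpoints))
        for t in targets:
            edges.append((new, t))
            endpoints.extend((new, t))
    return edges
```

plotting the degree histogram of , say , `preferential_attachment(10000, 2)` reproduces the heavy tail the abstract describes ; removing the degree-proportional choice ( sampling vertices uniformly instead ) destroys it .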
story_separator_special_tag massive online analysis ( moa ) is a software environment for implementing algorithms and running experiments for online learning from evolving data streams . moa includes a collection of offline and online methods as well as tools for evaluation . in particular , it implements boosting , bagging , and hoeffding trees , all with and without naive bayes classifiers at the leaves . moa supports bi-directional interaction with weka , the waikato environment for knowledge analysis , and is released under the gnu gpl license . story_separator_special_tag an algorithm is given for computer control of a digital plotter . the algorithm may be programmed without multiplication or division instructions and is efficient with respect to speed of execution and memory utilization . story_separator_special_tag benchmarking spatiotemporal database systems requires the definition of suitable datasets simulating the typical behavior of moving objects . previous approaches for generating spatiotemporal data do not consider that moving objects often follow a given network . therefore , benchmarks require datasets consisting of such network-based moving objects . in this paper , the most important properties of network-based moving objects are presented and discussed . essential aspects are the maximum speed and the maximum capacity of connections , the influence of other moving objects on the speed and the route of an object , the adequate determination of the start and destination of an object , the influence of external events , and time-scheduled traffic . these characteristics are the basis for the specification and development of a new generator for spatiotemporal data . this generator combines real data ( the network ) with user-defined properties of the resulting dataset . a framework is proposed where the user can control the behavior of the generator by re-defining the functionality of selected object classes . an experimental performance investigation demonstrates that the chosen approach is suitable for generating large data sets . story_separator_special_tag the scalability of increasingly complex air transportation requires better automation support to prevent accidents and assure safety . collaborative control mechanisms for error and conflict detection and prevention are an essential part of this support . an analysis of the network structure of the proposed design for automated conflict detection and resolution for the next generation air transportation system was conducted . this analysis provides insight into the tradeoffs of alternative concepts of collaborative operations , including tradeoffs between expected conflicts , communications requirements , and vulnerability to targeted attack . the resulting design framework provides a structure for the application of network architecture methods to these , and other , techniques for air traffic control , enabling in-depth analysis and evaluation of the resulting system . story_separator_special_tag the objective of this paper is to build an air traffic flow prediction model . the air traffic flow prediction plays a key role in the airspace simulation model and air traffic flow management system . in china , the air traffic information in the regional control centers has not yet been integrated . the information in a single regional control center alone cannot meet the requirements of the current method based on 4-dimensional trajectory prediction . a new method is needed to solve this problem .
a large collection of radar data is stored , but little effort has been made to extract useful information from the database to help in the estimation . data mining is the process of extracting patterns as well as predicting previously unknown trends from large quantities of data . neural networks and statistics are frequently applied to data mining with various objectives . this paper employs neural networks combined with the statistical analysis of historical data to forecast the traffic flow . two models with different types and input data are proposed . the accuracy of the two models is tested and compared to each other using flow data at an arrival fix in story_separator_special_tag major developments relating to the b-tree from early 1979 through the fall of 1986 are presented . this updates the well-known article , `` the ubiquitous b-tree '' by douglas comer ( computing surveys , june 1979 ) . after a basic overview of b and b+ trees , recent research is cited as well as descriptions of nine b-tree variants developed since comer 's article . the advantages and disadvantages of each variant over the basic b-tree are emphasized . also included are a discussion of concurrency control issues in b-trees and a speculation on the story_separator_special_tag software decision support tools that assist controllers with the management of air traffic are dependent upon the ability to accurately predict future aircraft positions . trajectory predictions in en route airspace rely upon the availability of aircraft state , aircraft performance , pilot intent , and atmospheric data . the use of real-time airline information for improving ground-based trajectory predictions has been a recent focus in the development of the center-tracon automation system ( ctas ) at nasa ames . this paper studies the impact of airline flight-planning data on ctas en route climb trajectory prediction accuracy . the climb trajectory synthesis process is first described along with existing input data . flight planning data parameters , available from a typical airline operations center , are then discussed along with their potential usefulness to ctas . results are then presented to show the significant impact of airline-provided takeoff weight , speed-profile , and thrust calibration data on ctas climb trajectory prediction performance . story_separator_special_tag a machine learning approach to trajectory prediction for sequencing and merging of traffic following fixed arrival routes is described and evaluated using actual aircraft trajectory and meteorological data . in the approach a model is trained using historic data to make arrival time predictions . model inputs are the aircraft type , aircraft ground speed and altitude at the start of the arrival route , surface wind , and altitude winds . a stepwise regression method is used to systematically determine the inputs and functions of inputs that are included in the prediction model based on their explanatory power . for the evaluation of the approach a 45 nm fixed arrival route was used that ends at the runway . traffic performed a continuous descent operation . at a prediction horizon of 45 nm the model explained 63 % of the observed variance in the arrival time . the mean absolute time error was 18 s. finally , the prediction model was used to determine the required initial spacing interval between aircraft for continuous descent operation and examine the impact on runway throughput and conflicts .
using the prediction model , throughput increased by up to 4 aircraft per hour story_separator_special_tag clustering trajectory data is an important way to mine hidden information behind moving object sampling data , such as understanding trends in movement patterns , gaining high popularity in geographic information and so on . in the era of ` big data ' , the current approaches for clustering trajectory data generally do not apply because of excessive costs in both scalability and computing performance for trajectory big data . aiming at these problems , this study first proposes a new clustering algorithm for trajectory big data , namely tra-poptics , by modifying a scalable clustering algorithm for point data ( poptics ) . tra-poptics has employed the spatiotemporal distance function and trajectory indexing to support trajectory data . tra-poptics can process the trajectory big data in a distributed manner to achieve great scalability . towards providing a fast solution to clustering trajectory big data , this study has explored the feasibility of utilizing contemporary general-purpose computing on the graphics processing unit ( gpgpu ) . the gpgpu-aided clustering approach parallelized the tra-poptics with the hyper-q feature of the kepler gpu and massive gpu threads . the experimental results indicate that ( 1 ) the tra-poptics algorithm has a comparable clustering story_separator_special_tag timely identification of flight diversions is a crucial aspect of efficient multi-modal transportation . when an airplane diverts , logistics providers must promptly adapt their transportation plans in order to ensure proper delivery despite such an unexpected event . in practice , the different parties in a logistics chain do not exchange real-time information related to flights . this calls for a means to detect diversions that just requires publicly available data , thus being independent of the communication between different parties . the dependence on public data results in a challenge to detect anomalous behavior without knowing the planned flight trajectory . our work addresses this challenge by introducing a prediction model that just requires information on an airplane 's position , velocity , and intended destination . this information is used to distinguish between regular and anomalous behavior . when an airplane displays anomalous behavior for an extended period of time , the model predicts a diversion . a quantitative evaluation shows that this approach is able to detect diverting airplanes with excellent precision and recall even without knowing planned trajectories as required by related research . by utilizing the proposed prediction model , logistics companies gain a story_separator_special_tag we introduce a system for sensing complex social systems with data collected from 100 mobile phones over the course of 9 months . we demonstrate the ability to use standard bluetooth-enabled mobile telephones to measure information access and use in different contexts , recognize social patterns in daily user activity , infer relationships , identify socially significant locations , and model organizational rhythms . story_separator_special_tag this paper presents a comparison between the concepts underlying 4d trajectory based operations ( tbos ) from the double viewpoint of nextgen and sesar programs . in the proposed analysis , motivations justifying the introduction of 4d tbos are presented first .
after that , the different and similar technologies that are being applied to support 4d tbos are discussed . this is followed by a discussion focused on the results obtained from different human-in-the-loop simulation activities . in addition , preliminary flight trials activities , planned and partly executed for concept refinement and final validation , are also discussed . these validation activities , carried out on both nextgen and sesar sides , aim to assess the impact of the aforementioned supporting technologies on both pilots and air traffic controllers . this impact will actually be a key element for the effective implementation of 4d tbos . early benefits identified for sesar and nextgen are described next . finally , some comparisons and preliminary conclusions are presented . story_separator_special_tag in this work , we study a family of random geometric graphs on hyperbolic spaces . in this setting , n points are chosen randomly on a hyperbolic space and any two of them are joined by an edge with probability that depends on their hyperbolic distance , independently of every other pair . in particular , when the positions of the points have been fixed , the distribution over the set of graphs on these points is the boltzmann distribution , where the hamiltonian is given by the sum of weighted indicator functions for each pair of points , with the weight being proportional to a real parameter $\beta > 0$ ( interpreted as the inverse temperature ) as well as to the hyperbolic distance between the corresponding points . this class of random graphs was introduced by krioukov et al . we provide a rigorous analysis of aspects of this model and its dependence on the parameter $\beta$ , verifying some of their observations . we show that a phase transition occurs around $\beta = 1$ . more specifically , we show that when $\beta > 1$ the degree of a typical vertex is bounded in probability ( in story_separator_special_tag clustering algorithms are attractive for the task of class identification in spatial databases . however , the application to large spatial databases raises the following requirements for clustering algorithms : minimal requirements of domain knowledge to determine the input parameters , discovery of clusters with arbitrary shape and good efficiency on large databases . the well-known clustering algorithms offer no solution to the combination of these requirements . in this paper , we present the new clustering algorithm dbscan relying on a density-based notion of clusters which is designed to discover clusters of arbitrary shape . dbscan requires only one input parameter and supports the user in determining an appropriate value for it . we performed an experimental evaluation of the effectiveness and efficiency of dbscan using synthetic data and real data of the sequoia 2000 benchmark . the results of our experiments demonstrate that ( 1 ) dbscan is significantly more effective in discovering clusters of arbitrary shape than the well-known algorithm clarans , and that ( 2 ) dbscan outperforms clarans by a factor of more than 100 in terms of efficiency . story_separator_special_tag discovering co-movement patterns from large-scale trajectory databases is an important mining task and has a wide spectrum of applications . previous studies have identified several types of interesting co-movement patterns and showcased their usefulness . in this paper , we make two key contributions to this research field .
first , we propose a more general co-movement pattern to unify those defined in the past literature . second , we propose two types of parallel and scalable frameworks and deploy them on apache spark . to the best of our knowledge , this is the first work to mine co-movement patterns in real life trajectory databases with hundreds of millions of points . experiments on three real life large-scale trajectory datasets have verified the efficiency and scalability of our proposed solutions . story_separator_special_tag the current air traffic management ( atm ) system worldwide is managing a high ( and growing ) amount of demand that sometimes leads to demand-capacity balancing ( dcb ) issues . these further impose limitations on the atm system that are resolved via airspace management or flow management solutions , including regulations that generate delays ( and costs ) for the entire system . these demand-capacity imbalances are difficult to predict in the pre-tactical phase ( prior to operation ) , as the existing atm information is not accurate enough during this phase . with the aim of overcoming these drawbacks , the atm system is moving towards a new , trajectory-based operations ( tbo ) paradigm , where the trajectory becomes the cornerstone upon which the atm capabilities rely . this transformation , however , requires reliable information available in the pre-tactical phase or , at least , high-fidelity aircraft trajectory prediction capabilities to reach sufficient levels of confidence in the available planning information . in this scenario , the dart ( data-driven aircraft trajectory prediction research ) project from sesar 2020 exploratory research aims at reaching this goal , by means of machine learning and agent-based modeling story_separator_special_tag search operations in databases require special support at the physical level . this is true for conventional databases as well as spatial databases , where typical search operations include the point query ( find all objects that contain a given search point ) and the region query ( find all objects that overlap a given search region ) . more than ten years of spatial database research have resulted in a great variety of multidimensional access methods to support such operations . we give an overview of that work . after a brief survey of spatial data management in general , we first present the class of point access methods , which are used to search sets of points in two or more dimensions . the second part of the paper is devoted to spatial access methods to handle extended objects , such as rectangles or polyhedra . we conclude with a discussion of theoretical and experimental results concerning the relative performance of various approaches . story_separator_special_tag the increasing pervasiveness of location-acquisition technologies ( gps , gsm networks , etc . ) is leading to the collection of large spatio-temporal datasets and to the opportunity of discovering usable knowledge about movement behaviour , which fosters novel applications and services . in this paper , we move towards this direction and develop an extension of the sequential pattern mining paradigm that analyzes the trajectories of moving objects . we introduce trajectory patterns as concise descriptions of frequent behaviours , in terms of both space ( i.e. , the regions of space visited during movements ) and time ( i.e. , the duration of movements ) .
in this setting , we provide a general formal statement of the novel mining problem and then study several different instantiations of different complexity . the various approaches are then empirically evaluated over real data and synthetic benchmarks , comparing their strengths and weaknesses . story_separator_special_tag the technological advances in smartphones and their widespread use have resulted in the big volume and varied types of mobile data which we have today . location prediction through mobile data mining leverages such big data in applications such as traffic planning , location-based advertising , intelligent resource allocation ; as well as in recommender services including the popular apple siri or google now . this paper focuses on the challenging problem of predicting the next location of a mobile user given data on his or her current location . in this work , we propose nextlocation - a personalised mobile data mining framework - that not only uses spatial and temporal data but also other contextual data such as accelerometer , bluetooth and call/sms log . in addition , the proposed framework represents a new paradigm for privacy-preserving next place prediction as the mobile phone data is not shared without user permission . experiments have been performed using data from the nokia mobile data challenge ( mdc ) . the results on mdc data show large variability in predictive accuracy of about 17 % across users . for example , irregular users are very difficult to predict while for more story_separator_special_tag a number of air traffic management decision support tools ( dst ) are being developed to help air traffic managers and controllers improve capacity , efficiency , and safety in the national airspace system . although dst functionality may vary widely , trajectory prediction algorithms can be found at the core of most dst . a methodology is presented for the automated statistical analysis of trajectory prediction accuracy as a function of phase of flight ( level-flight , climb , descent ) and look-ahead time . the methodology is focused on improving trajectory prediction algorithm performance for dst applications such as conflict detection and arrival metering . the methodology has been implemented in software and tested with air traffic data . aggregate trajectory prediction accuracy statistics are computed and displayed in histogram format based on 2,774 large commercial jet flights from five different days of fort worth center air traffic data . the results show that trajectory prediction anomalies can be detected by examining error distributions for large numbers of trajectory predictions . the ability of the trajectory analysis methodology to detect the effects of subsequent changes to the trajectory prediction algorithm and to aircraft performance model parameters was also story_separator_special_tag in order to handle spatial data efficiently , as required in computer aided design and geo-data applications , a database system needs an index mechanism that will help it retrieve data items quickly according to their spatial locations . however , traditional indexing methods are not well suited to data objects of non-zero size located in multidimensional spaces . in this paper we describe a dynamic index structure called an r-tree which meets this need , and give algorithms for searching and updating it .
we present the results of a series of tests which indicate that the structure performs well , and conclude that it is useful for current database systems in spatial applications . story_separator_special_tag this paper presents a model based on a hybrid system to numerically simulate the climbing phase of an aircraft . this model is then used within a trajectory prediction tool . finally , the covariance matrix adaptation evolution strategy ( cma-es ) optimization algorithm is used to tune five selected parameters , and thus improve the accuracy of the model . incorporated within a trajectory prediction tool , this model can be used to derive the order of magnitude of the prediction error over time , and thus the domain of validity of the trajectory prediction . a first validation experiment of the proposed model is based on the errors over time for a one-time trajectory prediction at the take-off of the flight with respect to the default values of the theoretical bada model . this experiment , assuming complete information , also shows the limit of the model . a second experiment presents an on-line trajectory prediction , in which the prediction is continuously updated based on the current aircraft position . this approach raises several issues , for which improvements of the basic model are proposed , and the resulting trajectory prediction tool shows statistically story_separator_special_tag ground-based aircraft trajectory prediction is a critical issue for air traffic management . a safe and efficient prediction is a prerequisite for the implementation of automated tools that detect and solve conflicts between trajectories . moreover , regarding the safety constraints , it could be more reasonable to predict intervals rather than precise aircraft positions . in this paper , a standard point-mass model and statistical regression method is used to predict the altitude of climbing aircraft . in addition to the standard linear regression model , two common non-linear regression methods , neural networks and loess are used . a dataset is extracted from two months of radar and meteorological recordings , and several potential explanatory variables are computed for every sampled climb segment . a principal component analysis allows us to reduce the dimensionality of the problems , using only a subset of principal components as input to the regression methods . the prediction models are scored by performing a 10-fold cross-validation . the statistical regression method appears promising . the experiments show that the proposed regression models are much more efficient than the standard point-mass model . the prediction intervals obtained by our methods have the story_separator_special_tag mining frequent patterns in transaction databases , time-series databases , and many other kinds of databases has been studied popularly in data mining research . most of the previous studies adopt an apriori-like candidate set generation-and-test approach . however , candidate set generation is still costly , especially when there exist prolific patterns and/or long patterns . in this study , we propose a novel frequent pattern tree ( fp-tree ) structure , which is an extended prefix-tree structure for storing compressed , crucial information about frequent patterns , and develop an efficient fp-tree-based mining method , fp-growth , for mining the complete set of frequent patterns by pattern fragment growth .
efficiency of mining is achieved with three techniques : ( 1 ) a large database is compressed into a highly condensed , much smaller data structure , which avoids costly , repeated database scans , ( 2 ) our fp-tree-based mining adopts a pattern fragment growth method to avoid the costly generation of a large number of candidate sets , and ( 3 ) a partitioning-based , divide-and-conquer method is used to decompose the mining task into a set of smaller tasks for mining confined patterns in conditional databases , story_separator_special_tag background and overview . 1. stochastic processes and models . 2. wiener filters . 3. linear prediction . 4. method of steepest descent . 5. least-mean-square adaptive filters . 6. normalized least-mean-square adaptive filters . 7. transform-domain and sub-band adaptive filters . 8. method of least squares . 9. recursive least-square adaptive filters . 10. kalman filters as the unifying bases for rls filters . 11. square-root adaptive filters . 12. order-recursive adaptive filters . 13. finite-precision effects . 14. tracking of time-varying systems . 15. adaptive filters using infinite-duration impulse response structures . 16. blind deconvolution . 17. back-propagation learning . epilogue . appendix a. complex variables . appendix b. differentiation with respect to a vector . appendix c. method of lagrange multipliers . appendix d. estimation theory . appendix e. eigenanalysis . appendix f. rotations and reflections . appendix g. complex wishart distribution . glossary . abbreviations . principal symbols . bibliography . index . story_separator_special_tag to maximize the capacity of airports by optimally allocating available resources such as runways , the arrival times of individual aircraft need to be computed . however , accurately predicting arrival times is difficult because aircraft trajectories are frequently vectored off the standard approach procedures . this paper introduces a new framework for predicting aircraft arrival times by incorporating probabilistic information for the types of trajectory patterns that will be applied by human air traffic controllers . the major patterns of the trajectories are identified , and the probabilities of those patterns are computed based on the patterns of the preceding aircraft . the proposed method is applied to traffic scenarios in real operations to demonstrate its performance . story_separator_special_tag with the recent progress of spatial information technologies and mobile computing technologies , spatio-temporal databases which store information on moving objects including vehicles and mobile users have gained a lot of research interest . in this paper , we propose an algorithm to extract mobility statistics from indexed spatio-temporal datasets for the interactive analysis of huge collections of moving object trajectories . we focus on a mobility statistics value called the markov transition probability , which is based on a cell-based organization of a target space and the markov chain model . the proposed algorithm efficiently computes the specified markov transition probabilities with the help of a spatial index r-tree . we reduce the statistics computation task to a kind of constraint satisfaction problem that uses a spatial index , and utilize internal story_separator_special_tag in several image applications , it is necessary to retrieve specific line segments from a potentially very large set .
in this paper , we consider the problem of indexing straight line segments to enable efficient retrieval of all line segments that ( i ) go through a specified point , or ( ii ) intersect a specified line segment . we propose a data organization , based on the hough transform , that can be used to solve both retrieval problems efficiently . in addition , the proposed structure can be used for approximate retrievals , finding all line segments that pass close to a specified point . we show , through analysis and experiment , that the proposed technique always does as well as or better than retrieval based on minimum bounding rectangles or line segment end-points . story_separator_special_tag a number of emerging applications of data management technology involve the monitoring and querying of large quantities of continuous variables , e.g. , the positions of mobile service users , termed moving objects . in such applications , large quantities of state samples obtained via sensors are streamed to a database . indexes for moving objects must support queries efficiently , but must also support frequent updates . indexes based on minimum bounding regions ( mbrs ) such as the r-tree exhibit high concurrency overheads during node splitting , and each individual update is known to be quite costly . this motivates the design of a solution that enables the b+-tree to manage moving objects . we represent moving-object locations as vectors that are timestamped based on their update time . by applying a novel linearization technique to these values , it is possible to index the resulting values using a single b+-tree that partitions values according to their timestamp and otherwise preserves spatial proximity . we develop algorithms for range and k nearest neighbor queries , as well as continuous queries . the proposal can be grafted into existing database systems cost effectively . an extensive experimental study story_separator_special_tag existing prediction methods in moving objects databases cannot forecast locations accurately if the query time is far away from the current time . even for near future prediction , most techniques assume the trajectory of an object 's movements can be represented by some mathematical formulas of motion functions based on its recent movements . however , an object 's movements are more complicated than what the mathematical formulas can represent . prediction based on an object 's trajectory patterns is a powerful way and has been investigated by several works . but their main interest is how to discover the patterns . in this paper , we present a novel prediction approach , namely the hybrid prediction model , which estimates an object 's future locations based on its pattern information as well as existing motion functions using the object 's recent movements . specifically , an object 's trajectory patterns which have ad-hoc forms for prediction are discovered and then indexed by a novel access method for efficient query processing . in addition , two query processing techniques that can provide accurate results for both near and distant time predictive queries are presented . our extensive experiments story_separator_special_tag in social networks , predicting a user 's location mainly depends on those of his/her friends , where the key lies in how to select his/her most influential friends .
in this article , we analyze the theoretically maximal accuracy of location prediction based on friends ' locations and compare it with the practical accuracy obtained by the state-of-the-art location prediction methods . upon observing a big gap between the theoretical and practical accuracy , we propose a new strategy for selecting influential friends in order to improve the practical location prediction accuracy . specifically , several features are defined to measure the influence of the friends on a user 's location , based on which we put forth a sequential random-walk-with-restart procedure to rank the friends of the user in terms of their influence . by dynamically selecting the top n most influential friends of the user per time slice , we develop a temporal-spatial bayesian model to characterize the dynamics of friends ' influence for location prediction . finally , extensive experimental results on datasets of real social networks demonstrate that the proposed influential friend selection method and temporal-spatial bayesian model can significantly improve the accuracy of location prediction . story_separator_special_tag the problem of indexing time series has attracted much interest . most algorithms used to index time series utilize the euclidean distance or some variation thereof . however , it has been forcefully shown that the euclidean distance is a very brittle distance measure . dynamic time warping ( dtw ) is a much more robust distance measure for time series , allowing similar shapes to match even if they are out of phase in the time axis . because of this flexibility , dtw is widely used in science , medicine , industry and finance . unfortunately , however , dtw does not obey the triangular inequality and thus has resisted attempts at exact indexing . instead , many researchers have introduced approximate indexing techniques or abandoned the idea of indexing and concentrated on speeding up sequential searches . in this work , we introduce a novel technique for the exact indexing of dtw . we prove that our method guarantees no false dismissals and we demonstrate its vast superiority over all competing approaches in the largest and most comprehensive set of time series indexing experiments ever undertaken . story_separator_special_tag we show how to index mobile objects in one and two dimensions using efficient dynamic external memory data structures . the problem is motivated by real life applications in traffic monitoring , intelligent navigation and mobile communications domains . for the one-dimensional case , we give ( i ) a dynamic , external memory algorithm with guaranteed worst case performance and linear space and ( ii ) a practical approximation algorithm also in the dynamic , external memory setting , which has linear space and expected logarithmic query time . we also give an algorithm with guaranteed logarithmic query time for a restricted version of the problem . we present extensions of our techniques to two dimensions . in addition we give a lower bound on the number of i/os needed to answer the d-dimensional problem . initial experimental results and comparisons to traditional indexing approaches are also included . story_separator_special_tag trajectory prediction is the core support tool of collaborative decision making in air traffic control . this paper presents a four-dimensional trajectory prediction model , which is based on history and real-time radar data .
before a flight 's departure , its trajectory is worked out by analyzing its historical flying data . after its departure , the predicted trajectory is updated with the real-time radar data . this model does not rely on aerodynamics and can predict the trajectory for the whole flying process . emulation results show that the prediction of this model approximates actual flying data and can provide better support to air traffic control systems . story_separator_special_tag this paper presents an overview of the mobile data challenge ( mdc ) , a large-scale research initiative aimed at generating innovations around smartphone-based research , as well as community-based evaluation of related mobile data analysis methodologies . first we review the lausanne data collection campaign ( ldcc ) , an initiative to collect a unique , longitudinal smartphone data set as the basis of the mdc . then , we introduce the open and dedicated tracks of the mdc ; describe the specific data sets used in each of them ; and discuss some of the key aspects in order to generate privacy-respecting , challenging , and scientifically relevant mobile data resources for wider use of the research community . the concluding remarks will summarize the paper . story_separator_special_tag this paper deals with the problem of predicting an aircraft trajectory in the vertical plane . a method depending on a small number of starting parameters is introduced and then used on a wide range of cases . the chosen method is based on neural networks . neural networks are trained using a set of real trajectories and then used to forecast new ones . two prediction methods have been developed : the first is able to take real points into account as the aircraft flies to improve precision . the second one predicts trajectories even when the aircraft is not flying . after describing those prediction methods , the results are compared with other forecasting functions . neural networks give better results because they only rely on precisely known parameters . story_separator_special_tag zebranet is a mobile , wireless sensor network in which nodes move throughout an environment working to gather and process information about their surroundings [ 10 ] . as in many sensor or wireless systems , nodes have critical resource constraints such as processing speed , memory size , and energy supply ; they also face special hardware issues such as sensing device sample time , data storage/access restrictions , and wireless transceiver capabilities . this paper discusses and evaluates zebranet 's system design decisions in the face of a range of real-world constraints . impala -- zebranet 's middleware layer -- serves as a light-weight operating system , but also has been designed to encourage application modularity , simplicity , adaptivity , and repairability . impala is now implemented on zebranet hardware nodes , which include a 16-bit microcontroller , a low-power gps unit , a 900mhz radio , and 4mbits of non-volatile flash memory . this paper discusses impala 's operation scheduling and event handling model , and explains how system constraints and goals led to the interface designs we chose between the application , middleware , and firmware layers . we also describe impala 's network interface which unifies story_separator_special_tag in many applications that track and analyze spatiotemporal data , movements obey periodic patterns ; the objects follow the same routes ( approximately ) over regular time intervals .
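a minimal sketch of this idea , under the simplifying assumptions that locations have already been discretized into regions and are sampled once per hour ; the support threshold and the toy data are illustrative .

from collections import Counter

def daily_periodic_pattern(days, min_support=0.7):
    # days : list of day-long sequences , each a list of 24 region ids (one per hour) .
    # for each hour , keep the region visited on at least min_support of the days ,
    # or None if no region is that regular -- a crude form of periodic-pattern mining
    pattern = []
    for hour in range(24):
        region, freq = Counter(day[hour] for day in days).most_common(1)[0]
        pattern.append(region if freq / len(days) >= min_support else None)
    return pattern

weekday = ["home"] * 8 + ["office"] * 9 + ["home"] * 7
gym_day = ["home"] * 8 + ["office"] * 9 + ["gym"] * 2 + ["home"] * 5
days = [weekday] * 4 + [gym_day] * 3 + [["home"] * 24] * 3
print(daily_periodic_pattern(days))
# 'office' (7/10 days) survives as a periodic pattern ; the occasional 'gym' slot (3/10) does not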
for example , people wake up at the same time and follow more or less the same route to their work every day . the discovery of hidden periodic patterns in spatiotemporal data , apart from unveiling important information to the data analyst , can facilitate data management substantially . based on this observation , we propose a framework that analyzes , manages , and queries object movements that follow such patterns . we define the spatiotemporal periodic pattern mining problem and propose an effective and fast mining algorithm for retrieving maximal periodic patterns . we also devise a novel , specialized index structure that can benefit from the discovered patterns to support more efficient execution of spatiotemporal queries . we evaluate our methods experimentally using datasets with object trajectories that exhibit periodicity . story_separator_special_tag in this paper , a stochastic optimal control method is developed for determining three-dimensional conflict-free aircraft trajectories under wind uncertainty . first , a spatially correlated wind model is used to describe the wind uncertainty , and a probabilistic conflict detection algorithm using the generalized polynomial chaos method is proposed . the generalized polynomial chaos algorithm can quantify uncertainties in complex nonlinear dynamical systems with high computational efficiency . in addition , a numerical algorithm that incorporates the generalized polynomial chaos method into the pseudospectral method is proposed to solve the conflict resolution problem as the stochastic optimal control problem . the stochastic optimal control method is combined with the proposed conflict detection algorithm to solve the conflict resolution problem under the wind uncertainty . through illustrative three-dimensional aircraft conflict detection and resolution examples with multiple heterogeneous aircraft , the performance and effectiveness of the proposed conflict detection and resolution algorithms are evaluated and demonstrated . story_separator_special_tag the impact on atm performance of improved trajectory-related information exchange was determined . this was first evaluated on trajectory prediction accuracy with a follow-on impact on conflict detection and resolution and flow management performance . the trajectory prediction model was validated against operational data to ensure validity of the impact of variability in parameters . the distinction between pre- and post-clearance trajectories enabled an assessment of the impact of open versus closed clearances . uncertainty was shown to be reducible to one-third of present levels with closed clearances and improved data exchange . normalized conflict detection performance was sensitive to the transitioning state of flights , significantly more than to airspace . resulting improvements in resolution were shown to reduce conflict-induced perturbations by up to 3.5 nautical miles per flight hour . the combined reduction in uncertainty and conflict-induced perturbations was evaluated against alternative tfm strategies . an example illustrated reductions in fuel of 60 pounds per flight , 2.2 minutes of ground delay and 50 seconds of airborne delay per flight .
story_separator_special_tag the pervasiveness of mobile devices and location based services is leading to an increasing volume of mobility data . this side effect provides the opportunity for innovative methods that analyse the behaviors of movements . in this paper we propose wherenext , which is a method aimed at predicting with a certain level of accuracy the next location of a moving object . the prediction uses previously extracted movement patterns named trajectory patterns , which are a concise representation of behaviors of moving objects as sequences of regions frequently visited with a typical travel time . a decision tree , named t-pattern tree , is built and evaluated with a formal training and test process . the tree is learned from the trajectory patterns that hold a certain area and it may be used as a predictor of the next location of a new trajectory finding the best matching path in the tree . three different best matching methods to classify a new moving object are proposed and their impact on the quality of prediction is studied extensively . using trajectory patterns as predictive rules has the following implications : ( i ) the learning depends on the movement of all available story_separator_special_tag several schemes for the linear mapping of a multidimensional space have been proposed for various applications , such as access methods for spatio-temporal databases and image compression . in these applications , one of the most desired properties from such linear mappings is clustering , which means the locality between objects in the multidimensional space being preserved in the linear space . it is widely believed that the hilbert space-filling curve achieves the best clustering ( abel and mark , 1990 ; jagadish , 1990 ) . we analyze the clustering property of the hilbert space-filling curve by deriving closed-form formulas for the number of clusters in a given query region of an arbitrary shape ( e.g. , polygons and polyhedra ) . both the asymptotic solution for the general case and the exact solution for a special case generalize previous work . they agree with the empirical results that the number of clusters depends on the hypersurface area of the query region and not on its hypervolume . we also show that the hilbert curve achieves better clustering than the z curve . from a practical point of view , the formulas given provide a simple measure that can story_separator_special_tag recent advances in wireless sensors and position technology provide us with unprecedented amounts of moving object data . the volume of geospatial data gathered from moving objects defies human ability to analyze the stream of input data . therefore , new methods for mining and digesting of moving object data are urgently needed . one of the popular services available for moving objects is the prediction of the unknown location of an object . in this paper we present a new method for predicting the location of a moving object . our method uses the past trajectory of the object and combines it with movement rules discovered in the moving objects database . our original contribution includes the formulation of the location prediction model , the design of an efficient algorithm for mining movement rules , the proposition of four strategies for movement rule matching with respect to a given object trajectory , and the experimental evaluation of the proposed model .
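the rule-matching step described above might look like the sketch below , which assumes rules of the form ( head sequence , next region , confidence ) have already been mined ; taking the maximum confidence per destination and normalizing is just one plausible matching strategy , not necessarily one of the four proposed in the paper .

rules = [
    (("r1", "r2"), "r3", 0.8),  # after visiting r1 then r2 , objects tend to move to r3
    (("r2",), "r4", 0.6),
    (("r5",), "r1", 0.9),
]

def predict_next(trajectory, rules):
    # match rule heads against the tail of the trajectory and combine the
    # confidences of all applicable rules into a probabilistic location model
    scores = {}
    for head, nxt, conf in rules:
        if tuple(trajectory[-len(head):]) == head:
            scores[nxt] = max(scores.get(nxt, 0.0), conf)
    total = sum(scores.values())
    return {loc: s / total for loc, s in scores.items()} if scores else {}

print(predict_next(["r5", "r1", "r2"], rules))  # {'r3': ~0.57, 'r4': ~0.43}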
story_separator_special_tag advances in wireless and mobile technology flood us with volumes of moving object data that preclude all means of manual data processing . the volume of data gathered from position sensors of mobile phones , pdas , or vehicles , defies human ability to analyze the stream of input data . on the other hand , vast amounts of gathered data hide interesting and valuable knowledge patterns describing the behavior of moving objects . thus , new algorithms for mining moving object data are required to unearth this knowledge . an important function of the mobile objects management system is the prediction of the unknown location of an object . in this paper we introduce a data mining approach to the problem of predicting the location of a moving object . we mine the database of moving object locations to discover frequent trajectories and movement rules . then , we match the trajectory of a moving object with the database of movement rules to build a probabilistic model of object location . experimental evaluation of the proposal reveals prediction accuracy close to 80 % . our original contribution includes the elaboration on the location prediction model , the design of story_separator_special_tag several different algorithms have been proposed for time registering a test pattern and a concatenated ( isolated word ) sequence of reference patterns for automatic connected-word recognition . these algorithms include the two-level , dynamic programming algorithm , the sampling approach and the level-building approach . in this paper , we discuss the theoretical differences and similarities among the various algorithms . an experimental comparison of these algorithms for a connected-digit recognition task is also given . the comparison shows that for typical applications , the level-building algorithm performs better than either the two-level dp matching or the sampling algorithm . story_separator_special_tag spatio-temporal , geo-referenced datasets are growing rapidly , and will grow even more in the near future , due to both technological and social/commercial reasons . from the data mining viewpoint , spatio-temporal trajectory data introduce new dimensions and , correspondingly , novel issues in performing the analysis tasks . in this paper , we consider the clustering problem applied to the trajectory data domain . in particular , we propose an adaptation of a density-based clustering algorithm to trajectory data based on a simple notion of distance between trajectories . then , a set of experiments on synthesized data is performed in order to test the algorithm and to compare it with other standard clustering approaches . finally , a new approach to the trajectory clustering problem , called temporal focussing , is sketched , having the aim of exploiting the intrinsic semantics of the temporal dimension to improve the quality of trajectory clustering . story_separator_special_tag the proliferation and ubiquity of temporal data across many disciplines have generated substantial interest in the analysis and mining of time series . clustering is one of the most popular data mining methods , not only due to its exploratory power , but also as a preprocessing step or subroutine for other techniques . in this paper , we describe k-shape , a novel algorithm for time-series clustering . k-shape relies on a scalable iterative refinement procedure , which creates homogeneous and well-separated clusters .
as its distance measure , k-shape uses a normalized version of the cross-correlation measure in order to consider the shapes of time series while comparing them . based on the properties of that distance measure , we develop a method to compute cluster centroids , which are used in every iteration to update the assignment of time series to clusters . an extensive experimental evaluation against partitional , hierarchical , and spectral clustering methods , with the most competitive distance measures , showed the robustness of k-shape . overall , k-shape emerges as a domain-independent , highly accurate , and efficient clustering approach for time series with broad applications . story_separator_special_tag focus on movement data has increased as a consequence of the larger availability of such data due to current gps , gsm , rfid , and sensor techniques . in parallel , interest in movement has shifted from raw movement data analysis to more application-oriented ways of analyzing segments of movement suitable for the specific purposes of the application . this trend has promoted semantically rich trajectories , rather than raw movement , as the core object of interest in mobility studies . this survey provides the definitions of the basic concepts about mobility data , an analysis of the issues in mobility data management , and a survey of the approaches and techniques for : ( i ) constructing trajectories from movement tracks , ( ii ) enriching trajectories with semantic information to enable the desired interpretations of movements , and ( iii ) using data mining to analyze semantic trajectories and extract knowledge about their characteristics , in particular the behavioral patterns of the moving objects . last but not least , the article surveys the new privacy issues that arise due to the semantic aspects of trajectories . story_separator_special_tag we present a system for online monitoring of maritime activity over streaming positions from numerous vessels sailing at sea . the system employs an online tracking module for detecting important changes in the evolving trajectory of each vessel across time , and thus can incrementally retain concise , yet reliable summaries of its recent movement . in addition , thanks to its complex event recognition module , this system can also offer instant notification to marine authorities regarding emergency situations , such as suspicious moves in protected zones , or package picking at open sea . not only did our extensive tests validate the performance , efficiency , and robustness of the system against scalable volumes of real-world and synthetically enlarged datasets , but its deployment against online feeds from vessels has also confirmed its capabilities for effective , real-time maritime surveillance . story_separator_special_tag this text integrates different mobility data handling processes , from database management to multi-dimensional analysis and mining , into a unified presentation driven by the spectrum of requirements raised by real-world applications . it presents a step-by-step methodology to understand and exploit mobility data : collecting and cleansing data , storage in moving object database ( mod ) engines , indexing , processing , analyzing and mining mobility data . emerging issues , such as semantic and privacy-aware querying and mining as well as distributed data processing , are also covered . theoretical presentation is smoothly interchanged with hands-on exercises and case studies involving an actual mod engine .
the authors are established experts who address both theoretical and practical dimensions of the field but also present valuable prototype software . the background context , clear explanations and sample exercises make this an ideal textbook for graduate students studying database management , data mining and geographic information systems . story_separator_special_tag during the past few decades , a number of effective methods for indexing , query processing , and knowledge discovery in moving object databases have been proposed . an interesting research direction that has recently emerged handles semantics of movement instead of raw spatio-temporal data . semantic annotations , such as stop , move , at home , shopping , driving , and so on , are either declared by the users ( e.g. , through social network apps ) or automatically inferred by some annotation method and are typically presented as textual counterparts along with spatial and temporal information of raw trajectories . it is natural to argue that such spatio-temporal-textual sequences , called semantic trajectories , form a realistic representation model of the complex everyday life ( hence , mobility ) of individuals . towards handling semantic trajectories of moving objects in semantic mobility databases , the lack of real datasets leads to the need to design realistic simulators . in the context of the above discussion , the goal of this work is to realistically simulate the mobility life of a large-scale population of moving objects in an urban environment . two simulator variations are presented : story_separator_special_tag this tutorial provides an overview of the basic theory of hidden markov models ( hmms ) as originated by l.e . baum and t. petrie ( 1966 ) and gives practical details on methods of implementation of the theory along with a description of selected applications of the theory to distinct problems in speech recognition . results from a number of original sources are combined to provide a single source of acquiring the background required to pursue further this area of research . the author first reviews the theory of discrete markov chains and shows how the concept of hidden states , where the observation is a probabilistic function of the state , can be used effectively . the theory is illustrated with two simple examples , namely coin-tossing , and the classic balls-in-urns system . three fundamental problems of hmms are noted and several practical techniques for solving these problems are given . the various types of hmms that have been studied , including ergodic as well as left-right models , are described . story_separator_special_tag ground-based aircraft trajectory prediction is a critical and fundamental issue for air traffic management decision support tools ( dst ) . moreover , regarding the safety constraints , it could be more reasonable to predict intervals rather than precise aircraft positions . with the presence of uncertainties in the trajectory prediction models and in order to have a meaningful trajectory prediction , a statistical model , to estimate these uncertainties , is required . obtaining representative statistical measures of these uncertainties is an intensive process that requires data collection and analysis of a large number of aircraft trajectories . a kinematic stochastic model was used ; associated with a probabilistic performance model , it captures the variability associated with the execution of a flight phase .
together with the monte carlo method it was possible to reproduce the trajectory of an aircraft with multiple possibilities and combinations for the desired time . the statistical performance model was developed from real aircraft data , obtained from ads-b receptors , and it is dependent on the type and flight phase of the aircraft . flight phases were identified using the viterbi algorithm . the results were promising ; most of the story_separator_special_tag the coming years will witness dramatic advances in wireless communications as well as positioning technologies . as a result , tracking the changing positions of objects capable of continuous movement is becoming increasingly feasible and necessary . the present paper proposes a novel , r*-tree based indexing technique that supports the efficient querying of the current and projected future positions of such moving objects . the technique is capable of indexing objects moving in one- , two- , and three-dimensional space . update algorithms enable the index to accommodate a dynamic data set , where objects may appear and disappear , and where changes occur in the anticipated positions of existing objects . a comprehensive performance study is reported . story_separator_special_tag the design and analysis of spatial data structures ( addison-wesley ) . story_separator_special_tag in this final installment of the paper we consider the case where the signals or the messages or both are continuously variable , in contrast with the discrete nature assumed until now . to a considerable extent the continuous case can be obtained through a limiting process from the discrete case by dividing the continuum of messages and signals into a large but finite number of small regions and calculating the various parameters involved on a discrete basis . as the size of the regions is decreased these parameters in general approach as limits the proper values for the continuous case . there are , however , a few new effects that appear and also a general change of emphasis in the direction of specialization of the general results to particular cases . story_separator_special_tag we propose a data model for representing moving objects in database systems . it is called the moving objects spatio-temporal ( most ) data model .
we also propose future temporal logic ( ftl ) as the query language for the most model , and devise an algorithm for processing ftl queries in most . story_separator_special_tag research into air traffic control automation at ames research center has led to the development of the center-tracon automation system ( ctas ) . the component of ctas used in the tracon ( i.e . terminal ) area is the final approach spacing tool ( fast ) . fast is designed to help air traffic controllers produce a safe , efficient , and expeditious flow of traffic . a key performance factor of the ctas system is the ability to calculate accurate trajectories . the trajectories in ctas for aircraft in both the terminal and en-route areas are calculated in the trajectory synthesizer ( ts ) . this paper describes the trajectory generation algorithm in the terminal area and suggests some modifications that will improve its performance . trajectories calculated from the current algorithm and a modified algorithm adapted from a validated en-route ts algorithm are compared . both algorithms are very accurate ( four to six percent error ) when calculating estimated time of arrival compared to the actual time of arrival . the current algorithm has large ( greater than 10 percent ) differences from the validated en-route algorithm in calculating advisories , such as speed changes . story_separator_special_tag the weak connectivity of a random net is defined and computed by an approximation method as a function of a , the axone density . it is shown that it rises rapidly with a , attaining 0.8 of its asymptotic value ( unity ) for a = 2 , where the number of neurons in the net is arbitrarily large . the significance of this parameter is interpreted also in terms of the maximum expected spread of an epidemic under certain conditions . story_separator_special_tag trajectory prediction is an important technology for ensuring safety and efficiency of air traffic . hybrid estimation algorithm and intent inference algorithm are usually used to make long-term probabilistic trajectory prediction . in this paper , data mining algorithms are used to process the historical radar data and to abstract a typical trajectory library . an improved trajectory prediction algorithm is proposed based on the typical trajectory , which is used as the intent information to update the transition probability matrix , and is also used to propagate the nominal trajectory instead of the flight plan path . the prediction performance of the proposed algorithm is tested using real radar data from north china air traffic management bureau . the simulation results show that the improved algorithm has a better prediction performance and the prediction accuracy is improved by 10 % at most . story_separator_special_tag trajectory prediction capabilities are an essential building block for most if not all air traffic management decision support tools ( dsts ) . dst applications range from en route to terminal operations with advisories ranging from passive flow suggestions to active clearance/instructions . many past dsts have been fielded with their own unique trajectory prediction capability . the objective of this paper is to identify significant performance factors and design considerations for developing a common trajectory prediction capability . a system engineering approach is used to resolve key design issues and tradeoffs such as the balance between prediction accuracy and computational speed for a variety of dst applications .
controller intent uncertainty , the major source of prediction error , is mitigated by the control advisories of advanced dsts that close the control loop . key aspects of a common trajectory prediction module are presented including an approach to dynamically adapt the performance to support a range of dst applications . the characteristics of different aircraft performance models , the flight path integration logic and software implementation issues are also discussed . story_separator_special_tag time-parameterized queries ( tp queries for short ) retrieve ( i ) the actual result at the time that the query is issued , ( ii ) the validity period of the result given the current motion of the query and the database objects , and ( iii ) the change that causes the expiration of the result . due to the highly dynamic nature of several spatio-temporal applications , tp queries are important both as standalone methods , as well as building blocks of more complex operations . however , little work has been done towards their efficient processing . in this paper , we propose a general framework that covers time-parameterized variations of the most common spatial queries , namely window queries , k-nearest neighbors and spatial joins . in particular , each of these tp queries is reduced to nearest neighbor search where the distance functions are defined according to the query type . this reduction allows the application and extension of well-known branch and bound techniques to the current problem . the proposed methods can be applied with mobile queries , mobile objects or both , given a suitable indexing method . our experimental evaluation is story_separator_special_tag selectivity estimation - the problem of estimating the result size of queries - is a fundamental problem in databases . accurate estimation of query selectivity involving multiple correlated attributes is especially challenging . poor cardinality estimates could result in the selection of bad plans by the query optimizer . recently , deep learning has been applied to this problem with promising results . however , many of the proposed approaches often struggle to provide accurate results for multi-attribute queries involving a large number of predicates and low selectivity . in this paper , we propose two complementary approaches that are effective for this scenario . our first approach models selectivity estimation as a density estimation problem where one seeks to estimate the joint probability distribution from a finite number of samples . we leverage techniques from neural density estimation to build an accurate selectivity estimator . the key idea is to decompose the joint distribution into a set of tractable conditional probability distributions such that they satisfy the autoregressive property . our second approach formulates selectivity estimation as a supervised deep learning problem that predicts the selectivity of a given query . we describe how to extend our algorithms story_separator_special_tag conventional spatial queries are usually meaningless in dynamic environments since their results may be invalidated as soon as the query or data objects move . in this paper we formulate two novel query types , time parameterized and continuous queries , applicable in such environments . a time-parameterized query retrieves the actual result at the time when the query is issued , the expiry time of the result given the current motion of the query and database objects , and the change that causes the expiration .
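the three components of a time-parameterized answer can be pictured as a simple record ; the field names below are hypothetical , not taken from the paper .

from dataclasses import dataclass
from typing import Any

@dataclass
class TPResult:
    result: Any         # the answer valid at query time , e.g. the current nearest neighbour
    expiry_time: float  # instant at which the answer stops being valid , given current motions
    change: Any         # the object or event whose movement invalidates the answer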
a continuous query retrieves tuples of the form , where each result is accompanied by a future interval , during which it is valid . we study time-parameterized and continuous versions of the most common spatial queries ( i.e. , window queries , nearest neighbors , spatial joins ) , proposing efficient processing algorithms and accurate cost models . story_separator_special_tag existing methods for prediction in spatio-temporal databases assume that objects move according to linear functions . this severely limits their applicability , since in practice movement is more complex , and individual objects may follow drastically different motion patterns . in order to overcome these problems , we first introduce a general framework for monitoring and indexing moving objects , where ( i ) each object computes individually the function that accurately captures its movement and ( ii ) a server indexes the object locations at a coarse level and processes queries using a filter-refinement mechanism . our second contribution is a novel recursive motion function that supports a broad class of non-linear motion patterns . the function does not presume any a-priori movement but can postulate the particular motion of each object by examining its locations at recent timestamps . finally , we propose an efficient indexing scheme that facilitates the processing of predictive queries without false misses . story_separator_special_tag a predictive spatio-temporal query retrieves the set of moving objects that will intersect a query window during a future time interval . currently , the only access method for processing such queries in practice is the tpr-tree . in this paper we first perform an analysis to determine the factors that affect the performance of predictive queries and show that several of these factors are not considered by the tpr-tree , which uses the insertion/deletion algorithms of the r*-tree designed for static data . motivated by this , we propose a new index structure called the tpr*-tree , which takes into account the unique features of dynamic objects through a set of improved construction algorithms . in addition , we provide cost models that determine the optimal performance achievable by any data-partition spatio-temporal access method . using experimental comparison , we illustrate that the tpr*-tree is nearly optimal and significantly outperforms the tpr-tree under all conditions . story_separator_special_tag this paper considers the problem of short to mid-term aircraft trajectory prediction , that is , the estimation of where an aircraft will be located over a 10-30 min time horizon . such a problem is central in decision support tools , especially in conflict detection and resolution algorithms . it also appears when an air traffic controller observes traffic on the radar screen and tries to identify convergent aircraft , which may be in conflict in the near future . an innovative approach for aircraft trajectory prediction is presented in this paper . this approach is based on local linear functional regression that considers data preprocessing , localizing and solving linear regression using wavelet decomposition . this algorithm takes into account only past radar tracks , and does not use any physical or aeronautical parameters . this approach has been successfully applied to aircraft trajectories between several airports on a data set covering one year of air traffic over france . the method is intrinsic and independent of airspace structure .
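as a much simplified stand-in for the method above , the sketch below fits an ordinary least-squares line per coordinate to a recent radar track and extrapolates it ; the actual approach localizes the regression and operates on wavelet coefficients , neither of which is shown here .

import numpy as np

def extrapolate(track_t, track_xy, horizon):
    # fit one least-squares line per coordinate over the recent track and
    # extrapolate 'horizon' seconds ahead of the last observation
    t = np.asarray(track_t, dtype=float)
    xy = np.asarray(track_xy, dtype=float)  # shape (n, 2)
    coeffs = [np.polyfit(t, xy[:, d], deg=1) for d in range(2)]
    return np.array([np.polyval(c, t[-1] + horizon) for c in coeffs])

t = [0, 60, 120, 180]
xy = [(0.0, 0.0), (1.0, 0.5), (2.0, 1.0), (3.0, 1.5)]
print(extrapolate(t, xy, horizon=600))  # ~[13. 6.5] : predicted position 10 minutes ahead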
story_separator_special_tag dynamic attributes are attributes that change continuously over time making it impractical to issue explicit updates for every change . in this paper , we adapt a variant of the quadtree structure to solve the problem of indexing dynamic attributes . the approach is based on the key idea of using a linear function of time for each dynamic attribute that allows us to predict its value in the future . we contribute an algorithm for regenerating the quadtree-based index periodically that minimizes cpu and disk access cost . we also provide an experimental study of performance focusing on query processing and index update overheads . story_separator_special_tag addresses a comprehensive design issue of a centralized communication network with logical star-star topology : each established hub is connected to the center node via its own cable route which may pass through a number of sites , and so is the connection from a user to its destination hub . the study distinguishes itself from other network design studies in the literature by explicitly addressing the reality that the cable can be installed only in the conduit , and some hub devices such as line concentrators and multiplexers , if properly installed , would provide more economical cable paths . a cost minimization model is first constructed to cover three types of decisions in one setting : locating hubs , placing the conduit system , and installing cable therein . it is then formulated as a variant of the classical network design model , allowing the incorporation of the well-known dual-ascent solution strategy . despite the complexity inherent to the design problem , the performance of the proposed solution heuristic is shown via the extensive computational experiments with large-scale test problems to be very satisfactory in both speed and quality of the solutions generated . story_separator_special_tag aircraft climb trajectories are difficult to predict , and large errors in these predictions reduce the potential operational benefits of some advanced concepts in the next generation air transportation system . an algorithm that dynamically adjusts modeled aircraft weights based on observed track data to improve the accuracy of trajectory predictions for climbing flights has been developed . in real-time evaluation with actual fort worth center traffic , the algorithm decreased the altitude root-mean-square error by about 20 % . it also reduced the root-mean-square error of predicted time at top of climb by the same amount . story_separator_special_tag forecasting the future positions of mobile users is a valuable task , allowing us to efficiently operate a myriad of different applications which need this type of information . we propose myway , a prediction system which exploits the individual systematic behaviors modeled by mobility profiles to predict human movements . myway provides three strategies : the individual strategy uses only the user 's individual mobility profile , the collective strategy takes advantage of all users ' individual systematic behaviors , and the hybrid strategy is a combination of the previous two . a key point is that myway only requires the sharing of individual mobility profiles , a concise representation of the user 's movements , instead of raw trajectory data revealing the detailed movements of the users . we evaluate the prediction performance of our proposal through extensive experimentation on large real-world data .
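a minimal sketch of the hybrid strategy : match the ongoing trip against the user 's own profiles first and fall back to the profiles pooled from all users ; trips are reduced to one-dimensional position sequences and the distance is a naive placeholder , so everything here is illustrative .

def best_match(trip, profiles, threshold):
    # return the profile whose prefix is closest to the ongoing trip ,
    # or None if none is close enough
    def dist(a, b):
        n = min(len(a), len(b))
        return sum(abs(p - q) for p, q in zip(a[:n], b[:n])) / n
    d, p = min((dist(trip, p), p) for p in profiles)
    return p if d <= threshold else None

def predict_hybrid(trip, own_profiles, all_profiles, threshold=1.0):
    # individual strategy first , collective strategy as the fallback
    match = best_match(trip, own_profiles, threshold) or \
            best_match(trip, all_profiles, threshold)
    return match[len(trip)] if match and len(match) > len(trip) else None

own = [[0, 1, 2, 3, 4]]
others = [[0, 1, 2, 9, 9], [5, 5, 5, 5, 5]]
print(predict_hybrid([0, 1, 2], own, others))  # 3 : the user 's own routine continues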
the results highlight that the synergy between the individual and collective knowledge is the key to better prediction and allows the system to outperform the state-of-the-art methods . story_separator_special_tag as mobile devices proliferate and networks become more location-aware , the corresponding growth in spatio-temporal data will demand analysis techniques to mine patterns that take into account the semantics of such data . association rule mining has been one of the more extensively studied data mining techniques , but it considers discrete transactional data ( supermarket or sequential ) . most attempts to apply this technique to spatial-temporal domains map the data to transactions , thus losing the spatio-temporal characteristics . we provide a comprehensive definition of spatio-temporal association rules ( stars ) that describe how objects move between regions over time . we define support in the spatio-temporal domain to effectively deal with the semantics of such data . we also introduce other patterns that are useful for mobility data ; stationary regions and high traffic regions . the latter consists of sources , sinks and thoroughfares . these patterns describe important temporal characteristics of regions and we show that they can be considered as special stars . we provide efficient algorithms to find these patterns by exploiting several pruning properties . story_separator_special_tag the probability of error in decoding an optimal convolutional code transmitted over a memoryless channel is bounded from above and below as a function of the constraint length of the code . for all but pathological channels the bounds are asymptotically ( exponentially ) tight for rates above r_0 , the computational cutoff rate of sequential decoding . as a function of constraint length the performance of optimal convolutional codes is shown to be superior to that of block codes of the same length , the relative improvement increasing with rate . the upper bound is obtained for a specific probabilistic nonsequential decoding algorithm which is shown to be asymptotically optimum for rates above r_0 and whose performance bears certain similarities to that of sequential decoding algorithms . story_separator_special_tag the correlated exploitation of heterogeneous data sources offering very large archival and streaming data is important to increasing the accuracy of computations when analysing and predicting future states of moving entities . aiming to significantly advance the capacities of systems to promote safety and effectiveness of critical operations for large numbers of moving entities in large geographical areas , this paper describes progress achieved towards time critical big data analytics solutions in user-defined challenges concerning moving entities in the air-traffic management and maritime domains . specifically , the objective of this paper is to report progress and present further research challenges concerning data integration and management , predictive analytics for trajectory and events forecasting , and visual analytics . story_separator_special_tag the research presented in this paper aims to show the deployment and use of advanced technologies towards processing surveillance data for the detection of events , contributing to maritime situation awareness via trajectory detection , synopses generation and semantic enrichment of trajectories .
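synopses of the kind mentioned above can be approximated very crudely by keeping only the points where the course changes noticeably ; the turn threshold and the toy track are illustrative .

import math

def synopsis(points, turn_threshold_deg=15.0):
    # compress a trajectory by dropping points on straight runs and keeping
    # the endpoints and every point where the course changes sharply
    def course(p, q):
        return math.degrees(math.atan2(q[1] - p[1], q[0] - p[0]))
    kept = [points[0]]
    for i in range(1, len(points) - 1):
        turn = abs(course(points[i - 1], points[i]) - course(points[i], points[i + 1]))
        if min(turn, 360 - turn) > turn_threshold_deg:
            kept.append(points[i])
    kept.append(points[-1])
    return kept

track = [(0, 0), (1, 0), (2, 0), (3, 0), (3, 1), (3, 2)]
print(synopsis(track))  # [(0, 0), (3, 0), (3, 2)] : straight runs collapsed , the turn kept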
we first introduce the context of the maritime domain and then the main principles of the big data architecture developed so far within the european-funded h2020 datacron project . from the integration of large maritime trajectory datasets , to the generation of synopses and the detection of events , the main functions of the datacron architecture are developed and discussed . the potential for detection and forecasting of complex events at sea is illustrated by preliminary experimental results . story_separator_special_tag location prediction has attracted much attention due to its important role in many location-based services , such as food delivery , taxi-service , real-time bus system , and advertisement posting . traditional prediction methods often cluster track points into regions and mine movement patterns within the regions . such methods lose information of points along the road and cannot meet the demand of specific services . moreover , traditional methods utilizing classic models may not perform well with long location sequences . in this paper , a spatial-temporal-semantic neural network algorithm ( sts-lstm ) has been proposed , which includes two steps . first , the spatial-temporal-semantic feature extraction algorithm ( sts ) is used to convert the trajectory to location sequences with fixed and discrete points in the road networks . the method can take advantage of points along the road and can transform trajectory into model-friendly sequences . then , a long short-term memory ( lstm ) -based model is constructed to make further predictions , which can better deal with long location sequences . experimental results on two real-world datasets show that sts-lstm has stable and higher prediction accuracy over traditional feature extraction and model building story_separator_special_tag conflict avoidance ( ca ) plays a crucial role in guaranteeing the airspace safety . the current approaches , mostly focusing on a short-term situation which eliminates conflicts via local adjustment , cannot provide a global solution . recently , long-term conflict avoidance approaches , which are proposed to provide solutions via strategically planning traffic flow from a global view , have attracted more attention . with consideration of the situation in china , there are thousands of flights per day and the air route network is large and complex , which makes the long-term problem a large-scale combinatorial optimization problem with complex constraints . to minimize the risk of premature convergence being faced by current approaches and obtain higher quality solutions , in this work , we present an effective strategic framework based on a memetic algorithm ( ma ) , which can markedly improve search capability via a combination of population-based global search and local improvements made by individuals . in addition , a specially designed local search operator and an adaptive local search frequency strategy are proposed to improve the solution quality . furthermore , a fast genetic algorithm ( ga ) is story_separator_special_tag in this paper we explore the application of travel-speed prediction to query processing in moving objects databases . we propose to revise the motion plans of moving objects using the predicted travel-speeds . this revision occurs before answering queries . we develop three methods of doing this . these methods differ in the time when the motion plans are revised , and which of them are revised .
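a minimal sketch of revising a motion plan with a predicted travel-speed before a query is answered , for a one-dimensional road and a single global slowdown factor ; the plan format and the numbers are illustrative .

def position_at(plan, speed_factor, t):
    # plan : list of (t_start , planned_speed) segments , object starting at x = 0 .
    # planned speeds are scaled by a predicted travel-speed factor , and the revised
    # position is obtained by integrating the scaled speeds up to time t
    x = 0.0
    for (t0, v), (t1, _) in zip(plan, plan[1:] + [(float("inf"), 0.0)]):
        if t <= t0:
            break
        x += v * speed_factor * (min(t, t1) - t0)
    return x

plan = [(0, 1.0), (100, 2.0)]  # planned : 1 unit/s , then 2 units/s from t = 100
print(position_at(plan, speed_factor=0.5, t=150))  # 100.0 , instead of the planned 200.0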
we analyze the three methods theoretically and experimentally . story_separator_special_tag mobile objects have become ubiquitous in our everyday lives , ranging from cellular phones to sensors , therefore , analyzing and mining mobile data becomes an interesting problem with great practical importance . for instance , by finding trajectory patterns of the mobile clients , the mobile communication network can allocate resources more efficiently . however , due to the limited power of the mobile devices , we are only able to obtain the imprecise location of a mobile object at a given time . sequential patterns are a popular data mining model . by applying the sequential pattern model on the set of imprecise trajectories of the mobile objects , we may uncover important information or further our understanding of the inherent characteristics of the mobile objects , e.g. , constructing a classifier based on the discovered patterns or using the patterns to improve the accuracy of location prediction . since the input data is highly imprecise , it may not be possible to directly apply any existing sequential pattern discovery algorithm to the problem in this paper . thus , we propose the model of the trajectory patterns and a novel measure to represent the expected occurrences of story_separator_special_tag terminal-area aircraft intent inference ( t-aii ) is a prerequisite to detect and avoid potential aircraft conflict in the terminal airspace . t-aii challenges the state-of-the-art aii approaches due to the uncertainties of the air traffic situation , in particular due to the undefined flight routes and frequent maneuvers . in this paper , a novel t-aii approach is introduced to address the limitations by solving the problem with two steps : intent modeling and intent inference . in the modeling step , an online trajectory clustering procedure is designed for recognizing the real-time available routes in place of the missing planned routes . in the inference step , we then present a probabilistic t-aii approach based on the multiple flight attributes to improve the inference performance in maneuvering scenarios . the proposed approach is validated with real radar trajectory and flight attributes data of 34 days collected from chengdu terminal area in china . preliminary results show the efficacy of the presented approach . story_separator_special_tag mobility prediction is one of the most essential issues that need to be explored for mobility management in mobile computing systems . in this paper , we propose a new algorithm for predicting the next intercell movement of a mobile user in a personal communication systems network . in the first phase of our three-phase algorithm , user mobility patterns are mined from the history of mobile user trajectories . in the second phase , mobility rules are extracted from these patterns , and in the last phase , mobility predictions are accomplished by using these rules . the performance of the proposed algorithm is evaluated through simulation as compared to two other prediction methods . the performance results obtained in terms of precision and recall indicate that our method can make more accurate predictions than the other methods . story_separator_special_tag the increasing pervasiveness of location-acquisition technologies ( gps , gsm networks , etc .
) enables people to conveniently log their location history into spatial-temporal data , thus giving rise to the necessity as well as opportunity to discover valuable knowledge from this type of data . in this paper , we propose the novel notion of individual life pattern , which captures an individual 's general life style and regularity . concretely , we propose the life pattern normal form ( the lp-normal form ) to formally describe which kind of life regularity can be discovered from location history ; then we propose the lp-mine framework to effectively retrieve life patterns from raw individual gps data . our definition of life pattern focuses on significant places of individual life and considers diverse properties to combine the significant places . lp-mine is comprised of two phases : the modelling phase and the mining phase . the modelling phase pre-processes gps data into an available format as the input of the mining phase . the mining phase applies separate strategies to discover different types of pattern . finally , we conduct extensive experiments using gps data collected by volunteers in the real story_separator_special_tag aircraft tracking , intent inference , and trajectory predictions are important tools for enhanced capacity in air traffic control operations . in this paper , we propose an algorithm that performs these three tasks accurately . the algorithm uses a hybrid estimation algorithm to estimate the aircraft 's state and flight mode . these estimates are combined with knowledge about air traffic control regulations , the aircraft 's flight plan , and the environment to infer the pilot 's intent . trajectory predictions are computed as a function of the aircraft 's motion ( state and mode estimates ) and the inferred intent . the result is an algorithm that provides , in real-time , accurate intent and trajectory predictions for aircraft . we analyze and test the performance of the proposed algorithm with various scenarios representative of current and future aircraft operations within the national airspace system . story_separator_special_tag research on predicting movements of mobile users has attracted a lot of attention in recent years . many of those prediction techniques are developed based only on geographic features of mobile users ' trajectories . in this paper , we propose a novel approach for predicting the next location of a user 's movement based on both the geographic and semantic features of users ' trajectories . the core idea of our prediction model is based on a novel cluster-based prediction strategy which evaluates the next location of a mobile user based on the frequent behaviors of similar users in the same cluster determined by analyzing users ' common behavior in semantic trajectories . through a comprehensive evaluation by experiments , our proposal is shown to deliver excellent performance . story_separator_special_tag the advances in location-acquisition and mobile computing techniques have generated massive spatial trajectory data , which represent the mobility of a diversity of moving objects , such as people , vehicles , and animals . many techniques have been proposed for processing , managing , and mining trajectory data in the past decade , fostering a broad range of applications . in this article , we conduct a systematic survey on the major research into trajectory data mining , providing a panorama of the field as well as the scope of its research topics .
following a road map from the derivation of trajectory data , to trajectory data preprocessing , to trajectory data management , and to a variety of mining tasks ( such as trajectory pattern mining , outlier detection , and trajectory classification ) , the survey explores the connections , correlations , and differences among these existing techniques . this survey also introduces the methods that transform trajectories into other data formats , such as graphs , matrices , and tensors , to which more data mining and machine learning techniques can be applied . finally , some public trajectory datasets are presented . this survey story_separator_special_tag the advance of location-acquisition technologies enables people to record their location histories with spatio-temporal datasets , which imply the correlation between geographical regions . this correlation indicates the relationship between locations in the space of human behavior , and can enable many valuable services , such as sales promotion and location recommendation . in this paper , by taking into account a user 's travel experience and the sequentiality in which locations have been visited , we propose an approach to mine the correlation between locations from a large number of users ' location histories . we built a personalized location recommendation system using the location correlation , and evaluated this system with a large-scale real-world gps dataset . as a result , our method outperforms the related work using the pearson correlation . story_separator_special_tag compressibility of individual sequences by the class of generalized finite-state information-lossless encoders is investigated . these encoders can operate in a variable-rate mode as well as a fixed-rate one , and they allow for any finite-state scheme of variable-length-to-variable-length coding . for every individual infinite sequence x a quantity ρ ( x ) is defined , called the compressibility of x , which is shown to be the asymptotically attainable lower bound on the compression ratio that can be achieved for x by any finite-state encoder . this is demonstrated by means of a constructive coding theorem and its converse that , apart from their asymptotic significance , also provide useful performance criteria for finite and practical data-compression tasks . the proposed concept of compressibility is also shown to play a role analogous to that of entropy in classical information theory where one deals with probabilistic ensembles of sequences rather than with individual sequences . while the definition of ρ ( x ) allows a different machine for each different sequence to be compressed , the constructive coding theorem leads to a universal algorithm that is asymptotically optimal for all sequences . story_separator_special_tag huge amounts of loosely structured and high velocity data are now being generated by ubiquitous mobile sensing devices , aerial sensory systems , cameras and radiofrequency identification readers , which are generating key knowledge about social media behaviors , intelligent transport patterns , military operational environments and space monitoring , safety systems etc . machine learning models and data mining techniques can be employed to produce actionable intelligence , based on predictive and prescriptive analytics . however , more data is not leading to better predictions as the accuracy of the implicated learning models varies greatly according to the complexity of the given space and related data .
especially in the case of open-ended data streams of massive scale , their efficiency is put to the challenge . in this work , we employ a variety of machine learning methods and apply them to geospatial time-series surveillance data , in an attempt to determine their capacity to learn a vessel 's behavioral pattern . we evaluate their effectiveness against metrics of accuracy , time and resource usage . the main concept of this study is to determine the most appropriate machine-learning model capable of learning a vessel 's behavior and performing story_separator_special_tag this demo presents the panda system for efficient support of a wide variety of predictive spatio-temporal queries . these queries are widely used in several applications including traffic management , location-based advertising , and store finders . panda targets long-term query prediction as it relies on adapting a long-term prediction function to : ( a ) scale up to a large number of moving objects , and ( b ) support predictive queries . panda does not only aim to predict the query answer ; it also aims to predict the incoming queries , so that parts of the query answer can be precomputed before the query arrival . panda maintains a tunable threshold that achieves a trade-off between the predictive query response time and the system overhead in precomputing the query answer . equipped with a graphical user interface ( gui ) , the audience can explore the panda demo by issuing predictive queries over a moving set of objects on a map . in addition , they are able to follow the execution of such queries through an eye on the panda execution engine . story_separator_special_tag given two sets of moving objects with nonzero extents , the continuous intersection join query reports every pair of intersecting objects , one from each of the two moving object sets , for every timestamp . this type of query is important for a number of applications , e.g. , in the multi-billion dollar computer game industry , massively multiplayer online games like world of warcraft need to monitor the intersection among players ' attack ranges and render players ' interaction in real time . the computational cost of a straightforward algorithm or an algorithm adapted from another query type is prohibitive , and answering the query in real time poses a great challenge . those algorithms compute the query answer for either too long or too short a time interval , which results in either a very large computation cost per answer update or too frequent answer updates , respectively . this observation motivates us to optimize the query processing in the time dimension . in this study , we achieve this optimization by introducing the new concept of time-constrained ( tc ) processing . further , tc processing enables a set of effective improvement techniques on traditional intersection story_separator_special_tag moving object indexing and query processing is a well studied research topic , with applications in areas such as intelligent transport systems and location-based services . while much existing work explicitly or implicitly assumes a deterministic object movement model , real-world objects often move in more complex and stochastic ways . this paper investigates the possibility of a marriage between moving-object indexing and probabilistic object modeling . given the distributions of the current locations and velocities of moving objects , we devise an efficient inference method for the prediction of future locations , as sketched below .
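a minimal sketch of one standard way to realize such inference , assuming independent gaussian estimates of the current position and velocity in one dimension ( an illustrative construction , not necessarily the paper 's algorithm ) : under linear motion the predicted position x + h·v is again gaussian , with mean μ_x + h μ_v and variance σ_x² + h² σ_v² , so the probability of falling in a query range comes from the normal cdf .

```python
# illustrative: probability that an object lies in [lo, hi] after horizon h,
# given gaussian estimates of its current position and velocity (1-d).
from math import erf, sqrt

def norm_cdf(z):
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

def prob_in_range(mu_x, var_x, mu_v, var_v, h, lo, hi):
    mu = mu_x + h * mu_v            # predicted mean position
    var = var_x + h * h * var_v     # uncertainty grows with the horizon
    sd = sqrt(var)
    return norm_cdf((hi - mu) / sd) - norm_cdf((lo - mu) / sd)

# object near x=0 moving at ~1 unit/s: chance of being in [8, 12] after 10 s
print(round(prob_in_range(0.0, 1.0, 1.0, 0.04, 10.0, 8.0, 12.0), 3))
```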
we demonstrate that such prediction can be seamlessly integrated into existing index structures designed for moving objects , thus improving the meaningfulness of range and nearest neighbor query results in highly dynamic and uncertain environments . the paper reports on extensive experiments on the bx-tree that offer insights into the properties of the paper 's proposal . story_separator_special_tag this paper presents the panda system for efficient support of a wide variety of predictive spatio-temporal queries that are widely used in several applications including traffic management , location-based advertising , and ride sharing . unlike previous attempts at supporting predictive queries , panda targets long-term query prediction as it relies on adapting a well-designed long-term prediction function to : ( a ) scale up to a large number of moving objects , and ( b ) support a large number of predictive queries . as a means of scalability , panda smartly precomputes parts of the most frequent incoming predictive queries , which significantly reduces the query response time . panda employs a tunable threshold that achieves a trade-off between query response time and the maintenance cost of precomputed answers . experimental results , based on large data sets , show that panda is scalable , efficient , and as accurate as its underlying prediction function . story_separator_special_tag in automotive applications , movement-path prediction enables the delivery of predictive and relevant services to drivers , e.g. , reporting traffic conditions and gas stations along the route ahead . path prediction also enables better results of predictive range queries and reduces the location update frequency in vehicle tracking while preserving accuracy . existing moving-object location prediction techniques in spatial-network settings largely target short-term prediction that does not extend beyond the next road junction . to go beyond short-term prediction , we formulate a network mobility model that offers a concise representation of mobility statistics extracted from massive collections of historical object trajectories . the model aims to capture the turning patterns at junctions and the travel speeds on road segments at the level of individual objects . based on the mobility model , we present a maximum likelihood and a greedy algorithm for predicting the travel path of an object ( for a time duration h into the future ) . we also present a novel and efficient server-side indexing scheme that supports predictive range queries on the mobility statistics of the objects . empirical studies with real data suggest that our proposals are effective and efficient . story_separator_special_tag predictive queries over spatio-temporal data proved to be vital in many location-based services including traffic management , ride sharing , and advertising . in the last few years , one of the most exciting lines of work on spatio-temporal data management concerns predictive queries . in this paper , we review the current research trends and present their related applications in the field of predictive spatio-temporal query processing . then , we discuss some basic challenges arising from new opportunities and open problems . the goal of this paper is to highlight the interesting areas and future work under the umbrella of predictive queries over spatio-temporal data .
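as a point of reference for the query classes surveyed above , the simplest deterministic predictive range query is just dead reckoning plus a containment test ; the systems discussed here ( panda , the predictive tree , etc . ) replace the motion model with richer prediction functions and add indexing and precomputation on top of this primitive . the sketch below is illustrative only .

```python
# illustrative baseline: a predictive range query under linear dead reckoning.
# real systems replace the motion model with a learned prediction function
# and index the objects instead of scanning them.
def predictive_range_query(objects, h, xmin, xmax, ymin, ymax):
    """objects: iterable of (oid, x, y, vx, vy); h: prediction horizon.
    returns oids whose predicted position at now+h falls in the rectangle."""
    hits = []
    for oid, x, y, vx, vy in objects:
        px, py = x + vx * h, y + vy * h
        if xmin <= px <= xmax and ymin <= py <= ymax:
            hits.append(oid)
    return hits

objs = [("a", 0, 0, 1, 0.5), ("b", 5, 5, -1, 0)]
print(predictive_range_query(objs, h=4, xmin=3, xmax=6, ymin=1, ymax=3))
```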
story_separator_special_tag predictive queries on moving objects offer an important category of location-aware services based on the objects ' expected future locations . a wide range of applications utilize this type of service , e.g. , traffic management systems , location-based advertising , and ride sharing systems . this paper proposes a novel index structure , named predictive tree ( p-tree ) , for processing predictive queries against moving objects on road networks . the predictive tree : ( 1 ) provides a generic infrastructure for answering the common types of predictive queries including predictive point , range , knn , and aggregate queries , ( 2 ) updates the probabilistic prediction of the object 's future locations dynamically and incrementally as the object moves around on the road network , and ( 3 ) provides an extensible mechanism to customize the probability assignments of the object 's expected future locations , with the help of user defined functions . the proposed index enables the evaluation of predictive queries in the absence of the objects ' historical trajectories . based solely on the connectivity of the road network graph and assuming that the object follows the shortest route to destination , the story_separator_special_tag massive trajectory data is being generated by gps-equipped devices , such as cars and mobile phones , which is used increasingly in transportation , location-based services , and urban computing . as a result , a variety of methods have been proposed for trajectory data management and analytics . however , traditional systems and methods are usually designed for very specific data management or analytics needs , which forces users to stitch together heterogeneous systems to analyze trajectory data in an inefficient manner . targeting the overall data pipeline of big trajectory data management and analytics , we present a unified platform , termed ultraman . in order to achieve scalability , efficiency , persistence , and flexibility , ( i ) we extend apache spark with respect to both data storage and computing by seamlessly integrating a key-value store , and ( ii ) we enhance the mapreduce paradigm to allow flexible optimizations based on random data access . we study the resulting system 's flexibility using case studies on data retrieval , aggregation analyses , and pattern mining . extensive experiments on real and synthetic trajectory data are reported to offer insight into the scalability and performance story_separator_special_tag predictive spatio-temporal queries are crucial in many applications . traffic management is an example application , where predictive spatial queries are issued to anticipate jammed areas in advance . also , location-aware advertising is another example application that targets customers expected to be in the vicinity of a shopping mall in the near future . in this paper , we introduce panda , a generic framework for supporting spatial predictive queries over moving objects in euclidean spaces . panda distinguishes itself from previous work in spatial predictive query processing by the following features : ( 1 ) panda is generic in terms of supporting commonly-used types of queries , ( e.g. , predictive range , knn , aggregate queries ) over stationary points of interest as well as moving objects . ( 2 ) panda employs a prediction function that provides accurate prediction even under the absence or the scarcity of the objects ' historical trajectories .
( 3 ) panda is customizable in the sense that it isolates the prediction calculation from query processing . hence , it enables the injection and integration of user defined prediction functions within its query processing framework . ( 4 ) panda deals with story_separator_special_tag we present a novel method for predicting long-term target states based on mean-reverting stochastic processes . we use the ornstein-uhlenbeck ( ou ) process , leading to a revised target state equation and to a time scaling law for the related uncertainty that in the long term is shown to be orders of magnitude lower than under the nearly constant velocity ( ncv ) assumption . in support of the proposed model , an analysis of a significant portion of real-world maritime traffic is provided . story_separator_special_tag ship traffic monitoring is a foundation for many maritime security domains , and monitoring system specifications underscore the necessity to track vessels beyond territorial waters . however , vessels in open seas are seldom continuously observed . thus , the problem of long-term vessel prediction becomes crucial . this paper focuses attention on the performance assessment of the ornstein-uhlenbeck ( ou ) model for long-term vessel prediction , compared with usual and well-established nearly constant velocity ( ncv ) model . heterogeneous data , such as automatic identification system ( ais ) data , high-frequency surface wave radar data , and synthetic aperture radar data , are exploited to this aim . two different association procedures are also presented to cue dwells in case of gaps in the transmission of ais messages . suitable metrics have been introduced for the assessment . considerable advantages of the ou model are pointed out with respect to the ncv model .
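the practical difference between the ou and ncv models above is easy to see in one dimension : under ncv ( white-noise acceleration ) the positional prediction variance grows cubically with the horizon , whereas with a mean-reverting ou velocity it grows only linearly in the long run , which is the time scaling law behind the reported orders-of-magnitude improvement . a hedged monte carlo sketch ( all parameter values are made up for the demonstration ) :

```python
# illustrative monte carlo comparison of long-term positional uncertainty:
# ncv (white-noise acceleration) vs an ou (mean-reverting) velocity model.
import random

def simulate(model, t_end=3600.0, dt=1.0, q=1e-4, theta=5e-3, vbar=5.0):
    x, v, t = 0.0, vbar, 0.0
    while t < t_end:
        if model == "ncv":
            v += random.gauss(0.0, (q * dt) ** 0.5)   # random-walk velocity
        else:  # "ou": velocity reverts to its long-run mean vbar
            v += theta * (vbar - v) * dt + random.gauss(0.0, (q * dt) ** 0.5)
        x += v * dt
        t += dt
    return x

def pos_std(model, runs=200):
    xs = [simulate(model) for _ in range(runs)]
    m = sum(xs) / len(xs)
    return (sum((x - m) ** 2 for x in xs) / len(xs)) ** 0.5

random.seed(0)
print("ncv std:", round(pos_std("ncv"), 1), " ou std:", round(pos_std("ou"), 1))
```

with these ( made-up ) parameters the ncv spread at one hour is roughly an order of magnitude larger than the ou spread , mirroring the qualitative claim of the two abstracts above .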
convergence of random variables . in this expository note we point out some equivalent definitions of mixing and stability and discuss the use of these concepts in several contexts . further , we show how a recent central limit theorem for martingales can be obtained directly using stability . though the results are not new , the proofs seem substantially simpler than those previously given . story_separator_special_tag in this paper we develop a stochastic calculus with respect to a gaussian process of the form b_t = ∫_0^t k ( t , s ) dw_s , where w is a wiener process and k ( t , s ) is a square integrable kernel , using the techniques of the stochastic calculus of variations . we deduce change-of-variable formulas for the indefinite integrals and we study the approximation by riemann sums . the particular case of the fractional brownian motion is discussed . story_separator_special_tag this paper develops a stochastic integration theory with respect to volatility modulated lévy-driven volterra ( vmlv ) processes . it extends recent results in the literature to allow for stochastic volatility and pure jump processes in the integrator . the new integration operator is based on malliavin calculus and describes an anticipative integral . fundamental properties of the integral are derived and important applications are given . story_separator_special_tag this paper generalizes the integration theory for volatility modulated brownian-driven volterra processes onto the space g* of potthoff-timpel distributions . sufficient conditions for integrability of generalized processes are given , regularity results and properties of the integral are discussed . we introduce a new volatility modulation method through the wick product and discuss its relation to the pointwise-multiplied volatility model . story_separator_special_tag this paper proposes a new modelling framework for electricity forward markets based on so-called ambit fields . the new model can capture many of the stylised facts observed in energy markets and is highly analytically tractable . we give a detailed account of the probabilistic properties of the new type of model , and we discuss martingale conditions , option pricing and change of measure within the new model class . also , we derive a model for the typically stationary spot price , which is obtained from the forward model through a limiting argument . story_separator_special_tag ambit stochastics is the name for the theory and applications of ambit fields and ambit processes and constitutes a new research area in stochastics for tempo-spatial phenomena . this paper gives an overview of the main findings in ambit stochastics to date and establishes new results on general properties of ambit fields . moreover , it develops the concept of tempo-spatial stochastic volatility/intermittency within ambit fields . various types of volatility modulation ranging from stochastic scaling of the amplitude , to stochastic time change and extended subordination of random measures and to probability and lévy mixing of volatility/intensity parameters will be developed . important examples for concrete model specifications within the class of ambit fields are given . story_separator_special_tag we develop the asymptotic theory for the realised power variation of the processes x = f • g , where g is a gaussian process with stationary increments .
more specifically , under some mild assumptions on the variance function of the increments of g and a certain regularity condition on the path of the process f we prove the convergence in probability for the properly normalised realised power variation . moreover , under a further assumption on the hölder index of the path of f , we show an associated stable central limit theorem . the main tool is a general central limit theorem , due essentially to hu & nualart ( 2005 ) , nualart & peccati ( 2005 ) and peccati & tudor ( 2005 ) , for sequences of random variables which admit a chaos representation . story_separator_special_tag in this paper we study the asymptotic behaviour of power and multipower variations of stochastic processes . processes of the type considered serve , in particular , to analyse data of velocity increments of a fluid in a turbulence regime with spot intermittency σ . the purpose of the present paper is to determine the probabilistic limit behaviour of the ( multi ) power variations of y , as a basis for studying properties of the intermittency process . notably the processes y are in general not of the semimartingale kind and the established theory of multipower variation for semimartingales does not suffice for deriving the limit properties . as a key tool for the results a general central limit theorem for triangular gaussian schemes is formulated and proved . examples and an application to realised variance ratio are given . story_separator_special_tag we present some new asymptotic results for functionals of higher order differences of brownian semi-stationary processes . in an earlier work [ 4 ] we have derived a similar asymptotic theory for first order differences . however , the central limit theorems were valid only for certain values of the smoothness parameter of a brownian semistationary process , and the parameter values which appear in typical applications , e.g . in modeling turbulent flows in physics , were excluded . the main goal of the current paper is the derivation of the asymptotic theory for the whole range of the smoothness parameter by means of using second order differences . we present the law of large numbers for the multipower variation of the second order differences of brownian semi-stationary processes and show the associated central limit theorem . finally , we demonstrate some estimation methods for the smoothness parameter of a brownian semi-stationary process as an application of our probabilistic results . story_separator_special_tag consider a semimartingale of the form y_t = y_0 + ∫_0^t a_s ds + ∫_0^t σ_{s−} dw_s , where a is a locally bounded predictable process and σ ( the volatility ) is an adapted right-continuous process with left limits and w is a brownian motion . we define the realised bipower variation process v ( y ; r , s )_t^n = n^{ ( r + s ) / 2 − 1 } ∑_{ i = 1 }^{ [ nt ] } | y_{ i / n } − y_{ ( i − 1 ) / n } |^r | y_{ ( i + 1 ) / n } − y_{ i / n } |^s , where r and s are nonnegative reals with r + s > 0 .
we prove that v ( y ; r , s )_t^n converges locally uniformly in time , in probability , to a limiting process v ( y ; r , s )_t ( the bipower variation process ) story_separator_special_tag we introduce the notion of relative volatility/intermittency and demonstrate how relative volatility statistics can be used to estimate consistently the temporal variation of volatility/intermittency even when the data of interest are generated by a non-semimartingale , or a brownian semistationary process in particular . while this estimation method is motivated by the assessment of relative energy dissipation in empirical data of turbulence , we apply it also to energy price data . moreover , we develop a probabilistic asymptotic theory for relative power variations of brownian semistationary processes and ito semimartingales and discuss how it can be used for inference on relative volatility/intermittency . story_separator_special_tag in the present paper we study moving averages ( also known as stochastic convolutions ) driven by a wiener process and with a deterministic kernel . necessary and sufficient conditions on the kernel are provided for the moving average to be a semimartingale in its natural filtration . our results are constructive - meaning that they provide a simple method to obtain kernels for which the moving average is a semimartingale or a wiener process . several examples are considered . in the last part of the paper we study general gaussian processes with stationary increments . we provide necessary and sufficient conditions on the spectral measure for the process to be a semimartingale . story_separator_special_tag in this paper we present some new limit theorems for power variations of stationary increment lévy driven moving average processes . recently , such asymptotic results have been investigated in [ ann . probab . 45 ( 6b ) ( 2017 ) , 4477 -- 4528 , festschrift for bernt øksendal , stochastics 81 ( 1 ) ( 2017 ) , 360 -- 383 ] under the assumption that the kernel function potentially exhibits a singular behaviour at 0 . the aim of this work is to demonstrate how some of the results change when the kernel function has multiple singularity points . our paper is also related to the article [ stoch . process . appl . 125 ( 2 ) ( 2014 ) , 653 -- 677 ] that studied the same mathematical question for the class of brownian semi-stationary models . story_separator_special_tag the aim of the present paper is to study the semimartingale property of continuous time moving averages driven by levy processes . we provide necessary and sufficient conditions on the kernel for the moving average to be a semimartingale in the natural filtration of the levy process , and when this is the case we also provide a useful representation . assuming that the driving levy process is of unbounded variation , we show that the moving average is a semimartingale if and only if the kernel is absolutely continuous with a density satisfying an integrability condition . story_separator_special_tag this paper gives a complete characterization of infinitely divisible semimartingales , i.e . , semimartingales whose finite dimensional distributions are infinitely divisible . an explicit and essentially unique decomposition of such semimartingales is obtained . a new approach , combining series decompositions of infinitely divisible processes with detailed analysis of their jumps , is presented .
as an illustration of the main result , the semimartingale property is explicitly determined for a large class of stationary increment processes and several examples of processes of interest are considered . these results extend stricker 's theorem characterizing gaussian semimartingales and knight 's theorem describing gaussian moving average semimartingales , in particular . story_separator_special_tag the class of moving-average fractional levy motions ( maflms ) , which are fields parameterized by a d-dimensional space , is introduced . maflms are defined by a moving-average fractional integration of order h of a random levy measure with finite moments . maflms are centred d-dimensional motions with stationary increments , and have the same covariance function as fractional brownian motions . they have ( h − d/2 ) -hölder-continuous sample paths . when the levy measure is the truncated random stable measure of index α , maflms are locally self-similar with index h̃ = h − d/2 + d/α . this shows that in a non-gaussian setting these indices ( local self-similarity , variance of the increments , hölder continuity ) may be different . moreover , we can establish a multiscale behaviour of some of these fields . all the indices of such maflms are identified for the truncated random stable measure . story_separator_special_tag various characterizations for a fractional levy process to be of finite variation are obtained , one of which is in terms of the characteristic triplet of the driving levy process , while others are in terms of differentiability properties of the sample paths . a zero-one law and a formula for the expected total variation is also given . story_separator_special_tag the present paper discusses simulation of levy semistationary ( lss ) processes in the context of power markets . a disadvantage of applying numerical integration to obtain trajectories of lss processes is that such a scheme is not iterative . we address this problem by introducing and analyzing a fourier simulation scheme for obtaining trajectories of these processes in an iterative manner . furthermore , we demonstrate that our proposed scheme is well suited for simulation of a wide range of lss processes , including , in particular , lss processes indexed by a kernel function which is steep close to the origin . finally , we put our simulation scheme to work for simulating the price of path-dependent options to demonstrate the advantages of the proposed fourier simulation scheme . story_separator_special_tag we treat a stochastic integration theory for a class of hilbert-valued , volatility-modulated , conditionally gaussian volterra processes . we apply techniques from malliavin calculus to define this stochastic integration as a sum of a skorohod integral , where the integrand is obtained by applying an operator to the original integrand , and a correction term involving the malliavin derivative of the same altered integrand , integrated against the lebesgue measure . the resulting integral satisfies many of the expected properties of a stochastic integral , including an ito formula . moreover , we derive an alternative definition using a random-field approach and relate both concepts . we present examples related to fundamental solutions to partial differential equations .
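to make the lss simulation discussion above concrete , here is a minimal riemann-sum scheme for a one-dimensional brownian semistationary process x_t = ∫_{−∞}^t g ( t − s ) σ_s dw_s with the gamma kernel g ( x ) = x^α e^{−λx} ; this is the non-iterative baseline that the fourier scheme improves on , and every constant below is illustrative .

```python
# illustrative riemann-sum simulation of a brownian semistationary process
#   x_t = int_{-inf}^t g(t - s) sigma_s dw_s,  with g(x) = x**alpha * exp(-lam*x).
# the kernel is truncated at lag m*dt; all constants are made up for the demo.
import math, random

def simulate_bss(n=500, dt=0.01, alpha=-0.2, lam=1.0, m=2000, seed=1):
    rng = random.Random(seed)
    dw = [rng.gauss(0.0, math.sqrt(dt)) for _ in range(n + m)]
    sigma = [1.0] * (n + m)          # deterministic volatility for simplicity
    # kernel weights on the grid (skip lag 0, where g blows up for alpha < 0)
    g = [((k * dt) ** alpha) * math.exp(-lam * k * dt) for k in range(1, m + 1)]
    x = []
    for i in range(n):
        t = i + m                    # dw[t-1], dw[t-2], ... are past increments
        x.append(sum(g[k - 1] * sigma[t - k] * dw[t - k] for k in range(1, m + 1)))
    return x

path = simulate_bss()
print(len(path), round(path[0], 4))
```

note that each output point costs o ( m ) kernel evaluations , which is precisely the non-iterative drawback the fourier scheme is designed to remove .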
story_separator_special_tag starting from the well-known and very important fact that a semimartingale on a filtered space ( Ω , F , ( F_t )_{ t ≥ 0 } , p ) may be viewed as a σ-additive measure on the space Ω × ℝ₊ with values in l^0 , we introduce a random measure as being a σ-additive measure on Ω × ℝ₊ × e , where e is an auxiliary space , with values in l^0 . this apparently includes all usual notions of random measures and vector-valued semimartingales . story_separator_special_tag we derive explicit integrability conditions for stochastic integrals taken over time and space driven by a random measure . our main tool is a canonical decomposition of a random measure which extends the results from the purely temporal case . we show that the characteristics of this decomposition can be chosen as predictable strict random measures , and we compute the characteristics of the stochastic integral process . we apply our conditions to a variety of examples , in particular to ambit processes , which represent a rich model class . story_separator_special_tag this paper presents some asymptotic results for statistics of brownian semi-stationary ( bss ) processes . more precisely , we consider power variations of bss processes , which are based on high frequency ( possibly higher order ) differences of the bss model . we review the limit theory discussed in [ barndorff-nielsen , o.e. , j.m . corcuera and m. podolskij ( 2011 ) : multipower variation for brownian semistationary processes . bernoulli 17 ( 4 ) , 1159-1194 ; barndorff-nielsen , o.e. , j.m . corcuera and m. podolskij ( 2012 ) : limit theorems for functionals of higher order differences of brownian semi-stationary processes . in `` prokhorov and contemporary probability theory '' , springer . ] and present some new connections to fractional diffusion models . we apply our probabilistic results to construct a family of estimators for the smoothness parameter of the bss process . in this context we develop estimates with gaps , which allow us to obtain a valid central limit theorem for the critical region . finally , we apply our statistical theory to turbulence data . story_separator_special_tag in this paper we study the asymptotic behaviour of weighted random sums when the sum process converges stably in law to a brownian motion and the weight process has continuous trajectories , more regular than that of a brownian motion . we show that these sums converge in law to the integral of the weight process with respect to the brownian motion when the observation distance goes to zero . the result is obtained with the help of fractional calculus , showing the power of this technique . this study , though interesting by itself , is motivated by an error found in the proof of theorem 4 in : j. m. corcuera , d. nualart , and j. h. c. woerner , power variation of some integral fractional processes , bernoulli , vol . 12 , no . 4 , pp . 713 - 735 , 2006 . story_separator_special_tag we study the power variation of processes of the form ∫_0^t u_s db_s^h , where b^h is a fractional brownian motion with hurst parameter h ∈ ( 0 , 1 ) , and u is a process with finite q-variation , q < 1 / ( 1 − h ) . we establish the stable convergence of the corresponding fluctuations . these results provide new statistical tools to study and detect the long-memory effect and the hurst parameter .
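a hedged sketch of how power variations detect the hurst parameter in the simplest case ( u ≡ 1 , a pure fractional brownian motion ) : since e | b_{ ( i + 1 ) / n } − b_{ i / n } |² = n^{ −2h } , comparing quadratic variations at two sampling frequencies gives ĥ = ½ log₂ ( 2 v_coarse / v_fine ) . the fbm path below is simulated by a simple ( o ( n³ ) but exact ) cholesky factorization of its covariance ; this illustrates the idea , not the precise estimator of the paper above .

```python
# illustrative change-of-frequency hurst estimator on a simulated fbm path.
# fbm is generated via cholesky factorization of the exact covariance
#   cov(b_s, b_t) = 0.5 * (s^{2h} + t^{2h} - |t - s|^{2h}).
import math, random

def simulate_fbm(n, h, seed=0):
    t = [(i + 1) / n for i in range(n)]
    cov = [[0.5 * (s ** (2*h) + u ** (2*h) - abs(s - u) ** (2*h)) for u in t] for s in t]
    l = [[0.0] * n for _ in range(n)]          # lower-triangular l with l l^T = cov
    for i in range(n):
        for j in range(i + 1):
            s = cov[i][j] - sum(l[i][k] * l[j][k] for k in range(j))
            l[i][j] = math.sqrt(s) if i == j else s / l[j][j]
    rng = random.Random(seed)
    z = [rng.gauss(0.0, 1.0) for _ in range(n)]
    return [sum(l[i][k] * z[k] for k in range(i + 1)) for i in range(n)]

def quad_var(path, step):
    return sum((path[i] - path[i - step]) ** 2 for i in range(step, len(path), step))

h_true, n = 0.3, 200
b = simulate_fbm(n, h_true)
v_fine, v_coarse = quad_var(b, 1), quad_var(b, 2)
print("h estimate:", round(0.5 * math.log2(2 * v_coarse / v_fine), 3))
```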
story_separator_special_tag the continuous case : brownian motion.- the wiener-ito chaos expansion.- the skorohod integral.- malliavin derivative via chaos expansion.- integral representations and the clark-ocone formula.- white noise , the wick product , and stochastic integration.- the hida-malliavin derivative on the space Ω = s′ ( ℝ ) .- the donsker delta function and applications.- the forward integral and applications.- the discontinuous case : pure jump levy processes.- a short introduction to levy processes.- the wiener-ito chaos expansion.- skorohod integrals.- the malliavin derivative.- levy white noise and stochastic distributions.- the donsker delta function of a levy process and applications.- the forward integral.- applications to stochastic control : partial and inside information.- regularity of solutions of sdes driven by levy processes.- absolute continuity of probability laws . story_separator_special_tag we prove a law of large numbers for the power variation of an integrated fractional process in a pure jump model . this yields consistency of an estimator for the integrated volatility where we are no longer restricted to a gaussian model . story_separator_special_tag this paper is concerned with the asymptotic behavior of sums of terms which are a test function f evaluated at successive increments of a discretely sampled semimartingale . typically the test function is a power function ( when the power is 2 we get the realized quadratic variation ) . we prove a variety of `` laws of large numbers '' , that is convergence in probability of these sums , sometimes after normalization . we also exhibit in many cases the rate of convergence , as well as associated central limit theorems . story_separator_special_tag motivated by models of signaling pathways in b lymphocytes , which have extremely large nuclei , we study the question of how reaction-diffusion equations in thin 2d domains may be approximated by diffusion equations in regions of smaller dimensions . in particular , we study how transmission conditions featuring in the approximating equations become integral parts of the limit master equation . we devise a scheme which , by appropriate rescaling of coefficients and finding a common reference space for all feller semigroups involved , allows deriving the form of the limit equation formally . the results obtained , expressed as convergence theorems for the feller semigroups , may also be interpreted as a weak convergence of underlying stochastic processes . story_separator_special_tag © springer-verlag , berlin heidelberg new york , 1993 , all rights reserved . access to the archives of the séminaire de probabilités ( strasbourg ) ( http : //portail . mathdoc.fr/semproba/ ) implies agreement with its general conditions of use ( http : //www.numdam.org/legal.php ) . any commercial use or systematic printing constitutes a criminal offence . any copy or printing of this file must contain this copyright notice . story_separator_special_tag starting from the moving average ( ma ) integral representation of fractional brownian motion ( fbm ) , the class of fractional levy processes ( flps ) is introduced by replacing the brownian motion by a general levy process with zero mean , finite variance and no brownian component . we present different methods of constructing flps and study second-order and sample path properties .
flps have the same second-order structure as fbm and , depending on the levy measure , they are not always semimartingales . we consider integrals with respect to flps and ma processes with the long memory property . in particular , we show that the levy-driven ma process with fractionally integrated kernel coincides with the ma process with the corresponding ( not fractionally integrated ) kernel and driven by the corresponding flp . story_separator_special_tag we give a new characterization for the convergence in distribution to a standard normal law of a sequence of multiple stochastic integrals of a fixed order with variance one , in terms of the malliavin derivatives of the sequence . we also give a new proof of the main theorem in [ d. nualart , g. peccati , central limit theorems for sequences of multiple stochastic integrals , ann . probab . 33 ( 2005 ) 177-193 ] using techniques of malliavin calculus . finally , we extend our result to the multidimensional case and prove a weak convergence result for a sequence of square integrable random vectors , giving an application . story_separator_special_tag we prove functional central and non-central limit theorems for generalized variations of the anisotropic d-parameter fractional brownian sheet ( fbs ) for any natural number d . whether the central or the non-central limit theorem applies depends on the hermite rank of the variation functional and on the smallest component of the hurst parameter vector of the fbs . the limiting process in the former result is another fbs , independent of the original fbs , whereas the limit given by the latter result is an hermite sheet , which is driven by the same white noise as the original fbs . as an application , we derive functional limit theorems for power variations of the fbs and discuss what is a proper way to interpolate them to ensure functional convergence . story_separator_special_tag existence and uniqueness of solutions is established for stochastic volterra integral equations driven by right continuous semimartingales . this resolves ( in the affirmative ) a conjecture of m. berger and v. mizel . story_separator_special_tag according to jankovic [ publ . inst . math. , 54 ( 1993 ) , pp . 126 - 134 ] , a random variable y has a negative binomial infinitely divisible distribution if and only if its characteristic function φ ( t ) admits the representation φ ( t ) = 1 / ( 1 − log ψ ( t ) )^r for some r > 0 and infinitely divisible characteristic function ψ ( t ) . in this paper , the asymptotics of p { y > t } as t → ∞ is obtained for some class of random variables y , which is expressed in terms of a spectral measure of a levy representation of the infinitely divisible characteristic function ψ ( t ) . story_separator_special_tag in this paper we give a central limit theorem for the weighted quadratic variations process of a two-parameter brownian motion . as an application , we show that the discretized quadratic variations ∑_{ i = 1 }^{ [ ns ] } ∑_{ j = 1 }^{ [ nt ] } | δ_{ i , j } y |² of a two-parameter diffusion y = ( y_{ ( s , t ) } )_{ ( s , t ) ∈ [ 0 , 1 ]² } observed on a regular grid g_n is an asymptotically normal estimator of the quadratic variation of y as n goes to infinity . story_separator_special_tag let { x ( s ) , −∞ < s < ∞ } be a normalized stationary gaussian process with a long-range correlation .
the weak limit in c [ 0,1 ] of the integrated process z_x ( t ) = ( 1 / d ( x ) ) ∫_0^{ xt } g ( x ( s ) ) ds , as x → ∞ , is investigated . here d ( x ) = x^h l ( x ) with 1/2 < h < 1 , and l ( x ) is a slowly varying function at infinity . the function g satisfies eg ( x ( s ) ) = 0 , eg² ( x ( s ) ) < ∞ and has arbitrary hermite rank m ≥ 1 . ( the hermite rank of g is the index of the first non-zero coefficient in the expansion of g in hermite polynomials . ) it is shown that z_x ( t ) converges for all m ≥ 1 to some process
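for readability , the setup of this classical non-central limit theorem can be written out ; the identification of the limit as the hermite process of order m ( fractional brownian motion when m = 1 ) is the standard completion of the truncated sentence above and is offered here as a reconstruction , not a quotation .

```latex
% reconstructed statement (illustrative); see taqqu (1979) for the precise result.
\[
  Z_x(t) \;=\; \frac{1}{d(x)} \int_0^{xt} g\bigl(X(s)\bigr)\,ds ,
  \qquad d(x) = x^{H} L(x), \quad \tfrac12 < H < 1,
\]
\[
  g(X) \;=\; \sum_{k \ge m} c_k H_k(X), \qquad
  m = \min\{k : c_k \neq 0\} \ \text{(the hermite rank)},
\]
% as x -> infinity, Z_x converges weakly in C[0,1] to a hermite process of
% order m (a fractional brownian motion in the special case m = 1).
```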
on the mathematical theory of flow patterns of compressible fluids.- on a class of nonlinear partial differential equations.- integral operators and inverse problems in scattering theory.- study of partial differential equations by the means of generalized analytical functions.- the single layer potential approach in the theory of boundary value problems for elliptic equations.- constructive function theoretic methods for higher order pseudoparabolic equations.- on the solution of some nonclassical problems of elasticity theory.- the singularities of solutions to analytic elliptic boundary value problems.- on some recent applications of the generalized cauchy-riemann equations in shell theory.- on the representation of pseudoanalytic functions.- on the boundary value norm problem for a nonlinear elliptic system.- stability of minimal surfaces.- non- ( k ) -monogenic points of functions of a quaternion variable.- on the theory of linear equations with spatial derivatives.- on hilbert modules with reproducing kernel.- a priori estimates for a class of elliptic pseudo-differential operators in the space lp ( rn ) .- a solution of the biharmonic dirichlet problem by means of hypercomplex analytic functions.- existence and uniqueness problem for the radiation of plane waves from a ring channel in an oncoming flow.- movable singularities of linear partial differential equations.- solution representations by means of differential operators for the dirichlet problem of the equation δu + c ( x , y ) u = 0.- properties of a class story_separator_special_tag introduction generalized operators of fractional integration and differentiation recent aspects of classical erdelyi-kober operators hyper-bessel differential and integral operators and equations applications to the generalized hypergeometric functions further generalizations and applications appendix : definitions , examples and properties of the special functions used in this book references citation index story_separator_special_tag since 2001 , we have observed the central region of our galaxy with the near-infrared ( j , h , and ks ) camera sirius and the 1.4 m telescope irsf . here i present the results about the infrared extinction law and the structure of the galactic bulge with bulge red clump stars . from the observation of the red clump stars , we have determined directly the ratios of extinction to color-excess ( a_ks / e ( h − ks ) and a_ks / e ( j − ks ) ) , which are clearly less than the ratios determined by previous color-difference methods . we also find a smaller structure ( | l | ≲ 4° ) inside the galactic bar , although its exact nature is as yet uncertain . story_separator_special_tag this letter is a reply to comments made by t. shipp et al . on a previous article by b. e. koenig [ j. acoust . soc . am . 79 , 2088 - 2090 ( 1986 ) ] . story_separator_special_tag an ni2+ binding protein ( pnixb , 31 kd ) , present in mature xenopus laevis oocytes and in embryos from fertilization to n/f stage 42 , was isolated and characterized . after oocytes or embryos were fractionated by page , electroblotted onto nitrocellulose , and probed with 63ni2+ , pnixb was detected by autoradiography . pnixb , a yolk protein located in the embryonic gut , was purified from yolk platelets by ammonium sulfate precipitation , delipidation , gel filtration chromatography , and hplc analysis .
during these steps , pnixb copurified with lipovitellin 2 . the n terminal sequence of purified pnixb exactly matched that of xenopus lipovitellin 2 , deduced from the dna sequence of the xenopus vitellogenin a2 precursor gene . since pnixb and lipovitellin 2 agree in n terminal sequence , amino acid composition , and apparent molecular weight , they appear to be identical . based on a metalblot competition assay , the abilities of metal ions to compete with 63ni2+ for binding to pnixb were ranked : zn2+ , cu2+ , co2+ > cd2+ , mn2+ > sn2+ . this study shows that xenopus lipovitellin 2 is a metal binding protein in vitro , and raises the possibility story_separator_special_tag abstract endosonography was performed in 76 patients who had endoscopically detected gastroesophageal varices or questionable submucosal lesions , or who were being evaluated for pancreatic carcinoma or pancreatitis . the results were compared with surgery or autopsy results . the patients were divided retrospectively into four groups . group 1 consisted of 6 patients who underwent surgery or autopsy . five esophageal varices and 1 fundic varix were diagnosed with endosonography and confirmed histologically . group 2 consisted of 29 patients undergoing sclerotherapy . intramural thickening of the esophagus and extramural collaterals were found in 20 and 22 patients , respectively . endoscopy revealed fibrosis in 10 patients . group 3 consisted of 16 patients evaluated for pancreatic disease . fifteen fundic varices , 6 cardiac varices , and 5 extramural collateral veins were found by eus . group 4 consisted of 16 patients with questionable submucosal lesions and 9 patients with lesions recognized endoscopically as varices . eus found varices in all 25 patients . in conclusion , eus is an important procedure in the diagnosis and follow-up of gastroesophageal varices , and in the identification of questionable abnormalities found endoscopically . the effect of sclerotherapy can be demonstrated story_separator_special_tag the analysis of the mn-chalcogen atom ( te , se , s ) bond lengths in mn-based a^ii b^vi and a^iv b^vi metal chalcogenides ( derived from x-ray absorption fine structure studies ) as well as in mn chalcogenides ( from x-ray diffraction ( xrd ) ) enabled the author to introduce certain self-corrections into the papers [ chem . phys . lett . 283 ( 1998 ) 313 ; chem . phys . lett . 336 ( 2001 ) 226 ] . in particular , it was found that both the tetrahedral and octahedral covalent radii of manganese depend on a choice of the anion species in the mn-chalcogen bond , and therefore they can not be considered as the element constants for mn . story_separator_special_tag results from hyperglycemic clamps in the restoring insulin secretion ( rise ) study found that youth ( 10-19 y ) with impaired glucose tolerance ( igt ) or early type 2 diabetes ( t2d ) are more insulin resistant and secrete more insulin than adults ( 20-65 y ) , despite similar bmi . here we use mari modeling of rise 3-h ogtt data to explore differences in β-cell function between youth and adults . the mari model describes the relationship between insulin secretion rate ( isr ) and glucose concentration . the slope of the relationship is β-cell glucose sensitivity ( gs ) ; isr at a fixed glucose is also calculated . derived terms include the 3-h to baseline potentiation factor ratio ( pfr ; a relative insulin secretion increase during the ogtt ) and rate sensitivity ( rs ; the dependence of isr on the glucose rate of change ) .
analysis of covariance was used to compare youth vs. adults , with secretion measures adjusted for insulin sensitivity ( m/i ) from hyperglycemic clamps . adjusted gs was significantly higher in youth vs. adults with igt but not in those with t2d . adjusted rs and story_separator_special_tag we prove the existence of a transmutation operator between two weighted sturm - liouville operators . we also provide an explicit formula for the transmutation operator and a construction algorithm . an example and an application to an inverse spectral problem are also considered . story_separator_special_tag abstract : seeks to demonstrate the usefulness of fractional integrals in applied mathematics by presenting some of their applications to axially symmetric potential problems and showing that one can obtain in this manner both general theorems and explicit solutions of concrete problems . ( author ) story_separator_special_tag 1. for functions f ∈ l_loc [ 0 , ∞ ) the riemann-liouville operator of fractional integration i_α is defined by ( i_α f ) ( x ) = ( 1 / γ ( α ) ) ∫_0^x ( x − t )^{ α − 1 } f ( t ) dt , and its adjoint operator , the weyl operator k_α , is defined by ( k_α f ) ( x ) = ( 1 / γ ( α ) ) ∫_x^∞ ( t − x )^{ α − 1 } f ( t ) dt for functions f ∈ l_loc [ 0 , ∞ ) having a suitable behaviour at infinity . story_separator_special_tag connecting the real and the imaginary parts of an analytic function of x + iy . this similarity suggests an integration theory similar in pattern to that of the complex function theory . the fundamentals of such a theory are presented in this paper . ( a more elaborate mathematical treatment , containing all proofs , will be published elsewhere ) . the theory will be illustrated by some physical examples . in treating these examples our aim is not to obtain new results in mechanics but rather to present known facts from a simpler and more unified point of view . in what follows we suppose that the coefficients ( i = 1 , 2 ) are positive analytic functions of the real variable y . then the equations ( 1.2 ) are of story_separator_special_tag whose real and imaginary parts are connected by the equations ( 1.1 ) . many concepts and results in the theory of analytic functions of a complex variable can be extended to functions satisfying the system ( 1.1 ) , notably the concept of differentiation , integration , powers , power series , the theorem of cauchy and morera and the fundamental theorem of algebra . in what follows we make the following assumptions concerning the coefficients . story_separator_special_tag this study investigates the real output losses associated with modern banking crises . we find a remarkable diversity of experience . in a number of instances banking crises have not been associated with any significant reduction in the growth of real , per capita gdp . often , this has been the case in mature and developed economies . on the other hand , estimated output losses are extremely large for some other countries , amounting to several years of lost gdp . interestingly , such large losses can be associated with banking crises that were designated as non-systemic . our sample mean and median output loss estimates are also big . for the average sample country , the estimated present discounted value of crisis-related output losses is bounded between 63 % and 302 % of real , per capita gdp in the last year before the crisis onset . average loss estimates are this large primarily because we find evidence that post-crisis economic slowdowns often persist long after the crisis is officially over .
such delayed costs have been largely overlooked in previous empirical work , and as a result our loss estimates are much larger than those that have appeared elsewhere . story_separator_special_tag the study presents an empirical analysis of the structure of production in irish agriculture . a single-output four-input generalized translog cost function model is utilized to obtain econometric measures of substitution between factor inputs , elasticities of input demand , neutral and non-neutral technical change and economies of scale . the cost function model is also used to decompose tornqvist productivity gains into the contributions due to technical change , scale economies and other ( residually measured ) influences . average annual tornqvist productivity gains of 2.0 percent were found , with the technical change and scale economies effects being computed as 1.33 percent and -0.22 percent per year , respectively , leaving a ( residual ) measure of other unspecified effects of 0.89 percent per year . story_separator_special_tag impressions is often difficult . i used perfusion of the left coronary artery with blood oxygenated at pressure to carry out experimental reimplantation of this vessel into a systemic artery in 1949 ( smith g 1964a ) . by 1956 , using exposure of the whole animal , we were able to demonstrate ( smith & lawson 1958 ) a protective effect against the onset of ventricular fibrillation following acute occlusion of the left circumflex coronary artery in the dog .
a further study ( smith & lawson 1962 ) confirmed this and we reported the application of hyperbaric oxygen to man suffering from coronary artery occlusion . before applying the technique to localized hypoxic tissue , particularly of limbs where vascular surgery had been inadequate or impossible , it was essential to find out if the opening up of collateral vessels was prevented by increased tension of oxygen in the arterial inflow . this did not appear to happen , at least in skeletal muscular vessels ( smith , lawson , renfrew , ledingham & sharp 1961 ) . following this , a series of patients with traumatic ischaemia of limbs was treated with encouraging results ( smith , stevens story_separator_special_tag this chapter on beirut brings into particular relation two of the major hypotheses about urban collective action raised in the introduction of this volume . the first is the idea that the management of ordinary urban networks ( water , energy , transport , etc . ) represents , because of their vital importance for the economic development of large agglomerations , an indirect but nevertheless effective means of metropolitan governance . the network management institutions , known as `` second-rank '' institutions ( lorrain , 2008 ) , make it possible to overcome the most paralysing political conflicts . this governance by networks can thus be a site of techno-political innovation through the creation of new instruments for steering public action ( institutional restructuring of service operators , redefinition of tariffs and of the ways in which investment is mobilized , etc . ) . the possibility of such governance is nevertheless thwarted in more than one respect . both heavy structural determinants and the actions of specific actors are involved . the latter are , on the one hand , the agents of a capitalism story_separator_special_tag abstract objective to characterise the clinical features of patients admitted to hospital with coronavirus disease 2019 ( covid-19 ) in the united kingdom during the growth phase of the first wave of this outbreak who were enrolled in the international severe acute respiratory and emerging infections consortium ( isaric ) world health organization ( who ) clinical characterisation protocol uk ( ccp-uk ) study , and to explore risk factors associated with mortality in hospital .
design prospective observational cohort study with rapid data gathering and near real time analysis . setting 208 acute care hospitals in england , wales , and scotland between 6 february and 19 april 2020. a case report form developed by isaric and who was used to collect clinical data . a minimal follow-up time of two weeks ( to 3 may 2020 ) allowed most patients to complete their hospital admission . participants 20 133 hospital inpatients with covid-19 . main outcome measures admission to critical care ( high dependency unit or intensive care unit ) and mortality in hospital . results the median age of patients admitted to hospital with covid-19 , or with a diagnosis of covid-19 made in hospital , was story_separator_special_tag this is an extended version of the originally published survey in the book : `` advances in modern analysis and mathematical modeling '' . editors : yu.f.korobeinik , a.g.kusraev . vladikavkaz : vladikavkaz scientific center of the russian academy of sciences and republic of north ossetia -- alania , 2008. p. 226-293 . ( in russian ) . in this survey we consider the main topics of transmutation theory with many applications , including the author 's own results . the topics covered are : transmutations for sturm-liouville operators , vekua-erdelyi-lowndes transmutations , transmutations for general differential operators with variable coefficients , sonine and poisson transmutations , transmutations and fractional integrals , buschman-erdelyi transmutations , in the search for volterra unitary transmutations , transmutations for singular differential operators with variable coefficients , the composition method for transmutations , some applications and open problems . story_separator_special_tag [ abstract garbled in extraction ; the recoverable details : two complexes c1 and c2 were characterized by ir , uv-vis , 1h nmr , 13c nmr , 119sn nmr and x-ray methods , their cytotoxicity was assayed against h460 , hepg2 and mcf7 tumour cell lines and hl7702 normal liver cells , and their interaction with dna was studied in tris-hcl buffer using an eb ( ethidium bromide ) competition assay . ] story_separator_special_tag the measurement of cu k-edge x-ray absorption near-edge structure of monovalent , divalent , and trivalent copper oxides has been used to identify the presence of different valences of copper in the la2-x ( sr , ba ) x cuo4 ( x = 0.0 -- 0.3 ) system . the results indicate the coexistence of cu ii and cu iii states in these compounds . theoretical calculations based on a cuo4o2la8lasr cluster predict the presence of nearly degenerate electronic ground states representing nominally cu ii and cu iii configurations . story_separator_special_tag the transmutation operators were introduced by delsarte and lions to state relations between harmonic analysis ( generalized translation operators , generalized convolution , generalized fourier transform and generalized paley-wiener theorem ) associated with two differential operators of the same order in the complex domain . in this paper , we discuss the analogous problem for differential operators having different orders .
more precisely , we consider a suitable class of differential operators lz in the complex domain and from the harmonic analysis associated with lz , we state the corresponding one associated with lzn , n being an arbitrary positive integer . our analysis is based on ricci 's decomposition . some particular cases are singled out . story_separator_special_tag abstract we consider in this work the lions transmutation operators associated with the lions differential operator on ] 0 , +∞ [ . using these operators we give relations between the generalized continuous wavelet transform and the classical continuous wavelet transform on [ 0 , +∞ [ , and we deduce the formulas which give the inverse operators of the lions transmutation operators . story_separator_special_tag a general construction of transmutation operators is developed for selfadjoint operators in gelfand triples . theorems regarding analyticity of generalized eigenfunctions and paley-wiener properties are proved . story_separator_special_tag we consider the partial differential operators $D_1 = \partial/\partial x$ and $D_2 = \partial^2/\partial y^2 + [ (2\alpha+1) \coth y + \tanh y ] \, \partial/\partial y - (1/\cosh^2 y) \, \partial/\partial x + (\alpha+1)^2$ , with $(y , x) \in \, ] 0 , +\infty [ \, \times \mathbb{R}$ and $\alpha \geq 0$ . in this work we determine transmutation operators for $D_1 , D_2$ , and we establish their links with the gegenbauer integral transforms and with the generalized riemann-liouville and weyl integral transforms associated with differential operators on finite and infinite intervals . we then study a harmonic analysis and the almost-periodic functions associated with the operators $D_1 , D_2$ story_separator_special_tag feature selection is an important component of many machine learning applications . especially in many bioinformatics tasks , efficient and robust feature selection methods are desired to extract meaningful features and eliminate noisy ones . in this paper , we propose a new robust feature selection method emphasizing joint l2,1-norm minimization on both loss function and regularization . the l2,1-norm based loss function is robust to outliers in data points and the l2,1-norm regularization selects features across all data points with joint sparsity . an efficient algorithm is introduced with proved convergence . our regression based objective makes the feature selection process more efficient . our method has been applied to both genomic and proteomic biomarker discovery . extensive empirical studies are performed on six data sets to demonstrate the performance of our feature selection method . story_separator_special_tag abstract based on the half-unit schiff-base ligand precursor hl1 and the asymmetrical bis-schiff-base ligand precursor h2l2 synthesized from the reaction of 1-phenyl-3-methyl-4-benzoyl-5-pyrazolone ( pmbp ) , o-phenylenediamine and/or o-vanillin , three complexes containing low-toxicity zn2+ ions , mononuclear [ zn ( l1 ) 2 ] ( 1 ) , [ zn ( l2 ) ( h2o ) ] ( 2 ) and trinuclear [ zn3 ( l2 ) 2 ( oac ) 2 ] ( 3 ) , are obtained , respectively . complex 1 proves to be inactive , resulting from its saturated octahedral coordination environment around the central zn2+ ion , while in complex 2 or 3 , the unsaturated five- and/or four-coordinate coordination environment for the catalytic active centers ( zn2+ ions ) permits the monomer insertion for the effective bulk or solution copolymerization of cho ( cyclohexene oxide ) and ma ( maleic anhydride ) .
all the bulk copolymerizations afford poly ( ester-co-ether ) s , while some of the solution copolymerizations produce perfectly alternating polyester copolymers . moreover , higher polymerization temperature , lower catalyst and co-catalyst concentration and shorter reaction time are helpful for the formation of alternating copolymers in bulk or story_separator_special_tag we present a conceptually simple , flexible , and general framework for object instance segmentation . our approach efficiently detects objects in an image while simultaneously generating a high-quality segmentation mask for each instance . the method , called mask r-cnn , extends faster r-cnn by adding a branch for predicting an object mask in parallel with the existing branch for bounding box recognition . mask r-cnn is simple to train and adds only a small overhead to faster r-cnn , running at 5 fps . moreover , mask r-cnn is easy to generalize to other tasks , e.g. , allowing us to estimate human poses in the same framework . we show top results in all three tracks of the coco suite of challenges , including instance segmentation , bounding-box object detection , and person keypoint detection . without bells and whistles , mask r-cnn outperforms all existing , single-model entries on every task , including the coco 2016 challenge winners . we hope our simple and effective approach will serve as a solid baseline and help ease future research in instance-level recognition . code has been made available at : https : //github.com/facebookresearch/detectron . story_separator_special_tag in this paper , we consider a differential operator lα , α > -1/2 , called the bessel-struve operator . we construct and study a transmutation operator between lα and the second derivative operator d²/dx² . we establish for the bessel-struve operator a taylor formula with integral remainder . we apply these results to expand as taylor series the translation operator associated with lα . we provide an analyticity criterion for functions on r involving lα . story_separator_special_tag abstract we consider a wide class of integral and ordinary differential equations of fractional multi-orders ( 1/ρ1 , 1/ρ2 , ... , 1/ρm ) , depending on arbitrary parameters ρi > 0 , μi ∈ r , i = 1 , ... , m . denoting the differentiation operators by d = d ( ρi ) , ( μi ) , and by l = l ( ρi ) , ( μi ) the corresponding integrations ( operators right inverse to d ) , we first observe that d and l can be considered as operators of the generalized fractional calculus , respectively as generalized fractional derivatives and integrals . a solution of the homogeneous ode of this kind , d y ( z ) = λ y ( z ) , λ ≠ 0 , is the recently introduced multi-index mittag-leffler function e ( 1/ρi ) , ( μi ) ( z ) . we find a poisson-type integral transformation p ( generalizing the classical poisson integral formula ) that maps the cosm-function into the multi-index mittag-leffler function , and also transforms the simpler differentiation and integration operators of integer order m > 1 : dm = ( d/dz ) m and lm ( the m-fold integration ) into story_separator_special_tag we define the riemann-liouville transform r and its dual tr associated with two singular partial differential operators . we establish some results of harmonic analysis for the fourier transform connected with r . next , we prove inversion formulas for the operators r , tr and a plancherel theorem for tr .
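as a point of reference for the two operator-theoretic abstracts above , the bessel-struve operator and the multi-index mittag-leffler function are usually written as follows ( the standard forms from the literature , restated here as a sketch rather than quoted from the truncated abstracts , with α > -1/2 , ρi > 0 and μi real ) :

$$ \ell_\alpha u(x) = \frac{d^2 u}{dx^2}(x) + \frac{2\alpha+1}{x}\left( \frac{du}{dx}(x) - \frac{du}{dx}(0) \right) , $$

$$ E_{(1/\rho_i),(\mu_i)}(z) = \sum_{k=0}^{\infty} \frac{z^k}{\prod_{i=1}^{m} \Gamma\!\left( \mu_i + \frac{k}{\rho_i} \right)} . $$

for m = 1 , ρ1 = 1/α and μ1 = β the second formula collapses to the classical mittag-leffler function $E_{\alpha,\beta}(z) = \sum_{k \ge 0} z^k / \Gamma(\alpha k + \beta)$ , which is the usual sanity check for this definition .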
story_separator_special_tag it is widely thought , but not yet explained , that there might be a pathogenetic link between hepatitis c virus ( hcv ) infection and the onset of b-cell non-hodgkin 's lymphoma ( nhl ) . we studied the prevalence of serum anti-hcv antibodies among 300 nhl patients , comparing it with the prevalence among 600 age- and sex-matched non-neoplastic subjects as controls , 247 patients with non-lymphomatous neoplasms , and 122 patients treated with immunosuppressive agents . we found a prevalence of 0.16 among nhl patients and 0.085 among controls and non-lymphomatous patients . although the difference was statistically significant ( p < 0.001 ) , the odds ratio was 2.049 and its confidence interval included unity . the hcv prevalence was independent of nhl subset , and the genotype distribution was the same among nhl patients and controls . we disclosed an hbsag prevalence of 0.077 in nhl versus 0.008 in controls ( p < 0.001 ) with an odds ratio of 9.9. we do not believe that these findings support the hypothesis of an hcv pathogenetic role in lymphomagenesis because ( i ) the risk of previous infection is marginally higher story_separator_special_tag we constrain the possibility of a non-trivial refractive index in free space corresponding to an energy-dependent velocity of light : $c(E) \simeq c_0 ( 1 - E/M )$ , where m is a mass scale that might represent the effect of quantum-gravitational space-time foam , using the arrival times of sharp features observed in the intensities of radiation with different energies from a large sample of gamma-ray bursters ( grbs ) with known redshifts . we use wavelet techniques to identify genuine features , which we confirm in simulations with artificial added noise . using the weighted averages of the time-lags calculated using correlated features in all the grb light curves , we find a systematic tendency for more energetic photons to arrive earlier . however , there is a very strong correlation between the parameters characterizing an intrinsic time-lag at the source and a distance-dependent propagation effect . moreover , the significance of the earlier arrival times is less evident for a subsample of more robust spectral structures . allowing for intrinsic stochastic time-lags in these features , we establish a statistically robust lower limit : m > 0.9 × 10^16 gev on the scale of violation of lorentz story_separator_special_tag in this survey we discuss a unified approach to the generalized hypergeometric functions based on a generalized fractional calculus developed in the monograph by kiryakova . this generalization of the classical theory of the operators of integration and differentiation of fractional order deals with integral ( differintegral ) operators involving meijer 's g- and fox 's h-functions as kernel functions . their theory is fully developed and illustrated by various special cases and applications in different areas of applicable analysis . usually , the special functions of mathematical physics are defined by means of power series representations . however , some alternative representations can be used as their definitions . let us mention the well-known poisson integrals for the bessel functions and the analytical continuation of the gauss hypergeometric function via the euler integral formula . the rodrigues differential formulae , involving repeated or fractional differentiation , are also used as definitions of the classical orthogonal polynomials and their generalizations .
as to the other special functions ( most of them being g- and h-functions ) , such representations are less popular and even unknown in the general case . there exist various integral and differential formulae , but story_separator_special_tag over native-speaking users of english . secondly , the numerical preponderance of non-native speakers means that it is their communication which is increasing more rapidly and thus dominating the development and evolution of english . thirdly , it is therefore becoming inescapably necessary for native speakers to accept unfamiliarities in the effective use of english . fourthly , acceptance of these unfamiliarities will be easier if there is a basis for understanding them . story_separator_special_tag the purpose of this work is to advance the current state of mathematical knowledge regarding fixed point theorems of functions . such ideas have historically enjoyed many applications , for example , to the qualitative and quantitative understanding of differential , difference and integral equations . herein , we extend an established result due to rus [ studia univ . babes-bolyai math. , 22 , 1977 , 40-42 ] that involves two metrics to ensure wider classes of functions admit a unique fixed point . in contrast to the literature , a key strategy herein involves placing assumptions on the iterations of the function under consideration , rather than on the function itself . in taking this approach we form new advances in fixed point theory under two metrics and establish interesting connections between previously distinct theorems , including those of rus [ studia univ . babes-bolyai math. , 22 , 1977 , 40-42 ] , caccioppoli [ rend . acad . naz . lincei , 11 , 1930 , 31-49 ] and bryant [ am . math . month . 75 , 1968 , 399-400 ] . our results make progress towards a fuller theory story_separator_special_tag previously ta li [ 5 ] considered the problem of an inversion integral for a certain integral transformation involving chebyshev polynomials . similar cases which include the legendre polynomials [ 1 ] and the gegenbauer ( ultraspheric ) polynomials [ 3 ] have also been treated . we here present the case of integral transformations which contain the legendre functions pνμ , or the legendre function on the cut , pνμ ; i.e . we consider the equations story_separator_special_tag rodrigues ' formula can be applied also to ( 1.1 ) and ( 1.3 ) but here the situation is slightly more involved in that the integrals involved are of fractional order and their inversion requires the knowledge of differentiation and integration of fractional order . in spite of this complication the method has its merits and seems more direct than that employed in [ 1 ] and [ 3 ] . moreover , once differentiation and integration of fractional order are used , it seems appropriate to allow a derivative of fractional order to appear so that the ultraspherical polynomial in ( 1.3 ) may be replaced by an ( associated ) legendre function . this will be done in the present paper . story_separator_special_tag the spectral decomposition for an explicit second-order differential operator t is determined . the spectrum consists of a continuous part with multiplicity two , a continuous part with multiplicity one , and a finite discrete part with multiplicity one . the spectral analysis gives rise to a generalized fourier transform with an explicit hypergeometric function as a kernel .
using jacobi polynomials , the operator t can also be realized as a five-diagonal operator , leading to orthogonality relations for 2×2-matrix-valued polynomials . these matrix-valued polynomials can be considered as matrix-valued generalizations of wilson polynomials . story_separator_special_tag solutions of fractional kinetic equations are obtained through an integral transform named the p-transform introduced in this paper . the p-transform is a binomial-type transform containing many classes of transforms , including the well-known laplace transform . the paper is motivated by the idea of the pathway model introduced by mathai [ linear algebra appl . 396 , 317-328 ( 2005 ) 10.1016/j.laa.2004.09.022 ] . compositions of the transform with differential and integral operators are proved along with a convolution theorem . as an illustration of applications to the general theory of differential equations , a simple differential equation is solved by the new transform . being a new transform , the p-transform of some elementary functions as well as some generalized special functions such as the h-function , the g-function , the wright generalized hypergeometric function , the generalized hypergeometric function , and the mittag-leffler function are also obtained . the results for the classical laplace transform are retrieved by letting the pathway parameter tend to 1 . story_separator_special_tag an integral equation of the first kind , with kernel involving a hypergeometric function , is discussed . conditions sufficient for uniqueness of solutions are given , then conditions necessary for existence of solutions . conditions sufficient for existence of solutions , only a little stricter than the necessary conditions , are given ; and with them two distinct forms of explicit solution . these two forms are associated at first with different ranges of the parameters , but their validity in the complementary ranges is also discussed . before giving the existence theory a digression is made on a subsidiary integral equation . corresponding theorems for another integral equation resembling the main one are deduced from some of the previous theorems . two more equations of similar form , less closely related , will be considered in another paper . special cases of some of these four integral equations have been considered recently by erdelyi , higgins , wimp and others . story_separator_special_tag this paper is a sequel to one with a similar title to appear in the proceedings of the edinburgh mathematical society . explicit solutions are found for two more integral equations of similar form ; and also conditions necessary and sufficient for existence , and sufficient for uniqueness , of solutions . these theorems are preceded by several preparatory theorems on fractional integrals with origin , including integrals of purely imaginary order .
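for orientation in the hypergeometric abstracts above and below , recall the generalized hypergeometric series ( the standard definition , with $(a)_k = a(a+1)\cdots(a+k-1)$ the pochhammer symbol ) :

$$ {}_pF_q(a_1,\dots,a_p ; b_1,\dots,b_q ; x) = \sum_{k=0}^{\infty} \frac{(a_1)_k \cdots (a_p)_k}{(b_1)_k \cdots (b_q)_k} \, \frac{x^k}{k!} \, . $$

a representative shape of the first-kind equations with hypergeometric kernel discussed above is

$$ \int_0^x (x-t)^{c-1} \, {}_2F_1\!\left(a , b ; c ; 1-\frac{t}{x}\right) f(t)\,dt = g(x) , \qquad x > 0 , $$

which should be read as an illustrative form under our own choice of kernel argument , not as the exact equation of any one of the papers cited .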
story_separator_special_tag providing basic information about the properties of the radon transform , this book contains examples and documents a wide variety of applications . it offers guidance to literature related to the transform , and is aimed at those with a basic undergraduate background in mathematics . story_separator_special_tag although it is still a work in progress , central and eastern europe 's transition to democracy and markets shows signs of durability . comparison of the post-1989 period to the interwar years suggests the critical importance of a supportive outside environment and popular forbearance in the achievement and institutionalization of real change . story_separator_special_tag we find convergent double series expansions for legendre 's third incomplete elliptic integral valid in overlapping subdomains of the unit square . truncated expansions provide asymptotic approximations in the neighborhood of the logarithmic singularity ( 1,1 ) if one of the variables approaches this point faster than the other . each approximation is accompanied by an error bound . story_separator_special_tag we find two convergent series expansions for legendre 's first incomplete elliptic integral f ( λ , k ) in terms of recursively computed elementary functions . both expansions are valid at every point of the unit square 0 < λ , k < 1 . truncated expansions yield asymptotic approximations for f ( λ , k ) as λ and/or k tend to unity , including the case when the logarithmic singularity λ = k = 1 is approached from any direction . explicit error bounds are given at every order of approximation . for the reader 's convenience we present explicit expressions for low-order approximations and numerical examples to illustrate their accuracy . our derivation is based on rearrangements of some known double series expansions , hypergeometric summation algorithms and inequalities for hypergeometric functions . story_separator_special_tag we find two-sided inequalities for the generalized hypergeometric function of the form $ {}_{q+1}F_q(-x) $ with positive parameters restricted by certain additional conditions . both lower and upper bounds agree with the value of $ {}_{q+1}F_q(-x) $ at the endpoints of the positive semi-axis and are asymptotically precise at one of the endpoints . the inequalities are derived from a theorem asserting the monotony of the quotient of two generalized hypergeometric functions with shifted parameters . the proofs hinge on a generalized stieltjes representation of the generalized hypergeometric function .
this representation also provides yet another method to deduce the second thomae relation for $ {}_3F_2(1) $ and leads to an integral representation of $ {}_4F_3(x) $ in terms of the appell function $ F_3 $ . in the last section of the paper we list some open questions and conjectures . story_separator_special_tag this quarterly report covers the activities of catalytic multi-stage liquefaction of coal during the period july 1 -- september 30 , 1994 , at hydrocarbon research , inc. in lawrenceville and princeton , new jersey . this doe contract period is from december 8 , 1992 to december 7 , 1994. the overall objective of this program is to produce liquid fuels from coal by direct liquefaction at a cost that is competitive with conventional fuels . specifically , this continuous bench-scale program contains provisions to examine new ideas in areas such as : low temperature pretreatments , more effective catalysts , on-line hydrotreating , new coal feedstocks , other hydrogen sources , more concentrated coal feeds and other highly responsive process improvements while assessing the design and economics of the bench-scale results . this quarterly report covers work on laboratory scale studies , continuous bench-scale operations , technical assessment and project management . story_separator_special_tag background by national statistics , japanese ischemic heart disease ( ihd ) mortality is one of the lowest of all industrialized countries , and the proportion of deaths due to heart failure in heart disease is the highest . there may be a difference in diagnostic preference between japan and other industrialized countries . methods and results ihd deaths according to the death certificates were reevaluated with world health organization monica criteria for those 25 to 74 years old by use of clinical and police records in a japanese city with a population of 347,000. their cause of death was given on the death certificates as ihd ( international classification of diseases [ icd ] , ninth revision , codes 410-414 ) , heart failure ( 428 ) , or other heart diseases ( 393-405 , 415-427 , 429 ) in 1984 through 1986. some deaths in 1985 through 1986 from stroke ( 430-438 ) or other diseases ( 250 , 272 , 278 , 440-448 , 797-799 ) were added . of 409 subjects , 397 ( 97 % ) could be examined . reevaluation of the 106 deaths originally diagnosed as ihd yielded 73 ihds and 11 sudden story_separator_special_tag the supreme court , sitting as court of revision and composed of the vice-president of the supreme court kinzel as presiding judge and the supreme court judges dr. hule , dr. warta , dr. klinger and mag. engelmaier , in the case of the plaintiff peter p * * * , merchant , 6020 innsbruck , hofgasse 4 , represented by dr. walter hofbauer , dr. helmut rantner and dr. walter kerle , attorneys in innsbruck , against the defendants 1 . ) gertrude o * * * , housewife , 6020 innsbruck , hofgasse 4 , represented by dr. hermann r * * * , attorney in innsbruck , and 2 . ) monika i * * * , teacher , 6600 reutte , schulstrase 3 , represented by herbert hillebrand and dr. walter heel , attorneys in innsbruck , concerning the dissolution of a co-ownership , upon the revision of the defendants against the judgment of the higher regional court of innsbruck as court of appeal of 9 april 1986 , gz 5 r 73/86-29 , by which , following the defendants ' appeal , the judgment of the regional court of innsbruck of 13
december 1985 , gz 11 cg 427/84-20 , was affirmed , has ruled in closed session : story_separator_special_tag burrows , p. m. 1987. improved estimation of pathogen transmission rates by group testing . phytopathology 77:363-365 . estimation of infection rates or probabilities of disease transmission is improved by adopting an alternative to maximum likelihood estimation with superior bias and mean square error properties . this improves the efficiency of group testing and extends the range of conditions where group testing is more efficient than individual testing . a simple formulation of optimal group size is presented for situations where the number of test plants is fixed by resource limitations . additional key words : multiple transfer designs , pathogen transmission , vectors . swallow ( 3 ) recently discussed the merits of group tests ( multiple vector transfers ) when estimating individual pathogen transmission rates . a single group test consists of transferring k vectors to each of n noninfected test plants . the familiar estimate log [ ( r + 0.5 ) / ( n - r + 0.5 ) ] for logit ( θ ) = log [ θ / ( 1 - θ ) ] . application of that approach to the present problem begins with [ ( r + a ) / ( n + b ) ] ^ ( 1/k ) instead of the maximum story_separator_special_tag this report addresses the long-term results of nonoperative treatment for fractures of the thoracolumbar spine . forty-two patients meeting specified inclusion criteria were contacted and completed questionnaires . in all cases , nonoperative treatment was the only treatment received . the average time from injury to follow-up was 20.2 years ( range , 11 to 55 years ) . the average age at follow-up was 43 years ( range , 28 to 70 years ) . there were 31 men and 11 women in this series . seventy-one percent of the injuries were the result of motor vehicle accidents . the most common sites of injury were t12 l2 , which accounted for 64 % of the injuries . seventy-eight percent of the patients had no neurologic deficits at the time of injury . at follow-up , the average back pain score was 3.5 , with 0 being no pain at all and 10 being very severe pain . no patient demonstrated a decrease in their neurologic status at follow-up , and no patient required narcotic medication for pain control . eighty-eight percent of patients were able to work at their usual level of activity . follow-up radiographs revealed an story_separator_special_tag between july 1978 and january 1988 , 111 of 210 pancreas transplants were in nonuremic , nonkidney ( nunk ) recipients in whom complications of diabetes were judged more serious than the potential side effects of antirejection therapy . in all nunk cases , the 1-year patient survival rate ( psr ) and graft survival rate ( gsr ) were 90 % and 39 % . since november 1984 , 1-year psr and gsr for 62 nunk recipients of pancreas transplants alone ( pta ) were 93 % and 48 % , compared to 89 % and 37 % for 28 pancreas transplants after ( pak ) and 89 % and 73 % for 20 simultaneous ( spk ) with a kidney transplant . the 1-year gsr for 1984-88 technically successful ( ts ) pta cases ( n = 47 ) was 63 % , versus 75 % for pak ( n = 13 ) and 86 % for spk ( n = 17 ) cases . the 1-year gsrs , by technique and source for 1984-88 pta cases , were 58 % for bladder-drained cadaver ( n = 30 ) , 51 % for enteric-drained related ( n = story_separator_special_tag cortisone and acth in clinical practice . edited by w. s. c. copeman . 1953. pp . xi+255 , 29 illus . butterworth , london . ( 25s .
) in the words of the editor , this is the first book to endeavour to assess the place of these hormones in clinical practice . it is a matter of opinion whether the time is ripe for such an assessment , but dr. copeman and his colleagues have performed a very useful task in condensing the vast literature on the subject into manageable form , and adding their personal experiences in the use of acth and cortisone in a wide variety of diseases . there are chapters devoted to rheumatic and collagen diseases , diseases of the eye , endocrine disorders , respiratory and allergic diseases , skin diseases , and diseases of the haemopoietic system . dr. copeman and dr. o. savage open the section on rheumatic and collagen diseases with an excellent review of the chemical nature and physiological effects of these hormones , and give clear indications as to their value and limitations in treating this group of conditions . they feel that acth and cortisone will always story_separator_special_tag in this paper we consider fourier multipliers for $ l^p $ $ ( p > 1 ) $ on chebli-trimeche hypergroups and establish a version of hormander 's multiplier theorem . as applications we give some results concerning the riesz potentials and oscillating multipliers . story_separator_special_tag in this paper we consider the modified wave equation associated with a class of radial laplacians l generalizing the radial part of the laplace-beltrami operator on hyperbolic spaces or damek-ricci spaces . we show that the huygens ' principle and the equipartition of energy hold if the inverse of the harish-chandra c-function is a polynomial and that these two properties hold asymptotically otherwise . similar results were established previously by branson , olafsson and schlichtkrull in the case of noncompact symmetric spaces . story_separator_special_tag abstract we consider a singular differential-difference operator on the real line . we construct a pair of integral transforms which transmute it into the operator d/dx . using the properties of these transmutation operators , we define a new harmonic analysis on r corresponding to this operator .
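the transmutation abstracts collected above all revolve around one intertwining identity , which is worth stating once : an operator t transmutes a into b when t a = b t on a suitable function space . a minimal concrete sketch , using the standard sturm-liouville model case from the gelfand-levitan theory rather than the construction of any one paper above : for $ A = \frac{d^2}{dx^2} - q(x) $ and $ B = \frac{d^2}{dx^2} $ there is a volterra-type operator

$$ (Tf)(x) = f(x) + \int_0^x K(x,t)\, f(t)\, dt , \qquad A\,T = T\,B , $$

whose kernel k is determined by the potential q ; applying t to the elementary eigenfunctions $ \cos(\lambda x) $ of b then yields eigenfunctions of a , which is the mechanism behind the generalized translations , convolutions and fourier transforms recalled in these abstracts .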
string-number conversion is an important class of constraints needed for the symbolic execution of string-manipulating programs . in particular , solving string constraints with string-number conversion is necessary for the analysis of scripting languages such as javascript and python , where string-number conversion is a part of the definition of the core semantics of these languages . however , solving this type of constraint is very challenging for state-of-the-art solvers . we propose in this paper an approach that can efficiently support both string-number conversion and other common types of string constraints . experimental results show that it significantly outperforms other state-of-the-art tools on benchmarks that involve string-number conversion . story_separator_special_tag we describe a uniform and efficient framework for checking the satisfiability of a large class of string constraints . the framework is based on the observation that both satisfiability and unsatisfiability of common constraints can be demonstrated through witnesses with simple patterns . these patterns are captured using flat automata , each of which consists of a sequence of simple loops . we build a counter-example guided abstraction refinement ( cegar ) framework which contains both an under- and an over-approximation module . the flow of information between the modules allows the precision to be increased in an automatic manner . we have implemented the framework as a tool and performed extensive experimentation that demonstrates both the generality and efficiency of our method . story_separator_special_tag we introduce trau , an smt solver for an expressive constraint language , including word equations , length constraints , context-free membership queries , and transducer constraints . the satisfiability problem for such a class of constraints is in general undecidable . the key idea behind trau is a technique called flattening , which searches for satisfying assignments that follow simple patterns . trau implements a counter-example guided abstraction refinement ( cegar ) framework which contains both an under- and an over-approximation module . the approximations are refined in an automatic manner by information flow between the two modules . the technique implemented by trau can handle a rich class of string constraints and has better performance than state-of-the-art string solvers . story_separator_special_tag we present a decision procedure for a logic that combines ( i ) word equations over string variables denoting words of arbitrary lengths , together with ( ii ) constraints on the length of words , and ( iii ) the regular languages to which words belong . decidability of this general logic is still open . our procedure is sound for the general logic , and a decision procedure for a particularly rich fragment that restricts the form in which word equations are written . in contrast to many existing procedures , our method does not make assumptions about the maximum length of words . we have developed a prototypical implementation of our decision procedure , and integrated it into a cegar-based model checker for the analysis of programs encoded as horn clauses . our tool is able to automatically establish the correctness of several programs that are beyond the reach of existing methods . story_separator_special_tag we present version 1.0 of the norn smt solver for string constraints .
norn is a solver for an expressive constraint language , including word equations , length constraints , and regular membership queries . as a feature distinguishing norn from other smt solvers , norn is a decision procedure under the assumption of a set of acyclicity conditions on word equations , without any restrictions on the use of regular membership . story_separator_special_tag we present drex , a declarative language that can express all regular string-to-string transformations , and can still be efficiently evaluated . the class of regular string transformations has a robust theoretical foundation including multiple characterizations , closure properties , and decidable analysis questions , and admits a number of string operations such as insertion , deletion , substring swap , and reversal . recent research has led to a characterization of regular string transformations using a primitive set of function combinators analogous to the definition of regular languages using regular expressions . while these combinators form the basis for the language drex proposed in this paper , our main technical focus is on the complexity of evaluating the output of a drex program on a given input string . it turns out that the natural evaluation algorithm involves dynamic programming , leading to complexity that is cubic in the length of the input string . our main contribution is identifying a consistency restriction on the use of combinators in drex programs , and a single-pass evaluation algorithm for consistent programs with time complexity that is linear in the length of the input string and polynomial in the size story_separator_special_tag dynamic symbolic execution ( dse ) combines concrete and symbolic execution , usually for the purpose of generating good test suites automatically . it relies on constraint solvers to solve path conditions and to generate new inputs to explore . dse tools usually make use of smt solvers for constraint solving . in this paper , we show that constraint programming ( cp ) is a powerful alternative or complementary technique for dse . specifically , we apply cp techniques for dse of javascript , the de facto standard for web programming . we capture the javascript semantics with minizinc and integrate this approach into a tool we call aratha . we use g-strings , a cp solver equipped with string variables , for solving path conditions , and we compare the performance of this approach against state-of-the-art smt solvers . experimental results , in terms of both speed and coverage , show the benefits of our approach , thus opening new research vistas for using cp techniques in the service of program analysis . story_separator_special_tag strings are extensively used in modern programming languages and constraints over strings of unknown length occur in a wide range of real-world applications such as software analysis and verification , testing , model checking , and web security . nevertheless , practically no cp solver natively supports string constraints . we introduce string variables and a suitable set of string constraints as builtin features of the minizinc modelling language . furthermore , we define an interpreter for converting a minizinc model with strings into a flatzinc instance relying on only integer variables . this provides a user-friendly interface for modelling combinatorial problems with strings , and enables both string and non-string solvers to actually solve such problems .
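the constraint language that the norn and trau abstracts above describe ( word equations , length constraints , regular membership , and string-number conversion from the first abstract in this group ) is also exposed by mainstream smt solvers , so a minimal sketch is easy to state . the following python fragment uses the z3-solver package with toy constraints of our own choosing ; it illustrates the constraint language , and is not code from any of the tools above . note that z3 's str.to_int semantics map a non-numeric string to -1 , so the last constraint forces a digits-only model .

from z3 import (Concat, InRe, Length, Re, Solver, Star, StrToInt,
                String, StringVal, sat)

x, y, z = String('x'), String('y'), String('z')
s = Solver()
s.add(Concat(x, y) == StringVal("ababcd"))  # word equation: x . y = "ababcd"
s.add(Length(x) == 4)                       # length constraint: |x| = 4
s.add(InRe(x, Star(Re("ab"))))              # regular membership: x in (ab)*
s.add(StrToInt(z) == 42, Length(z) == 3)    # string-number conversion
if s.check() == sat:
    m = s.model()
    print(m[x], m[y], m[z])                 # e.g. "abab", "cd", "042"

the point of the example is the interaction the papers emphasize : the regular membership and the length constraint together pin x to `` abab '' , which the word equation then propagates to y , while the conversion constraint admits the leading-zero witness `` 042 '' .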
story_separator_special_tag string constraint solving is an important emerging field , given the ubiquity of strings over different fields such as formal analysis , automated testing , database query processing , and cybersecurity . this paper highlights the current state-of-the-art for string constraint solving , and identifies future challenges in this field . story_separator_special_tag dashed strings have been recently proposed in constraint programming to represent the domain of string variables when solving combinatorial problems over strings . this approach showed promising performance on some classes of string problems , involving constraints like string equality and concatenation . however , there are a number of string constraints for which no propagator has yet been defined . in this paper , we show how to propagate lexicographic ordering ( lex ) , find and replace with dashed strings . all of these are fundamental string operations : lex is the natural total order over strings , while find and replace are frequently used in string manipulation . we show that these propagators , which we implemented in the g-strings solver , allow us to be competitive with state-of-the-art approaches . story_separator_special_tag using dashed strings is an approach recently introduced in constraint programming ( cp ) to represent the domain of string variables , when solving combinatorial problems with string constraints . one of the most important string constraints is that of regular membership : regular ( x , r ) imposes string x to be a member of the regular language defined by automaton r . the regular constraint is useful for specifying complex constraints on fixed-length finite sequences , and regularly appears in cp models . dealing with regular is also desirable in software testing and verification , because regular expressions are often used in modern programming languages for pattern matching . in this paper , we define a regular propagator for dashed string solvers . we show that this propagator , implemented in the g-strings solver , is substantially better than the current state-of-the-art . story_separator_special_tag solving constraints over strings is an emerging important field . recently , a constraint programming approach based on dashed strings has been proposed to enable a compact domain representation for potentially large bounded-length string variables . in this paper , we present a more efficient algorithm for propagating equality ( and related constraints ) over dashed strings . we call this propagation sweep-based . experimental evidence shows that sweep-based propagation is able to significantly outperform state-of-the-art approaches for string constraint solving . story_separator_special_tag dashed strings are a formalism for modelling the domain of string variables when solving combinatorial problems with string constraints . in this work we focus on ( variants of ) the replace constraint , which aims to find the first occurrence of a query string in a target string , and ( possibly ) replaces it with a new string . we define a replace propagator which can also handle replace-last ( for replacing the last occurrence ) and replace-all ( for replacing all the occurrences ) . empirical results clearly show that string constraint solving can draw great benefit from this approach .
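to make the dashed-string idea above concrete : a dashed string is a concatenation of blocks $ S_1^{l_1,u_1} \cdots S_k^{l_k,u_k} $ , where each block allows between $ l_i $ and $ u_i $ characters drawn from the set $ S_i $ . the toy python sketch below uses our own block encoding and function name ( it is not g-strings code ) and checks whether a concrete string belongs to the language a dashed string denotes , which is the basic test that equality and regular propagators refine .

def member(ds, s):
    # ds: list of (charset, lo, hi) blocks; s: a concrete string.
    # positions = indices of s reachable after consuming the blocks so far.
    positions = {0}
    for chars, lo, hi in ds:
        nxt = set()
        for p in positions:
            if lo == 0:
                nxt.add(p)            # the block may contribute nothing
            k, q = 0, p
            while q < len(s) and k < hi and s[q] in chars:
                k += 1
                q += 1
                if k >= lo:
                    nxt.add(q)        # an admissible number of chars consumed
        positions = nxt
        if not positions:
            return False              # no block-by-block decomposition survives
    return len(s) in positions

# x = {a,b}^(1,3) {c}^(0,1) denotes e.g. "a", "ba", "abc" but not "cc"
x = [(set("ab"), 1, 3), (set("c"), 0, 1)]
print(member(x, "abc"), member(x, "cc"))  # True False

the same block representation is what makes sweep-based equality propagation cheap : the propagator reasons over at most k blocks per variable instead of over a position-by-position unfolding whose size grows with the maximum string length .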
story_separator_special_tag string processing is ubiquitous across computer science , and arguably more so in web programming . in order to reason about programs manipulating strings we need to solve constraints over strings . in constraint programming , the only approaches we are aware of for representing string variables having bounded yet possibly unknown size degrade when the maximum possible string length becomes too large . in this paper , we introduce a novel approach that decouples the size of the string representation from its maximum length . the domain of a string variable is dynamically represented by a simplified regular expression that we call a dashed string , and the constraint solving relies on propagation of information based on equations between dashed strings . we implemented this approach in g-strings , a new string solver built on top of the gecode solver , which already shows some promising results . story_separator_special_tag most common vulnerabilities in web applications are due to string manipulation errors in input validation and sanitization code . string constraint solvers are essential components of program analysis techniques for detecting and repairing vulnerabilities that are due to string manipulation errors . for quantitative and probabilistic program analyses , checking the satisfiability of a constraint is not sufficient , and it is necessary to count the number of solutions . in this paper , we present a constraint solver that , given a string constraint , ( 1 ) constructs an automaton that accepts all solutions that satisfy the constraint , ( 2 ) generates a function that , given a length bound , gives the total number of solutions within that bound . our approach relies on the observation that , using an automata-based constraint representation , model counting reduces to path counting , which can be solved precisely . we demonstrate the effectiveness of our approach on a large set of string constraints extracted from real-world web applications . story_separator_special_tag recently , symbolic program analysis techniques have been extended to quantitative analyses using model counting constraint solvers . given a constraint and a bound , a model counting constraint solver computes the number of solutions for the constraint within the bound . we present a parameterized model counting constraint solver for string and numeric constraints . we first construct a multi-track deterministic finite state automaton that accepts all solutions to the given constraint . we limit the numeric constraints to linear integer arithmetic , and for non-regular string constraints we over-approximate the solution set . counting the number of accepting paths in the generated automaton solves the model counting problem . our approach is parameterized in the sense that we do not assume a finite domain size during automata construction , resulting in a potentially infinite set of solutions , and our model counting approach works for arbitrarily large bounds . we experimentally demonstrate the effectiveness of our approach on a large set of string and numeric constraints extracted from software applications . we experimentally compare our tool to five existing model counting constraint solvers for string and numeric constraints and demonstrate that our tool is as efficient and story_separator_special_tag dynamic symbolic execution ( dse ) is a well-known technique for automatically generating tests to achieve higher levels of coverage in a program .
two key ideas of dse are : ( 1 ) to seed symbolic execution by executing a program on an initial input ; ( 2 ) to use concrete values from the program execution in place of symbolic expressions whenever symbolic reasoning is hard or not desired . we describe dse for a simple core language and then present a minimalist implementation of dse for python ( in python ) that follows this basic recipe . the code is available at https : //www.github.com/thomasjball/pyexz3/ ( tagged v1.0 ) and has been designed to make it easy to experiment with and extend . story_separator_special_tag web applications are ubiquitous , perform mission-critical tasks , and handle sensitive user data . unfortunately , web applications are often implemented by developers with limited security skills , and , as a result , they contain vulnerabilities . most of these vulnerabilities stem from the lack of input validation . that is , web applications use malicious input as part of a sensitive operation , without having properly checked or sanitized the input values prior to their use . past research on vulnerability analysis has mostly focused on identifying cases in which a web application directly uses external input in critical operations . however , little research has been performed to analyze the correctness of the sanitization process . thus , whenever a web application applies some sanitization routine to potentially malicious input , the vulnerability analysis assumes that the result is innocuous . unfortunately , this might not be the case , as the sanitization process itself could be incorrect or incomplete . in this paper , we present a novel approach to the analysis of the sanitization process . more precisely , we combine static and dynamic analysis techniques to identify faulty sanitization procedures that can story_separator_special_tag we present an automated approach for detecting and quantifying side channels in java programs , which uses symbolic execution , string analysis and model counting to compute information leakage for a single run of a program . we further extend this approach to compute information leakage for multiple runs for a type of side channels called segmented oracles , where the attacker is able to explore each segment of a secret ( for example each character of a password ) independently . we present an efficient technique for segmented oracles that computes information leakage for multiple runs using only the path constraints generated from a single-run symbolic execution . our implementation uses the symbolic execution tool symbolic pathfinder ( spf ) , the smt solver z3 , and two model counting constraint solvers , latte and abc . although latte has been used before for analyzing numeric constraints , in this paper , we present an approach for using latte for analyzing string constraints . we also extend the string constraint solver abc for analysis of both numeric and string constraints , and we integrate abc in spf , enabling quantitative symbolic string analysis . story_separator_special_tag bioinformatics aims at applying computer science methods to the wealth of data collected in a variety of experiments in life sciences ( e.g . cell and molecular biology , biochemistry , medicine , etc . ) in order to help analysing such data and eliciting new knowledge from it . in addition to string processing , bioinformatics is often identified with machine learning used for mining the large banks of bio-data available in electronic format , namely in a number of web servers .
nevertheless , there are opportunities of applying other computational techniques in some bioinformatics applications . in this paper , we report the application of constraint programming to address two structural bioinformatics problems , protein structure prediction and protein interaction ( docking ) . the efficient application of constraint programming requires innovative modelling of these problems , as well as the development of advanced propagation techniques ( e.g . global reasoning and propagation ) , which were adopted in chemera , a system that is currently used to support biochemists in their research . story_separator_special_tag this report documents the program and the outcomes of dagstuhl seminar 19062 `` bringing cp , sat and smt together : next challenges in constraint solving '' , whose main goals were to bring together leading researchers in the different subfields of automated reasoning and constraint solving , foster greater communication between these communities and exchange ideas about new research directions . constraint solving is at the heart of several key technologies , including program analysis , testing , formal methods , compilers , security analysis , optimization , and ai . during the last two decades , constraint solving has been highly successful and transformative : on the one hand , sat/smt solvers have seen a significant performance improvement with a concomitant impact on software engineering , formal methods and security ; on the other hand , cp solvers have also seen a dramatic performance improvement , with deep impact in ai and optimization . these successes bring new applications together with new challenges , not yet met by any current technology . the seminar brought together researchers from sat , smt and cp along with application researchers in order to foster cross-fertilization of ideas , deepen interactions , story_separator_special_tag satisfiability modulo theories ( smt ) refers to the problem of determining whether a first-order formula is satisfiable with respect to some logical theory . solvers based on smt are used as back-end engines in model-checking applications such as bounded , interpolation-based , and predicate-abstraction-based model checking . after a brief illustration of these uses , we survey the predominant techniques for solving smt problems with an emphasis on the lazy approach , in which a propositional satisfiability ( sat ) solver is combined with one or more theory solvers . we discuss the architecture of a lazy smt solver , give examples of theory solvers , show how to combine such solvers modularly , and mention several extensions of the lazy approach . we also briefly describe the eager approach in which the smt problem is reduced to a sat problem . finally , we discuss how the basic framework for determining satisfiability can be extended with additional functionality such as producing models , proofs , unsatisfiable cores , and interpolants . story_separator_special_tag we investigate the historical roots of the field of combinatorics on words . they comprise applications and interpretations in algebra , geometry and combinatorial enumeration . these considerations gave rise to early results such as those of axel thue at the beginning of the 20th century . other early results were obtained as a by-product of investigations on various combinatorial objects . 
for example , paths in graphs are encoded by words in a natural way , and conversely , the cayley graph of a group or a semigroup encodes words by paths . we give in this text an account of this two-sided interaction . story_separator_special_tag we present a new string smt solver , z3str3 , that is faster than its competitors z3str2 , norn , cvc4 , s3 , and s3p over a majority of three industrial-strength benchmarks , namely , kaluza , pisa , and ibm appscan . z3str3 supports string equations , linear arithmetic over length function , and regular language membership predicate . the key algorithmic innovation behind the efficiency of z3str3 is a technique we call theory-aware branching , wherein we modify z3 's branching heuristic to take into account the structure of theory literals to compute branching activities . in the traditional dpll ( t ) architecture , the structure of theory literals is hidden from the dpll ( t ) sat solver because of the boolean abstraction constructed over the input theory formula . by contrast , the theory-aware technique presented in this paper exposes the structure of theory literals to the dpll ( t ) sat solver 's branching heuristic , thus enabling it to make much smarter decisions during its search than otherwise . as a consequence , z3str3 has better performance than its competitors . story_separator_special_tag motivated by program analysis , security , and verification applications , we study various fragments of a rich first-order quantifier-free ( qf ) theory $ t_ { lre , n , c } $ over regular expression ( regex ) membership predicate , linear integer arithmetic over string length , string-number conversion predicate , and string concatenation . our contributions are the following . on the theoretical side , we prove a series of ( un ) decidability and complexity theorems for various fragments of $ t_ { lre , n , c } $ , some of which have been open for several years . on the practical side , we present a novel length-aware decision procedure for the qf first-order theory $ t_ { lre } $ with regex membership predicate and linear arithmetic over string length . the crucial insight that enables our algorithm to scale for instances obtained from practical applications is that these instances contain a wealth of information about upper and lower bounds on lengths of strings which can be used to simplify operations on automata representing regexes . we showcase the power of our algorithm via an extensive empirical evaluation over a large story_separator_special_tag parameter tampering attacks are dangerous to a web application whose server fails to replicate the validation of user-supplied data that is performed by the client . malicious users who circumvent the client can capitalize on the missing server validation . in this paper , we describe waptec , a tool that is designed to automatically identify parameter tampering vulnerabilities and generate exploits by construction to demonstrate those vulnerabilities . waptec involves a new approach to whitebox analysis of the server 's code . we tested waptec on six open source applications and found previously unknown vulnerabilities in every single one of them . story_separator_special_tag we discuss the problem of path feasibility for programs manipulating strings using a collection of standard string library functions . we prove results on the complexity of this problem , including its undecidability in the general case and decidability of some special cases . 
in the context of test-case generation , we are interested in an efficient finite model finding method for string constraints . to this end we develop a two-tier finite model finding procedure . first , an integer abstraction of the string constraints is passed to an smt ( satisfiability modulo theories ) solver . the abstraction is either unsatisfiable , or the solver produces a model that fixes the lengths of enough strings to reduce the entire problem to a finite domain . the resulting fixed-length string constraints are then solved in a second phase . we implemented the procedure in a symbolic execution framework , report on the encouraging results and discuss directions for improving the method further . story_separator_special_tag in this paper , we introduce stringfuzz : a modular smt-lib problem instance transformer and generator for string solvers . we supply a repository of instances generated by stringfuzz in smt-lib 2.0/2.5 format . we systematically compare z3str3 , cvc4 , z3str2 , and norn on groups of such instances , and identify those that are particularly challenging for some solvers . we briefly explain our observations and show how stringfuzz helped discover causes of performance degradations in z3str3 . story_separator_special_tag sat modulo theories ( smt ) consists of deciding the satisfiability of a formula with respect to a decidable background theory , such as linear integer arithmetic , bit-vectors , etc . , in first-order logic with equality . smt has its roots in the field of verification . it is known that the sat technology offers an interesting , efficient and scalable method for constraint solving , as many experiments have shown . although there already exist some results pointing out the adequacy of smt techniques for constraint solving , there are no available tools to extensively explore such adequacy . in this paper we introduce a tool for translating flatzinc ( minizinc intermediate code ) instances of constraint satisfaction problems to the standard smt-lib language . it can be used for deciding satisfiability as well as for optimization . the tool determines the required logic for solving each instance . the obtained results suggest that smt can be effectively used to solve csps . story_separator_special_tag a new form of sat-based symbolic model checking is described . instead of unrolling the transition relation , it incrementally generates clauses that are inductive relative to ( and augment ) stepwise approximate reachability information . in this way , the algorithm gradually refines the property , eventually producing either an inductive strengthening of the property or a counterexample trace . our experimental studies show that induction is a powerful tool for generalizing the unreachability of given error states : it can refine away many states at once , and it is effective at focusing the proof search on aspects of the transition system relevant to the property . furthermore , the incremental structure of the algorithm lends itself to a parallel implementation . story_separator_special_tag symbolic program analysis techniques rely on satisfiability-checking constraint solvers , while quantitative program analysis techniques rely on model-counting constraint solvers . hence , the efficiency of satisfiability checking and model counting is crucial for the efficiency of modern program analysis techniques . in this paper , we present a constraint caching framework to expedite potentially expensive satisfiability and model-counting queries .
integral to this framework is our new constraint normalization procedure under which the cardinality of the solution set of a constraint , but not necessarily the solution set itself , is preserved . we extend these constraint normalization techniques to string constraints in order to support analysis of string-manipulating code . a group-theoretic framework which generalizes earlier results on constraint normalization is used to express our normalization techniques . we also present a parameterized caching approach where , in addition to storing the result of a model-counting query , we also store a model-counter object in the constraint store that allows us to efficiently recount the number of satisfying models for different maximum bounds . we implement our caching framework in our tool cashew , which is built as an extension of the green caching framework , and integrate it story_separator_special_tag symbol , and maps the rest of the symbols to themselves . an alphabet-abstraction-transducer over $ \sigma_1 $ and $ \sigma_2 $ is a 2-track dfa $ a_ { \sigma_1 , \sigma_2 } = \langle q , \sigma_1 \times \sigma_2 , \delta , q_0 , f \rangle $ , where $ q = \ { q_0 , sink \ } $ , $ f = \ { q_0 \ } $ , and $ \forall a \in \sigma_2 : \delta ( q_0 , ( a , a ) ) = q_0 $ , $ \forall a \in \sigma_1 \setminus \sigma_2 : \delta ( q_0 , ( a , \star ) ) = q_0 $ . now , using the alphabet-abstraction-transducer , we can compute the abstraction of a dfa as a post-image computation , and we can compute the concretization of a dfa as a pre-image computation . let $ a $ be a single-track dfa over $ \sigma_1 $ with track $ x $ . $ a_ { \sigma_1 , \sigma_2 } ( x , x' ) $ denotes the alphabet transducer over $ \sigma_1 $ and $ \sigma_2 $ where $ x $ and $ x' $ correspond to the input and output tracks , respectively . we define the abstraction and concretization functions on automata as ( where $ x' \mapsto x $ denotes renaming track $ x' $ as $ x $ ) : $ \alpha_ { \sigma_1 , \sigma_2 } ( a ) \equiv ( \exists x : a \cap a_ { \sigma_1 , \sigma_2 } ( x , x' ) ) x' \mapsto x $ , and $ \gamma_ { \sigma_1 , \sigma_2 } ( a ) \equiv \exists x' : $ story_separator_special_tag models play a key role in assuring software quality in the model-driven approach . precise models usually require the definition of well-formedness rules to specify constraints that can not be expressed graphically . the object constraint language ( ocl ) is a de-facto standard to define such rules . techniques that check the satisfiability of such models and find corresponding instances of them are important in various activities , such as model-based testing and validation . several tools for these activities have been developed , but to our knowledge , none of them supports ocl string operations on a scale that is sufficient for , e.g. , model-based testing . as , in contrast , many industrial models do contain such operations , there is evidently a gap . we present a lightweight solver that is specifically tailored to generate large solutions for tractable string constraints in model finding , and that is suited to directly express the main operations of the ocl datatype string . it is based on constraint logic programming ( clp ) and constraint handling rules , and can be seamlessly combined with other constraint solvers in clp . we have integrated our solver into the emftocsp story_separator_special_tag the theory of strings with concatenation has been widely argued as the basis of constraint solving for verifying string-manipulating programs .
however , this theory is far from adequate for expressing many string constraints that are also needed in practice ; for example , the use of regular constraints ( pattern matching against a regular expression ) , and the string-replace function ( replacing either the first occurrence or all occurrences of a `` pattern '' string constant/variable/regular expression by a `` replacement '' string constant/variable ) , among many others . both regular constraints and the string-replace function are crucial for such applications as analysis of javascript ( or more generally html5 applications ) against cross-site scripting ( xss ) vulnerabilities , which motivates us to consider a richer class of string constraints . the importance of the string-replace function ( especially the replace-all facility ) is increasingly recognised , which can be witnessed by the incorporation of the function in the input languages of several string constraint solvers . recently , it was shown that any theory of strings containing the string-replace function ( even the most restricted version where pattern/replacement strings are both constant strings ) becomes story_separator_special_tag the design and implementation of decision procedures for checking path feasibility in string-manipulating programs is an important problem , whose applications include symbolic execution and automated detection of cross-site scripting ( xss ) vulnerabilities . a ( symbolic ) path is a finite sequence of assignments and assertions ( i.e . without loops ) , and checking its feasibility amounts to determining the existence of inputs that yield a successful execution . we give two general semantic conditions which together ensure the decidability of path feasibility : ( 1 ) each assertion admits regular monadic decomposition , and ( 2 ) each assignment uses a ( possibly nondeterministic ) function whose inverse relation preserves regularity . we show these conditions are expressive since they are satisfied by a multitude of string operations . they also strictly subsume existing decidable string theories , and most existing benchmarks ( e.g . most of kaluza 's , and all of slog 's , stranger 's , and sloth 's ) . we give a simple decision procedure and an extensible architecture of a string solver in which a user may easily incorporate his/her own string functions . we show the general fragment has story_separator_special_tag this is a survey on combinatorics of words to appear as a chapter in handbook of formal languages . the topics covered in detail are : defect effect , equations as properties of words , periodicity , finiteness conditions , avoidability and subword complexity . story_separator_special_tag the static determination of approximated values of string expressions has many potential applications . for instance , approximated string values may be used to check the validity and security of generated strings , as well as to collect useful string properties . previous string analysis efforts have been focused primarily on the maximization of the precision of regular approximations of strings . these methods have not been completely satisfactory due to the difficulties in dealing with heap variables and context sensitivity . in this paper , we present an abstract-interpretation-based solution that employs a heuristic widening method . the presented solution is implemented and compared to jsa .
in most cases , our solution gives results as precise as those produced by previous methods , and it makes the additional contribution of easily dealing with heap variables and context sensitivity in a very natural way . we anticipate that our method will be employed in practical applications . story_separator_special_tag we perform static analysis of java programs to answer a simple question : which values may occur as results of string expressions ? the answers are summarized for each expression by a regular language that is guaranteed to contain all possible values . we present several applications of this analysis , including statically checking the syntax of dynamically generated expressions , such as sql queries . our analysis constructs flow graphs from class files and generates a context-free grammar with a nonterminal for each string expression . the language of this grammar is then widened into a regular language through a variant of an algorithm previously used for speech recognition . the collection of resulting regular languages is compactly represented as a special kind of multi-level automaton from which individual answers may be extracted . if a program error is detected , examples of invalid strings are automatically produced . we present extensive benchmarks demonstrating that the analysis is efficient and produces results of useful precision . story_separator_special_tag the main practical problem in model checking is the combinatorial explosion of system states commonly known as the state explosion problem . abstraction methods attempt to reduce the size of the state space by employing knowledge about the system and the specification in order to model only relevant features in the kripke structure . counterexample-guided abstraction refinement is an automatic abstraction method where , starting with a relatively small skeletal representation of the system to be verified , increasingly precise abstract representations of the system are computed . the key step is to extract information from false negatives ( `` spurious counterexamples '' ) due to over-approximation . story_separator_special_tag we present a refined segmentation abstract domain for the analysis of strings in the c programming language , properly extending the parametric segmentation approach to array representation introduced by p. cousot et al . to the case of text values . in particular , we capture the so-called string of interest of an array of char , in order to distinguish well-formed string arrays . concrete and abstract semantics of the main c string.h header functions are worked out in full detail . story_separator_special_tag strings are widely used in modern programming languages in various scenarios . for instance , strings are used to build up structured query language ( sql ) queries that are then executed . malformed strings may lead to subtle bugs , and non-sanitized strings may raise security issues in an application . for these reasons , the application of static analysis to compute safety properties over string values at compile time is particularly appealing . in this article , we propose a generic approach for the static analysis of string values based on abstract interpretation . in particular , we design a suite of abstract semantics for strings , where each abstract domain tracks a different kind of information . we discuss the trade-off between efficiency and accuracy when using such domains to catch the properties of interest .
in this way , the analysis can be tuned at different levels of precision and efficiency , and it can address specific properties . story_separator_special_tag a program denotes computations in some universe of objects . abstract interpretation of programs consists in using that denotation to describe computations in another universe of abstract objects , so that the results of abstract execution give some information on the actual computations . an intuitive example ( which we borrow from sintzoff [ 72 ] ) is the rule of signs . the text -1515 * 17 may be understood to denote computations on the abstract universe { ( + ) , ( - ) , ( ± ) } where the semantics of arithmetic operators is defined by the rule of signs . the abstract execution -1515 * 17 → - ( + ) * ( + ) → ( - ) * ( + ) → ( - ) proves that -1515 * 17 is a negative number . abstract interpretation is concerned with a particular underlying structure of the usual universe of computations ( the sign , in our example ) . it gives a summary of some facets of the actual executions of a program . in general this summary is simple to obtain but inaccurate ( e.g . -1515 + 17 → - ( + ) + story_separator_special_tag even the fastest smt solvers have performance problems with regular expressions from real programs . because these performance issues often arise from the problem representation ( e.g . non-deterministic finite automata get determinized and regular expressions get unrolled ) , we revisit boolean finite automata , which allow for the direct and natural representation of any boolean combination of regular languages . by applying the ic3 model checking algorithm to boolean finite automata , not only can we efficiently answer emptiness and universality problems , but through an extension , we can decide satisfiability of multiple variable string membership problems . we demonstrate the resulting system 's effectiveness on a number of popular benchmarks and regular expressions . story_separator_special_tag we present woorpje , a string solver for bounded word equations ( i.e. , equations where the length of each variable is upper bounded by a given integer ) . our algorithm works by reformulating the satisfiability of bounded word equations as a reachability problem for nondeterministic finite automata , and then carefully encoding this as a propositional satisfiability problem , which we then solve using the well-known glucose sat-solver . this approach has the advantage of allowing for the natural inclusion of additional linear length constraints . our solver obtains reliable and competitive results and , remarkably , discovered several cases where state-of-the-art solvers exhibit faulty behaviour . story_separator_special_tag the study of word equations is a central topic in mathematics and theoretical computer science . recently , the question of whether a given word equation , augmented with various constraints/extensions , has a solution has gained critical importance in the context of string smt solvers for security analysis . we consider the decidability of this question in several natural variants and thus shed light on the boundary between decidability and undecidability for many fragments of the first-order theory of word equations and their extensions . in particular , we show that when extended with several natural predicates on words , the existential fragment becomes undecidable .
on the other hand , the positive $ \sigma_2 $ fragment is decidable , and in the case that at most one terminal symbol appears in the equations , remains so even when length constraints are added . moreover , if negation is allowed , it is possible to model arbitrary equations with length constraints using only equations containing a single terminal symbol and length constraints . finally , we show that deciding whether solutions exist for a restricted class of equations , augmented with many of the predicates leading story_separator_special_tag we describe an algorithm for automatic test input generation for database applications . given a program in an imperative language that interacts with a database through api calls , our algorithm generates both input data for the program as well as suitable database records to systematically explore all paths of the program , including those paths whose execution depends on data returned by database queries . our algorithm is based on concolic execution , where the program is run with concrete inputs and simultaneously also with symbolic inputs for both program variables as well as the database state . the symbolic constraints generated along a path enable us to derive new input values and new database records that can cause execution to hit uncovered paths . simultaneously , the concrete execution helps to retain precision in the symbolic computations by allowing dynamic values to be used in the symbolic executor . this allows our algorithm , for example , to identify concrete sql queries made by the program , even if these queries are built dynamically . the contributions of this paper are the following . we develop an algorithm that can track symbolic constraints across language boundaries and use those constraints story_separator_special_tag constraint handling rules ( chr ) are our proposal to allow more flexibility and application-oriented customization of constraint systems . chr are a declarative language extension especially designed for writing user-defined constraints . chr are essentially a committed-choice language consisting of multi-headed guarded rules that rewrite constraints into simpler ones until they are solved . in this broad survey we aim at covering all aspects of chr as they currently present themselves . going from theory to practice , we will define syntax and semantics for chr , introduce an important decidable property , confluence , of chr programs and define a tight integration of chr with constraint logic programming languages . this survey then describes implementations of the language before we review several constraint solvers , both traditional and nonstandard ones , written in the chr language . finally we introduce two innovative applications that benefited from using chr . story_separator_special_tag given the bytecode of a software system , is it possible to automatically generate attack signatures that reveal its vulnerabilities ? a natural solution would be symbolically executing the target system and constructing constraints for matching path conditions and attack patterns . clearly , the constraint solving technique is the key to the above research . this paper presents simple linear string equation ( sise ) , a formalism for specifying constraints on strings . sise uses finite state transducers to precisely model various regular replacement operations , which makes it applicable for analyzing text processing programs such as web applications .
we present a recursive algorithm that computes the solution pool of a sise . given the solution pool , a concrete variable solution can be generated . the algorithm is implemented in a java constraint solver called sushi , which is applied to security analysis of web applications . story_separator_special_tag modern web applications often suffer from command injection attacks . even when equipped with sanitization code , many systems can be penetrated due to software bugs . it is desirable to automatically discover such vulnerabilities , given the bytecode of a web application . one approach would be symbolically executing the target system and constructing constraints for matching path conditions and attack patterns . solving these constraints yields an attack signature , based on which the attack process can be replayed . constraint solving is the key to symbolic execution . for web applications , string constraints receive most of the attention because web applications are essentially text processing programs . we present simple linear string equation ( sise ) , a decidable fragment of the general string constraint system . sise models a collection of regular replacement operations ( such as the greedy , reluctant , declarative , and finite replacement ) , which are frequently used by text processing programs . various automata techniques are proposed for simulating procedural semantics such as left-most matching . by composing atomic transducers of a sise , we show that a recursive algorithm can be used to compute the solution pool story_separator_special_tag in recent years there has been considerable interest in theories over string equations , length function , and string-number conversion predicate within the formal verification , software engineering , and security communities . smt solvers for these theories , such as z3str2 , cvc4 , and s3 , are of immense practical value in exposing security vulnerabilities in string-intensive programs . additionally , there are many open decidability and complexity-theoretic questions in the context of theories over strings that are of great interest to mathematicians . motivated by the above-mentioned applications and open questions , we study a first-order , many-sorted , quantifier-free theory $ t_ { s , n } $ of string equations , linear arithmetic over string length , and string-number conversion predicate and prove three theorems . first , we prove that the satisfiability problem for the theory $ t_ { s , n } $ is undecidable via a reduction from a theory of linear arithmetic over natural numbers with a power predicate , which we call power arithmetic . second , we show that the string-numeric conversion predicate is expressible in terms of the power predicate , string equations , and length function . this second story_separator_special_tag stp is a decision procedure for the satisfiability of quantifier-free formulas in the theory of bit-vectors and arrays that has been optimized for large problems encountered in software analysis applications . the basic architecture of the procedure consists of word-level pre-processing algorithms followed by translation to sat . the primary bottlenecks in software verification and bug finding applications are large arrays and linear bit-vector arithmetic . new algorithms based on the abstraction-refinement paradigm are presented for reasoning about large arrays .
a solver for bit-vector linear arithmetic is presented that eliminates variables and parts of variables to enable other transformations , and to reduce the size of the problem that is eventually received by the sat solver . these and other algorithms have been implemented in stp , which has been heavily tested over thousands of examples obtained from several real-world applications . experimental results indicate that the above mix of algorithms along with the overall architecture is far more effective , for a variety of applications , than a direct translation of the original formula to sat or other comparable decision procedures . story_separator_special_tag we prove several decidability and undecidability results for the satisfiability and validity problems for languages that can express solutions to word equations with length constraints . the atomic formulas over this language are equality over string terms ( word equations ) , linear inequality over the length function ( length constraints ) , and membership in regular sets . these questions are important in logic , program analysis , and formal verification . variants of these questions have been studied for many decades by mathematicians . more recently , practical satisfiability procedures ( aka smt solvers ) for these formulas have become increasingly important in the context of security analysis for string-manipulating programs such as web applications . we prove three main theorems . first , we give a new proof of undecidability for the validity problem for the set of sentences written as a quantifier alternation applied to positive word equations . a corollary of this undecidability result is that this set is undecidable even with sentences with at most two occurrences of a string variable . second , we consider boolean combinations of quantifier-free formulas constructed out of word equations and length constraints . we show that if story_separator_special_tag we present a decision procedure for the problem of , given a set of regular expressions r1 , ... , rn , determining whether r = r1 ∩ ... ∩ rn is empty . our solver , revenant , finitely unrolls automata for r1 , ... , rn , encoding each as a set of propositional constraints . if a sat solver determines satisfiability then r is non-empty . otherwise our solver uses unbounded model checking techniques to extract an interpolant from the bounded proof . this interpolant serves as an overapproximation of r . if the solver reaches a fixed-point with the constraints remaining unsatisfiable , it has proven r to be empty . otherwise , it increases the unrolling depth and repeats . we compare revenant with other state-of-the-art string solvers . evaluation suggests that it behaves better for constraints that express the intersection of sets of regular languages , a case of interest in the context of verification . story_separator_special_tag the logic of equality with uninterpreted functions ( euf ) and its extensions have been widely applied to processor verification , by means of a large variety of progressively more sophisticated ( lazy or eager ) translations into propositional sat . here we propose a new approach , namely a general dpll ( x ) engine , whose parameter x can be instantiated with a specialized solver solver_t for a given theory t , thus producing a system dpll ( t ) .
we describe this dpll ( t ) scheme , the interface between dpll ( x ) and solver_t , the architecture of dpll ( x ) , and our solver for euf , which includes incremental and backtrackable congruence closure algorithms for dealing with the built-in equality and the integer successor and predecessor symbols . experiments with a first implementation indicate that our technique already outperforms the previous methods on most benchmarks , and scales up very well . story_separator_special_tag contents : introduction ; § 1. homomorphism and equivalence of automata ; § 2. introduction of mappings in automata ; § 3. introduction of events in finite automata , operations on events ; § 4. automata and semi-groups ; § 5. the composition of automata ; § 6. experiments with automata ; conclusion ; references . story_separator_special_tag this paper discusses an approach to representing and reasoning about constraints over strings . we discuss how string domains can often be concisely represented using regular languages , and how constraints over strings , and domain operations on sets of strings , can be carried out using this representation . story_separator_special_tag the first-order theory of the integers with addition and order , commonly known as presburger arithmetic , has been a central topic in mathematical logic and computer science for almost 90 years . presburger arithmetic has been the starting point for numerous lines of research in automata theory , model theory and discrete geometry . in formal verification , presburger arithmetic is the first-choice logic to represent and reason about systems with infinitely many states . this article provides a broad yet concise overview over the history , decision procedures , extensions and geometric properties of presburger arithmetic . story_separator_special_tag in this paper we present a generalization of the problem of interactive configuration . the usual interactive configuration problem is the problem of , given some variables on small finite domains and an increasing set of assignments of values to a subset of the variables , to compute for each of the unassigned variables which values in its domain participate in some solution for some assignment of values to the other unassigned variables . in this paper we consider how to extend this scheme to handle infinite regular domains using string variables and constraints that involve regular-expression checks on the string variables . we first show how to do this by using one single dfa . since this approach is vastly space consuming , we construct a data structure that simulates the large dfa and is much more space efficient . as an example , for a configuration problem on n string variables with only one solution , in which each string variable is assigned a value of length k , the former structure will use $ \theta ( k^n ) $ space whereas the latter only needs $ o ( kn ) $ . we also show how this framework can be combined with the recent bdd techniques story_separator_special_tag we improve an existing propagator for the context-free grammar constraint and demonstrate experimentally the practicality of the resulting propagator . the underlying technique could be applied to other existing propagators for this constraint . we argue that constraint programming solvers are more suitable than existing solvers for verification tools that have to solve string constraints , as they have a rich tradition of constraints for membership in formal languages .
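to make the idea of propagating membership-in-a-formal-language constraints concrete , here is a minimal sketch in python ( not taken from any of the solvers cited above ; the dict-based dfa encoding and the function names are illustrative assumptions ) : a string variable 's domain is represented as a dfa , a regular constraint is another dfa , and filtering amounts to the standard product construction followed by an emptiness check .

    # minimal sketch of regular-membership filtering on dfa-represented
    # string domains; the dfa encoding below is an illustrative assumption.
    from collections import deque

    class DFA:
        def __init__(self, alphabet, delta, start, accepting):
            self.alphabet = alphabet    # set of symbols
            self.delta = delta          # partial map: (state, symbol) -> state
            self.start = start          # initial state
            self.accepting = accepting  # set of accepting states
            # missing (state, symbol) entries mean implicit rejection

    def product(a, b):
        """intersection of two dfas by the standard product construction."""
        alphabet = a.alphabet & b.alphabet
        start = (a.start, b.start)
        states, delta = {start}, {}
        queue = deque([start])
        while queue:
            p, q = queue.popleft()
            for s in alphabet:
                if (p, s) in a.delta and (q, s) in b.delta:
                    nxt = (a.delta[(p, s)], b.delta[(q, s)])
                    delta[((p, q), s)] = nxt
                    if nxt not in states:
                        states.add(nxt)
                        queue.append(nxt)
        accepting = {(p, q) for (p, q) in states
                     if p in a.accepting and q in b.accepting}
        return DFA(alphabet, delta, start, accepting)

    def is_empty(a):
        """a dfa denotes the empty language iff no accepting state is reachable."""
        seen, queue = {a.start}, deque([a.start])
        while queue:
            p = queue.popleft()
            if p in a.accepting:
                return False
            for s in a.alphabet:
                nxt = a.delta.get((p, s))
                if nxt is not None and nxt not in seen:
                    seen.add(nxt)
                    queue.append(nxt)
        return True

    def filter_domain(domain, constraint):
        """narrow a string variable's domain by a regular membership constraint."""
        narrowed = product(domain, constraint)
        if is_empty(narrowed):
            raise ValueError("inconsistent: domain and constraint are disjoint")
        return narrowed

for instance , filtering a domain of all strings over { a , b } against a constraint accepting only strings ending in b yields a dfa for exactly those strings , while an empty product signals that the constraint store has become inconsistent and the solver must backtrack .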
story_separator_special_tag string analysis is the problem of reasoning about how strings are manipulated by a program . it has numerous applications including automatic detection of cross-site scripting , and automatic test-case generation . a popular string analysis technique is symbolic execution , which at its core uses constraint solvers over the string domain , a.k.a . string solvers . such solvers typically reason about constraints expressed in theories over strings with the concatenation operator as an atomic constraint . in recent years , researchers started to recognise the importance of incorporating the replace-all operator ( i.e . replace all occurrences of a string by another string ) and , more generally , finite-state transductions in the theories of strings with concatenation . such string operations are typically crucial for reasoning about xss vulnerabilities in web applications , especially for modelling sanitisation functions and implicit browser transductions ( e.g . innerhtml ) . although this results in an undecidable theory in general , it was recently shown that the straight-line fragment of the theory is decidable , and is sufficiently expressive in practice . in this paper , we provide the first string solver that can reason about constraints involving both concatenation and story_separator_special_tag there has been significant recent interest in automated reasoning techniques , in particular constraint solvers , for string variables . these techniques support a wide variety of clients , ranging from static analysis to automated testing . the majority of string constraint solvers rely on finite automata to support regular expression constraints . for these approaches , performance depends critically on fast automata operations such as intersection , complementation , and determinization . existing work in this area has not yet provided conclusive results as to which core algorithms and data structures work best in practice . in this paper , we study a comprehensive set of algorithms and data structures for performing fast automata operations . our goal is to provide an apples-to-apples comparison between techniques that are used in current tools . to achieve this , we re-implemented a number of existing techniques . we use an established set of regular expression benchmarks as an indicative workload . we also include several techniques that , to the best of our knowledge , have not yet been used for string constraint solving . our results show that there is a substantial performance difference across techniques , which has implications for future story_separator_special_tag reasoning about string variables , in particular program inputs , is an important aspect of many program analyses and testing frameworks . program inputs invariably arrive as strings , and are often manipulated using high-level string operations such as equality checks , regular expression matching , and string concatenation . it is difficult to reason about these operations because they are not well-integrated into current constraint solvers . we present a decision procedure that solves systems of equations over regular language variables . given such a system of constraints , our algorithm finds satisfying assignments for the variables in the system . we define this problem formally and render a mechanized correctness proof of the core of the algorithm .
we evaluate its scalability and practical utility by applying it to the problem of automatically finding inputs that cause sql injection vulnerabilities . story_separator_special_tag reasoning about strings is becoming a key step at the heart of many program analysis and testing frameworks . stand-alone string constraint solving tools , called decision procedures , have been the focus of recent research in this area . the aim of this work is to provide algorithms and implementations that can be used by a variety of program analyses through a well-defined interface . this separation enables independent improvement of string constraint solving algorithms and reduces client effort . we present strsolve , a decision procedure that reasons about equations over string variables . our approach scales well with respect to the size of the input constraints , especially compared to other contemporary techniques . our approach performs an explicit search for a satisfying assignment , but constructs the search space lazily based on an automata representation . we empirically evaluate our approach by comparing it with four existing string decision procedures on a number of tasks . we find that our prototype is , on average , several orders of magnitude faster than the fastest existing approaches , and present evidence that our lazy search space enumeration accounts for most of that benefit . story_separator_special_tag linz , peter . an introduction to formal languages and automata , 3rd edition . story_separator_special_tag constraint logic programming ( clp ) is a merger of two declarative paradigms : constraint solving and logic programming . although a relatively new field , clp has progressed in several quite different directions . in particular , the early fundamental concepts have been adapted to better serve in different areas of application . in this survey of clp , a primary goal is to give a systematic description of the major trends in terms of common fundamental concepts . the three main parts cover the theory , implementation issues , and programming for applications . story_separator_special_tag this paper describes a method for defining , analyzing , testing , and implementing large digital functions by means of a binary decision diagram . this diagram provides a complete , concise , `` implementation-free '' description of the digital functions involved . methods are described for deriving these diagrams and examples are given for a number of basic combinational and sequential devices . techniques are then outlined for using the diagrams to analyze the functions involved , for test generation , and for obtaining various implementations .
it is shown that the diagrams are especially suited for processing by a computer . finally , methods are described for introducing inversion and for directly `` interconnecting '' diagrams to define still larger functions . an example of the carry look-ahead adder is included . story_separator_special_tag symbolic execution tools query constraint solvers for tasks such as determining the feasibility of program paths . therefore , the effectiveness of such tools depends on their constraint solvers . most modern constraint solvers for primitive types are efficient and accurate . however , research on constraint solvers for complex types , such as strings , is less converged . in this paper , we introduce two new solver adequacy criteria , modeling cost and accuracy , to help the user identify an adequate solver . using these metrics and a performance criterion , we evaluate four distinct string constraint solvers in the context of symbolic execution . our results show that , depending on the needs of the user and the composition of the program , one solver might be more appropriate than another . yet , none of the solvers exhibits the best results for all programs . hence , if resources permit , the user will benefit the most from executing all solvers in parallel and enabling communication between solvers . story_separator_special_tag many automatic testing , analysis , and verification techniques for programs can be effectively reduced to a constraint-generation phase followed by a constraint-solving phase . this separation of concerns often leads to more effective and maintainable software reliability tools . the increasing efficiency of off-the-shelf constraint solvers makes this approach even more compelling . however , there are few effective and sufficiently expressive off-the-shelf solvers for string constraints generated by analysis of string-manipulating programs , so researchers end up implementing their own ad-hoc solvers . to fulfill this need , we designed and implemented hampi , a solver for string constraints over bounded string variables . users of hampi specify constraints using regular expressions , context-free grammars , equality between string terms , and typical string operations such as concatenation and substring extraction . hampi then finds a string that satisfies all the constraints or reports that the constraints are unsatisfiable . we demonstrate hampi 's expressiveness and efficiency by applying it to program analysis and automated testing . we used hampi in static and dynamic analyses for finding sql injection vulnerabilities in web applications with hundreds of thousands of lines of code . we also used hampi in the context of automated bug story_separator_special_tag many automatic testing , analysis , and verification techniques for programs can be effectively reduced to a constraint generation phase followed by a constraint-solving phase . this separation of concerns often leads to more effective and maintainable tools . the increasing efficiency of off-the-shelf constraint solvers makes this approach even more compelling . however , there are few effective and sufficiently expressive off-the-shelf solvers for string constraints generated by analysis techniques for string-manipulating programs . we designed and implemented hampi , a solver for string constraints over fixed-size string variables . hampi constraints express membership in regular languages and fixed-size context-free languages .
hampi constraints may contain context-free-language definitions , regular language definitions and operations , and the membership predicate . given a set of constraints , hampi outputs a string that satisfies all the constraints , or reports that the constraints are unsatisfiable . hampi is expressive and efficient , and can be successfully applied to testing and analysis of real programs . our experiments use hampi in static and dynamic analyses for finding sql injection vulnerabilities in web applications , and in automated bug finding in c programs using systematic testing ; we also compare hampi with another string solver . hampi 's source story_separator_special_tag this paper describes the symbolic execution of programs . instead of supplying the normal inputs to a program ( e.g . numbers ) one supplies symbols representing arbitrary values . the execution proceeds as in a normal execution except that values may be symbolic formulas over the input symbols . the difficult , yet interesting issues arise during the symbolic execution of conditional branch-type statements . a particular system called effigy , which provides symbolic execution for program testing and debugging , is also described . it interpretively executes programs written in a simple pl/i style programming language . it includes many standard debugging features , the ability to manage and to prove things about symbolic expressions , a simple program testing manager , and a program verifier . a brief discussion of the relationship between symbolic execution and program proving is also included . story_separator_special_tag we discuss in this paper how connections , discovered almost forty years ago , between logics and automata can be used in practice . for such logics expressing regular sets , we have developed tools that allow efficient symbolic reasoning not attainable by theorem proving or symbolic model checking . story_separator_special_tag we propose a novel high-level programming notation , called fido , that we have designed to concisely express regular sets of strings or trees . in particular , it can be viewed as a domain-specific language for the expression of finite state automata on large alphabets ( of sometimes astronomical size ) . fido is based on a combination of mathematical logic and programming language concepts . this combination shares no similarities with usual logic programming languages . fido compiles into finite state string or tree automata , so there is no concept of run-time . it has already been applied to a variety of problems of considerable complexity and practical interest . we motivate the need for a language like fido , and discuss our design and its implementation . also , we briefly discuss design criteria for domain-specific languages that we have learned from the work with fido . we show how recursive data types , unification , implicit coercions , and subtyping can be merged with a variation of predicate logic , called the monadic second-order logic ( m2l ) on trees . fido is translated first into pure m2l via suitable encodings , and finally story_separator_special_tag the algorithm selection problem is concerned with selecting the best algorithm to solve a given problem on a case-by-case basis . it has become especially relevant in the last decade , as researchers are increasingly investigating how to identify the most suitable existing algorithm for solving a problem instead of developing new algorithms .
this survey presents an overview of this work focusing on the contributions made in the area of combinatorial search problems , where algorithm selection techniques have achieved significant performance improvements . we unify and organise the vast literature according to criteria that determine algorithm selection systems in practice . the comprehensive classification of approaches identifies and analyses the different directions from which algorithm selection has been approached . this paper contrasts and compares different methods for solving the problem as well as ways of using these solutions . it closes by identifying directions of current and future research . story_separator_special_tag in order to properly test software , test data of a certain quality is needed . however , useful test data is often unavailable because existing or hand-crafted data might not be diverse enough to enable desired test cases . furthermore , using production data might be prohibited due to security or privacy concerns or other regulations . at the same time , existing tools for test data generation are often limited . story_separator_special_tag predicting protein-protein complexes ( protein docking ) is an important factor for understanding the majority of biochemical processes . in general , protein docking algorithms search through a large number of possible relative placements of the interacting partners , filtering out the majority of the candidates in order to produce a manageable set of candidates that can be examined in greater detail . this is a six-dimensional search through three rotational degrees of freedom and three translational degrees of freedom of one partner ( the probe ) relative to the other ( the target ) . the standard approach is to use a fixed step both for the rotation ( typically $ 10^\circ $ to $ 15^\circ $ ) and the translation ( typically 1 å ) . since proteins are not isotropic , a homogeneous rotational sampling can result in redundancies or excessive displacement of important atoms . a similar problem occurs in the translational sampling , since the small step necessary to find the optimal fit between the two molecules results in structures that differ by so little that they become redundant . in this paper we propose a constraint-based approach that improves the search story_separator_special_tag the increased interest in string solving in recent years has made it very hard to identify the right tool to address a particular user 's purpose . firstly , there is a multitude of string solvers , each addressing essentially some subset of the general problem . generally , the addressed fragments are relevant and well motivated , but the lack of comparisons between the existing tools on an equal set of benchmarks can not go unnoticed , especially as a common framework to compare solvers seems to be missing . in this paper we gather a set of relevant benchmarks and introduce our new benchmarking framework zaligvinder to address this purpose . story_separator_special_tag the problem of solving string constraints together with numeric constraints has received increasing interest recently . existing methods use either bit-vectors or automata ( or their combination ) to model strings , and reduce string constraints to bit-vector constraints or automaton operations , which are then solved in the respective domain . unfortunately , they often fail to achieve a good balance between efficiency , accuracy , and comprehensiveness .
in this paper we illustrate a new technique that uses parameterized arrays as the main data structure to model strings , and converts string constraints into quantified expressions that are solved through quantifier elimination . we present an efficient and sound quantifier elimination algorithm . in addition , we use an automaton model to handle regular expressions and reason about string values faster . our method does not need to enumerate string lengths ( as bit-vector based methods do ) , or concrete string values ( as automaton based methods do ) . hence , it can achieve much better accuracy and efficiency . in particular , it can identify unsatisfiable cases quickly . our solver ( named pass ) supports most of the popular string operations , including string story_separator_special_tag an increasing number of applications in verification and security rely on or could benefit from automatic solvers that can check the satisfiability of constraints over a rich set of data types that includes character strings . unfortunately , most string solvers today are standalone tools that can reason only about ( some fragment ) of the theory of strings and regular expressions , sometimes with strong restrictions on the expressiveness of their input language . these solvers are based on reductions to satisfiability problems over other data types , such as bit vectors , or to automata decision problems . we present a set of algebraic techniques for solving constraints over the theory of unbounded strings natively , without reduction to other problems . these techniques can be used to integrate string reasoning into general , multi-theory smt solvers based on the dpll ( t ) architecture . we have implemented them in our smt solver cvc4 to expand its already large set of built-in theories to a theory of strings with concatenation , length , and membership in regular languages . our initial experimental results show that , in addition , over pure string problems , cvc4 is highly story_separator_special_tag we prove that the quantifier-free fragment of the theory of character strings with regular language membership constraints and linear integer constraints over string lengths is decidable . we do that by describing a sound , complete and terminating tableaux calculus for that fragment which uses as oracles a decision procedure for linear integer arithmetic and a number of computable functions over regular expressions . a distinguishing feature of this calculus is that it provides a completely algebraic method for solving membership constraints which can be easily integrated into multi-theory smt solvers . another is that it can be used to generate symbolic solutions for such constraints , that is , solved forms that provide simple and compact representations of entire sets of complete solutions . the calculus is part of a larger one providing the theoretical foundations of a high performance theory solver for string constraints implemented in the smt solver cvc4 . story_separator_special_tag we study the fundamental issue of decidability of satisfiability over string logics with concatenations and finite-state transducers as atomic operations . although restricting to one type of operations yields decidability , little is known about the decidability of their combined theory , which is especially relevant when analysing security vulnerabilities of dynamic web pages in a more realistic browser model . 
on the one hand , word equations ( string logic with concatenations ) cannot precisely capture sanitisation functions ( e.g . htmlescape ) and implicit browser transductions ( e.g . innerhtml mutations ) . on the other hand , transducers suffer from the reverse problem of being able to model sanitisation functions and browser transductions , but not string concatenations . naively combining word equations and transducers easily leads to an undecidable logic . our main contribution is to show that the `` straight-line fragment '' of the logic is decidable ( complexity ranges from pspace to expspace ) . the fragment can express the program logics of straight-line string-manipulating programs with concatenations and transductions as atomic operations , which arise when performing bounded model checking or dynamic symbolic execution . we demonstrate that the logic can naturally story_separator_special_tag word equations are a crucial element in the theoretical foundation of constraint solving over strings . a word equation relates two words over string variables and constants . its solution amounts to a function mapping variables to constant strings that equate the left and right hand sides of the equation . while the problem of solving word equations is decidable , the decidability of the problem of solving a word equation with a length constraint ( i.e. , a constraint relating the lengths of words in the word equation ) has remained a long-standing open problem . we focus on the subclass of quadratic word equations , i.e. , in which each variable occurs at most twice . we first show that the length abstractions of solutions to quadratic word equations are in general not presburger-definable . we then describe a class of counter systems with presburger transition relations which capture the length abstraction of a quadratic word equation with regular constraints . we provide an encoding of the effect of a simple loop of the counter systems in the existential theory of presburger arithmetic with divisibility ( pad ) . since pad is decidable , we get a decision story_separator_special_tag support for regular expressions in symbolic execution-based tools for test generation and bug finding is insufficient . common aspects of mainstream regular expression engines , such as backreferences or greedy matching , are ignored or imprecisely approximated , leading to poor test coverage or missed bugs . in this paper , we present a model for the complete regular expression language of ecmascript 2015 ( es6 ) , which is sound for dynamic symbolic execution of the test and exec functions . we model regular expression operations using string constraints and classical regular expressions and use a refinement scheme to address the problem of matching precedence and greediness . we implemented our model in expose , a dynamic symbolic execution engine for javascript , and evaluated it on over 1,000 node.js packages containing regular expressions , demonstrating that the strategy is effective and can significantly increase the number of successful regular expression queries and therefore boost coverage . story_separator_special_tag model counting is the problem of determining the number of solutions that satisfy a given set of constraints . model counting has numerous applications in the quantitative analyses of program execution time , information flow , combinatorial circuit designs as well as probabilistic reasoning .
we present a new approach to model counting for structured data types , specifically strings in this work . the key ingredient is a new technique that leverages generating functions as a basic primitive for combinatorial counting . our tool smc , which embodies this approach , can model count for constraints specified in an expressive string language efficiently and precisely , thereby outperforming previous finite-size analysis tools . smc is expressive enough to model constraints arising in real-world javascript applications and unix c utilities . we demonstrate the practical feasibility of performing quantitative analyses arising in security applications , such as determining the comparative strengths of password strength meters and determining the information leakage via side channels . story_separator_special_tag in javascript , and scripting languages in general , dynamic field access is a commonly used feature . unfortunately , current static analysis tools either completely ignore dynamic field access or use overly conservative approximations that lead to poor precision and scalability . story_separator_special_tag new techniques or applications in the intersection of constraint programming ( cp ) , artificial intelligence ( ai ) , and operations research ( or ) . story_separator_special_tag in this paper we construct an algorithm recognizing the solvability of arbitrary equations in a free semigroup . bibliography : 4 titles . story_separator_special_tag reasoning over bit-vectors arises in a variety of applications in verification and cryptography . this paper presents a bit-vector domain for constraint programming and its associated filtering algorithms . the domain supports all the traditional bit operations and correctly models modulo-arithmetic and overflows . the domain implementation uses bit operations of the underlying architecture , avoiding the drawback of a bit-blasting approach that associates a variable with each bit . the filtering algorithms implement either domain consistency on the bit-vector domain or bit consistency , a new consistency notion introduced in this paper . filtering algorithms for logical and structural constraints typically run in constant time , while arithmetic constraints such as addition run in time linear in the size of the bit-vectors . the paper also discusses how to channel bit-vector variables with an integer variable . story_separator_special_tag server-side programming is one of the key technologies that support today 's www environment . it makes it possible to generate web pages dynamically according to a user 's request and to customize pages for each user . however , the flexibility obtained by server-side programming makes it much harder to guarantee validity and security of dynamically generated pages . to check statically the properties of web pages generated dynamically by a server-side program , we develop a static program analysis that approximates the string output of a program with a context-free grammar . the approximation obtained by the analyzer can be used to check various properties of a server-side program and the pages it generates . to demonstrate the effectiveness of the analysis , we have implemented a string analyzer for the server-side scripting language php . the analyzer is successfully applied to publicly available php programs to detect cross-site scripting vulnerabilities and to validate pages they generate dynamically .
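the grammar-based analysis just described can be pictured with a small sketch : over-approximate a program 's string output as a context-free grammar , then look for a dangerous pattern among its derivations . the toy grammar , the bounded enumeration , and the xss pattern below are illustrative assumptions , not the analyzer 's actual machinery .

```python
# illustrative: a hypothetical grammar over-approximating the pages a
# php-like program can emit; "name" models user input echoed in a loop.
import itertools
import re

grammar = {
    "page": [["<p>", "name", "</p>"]],
    "name": [["alice"], ["<script>"], ["name", "name"]],
}

def derivations(symbol, depth):
    """enumerate strings derivable from symbol with at most depth expansions."""
    if symbol not in grammar:  # terminal symbol
        yield symbol
        return
    if depth == 0:
        return
    for rhs in grammar[symbol]:
        parts = [list(derivations(s, depth - 1)) for s in rhs]
        for combo in itertools.product(*parts):
            yield "".join(combo)

xss = re.compile(r"<script>")
print(any(xss.search(w) for w in derivations("page", 3)))  # True: potential xss
```

a real analyzer would typically check the grammar against the pattern symbolically ( e.g. via automata ) rather than enumerating derivations , which here is bounded and exponential .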
story_separator_special_tag we present an algorithm for approximating context-free languages with regular languages . the algorithm is based on a simple transformation that applies to any context-free grammar and guarantees that the result can be compiled into a finite automaton . the resulting grammar contains at most one new nonterminal for any nonterminal symbol of the input grammar . the result thus remains readable and if necessary modifiable . we extend the approximation algorithm to the case of weighted context-free grammars . we also report experiments with several grammars showing that the size of the minimal deterministic automata accepting the resulting approximations is of practical use for applications such as speech recognition . story_separator_special_tag there is no standard modelling language for constraint programming ( cp ) problems . most solvers have their own modelling language . this makes it difficult for modellers to experiment with different solvers for a problem . in this paper we present minizinc , a simple but expressive cp modelling language which is suitable for modelling problems for a range of solvers and provides a reasonable compromise between many design possibilities . equally importantly , we also propose a low-level solver-input language called flatzinc , and a straightforward translation from minizinc to flatzinc that preserves all solver-supported global constraints . this lets a solver writer support minizinc with a minimum of effort -- they only need to provide a simple flatzinc front-end to their solver , and then combine it with an existing minizinc-to-flatzinc translator . such a front-end may then serve as a stepping stone towards a full minizinc implementation that is more tailored to the particular solver . a standard language for modelling cp problems will encourage experimentation with and comparisons between different solvers . although minizinc is not perfect -- no standard modelling language will be -- we believe its simplicity , expressiveness , and ease of story_separator_special_tag finite domain propagation solvers effectively represent the possible values of variables by a set of choices which can be naturally modelled as boolean variables . in this paper we describe how to mimic a finite domain propagation engine , by mapping propagators into clauses in a sat solver . this immediately results in strong nogoods for finite domain propagation . but a naive static translation is impractical except in limited cases . we show how to convert propagators to lazy clause generators for a sat solver . the resulting system introduces flexibility in modelling since variables are modelled dually in the propagation engine and the sat solver , and we explore various approaches to the dual modelling . we show that the resulting system solves many finite domain problems significantly faster than other techniques . story_separator_special_tag jquery is the most popular javascript library but the state-of-the-art static analyzers for javascript applications fail to analyze simple programs that use jquery . in this paper , we present a novel abstract string domain whose elements are simple regular expressions that can represent prefix , infix , and postfix substrings of a string and even their sets . we formalize the new domain in the abstract interpretation framework with abstract models of strings and objects commonly used in the existing javascript analyzers . 
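the simple-regular-expression domain described above , covering prefix , infix , and postfix substrings , can be illustrated with a prefix-only toy version ; the representation and operations below are simplified assumptions , not the paper 's actual domain .

```python
# toy prefix abstraction for strings: an abstract value is either an exact
# constant, or a known prefix p denoting the set "p followed by anything".
from dataclasses import dataclass

@dataclass
class Prefix:
    text: str
    exact: bool  # exact constant vs. prefix-of-anything

def join(a: Prefix, b: Prefix) -> Prefix:
    # least upper bound: longest common prefix, exact only if both agree exactly
    if a.exact and b.exact and a.text == b.text:
        return Prefix(a.text, True)
    p = ""
    for ca, cb in zip(a.text, b.text):
        if ca != cb:
            break
        p += ca
    return Prefix(p, False)

def concat(a: Prefix, b: Prefix) -> Prefix:
    # concatenation stays precise only while the left operand is exact
    if a.exact:
        return Prefix(a.text + b.text, b.exact)
    return a  # anything appended after an unknown tail is absorbed

# e.g. joining "jquery-1.9" and "jquery-2.0" keeps the useful prefix "jquery-"
print(join(Prefix("jquery-1.9", True), Prefix("jquery-2.0", True)))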
for practical use of the domain , we present polynomial-time inclusion decision rules between the regular expressions and prove that the rules exactly capture the actual inclusion relation . we have implemented the domain as an extension of the open-source javascript analyzer , safe , and we show that the extension significantly improves the scalability and precision of the baseline analyzer in analyzing programs that use jquery . story_separator_special_tag this paper describes a global constraint on a fixed-length sequence of finite-domain variables requiring that the corresponding sequence of values taken by these variables belong to a given regular language , thereby generalizing some other known global constraints . we describe and analyze a filtering algorithm achieving generalized arc consistency for this constraint . some comparative empirical results are also given . story_separator_special_tag we prove that the satisfiability problem for word equations is in pspace . the satisfiability problem for word equations has a simple formulation : find out whether or not an input word equation has a solution . the decidability of the problem was proved by g.s . makanin ( 1977 ) . his decision procedure is one of the most complicated algorithms existing in the literature . we propose an alternative algorithm . the full version of the algorithm requires only a proof of the upper bound for the index of periodicity of a minimal solution ( a. koscielski and l. pacholski , see journal of acm , vol. 43 , no. 4 , p. 670-84 ) . our algorithm is the first one which is proved to work in polynomial space . story_separator_special_tag we present the first dexptime algorithm which solves word equations , i.e . finds a finite representation of all solutions of an equation in a free semigroup . we show how to use our approach to solve two new problems in pspace which deal with properties of the solution set of a word equation : deciding finiteness of the solution set , and deciding boundedness of the set of maximal exponents of periodicity of solutions . the approach can be generalized to solve in pspace three problems for expressible relations , namely the emptiness of the relation , finiteness of the relation and boundedness of the set of maximal exponents of periodicity of elements of the relation . story_separator_special_tag by a string on a , b we mean a row of a 's and b 's such as baabbbab . it may involve only a , or only b , or be null . if , for example , g1 , g2 , g3 represent strings bab , aa , b respectively , string g2g1g1g3g2 on g1 , g2 , g3 will represent , in obvious fashion , the string aababbabbaa on a , b .
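before the problem statement continues below , post 's string notation can be made concrete . the brute-force search that follows is a modern illustration and a semi-decision procedure only , since the correspondence problem is undecidable ; the instance in the usage line is a standard textbook one , not post 's own example above .

```python
from collections import deque

def pcp_solution(pairs, max_len=8):
    """breadth-first search for an index sequence i1..in such that
    g_i1 + ... + g_in == h_i1 + ... + h_in, where pairs = [(g, h), ...]."""
    queue = deque(([i], g, h) for i, (g, h) in enumerate(pairs, 1))
    while queue:
        seq, top, bot = queue.popleft()
        if not (top.startswith(bot) or bot.startswith(top)):
            continue  # the two rows disagree and can never be equalized
        if top == bot:
            return seq
        if len(seq) < max_len:
            for i, (g, h) in enumerate(pairs, 1):
                queue.append((seq + [i], top + g, bot + h))
    return None  # no solution up to the length bound

# a classic solvable instance: indices 3,2,3,1 spell bbaabbbaa on both rows
print(pcp_solution([("a", "baa"), ("ab", "aa"), ("bba", "bb")]))  # [3, 2, 3, 1]
```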
by the correspondence decision problem we mean the problem of determining for an arbitrary finite set ( g1 , g1' ) , ( g2 , g2' ) , ... , ( gm , gm' ) of pairs of corresponding non-null strings on a , b whether there is a solution n , i1 , i2 , ... , in of the equation story_separator_special_tag invited papers : global optimization of probabilistically constrained linear programs ; algorithms and constraint programming ; interval analysis and robotics ; constraint based resilience analysis . regular papers : infinite qualitative simulations by means of constraint programming ; algorithms for stochastic csps ; graph properties based filtering ; the roots constraint ; cojava : optimization modeling by nondeterministic simulation ; an algebraic characterisation of complexity for valued constraint ; typed guarded decompositions for constraint satisfaction ; propagation in csp and sat ; the minimum spanning tree constraint ; impact of censored sampling on the performance of restart strategies ; watched literals for constraint propagation in minion ; inner and outer approximations of existentially quantified equality constraints ; performance prediction and automated tuning of randomized and parametric algorithms ; adaptive clause weight redistribution ; localization of an underwater robot using interval constraint propagation ; approximability of integer programming with generalised constraints ; when constraint programming and local search solve the scheduling problem of electricite de france nuclear power plant outages ; generalized arc consistency for positive table constraints ; stochastic allocation and scheduling for conditional task graphs in mpsocs ; boosting open csps ; compiling constraint networks into and/or multi-valued decision diagrams ( aomdds ) ; distributed constraint-based local search ; high-level nondeterministic abstractions in c++ ; a structural characterization of temporal dynamic controllability ; when interval analysis helps inter-block story_separator_special_tag general syntax , the formal part of the general theory of signs , has as its basic operation the operation of concatenation , expressed by the connective and understood as follows : where x and y are any expressions , x y is the expression formed by writing the expression x immediately followed by the expression y . e.g. , where alpha and beta are understood as names of the respective signs and , the syntactical expression alpha beta is a name of the expression . tarski and hermes have presented axioms for concatenation , and definitions of derivative syntactical concepts . hermes has also related concatenation theory to the arithmetic of natural numbers , constructing a model of the latter within the former . conversely , godel 's proof of the impossibility of a complete consistent systematization of arithmetic depended on constructing a model of concatenation theory within arithmetic . story_separator_special_tag satisfiability modulo theories ( smt ) solvers with support for the theory of strings have recently emerged as powerful tools for reasoning about string-manipulating programs . however , due to the complex semantics of extended string functions , it is challenging to develop scalable solvers for the string constraints produced by program analysis tools . we identify several classes of simplification techniques that are critical for the efficient processing of string constraints in smt solvers .
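to give a flavour of the simplification techniques meant here , the toy rewriter below applies a length-based ( arithmetic entailment ) rule and a containment rule ; the term encoding and both rules are simplified assumptions for illustration , not cvc4 's actual rule set .

```python
# toy term language: a python string is a literal, ("var", x) a string
# variable, ("concat", s, t) symbolic concatenation.
def min_len(t):
    """a lower bound on the length of any instance of term t."""
    if isinstance(t, str):
        return len(t)
    if t[0] == "concat":
        return min_len(t[1]) + min_len(t[2])
    return 0  # a free variable may be empty

def simplify_contains(t, c):
    """rewrite contains(t, c) for a literal needle c: True/False/None=unknown."""
    if isinstance(t, str):
        return c in t
    if c == "":
        return True  # every string contains the empty string
    if t[0] == "concat":
        # containment entailment: a needle inside either side is enough
        # (the converse is unsound: c may straddle the seam, so stay unknown)
        if any(simplify_contains(p, c) is True for p in (t[1], t[2])):
            return True
    return None

def simplify_eq(t, lit):
    """rewrite t = lit to False via arithmetic (length) entailment."""
    return False if min_len(t) > len(lit) else None

x = ("var", "x")
print(simplify_contains(("concat", x, "abc"), "b"))  # True
print(simplify_eq(("concat", "ab", x), "a"))         # False
```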
these techniques can reduce the size and complexity of input constraints by reasoning about arithmetic entailment , multisets , and string containment relationships over input terms . we provide experimental evidence that implementing them results in significant improvements over the performance of state-of-the-art smt solvers for extended string constraints . story_separator_special_tag in text encoding standards such as unicode , text strings are sequences of code points , each of which can be represented as a natural number . we present a decision procedure for a concatenation-free theory of strings that includes length and a conversion function from strings to integer code points . furthermore , we show how many common string operations , such as conversions between lowercase and uppercase , can be naturally encoded using this conversion function . we describe our implementation of this approach in the smt solver cvc4 , which contains a high-performance string subsolver , and show that the use of a native procedure for code points significantly improves its performance with respect to other state-of-the-art string solvers . story_separator_special_tag efficient reasoning about strings is essential to a growing number of security and verification applications . we describe satisfiability checking techniques in an extended theory of strings that includes operators commonly occurring in these applications , such as contains , index_of and replace . we introduce a novel context-dependent simplification technique that improves the scalability of string solvers on challenging constraints coming from real-world problems . our evaluation shows that an implementation of these techniques in the smt solver cvc4 significantly outperforms state-of-the-art string solvers on benchmarks generated using pyex , a symbolic execution engine for python programs . using a test suite sampled from four popular python packages , we show that pyex uses only 41 % of the runtime when coupled with cvc4 compared to when coupled with cvc4 's closest competitor , while achieving comparable program coverage . story_separator_special_tag we investigate the satisfiability problem of word equations where each variable occurs at most twice ( quadratic systems ) . we obtain various new results : the satisfiability problem is np-hard ( even for a single equation ) . the main result says that once we have fixed the lengths of a possible solution , we can decide in linear time whether there is a corresponding solution . if the lengths of a minimal solution were at most exponential , then the satisfiability problem of quadratic systems would be np-complete . ( the inclusion in np follows also from [ 21 ] . ) in the second part we address the problem with regular constraints : the uniform version is pspace-complete . fixing the lengths of a possible solution doesn't make the problem much easier . the non-uniform version remains np-hard ( in contrast to the linear time result above ) . the uniform version remains pspace-complete . story_separator_special_tag php web applications routinely generate invalid html . modern browsers silently correct html errors , but sometimes malformed pages render inconsistently , cause browser crashes , or expose security vulnerabilities . fixing errors in generated pages is usually straightforward , but repairing the generating php program can be much harder .
we observe that malformed html is often produced by incorrect `` constant prints '' , i.e. , statements that print string literals , and present two tools for automatically repairing such html generation errors . phpquickfix repairs simple bugs by statically analyzing individual prints . phprepair handles more general repairs using a dynamic approach . based on a test suite , the property that all tests should produce their expected output is encoded as a string constraint over variables representing constant prints . solving this constraint describes how constant prints must be modified to make all tests pass . both tools were implemented as an eclipse plugin and evaluated on php programs containing hundreds of html generation errors , most of which our tools were able to repair automatically . story_separator_special_tag as ajax applications gain popularity , client-side javascript code is becoming increasingly complex . however , few automated vulnerability analysis tools for javascript exist . in this paper , we describe the first system for exploring the execution space of javascript code using symbolic execution . to handle javascript code 's complex use of string operations , we design a new language of string constraints and implement a solver for it . we build an automatic end-to-end tool , kudzu , and apply it to the problem of finding client-side code injection vulnerabilities . in experiments on 18 live web applications , kudzu automatically discovers 2 previously unknown vulnerabilities and 9 more that were previously found only with a manually-constructed test suite . story_separator_special_tag satisfiability modulo theories ( smt ) solvers are fundamental tools that are used widely in software engineering , verification , and security research . precisely because of their widespread use , it is imperative that we develop efficient and systematic methods to test them . to this end , we present a reinforcement-learning based fuzzing system , banditfuzz , that learns grammatical constructs of well-formed inputs that may cause performance slowdown in smt solvers . to the best of our knowledge , banditfuzz is the first machine-learning based performance fuzzer for smt solvers . story_separator_special_tag in constraint programming ( cp ) , a combinatorial problem is modeled declaratively as a conjunction of constraints , each of which captures some of the combinatorial substructure of the problem . constraints are more than a modeling convenience : every constraint is partially implemented by an inference algorithm , called a propagator , that rules out some but not necessarily all infeasible candidate values of one or more unknowns in the scope of the constraint . interleaving propagation with systematic search leads to a powerful and complete solution method , combining a high degree of re-usability with natural , high-level modeling . a propagator can be characterized as a sound approximation of a constraint on an abstraction of sets of candidate values ; propagators that share an abstraction are similar in the strength of the inference they perform when identifying infeasible candidate values . in this thesis , we consider abstractions of sets of candidate values that may be described by an elegant mathematical formalism , the galois connection .
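the galois connection mentioned here is the textbook adjunction between a concrete and an abstract domain ; a minimal illustration with integer sets and intervals ( an assumed example , not the thesis 's framework itself ) follows .

```python
# a textbook galois connection: alpha maps a set of integers to its bounding
# interval, gamma maps an interval back to the set of integers it denotes.
def alpha(s):
    return (min(s), max(s)) if s else None  # None plays the role of bottom

def gamma(iv):
    return set() if iv is None else set(range(iv[0], iv[1] + 1))

# the adjunction alpha(s) <= iv iff s <= gamma(iv) is what licenses sound
# propagation: filtering on the abstraction never discards real solutions.
s = {2, 3, 7}
assert s <= gamma(alpha(s))            # gamma . alpha over-approximates
assert alpha(gamma((2, 7))) == (2, 7)  # alpha . gamma loses nothing here
```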
we develop a theoretical framework from the correspondence between galois connections and propagators , unifying two disparate views of the abstraction-propagation connection , namely the oft-overlooked distinction between representational and computational story_separator_special_tag we present a domain for string decision variables of bounded length , combining features from fixed-length and unbounded-length string solvers to reason on an interval defined by languages of prefixes and suffixes . we provide a theoretical groundwork for constraint solving on this domain and describe propagation techniques for several common constraints . story_separator_special_tag constraints on strings of unknown length occur in a wide variety of real-world problems , such as test case generation , program analysis , model checking , and web security . we describe a set of constraints sufficient to model many standard benchmark problems from these fields . for strings of an unknown length bounded by an integer , we describe propagators for these constraints . finally , we provide an experimental comparison between a state-of-the-art dedicated string solver , cp approaches utilising fixed-length string solving , and our implementation extending an off-the-shelf cp solver . story_separator_special_tag we present the design and implementation of bounded length sequence ( bls ) variables for a cp solver . the domain of a bls variable is represented as the combination of a set of candidate lengths and a sequence of sets of candidate characters . we show how this representation , together with requirements imposed by propagators , affects the implementation of bls variables for a copying cp solver , most importantly the closely related decisions of data structure , domain restriction operations , and propagation events . the resulting implementation outperforms traditional bounded-length string representations for cp solvers , which use a fixed-length array of candidate characters and a padding symbol . story_separator_special_tag abstract building upon definite clause grammar ( dcg ) , a number of logic grammar systems have been developed that are well-suited to phenomena in natural language . we have proposed an extension called string variable grammar ( svg ) , specifically tailored to the biological language of dna . we here rigorously define and characterize this formalism , showing that it specifies a class of languages that properly contains the context-free languages , but is properly contained in the indexed languages . we give a number of mathematical and biological examples , and use an svg variant to propose a new abstraction of the process of gene expression . a practical implementation called genlang is described , and some recent results in parsing genes and other high-level features of dna sequences are summarized . story_separator_special_tag the manipulation of raw string data is ubiquitous in security-critical software , and verification of such software relies on efficiently solving string and regular expression constraints via smt . however , the typical case of boolean combinations of regular expression constraints exposes blowup in existing techniques . to address solvability of such constraints , we propose a new theory of derivatives of symbolic extended regular expressions ( extended meaning that complement and intersection are incorporated ) , and show how to apply this theory to obtain more efficient decision procedures . 
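before the implementation discussion resumes , the classical non-symbolic core of the derivative idea can be shown in a few lines ; this sketch works over a concrete alphabet and omits the simplification that practical implementations need to keep terms small .

```python
# brzozowski derivatives for extended regular expressions, including the
# intersection ("and") and complement ("not") cases. term encodings:
# ("empty",) no strings, ("eps",) the empty string, ("chr", c),
# ("cat", r, s), ("alt", r, s), ("and", r, s), ("not", r), ("star", r)
def nullable(r):
    tag = r[0]
    if tag in ("eps", "star"):
        return True
    if tag in ("empty", "chr"):
        return False
    if tag in ("cat", "and"):
        return nullable(r[1]) and nullable(r[2])
    if tag == "alt":
        return nullable(r[1]) or nullable(r[2])
    return not nullable(r[1])  # "not"

def deriv(r, c):
    tag = r[0]
    if tag in ("empty", "eps"):
        return ("empty",)
    if tag == "chr":
        return ("eps",) if r[1] == c else ("empty",)
    if tag in ("alt", "and"):  # derivatives commute with union/intersection
        return (tag, deriv(r[1], c), deriv(r[2], c))
    if tag == "not":           # ... and with complement
        return ("not", deriv(r[1], c))
    if tag == "star":
        return ("cat", deriv(r[1], c), r)
    head = ("cat", deriv(r[1], c), r[2])  # "cat"
    return ("alt", head, deriv(r[2], c)) if nullable(r[1]) else head

def matches(r, word):
    for c in word:
        r = deriv(r, c)
    return nullable(r)

ab_star = ("star", ("alt", ("chr", "a"), ("chr", "b")))
has_aa = ("cat", ab_star, ("cat", ("chr", "a"), ("cat", ("chr", "a"), ab_star)))
safe = ("and", ab_star, ("not", has_aa))  # strings over {a,b} with no "aa"
print(matches(safe, "abab"), matches(safe, "baab"))  # True False
```

note how complement and intersection cost nothing extra in the derivative view , whereas the classical automaton constructions for them blow up .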
our implementation of these ideas , built on top of z3 , matches or outperforms state-of-the-art solvers on standard and handwritten benchmarks , showing particular benefits on examples with boolean combinations . our work is the first formalization of derivatives of regular expressions which both handles intersection and complement and works symbolically over an arbitrary character theory . it unifies existing approaches involving derivatives of extended regular expressions , alternating automata and boolean automata by lifting them to a common symbolic platform . it relies on a parsimonious augmentation of regular expressions : a construct for symbolic conditionals is shown to be sufficient to obtain relevant closure properties for story_separator_special_tag minizinc is a solver-agnostic modeling language for defining and solving combinatorial satisfaction and optimization problems . minizinc provides a solver independent modeling language which is now supported by constraint programming solvers , mixed integer programming solvers , sat and sat modulo theory solvers , and hybrid solvers . since 2008 we have run the minizinc challenge every year , which compares and contrasts the different strengths of different solvers and solving technologies on a set of minizinc models . here we report on what we have learnt from running the competition for 6 years . story_separator_special_tag we present the z3strbv solver for a many-sorted first-order quantifier-free theory t_{w,bv} of string equations , string length represented as bit-vectors , and bit-vector arithmetic , aimed at formal verification , automated testing , and security analysis of c/c++ applications . our key motivation for building such a solver is the observation that existing string solvers are not efficient at modeling the combined theory over strings and bit-vectors . we demonstrate experimentally that z3strbv is significantly more efficient than a reduction of string/bit-vector constraints to strings/natural numbers followed by a solver for strings/natural numbers or modeling strings as bit-vectors . we also propose two optimizations . first , we explore the concept of library-aware smt solving , which fixes summaries in the smt solver for string library functions such as strlen in c/c++ . z3strbv is able to consume these functions directly instead of re-analyzing the functions from scratch each time . second , we experiment with a binary search heuristic that accelerates convergence on a consistent assignment of string lengths . we also show that z3strbv is able to detect nontrivial overflows in real-world system-level code , as confirmed against seven security vulnerabilities from the cve and mozilla story_separator_special_tag reinforcement learning is a learning paradigm concerned with learning to control a system so as to maximize a numerical performance measure that expresses a long-term objective . what distinguishes reinforcement learning from supervised learning is that only partial feedback is given to the learner about the learner 's predictions . further , the predictions may have long term effects through influencing the future state of the controlled system . thus , time plays a special role . the goal in reinforcement learning is to develop efficient learning algorithms , as well as to understand the algorithms ' merits and limitations .
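a minimal concrete instance of the setting just described : tabular q-learning on a toy chain environment . the environment , constants , and episode counts below are illustrative assumptions for the example , not taken from the book .

```python
import random

# toy 5-state chain: action 1 moves right, action 0 moves left;
# reward 1 only on reaching the goal state 4, which ends the episode
def step(s, a):
    s2 = max(0, min(4, s + (1 if a == 1 else -1)))
    return s2, (1.0 if s2 == 4 else 0.0), s2 == 4

Q = [[0.0, 0.0] for _ in range(5)]
alpha, discount, eps = 0.1, 0.9, 0.5
for _episode in range(1000):
    s = 0
    for _t in range(50):
        # epsilon-greedy: feedback is partial, so exploration is required
        a = random.randrange(2) if random.random() < eps else max((0, 1), key=lambda x: Q[s][x])
        s2, r, done = step(s, a)
        # temporal-difference update, bootstrapping on the next state's value
        Q[s][a] += alpha * (r + discount * max(Q[s2]) - Q[s][a])
        s = s2
        if done:
            break

# with enough episodes the greedy policy becomes "always move right"
print([max((0, 1), key=lambda x: Q[s][x]) for s in range(4)])  # typically [1, 1, 1, 1]
```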
reinforcement learning is of great interest because of the large number of practical applications that it can be used to address , ranging from problems in artificial intelligence to operations research or control engineering . in this book , we focus on those algorithms of reinforcement learning that build on the powerful theory of dynamic programming . we give a fairly comprehensive catalog of learning problems , describe the core ideas , note a large number of state of the art algorithms , followed by the discussion of their theoretical properties and limitations . story_separator_special_tag we propose a novel technique for statically verifying the strings generated by a program . the verification is conducted by encoding the program in monadic second-order logic ( m2l ) . we use m2l to describe constraints among program variables and to abstract built-in string operations . once we encode a program in m2l , a theorem prover for m2l , such as mona , can automatically check if a string generated by the program satisfies a given specification , and if not , exhibit a counterexample . with this approach , we can naturally encode relationships among strings , accounting also for cases in which a program manipulates strings using indices . in addition , our string analysis is path sensitive in that it accounts for the effects of string and boolean comparisons , as well as regular-expression matches . we have implemented our string-analysis algorithm , and used it to augment an industrial security analysis for web applications by automatically detecting and verifying sanitizers -- methods that eliminate malicious patterns from untrusted strings , making those strings safe to use in security-sensitive operations . on the 8 benchmarks we analyzed , our string analyzer discovered 128 previously unknown sanitizers , story_separator_special_tag constraint solving is an essential technique for detecting vulnerabilities in programs , since it can reason about input sanitization and validation operations performed on user inputs . however , real-world programs typically contain complex string operations that challenge vulnerability detection . state-of-the-art string constraint solvers support only a limited set of string operations and fail when they encounter an unsupported one ; this leads to limited effectiveness in finding vulnerabilities . in this paper we propose a search-driven constraint solving technique that complements the support for complex string operations provided by any existing string constraint solver . our technique uses a hybrid constraint solving procedure based on the ant colony optimization meta-heuristic . the idea is to execute it as a fallback mechanism , only when a solver encounters a constraint containing an operation that it does not support . we have implemented the proposed search-driven constraint solving technique in the aco-solver tool , which we have evaluated in the context of injection and xss vulnerability detection for java web applications . we have assessed the benefits and costs of combining the proposed technique with two state-of-the-art constraint solvers ( z3-str2 and cvc4 ) . the experimental results , based story_separator_special_tag the key design challenges in the construction of a sat-based relational model finder are described , and novel techniques are proposed to address them .
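one of the techniques detailed next , the sparse-matrix representation of relations , comes down to storing only the true entries of a boolean matrix , with relational join as boolean matrix product ; the sketch below is illustrative , not the model finder 's actual implementation .

```python
# a binary relation over atoms 0..n-1 as a sparse boolean matrix:
# just the set of (row, col) pairs whose entry is true
def join(r, s):
    """relational join = boolean matrix product, skipping empty cells."""
    by_row = {}
    for a, b in s:
        by_row.setdefault(a, []).append(b)
    return {(a, c) for a, b in r for c in by_row.get(b, ())}

edges = {(0, 1), (1, 2), (2, 3)}
print(sorted(join(edges, edges)))  # two-step reachability: [(0, 2), (1, 3)]
```

the payoff is that runtime scales with the number of true entries rather than with the square of the universe size , which matters for the mostly-empty relations typical of such problems .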
an efficient model finder must have a mechanism for specifying partial solutions , an effective symmetry detection and breaking scheme , and an economical translation from relational to boolean logic . these desiderata are addressed with three new techniques : a symmetry detection algorithm that works in the presence of partial solutions , a sparse-matrix representation of relations , and a compact representation of boolean formulas inspired by boolean expression diagrams and reduced boolean circuits . the presented techniques have been implemented and evaluated , with promising results . story_separator_special_tag motivated by the vulnerability analysis of web programs which work on string inputs , we present s3 , a new symbolic string solver . our solver employs a new algorithm for a constraint language that is expressive enough for widespread applicability . specifically , our language covers all the main string operations , such as those in javascript . the algorithm first makes use of a symbolic representation so that membership in a set defined by a regular expression can be encoded as string equations . secondly , there is a constraint-based generation of instances from these symbolic expressions so that the total number of instances can be limited . we evaluate s3 on a well-known set of practical benchmarks , demonstrating both its robustness ( more definitive answers ) and its efficiency ( about 20 times faster ) against the state-of-the-art . story_separator_special_tag we consider the problem of reasoning over an expressive constraint language for unbounded strings . the difficulty comes from recursively defined functions such as replace , making state-of-the-art algorithms non-terminating . our first contribution is a progressive search algorithm to not only mitigate the problem of non-terminating reasoning but also guide the search towards a minimal solution when the input formula is in fact satisfiable . we have implemented our method using the state-of-the-art z3 framework . importantly , we have enabled conflict clause learning for string theory so that our solver can be used effectively in the setting of program verification . finally , our experimental evaluation shows leadership in a large benchmark suite , and a first deployment for another benchmark suite which requires reasoning about string formulas of a class that has not been solved before . story_separator_special_tag we present a new algorithm for model counting of a class of string constraints . in addition to the classic operation of concatenation , our class includes some recursively defined operations such as kleene closure , and replacement of substrings . additionally , our class also includes length constraints on the string expressions , which means , by requiring reasoning about numbers , that we face a multi-sorted logic . in the end , our string constraints are motivated by their use in programming for web applications . story_separator_special_tag a decision procedure for the positive theory of a free countably generated semigroup is constructed , with a bound on the number of steps , obtained by modifying an algorithm from work of g. s. makanin ( see mr 57 # 9874 ) . bibliography : 7 titles . story_separator_special_tag constraints in the form of regular expressions over strings are ubiquitous . they occur often in programming languages like perl and c # , in sql in the form of like expressions , and in web applications .
providing support for regular expression constraints in program analysis and testing has several useful applications . we introduce a method and a tool called rex , for symbolically expressing and analyzing regular expression constraints . rex is implemented using the smt solver z3 , and we provide experimental evaluation of rex . story_separator_special_tag many severe security vulnerabilities in web applications can be attributed to string manipulation mistakes , which can often be avoided through formal string analysis . string analysis tools are indispensable and under active development . prior string analysis methods are primarily automata-based or satisfiability-based . the two approaches exhibit distinct strengths and weaknesses . specifically , existing automata-based methods have difficulty in generating counterexamples at system inputs to witness vulnerability , whereas satisfiability-based methods are inadequate to produce filters amenable for firmware or hardware implementation for real-time screening of malicious inputs to a system under protection . in this paper , we propose a new string analysis method based on a scalable logic circuit representation for ( nondeterministic ) finite automata to support various string and automata manipulation operations . it enables both counterexample generation and filter synthesis in string constraint solving . by using the new data structure , automata with large state spaces and/or alphabet sizes can be efficiently represented . empirical studies on a large set of open source web applications and well-known attack patterns demonstrate the unique benefits of our method compared to prior string analysis tools . story_separator_special_tag the international satisfiability modulo theories competition is an annual competition between satisfiability modulo theories ( smt ) solvers . the 2018 edition of the competition was part of the floc olympic games , which comprised 14 competitions in various areas of computational logic . we report on the design and selected results of the smt competition during the last floc olympiad , from 2015 to 2018. these competitions set several new records regarding the number of participants , number of benchmarks used , and amount of computation performed . story_separator_special_tag stranger is an automata-based string analysis tool for finding and eliminating string-related security vulnerabilities in php applications . stranger uses symbolic forward and backward reachability analyses to compute the possible values that the string expressions can take during program execution . stranger can automatically ( 1 ) prove that an application is free from specified attacks or ( 2 ) generate vulnerability signatures that characterize all malicious inputs that can be used to generate attacks . story_separator_special_tag in recent years , string solvers have become an essential component in many formal verification , security analysis , and bug-finding tools . such solvers typically support a theory of string equations , the length function , and the regular-expression membership predicate . these enable considerable expressive power , which comes at the cost of slow solving time , and in some cases even non-termination . 
we present three techniques , designed for word-based smt string solvers , to mitigate these problems : ( 1 ) detecting overlapping variables , which is essential to avoiding common cases of non-termination ; ( 2 ) pruning of the search space via bi-directional integration between the string and integer theories , enabling new cross-domain heuristics ; and ( 3 ) a binary search based heuristic , allowing the procedure to skip unnecessary string length queries and converge on consistent length assignments faster for large strings . we have implemented above techniques atop the z3-str solver , resulting in a significantly more robust and efficient solver , dubbed z3str2 , for the quantifier-free theory of string equations , the regular-expression membership predicate , and linear arithmetic over the length function . we report on story_separator_special_tag in recent years , string solvers have become an essential component in many formal-verification , security-analysis and bug-finding tools . such solvers typically support a theory of string equations , the length function as well as the regular-expression membership predicate . these enable considerable expressive power , which comes at the cost of slow solving time , and in some cases even nontermination . we present two techniques , designed for word-based smt string solvers , to mitigate these problems : ( i ) sound and complete detection of overlapping variables , which is essential to avoiding common cases of nontermination ; and ( ii ) pruning of the search space via bi-directional integration between the string and integer theories , enabling new cross-domain heuristics . we have implemented both techniques atop the z3-str solver , resulting in a significantly more robust and efficient solver , dubbed z3str2 , for the quantifier-free theory of string equations , the regular-expression membership predicate and linear arithmetic over the length function . we report on a series of experiments over four sets of challenging real-world benchmarks , where we compared z3str2 with five different string solvers : s3 , cvc4 , kaluza , story_separator_special_tag analyzing web applications requires reasoning about strings and non-strings cohesively . existing string solvers either ignore non-string program behavior or support limited set of string operations . in this paper , we develop a general purpose string solver , called z3-str , as an extension of the z3 smt solver through its plug-in interface . z3-str treats strings as a primitive type , thus avoiding the inherent limitations observed in many existing solvers that encode strings in terms of other primitives . the logic of the plug-in has three sorts , namely , bool , int and string . the string-sorted terms include string constants and variables of arbitrary length , with functions such as concatenation , sub-string , and replace . the int-sorted terms are standard , with the exception of the length function over string terms . the atomic formulas are equations over string terms , and ( in ) -equalities over integer terms . not only does our solver have features that enable whole program symbolic , static and dynamic analysis , but also it performs better than other solvers in our experiments . the application of z3-str in remote code execution detection shows that its support
the proliferation of internet of things ( iot ) and the success of rich cloud services have pushed the horizon of a new computing paradigm , edge computing , which calls for processing the data at the edge of the network . edge computing has the potential to address the concerns of response time requirement , battery life constraint , bandwidth cost saving , as well as data safety and privacy . in this paper , we introduce the definition of edge computing , followed by several case studies , ranging from cloud offloading to smart home and city , as well as collaborative edge to materialize the concept of edge computing . finally , we present several challenges and opportunities in the field of edge computing , and hope this paper will gain attention from the community and inspire more research in this direction . story_separator_special_tag mobile cloud computing is a new field of research that aims to study mobile agents ( people , vehicles , robots ) as they interact and collaborate to sense the environment , process the data , propagate the results and more generally share resources . mobile agents collectively operate as mobile clouds that enable environment modeling , content discovery , data collection and dissemination and other mobile applications in a way not possible , or not efficient , with conventional internet cloud models and mobile computing approaches . in this paper , we discuss design principles and research issues in mobile cloud computing . we then focus on the mobile vehicular cloud and review cloud applications ranging from urban sensing to intelligent transportation . story_separator_special_tag cloud-based vehicular networks are a promising paradigm to improve vehicular services through distributing computation tasks between remote clouds and local vehicular terminals . to further reduce the latency and the transmission cost of the computation off-loading , we propose a cloud-based mobile-edge computing ( mec ) off-loading framework in vehicular networks . in this framework , we study the effectiveness of the computation transfer strategies with vehicle-to-infrastructure ( v2i ) and vehicle-to-vehicle ( v2v ) communication modes . considering the time consumption of the computation task execution and the mobility of the vehicles , we present an efficient predictive combination-mode relegation scheme , where the tasks are adaptively off-loaded to the mec servers through direct uploading or predictive relay transmissions . illustrative results indicate that our proposed scheme greatly reduces the cost of computation and improves task transmission efficiency . story_separator_special_tag connected vehicles provide advanced transformations and attractive business opportunities in the automotive industry . presently , ieee 802.11p and evolving 5g are the mainstream radio access technologies in the vehicular industry , but neither of them can meet all requirements of vehicle communication . in order to provide low-latency and high-reliability communication , an sdn-enabled network architecture assisted by mec , which integrates different types of access technologies , is proposed . mec technology with its on-premises feature can decrease data transmission time and enhance quality of user experience in latency-sensitive applications . therefore , mec plays as important a role in the proposed architecture as sdn technology .
the proposed architecture was validated by a practical use case , and the obtained results have shown that it meets application-specific requirements and maintains good scalability and responsiveness . story_separator_special_tag over the last few years , we have witnessed an exponential increase in the computing and storage capabilities of smart devices that has led to the popularity of an emerging technology called edge computing . compared to the traditional cloud-computing-based infrastructure , computing and storage facilities are available near end users in edge computing . moreover , with the widespread popularity of unmanned aerial vehicles ( uavs ) , huge amounts of information will be shared between edge devices and uavs in the coming years . in this scenario , traffic surveillance using uavs and edge computing devices is expected to become an integral part of the next generation intelligent transportation systems . however , surveillance in intelligent transportation systems requires uninterrupted data sharing , cooperative decision making , and stabilized network formation . edge computing supports data processing and analysis closer to the deployed machines ( i.e. , the sources of the data ) . instead of simply storing data and missing the opportunity to capitalize on it , edge devices can analyze data to gain insights before acting on them . transferring data from the vehicle to the edge for real-time analysis can be facilitated by the use of story_separator_special_tag the emergence of computation intensive and delay sensitive on-vehicle applications makes it quite a challenge for vehicles to be able to provide the required level of computation capacity , and thus the required performance . vehicular edge computing ( vec ) is a new computing paradigm with a great potential to enhance vehicular performance by offloading applications from the resource-constrained vehicles to lightweight and ubiquitous vec servers . nevertheless , offloading schemes , where all vehicles offload their tasks to the same vec server , can limit the performance gain due to overload . to address this problem , in this paper , we propose integrating load balancing with offloading , and study resource allocation for a multiuser multiserver vec system . first , we formulate the joint load balancing and offloading problem as a mixed integer nonlinear programming problem to maximize system utility . particularly , we take the ieee 802.11p protocol into consideration for modeling the system utility . then , we decouple the problem into two subproblems and develop a low-complexity algorithm to jointly make vec server selection , and optimize offloading ratio and computation resource . numerical results illustrate that the proposed algorithm exhibits fast convergence and demonstrates story_separator_special_tag there are many issues when deploying self-driving car applications , such as unbalanced traffic flow in the network topology and inefficient network utilization ; therefore , open and flexible automotive architectures are key requirements to enable experimenters to test their solutions in a productive environment and to improve the management of network resources , applications and users . in this work , simulation models for the automotive network based on mec and sdn technologies were developed , on the basis of which their positive impact on the automotive network was investigated and proved . the architecture of the automobile network was developed and its functional components were described in detail .
in the results , simulation models were developed : the first experiment demonstrated a model showing the positive effect of edge and fog computing on the load of the network core ; a second experiment demonstrated the positive effect of sdn in load balancing between base stations ( bs ) and roadside units ( rsu ) . story_separator_special_tag with the significant population growth in megacities everywhere , traffic congestion is becoming a severe impediment , leading to long travel delays and large economic loss on a global scale . platooning is a promising intelligent transportation framework that can improve road capacity , on-road safety , and fuel efficiency . furthermore , enabling inter-vehicle communications within a platoon and among platoons ( in a multiplatoon ) can potentially enhance platoon control by keeping constant inter-vehicle and inter-platoon distances . however , an efficient resource allocation ( ra ) approach is required for the timely and successful delivery of inter-vehicle information within multiplatoons . in this paper , a subchannel allocation scheme and a power control mechanism are proposed for lte-based inter-vehicle communications in a multiplatooning scenario . we jointly consider the evolved multimedia broadcast multicast services and device-to-device ( d2d ) multicast communications to enable intra- and inter-platoon communications such that a desired tradeoff between the required cellular resources and the imposed communication delay can be achieved . simulation results are given to demonstrate that the proposed approaches can reduce the communication delay compared to a d2d-unicast based ra scheme , especially in a multiplatoon scenario with a large number of vehicles story_separator_special_tag platooning has been identified as a promising framework to improve road capacity , on-road safety , and energy efficiency . enabling communications among vehicles in platoons is expected to enhance platoon control by keeping constant intervehicle and interplatoon distances . characterizing the performance of intra- and interplatoon communications in terms of throughput and packet transmission delays is crucial for validating the effectiveness of information sharing on platoon control . in this paper , we introduce an ieee 802.11p-based communication model for multiplatooning ( a chain of platoons ) scenarios . we present a probabilistic performance analysis of distributed-coordination-function-based intra- and interplatoon communications . expressions for the transmission attempt probability , collision probability , packet delay , packet-dropping probability , and network throughput are derived . numerical results show that the performance of interplatoon communications is affected by the transmissions of the first and last vehicles in a multiplatoon . this effect is reduced with an increase of the platoon number in the multiplatoon . in addition , the communication performance for three typical multiplatooning application scenarios is investigated , indicating that the ieee 802.11p-based communication can support the timely delivery of vehicle information among platoons for diverse on-road applications . story_separator_special_tag autonomous vehicles ( avs ) , like the one in knight rider , were complete science fiction just a few years ago , but are now already practical with real-world commercial deployments .
a salient challenge of avs , however , is the intensive computing tasks that must be carried out on board for real-time traffic detection and driving decision making ; this imposes a heavy load on avs due to their limited computing power . to explore more computing power and enable scalable autonomous driving , in this paper , we propose a collaborative task computing scheme for avs , in which the avs in proximity dynamically share idle computing power among each other . this , however , raises another fundamental problem of how to incentivize avs to contribute their computing power and how to fully utilize the pool of group computing power in an optimal way . this paper studies the problem by modeling the issue as a market-based optimal computing resource allocation problem . specifically , we develop a software-defined network ( sdn ) architecture and consider a star topology where a central av outsources its computing tasks to the surrounding avs for its autonomous driving story_separator_special_tag the internet of vehicles ( iov ) is an emerging paradigm that is driven by recent advancements in vehicular communications and networking . meanwhile , the capability and intelligence of vehicles are being rapidly enhanced , and this will have the potential of supporting a plethora of new exciting applications that will integrate fully autonomous vehicles , the internet of things ( iot ) , and the environment . these trends will bring about an era of intelligent iov , which will heavily depend on communications , computing , and data analytics technologies . to store and process the massive amount of data generated by intelligent iov , onboard processing and cloud computing will not be sufficient due to resource/power constraints and communication overhead/latency , respectively . by deploying storage and computing resources at the wireless network edge , e.g . , radio access points , the edge information system ( eis ) , including edge caching , edge computing , and edge ai , will play a key role in the future intelligent iov . eis will provide not only low-latency content delivery and computation services but also localized data acquisition , aggregation , and processing . this article surveys story_separator_special_tag vehicular networks aim at providing intelligent transportation and ubiquitous network access . edge computing is able to reduce the consumption of core network bandwidth and serving latency by processing the generated data at the network edge , and social network is able to provide precise services by analyzing a user 's personal behaviors . in this paper , we propose a new network system referred to as vehicular social edge computing ( vsec ) that inherits the advantages of both edge computing and social network . vsec is capable of improving the drivers ' quality of experience while enhancing the service providers ' quality of service . in order to further improve the performance of vsec , the network utility is modeled and maximized by optimally managing the available network resources via two steps . first , the total processing time is minimized to achieve the optimal payment of the user to each edge device for each kind of required resource . second , a utility model is proposed , and the available resources are optimally allocated based on the results from the first step .
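the lagrangian treatment invoked next typically follows a standard pattern ; on a generic stand-in problem ( weighted log-utilities under a capacity budget , an assumption rather than the paper 's exact model ) the kkt conditions give a closed-form proportional share .

```python
# maximize sum_i w_i * log(x_i) subject to sum_i x_i = C, x_i >= 0.
# stationarity of the lagrangian gives w_i / x_i = lam for every i,
# so x_i = w_i / lam, and the budget constraint fixes lam = sum(w) / C.
def proportional_allocation(weights, capacity):
    lam = sum(weights) / capacity
    return [w / lam for w in weights]

print(proportional_allocation([1.0, 2.0, 5.0], capacity=16.0))  # [2.0, 4.0, 10.0]
```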
the two optimization problems are solved by the lagrangian theory , and the closed-form expressions are story_separator_special_tag recently , due to the increasing popularity of enjoying various multimedia services on mobile devices ( e.g. , smartphones , ipads , and electronic tablets ) , the generated mobile data traffic has been explosively growing and has become a severe burden on mobile network operators . to address such a serious challenge in mobile networks , an effective approach is to manage data traffic by using complementary technologies ( e.g. , small cell networks , wifi networks , and so on ) to achieve mobile data offloading . in this paper , we discuss the recent advances in the techniques of mobile data offloading . particularly , based on the initiator diversity of data offloading , we classify the existing mobile data offloading technologies into four categories , i.e. , data offloading through small cell networks , data offloading through wifi networks , data offloading through opportunistic mobile networks , and data offloading through heterogeneous networks . besides , we show a detailed taxonomy of the related mobile data offloading technologies by discussing the pros and cons of various offloading technologies for different problems in mobile networks . finally , we outline some open research issues and challenges , story_separator_special_tag this paper first provides a brief survey on existing traffic offloading techniques in wireless networks . particularly as a case study , we put forward an online reinforcement learning framework for the problem of traffic offloading in a stochastic heterogeneous cellular network ( hcn ) , where the time-varying traffic in the network can be offloaded to nearby small cells . our aim is to minimize the total discounted energy consumption of the hcn while maintaining the quality-of-service ( qos ) experienced by mobile users . for each cell ( i.e. , a macro cell or a small cell ) , the energy consumption is determined by its system load , which is coupled with system loads in other cells due to the sharing over a common frequency band . we model the energy-aware traffic offloading problem in such hcns as a discrete-time markov decision process ( dtmdp ) . based on the traffic observations and the traffic offloading operations , the network controller gradually optimizes the traffic offloading strategy with no prior knowledge of the dtmdp statistics . such a model-free learning framework is important , particularly when the state space is huge . in order to solve the story_separator_special_tag the technological evolution of mobile user equipments ( ues ) , such as smartphones or laptops , goes hand-in-hand with the evolution of new mobile applications . however , running computationally demanding applications at the ues is constrained by the limited battery capacity and energy consumption of the ues . a suitable solution for extending the battery life-time of the ues is to offload the applications demanding huge processing to a conventional centralized cloud ( cc ) . nevertheless , this option introduces a significant execution delay consisting of the delivery of the offloaded applications to the cloud and back plus the time of computation at the cloud . such delay is inconvenient and makes the offloading unsuitable for real-time applications . to cope with the delay problem , a new emerging concept , known as mobile edge computing ( mec ) , has been introduced .
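as a hedged illustration of the model-free learning framework described above , the sketch below runs tabular q-learning on a toy offloading problem : states are discretized macro-cell load levels , actions are fractions of traffic pushed to small cells , and the reward is a negative energy cost . the environment dynamics , cost coefficients , and action set are illustrative assumptions , not the dtmdp of the paper .

```python
import random
from collections import defaultdict

ACTIONS = [0.0, 0.25, 0.5, 0.75]   # fraction of traffic offloaded
ALPHA, GAMMA, EPS = 0.1, 0.9, 0.1  # learning rate , discount , exploration

q_table = defaultdict(float)       # q[(state, action)] , default 0.0

def step(state, action):
    """illustrative environment : offloading lowers macro-cell energy but
    adds a convex small-cell cost , so an interior action is optimal ."""
    macro_energy = (1 - action) * state
    small_energy = 2.0 * (action ** 2) * state
    reward = -(macro_energy + small_energy)
    next_state = max(1, min(10, state + random.choice([-1, 0, 1])))  # traffic drift
    return next_state, reward

state = 5
for _ in range(20000):
    if random.random() < EPS:      # epsilon-greedy exploration
        action = random.choice(ACTIONS)
    else:
        action = max(ACTIONS, key=lambda a: q_table[(state, a)])
    next_state, reward = step(state, action)
    best_next = max(q_table[(next_state, a)] for a in ACTIONS)
    q_table[(state, action)] += ALPHA * (reward + GAMMA * best_next
                                         - q_table[(state, action)])
    state = next_state

# learned greedy policy per load level ( expect 0.25 under this toy cost )
print({s: max(ACTIONS, key=lambda a: q_table[(s, a)]) for s in range(1, 11)})
```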
the mec brings computation and storage resources to the edge of the mobile network , enabling highly demanding applications to run at the ue while meeting strict delay requirements . the mec computing resources can also be exploited by operators and third parties for specific purposes . in this paper , we first describe major use cases and reference scenarios where story_separator_special_tag this paper surveys the literature of opportunistic offloading . opportunistic offloading refers to offloading traffic originally transmitted through the cellular network to an opportunistic network , or offloading computing tasks originally executed locally to nearby devices with idle computing resources through an opportunistic network . this research direction has emerged recently , and the relevant research covers the period from 2009 to date , with an explosive trend over the last four years . we provide a comprehensive review of the research field from a multi-dimensional view based on application goal , realizing approach , offloading direction , etc . in addition , we pinpoint the major classifications of opportunistic offloading , so as to form a hierarchical or graded classification of the existing works . specifically , we divide opportunistic offloading into two main categories based on application goal : traffic offloading or computation offloading . each category is further divided into two smaller categories : with and without offloading node selection , which bridges between the subscriber node and the cellular network , or plays the role of computing task executor for other nodes . we elaborate , compare , and analyze the literature in each classification from the perspectives of story_separator_special_tag game theory ( gt ) has been used with significant success to formulate , and either design or optimize , the operation of many representative communications and networking scenarios . the games in these scenarios involve , as usual , diverse players with conflicting goals . this paper primarily surveys the literature that has applied theoretical games to wireless networks , emphasizing use cases of upcoming multi-access edge computing ( mec ) . mec is relatively new and offers cloud services at the network periphery , aiming to reduce service latency and backhaul load , and enhance relevant operational aspects such as quality of experience or security . our presentation of gt is focused on the major challenges imposed by mec services over the wireless resources . the survey is divided into classical and evolutionary games . then , our discussion proceeds to more specific aspects which have a considerable impact on the game usefulness , namely : rational vs. evolving strategies , cooperation among players , available game information , the way the game is played ( single turn , repeated ) , the game model evaluation , and how the model results can be applied for both optimizing resource-constrained story_separator_special_tag a new networking paradigm , vehicular edge computing ( vec ) , has been introduced in recent years to the vehicular network to augment its computing capacity . the ultimate challenge to fulfill the requirements of both communication and computation is increasingly prominent , with the advent of ever-growing modern vehicular applications . with the breakthrough of vec , service providers directly host services in close proximity to smart vehicles for reducing latency and improving quality of service ( qos ) .
this paper illustrates the vec architecture , coupled with the concept of the smart vehicle , its services , communication , and applications . moreover , we categorize all the technical issues in the vec architecture and review the relevant and latest solutions . we also shed light on and pinpoint future research challenges . this article not only enables new readers to get a better understanding of this latest research field but also gives new directions in the field of vec to other researchers . story_separator_special_tag the emergence of the internet of things ( iot ) has enabled the interconnection and intercommunication among massive ubiquitous things , which has caused an unprecedented generation of huge and heterogeneous amounts of data , known as the data explosion . on the other hand , although cloud computing has served as an efficient way to process and store these data , challenges , such as the increasing demands of real-time or latency-sensitive applications and the limitation of network bandwidth , still can not be solved by using only cloud computing . therefore , a new computing paradigm , known as fog computing , has been proposed as a complement to the cloud solution . fog computing extends the cloud services to the edge of the network , and makes computation , communication and storage closer to edge devices and end-users , which aims to enhance low-latency , mobility , network bandwidth , security and privacy . in this paper , we will overview and summarize the fog computing model architecture , key technologies , applications , challenges and open issues . firstly , we will present the hierarchical architecture of fog computing and its characteristics , and compare it with story_separator_special_tag some of the mobility challenges in cities can be approached with the internet of things , cloud computing , and fog computing . in this work we consider the city as a system of systems and focus on the interaction of those entities with the mobility systems . a key element that allows this interaction is a vehicle on-board unit . in this work we address the interconnection , enabling technologies , interoperability , scalability , geo-distribution and time-constraint aspects in the architectural design of mobility systems . for this we propose the on-board unit architecture and cover three different levels of technification in information and communication technologies . based on this differentiation we are able to include cloud and fog computing into our proposed designs . story_separator_special_tag purpose : the past decade has witnessed a growing interest in vehicular networking and its myriad applications . the initial view of practitioners and researchers was that radio-equipped vehicles can keep the drivers informed about potential safety risks and can enhance their awareness of road conditions and traffic-related events . this conceptual paper seeks to put forth a novel vision , namely that advances in vehicular networks , embedded devices , and cloud computing can be used to set up what are known as vehicular clouds ( vcs ) . design/methodology/approach : the paper suggests that vcs are technologically feasible and that they are likely to have a significant societal impact . findings : the paper argues that at least in some of its manifestations , the ideas behind vcs are eminently implementable under present day technology .
it is also expected that , once adopted and championed by municipalities and third-party infrastructure providers , vcs will redefine the way in which pervasive computing and its myriad . story_separator_special_tag the growth of smart vehicles and computation-intensive applications poses new challenges in providing reliable and efficient vehicular services . offloading such applications from vehicles to mobile edge cloud servers has been considered as a remedy , although resource limitations and coverage constraints of the cloud service may still result in unsatisfactory performance . recent studies have shown that exploiting the unused resources of nearby vehicles for application execution can augment the computational capabilities of application owners while alleviating heavy on-board workloads . however , encouraging vehicles to share resources or execute applications for others remains a sensitive issue due to user selfishness . to address this issue , we establish a novel computation offloading marketplace in vehicular networks where a vickrey-clarke-groves based reverse auction mechanism utilizing an integer linear programming ( ilp ) formulation is proposed , satisfying the desirable economic properties of truthfulness and individual rationality . as ilp has high computational complexity , which brings difficulties in implementation under large and fast-changing network topologies , we further develop an efficient unilateral-matching-based mechanism , which offers satisfactory suboptimal solutions with polynomial computational complexity , truthfulness and individual rationality properties as well as matching stability . simulation results show story_separator_special_tag emerging vehicular applications , such as real-time situational awareness and cooperative lane change , demand sufficient computing resources at the edge to conduct time-critical and data-intensive tasks . this paper proposes fog following me ( folo ) , a novel solution for latency and quality balanced task allocation in vehicular fog computing . folo is designed to support the mobility of vehicles , including ones generating tasks and the others serving as fog nodes .
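to make the incentive reasoning above concrete , the sketch below implements a much simpler truthful reverse auction than the vcg/ilp mechanism of the paper : a buyer needs k identical units of computing , vehicles each ask a price for one unit , the k cheapest asks win , and every winner is paid the ( k+1 )-th lowest ask ( its critical value ) , which makes truthful bidding a dominant strategy . the bid values are illustrative assumptions .

```python
# minimal sketch of a uniform-price truthful reverse auction ; this is a
# simplification , not the vcg-based mechanism from the paper above .
def reverse_auction(asks, k):
    """asks : dict vehicle_id -> ask price ; k : units needed .
    returns ( winners , uniform payment per winner )."""
    if len(asks) <= k:
        raise ValueError("need at least k+1 asks to price the winners")
    ranked = sorted(asks.items(), key=lambda kv: kv[1])  # cheapest first
    winners = [vid for vid, _ in ranked[:k]]
    payment = ranked[k][1]                               # ( k+1 )-th lowest ask
    return winners, payment

winners, pay = reverse_auction({"v1": 3.0, "v2": 5.5, "v3": 2.0, "v4": 4.2}, k=2)
print(winners, pay)   # ['v3', 'v1'] 4.2 : each winner is paid 4.2
```

because a vehicle's payment never depends on its own ask , understating or overstating its true cost can only lose it money , which is the individual-rationality and truthfulness intuition the abstract refers to .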
considering constraints on service latency , quality loss , and fog capacity , the process of task allocation across stationary and mobile fog nodes is formulated into a joint optimization problem . this task allocation in vfc is known as a nondeterministic polynomial-time hard problem . in this paper , we present the task allocation to fog nodes as a bi-objective minimization problem , where a tradeoff is maintained between the service latency and quality loss . specifically , we propose an event-triggered dynamic task allocation framework using linear programming-based optimization and binary particle swarm optimization . to assess the effectiveness of folo , we simulated the mobility of fog nodes at different times of a day based on story_separator_special_tag vehicular cloud computing ( vcc ) is proposed to effectively utilize and share the computing and storage resources on vehicles . however , due to the mobility of vehicles , the network topology , the wireless channel states and the available computing resources vary rapidly and are difficult to predict . in this work , we develop a learning-based task offloading framework using multi-armed bandit ( mab ) theory , which enables vehicles to learn the potential task offloading performance of their neighboring vehicles with excessive computing resources , namely service vehicles ( sevs ) , and minimizes the average offloading delay . we propose an adaptive volatile upper confidence bound ( avucb ) algorithm and augment it with load-awareness and occurrence-awareness , by redesigning the utility function of the classic mab algorithms . the proposed avucb algorithm can effectively adapt to the dynamic vehicular environment , balance the tradeoff between exploration and exploitation in the learning process , and converge quickly to the optimal sev with a theoretical performance guarantee . simulations under both a synthetic scenario and a realistic highway scenario are carried out , showing that the proposed algorithm achieves close-to-optimal delay performance . story_separator_special_tag the vehicular edge computing system integrates the computing resources of vehicles , and provides computing services for other vehicles and pedestrians with task offloading . however , the vehicular task offloading environment is dynamic and uncertain , with fast-varying network topologies , wireless channel states , and computing workloads . these uncertainties bring extra challenges to task offloading . in this paper , we consider task offloading among vehicles , and propose a solution that enables vehicles to learn the offloading delay performance of their neighboring vehicles while offloading computation tasks . we design an adaptive learning based task offloading ( alto ) algorithm based on multi-armed bandit theory , in order to minimize the average offloading delay . alto works in a distributed manner without requiring frequent state exchange , and is augmented with input-awareness and occurrence-awareness to adapt to the dynamic environment . the proposed algorithm is proved to have a sublinear learning regret . extensive simulations are carried out under both synthetic and realistic highway scenarios , and results illustrate that the proposed algorithm achieves low delay performance , and decreases the average delay by up to 30 % compared with the story_separator_special_tag with the emergence of in-vehicle applications , providing the required computational capabilities is becoming a crucial problem .
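as a hedged sketch of the bandit machinery behind the avucb and alto schemes above , the code below runs plain ucb1 to pick among candidate service vehicles with unknown mean offloading delays ; the delay distributions are illustrative assumptions , and the volatility , load-awareness , and occurrence-awareness refinements of the papers are deliberately omitted .

```python
import math
import random

class UCB1:
    def __init__(self, n_arms):
        self.counts = [0] * n_arms      # times each sev was chosen
        self.means = [0.0] * n_arms     # empirical mean reward ( -delay )
        self.t = 0

    def select(self):
        self.t += 1
        for arm in range(len(self.counts)):   # play each arm once first
            if self.counts[arm] == 0:
                return arm
        return max(
            range(len(self.counts)),
            key=lambda a: self.means[a]
            + math.sqrt(2 * math.log(self.t) / self.counts[a]),
        )

    def update(self, arm, reward):
        self.counts[arm] += 1
        self.means[arm] += (reward - self.means[arm]) / self.counts[arm]

true_delays = [0.8, 0.5, 1.2]           # unknown mean offloading delays
bandit = UCB1(n_arms=3)
for _ in range(5000):
    arm = bandit.select()
    delay = random.expovariate(1.0 / true_delays[arm])  # noisy observed delay
    bandit.update(arm, -delay)          # lower delay => higher reward
print("estimated best sev:", bandit.means.index(max(bandit.means)))  # expect 1
```

the exploration bonus term is what balances trying rarely-used vehicles against exploiting the currently best one , which is the exploration-exploitation tradeoff both abstracts mention .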
this paper proposes a framework named autonomous vehicular edge ( ave ) for edge computing on the road , with the aim of increasing the computational capabilities of vehicles in a decentralized manner . by managing the idle computational resources on vehicles and using them efficiently , the proposed ave framework can provide computation services in dynamic vehicular environments without requiring particular infrastructures to be deployed . specifically , this paper introduces a workflow to support the autonomous organization of vehicular edges . efficient job caching is proposed to better schedule jobs based on the information collected on neighboring vehicles , including gps information . a scheduling algorithm based on ant colony optimization is designed to solve this job assignment problem . extensive simulations are conducted , and the simulation results demonstrate the superiority of this approach over competing schemes in typical urban and highway scenarios . story_separator_special_tag there could be no smart city without a reliable and efficient transportation system . this necessity makes the its a key component of any smart city concept . while legacy its technologies are deployed worldwide in smart cities , enabling the next generation of its relies on effective integration of connected and autonomous vehicles , the two technologies that are under wide field testing in many cities around the world . even though these two emerging technologies are crucial in enabling fully automated transportation systems , there is still a significant need to automate other road and transportation components . to this end , due to their mobility , autonomous operation , and communication/processing capabilities , uavs are envisaged in many its application domains . this article describes the possible its applications that can use uavs , and highlights the potential and challenges for uav-enabled its for next-generation smart cities . story_separator_special_tag in this paper , a dynamic spectrum management framework is proposed to improve spectrum resource utilization in a multi-access edge computing ( mec ) based autonomous vehicular network ( avnet ) . to support the increasing data traffic and guarantee quality-of-service ( qos ) , spectrum slicing , spectrum allocation , and transmit power control are jointly considered . accordingly , three non-convex network utility maximization problems are formulated to slice spectrum among bss , allocate spectrum among autonomous vehicles ( avs ) associated with a bs , and control transmit powers of bss , respectively . via linear programming relaxation and first-order taylor series approximation , these problems are transformed into tractable forms and then are jointly solved through an alternate concave search ( acs ) algorithm . as a result , optimal spectrum slicing ratios among bss , optimal bs-vehicle association patterns , optimal fractions of spectrum resources allocated to avs , and optimal transmit powers of bss are obtained . based on our simulation , a high aggregate network utility is achieved by the proposed spectrum management scheme compared with two existing schemes . story_separator_special_tag the drastically increasing volume and variety of data have brought the possibility of realizing advanced applications such as enhanced driving safety , and have enriched existing vehicular services through data sharing among vehicles and data analysis .
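the ant-colony-based scheduler described earlier on this line can be illustrated with a deliberately small sketch : ants build job-to-vehicle assignments with probability proportional to pheromone times a 1/cost heuristic , pheromone evaporates each round , and good assignments deposit more . the cost matrix and aco constants are illustrative assumptions , and capacity constraints are omitted for brevity .

```python
import random

COST = [[4, 2, 7], [3, 6, 1], [5, 4, 3]]   # cost[j][v] : job j on vehicle v
N_JOBS, N_VEH = len(COST), len(COST[0])
EVAP, Q, N_ANTS, N_ITERS = 0.5, 10.0, 20, 50

pheromone = [[1.0] * N_VEH for _ in range(N_JOBS)]

def build_assignment():
    """one ant assigns every job to a vehicle with probability proportional
    to pheromone * heuristic ( 1 / cost )."""
    assignment = []
    for j in range(N_JOBS):
        weights = [pheromone[j][v] * (1.0 / COST[j][v]) for v in range(N_VEH)]
        assignment.append(random.choices(range(N_VEH), weights=weights)[0])
    return assignment

best, best_cost = None, float("inf")
for _ in range(N_ITERS):
    ants = [build_assignment() for _ in range(N_ANTS)]
    for j in range(N_JOBS):                 # evaporate old pheromone
        for v in range(N_VEH):
            pheromone[j][v] *= (1 - EVAP)
    for a in ants:                          # deposit pheromone on good tours
        cost = sum(COST[j][a[j]] for j in range(N_JOBS))
        if cost < best_cost:
            best, best_cost = a, cost
        for j, v in enumerate(a):
            pheromone[j][v] += Q / cost
print(best, best_cost)                      # expect [1, 2, 2] with cost 6
```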
due to limited resources with vehicles , vehicular edge computing and networks ( vecons ) , i.e. , the integration of mobile edge computing and vehicular networks , can provide powerful computing and massive storage resources . however , roadside units that primarily assume the role of vehicular edge computing servers can not be fully trusted , which may lead to serious security and privacy challenges for such integrated platforms despite their promising potential and benefits . we exploit consortium blockchain and smart contract technologies to achieve secure data storage and sharing in vehicular edge networks . these technologies efficiently prevent data sharing without authorization . in addition , we propose a reputation-based data sharing scheme to ensure high-quality data sharing among vehicles . a three-weight subjective logic model is utilized for precisely managing the reputation of the vehicles . numerical results based on a real dataset show that our schemes achieve reasonable efficiency and a high level of security story_separator_special_tag vehicular fog computing ( vfc ) is a promising approach to provide ultra-low-latency service to vehicles and end users by extending fog computing to conventional vehicular networks . parked vehicle assistance ( pva ) , as a critical technique in vfc , can be integrated with smart parking in order to exploit its full potential . in this paper , we propose a vfc system by combining both pva and smart parking . a single-round multi-item parking reservation auction is proposed to guide the on-the-move vehicles to the available parking places with less effort and meanwhile exploit the fog capability of parked vehicles to assist the delay-sensitive computing services . the proposed allocation rule maximizes the aggregate utility of the smart vehicles and the proposed payment rule guarantees incentive compatibility , individual rationality and budget balance . the simulation results confirmed the win-win performance enhancement to the fog node controller ( fnc ) , vehicles , and parking places from the proposed design . story_separator_special_tag vehicular fog computing ( vfc ) is a promising approach to provide ultra-low-latency service to vehicles and end users by extending fog computing to the conventional vehicular networks . parked vehicle assistance ( pva ) , as a critical technique in vfc , can be integrated with smart parking in order to exploit its full potential . in this paper , we propose a smart vfc system , by combining both pva and smart parking . a vfc-aware parking reservation auction is proposed to guide the on-the-move vehicles to the available parking places with less effort and meanwhile exploit the fog capability of parked vehicles to assist the delay-sensitive computing services by monetary rewards to compensate for their service cost . the proposed allocation rule maximizes the aggregate utility of the smart vehicles and the proposed payment rule guarantees incentive compatibility , individual rationality , and budget balance . we further provide an observation stage with dynamic offload pricing update to improve the offload efficiency and the profit of the fog system . the simulation results confirm the win-win performance enhancement to the fog node controller , the smart vehicles , and the parking places from the proposed design story_separator_special_tag automated driving is coming with enormous potential for safer , more convenient , and more efficient transportation systems .
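the three-weight subjective logic model mentioned above can be sketched in a hedged way : an opinion is a ( belief , disbelief , uncertainty ) triple derived from weighted positive and negative interactions , and the expected reputation is belief plus the base rate times uncertainty . the event weights and base rate below are illustrative assumptions , not the exact weighting of the paper .

```python
W_PRIOR = 2.0    # non-informative prior weight ( standard in subjective logic )
BASE_RATE = 0.5  # prior expectation about an unknown vehicle

def opinion(weighted_pos, weighted_neg):
    """map weighted positive / negative interaction mass to a
    ( belief , disbelief , uncertainty ) opinion ."""
    total = weighted_pos + weighted_neg + W_PRIOR
    return (weighted_pos / total,   # belief
            weighted_neg / total,   # disbelief
            W_PRIOR / total)        # uncertainty

def reputation(events):
    """events : list of ( outcome , weight ) pairs ; outcome True = honest
    interaction , weight could encode recency or interaction type ."""
    pos = sum(w for ok, w in events if ok)
    neg = sum(w for ok, w in events if not ok)
    b, d, u = opinion(pos, neg)
    return b + BASE_RATE * u        # expected reputation in [0 , 1]

events = [(True, 1.0), (True, 0.8), (False, 0.5), (True, 0.9)]
print(f"reputation = {reputation(events):.3f}")
```

with few observations the uncertainty mass dominates and the reputation stays near the base rate , which is the behavior that makes subjective logic attractive for sparse vehicular interactions .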
besides onboard sensing , autonomous vehicles can also access various cloud services such as high definition maps and dynamic path planning through cellular networks to precisely understand the real-time driving environments . however , these automated driving services , which have large content volume , are time-varying , location-dependent , and delay-constrained . therefore , cellular networks will face the challenge of meeting this extreme performance demand . to cope with the challenge , by leveraging the emerging mobile edge computing technique , in this article , we first propose a two-level edge computing architecture for automated driving services in order to make full use of the intelligence at the wireless edge ( i.e. , base stations and autonomous vehicles ) for coordinated content delivery . we then investigate the research challenges of wireless edge caching and vehicular content sharing . finally , we propose potential solutions to these challenges and evaluate them using real and synthetic traces . simulation results demonstrate that the proposed solutions can significantly reduce the backhaul and wireless bottlenecks of cellular networks while ensuring the quality story_separator_special_tag smart vehicles construct the internet of vehicles , which can execute various intelligent services . although the computation capability of the vehicle is limited , multiple types of edge computing nodes provide heterogeneous resources for vehicular services . when offloading a complicated service to a vehicular edge computing node , the decision should consider numerous factors . existing offloading decision works mostly formulate the decision as a resource scheduling problem with a single or multiple objective functions and some constraints , and explore customized heuristic algorithms . however , offloading multiple data-dependent tasks in a service is a difficult decision , as an optimal solution must understand the resource requirements , the access network , the user mobility , and , importantly , the data dependency . inspired by recent advances in machine learning , we propose a knowledge-driven ( kd ) service offloading decision framework for the internet of vehicles , which provides the optimal policy directly from the environment . we formulate the offloading decision of multiple tasks in a service as a long-term planning problem , and explore recent deep reinforcement learning to obtain the optimal solution . it considers the future data dependency of the following tasks when making the decision for a current task story_separator_special_tag the proliferation of smart vehicular terminals ( vts ) and their resource-hungry applications imposes serious challenges to the processing capabilities of vts and the delivery of vehicular services . mobile edge computing ( mec ) offers a promising paradigm to solve this problem by offloading vt applications to proximal mec servers , while tv white space ( tvws ) bands can be used to supplement the bandwidth for computation offloading . in this paper , we consider a cognitive vehicular network that uses the tvws band , and formulate a dual-side optimization problem , to minimize the cost of vts and that of the mec server at the same time . specifically , the dual-side cost minimization is achieved by jointly optimizing the offloading decision and local cpu frequency on the vt side , and the radio resource allocation and server provisioning on the server side , while guaranteeing network stability .
based on lyapunov optimization , we design an algorithm called ddorv to tackle the joint optimization problem , where only current system states , such as channel states and traffic arrivals , are needed . the closed-form solution to the vt-side problem is obtained easily by derivation story_separator_special_tag as autonomous and connected vehicles are becoming a reality , mobile-edge computing ( mec ) offloading provides a promising paradigm to trade off between the long latency of cloud computing and the high cost of upgrading the on-board computers of vehicles . however , due to the randomness of task arrivals , vehicles always have a tendency to choose the mec server for offloading in a selfish way , which is not satisfactory for the social good of the whole system and even results in the possibility of failure for some tasks due to the overflow of mec servers . this paper elaborates on the modeling of the task arrival process and the influence of various offloading modes on computation cost . interestingly , by formulating task arrivals as a compound process of vehicle arrivals and task generations , we found that the task arrival model for mec servers does not follow the standard poisson distribution , which contradicts the popular assumption in most existing studies . considering the load distribution and the prediction of cost , we propose a load-aware mec offloading method , in which each vehicle makes its mec server selection based on the predicted cost with the updated knowledge on story_separator_special_tag by leveraging the 5g enabled vehicular ad hoc network ( 5g-vanet ) , it is widely recognized that connected vehicles have the potential to improve road safety and transportation intelligence and provide in-vehicle entertainment experience . however , many enabling applications in 5g-vanet rely on efficient content sharing among mobile vehicles , which is a very challenging issue due to the extremely large data volume , rapid topology change , and unbalanced traffic . in this paper , we investigate content prefetching and distribution in 5g-vanet . we first introduce an edge computing based hierarchical architecture for efficient distribution of large-volume vehicular data . we then propose a multi-place multi-factor prefetching scheme to cope with the rapid topology change and unbalanced traffic . the content requests of vehicles can be served by neighbors , which can improve the sharing efficiency and alleviate the burden on networks . furthermore , we use a graph theory based approach to solve the content distribution by transforming it into a maximum weighted independent set problem . finally , the proposed scheme is evaluated with a greedy transmission strategy to demonstrate its efficiency . story_separator_special_tag vehicular networks have attracted much attention from both industry and academia . due to the high-speed and complex network topology of vehicles , vehicular networks have become extremely challenging . the development of mobile communication technologies , such as sdn and fc , provides a great platform for vehicular networks . sdn separates the control plane and data plane , which enables efficient management and centralized control of vehicular networks . fc is an extension of cloud computing . by pushing significant storage , control , management , and communication mechanisms onto the network edge or user equipment , fc alleviates the pressure on the core network . accordingly , we consider a novel sdfc-venet architecture in this article .
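the lyapunov-based design at the start of this line can be sketched in a hedged way : a drift-plus-penalty controller trades off the energy penalty ( scaled by a parameter v ) against queue backlog , using only the current slot's state . the power model , rates , and v below are illustrative assumptions , not the ddorv formulation .

```python
import random

V = 50.0       # penalty weight : larger v favors lower energy , larger queues
queue = 0.0    # task backlog ( bits )

def per_slot_decision(queue, rate):
    """choose offloaded bits b minimizing v * energy(b) - queue * b ,
    the per-slot drift-plus-penalty objective ."""
    candidates = [0.0, rate / 2, rate]
    def cost(b):
        energy = 0.002 * b ** 2 / rate   # illustrative convex transmit-energy model
        return V * energy - queue * b
    return min(candidates, key=cost)

for slot in range(1000):
    arrivals = random.uniform(0, 40)     # new task bits this slot
    rate = random.uniform(20, 60)        # current channel capacity
    offload = per_slot_decision(queue, rate)
    queue = max(queue + arrivals - offload, 0.0)
print(f"final backlog after 1000 slots : {queue:.1f} bits")
```

note that the controller never needs arrival statistics : the queue length itself summarizes the past , which is why such algorithms need only current system states , as the abstract emphasizes .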
based on the sdfc-venet architecture , mobility management and resource allocation are discussed . simulation results show the superiority of the presented sdfc-venet architecture , as well as the associated handover scheme and resource allocation mechanism . furthermore , the existing challenges and open issues are discussed . story_separator_special_tag vehicular edge computing is essential to support future emerging multimedia-rich and delay-sensitive applications in vehicular networks . however , the massive deployment of edge computing infrastructures induces new problems , including energy consumption and carbon pollution . this motivates us to develop begin ( big data enabled energy-efficient vehicular edge computing ) , a programmable , scalable , and flexible framework for integrating big data analytics with vehicular edge computing . in this article , we first present a comprehensive literature review . then the overall design principle of begin is described with an emphasis on computing domain and data domain convergence . in the next section , we classify big data in begin into four categories and then describe their features and potential values . four typical application scenarios in begin , including node deployment , resource adaptation and workload allocation , energy management , and proactive caching and pushing , are provided to illustrate how to achieve energy-efficient vehicular edge computing by using big data . a case study is presented to demonstrate the feasibility of begin and the superiority of big data in energy efficiency improvement . finally , we conclude this work and outline open issues for future research story_separator_special_tag in vehicular edge computing ( vec ) , resource-intensive tasks are offloaded to computing nodes at the network edge . owing to the high mobility and distributed nature of vehicles , optimal task offloading in vehicular environments is still a challenging problem . in this paper , we first introduce a software-defined vehicular edge computing ( sd-vec ) architecture where a controller not only guides the vehicles ' task offloading strategy but also determines the edge cloud resource allocation strategy . to obtain the optimal strategies , we formulate a problem on edge cloud selection and resource allocation to maximize the probability that a task is successfully completed within a pre-specified time limit . since the formulated problem is a well-known np-hard problem , we devise a mobility-aware greedy algorithm ( mga ) that determines the amount of edge cloud resources allocated to each vehicle . trace-driven simulation results demonstrate that mga provides near-optimal performance and improves the successful task execution probability compared with conventional algorithms . story_separator_special_tag low-latency communication is crucial to satisfy the strict requirements on latency and reliability in 5g communications . in this paper , we first consider a contract-based vehicular fog computing resource allocation framework to minimize the intolerable delay caused by the numerous tasks on the base station during peak time . in the vehicular fog computing framework , the users tend to select nearby vehicles to process their heavy tasks to minimize delay , which relies on the participation of vehicles . thus , it is critical to design an effective incentive mechanism to encourage vehicles to participate in resource allocation . next , the simulation results demonstrate that the contract-based resource allocation can achieve better performance .
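a hedged sketch of the mobility-aware greedy idea behind mga : discrete cpu units are handed out one at a time to whichever vehicle gains the most success probability , where success means the task finishes before the vehicle leaves edge coverage . the exponential dwell model and per-unit speed are illustrative assumptions , not the paper's trace-driven model .

```python
import math

def success_prob(cycles, units, dwell):
    """probability the task finishes within the dwell time , assuming an
    exponentially distributed dwell with the given mean and a service time
    of cycles / ( units * 1 ghz )."""
    if units == 0:
        return 0.0
    service_time = cycles / (units * 1e9)
    return math.exp(-service_time / dwell)  # p[dwell > service_time]

def greedy_allocate(vehicles, total_units):
    alloc = {vid: 0 for vid in vehicles}
    for _ in range(total_units):
        def gain(vid):
            cycles, dwell = vehicles[vid]
            return (success_prob(cycles, alloc[vid] + 1, dwell)
                    - success_prob(cycles, alloc[vid], dwell))
        best = max(alloc, key=gain)          # largest marginal gain wins the unit
        alloc[best] += 1
    return alloc

# vid -> ( required cpu cycles , expected dwell time in seconds )
vehicles = {"v1": (4e9, 2.0), "v2": (1e9, 0.5), "v3": (8e9, 5.0)}
print(greedy_allocate(vehicles, total_units=6))
```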
story_separator_special_tag vehicular fog computing ( vfc ) has emerged as a promising solution to relieve the overload on the base station and reduce the processing delay during peak time . the computation tasks can be offloaded from the base station to vehicular fog nodes by leveraging the under-utilized computation resources of nearby vehicles . however , the wide-area deployment of vfc still confronts several critical challenges , such as the lack of efficient incentive and task assignment mechanisms . in this paper , we address the above challenges and provide a solution to minimize the network delay from a contract-matching integration perspective . first , we propose an efficient incentive mechanism based on contract theoretical modeling . the contract is tailored for the unique characteristic of each vehicle type to maximize the expected utility of the base station . next , we transform the task assignment problem into a two-sided matching problem between vehicles and user equipment . the formulated problem is solved by a pricing-based stable matching algorithm , which iteratively carries out the propose and price-rising procedures to derive a stable matching based on the dynamically updated preference lists . finally , numerical results demonstrate that significant performance story_separator_special_tag as a typical application of the internet of things ( iot ) technology , the internet of vehicles ( iov ) is facing explosive computation demands and strict delay constraints . vehicular networks with mobile edge computing ( mec ) are a promising approach to address this problem . in this paper , we focus on the problem of reducing the completion time of virtual reality ( vr ) applications for iov . to this end , we propose a cooperative approach for parallel computing and transmission for vr . in our proposed scheme , a vr task is first divided into two sub-tasks . then one of the two is offloaded to the vehicle via wireless transmission so that the two sub-tasks can be processed at the mec server and the vehicle separately and simultaneously . we formulate the scheme as a nonlinear optimization problem to jointly determine the computation offloading proportion , communication resource and computation resource allocation . due to the np-hard property of this problem , a joint offloading proportion and resource allocation optimization ( joprao ) algorithm is designed to obtain the optimal solution . simulation results demonstrate that latency of vr task completion time story_separator_special_tag vehicular networks are facing the challenges of supporting ubiquitous connections and high quality of service for numerous vehicles . to address these issues , mobile edge computing ( mec ) is explored as a promising technology in vehicular networks by employing computing resources at the edge of vehicular wireless access networks . in this paper , we study efficient task offloading schemes in vehicular edge computing networks . the vehicles optimally perform offloading time selection and communication and computing resource allocation , while the mobility of vehicles and the maximum latency of tasks are considered . to minimize the system costs , including the costs of the required communication and computing resources , we first analyze the offloading schemes in the independent mec servers scenario . the offloading tasks are processed by the mec servers deployed at the access point ( ap ) independently . a mobility-aware task offloading scheme is proposed .
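as an aside on the parallel split used by joprao-style schemes earlier on this line : when a task of c cpu cycles and d input bits is divided between the mec server and the vehicle , the offloaded part's time ( transmit plus mec compute ) grows with the split fraction while the local part's time shrinks , so the makespan max( t_off , t_loc ) is minimized where the two are equal . the sketch below computes that balancing point ; all parameter values are illustrative assumptions .

```python
def optimal_split(c_cycles, d_bits, rate, f_mec, f_local):
    """return ( x , completion time ) : x = fraction processed at the mec."""
    t_off_unit = d_bits / rate + c_cycles / f_mec  # time per unit fraction offloaded
    t_loc_unit = c_cycles / f_local                # time per unit fraction local
    # t_off(x) = x * t_off_unit increases , t_loc(x) = (1-x) * t_loc_unit
    # decreases , so max( t_off , t_loc ) is minimized at their intersection :
    x = t_loc_unit / (t_off_unit + t_loc_unit)
    return x, x * t_off_unit

x, t = optimal_split(c_cycles=2e9, d_bits=8e6, rate=20e6, f_mec=10e9, f_local=1e9)
print(f"offload fraction x = {x:.3f}, completion time = {t:.3f} s")
```

with these numbers the balanced split finishes in about 0.46 s versus 2 s for purely local execution , which is the parallelism gain the abstract is after .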
then , in the cooperative mec servers scenario , the mec servers can further offload the collected overload tasks to the adjacent servers at the next ap in the vehicles ' moving direction . a location-based offloading scheme is proposed . in both scenarios , the story_separator_special_tag mobile edge computing ( mec ) is a new paradigm to improve the quality of vehicular services by providing computation offloading close to vehicular terminals ( vts ) . however , due to the computation limitation of the mec servers , how to optimally utilize the limited computation resources of mec servers while maintaining a high quality of experience ( qoe ) of vts becomes a challenge . to address the problem , we investigate a novel computation offloading scheme based on the mec offloading framework in vehicular networks . firstly , the utility of vts for offloading their computation tasks is presented , where the utility is jointly determined by the execution time , computation resources and the energy for completing the computation tasks . next , with the theoretical analysis of the utility , the qoe of each vt can be guaranteed . then , combined with the pricing scheme of the mec servers , we propose an efficient distributed computation offloading algorithm to make the optimal offloading decisions for vts , where the utility of the mec servers is maximized and the qoe of the vts is enhanced . in addition , simulation results demonstrate that the proposal story_separator_special_tag technological evolutions in the automobile industry , especially the development of connected and autonomous vehicles , have granted vehicles more computing , storage , and sensing resources . the necessity of efficient utilization of these resources leads to the vision of vehicular cloud computing ( vcc ) , which can offload the computing tasks from the edge or remote cloud to enhance the overall efficiency . in this paper , we study the problem of computation offloading through the vehicular cloud ( vc ) , where computing missions from the edge cloud can be offloaded and executed cooperatively by vehicles in the vc . specifically , computing missions are further divided into computing tasks with interdependency and executed in different vehicles in the vc to minimize the overall response time . to characterize the instability of computing resources resulting from the high vehicular mobility , a mobility model focusing on vehicular dwell time is utilized . considering the heterogeneity of vehicular computing capabilities and the interdependency of computing tasks , we formulate an optimization problem for task scheduling , which is np-hard . for low complexity , a modified genetic algorithm based scheduling scheme is designed where integer coding is used rather story_separator_special_tag with advances in information and communication technology ( ict ) , connected vehicles are one of the key enablers to unleash intelligent transportation systems ( its ) . on the other hand , the envisioned massive number of connected vehicles raises the need for powerful communication and computation capabilities . as an emerging technique , fog computing is expected to be integrated with existing communication infrastructures , giving rise to the concept of fog-enhanced radio access networks ( ferans ) . such an architecture brings computation capabilities closer to vehicular users , thereby reducing the communication latency to access services , while making users capable of sharing local environment information for advanced vehicular services .
in feran service migration , where the service is migrated from a source fog node to a target fog node following the vehicle 's moving trace , it is necessary for users to access the service as close as possible in order to maintain service continuity and satisfy the stringent latency requirements of real-time services . fog servers , however , need to have sufficient computational resources available to support such migration . indeed , a fog node typically has limited resources and hence can easily story_separator_special_tag mobile computation offloading enables resource-constrained mobile devices to offload their computation-intensive tasks to other available computing resources for local energy savings . in this paper , we study the offloading decision and task scheduling issues when multiple serving vehicles ( svs ) can be utilized in a vehicular network . the overall energy consumption of users with dynamic voltage scaling ( dvs ) technology is minimized subject to the delay constraint of each task . for the ideal cases , the assignment and scheduling in an offline style are first formulated as a mixed-integer nonlinear programming ( minlp ) problem , and the optimal solution is derived based on dynamic programming . after that , two online strategies , energy consumption minimization ( ecm ) based low-complexity assignment , and resource reservation ( rr ) assignment , are also proposed . simulation results demonstrate the improvements in energy saving when the proposed strategies incorporated with the dvs technology are adopted . story_separator_special_tag with the onset of intelligent transport systems , vehicles are equipped with internet-enabled powerful computation units that provide smart driving assistance , along with various infotainment applications . these applications require web assistance and high computation power , which can not be provided by the standalone onboard units of the smart vehicles . third-party infrastructures like the centralized cloud and cloudlets are introduced to meet the requirements of such vehicular , web-based , resource-hungry applications . offloading jobs to the centralized cloud exhausts network bandwidth and causes network delay , whereas frequent offloading to a cloudlet results in resource starvation due to limited cloudlet resources . these problems lead to the introduction of vehicular cloud computing ( vcc ) , where the onboard units of several local smart vehicles collectively form a cloud . the concept of a multi-layered cloud brings the centralized cloud , cloudlet and vehicular cloud together to coexist and provide on-demand services to mobile and vehicular users . in this work , a three-tier architecture is proposed consisting of the vehicular cloud , roadside cloudlet and centralized cloud . we have developed an optimized resource allocation and task scheduling algorithm to efficiently serve a huge number of story_separator_special_tag fog computing extends the facility of cloud computing from the center to edge networks . although fog computing has the advantages of location awareness and low latency , the rising requirements of ubiquitous connectivity and ultra-low latency challenge real-time traffic management for smart cities . as an integration of fog computing and vehicular networks , vehicular fog computing ( vfc ) is promising to achieve real-time and location-aware network responses .
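the dvs-based energy reasoning above admits a short hedged sketch : with dynamic voltage scaling , dynamic energy scales roughly as kappa * cycles * f^2 while execution time is cycles / f , so under a deadline the energy-minimal choice is the lowest frequency that still meets the deadline , and an ecm-style assignment then sends each task to the serving vehicle where that energy is smallest . kappa and the frequency menus below are illustrative assumptions .

```python
KAPPA = 1e-27   # effective switched-capacitance constant ( illustrative )

def min_energy_on_sv(cycles, deadline, freq_levels):
    """pick the lowest feasible dvs frequency ; return ( energy , f ) or none."""
    feasible = [f for f in freq_levels if cycles / f <= deadline]
    if not feasible:
        return None
    f = min(feasible)                   # lowest feasible => least e = k*c*f^2
    return KAPPA * cycles * f ** 2, f

def ecm_assign(task, svs):
    """assign the task to the sv minimizing energy under its deadline."""
    cycles, deadline = task
    best = None
    for sv_id, levels in svs.items():
        result = min_energy_on_sv(cycles, deadline, levels)
        if result and (best is None or result[0] < best[1]):
            best = (sv_id, result[0], result[1])
    return best   # ( sv , energy in joules , chosen frequency ) or none

svs = {"sv1": [0.5e9, 1.0e9, 2.0e9], "sv2": [0.8e9, 1.6e9]}
print(ecm_assign((2e9, 1.5), svs))      # 2 gcycles , 1.5 s deadline -> sv2
```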
since the concept and use cases of vfc are in the initial phase , this article first constructs a three-layer vfc model to enable distributed traffic management in order to minimize the response time of citywide events collected and reported by vehicles . furthermore , the vfc-enabled offloading scheme is formulated as an optimization problem by leveraging moving and parked vehicles as fog nodes . a real-world taxi-trajectory-based performance analysis validates our model . finally , some research challenges and open issues toward vfc-enabled traffic management are summarized and highlighted . story_separator_special_tag mobile edge computing ( mec ) has been recently proposed to bring computing capabilities closer to mobile endpoints , with the aim of providing low-latency and real-time access to network information via applications and services . several attempts have been made to integrate mec in intelligent transportation systems ( its ) , including new architectures , communication frameworks , deployment strategies and applications . in this paper , we explore existing architecture proposals for integrating mec in vehicular environments , which would allow the evolution of the next generation its in smart cities . moreover , we classify the desired applications into four major categories . we rely on a mec architecture with three layers to propose a data dissemination protocol , which can be utilized by traffic safety and travel convenience applications in vehicular networks . furthermore , we provide a simulation-based prototype to evaluate the performance of our protocol . simulation results show that our proposed protocol can significantly improve the performance of data dissemination in terms of data delivery , communication overhead and delay . in addition , we highlight challenges and open issues to integrate mec in vehicular networking environments for further research . story_separator_special_tag in the era of smart cities , all vehicle systems will be connected to enhance the comfort of driving , relieve traffic congestion , and enjoy in-vehicle multimedia entertainment . the vision of all vehicles connected poses a crucial challenge for an individual vehicle system to efficiently support these applications . network virtualization is a very promising enabling solution , by allowing multiple isolated and heterogeneous virtual networks ( vns ) to satisfy the different quality of service ( qos ) requirements . the smart identifier network ( sinet ) may provide vns through effective resource allocation and control based on its model of three layers and two domains . in this paper , we provide resource allocation and mapping of the vehicular networks through elastic network virtualization based on sinet . the appropriate vehicles are selected and grouped as a function group for a specific service , by which the difference of heterogeneous vehicular resources is hidden . vehicular nodes are autonomously organized so that each of them can evaluate others ' resource availability in a topology-aware way by leveraging learning technology , and make its own decision to realize the whole mapping process through a phasing virtual network embedding story_separator_special_tag mobile edge computing ( mec ) has emerged as a promising paradigm to realize user requirements with low-latency applications . the deep integration of multi-access technologies and mec can significantly enhance the access capacity between heterogeneous devices and mec platforms .
however , the traditional mec network architecture can not be directly applied to the internet of vehicles ( iov ) due to high-speed mobility and inherent characteristics . furthermore , given a large number of resource-rich vehicles on the road , there is a new opportunity to offload task execution and data processing onto smart vehicles . to facilitate the merging of mec technology into iov , this article first introduces a vehicular edge multi-access network that treats vehicles as edge computation resources to construct a cooperative and distributed computing architecture . for immersive applications , co-located vehicles have the inherent property of generating considerable numbers of identical or similar computation tasks . we propose a collaborative task offloading and output transmission mechanism to guarantee low latency as well as application-level performance . finally , we take 3d reconstruction as an exemplary scenario to provide insights on the design of the network framework . numerical results demonstrate story_separator_special_tag with the explosion in the number of connected devices and internet of things ( iot ) services in the smart city , the challenges of meeting the demands from both data traffic delivery and information processing are increasingly prominent . meanwhile , connected vehicle networks have become an essential part of the smart city , bringing massive data traffic as well as significant communication , caching , and computing resources . as the two typical service types in the smart city , delay-tolerant and delay-sensitive traffic require very different quality of service ( qos ) / quality of experience ( qoe ) , and could be delivered through routes with different features to meet their qos/qoe requirements with the lowest costs . in this paper , we propose a novel vehicle network architecture in the smart city scenario , mitigating network congestion with the joint optimization of networking , caching , and computing resources . cloud computing at the data centers as well as mobile edge computing at the evolved node bs and on-board units are taken as the paradigms to provide caching and computing resources . the programmable control principle originating from the software-defined networking paradigm has been introduced into story_separator_special_tag vehicular edge computing ( vec ) has been introduced recently to extend computing capacity to the vehicular network edge . with the advent of vec , service providers directly host services in close proximity to mobile vehicles for great improvements . as a result , a new networking paradigm , vehicular edge networks , has emerged along with the development of vec . however , it is necessary to address security issues to facilitate vec well . in this paper , we focus on reputation management to ensure security protection and improve network efficiency in the implementation of vec . a distributed reputation management system ( dreams ) is proposed , wherein vec servers are adopted to execute local reputation management tasks for vehicles . this system has remarkable features for improving overall performance : 1 ) distributed reputation maintenance ; 2 ) trusted reputation manifestation ; 3 ) accurate reputation update ; and 4 ) available reputation usage . in particular , we utilize multi-weighted subjective logic for accurate reputation updates in dreams . to enrich reputation usage in dreams , service providers optimize resource allocation in computation offloading by considering the reputation of vehicles .
numerical results indicate that dreams has great story_separator_special_tag vehicular edge computing ( vec ) has been studied as an important application of mobile edge computing in vehicular networks . usually , the generalization of vec involves large-scale deployment of dedicated servers , which will cause tremendous economic expense . we also observe that parked vehicles ( pvs ) , in addition to mobile vehicles , have rich and underutilized resources for task execution in vehicular networks . thus , we consider scheduling pvs as available edge computing nodes to execute tasks , and this leads to a new computing paradigm , called parked vehicle edge computing ( pvec ) . in this paper , we investigate pvec and explore opportunistic resources from pvs to run distributed mobile applications . pvs coordinate with vec servers for collective task execution . first , a system architecture with primary network entities is proposed for enabling pvec . we also elaborately design an interactive protocol to support mutual communications among them with security guarantees . moreover , we measure the availability of opportunistic resources and formulate a resource scheduling optimization problem by using a stackelberg game approach . a subgradient-based iterative algorithm is presented to determine workload allocation among pvs and minimize the story_separator_special_tag as vehicle applications , mobile devices and the internet of things are growing fast , developing an efficient architecture to deal with the big data in the internet of vehicles ( iov ) has become an important concern for the future smart city . to overcome the inherent defect of centralized data processing in cloud computing , fog computing has been proposed , offloading computation tasks to local fog servers ( lfss ) . by considering factors like latency , mobility , localization , and scalability , this article proposes a regional cooperative fog-computing-based intelligent vehicular network ( cfc-iov ) architecture for dealing with big iov data in the smart city . possible services for iov applications are discussed , including mobility control , multi-source data acquisition , distributed computation and storage , and multi-path data transmission . a hierarchical model with intra-fog and inter-fog resource management is presented , and the energy efficiency and packet dropping rates of lfss in cfc-iov are optimized . story_separator_special_tag enabling hd-map-assisted cooperative driving among cavs to improve navigation safety faces technical challenges due to the increased communication traffic volume for data dissemination and an increased number of computing/storing tasks on cavs . in this article , a new architecture that combines mec and sdn is proposed to address these challenges . with mec , the interworking of multiple wireless access technologies can be realized to exploit the diversity gain over a wide range of radio spectrum , and at the same time , computing/storing tasks of a cav are collaboratively processed by servers and other cavs . by enabling nfv in mec , different functions can be programmed on the server to support diversified av applications , thus enhancing the server 's flexibility . moreover , by using sdn concepts in mec , a unified control plane interface and global information can be provided , and by subsequently using this information , intelligent traffic steering and efficient resource management can be achieved .
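the subgradient-based workload allocation mentioned for pvec earlier on this line can be sketched with dual decomposition : to source a total workload w from parked vehicles with convex costs c_i(x) = a_i * x^2 , each pv's best response to a price lambda is x_i = lambda / ( 2 * a_i ) , and a subgradient step moves the price until supply meets demand . the cost coefficients and step size are illustrative assumptions , not the paper's stackelberg model .

```python
A = {"pv1": 1.0, "pv2": 0.5, "pv3": 2.0}   # per-pv quadratic cost coefficients
DEMAND = 12.0                               # total workload w to allocate
STEP = 0.05

lam = 0.0                                   # price ( dual variable )
for _ in range(500):
    supply = {pv: lam / (2 * a) for pv, a in A.items()}  # pv best responses
    total = sum(supply.values())
    lam = max(0.0, lam + STEP * (DEMAND - total))        # subgradient update

print({pv: round(x, 2) for pv, x in supply.items()}, round(lam, 2))
# converges to the allocation where marginal costs are equalized at lam ~ 6.86
```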
a case study is presented to demonstrate the effectiveness of the proposed architecture . story_separator_special_tag vehicular fog computing , which extends the mobile cloud paradigm , is usually composed of stable infrastructures , a large volume of vehicles , portable devices , and robust networks . as a service-providing platform , it is important to quickly obtain the required service so as to save the energy of the corresponding nodes and effectively improve network survivability . however , the limited capacity of components makes such a situation more complicated . this paper aims to reduce serving time by allocating the available bandwidth to four kinds of services . a utility model is built according to the above-mentioned serving methods and is solved through a two-step approach . for the first step , all the sub-optimal solutions are provided based on a lagrangian algorithm . for the second step , an optimal solution selection process is presented and analyzed . a numerical simulation is executed to illustrate the allocation results and the optimal utility model while optimizing the survivability . story_separator_special_tag vehicular networks enable efficient communication with the aim of improving data dissemination among vehicles . however , a growing number of vehicles expect to conduct data dissemination through roadside facilities , which increases the network load . to address the issue , this letter applies fog computing technologies to enhance the vehicular network , which is planned as a layered network architecture . moreover , two dynamic scheduling algorithms are proposed on the fog computing scheme for data scheduling in vehicular networks . these algorithms can dynamically adapt to a changing network environment and improve efficiency . for performance analysis , a compositional formal method , named performance evaluation process algebra , is applied to model the scheduling algorithms in a fog-based vehicular network . story_separator_special_tag in this paper , we propose chimera , a novel hybrid edge computing framework , integrated with the emerging edge cloud radio access network , to augment network-wide vehicle resources for future large-scale vehicular crowdsensing applications , by leveraging a multitude of cooperative vehicles and the virtual machine ( vm ) pool in the edge cloud via the control of the application manager deployed in the edge cloud . we present a comprehensive framework model and formulate a novel multivehicle and multitask offloading problem , aiming at minimizing the energy consumption of network-wide recruited vehicles serving heterogeneous crowdsensing applications , and meanwhile reconciling both application deadline and vehicle incentive . we invoke the lyapunov optimization framework to design tasksche , an online task scheduling algorithm , which only utilizes the current system information . as the core components of the algorithm , we propose a task workload assignment policy based on graph transformation and a knapsack-based vm pool resource allocation policy . rigorous theoretical analyses and extensive trace-driven simulations indicate that our framework achieves superior performance ( e.g.
, 20 % to 68 % energy saving without overstepping application deadlines for network-wide vehicles compared with vehicle local processing ) and scales well story_separator_special_tag the resource limitation of multi-access edge computing ( mec ) is one of the major issues in providing low-latency , high-reliability computing services for internet of things ( iot ) devices . moreover , with the steep rise of task requests from iot devices , computation tasks require dynamic scalability , which can be achieved by exploiting the potential of offloading tasks to mobile volunteer nodes ( mvns ) . we , therefore , propose a scalable vehicle-assisted mec ( svmec ) paradigm , which can not only relieve the resource limitation of mec but also enhance the scalability of computing services for iot devices and reduce the cost of using computing resources . in the svmec paradigm , a mec provider can execute its users ' tasks in one of three ways : ( i ) execute them itself on the local mec , ( ii ) offload them to the remote cloud , or ( iii ) offload them to the mvns . we formulate the problem of joint node selection and resource allocation as a mixed integer nonlinear programming ( minlp ) problem , whose major objective is to minimize the total computation overhead in terms of the weighted-sum of task completion story_separator_special_tag by analogy with the internet of things , the internet of vehicles ( iov ) , which enables ubiquitous information exchange and content sharing among vehicles with little or no human intervention , is a key enabler for the intelligent transportation industry . in this paper , we study how to combine both the physical and social layer information for realizing rapid content dissemination in device-to-device vehicle-to-vehicle ( d2d-v2v ) -based iov networks . in the physical layer , the headway distance of vehicles is modeled as a wiener process , and the connection probability of d2d-v2v links is estimated by employing the kolmogorov equation . in the social layer , the social relationship tightness that represents content selection similarities is obtained by bayesian nonparametric learning based on real-world social big data , which are collected from the largest chinese microblogging service sina weibo and the largest chinese video-sharing site youku . then , a price-rising-based iterative matching algorithm is proposed to solve the formulated joint peer discovery , power control , and channel selection problem under various quality-of-service requirements . finally , numerical results demonstrate the effectiveness and superiority of the proposed algorithm from the perspectives of weighted sum rate and matching satisfaction gains story_separator_special_tag social internet of vehicles ( siov ) is a new paradigm that enables social relationships among vehicles by integrating vehicle-to-everything communications and social networking properties into the vehicular environment . through the provision of diverse socially-inspired applications and services , the emergence of siov helps to improve the road experience , traffic efficiency , road safety , travel comfort , and entertainment along the roads . however , the computation performance for those applications has been seriously affected by resource-limited on-board units as well as deployment costs and workloads of roadside units . in this context , an unmanned aerial vehicle ( uav ) -assisted mobile edge computing environment over siov with a three-layer integrated architecture is adopted in this paper .
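the chimera abstract above mentions a knapsack-based vm pool resource allocation policy . a minimal sketch of that building block is the classic 0/1 knapsack dynamic program : each candidate task has an integer resource demand and a value , and the vm pool admits the subset of tasks that maximizes total value within its budget . the demands , values , and budget below are toy assumptions , and the paper 's actual policy may weight tasks differently .

# illustrative 0/1 knapsack allocation of a vm pool's cpu budget among tasks.
# assumption: each candidate task has an integer cpu demand and a value
# (e.g., energy saved by offloading it); this is a stand-in for the paper's policy.

def knapsack_allocate(demands, values, budget):
    """returns (best_value, chosen_task_indices) under the cpu budget."""
    n = len(demands)
    dp = [[0] * (budget + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for b in range(budget + 1):
            dp[i][b] = dp[i - 1][b]  # option: skip task i-1
            if demands[i - 1] <= b:
                take = dp[i - 1][b - demands[i - 1]] + values[i - 1]
                dp[i][b] = max(dp[i][b], take)
    # backtrack to recover which tasks were admitted to the vm pool
    chosen, b = [], budget
    for i in range(n, 0, -1):
        if dp[i][b] != dp[i - 1][b]:
            chosen.append(i - 1)
            b -= demands[i - 1]
    return dp[n][budget], chosen[::-1]

print(knapsack_allocate([3, 4, 2, 5], [6, 7, 3, 9], budget=9))  # (16, [0, 1, 2])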
within this architecture , we explore the energy-aware dynamic resource allocation problem by taking into account partial computation offloading , social content caching , and radio resource scheduling . particularly , we develop an optimization framework for total utility maximization by jointly optimizing the transmit power of the vehicles and the uav trajectory . to resolve this problem , an energy-aware dynamic power optimization problem is formulated under the constraint of the evolution law of energy consumption state for story_separator_special_tag data offloading plays an important role in addressing the mobile data explosion problem in cellular networks . this paper proposes an idea and a control scheme for offloading vehicular communication traffic in the cellular network to vehicle-to-vehicle ( v2v ) paths that can exist in vehicular ad hoc networks ( vanets ) . a software-defined network ( sdn ) inside the mobile edge computing ( mec ) architecture , which is abbreviated as the sdni-mec server , is devised in this paper to tackle the complicated issues of vanet v2v offloading . using the proposed sdni-mec architecture , each vehicle reports its contextual information to the context database of the sdni-mec server , and the sdn controller of the sdni-mec server calculates whether there is a v2v path between the two vehicles that are currently communicating with each other through the cellular network . this proposed method : 1 ) uses each vehicle 's context ; 2 ) adopts a centralized management strategy for calculation and notification ; and 3 ) tries to establish a vanet routing path for paired vehicles that are currently communicating with each other using a cellular network . the performance analysis for the story_separator_special_tag several novel and promising wireless vehicular-network applications have been developed recently . examples include trip planning , media sharing , and internet access . however , sufficient network resources [ e.g . , equipped vehicles , base stations ( bss ) , roadside units ( rsus ) , and other infrastructure ] to support these applications are not yet available or are deployed at a slow pace . moreover , these applications require considerable efforts in information gathering and data processing to allow quick decision making and fast feedback to all users . this imposes significant challenges on the development of an efficient wireless vehicular-communication platform . in this article , we propose a new hierarchical software-defined-network ( sdn ) -based wireless vehicular fog architecture called a hierarchical sdn for vehicular fog ( hsvf ) architecture . the hsvf is based on a hybrid sdn control plane that reinforces centralized and distributed management . the proposed sdn control plane includes a trajectory prediction module to mitigate the drawbacks of the frequent handover problem between the rsus and vehicles . the proposed architecture is then evaluated using a relevant case study that addresses the scheduling of electric vehicle ( ev ) energy story_separator_special_tag mobile edge computing ( mec ) is a promising solution to improve vehicular services by offloading computation to cloud servers in close proximity to mobile vehicles . however , the self-interested nature and high mobility of the vehicles make the design of the computation offloading scheme a significant challenge .
in this paper , we propose a new vehicular edge computing ( vec ) framework to model the computation offloading process of the mobile vehicles running on a bidirectional road . based on this framework , we adopt a contract theoretic approach to design optimal offloading strategies for the vec service provider , which maximize the revenue of the provider while enhancing the utilities of the vehicles . to further improve the utilization of the computing resources of the vec servers , we incorporate task priority distinction as well as additional resource provisioning into the design of the offloading scheme , and propose an efficient vec server selection and computing resource allocation algorithm . numerical results indicate that our proposed schemes greatly enhance the revenue of the vec provider , and concurrently improve the utilization of cloud computing resources . story_separator_special_tag in this article , a robust and distributed incentive scheme for collaborative caching and dissemination in content-centric cellular-based vehicular delay-tolerant networks ( repsys ) is proposed . repsys is robust because , despite taking into account first- and second-hand information , it is resilient against false accusations and praise , and it is distributed , as the decision to interact with another node depends entirely on each node . the performance evaluation shows that repsys is capable , while evaluating each node 's participation in the network , of correctly classifying nodes in most cases . in addition , it reveals that there are trade-offs in repsys : for example , to reduce the detection time of nodes that neither cache nor disseminate other nodes ' data , one may sacrifice the system 's resilience against false accusations and praise , or even , by penalizing nodes that do not disseminate data , temporarily isolate nodes that could contribute to data dissemination . story_separator_special_tag this paper studies computation offloading for mobile users in legacy vehicles that are not equipped with powerful computing devices . when one of these users runs a time-sensitive and computation-intensive application on a mobile device ( md ) , the md may offload part or all of the application to a nearby smart vehicle that can serve as a cloudlet . this provides a possibility to complete the application in time while saving the battery energy of the md . however , high mobility of the vehicles causes short-lived communication links between the vehicles . as a result , the md may have to offload tasks to a new cloudlet each time it loses the connection to the current cloudlet . in this paper , we propose a vehicle-based cloudlet relaying ( vcr ) scheme for mobile computation offloading . the objective is to effectively utilize the computation resources available in surrounding smart vehicles of the md in the highly dynamic network environment , where each vehicle-based cloudlet may only be available to help the md execute a small portion of the application . instead of offloading tasks directly from the md to individual cloudlets , which may consume high power and story_separator_special_tag heterogeneous vehicular networks ( hetvnets ) , which apply heterogeneous access technologies ( e.g. , cellular and wifi ) complementarily to provide seamless and ubiquitous connections to vehicles , have emerged as a promising and practical paradigm to enable vehicular service applications on the road .
however , with different access technologies presenting different costs in terms of download latency and bandwidth cost , how to optimize the connection along the vehicle 's trip towards the lowest cost represents fundamental challenges . this paper investigates the issue by proposing an optimal access control scheme for vehicles in hetvnets . in specific , with different access links , we first model the cost of each vehicle to download its content by jointly considering the conventional vehicle to vehicle ( v2v ) communication and the available access links . a coalition formation game is then introduced to formulate the cooperation among vehicles based on different interests ( content cached in vehicles ) and requests ( content needs to download ) . after forming the coalition , vehicles in the same coalition can download their requested content by selecting the optimal access link to achieve the minimum cost . simulation results story_separator_special_tag the heterogeneous vehicular networks ( hetvnets ) , which apply the heterogeneous access technologies ( e.g . , cellular networks and wifi ) complementarily to provide seamless and ubiquitous connections to vehicles , have emerged as a promising and practical paradigm to enable vehicular service applications on the road . however , with different costs in terms of latency time and price , how to optimize the connection along the vehicle s trip toward the lowest cost represents fundamental challenges . this paper investigates the issue by proposing an optimal access control scheme for vehicles in hetvnets . in specific , with different access networks , we first model the cost of each vehicle to download the requested content by jointly considering the vehicle s requirements of the requested content and the features of the available access networks , including conventional vehicle to vehicle communication and the heterogeneous access technologies . a coalition formation game is then introduced to formulate the cooperation among vehicles based on their different interests ( contents cached in vehicles ) and requests ( contents to be downloaded ) . after forming the coalitions , vehicles in the same coalition can download their requested contents cooperatively story_separator_special_tag the emergence of computation-intensive vehicle applications poses a significant challenge to the limited computation capacity of on-board equipments . mobile edge computing has been recognized as a . story_separator_special_tag recently , parked vehicles have been shown to be useful to deliver content in vehicular ad hoc networks , where the parked vehicles can form social communities to share and exchange content with other moving vehicles and road side units ( rsus ) . however , as it takes resource such as bandwidth and power for parked vehicles and rsus to deliver content , the incentive scheme with the optimal pricing strategy needs to be studied . furthermore , because multiple places including rsus and parked vehicles can deliver content to moving vehicles , the optimal algorithm to determine where to obtain the requested content should also be discussed . therefore , in this paper , we first propose a framework of content delivery with parked vehicles , where moving vehicles can obtain content from both the rsu and parked vehicles according to the competition and cooperation among them . 
then , based on a stackelberg game , we develop a pricing model where each of the three players , including moving vehicles , rsu , and parked vehicles , can obtain their maximum utilities . next , a gradient based iteration algorithm is presented to obtain the stackelberg equilibrium story_separator_special_tag mobile computation offloading ( mco ) is an emerging technology to offload the resource-intensive computations from smart mobile devices ( smds ) to nearby resource-rich devices ( i.e. , cloudlets ) via wireless access . however , the link duration between a smd and a single cloudlet can be very limited in a vehicular network . as a result , offloading actions taken by a smd may fail due to link breakage caused by mobility . meanwhile , some vehicles , such as buses , always follow relatively fixed routes , and their locations can be predicted much easier than other vehicles . by taking advantage of this fact , we propose a semi-markov decision process ( smdp ) -based cloudlet cooperation strategy , where the bus-based cloudlets act as computation service providers for the smds in vehicles , and an application generated by a smd includes a series of tasks that have dependency among each other . in this paper , we adopt a semi-markov decision process ( smdp ) framework to formulate the bus-based cooperation computing problem as a delay-constrained shortest path problem on a state transition graph . the value iteration algorithm ( via ) is used story_separator_special_tag mobile computation offloading is an emerging technology to migrate resource-intensive computations from resource-limited mobile devices ( mds ) to resource-rich devices ( such as a cloud server ) via wireless access . accessing to remote cloud server usually introduces a long delay to first deliver parameters to the server and then retrieve the results back . for applications that are time sensitive , offloading to nearby cloudlets is preferred . however , the link duration between an md and a single cloudlet can be very limited in a vehicular network . as a result , offloading actions taken by an md may fail due to link breakage caused by mobility . meanwhile , some vehicles , such as buses , always follow relatively fixed routes , and their locations can be predicted much easier than other vehicles . by taking advantage of this fact , we propose a bus-based cloudlet cooperation strategy , where the bus-based cloudlets act as computation service providers for the mds in vehicles , and an application generated by an md includes a series of tasks that have dependency among each other . the proposed bus-based cloudlet cooperation strategy ( bccs ) finds the optimal set story_separator_special_tag vehicular edge computing has emerged as a promising technology to accommodate the tremendous demand for data storage and computational resources in vehicular networks . by processing the massive workload tasks in the proximity of vehicles , the quality of service can be guaranteed . however , how to determine the task offloading strategy under various constraints of resource and delay is still an open issue . in this paper , we study the task offloading problem from a matching perspective and aim to optimize the total network delay . the task offloading delay model is derived based on three different velocity models , i.e. , a constant velocity model , vehicle-following model , and traveling-time statistical model . 
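the bus-based cloudlet strategies above rely on the value iteration algorithm ( via ) to find a minimum-delay policy over a state transition graph . a minimal sketch on a toy two-state , two-action mdp follows ; the transition probabilities , costs , and discount factor are assumptions for illustration , and the papers ' actual state spaces ( link states , task dependencies , bus locations ) are far richer .

# illustrative value iteration on a small finite mdp, in the spirit of the via
# used for the bus-cloudlet offloading strategy above. states, actions,
# transition probabilities, and costs are toy assumptions.
import numpy as np

def value_iteration(P, C, gamma=0.95, tol=1e-6):
    """P[a, s, s'] transition probs, C[a, s] immediate costs.
    returns the cost-to-go V and a greedy policy (action per state)."""
    n_actions, n_states, _ = P.shape
    V = np.zeros(n_states)
    while True:
        # Q[a, s] = immediate cost plus discounted expected cost-to-go
        Q = C + gamma * np.einsum("ast,t->as", P, V)
        V_new = Q.min(axis=0)  # minimize delay cost over actions
        if np.max(np.abs(V_new - V)) < tol:
            return V_new, Q.argmin(axis=0)
        V = V_new

P = np.array([[[0.9, 0.1], [0.2, 0.8]],    # action 0: stay on current cloudlet
              [[0.5, 0.5], [0.6, 0.4]]])   # action 1: hand over to the next bus
C = np.array([[1.0, 4.0],                  # per-step delay cost of action 0
              [2.0, 1.5]])                 # per-step delay cost of action 1
V, policy = value_iteration(P, C)
print(V, policy)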
next , we propose a pricing-based one-to-one matching algorithm and pricing-based one-to-many matching algorithms for the task offloading . the proposed algorithm is validated based on three different simulation scenarios , i.e. , straight road , the urban road with the traffic light , and crooked road , which are extracted from the realistic road topologies in beijing and guangdong , china . the simulation results confirm that significant delay decreasing can be achieved by the proposed algorithm . story_separator_special_tag the emergence of computation intensive on-vehicle applications poses a significant challenge to provide the required computation capacity and maintain high performance . vehicular edge computing ( vec ) is a new computing paradigm with a high potential to improve vehicular services by offloading computation-intensive tasks to the vec servers . nevertheless , as the computation resource of each vec server is limited , offloading may not be efficient if all vehicles select the same vec server to offload their tasks . to address this problem , in this paper , we propose offloading with resource allocation . we incorporate the communication and computation to derive the task processing delay . we formulate the problem as a system utility maximization problem , and then develop a low-complexity algorithm to jointly optimize offloading decision and resource allocation . numerical results demonstrate the superior performance of our joint optimization of selection and computation ( josc ) algorithm compared to state of the art solutions . story_separator_special_tag this paper studies the joint communication , caching and computing design problem for achieving the operational excellence and the cost efficiency of the vehicular networks . moreover , the resource allocation policy is designed by considering the vehicle 's mobility and the hard service deadline constraint . these critical challenges have often been either neglected or addressed inadequately in the existing work on the vehicular networks because of their high complexity . we develop a deep reinforcement learning with the multi-timescale framework to tackle these grand challenges in this paper . furthermore , we propose the mobility-aware reward estimation for the large timescale model to mitigate the complexity due to the large action space . numerical results are presented to illustrate the theoretical findings developed in the paper and to quantify the performance gains attained . story_separator_special_tag this paper shows the viability of solar-powered road side units ( srsu ) , consisting of small cell base stations and mobile edge computing ( mec ) servers , and powered solely by solar panels with battery , to provide connected vehicles with a low- latency , easy-to-deploy and energy-efficient communication and edge computing infrastructure . however , srsu may entail a high risk of power deficiency , leading to severe quality of service ( qos ) loss due to spatial and temporal fluctuation of solar power generation . meanwhile , the data traffic demand also varies with space and time . the mismatch between solar power generation and srsu power consumption makes optimal use of solar power challenging . in this paper , we model the above problem with three sub-problems , the srsu power consumption minimization problem , the temporal energy balancing problem and spatial energy balancing problem . 
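the pricing-based matching algorithms above pair vehicles with edge servers by letting prices rise on over-demanded servers until the assignment stabilizes . the sketch below follows the same auction-style pattern for a one-to-one matching that minimizes delay plus price ; the delay matrix and bid increment are assumptions , and it presumes at least as many servers as vehicles .

# illustrative pricing-based one-to-one matching of vehicles to vec servers:
# an unmatched vehicle bids for its best server, the server's price rises by
# the vehicle's margin, and the server keeps the latest (highest) bidder.
# assumption: number of servers >= number of vehicles.

def price_matching(delay, eps=0.01):
    """delay[v][s]: offloading delay of vehicle v on server s (lower is better).
    returns a dict vehicle -> server."""
    n_v, n_s = len(delay), len(delay[0])
    price = [0.0] * n_s           # current price of each server
    owner = [None] * n_s          # vehicle currently holding each server
    unmatched = list(range(n_v))
    while unmatched:
        v = unmatched.pop()
        # vehicle prefers small delay + price; find best and second-best servers
        order = sorted(range(n_s), key=lambda s: delay[v][s] + price[s])
        best, second = order[0], order[min(1, n_s - 1)]
        gain = (delay[v][second] + price[second]) - (delay[v][best] + price[best])
        price[best] += gain + eps          # raise the price on the chosen server
        if owner[best] is not None:
            unmatched.append(owner[best])  # displaced vehicle re-enters the pool
        owner[best] = v
    return {owner[s]: s for s in range(n_s) if owner[s] is not None}

print(price_matching([[1.0, 2.0], [1.2, 3.0]]))  # {1: 0, 0: 1}, total delay 3.2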
three algorithms are proposed to solve the above sub-problems , and they together provide a complete joint battery charging and user association control algorithm to minimize the qos loss under the delay constraint of the computing tasks . results with a simulated urban environment using actual solar irradiance and vehicular story_separator_special_tag the dawn of the 21st century has seen a growing interest in vehicular networking and its myriad potential applications . the initial view of practitioners and researchers was that radio-equipped vehicles could keep the drivers informed about potential safety risks and increase their awareness of road conditions . the view then expanded to include access to the internet and associated services . this position paper proposes and promotes a novel and more comprehensive vision , namely that advances in vehicular networks , embedded devices , and cloud computing will enable the formation of autonomous clouds of vehicular computing , communication , sensing , power and physical resources . hence , we coin the term autonomous vehicular clouds ( avcs ) . a key feature distinguishing avcs from conventional cloud computing is that mobile avc resources can be pooled dynamically to serve authorized users and to enable autonomy in real-time service sharing and management on terrestrial , aerial , or aquatic pathways or theatres of operations . in addition to general-purpose avcs , we also envision the emergence of specialized avcs such as mobile analytics laboratories . furthermore , we envision that the integration of avcs with ubiquitous smart infrastructures story_separator_special_tag recent improvements in vehicular ad hoc networks are accelerating the realization of the intelligent transportation system ( its ) , which not only provides road safety and driving efficiency , but also enables infotainment services . since data dissemination plays an important part in its , recent studies have identified caching as a promising way to promote the efficiency of data dissemination against rapid variation of network topology . in this paper , we focus on the scenario of roadside unit ( rsu ) caching , where multiple content providers ( cps ) aim to improve the data dissemination of their own contents by utilizing the storages of rsus . to deal with the competition among multiple cps for limited caching facilities , we propose a multi-object auction-based solution , which is sub-optimal but efficient to carry out . a caching-specific handoff decision mechanism is also adopted to take advantage of the overlap of rsus . simulation results show that our solution leads to a satisfactory outcome . story_separator_special_tag
wireless vehicular communication has the potential to enable a host of new applications , the most important of which are a class of safety applications that can prevent collisions and save thousands of lives . the automotive industry is working to develop the dedicated short-range communication ( dsrc ) technology , for use in vehicle-to-vehicle and vehicle-to-roadside communication . the effectiveness of this technology is highly dependent on cooperative standards for interoperability . this paper explains the content and status of the dsrc standards being developed for deployment in the united states . included in the discussion are the ieee 802.11p amendment for wireless access in vehicular environments ( wave ) , the ieee 1609.2 , 1609.3 , and 1609.4 standards for security , network services and multi-channel operation , the sae j2735 message set dictionary , and the emerging sae j2945.1 communication minimum performance requirements standard . the paper shows how these standards fit together to provide a comprehensive solution for dsrc . most of the key standards are either recently published or expected to be completed in the coming year . a reader will gain a thorough understanding of dsrc technology for vehicular communication , including insights into story_separator_special_tag integrating the various embedded devices and systems in our environment enables an internet of things ( iot ) for a smart city . the iot will generate a tremendous amount of data that can be leveraged for safety , efficiency , and infotainment applications and services for city residents . the management of this voluminous data through its lifecycle is fundamental to the realization of smart cities . therefore , in contrast to existing surveys on smart cities , we provide a data-centric perspective , describing the fundamental data management techniques employed to ensure consistency , interoperability , granularity , and reusability of the data generated by the underlying iot for smart cities . essentially , the data lifecycle in a smart city is dependent on tightly coupled data management with cross-cutting layers of data security and privacy , and supporting infrastructure . therefore , we further identify techniques employed for data security and privacy , and discuss the networking and computing technologies that enable smart cities . we highlight the achievements in realizing various aspects of smart cities , present the lessons learned , and identify limitations and research challenges . story_separator_special_tag mobile-edge computation offloading ( meco ) offloads intensive mobile computation to clouds located at the edges of cellular networks . thereby , meco is envisioned as a promising technique for prolonging the battery lives and enhancing the computation capacities of mobiles . in this paper , we study resource allocation for a multiuser meco system based on time-division multiple access ( tdma ) and orthogonal frequency-division multiple access ( ofdma ) .
first , for the tdma meco system with infinite or finite computation capacity , the optimal resource allocation is formulated as a convex optimization problem for minimizing the weighted sum mobile energy consumption under the constraint on computation latency . the optimal policy is proved to have a threshold-based structure with respect to a derived offloading priority function , which yields priorities for users according to their channel gains and local computing energy consumption . as a result , users with priorities above and below a given threshold perform complete and minimum offloading , respectively . moreover , for the cloud with finite capacity , a sub-optimal resource-allocation algorithm is proposed to reduce the computation complexity for computing the threshold . next , we consider the ofdma meco story_separator_special_tag recent advances in networking , caching and computing have significant impacts on the developments of vehicular networks . nevertheless , these important enabling technologies have traditionally been studied separately in the existing works on vehicular networks . in this paper , we propose an integrated framework that can enable dynamic orchestration of networking , caching and computing resources to improve the performance of next generation vehicular networks . we formulate the resource allocation strategy in this framework as a joint optimization problem . the complexity of the system is very high when we jointly consider these three technologies . therefore , we propose a novel deep reinforcement learning approach in this paper . simulation results are presented to show the effectiveness of the proposed scheme . story_separator_special_tag distributed reputation systems can be used to foster cooperation between nodes in decentralized and self-managed systems due to the nonexistence of a central entity . in this paper , a robust and distributed reputation system for delay-tolerant networks ( repsys ) is proposed . repsys is robust because despite taking into account first- and second-hand information , it is resilient against false accusations and praise , and distributed , as the decision to interact with another node depends entirely on each node . simulation results show that the system is capable , while evaluating each node 's participation in the network , to detect on the fly nodes that do not accept messages from other nodes and that disseminate false information even while colluding with others , and while evaluating how honest is each node in the reputation system , to classify correctly nodes in most cases . story_separator_special_tag in vehicular networks , mobile edge computing ( mec ) is applied to meet the offloading demand from vehicles . however , the mobility of vehicles may increase the offloading delay and even reduce the success rate of offloading , because vehicles may access another road side unit ( rsu ) before finishing offloading . therefore , an offloading algorithm with low time complexity is required to make the offloading decision quickly . in this paper , we put forward an efficient offloading algorithm based on support vector machine ( svmo ) to satisfy the fast offloading demand in vehicular networks . the algorithm can segment a huge task into several sub-tasks through a weight allocation method according to available resources of mec servers . then each sub-task is decided whether it should be offloaded or executed locally based on svms . 
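the tdma meco result above proves that the optimal policy has a threshold structure with respect to an offloading priority function of channel gain and local computing energy . the sketch below only illustrates the shape of such a policy ; the product-form priority and the numbers are stand-in assumptions , not the derived function .

# illustrative threshold policy in the spirit of the tdma meco result above:
# each user gets an offloading priority from its channel gain and local
# computing energy, and users above/below the threshold offload fully/minimally.
# the priority function here is an assumed stand-in, not the paper's.

def offload_decisions(users, threshold):
    """users: list of (channel_gain, local_energy_per_bit). returns fractions."""
    decisions = []
    for gain, local_energy in users:
        # assumed priority: offloading is more attractive with a better channel
        # and with costlier local computation
        priority = gain * local_energy
        decisions.append(1.0 if priority > threshold else 0.0)
    return decisions

users = [(0.9, 2.0), (0.2, 1.0), (0.6, 1.5)]
print(offload_decisions(users, threshold=0.8))  # [1.0, 0.0, 1.0]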
as the vehicle moves through several mec servers , offloaded sub-tasks are allocated to them in order . each server ensures the sub-task can be processed and returned in time . our proposed algorithm generates training data through a decision tree . the simulation results show that the svmo algorithm has a high decision accuracy , story_separator_special_tag in this paper , an energy-efficient vehicular edge computing ( vec ) framework is proposed for in-vehicle user equipments ( ues ) with limited battery capacity . firstly , the energy consumption minimization problem is formulated as a joint workload offloading and power control problem , with the explicit consideration of energy consumption and delay models . queuing theory is applied to derive the stochastic traffic models at ues and vec nodes . then , the original np-hard problem is transformed into a convex global consensus problem , which can be decomposed into several parallel subproblems and solved subsequently . next , an alternating direction method of multipliers ( admm ) -based energy-efficient resource allocation algorithm is developed , whose outer loop performs iterations of nonlinear fractional programming , while the inner loop performs iterations of primal and dual variable updates . finally , the relationships between energy consumption and key parameters such as workload offloading portion and transmission power are validated through numerical results . story_separator_special_tag this paper surveys recent literature on vehicular social networks , a particular class of vehicular ad hoc networks characterized by social aspects and features . starting from this pillar , we investigate perspectives on next-generation vehicles under the assumption of social networking for vehicular applications ( i.e. , safety and entertainment applications ) . this paper serves as a starting point on socially inspired vehicles and related applications , as well as communication techniques . vehicular communications can be considered the first social network for automobiles since each driver can share data with other neighbors . for instance , heavy traffic is a common occurrence in some areas on the roads ( e.g. , at intersections , taxi loading/unloading areas , and so on ) ; as a consequence , roads become a popular social place for vehicles to connect to each other . human factors are then involved in vehicular ad hoc networks , not only due to the safety-related applications but also for entertainment purposes . social characteristics and human behavior largely impact vehicular ad hoc networks , and this gives rise to vehicular social networks , which are formed when vehicles story_separator_special_tag the information-centric networking ( icn ) paradigm has gained increasing attention as a solution for boosting content delivery in vehicular network applications . the content-naming oriented search and in-network data caching procedures of icn can improve content delivery by avoiding the need to determine and maintain end-to-end routing paths in vehicular networks , which is challenging due to the mobility of vehicles , and intermittent and short-lived connections between them . however , the use of the icn paradigm in vehicular networks aggravates the broadcast storm problem , and constantly suffers from breaks in the reverse path used for content discovery and delivery .
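the svmo abstract above trains support vector machines on labels generated by a decision tree to decide , per sub-task , between offloading and local execution . a minimal sketch with scikit-learn follows ; the three features and the synthetic labeling rule are assumptions standing in for the paper 's training data .

# illustrative svm offloading classifier in the spirit of svmo above: features
# describe a sub-task and the network state, and the label says whether to
# offload. the features and training data here are synthetic assumptions.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# assumed features: [sub-task size, server load, vehicle speed], all in [0, 1]
X = rng.random((200, 3))
# synthetic labeling rule standing in for the decision-tree-generated labels:
# offload small tasks when the server is lightly loaded
y = ((X[:, 0] < 0.6) & (X[:, 1] < 0.5)).astype(int)

clf = SVC(kernel="rbf", C=1.0).fit(X, y)
print(clf.predict([[0.3, 0.2, 0.5],    # small task, idle server -> offload (1)
                   [0.9, 0.8, 0.5]]))  # big task, busy server -> local (0)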
in this paper , we propose the location-based and information-centric ( loicen ) architecture to improve the content request procedure and reduce the broadcast storm problem in intelligent vehicular networks . in the loicen architecture , vehicles opportunistically obtain the location information of the vehicles that might have the desired content in their caches . this location information is used whenever possible to improve content search and discovery by directing interest packets to the area where the content may be located , mitigating the broadcast storm problem by selecting only the most suitable neighboring vehicle at story_separator_special_tag sumo is an open source traffic simulation package including the simulation application itself as well as supporting tools , mainly for network import and demand modeling . sumo helps to investigate a large variety of research topics , mainly in the context of traffic management and vehicular communications . we describe the current state of the package , its major applications , both by research topic and by example , as well as future developments and extensions . story_separator_special_tag different research communities , ranging from telecommunication to traffic engineering , are working on problems related to vehicular traffic congestion , intelligent transportation systems , and mobility patterns using information collected from a variety of sensors . to test the solutions , the first step is to use a vehicular traffic simulator with an appropriate scenario in order to reproduce realistic mobility patterns . many mobility simulators are available , and the choice is usually made based on the size and type of simulation required , but a common problem is to find a realistic traffic scenario . in order to evaluate and compare new communication protocols for vehicular networks , it is necessary to use a wireless network simulator in combination with a vehicular traffic simulator . this additional step introduces further requirements for the scenario . the aim of this work is to provide a scenario able to meet all the common requirements in terms of size , realism and duration , in order to have a common basis for the evaluations . in the interest of building a realistic scenario , we decided to start from a real city with a standard topology common in mid-size european cities , story_separator_special_tag due to the dynamically changing topology of the internet of vehicles ( iov ) , it is a challenging issue to achieve efficient data dissemination in iov . this paper considers strongly connected iov with a number of heterogeneous vehicular nodes to disseminate information and studies distributed replication-based data dissemination algorithms to improve the performance of data dissemination . accordingly , two data replication algorithms , a deterministic algorithm and a distributed randomised algorithm , are proposed . in the proposed algorithms , the number of message copies spread in the network is limited and the network will be balanced after a series of average operations among the nodes . the number of communication stages needed for network balance shows the complexity of network convergence as well as network convergence speed . it is proved that the network can achieve a balanced status after a finite number of communication stages .
meanwhile , the upper and lower bounds of the time complexity are derived when the distributed randomised algorithm is applied . detailed mathematical results show that the network can be balanced quickly in complete graph ; thus highly efficient data dissemination can be guaranteed in dense iov . simulation results story_separator_special_tag vehicular ad hoc networks ( vanets ) have emerged as a serious and promising candidate for providing ubiquitous communications both in urban and highway scenarios . consequently , nowadays it is widely believed that vanets will be able to support both safety and non-safety applications . for both classes of applications , since a zero-infrastructure is the typical premise assumed , it is crucial to understand the dynamics of network connectivity when one operates without relying on any telecommunications infrastructure . using the key metrics of interest ( such as link duration , connection duration , and re-healing time ) we provide a comprehensive framework for network connectivity of urban vanets . our study , in addition to extensive simulations based on a new cellular automata model for mobility , also provides a comprehensive analytical framework . this analytical framework leads to closed form results which facilitate physical insight into the impact of key system parameters on network connectivity . the predictions of our analytical framework also shed light on which type of safety and non-safety applications can be supported by urban vanets . story_separator_special_tag in future vehicular networks , to satisfy the ever-increasing capacity requirements , ultrahigh-speed directional millimeter-wave ( mmwave ) communications will be used as vehicle-to-vehicle ( v2v ) links to disseminate large-volume contents . however , the conventional ip-based routing protocol is inefficient for content disseminations in high-mobility and dynamic vehicular environments . furthermore , the vehicle association also has a significant effect on the content dissemination performance . the content segment diversity , defined as the difference of desired content segments between content requesters and content repliers , is a key consideration during vehicle associations . the relative velocity between vehicles , which highly influences the link stability , should also been taken into account . the beam management , such as the beamwidth control , determines the link quality and , therefore , the final content dissemination rate . based on the above observations , to improve the content dissemination performance , we propose an information-centric network ( icn ) -based mmwave vehicular framework together with a decentralized vehicle association algorithm to realize low-latency content disseminations . in the framework , by using the icn protocol , contents are cached and retrieved at the edge of the network , story_separator_special_tag sharing perceptual data with other vehicles enhances the traffic safety of autonomous vehicles because it helps vehicles locate other vehicles and pedestrians in their blind spots . such safety applications require high throughput and short delay , which can not be achieved by conventional microwave vehicular communication systems . therefore , millimeter-wave ( mmwave ) communications are considered to be a key technology for sharing perceptual data because of their wide bandwidth . one of the challenges of data sharing in mmwave communications is broadcasting because narrow-beam directional antennas are used to obtain high gain . 
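the replication-based dissemination algorithms above balance message copies through a series of average operations among nodes . the sketch below shows the core stage : two nodes meet and split their copy counts evenly , so the vector of counts converges toward a balanced state while the total is conserved . the random pairings are an assumption standing in for the contact pattern dictated by vehicle mobility .

# illustrative pairwise-averaging stage for replication-based dissemination:
# at each stage a node splits its message copies evenly with a neighbour.
import random

def balance(copies, stages=50, seed=1):
    random.seed(seed)
    nodes = list(range(len(copies)))
    for _ in range(stages):
        i, j = random.sample(nodes, 2)       # assumed random contact
        avg = (copies[i] + copies[j]) / 2.0  # the "average operation"
        copies[i] = copies[j] = avg
    return copies

print(balance([16.0, 0.0, 0.0, 0.0]))  # a total of 16 copies spreads out evenly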
because many vehicles should share their perceptual data to others within a short time frame in order to enlarge the areas that can be perceived based on shared perceptual data , an efficient scheduling for concurrent transmission that improves spatial reuse is required for perceptual data sharing . this paper proposes a data sharing algorithm that employs a graph-based concurrent transmission scheduling . the proposed algorithm realizes concurrent transmission to improve spatial reuse by designing a rule that is utilized to determine if the two pairs of transmitters and receivers interfere with each other by considering the radio propagation characteristics of narrow-beam antennas . story_separator_special_tag cellular networks are one of the cornerstones of our information-driven society . however , existing cellular systems have been seriously challenged by the explosion of mobile data traffic , the emergence of machine-type communications , and the flourishing of mobile internet services . in this article , we propose concert , a converged edge infrastructure for future cellular communications and mobile computing services . the proposed architecture is constructed based on the concept of control/data ( c/d ) plane decoupling . the data plane includes heterogeneous physical resources such as radio interface equipment , computational resources , and software-defined switches . the control plane jointly coordinates physical resources to present them as virtual resources , over which software-defined services including communications , computing , and management can be deployed in a flexible manner . moreover , we introduce new designs for physical resources placement and task scheduling so that concert can overcome the drawbacks of the existing baseband-up centralization approach and better facilitate innovations in next-generation cellular networks . these advantages are demonstrated with application examples on radio access networks with c/d decoupled air interface , delaysensitive machine-type communications , and realtime mobile cloud gaming . we also discuss some story_separator_special_tag this position paper proposes a novel and integrated architectural model for the design of new 5g-enabled supports , capable of synergically leveraging mobile edge computing ( mec ) and fog computing capabilities together . in particular , we claim the relevance of dynamically distributing monitoring and control intelligence close to sensor/actuator localities in order to reduce latency in the control loop and to enable some forms of at least partial decentralized autonomous control even in absence of ( temporary ) cloud computing availability . this scenario significantly benefits from the possibility of having functions that are dynamically migrated from the global cloud to the local 5g-enhanced edges ( and possibly vice versa ) , in order to best fit the characteristics of the deployment environment and of the supported internet of things ( iot ) applications at provisioning time . among the others , the paper details and discusses the technical challenges associated with i ) the quality-constrained exploitation of container-based virtualized resources at edge nodes and ii ) the quality-constrained integration of iot gateways . the reported use cases help to practically understand the benefits of the proposed integrated architecture and shed light on most relevant and open related story_separator_special_tag this article introduces the follow-me cloud concept and proposes its framework . 
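the mmwave data sharing algorithm above defines a rule for deciding whether two transmitter-receiver pairs interfere and then schedules non-interfering links concurrently . the sketch below shows the scheduling skeleton as greedy slot packing over a conflict graph ; the conflict set here is given directly , whereas the paper derives it from narrow-beam propagation geometry .

# illustrative graph-based concurrent transmission scheduling: links that would
# interfere are joined in a conflict graph, and each time slot greedily packs
# mutually non-conflicting links. the conflict pairs below are assumptions.

def schedule(links, conflicts):
    """links: list of link ids; conflicts: set of frozenset pairs that interfere.
    returns a list of slots, each a list of links transmitting concurrently."""
    slots = []
    for link in links:
        for slot in slots:
            if all(frozenset((link, other)) not in conflicts for other in slot):
                slot.append(link)   # reuse this slot: no interference
                break
        else:
            slots.append([link])    # open a new time slot
    return slots

conflicts = {frozenset(p) for p in [(0, 1), (1, 2), (2, 3)]}
print(schedule([0, 1, 2, 3], conflicts))  # [[0, 2], [1, 3]]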
the proposed framework is aimed at smooth migration of all or only a required portion of an ongoing ip service between a data center and user equipment of a 3gpp mobile network to another optimal dc with no service disruption . the service migration and continuity are supported by replacing ip addressing with service identification . indeed , an fmc service/application is identified , upon establishment , by a session/service id , dynamically changing along with the service being delivered over the session ; it consists of a unique identifier of the ue within the 3gpp mobile network , an identifier of the cloud service , and dynamically changing characteristics of the cloud service . service migration in fmc is triggered by a change in the ip address of the ue due to a change of data anchor gateway in the mobile network , in turn due to ue mobility and/or load balancing . an optimal dc is then selected based on the features of the new data anchor gateway . smooth service migration and continuity are supported thanks to logic installed at the ue and dcs that maps features story_separator_special_tag driven by both safety concerns and commercial interests , vehicular ad hoc networks ( vanets ) have recently received considerable attention . in this paper , we address popular content distribution ( pcd ) in vanets , in which one large popular file is downloaded from a stationary roadside unit ( rsu ) by a group of on-board units ( obus ) driving through an area of interest ( aoi ) along a highway . due to high speeds of vehicles and deep fading of vehicle-to-roadside ( v2r ) channels , some of the vehicles may not finish downloading the entire file but only possess several pieces of it . to successfully send a full copy to each obu , we propose a cooperative approach based on coalition formation games , in which obus exchange their possessed pieces by broadcasting to and receiving from their neighbors . simulation results show that our proposed approach presents a considerable performance improvement relative to the non-cooperative approach , in which the obus broadcast randomly selected pieces to their neighbors as long as the spectrum is detected to be unoccupied . story_separator_special_tag in the connected vehicle ecosystem , a high volume of information-rich and safety-critical data will be exchanged by roadside units and onboard transceivers to improve the driving and traveling experience . however , poor-quality wireless links and the mobility of vehicles highly challenge data delivery . the ip address-centric model of the current internet barely works in such extremely dynamic environments and poorly matches the localized nature of the majority of vehicular communications , which typically target specific road areas ( e.g. , in the proximity of a hazard or a point of interest ) regardless of the identity/address of a single vehicle passing by . therefore , a paradigm shift is advocated from traditional ip-based networking toward the groundbreaking information-centric networking . in this article , we scrutinize the applicability of this paradigm in vehicular environments by reviewing its core functionalities and the related work . the analysis shows that , thanks to features like named content retrieval , innate multicast support , and in-network data caching , information-centric networking is positioned to meet the challenging demands of vehicular networks and their evolution .
interoperability with the standard architectures for vehicular applications along with synergies with emerging computing story_separator_special_tag mobile data offloading is a feasible and cost-effective solution to ease the burden of cellular networks . in the internet of vehicles , however , existing offloading techniques are hardly applicable to the ubiquitous location-dependent services , which impose strict spatiotemporal constraints on content delivery . particularly , the spatiotemporal constraints cause a phenomenon where the delivery deadlines are different even for the vehicles that subscribe to the same content . to this end , we propose a space and time constrained data offloading scheme ( stcdo ) . the scheme maintains a probability-based contact graph to represent the near-term transmission opportunities between vehicles . furthermore , a dynamic structure called the offloading tree is introduced to evaluate the influence of each vehicle on opportunistic dissemination . finally , the scheme uses a greedy algorithm to effectively select appropriate vehicles as offloading seeds . we perform extensive experiments based on the real-world map-driven movement model in the one simulator . the experimental results show that the proposed scheme largely offloads the overloaded cellular networks while satisfying the spatiotemporal constraints . story_separator_special_tag mobile edge computing ( mec ) offers a new paradigm to improve vehicular services and augment the capabilities of vehicles . in this paper , to reduce the latency of the computation offloading of vehicles , we study the multiple vehicles computation offloading problem in vehicular edge networks . we formulate the problem as a multi-user computation offloading game , prove the existence of a nash equilibrium ( ne ) of the game , and propose a distributed computation offloading algorithm to compute the equilibrium . we analyze the price of anarchy of the game and evaluate the performance of the algorithm using extensive simulations . numerical results show that the proposed algorithm can greatly reduce the computation overhead of vehicles . story_separator_special_tag with the rapid advances in vehicular technologies and the ever-increasing demands on mobile multimedia services , vehicular networks play a crucial role in intelligent transport systems by providing resilient connections among vehicles and users . meanwhile , a huge number of parked vehicles may have abundant and underutilized resources in the form of computation , communication and storage . in this paper , we propose a vehicular edge computing ( vec ) caching scheme in which content providers ( cps ) collaboratively cache popular contents in the storage of parked vehicles located in multiple parking lots . the proposed vec caching scheme extends the data center capability from the core to the edge of the networks . as a result , the duplicate transmissions from remote servers can be removed and the total transmission latency can be significantly reduced . in order to minimize the average latency to mobile users , we present a content placement algorithm based on an iterative ascending price auction . numerical results show that the proposed caching scheme achieves a performance gain of up to 24 % in terms of average latency , compared to the widely-used scheme with most-popular caching . story_separator_special_tag the emergence of the internet of vehicles enables comfortable driving experiences and content-rich multimedia services for in-vehicle users .
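the vec caching scheme above places contents in parked-vehicle storage via an iterative ascending price auction . a minimal sketch of that mechanism follows : the per-slot price rises while demand exceeds the cache capacity , and the providers still bidding at the clearing price win the slots . the linear valuations and step size are assumptions for the example .

# illustrative ascending-price auction for cache placement: content providers
# demand cache slots at a parking lot while the per-slot price rises until
# demand no longer exceeds capacity. valuations here are toy assumptions.

def ascending_auction(valuations, capacity, step=0.1):
    """valuations[cp] = value of one cached slot for that content provider;
    each cp keeps bidding while its value exceeds the current price."""
    price = 0.0
    while True:
        demand = [cp for cp, v in enumerate(valuations) if v > price]
        if len(demand) <= capacity:
            return price, demand
        price += step   # over-demand: raise the price and re-collect bids

price, winners = ascending_auction([1.2, 0.7, 2.5, 0.9], capacity=2)
print(price, winners)   # the two highest-value providers win the slots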
the vehicular network provides specific scenario-centric content delivery services involving data on vehicle status , user behaviors , and environmental features . in this article , we focus on vehicular content delivery from a big data perspective . after a comprehensive review of state-of-the-art works , we elaborate on the potential value of big data in vehicular information and content services by introducing several typical application scenarios . according to the data characteristics , we classify the vehicular data into three categories , that is , location-centric , user-centric , and vehicle-centric , and then illustrate an implementation of big data collection and analysis . a real-world big data application in social-based vehicular networks is presented , and simulation results show that the big-data-enabled content delivery strategy can obtain a gain in user satisfaction with the delivered contents compared to the case without consideration of social big data . finally , we conclude the article with several future research topics . story_separator_special_tag with the rapid advances in vehicular technologies and social multimedia applications , vsns have emerged and gained significant attention from both industry and academia . however , due to the low communication ability between vehicles , heavy network traffic load , and limited storage capacity , vsns face the challenge of improving the performance of content delivery to provide a pleasant and safe driving experience . therefore , in this article , we first present a novel framework to deliver content in vsns with d2d communication . in the proposed framework , moving vehicles can exchange content directly with each other via d2d communication . all contents are managed in a content-centric mode , where moving vehicles can send their interests to obtain content with naming information , resulting in a reduction of network traffic load . based on the d2d communication , parked vehicles around the street can form vehicular social communities with the moving vehicles passing along the road , where the storage capacity of vsns can be increased by using the contents in parked vehicles . then we present the detailed process of content delivery in vsns including interest sending , content distribution , and content story_separator_special_tag one of the major goals of the 5g technology roadmap is to create disruptive innovation for the efficient use of the radio spectrum to enable rapid access to bandwidth-intensive multimedia services over wireless networks . the biggest challenge toward this goal lies in the difficulty in exploiting the multicast nature of the wireless channel in the presence of wireless users that rarely access the same content at the same time . recently , the combined use of wireless edge caching and coded multicasting has been shown to be a promising approach to simultaneously serve multiple unicast demands via coded multicast transmissions , leading to order-of-magnitude bandwidth efficiency gains . however , a crucial open question is how these theoretically proven throughput gains translate in the context of a practical implementation that accounts for all the required coding and protocol overheads . in this article , we first provide an overview of the emerging caching-aided coded multicast technique , including state-of-the-art schemes and their theoretical performance .
we then focus on the most competitive scheme proposed to date and describe a fully working prototype implementation in cortexlab , one of the few experimental facilities where wireless multiuser communication scenarios can story_separator_special_tag mobile devices today are constantly generating and consuming a tremendous amount of content on the internet . caching of such `` massive '' data is beyond the capacity of existing cellular networks in terms of both cost and bandwidth due to its connection-centric nature . the increasing demand for content poses fundamental questions like where , what , and how to cache and retrieve cached content . leveraging the shift toward a content-centric networking paradigm , we propose to cache content close to the mobile user to avoid wasting resources and decrease access delays . therefore , we present saving , a socially aware vehicular information-centric networking system for content storage and sharing over vehicles using their computing , caching , and communication ( 3cs ) capabilities . the encapsulated 3cs are exploited first to identify the socially important candidate vehicles for caching in the fleet of vehicles . to achieve this , we propose a novel vehicle ranking system allowing a smart vehicle to autonomously `` compute '' its eligibility , addressing the question of where to cache . the identified vehicles then collaborate to efficiently `` cache '' content between them based on the content popularity story_separator_special_tag with vehicular mobile communication becoming an everyday requirement and an ever-increasing number of services available , it is clear that vehicular networks require more efficient management . in this paper we discuss service orientation as an architectural model for information centric networking ( icn ) vanets . we discuss the limitations faced by vehicles and propose structuring communication as a coordinated service-centric network . intermittent connectivity and end-to-end network delay are amongst the issues tackled by this work as we envision our network model . additionally , we perform a set of simulations to exemplify the benefits of service coordination . story_separator_special_tag content dissemination , in particular , small-volume localized content dissemination , represents a killer application in vehicular networks , such as advertising distribution and road traffic alerts . the dissemination of contents in vehicular networks typically relies on the roadside infrastructure and moving vehicles to relay and propagate contents . due to intrinsic challenges posed by the features of vehicles ( mobility , selfishness , and routes ) and the limited communication ability of infrastructure , efficiently motivating vehicles to join the content dissemination process and appropriately selecting the relay vehicles to satisfy different transmission requirements is a challenging task . this paper develops a novel edge-computing-based content dissemination framework to address the issue , composed of two phases . in the first phase , the contents are uploaded to an edge computing device ( ecd ) , which is an edge caching and communication infrastructure deployed by the content provider . by jointly considering the selfishness and the transmission capability of vehicles , a two-stage relay selection algorithm is designed to help the ecd selectively deliver the content through vehicle-to-infrastructure ( v2i ) communications to satisfy its requirements .
in the second phase , the vehicles selected by story_separator_special_tag due to the expanding scale of vehicles and the new demands of multimedia services , current vehicular networks face challenges to increase capacity , support mobility , and improve qoe . an innovative design of next generation vehicular networks based on the content-centric architecture has been advocated recently . however , the details of the framework and related algorithms have not been sufficiently studied . in this article , we present a novel framework of a content-centric vehicular network ( ccvn ) . by introducing a content-centric unit , contents exchanged between vehicles can be managed based on their naming information . vehicles can send interests to obtain wanted contents instead of sending conventional information requests . then we present an integrated algorithm to deliver contents to vehicles with the help of content-centric units . contents can be stored according to their priorities determined by vehicle density and content popularity . pending interests are updated based on the analysis of transmission ratio and network topology . the location of a content-centric unit to provide content during the moving of vehicles is determined by the forwarding information . finally , simulation experiments are carried out to show the efficiency of the story_separator_special_tag vehicular content networks ( vcns ) , which distribute medium-volume contents to vehicles in a fully distributed manner , represent the key enabling technology of vehicular infotainment applications . in vcns , the road-side units ( rsus ) cache replicas of contents on the edge of networks to facilitate the timely content delivery to driving-through vehicles when requested . however , due to the limited storage at rsus and the soaring content size for distribution , rsus can only selectively cache content replicas . the edge caching scheme in rsus , therefore , becomes a fundamental issue in vcns . this paper addresses the issue by developing an edge caching scheme in rsus . specifically , we first analyze the features of vehicular content requests based on the content access pattern , vehicle 's velocity , and road traffic density . a model is then proposed to determine whether and where to obtain the replica of content when the moving vehicle requests it . after this , a cross-entropy-based dynamic content caching scheme is proposed accordingly to cache the contents at the edge of vcns based on the requests of vehicles and the cooperation among rsus . finally , the performance story_separator_special_tag driven by the evolutionary development of the automobile industry and cellular technologies , dependable vehicular connectivity has become essential to realize future intelligent transportation systems ( its ) . in this paper , we investigate how to achieve dependable content distribution in device-to-device ( d2d ) -based cooperative vehicular networks by combining big data-based vehicle trajectory prediction with coalition formation game-based resource allocation . first , vehicle trajectory is predicted based on global positioning system and geographic information system data , which is critical for finding reliable and long-lasting vehicle connections . then , the determination of content distribution groups with different lifetimes is formulated as a coalition formation game .
we model the utility function based on the minimization of average network delay , which is transferable to the individual payoff of each coalition member according to its contribution . the merge and split process is implemented iteratively based on preference relations , and the final partition is proved to converge to a nash-stable equilibrium . finally , we evaluate the proposed algorithm based on a real-world map and realistic vehicular traffic . numerical results demonstrate that the proposed algorithm can achieve superior performance in terms of average network delay and story_separator_special_tag motivated by message delivery in vehicular ad hoc networks , we study distributed data replication algorithms for information delivery in a special completely connected network . to improve the efficiency of data dissemination , the number of message copies that can be spread is controlled and a distributed randomized data replication algorithm is proposed . the key idea is to let the data carrier distribute the data dissemination tasks to multiple nodes to speed up the dissemination process . we show how the network converges and prove that the network can enter into a balanced status in a small number of stages . most of the theoretical results described in this paper concern the complexity of network convergence . simulation results show that the proposed algorithm can disseminate data to a specific area with low delay . story_separator_special_tag efficient content distribution is a critical challenge in vehicular networks ( vanets ) . this is due to the characteristics of vehicular networks , such as high mobility , dynamic topologies , short-lived links and intermittent connectivity between vehicles . recently , information-centric networking ( icn ) has been proposed for vanet scenarios to improve content delivery for infotainment applications . however , icn in vanets suffers from the interest transmission broadcast problem , which results in a waste of resources and diminishes the performance of vanets ' applications . in this paper , we propose the link stability-based interest forwarding for content request ( lisic ) protocol , in order to tackle the interest broadcast storm problem during a content search in information-centric vanets . the proposed protocol controls interest transmission by prioritizing neighboring vehicles with more stable links with the current sender . simulation results show that the proposed protocol improves the content delivery rate by 40 % while decreasing the interest packet transmissions by 26 % , in a scenario with a low number of content producers in the network . story_separator_special_tag recently , millimeter-wave ( mmwave ) bands have been postulated as a means to accommodate the foreseen extreme bandwidth demands in vehicular communications , which result from the dissemination of sensory data to nearby vehicles for enhanced environmental awareness and improved safety level . however , the literature is particularly scarce in regards to principled resource allocation schemes that deal with the challenging radio conditions posed by the high mobility of vehicular scenarios . in this paper , we propose a novel framework that blends together matching theory and swarm intelligence to dynamically and efficiently pair vehicles and optimize both transmission and reception beamwidths . this is done by jointly considering channel state information and queue state information when establishing vehicle-to-vehicle ( v2v ) links .
to validate the proposed framework , simulation results are presented and discussed , where the throughput performance as well as the latency/reliability tradeoffs of the proposed approach are assessed and compared with several baseline approaches recently proposed in the literature . the results obtained in this paper show performance gains of 25 % in reliability and delay for ultra-dense vehicular scenarios with 50 % more active v2v links than the baselines . these results story_separator_special_tag
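the coalition formation approach in the content-distribution abstract above pairs a delay-based transferable utility with an iterative merge and split process . a minimal sketch of that iteration is given below ; the synergy numbers , the utility model and every identifier are illustrative assumptions , not the cited scheme .

# toy merge-and-split coalition formation for grouping vehicles ; all numbers
# and the utility model are invented placeholders , not the cited algorithm .
from itertools import combinations
import random

random.seed(0)
vehicles = list(range(6))
# pairwise synergy : positive if two vehicles' predicted trajectories overlap ,
# negative if they diverge ( random placeholders here )
syn = {frozenset(p): random.uniform(-1.0, 1.0) for p in combinations(vehicles, 2)}
COST = 0.2  # fixed coordination cost per coalition

def utility(s):
    # transferable utility : internal synergy minus a per-coalition cost
    return sum(syn[frozenset(p)] for p in combinations(sorted(s), 2)) - COST

def merge_and_split(partition):
    changed = True
    while changed:
        changed = False
        # merge : fuse two coalitions when the fused utility strictly dominates
        for a, b in combinations(partition, 2):
            if utility(a | b) > utility(a) + utility(b):
                partition = [c for c in partition if c not in (a, b)] + [a | b]
                changed = True
                break
        if changed:
            continue
        # split : break a coalition in two when that strictly improves utility
        done = False
        for c in partition:
            for k in range(1, len(c)):
                for left in combinations(sorted(c), k):
                    l, r = frozenset(left), c - frozenset(left)
                    if utility(l) + utility(r) > utility(c):
                        partition = [d for d in partition if d != c] + [l, r]
                        changed = done = True
                        break
                if done:
                    break
            if done:
                break
    return partition  # a merge-and-split fixed point of this toy model

print(merge_and_split([frozenset([v]) for v in vehicles]))

since every accepted merge or split strictly increases the total utility and there are finitely many partitions , the loop terminates at a partition that is stable against single merges and splits , which is the role the nash-stable equilibrium plays in the abstract .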
the notion of mutation plays crucial roles in representation theory of algebras . two kinds of mutation are well-known : tilting/silting mutation and quiver-mutation . in this paper , we focus on tilting mutation for symmetric algebras . introducing mutation of sb quivers , we explicitly give a combinatorial description of tilting mutation of symmetric special biserial algebras . as an application , we generalize rickard 's star theorem . we also introduce flip of brauer graphs and apply our results to brauer graph algebras . story_separator_special_tag we associate an algebra a ( γ ) to a triangulation γ of a surface s with a set of boundary marking points . this algebra a ( γ ) is gentle and gorenstein of dimension one . we also prove that a ( γ ) is cluster-tilted if and only if it is cluster-tilted of type a or ã , or if and only if the surface s is a disc or an annulus . moreover all cluster-tilted algebras of type a or ã are obtained in this way . story_separator_special_tag abstract let λ be a finite dimensional indecomposable weakly symmetric algebra over an algebraically closed field k , satisfying j ^3 ( λ ) = 0 . let s 1 , ... , s r be representatives of the isomorphism classes of simple λ-modules , and let e be the r × r matrix whose ( i , j ) entry is dim k ext 1 ( s i , s j ) . if there exists an eigenvalue μ of e satisfying | μ | > 2 then the minimal resolution of each non-projective finitely generated λ-module has exponential growth , with radius of convergence ( μ − √ ( μ^2 − 4 ) ) / 2 . on the other hand , if all eigenvalues μ of e satisfy | μ | ≤ 2 then the dimensions of the modules in the minimal projective resolution of each finitely generated λ-module are either bounded or grow linearly . in this case , we classify the possibilities for the matrix e. the proof is an application of the perron frobenius theorem . story_separator_special_tag in this paper we show that the fields of rational invariants over the irreducible components of the module varieties for an acyclic gentle algebra are purely transcendental extensions . along the way , we exhibit for such fields of rational invariants a transcendence basis in terms of schofield determinantal semi-invariants . we also show that the moduli space of modules over a regular irreducible component is just a product of projective spaces . story_separator_special_tag by an algebra we mean an associative k-algebra with identity , where k is an algebraically closed field . all algebras are assumed to be finite dimensional over k ( except the path algebra kq ) . an algebra λ is said to be biserial if every indecomposable projective left or right λ-module p contains uniserial submodules u and v such that u + v = rad ( p ) and u ∩ v is either zero or simple . ( recall that a module is uniserial if it has a unique composition series , and the radical rad ( m ) of a module m is the intersection of its maximal submodules . ) biserial algebras arose as a natural generalization of nakayama 's generalized uniserial algebras [ 2 ] . the condition first appeared in the work of tachikawa [ 6 , proposition 2.7 ] , and it was formalized by fuller [ 1 ] . examples include blocks of group algebras with cyclic defect group ; finite dimensional quotients of the algebras ( 1 ) - ( 4 ) and ( 7 ) - ( 9 ) in ringel 's list of tame local algebras [ 4 ] ; the special biserial algebras of [ story_separator_special_tag
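the eigenvalue dichotomy in the weakly symmetric radical-cube-zero abstract above is easy to probe numerically : compute the spectral radius of the ext-matrix e and compare it with 2 . the sketch below uses a made-up 2 × 2 matrix purely for illustration ; it is not taken from the paper .

# numerical illustration of the eigenvalue dichotomy quoted above : for the
# ext-matrix e of a weakly symmetric algebra with j^3 = 0 , an eigenvalue with
# |mu| > 2 signals exponential growth of minimal projective resolutions .
import numpy as np

E = np.array([[1, 2],
              [2, 1]])          # e[i][j] = dim ext^1(s_i , s_j) ; made-up example
eigvals = np.linalg.eigvals(E)
mu = max(abs(eigvals))          # spectral radius , real by perron-frobenius
if mu > 2:
    radius = (mu - np.sqrt(mu**2 - 4)) / 2   # radius of convergence quoted above
    print(f"exponential growth , radius of convergence {radius:.4f}")
else:
    print("betti numbers bounded or linearly growing")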
this chapter gives the theory of blocks with cyclic defect groups . we start by describing the brauer tree , a combinatorial object that encodes first the decomposition matrix of the block , then ext 1 between simple modules in the block , and indeed the morita equivalence type of the block ( but not the source algebra ) . we then construct brauer tree algebras , which are basic algebras that are morita equivalent to blocks with cyclic defect groups . after describing the indecomposable modules for such a block , we turn to the classification of the possible brauer trees , using the classification of finite simple groups . story_separator_special_tag abstract this paper determines much of the structure of blocks whose defect group is dihedral , semidihedral or generalised quaternion and which have either one or two simple modular representations ( brauer characters ) . it is shown that in the above circumstances there is only a very small number of possibilities for the cartan matrix , decomposition matrix and the category of modular representations once the defect group is specified . certain character-theoretic results of brauer and olsson are complemented here by a classification of all symmetric algebras with the appropriate representation theory . it is likely that a similar approach is available in the case of blocks with such defect groups and three ( the maximum number possible ) brauer characters . in view of the length of such a project this case is being published first . although the structure theorem can be expressed analogously to that for cyclic defect groups , the inductive proof used there apparently is not applicable here . story_separator_special_tag we investigate the existence of auslander-reiten components of euclidean type for special biserial self-injective algebras and for blocks of group algebras . in particular we obtain a complete description of stable auslander-reiten quivers for the tame self-injective algebras considered here story_separator_special_tag one of our main results is a classification of all the weakly symmetric radical cube zero finite dimensional algebras over an algebraically closed field having a theory of support via the hochschild cohomology ring satisfying dade 's lemma . along the way we give a characterization of when a finite dimensional koszul algebra has such a theory of support in terms of the graded centre of the koszul dual . story_separator_special_tag abstract in this paper we study multiserial and special multiserial algebras . these algebras are a natural generalization of biserial and special biserial algebras to algebras of wild representation type . we define a module to be multiserial if its radical is the sum of uniserial modules whose pairwise intersection is either 0 or a simple module . we show that all finitely generated modules over a special multiserial algebra are multiserial . in particular , this implies that , in analogy to special biserial algebras being biserial , special multiserial algebras are multiserial . we then show that the class of symmetric special multiserial algebras coincides with the class of brauer configuration algebras , where the latter are a generalization of brauer graph algebras . we end by showing that any symmetric algebra with radical cube zero is special multiserial and so , in particular , it is a brauer configuration algebra . story_separator_special_tag in this paper we give a new definition of symmetric special multiserial algebras in terms of defining cycles .
as a consequence , we show that every special multiserial algebra is a quotient of a symmetric special multiserial algebra . story_separator_special_tag we develop a theory of group actions and coverings on brauer graphs that parallels the theory of group actions and coverings of algebras . in particular , we show that any brauer graph can be covered by a brauer graph that has multiplicity function identically one , no loops , and no multiple edges . furthermore , we classify the coverings of brauer graph algebras that are again brauer graph algebras . story_separator_special_tag in this paper we establish a connection between ribbon graphs and brauer graphs . as a result , we show that a compact oriented surface with marked points gives rise to a unique brauer graph algebra up to derived equivalence . in the case of a disc with marked points we show that a dual construction in terms of dual graphs exists . the rotation of a diagonal in an m-angulation gives rise to a whitehead move in the dual graph , and we explicitly construct a tilting complex on the related brauer graph algebras reflecting this geometrical move . story_separator_special_tag the aim of this paper is to establish a connection between the standard koszul and the quasi-koszul property in the class of self-injective special biserial algebras . furthermore , we give a characterization of standard koszul symmetric special biserial algebras in terms of quivers and relations . story_separator_special_tag a note on a certain class of tilted algebras by l. angeleri-hugel and f. u. coelho derived equivalence and stable equivalence of repetitions of algebras of finite global dimension by h. asashiba right triangulated categories with right semi-equivalences by i. assem , a. beligiannis , and n. marmaridis the existence of short exact sequences with some of the terms in given subcategories by o. bakke induced boundary maps for the cohomology of monomial and auslander algebras by m. bardzell and e. n. marcos the repetitive partition of the repetitive category of a tubular algebra by m. barot representations of finitely represented dyadic sets by a. v. roiter , k. i. belousov , and l. a. nazarova relative cotilting theory and almost complete cotilting modules by a. b. buan and o. solberg hochschild cohomology algebra of radical square zero algebras by c. cibils equivalences represented by faithful non-tilting *-modules by r. colpi and g. d'este non reduced components of alg_n by t. dana-picard and m. schaps extensionless modules of infinite rank by a. p. dean and f. okoh a quiver description of hereditary categories and its application to the first weyl algebra by b. deng story_separator_special_tag we show that two well-studied classes of tame algebras coincide : namely , the class of symmetric special biserial algebras coincides with the class of brauer graph algebras . we then explore the connection between gentle algebras and symmetric special biserial algebras by explicitly determining the trivial extension of a gentle algebra by its minimal injective co-generator . this is a symmetric special biserial algebra and hence a brauer graph algebra of which we explicitly give the brauer graph . we further show that a brauer graph algebra gives rise , via admissible cuts , to many gentle algebras and that the trivial extension of a gentle algebra obtained via an admissible cut is the original brauer graph algebra .
as a consequence we prove that the trivial extension of a jacobian algebra of an ideal triangulation of a riemann surface with marked points in the boundary is isomorphic to the brauer graph algebra with brauer graph given by the arcs of the triangulation . story_separator_special_tag an algebra a is called biserial ( cf . [ 9 ] ) if the radical of any indecomposable nonuniserial projective , left or right , a-module is a sum of two uniserial submodules whose intersection is simple or zero . examples for biserial algebras are nakayama algebras ( i.e . generalized uniserial algebras [ 22 ] ) , blocks of group algebras with cyclic defect group [ 17 ] , [ 18 ] , generalized tilted algebras [ 1 ] , and algebras whose auslander-reiten sequences have at most two nonprojective summands in their middle term [ 4 ] . story_separator_special_tag
abstract the nemo-3 tracking detector is located in the frejus underground laboratory . it was designed to study double beta decay in a number of different isotopes . presented here are the experimental half-life limits on the double beta decay process for the isotopes 100mo and 82se for different majoron emission modes and limits on the effective neutrino-majoron coupling constants . in particular , new limits on ordinary majoron ( spectral index 1 ) decay of 100mo ( t 1/2 > 2.7 × 10^22 yr ) and 82se ( t 1/2 > 1.5 × 10^22 yr ) have been obtained . the corresponding bounds on the majoron-neutrino coupling constant are ⟨g_ee⟩ < ( 0.4 - 1.8 ) × 10^-4 and < ( 0.66 - 1.9 ) × 10^-4 . story_separator_special_tag the full data set of the nemo-3 experiment has been used to measure the half-life of the two-neutrino double beta decay of 100mo to the ground state of 100ru , t 1/2 = [ 6.81 ± 0.01 ( stat ) +0.38 -0.40 ( syst ) ] × 10^18 yr . the two-electron energy sum , single electron energy spectra and distribution of the angle between the electrons are presented with an unprecedented statistics of 5 × 10^5 events and a signal-to-background ratio of ~ 80 . clear evidence for the single state dominance model is found for this nuclear transition . limits on majoron emitting neutrinoless double beta decay modes with spectral indices of n = 2 , 3 , 7 , as well as constraints on lorentz invariance violation and on the bosonic neutrino contribution to the two-neutrino double beta decay mode are story_separator_special_tag abstract we assume that the pauli exclusion principle is violated for neutrinos , and thus , neutrinos obey at least partly the bose-einstein statistics . the parameter sin^2 χ is introduced that characterizes the bosonic ( symmetric ) fraction of the neutrino wave function . consequences of the violation of the exclusion principle for the two-neutrino double beta decays ( 2νββ decays ) are considered . this violation strongly changes the rates of the decays and modifies the energy and angular distributions of the emitted electrons . pure bosonic neutrinos are excluded by the present data . in the case of partly bosonic ( or mixed-statistics ) neutrinos the analysis of the existing data allows to put the conservative upper bound sin^2 χ ≤ 0.6 . the sensitivity of future measurements of the 2νββ decay to sin^2 χ is evaluated . story_separator_special_tag a search for lorentz- and cpt-violating signals in the double beta decay spectrum of 136xe has been performed using an exposure of 100 kg yr with the exo-200 detector . no significant evidence of the spectral modification due to isotropic lorentz violation was found , and a two-sided limit of -2.65 × 10^-5 gev < å^ ( 3 ) _ ( of ) < 7.60 × 10^-6 gev ( 90 % c.l . ) is placed on the relevant coefficient within the standard-model extension ( sme ) . this is the first experimental study of the effect of the sme-defined oscillation-free and momentum-independent neutrino coupling operator on the double beta decay process . story_separator_special_tag
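the spectral index n quoted in the majoron searches above fixes how fast the summed electron spectrum falls towards the endpoint . schematically ( suppressing the exact lepton phase-space weighting , so this is a sketch rather than the experiments ' fit function ) :

$$ \frac{d\Gamma}{dK} \;\propto\; \mathcal{F}(K)\,(Q-K)^{\,n}, \qquad n = \begin{cases} 5 & \text{standard } 2\nu\beta\beta \\ 1,\,2,\,3,\,7 & \text{majoron-emitting modes} \end{cases} $$

here k is the summed electron kinetic energy , q the decay energy , and f ( k ) a slowly varying phase-space factor ; a smaller n pushes events towards the endpoint , which is what separates ordinary majoron emission ( n = 1 ) from the 2νββ background .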
we present the search for lorentz violation in the double beta decay of 82se with cupid-0 , using an exposure of 9.95 kg × yr . we found no evidence for the searched signal and set a limit on the isotropic component of the lorentz-violating coefficient of å^ ( 3 ) _ ( of ) < 4.1 × 10^-6 gev ( 90 % credible interval ) . this result is obtained with a bayesian analysis of the experimental data and fully includes the systematic uncertainties of the model . this is the first limit on å^ ( 3 ) _ ( of ) obtained with a scintillating bolometer , showing the potential of this technique . story_separator_special_tag motivated by nonzero neutrino masses and the possibility of new physics discovery , a number of experiments search for neutrinoless double beta decay . while hunting for this hypothetical nuclear process , a significant amount of two-neutrino double beta decay data have become available . although these events are regarded and studied mostly as the background of neutrinoless double beta decay , they can also be used to probe physics beyond the standard model . in this letter , we show how the presence of right-handed leptonic currents would affect the energy distribution and angular correlation of the outgoing electrons in two-neutrino double beta decay . consequently , we estimate constraints imposed by currently available data on the existence of right-handed neutrino interactions without having to assume their nature . in this way , our results complement the bounds coming from the nonobservation of neutrinoless double beta decay as they limit also the exotic interactions of dirac neutrinos . we perform a detailed calculation of two-neutrino double beta decay under the presence of exotic ( axial- ) vector currents , and we demonstrate that current experimental searches can be competitive with existing limits . story_separator_special_tag neutrino self-interactions ( νsi ) beyond the standard model are an attractive possibility to soften cosmological constraints on neutrino properties and also to explain the tension in late and early time measurements of the hubble expansion rate . the required strength of νsi to explain the 4σ hubble tension , expressed as a pointlike effective four-fermion coupling , can be as high as 10^9 g_f , where g_f is the fermi constant . in this work , we show that such strong νsi can cause significant effects in two-neutrino double beta decay , leading to an observable enhancement of decay rates and to spectrum distortions . we analyze self-interactions via an effective operator as well as when mediated by a light scalar . data from observed two-neutrino double beta decay are used to constrain νsi , story_separator_special_tag the two-neutrino mode of double-beta decay in 82se has been observed in a time-projection chamber at a half-life of ( 1.1 +0.8 -0.3 ) × 10^20 yr ( 68 % confidence level ) . this result from direct counting confirms the earlier geochemical measurements and helps provide a standard by which to test the double-beta-decay matrix elements of nuclear theory . it is the rarest natural decay process ever observed directly in the laboratory . story_separator_special_tag two-neutrino double electron capture ( 2νecec ) is a second-order weak-interaction process with a predicted half-life that surpasses the age of the universe by many orders of magnitude [ 1 ] .
until now , indications of 2νecec decays have only been seen for two isotopes [ 2-5 ] , 78kr and 130ba , and instruments with very low background levels are needed to detect them directly with high statistical significance [ 6 , 7 ] . the 2νecec half-life is an important observable for nuclear structure models [ 8-14 ] and its measurement represents a meaningful step in the search for neutrinoless double electron capture , the detection of which would establish the majorana nature of the neutrino and would give access to the absolute neutrino mass [ 15-17 ] . here we report the direct observation of 2νecec in 124xe with the xenon1t dark-matter detector . the significance of the signal is 4.4 standard deviations and the corresponding half-life of 1.8 × 10^22 years ( statistical uncertainty , 0.5 × 10^22 years ; systematic uncertainty , 0.1 × 10^22 years ) is the longest measured directly so far . this study demonstrates that the low background and large target mass of xenon-based dark-matter detectors make them well suited for measuring story_separator_special_tag we carried out a comparative study of the signal from the decay of double k-shell vacancy production that follows single k-shell electron capture of 81kr and double k-shell electron capture of 78kr . the radiative decay of the double 1s vacancy state was identified by detecting the triple coincidence of two k x-rays and several auger electrons in the ecec decay , or by detecting two k x-rays and ( auger electrons + ejected k-shell electron ) in the ec decay . the number of k-shell vacancies per k-electron capture , produced as a result of the shake-off process , has been measured for the decay of 81kr . the probability for this decay was found to be p_kk = ( 5.7 ± 0.8 ) × 10^-5 with a systematic error of ( δp_kk ) _syst = ± 0.4 × 10^-5 . story_separator_special_tag all existing positive results on two-neutrino double beta decay and two-neutrino double electron capture in different nuclei have been analyzed . weighted average and recommended half-life values for 48ca , 76ge , 82se , 96zr , 100mo , 100mo - 100ru ( 0+_1 ) , 116cd , 128te , 130te , 136xe , 150nd , 150nd - 150sm ( 0+_1 ) , 238u , 78kr , 124xe and 130ba have been obtained . given the measured half-life values , effective nuclear matrix elements for all these transitions were calculated . story_separator_special_tag abstract all existing positive results on two-neutrino double beta decay in different nuclei were analyzed . using the procedure recommended by the particle data group , weighted average values for half-lives of 48ca , 76ge , 82se , 96zr , 100mo , 100mo - 100ru ( 0+_1 ) , 116cd , 130te , 136xe , 150nd , 150nd - 150sm ( 0+_1 ) and 238u were obtained . existing geochemical data were analyzed and recommended values for half-lives of 128te and 130ba are proposed . given the measured half-life values , nuclear matrix elements were calculated using the latest ( more reliable and precise ) values for the phase space factors . finally , previous results ( prc 81 ( 2010 ) 035501 ) were updated and results for 136xe were added . story_separator_special_tag
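the matrix-element extraction mentioned in the averaging abstracts above rests on the standard factorization of the 2νββ rate into a calculable phase-space factor g and a nuclear matrix element ; schematically ( conventions for absorbing g_a and the electron mass into g or m differ between papers ) :

$$ \left[ T^{2\nu}_{1/2} \right]^{-1} = G^{2\nu}(Q,Z)\,\left| M^{2\nu}_{\mathrm{eff}} \right|^{2} \quad\Longrightarrow\quad \left| M^{2\nu}_{\mathrm{eff}} \right| = \left[ G^{2\nu}\, T^{2\nu}_{1/2} \right]^{-1/2} $$

this is why a recommended half-life value , combined with a tabulated phase-space factor , immediately yields the effective nuclear matrix element quoted for each transition .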
an improved formalism of the two-neutrino double-beta decay ( 2νββ decay ) rate is presented , which takes into account the dependence of the energy denominators on the lepton energies via a taylor expansion . till now , only the leading term in this expansion has been considered . the revised 2νββ decay rate and differential characteristics depend on additional phase-space factors weighted by the ratios of 2νββ decay nuclear matrix elements with different powers of the energy denominator . for nuclei of experimental interest all phase-space factors are calculated by using exact dirac wave functions with finite nuclear size and electron screening . for isotopes with measured 2νββ decay half-life the involved nuclear matrix elements are determined within the quasiparticle random phase approximation with partial isospin restoration . the importance of the correction terms to the 2νββ decay rate due to the taylor expansion is established and the modification of the shape of the single and summed electron energy distributions is discussed . it is found that the improved calculation of the 2νββ decay predicts a slightly suppressed 2νββ decay background to the neutrinoless double-beta story_separator_special_tag
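the taylor-expanded rate described in the abstract above replaces the single phase-space factor by a short series ; a schematic form ( the exact coefficients and higher-order terms of the quoted paper are not reproduced here ) is :

$$ \left[ T^{2\nu}_{1/2} \right]^{-1} \;\simeq\; \left| M^{2\nu}_{GT} \right|^{2} \left( G_{0} + \xi_{31}\, G_{2} + \mathcal{O}(\xi^{2}) \right), \qquad \xi_{31} = \frac{ M^{2\nu}_{GT\text{-}3} }{ M^{2\nu}_{GT} } $$

where m 2ν gt-3 carries an extra power of the energy denominator , so the additional phase-space factors enter weighted by ratios of matrix elements , exactly as stated above .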
the nuclear matrix elements m^0ν of the neutrinoless double beta decay ( 0νββ ) of most nuclei with known 2νββ decay rates are systematically evaluated using the quasiparticle random phase approximation ( qrpa ) and renormalized qrpa ( rqrpa ) . the experimental 2νββ decay rate is used to adjust the most relevant parameter , the strength of the particle-particle interaction . new results confirm that with such a procedure the m^0ν values become essentially independent of the size of the single-particle basis . furthermore , the matrix elements are shown to be also rather stable with respect to the possible quenching of the axial vector strength parametrized by reducing the coupling constant g_a , as well as to the uncertainties of parameters describing the short range nucleon correlations . theoretical arguments in favor of the adopted way of determining the interaction parameters are presented . furthermore , a discussion of other implicit and explicit parameters , inherent to the qrpa method , is presented . comparison is made of the ways these factors are chosen by different story_separator_special_tag we calculate the nuclear matrix elements of the neutrinoless double beta ( 0νββ ) decays of 76ge and 82se for the light-neutrino exchange mechanism . the nuclear wave functions are obtained by using realistic two-body forces within the proton-neutron quasiparticle random-phase approximation ( pnqrpa ) . we include the effects that come from the finite size of a nucleon , from the higher-order terms of nucleonic weak currents , and from the nucleon-nucleon short-range correlations . most importantly , we improve on the presently available calculations by replacing the rudimentary jastrow short-range correlations by the more advanced unitary correlation operator method ( ucom ) . the ucom corrected matrix elements turn out to be notably larger in magnitude than the jastrow corrected ones . this has drastic consequences for the detectability of 0νββ decay in the present and future double beta experiments . story_separator_special_tag the review summarizes much of particle physics and cosmology . using data from previous editions , plus 3,283 new measurements from 899 papers , we list , evaluate , and average measured properties of gauge bosons and the recently discovered higgs boson , leptons , quarks , mesons , and baryons . we summarize searches for hypothetical particles such as heavy neutrinos , supersymmetric and technicolor particles , axions , dark photons , etc . all the particle properties and search limits are listed in summary tables . we also give numerous tables , figures , formulae , and reviews of topics such as supersymmetry , extra dimensions , particle detectors , probability , and statistics . among the 112 reviews are many that are new or heavily revised including those on : dark energy , higgs boson physics , electroweak model , neutrino cross section measurements , monte carlo neutrino generators , top quark , dark matter , dynamical electroweak symmetry breaking , accelerator physics of colliders , high-energy collider parameters , big bang nucleosynthesis , astrophysical constants and cosmological parameters . story_separator_special_tag the nuclear matrix elements that govern the rate of neutrinoless double beta decay must be accurately calculated if experiments are to reach their full potential . theorists have been working on the problem for a long time but have recently stepped up their efforts as ton-scale experiments have begun to look feasible . here we review past and recent work on the matrix elements in a wide variety of nuclear models and discuss work that will be done in the near future . ab initio nuclear-structure theory , which is developing rapidly , holds out hope of more accurate matrix elements with quantifiable error bars . story_separator_special_tag detection of the neutrinoless ββ ( 0νββ ) decay is of high priority in the particle- and neutrino-physics communities . the detectability of this decay mode is strongly influenced by the value of the weak axial-vector coupling constant g_a . the recent nuclear-model analyses of β and ββ decays suggest that the value of g_a could be dramatically quenched , reaching ratios of g_a^free / g_a ≈ 4 , where g_a^free = 1.27 is the free , neutron-decay , value of g_a . the effects of this quenching appear devastating for the sensitivity of the present and future 0ν story_separator_special_tag a complete and improved calculation of phase space factors ( psf ) for 2νββ and 0νββ decay is presented . the calculation makes use of exact dirac wave functions with finite nuclear size and electron screening and includes life-times , single and summed electron spectra , and angular electron correlations . story_separator_special_tag we report an updated and complete list of the phase space factors ( psf ) for the β-β- , β+β+ , β+ec and ecec double beta decay ( dbd ) modes . in the calculation , the coulomb distortion of the electron wave functions is treated by solving numerically the dirac equation with inclusion of the finite nuclear size and electron screening effects . in addition to the previous recent calculations we used a coulomb potential derived from a realistic proton density distribution in the nucleus , developed our own routines with improved precision for solving the dirac equations and integrating the psf expressions , and used q-values reported recently .
in general , we found a good agreement between our psf values and those reported by other authors , especially for the β-β- and β+β+ decay modes and lighter nuclei . however , even in these cases we found several relevant discrepancies ( larger than 10 % ) between our results and those reported in the literature , while for the ec decay modes we found more and larger discrepancies . the possible sources of these discrepancies are discussed . accurate values of the psf are necessary ingredients both for theorists , to improve the dbd lifetime predictions story_separator_special_tag all existing `` positive '' results on two-neutrino double-beta decay in different nuclei were analyzed . using the procedure recommended by the particle data group , weighted average values for half-lives of 48ca , 76ge , 82se , 96zr , 100mo , 100mo - 100ru ( 0+_1 ) , 116cd , 150nd and 238u were obtained . existing geochemical data were analyzed and recommended values for half-lives of 128te and 130te are proposed . we recommend using these results as the most precise and reliable values for half-lives at this moment . story_separator_special_tag all existing positive results on two-neutrino double-beta decay in different nuclei were analyzed . using the procedure recommended by the particle data group , weighted average values for half-lives of 48ca , 76ge , 82se , 96zr , 100mo , 100mo - 100ru ( 0+_1 ) , 116cd , 150nd , 150nd - 150sm ( 0+_1 ) and 238u were obtained . existing geochemical data were analyzed and recommended values for half-lives of 128te , 130te , and 130ba are proposed . we recommend the use of these results as presently the most precise and reliable values for half-lives . story_separator_special_tag all existing positive results on two-neutrino double beta decay and two-neutrino double electron capture in different nuclei were analyzed . using the procedure recommended by the particle data group , weighted average values for half-lives of 48ca , 76ge , 82se , 96zr , 100mo , 100mo - 100ru ( 0+_1 ) , 116cd , 130te , 136xe , 150nd , 150nd - 150sm ( 0+_1 ) , 238u , 78kr and 124xe were obtained . existing geochemical data were analyzed and recommended values for half-lives of 128te and story_separator_special_tag the nemo-3 experiment at the modane underground laboratory has investigated the double-β decay of 48ca . using 5.25 yr of data recorded with a 6.99 g sample of 48ca , approximately 150 double-β decay candidate events have been selected with a signal-to-background ratio greater than 3 . the half-life for the two-neutrino double-β decay of 48ca has been measured to be t 2ν 1/2 = [ 6.4 +0.7 -0.6 ( stat . ) +1.2 -0.9 ( syst . ) ] × 10^19 yr . a search for neutrinoless double-β decay of 48ca yields a null result and a corresponding lower story_separator_special_tag a search for neutrinoless ββ decay processes accompanied by majoron emission has been performed using data collected during phase i of the germanium detector array ( gerda ) experiment at the laboratori nazionali del gran sasso of infn ( italy ) .
processes with spectral indices n = 1 , 2 , 3 , 7 were searched for . no signals were found and lower limits of the order of 10^23 yr on their half-lives were derived , yielding substantially improved results compared to previous experiments with 76ge . a new result for the half-life of the neutrino-accompanied ββ decay of 76ge with significantly reduced uncertainties is also given , resulting in t 2ν 1/2 = ( 1.926 ± 0.094 ) × 10^21 yr . story_separator_special_tag using data from the nemo-3 experiment , we have measured the two-neutrino double beta decay ( 2νββ ) half-life of 82se as t 2ν 1/2 = [ 9.39 ± 0.17 ( stat ) ± 0.58 ( syst ) ] × 10^19 y under the single-state dominance hypothesis for this nuclear transition . the corresponding nuclear matrix element is | m 2ν | = 0.0498 ± 0.0016 . in addition , a search for neutrinoless double beta decay ( 0νββ ) using 0.93 kg of 82se observed for a total of 5.25 y has been conducted and no evidence for a signal has been found . the resulting half-life limit of t 0ν 1/2 > 2.5 × 10^23 story_separator_special_tag we report on the measurement of the two-neutrino double-β decay of 82se performed for the first time with cryogenic calorimeters , in the framework of the cupid-0 experiment . with an exposure of 9.95 kg yr of zn82se , we determine the two-neutrino double-β decay half-life of 82se with an unprecedented precision level , t 2ν 1/2 = [ 8.60 ± 0.03 ( stat ) +0.19 -0.13 ( syst ) ] × 10^19 yr . the very high signal-to-background ratio , along with the detailed reconstruction of the background sources , allowed us to identify the single state dominance as the underlying mechanism of such a process , demonstrating that the higher state dominance hypothesis is disfavored at the level of 5.5 σ . story_separator_special_tag we report the measurement of the two-neutrino double-beta ( 2νββ ) decay of 100mo to the ground state of 100ru using lithium molybdate scintillating bolometers . the detectors were developed for the cupid-mo program and operated at the edelweiss-iii low background facility in the modane underground laboratory . from a total exposure of 42.235 kg × d , the half-life of 100mo is determined to be t 2ν 1/2 = [ 7.12 +0.18 -0.14 ( stat . ) ± 0.10 ( syst . ) ] × 10^18 years . this is the most accurate determination of the 2νββ half-life of 100mo to date . we also confirm , with a statistical significance of > 3σ , that the single-state dominance model of the 2νββ decay of 100mo is favored over story_separator_special_tag the nemo-3 experiment measured the half-life of the 2νββ decay and searched for the 0νββ decay of 116cd .
using 410 g of 116cd installed in the detector with an exposure of 5.26 y , ( 4968 ± 74 ) events corresponding to the 2νββ decay of 116cd to the ground state of 116sn have been observed with a signal to background ratio of about 12 . the half-life of the 2νββ decay has been measured to be t 2ν 1/2 = [ 2.74 ± 0.04 ( stat . ) ± 0.18 ( syst . ) ] × 10^19 y . no events have been observed above the expected background while searching for 0νββ decay . the corresponding limit on the half-life is determined to be t 0ν 1/2 ≥ 1.0 × 10^23 y at the 90 % c.l . , which corresponds to an upper limit on the effective majorana neutrino mass of ⟨m_ν⟩ ≤ 1.4 - 2.5 ev depending on the nuclear matrix elements considered . limits on other mechanisms generating 0νββ decay such as the exchange of r-parity violating supersymmetric particles , right-handed currents and majoron emission are also obtained . story_separator_special_tag the double-beta decay of 116cd has been investigated with the help of radiopure enriched cdwo4 crystal scintillators ( mass of 1.162 kg ) at the gran sasso underground laboratory . the half-life of 116cd relative to the 2νββ decay to the ground state of 116sn was measured with the highest up-to-date accuracy as t 1/2 = ( 2.63 +0.11 -0.12 ) × 10^19 yr . a new improved limit on the 0νββ decay of 116cd to the ground state of 116sn was set as t 1/2 ≥ 2.2 × 10^23 yr at 90 % c.l . , which is the most stringent known restriction for this isotope . it corresponds to an effective majorana neutrino mass limit in the range ⟨m_ν⟩ ≤ ( 1.0 - 1.7 ) ev , depending on the nuclear matrix elements used in the estimations . new improved half-life limits for the 0νββ decay with majoron ( s ) emission , lorentz-violating 2νββ decay and 2β transitions to excited states of 116sn were set at the level of t 1/2 ≥ 10^20 - 10^22 yr . new limits for the hypothetical lepton-number violating parameters ( right-handed currents admixtures in the weak interaction , the effective majoron-neutrino coupling constants , the r-parity violating parameter , lorentz-violating story_separator_special_tag we report on the measurement of the two-neutrino double-beta decay half-life of 130te with the cuore-0 detector . from an exposure of 33.4 kg year of teo2 , the half-life is determined to be t 2ν 1/2 = [ 8.2 ± 0.2 ( stat . ) ± 0.6 ( syst . ) ] × 10^20 year . this result is obtained after a detailed reconstruction of the sources responsible for the cuore-0 counting rate , with a specific study of those contributing to the 130te neutrinoless double-beta decay region of interest . story_separator_special_tag the cryogenic underground observatory for rare events ( cuore ) is a cryogenic experiment searching for neutrinoless double beta decay ( 0νββ ) of 130te . the detector consists of an array of 988 teo2 crystals arranged in a compact cylindrical structure of 19 towers . we report the cuore initial operations and optimization campaigns . we then present the cuore results on 0νββ and 2νββ decay of 130te obtained from the analysis of the physics data acquired in 2017 . story_separator_special_tag we present an improved search for neutrinoless double-beta ( 0νββ ) decay of 136xe in the kamland-zen experiment .
owing to purification of the xenon-loaded liquid scintillator , we achieved a significant reduction of the 110m-ag contaminant identified in previous searches . combining the results from the first and second phase , we obtain a lower limit for the 0νββ decay half-life of t 0ν 1/2 > 1.07 × 10^26 yr at 90 % c.l . , an almost sixfold improvement over previous limits . using commonly adopted nuclear matrix element calculations , the corresponding upper limits on the effective majorana neutrino mass are in the range 61 - 165 mev . for the most optimistic nuclear matrix elements , this limit reaches the bottom of the quasidegenerate neutrino mass region . story_separator_special_tag we present results from a search for neutrinoless double-β ( 0νββ ) decay using 36.6 g of the isotope 150nd with data corresponding to a live time of 5.25 y recorded with the nemo-3 detector . we construct a complete background model for this isotope , including a measurement of the two-neutrino double-β decay half-life of t 2ν 1/2 = [ 9.34 ± 0.22 ( stat . ) +0.62 -0.60 ( syst . ) ] × 10^18 y for the ground state transition , which represents the most precise result to date for this isotope . we perform a multivariate analysis to search for 0νββ decays in order to improve the sensitivity and , in the case of observation , disentangle the possible underlying decay mechanisms . as no evidence for 0νββ decay is observed , we derive lower limits on half-lives for several mechanisms involving physics beyond the standard model . the observed lower limit , assuming light majorana neutrino exchange mediates the decay , is t 0ν 1/2 > 2.0 × 10^22 y at the 90 % c.l . , corresponding to an upper limit on the effective neutrino mass of ⟨m_ν⟩ < 1.6 - 5.3 ev . story_separator_special_tag two-neutrino 2β decay of 150nd to the 0+_1 740.5-kev excited level of 150sm has been investigated by using a highly purified 2.381-kg nd2o3 sample with the help of an ultra-low-background gamma spectrometer with 4 hpge detectors ( 255 cm3 each ) at the gran sasso underground laboratory ( infn , italy ) . gamma quanta , expected in cascade after de-excitation of the 0+_1 ( 740.5 kev ) excited level of 150sm , have been observed in the coincidence spectra accumulated over 25947 h . the half-life value has been preliminarily estimated as t 1/2 = [ 6.9 -1.9 +4.0 ( stat ) ± 1.1 ( syst ) ] × 10^19 y . the data taking is in progress to reduce the statistical error . story_separator_special_tag
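the conversion used in the 0νββ abstracts above and below , from a half-life limit to a range of effective majorana masses , is ( schematically , for the light-neutrino exchange mechanism ) :

$$ \left[ T^{0\nu}_{1/2} \right]^{-1} = G^{0\nu}\,\left| M^{0\nu} \right|^{2} \left( \frac{ \langle m_{\beta\beta} \rangle }{ m_{e} } \right)^{2} \quad\Longrightarrow\quad \langle m_{\beta\beta} \rangle \;\le\; \frac{ m_{e} }{ \left| M^{0\nu} \right| \sqrt{ G^{0\nu}\, T^{0\nu}_{1/2} } } $$

the spread of | m 0ν | across nuclear models is what turns a single half-life limit into a mass range such as 61 - 165 mev or 1.6 - 5.3 ev .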
the accurate experimental determination of half-lives for 2νββ decay can be used to adjust the parameters of the qrpa model , namely the strength of the particle-particle interaction g_pp , and to improve the calculation of the nuclear matrix element for 0νββ ( neutrinoless ) decay and the neutrino mass ⟨m_ν⟩ estimations . presently high precision direct counting experiments are difficult due to the low decay rates of tellurium and barium isotopes , but their daughter nuclei , accumulating in geological samples for millions of years , are clearly observable as xenon isotopes , which have an extremely low background in terrestrial rocks . this work reviews the current status of the previously determined half-lives of 130te and 128te and presents new estimates of the half-life of 130ba weak decay . our new estimates take into account the spallogenic production of 130xe , significantly reducing the disagreement between the only two published values of the 130ba half-life . based on this result and on our early experiments we propose a new class of geological samples , which can provide an even more accurate determination of the 130ba half-life . story_separator_special_tag 48ca , the lightest experimentally accessible double beta decay candidate , is the only one simple enough to be treated exactly in the nuclear shell model . thus the ββ ( 2ν ) half-life measurement , reported here , provides a unique test of the nuclear physics involved in the ββ matrix element calculation . enriched 48ca sources of two different thicknesses have been exposed in a time projection chamber . we observe a half-life of t 2ν 1/2 = ( 4.3 -1.1 +2.4 [ stat ] ± 1.4 [ syst ] ) × 10^19 yr , consistent with shell model calculations . story_separator_special_tag abstract this letter describes a collaborative tgv ( telescope germanium vertical ) study of the double beta decay of 48ca with a low-background and high sensitivity ge multi-detector spectrometer . the results of t 2ν 1/2 = ( 4.2 +3.3 -1.3 ) × 10^19 years and t 0ν 1/2 > 1.5 × 10^21 years ( 90 % c.l . ) for the double beta decay of 48ca were found after processing experimental data obtained after 8700 hours of measuring time , using approximately 1 gramme of 48ca . the features of the tgv-2 experiment are also presented . story_separator_special_tag the search for double beta-decay of 76ge was carried out with a detector fabricated of enriched material ( 85 % abundance of 76ge compared with 7.8 % natural abundance ) . measurements have been performed by the itep/yepi team in the avan salt mine , 245 meters underground , situated in yerevan , armenia . evidence for two-neutrino double beta-decay of 76ge with a half-life of t 2ν 1/2 = ( 9 ± 1 ) × 10^20 y was obtained . new limits for neutrinoless double beta-decay , t 0ν 1/2 > 1.3 × 10^24 y , and double beta-decay with majoron emission , t 1/2 ( 0ν , χ ) > 1 × 10^22 y , were obtained at 68 % c.l . from mean background fluctuations . the limit for 0ν decay derived by the maximum likelihood method was t 1/2 > 2.0 × 10^24 y . story_separator_special_tag a dramatic reduction in background was achieved in the latest pacific northwest laboratory -- university of south carolina germanium detectors . two 1.05-kg natural-isotopic-abundance detectors were operated for 1.92 kg yr . the residual spectrum , after straightforward corrections , has a significant region resembling the theoretical spectrum of the two-neutrino ββ decay of 76ge . a fit to the data yields t 2ν 1/2 ( 76ge ) = ( 1.1 -0.3 +0.6 ) × 10^21 yr at the 95 % c.l . , which agrees with shell-model predictions . story_separator_special_tag
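a minimal sketch of the arithmetic behind the direct counting measurements above : in the regime λt ≪ 1 the half-life follows from the number of source atoms , the detection efficiency , the live time and the background-subtracted signal . every number below is an invented placeholder , not a value from any abstract above .

# half-life from a counting experiment :
# t_1/2 = ln2 * n_atoms * efficiency * t_live / n_signal ( valid for lambda*t << 1 )
import math

N_A = 6.022e23            # avogadro's number , 1/mol
mass_g = 10.0             # source mass in grams ( assumed )
molar_mass = 100.0        # g/mol for a hypothetical isotope ( assumed )
enrichment = 0.97         # isotopic fraction ( assumed )
efficiency = 0.10         # signal detection efficiency ( assumed )
t_live_yr = 1.0           # live time in years ( assumed )
n_signal = 500.0          # background-subtracted signal events ( assumed )

n_atoms = mass_g / molar_mass * enrichment * N_A
t_half = math.log(2) * n_atoms * efficiency * t_live_yr / n_signal
print(f"t_1/2 = {t_half:.2e} yr")  # ~8e18 yr with these placeholders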
a brief review and status of theoretical issues associated with double-beta decay ( ββ decay ) is given . the final results of the measurement of 2νββ decay of 100mo to the first excited 0+ state in 100ru are presented prior to publication . corrections to the earlier pnl/usc/itep/ypi measurement of 2νββ decay of 76ge are also given prior to publication . finally , a status report and first results of phase-i of the international germanium experiment ( igex ) are presented . story_separator_special_tag the current situation of the double beta decay direct counting experiments is briefly reviewed . a comparison with the theoretical predictions in some representative nuclear models is presented . story_separator_special_tag a time projection chamber ( tpc ) has been constructed to search for double beta decay in 82se . the half-life for the 2νββ decay is t 1/2 = 1.4 ( +0.6 -0.3 ) × 10^20 years ( 68 % c.l . ) . limits for 0νββ decay with and without majoron emission are also given . story_separator_special_tag the double-beta decay of 82se to the 0+_1 excited state of 82kr has been studied with the nemo-3 detector using 0.93 kg of enriched 82se measured for 4.75 y , corresponding to an exposure of 4.42 kg y . a dedicated analysis to reconstruct the gamma rays has been performed to search for events in the 2e2γ channel . no evidence of a 2νββ decay to the 0+_1 state has been observed and a limit of t 2ν 1/2 ( 82se ; 0+ gs → 0+_1 ) > 1.3 × 10^21 y at 90 % c.l . has been set . concerning the 0νββ decay to the 0+_1 state , a limit for this decay has been obtained with t 0ν 1/2 ( 82se ; 0+ gs → 0+_1 ) > 2.3 × 10^22 y at 90 % c.l . , independently of the 2νββ decay process . these results are obtained for the first time with a tracko-calo detector , reconstructing every particle in the final state . story_separator_special_tag abstract after 10357 h of running the nemo-2 tracking detector with an isotopically enriched zirconium source ( 0.084 mol yr of 96zr ) , a 2νββ decay half-life of t 1/2 = ( 2.1 +0.8 -0.4 ( stat ) ± 0.2 ( syst ) ) × 10^19 y was measured . limits with a 90 % c.l . on the 96zr half-lives of 1.0 × 10^21 y for 0νββ decay to the ground state , 3.9 × 10^20 y for decay to the 2+ excited state and 3.5 × 10^20 y for 0νββ decay with a majoron ( 0νββχ ) were obtained . the data also provide direct limits at the 90 % c.l . for the 94zr half-lives . these limits are 1.1 × 10^17 y for 2νββ decay to the ground state , 1.9 × 10^19 y for 0νββ decay to the ground state and 2.3 × 10^18 y for 0νββχ decay to the ground state . story_separator_special_tag using 9.4 g of the 96zr isotope and 1221 days of data from the nemo-3 detector corresponding to 0.031 kg y , the obtained 2νββ decay half-life measurement is t 2ν 1/2 = [ 2.35 ± 0.14 ( stat ) ± 0.16 ( syst ) ] × 10^19 y . different characteristics of the final state electrons have been studied , such as the energy sum , individual electron energy , and angular distribution . the 2νββ nuclear matrix element is extracted using the measured 2νββ half-life and is m 2ν = 0.049 ± 0.002 . constraints on 0νββ decay have also been set . story_separator_special_tag an excess amount of 96mo found in a 1.7 × 10^9 yr zircon sample from cable sands , western australia , yielded a half-life of ( 3.9 ± 0.9 ) × 10^19 yr for the double beta decay of 96zr to 96mo . story_separator_special_tag
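the geochemical determination above infers the half-life from the accumulated daughter excess rather than from counting decays ; for λt ≪ 1 the bookkeeping is simply :

$$ N_{\mathrm{daughter}} = N_{\mathrm{parent}} \left( e^{\lambda t} - 1 \right) \approx N_{\mathrm{parent}}\,\lambda\, t \quad\Longrightarrow\quad T_{1/2} \approx \ln 2 \; \frac{ N_{\mathrm{parent}}\, t_{\mathrm{age}} }{ N_{\mathrm{daughter}} } $$

with t_age the independently dated mineral age ( the 1.7 × 10^9 yr zircon age above plays this role ) .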
the double beta decays of 100mo and 150nd were studied in a time projection chamber located 72 m underground . a 3275 h exposure of a 16.7 g sample of metallic mo enriched to 97.4 % in 100mo resulted in a two-neutrino half-life of ( 6.82 +0.38 -0.53 ± 0.68 ) × 10^18 y . similarly , a 6287 h exposure of 15.5 g of nd2o3 enriched to 91 % in 150nd yielded ( 6.75 +0.37 -0.42 ± 0.68 ) × 10^18 y . lower limits on half-lives for neutrinoless decay with and without majoron emission have also been measured . story_separator_special_tag a time projection chamber with 8.3 grams of enriched 100moo3 as the central electrode has been operating for approximately five months in an underground laboratory . a preliminary analysis of the two-electron sum energy spectrum , the spectrum of those same electrons taken singly , and the opening angle distribution yields a half-life of ( 1.16 -0.08 +0.34 ) × 10^19 y at the 68 % confidence level for two-neutrino double beta decay of 100mo . story_separator_special_tag in this paper we review results obtained in the searches for double beta decays to excited states of the daughter nuclei and illustrate the related experimental techniques . in particular , we describe in some detail the only two cases in which the transition has been observed , that is , the 2νββ ( 0+ → 0+_1 ) decay of the 100mo and 150nd nuclides . moreover , the most significant results in terms of lower limits on the half-life are also summarized . story_separator_special_tag two-neutrino double beta decay of 100mo with half-life t 1/2 = [ 7.2 ± 0.9 ( stat ) ± 1.8 ( syst ) ] × 10^18 yr was detected using a liquid argon ionization chamber . with a c.l . of 68 % ( 90 % ) , the bounds on neutrinoless decay and decay with majoron emission were found to be 8.4 ( 4.9 ) × 10^21 and 4.1 ( 3.2 ) × 10^20 yr , respectively . an analysis of all available results provides the average world value t 1/2 = ( 8.0 ± 0.7 ) × 10^18 yr for the two-neutrino decay of 100mo , and the corresponding nuclear matrix element is m_gt = 0.118 ± 0.005 . story_separator_special_tag the large statistics collected during the operation of a znmoo4 array , for a total exposure of 1.3 kg day of 100mo , allowed the first bolometric observation of the two-neutrino double beta decay of 100mo . the observed spectrum of each crystal was reconstructed taking into account the different background contributions due to environmental radioactivity and internal contamination . the analysis of coincidences between the crystals allowed the assignment of constraints to the intensity of the different background sources , resulting in a reconstruction of the measured spectrum down to an energy of ~300 kev . the half-life extracted from the data is t 1/2 = [ 7.15 ± 0.37 ( stat ) ± 0.66 ( syst ) ] × 10^18 y . story_separator_special_tag abstract double-beta decay of 100mo to the 0+ excited state at 1130.29 kev in 100ru has been observed . a 956 g sample of molybdenum powder enriched to 98.468 % in 100mo was counted in a marinelli geometry with a well-shielded , ultralow-background germanium detector . the cascade gamma rays at 539.53 and 590.76 kev in 100ru were observed . the resulting half-life is ( 6.1 -1.1 +1.8 ) × 10^20 yr at the 68 % confidence limit , in disagreement with a recently published limit .
story_separator_special_tag the two-neutrino double-beta ( ββ ( 2ν ) ) decay rate of 100mo to the first excited 0+ state of 100ru has been measured by a gamma-gamma coincidence technique that uses two hpge detectors to observe the two gamma rays ( eγ1 = 590.76 kev ; eγ2 = 539.53 kev ) from the 100ru nucleus as it deexcites to the ground state via the 0+ → 2+ → 0+ sequence . unlike all previous ββ-decay experiments , this technique provides data which have a large signal-to-background ratio . after a 440-day measurement of a 1.05-kg isotopically enriched ( 98.4 % ) disk of 100mo , 22 detected coincidence events ( with an estimated background of 2.5 events ) yield a half-life of [ 5.9 +1.7 −1.1 ( stat ) ± 0.6 ( syst ) ] × 10^20 years . story_separator_special_tag abstract the coincidence detection efficiency of the tunl-itep apparatus designed for measuring half-life times of two-neutrino double-beta ( 2νββ ) decay transitions to excited final states in daughter nuclei has been measured with a factor of 2.4 improved accuracy . in addition , the previous measuring time of 455 days for the study of the 100mo 2νββ decay to the first excited 0+1 state in 100ru has been increased by 450 days , and a new result ( combined with the previous measurement obtained with the same apparatus ) for this transition is presented : t1/2 = [ 5.5 +1.2 −0.8 ( stat ) ± 0.3 ( syst ) ] × 10^20 yr . measured 2νββ decay half-life times to excited states can be used to test the reliability of nuclear matrix element calculations needed for determining the effective neutrino mass from zero-neutrino double-beta decay data . we also present new limits for transitions to higher excited states in 100ru which , if improved , may be of interest for more exotic conjectures , like a bosonic component to neutrino statistics . story_separator_special_tag the double beta decay of 100mo to the 0+1 and 2+1 excited states of 100ru is studied using the nemo 3 data . after the analysis of 8024 h of data the half-life for the two-neutrino double beta decay of 100mo to the excited 0+1 state is measured to be t1/2^2ν = [ 5.7 +1.3 −0.9 ( stat ) ± 0.8 ( syst ) ] × 10^20 y. the signal-to-background ratio is equal to 3. information about energy and angular distributions of emitted electrons is also obtained . no evidence for neutrinoless double beta decay to the excited 0+1 state has been found . the corresponding half-life limit is t1/2^0ν ( 0+ → 0+1 ) > 8.9 × 10^22 y ( at 90 % c.l. ) . the search for the double beta decay to the 2+1 excited state has allowed the determination of limits on the half-life for the two neutrino mode t1/2^2ν ( 0+ → 2+1 ) > 1.1 × 10^21 y ( at 90 % c.l . ) and for the neutrinoless mode t1/2^0ν ( story_separator_special_tag a sample of 100moo3 with molybdenum enriched in 100mo to 99.5 % and mass of 1199 g was measured deep underground ( 3600 m w.e . ) in the laboratori nazionali del gran sasso of infn , italy , during 17249 h with a low-background set-up with 4 hpge detectors . after 2νββ decay of 100mo to the 0+1 excited level of 100ru ( eexc = 1131 kev ) , two γ quanta of 540 kev and 591 kev should be emitted in the deexcitation process . both these γ's are observed in the accumulated data , in the coincidence spectrum as well as in the 1-dimensional sum spectrum . the measured half-life is t1/2 = ( 7.0 +1.1 −0.8 ) × 10^20 yr , in agreement with positive results obtained in previous experiments .
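these coincidence results follow from the standard counting relation between signal events , source size , and half-life . a minimal sketch ( ε and t denote detection efficiency and live time ; the efficiency value below is back-inferred for illustration , not quoted from the paper ) :

% half-life from a counting measurement , valid for lambda * t << 1
T_{1/2} \;=\; \ln 2 \; \frac{N_{\text{atoms}}\,\varepsilon\,t}{N_{\text{signal}}}

for the two-hpge measurement above , n_signal ≈ 22 − 2.5 = 19.5 events over t = 440 d with n_atoms ≈ 6.2 × 10^24 ( 1.05 kg of 98.4 % enriched 100mo ) ; the quoted t1/2 ≈ 5.9 × 10^20 yr then corresponds to a γ-γ coincidence efficiency of order 0.2 % .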
story_separator_special_tag double beta decay of 100mo to the excited states of daughter nuclei has been studied using a 600 cm3 low-background hpge detector and an external source consisting of 2588 g of 97.5 % enriched metallic 100mo , which was formerly inside the nemo-3 detector and used for the nemo-3 measurements of 100mo . the half-life for the two-neutrino double beta decay of 100mo to the excited 0+1 state in 100ru is measured to be t1/2 = [ 7.5 ± 0.6 ( stat ) ± 0.6 ( syst ) ] × 10^20 yr. for the other ( 0ν + 2ν ) transitions to the 2+1 , 2+2 , 0+2 , 2+3 and 0+3 levels in 100ru , limits are obtained at the level of ( 0.25 - 1.1 ) × 10^22 yr . story_separator_special_tag an experiment to search for 2β processes in 116cd with the help of enriched ( to 82 % ) cadmium tungstate crystal scintillators is in progress at the gran sasso national laboratory of the infn ( lngs , italy ) . after 11074 h of data taking in the last configuration , the preliminary estimate for the half-life of 116cd relative to 2νββ decay is t1/2 = [ 2.52 ± 0.02 ( stat . ) ± 0.14 ( syst . ) ] × 10^19 yr. by using the data of previous stages of the experiment with a similar level of background ( 0.1 counts/ ( kev kg yr ) in the energy interval 2.7 - 2.9 mev ; the total time of measurements is 19770 h ) we have obtained a new limit on the 0νββ decay of 116cd to the ground state of 116sn : t1/2 ≥ 1.9 × 10^23 yr at 90 % c.l . new limits on different 2β processes in 116cd ( decays with story_separator_special_tag a review of recent geochemical measurements on the double-beta decay of 82se , 128te and 130te suggests that the current 'best' value for the decay rate of 128te relative to that of 130te is 4 × 10^−4 and the current 'best' values for individual half-lives are as follows : 1 × 10^20 y for 82se , 2 × 10^24 y for 128te , and 8 × 10^20 y for 130te . story_separator_special_tag double beta decay of 128te has been confirmed and the ratio of half-lives for ββ decay of 130te and 128te has been precisely determined as t1/2 ( 130 ) / t1/2 ( 128 ) = ( 3.52 ± 0.11 ) × 10^−4 by ion-counting mass spectrometry of xe in ancient te ores , using techniques that reduce interferences due to trapped xe . we have also detected excesses of 126xe originating in high energy reactions of cosmic ray muons and their secondaries on te ; such reactions make minor contributions to the measured 128xe excesses in the te ores . the xe measurements , combined with common pb dating of the ores , yield a 130te half-life of ( 2.7 ± 0.1 ) × 10^21 yr and thus a 128te story_separator_special_tag all existing positive results on two neutrino double beta decay in different nuclei were analyzed . using the procedure recommended by the particle data group , weighted average values for half-lives of 48ca , 76ge , 82se , 96zr , 100mo , 100mo → 100ru ( 0+1 ) , 116cd , 130te , 150nd , 150nd → 150sm ( 0+1 ) and 238u were obtained . existing geochemical data were analyzed and recommended values for half-lives of 128te , 130te and 130ba are proposed .
we recommend the use of these results as presently the most precise and reliable values for the half-lives . story_separator_special_tag we report on the final results of a series of experiments on double beta decay of 130te carried out with an array of twenty cryogenic detectors . the set-up is made with crystals of teo2 with a total mass of 6.8 kg , the largest operating one for a cryogenic experiment . four crystals are made with isotopically enriched materials : two in 128te and two others in 130te . the remaining ones are made with natural tellurium , which contains 31.7 % and 33.8 % of 128te and 130te , respectively . the array was run under a heavy shield in the gran sasso underground laboratory at a depth of about 3500 m.w.e . by recording the pulses of each detector in anticoincidence with the others , a lower limit of 2.1 × 10^23 years has been obtained at the 90 % c.l . on the lifetime for neutrinoless double beta decay of 130te . in terms of effective neutrino mass this leads to the most restrictive limit in direct experiments , after those obtained with ge diodes . limits on other lepton violating decays of 130te and on the neutrinoless double beta story_separator_special_tag we report on an improved measurement of the 2νββ half-life of 136xe performed by exo-200 . the use of a large and homogeneous time-projection chamber allows for the precise estimate of the fiducial mass used for the measurement , resulting in a small systematic uncertainty . we also discuss in detail the data-analysis methods used for double-β decay searches with exo-200 , while emphasizing those directly related to the present measurement . the 136xe 2νββ half-life is found to be t1/2^2ν = 2.165 ± 0.016 ( stat ) ± 0.059 ( sys ) × 10^21 yr. this is the most precisely measured half-life of any 2νββ decay to date . story_separator_special_tag we obtain a positive effect in the experiment to search for two-neutrino ββ decay of 150nd by using the time projection chamber . the half-life t1/2 ( 2νββ ) = [ 1.88 +0.66 −0.39 ( stat . ) ± 0.19 ( syst . ) ] × 10^19 yr is derived from the analysis of the data accumulated with samples of 150nd ( 92 % enrichment ) and natnd ( the natural isotopic abundance is 5.6 % ) . the effect to background ratio is equal to 41. the background from radioactive impurities in the sources is estimated experimentally in a special series of measurements . story_separator_special_tag double beta decay of 150nd and 148nd to the excited states of daughter nuclei has been studied using a 400 cm3 low-background hpge detector and an external source consisting of 3046 g of natural nd2o3 powder . the half-life for the two-neutrino double beta decay of 150nd to the excited 0+1 state in 150sm is measured to be t1/2 = [ 1.33 +0.36 −0.23 ( stat ) +0.27 −0.13 ( syst ) ] × 10^20 y. for the other ( 0ν + 2ν ) transitions to the 2+1 , 2+2 , 2+3 , and 0+2 levels in 150sm , limits are obtained at the level of ~ ( 2 - 8 ) × 10^20 y. in the case of 148nd only limits for the ( 0ν + 2ν ) transitions to the 2+1 , 0+1 , and 2+2 excited states in 148sm were obtained and are at the level of ~ ( 4 - 8 ) × 10^20 y . story_separator_special_tag in order to understand the nature of the neutrino , a worldwide search for neutrino-less double-β decay is planned . the present paper measures the half-life for two-neutrino decay of one of a small number of nuclei that are viable candidates for neutrino-less decay .
this measurement uses a powerful coincidence technique to determine the two-neutrino half-life of 150nd , and compare it to predictions . this is an important step in enhancing the potential search for the neutrino-less mode with this nucleus . story_separator_special_tag the half-life for the decay of 238u to 238pu has been measured to be ( 2.0 ± 0.6 ) × 10^21 yr by chemically isolating , and measuring from the resultant alpha particles , the amount of plutonium that had accumulated in 35 yr from 8.4 kg of purified uranyl nitrate . other sources of 238pu have been studied and found negligible . story_separator_special_tag in order to investigate radioactive decay of 130ba and 132ba , which have half-lives on the order of 10^20 - 10^21 a , the isotopic composition of xenon has been measured in 3.5 ga barite of the dresser formation , pilbara , western australia . the analyzed samples were collected at about 86 m depth from a diamond drill core ( pilbara drilling project ) . the fact that the sample has been shielded from modern cosmic ray exposure reduces the number of potentially interfering production pathways , simplifying interpretation of the xe isotope spectrum . this spectrum is clearly distinct from that of either modern or ancient atmospheric xe . a strong excess of 130xe is identified , as well as other isotopic excursions which are attributed to mass-dependent isotopic fractionation and contributions from products of uranium fission . the mass-dependent fractionation , estimated at 2.1 ± 0.3 % amu^−1 , can be accounted for by mutual diffusion and rayleigh distillation during barite formation that is consistent with geological constraints . after correction for mass-dependent fractionation , the concentrations of fissiogenic xe isotopes demonstrate that the u-xe isotope system has remained closed over 3.5 ga. story_separator_special_tag the double-beta-decay experiment nemo-3 has been taking data since february 2003. the aim of this experiment is to search for neutrinoless ( 0νββ ) decay and investigate two neutrino double-beta decay in seven different isotopically enriched samples ( 100mo , 82se , 48ca , 96zr , 116cd , 130te , and 150nd ) . after analysis of the data corresponding to 3.75 yr , no evidence for 0νββ decay in the 100mo and 82se samples was found . the half-life limits at the 90 % c.l . are 1.1 × 10^24 and 3.6 × 10^23 yr , respectively . additionally for 0νββ decay the following limits at the 90 % c.l . were obtained : > 1.3 × 10^22 yr for 48ca , > 9.2 × 10^21 yr for 96zr , and > 1.8 × 10^22 yr for 150nd . the 2νββ decay half-life values were precisely measured for all investigated isotopes . story_separator_special_tag high purity germanium detectors have excellent energy resolution ; the best among the technologies used in double beta decay . since neutrino-less double beta decay hinges on the search for a rare peak upon a background continuum , this strength has enabled the technology to consistently provide leading results . the ge crystals at the heart of these experiments are very pure ; they have no measurable u or th contamination . the added efforts to reduce the background associated with electronics , cryogenic cooling , and shielding have been very successful , leading to the longevity of productivity . the first experiment , published in 1967 by the milan group of fiorini , established the benchmark half-life limit > 3 × 10^20 yr.
this bound was improved with the early work of the usc-pnnl , ucsb and milan groups , yielding limits above 10^23 yr. the heidelberg-moscow and usc-pnnl collaborations pioneered the use of enriched ge for detector fabrication . both groups also initiated techniques of analyzing pulse waveforms to reject γ-ray background . these steps extended the limits to just over 10^25 yr. in story_separator_special_tag the systematic study of ( anti- ) neutrino accompanied β−β− , β+β+ decays and β+/ec , ec/ec electron captures is performed under the assumption of single intermediate nuclear state dominance . the corresponding half-lives are evaluated both for transitions to the ground state as well as to the 0+ and 2+ excited states of the final nucleus . it is stressed that the hypothesis of single state dominance can be confirmed or ruled out by the precision measurements of the differential characteristics of the 2νββ-decays of 100mo and 116cd as well as the β+/ec electron capture in 106cd , 130ba and 136ce . story_separator_special_tag tellurobismuthite ( bi2te3 ) has been analyzed for xe isotopes to determine the half-life for double-β decay of 130te . excess 130xe amounts to ( 6.18 ± 0.18 ) × 10^7 atom/g with 47.8 wt . % te , or ( 1.24 ± 0.04 ) × 10^13 for the parent/daughter ratio 130te/130xe . with ( 9.3 ± 1.1 ) × 10^7 yr for the xe retention age of the te mineral , this provides ( 7.9 ± 1.0 ) × 10^20 yr for the absolute half-life of 130te double-β decay . with this and a literature ratio t1/2 ( 130 ) / t1/2 ( 128 ) of ( 3.52 ± 0.11 ) × story_separator_special_tag we report the observation of two-neutrino double-beta decay in 136xe with t1/2 = 2.11 ± 0.04 ( stat ) ± 0.21 ( syst ) × 10^21 yr. this second-order process , predicted by the standard model , has been observed for several nuclei but not for 136xe . the observed decay rate provides new input to matrix element calculations and to the search for the more interesting neutrinoless double-beta decay , the most sensitive probe for the existence of majorana particles and the measurement of the neutrino mass scale . story_separator_special_tag we report on a search for neutrinoless double-beta decay of 136xe with exo-200 . no signal is observed for an exposure of 32.5 kg yr , with a background of 1.5 × 10^−3 kg^−1 yr^−1 kev^−1 in the ± 1σ region of interest . this sets a lower limit on the half-life of the neutrinoless double-beta decay t1/2^0ν ( 136xe ) > 1.6 × 10^25 yr ( 90 % c.l . ) , corresponding to effective majorana masses of less than 140-380 mev , depending on the matrix element calculation . story_separator_special_tag we present limits on majoron-emitting neutrinoless double-β decay modes based on an exposure of 112.3 days with 125 kg of 136xe . in particular , a lower limit on the ordinary ( spectral index n = 1 ) majoron-emitting decay half-life of 136xe is obtained as t1/2^0νββχ > 2.6 × 10^24 yr at 90 % c.l. , a factor of five more stringent than previous limits . the corresponding upper limit on the effective majoron-neutrino coupling , using a range of available nuclear matrix calculations , is gee < ( 0.8 - 1.6 ) × 10^−5 .
this excludes a previously unconstrained region of parameter space and strongly limits the possible contribution of ordinary majoron emission modes to 0νββ decay for neutrino masses in the inverted hierarchy scheme . story_separator_special_tag we present results from the kamland-zen double-beta decay experiment based on an exposure of 77.6 days with 129 kg of 136xe . the measured two-neutrino double-beta decay half-life of 136xe is t1/2^2ν = 2.38 ± 0.02 ( stat ) ± 0.14 ( syst ) × 10^21 yr , consistent with a recent measurement by exo-200 . we also obtain a lower limit for the neutrinoless double-beta decay half-life , t1/2^0ν > 5.7 × 10^24 yr at 90 % confidence level ( c.l. ) , which corresponds to almost a fivefold improvement over previous limits . story_separator_special_tag two neutrino double beta decay of 150nd to the first 0+ excited state in 150sm is investigated with the 400 cm3 low-background hpge detector . data analysis for 11320.5 h shows the excess of events at 333.9 and 406.5 kev . this makes it possible to estimate the half-life of the investigated process as [ 1.4 +0.4 −0.2 ( stat ) ± 0.3 ( syst ) ] × 10^20 yr . story_separator_special_tag abstract . the two-neutrino double-beta decay of the 124,126xe , 128,130te , 130,132ba and 150nd isotopes is studied in the projected hartree-fock-bogoliubov ( phfb ) model . theoretical 2νβ−β− half-lives of the 128,130te and 150nd isotopes , and 2νβ+β+ , 2νβ+ec and 2νecec half-lives for the 124,126xe and 130,132ba nuclei are presented . calculated quadrupolar transition probabilities b ( e2 : 0+ → 2+ ) , static quadrupole moments and g-factors in the parent and daughter nuclei reproduce the experimental information , validating the reliability of the model wave functions . the anticorrelation between nuclear deformation and the nuclear transition matrix element m2ν is confirmed . story_separator_special_tag neutrinoless double-β decay is of fundamental importance for determining the neutrino mass . although double electron decay is the most promising mode , in very recent years interest in double positron decay , positron emitting electron capture , and double electron capture has been renewed . we present here results of a calculation of nuclear matrix elements for neutrinoless double-β+ decay and positron emitting electron capture within the framework of the microscopic interacting boson model ( ibm-2 ) for 58ni , 64zn , 78kr , 96ru , 106cd , 124xe , 130ba , and 136ce decay . by combining these with a calculation of phase space factors we calculate expected half-lives . story_separator_special_tag sedimentary barites from south africa and western australia ( about 3 billion years old ) contain spallogenic xe isotopes produced by reactions of ba with nuclear-active particles in cosmic rays . the 'surface residence time' of these samples was calculated from the observed concentrations of spallogenic xe-126 . comparison of spallogenic ratios of xe-131/xe-126 in the two samples provides evidence for the reaction ba-130 ( n , γ ) yielding xe-131 , which is characterized by a large number of resonances for neutron absorption in the epithermal region . this observation lends additional support to the conclusions already reached regarding the origin of anomalous xe-131 in lunar samples .
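the model calculations above combine nuclear matrix elements with phase-space factors to predict half-lives . the standard factorization they rely on is the textbook relation ( conventions for where the axial coupling is absorbed vary between papers ) :

% two-neutrino mode : calculable phase-space factor times squared matrix element
\left[ T_{1/2}^{2\nu} \right]^{-1} \;=\; G^{2\nu}\,\bigl| M^{2\nu} \bigr|^{2}

this is also the relation used earlier in this section when a measured 2νββ half-life is inverted to quote a matrix element , such as m2ν = 0.049 ± 0.002 for 96zr .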
story_separator_special_tag we conducted an improved search for the simultaneous capture of two k-shell electrons on the 124xe and 126xe nuclei with emission of two neutrinos , using 800.0 days of data from the xmass-i detector . a novel method to discriminate γ-ray/x-ray or double electron capture signals from β-ray background using scintillation time profiles was developed for this search . no significant signal was found when fitting the observed energy spectra with the expected signal and background . therefore , we set the most stringent lower limits on the half-lives at 2.1 × 10^22 and 1.9 × 10^22 years for 124xe and 126xe , respectively , with 90 % confidence level . these limits improve upon previously reported values by a factor of 4.5 . story_separator_special_tag a complete and improved calculation of phase space factors ( psf ) for 2νβ+β+ and 0νβ+β+ decay , as well as for the competing modes 2νecβ+ , 0νecβ+ , and 2νecec , is presented . the calculation makes use of exact dirac wave functions with finite nuclear size and electron screening and includes life-times , single and summed positron spectra , and angular positron correlations .
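for the neutrinoless modes discussed throughout this section , the analogous factorization carries the new-physics parameter explicitly . in the standard light-majorana-exchange picture ( again a textbook relation , not a result of the psf paper above ) :

% neutrinoless mode : the effective majorana mass enters quadratically
\left[ T_{1/2}^{0\nu} \right]^{-1} \;=\; G^{0\nu}\,\bigl| M^{0\nu} \bigr|^{2}
\left( \frac{\langle m_{\beta\beta} \rangle}{m_{e}} \right)^{2}

this is how a half-life limit such as the exo-200 bound t1/2^0ν > 1.6 × 10^25 yr translates into a range of effective majorana masses , with the quoted spread ( 140-380 mev ) coming from the choice of matrix-element calculation .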
schrödinger ( proc camb philos soc 31:555-563 , 1935 ) averred that entanglement is the characteristic trait of quantum mechanics . the first part of this paper is simultaneously an exploration of schrödinger 's claim and an investigation into the distinction between mere entanglement and genuine quantum entanglement . the typical discussion of these matters in the philosophical literature neglects the structure of the algebra of observables , implicitly assuming a tensor product structure of the simple type i factor algebras used in ordinary quantum mechanics ( qm ) . this limitation is overcome by adopting the algebraic approach to quantum physics , which allows a uniform treatment of ordinary qm , relativistic quantum field theory , and quantum statistical mechanics . the algebraic apparatus helps to distinguish several different criteria of quantum entanglement and to prove results about the relation of quantum entanglement to two additional ways of characterizing the classical versus quantum divide , viz . abelian versus non-abelian algebras of observables , and the ability versus inability to interrogate the system without disturbing it . schrödinger 's claim is reassessed in the light of this discussion . the second part of the paper deals with the story_separator_special_tag abstract we present a mathematical study of the differentiable deformations of the algebras associated with phase space . deformations of the lie algebra of c∞ functions , defined by the poisson bracket , generalize the well-known moyal bracket . deformations of the algebra of c∞ functions , defined by ordinary multiplication , give rise to noncommutative , associative algebras , isomorphic to the operator algebras of quantum theory . in particular , we study deformations invariant under any lie algebra of distinguished observables , thus generalizing the usual quantization scheme based on the heisenberg algebra . story_separator_special_tag in this paper we consider the problem of deformation quantization of the algebra of polynomial functions on coadjoint orbits of semisimple lie groups . the deformation of an orbit is realized by taking the quotient of the universal enveloping algebra of the lie algebra of the given lie group by a suitable ideal . a comparison with geometric quantization in the case of su ( 2 ) is done , where both methods agree . story_separator_special_tag this paper deals with non-commutative objects based on the weyl algebra from the differential geometric point of view . we propose to extend familiar notions from manifolds to non-commutative objects ; in this algebraic approach the basic object is not a point of a space but certain sections of bundles . story_separator_special_tag we prove the existence of star-products and of formal deformations of the poisson lie algebra of an arbitrary symplectic manifold . moreover , all the obstructions encountered in the step-wise construction of formal deformations are vanishing . story_separator_special_tag we present a simple geometric construction linking geometric to deformation quantization . both theories depend on some apparently arbitrary parameters , most importantly a polarization and a symplectic connection , and for real polarizations we find a compatibility condition restricting the set of admissible connections .
in the special case when phase space is a cotangent bundle this compatibility condition has many solutions , and the resulting quantum theory not only reproduces the well-known geometric quantization scheme , but also allows one to quantize all interesting observables . for kähler manifolds there is no compatibility condition , but a canonical choice for the parameters . the explicit form of the observables however remains undetermined . story_separator_special_tag in this paper we study the dynamics of self-gravitating ellipsoids in n dimensions , from the point of view of the poisson structure of the dual of a suitable lie algebra . when n = 3 this was done by rosensteel . in this setting we describe explicitly the ring of invariant functions . in the two-dimensional case we apply a technique due to pedersen to find globally defined darboux coordinates on the coadjoint orbits . finally we derive a characterization of tensors coming from potentials , and use it to exhibit the dynamical equations of the ellipsoid in hamiltonian form . story_separator_special_tag we derive necessary conditions on a lie algebra from the existence of a star product on a neighbourhood of the origin in the dual of the lie algebra for the coadjoint poisson structure which is both differential and tangential to all the coadjoint orbits . in particular we show that when the lie algebra is semisimple there are no differential and tangential star products on any neighbourhood of the origin in the dual of its lie algebra . story_separator_special_tag the translation equivariance of convolutional layers enables convolutional neural networks to generalize well on image problems . while translation equivariance provides a powerful inductive bias for images , we often additionally desire equivariance to other transformations , such as rotations , especially for non-image data . we propose a general method to construct a convolutional layer that is equivariant to transformations from any specified lie group with a surjective exponential map . incorporating equivariance to a new group requires implementing only the group exponential and logarithm maps , enabling rapid prototyping . showcasing the simplicity and generality of our method , we apply the same model architecture to images , ball-and-stick molecular data , and hamiltonian dynamical systems . for hamiltonian systems , the equivariance of our models is especially impactful , leading to exact conservation of linear and angular momentum . story_separator_special_tag 1. we shall show in this note how a formula due to harish-chandra [ 1 ] may be used to obtain simple and elementary proofs of some results of kostant [ 1 ] [ 2 ] concerning the algebra of invariant polynomials on a complex semisimple lie algebra . since we use harish-chandra 's formula only in a very special case , we have included a simple proof of it in that special case , so as to make the present note self-contained .
story_separator_special_tag let g be a group of linear transformations on a finite dimensional real or complex vector space x. assume x is completely reducible as a g-module . let s be the ring of all complex-valued polynomials on x , regarded as a g-module in the obvious way , and let j ⊂ s be the subring of all g-invariant polynomials on x . story_separator_special_tag abstract we provide the existence of tangential formal deformations of the poisson bracket on a regular poisson manifold . we study relations between these deformations and tangential star products . we deduce an existence theorem for these star products .
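the deformation results above all concern star products deforming the pointwise product of functions in the direction of the poisson bracket . as a concrete reference point , the flat-space moyal product they generalize can be written as follows ( standard formula with π^{ij} the constant poisson tensor ; this is not taken from any of the papers above ) :

% term-by-term expansion of the moyal star product on R^{2n}
f \star g \;=\; \sum_{k=0}^{\infty} \frac{1}{k!} \left( \frac{i\hbar}{2} \right)^{k}
\pi^{i_1 j_1} \cdots \pi^{i_k j_k}\,
(\partial_{i_1} \cdots \partial_{i_k} f)\,(\partial_{j_1} \cdots \partial_{j_k} g)

the k = 0 term is the ordinary product , the k = 1 term is ( iħ/2 ) { f , g } , and the antisymmetrized combination f ⋆ g − g ⋆ f = iħ { f , g } + o ( ħ³ ) recovers the moyal bracket mentioned in the deformation abstracts above .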
abstract perchloroethene ( pce ) is a common groundwater contaminant , due to its widespread use as a dry-cleaning solvent . current treatment methods are limited in their ability to remove pce from contaminated sites in an efficient and cost-effective manner . palladium-on-gold nanoparticles ( pd-on-au nps ) have been shown to be highly catalytically active in the hydrodechlorination ( hdc ) of trichloroethene ( tce ) and other chlorinated compounds . however , the catalytic chemistry of such nanoparticles for pce hdc in water has not been systematically addressed in the literature . in this paper , we assess the catalytic properties of 4 nm pd-on-au nps , 4 nm pd nps , and pd/al2o3 for water-phase pce hdc under ambient conditions . the pd-on-au nps exhibited volcano-shape activity as a function of pd surface coverage ( sc ) . maximum activity was at 80 ± 0.8 sc % ( pseudo-first-order rate constant of 5000 l/gpd/min ) , which was 20x and 80x higher than that for pd nps and pd/al2o3 , respectively , at room temperature and ph 7. a complete mechanistic model of pce hdc that coupled gas-liquid mass transfer with the surface reactions was developed and found to be consistent with story_separator_special_tag abstract trichloroethene ( tce ) , a common carcinogen and groundwater contaminant in industrialized nations , can be catalytically degraded by au nanoparticles partially coated with pd ( pd-on-au nps ) . in this work , we synthesized pd-on-au nps using 3 , 7 , and 10 nm au nps with pd surface coverages between 0 and 150 % and studied how particle size and composition influenced their tce hydrodechlorination ( hdc ) activity . we observed volcano-shape dependence on both au particle size and pd surface coverage , with 7 nm au nps with pd coverages of 60 - 70 % having maximum activity . using extended x-ray absorption fine-structure spectroscopy , we found a strong correlation between catalytic activity and the presence of 2-d pd ensembles ( as small as 2 - 3 atoms ) . aberration-corrected scanning transmission electron microscopy further confirmed the presence of pd ensembles . the pd dispersion and oxidation state generally changed from isolated , metallic pd atoms to metallic 2-d pd ensembles of varying sizes , and to partially oxidized 3-d pd ensembles , as pd surface coverage increased . these changes occurred at different surface coverages for different au particle sizes . these findings highlight the story_separator_special_tag pollutants in the form of heavy metals , fertilizers , detergents , and pesticides have seriously reduced the supply of pure drinking water and usable water . gold metal has intriguing potential to deal with the water pollution problem , as recent research on several fronts is advancing the concept of nanoscale gold as the basis for cost-effective nanotechnology-based water treatment . nano-gold has special properties , such as enhanced catalytic activity , visible surface plasmon resonance color changes , and chemical stability , that make it more useful than other materials . this perspective article highlights the current use of gold nanoparticles for the efficient removal and the selective and sensitive detection of a variety of pollutants in water . the challenges in further developing nano-gold to address water contamination are discussed , which should stimulate future research into improved removal and detection of undesirable chemical compounds .
story_separator_special_tag introduction nitrate ( no3− ) is often found in the groundwater and surface water in the united states , especially in agricultural areas intensively using nitrate-rich fertilizers . it is widespread due to its high stability and solubility . [ 1 ] the consumption of nitrate and its transformation product nitrite ( no2− ) can cause a series of adverse health effects , including methemoglobinemia or blue baby syndrome . [ 2 ] the catalytic reduction of nitrate/nitrite has received intense research interest since pd-based catalysts ( e.g . pd-cu , pd-in and pd-sn ) can convert nitrate/nitrite to harmless nitrogen ( n2 ) with hydrogen as the reducing agent . [ 3 ] however , due to over-reduction , ammonia ( nh3 ) is also easily formed besides nitrogen . from the viewpoint of water treatment , high selectivity to nitrogen is preferred since ammonia is still a contaminant in water . [ 1 ] with supported cu-pd/pd catalysts under optimal conditions , the best selectivity to nitrogen for nitrate/nitrite reduction is near 80 - 95 % . [ 3 ] story_separator_special_tag nitrate ( no3− ) and nitrite ( no2− ) anions are often found in groundwater and surface water as contaminants globally , especially in agricultural areas due to nitrate-rich fertilizer use . one popular approach to studying the removal of nitrite/nitrate from water has been their degradation to dinitrogen via pd-based reduction catalysis . however , little progress has been made towards understanding how the catalyst structure can improve activity . focusing on the catalytic reduction of nitrite in this study , we report that au nps supporting pd metal ( 'pd-on-au nps' ) show catalytic activity that varies with volcano-shape dependence on pd surface coverage . at room temperature , in co2-buffered water , and under a h2 headspace , the nps were maximally active at a pd surface coverage of 80 % , with a first-order rate constant ( kcat = 576 l gpd^−1 min^−1 ) that was 15x and 7.5x higher than monometallic pd nps ( 4 nm ; 40 l gpd^−1 min^−1 ) and pd/al2o3 ( 1 wt % pd ; 76 l gpd^−1 min^−1 ) , respectively . accounting only for surface pd atoms , these nps ( 576 story_separator_special_tag bimetallic pdau catalysts are more active than monometallic ones for the selective oxidation of alcohols , but the reasons for improvement remain insufficiently detailed . a metal-on-metal material can probe the structure-catalysis relationship more clearly than conventionally prepared bimetallics . in this study , pd-on-au nanoparticles with variable pd surface coverages ( sc % ) ranging from 10 to 300 sc % were synthesized and immobilized onto carbon ( pd-on-au/c ) . tested for glycerol oxidation at 60 °c , ph 13.5 , and 1 atm under flowing oxygen , the series of pd-on-au/c materials showed volcano-shape catalytic activity dependence on pd surface coverage . increasing surface coverage led to higher catalytic activity , such that the initial turnover frequency ( tof ) reached a maximum of 6000 h^−1 at 80 sc % . activity decreased above 80 sc % , mostly due to catalyst deactivation . pd-on-au/c at 80 sc % was > 10 times more active than monometallic au/c and pd/c , with both exhibiting tof values less than 500 h^−1 . glyceric acid was the dominant primary reaction product for all compositions , with its zero-conversion selectivity varying monotonically as a function of pd surface coverage .
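the rate constants quoted above are pseudo-first-order and normalized per gram of pd , so they translate directly into treatment time scales . a minimal worked illustration ( the 0.01 g/l pd loading is a hypothetical value chosen for the example , not one from the papers ) :

% pseudo-first-order rate law normalized per catalyst mass concentration C_Pd
-\frac{d[\mathrm{NO_2^-}]}{dt} \;=\; k_{\mathrm{cat}}\, C_{\mathrm{Pd}}\, [\mathrm{NO_2^-}] ,
\qquad
t_{1/2} \;=\; \frac{\ln 2}{k_{\mathrm{cat}}\, C_{\mathrm{Pd}}}

with kcat = 576 l gpd^−1 min^−1 and c_pd = 0.01 g/l , the observed rate constant is k_obs ≈ 5.8 min^−1 and the nitrite half-life is t1/2 ≈ 0.12 min ; this is the sense in which the pd-on-au composition is '15x' faster than monometallic pd nps at the same pd loading .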
story_separator_special_tag gold has been proposed as an environmentally friendly catalyst for acetylene hydrochlorination for vinyl chloride monomer synthesis , replacing the commercially used mercury catalyst . however , long life with excellent activity is difficult to achieve because gold is readily reduced to metallic nanoparticles . the stability of gold limits its industrial application . in this paper , we promoted gold with bismuth for the hydrochlorination of acetylene . it was found that the bi promotion leads to partial reduction to aucl , rather than the complete reduction of au to metallic nanoparticles that occurs in the absence of bi . the optimized catalyst with a molar ratio of bi/au = 3:1 ( 0.3 wt % au ) showed comparable reactivity to a 1.0 wt % au catalyst and significantly improved stability . furthermore , the gold-bismuth catalyst had higher activity and stability than the commercial mercury catalyst , and is less toxic and more environmentally friendly , making it a potentially green , mercury-free industrial catalyst for acetylene hydrochlorination . story_separator_special_tag the water gas shift ( wgs ) reaction ( co + h2o → co2 + h2 ) is catalyzed by many metals and metal oxides as well as recently reported homogeneous catalysts . in this present paper the kinetics of the wgs reaction as catalyzed by alumina-supported group viib , viii , and ib metals are examined . for several metals a strong effect of support on metal activity is observed . for example , the turnover number ( rate per surface metal atom ) of pt supported on al2o3 is an order of magnitude higher than the turnover number of pt on sio2 . the turnover numbers ( at 300 °c ) of the various alumina-supported metals studied for wgs decrease in the order cu , re , co , ru , ni , pt , os , au , fe , pd , rh , and ir . for these metals the range of activity varies by more than three orders of magnitude . it is shown that a volcano-shaped correlation exists between the activities of these metals and their respective co heats of adsorption . the partial pressure dependencies of the reactants on these metals are reported for
redox-sensitive materials have increasingly moved into the scientific focus . in particular , disulfide-crosslinked colloidal networks are being intensively investigated , since under the reductive conditions inside cells they are rapidly reduced to their thiol-functionalized building blocks . this enables a nearly quantitative intracellular release of molecules incorporated into the particles . in addition , disulfide-crosslinked hydrogels serve as cell-carrier materials that can be selectively degraded under mild , reductive , cytocompatible conditions without impairing the viability of the cells embedded in them . disulfide-crosslinked polymer networks can be prepared by direct polymerization of disulfide-functionalized monomers , by means of disulfide-containing crosslinkers , or by oxidative coupling of thiol-functionalized building blocks ( thiomers ) . only the latter approach permits the direct covalent incorporation of thiol-functionalized molecules ( such as cysteine-containing peptide sequences ) during network formation . gelation of thiomers by mere contact with atmospheric oxygen is possible , but too slow for the defined preparation of nanoparticles by most common methods and for a homogeneous cell distribution in hydrogels . oxidation catalysts such as the widely used hydrogen peroxide ( h2o2 ) are therefore employed to shorten the reaction times . the strong oxidation potential of h2o2 causes story_separator_special_tag where a branch of science has been approached exclusively from the deductive side or exclusively from the experimental side , it is far easier to form a correct estimate of our state of knowledge in it than is the case where experimental and deductive methods have been continuously worked side by side . the study of rational dynamics has afforded excellent mental training for those who have made the greatest marks in the world as physicists , notwithstanding the fact that the conclusions arrived at in rational dynamics are in direct contradiction to ordinary experience . thus it is impossible to verify experimentally that the times taken by particles to slide down perfectly smooth chords of a vertical circle are equal , and the phenomena of nature are far too complicated to allow of an experimental test of the velocity with which a boy would have to throw a cricket ball in vacuo in order to give it a horizontal range of 200 yards . in the study of thermodynamics , on the other hand , where the experimental has preceded the deductive treatment , as has been the case ever since joule discovered the so-called mechanical equivalent of story_separator_special_tag basic books is proud to announce the next two volumes of the complete audio cd collection of the recorded lectures delivered by the late richard p. feynman , lectures originally delivered to his physics students at caltech and later fashioned by the author into his classic textbook lectures on physics . ranging from the most basic principles of newtonian physics through such formidable theories as einstein 's general relativity , superconductivity , and quantum mechanics , feynman 's 111 lectures stand as a monument of clear exposition and deep insight . 12 cds : total playing time : approx . 12 hours story_separator_special_tag 1. introduction part i. equilibrium : 2. theories of composite systems 3. individuals : systems and constituents 4. situated individuals and the situation 5.
interacting individuals and collective phenomena 6. macro individuals and emergent properties part ii . dynamics : 7. the temporality of dynamical systems 8. the complexity of deterministic dynamics 9. stochastic processes 10. directionality , history , expectation 11. epilogue . story_separator_special_tag this text provides an introduction to ergodic theory suitable for readers knowing basic measure theory . the mathematical prerequisites are summarized in chapter 0. it is hoped the reader will be ready to tackle research papers after reading the book . the first part of the text is concerned with measure-preserving transformations of probability spaces ; recurrence properties , mixing properties , the birkhoff ergodic theorem , isomorphism and spectral isomorphism , and entropy theory are discussed . some examples are described and are studied in detail when new properties are presented . the second part of the text focuses on the ergodic theory of continuous transformations of compact metrizable spaces . the family of invariant probability measures for such a transformation is studied and related to properties of the transformation such as topological transitivity , minimality , the size of the non-wandering set , and existence of periodic points . topological entropy is introduced and related to measure-theoretic entropy . topological pressure and equilibrium states are discussed , and a proof is given of the variational principle that relates pressure to measure-theoretic entropies . several examples are studied in detail . the final chapter outlines significant results and some story_separator_special_tag we investigate the time average mean-square displacement $ \overline{\delta^{2}}\,(x(t)) = \int_{0}^{t-\Delta} \left[ x(t'+\Delta) - x(t') \right]^{2} dt' \,/\, (t-\Delta) $ for fractional brownian-langevin motion , where x ( t ) is the stochastic trajectory and Δ is the lag time . unlike the previously investigated continuous-time random-walk model , $ \overline{\delta^{2}} $ converges to the ensemble average $ \langle x^{2} \rangle \sim t^{2h} $ in the long measurement time limit story_separator_special_tag we find a general formula for the distribution of time averaged observables for weakly non-ergodic systems . such type of ergodicity breaking is known to describe certain systems which exhibit anomalous fluctuations , e.g . blinking quantum dots and the sub-diffusive continuous time random walk model . when the fluctuations become normal we recover usual ergodic statistical mechanics . examples of a particle undergoing fractional dynamics in a binding force field are worked out in detail . we briefly discuss possible physical applications in single particle experiments . story_separator_special_tag programme educational objectives ( peos ) : the master programme in applied geology aims to provide comprehensive knowledge based on various branches of geology , with special focus on applied geology subjects in the areas of geomorphology , structural geology , hydrogeology , petroleum geology , mining geology , remote sensing and environmental geology .
to provide an in-depth knowledge and hands-on training to learners in the area of applied geology and enable them to work independently at a higher level of education / career . to gain knowledge on the significance of the dynamics of the earth , basic principles of sedimentology and stratigraphy , and economic mineral formations and related exploration operations in industries . to impart fundamental concepts of economic mineral exploration , geological mapping techniques , geomorphological principles , and applications of geology in engineering and story_separator_special_tag abstract . we study a general class of nonlinear mean field fokker-planck equations in relation with an effective generalized thermodynamical ( e.g.t . ) formalism . we show that these equations describe several physical systems such as : chemotaxis of bacterial populations , bose-einstein condensation in the canonical ensemble , porous media , generalized cahn-hilliard equations , kuramoto model , bmf model , burgers equation , smoluchowski-poisson system for self-gravitating brownian particles , debye-hückel theory of electrolytes , two-dimensional turbulence . in particular , we show that nonlinear mean field fokker-planck equations can provide generalized keller-segel models for the chemotaxis of biological populations . as an example , we introduce a new model of chemotaxis incorporating both effects of anomalous diffusion and exclusion principle ( volume filling ) . therefore , the notion of generalized thermodynamics can have applications for concrete physical systems . we also consider nonlinear mean field fokker-planck equations in phase space and show the passage from the generalized kramers equation to the generalized smoluchowski equation in a strong friction limit . our formalism is simple and illustrated by several explicit examples corresponding to boltzmann , tsallis , fermi-dirac and bose-einstein entropies among others . story_separator_special_tag a general type of nonlinear fokker-planck equation is derived directly from a master equation , by introducing generalized transition rates . the h theorem is demonstrated for systems that follow those classes of nonlinear fokker-planck equations , in the presence of an external potential . for that , a relation involving terms of fokker-planck equations and general entropic forms is proposed . it is shown that , at equilibrium , this relation is equivalent to the maximum-entropy principle . families of fokker-planck equations may be related to a single type of entropy , and so the correspondence between well-known entropic forms and their associated fokker-planck equations is explored . it is shown that the boltzmann-gibbs entropy , apart from its connection with the standard linear fokker-planck equation , may also be related to a family of nonlinear fokker-planck equations . story_separator_special_tag as shown in sects . 3.1 , 2 we can immediately obtain expectation values for processes described by the linear langevin equations ( 3.1 , 31 ) . for nonlinear langevin equations ( 3.67 , 110 ) expectation values are much more difficult to obtain , so here we first try to derive an equation for the distribution function . as mentioned already in the introduction , a differential equation for the distribution function describing brownian motion was first derived by fokker [ 1.1 ] and planck [ 1.2 ] ; many review articles and books on the fokker-planck equation now exist [ 1.5 - 15 ] .
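for reference , the linear equation that these generalizations start from is the one-variable fokker-planck equation in its standard form , with drift coefficient d^(1) and diffusion coefficient d^(2) ( the notation follows the convention used in risken's book , excerpted above ) :

% standard one-variable fokker-planck equation for the distribution W(x,t)
\frac{\partial W(x,t)}{\partial t}
\;=\; \left[ -\frac{\partial}{\partial x}\, D^{(1)}(x)
\;+\; \frac{\partial^{2}}{\partial x^{2}}\, D^{(2)}(x) \right] W(x,t)

the nonlinear variants discussed in the surrounding abstracts modify this form , for example by raising w to a power inside the diffusion term or by letting the coefficients depend functionally on w , which is what ties them to generalized entropies such as the tsallis form .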
story_separator_special_tag abstract fractional kinetic equations of the diffusion , diffusion-advection , and fokker-planck type are presented as a useful approach for the description of transport dynamics in complex systems which are governed by anomalous diffusion and non-exponential relaxation patterns . these fractional equations are derived asymptotically from basic random walk models , and from a generalised master equation . several physical consequences are discussed which are relevant to dynamical processes in complex systems . methods of solution are introduced and for some special cases exact solutions are calculated . this report demonstrates that fractional equations have come of age as a complementary tool in the description of anomalous transport processes . story_separator_special_tag preface . acknowledgements . special functions of the fractional calculus . gamma function . mittag-leffler function . wright function . fractional derivatives and integrals . the name of the game . grünwald-letnikov fractional derivatives . riemann-liouville fractional derivatives . some other approaches . sequential fractional derivatives . left and right fractional derivatives . properties of fractional derivatives . laplace transforms of fractional derivatives . fourier transforms of fractional derivatives . mellin transforms of fractional derivatives . existence and uniqueness theorems . linear fractional differential equations . fractional differential equation of a general form . existence and uniqueness theorem as a method of solution . dependence of a solution on initial conditions . the laplace transform method . standard fractional differential equations . sequential fractional differential equations . fractional green 's function . definition and some properties . one-term equation . two-term equation . three-term equation . four-term equation . calculation of heat load intensity change in blast furnace walls . finite-part integrals and fractional derivatives . general case : n-term equation . other methods for the solution of fractional-order equations . the mellin transform method . power series method . babenko 's symbolic story_separator_special_tag brownian motion , the random movement of microscopic particles in a fluid , usually gives rise to a gaussian probability of finding a particle at a particular place at a specific time . but in some situations , this probability behaves differently . a new mathematical model shows how to reconcile this behavior with other hallmarks of brownian motion . story_separator_special_tag we report on a diffusive analysis of the motion of flagellate protozoa species . these parasites are the etiological agents of neglected tropical diseases : leishmaniasis caused by leishmania amazonensis and leishmania braziliensis , african sleeping sickness caused by trypanosoma brucei , and chagas disease caused by trypanosoma cruzi . by tracking the positions of these parasites and evaluating the variance related to the radial positions , we find that their motions are characterized by a short-time transient superdiffusive behavior . also , the probability distributions of the radial positions are self-similar and can be approximated by a stretched gaussian distribution . we further investigate the probability distributions of the radial velocities of individual trajectories . among several candidates , we find that the generalized gamma distribution shows a good agreement with these distributions .
the velocity time series have long-range correlations , displaying a strong persistent behavior ( hurst exponents close to one ) . the prevalence of universal patterns across all analyzed species indicates that similar mechanisms may be ruling the motion of these parasites , despite their differences in morphological traits . in addition , further analysis of these patterns could become a useful tool for investigating story_separator_special_tag the lateral mobility of lipids in phospholipid membranes has attracted numerous experimental and theoretical studies , inspired by the model of singer and nicholson ( 1972. science , 175:720-731 ) and the theoretical description by saffman and delbrück ( 1975. proc . natl . acad . sci . usa . 72:3111-3113 ) . fluorescence recovery after photobleaching ( frap ) is used as the standard experimental technique for the study of lateral mobility , yielding an ensemble-averaged diffusion constant . single-particle tracking ( spt ) and the recently developed single-molecule imaging techniques now give access to data on individual displacements of molecules , which can be used for characterization of the mobility in a membrane . here we present a new type of analysis for tracking data by making use of the probability distribution of square displacements . the potential of this new type of analysis is shown for single-molecule imaging , which was employed to follow the motion of individual fluorescence-labeled lipids in two systems : a fluid-supported phospholipid membrane and a solid polymer-stabilized phospholipid monolayer . in the fluid membrane , a high-mobility component characterized by a diffusion constant of 4.4 μm²/s and a low-mobility component story_separator_special_tag the random walks of concentrations of magnetic flux observed on the solar surface are found to give a natural , macroscopic realization of anomalous diffusion with fractal dimension d = 1.56 ± 0.08 and exponent of anomalous diffusion 0.25 ± 0.40 . the results exclude euclidean , two-dimensional diffusion but are entirely consistent with results from percolation theory for diffusion on clusters at a density below the percolation threshold . story_separator_special_tag we use the h theorem to establish the entropy and the entropic additivity law for a system composed of subsystems , with the dynamics governed by klein-kramers equations , by considering relations among the dynamics of these subsystems and their entropies . we start by considering subsystems governed by linear klein-kramers equations and verify that the boltzmann-gibbs entropy is appropriate to this dynamics , leading us to the standard entropic additivity , s_bg^( 1∪2 ) = s_bg^1 + s_bg^2 , consistent with the fact that the distributions of the subsystems are independent . we then extend the dynamics of these subsystems to independent nonlinear klein-kramers equations . for this case , the results show that the h theorem is verified for a generalized entropy , which does not preserve the standard entropic additivity for independent distributions . in this scenario , consistent results are obtained when a suitable coupling among the nonlinear klein-kramers equations is considered , in which story_separator_special_tag abstract we analyze the h-theorem for systems subjected to a process that implies nonconservation of the number of particles .
firstly , we consider the system governed by a linear fokker-planck equation with a source ( or sink ) term . afterwards , we investigate the nonlinear situations , including the tsallis entropy . we also obtain for these cases the entropy production , in order to verify that the entropy in these situations increases . story_separator_special_tag as early as 1902 , gibbs pointed out that systems whose partition function diverges , e.g . gravitation , lie outside the validity of the boltzmann-gibbs ( bg ) theory . consistently , since the pioneering bekenstein-hawking results , physically meaningful evidence ( e.g. , the holographic principle ) has accumulated that the bg entropy s_bg of a ( 3+1 ) black hole is proportional to its area l^2 ( l being a characteristic linear length ) , and not to its volume l^3 . similarly , there exists the area law , so named because , for a wide class of strongly quantum-entangled d-dimensional systems , s_bg is proportional to ln l if d = 1 , and to l^( d−1 ) if d > 1 , instead of being proportional to l^d ( d ≥ 1 ) . these results violate the extensivity of the thermodynamical entropy of a d-dimensional system . this thermodynamical inconsistency disappears if we realize that the story_separator_special_tag spin relaxation close to the glass temperature of cumn and aufe spin glasses is shown , by neutron spin echo , to follow a generalized exponential function which explicitly introduces hierarchically constrained dynamics and macroscopic interactions . the interaction parameter is directly related to the normalized tsallis nonextensive entropy parameter q and exhibits universal scaling with reduced temperature . at the glass temperature q = 5/3 , corresponding , within tsallis q-statistics , to a mathematically defined critical value for the onset of strong disorder and nonlinear dynamics . story_separator_special_tag we present a translation of paul langevin 's landmark paper . in it langevin successfully applied newtonian dynamics to a brownian particle and so invented an analytical approach to random processes which has remained useful to this day . story_separator_special_tag we derive a phenomenological model of the underlying microscopic langevin equation of the nonlinear fokker-planck equation , which is used to describe anomalous correlated diffusion . the resulting distribution-dependent stochastic equation is then analyzed , and properties such as long-time scaling and the hurst exponent are calculated both analytically and from simulations . results of this microscopic theory are compared with those of fractional brownian motion . story_separator_special_tag we introduce a fractional fokker-planck equation describing the stochastic evolution of a particle under the combined influence of an external , nonlinear force and a thermal heat bath . for the force-free case , a subdiffusive behavior is recovered . the equation is shown to obey generalized einstein relations , and its stationary solution is the boltzmann distribution . the relaxation of single modes is shown to follow a mittag-leffler decay . we discuss the example of a particle in a harmonic potential . story_separator_special_tag diffusion and wave equations together with appropriate initial condition ( s ) are rewritten as integrodifferential equations with the time derivatives replaced by convolution with t^( α−1 ) / γ ( α ) , α = 1 , 2 , respectively .
fractional diffusion and wave equations are obtained by letting $\alpha$ vary in $(0,1)$ and $(1,2)$ , respectively . the corresponding green 's functions are obtained in closed form for arbitrary space dimensions in terms of fox functions and their properties are exhibited . in particular , it is shown that the green 's function of fractional diffusion is a probability density . story_separator_special_tag fractional generalization of the diffusion equation includes fractional derivatives with respect to time and coordinate . it had been introduced to describe anomalous kinetics of simple dynamical systems with chaotic motion . we consider a symmetrized fractional diffusion equation with a source and find different asymptotic solutions applying a method which is similar to the method of separation of variables . the method has a clear physical interpretation presenting the solution in a form of decomposition of the process of fractal brownian motion and levy-type process . fractional generalization of the kolmogorov-feller equation is introduced and its solutions are analyzed . story_separator_special_tag the heat equation is one of the three classical linear partial differential equations of second order that form the basis of any elementary introduction to the area of pdes , and only recently has it come to be fairly well understood . in this monograph , aimed at research students and academics in mathematics and engineering , as well as engineering specialists , professor vazquez provides a systematic and comprehensive presentation of the mathematical theory of the nonlinear heat equation usually called the porous medium equation ( pme ) . this equation appears in a number of physical applications , such as to describe processes involving fluid flow , heat transfer or diffusion . other applications have been proposed in mathematical biology , lubrication , boundary layer theory , and other fields . each chapter contains a detailed introduction and is supplied with a section of notes , providing comments , historical notes or recommended reading , and exercises for the reader . story_separator_special_tag nonlinear fokker-planck equations have found applications in various fields such as plasma physics , surface physics , astrophysics , the physics of polymer fluids and particle beams , nonlinear hydrodynamics , theory of electronic circuitry and laser arrays , engineering , biophysics , population dynamics , human movement sciences , neurophysics , psychology and marketing . in spite of the diversity of these research fields , many phenomena addressed therein have a fundamental physical mechanism in common . they arise due to cooperative interactions between the subsystems of many-body systems . these cooperative interactions result in a reduction of the large number of degrees of freedom of many-body systems and , in doing so , bind the subunits of many-body systems by means of self-organization into synergetic entities . these synergetic many-body systems admit low dimensional descriptions in terms of nonlinear fokker-planck equations that capture and uncover the essential dynamics underlying the observed phenomena . the phenomena that will be addressed in this book range from equilibrium and nonequilibrium phase transitions and the multistability of systems to the emergence of power law and cut-off distributions and the distortion of boltzmann distributions . we will study possible asymptotic behaviors story_separator_special_tag with the use of a quantity normally scaled in multifractals , a generalized form is postulated for entropy , namely $s_q = k\,[1 - \sum_{i=1}^{w} p_i^q]/(q-1)$ , where $q$ characterizes the generalization and $p_i$ are the probabilities associated with $w$ ( microscopic ) configurations . the main properties associated with this entropy are established , particularly those corresponding to the microcanonical and canonical ensembles . the boltzmann-gibbs statistics is recovered as the $q \to 1$ limit . story_separator_special_tag the linear response theory has given a general proof of the fluctuation-dissipation theorem which states that the linear response of a given system to an external perturbation is expressed in terms of fluctuation properties of the system in thermal equilibrium . this theorem may be represented by a stochastic equation describing the fluctuation , which is a generalization of the familiar langevin equation in the classical theory of brownian motion . in this generalized equation the friction force becomes retarded or frequency-dependent and the random force is no longer white . they are related to each other by a generalized nyquist theorem which is in fact another expression of the fluctuation-dissipation theorem . this point of view can be applied to a wide class of irreversible processes including collective modes in many-particle systems as has already been shown by mori . as an illustrative example , the density response problem is briefly discussed . story_separator_special_tag we investigate fractional brownian motion with a microscopic random-matrix model and introduce a fractional langevin equation . we use the latter to study both subdiffusion and superdiffusion of a free particle coupled to a fractal heat bath . we further compare fractional brownian motion with the fractal time process . the respective mean-square displacements of these two forms of anomalous diffusion exhibit the same power-law behavior . here we show that their lowest moments are actually all identical , except the second moment of the velocity . this provides a simple criterion that enables us to distinguish these two non-markovian processes . story_separator_special_tag if the diffusivity $k$ of a substance whose mass per volume of atmosphere is $\chi$ be defined by an equation of fick 's type $\bar{u}\,\partial\chi/\partial x + \bar{v}\,\partial\chi/\partial y + \bar{w}\,\partial\chi/\partial z + \partial\chi/\partial t = \partial/\partial x\,(k\,\partial\chi/\partial x) + \partial/\partial y\,(k\,\partial\chi/\partial y) + \partial/\partial z\,(k\,\partial\chi/\partial z)$ , ( 1 ) $x , y , z , t$ being cartesian co-ordinates and time , $\bar{u} , \bar{v} , \bar{w}$ being the components of mean velocity , then the measured values of $k$ have been found to be 0.2 cm² sec⁻¹ in capillary tubes ( kaye and laby 's tables ) , 10⁵ cm² sec⁻¹ when gusts are smoothed out of the mean wind ( akerblom , g. i. taylor , hesselberg , etc . ) , 10⁸ cm² sec⁻¹ when the means extend over a time comparable with 4 hours ( l. f. richardson and d. proctor ) , 10¹¹ cm² sec⁻¹ when the mean wind is taken to be the general circulation characteristic of the latitude ( defant ) . thus the so-called constant $k$ varies in a ratio of 2 to a billion story_separator_special_tag recent experiments on self-diffusion in associative networks have shown superdiffusive scaling hypothesized to originate from molecular diffusive mechanisms , which include walking and hopping of th . story_separator_special_tag a plasma is considered in which a maxwellian distribution of electrons with thermal velocity $v_e$ and drift velocity $v_d$ is drifting relative to a maxwellian distribution of ions with thermal velocity $v_i$ .
for $v_d \ll v_e$ the usual ion acoustic waves are stable ; however , electrostatic ion cyclotron waves with $\omega \approx \omega_i$ are unstable for $v_d \gtrsim 5 v_i$ . in the case when $5 v_i \lesssim v_d \ll v_e$ , and $t_e/t_i < 2$ , the electrostatic ion cyclotron waves grow to a nonlinear equilibrium spectrum . this spectrum of waves leads to a diffusion of electrons across the field lines with a diffusion coefficient $d = 2 \rho_e^2 \omega_e$ , where $\rho_e$ is the electron larmor radius and $\omega_e$ is the electron larmor frequency . the ratio of the resulting diffusion coefficient to the bohm diffusion coefficient is given by a constant $\times\,(v_d/v_e)^5\,(t_e/t_i)^2$ . story_separator_special_tag the diffusion of a plasma across a magnetic field is considered both theoretically and experimentally . it was found that the fluctuation of the density can cause an anomalous diffusion that is inversely proportional to the magnetic field $b$ . the mobility of electrons perpendicular to the magnetic field was measured in the experiment ; the measurement supports $1/b$ diffusion . the density fluctuation was also measured . there is good agreement between the observed mobility and the theoretical one derived from the density fluctuation . the potential fluctuation was found to be an increasing function of the external potential and independent of the magnetic field intensity . this fact also indicates that fluctuation or turbulence plays an important role in the anomalous diffusion . story_separator_special_tag abstract in this paper a study of anomalous diffusion of some group iii and group v impurities in silicon is described . three sets of conditions under which boron and phosphorus diffused anomalously fast were studied . they were ( a ) fast diffusion at high concentration of impurity , ( b ) fast diffusion in a mechanically polished slice and ( c ) the push-out effect . these were studied under different conditions of oxidation and the evidence suggests that oxidation causes relief of strain which in turn causes fast diffusion . a further experiment giving direct evidence of fast diffusion during relief of strain in a crystal is described . story_separator_special_tag anomalous diffusion dynamics in confined nanoenvironments govern the macroscale properties and interactions of many biophysical and material systems . currently , it is difficult to quantitatively link the nanoscale structure of porous media to anomalous diffusion within them . fluorescence correlation spectroscopy super-resolution optical fluctuation imaging ( fcs-sofi ) has been shown to extract nanoscale structure and brownian diffusion dynamics within gels , liquid crystals , and polymers , but has limitations which hinder its wider application to more diverse , biophysically-relevant datasets . here , we parallelize the least-squares curve fitting step on a gpu , improving computation times by up to a factor of 40 , implement anomalous diffusion and two-component brownian diffusion models , and make fcs-sofi more accessible by packaging it in a user-friendly gui . we apply fcs-sofi to simulations of the protein fibrinogen diffusing in polyacrylamide of varying matrix densities and super-resolve locations where slower , anomalous diffusion occurs within smaller , confined pores . the improvements to fcs-sofi in speed , scope , and usability will allow for the wider adoption of super-resolution correlation analysis to diverse research topics .
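the abstracts above repeatedly characterize anomalous diffusion through the power-law scaling of the mean-squared displacement , msd(t) ~ t^alpha , with alpha = 1 for ordinary brownian motion . the sketch below shows one minimal way to estimate that exponent from tracking data ; the synthetic brownian trajectory , the lag window and the log-log fit are illustrative assumptions , not a procedure taken from any of the cited works .

```python
# a minimal sketch : estimating the anomalous-diffusion exponent alpha from
# the time-averaged mean-squared displacement , msd(lag) ~ lag^alpha .
# the input trajectory here is a synthetic 2-d brownian walk , so the fitted
# slope should come out close to 1 ; real spt data would replace `positions` .
import numpy as np

rng = np.random.default_rng(0)
positions = np.cumsum(rng.normal(size=(10_000, 2)), axis=0)

def time_averaged_msd(x, max_lag):
    # msd(lag) = < |x(t + lag) - x(t)|^2 > , averaged over start times t
    lags = np.arange(1, max_lag)
    msd = [np.mean(np.sum((x[lag:] - x[:-lag]) ** 2, axis=1)) for lag in lags]
    return lags, np.array(msd)

lags, msd = time_averaged_msd(positions, 200)
alpha, _ = np.polyfit(np.log(lags), np.log(msd), 1)
print(f"estimated exponent alpha = {alpha:.2f}")
```

subdiffusive data ( alpha < 1 ) or superdiffusive data ( alpha > 1 ) would show up directly in the fitted slope ; restricting the fit to short lags matters in practice , since time-averaged msd estimates become noisy at lags comparable to the trajectory length .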
story_separator_special_tag measurements of the transient photocurrent $i(t)$ in an increasing number of inorganic and organic amorphous materials display anomalous transport properties . the long tail of $i(t)$ indicates a dispersion of carrier transit times . however , the shape invariance of $i(t)$ to electric field and sample thickness ( designated as universality for the classes of materials here considered ) is incompatible with traditional concepts of statistical spreading , i.e. , a gaussian carrier packet . we have developed a stochastic transport model for $i(t)$ which describes the dynamics of a carrier packet executing a time-dependent random walk in the presence of a field-dependent spatial bias and an absorbing barrier at the sample surface . the time dependence of the random walk is governed by a hopping time distribution $\psi(t)$ . a packet , generated with a $\psi(t)$ characteristic of hopping in a disordered system [ e.g . , $\psi(t) \sim t^{-(1+\alpha)}$ ] story_separator_special_tag formulas are obtained for the mean first passage times ( as well as their dispersion ) in random walks from the origin to an arbitrary lattice point on a periodic space lattice with periodic boundary conditions . generally this time is proportional to the number of lattice points . the number of distinct points visited after n steps on a k-dimensional lattice ( with $k \ge 3$ ) when n is large is $a_1 n + a_2 n^{1/2} + a_3 + a_4 n^{-1/2} + \cdots$ . the constants $a_1$ - $a_4$ have been obtained for walks on a simple cubic lattice when $k = 3$ and $a_1$ and $a_2$ are given for simple and face centered cubic lattices . formulas have also been obtained for the number of points visited r times in n steps as well as the average number of times a given point has been visited . the probability $f(c)$ that a walker on a one dimensional lattice returns to his starting point before being trapped on a lattice of trap concentration c is $f(c) = 1 + [c/(1-c)] \log c$ . most of the results in this paper have been derived by the method story_separator_special_tag abstract the subject of this paper is the evolution of brownian particles in disordered environments . the ariadne 's clew we follow is understanding of the general statistical mechanisms which may generate anomalous ( non-brownian ) diffusion laws ; this allows us to develop simple arguments to obtain a qualitative ( but often quite accurate ) picture of most situations . several analytical techniques - such as the green function formalism and renormalization group methods - are also exposed . care is devoted to the problem of sample to sample fluctuations , particularly acute here . we consider the specific effects of a bias on anomalous diffusion , and discuss the generalizations of einstein 's relation in the presence of disorder . an effort is made to illustrate the theoretical models by describing many physical situations where anomalous diffusion laws have been - or could be - observed .
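the disordered-media picture sketched above is easy to probe numerically . below is a minimal monte carlo sketch of a tracer walking among immobile obstacles on a square lattice : below the percolation threshold the mean-squared displacement grows sublinearly over a transient window before crossing over to normal diffusion , the behavior described in the obstructed-diffusion abstracts nearby . the obstacle concentration , lattice size and step counts are illustrative assumptions .

```python
# a minimal monte carlo sketch of obstructed lattice diffusion : a blind
# random walker rejects moves into blocked sites ; the short-time msd then
# grows as t^alpha with alpha < 1 before normal diffusion is recovered .
import numpy as np

rng = np.random.default_rng(1)
L, conc, steps, walkers = 256, 0.3, 2000, 200
blocked = rng.random((L, L)) < conc                  # immobile obstacles
moves = np.array([(1, 0), (-1, 0), (0, 1), (0, -1)])

msd = np.zeros(steps)
for _ in range(walkers):
    while True:                                      # pick a free starting site
        start = rng.integers(0, L, size=2)
        if not blocked[tuple(start)]:
            break
    pos = start.copy()                               # unwrapped coordinates
    for step in range(steps):
        trial = pos + moves[rng.integers(4)]
        if not blocked[trial[0] % L, trial[1] % L]:  # obstacles repeat periodically
            pos = trial
        msd[step] += np.sum((pos - start) ** 2) / walkers

t = np.arange(1, steps + 1)
alpha = np.polyfit(np.log(t[10:500]), np.log(msd[10:500]), 1)[0]
print(f"effective short-time exponent: {alpha:.2f}  (< 1 signals subdiffusion)")
```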
story_separator_special_tag this paper proposes a method of modeling and simulation of photovoltaic arrays . the main objective is to find the parameters of the nonlinear i-v equation by adjusting the curve at three points : open circuit , maximum power , and short circuit . given these three points , which are provided by all commercial array data sheets , the method finds the best i-v equation for the single-diode photovoltaic ( pv ) model including the effect of the series and parallel resistances , and guarantees that the maximum power of the model matches the maximum power of the real array . with the parameters of the adjusted i-v equation , one can build a pv circuit model with any circuit simulator by using basic math blocks . the modeling method and the proposed circuit model are useful for power electronics designers who need a simple , fast , accurate , and easy-to-use modeling method for use in simulations of pv systems . in the first pages , the reader will find a tutorial on pv devices and will understand the parameters that compose the single-diode pv model . the modeling method is then introduced and presented in detail story_separator_special_tag we deal with the cauchy problem for the space-time fractional diffusion equation , which is obtained from the standard diffusion equation by replacing the second-order space derivative with a riesz-feller derivative of order $\alpha \in (0,2]$ and skewness $\theta$ ( $|\theta| \le \min\{\alpha , 2-\alpha\}$ ) , and the first-order time derivative with a caputo derivative of order $\beta \in (0,2]$ . the fundamental solution ( green function ) for the cauchy problem is investigated with respect to its scaling and similarity properties , starting from its fourier-laplace representation . we review the particular cases of space-fractional diffusion { $0 < \alpha \le 2$ , $\beta = 1$ } , time-fractional diffusion { $\alpha = 2$ , $0 < \beta \le 2$ } , and neutral-fractional diffusion { $0 < \alpha = \beta \le 2$ } , for which the fundamental solution can be interpreted as a spatial probability density function evolving story_separator_special_tag we have studied the diffusion of tracer proteins in highly concentrated random-coil polymer and globular protein solutions imitating the crowded conditions encountered in cellular environments . using fluorescence correlation spectroscopy , we measured the anomalous diffusion exponent $\alpha$ characterizing the dependence of the mean-square displacement of the tracer proteins on time , $\langle r^2(t) \rangle \sim t^{\alpha}$ . we observed that the diffusion of proteins in dextran solutions with concentrations up to 400 g/l is subdiffusive ( $\alpha < 1$ ) even at low obstacle concentration . the anomalous diffusion exponent $\alpha$ decreases continuously with increasing obstacle concentration and molecular weight , but does not depend on buffer ionic strength , and neither does it depend strongly on solution temperature . at very high random-coil polymer concentrations , $\alpha$ reaches a limit value of $\alpha_l \approx 3/4$ , which we take to be the signature of a coupling between the motions of the tracer proteins and the segments of the dextran chains . a similar , although less pronounced , subdiffusive behavior is observed for the diffusion of streptavidin in concentrated globular protein solutions . these observations indicate that protein story_separator_special_tag in normal lateral diffusion , the mean-square displacement of the diffusing species is proportional to time . but in disordered systems anomalous diffusion may occur , in which the mean-square displacement is proportional to some other power of time . in the presence of moderate concentrations of obstacles , diffusion is anomalous over short distances and normal over long distances . monte carlo calculations are used to characterize anomalous diffusion for obstacle concentrations between zero and the percolation threshold . as the obstacle concentration approaches the percolation threshold , diffusion becomes more anomalous over longer distances ; the anomalous diffusion exponent and the crossover length both increase . the crossover length and time show whether anomalous diffusion can be observed in a given experiment . story_separator_special_tag controlled-source electromagnetic ( em ) induction in some geological formations is shown here to be compactly described by an anomalous subdiffusion process . such a process , which is not universal , is governed by a fractional diffusion equation or alternatively the convolutional form of ohm 's law . a subdiffusing eddy current vortex , or electromagnetic smoke ring , propagates in such a way that its position of median intensity overruns its position of peak intensity . this behavior is not allowed in classical diffusion but is a simple consequence of diffusion within a stationary fractal medium . a similar analysis has been applied to understand heavy-tailed traveltime distributions that appear in certain hydrological time series . the tell-tale signature of anomalous electromagnetic diffusion is a slope b of the magnetic zero-crossing moveout curve that is constant with transmitter-receiver ( rx ) offset and significantly different from unity . neither lateral heterogeneity nor uniaxial anisotropy can generate such a constant-slope moveout curve with an economy of model parameters . controlled-source em data from two sites in texas and one in new mexico are used in this study to test the eddy current subdiffusion hypothesis . story_separator_special_tag this article provides an update on the global cancer burden using the globocan 2020 estimates of cancer incidence and mortality produced by the international agency for research on cancer . worldwide , an estimated 19.3 million new cancer cases ( 18.1 million excluding nonmelanoma skin cancer ) and almost 10.0 million cancer deaths ( 9.9 million excluding nonmelanoma skin cancer ) occurred in 2020 . female breast cancer has surpassed lung cancer as the most commonly diagnosed cancer , with an estimated 2.3 million new cases ( 11.7 % ) , followed by lung ( 11.4 % ) , colorectal ( 10.0 % ) , prostate ( 7.3 % ) , and stomach ( 5.6 % ) cancers . lung cancer remained the leading cause of cancer death , with an estimated 1.8 million deaths ( 18 % ) , followed by colorectal ( 9.4 % ) , liver ( 8.3 % ) , stomach ( 7.7 % ) , and female breast ( 6.9 % ) cancers . overall incidence was from 2-fold to 3-fold higher in transitioned versus transitioning countries for both sexes , whereas mortality varied < 2-fold for men and little for women . story_separator_special_tag animal movement comes in a variety of types including small foraging movements , larger one-way dispersive movements , seasonally-predictable round-trip migratory movements , and erratic nomadic movements .
although most individuals move at some point throughout their lives , movement patterns can vary widely across individuals within the same species : differing within an individual over time ( intra-individual ) , among individuals in the same population ( inter-individual ) , or among populations ( inter-population ) . yet , studies of movement ( theoretical and empirical alike ) more often focus on understanding typical movement patterns than understanding variation in movement . here , i synthesize current knowledge of movement variation ( drawing parallels across species and movement types ) , describing the causes ( what factors contribute to individual variation ) , patterns ( what movement variation looks like ) , consequences ( why variation matters ) , maintenance ( why variation persists ) , implications ( for management and conservation ) , and finally gaps ( what pieces we are currently missing ) . by synthesizing across scales of variation , i span across work on plasticity , personality , and geographic variation . individual movement can story_separator_special_tag scale invariant patterns have been found in different biological systems , in many cases resembling what physicists have found in other nonbiological systems . here we describe the foraging patterns of free-ranging spider monkeys ( ateles geoffroyi ) in the forest of the yucatan peninsula , mexico and find that these patterns resemble what physicists know as levy walks . first , the length of a trajectory 's constituent steps , or continuous moves in the same direction , is best described by a power-law distribution in which the frequency of ever larger steps decreases as a negative power function of their length . the rate of this decrease is very close to that predicted by a previous analytical levy walk model to be an optimal strategy to search for scarce resources distributed at random ( viswanathan et al . 1999 ) . second , the frequency distribution of the duration of stops or waiting times also approximates a power-law function . finally , the mean square displacement during the monkeys ' first foraging trip increases more rapidly than would be expected from a random walk with constant step length , but within the range predicted for levy walks . in view of story_separator_special_tag this paper addresses the electrochemical impedance of diffusion in a spatially restricted layer . a physically grounded framework is provided for the behavior $z(i\omega) \propto (i\omega)^{-\gamma/2}$ ( $0 < \gamma < 2$ ) , thus generalising the warburg impedance ( $\gamma = 1$ ) . the analysis starts from the notion of anomalous diffusion , which is characterized by a mean squared displacement of the diffusing particles that has a power law dependence on time . using a theoretical approach to anomalous diffusion that employs fractional calculus , several models are presented . in the first model , the continuity equation is generalised to a situation in which the number of diffusing particles is not conserved . in the second model the constitutive equation is derived from the stochastic scheme of a continuous time random walk . and in the third , the generalised constitutive equation can be interpreted within a non-local transport theory as establishing a relationship of the flux to the previous history of the concentration through a power-law behaving memory kernel . this third model is also related to diffusion in a fractal geometry . the electrochemical impedance is studied for each of these models story_separator_special_tag we present an appraisal of differential-equation models for anomalous diffusion , in which the time evolution of the mean-square displacement is $\langle r^2(t) \rangle \sim t^{\gamma}$ with $\gamma \ne 1$ . by comparison , continuous-time random walks lead via generalized master equations to an integro-differential picture . using lévy walks and a kernel which couples time and space , we obtain a generalized picture for anomalous transport , which provides a unified framework both for dispersive ( $\gamma < 1$ ) and for enhanced diffusion ( $\gamma > 1$ ) . story_separator_special_tag applying the liouville-riemann fractional calculus , we derive and solve a fractional operator relaxation equation . we demonstrate how the exponent of the asymptotic power-law decay $t^{-\nu}$ relates to the order of the fractional operator $d^{\nu}/dt^{\nu}$ ( $0 < \nu < 1$ ) . continuous-time random walk ( ctrw ) models offer a physical interpretation of fractional order equations , and thus we point out a connection between a special type of ctrw and our fractional relaxation model . exact analytical solutions of the fractional relaxation equation are obtained in terms of fox functions by using laplace and mellin transforms . apart from fractional relaxation , fox functions are further used to calculate fourier integrals of kohlrausch-williams-watts type relaxation functions . because of its close connection to integral transforms , the rich class of fox functions forms a suitable framework for discussing slow relaxation phenomena . story_separator_special_tag abstract we obtain a generalized diffusion equation in modified or riemann-liouville form from continuous time random walk theory . the waiting time probability density function and mean squared displacement for different forms of the equation are explicitly calculated . we show examples of generalized diffusion equations in normal or caputo form that encode the same probability distribution functions as those obtained from the generalized diffusion equation in modified form . the obtained equations are general and many known fractional diffusion equations are included as special cases . story_separator_special_tag the investigation of diffusive processes in nature presents a complexity associated with memory effects . thus , new mathematical models are necessary to incorporate the concept of memory in diffusion . in the following , i approach the continuous time random walks in the context of generalised diffusion equations . to do this , i investigate the diffusion equation with exponential and mittag-leffler memory-kernels in the context of caputo-fabrizio and atangana-baleanu fractional operators in the caputo sense . thus , exact expressions for the probability distributions are obtained , in which non-gaussian distributions emerge . i connect the distribution obtained with a rich class of diffusive behaviour . moreover , i propose a generalised model to describe the random walk process with resetting in the memory-kernel context . story_separator_special_tag abstract motivated by recently proposed generalizations of the diffusion-wave equation with the caputo time fractional derivative of order lying in ( 1 , 2 ) , in the present survey paper a class of generalized time-fractional diffusion-wave equations is introduced .
its definition is based on the subordination principle for volterra integral equations and involves the notion of complete bernstein function . various members of this class are surveyed , including the distributed-order time-fractional diffusion-wave equation and equations governing wave propagation in viscoelastic media with completely monotone relaxation moduli . story_separator_special_tag we explore the behavior of random walkers that fly instantaneously between successive sites , however distant , and those that must walk between these sites . the latter case is related to intermittent behavior in josephson junctions and to turbulent diffusion . story_separator_special_tag we suggest a modification of a comb model to describe anomalous transport in spiny dendrites . geometry of the comb structure consisting of a one-dimensional backbone and lateral branches makes it possible to describe anomalous diffusion , where dynamics inside fingers corresponds to spines , while the backbone describes diffusion along dendrites . the presented analysis establishes that the fractional dynamics in spiny dendrites is controlled by fractal geometry of the comb structure and fractional kinetics inside the spines . our results show that the transport along spiny dendrites is subdiffusive and depends on the density of spines in agreement with recent experiments . story_separator_special_tag the main purpose of this paper is to study the single traveling wave solutions of the fractional coupled nonlinear schrödinger equation . by using the complete discriminant system method and computer algebra with symbolic computation , a series of new single traveling wave solutions are obtained , which include trigonometric function solutions , jacobi elliptic function solutions , hyperbolic function solutions , solitary wave solutions and rational function solutions . in order to further explain the propagation of the fractional coupled nonlinear schrödinger equation in nonlinear optics , two-dimensional and three-dimensional graphs are drawn . story_separator_special_tag we experimentally study anomalous diffusion of ultracold atoms in a one dimensional polarization optical lattice . the atomic spatial distribution is recorded at different times and its dynamics and shape are analyzed . we find that the width of the cloud exhibits a power-law time dependence with an exponent that depends on the lattice depth . moreover , the distribution exhibits fractional self-similarity with the same characteristic exponent . the self-similar shape of the distribution is found to be well fitted by a lévy distribution , but with a characteristic exponent that differs from the temporal one . numerical simulations suggest that this is due to long trapping times in the lattice and correlations between the atom 's velocity and flight duration . story_separator_special_tag recently , anomalous superdiffusion of ultracold 87rb atoms in an optical lattice has been observed along with a fat-tailed , levy-type spatial distribution . the anomalous exponents were found to depend on the depth of the optical potential . we find , within the framework of the semiclassical theory of sisyphus cooling , three distinct phases of the dynamics as the optical potential depth is lowered : normal diffusion ; levy diffusion ; and $x \sim t^{3/2}$ scaling , the latter related to obukhov 's model ( 1959 ) of turbulence . the process can be formulated as a levy walk , with strong correlations between the length and duration of the excursions . we derive a fractional diffusion equation describing the atomic cloud , and the corresponding anomalous diffusion coefficient . story_separator_special_tag we discuss a renewal process in which successive events are separated by scale-free waiting time periods . among other ubiquitous long-time properties , this process exhibits aging : events counted initially in a time interval $[0 , t]$ statistically strongly differ from those observed at later times $[t_a , t_a + t]$ . the versatility of renewal theory is owed to its abstract formulation . renewals can be interpreted as steps of a random walk , switching events in two-state models , domain crossings of a random motion , etc . in complex , disordered media , processes with scale-free waiting times play a particularly prominent role . we set up a unified analytical foundation for such anomalous dynamics by discussing in detail the distribution of the aging renewal process . we analyze its half-discrete , half-continuous nature and study its aging time evolution . these results are readily used to discuss a scale-free anomalous diffusion process , the continuous-time random walk . by this we not only shed light on the profound origins of its characteristic features , such as weak ergodicity breaking ; along the way , we also add an extended discussion on aging effects . in story_separator_special_tag fractional diffusion equations imply non-gaussian distributions that generalise the standard diffusive process . recent advances in fractional calculus lead to a class of new fractional operators defined by non-singular memory kernels , differently from the fractional operator defined in the literature . in this work we propose a generalisation of the fokker-planck equation in terms of a non-singular fractional temporal operator and considering a non-constant diffusion coefficient . we obtain analytical solutions for the caputo-fabrizio and the atangana-baleanu fractional kernel operators , from which non-gaussian distributions emerge having long and short tails . in addition , we show that these non-gaussian distributions are unimodal or bimodal according as the diffusion index $\nu$ is positive or negative , respectively , where a diffusion coefficient of the power-law type $\mathcal{d}(x) = \mathcal{d}_0 |x|^{\nu}$ is considered . thereby , a class of anomalous diffusion phenomena connected with fractional derivatives and with a diffusion coefficient of the power-law type is presented . the techniques employed in this work open new possibilities for studying memory effects in diffusive contexts . story_separator_special_tag a non-markovian generalization of the chapman-kolmogorov transition equation for continuous time random processes governed by a waiting time distribution is investigated . it is shown under which conditions a long-tailed waiting time distribution with a diverging characteristic waiting time leads to a fractional generalization of the klein-kramers equation . from the latter equation a fractional rayleigh equation and a fractional fokker-planck equation are deduced . these equations are characterized by a slow , nonexponential relaxation of the modes toward the gibbs-boltzmann and the maxwell thermal equilibrium distributions . the derivation sheds some light on the physical origin of the generalized diffusion and friction constants appearing in the fractional fokker-planck equation .
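the renewal and waiting-time picture in the preceding abstracts is straightforward to simulate . the sketch below implements a minimal continuous-time random walk with unit jumps and heavy-tailed waiting times $\psi(t) \sim t^{-(1+\alpha)}$ , $0 < \alpha < 1$ , for which the ensemble msd grows as $t^{\alpha}$ ; the pareto waiting-time form and all parameter values are illustrative assumptions .

```python
# a minimal ctrw sketch : unit jumps separated by pareto waiting times with
# tail exponent alpha in (0, 1) , yielding subdiffusion msd(t) ~ t^alpha .
import numpy as np

rng = np.random.default_rng(2)
alpha, walkers = 0.7, 5000
t_obs = np.logspace(1, 4, 8)              # observation times

msd = []
for T in t_obs:
    x = np.zeros(walkers)
    t = 1.0 + rng.pareto(alpha, walkers)  # first event time , minimum 1
    active = t < T
    while active.any():
        x[active] += rng.choice([-1.0, 1.0], size=active.sum())
        t[active] += 1.0 + rng.pareto(alpha, active.sum())
        active &= t < T                   # stop walkers whose next event is past T
    msd.append(np.mean(x ** 2))

slope = np.polyfit(np.log(t_obs), np.log(msd), 1)[0]
print(f"fitted msd exponent: {slope:.2f}  (theory: {alpha})")
```

the same simulation also exhibits the aging discussed above : counting jumps in a window [ ta , ta + t ] gives systematically fewer events for larger ta , because ever longer waiting times come to dominate .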
story_separator_special_tag we introduce a fractional kramers equation for a particle interacting with a thermal heat bath and external non-linear force field . for the force free case the velocity damping follows the mittag-leffler relaxation and the diffusion is enhanced . the equation obeys the generalized einstein relation , and its stationary solution is the boltzmann distribution . our results are compared to previous results on enhanced lévy-type diffusion derived from stochastic collision models . story_separator_special_tag the large-scale transport of inertial particles is investigated by means of lagrangian simulations . our main focus is on the possible emergence of anomalous diffusion for the class of random parallel flows . for such flows , a perturbative prediction in the limit of small inertia has recently become available in the literature . anomalous diffusion was traced back to the possible divergence of the first-order correction to the eddy diffusivity of the tracer limit . our numerical results show that the conditions for the emergence of anomalous diffusion are affected by resummations of higher-order perturbative contributions which regularize the resulting diffusion process , thus ruling out the occurrence of anomalous diffusion . story_separator_special_tag abstract we discuss the anomalous diffusion associated with a nonlinear fractional fokker-planck equation with a diffusion coefficient $d \propto |x|^{-\theta}$ ( $\theta \in \mathbb{r}$ ) . two classes of exact solutions are found . the first one is a modified porous medium equation and corresponds to integer derivatives and a drift force $f \propto x|x|^{\theta-1}$ . the second one corresponds to fractional space derivative in the absence of external drift . the connection with nonextensive statistical mechanics is also discussed in both cases . story_separator_special_tag part i. introduction : 1. introduction f. m. gradstein 2. chronostratigraphy - linking time and rock f. m. gradstein , j. g. ogg and a. g. smith part ii . concepts and methods : 3. biostratigraphy f. m. gradstein , r. a. cooper and p. m. sadler 4. earth 's orbital parameters and cycle stratigraphy l. a. hinnov 5. the geomagnetic polarity time scale j. g. ogg and a. g. smith 6. radiogenic isotope geochronology m. villeneuve 7. stable isotopes j. m. mcarthur and r. j. howarth 8. geomathematics f. p. agterberg part iii . geologic periods : 9. the precambrian : the archean and proterozoic eons l. j. robb , a. h. knoll , k. a. plumb , g. a. shields , h. strauss and j. veizer 10. toward a 'natural ' precambrian time scale w. bleeker 11. the cambrian period j. h. shergold and r. a. cooper 12. the ordovician period r. a. cooper and p. m. sadler 13. the silurian period m. j. melchin , r. a. cooper and p. m. sadler 14. the devonian period m. r. house and f. m. gradstein 15. the carboniferous period v. davydov , b. r. wardlaw and f. m. gradstein story_separator_special_tag this text is concerned with the quantitative aspects of the theory of nonlinear diffusion equations ; equations which can be seen as nonlinear variations of the classical heat equation . they appear as mathematical models in different branches of physics , chemistry , biology , and engineering , and are also relevant in differential geometry and relativistic physics . much of the modern theory of such equations is based on estimates and functional analysis . concentrating on a class of equations with nonlinearities of power type that lead to degenerate or singular parabolicity ( `` equations of porous medium type '' ) , the aim of this text is to obtain sharp a priori estimates and decay rates for general classes of solutions in terms of estimates of particular problems . these estimates are the building blocks in understanding the qualitative theory , and the decay rates pave the way to the fine study of asymptotics . many technically relevant questions are presented and analyzed in detail . a systematic picture of the most relevant phenomena is obtained for the equations under study , including time decay , smoothing , extinction in finite time , and delayed regularity . story_separator_special_tag we have carried out simulations of particle diffusion through polyacrylamide gel networks . the model structures were built on a diamond lattice , in a simulation box with periodic boundary conditions . the method of structure generation consists of a random distribution of knots on the lattice and interconnection between randomly chosen pairs of knots . the structures generated by this procedure approximate the topology of real polymer gels . parameters that control the distance between knots and the degree of stretching of the chain permit us to simulate a polyacrylamide system in which the concentration of species as well as the degree of crosslinking can be compared to realistic gels as prepared by the available experimental procedures . these structures were geometrically characterized by the analysis of the pore size distribution and excluded volume . the structures thus generated are used as model networks for monte carlo studies of the diffusion of hard spheres in the restricted geometry . modeling the de . story_separator_special_tag we study by molecular dynamics computer simulation a binary soft-sphere mixture that shows a pronounced difference in the species ' long-time dynamics . anomalous , power-law-like diffusion of small particles arises that can be understood as a precursor of a double-transition scenario , combining a glass transition and a separate small-particle localization transition . switching off small-particle excluded-volume constraints slows down , rather than enhances , small-particle transport . story_separator_special_tag we present dynamic light scattering ( dls ) measurements of soft poly ( methyl-methacrylate ) ( pmma ) and polyacrylamide ( pa ) polymer gels prepared with trapped bodies ( latex spheres or magnetic nanoparticles ) . we show that the anomalous diffusivity of the trapped particles can be analyzed in terms of a fractal gaussian network gel model for the entire time range probed by the dls technique . this model is a generalization of the rouse model for linear chains extended for structures with power law network connectivity scaling , which includes both percolating and uniform bulk gel limits . for a dilute dispersion of strongly scattering particles trapped in a gel , the scattered electric field correlation function at small wavevector ideally probes self-diffusion of gel portions imprisoning the particles . our results show that the time-dependent diffusion coefficients calculated from the correlation functions change from a free diffusion regime at short times to an anomalous subdiffusive regime at long times ( increasingly arrested displacement ) .
the characteristic time of transition between these regimes depends on scattering vector as approximately $q^{-2}$ , while the time decay power exponent tends to the value expected for story_separator_special_tag to investigate diffusion processes in agarose gel , nanoparticles with sizes in the range between 1 and 140 nm have been tested by means of fluorescence correlation spectroscopy . understanding the diffusion properties in agarose gels is interesting , because such gels are good models for microbial biofilms and cell cytoplasm . the fluorescence correlation spectroscopy technique is very useful for such investigations due to its high sensitivity and selectivity , its excellent spatial resolution compared to the pore size of the gel , and its ability to probe a wide range of sizes of diffusing nanoparticles . the largest hydrodynamic radius ( $r_c$ ) of trapped particles that displayed local mobility was estimated to be 70 nm for a 1.5 % agarose gel . the results showed that diffusion of particles in agarose gel is anomalous , with a diverging fractal dimension of diffusion when the large particles become entrapped in the pores of the gel . the latter situation occurs when the reduced size ( $r_a/r_c$ ) of the diffusing particle , $a$ , is > 0.4 . variations of the fractal exponent of diffusion ( $d_w$ ) with the reduced particle size were in agreement with three-dimensional story_separator_special_tag driven anomalous diffusions ( such as those occurring in some surface growths ) are currently described through the nonlinear fokker-planck-like equation $\partial p^{\mu}/\partial t = -\,\partial [f(x)\,p^{\mu}]/\partial x + d\,\partial^2 p^{\nu}/\partial x^2$ [ $(\mu , \nu) \in \mathbb{r}^2$ ; $f(x) = k_1 - k_2 x$ is the external force ; $k_2 \ge 0$ ] . we exhibit here the story_separator_special_tag we consider conventional relaxation dynamics for surfaces , both evaporation dynamics and surface diffusion . we point out that the cusp singularity of the surface free energy implies that the relaxation dynamics has to be treated as a free boundary value problem . on this basis we predict , under appropriate conditions , the spontaneous formation of facets and a finite time of healing for the high symmetry surface story_separator_special_tag in this work we present the logarithmic diffusion equation as a limit case when the index that characterizes a nonlinear fokker-planck equation , in its diffusive term , goes to zero . a linear drift and a source term are considered in this equation . its solution has a lorentzian form ; consequently this equation characterizes a lévy-like superdiffusion . in addition an equation that unifies the porous media and the logarithmic diffusion equations , including a generalized diffusion equation in fractal dimension , is obtained . this unification is performed in the nonextensive thermostatistics context and broadens the possibilities for the description of anomalous diffusive processes . story_separator_special_tag abstract the stability of q-gaussian distributions as particular solutions of the linear diffusion equation and its generalized nonlinear form , $\frac{\partial p(x,t)}{\partial t} = d\,\frac{\partial^2 [p(x,t)]^{2-q}}{\partial x^2}$ , the porous-medium equation , is investigated through both numerical and analytical approaches . an analysis of the kurtosis of the distributions strongly suggests that an initial q-gaussian , characterized by an index $q_i$ , approaches asymptotically the final , analytic solution of the porous-medium equation , characterized by an index $q_\infty$ , in such a way that the relaxation rule for the kurtosis evolves in time according to a q-exponential , with a relaxation index $q_{rel} \equiv q_{rel}(q_\infty)$ . in some cases , particularly when one attempts to transform an infinite-variance distribution ( $q_i \ge 5/3$ ) into a finite-variance one ( $q_\infty < 5/3$ ) , the relaxation towards the asymptotic solution may occur very slowly in time . this fact might shed some light on the slow relaxation , for some long-range-interacting many-body hamiltonian systems , from long-standing story_separator_special_tag several indices have been created to measure diversity , and the most frequently used are the shannon-wiener ( h ) and simpson ( d ) indices along with the number of species ( s ) and evenness ( e ) . controversies about which index should be used are common in the literature . however , a generalized entropy ( tsallis entropy ) has the potential to solve part of these problems . here we explore a family of diversity indices ( $s_q$ ; where $q$ is the tsallis index ) and evenness ( $e_q$ ) , based on tsallis entropy , that incorporates the most used indices . it approaches s when $q = 0$ , h when $q \to 1$ and gives d when $q = 2$ . in general , varying the value of the tsallis index ( $q$ ) , $s_q$ varies from emphasis on species richness ( $q < 1$ ) to emphasis on dominance ( $q > 1$ ) . similarly , $e_q$ also works as a tool to investigate diversity . in particular , for a given community , its minimum value represents the maximum deviation from homogeneity ( $e_q = 1$ ) for a particular $q$ ( herein named q* story_separator_special_tag the standard central limit theorem plays a fundamental role in boltzmann-gibbs statistical mechanics . this important physical theory has been generalized [ 1 ] in 1988 by using the entropy $s_q = \frac{1 - \sum_i p_i^q}{q-1}$ ( with $q \in \mathbb{r}$ ) instead of its particular bg case $s_1 = s_{bg} = -\sum_i p_i \ln p_i$ . the theory which emerges is usually referred to as nonextensive statistical mechanics and recovers the standard theory for $q = 1$ . during the last two decades , this q-generalized statistical mechanics has been successfully applied to a considerable amount of physically interesting complex phenomena . a conjecture [ 2 ] and numerical indications available in the literature have been , for a few years , suggesting the possibility of q-versions of the standard central limit theorem story_separator_special_tag recent developments on the generalizations of two important equations of quantum physics , namely the schrödinger and klein-gordon equations , are reviewed .
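the porous-medium / nonlinear fokker-planck family that keeps reappearing in this section is simple to integrate numerically . the following sketch evolves the one-dimensional porous medium equation with an explicit scheme ; the grid , the index nu and the time step are illustrative assumptions , with the step chosen below the usual explicit stability bound .

```python
# a minimal explicit finite-difference sketch for the porous-medium equation
# dp/dt = D * d2( p**nu )/dx2 , whose point-source solutions are the
# compact-support ( q-gaussian ) profiles discussed above . the explicit time
# step must stay below roughly dx**2 / (2 * D * nu * max(p)**(nu - 1)) .
import numpy as np

D, nu = 1.0, 2.0
N, dx, dt, steps = 401, 0.05, 2e-5, 50_000
x = (np.arange(N) - N // 2) * dx
p = np.exp(-(x / 0.1) ** 2)
p /= np.trapz(p, x)                      # narrow initial pulse of unit mass

for _ in range(steps):
    u = p ** nu
    lap = (np.roll(u, 1) - 2.0 * u + np.roll(u, -1)) / dx ** 2
    p = p + dt * D * lap
    p[0] = p[-1] = 0.0                   # edges stay empty ( compact support )

support = x[p > 1e-10]
print("mass:", round(float(np.trapz(p, x)), 4),
      "| support width:", round(float(support[-1] - support[0]), 2))
```

for nu > 1 the solution spreads with a sharp front ( in one dimension the support width grows as t^(1/(nu+1)) ) , while nu = 1 recovers ordinary gaussian diffusion ; this is the compact-support behavior the q-gaussian abstracts above refer to .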
these generalizations present nonlinear terms , characterized by exponents depending on an index q , in such a way that the standard linear equations are recovered in the limit $q \to 1$ . interestingly , these equations present a common , soliton-like , traveling solution , which is written in terms of the q-exponential function that naturally emerges within nonextensive statistical mechanics . in both cases , the corresponding well-known einstein energy-momentum relations , as well as the planck and the de broglie ones , are preserved for arbitrary values of q . in order to deal appropriately with the continuity equation , a classical field theory has been developed , where besides the usual $\psi(x,t)$ , a new field $\phi(x,t)$ must be introduced ; this latter field becomes $\psi^*(x,t)$ only when $q \to 1$ . a class of linear nonhomogeneous schrödinger equations , characterized by position-dependent masses , for which the extra field $\phi(x,t)$ becomes necessary , is also investigated . story_separator_special_tag we briefly review the foundations and applications of statistical mechanics based on the nonadditive entropies $s_q$ . then we address four frequently focused points , namely ( i ) on the form of the constraints within a variational entropy principle ; ( ii ) are the q-indices first-principle-computable quantities or fitting parameters ? ; ( iii ) if one admits violation of the entropic additivity , why not admit also violation of the entropic extensivity ? ; and ( iv ) critical-like behavior . story_separator_special_tag boltzmann introduced in the 1870s a logarithmic measure for the connection between the thermodynamical entropy and the probabilities of the microscopic configurations of the system . his celebrated entropic functional for classical systems was then extended by gibbs to the entire phase space of a many-body system and by von neumann in order to cover quantum systems , as well . finally , it was used by shannon within the theory of information . the simplest expression of this functional corresponds to a discrete set of w microscopic possibilities and is given by $s_{bg} = -k \sum_{i=1}^{w} p_i \ln p_i$ ( $k$ is a positive universal constant ; bg stands for boltzmann-gibbs ) . this relation enables the construction of bg statistical mechanics , which , together with the maxwell equations and classical , quantum and relativistic mechanics , constitutes one of the pillars of contemporary physics . the bg theory has provided uncountable important applications in physics , chemistry , computational sciences , economics , biology , networks and others . as argued in the textbooks , its application in physical systems is legitimate whenever the hypothesis of ergodicity is satisfied , story_separator_special_tag computational applications of the nonextensive entropy $s_q$ and nonextensive statistical mechanics , a current generalization of the boltzmann-gibbs ( bg ) theory , are briefly reviewed . the corresponding bibliography is provided as well . story_separator_special_tag in this paper we review the q-exponential distribution and its properties . distributions of extreme order statistics are obtained . the marshall-olkin q-exponential distribution is developed and studied in detail . estimation of parameters is also discussed . ar ( 1 ) models and max-min ar ( 1 ) models are developed and sample path properties are explored . these can be used for modeling time series data on river flow , dam levels , finance and exchange rates .
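since the q-exponential distribution reviewed above is defined by an explicit density , sampling from it reduces to inverting its cdf . the sketch below assumes the density $p(x) = (2-q)\,\lambda\,[1-(1-q)\lambda x]^{1/(1-q)}$ on $x \ge 0$ with $1 \le q < 2$ , for which the inverse transform is elementary ; the parameter choices are illustrative .

```python
# a minimal inverse-transform sampler for the q-exponential distribution :
# inverting the cdf gives x = (1 - u**((1 - q)/(2 - q))) / ((1 - q) * lam)
# for uniform u , with the ordinary exponential recovered as q -> 1 .
import numpy as np

def q_exponential_sample(q, lam, size, rng):
    u = rng.random(size)
    if abs(q - 1.0) < 1e-12:
        return -np.log(u) / lam          # q -> 1 : ordinary exponential
    a = (1.0 - q) / (2.0 - q)
    return (1.0 - u ** a) / ((1.0 - q) * lam)

rng = np.random.default_rng(3)
for q in (1.0, 1.2, 1.4):
    s = q_exponential_sample(q, 1.0, 200_000, rng)
    # mean = 1 / (lam * (3 - 2q)) for q < 3/2 , diverging beyond that
    print(f"q = {q}: sample mean {s.mean():.3f} vs theory {1 / (3 - 2 * q):.3f}")
```

such samplers are handy for the ar ( 1 ) -type time-series models mentioned above , where q-exponential innovations replace exponential ones .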
story_separator_special_tag we analyze a simple classical hamiltonian system within the hypothesis of renormalizability and isotropy that essentially led maxwell to his ubiquitous gaussian distribution of velocities . we show that the equilibrium-like power-law energy distribution emerging within nonextensive statistical mechanics satisfies these hypotheses , in spite of not being factorizable . a physically satisfactory renormalization group emerges in the $(q , t_q)$ space , where $q$ and $t_q$ respectively are the entropic index characterizing nonextensivity , and an appropriate temperature . this scenario enables the conjectural formulation of the one to be expected for d-dimensional systems involving long-range interactions ( e.g . , a classical two-body potential $\propto r^{-\alpha}$ with $0 \le \alpha/d \le 1$ ) . as a corollary , we recover a quite general expression for the classical principle of equipartition of energy for arbitrary q . story_separator_special_tag nonlinear fokker-planck equations endowed with power-law diffusion terms have proven to be valuable tools for the study of diverse complex systems in physics , biology , and other fields . the nonlinearity appearing in these evolution equations can be interpreted as providing an effective description of a system of particles interacting via short-range forces while performing overdamped motion under the effect of an external confining potential . this point of view has been recently applied to the study of thermodynamical features of interacting vortices in type ii superconductors . in the present work we explore an embedding of the nonlinear fokker-planck equation within a vlasov equation , thus incorporating inertial effects into the concomitant particle dynamics . exact time-dependent solutions of the q-gaussian form ( with compact support ) are obtained for the vlasov equation in the case of quadratic confining potentials . story_separator_special_tag the boltzmann-gibbs ( bg ) entropy and its associated statistical mechanics were generalized , three decades ago , on the basis of the nonadditive entropy $s_q$ ( $q \in \mathbb{r}$ ) , which recovers the bg entropy in the $q \to 1$ limit . the optimization of $s_q$ under appropriate simple constraints straightforwardly yields the so-called q-exponential and q-gaussian distributions , respectively generalizing the exponential and gaussian ones , recovered for $q = 1$ . these generalized functions ubiquitously emerge in complex systems , especially as economic and financial stylized features . these include price returns and volumes distributions , inter-occurrence times , characterization of wealth distributions and associated inequalities , among others . here , we briefly review the basic concepts of this q-statistical generalization and focus on its rapidly growing applications in economics and finance . story_separator_special_tag in this work , the thermodynamic properties of the pseudo-harmonic potential in the presence of external magnetic and aharonov-bohm fields are investigated . the effective boltzmann factor in the superstatistics formalism was used to obtain the thermodynamic properties such as helmholtz free energy , internal energy , entropy and specific heat capacity of the system . in addition , the thermal properties of some selected diatomic molecules of n2 , cl2 , i2 and ch using their experimental spectroscopic parameters and the effect of varying the deformation parameter of q = 0 , 0.3 , 0.7 were duly examined .
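many of the surrounding abstracts revolve around the same nonadditive entropy , so a tiny numerical check is a useful companion . the sketch below computes $s_q$ for a discrete distribution ( with $k = 1$ ) and verifies both the boltzmann-gibbs limit and the pseudo-additivity rule for independent subsystems ; the example distributions are arbitrary .

```python
# a minimal numerical check of the nonadditive entropy
# s_q = (1 - sum_i p_i**q) / (q - 1) , k = 1 : it recovers the shannon /
# boltzmann-gibbs entropy as q -> 1 and obeys the pseudo-additivity
# s_q(a+b) = s_q(a) + s_q(b) + (1 - q) * s_q(a) * s_q(b) for independent parts .
import numpy as np

def s_q(p, q):
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    if abs(q - 1.0) < 1e-12:
        return float(-np.sum(p * np.log(p)))
    return float((1.0 - np.sum(p ** q)) / (q - 1.0))

pa = np.array([0.5, 0.3, 0.2])
pb = np.array([0.6, 0.4])
pab = np.outer(pa, pb).ravel()             # joint law of independent parts

q = 1.7
lhs = s_q(pab, q)
rhs = s_q(pa, q) + s_q(pb, q) + (1 - q) * s_q(pa, q) * s_q(pb, q)
print("pseudo-additivity holds:", bool(np.isclose(lhs, rhs)))
print("q -> 1 limit matches shannon:",
      bool(np.isclose(s_q(pa, 1 + 1e-9), s_q(pa, 1.0))))
```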
story_separator_special_tag abstract the gibbs-jaynes path for introducing statistical mechanics is based on the adoption of a specific entropic form s and of physically appropriate constraints . for instance , for the usual canonical ensemble , one adopts ( i ) $s_1 = -k \sum_i p_i \ln p_i$ , ( ii ) $\sum_i p_i = 1$ , and ( iii ) $\sum_i p_i \epsilon_i = u_1$ ( $\{\epsilon_i\}$ eigenvalues of the hamiltonian ; $u_1$ internal energy ) . equilibrium consists in optimizing $s_1$ with regard to $\{p_i\}$ in the presence of constraints ( ii ) and ( iii ) . within the recently introduced nonextensive statistics , ( i ) is generalized into $s_q = k\,[1 - \sum_i p_i^q]/[q-1]$ ( $q \to 1$ reproduces $s_1$ ) , ( ii ) is maintained , and ( iii ) is generalized in a manner which might involve $p_i^q$ . in the present effort , we analyze the consequences of some special choices for ( iii ) , and their formal and practical implications for the various physical systems that have been studied in the literature . to illustrate some mathematically relevant points , we story_separator_special_tag the entropy time rate of systems described by nonlinear fokker-planck equations -- which are directly related to generalized entropic forms -- is analyzed . both entropy production , associated with irreversible processes , and entropy flux from the system to its surroundings are studied . some examples of known generalized entropic forms are considered , and particularly , the flux and production of the boltzmann-gibbs entropy , obtained from the linear fokker-planck equation , are recovered as particular cases . since nonlinear fokker-planck equations are appropriate for the dynamical behavior of several physical phenomena in nature , like many within the realm of complex systems , the present analysis should be applicable to irreversible processes in a large class of nonlinear systems , such as those described by tsallis and kaniadakis entropies . story_separator_special_tag the nonlinear diffusion equation $\partial\rho/\partial t = D\,\tilde{\Delta}\rho^{\nu}$ is analyzed here , where $\tilde{\Delta} \equiv (1/r^{d-1})(\partial/\partial r)\,r^{d-1-\theta}\,\partial/\partial r$ , and $D$ , $\theta$ , and $\nu$ are real parameters . this equation unifies the anomalous diffusion equation on fractals ( $\nu = 1$ ) and the spherical anomalous diffusion for porous media ( $\theta = 0$ ) . an exact point-source solution is obtained , enabling us story_separator_special_tag nonlinear fokker-planck equations ( e.g . , the diffusion equation for porous medium ) are important candidates for describing anomalous diffusion in a variety of systems . in this paper we introduce such nonlinear fokker-planck equations with general state-dependent diffusion , thus significantly generalizing the case of constant diffusion which has been discussed previously . an approximate maximum entropy ( maxent ) approach based on the tsallis nonextensive entropy is developed for the study of these equations .
the maxent solutions are shown to preserve the functional relation between the time derivative of the entropy and the time dependent solution . in some particular important cases of diffusion with power-law multiplicative noise , our maxent scheme provides exact time dependent solutions . we also prove that the stationary solutions of the nonlinear fokker-planck equation with diffusion of the ( generalized ) stratonovich type exhibit the tsallis maxent form . story_separator_special_tag we investigate asymptotically the occurrence of anomalous diffusion and its associated family of statistical evolution equations . starting from a non-markovian process à la langevin we show that the mean probability distribution of the displacement of a particle follows a generalized non-linear fokker-planck equation . thus we show that the anomalous behavior can be linked to a fast fluctuation process with memory at the microscopic dynamics level , and slow fluctuations of the dissipative variable . the general results can be applied to a wide range of physical systems that present a departure from the brownian regime . story_separator_special_tag abstract the objective of the proposed method is to utilize a site investigation of a debris flow disaster and verify a real scale analysis to evaluate the impulsive load on an open sabo dam . the nagiso debris flow disaster , which occurred in nagano in 2014 , where damage was caused by typhoon neoguri , was studied . the verification result of the site investigation demonstrated that the weak components of the open sabo dam experienced damage owing to the debris flow . a discrete element method is normally applied to a solid body to calculate an interaction force with respect to the contact point between boulders and the dam . the numerical method initially concatenates elements that model the open sabo dam . moreover , the stiffness coefficient of flanges and coupling joints between pipes was expressed using the sectional partition method to determine the structural characteristics . the method was improved to allow separation from the connecting elements beyond the boundary conditions . the debris flow model uses a water flow distribution model , and the debris flow flowed from 200 m upstream of the open sabo dam . accordingly , the proposed method was examined to verify the primary cause story_separator_special_tag as well known , boltzmann-gibbs statistics is the correct way of thermostatistically approaching ergodic systems . on the other hand , nontrivial ergodicity breakdown and strong correlations typically drag the system into out-of-equilibrium states where boltzmann-gibbs statistics fails . for a wide class of such systems , it has been shown in recent years that the correct approach is to use tsallis statistics instead . here we show how the dynamics of the paradigmatic conservative ( area-preserving ) standard map exhibits , in an exceptionally clear manner , the crossing from one statistics to the other . our results unambiguously illustrate the domains of validity of both boltzmann-gibbs and tsallis statistics . story_separator_special_tag we scrutinize the anomalies in diffusion observed in an extended long-range system of classical rotors , the hmf model . under suitable preparation , the system falls into long-lived quasi-stationary states for which superdiffusion of rotor phases has been reported . in the present work , we investigate diffusive motion by monitoring the evolution of full distributions of unfolded phases .
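in the spirit of the histogram analysis that follows , the sketch below fits the q-gaussian form $p(x) \propto [1 + (q-1)(x/\beta)^2]^{1/(1-q)}$ to an empirical histogram ; the synthetic student-t data ( which belongs to the $q = 3/2$ family ) merely stands in for distributions of unfolded phases , and the initial guess handed to the fitter is an assumption .

```python
import numpy as np
from scipy.optimize import curve_fit

# fit of the q-gaussian p(x) = a * [1 + (q-1)(x/beta)^2]**(1/(1-q)) to a
# histogram; synthetic student-t data with 3 degrees of freedom is used,
# which corresponds to the q = 3/2 family.
def q_gaussian(x, q, beta, a):
    return a * (1.0 + (q - 1.0) * (x / beta) ** 2) ** (1.0 / (1.0 - q))

rng = np.random.default_rng(0)
data = rng.standard_t(df=3, size=200_000)
hist, edges = np.histogram(data, bins=201, range=(-15, 15), density=True)
centers = 0.5 * (edges[:-1] + edges[1:])

popt, _ = curve_fit(q_gaussian, centers, hist, p0=(1.5, 1.0, 0.4))
print("fitted q    =", popt[0])           # should come out near 1.5
print("fitted beta =", popt[1])
```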
after a transient , numerical histograms can be fitted by the q-gaussian form $p(x) \propto \{1 + (q-1) [x/\beta]^2\}^{1/(1-q)}$ , with the parameter $q$ increasing with time before reaching a steady value $q \simeq 3/2$ ( squared lorentzian ) . from the analysis of the second moment of the numerical distributions , we also discuss the relaxation to equilibrium and show that diffusive motion in quasistationary trajectories depends strongly on system size . story_separator_special_tag consequences of the connection between nonlinear fokker-planck equations and entropic forms are investigated . a particular emphasis is given to the feature that different nonlinear fokker-planck equations can be arranged into classes associated with the same entropic form and its corresponding stationary state . through numerical integration , the time evolution of the solution of nonlinear fokker-planck equations related to the boltzmann-gibbs and tsallis entropies are analyzed . the time behavior in both stages , in a time much smaller than the one required for reaching the stationary state , as well as towards the relaxation to the stationary state , are of particular interest . in the former case , by using the concept of classes of nonlinear fokker-planck equations , a rich variety of physical behavior may be found , with some curious situations , like an anomalous diffusion within the class related to the boltzmann-gibbs entropy , as well as a normal diffusion within the class of equations related to the tsallis entropy . in addition to that , the relaxation towards the stationary state may present a behavior different from most of the systems studied in the literature . story_separator_special_tag in multicellular organisms , cell motility is central in all morphogenetic processes , tissue maintenance , wound healing and immune surveillance . hence , failures in its regulation potentiate numerous diseases . here , cell migration assays on plastic 2d surfaces were performed using normal ( melan-a ) and tumoral ( b16f10 ) murine melanocytes in random motility conditions . the trajectories of the centroids of the cell perimeters were tracked through time-lapse microscopy . the statistics of these trajectories were analyzed by building velocity and turn angle distributions , as well as velocity autocorrelations and the scaling of mean-squared displacements . we find that these cells exhibit a crossover from a normal to a super-diffusive motion without angular persistence at long time scales . moreover , these melanocytes move with non-gaussian velocity distributions . this major finding indicates that amongst those animal cells supposedly migrating through lévy walks , some may instead perform q-gaussian walks . furthermore , our results reveal that b16f10 cells infected by mycoplasmas exhibit essentially the same diffusivity as their healthy counterparts . finally , a q-gaussian random walk model was proposed to account for these melanocytic migratory traits . simulations based story_separator_special_tag signaling through the ror2 receptor tyrosine kinase promotes invadopodia formation for tumor invasion . here , we identify intraflagellar transport 20 ( ift20 ) as a new target of this signaling in tumors that lack primary cilia , and find that ift20 mediates the ability of ror2 signaling to induce the invasiveness of these tumors .
we also find that ift20 regulates the nucleation of golgi-derived microtubules by affecting the gm130-akap450 complex , which promotes golgi ribbon formation in achieving polarized secretion for cell migration and invasion . furthermore , ift20 promotes the efficiency of transport through the golgi complex . these findings provide new insights into how ror2 signaling promotes tumor invasiveness , and also advance the understanding of how golgi structure and transport can be regulated . story_separator_special_tag by introducing fractional gaussian noise into the generalized langevin equation , the subdiffusion of a particle can be described as a stationary gaussian process with analytical tractability . this model is capable of explaining the equilibrium fluctuation of the distance between an electron transfer donor and acceptor pair within a protein that spans a broad range of time scales , and is in excellent agreement with a single-molecule experiment . story_separator_special_tag we apply the reproducing kernel hilbert space method to a class of nonlinear systems of partial differential equations and to the problem of obtaining multiple solutions of second order differential equations . we reach meaningful results , which are depicted in figures . this method is a very effective technique for solving nonlinear systems of partial differential equations and second order differential equations . story_separator_special_tag abstract we consider the unified transform method , also known as the fokas method , for solving partial differential equations . we adapt and modify the methodology , incorporating new ideas where necessary , in order to apply it to solve a large class of partial differential equations of fractional order . we demonstrate the applicability of the method by implementing it to solve a model fractional problem . story_separator_special_tag we extend the fractional caputo-fabrizio derivative of order $0 \le \sigma < 1$ on $C_{\mathbb{R}}[0,1]$ and investigate two higher-order series-type fractional differential equations involving the extended derivation . also , we provide an example to illustrate one of the main results . story_separator_special_tag by using the fractional caputo-fabrizio derivative , we introduce two new types of high order derivations called cfd and dcf . also , we study the existence of solutions for two such types of high order fractional integro-differential equations . we illustrate our results by providing two examples . story_separator_special_tag in this manuscript , we prove new aspects of several opial-type integral inequalities for the left and right caputo-fabrizio operators with nonsingular kernel . for this purpose we use the inequalities obtained by andric et al . ( integral transforms spec . funct . 25 ( 4 ) : 324-335 , 2014 ) , which generalize an inequality of agarwal and pang ( opial inequalities with applications in differential and difference equations , 1995 ) . besides , examples are presented to validate the reported results . story_separator_special_tag the solutions of systems of linear fractional differential equations of incommensurate orders are considered and analytic expressions for the solutions are given by using the laplace transform and multi-variable mittag-leffler functions of matrix arguments . we verify the result with numeric solutions of an example . the results show that the mittag-leffler functions are important tools for the analysis of a fractional system .
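a minimal sketch of this point : the one-parameter mittag-leffler function can be evaluated by its truncated power series ( adequate only for moderate arguments ; the papers above use more robust multi-variable and matrix versions ) , and it solves the scalar fractional relaxation equation in closed form . all parameter values here are illustrative .

```python
import math

# one-parameter mittag-leffler function by truncated series,
# E_a(z) = sum_k z**k / gamma(a*k + 1); fine for moderate |z| only.
def ml(z, a, n_terms=120):
    return sum(z**k / math.gamma(a * k + 1.0) for k in range(n_terms))

# the caputo relaxation equation d^a x/dt^a = -lam * x has the closed-form
# solution x(t) = x0 * E_a(-lam * t**a).
a, lam, x0 = 0.8, 1.0, 1.0
for t in (0.1, 0.5, 1.0, 2.0, 5.0):
    print(f"t = {t:3.1f}   x(t) = {x0 * ml(-lam * t**a, a):.6f}")

# sanity check: a = 1 recovers the ordinary exponential decay.
assert abs(ml(-2.0, 1.0) - math.exp(-2.0)) < 1e-12
```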
the analytic solutions obtained are easy to program and are approximated by symbolic computation software such as mathematica . story_separator_special_tag in this paper , solutions for systems of linear fractional differential equations are considered . for the commensurate order case , solutions in terms of matrix mittag-leffler functions were derived by the picard iterative process . for the incommensurate order case , the system was converted to a commensurate order case by newly introducing unknown functions . computation of matrix mittag-leffler functions was considered using the methods of the jordan canonical matrix and the minimal polynomial or eigenpolynomial , respectively . finally , numerical examples were solved using the proposed methods . story_separator_special_tag abstract using the fundamental theorem of fractional calculus together with the well-known lagrange polynomial interpolation , we constructed a new numerical scheme . the new numerical scheme is suggested to solve non-linear and linear partial differential equations with fractional order derivatives . the method was used to solve numerically the time fractional keller-segel model . the existence and uniqueness of the solution of the model with fractional mittag-leffler kernel derivative are presented in detail . some simulations are performed to assess the efficiency of the newly proposed method . story_separator_special_tag abstract in this paper , we have generalized the time-fractional telegraph equation involving operators with mittag-leffler kernel of variable order in the liouville-caputo sense . the fractional variable-order equation was solved numerically via the crank-nicolson scheme . we present the existence and uniqueness of the solution . numerical simulations of the special solutions were done and new behaviors are obtained . story_separator_special_tag a new numerical scheme for fractional differentiation is developed in this paper . the powerful numerical scheme known as upwind is used to establish a numerical approximation for the riemann-liouville and caputo fractional operators . using the crank-nicolson approach and the upwind first-order and second-order approximations , a new numerical scheme is developed . the new numerical approximation is then used to approximate the riemann-liouville fractional derivative for $0 < \alpha < 1$ and $1 < \alpha < 2$ . a detailed numerical analysis to prove the convergence and accuracy of the new numerical scheme is presented . the new approximation is then applied to solve numerically the advection equation . this new numerical scheme will be very useful for solving fractional differential equations . story_separator_special_tag abstract we presented an analysis of evolution equations generated by three fractional derivatives , namely the riemann-liouville , caputo-fabrizio and atangana-baleanu fractional derivatives . for each evolution equation , we presented the exact solution for the time variable and studied the semigroup principle . the riemann-liouville fractional operator verifies the semigroup principle but the associated evolution equation does not . the caputo-fabrizio fractional derivative does not satisfy the semigroup principle but , surprisingly , the exact solution satisfies very well all the principles of semigroups . however , the atangana-baleanu derivative for small time is the stretched exponential derivative , which does not satisfy the semigroup property as an operator .
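as a concrete baseline for the finite-difference approximation of fractional derivatives discussed in the schemes above , the sketch below implements the classical grunwald-letnikov discretization of the riemann-liouville derivative and checks it against the exact derivative of $f(t) = t$ ; it is only an elementary illustration , not the upwind / crank-nicolson construction of the paper .

```python
import math
import numpy as np

# grunwald-letnikov discretization of the riemann-liouville derivative:
# d^a f(t_i) ~ h**(-a) * sum_k g_k * f(t_{i-k}), with g_0 = 1 and the
# recurrence g_k = g_{k-1} * (1 - (a + 1) / k).
def gl_derivative(f_vals, h, a):
    n = len(f_vals)
    g = np.empty(n)
    g[0] = 1.0
    for k in range(1, n):
        g[k] = g[k - 1] * (1.0 - (a + 1.0) / k)
    out = np.empty(n)
    for i in range(n):
        out[i] = np.dot(g[: i + 1], f_vals[i::-1]) / h**a
    return out

# check against the exact result d^a t = t**(1-a) / gamma(2-a) for f(t) = t.
a, h = 0.5, 1e-3
t = np.arange(1, 2001) * h
err = gl_derivative(t, h, a) - t ** (1 - a) / math.gamma(2 - a)
print("max abs error:", np.abs(err).max())
```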
for a large time the atangana-baleanu derivative is the same as the riemann-liouville fractional derivative , and thus satisfies the semigroup principle as an operator . the solution of the associated evolution equation does not satisfy the semigroup principle , as in the riemann-liouville case . from the connection between semigroup theory and markovian processes , we found that the atangana-baleanu fractional derivative involves at the same time markovian and non-markovian processes . we concluded that the fractional differential operator does not need to satisfy the semigroup properties story_separator_special_tag this paper presents a novel method that allows one to generalise the use of the adam-bashforth scheme to partial differential equations with local and non-local operators . the method derives a two step adam-bashforth numerical scheme in laplace space and the solution is taken back into the real space via the inverse laplace transform . the method yields a powerful numerical algorithm for fractional order derivatives where the usually very difficult to manage summation in the numerical scheme disappears . error analysis of the method is also presented . applications of the method and numerical simulations are presented for a wave-equation-like equation and for a fractional order diffusion equation . story_separator_special_tag abstract in this paper , we develop a new numerical algorithm for solving the riesz tempered space fractional diffusion equation . the stability and convergence of the numerical scheme are discussed via the technique of matrix analysis . finally , numerical experiments are performed to confirm the effectiveness of our numerical algorithm . story_separator_special_tag meerschaert and sabzikar [ 12 ] , [ 13 ] introduced tempered fractional brownian/stable motion ( tfbm/tfsm ) by including an exponential tempering factor in the moving average representation of fbm/fsm . the present paper discusses another tempered version of fbm/fsm , termed tempered fractional brownian/stable motion of second kind ( tfbm ii/tfsm ii ) . we prove that tfbm/tfsm and tfbm ii/tfsm ii are different processes . particularly , large time properties of tfbm ii/tfsm ii are similar to those of fbm/fsm and are in deep contrast to large time properties of tfbm/tfsm . story_separator_special_tag we discuss invariance principles for autoregressive tempered fractionally integrated moving averages with $\alpha$-stable $(1 < \alpha \le 2)$ i.i.d . innovations and related tempered linear processes with vanishing tempering parameter $\lambda \sim \lambda_*/n$ . we show that the limit of the partial sums process takes a different form in the weakly tempered ( $\lambda_* = 0$ ) , strongly tempered ( $\lambda_* = \infty$ ) , and moderately tempered ( $0 < \lambda_* < \infty$ ) cases . these results are used to derive the limit distribution of the ols estimate of the ar ( 1 ) unit root with weakly , strongly , and moderately tempered moving average errors . story_separator_special_tag abstract we study a generalized langevin equation for a free particle in the presence of a truncated power-law and mittag-leffler memory kernel . it is shown that in the presence of truncation , the particle turns from subdiffusive behavior in the short time limit to normal diffusion in the long time limit . the case of the harmonic oscillator is considered as well , and the relaxation functions and the normalized displacement correlation function are represented in exact form .
by considering an external time-dependent periodic force we obtain resonant behavior even in the case of a free particle , due to the influence of the environment on the particle's movement . additionally , the double-peak phenomenon in the imaginary part of the complex susceptibility is observed . the truncation parameter is found to have a strong influence on the behavior of these quantities , and it is shown how the truncation parameter changes the critical frequencies . the normalized displacement correlation function for a fractional generalized langevin equation is investigated as well . all the results are exact and given in terms of the three parameter mittag-leffler function and the prabhakar generalized integral operator , which in the kernel contains a three story_separator_special_tag for characterizing brownian motion in a bounded domain $\Omega$ , it is well known that the boundary conditions of the classical diffusion equation just rely on the given information of the solution along the boundary of the domain ; on the contrary , for lévy flights or tempered lévy flights in a bounded domain , one needs the information of the solution in the complementary set of $\Omega$ , i.e . , $\mathbb{R}^n \backslash \Omega$ , the underlying reason being that the paths of the corresponding stochastic process are discontinuous . guided by probability intuitions and the stochastic perspectives of anomalous diffusion , we show reasonable ways , ensuring clear physical meaning and well-posedness of the partial differential equations ( pdes ) , of specifying ' boundary ' conditions for space fractional pdes modeling anomalous diffusion . some properties of the operators are discussed , and the well-posedness of the pdes with generalized boundary conditions is proved . story_separator_special_tag the fractional laplacian $\Delta^{\beta/2}$ is the generator of the $\beta$-stable lévy process , which is the scaling limit of the lévy flight . due to the divergence of the second moment of the jum story_separator_special_tag starting from the cattaneo constitutive relation with a jeffrey's kernel , the derivation of a transient heat diffusion equation with a relaxation term expressed through the caputo-fabrizio time fractional derivative has been developed . this approach allows one to see the physical background of the newly defined caputo-fabrizio time fractional derivative and demonstrates how other constitutive equations could be modified with non-singular fading memories . story_separator_special_tag this chapter summarizes recent results on approximate analytical integral-balance solutions of initial-boundary value problems for the spatial-fractional partial differential diffusion equation with the riemann-liouville fractional derivative in space . the approximate method is based on two principal steps : the integral-balance method and a series expansion of an assumed parabolic profile with undefined exponent . the spatial correlation of the superdiffusion coefficient in two power-law forms has been discussed . the law of the spatial and temporal propagation of the solution is the primary issue . approximate solutions based on an assumed parabolic profile with unspecified exponent have been developed . story_separator_special_tag approximate explicit analytical solutions of the heat radiation diffusion equation by applying the double integration technique of the integral-balance method have been developed .
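a minimal sketch of the simpler single-integration variant of the heat-balance integral method for the classical linear problem $u_t = a u_{xx}$ with a unit step in surface temperature : the integral balance fixes the penetration depth as $\delta(t) = \sqrt{2 n (n+1) a t}$ for the assumed profile $(1 - x/\delta)^n$ . the double-integration refinement used in the abstract above is analogous but not reproduced here , and the exponent $n = 2$ is an arbitrary illustrative choice .

```python
import numpy as np
from scipy.special import erfc

# single-integration heat-balance integral method for u_t = a u_xx with a
# unit step in surface temperature: assumed profile u = (1 - x/delta)**n,
# penetration depth delta(t) = sqrt(2 n (n+1) a t) from the integral balance.
a, n, t = 1.0, 2.0, 1.0
delta = np.sqrt(2.0 * n * (n + 1.0) * a * t)

x = np.linspace(0.0, delta, 9)
u_hbim = (1.0 - x / delta) ** n
u_exact = erfc(x / (2.0 * np.sqrt(a * t)))   # classical similarity solution

for xi, ua, ue in zip(x, u_hbim, u_exact):
    print(f"x = {xi:5.2f}   hbim = {ua:5.3f}   exact = {ue:5.3f}")
```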
the method allows approximate closed form solutions to be developed . a problem with a step change of the surface temperature and two problems with time dependent boundary conditions have been solved . the error minimization of the approximate solutions has been developed straightforwardly by minimization of the residual function of the governing equation . story_separator_special_tag abstract fractional-derivative models have been developed recently to interpret various hydrologic dynamics , such as dissolved contaminant transport in groundwater . however , they have not been applied to quantify other fluid dynamics , such as gas transport through complex geological media . this study reviewed previous gas transport experiments conducted in laboratory columns and real-world oil/gas reservoirs and found that gas dynamics exhibit typical sub-diffusive behavior characterized by heavy late-time tailing in the gas breakthrough curves ( btcs ) , which cannot be effectively captured by classical transport models . numerical tests and field applications of the time fractional convection diffusion equation ( fcde ) have shown that the fcde model can capture the observed gas btcs including their apparent positive skewness . sensitivity analysis further revealed that the three parameters used in the fcde model , including the time index , the convection velocity , and the diffusion coefficient , play different roles in interpreting the delayed gas transport dynamics . in addition , the model comparison and analysis showed that the time fcde model is efficient in application . therefore , time fractional-derivative models can be conveniently extended to quantify gas transport through natural story_separator_special_tag abstract this study mainly explores the gas transport process in heterogeneous media , laying the foundation for oil-gas exploitation and development . anomalous transport is observed to be ubiquitous in complex geological formations and has a paramount impact on petroleum engineering . simultaneously , the random motion of particles usually exhibits obvious path- and history-dependent behaviors . this paper investigates time-space fractional derivative models as a potential explanation for the time memory of long waiting times and the space non-locality at large regional scales . a one-dimensional fractional advection-dispersion equation ( fade ) based on the fractional fick's law is first used to accurately describe the transport of carbon dioxide ( co2 ) in complex media . a new fractional darcy-advection-dispersion equation ( fdade ) model has subsequently been proposed to make a comparison with the fade model and demonstrate its physical mechanism . finally , the a priori estimation of the parameters ( fractional derivative index ) in fractional derivative models and the corresponding physical explanation are presented . combined with experimental data , the numerical simulations show that the fractional derivative models can well characterize the heavy-tailed and early breakthrough phenomena of co2 transport . story_separator_special_tag abstract non-fickian or anomalous diffusion has been well documented in material transport through heterogeneous systems at all scales , whose dynamics can be quantified by time fractional derivative equations ( fdes ) . while analytical or numerical solutions have been developed for the standard time fde in bounded domains , the standard time fde suffers from a singularity issue due to its power-law function kernel .
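the singularity issue can be made concrete by comparing the memory kernels themselves near $t \to 0$ ; the sketch below evaluates the power-law ( riemann-liouville / caputo ) , exponential ( caputo-fabrizio ) and mittag-leffler ( atangana-baleanu ) kernels . setting the normalization functions to 1 is a simplifying assumption , not the convention of any particular paper above .

```python
import math

# memory kernels of three fractional derivatives near t -> 0, with all
# normalization functions set to 1: the power-law kernel diverges, the
# exponential and mittag-leffler kernels stay bounded.
def ml(z, a, n_terms=80):
    return sum(z**k / math.gamma(a * k + 1.0) for k in range(n_terms))

a = 0.7
for t in (1e-3, 1e-2, 1e-1, 1.0):
    k_pl = t ** (-a) / math.gamma(1.0 - a)            # riemann-liouville / caputo
    k_cf = math.exp(-a * t / (1.0 - a)) / (1.0 - a)   # caputo-fabrizio
    k_ab = ml(-a * t**a / (1.0 - a), a) / (1.0 - a)   # atangana-baleanu
    print(f"t = {t:7.3f}   power-law = {k_pl:10.3f}   cf = {k_cf:6.3f}   ab = {k_ab:6.3f}")
```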
this study aimed at deriving analytical solutions for time fde models with a modified kernel in bounded domains . the mittag-leffler function was selected as the alternate kernel to improve on the standard power-law function in defining the time fractional derivative , since it is known to be able to overcome the singularity issue of the standard fractional derivative . results showed that the method of variable separation can be applied to derive analytical solutions for various time fdes with absorbing and/or reflecting boundary conditions . finally , numerical examples with a detailed comparison of fdes with different kernels showed that the models and solutions obtained in this study can capture anomalous diffusion in bounded domains . story_separator_special_tag in this work , we show that under specific anomalous diffusion conditions , chemical systems can produce well-ordered self-similar concentration patterns through diffusion-driven instability . we also find spiral patterns and patterns with mixtures of rotational symmetries . the type of anomalous diffusion discussed in this work , either subdiffusion or superdiffusion , is a consequence of the medium heterogeneity , and it is modeled through a space-dependent diffusion coefficient with a power-law functional form . story_separator_special_tag in this work , we study the diabetes model and its complications with the caputo-fabrizio fractional derivative . a deterministic mathematical model pertaining to the fractional derivative of diabetes mellitus is discussed . the analytical solution of the diabetes model is derived by applying the homotopy analysis method , the laplace transform and the pade approximation . moreover , existence and uniqueness of the solution are examined by making use of fixed point theory and the picard-lindelöf approach . ultimately , some numerical simulations are performed to illustrate the obtained results . story_separator_special_tag in this paper we obtain approximate bound state solutions of the $n$-dimensional fractional time independent schrödinger equation for the generalised mie-type potential , namely $V(r^{\alpha}) = \frac{A}{r^{2\alpha}} + \frac{B}{r^{\alpha}} + C$ . here $\alpha$ ( $0 < \alpha < 1$ ) acts like a fractional parameter for the space variable $r$ . when $\alpha = 1$ the potential converts into the original form of the mie-type potential that is generally studied in molecular and chemical physics . the entire study is carried out with the jumarie-type fractional derivative approach . the solution is expressed via the mittag-leffler function and the fractionally defined confluent hypergeometric function . to ensure the validity of the present work , the obtained results are verified with previous works for different potential parameter configurations , especially for $\alpha = 1$ . at the end , a few numerical calculations for the energy eigenvalue and bound state eigenfunctions are furnished for a typical diatomic molecule . story_separator_special_tag abstract fractional calculus is at this stage an arena where many models are still to be introduced , discussed and applied to real world applications in many branches of science and engineering where nonlocality plays a crucial role . although researchers have already reported many excellent results in several seminal monographs and review articles , there are still a large number of non-local phenomena unexplored and waiting to be discovered .
therefore , year by year , we can discover new aspects of fractional modeling and applications . this review article aims to present some short summaries written by distinguished researchers in the field of fractional calculus . we believe this incomplete , but important , information will guide young researchers and help newcomers to see some of the main real-world applications and gain an understanding of this powerful mathematical tool . we expect this collection will also benefit our community . story_separator_special_tag the boltzmann-gibbs-von neumann entropy of a large part ( of linear size $L$ ) of some ( much larger ) $d$-dimensional quantum systems follows the so-called area law ( as for black holes ) , i.e . , it is proportional to $L^{d-1}$ . here we show , for $d = 1 , 2$ , that the ( nonadditive ) entropy $S_q$ satisfies , for a special value of $q \neq 1$ , the classical thermodynamical prescription for the entropy to be extensive , i.e . , $S_q \propto L^d$ . therefore , we reconcile with classical thermodynamics the area law widespread in quantum systems . recently , a similar behavior was exhibited , by m. gell-mann , y. sato and one of us ( c.t . ) , in mathematical models with scale-invariant correlations . finally , we find that the system critical features are marked by a maximum of the special entropic index $q$ . story_separator_special_tag the nonextensive kinetic theory for degenerate quantum gases is discussed in the general relativistic framework . by incorporating nonadditive modifications in the collisional term of the relativistic boltzmann equation and the entropy current , it is shown that the tsallis entropic framework satisfies an h-theorem in the presence of gravitational fields . consistency with the second law of thermodynamics is obtained only if the entropic q-parameter lies in the interval $q \in [0,2]$ . as occurs in the absence of gravitational fields , it is also proved that the local collisional equilibrium is described by the extended bose-einstein ( fermi-dirac ) q-distributions . story_separator_special_tag distributions derived from non-extensive tsallis statistics are closely connected with dynamics described by a nonlinear fokker-planck equation . the combination shows promise in describing stochastic processes with power-law distributions and superdiffusive dynamics . we investigate intra-day price changes in the s&p500 stock index within this framework by direct analysis and by simulation . we find that the power-law tails of the distributions , and the index's anomalously diffusing dynamics , are very accurately described by this approach . our results show good agreement between market data , fokker-planck dynamics , and simulation . thus the combination of the tsallis non-extensive entropy and the nonlinear fokker-planck equation unites in a very natural way the power-law tails of the distributions and their superdiffusive dynamics .
story_separator_special_tag abstract recent works have associated systems of particles , characterized by short-range repulsive interactions and evolving under overdamped motion , to a nonlinear fokker-planck equation within the class of nonextensive statistical mechanics , with a nonlinear diffusion contribution whose exponent is given by $\nu = 2 - q$ . the particular case $\nu = 2$ applies to interacting vortices in type-ii superconductors , whereas $\nu > 2$ covers systems of particles characterized by short-range power-law interactions , where correlations among particles are taken into account . in the former case , several studies presented a consistent thermodynamic framework based on the definition of an effective temperature ( presenting experimental values much higher than typical room temperatures $T$ , so that thermal noise could be neglected ) , conjugated to a generalized entropy $S_{\nu}$ ( with $\nu = 2$ ) . herein , the whole thermodynamic scheme is revisited and extended to systems of particles interacting repulsively , through short-ranged potentials , described by an entropy $S_{\nu}$ , with $\nu > 1$ , covering the $\nu = 2$ ( vortices in type-ii superconductors ) and $\nu > 2$ ( short-range power-law interactions ) physical examples . one basic requirement concerns a cutoff in the equilibrium distribution $p_{\mathrm{eq}}$ story_separator_special_tag we demonstrate the non-ergodicity of a simple markovian stochastic process with space-dependent diffusion coefficient $D(x)$ . for power-law forms $D(x) \simeq |x|^{\alpha}$ , this process yields anomalous diffusion of the form $\langle x^2(t) \rangle \simeq t^{2/(2-\alpha)}$ . interestingly , in both the sub- and superdiffusive regimes we observe weak ergodicity breaking : the scaling of the time-averaged mean-squared displacement $\overline{\delta^2}(\Delta)$ remains linear in the lag time $\Delta$ and thus differs from the corresponding ensemble average $\langle x^2(t) \rangle$ . we analyse the non-ergodic behaviour of this process in terms of the time-averaged mean-squared displacement $\overline{\delta^2}$ and its random features , i.e . the statistical distribution of $\overline{\delta^2}$ and the ergodicity breaking parameters . the heterogeneous diffusion model represents an alternative approach to non-ergodic , anomalous diffusion that might be particularly relevant for diffusion in heterogeneous media . story_separator_special_tag combining extensive single particle tracking microscopy data of endogenous lipid granules in living fission yeast cells with analytical results we show evidence for anomalous diffusion and weak ergodicity breaking .
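the heterogeneous diffusion model described above is easy to probe numerically ; the sketch below integrates a langevin equation with $D(x) = D_0 |x|^{\alpha}$ and compares the ensemble-averaged with the time-averaged mean-squared displacement . the euler update , the regularizing offset at the origin , and all parameters are simulation choices , and the stochastic interpretation of the multiplicative noise matters for quantitative statements .

```python
import numpy as np

# heterogeneous diffusion process with D(x) = d0 * |x|**alpha; ensemble
# versus time-averaged msd. euler stepping and the offset x_off that
# regularizes the origin are simulation choices, not from the paper.
rng = np.random.default_rng(1)
d0, alpha, dt, nsteps, ntraj, x_off = 1.0, 1.0, 1e-3, 5000, 500, 1e-2
x0 = 0.1

x = np.full(ntraj, x0)
traj = np.empty((nsteps, ntraj))
for i in range(nsteps):
    D = d0 * (np.abs(x) + x_off) ** alpha
    x = x + np.sqrt(2.0 * D * dt) * rng.standard_normal(ntraj)
    traj[i] = x

t = dt * np.arange(1, nsteps + 1)
emsd = ((traj - x0) ** 2).mean(axis=1)              # ensemble average
lag = 100
tamsd = ((traj[lag:] - traj[:-lag]) ** 2).mean()    # time + ensemble average
slope = np.polyfit(np.log(t[500:]), np.log(emsd[500:]), 1)[0]
print("ensemble msd exponent ~", slope)             # superlinear for alpha > 0
print("ta-msd at lag", lag * dt, ":", tamsd)
```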
namely we demonstrate that at short times the granules perform subdiffusion according to the laws of continuous time random walk theory . the associated violation of ergodicity leads to a characteristic turnover between two scaling regimes of the time averaged mean squared displacement . at longer times the granule motion is consistent with fractional brownian motion . story_separator_special_tag recent advances in single particle tracking and supercomputing techniques demonstrate the emergence of normal or anomalous , viscoelastic diffusion in conjunction with non-gaussian distributions in soft , biological , and active matter systems . we here formulate a stochastic model based on a generalised langevin equation in which non-gaussian shapes of the probability density function and normal or anomalous diffusion have a common origin , namely a random parametrisation of the stochastic force . we perform a detailed analysis demonstrating how various types of parameter distributions for the memory kernel result in exponential , power law , or power-log law tails of the memory functions . the studied system is also shown to exhibit a further unusual property : the velocity has a gaussian one point probability density but non-gaussian joint distributions . this behaviour is reflected in the relaxation from a gaussian to a non-gaussian distribution observed for the position variable . we show that story_separator_special_tag the problem of biological motion is a very intriguing and topical issue . many efforts are being focused on the development of novel modelling approaches for the description of anomalous diffusion in biological systems , such as the very complex and heterogeneous cell environment . nevertheless , many questions are still open , such as the joint manifestation of statistical features in agreement with different models that can also be somewhat alternative to each other , e.g . continuous time random walk and fractional brownian motion . to overcome these limitations , we propose a stochastic diffusion model with additive noise and linear friction force ( linear langevin equation ) , thus involving the explicit modelling of velocity dynamics . the complexity of the medium is parametrized via a population of intensity parameters ( relaxation time and diffusivity of velocity ) , thus introducing an additional randomness , in addition to white noise , in the particle's dynamics . we prove that , for proper distributions of these parameters , we can get both gaussian anomalous diffusion , fractional diffusion and its generalizations . story_separator_special_tag abstract the study of the dynamics of biological systems requires one to follow relaxation processes in time with micron-size spatial resolution . this need has led to the development of different fluorescence correlation techniques with high spatial resolution and a tremendous ( from nanoseconds to seconds ) temporal dynamic range . spatiotemporal information can be obtained even on complex dynamic processes whose time evolution is not forecast by simple brownian diffusion . our discussion of the most recent applications of image correlation spectroscopy to the study of anomalous sub- or superdiffusion suggests that this field still requires the development of multidimensional image analyses based on analytical models or numerical simulations .
we focus in particular on the framework of spatiotemporal image correlation spectroscopy and examine the critical steps in getting information on anomalous diffusive processes from the correlation maps . we point out how a dual space-time correlative analysis , in both the direct and the fourier space , can provide quantitative information on superdiffusional processes when these are analyzed through an empirical model based on intermittent active dynamics . we believe that this dual space-time analysis , potentially amenable to mathematical treatment and to the exact fit of experimental story_separator_special_tag in this paper , we establish a moderate deviation principle for the langevin dynamics with strong damping . the weak convergence approach plays an important role in the proof . story_separator_special_tag we discuss how to derive a langevin equation ( le ) in non-standard systems , i.e . when the kinetic part of the hamiltonian is not the usual quadratic function . this generalization allows one to consider also cases with negative absolute temperature . we first give some phenomenological arguments suggesting the shape of the viscous drift , replacing the usual linear viscous damping , and its relation to the diffusion coefficient modulating the white noise term . as a second step , we implement a procedure to reconstruct the drift and the diffusion term of the le from the time-series of the momentum of a heavy particle embedded in a large hamiltonian system . the results of our reconstruction are in good agreement with the phenomenological arguments . applying the method to systems with negative temperature , we can observe that also in this case there is a suitable langevin equation , obtained with a precise protocol , able to reproduce in a proper way the statistical features of the slow variables . in other words , even in this context , systems with negative temperature do not show any pathology . story_separator_special_tag we investigate through a generalized langevin formalism the phenomenon of anomalous diffusion for asymptotic times , and we generalize the concept of the diffusion exponent . a method is proposed to obtain the diffusion coefficient analytically through the introduction of a time scaling factor $\lambda$ . we obtain as well an exact expression for $\lambda$ for all kinds of diffusion . moreover , we show that $\lambda$ is a universal parameter determined by the diffusion exponent . the results are then compared with numerical calculations and very good agreement is observed . the method is general and may be applied to many types of stochastic problem . story_separator_special_tag we consider poisson shot noise processes that are appropriate to model stock prices and provide an economic reason for long-range dependence in asset returns . under a regular variation condition we show that our model converges weakly to a fractional brownian motion . whereas fractional brownian motion allows for arbitrage , the shot noise process itself can be chosen arbitrage-free . using the marked point process skeleton of the shot noise process we construct a corresponding equivalent martingale measure explicitly . story_separator_special_tag we measured individual trajectories of fluorescently labeled telomeres in the nucleus of eukaryotic cells in the time range of $10^{-2}$ to $10^{4}$ sec by combining a few acquisition methods . at short times the motion is subdiffusive with $\langle r^2 \rangle \sim t^{\alpha}$
and it changes to normal diffusion at longer times . the short-time diffusion may be explained by the reptation model and the transient diffusion is consistent with a model of telomeres that are subject to a local binding mechanism with a wide but finite distribution of waiting times . these findings have important biological implications with respect to the genome organization in the nucleus . story_separator_special_tag active walker models have recently proved their great value for describing the formation of clusters , periodic patterns , and spiral waves as well as the development of rivers , dielectric breakdown patterns , and many other structures . it is shown that they also allow one to simulate the formation of trail systems by pedestrians and ants , yielding a better understanding of human and animal behavior . a comparison with empirical material shows a good agreement between model and reality . our trail formation model includes an equation of motion , an equation for environmental changes , and an orientation relation . it contains some model functions , which are specified according to the characteristics of the considered animals or pedestrians . not only does the kind of environmental change differ : whereas pedestrians leave footprints on the ground , ants produce chemical markings for their orientation . nevertheless , it is more important that pedestrians steer towards a certain destination , while ants usually find their food sources by chance , i.e . , they reach their destination in a stochastic way . as a consequence , the typical structure of the evolving trail systems depends on the respective story_separator_special_tag in this article we review classical and recent results in anomalous diffusion and provide mechanisms useful for the study of the fundamentals of certain processes , mainly in condensed matter physics , chemistry and biology . emphasis will be given to some methods applied in the analysis and characterization of diffusive regimes through the memory function , the mixing condition ( or irreversibility ) , and ergodicity . those methods can be used in the study of small-scale systems , ranging in size from single molecules to particle clusters and including among others polymers , proteins , ion channels and biological cells , whose diffusive properties have received much attention lately . story_separator_special_tag the khinchin theorem provides the condition for a stationary process to be ergodic , in terms of the behavior of the corresponding correlation function . many physical systems are governed by nonstationary processes in which correlation functions exhibit aging . we classify the ergodic behavior of such systems and suggest a possible generalization of khinchin's theorem . our work also quantifies deviations from ergodicity in terms of aging correlation functions . using the framework of the fractional fokker-planck equation , we obtain a simple analytical expression for the two-time correlation function of the particle displacement in a general binding potential , revealing universality in the sense that the binding potential only enters into the prefactor through the first two moments of the corresponding boltzmann distribution . we discuss applications to experimental data from systems exhibiting anomalous dynamics . story_separator_special_tag we present a phenomenological model for the dynamics of disordered ( complex ) systems .
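a broad , power-law distribution of waiting or trapping times , of the kind underlying several of the models above and the trap model developed in the abstract that follows , is captured by a continuous time random walk ; the sketch below uses pareto-distributed waiting times with $0 < \alpha < 1$ , for which the mean waiting time diverges and the mean-squared displacement grows as $t^{\alpha}$ . all parameters are illustrative .

```python
import numpy as np

# continuous time random walk with pareto waiting times,
# psi(tau) ~ tau**-(1+alpha), 0 < alpha < 1 (diverging mean), unit jumps;
# the resulting <x^2(t)> grows like t**alpha (subdiffusion).
rng = np.random.default_rng(2)
alpha, ntraj = 0.5, 2000

def position_at(t_obs):
    x = np.zeros(ntraj)
    t = np.zeros(ntraj)
    active = np.ones(ntraj, dtype=bool)
    while active.any():
        t[active] += rng.random(active.sum()) ** (-1.0 / alpha)  # pareto waits
        active &= t <= t_obs      # walkers past t_obs stay put, still waiting
        x[active] += rng.choice((-1.0, 1.0), size=active.sum())
    return x

for t_obs in (1e2, 1e3, 1e4):
    print(f"t = {t_obs:8.0f}   <x^2> = {np.mean(position_at(t_obs) ** 2):8.1f}")
```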
we postulate that the lifetimes of the many metastable states are distributed according to a broad , power law probability distribution . we show that aging occurs in this model when the average lifetime is infinite . a simple hypothesis leads to a new functional form for the relaxation which is in remarkable agreement with spin-glass experiments over nearly five decades in time . in spite of fifteen years of dispute , the theory of equilibrium spin-glasses is not yet settled ( 1 , 2 , 3 ) . dynamical effects are , however , likely to be the dominant aspect in experiments . one of the most striking aspects of the dynamics of spin-glasses in their low temperature phase is the aging phenomenon , a rather peculiar and awkward feature from the thermodynamics point of view : the relaxation of a system depends on its history . more precisely , if a system is field-cooled below its spin-glass temperature , the magnetization relaxation depends on the waiting time $t_w$ between the quench and the switch-off of the magnetic field ( 4 , 5 , story_separator_special_tag the unsteady creeping motion of a thin sheet of viscous liquid as it advances over a gently sloping dry bed is examined . attention is focused on the motion of the leading edge under various influences and four problems are discussed . in the first problem the fluid is travelling down an open channel formed by two straight parallel retaining walls placed perpendicular to an inclined plane . when the channel axis is parallel to the fall line there is a progressive-wave solution with a straight leading edge , but inclination of the axis generates distortions and these are calculated . in the second problem a sheet with a straight leading edge travelling over an inclined plane penetrates a region where the bed is uneven , and the subsequent deformation of the leading edge is followed . the third problem considers the flow down an open channel of circular cross-section ( a partially filled pipe ) and the time-dependent shape of the leading edge is calculated . the fourth problem is that of flow down an inclined plane with a single curved retaining wall . these problems are all analysed by assuming that a length characteristic of the geometry is story_separator_special_tag the classic marshak wave equation ( an equilibrium diffusion radiative transfer description ) is obtained as the lowest order approximation in an asymptotic analysis of a system of time dependent nonequilibrium radiative transfer equations . the next approximation leads to a more general equilibrium diffusion approximation , which contains the radiative energy in the description . we derive an asymptotic solution of this higher order equilibrium diffusion approximation by including the smallness parameter in both the independent time variable and the dependent variable of the problem . the solution obtained is applicable over a longer time interval than the solution of the marshak equation . its main qualitative feature is that the predicted position of the wave front lags behind the marshak prediction . story_separator_special_tag conformational memory in single-molecule dynamics has attracted recent attention and , in particular , has been invoked as a possible explanation of some of the intriguing properties of transition paths observed in single-molecule force spectroscopy ( smfs ) studies .
here we study one candidate for a non-markovian model that can account for conformational memory , the generalized langevin equation with a friction force that depends not only on the instantaneous velocity but also on the velocities in the past . the memory in this model is determined by a time-dependent friction memory kernel . we propose a method for extracting this kernel directly from an experimental signal and illustrate its feasibility by applying it to a generalized rouse model of a smfs experiment , where the memory kernel is known exactly . using the same model , we further study how memory affects various statistical properties of transition paths observed in smfs experiments and evaluate the performance of recent approximate analytical theo . story_separator_special_tag abstract in this paper , a new fractional operator of variable order based on the use of a monotonic increasing function is proposed in the sense of caputo type . the properties in terms of the laplace and fourier transforms are analyzed and the results for the anomalous diffusion equations of variable order are discussed . the new formulation is efficient in modeling a class of concentrations in complex transport processes . story_separator_special_tag a great variety of complex phenomena in many scientific fields exhibit power-law behavior , reflecting a hierarchical or fractal structure . many of these phenomena seem to be susceptible to description using approaches drawn from thermodynamics or statistical mechanics , particularly approaches involving the maximization of entropy , which generalize boltzmann-gibbs statistical mechanics and recover the standard laws in a natural way . the book addresses the interdisciplinary applications of these ideas , and also various phenomena that could possibly be quantitatively describable in terms of them . story_separator_special_tag superstatistics is a superposition of two different statistics relevant for driven nonequilibrium systems with a stationary state and intensive parameter fluctuations . it contains tsallis statistics as a special case . after briefly summarizing some of the theoretical aspects , we describe recent applications of this concept to three different physical problems , namely ( a ) fully developed hydrodynamic turbulence , ( b ) pattern formation in thermal convection states , and ( c ) the statistics of cosmic rays . story_separator_special_tag the transport coefficients of a weakly ionized plasma are studied in the non-extensive statistics framework using the boltzmann equation . we find that some of these transport coefficients depend on the nonextensive parameter $q$ . it is seen that , in the case of $q$ smaller than one ( superthermal particles ) , the diffusion coefficient is meaningful only in the range $3/5 < q < 1$ , while in the case of $q > 1$ ( sub-thermal particles ) , the diffusion coefficient decreases as $q$ increases . on the other hand , the thermal conductivity is meaningful just in the range $5/7 < q < 1$ . in addition , it is observed that the increase in $q$ gives rise to a decrease in the thermal conductivity value for both superthermal and sub-thermal particles . furthermore , the electrical conductivity is independent of the $q$ parameter , in contrast to the fully ionized plasma [ z. ebne abbasi and a. esfandyari-kalejahi , phys . plasmas 23 , 073112 ( 2016 ) ] .
finally , story_separator_special_tag a considerable number of systems have recently been reported in which brownian yet non-gaussian dynamics was observed . these are processes characterised by a linear growth in time of the mean squared displacement , yet the probability density function of the particle displacement is distinctly non-gaussian , and often of exponential ( laplace ) shape . this apparently ubiquitous behaviour observed in very different physical systems has been interpreted as resulting from diffusion in inhomogeneous environments and mathematically represented through a variable , stochastic diffusion coefficient . indeed different models describing a fluctuating diffusivity have been studied . here we present a new view of the stochastic basis describing time-dependent random diffusivities within a broad spectrum of distributions . concretely , our study is based on the very generic class of the generalised gamma distribution . two models for the particle spreading in such random diffusivity settings are studied . the first belongs to the class of generalised grey brownian motion while the second follows from the idea of diffusing diffusivities . the two processes exhibit significant characteristics which reproduce experimental results from different biological and physical systems . we promote these two physical models for the description of story_separator_special_tag we consider a generalized langevin equation with a regularized prabhakar derivative operator . we analyze the mean square displacement , time-dependent diffusion coefficient and velocity autocorrelation function . we further introduce the so-called tempered regularized prabhakar derivative and analyze the corresponding generalized langevin equation with the friction term represented through the tempered derivative . various diffusive behaviors are observed . we show the importance of the three parameter mittag-leffler function in the description of anomalous diffusion in complex media . we also give analytical results related to the generalized langevin equation for a harmonic oscillator with generalized friction . the normalized displacement correlation function shows different behaviors , such as monotonic and non-monotonic decay without zero-crossings , oscillation-like behavior without zero-crossings , critical behavior , and oscillation-like behavior with zero-crossings . these various behaviors appear due to the friction of the complex environment represented by the mittag-leffler and tempered mittag-leffler memory kernels . depending on the values of the friction parameters in the system , either diffusion or oscillations dominate . story_separator_special_tag abstract we present a generalization of hilfer derivatives in which riemann-liouville integrals are replaced by more general prabhakar integrals . we analyze and discuss its properties . furthermore , we show some applications of these generalized hilfer-prabhakar derivatives in classical equations of mathematical physics such as the heat and the free electron laser equations , and in difference-differential equations governing the dynamics of generalized renewal stochastic processes . story_separator_special_tag in this work , we investigate a series of mathematical aspects of the fractional diffusion equation with stochastic resetting . the stochastic resetting process in the evans-majumdar sense has several applications in science , with a particular emphasis on non-equilibrium physics and biological systems .
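a minimal simulation of the evans-majumdar resetting process just mentioned : brownian motion with poissonian resetting to $x_r$ at rate $r$ reaches a non-equilibrium steady state with the laplace density $p(x) = (a_0/2) \, e^{-a_0 |x - x_r|}$ , $a_0 = \sqrt{r/D}$ , which the sketch checks through the mean absolute displacement ; parameters are illustrative . the same loop with a moving reset point would give the generalization proposed in the abstract that follows .

```python
import numpy as np

# brownian motion with poissonian stochastic resetting to x_r at rate r;
# the steady state is laplace: p(x) = (a0/2) exp(-a0 |x - x_r|), a0 = sqrt(r/D).
rng = np.random.default_rng(3)
D, r, x_r = 1.0, 0.5, 0.0
dt, nsteps, ntraj = 1e-2, 20000, 2000

x = np.full(ntraj, x_r)
for _ in range(nsteps):
    x += np.sqrt(2.0 * D * dt) * rng.standard_normal(ntraj)
    reset = rng.random(ntraj) < r * dt        # poissonian resetting events
    x[reset] = x_r

a0 = np.sqrt(r / D)
print("empirical <|x - x_r|> =", np.abs(x - x_r).mean())
print("laplace prediction    =", 1.0 / a0)
```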
we propose a version of the stochastic resetting theory for systems in which the reset point is in motion , so the walker does not return to the initial position as in the standard model , but returns to a point that moves in space . in addition , we investigate the proposed stochastic resetting model for diffusion with the fractional operator of prabhakar . the derivative of prabhakar consists of an integro-differential operator that has a mittag-leffler function with three parameters in the integration kernel , so it generalizes a series of fractional operators such as riemann-liouville and caputo . we present how the generalized model of stochastic resetting for fractional diffusion implies a rich class of anomalous diffusive processes , i.e . , $\langle (\Delta x)^2 \rangle \propto t^{\alpha}$ , which includes sub- , super- , and hyper-diffusive regimes . in the sequence , we generalize these ideas to the fractional fokker-planck equation for the quadratic potential u ( story_separator_special_tag analytical solutions of the first and second model of hristov fractional diffusion equations based on the non-singular atangana-baleanu derivative have been developed . the solutions are based on an integral method employing the consequent application of the fourier and laplace transforms . particular cases of hristov fractional diffusion equations considering operators with orders converging to unity have been analyzed , too . story_separator_special_tag the analytical solutions of the fractional diffusion equations in one- and two-dimensional spaces have been proposed . the analytical solution of the cattaneo-hristov diffusion model with particular boundary conditions has been suggested . in general , numerical methods have been used to solve the fractional diffusion equations and the cattaneo-hristov diffusion model . the laplace and the fourier sine transforms have been used to get the analytical solutions . the analytical solutions of the classical diffusion equations and the cattaneo-hristov diffusion model , obtained when the order of the fractional derivative converges to 1 , have been recalled . the graphical representations of the analytical solutions of the fractional diffusion equations and the cattaneo-hristov diffusion model have been provided . story_separator_special_tag we study distributed-order time fractional diffusion equations characterized by multifractal memory kernels , in contrast to the simple power-law kernel of common time fractional diffusion equations . based on the physical approach to anomalous diffusion provided by the seminal scher-montroll-weiss continuous time random walk , we analyze both natural and modified-form distributed-order time fractional diffusion equations and compare the two approaches . the mean squared displacement is obtained and its limiting behavior analyzed . we derive the connection between the wiener process , described by the conventional langevin equation , and the dynamics encoded by the distributed-order time fractional diffusion equation in terms of a generalized subordination of time . a detailed analysis of the multifractal properties of distributed-order diffusion equations is provided . story_separator_special_tag the extended thermodynamics of tsallis is reviewed in detail and applied to turbulence . it is based on a generalization of the exponential and logarithmic functions with a parameter $q$ . by applying this nonequilibrium thermodynamics , the boltzmann-gibbs thermodynamic approach of kraichnan to 2-d turbulence is generalized .
this physical modeling implies fractional calculus methods , obeying anomalous diffusion , described by lévy statistics with q < 5/3 ( subdiffusion ) , q = 5/3 ( normal or brownian diffusion ) and q > 5/3 ( superdiffusion ) . the generalized energy spectrum of kraichnan , occurring at small wave numbers k , now reveals the more general and precise result $k^{-q}$ . this corresponds well for q = 5/3 with the kolmogorov-oboukov energy spectrum and for q > 5/3 to turbulence with intermittency . the enstrophy spectrum , occurring at large wave numbers k , leads to a $k^{-3q}$ power law , suggesting that large wave-number eddies are in thermodynamic equilibrium , which is characterized by q = 1 , finally resulting in kraichnan 's correct $k^{-3}$ enstrophy spectrum . the theory reveals in a natural manner a generalized temperature of turbulence , which in story_separator_special_tag diverse processes in statistical physics are usually analyzed on the assumption that the drag force acting on a test particle moving in a resisting medium is linear in the velocity of the particle . however , nonlinear drag forces do appear in relevant situations that are currently the focus of experimental and theoretical work . motivated by these developments , we explore the consequences of nonlinear drag forces for the thermostatistics of systems of interacting particles performing overdamped motion . we derive a family of nonlinear fokker-planck equations for these systems , taking into account the effects of nonlinear drag forces . we investigate the main properties of these evolution equations , including an h-theorem , and obtain exact solutions of the stretched q-exponential form . story_separator_special_tag we consider the two- ( 2d ) and three-dimensional ( 3d ) ising models on a square lattice at the critical temperature $t_c$ , under monte carlo spin flip dynamics . the bulk magnetization and the magnetization of a tagged line in the 2d ising model , and the bulk magnetization and the magnetization of a tagged plane in the 3d ising model , exhibit anomalous diffusion . specifically , their mean-square displacements increase as power laws in time , collectively denoted as $\sim t^c$ , where $c$ is the anomalous exponent . we argue that the anomalous diffusion in all these quantities for the ising model stems from time-dependent restoring forces , decaying as power laws in time , also with exponent $c$ , in striking similarity to anomalous diffusion in polymeric systems . prompted by our previous work that has established a memory-kernel based generalized langevin equation ( gle ) formulation for polymeric systems , we show that a closely analogous gle formulation holds for the ising model as well . we obtain the memory story_separator_special_tag in contexts such as suspension feeding in marine ecologies there is an interplay between brownian motion of nonmotile particles and their advection by flows from swimming microorganisms . as a laboratory realization , we study passive tracers in suspensions of eukaryotic swimmers , the alga chlamydomonas reinhardtii . while the cells behave ballistically over short intervals , the tracers behave diffusively , with a time-dependent but self-similar probability distribution function of displacements consisting of a gaussian core and robust exponential tails . we emphasize the role of flagellar beating in creating oscillatory flows that exceed brownian motion far from each swimmer .
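the tsallis q-deformed functions used in the turbulence abstract above are, in the usual conventions (a reference sketch; the exponent names are ours):

```latex
% q-logarithm and q-exponential, recovering ln and exp in the limit q -> 1
\ln_q x = \frac{x^{1-q}-1}{1-q} , \qquad
e_q(x) = \bigl[\,1+(1-q)\,x\,\bigr]_{+}^{1/(1-q)} , \qquad
\lim_{q\to 1} e_q(x) = e^{x} .
% a stretched q-exponential, the form of the exact solutions mentioned in
% the nonlinear fokker-planck abstract, combines this with a stretched argument
p(x) \propto e_q\!\bigl(-\beta\,|x|^{\eta}\bigr) .
```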
story_separator_special_tag in contrast to membrane protein diffusion in artificial bilayers , the diffusion of proteins in cells is both slower and non-ergodic . anomalous diffusion in cells has been attributed to cytoskeletal-membrane interactions and nanoscopic heterogeneity in membrane protein and lipid composition . using a combination of single-molecule fluorescence tracking and interferometric scattering microscopy we quantify how these factors affect anomalous diffusion over a range of time-scales in artificial lipid membranes . by varying the membrane composition and nature and density of pinning sites , we examine the onset of anomalous diffusion and its dependence on these properties . we observe a reduction in tracer diffusion rates as membrane crowding is increased , and the diffusion appears to become increasingly anomalous as membrane components are immobilized . story_separator_special_tag dynamics of hydration water is essential for the function of biomacromolecules . previous studies have demonstrated that water molecules exhibit subdiffusion on the surface of biomacromolecules ; yet the microscopic mechanism remains vague . here , by performing neutron scattering , molecular dynamics simulations , and analytic modeling on hydrated perdeuterated protein powders , we found water molecules jump randomly between trapping sites on protein surfaces , whose waiting times obey a broad distribution , resulting in subdiffusion . moreover , the subdiffusive exponent gradually increases with observation time towards normal diffusion due to a many-body volume-exclusion effect .
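the trap-and-jump picture in the hydration-water abstract is essentially a continuous-time random walk with broadly distributed waiting times; a minimal simulation sketch (all parameter values and helper names are our own illustrative choices):

```python
# continuous-time random walk: unit gaussian jumps separated by pareto
# waiting times with tail exponent alpha < 1 (infinite mean), a standard
# mechanism producing subdiffusion, msd ~ t**alpha.
import numpy as np

def ctrw_positions(n_jumps, alpha, t_grid, rng):
    """sample one ctrw trajectory, read off on a fixed time grid."""
    waits = rng.random(n_jumps) ** (-1.0 / alpha)   # pareto waiting times >= 1
    arrival = np.cumsum(waits)                      # epochs of the jumps
    pos = np.cumsum(rng.standard_normal(n_jumps))   # position after each jump
    idx = np.searchsorted(arrival, t_grid, side="right")
    # position at time t is the position after the last jump before t
    return np.where(idx > 0, pos[np.maximum(idx - 1, 0)], 0.0)

rng = np.random.default_rng(0)
t_grid = np.logspace(0, 3, 30)
traj = np.array([ctrw_positions(20000, 0.7, t_grid, rng) for _ in range(500)])
msd = (traj ** 2).mean(axis=0)
# the fitted log-log slope should come out close to alpha = 0.7
print(f"estimated msd exponent: {np.polyfit(np.log(t_grid), np.log(msd), 1)[0]:.2f}")
```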
story_separator_special_tag given that html implements only one particular document model , the extensible markup language ( xml ) has been defined , which makes it possible to use application-specific document types that can be created , distributed and interpreted in an xml environment . story_separator_special_tag this specification defines the mathematical markup language , or mathml . mathml is an xml application for describing mathematical notation and capturing both its structure and content . the goal of mathml is to enable mathematics to be served , received , and processed on the world wide web , just as html has enabled this functionality for text . this specification of the markup language mathml is intended primarily for a readership consisting of those who will be developing or implementing renderers or editors using it , or software that will communicate using mathml as a protocol for input or output . it is not a user 's guide but rather a reference document . mathml can be used to encode both mathematical notation and mathematical content . about thirty-eight of the mathml tags describe abstract notational structures , while about another one hundred and seventy provide a way of unambiguously specifying the intended meaning of an expression . additional chapters discuss how the mathml content and presentation elements interact , and how mathml renderers might be implemented and should interact with browsers . finally , this document addresses the issue of special characters used for mathematics , their story_separator_special_tag this paper introduces a subdomain chemistry format for storing computational chemistry data called compchem . it has been developed based on the design , concepts and methodologies of chemical markup language ( cml ) by adding computational chemistry semantics on top of the cml schema . the format allows a wide range of ab initio quantum chemistry calculations of individual molecules to be stored . these calculations include , for example , single point energy calculation , molecular geometry optimization , and vibrational frequency analysis . the paper also describes the supporting infrastructure , such as processing software , dictionaries , validation tools and database repositories . in addition , some of the challenges and difficulties in developing common computational chemistry dictionaries are discussed . the uses of compchem are illustrated by two practical applications . story_separator_special_tag the field of evolution is diverse , spanning all levels of biological organisation . whilst such diversity makes for a fascinating area of research , it makes the challenge of writing a general evolution textbook a daunting one . in the ten years since it first appeared , evolution has attempted to rise to this challenge . ten years on , we have a third edition , so how has evolution evolved ? users of the previous editions will find no great surprises . in fact , on the surface little has changed . the book is based around the same tried and trusted sections : introduction ; evolutionary genetics ; adaptation and natural selection ; evolution and diversity ; and macroevolution . the content of the book is also much as you would expect , with large sections devoted to population genetics , species and speciation . indeed , this is very much an evolution textbook in the traditional mould with no real attempt to present the subject in a novel or unexpected way . that said , the traditional mould is filled rather well .
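returning to the mathml specification summarized above, here is a small sketch of producing presentation markup programmatically with the python standard library (the expression (x+1)^2 is a toy example of ours):

```python
# build the presentation-mathml tree for (x + 1)^2 with elementtree
import xml.etree.ElementTree as ET

MATHML_NS = "http://www.w3.org/1998/Math/MathML"
ET.register_namespace("", MATHML_NS)

def el(tag, text=None, *children):
    """create a namespaced mathml element with optional text and children."""
    node = ET.Element(f"{{{MATHML_NS}}}{tag}")
    node.text = text
    node.extend(children)
    return node

expr = el("math", None,
          el("msup", None,
             el("mrow", None,
                el("mo", "("), el("mi", "x"), el("mo", "+"),
                el("mn", "1"), el("mo", ")")),
             el("mn", "2")))

print(ET.tostring(expr, encoding="unicode"))
```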
the writing style is clear and straightforward and the text is liberally illustrated with story_separator_special_tag xpath is a language for addressing parts of an xml document , designed to be used by both xslt and xpointer . this document has been reviewed by w3c members and other interested parties and has been endorsed by the director as a w3c recommendation . it is a stable document and may be used as reference material or cited as a normative reference from other documents . w3c 's role in making the recommendation is to draw attention to the specification and to promote its widespread deployment . this enhances the functionality and interoperability of the web . the list of known errors in this specification is available at http://www.w3.org/1999/11/rec-xpath-19991116-errata . comments on this specification may be sent to www-xpath-comments@w3.org ; archives of the comments are available . the english version of this specification is the only normative version . however , for translations of this document , see http://www.w3.org/style/xsl/translations.html . a list of current w3c recommendations and other technical documents can be found at http://www.w3.org/tr . this story_separator_special_tag most query and transformation languages developed since the mid-1990s for xml and semistructured data - e.g . xquery [ 1 ] , the precursors of xquery [ 2 ] , and xslt [ 3 ] - build upon a path-oriented node selection : a node in a data item is specified in terms of a root-to-node path in the manner of the file selection languages of operating systems . constructs inspired by the regular expression constructs * , + , ? , and `` wildcards '' give rise to a flexible node retrieval from incompletely specified data items . this paper further introduces xcerpt , a query and transformation language further developing an alternative approach to querying xml and semistructured data first introduced with the language unql [ 4 ] . a metaphor for this approach views queries as patterns , answers as data items matching the queries . formally , an answer to a query is defined as a simulation [ 5 ] of an instance of the query in a data item . story_separator_special_tag querying xml has been the subject of much recent investigation . a formal bulk algebra is essential for applying database-style optimization to xml queries . we develop such an algebra , called tax ( tree algebra for xml ) , for manipulating xml data , modeled as forests of labeled ordered trees . motivated both by aesthetic considerations of intuitiveness , and by efficient computability and amenability to optimization , we develop tax as a natural extension of relational algebra , with a small set of operators . tax is complete for relational algebra extended with aggregation , and can express most queries expressible in popular xml query languages . it forms the basis for the timber xml database system currently under development by us . story_separator_special_tag we examine the inex ad hoc search tasks and ask whether or not it is possible to identify any existing commercial use of the task . in each of the tasks : thorough , focused , relevant in context , and best in context , such uses are found . commercial use of co and cas queries is also found . finally we present abstract use cases of each ad hoc task .
our finding is that xml-ir , or at least parallels in other semi-structured formats , is in use and has been for many years . story_separator_special_tag to address the needs of data-intensive xml applications , a number of efficient tree pattern algorithms have been proposed . still , most xquery compilers do not support those algorithms . this is due in part to the lack of support for tree patterns in xml algebras , but also because deciding which part of a query plan should be evaluated as a tree pattern is a hard problem . in this paper , we extend a tuple algebra for xquery with a tree pattern operator , and present rewritings suitable to introduce that operator in query plans . we demonstrate the robustness of the proposed rewritings under syntactic variations commonly found in queries . the proposed tree pattern operator can be implemented using popular algorithms such as twig joins and staircase joins . our experiments yield useful information to decide which algorithm should be used in a given plan . story_separator_special_tag systems for managing and querying semistructured-data sources often store data in proprietary object repositories or in a tagged-text format . we describe a technique that can use relational database management systems to store and manage semistructured data . our technique relies on a mapping between the semistructured data model and the relational data model , expressed in a query language called stored . when a semistructured data instance is given , a stored mapping can be generated automatically using data-mining techniques . we are interested in applying stored to xml data , which is an instance of semistructured data . we show how a document-type-descriptor ( dtd ) , when present , can be exploited to further improve performance . story_separator_special_tag xml is fast emerging as the dominant standard for representing data in the world wide web . sophisticated query engines that allow users to effectively tap the data stored in xml documents will be crucial to exploiting the full power of xml . while there has been a great deal of activity recently proposing new semistructured data models and query languages for this purpose , this paper explores the more conservative approach of using traditional relational database engines for processing xml documents conforming to document type descriptors ( dtds ) . to this end , we have developed algorithms and implemented a prototype system that converts xml documents to relational tuples , translates semi-structured queries over xml documents to sql queries over tables , and converts the results to xml . we have qualitatively evaluated this approach using several real dtds drawn from diverse domains . it turns out that the relational approach can handle most ( but not all ) of the semantics of semi-structured queries over xml data , but is likely to be effective only in some cases . we identify the causes for these limitations and propose certain extensions to the relational model that would make story_separator_special_tag xml and xquery semantics are very sensitive to the order of the produced output . although pattern-tree based algebraic approaches are becoming more and more popular for evaluating xml , there is no universally accepted technique which can guarantee both a correct output order and a choice of efficient alternative plans . we address the problem using hybrid collections of trees that can be either sets or sequences or something in between .
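a minimal sketch of the `` shredding '' idea behind stored and the relational approach above: store the tree as an edge relation and translate a path query by hand into sql (the schema, document and query are our own toy choices, not the mappings proposed in the papers):

```python
# shred an xml tree into (id, parent, tag, val) rows and query it with sql
import sqlite3
import xml.etree.ElementTree as ET
from itertools import count

doc = ET.fromstring(
    "<bib><book><title>tax</title><author>al-khalifa</author>"
    "<author>jagadish</author></book></bib>")

con = sqlite3.connect(":memory:")
con.execute("create table edge(id integer, parent integer, tag text, val text)")
ids = count(1)

def shred(node, parent):
    """one row per element: its id, parent id, tag name and text content."""
    nid = next(ids)
    con.execute("insert into edge values (?,?,?,?)",
                (nid, parent, node.tag, (node.text or "").strip()))
    for child in node:
        shred(child, nid)

shred(doc, None)
# hand translation of the path query /bib/book/author into a self-join
rows = con.execute("""
    select a.val from edge a
    join edge b on a.parent = b.id and b.tag = 'book'
    join edge c on b.parent = c.id and c.tag = 'bib'
    where a.tag = 'author' order by a.id
""").fetchall()
print([r[0] for r in rows])   # ['al-khalifa', 'jagadish']
```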
each such collection is coupled with an ordering specification that describes how the trees are sorted ( full , partial or no order ) . this provides us with a formal basis for developing a query plan having parts that maintain no order and parts with partial or full order . it turns out that duplicate elimination introduces some of the same issues as order maintenance : it is expensive and a single collection type does not always provide all the flexibility required to optimize this properly . to solve this problem we associate with each hybrid collection a duplicate specification that describes the presence or absence of duplicate elements in it . we show how to extend an existing bulk tree algebra , tlc [ 12 ] , to use story_separator_special_tag relational algebra has been a crucial foundation for relational database systems , and has played a large role in enabling their success . a corresponding xml algebra for xml query processing has been more elusive , due to the comparative complexity of xml , and its history . we argue that having a sound algebraic basis remains important nonetheless . in this paper , we show how the complexity of xml can be modeled effectively in a simple algebra , and how the conceptual clarity attained thereby can lead to significant benefits . story_separator_special_tag in this paper we present the data manipulation facility for a structured english query language ( sequel ) which can be used for accessing data in an integrated relational data base . without resorting to the concepts of bound variables and quantifiers sequel identifies a set of simple operations on tabular structures , which can be shown to be of equivalent power to the first order predicate calculus . a sequel user is presented with a consistent set of keyword english templates which reflect how people use tables to obtain information . moreover , the sequel user is able to compose these basic templates in a structured manner in order to form more complex queries . sequel is intended as a data base sublanguage for both the professional programmer and the more infrequent data base user . story_separator_special_tag tree patterns form a natural basis to query tree-structured data such as xml and ldap . since the efficiency of tree pattern matching against a tree-structured database depends on the size of the pattern , it is essential to identify and eliminate redundant nodes in the pattern and do so as quickly as possible . in this paper , we study tree pattern minimization both in the absence and in the presence of integrity constraints ( ics ) on the underlying tree-structured database . when no ics are considered , we call the process of minimizing a tree pattern , constraint-independent minimization . we develop a polynomial time algorithm called cim for this purpose . cim 's efficiency stems from two key properties : ( i ) a node can not be redundant unless its children are , and ( ii ) the order of elimination of redundant nodes is immaterial . when ics are considered for minimization , we refer to it as constraint-dependent minimization . for tree-structured databases , required child/descendant and type co-occurrence ics are very natural . under such ics , we show that the minimal equivalent query is unique . we show the surprising result that the story_separator_special_tag extensible markup language ( xml ) is emerging as a de facto standard for information exchange among various applications on the world wide web .
there has been a growing need for developing high-performance techniques to query large xml data repositories efficiently . one important problem in xml query processing is twig pattern matching , that is , finding in an xml data tree d all matches that satisfy a specified twig ( or path ) query pattern q . in this survey , we review , classify , and compare major techniques for twig pattern matching . specifically , we consider two classes of major xml query processing techniques : the relational approach and the native approach . the relational approach directly utilizes existing relational database systems to store and query xml data , which enables the use of all important techniques that have been developed for relational databases , whereas in the native approach , specialized storage and query processing systems tailored for xml data are developed from scratch to further improve xml query performance . as implied by existing work , xml data querying and management are developing in the direction of integrating the relational approach with the story_separator_special_tag twig joins are key building blocks in current xml indexing systems , and numerous algorithms and useful data structures have been introduced . we give a structured , qualitative analysis of recent advances , which leads to the identification of a number of opportunities for further improvements . cases where combining competing or orthogonal techniques would be advantageous are highlighted , such as algorithms avoiding redundant computations and schemes for cheaper intermediate result management . we propose some direct improvements over existing solutions , such as reduced memory usage and stronger filters for bottom-up algorithms . in addition we identify cases where previous work has been overlooked or not used to its full potential , such as for virtual streams , or the benefits of previous techniques have been underestimated , such as for skipping joins . using the identified opportunities as a guide for future work , we are hopefully one step closer to unification of many advances in twig join algorithms . story_separator_special_tag xml queries are usually expressed by means of xpath expressions identifying portions of the selected documents . an xpath expression defines a way of navigating an xml tree and returns the set of nodes which are reachable from one or more starting nodes through the paths specified by the expression . the problem of efficiently answering xpath queries is very interesting and has recently received increasing attention by the research community . in particular , an increasing effort has been devoted to define effective optimization techniques for xpath queries . one of the main issues related to the optimization of xpath queries is their minimization . the minimization of xpath queries has been studied for limited fragments of xpath , containing only the descendant , the child and the branch operators . in this work , we address the problem of minimizing xpath queries for a more general fragment , containing also the wildcard operator . we characterize the complexity of the minimization of xpath queries , stating that it is np-hard , and propose an algorithm for computing minimum xpath queries . moreover , we identify an interesting tractable case and propose an ad hoc algorithm handling the minimization story_separator_special_tag xquery is the de facto standard xml query language , and it is important to have efficient query evaluation techniques available for it .
a core operation in the evaluation of xquery is the finding of matches for specified tree patterns , and there has been much work towards algorithms for finding such matches efficiently . multiple xpath expressions can be evaluated by computing one or more tree pattern matches . however , relatively little has been done on efficient evaluation of xquery queries as a whole . in this paper , we argue that there is much more to xquery evaluation than a tree pattern match . we propose a structure called generalized tree pattern ( gtp ) for concise representation of a whole xquery expression . evaluating the query reduces to finding matches for its gtp . using this idea we develop efficient evaluation plans for xquery expressions , possibly involving join , quantifiers , grouping , aggregation , and nesting . xml data often conforms to a schema . we show that using relevant constraints from the schema , one can optimize queries significantly , and give algorithms for automatically inferring gtp simplifications given a schema . story_separator_special_tag xpath and xquery ( which includes xpath as a sublanguage ) are the major query languages for xml . an important issue arising in efficient evaluation of queries expressed in these languages is satisfiability , i.e. , whether there exists a database , consistent with the schema if one is available , on which the query has a non-empty answer . our experience shows satisfiability check can effect substantial savings in query evaluation . we systematically study satisfiability of tree pattern queries ( which capture a useful fragment of xpath ) together with additional constraints , with or without a schema . we identify cases in which this problem can be solved in polynomial time and develop novel efficient algorithms for this purpose . we also show that in several cases , the problem is np-complete . we ran a comprehensive set of experiments to verify the utility of satisfiability check as a preprocessing step in query processing . our results show that this check takes a negligible fraction of the time needed for processing the query while often yielding substantial savings . story_separator_special_tag we show that several classes of tree patterns observe the independence of containing patterns property , that is , if a pattern is contained in the union of several patterns , then it is contained in one of them . we apply this property to two related problems on tree pattern rewriting using views . first , given view v and query q , is it possible for q to have an equivalent rewriting using v which is the union of two or more tree patterns , but not an equivalent rewriting which is a single pattern ? this problem is of both theoretical and practical importance because , if the answer is no , then , to find an equivalent rewriting of a tree pattern using a view , we should use more efficient methods , such as the polynomial time algorithm of xu and ozsoyoglu ( 2005 ) , rather than try to find the union of all contained rewritings ( which takes exponential time in the worst case ) and test its equivalence to q . second , given a set s of views , we want to know under what conditions a subset s ' of s story_separator_special_tag ( 1 ) retrieve xml documents or fragments of documents from a collection of documents based on specified selection criteria . ( 2 ) the documents may have been originally authored as xml documents ( real documents ) or they may be an xml view of existing data ( virtual documents ) .
( 3 ) real xml documents may be stored in the underlying repository in a fragmented fashion based on some mapping . ( 4 ) the results from an xml query may be xml documents or collections of fragments . ( 5 ) xml documents or fragments may be selected based on their structural as well as on their data content . story_separator_special_tag the world wide web promises to transform human society by making virtually all types of information instantly available everywhere . two prerequisites for this promise to be realized are a universal markup language and a universal query language . the power and flexibility of xml make it the leading candidate for a universal markup language . xml provides a way to label information from diverse data sources including structured and semi-structured documents , relational databases , and object repositories . several xml-based query languages have been proposed , each oriented toward a specific category of information . quilt is a new proposal that attempts to unify concepts from several of these query languages , resulting in a new language that exploits the full versatility of xml . the name quilt suggests both the way in which features from several languages were assembled to make a new query language , and the way in which quilt queries can combine information from diverse data sources into a query result with a new structure of its own . story_separator_special_tag xml data are expected to be widely used in web information systems and ec/edi applications . such applications usually use large amounts of xml data . first , we must allow users to retrieve only necessary portions of xml data by specifying search conditions to flexibly describe such applications . second , we must allow users to combine xml data from different sources . to this end , we will provide a query language for xml data tentatively called xql . we have designed xql , keeping in mind its continuity with database standards such as sql and oql , although we do not insist on strict conformity with them . in this paper , we describe the requirements for a query language for xml data and explain the functionality of xql including xml versions of database operators such as select , join , sort , grouping , union , and view definition . we make brief comments on the implementation and semantics . story_separator_special_tag with the recent and rapid advance of the internet , management of structured documents such as xml documents and their databases has become more and more important . a number of query languages for xml documents have been proposed up to the present . some of them enable tag-based powerful document structure manipulation . however , their contents processing capability is very limited . here , the contents processing implies the similarity-based selection , ranking , summary generation , topic extraction , and so on , as well as simple string-based pattern matching . in this paper , we propose an extensible xml query language x2ql , which features inclusion of user-defined foreign functions to process document contents in the context of xml-ql-based document structure manipulation . this feature makes it possible to integrate application-oriented high-level contents processing facilities into querying documents . we also describe an implementation of an x2ql query processing system on top of xslt processors . story_separator_special_tag xml is widely praised for its flexibility in allowing repeated and missing sub-elements .
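to get a feel for the kind of selection these query languages provide, python's elementtree implements a small xpath subset; the document and the three queries are toy examples of ours:

```python
import xml.etree.ElementTree as ET

doc = ET.fromstring("""
<bib>
  <book year="1999"><title>quilt</title><author>chamberlin</author></book>
  <book year="2001"><title>timber</title><author>jagadish</author></book>
</bib>""")

# select: the titles of all books
print([t.text for t in doc.findall("./book/title")])
# existence predicate: books that have an author child, found anywhere
print([b.get("year") for b in doc.findall(".//book[author]")])
# value predicate: the title of the book published in 2001
print(doc.find(".//book[@year='2001']/title").text)
```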
however , this flexibility makes it challenging to develop a bulk algebra , which typically manipulates sets of objects with identical structure . a set of xml elements , say of type book , may have members that vary greatly in structure , e.g . in the number of author sub-elements . this kind of heterogeneity may permeate the entire document in a recursive fashion : e.g. , different authors of the same or different book may in turn greatly vary in structure . even when the document conforms to a schema , the flexible nature of schemas for xml still allows such significant variations in structure among elements in a collection . bulk processing of such heterogeneous sets is problematic . in this paper , we introduce the notion of logical classes ( lc ) of pattern tree nodes , and generalize the notion of pattern tree matching to handle node logical classes . this abstraction pays off significantly in allowing us to reason with an inherently heterogeneous collection of elements in a uniform , homogeneous way . based on this , we define a tree logical story_separator_special_tag recently , several studies have investigated the discovery of frequent xml query patterns using frequent structure mining techniques . all these works ignore the order properties of xml queries , and therefore are limited in their effectiveness . in this paper , we consider the discovery of ordered query patterns . we propose an algorithm for ordered query pattern mining . experiments show that our method is efficient . story_separator_special_tag existing approaches for querying xml ( e.g. , xpath and twig patterns ) assume that the data form a tree . often , however , xml documents have a graph structure , due to id references . the common way of adapting known techniques to xml graphs is straightforward , but may result in a huge number of results , where only a small portion of them has valuable information . we propose two mechanisms . filtering is used for eliminating semantically weak answers . ranking is used for presenting the remaining answers in the order of decreasing semantic significance . we show how to integrate these features in a language for querying xml graphs . query evaluation is tractable in the following sense . for a wide range of ranking functions , it is possible to generate answers in ranked order with polynomial delay , under query-and-data complexity . this result holds even if projection is used . furthermore , it holds for any tractable ranking function for which the top-ranked answer can be found efficiently ( assuming that equalities and inequalities involving ids of xml nodes are permitted in queries ) . story_separator_special_tag xml queries are frequently based on path expressions where their elements are connected to each other in a tree-pattern structure , called query tree pattern ( qtp ) . therefore , a key operation in xml query processing is finding those elements which match the given qtp . in this paper , we propose a novel method , called s^3 , which can selectively process the document 's nodes . in s^3 , unlike all previous methods , path expressions are not directly executed on the xml document , but first they are evaluated against a guidance structure , called queryguide . enriched by information extracted from the queryguide , a query execution plan , called smp , is generated to provide focused pattern matching and avoid document access as far as possible .
moreover , our experimental results confirm that s^3 and its optimized version os^3 substantially outperform previous qtp processing methods w.r.t . response time , i/o overhead , and memory consumption - critical parameters in any real multi-user environment . story_separator_special_tag xpath is a simple language for navigating an xml document and selecting a set of element nodes . xpath expressions are used to query xml data , describe key constraints , express transformations , and reference elements in remote documents . this paper studies the containment and equivalence problems for a fragment of the xpath query language , with applications in all these contexts . in particular , we study a class of xpath queries that contain branching , label wildcards and can express descendant relationships between nodes . prior work has shown that languages which combine any two of these three features have efficient containment algorithms . however , we show that for the combination of features , containment is conp-complete . we provide a sound and complete exptime algorithm for containment , and study parameterized ptime special cases . while we identify two parameterized classes of queries for which containment can be decided efficiently , we also show that even with some bounded parameters , containment is conp-complete . in response to these negative results , we describe a sound algorithm which is efficient for all queries , but may return false negatives in some cases . story_separator_special_tag as business and enterprises generate and exchange xml data more often , there is an increasing need for efficient processing of queries on xml data . searching for the occurrences of a tree pattern query in an xml database is a core operation in xml query processing . prior work demonstrates that holistic twig pattern matching is an efficient technique to answer an xml tree pattern with parent-child ( p-c ) and ancestor-descendant ( a-d ) relationships , as it can effectively control the size of intermediate results during query processing . however , xml query languages ( e.g. , xpath and xquery ) define more axes and functions such as negation function , order-based axis , and wildcards . in this paper , we study a large class of xml tree patterns , called extended xml tree patterns , which may include p-c , a-d relationships , negation functions , wildcards , and order restriction . we establish a theoretical framework about `` matching cross '' which explains the intrinsic reason for the optimality of holistic algorithms . based on our theorems , we propose a set of novel algorithms to efficiently process three categories of extended xml story_separator_special_tag tree patterns are fundamental to querying tree-structured data like xml . because of the heterogeneity of xml data , it is often more appropriate to permit approximate query matching and return ranked answers , in the spirit of information retrieval , than to return only exact answers . in this paper , we study the problem of approximate xml query matching , based on tree pattern relaxations , and devise efficient algorithms to evaluate relaxed tree patterns . we consider weighted tree patterns , where exact and relaxed weights , associated with nodes and edges of the tree pattern , are used to compute the scores of query answers . we are interested in the problem of finding answers whose scores are at least as large as a given threshold .
we design data pruning algorithms where intermediate query results are filtered dynamically during the evaluation process . we develop an optimization that exploits scores of intermediate results to improve query evaluation efficiency . finally , we show experimentally that our techniques outperform rewriting-based and post-pruning strategies . story_separator_special_tag xml has become ubiquitous , and xml data has to be managed in databases . the current industry standard is to map xml data into relational tables and store this information in a relational database . such mappings create both expressive power problems and performance problems . in the timber [ 7 ] project we are exploring the issues involved in storing xml in native format . we believe that the key intellectual contribution of this system is a comprehensive set-at-a-time query processing ability in a native xml store , with all the standard components of relational query processing , including algebraic rewriting and a cost-based optimizer . story_separator_special_tag in this paper , we investigate the complexity of deciding the satisfiability of xpath 2.0 expressions , i.e. , whether there is an xml document for which their result is nonempty . several fragments that allow certain types of expressions are classified as either in ptime or np-hard to see which types of expressions make this a hard problem . finally , we establish a link between xpath expressions and partial tree descriptions which are studied in computational linguistics . story_separator_special_tag we consider boolean combinations of data tree patterns as a specification and query language for xml documents . data tree patterns are tree patterns plus variable ( in ) equalities which express joins between attribute values . data tree patterns are a simple and natural formalism for expressing properties of xml documents . we first consider the model checking problem ( query evaluation ) and show that it is dp-complete in general and already np-complete when we consider a single pattern . we then consider the satisfiability problem in the presence of a dtd . we show that it is in general undecidable and we identify several decidable fragments . story_separator_special_tag we study the satisfiability problem associated with xpath in the presence of dtds . this is the problem of determining , given a query p in an xpath fragment and a dtd d , whether or not there exists an xml document t such that t conforms to d and the answer of p on t is nonempty . we consider a variety of xpath fragments widely used in practice , and investigate the impact of different xpath operators on satisfiability analysis . we first study the problem for negation-free xpath fragments with and without upward axes , recursion and data-value joins , identifying which factors lead to tractability and which to np-completeness . we then turn to fragments with negation but without data values , establishing lower and upper bounds in the absence and in the presence of upward modalities and recursion . we show that with negation the complexity ranges from pspace to exptime . moreover , when both data values and negation are in place , we find that the complexity ranges from nexptime to undecidable . finally , we give a finer analysis of the problem for particular classes of dtds , exploring the impact of story_separator_special_tag we consider the problem of minimizing tree pattern queries ( tpq ) that arise in xml and in ldap-style network directories .
in [ minimization of tree pattern queries , proc . acm sigmod intl . conf . management of data , 2001 , pp . 497-508 ] , amer-yahia , cho , lakshmanan and srivastava presented an o ( n^4 ) algorithm for minimizing tpqs in the absence of integrity constraints ( case 1 ) ; n is the number of nodes in the query . then they considered the problem of minimizing tpqs in the presence of three kinds of integrity constraints : required-child , required-descendant and subtype ( case 2 ) . they presented an o ( n^6 ) algorithm for minimizing tpqs in the presence of only required-child and required-descendant constraints ( i.e. , no subtypes allowed ; case 3 ) . we present o ( n^2 ) , o ( n^4 ) and o ( n^2 ) algorithms for minimizing tpqs in these three cases , respectively , based on the concept of graph simulation . we believe that our o ( n^2 ) algorithms for cases 1 and 3 are runtime optimal . story_separator_special_tag tree pattern queries ( tpqs ) provide a natural and easy formalism to query tree-structured xml data , and the efficient processing of such queries has attracted a lot of attention . since the size of a tpq is a key determinant of its evaluation cost , recent research has focused on the problem of query minimization using integrity constraints to eliminate redundant query nodes ; specifically , tpq minimization has been studied for the class of forward and subtype constraints ( ft-constraints ) . in this paper , we explore the tpq minimization problem further for a richer class of fbst-constraints that includes not only ft-constraints but also backward and sibling constraints . by exploiting the properties of minimal queries under fbst-constraints , we propose efficient algorithms to both compute a single minimal query as well as enumerate all minimal queries . in addition , we also develop more efficient minimization algorithms for the previously studied class of ft-constraints . our experimental study demonstrates the effectiveness and efficiency of query minimization using fbst-constraints . story_separator_special_tag we define the class of conjunctive queries in relational data bases , and the generalized join operator on relations . the generalized join plays an important part in answering conjunctive queries , and it can be implemented using matrix multiplication . it is shown that while answering conjunctive queries is np-complete ( general queries are pspace-complete ) , one can find an implementation that is within a constant of optimal . the main lemma used to show this is that each conjunctive query has a unique minimal equivalent query ( much like minimal finite automata ) . story_separator_special_tag xpath is a simple language for navigating an xml tree and returning a set of answer nodes . the focus in this paper is on the complexity of the containment problem for various fragments of xpath . in addition to the basic operations ( child , descendant , filter , and wildcard ) , we consider disjunction , dtds and variables . w.r.t . variables we study two semantics : ( 1 ) the value of variables is given by an outer context ; ( 2 ) the value of variables is defined existentially . we establish an almost complete classification of the complexity of the containment problem w.r.t . these fragments . story_separator_special_tag the containment and equivalence problems for various fragments of xpath have been studied by a number of authors .
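a compact sketch of the containment-mapping (homomorphism) test that underlies this line of minimization work: for patterns built from child (/), descendant (//) and branching [ ] without wildcards, p is contained in q exactly when q maps homomorphically into p. the encoding below is our own:

```python
class Node:
    """tree pattern node; children are (edge, node) pairs, edge in {"/", "//"}."""
    def __init__(self, label, children=()):
        self.label, self.children = label, tuple(children)

def descendants(p):
    """all pattern nodes strictly below p, through edges of either kind."""
    for _, c in p.children:
        yield c
        yield from descendants(c)

def hom(q, p):
    """can q be mapped onto p, preserving labels, sending / edges to / edges
    and // edges to arbitrary downward paths?"""
    if q.label != p.label:
        return False
    for edge, qc in q.children:
        targets = ((c for e, c in p.children if e == "/") if edge == "/"
                   else descendants(p))
        if not any(hom(qc, t) for t in targets):
            return False
    return True

def contained_in(p, q):
    return hom(q, p)   # p subset of q iff q maps into p (for this fragment)

# p = a[b//c][b/c] and q = a[b//c]: every match of p is a match of q
p = Node("a", [("/", Node("b", [("//", Node("c"))])),
               ("/", Node("b", [("/", Node("c"))]))])
q = Node("a", [("/", Node("b", [("//", Node("c"))]))])
print(contained_in(p, q), contained_in(q, p))   # True False
```

a node of a pattern is then redundant when the pattern with that subtree deleted is still contained in the original, which is how such tests drive minimization.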
for some fragments , deciding containment ( and even minimisation ) has been shown to be in ptime , while for minor extensions containment has been shown to be conp-complete . when containment is with respect to trees satisfying a set of constraints ( such as a schema or dtd ) , the problem seems to be more difficult . for example , containment under dtds is conp-complete for an xpath fragment denoted xp { [ ] } for which containment is in ptime . it is also undecidable for a larger class of xpath queries when the constraints are so-called simple xpath integrity constraints ( sxics ) . in this paper , we show that containment is decidable for an important fragment of xpath , denoted xp { [ ] , * , // } , when the constraints are dtds . we also identify xpath fragments for which containment under dtds can be decided in ptime . story_separator_special_tag in this paper , we present a polynomial-time algorithm for tpq ( tree pattern queries ) minimization without xml constraints involved . the main idea of the algorithm is a dynamic programming strategy to find all the matching subtrees within a tpq . a matching subtree implies a redundancy and should be removed in such a way that the semantics of the original tpq is not damaged . our algorithm consists of two parts : one for subtree recognition and the other for subtree deletion . both of them need only o ( n^2 ) time , where n is the number of nodes in a tpq . story_separator_special_tag efficient query optimization is critical to the performance of query processing in a database system . the same is true for xml databases and queries . xml queries naturally carry a tree-shaped search pattern which usually contains redundancies . efficient minimization of the tree patterns of xml queries forms an integral and important part of xml query optimization . this short paper briefly presents the algorithms we developed for xml tree pattern query minimization that outperform previous approaches . story_separator_special_tag the paper introduces a model of the web as an infinite , semi-structured set of objects . we reconsider the classical notions of genericity and computability of queries in this new context and relate them to styles of computation prevalent on the web , based on browsing and searching . we revisit several well-known declarative query languages ( first-order logic , datalog , and datalog with negation ) and consider their computational characteristics in terms of the notions introduced in this paper . in particular , we are interested in languages or fragments thereof which can be implemented by browsing , or by browsing and searching combined . surprisingly , stratified and well-founded semantics for negation turn out to have basic shortcomings in this context , while inflationary semantics emerges as an appealing alternative . story_separator_special_tag with the rapid growth of xml-document traffic on the internet , scalable content-based dissemination of xml documents to a large , dynamic group of consumers has become an important research challenge . to indicate the type of content that they are interested in , data consumers typically specify their subscriptions using some xml pattern specification language ( e.g. , xpath ) .
given the large volume of subscribers , system scalability and efficiency mandate the ability to aggregate the set of consumer subscriptions to a smaller set of content specifications , so as to both reduce their storage-space requirements and speed up the document-subscription matching process . in this paper , we provide the first systematic study of subscription aggregation where subscriptions are specified with tree patterns ( an important subclass of xpath expressions ) . the main challenge is to aggregate an input set of tree patterns into a smaller set of generalized tree patterns such that : ( 1 ) a given space constraint on the total size of the subscriptions is met , and ( 2 ) the loss in precision ( due to aggregation ) during document filtering is minimized . we propose an story_separator_special_tag in this paper , we provide a polynomial-time tree pattern query minimization algorithm whose efficiency stems from two key observations : ( i ) inherent redundant components usually exist inside the rudimentary query provided by the user . ( ii ) irredundant nodes may become redundant when constraints such as co-occurrence and required child/descendant are given . we show the result that the algorithm obtained by first augmenting the input tree pattern using the constraints , and then applying minimization , always finds the unique minimal equivalent to the original query . we complement our analytical results with an experimental study that shows the effectiveness of our tree pattern minimization techniques . story_separator_special_tag xml queries typically specify patterns of selection predicates on multiple elements that have some specified tree structured relationships . the primitive tree structured relationships are parent-child and ancestor-descendant , and finding all occurrences of these relationships in an xml database is a core operation for xml query processing . we develop two families of structural join algorithms for this task : tree-merge and stack-tree . the tree-merge algorithms are a natural extension of traditional merge joins and the multi-predicate merge joins , while the stack-tree algorithms have no counterpart in traditional relational join processing . we present experimental results on a range of data and queries using the timber native xml query engine built on top of shore . we show that while , in some cases , tree-merge algorithms can have performance comparable to stack-tree algorithms , in many cases they are considerably worse . this behavior is explained by analytical results that demonstrate that , on sorted inputs , the stack-tree algorithms have worst-case i/o and cpu complexities linear in the sum of the sizes of inputs and output , while the tree-merge algorithms do not have the same guarantee . story_separator_special_tag in this talk i outlined and surveyed some developments in the field of xml tree pattern query processing , especially focussing on holistic approaches . xml tree pattern query ( tpq ) processing is a research stream within xml data management that focuses on efficient tpq answering . with the increasing popularity of xml for data representation , there is a lot of interest in query processing over data that conforms to a tree-structured data model . queries on xml data are commonly expressed in the form of tree patterns ( or twig patterns ) , which represent a very useful subset of xpath and xquery . efficiently finding all tree pattern matches in an xml database is a major concern of xml query processing .
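a simplified python sketch of the stack-tree idea from the structural join abstract above: with (start, end) region labels sorted by start, a single pass with one stack emits all ancestor-descendant pairs. levels, tie-breaking and output ordering variants are omitted:

```python
def structural_join(ancestors, descendants):
    """ancestors, descendants: lists of (start, end) labels sorted by start."""
    out, stack, di = [], [], 0
    sentinel = (float("inf"), float("inf"))       # flushes remaining descendants
    for a in ancestors + [sentinel]:
        # handle every descendant that starts before the next ancestor does
        while di < len(descendants) and descendants[di][0] < a[0]:
            d = descendants[di]
            while stack and stack[-1][1] < d[0]:  # pop ancestors already closed
                stack.pop()
            out.extend((anc, d) for anc in stack) # everything on stack nests d
            di += 1
        while stack and stack[-1][1] < a[0]:      # pop before pushing the next
            stack.pop()
        stack.append(a)
    return out

# toy document: a(1,100) contains a(2,50) contains d(3,4); d(60,61) is only in a(1,100)
print(structural_join([(1, 100), (2, 50)], [(3, 4), (60, 61)]))
# [((1, 100), (3, 4)), ((2, 50), (3, 4)), ((1, 100), (60, 61))]
```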
in the past few years , many algorithms have been proposed to match such tree patterns . in the talk , i presented an overview of the state of the art in tpq processing . the overview starts by providing some background in holistic approaches to process tpqs and then introduces different algorithms and finally presents benchmark datasets and experiments . story_separator_special_tag with the advent of xml as a standard for data representation and exchange on the internet , storing and querying xml data becomes more and more important . several xml query languages have been proposed , and the common feature of the languages is the use of regular path expressions to query xml data . this poses a new challenge concerning indexing and searching xml data , because conventional approaches based on tree traversals may not meet the processing requirements under heavy access requests . in this paper , we propose a new system for indexing and storing xml data based on a numbering scheme for elements . this numbering scheme quickly determines the ancestor-descendant relationship between elements in the hierarchy of xml data . we also propose several algorithms for processing regular path expressions , namely , ( 1 ) ee-join for searching paths from an element to another , ( 2 ) ea-join for scanning sorted elements and attributes to find element-attribute pairs , and ( 3 ) kc-join for finding kleene-closure on repeated paths or elements . the kc-join algorithm is highly effective particularly for searching paths that are very long or whose lengths are unknown . story_separator_special_tag efficient evaluation of xml queries requires the determination of whether a relationship exists between two elements . a number of labeling schemes have been designed to label the element nodes such that the relationships between nodes can be easily determined by comparing their labels . with the increased popularity of xml on the web , finding a labeling scheme that is able to support order-sensitive queries in the presence of dynamic updates becomes urgent . we propose a new labeling scheme that takes advantage of the unique property of prime numbers to meet this need . the global order of the nodes can be captured by generating simultaneous congruence values from the prime number node labels . theoretical analysis of the label size requirements for the various labeling schemes is given . experiment results indicate that the prime number labeling scheme is compact compared to existing dynamic labeling schemes , and provides efficient support to order-sensitive queries and updates . story_separator_special_tag several methods have been proposed to evaluate queries over a native xml dbms , where the queries specify both path and keyword constraints . these broadly consist of graph traversal approaches , optimized with auxiliary structures known as structure indexes ; and approaches based on information-retrieval style inverted lists . we propose a strategy that combines the two forms of auxiliary indexes , and a query evaluation algorithm for branching path expressions based on this strategy . our technique is general and applicable for a wide range of choices of structure indexes and inverted list join algorithms . our experiments over the niagara xml dbms show the benefit of integrating the two forms of indexes . we also consider algorithmic issues in evaluating path expression queries when the notion of relevance ranking is incorporated .
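a toy sketch of the prime-number labeling just described: every node gets its own prime, a node's label is the product of the primes on its root path, and the ancestor test becomes plain divisibility (the tree is our own example; the order encoding via simultaneous congruences is omitted):

```python
from itertools import count

def primes():
    """unbounded prime generator by trial division (fine for a demo)."""
    found = []
    for n in count(2):
        if all(n % p for p in found):
            found.append(n)
            yield n

tree = {"bib": ["book1", "book2"], "book1": ["title1", "author1"],
        "book2": ["title2"]}
labels, gen = {}, primes()

def visit(node, parent_label=1):
    labels[node] = parent_label * next(gen)   # own prime times parent label
    for child in tree.get(node, ()):
        visit(child, labels[node])

visit("bib")

def is_ancestor(a, d):
    # a is a proper ancestor of d iff a's label divides d's label
    return a != d and labels[d] % labels[a] == 0

print(is_ancestor("bib", "title1"), is_ancestor("book1", "title2"))  # True False
```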
by integrating the above techniques with the threshold algorithm proposed by fagin et al. , we obtain instance optimal algorithms to push down top k computation . story_separator_special_tag xml employs a tree-structured data model , and , naturally , xml queries specify patterns of selection predicates on multiple elements related by a tree structure . finding all occurrences of such a twig pattern in an xml database is a core operation for xml query processing . prior work has typically decomposed the twig pattern into binary structural ( parent-child and ancestor-descendant ) relationships , and twig matching is achieved by : ( i ) using structural join algorithms to match the binary relationships against the xml database , and ( ii ) stitching together these basic matches . a limitation of this approach for matching twig patterns is that intermediate result sizes can get large , even when the input and output sizes are more manageable . in this paper , we propose a novel holistic twig join algorithm , twigstack , for matching an xml query twig pattern . our technique uses a chain of linked stacks to compactly represent partial results to root-to-leaf query paths , which are then composed to obtain matches for the twig pattern . when the twig pattern uses only ancestor-descendant relationships between elements , twigstack is i/o and cpu optimal among all sequential story_separator_special_tag xml is quickly becoming the de facto standard for data exchange over the internet . this is creating a new set of data management requirements involving xml , such as the need to store and query xml documents . researchers have proposed using relational database systems to satisfy these requirements by devising ways to `` shred '' xml documents into relations , and translate xml queries into sql queries over these relations . however , a key issue with such an approach , which has largely been ignored in the research literature , is how ( and whether ) the ordered xml data model can be efficiently supported by the unordered relational data model . this paper shows that xml 's ordered data model can indeed be efficiently supported by a relational database system . this is accomplished by encoding order as a data value . we propose three order encoding methods that can be used to represent xml order in the relational data model , and also propose algorithms for translating ordered xpath expressions into sql using these encoding methods . finally , we report the results of an experimental study that investigates the performance of the proposed order story_separator_special_tag finding all the occurrences of a twig pattern in an xml database is a core operation for efficient evaluation of xml queries . a number of algorithms have been proposed to process a twig query based on the region encoding labeling scheme . while region encoding supports efficient determination of the structural relationship between two elements , we observe that the information within a single label is very limited . in this paper , we propose a new labeling scheme , called extended dewey . this is a powerful labeling scheme , since from the label of an element alone , we can derive all the element names along the path from the root to the element . based on extended dewey , we design a novel holistic twig join algorithm , called tjfast . unlike all previous algorithms based on region encoding , to answer a twig query , tjfast only needs to access the labels of the leaf query nodes . through this , not only do we reduce disk access , but we also support the efficient evaluation of queries with wildcards in branching nodes , which are very difficult to answer with algorithms based on region encoding
by accessing only leaf labels , we not only reduce disk access , but also support the efficient evaluation of queries with wildcards in branching nodes , which is very difficult to answer with algorithms based on region encoding story_separator_special_tag we introduce a hierarchical labeling scheme called ordpath that is implemented in the upcoming version of microsoft sql server . ordpath labels nodes of an xml tree without requiring a schema ( the most general case ; a schema simplifies the problem ) . an example of an ordpath value display format is `` 1.5.3.9.1 '' . a compressed binary representation of ordpath provides document order by simple byte-by-byte comparison , and the ancestry relationship equally simply . in addition , the ordpath scheme supports insertion of new nodes at arbitrary positions in the xml tree , their ordpath values `` careted in '' between ordpaths of sibling nodes , without relabeling any old nodes . story_separator_special_tag labeling schemes lie at the core of query processing for many xml database management systems . designing labeling schemes for dynamic xml documents is an important problem that has received a lot of research attention . existing dynamic labeling schemes , however , often sacrifice query performance and introduce additional labeling cost to facilitate arbitrary updates even when the documents actually seldom get updated . since the line between static and dynamic xml documents is often blurred in practice , we believe it is important to design a labeling scheme that is compact and efficient regardless of whether the documents are frequently updated or not . in this paper , we propose a novel labeling scheme called dde ( for dynamic dewey ) which is tailored for both static and dynamic xml documents . for static documents , the labels of dde are the same as those of dewey which yield compact size and high query performance . when updates take place , dde can completely avoid re-labeling and its label quality is most resilient to the number and order of insertions compared to the existing approaches . in addition , we introduce compact dde ( cdde ) which is story_separator_special_tag xml query technology has attracted more and more attention in the data management research community . finding all the occurrences of a twig pattern in an xml database is a core operation for xml queries . the previous approaches produce large sets of intermediate results when processing queries with parent-child relationship edges . we propose a new labeling scheme , called the extended region encoding labeling scheme . from the label of an element , we can obtain all distinct tag names of its children . based on this new labeling scheme , we design a holistic twig join algorithm twigstackbe . our main technique is to check , before processing the twig query , whether an element would contribute to the final solutions . so the set of intermediate results in twig pattern matching can be much smaller than in previous algorithms . the experimental results indicate that the proposed algorithm performs better than previous ones . story_separator_special_tag we compare several optimization strategies implemented in an xml query evaluation system . the strategies incorporate the use of path summaries into the query optimizer , and rely on heuristics that exploit data statistics . we present experimental results that demonstrate a wide range of performance improvements for the different strategies supported .
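the ordpath abstract above turns on inserting labels between existing siblings without relabeling . a simplified sketch of that careting idea ( illustrative only ; real ordpath works on a compressed binary encoding ) :

```python
def between(left: list[int], right: list[int]) -> list[int]:
    # simplified ordpath-style careting: siblings share a parent prefix and
    # end in odd ordinals, e.g. [1, 5] and [1, 7]; only the case of two
    # adjacent odd ordinals is handled here
    assert left[:-1] == right[:-1] and left[-1] + 2 == right[-1]
    # the even value in between acts as a "caret": it carries no tree level
    # of its own, so [1, 6, 1] sorts between [1, 5] and [1, 7] in
    # component-wise order while every existing label stays untouched
    return left[:-1] + [left[-1] + 1, 1]

assert between([1, 5], [1, 7]) == [1, 6, 1]
```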
in addition , we compare the speedups obtained using path summaries with those reported for index-based methods . the comparison shows that low-cost path summaries combined with optimization strategies achieve essentially the same benefits as more expensive index structures . story_separator_special_tag with the growing importance of semi-structured data in information exchange , much research has been done to provide an effective mechanism to match a twig query in an xml database . a number of algorithms have been proposed recently to process a twig query holistically . those algorithms are quite efficient for queries with only ancestor-descendant edges . but for queries with mixed ancestor-descendant and parent-child edges , the previous approaches still may produce large intermediate results , even when the input and output sizes are more manageable . to overcome this limitation , in this paper , we propose a novel holistic twig join algorithm , namely twigstacklist . our main technique is to read ahead some elements in the input data streams and cache a limited number of them in lists in main memory . the number of elements in any list is bounded by the length of the longest path in the xml document . we show that twigstacklist is i/o optimal for queries with only ancestor-descendant relationships below branching nodes . further , even when queries contain parent-child relationships below branching nodes , the set of intermediate results in twigstacklist is guaranteed to be a subset of story_separator_special_tag finding all the occurrences of a twig pattern in an xml database is a core operation for efficient evaluation of xml queries . the holistic twig join algorithm has shown its superiority over the binary decomposition based approach due to efficiently reducing intermediate results . the existing holistic join algorithms , however , cannot deal with ordered twig queries . a straightforward approach that first matches the unordered twig queries and then prunes away the undesired answers is obviously not optimal in most cases . in this paper , we study a novel holistic-processing algorithm , called orderedtj , for ordered twig queries . we show that orderedtj can identify a large query class to guarantee the i/o optimality . finally , our experiments show the effectiveness , scalability and efficiency of our proposed algorithm . story_separator_special_tag searching for all occurrences of a twig pattern in an xml document is an important operation in xml query processing . recently , a holistic method , twigstack [ 2 ] , has been proposed . the method avoids generating large intermediate results which do not contribute to the final answer and is cpu and i/o optimal when twig patterns only have ancestor-descendant relationships . another important direction of xml query processing is to build structural indexes [ 3 ] [ 8 ] [ 13 ] [ 15 ] over xml documents to avoid unnecessary scanning of source documents . we regard xml structural indexing as a technique to partition xml documents and call it a streaming scheme in this paper . in this paper we develop a method to perform holistic twig pattern matching on xml documents partitioned using various streaming schemes . our method avoids unnecessary scanning of irrelevant portions of xml documents . more importantly , depending on the streaming scheme used , it can process a large class of twig patterns consisting of both ancestor-descendant and parent-child relationships and avoid generating redundant intermediate results .
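the binary structural joins that the holistic algorithms above improve on are themselves simple : two document-ordered streams of region labels are merged with a stack of currently open ancestors . a minimal sketch , assuming properly nested ( start , end ) intervals as in region encoding :

```python
def stack_tree_desc(ancestors, descendants):
    """return all (a, d) pairs with a an ancestor of d, given two lists of
    (start, end) region labels sorted by start position. a sketch of the
    classic binary ancestor-descendant structural join; it assumes the
    intervals are properly nested, as xml regions are."""
    out, stack = [], []
    i = 0
    for d in descendants:
        # push every ancestor candidate that opens before d opens
        while i < len(ancestors) and ancestors[i][0] < d[0]:
            while stack and stack[-1][1] < ancestors[i][0]:
                stack.pop()              # closed before this one opens
            stack.append(ancestors[i])
            i += 1
        # drop ancestors whose region closed before d opens
        while stack and stack[-1][1] < d[0]:
            stack.pop()
        out.extend((a, d) for a in stack)  # the rest all contain d
    return out
```

stitching many such binary joins back into twig matches is precisely where the large intermediate results criticized in these abstracts come from .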
our experiments demonstrate the applicability and the performance advantages of our approach . story_separator_special_tag tree pattern matching is one of the most fundamental tasks for xml query processing . holistic twig query processing techniques [ 4 , 16 ] have been developed to minimize the intermediate results , namely , those root-to-leaf path matches that are not in the final twig results . however , useless path matches cannot be completely avoided , especially when there is a parent-child relationship in the twig query . furthermore , existing approaches do not consider the fact that in practice , in order to process xpath or xquery statements , a more powerful form of twig queries , namely , generalized-tree-pattern ( gtp ) [ 8 ] queries , is required . most existing works on processing gtp queries generally call for costly post-processing for eliminating redundant data and/or grouping of the matching results . in this paper , we first propose a novel hierarchical stack encoding scheme to compactly represent the twig results . we introduce twig2stack , a bottom-up algorithm for processing twig queries based on this encoding scheme . then we show how to efficiently enumerate the query results from the encodings for a given gtp query . to our knowledge , this is the story_separator_special_tag xml ( extensible mark-up language ) has been embraced as a new approach to data modeling . nowadays , more and more information is formatted as semi-structured data , e.g. , articles in a digital library , documents on the web , and so on . implementation of an efficient system enabling storage and querying of xml documents requires development of new techniques . many different techniques of xml indexing have been proposed in recent years . in the case of xml data , we can distinguish the following trees : an xml tree , a tree of elements and attributes , and a dataguide , a tree of element tags and attribute names . obviously , the xml tree of an xml document is much larger than the dataguide of a given document . authors often consider the dataguide to be a small tree , and therefore consider dataguide search to be a small problem . however , we show that dataguide trees are often massive in the case of real xml documents . consequently , a trivial dataguide search may be time and memory consuming . in this article , we introduce efficient methods for searching an xml twig pattern in story_separator_special_tag in semistructured databases there is no schema fixed in advance . to provide the benefits of a schema in such environments , we introduce dataguides : concise and accurate structural summaries of semistructured databases . dataguides serve as dynamic schemas , generated from the database ; they are useful for browsing database structure , formulating queries , storing information such as statistics and sample values , and enabling query optimization . this paper presents the theoretical foundations of dataguides along with an algorithm for their creation and an overview of incremental maintenance . we provide performance results based on our implementation of dataguides in the lore dbms for semistructured data . we also describe the use of dataguides in lore , both in the user interface to enable structure browsing and query formulation , and as a means of guiding the query processor and optimizing query execution . story_separator_special_tag with the rapid emergence of xml as an enabler for data exchange and data transfer over the web , querying xml data has become a major concern .
in this paper , we present a hybrid system , twigx-guide , an extension of the well-known dataguide index and region encoding labeling to support twig query processing . with twigx-guide , a complex query can be decomposed into a set of path queries , which are evaluated individually by retrieving the path or node matches from the dataguide index table and subsequently joining the results using the holistic twig join algorithm twigstack . twigx-guide improves the performance of twigstack for queries with parent-child relationships and mixed relationships by reducing the number of joins needed to evaluate a query . experimental results indicate that twigx-guide can process twig queries on average 38 % better than the twigstack algorithm , 30 % better than twiginlab , 10 % better than twigstacklist and about 5 % better than twigstackxb in terms of execution time . story_separator_special_tag a common problem of xml query algorithms is that execution time and input size grow rapidly as the size of the xml document increases . in this paper , we propose a version-labeling scheme and the twigversion algorithm to address this problem . the version-labeling scheme is utilized to identify all repetitive structures in xml documents , and the version tree is constructed to hold such version information . to process a query , twigversion generates a filter through the created version tree , and the final answer to the query can be retrieved from the database easily through the filtering process . both theoretical proof and experimental results reported in this paper demonstrate that the concise structure of the version tree and the reduced input size make twigversion outperform the existing approaches . story_separator_special_tag tree pattern matching is a fundamental problem that has a wide range of applications in web data management , xml processing , and selective data dissemination . in this paper we develop efficient algorithms for the tree homeomorphism problem , i.e . , the problem of matching a tree pattern with exclusively transitive ( descendant ) edges . we first prove that deciding whether there is a tree homeomorphism is logspace-complete , improving on the current logcfl upper bound . furthermore , we develop a practical algorithm for the tree homeomorphism decision problem that is both space- and time-efficient . the algorithm is in logdcfl and space consumption is strongly bounded , while the running time is linear in the size of the data tree . this algorithm immediately generalizes to the problem of matching the tree pattern against all subtrees of the data tree , preserving the mentioned efficiency properties . story_separator_special_tag finding all distinct matchings of the query tree pattern is the core operation of xml query evaluation . the existing methods for tree pattern matching are decomposition-matching-merging processes , which may produce large useless intermediate results or require repeated matching of some sub-patterns . we propose a fast tree pattern matching algorithm called treematch to directly find all distinct matchings of a query tree pattern . the only requirement for the data source is that the matching elements of the non-leaf pattern nodes do not contain sub-elements with the same tag . treematch does not produce any intermediate results and the final results are compactly encoded in stacks , from which the explicit representation can be produced efficiently .
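the dataguide abstracts above describe structural summaries in which every distinct label path of the database appears exactly once . for a tree-shaped document this is just a trie over label paths ; a minimal sketch with illustrative names :

```python
def build_dataguide(root):
    # build a dataguide-style summary of a tree: a trie that contains each
    # distinct root-to-node label path exactly once. nodes are modeled as
    # (tag, children) pairs; real dataguides also handle graph-shaped data
    guide = {}
    def walk(node, trie):
        tag, children = node
        sub = trie.setdefault(tag, {})
        for child in children:
            walk(child, sub)
    walk(root, guide)
    return guide

# two <author> paths under <book> collapse to a single summary path
doc = ("lib", [("book", [("author", []), ("author", [])]),
               ("book", [("title", [])])])
assert build_dataguide(doc) == {"lib": {"book": {"author": {}, "title": {}}}}
```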
story_separator_special_tag as xml becomes ubiquitous , the efficient retrieval of xml data becomes critical . research to improve query response time has largely concentrated on indexing paths and optimizing xml queries . an orthogonal approach is to discover frequent xml query patterns and cache their results to improve the performance of xml management systems . in this paper , we present an efficient algorithm , called fastxminer , to discover frequent xml query patterns . we develop theorems to prove that only a small subset of the generated candidate patterns needs to undergo expensive tree containment tests . in addition , we demonstrate how the frequent query patterns can be used to improve caching performance . experimental results show that fastxminer is efficient and scalable , and caching the results of frequent patterns significantly improves the query response time . story_separator_special_tag contained rewriting and maximal contained rewriting of tree pattern queries using views have been studied recently for the class of tree patterns involving / , // , and [ ] . given query q and view v , it has been shown that a contained rewriting of q using v can be obtained by finding a useful embedding of q in v . however , for the same q and v , there may be many useful embeddings and thus many contained rewritings . some of the useful embeddings may be redundant in that the rewritings obtained from them are contained in those obtained from other useful embeddings . redundant useful embeddings are useless and they waste computational resources . thus it becomes important to identify and remove them . in this paper , we show that the criteria for identifying redundant useful embeddings given in a previous work are neither sufficient nor necessary . we then present some useful observations on the containment relationship of rewritings , and , based on these , a heuristic algorithm for removing redundant useful embeddings . we demonstrate the efficiency of our algorithm using examples . story_separator_special_tag peer data management systems ( pdms ) offer a flexible architecture for decentralized data sharing . in a pdms , every peer is associated with a schema that represents the peer 's domain of interest , and semantic relationships between peers are provided locally between pairs ( or small sets ) of peers . by traversing semantic paths of mappings , a query over one peer can obtain relevant data from any reachable peer in the network . semantic paths are traversed by reformulating queries at a peer into queries on its neighbors . naively following semantic paths is highly inefficient in practice . we describe several techniques for optimizing the reformulation process in a pdms and validate their effectiveness using real-life data sets . in particular , we develop techniques for pruning paths in the reformulation process and for minimizing the reformulated queries as they are created . in addition , we consider the effect of the strategy we use to search through the space of reformulations . finally , we show that pre-computing semantic paths in a pdms can greatly improve the efficiency of the reformulation process . together , all of these techniques form a basis for scalable query story_separator_special_tag with the proliferation of xml-based data sources available across the internet , it is increasingly important to provide users with a data warehouse of xml data sources to facilitate decision-making processes .
due to the extremely large amount of xml data available on the web , unguided warehousing of xml data turns out to be highly costly and usually cannot accommodate users ' needs in acquiring xml data . in this paper , we propose an approach to materialize xml data warehouses based on frequent query patterns discovered from historical queries issued by users . the schemas of integrated xml documents in the warehouse are built using these frequent query patterns represented as frequent query pattern trees ( freqqpts ) . using a hierarchical clustering technique , the integration approach in the data warehouse is flexible with respect to obtaining and maintaining xml documents . experiments show that the overall processing of the same queries issued against the global schema becomes much more efficient with the xml data warehouse than by directly searching the multiple data sources . story_separator_special_tag satisfiability is an important problem of queries for xml documents . this paper focuses on the satisfiability of tree pattern queries for active xml ( axml for short ) documents conforming to a given axml schema . an axml document is an xml document where some data is given explicitly and other parts are defined intensionally by means of embedded calls to web services , which can be invoked to generate data . for the efficient evaluation of a query over an axml document , one should check whether there exists an ( a ) xml document obtained from the original one by invoking some web services , on which the query has a non-empty answer . an algorithm for checking satisfiability of tree pattern queries for axml documents that runs in polynomial time is proposed based on tree automata theory . then experiments were made to verify the utility of satisfiability checking as a preprocessing step in query processing . our results show that the check takes a negligible fraction of the time needed for processing the query while often yielding substantial savings . story_separator_special_tag many web applications are based on dynamic interactions between web components exchanging flows of information . such a situation arises for instance in mashup systems [ 22 ] or when monitoring distributed autonomous systems [ 6 ] . this is a challenging problem that has recently generated a lot of attention ; see web 2.0 [ 38 ] . for capturing interactions between web components , we use active documents interacting with the rest of the world via streams of updates . their input streams specify updates to the document ( in the spirit of rss feeds ) , whereas their output streams are defined by queries on the document . in most of the paper , the focus is on input streams where the updates are only insertions , although we also consider deletions . we introduce and study two fundamental concepts in this setting , namely , satisfiability and relevance . some fact is satisfiable for an active document and a query if it has a chance to be in the result of the query in some future state . given an active document and a query , a call in the document is relevant if the data brought by story_separator_special_tag with the advent of xml as the de facto language for data publishing and exchange , scalable distribution of xml data to large , dynamic populations of consumers remains an important challenge . content-based publish/subscribe systems offer a convenient design paradigm , as most of the complexity related to addressing and routing is encapsulated within the network infrastructure .
to indicate the type of content that they are interested in , data consumers typically specify their subscriptions using a tree-pattern specification language ( an important subset of xpath ) , while producers publish xml content without prior knowledge of any potential recipients . discovering semantic communities of consumers with similar interests is an important requirement for scalable content-based systems : such `` semantic clusters '' of consumers play a critical role in the design of effective content-routing protocols and architectures . the fundamental problem underlying the discovery of such semantic communities lies in effectively evaluating the similarity of different tree-pattern subscriptions based on the observed document stream . in this paper , we propose a general framework and algorithmic tools for estimating different tree-pattern similarity metrics over continuous streams of xml documents . in a nutshell , our approach relies story_separator_special_tag since tree-structured data such as xml files are widely used for data representation and exchange on the internet , discovering frequent tree patterns over tree-structured data streams becomes an interesting issue . in this paper , we propose an online algorithm to continuously discover the current set of frequent tree patterns from the data stream . a novel and efficient technique is introduced to incrementally generate all candidate tree patterns without duplicates . moreover , a framework for counting the approximate frequencies of the candidate tree patterns is presented . combining these techniques , the proposed approach is able to compute frequent tree patterns with guarantees of completeness and accuracy . story_separator_special_tag as xml prevails over the internet , the efficient retrieval of xml data becomes important . research to improve query response times has largely concentrated on indexing xml documents and processing regular path expressions . another approach is to discover frequent query patterns since the answers to these queries can be stored and indexed . mining frequent query patterns requires more than simple tree matching since the xml queries involve special characters such as `` * '' or `` // '' . in addition , the matching process can be expensive since the search space is exponential in the size of the xml schema . in this paper , we present two mining algorithms , xqpminer and xqpminertid , to discover frequent query pattern trees from a large collection of xml queries efficiently . both algorithms exploit schema information to guide the enumeration of candidate subtrees , thus eliminating unnecessary node expansions . experimental results show that the proposed methods are efficient and have good scalability . story_separator_special_tag xml languages , such as xquery , xslt and sql/xml , employ xpath as the search and extraction language . xpath expressions often define complicated navigation , resulting in expensive query processing , especially when executed over large collections of documents . in this paper , we propose a framework for exploiting materialized xpath views to expedite processing of xml queries . we explore a class of materialized xpath views , which may contain xml fragments , typed data values , full paths , node references or any combination thereof . we develop an xpath matching algorithm to determine when such views can be used to answer a user query containing xpath expressions .
we use the match information to identify the portion of an xpath expression in the user query which is not covered by the xpath view . finally , we construct , possibly multiple , compensation expressions which need to be applied to the view to produce the query result . experimental evaluation , using our prototype implementation , shows that the matching algorithm is very efficient and usually accounts for a small fraction of the total query compilation time . story_separator_special_tag we study the query answering using views ( qav ) problem for tree pattern queries . given a query and a view , the qav problem is traditionally formulated in two ways : ( i ) find an equivalent rewriting of the query using only the view , or ( ii ) find a maximal contained rewriting using only the view . the former is appropriate for classical query optimization and was recently studied by xu and ozsoyoglu for tree pattern queries ( tp ) . however , for information integration , we cannot rely on equivalent rewriting and must instead use maximal contained rewriting as shown by halevy . motivated by this , we study maximal contained rewriting for tp , a core subset of xpath , both in the absence and presence of a schema . in the absence of a schema , we show there are queries whose maximal contained rewriting ( mcr ) can only be expressed as the union of exponentially many tps . we characterize the existence of a maximal contained rewriting and give a polynomial time algorithm for testing the existence of an mcr . we also give an algorithm for generating story_separator_special_tag the content of an active extensible markup language ( axml ) document is dynamic , because it is possible to specify when a service call should be activated ( for example , when needed , every hour , etc . ) , and for how long its result should be considered valid . thus , this simple mechanism allows capturing and combining different styles of data integration , such as warehousing and mediation . to fully take advantage of the use of services , axml also allows calling continuous services ( that provide streams of answers ) and services supporting intentional data ( axml documents including service calls ) as parameters and/or results . the latter feature leads to powerful , recursive integration schemes . the axml framework is centered on axml documents , which are xml documents that may contain calls to web services . when calls included in an axml document are fired , the latter is enriched by the corresponding results . story_separator_special_tag current streaming applications have stringent requirements on query response time and memory consumption because of the large ( possibly unbounded ) size of data they handle . further , known query evaluation algorithms on streaming xml documents focus almost exclusively on tree-pattern queries ( tpqs ) . recently , however , requirements for flexible querying of xml data have motivated the introduction of query languages that are more general and flexible than tpqs . these languages are not supported by known algorithms . in this paper , we consider a language which generalizes and strictly contains tpqs . queries in this language can be represented as dags enhanced with constraints . we exploit this representation to design an original polynomial-time streaming algorithm for these queries . our algorithm avoids storing and processing matches of the query dag that do not contribute to new solutions ( redundant matches ) .
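the view-matching abstracts earlier split a query into a part answered by the materialized view plus a leftover compensation expression applied to the view 's results . a minimal sketch over a toy //-only path language ( illustrative names ; real xpath matching is far richer ) :

```python
def descendants(node):
    tag, children = node
    for c in children:
        yield c
        yield from descendants(c)

def match_path(node, steps):
    # evaluate a tiny //-only path such as ["b", "d"] below node:
    # each step selects all descendants carrying the given tag
    nodes = [node]
    for tag in steps:
        nodes = [d for n in nodes for d in descendants(n) if d[0] == tag]
    return nodes

# a view materializing //b can answer //b//d: run the leftover step
# ["d"] (the compensation expression) over the view's stored nodes
doc = ("a", [("b", [("d", []), ("c", [("d", [])])])])
view = match_path(doc, ["b"])
assert match_path(doc, ["b", "d"]) == \
       [d for n in view for d in match_path(n, ["d"])]
```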
the key feature of our streaming algorithm is that it applies an eager evaluation strategy to quickly determine when node matches should be returned to the user as solutions , and to proactively detect redundant matches . we experimentally test its time and space performance . the results show the superiority of the eager algorithm story_separator_special_tag query processing techniques for xml data have focused mainly on tree-pattern queries ( tpqs ) . however , the need for querying xml data sources whose structure is very complex or not fully known to the user , and the need to integrate multiple xml data sources with different structures have recently driven the proposal of query languages that relax the complete specification of a tree pattern . in order to implement the processing of such languages in current dbmss , their containment problem has to be efficiently solved . in this paper , we consider a query language which generalizes tpqs by allowing the partial specification of a tree pattern . partial tree-pattern queries ( ptpqs ) constitute a large fragment of xpath that flexibly permits the specification of a broad range of queries from keyword queries without structure , to queries with partial specification of the structure , to complete tpqs . we address the containment problem for ptpqs . this problem becomes more complex in the context of ptpqs because the partial specification of the structure allows new , non-trivial , structural expressions to be inferred from those explicitly specified in a query . we show that story_separator_special_tag
we consider a brane moving close to a large number of coincident branes . we compare the calculation of the effective action using the gauge theory living on the brane and the calculation using the supergravity approximation . we discuss some general features about the correspondence between large n gauge theories and black holes . then we do a one loop calculation which applies to extremal and near extremal black holes . we comment on the expected results for higher loop calculations . we make some comments on the matrix theory interpretation of these results . story_separator_special_tag the hamiltonian describing matrix theory on t^n is identified with the hamiltonian describing the dynamics of d0-branes on t^n in an appropriate weak coupling limit for all n up to 5 . new subtleties arise in taking this weak coupling limit for n=6 , since the transverse size of the d0 brane system blows up in this limit . this can be attributed to the appearance of extra light states in the theory from wrapped d6 branes . this subtlety is related to the difficulty in finding a matrix formulation of m-theory on t^6 . story_separator_special_tag we consider the compactification of m theory on a lightlike circle as a limit of a compactification on a small spatial circle boosted by a large amount . assuming that the compactification on a small spatial circle is weakly coupled type-iia theory , we derive susskind 's conjecture that m theory compactified on a lightlike circle is given by the finite n version of the matrix model of banks , fischler , shenker , and susskind . this point of view provides a uniform derivation of the matrix model for m theory compactified on a transverse torus t^p for p = 0 , ... , 5 and clarifies the difficulties for larger values of p . story_separator_special_tag we consider scale invariant theories which couple gravity to maxwell fields and antisymmetric tensor fields with a dilaton field . we exhibit in a unified way solutions representing black hole , space-time membrane , vortex and cosmological solutions . their physical properties depend sensitively on the coupling constant of the dilaton field , there being a critical value separating qualitatively different types of behaviour , e.g . the temperature of a charged black hole in the extreme limit . it is also shown that compactification into the 4-dimensional minkowski space in terms of a membrane solution is possible in a 10-dimensional supergravity model . story_separator_special_tag we carry out a thorough survey of entropy for a large class of $ p $ -branes in various dimensions . we find that the bekenstein-hawking entropy may be given a simple world volume interpretation only for the non-dilatonic $ p $ -branes , those with the dilaton constant throughout spacetime . the entropy of extremal non-dilatonic $ p $ -branes is non-vanishing only for the solutions preserving 1/8 of the original supersymmetries . upon toroidal compactification these reduce to dyonic black holes in 4 and 5 dimensions . for the self-dual string in 6 dimensions , which preserves 1/4 of the original supersymmetries , the near-extremal entropy is found to agree with a world sheet calculation , in support of the existing literature . the remaining 3 interesting cases preserve 1/2 of the original supersymmetries . these are the self-dual 3-brane in 10 dimensions , and the 2- and 5-branes in 11 dimensions .
for all of them the scaling of the near-extremal bekenstein-hawking entropy with the hawking temperature is in agreement with a statistical description in terms of free massless fields on the world volume . story_separator_special_tag the modifications of the classical equations of motion of the gravitational field in type ii string theory are derived by studying tree-level gravitational scattering amplitudes . the effective gravitational action is determined through quartic order in the riemann tensor . it is shown that generic ricci-flat manifolds do not solve the modified equations , unless in addition the manifolds are kahler ( 2n-dimensional manifolds of su ( n ) holonomy ) . translated into sigma model language , this calculation would indicate that the n = 1 supersymmetric sigma model in 2 dimensions , with a ricci-flat target space , is not conformally invariant , but has a nonzero beta function at four-loop order . story_separator_special_tag we investigate a bunch of d0-branes to reveal its quantum nature from the gravity side . in the classical limit , it is well described by a non-extremal black 0-brane in type iia supergravity . the solution is uplifted to eleven dimensions and expressed as a non-extremal m-wave solution . after reviewing the effective action for m-theory , we explicitly solve the equations of motion for the near horizon geometry of the m-wave . as a result we derive a unique solution which includes the effect of quantum gravity . the thermodynamic properties of the quantum near horizon geometry of the black 0-brane are also studied using wald 's formula . combining our result with that of the monte carlo simulation of the dual thermal gauge theory , we find strong evidence for the gauge/gravity duality in the d0-brane system at the level of quantum gravity . story_separator_special_tag we perform a direct test of the gauge-gravity duality associated with the system of $ n $ $ d0 $ -branes in type iia superstring theory at finite temperature . based on the fact that higher derivative corrections to the type iia supergravity action start at the order of $ \alpha'^3 $ , we derive the internal energy in an expansion around infinite 't hooft coupling up to the subleading term with one unknown coefficient . the power of the subleading term is shown to be nicely reproduced by the monte carlo data obtained nonperturbatively on the gauge theory side at finite but large effective ( dimensionless ) 't hooft coupling constant . this suggests , in particular , that the open strings attached to the $ d0 $ -branes provide the microscopic origin of the black hole thermodynamics of the dual geometry including $ \alpha' $ corrections . the coefficient of the subleading term extracted from the fit to the monte carlo data provides a prediction for the gravity side . story_separator_special_tag this paper considers general features of the derivative expansion of feynman diagram contributions to the four-graviton scattering amplitude in eleven-dimensional supergravity compactified on a two-torus . these are translated into statements about interactions of the form $ d^{2k} r^4 $ in type ii superstring theories , assuming the standard m-theory/string theory duality relationships , which provide powerful constraints on the effective interactions . in the ten-dimensional iia limit we find that there can be no perturbative contributions beyond k string loops ( for k > 0 ) .
furthermore , the genus h=k contributions are determined exactly by the one-loop eleven-dimensional supergravity amplitude for all values of k . a plausible interpretation of these observations is that the sum of h-loop feynman diagrams of maximally extended supergravity is less divergent than might be expected and could be ultraviolet finite in dimensions d < 4 + 6/h -- the same bound as for n=4 yang-mills . story_separator_special_tag we give a correction to a mistake in proposition 2.1 of the paper mentioned in the title , and examine and correct or point out the consequences that are affected by the mistake . story_separator_special_tag the strong coupling dynamics of string theories in dimension $ d \geq 4 $ are studied . it is argued , among other things , that eleven-dimensional supergravity arises as a low energy limit of the ten-dimensional type iia superstring , and that a recently conjectured duality between the heterotic string and type iia superstrings controls the strong coupling dynamics of the heterotic string in five , six , and seven dimensions and implies $ s $ duality for both heterotic and type ii strings . story_separator_special_tag technological developments sparked by quantum mechanics and wave particle duality are still gaining ground over a hundred years after the theories were devised . while the impact of the theories in fundamental research , philosophy and even art and literature is widely appreciated , the implications in device innovations continue to breed potential . applications inspired by these concepts include quantum computation and quantum cryptography protocols based on single photons , among many others . in this issue , researchers in germany and the us report a step towards precisely triggered single-photon sources driven by surface acoustic waves ( saws ) [ 1 ] . the work brings technology based on quantum mechanics yet another step closer to practical device reality . generation of single 'antibunched ' photons has been one of the key challenges to progress in quantum information processing and communication . researchers from toshiba and cambridge university in the uk recently reported what they described as 'the first electrically driven single-photon source capable of emitting indistinguishable photons ' [ 2 ] . single-photon sources have been reported previously [ 3 ] . however , the approach demonstrated by shields and colleagues allows electrical control , which is particularly story_separator_special_tag we report on numerical simulations of one dimensional maximally supersymmetric su ( n ) yang-mills theory by using the lattice action with two exact supercharges . based on the gauge/gravity duality , the gauge theory corresponds to the n d0-brane system in type iia superstring theory at finite temperature . we aim to verify the gauge/gravity duality numerically by comparing our results on the gauge side with analytic solutions on the gravity side . first of all , by examining the supersymmetric ward-takahashi relation , we show that supersymmetry breaking effects from the cut-off vanish in the continuum limit and our lattice theory has the desired continuum limit . then , we find that , at low temperature , the black hole internal energy obtained from our data is close to the analytic solution on the gravity side . this suggests the validity of the duality . story_separator_special_tag we report on the lattice simulation result of one dimensional supersymmetric yang-mills theory with sixteen supercharges .
according to the gauge/gravity duality , this theory is expected to be dual to n d0-branes in type iia superstring/supergravity . if the imaginary time direction is interpreted as temperature , in the large n limit , the low temperature region where the gauge theory is strongly coupled corresponds to a classical black hole . we examine the gauge theory side by lattice simulation to verify the duality conjecture . in this article , we focus on the internal energy of the black hole . the internal energy obtained by the simulation agrees with the high temperature expansion at high temperatures , while it approaches the analytic solution of the gravity side at low temperatures . story_separator_special_tag we formulate supersymmetric euclidean spacetime $ a_d^* $ lattices whose classical continuum limits are u ( n ) supersymmetric yang-mills theories with sixteen supercharges in d = 1,2,3 and 4 dimensions . this family includes the especially interesting n = 4 supersymmetry in four dimensions , as well as a euclidean path integral formulation of matrix theory on a one dimensional lattice . story_separator_special_tag we examine the relation between twisted versions of the extended supersymmetric gauge theories and supersymmetric orbifold lattices . in particular , for the $ \mathcal { n } = 4 $ sym in $ d = 4 $ , we show that the continuum limit of the orbifold lattice reproduces the twist introduced by marcus , and the examples in lower dimensions are usually of blau-thompson type . the orbifold lattice point group symmetry is a subgroup of the twisted lorentz group , and the exact supersymmetry of the lattice is indeed the nilpotent scalar supersymmetry of the twisted versions . we also introduce twisting in terms of spin groups of finite point subgroups of $ r $ -symmetry and spacetime symmetry . story_separator_special_tag we discuss supersymmetric yang-mills theory dimensionally reduced to zero dimensions and evaluate the su ( 2 ) and su ( 3 ) partition functions by monte carlo methods . the exactly known su ( 2 ) results are reproduced to very high precision . our calculations for su ( 3 ) agree closely with an extension of a conjecture due to green and gutperle concerning the exact value of the su ( n ) partition functions . story_separator_special_tag in recent years a new class of supersymmetric lattice theories has been proposed which retain one or more exact supersymmetries for non-zero lattice spacing . recently there has been some controversy in the literature concerning whether these theories suffer from a sign problem . in this paper we address this issue by conducting simulations of the $ \mathcal { n } $ = ( 2 , 2 ) and $ \mathcal { n } $ = ( 8 , 8 ) supersymmetric yang-mills theories in two dimensions for the u ( n ) theories with n = 2 , 3 , 4 , using the new twisted lattice formulations . our results provide evidence that these theories do not suffer from a sign problem in the continuum limit . these results thus boost confidence that the new lattice formulations can be used successfully to explore non-perturbative aspects of four-dimensional $ \mathcal { n } $ = 4 supersymmetric yang-mills theory . story_separator_special_tag recently a class of supersymmetric gauge theories has been successfully implemented on the lattice .
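several of the simulations discussed here drop the pfaffian phase ( the `` phase quenched '' approximation ) . full expectation values can in principle be recovered from the phase-quenched ensemble by standard reweighting , with $ \theta $ the pfaffian phase :

```latex
\langle \mathcal{O} \rangle \;=\;
  \frac{\langle \mathcal{O}\, e^{i\theta} \rangle_{\mathrm{pq}}}
       {\langle e^{i\theta} \rangle_{\mathrm{pq}}}
```

when $ \langle e^{i\theta} \rangle_{\mathrm{pq}} $ stays close to one , the phase is dynamically irrelevant , which is what the simulations below report over the temperature ranges tested .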
however , there has been an ongoing debate on whether lattice versions of some of these theories suffer from a sign problem , with independent simulations for the $ { \cal n } = ( 2 , 2 ) $ supersymmetric yang-mills theories in two dimensions yielding seemingly contradictory results . here , we address this issue from an interesting theoretical point of view . we conjecture that the sign problem observed in some of the simulations is related to the so-called neuberger 0/0 problem , which arises in ordinary non-supersymmetric lattice gauge theories , and prevents the realization of becchi-rouet-stora-tyutin symmetry on the lattice . after discussing why we expect a sign problem in certain classes of supersymmetric lattice gauge theories far from the continuum limit , we argue that these problems can be evaded by use of a non-compact parametrization of the gauge link fields . story_separator_special_tag recently there has been some controversy in the literature concerning the existence of a fermion sign problem in the n = ( 2,2 ) supersymmetric yang-mills ( sym ) theories on the lattice . in this work , we address this issue by conducting monte carlo simulations not only for n = ( 2,2 ) but also for n = ( 8,8 ) sym in two dimensions for the u ( n ) theories with n = 2 , using the new ideas derived from topological twisting followed by geometric discretization . our results from simulations provide evidence that these theories do not suffer from a sign problem as the continuum limit is approached . these results thus boost confidence that these new lattice formulations can be used successfully to explore the nonperturbative aspects of the four-dimensional n = 4 sym theory . story_separator_special_tag we perform lattice simulations of n d0-branes at finite temperature in the decoupling limit , namely 16 supercharge su ( n ) yang-mills quantum mechanics in the 't hooft limit . at low temperature this theory is conjectured to be dual to certain supergravity black holes . we emphasize that the existence of a non-compact moduli space renders the partition function of the quantum mechanics theory divergent , and we perform one loop calculations that demonstrate this explicitly . in consequence we use a scalar mass term to regulate this divergence and argue that the dual black hole thermodynamics may be recovered in the appropriate large n limit as the regulator is removed . we report on simulations for n up to 5 including the pfaffian phase , and n up to 12 in the phase quenched approximation . interestingly , in the former case , where we may calculate this potentially difficult phase , we find that it appears to play little role dynamically over the temperature range tested , which is certainly encouraging for future simulations of this theory . story_separator_special_tag we report on lattice simulations of 16 supercharge su ( n ) yang-mills quantum mechanics in the 't hooft limit . maldacena duality conjectures that in this limit the theory is dual to iia string theory , and , in particular , that the behavior of the thermal theory at low temperature is equivalent to that of certain black holes in iia supergravity . our simulations probe the low temperature regime for n <= 5 and the intermediate and high temperature regimes for n <= 12 . we observe 't hooft scaling , and at low temperatures our results are consistent with the dual black hole prediction .
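for orientation , the dual black hole prediction referred to in these abstracts is usually quoted , at leading order in the low temperature expansion , as the black 0-brane internal energy ( with $ \lambda $ the 't hooft coupling ; the coefficient is the standard supergravity value quoted in this literature ) :

```latex
\frac{E}{N^2} \;\simeq\; 7.41\, \lambda^{1/3}
  \left( \frac{T}{\lambda^{1/3}} \right)^{14/5}
```

$ \alpha' $ and string loop effects correct this at higher orders in $ t / \lambda^{1/3} $ and $ 1/n^2 $ respectively , which is what the direct tests above fit for .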
the intermediate temperature range is dual to the horowitz-polchinski correspondence region , and our results are consistent with continuous behavior there . we include the pfaffian phase arising from the fermions in our calculations where appropriate . story_separator_special_tag we continue to construct lattice super yang-mills theories along the lines discussed in the previous papers [ sugino , sugino2 ] . in our construction of $ { \cal n } = 2 , 4 $ theories in four dimensions , the problem of degenerate vacua seen in [ sugino ] is resolved by extending some fields and soaking up would-be zero-modes in the continuum limit , while in the weak coupling expansion some surplus modes appear both in bosonic and fermionic sectors reflecting the exact supersymmetry . a slight modification to the models is made such that all the surplus modes are eliminated in two- and three-dimensional models obtained by dimensional reduction thereof . $ { \cal n } = 4 , 8 $ models in three dimensions need fine-tuning of three and one parameters respectively to obtain the desired continuum theories , while two-dimensional models with $ { \cal n } = 4 , 8 $ do not require any fine-tuning . story_separator_special_tag the past few years have seen considerable progress in algorithmic development for the generation of gauge fields including the effects of dynamical fermions . the rational hybrid monte carlo ( rhmc ) algorithm , where hybrid monte carlo is performed using a rational approximation in place of the usual inverse quark matrix kernel , is one of these developments . this algorithm has been found to be extremely beneficial in many areas of lattice qcd ( chiral fermions , finite temperature , wilson fermions etc. ) . we review the algorithm and some of these benefits , and we compare against other recent algorithm developments . we conclude with an update of the berlin wall plot comparing costs of all popular fermion formulations . story_separator_special_tag we investigate the application of krylov space methods to the solution of shifted linear systems of the form $ ( a + \sigma ) x - b = 0 $ for several values of $ \sigma $ simultaneously , using only as many matrix-vector operations as the solution of a single system requires . we find a suitable description of the problem , allowing us to understand known algorithms in a common framework and to develop shifted methods based on short recurrence methods , most notably the cg and the bicgstab solvers . the convergence properties of these shifted solvers are well understood and the derivation of other shifted solvers is easily possible . the application of these methods to quark propagator calculations in quenched qcd using wilson and clover fermions is discussed and numerical examples in this framework are presented . with the shifted cg method an optimal algorithm for staggered fermions is available . story_separator_special_tag we present the first monte carlo results for supersymmetric matrix quantum mechanics with 16 supercharges at finite temperature . the recently proposed nonlattice simulation enables us to include the effects of fermionic matrices in a transparent and reliable manner . the internal energy nicely interpolates between the weak coupling behavior obtained by the high temperature expansion , and the strong coupling behavior predicted from the dual black-hole geometry . the polyakov line asymptotes at low temperature to a characteristic behavior for a deconfined theory , suggesting the absence of a phase transition .
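the shifted krylov abstract above rests on one observation : the krylov space of $ a + \sigma $ is independent of $ \sigma $ , so a single sequence of matrix-vector products can serve every shift . a minimal numpy sketch of the multi-shift cg idea for symmetric positive definite $ a $ and shifts $ \sigma \geq 0 $ ( a textbook recurrence , not the paper 's exact algorithm ) :

```python
import numpy as np

def multishift_cg(A, b, shifts, tol=1e-10, maxiter=500):
    # solve (A + s*I) x_s = b for all s in shifts while performing only
    # one A @ p per iteration; the seed system is s = 0
    n = len(b)
    x = {s: np.zeros(n) for s in shifts}
    p = {s: b.copy() for s in shifts}
    zeta_prev = {s: 1.0 for s in shifts}   # shift-dependent scalars
    zeta_cur = {s: 1.0 for s in shifts}
    r, p0 = b.copy(), b.copy()
    alpha_prev, beta_prev = 1.0, 0.0
    rr = r @ r
    for _ in range(maxiter):
        Ap = A @ p0                        # the only matrix-vector product
        alpha = rr / (p0 @ Ap)
        r_new = r - alpha * Ap
        rr_new = r_new @ r_new
        beta = rr_new / rr
        for s in shifts:                   # cheap scalar recurrences per shift
            denom = (alpha * beta_prev * (zeta_prev[s] - zeta_cur[s])
                     + zeta_prev[s] * alpha_prev * (1.0 + s * alpha))
            zeta_next = zeta_cur[s] * zeta_prev[s] * alpha_prev / denom
            alpha_s = alpha * zeta_next / zeta_cur[s]
            beta_s = beta * (zeta_next / zeta_cur[s]) ** 2
            x[s] += alpha_s * p[s]
            p[s] = zeta_next * r_new + beta_s * p[s]
            zeta_prev[s], zeta_cur[s] = zeta_cur[s], zeta_next
        r, rr, p0 = r_new, rr_new, r_new + beta * p0
        alpha_prev, beta_prev = alpha, beta
        if np.sqrt(rr) < tol:
            break
    return x

# usage: xs = multishift_cg(A, b, shifts=[0.0, 0.1, 1.0])
```

for s = 0 the recurrences collapse to ordinary cg , which is a convenient sanity check .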
these monte carlo results provide highly nontrivial evidence for the gauge-gravity duality . story_separator_special_tag a generalization of the ads/cft conjecture postulates a duality between iia string theory and 16 supercharge yang-mills quantum mechanics in the large n 't hooft limit . at low temperatures string theory describes black holes , whose thermodynamics may hence be studied using the dual quantum mechanics . this quantum mechanics is strongly coupled , which motivates the use of lattice techniques . we argue that , contrary to expectation , the theory when discretized naively will nevertheless recover continuum supersymmetry as the lattice spacing is sent to zero . we test these ideas by studying the 4 supercharge version of this yang-mills quantum mechanics in the 't hooft limit . we use both a naive lattice action and a manifestly supersymmetric action . using monte carlo methods we simulate the euclidean theories , and study the lattice continuum limit , for both thermal and non-thermal periodic boundary conditions , confirming continuum supersymmetry is recovered for the naive action when appropriate . we obtain results for the thermal system with n up to 12 . these favor the existence of a single deconfined phase for all non-zero temperatures . these results are an encouraging indication that the 16 supercharge theory is within story_separator_special_tag in the string-gauge duality it is important to understand how the space-time geometry is encoded in gauge theory observables . we address this issue in the case of the d0-brane system at finite temperature t . based on the duality , the temporal wilson loop $ w $ in gauge theory is expected to contain the information of the schwarzschild radius $ r_{\rm sch} $ of the dual black hole geometry as $ \log w = r_{\rm sch} / ( 2 \pi \alpha' t ) $ . this translates to the power-law behavior $ \log w = 1.89 \, ( t / \lambda^{1/3} )^{-3/5} $ , where $ \lambda $ is the 't hooft coupling constant . we calculate the wilson loop on the gauge theory side in the strongly coupled regime by performing monte carlo simulations of supersymmetric matrix quantum mechanics with 16 supercharges . the results reproduce the expected power-law behavior up to a constant shift , which is explainable as $ \alpha' $ corrections on the gravity side . our conclusion also demonstrates manifestly the fuzzball picture of black holes . story_separator_special_tag black holes have been predicted to radiate particles and eventually evaporate , which has led to the information loss paradox and implies that the fundamental laws of quantum mechanics may be violated . superstring theory , a consistent theory of quantum gravity , provides a possible solution to the paradox if evaporating black holes can actually be described in terms of standard quantum mechanical systems , as conjectured from the theory . here , we test this conjecture by calculating the mass of a black hole in the corresponding quantum mechanical system numerically . our results agree well with the prediction from gravity theory , including the leading quantum gravity correction . our ability to simulate black holes offers the potential to further explore the yet mysterious nature of quantum gravity through well-established quantum mechanics . numerical simulations of an evaporating black hole are consistent with a quantum description of gravity ( see also the perspective by maldacena ) . quantum mechanics and gravity can seem to contradict each other .
superstring theory may provide a route to reconcile the two , thanks to the gauge/gravity duality conjecture , which allows the system to be described mathematically . story_separator_special_tag we formulate the high temperature expansion in supersymmetric matrix quantum mechanics with 4 , 8 and 16 supercharges . the models can be obtained by dimensionally reducing n = 1 u ( n ) super yang-mills theory in d = 4 , 6 , 10 to 1 dimension , respectively . while the non-zero frequency modes become weakly coupled at high temperature , the zero modes remain strongly coupled . we find , however , that the integration over the zero modes that remains after integrating out all the non-zero modes perturbatively reduces to the evaluation of connected green 's functions in the bosonic ikkt model . we perform monte carlo simulation to compute these green 's functions , which are then used to obtain the coefficients of the high temperature expansion for various quantities up to the next-leading order . our results nicely reproduce the asymptotic behaviors of the recent simulation results at finite temperature . in particular , the fermionic matrices , which decouple at the leading order , give rise to substantial effects at the next-leading order , reflecting finite temperature behaviors qualitatively different from the corresponding models without fermions . story_separator_special_tag we explain how the string spectrum in flat space and pp waves arises from the large n limit , at fixed $ g_{\rm ym}^2 $ , of u ( n ) n = 4 super yang-mills . we reproduce the spectrum by summing a subset of the planar feynman diagrams . we give a heuristic argument for why we can neglect other diagrams . story_separator_special_tag we consider all 1/2 bps excitations of ads × s configurations in both type-iib string theory and m-theory . in the dual field theories these excitations are described by free fermions . configurations which are dual to arbitrary droplets of free fermions in phase space correspond to smooth geometries with no horizons . in fact , the ten dimensional geometry contains a special two dimensional plane which can be identified with the phase space of the free fermion system . the topology of the resulting geometries depends only on the topology of the collection of droplets on this plane . these solutions also give a very explicit realization of the geometric transitions between branes and fluxes . we also describe all 1/2 bps excitations of plane wave geometries . the problem of finding the explicit geometries is reduced to solving a laplace ( or toda ) equation with simple boundary conditions . we present a large class of explicit solutions . in addition , we are led to a rather general class of $ ads_5 $ compactifications of m-theory preserving n = 2 superconformal symmetry . we also find smooth geometries that correspond to various vacua of the maximally supersymmetric mass-deformed m2 brane story_separator_special_tag we study theories with 16 supercharges and a discrete energy spectrum . one class of theories has symmetry group su ( 2|4 ) . they arise as truncations of n=4 super yang-mills theory . they include the plane-wave matrix model , 2+1 super yang-mills theory on $ r \times s^2 $ and n=4 super yang-mills theory on $ r \times s^3 / z_k $ . we explain how to obtain their gravity duals in a unified way . we explore the regions of the geometry that are relevant for the study of some 1/2 bps and near bps states .
this leads to a class of two dimensional ( 4,4 ) supersymmetric sigma models with nonzero h flux , including a massive deformed wzw model . we show how to match some features of the string spectrum with the yang-mills theory . the other class of theories is also connected to n=4 super yang-mills theory and arises by making some of the transverse scalars compact . their vacua are characterized by a 2d yang-mills theory or 3d chern-simons theory . these theories realize peculiar super-poincare symmetry algebras in 2+1 or 1+1 dimensions with `` noncentral '' charges story_separator_special_tag we present results of lattice simulations of the plane wave matrix model ( pwmm ) . the pwmm is a theory of supersymmetric quantum mechanics that has a well-defined canonical ensemble . we simulate this theory by applying rational hybrid monte carlo techniques to a naive lattice action . we examine the strong coupling behaviour of the model focussing on the deconfinement transition . story_separator_special_tag we construct the black hole geometry dual to the deconfined phase of the bmn matrix model at strong 't hooft coupling . we approach this solution from the limit of large temperature where it is approximately that of the non-extremal d0-brane geometry with a spherical $ s^8 $ horizon . this geometry preserves the $ so ( 9 ) $ symmetry of the matrix model trivial vacuum . as the temperature decreases the horizon becomes deformed and breaks the $ so ( 9 ) $ to the $ so ( 6 ) \times so ( 3 ) $ symmetry of the matrix model . when the black hole free energy crosses zero the system undergoes a phase transition to the confined phase described by a lin-maldacena geometry . we determine this critical temperature , whose computation is also within reach of monte carlo simulations of the matrix model . story_separator_special_tag we review and extend earlier work that uses the ads/cft correspondence to relate the black-hole-black-string transition of gravitational theories on a circle to a phase transition in maximally supersymmetric ( 1 + 1 ) -dimensional su ( n ) gauge theories at large n , again compactified on a circle . we perform gravity calculations to determine a likely phase diagram for the strongly coupled gauge theory . we then directly study the phase structure of the same gauge theory , now at weak 't hooft coupling . in the interesting temperature regime for the phase transition , the ( 1 + 1 ) -dimensional theory reduces to a ( 0 + 1 ) -dimensional bosonic theory , which we solve using monte carlo methods . we find strong evidence that the weakly coupled gauge theory also exhibits a black hole-black string-like phase transition in the large n limit . we demonstrate that a simple landau-ginzburg-like model describes the behaviour near the phase transition remarkably well . the weak coupling transition appears to be close to the cusp between a first-order and a second-order transition . story_separator_special_tag we consider field theories with sixteen supersymmetries , which include u ( n ) yang-mills theories in various dimensions , and argue that their large n limit is related to certain supergravity solutions . we study this by considering a system of d-branes in string theory and then taking a limit where the brane world volume theory decouples from gravity . at the same time we study the corresponding d-brane supergravity solution and argue that we can trust it in certain regions where the curvature ( and the effective string coupling , where appropriate ) are small .
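since several of the abstracts above use the polyakov loop as the ( de ) confinement order parameter , a minimal sketch of how it is measured on a euclidean time circle may help ; the random unitary links below are purely illustrative stand-ins for thermalized gauge configurations .

import numpy as np

# the polyakov loop is the trace of the ordered product of temporal links
# around the thermal circle; |tr p| / n is small in the confined phase and
# of order one in the deconfined phase (heuristically, for the theories above).
rng = np.random.default_rng(0)

def random_unitary(n):
    # qr decomposition of a complex gaussian matrix gives a haar-like unitary
    q, r = np.linalg.qr(rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n)))
    return q * (np.diagonal(r) / np.abs(np.diagonal(r)))

def polyakov_loop(links):
    p = np.eye(links[0].shape[0], dtype=complex)
    for u in links:
        p = p @ u  # ordered product around the time circle
    return abs(np.trace(p)) / p.shape[0]

links = [random_unitary(6) for _ in range(10)]  # 10 time slices, n = 6
print(polyakov_loop(links))                     # near 0 for random links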
the supergravity solutions typically have several weakly coupled regions and interpolate between different limits of string/m theory . story_separator_special_tag we investigate the evolution of small perturbations around black strings and branes which are low energy solutions of string theory . for simplicity we focus attention on the zero charge case and show that there are unstable modes for a range of time frequency and wavelength in the extra $ 10 - d $ dimensions . these perturbations can be stabilized if the extra dimensions are compactified to a scale smaller than the minimum wavelength for which instability occurs and thus will not affect large astrophysical black holes in four dimensions . we comment on the implications of this result for the cosmic censorship hypothesis . story_separator_special_tag abstract we present the alternative topological twisting of n = 4 yang-mills , in which the path integral is dominated not by instantons , but by flat connections of the complexified gauge group . the theory is non-trivial on compact orientable four-manifolds with non-positive euler number , which are necessarily not simply connected . on such manifolds , one finds a single topological invariant , analogous to the casson invariant of three-manifolds . story_separator_special_tag we show how to derive the supersymmetric orbifold lattices of cohen et al . ( 1 , 2 ) and kaplan et al . ( 3 ) by direct discretization of an appropriate twisted supersymmetric yang-mills theory . we examine in detail the four supercharge two dimensional theory and the theory with sixteen supercharges in four dimensions . the continuum limit of the latter theory is the well known marcus twist of n = 4 yang-mills . the lattice models are gauge invariant and possess one exact supersymmetry at non-zero lattice spacing . story_separator_special_tag we provide an introduction to recent lattice formulations of supersymmetric theories which are invariant under one or more real supersymmetries at nonzero lattice spacing . these include the especially interesting case of $ { \cal n } = 4 $ sym in four dimensions . we discuss approaches based both on twisted supersymmetry and orbifold-deconstruction techniques and show their equivalence in the case of gauge theories . the presence of an exact supersymmetry reduces and in some cases eliminates the need for fine tuning to achieve a continuum limit invariant under the full supersymmetry of the target theory . we discuss open problems . story_separator_special_tag we provide a general classification of supersymmetric lattice gauge theories that can be obtained from orbifolding of theories with four and eight supercharges . we impose at least one preserved supercharge on the lattice and lorentz invariance in the naive continuum limit . starting with four supercharges , we obtain one two-dimensional lattice gauge theory , identical to the one already given in the literature . starting with eight supercharges , we obtain a unique three-dimensional lattice gauge theory and infinitely many two-dimensional lattice theories . they can be classified according to seven distinct groups , five of which have two preserved supercharges while the others have only one . story_separator_special_tag we prove that the prescription for construction of supersymmetric lattice gauge theories by orbifolding and deconstruction directly leads to catterall 's geometrical discretization scheme in general .
these two prescriptions always give the same lattice discretizations when applied to theories of p-form fields . we also show that the geometrical discretization scheme can be applied to more general theories . story_separator_special_tag inspired by the ideas from topological field theory it is possible to rewrite the supersymmetric charges of certain classes of extended supersymmetric yang mills ( sym ) theories in such a way that they are compatible with the discretization on a euclidean space time lattice . such theories are known as maximally twisted sym theories . in this review we discuss the construction and some applications of such classes of theories . the one-loop perturbative renormalization of the four-dimensional lattice sym is discussed in particular . the lattice theories constructed using the twisted approach play an important role in investigating the thermal phases of strongly coupled sym theories and also the thermodynamic properties of their dual gravitational theories . story_separator_special_tag we report on the results of numerical simulations of 1 + 1 dimensional su ( n ) yang-mills theory with maximal supersymmetry at finite temperature and compactified on a circle . for large n this system is thought to provide a dual description of the decoupling limit of n coincident d1-branes on a circle . it has been proposed that at large n there is a phase transition at strong coupling related to the gregory-laflamme ( gl ) phase transition in the holographic gravity dual . in a high temperature limit there was argued to be a deconfinement transition associated to the spatial polyakov loop , and it has been proposed that this is the continuation of the strong coupling gl transition . investigating the theory on the lattice for su ( 3 ) and su ( 4 ) and studying the time and space polyakov loops we find evidence supporting this . in particular at strong coupling we see the transition has the parametric dependence on coupling predicted by gravity . we estimate the gl phase transition temperature from the lattice data which , interestingly , is not yet known directly in the gravity dual . fine tuning in story_separator_special_tag we study the black hole black string phase transitions of gravitational theories compactified on a circle using the holographic duality conjecture . the gauge theory duals of these theories are maximally supersymmetric and strongly coupled 1 + 1 dimensional su ( n ) yang-mills theories compactified on a circle , in the large n limit . we perform the strongly coupled finite temperature gauge theory calculations on a lattice , using the recently developed exact lattice supersymmetry methods based on topological twisting and orbifolding . the spatial polyakov line serves as the relevant order parameter of the confinement deconfinement phase transitions in the gauge theory duals . story_separator_special_tag we present new parallel software , susy lattice , for lattice studies of four-dimensional n=4 supersymmetric yang mills theory with gauge group su ( n ) . the lattice action is constructed to exactly preserve a single supersymmetry charge at non-zero lattice spacing , up to additional potential terms included to stabilize numerical simulations . the software evolved from the milc code for lattice qcd , and retains a similar large-scale framework despite the different target theory . many routines are adapted from an existing serial code ( catterall and joseph , 2012 ) , which susy lattice supersedes .
this paper provides an overview of the new parallel software , summarizing the lattice system , describing the applications that are currently provided and explaining their basic workflow for non-experts in lattice gauge theory . we discuss the parallel performance of the code , and highlight some notable aspects of the documentation for those interested in contributing to its future development . story_separator_special_tag we present a procedure to improve the lattice definition of $ \mathcal { n } = 4 $ supersymmetric yang-mills theory . the lattice construction necessarily involves u ( 1 ) flat directions , and we show how these can be lifted without violating the exact lattice supersymmetry . the basic idea is to modify the equations of motion of an auxiliary field , which determine the moduli space of the system . applied to numerical calculations , the resulting improved lattice action leads to dramatically reduced violations of supersymmetric ward identities and a much more rapid approach to the continuum limit . story_separator_special_tag abstract we propose a construction of five-branes which fill both light-cone dimensions in banks , fischler , shenker and susskind 's matrix model of m theory . we argue that they have the correct long-range fields and spectrum of excitations . we prove dirac charge quantization with the membrane by showing that the five-brane induces a berry phase in the membrane world-volume theory , with a familiar magnetic monopole form .
intrusion-detection systems aim at detecting attacks against computer systems and networks , or against information systems in general , as it is difficult to provide provably secure information systems and maintain them in such a secure state for their entire lifetime and for every utilization . sometimes , legacy or operational constraints do not even allow a fully secure information system to be realized at all . therefore , the task of intrusion-detection systems is to monitor the usage of such systems and to detect the appearance of insecure states . they detect attempts and active misuse , by legitimate users of the information systems or by external parties , to abuse privileges or exploit security vulnerabilities . in this paper , we introduce a taxonomy of intrusion-detection systems that highlights the various aspects of this area . this taxonomy defines families of intrusion-detection systems according to their properties . it is illustrated by numerous examples from past and current projects . story_separator_special_tag this paper presents the preliminary architecture of a network level intrusion detection system . the proposed system will monitor base level information in network packets ( source , destination , packet size , and time ) , learning the normal patterns and announcing anomalies as they occur . the goal of this research is to determine the applicability of current intrusion detection technology to the detection of network level intrusions . in particular , the authors are investigating the possibility of using this technology to detect and react to worm programs . story_separator_special_tag in network intrusion detection research , one popular strategy for finding attacks is monitoring a network 's activity for anomalies : deviations from profiles of normality previously learned from benign traffic , typically identified using tools borrowed from the machine learning community . however , despite extensive academic research , one finds a striking gap in terms of actual deployments of such systems : compared with other intrusion detection approaches , machine learning is rarely employed in operational `` real world '' settings . we examine the differences between the network intrusion detection problem and other areas where machine learning regularly finds much more success . our main claim is that the task of finding attacks is fundamentally different from these other applications , making it significantly harder for the intrusion detection community to employ machine learning effectively . we support this claim by identifying challenges particular to network intrusion detection , and provide a set of guidelines meant to strengthen future research on anomaly detection . story_separator_special_tag the monitoring and management of high-volume feature-rich traffic in large networks offers significant challenges in storage , transmission , and computational costs . the predominant approach to reducing these costs is based on performing a linear mapping of the data to a low-dimensional subspace such that a certain large percentage of the variance in the data is preserved in the low-dimensional representation . this variance-based subspace approach to dimensionality reduction forces a fixed choice of the number of dimensions , is not responsive to real-time shifts in observed traffic patterns , and is vulnerable to normal traffic spoofing .
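a minimal sketch of the variance-based subspace idea criticized above , with a fixed number of dimensions k and a residual-energy alarm ; the feature dimension , k and the threshold quantile are illustrative choices , not values from the paper .

import numpy as np

# fit a pca subspace on normal traffic feature vectors and alarm when the
# energy of the component outside that subspace is unusually large.
def fit_subspace(x_train, k=3):
    mu = x_train.mean(axis=0)
    _, _, vt = np.linalg.svd(x_train - mu, full_matrices=False)
    return mu, vt[:k]                       # top-k principal directions

def residual_energy(x, mu, pcs):
    xc = x - mu
    proj = xc @ pcs.T @ pcs                 # component inside the normal subspace
    return ((xc - proj) ** 2).sum(axis=-1)  # squared prediction error

rng = np.random.default_rng(1)
normal = rng.normal(size=(500, 10))         # synthetic stand-in for training traffic
mu, pcs = fit_subspace(normal, k=3)
threshold = np.quantile(residual_energy(normal, mu, pcs), 0.99)
spike = normal[0] + 8.0                     # crude stand-in for a volume anomaly
print(residual_energy(spike, mu, pcs) > threshold)  # expected: True

the fixed k and the static threshold are precisely what the distance-based approach below is designed to relax .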
based on theoretical insights proved in this paper , we propose a new distance-based approach to dimensionality reduction motivated by the fact that the real-time structural differences between the covariance matrices of the observed and the normal traffic are more relevant to anomaly detection than the structure of the training data alone . our approach , called the distance-based subspace method , allows a different number of reduced dimensions in different time windows and arrives at only the number of dimensions necessary for effective anomaly detection . we present centralized and distributed versions of our algorithm and , using simulation on story_separator_special_tag the internet and computer networks are exposed to an increasing number of security threats . with new types of attacks appearing continually , developing flexible and adaptive security-oriented approaches is a severe challenge . in this context , anomaly-based network intrusion detection techniques are a valuable technology to protect target systems and networks against malicious activities . however , despite the variety of such methods described in the literature in recent years , security tools incorporating anomaly detection functionalities are just starting to appear , and several important problems remain to be solved . this paper begins with a review of the most well-known anomaly-based intrusion detection techniques . then , available platforms , systems under development and research projects in the area are presented . finally , we outline the main challenges to be dealt with for the wide scale deployment of anomaly-based intrusion detectors , with special emphasis on assessment issues . story_separator_special_tag intrusion detection is an important area of research . traditionally , the approach taken to find attacks is to inspect the contents of every packet . however , packet inspection can not easily be performed at high speeds . therefore , researchers and operators started investigating alternative approaches , such as flow-based intrusion detection . in that approach the flow of data through the network is analyzed , instead of the contents of each individual packet . the goal of this paper is to provide a survey of current research in the area of flow-based intrusion detection . the survey starts with a motivation for why flow-based intrusion detection is needed . the concept of flows is explained , and relevant standards are identified . the paper provides a classification of attacks and defense techniques and shows how flow-based techniques can be used to detect scans , worms , botnets and denial of service ( dos ) attacks . story_separator_special_tag data preprocessing is widely recognized as an important stage in anomaly detection . this paper reviews the data preprocessing techniques used by anomaly-based network intrusion detection systems ( nids ) , concentrating on which aspects of the network traffic are analyzed , and what feature construction and selection methods have been used . motivation for the paper comes from the large impact data preprocessing has on the accuracy and capability of anomaly-based nids . the review finds that many nids limit their view of network traffic to the tcp/ip packet headers . time-based statistics can be derived from these headers to detect network scans , network worm behavior , and denial of service attacks . a number of other nids perform deeper inspection of request packets to detect attacks against network services and network applications .
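as a concrete illustration of the flow abstraction used throughout these surveys , here is a schematic aggregation of packets into 5-tuple flow records that keeps only header-level statistics ; the packet tuple layout is an assumption for illustration , not a format from any of the papers .

from collections import defaultdict

# reduce packets to per-flow records keyed by the usual 5-tuple, keeping only
# header-level statistics (no payload), as in flow-based intrusion detection.
# packets are assumed to be (ts, src, dst, sport, dport, proto, size) tuples.
def packets_to_flows(packets):
    flows = defaultdict(lambda: {"packets": 0, "bytes": 0, "start": None, "end": None})
    for ts, src, dst, sport, dport, proto, size in packets:
        f = flows[(src, dst, sport, dport, proto)]
        f["packets"] += 1
        f["bytes"] += size
        f["start"] = ts if f["start"] is None else min(f["start"], ts)
        f["end"] = ts if f["end"] is None else max(f["end"], ts)
    return flows

pkts = [(0.0, "10.0.0.1", "10.0.0.2", 12345, 80, "tcp", 1500),
        (0.1, "10.0.0.1", "10.0.0.2", 12345, 80, "tcp", 400)]
for key, rec in packets_to_flows(pkts).items():
    print(key, rec)

real exporters ( netflow , ipfix , discussed below ) additionally handle flow timeouts and template-based export , which this sketch omits .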
more recent approaches analyze full service responses to detect attacks targeting clients . the review covers a wide range of nids , highlighting which classes of attack are detectable by each of these approaches . data preprocessing is found to predominantly rely on expert domain knowledge for identifying the most relevant parts of network traffic and for constructing the initial candidate set of traffic story_separator_special_tag network anomaly detection is an important and dynamic research area . many network intrusion detection methods and systems ( nids ) have been proposed in the literature . in this paper , we provide a structured and comprehensive overview of various facets of network anomaly detection so that a researcher can become quickly familiar with every aspect of network anomaly detection . we present attacks normally encountered by network intrusion detection systems . we categorize existing network anomaly detection methods and systems based on the underlying computational techniques used . within this framework , we briefly describe and compare a large number of network anomaly detection methods and systems . in addition , we also discuss tools that can be used by network defenders and datasets that researchers in network anomaly detection can use . we also highlight research directions in network anomaly detection . story_separator_special_tag information and communication technology ( ict ) has a great impact on social wellbeing , economic growth and national security in today 's world . generally , ict includes computers , mobile communication devices and networks . ict is also embraced by a group of people with malicious intent , also known as network intruders , cyber criminals , etc . confronting these detrimental cyber activities is one of the international priorities and an important research area . anomaly detection is an important data analysis task which is useful for identifying network intrusions . this paper presents an in-depth analysis of four major categories of anomaly detection techniques which include classification , statistical , information theory and clustering . the paper also discusses research challenges with the datasets used for network intrusion detection . highlights : maps different types of anomalies with network attacks ; provides an up-to-date taxonomy of network anomaly detection ; evaluates effectiveness of different categories of techniques ; explores recent research related to publicly available network intrusion evaluation datasets . story_separator_special_tag this survey paper describes a focused literature survey of machine learning ( ml ) and data mining ( dm ) methods for cyber analytics in support of intrusion detection . short tutorial descriptions of each ml/dm method are provided . based on the number of citations or the relevance of an emerging method , papers representing each method were identified , read , and summarized . because data are so important in ml/dm approaches , some well-known cyber data sets used in ml/dm are described . the complexity of ml/dm algorithms is addressed , discussion of challenges for using ml/dm for cyber security is presented , and some recommendations on when to use a given method are provided . story_separator_special_tag abstract flow-based intrusion detection is an innovative way of detecting intrusions in high-speed networks . flow-based intrusion detection only inspects the packet header and does not analyze the packet payload .
this paper provides a comprehensive survey of the current state of the art in flow-based intrusion detection . it also describes the available flow-based datasets used for evaluation of flow-based intrusion detection systems . the paper proposes a taxonomy for flow-based intrusion detection systems on the basis of the technique used for detection of maliciousness in flow records . we review the architecture and evaluation results of available flow-based intrusion detection systems . we also identify important research challenges for future research in the area of flow-based intrusion detection . story_separator_special_tag abstract network anomaly detection systems ( nadss ) are gaining a more important role in most network defense systems for detecting and preventing potential threats . the paper discusses various aspects of anomaly-based network intrusion detection systems ( nidss ) . the paper explains cyber kill chain models and cyber-attacks that compromise network systems . moreover , the paper describes various decision engine ( de ) approaches , including new ensemble learning and deep learning approaches . the paper also provides more details about benchmark datasets for training and validating de approaches . most applications of nadss , such as data centers , internet of things ( iot ) , as well as fog and cloud computing , are also discussed . finally , we present several experimental explanations which we follow by revealing various promising research directions . story_separator_special_tag cyber-attacks are becoming more sophisticated and thereby presenting increasing challenges in accurately detecting intrusions . failure to prevent the intrusions could degrade the credibility of security services , e.g . data confidentiality , integrity , and availability . numerous intrusion detection methods have been proposed in the literature to tackle computer security threats , which can be broadly classified into signature-based intrusion detection systems ( sids ) and anomaly-based intrusion detection systems ( aids ) . this survey paper presents a taxonomy of contemporary ids , a comprehensive review of notable recent works , and an overview of the datasets commonly used for evaluation purposes . it also presents evasion techniques used by attackers to avoid detection and discusses future research challenges to counter such techniques so as to make computer systems more secure . story_separator_special_tag the aim of this research work is to discover exceptions by using the rough set approach and to structure/represent them in the form of a rule pair , a knowledge structure that consists of a commonsense rule and an exception rule . knowledge structures are compact representations of rules and increase comprehensibility . data mining refers to extracting or mining knowledge from large amounts of data . the overall process of extracting useful information is referred to as knowledge discovery in databases . data mining is a particular step in this process : the application of specific algorithms for extracting patterns ( models ) from data . mining exceptions is getting the attention of researchers because it is interesting to discover exceptions , as they challenge the existing knowledge , lead to the growth of knowledge in new directions and help decision makers make the right decisions even in rare circumstances .
story_separator_special_tag abstract existing machine learning solutions for network-based intrusion detection can not maintain their reliability over time when facing high-speed networks and evolving attacks . in this paper , we propose bigflow , an approach capable of processing evolving network traffic while being scalable to large packet rates . bigflow employs a verification method that checks if the classifier outcome is valid in order to provide reliability . if a suspicious packet is found , an expert may help bigflow to incrementally change the classification model . experiments with bigflow , over a network traffic dataset spanning a full year , demonstrate that it can maintain high accuracy over time . it requires as little as 4 % of storage and between 0.05 % and 4 % of training time , compared with other approaches . bigflow is scalable , coping with a 10-gbps network bandwidth on a 40-core cluster of commodity hardware . story_separator_special_tag network anomaly detection is a vibrant research area . researchers have approached this problem using various techniques such as artificial intelligence , machine learning , and state machine modeling . in this paper , we first review these anomaly detection methods and then describe in detail a statistical signal processing technique based on abrupt change detection . we show that this signal processing technique is effective at detecting several network anomalies . case studies from real network data that demonstrate the power of the signal processing approach to network anomaly detection are presented . the application of signal processing techniques to this area is still in its infancy , and we believe that it has great potential to enhance the field , and thereby improve the reliability of ip networks . story_separator_special_tag data mining and knowledge discovery in databases have been attracting a significant amount of research , industry , and media attention of late . what is all the excitement about ? this article provides an overview of this emerging field , clarifying how data mining and knowledge discovery in databases are related both to each other and to related fields , such as machine learning , statistics , and databases . the article mentions particular real-world applications , specific data-mining techniques , challenges involved in real-world applications of knowledge discovery , and current and future research directions in the field . story_separator_special_tag machine learning algorithms can figure out how to perform important tasks by generalizing from examples . this is often feasible and cost-effective where manual programming is not . as more data becomes available , more ambitious problems can be tackled . as a result , machine learning is widely used in computer science and other fields . however , developing successful machine learning applications requires a substantial amount of black art that is hard to find in textbooks . this article summarizes twelve key lessons that machine learning researchers and practitioners have learned . these include pitfalls to avoid , important issues to focus on , and answers to common questions . story_separator_special_tag gigabit per second and higher bandwidths imply a greater challenge to perform lossless packet capturing on generic pc architectures . this is because software-based capture solutions did not improve as fast as network bandwidth and still heavily rely on the os 's packet processing mechanism .
there are hardware and operating system factors that primarily affect capture performance . this paper summarizes these parameters and shows how to predict the packet loss ratio during the capture process . story_separator_special_tag driven by the growing data transfer needs of the scientific community and the standardization of the 100 gbps ethernet specification , 100 gbps is now becoming a reality for many hpc sites . this tenfold increase in bandwidth creates a number of significant technical challenges . we show that by using the heavy tail flow effect as a filter , it should be possible to perform active ids analysis at this traffic rate using a cluster of commodity systems driven by a dedicated load balancing mechanism . additionally , we examine the nature of current network traffic characteristics , applying them to 100 gbps speeds . story_separator_special_tag this document specifies the ip flow information export ( ipfix ) protocol , which serves as a means for transmitting traffic flow information over the network . in order to transmit traffic flow information from an exporting process to a collecting process , a common representation of flow data and a standard means of communicating them are required . this document describes how the ipfix data and template records are carried over a number of transport protocols from an ipfix exporting process to an ipfix collecting process . this document obsoletes rfc 5101 . story_separator_special_tag this document specifies the data export format for version 9 of cisco systems ' netflow services , for use by implementations on the network elements and/or matching collector programs . the version 9 export format uses templates to provide access to observations of ip packet flows in a flexible and extensible manner . a template defines a collection of fields , with corresponding descriptions of structure and semantics . this memo provides information for the internet community . story_separator_special_tag sampling techniques are widely used for traffic measurements at high link speed to conserve router resources . traditionally , sampled traffic data is used for network management tasks such as traffic matrix estimations , but recently it has also been used in numerous anomaly detection algorithms , as security analysis becomes increasingly critical for network providers . while the impact of sampling on traffic engineering metrics such as flow size and mean rate is well studied , its impact on anomaly detection remains an open question . this paper presents a comprehensive study on whether existing sampling techniques distort traffic features critical for effective anomaly detection . we sampled packet traces captured from a tier-1 ip-backbone using four popular methods : random packet sampling , random flow sampling , smart sampling , and sample-and-hold . the sampled data is then used as input to detect two common classes of anomalies : volume anomalies and port scans . since it is infeasible to enumerate all existing solutions , we study three representative algorithms : a wavelet-based volume anomaly detector and two portscan detection algorithms based on hypotheses testing . our results show that all four sampling methods introduce fundamental bias that degrades story_separator_special_tag intrusion detection ( id ) is an important component of infrastructure protection mechanisms .
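the sampling bias discussed above can be illustrated in a few lines : under independent packet sampling with probability p , rescaling by 1/p recovers large flows on average , while small flows often vanish entirely , distorting exactly the features that scan and anomaly detectors rely on . the flow sizes below are synthetic .

import random

# sample each packet independently with probability p and invert by 1/p to
# estimate the original per-flow volume; small flows are frequently missed.
random.seed(7)
p = 0.01
flows = {"elephant": 100000, "mouse": 12}  # packets per flow (synthetic)
for name, pkts in flows.items():
    sampled = sum(random.random() < p for _ in range(pkts))
    print(name, "estimated:", sampled / p, "true:", pkts)

the elephant flow is estimated well , while the mouse flow is usually estimated as zero , which is one concrete source of the degradation the study above measures .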
intrusion detection systems ( idss ) need to be accurate , adaptive , and extensible . given these requirements and the complexities of today 's network environments , we need a more systematic and automated ids development process rather than the pure knowledge encoding and engineering approaches . this article describes a novel framework , madam id , for mining audit data for automated models for intrusion detection . this framework uses data mining algorithms to compute activity patterns from system audit data and extracts predictive features from the patterns . it then applies machine learning algorithms to the audit records that are processed according to the feature definitions to generate intrusion detection rules . results from the 1998 darpa intrusion detection evaluation showed that our id model was one of the best performing of all the participating systems . we also briefly discuss our experience in converting the detection models produced by off-line data mining programs to real-time modules of existing idss . story_separator_special_tag the constant increase of attacks against networks and their resources ( as recently shown by the codered worm ) makes it necessary to protect these valuable assets . firewalls are now a common installation to repel intrusion attempts in the first place . intrusion detection systems ( ids ) , which try to detect malicious activities instead of preventing them , offer additional protection when the first defense perimeter has been penetrated . id systems attempt to pin down attacks by comparing collected data to predefined signatures known to be malicious ( signature based ) or to a model of legal behavior ( anomaly based ) . anomaly based systems have the advantage of being able to detect previously unknown attacks but they suffer from the difficulty of building a solid model of acceptable behavior and the high number of alarms caused by unusual but authorized activities . we present an approach that utilizes application specific knowledge of the network services that should be protected . this information helps to extend current , simple network traffic models to form an application model that allows the detection of malicious content hidden in single network packets . we describe the features of our proposed story_separator_special_tag we introduce an algorithm called lerad that learns rules for finding rare events in nominal time-series data with long range dependencies . we use lerad to find anomalies in network packets and tcp sessions to detect novel intrusions . we evaluated lerad on the 1999 darpa/lincoln laboratory intrusion detection evaluation data set and on traffic collected in a university departmental server environment . story_separator_special_tag over the past few years , the number of rfcs that define and use ipsec and ike has greatly proliferated . this is complicated by the fact that these rfcs originate from numerous ietf working groups : the original ipsec wg , its various spin-offs , and other wgs that use ipsec and/or ike to protect their protocols ' traffic . this document is a snapshot of ipsec- and ike-related rfcs . it includes a brief description of each rfc , along with background information explaining the motivation and context of ipsec 's outgrowths and extensions . it obsoletes the previous ipsec document roadmap [ rfc2411 ] . story_separator_special_tag anomaly-based network intrusion detection systems ( ids ) are valuable tools for the defense-in-depth of computer networks .
unsupervised or unlabeled learning approaches for network anomaly detection have been recently proposed . such anomaly-based network ids are able to detect ( unknown ) zero-day attacks , although much care has to be dedicated to controlling the amount of false positives generated by the detection system . as a matter of fact , it has been shown that the false positive rate is the true limiting factor for the performance of ids , and that in order to substantially increase the bayesian detection rate , p ( intrusion|alarm ) , the ids must have a very low false positive rate ( e.g. , as low as $ 10^ { -5 } $ or even lower ) . in this paper we present mcpad ( multiple classifier payload-based anomaly detector ) , a new accurate payload-based anomaly detection system that consists of an ensemble of one-class classifiers . we show that our anomaly detector is very accurate in detecting network attacks that bear some form of shell-code in the malicious payload . this holds true even in the case of polymorphic attacks and for very low story_separator_special_tag nowadays the security of web applications is one of the key topics in computer security . among all the solutions that have been proposed so far , the analysis of the http payload at the byte level has proven to be effective as it does not require the detailed knowledge of the applications running on the web server . the solutions proposed in the literature actually achieved good results for the detection rate , while there is still room for reducing the false positive rate . to this end , in this paper we propose hmmpayl , an ids where the payload is represented as a sequence of bytes , and the analysis is performed using hidden markov models ( hmm ) . the algorithm we propose for feature extraction and the joint use of hmm guarantee the same expressive power as n-gram analysis , while overcoming its computational complexity . in addition , we designed hmmpayl following the multiple classifiers system paradigm to provide better classification accuracy , to increase the difficulty of evading the ids , and to mitigate the weaknesses due to a non optimal choice of hmm parameters . experimental results story_separator_special_tag the software infrastructure used on volunteered distributed computing is evolving to meet the diverse needs of researchers . these emerging systems are being built using open source tools . we describe a number of such systems in the context of this emerging field . throughout this paper we revisit concepts presented in the april 2004 paper entitled `` tapping the matrix '' . 1 . machines are underutilized : modern machines are capable of executing billions of instructions in the time it takes us to blink . this fact may be less surprising when we consider that the typical machines sold today feature processors running at multiple gigahertz supported by hundreds of megabytes of main memory . surprisingly , the vast majority of personal computers are underutilized . the truth is many machines are idle for as much as 90 % of an entire day . even when active , most applications utilize fewer than 10 % of the machines ' cpu . furthermore , this trend shows no signs of reversing ; in fact , conservative estimates indicate that there are roughly 800 million personal computers in use .
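a quick numeric check of the base-rate point made above ( the bayesian detection rate p ( intrusion|alarm ) ) : even a perfect detector needs a very low false positive rate when intrusions are rare . the rates below are illustrative , not taken from any of the papers .

# bayes' rule for p(intrusion | alarm): even with tpr = 1, an fpr of 1e-5
# yields only ~50% when intrusions occur at a 1-in-100000 base rate.
def bayesian_detection_rate(tpr, fpr, base_rate):
    return tpr * base_rate / (tpr * base_rate + fpr * (1 - base_rate))

print(bayesian_detection_rate(1.0, 1e-5, 1e-5))  # ~0.5
print(bayesian_detection_rate(1.0, 1e-3, 1e-5))  # ~0.01: alarms are mostly noise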
about 150 million are internet connected machines which are expected to increase story_separator_special_tag an example method disclosed herein to monitor internet usage comprises intercepting , using a kernel extension executing in an operating system kernel of a device , a first request to be sent to a content source by a monitored client executing on the device , providing a first certificate to the client in response to intercepting the first request sent by the client to the content source , the first certificate associated with a meter that is to monitor internet usage , sending a second request to the content source , receiving a second certificate that is associated with the content source in response to sending the second request to the content source , and obtaining a session key to decrypt encrypted traffic exchanged between the content source and the client , the session key being obtained from the client based on the first certificate and being sent to the content source based on the second certificate . story_separator_special_tag secure sockets layer ( ssl ) [ 1 ] and its successor transport layer security ( tls ) [ 2 ] have become key components of the modern internet . the privacy , integrity , and authenticity [ 3 ] [ 4 ] provided by these protocols are critical to allowing sensitive communications to occur . without these systems , e-commerce , online banking , and business-to-business exchange of information would likely be far less frequent . story_separator_special_tag as https deployment grows , middlebox and antivirus products are increasingly intercepting tls connections to retain visibility into network traffic . in this work , we present a comprehensive study on the prevalence and impact of https interception . first , we show that web servers can detect interception by identifying a mismatch between the http user-agent header and tls client behavior . we characterize the tls handshakes of major browsers and popular interception products , which we use to build a set of heuristics to detect interception and identify the responsible product . we deploy these heuristics at three large network providers : ( 1 ) mozilla firefox update servers , ( 2 ) a set of popular e-commerce sites , and ( 3 ) the cloudflare content distribution network . we find more than an order of magnitude more interception than previously estimated and with dramatic impact on connection security . to understand why security suffers , we investigate popular middleboxes and client-side security software , finding that nearly all reduce connection security and many introduce severe vulnerabilities . drawing on our measurements , we conclude with a discussion on recent proposals to safely monitor https and recommendations story_separator_special_tag numerous approaches for identifying important content for automatic text summarization have been developed to date . topic representation approaches first derive an intermediate representation of the text that captures the topics discussed in the input . based on these representations of topics , sentences in the input document are scored for importance . in contrast , in indicator representation approaches , the text is represented by a diverse set of possible indicators of importance which do not aim at discovering topicality . these indicators are combined , very often using machine learning techniques , to score the importance of each sentence .
finally , a summary is produced by selecting sentences in a greedy approach , choosing the sentences that will go in the summary one by one , or globally optimizing the selection , choosing the best set of sentences to form a summary . in this chapter we give a broad overview of existing approaches based on these distinctions , with particular attention to how representation , sentence scoring or summary selection strategies alter the overall performance of the summarizer . we also point out some of the peculiarities of the task of summarization which have posed challenges story_separator_special_tag abstract network and internet security is a critical universal issue . the increased rate of cyber terrorism has put national security at risk . in addition , internet attacks have caused severe damage to different sectors ( i.e. , individuals , economy , enterprises , organizations and governments ) . network intrusion detection systems ( nids ) are one of the solutions against these attacks . however , nids always need to improve their performance in terms of increasing accuracy and decreasing false alarms . integrating feature selection with intrusion detection has been shown to be a successful approach since feature selection can help in selecting the most informative features from the entire set of features . usually , for the stealthy and low profile attacks ( zero day attacks ) , there are few neatly concealed packets distributed over a long period of time to mislead firewalls and nids . besides , there are many features extracted from those packets , which may make some machine learning-based feature selection methods suffer from overfitting , especially when the data have large numbers of features and relatively small numbers of examples . in this paper , we are proposing a nids story_separator_special_tag we describe bro , a stand-alone system for detecting network intruders in real-time by passively monitoring a network link over which the intruder 's traffic transits . we give an overview of the system 's design , which emphasizes high-speed ( fddi-rate ) monitoring , real-time notification , clear separation between mechanism and policy , and extensibility . to achieve these ends , bro is divided into an `` event engine '' that reduces a kernel-filtered network traffic stream into a series of higher-level events , and a `` policy script interpreter '' that interprets event handlers written in a specialized language used to express a site 's security policy . event handlers can update state information , synthesize new events , record information to disk , and generate real-time notifications via syslog . we also discuss a number of attacks that attempt to subvert passive monitoring systems and defenses against these , and give particulars of how bro analyzes the four applications integrated into it so far : finger , ftp , portmapper and telnet . the system is publicly available in source code form . story_separator_special_tag the integrated meta-model for organizational resource audit is a consistent and comprehensive instrument for auditing intangible resources and their relations and connections from the network perspective . this book undertakes a critically important problem of management sciences , poorly recognized in the literature although determining the current and future competitiveness of organizations , sectors , and economies . the author notes the need to introduce a theoretical input , which is manifested by the meta-model .
an expression of this treatment is the inclusion of the network as a structure of activities , further knowledge as an activity , and intangible assets as intellectual capital characterized by a structure of connections . the case study presented is an illustration of the use of network analysis tools and other instruments to identify not only the most important resources , tasks , or actors , as well as their effectiveness , but also to connect the identified networks with each other . the author opens the field for applying her methodology , revealing the structural and dynamic features of the intangible resources of the organization . the novelty of the proposed meta-model shows the way to in-depth applications of network analysis techniques in story_separator_special_tag an intrusion detection system ( ids ) traditionally inspects the payload information of packets . this approach is not valid for encrypted traffic , as the payload information is not available . there are two approaches , with different detection capabilities , to overcome the challenges of encryption : traffic decryption or traffic analysis . this paper presents a comprehensive survey of the research related to idss for encrypted traffic . the focus is on traffic analysis , which does not need traffic decryption . one of the major limitations of the surveyed research is that most of it concentrates on detecting the same limited types of attacks , such as brute force or scanning attacks . both the security enhancements to be derived from using the ids and the security challenges introduced by the encrypted traffic are discussed . by categorizing the existing work , a set of conclusions and proposals for future research directions are presented . story_separator_special_tag as various services are provided as web applications , attacks against web applications constitute a serious problem . intrusion detection systems ( idses ) are one solution ; however , these systems do not work effectively when the accesses are encrypted by protocols . because the idses inspect the contents of a packet , it is difficult to find attacks by the current ids . this paper presents a novel approach to anomaly detection for encrypted web accesses . this approach applies encrypted traffic analysis to intrusion detection , which analyzes contents of encrypted traffic using only data size and timing without decryption . first , the system extracts information from encrypted traffic , which is a set comprising data size and timing for each web client . second , the accesses are distinguished based on similarity of the information and access frequencies are calculated . finally , malicious activities are detected according to rules generated from the frequency of accesses and characteristics of http traffic . the system does not extract private information or require enormous pre-operation beforehand , which are needed in conventional encrypted traffic analysis . we show that the system detects various attacks with a high story_separator_special_tag in recent years the internet has evolved into a critical communication infrastructure that is omnipresent in almost all aspects of our daily life . this dependence of modern societies on the internet has also resulted in more criminals using the internet for their purposes , causing a steady increase of attacks , both in terms of quantity as well as quality .
although research on the detection of attacks has been performed for several decades , today 's systems are not able to cope with modern attack vectors . one of the reasons is the increasing use of encrypted communication that strongly limits the detection of malicious activities . while encryption provides a number of significant advantages for the end user like , for example , an increased level of privacy , many classical approaches of intrusion detection fail . since it is typically not possible to decrypt the traffic , performing analysis w.r.t . the presence of certain patterns is almost impossible . to overcome this shortcoming we present a new behavior-based detection architecture that uses similarity measurements to detect intrusions as well as insider activities like data exfiltration in encrypted environments . story_separator_special_tag from the first appearance of network attacks , the internet worm , to the most recent one in which the servers of several famous e-business companies were paralyzed for several hours , causing huge financial losses , network-based attacks have been increasing in frequency and severity . as a powerful weapon to protect networks , intrusion detection has been gaining a lot of attention . traditionally , intrusion detection techniques are classified into two broad categories : misuse detection and anomaly detection . misuse detection aims to detect well-known attacks as well as slight variations of them , by characterizing the rules that govern these attacks . due to its nature , misuse detection has low false alarm rates but is unable to detect any attacks that lie beyond its knowledge . anomaly detection is designed to capture any deviations from the established profiles of users ' and systems ' normal behavior patterns . although , in principle , anomaly detection has the ability to detect new attacks , in practice this is far from easy . anomaly detection has the potential to generate too many false alarms , and it is very time-consuming and labor-intensive to sift true intrusions from story_separator_special_tag current network monitoring systems rely strongly on signature-based and supervised-learning-based detection methods to hunt out network attacks and anomalies . despite being opposite in nature , both approaches share a common downside : they require the knowledge provided by an expert system , either in terms of anomaly signatures , or as normal-operation profiles . in a diametrically opposite perspective we introduce unada , an unsupervised network anomaly detection algorithm for knowledge-independent detection of anomalous traffic . unada uses a novel clustering technique based on sub-space-density clustering to identify clusters and outliers in multiple low-dimensional spaces . the evidence of traffic structure provided by these multiple clusterings is then combined to produce an abnormality ranking of traffic flows , using a correlation-distance-based approach . we evaluate the ability of unada to discover network attacks in real traffic without relying on signatures , learning , or labeled traffic . additionally , we compare its performance against previous unsupervised detection methods using traffic from two different networks . story_separator_special_tag network anomalies , circumstances in which the network behavior deviates from its normal operational baseline , can be due to various factors such as network overload conditions , malicious/hostile activities , denial of service attacks , and network intrusions .
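a toy outlier ranking in the spirit of the unsupervised , knowledge-independent detectors above ( e.g . unada 's clustering and outlier ranking ) , here reduced to a k-nearest-neighbour distance score ; the data and the choice of k are illustrative , not from the paper .

import numpy as np

# score each point by its mean distance to its k nearest neighbours;
# isolated points (candidate anomalies) receive large scores, with no
# signatures, labels, or training profiles required.
def knn_outlier_scores(x, k=5):
    d = np.linalg.norm(x[:, None, :] - x[None, :, :], axis=-1)
    d.sort(axis=1)
    return d[:, 1:k + 1].mean(axis=1)  # column 0 is the self-distance

rng = np.random.default_rng(2)
x = np.vstack([rng.normal(size=(200, 4)), rng.normal(6.0, 1.0, size=(1, 4))])
print(knn_outlier_scores(x).argmax())  # very likely 200, the injected outlier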
new detection schemes based on machine learning principles are therefore desirable as they can learn the nature of normal traffic behavior and autonomously adapt to variations in the structure of 'normality ' as well as recognize the significant deviations as suspicious or anomalous events . the main advantages of these techniques are that , in principle , they are not restricted to any specific environment and that they can provide a way of detecting unknown attacks . detection performance is directly correlated with the traffic model quality , in terms of its ability to represent the traffic behavior through its most characterizing internal dynamics . starting from these ideas , we developed a two-stage anomaly detection strategy based on multiple distributed sensors located throughout the network . by using independent component analysis , the first step , modeled as a blind source separation problem , extracts the fundamental traffic components , the 'source ' signals , corresponding to the independent traffic dynamics story_separator_special_tag abstract a popular approach for detecting network intrusion attempts is to monitor the network traffic for anomalies . extensive research effort has been invested in anomaly-based network intrusion detection using machine learning techniques ; however , in general these techniques remain a research topic , rarely being used in real-world environments . in general , the approaches proposed in the literature lack representative datasets and reliable evaluation methods that consider real-world network properties during the system evaluation . in general , the approaches adopt a set of assumptions about the training data , as well as about the validation methods , rendering the created system unreliable for open-world usage . this paper presents a new method for creating intrusion databases . the objective is that the databases should be easy to update and reproduce with real and valid traffic , representative , and publicly available . using our proposed method , we propose a new evaluation scheme specific to the machine learning intrusion detection field . sixteen intrusion databases were created , and each of the assumptions frequently adopted in studies in the intrusion detection literature regarding network traffic behavior was validated . to make machine learning detection schemes feasible story_separator_special_tag the number of novel attacks observed in networked systems increases every day . due to the large amount of data generated over the network , its storage for further analysis may not be feasible . moreover , current attacks are becoming more sophisticated , as the attackers are attempting to evade traditional intrusion detection mechanisms by perverting their properties . this paper presents a novel real-time ( ongoing ) network traffic measurement approach that supports resilient analysis for stream learning intrusion detection . the network data is grouped at runtime according to its characteristics , while each network traffic flow is discretized at regular time intervals . each network flow is classified by a multi-view stream learning classifier pool , defining the network flow class through a majority voting approach . the proposal is able to provide resiliency to the classifiers even for the detection of unknown attacks . the evaluation tests for the average operation point ( 25 views ) provide an increase in the system resilience to adversarial attacks of 22 % when compared to traditional approaches .
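the majority-voting step of the multi-view pool described above reduces to a few lines ; the labels and views below are stand-ins , and any per-view model could produce them .

from collections import Counter

# combine one predicted label per view for the same network flow;
# the most common label across views wins.
def majority_vote(predictions):
    return Counter(predictions).most_common(1)[0][0]

print(majority_vote(["attack", "normal", "attack"]))  # -> attack

the design point is that an adversary who perturbs the features of one view must perturb a majority of views to flip the final decision .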
moreover , in the scalability experiments with a 10-node ( single core each ) cluster testbed , the network flow story_separator_special_tag recently , with the increased use of network communication , the risk of compromising the information has grown immensely . intrusions have become more sophisticated and few methods can achieve efficient results while the network behavior constantly changes . this paper proposes an intrusion detection system based on modeling distributions of network statistics and extreme learning machine ( elm ) to achieve high detection rates of intrusions . the proposed model aggregates the network traffic at the ip subnetwork level and the distribution of statistics are collected for the most frequent ipv4 addresses encountered as destination . the obtained probability distributions are learned by elm . this model is evaluated on the iscx-ids 2012 dataset , which is collected using a real-time testbed . the model is compared against leading approaches using the same dataset . experimental results show that the presented method achieves an average detection rate of 91 % and a misclassification rate of 9 % . the experimental results show that our methods significantly improve the performance of the simple elm despite a trade-off between performance and time complexity . furthermore , our methods achieve good performance in comparison with the other few state-of-the-art approaches evaluated on story_separator_special_tag machine-learning based intrusion detection classifiers are able to detect unknown attacks , but at the same time , they may be susceptible to evasion by obfuscation techniques . an adversary intruder which possesses a crucial knowledge about a protection system can easily bypass the detection module . the main objective of our work is to improve the performance capabilities of intrusion detection classifiers against such adversaries . to this end , we firstly propose several obfuscation techniques of remote attacks that are based on the modification of various properties of network connections ; then we conduct a set of comprehensive experiments to evaluate the effectiveness of intrusion detection classifiers against obfuscated attacks . we instantiate our approach by means of a tool , based on netem and metasploit , which implements our obfuscation operators on any tcp communication . this allows us to generate modified network traffic for machine learning experiments employing features for assessing network statistics and behavior of tcp connections . we perform the evaluation of five classifiers : gaussian naive bayes , gaussian naive bayes with kernel density estimation , logistic regression , decision tree , and support vector machines . our experiments confirm the assumption that story_separator_special_tag the intrusion detection systems ( idss ) are essential elements when it comes to the protection of an ict infrastructure . a misuse ids is a stable method that can achieve high attack detection rates ( adr ) while keeping false alarm rates under acceptable levels . however , the misuse idss suffer from the lack of agility , as they are unqualified to adapt to new and unknown environments . that is , such an ids puts the security administrator into an intensive engineering task for keeping the ids up-to-date every time it faces efficiency drops . considering the extended size of modern networks and the complexity of big network traffic data , the problem exceeds the substantial limits of human managing capabilities . 
in this regard , we propose a novel methodology which combines the benefits of self-taught learning and mape-k frameworks to deliver a scalable , self-adaptive , and autonomous misuse ids . our methodology enables the misuse ids to sustain high adr , even if it is imposed on consecutive and drastic environmental changes . through the utilization of deep-learning based methods , the ids is able to grasp an attack 's nature based on the story_separator_special_tag nowadays , network intrusion detectors mainly rely on knowledge databases to detect suspicious traffic . these databases have to be continuously updated which requires important human resources and time . unsupervised network anomaly detectors overcome this issue by using intelligent techniques to identify anomalies without any prior knowledge . however , these systems are often very complex as they need to explore the network traffic to identify flows patterns . therefore , they are often unable to meet real-time requirements . in this paper , we present a new online and real-time unsupervised network anomaly detection algorithm ( orunada ) . our solution relies on a discrete time-sliding window to update continuously the feature space and an incremental grid clustering to detect rapidly the anomalies . the evaluations showed that orunada can process online large network traffic while ensuring a low detection delay and good detection performance . the experiments performed on the traffic of a core network of a spanish intermediate internet service provider demonstrated that orunada detects in less than half a second an anomaly after its occurrence . furthermore , the results highlight that our solution outperforms in terms of true positive rate and false positive rate story_separator_special_tag the increasing practicality of large-scale flow capture makes it possible to conceive of traffic analysis methods that detect and identify a large and diverse set of anomalies . however the challenge of effectively analyzing this massive data source for anomaly diagnosis is as yet unmet . we argue that the distributions of packet features ( ip addresses and ports ) observed in flow traces reveals both the presence and the structure of a wide range of anomalies . using entropy as a summarization tool , we show that the analysis of feature distributions leads to significant advances on two fronts : ( 1 ) it enables highly sensitive detection of a wide range of anomalies , augmenting detections by volume-based methods , and ( 2 ) it enables automatic classification of anomalies via unsupervised learning . we show that using feature distributions , anomalies naturally fall into distinct and meaningful clusters . these clusters can be used to automatically classify anomalies and to uncover new anomaly types . we validate our claims on data from two backbone networks ( abilene and geant ) and conclude that feature distributions show promise as a key element of a fairly general network anomaly story_separator_special_tag in recent years , much research focused on entropy as a metric describing the `` chaos '' inherent to network traffic . in particular , network entropy time series turned out to be a scalable technique to detect unexpected behavior in network traffic . in this paper , we propose an algorithm capable of detecting abrupt changes in network entropy time series . abrupt changes indicate that the underlying frequency distribution of network traffic has changed significantly . 
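the entropy-based detectors above reduce each window of traffic to a single number; a minimal sketch (the window construction, the choice of destination ports as the monitored feature, and the 1-bit jump threshold are all assumptions) looks like:

    import numpy as np
    from collections import Counter

    def shannon_entropy(values):
        # empirical entropy (in bits) of the value distribution in one window
        counts = np.array(list(Counter(values).values()), dtype=float)
        p = counts / counts.sum()
        return float(-(p * np.log2(p)).sum())

    def abrupt_changes(port_windows, threshold=1.0):
        # flag window i when entropy shifts by more than `threshold` bits
        h = np.array([shannon_entropy(w) for w in port_windows])
        return [i for i in range(1, len(h)) if abs(h[i] - h[i - 1]) > threshold]

    # toy traffic: normal windows hit many ports; a flood concentrates on one port
    rng = np.random.default_rng(1)
    normal = [rng.integers(1, 1024, size=500).tolist() for _ in range(5)]
    flood = [[80] * 480 + rng.integers(1, 1024, size=20).tolist()]
    print(abrupt_changes(normal + flood))  # flags the transition into the flood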
empirical evidence suggests that abrupt changes are often caused by malicious activity such as ( d ) dos , network scans and worm activity , just to name a few . our experiments indicate that the proposed algorithm is able to reliably identify significant changes in network entropy time series . we believe that our approach helps operators of large-scale computer networks in identifying anomalies which are not visible in flow statistics . story_separator_special_tag detecting multiple and various network intrusions is essential to maintain the reliability of network services . the problem of network intrusion detection can be regarded as a pattern recognition problem . traditional detection approaches neglect the correlation information contained in groups of network traffic samples which leads to their failure to improve the detection effectiveness . this paper directly utilizes the covariance matrices of sequential samples to detect multiple network attacks . it constructs a covariance feature space where the correlation differences among sequential samples are evaluated . two statistical supervised learning approaches are compared : a proposed threshold based detection approach and a traditional decision tree approach . experimental results show that both achieve high performance in distinguishing multiple known attacks while the threshold based detection approach offers an advantage of identifying unknown attacks . it is also pointed out that utilizing statistical information in groups of samples , especially utilizing the covariance information , will benefit the detection effectiveness . story_separator_special_tag this paper presents a covariance-matrix modeling and detection approach to detecting various flooding attacks . based on the investigation of correlativity changes of monitored network features during flooding attacks , this paper employs statistical covariance matrices to build a norm profile of normal activities in information systems and directly utilizes the changes of covariance matrices to detect various flooding attacks . the classification boundary is constrained by a threshold matrix , where each element evaluates the degree to which an observed covariance matrix is different from the norm profile in terms of the changes of correlation between the monitored network features represented by this element . based on chebyshev inequality theory , we give a practical ( heuristic ) approach to determining the threshold matrix . furthermore , the result matrix obtained in the detection serves as the second-order features to characterize the detected flooding attack . the performance of the approach is examined by detecting neptune and smurf attacks-two common distributed denial-of-service flooding attacks . the evaluation results show that the detection approach can accurately differentiate the flooding attacks from the normal traffic . moreover , we demonstrate that the system extracts a stable set of the second-order features story_separator_special_tag abnormal network traffic analysis has become an increasingly important research topic to protect computing infrastructures from intruders . yet , it is challenging to accurately discover threats due to the high volume of network traffic . to have better knowledge about network intrusions , this paper focuses on designing a multi-level network detection method . 
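a minimal sketch of the covariance-matrix idea described above, assuming groups of sequential feature vectors and a simple mean-plus-k-sigma threshold matrix in place of the chebyshev-derived one:

    import numpy as np

    def cov_of_group(group):
        # covariance matrix over one group of sequential feature vectors
        return np.cov(group, rowvar=False)

    # norm profile: mean covariance matrix over groups of normal traffic
    rng = np.random.default_rng(2)
    normal_groups = [rng.normal(size=(50, 4)) for _ in range(100)]
    profile = np.mean([cov_of_group(g) for g in normal_groups], axis=0)

    # threshold matrix: per-element tolerance of k standard deviations of the
    # element-wise deviations observed on normal groups (k = 3 is an assumption)
    devs = np.stack([np.abs(cov_of_group(g) - profile) for g in normal_groups])
    threshold = devs.mean(axis=0) + 3.0 * devs.std(axis=0)

    def is_attack(group):
        # alarm if any correlation entry drifts beyond its tolerance
        return bool((np.abs(cov_of_group(group) - profile) > threshold).any())

    flood = rng.normal(size=(50, 4))
    flood[:, 0] = flood[:, 1] * 5  # a flood induces correlation between features
    print(is_attack(normal_groups[0]), is_attack(flood))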
mainly , it is composed of three steps : ( 1 ) understanding hidden underlying patterns from network traffic data by creating reliable rules to identify network abnormality , ( 2 ) generating a predictive model to determine exact attack categories , and ( 3 ) integrating a visual analytics tool to conduct an interactive visual analysis and validate the identified intrusions with transparent reasons . to verify our approach , a broadly known intrusion dataset ( i.e . nsl-kdd ) is used . we found that the generated rules maintain a high performance rate and provide clear explanations . the proposed predictive model achieved about 96 % accuracy in detecting exact attack categories . with the interactive visual analysis , a significant difference among the attack categories was discovered by visually representing attacks in separated clusters . overall , our multi-level detection method is well-suited story_separator_special_tag abstract the evaluation of algorithms and techniques to implement intrusion detection systems heavily relies on the existence of well designed datasets . in the last years , a lot of effort has been put toward building these datasets . yet , there is still room to improve . in this paper , a comprehensive review of existing datasets is first done , placing emphasis on their main shortcomings . then , we present a new dataset that is built with real traffic and up-to-date attacks . the main advantage of this dataset over previous ones is its usefulness for evaluating idss that consider long-term evolution and traffic periodicity . models that consider differences in daytime/nighttime or weekdays/weekends can also be trained and evaluated with it . we discuss all the requirements for a modern ids evaluation dataset and analyze how the one presented here meets the different needs . story_separator_special_tag evaluating anomaly detectors is a crucial task in traffic monitoring made particularly difficult due to the lack of ground truth . the goal of the present article is to assist researchers in the evaluation of detectors by providing them with labeled anomaly traffic traces . we aim at automatically finding anomalies in the mawi archive using a new methodology that combines different and independent detectors . a key challenge is to compare the alarms raised by these detectors , though they operate at different traffic granularities . the main contribution is to propose a reliable graph-based methodology that combines any anomaly detector outputs . we evaluated four unsupervised combination strategies ; the best is the one that is based on dimensionality reduction . the synergy between anomaly detectors permits the detection of twice as many anomalies as the most accurate detector , and the rejection of numerous false positive alarms reported by the detectors . significant anomalous traffic features are extracted from reported alarms , hence the labels assigned to the mawi archive are concise . the results on the mawi traffic are publicly available and updated daily . also , this approach permits the inclusion of the results of upcoming anomaly detectors story_separator_special_tag it is clear that cyber-attacks are a danger that must be addressed with great resolve , as they threaten the information infrastructure upon which we all depend .
many studies have been published expressing varying levels of success with machine learning approaches to combating cyber-attacks , but many modern studies still focus on training and evaluating with very outdated datasets containing old attacks that are no longer a threat , and also lack data on new attacks . recent datasets like unsw-nb15 and santa have been produced to address this problem . even so , these modern datasets suffer from class imbalance , which reduces the efficacy of predictive models trained using these datasets . herein we evaluate several pre-processing methods for addressing the class imbalance problem ; using several of the most popular machine learning algorithms and a variant of unsw-nb15 based upon the attributes from the santa dataset . story_separator_special_tag with the continuous expansion of data availability in many large-scale , complex , and networked systems , such as surveillance , security , internet , and finance , it becomes critical to advance the fundamental understanding of knowledge discovery and analysis from raw data to support decision-making processes . although existing knowledge discovery and data engineering techniques have shown great success in many real-world applications , the problem of learning from imbalanced data ( the imbalanced learning problem ) is a relatively new challenge that has attracted growing attention from both academia and industry . the imbalanced learning problem is concerned with the performance of learning algorithms in the presence of underrepresented data and severe class distribution skews . due to the inherent complex characteristics of imbalanced data sets , learning from such data requires new understandings , principles , algorithms , and tools to transform vast amounts of raw data efficiently into information and knowledge representation . in this paper , we provide a comprehensive review of the development of research in learning from imbalanced data . our focus is to provide a critical review of the nature of the problem , the state-of-the-art technologies , and the current story_separator_special_tag prevention of security breaches completely using the existing security technologies is unrealistic . as a result , intrusion detection is an important component in network security . however , many current intrusion detection systems ( idss ) are rule-based systems , which have limitations to detect novel intrusions . moreover , encoding rules is time-consuming and highly depends on the knowledge of known intrusions . therefore , we propose new systematic frameworks that apply a data mining algorithm called random forests in misuse , anomaly , and hybrid-network-based idss . in misuse detection , patterns of intrusions are built automatically by the random forests algorithm over training data . after that , intrusions are detected by matching network activities against the patterns . in anomaly detection , novel intrusions are detected by the outlier detection mechanism of the random forests algorithm . after building the patterns of network services by the random forests algorithm , outliers related to the patterns are determined by the outlier detection algorithm . the hybrid detection system improves the detection performance by combining the advantages of the misuse and anomaly detection . 
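a minimal sketch of the misuse-plus-anomaly pattern just described, assuming synthetic flows and using low ensemble vote confidence as the outlier signal (the cited work instead relies on the random forests proximity-based outlier measure):

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    rng = np.random.default_rng(3)
    x0 = rng.normal(size=(1000, 10))                   # normal traffic
    x1 = rng.normal(size=(1000, 10)); x1[:, 0] += 4.0  # known attack a
    x2 = rng.normal(size=(1000, 10)); x2[:, 3] -= 4.0  # known attack b
    x = np.vstack([x0, x1, x2])
    y = np.repeat([0, 1, 2], 1000)

    forest = RandomForestClassifier(n_estimators=100, random_state=0).fit(x, y)

    def detect(flows, min_confidence=0.8):
        proba = forest.predict_proba(flows)   # misuse step: match known patterns
        labels = proba.argmax(axis=1)
        # anomaly step: flows unlike every training class tend to draw split votes
        labels[proba.max(axis=1) < min_confidence] = -1  # -1 = suspected novelty
        return labels

    ambiguous = rng.normal(size=(5, 10))
    ambiguous[:, 0] += 2.0                    # sits between the learned profiles
    print(detect(x[:5]), detect(ambiguous))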
we evaluate our approaches over the knowledge discovery and data mining 1999 ( story_separator_special_tag most current intrusion detection systems employ signature-based methods or data mining-based methods which rely on labeled
training data . this training data is typically expensive to produce . we present a new geometric framework for unsupervised anomaly detection , a class of algorithms designed to process unlabeled data . in our framework , data elements are mapped to a feature space which is typically a vector space r^d . anomalies are detected by determining which points lie in sparse regions of the feature space . we present two feature maps for mapping data elements to a feature space . our first map is a data-dependent normalization feature map which we apply to network connections . our second feature map is a spectrum kernel which we apply to system call traces . we present three algorithms for detecting which points lie in sparse regions of the feature space . we evaluate our methods by performing experiments over network records from the kdd cup 1999 data set and system call traces from the 1999 lincoln labs darpa evaluation . story_separator_special_tag the dramatic proliferation of sophisticated cyber attacks , in conjunction with the ever growing use of internet-based services and applications , is nowadays becoming a great concern in any organization . among many efficient security solutions proposed in the literature to deal with this evolving threat , ensemble approaches , a particular family of data mining , have proven very successful in designing high performance intrusion detection systems ( idss ) resting on the mutual combination of multiple classifiers . however , the strength of ensemble systems depends heavily on the methods to generate and combine individual classifiers . in this thread , we propose a novel design method to generate a robust ensemble-based ids . in our approach , individual classifiers are built using both the input feature space and additional features exploited from k-means clustering . in addition , the ensemble combination is calculated based on the classification ability of classifiers on different local data regions defined in the form of k-means clustering . experimental results prove that our solution is superior to several well-known methods . story_separator_special_tag most current network intrusion detection systems employ signature-based methods or data mining-based methods which rely on labelled training data . this training data is typically expensive to produce . moreover , these methods have difficulty in detecting new types of attack . using unsupervised anomaly detection techniques , however , the system can be trained with unlabelled data and is capable of detecting previously `` unseen '' attacks . in this paper , we present a new density-based and grid-based clustering algorithm that is suitable for unsupervised anomaly detection . we evaluated our methods using the 1999 kdd cup data set . our evaluation shows that the accuracy of our approach is close to that of existing techniques reported in the literature , and has several advantages in terms of computational complexity . story_separator_special_tag since the early days of research on intrusion detection , anomaly-based approaches have been proposed to detect intrusion attempts . attacks are detected as anomalies when compared to a model of normal ( legitimate ) events . anomaly-based approaches typically produce a relatively large number of false alarms compared to signature-based ids . however , anomaly-based ids are able to detect never-before-seen attacks .
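a simple instantiation of the sparse-region principle above (our illustrative variant, not one of the paper's three algorithms) scores every point by its mean distance to its k nearest neighbors:

    import numpy as np
    from sklearn.neighbors import NearestNeighbors

    def knn_sparsity_scores(points, k=10):
        # mean distance to the k nearest neighbors; large = sparse region
        nn = NearestNeighbors(n_neighbors=k + 1).fit(points)
        dist, _ = nn.kneighbors(points)        # column 0 is the point itself
        return dist[:, 1:].mean(axis=1)

    rng = np.random.default_rng(4)
    data = np.vstack([rng.normal(size=(990, 5)),           # dense normal mass
                      rng.normal(loc=6.0, size=(10, 5))])  # sparse outliers
    scores = knn_sparsity_scores(data)
    flagged = np.argsort(scores)[-10:]         # flag the 10 sparsest points
    print(sorted(flagged))                     # mostly indices 990..999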
as new types of attacks are generated at an increasing pace and the process of signature generation is slow , it turns out that signature-based ids can be easily evaded by new attacks . the ability of anomaly-based ids to detect attacks never observed in the wild has stirred up a renewed interest in anomaly detection . in particular , recent work focused on unsupervised or unlabeled anomaly detection , due to the fact that it is very hard and expensive to obtain a labeled dataset containing only pure normal events . the unlabeled approaches proposed so far for network ids focused on modeling the normal network traffic considered as a whole . as network traffic related to different protocols or services exhibits different characteristics , this paper proposes an unlabeled network anomaly ids based on story_separator_special_tag due to the advance of information and communication techniques , sharing information through online has been increased . and this leads to creating the new added value . as a result , various online services were created . however , as increasing connection points to the internet , the threats of cyber security have also been increasing . intrusion detection system ( ids ) is one of the important security issues today . in this paper , we construct an ids model with deep learning approach . we apply long short term memory ( lstm ) architecture to a recurrent neural network ( rnn ) and train the ids model using kdd cup 1999 dataset . through the performance test , we confirm that the deep learning approach is effective for ids . story_separator_special_tag abstract : the 1998 darpa intrusion detection evaluation created the first standard corpus for evaluating computer intrusion detection systems . this corpus was designed to evaluate both false alarm rates and detection rates of intrusion detection systems using many types of both known and new attacks embedded in a large amount of normal background traffic . the corpus was collected from a simulation network that was used to automatically generate realistic traffic-including attempted attacks . the focus of this thesis is the attacks that were developed for use in the 1998 darpa intrusion detection evaluation . in all , over 300 attacks were included in the 9 weeks of data collected for the evaluation . these 300 attacks were drawn from 32 different attack types and 7 different attack scenarios . the attack types covered the different classes of computer attacks and included older , well-known attacks , newer attacks that have recently been released to publicly available forums , and some novel attacks developed specifically for this evaluation . the development of a high quality corpus for evaluating intrusion detection systems required not only a variety of attack types , but also required realistic variance in the methods used story_separator_special_tag an important goal of the ongoing darpa intrusion detection evaluations is to promote development of intrusion detection systems that can detect stealthy attacks which might be launched by well-funded hostile nations or terrorists organizations . this goal can only be reached if such stealthy attacks are included in the darpa evaluations . this report describes new and known approaches and strategies that were used to make attacks stealthy for the 1999 darpa intrusion detection evaluation . 
it explains why some attacks used in the initial 1998 evaluation were easy to detect , presents general guidelines that were followed for the 1999 evaluation , includes many examples of stealthy scripts , and includes perl and shell scripts that can be used to implement stealthy procedures . story_separator_special_tag we describe the results achieved using the jam distributed data mining system for the real world problem of fraud detection in financial information systems . for this domain we provide clear evidence that state-of-the-art commercial fraud detection systems can be substantially improved in stopping losses due to fraud by combining multiple models of fraudulent transaction shared among banks . we demonstrate that the traditional statistical metrics used to train and evaluate the performance of learning systems ( i.e . statistical accuracy or roc analysis ) are misleading and perhaps inappropriate for this application . cost-based metrics are more relevant in certain domains , and defining such metrics poses significant and interesting research questions both in evaluating systems and alternative models , and in formalizing the problems to which one may wish to apply data mining technologies . this paper also demonstrates how the techniques developed for fraud detection can be generalized and applied to the important area of intrusion detection in networked information systems . we report the outcome of recent evaluations of our system applied to tcpdump network intrusion data specifically with respect to statistical accuracy . this work involved building additional components of jam that we have come story_separator_special_tag during the last decade , anomaly detection has attracted the attention of many researchers to overcome the weakness of signature-based idss in detecting novel attacks , and kddcup'99 is the most widely used data set for the evaluation of these systems . having conducted a statistical analysis on this data set , we found two important issues which highly affect the performance of evaluated systems , and result in a very poor evaluation of anomaly detection approaches . to solve these issues , we have proposed a new data set , nsl-kdd , which consists of selected records of the complete kdd data set and does not suffer from any of the mentioned shortcomings . story_separator_special_tag with the rapid evolution and proliferation of botnets , large-scale cyber attacks such as ddos and spam emails are also becoming more and more dangerous and serious cyber threats . because of this , network based security technologies such as network based intrusion detection systems ( nidss ) , intrusion prevention systems ( ipss ) , and firewalls have received remarkable attention to defend our crucial computer systems , networks and sensitive information from attackers on the internet . in particular , there has been much effort towards high-performance nidss based on data mining and machine learning techniques . however , there is a fatal problem in that the existing evaluation dataset , called the kdd cup '99 dataset , can not reflect current network situations and the latest attack trends . this is because it was generated by simulation over a virtual network more than 10 years ago . to the best of our knowledge , there is no alternative evaluation dataset . in this paper , we present a new evaluation dataset , called kyoto 2006+ , built on the 3 years of real traffic data ( nov. 2006 ~ aug.
2009 ) which are obtained from diverse types story_separator_special_tag in network intrusion detection , anomaly-based approaches in particular suffer in terms of accurate evaluation , comparison , and deployment , which originates from the scarcity of adequate datasets . many such datasets are internal and can not be shared due to privacy issues , others are heavily anonymized and do not reflect current trends , or they lack certain statistical characteristics . these deficiencies are primarily the reasons why a perfect dataset is yet to exist . thus , researchers must resort to datasets that are often suboptimal . as network behaviors and patterns change and intrusions evolve , it has very much become necessary to move away from static and one-time datasets toward more dynamically generated datasets which not only reflect the traffic compositions and intrusions of that time , but are also modifiable , extensible , and reproducible . in this paper , a systematic approach to generate the required datasets is introduced to address this need . the underlying notion is based on the concept of profiles which contain detailed descriptions of intrusions and abstract distribution models for applications , protocols , or lower level network entities . real traces are analyzed to create profiles for agents that generate story_separator_special_tag with exponential growth in the number of computer applications and the sizes of networks , the potential damage that can be caused by attacks launched over the internet keeps increasing dramatically . a number of network intrusion detection methods have been developed with respective strengths and weaknesses . the majority of network intrusion detection research and development is still based on simulated datasets due to the non-availability of real datasets . a simulated dataset can not represent a real network intrusion scenario . it is important to generate real and timely datasets to ensure accurate and consistent evaluation of detection methods . in this paper , we propose a systematic approach to generate unbiased full-feature real-life network intrusion datasets to compensate for the crucial shortcomings of existing datasets . we establish the importance of an intrusion dataset in the development and validation process of detection mechanisms , identify a set of requirements for effective dataset generation , and discuss several attack scenarios and their incorporation in generating datasets . we also establish the effectiveness of the generated dataset in the context of several existing datasets . story_separator_special_tag one of the major research challenges in this field is the unavailability of a comprehensive network based data set which can reflect modern network traffic scenarios , vast varieties of low footprint intrusions and depth structured information about the network traffic . for evaluating network intrusion detection systems research efforts , the kdd98 , kddcup99 and nsl-kdd benchmark data sets were generated a decade ago . however , numerous current studies showed that for the current network threat environment , these data sets do not inclusively reflect network traffic and modern low footprint attacks . to address the unavailability of a network benchmark data set , this paper examines the creation of the unsw-nb15 data set . this data set has a hybrid of the real modern normal and the contemporary synthesized attack activities of the network traffic . existing and novel methods are utilised to generate the features of the unsw-nb15 data set .
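the dataset efforts above all reduce raw packets to per-flow feature vectors; a minimal sketch, with a generic 5-tuple flow key and toy features rather than the exact unsw-nb15 feature definitions:

    from collections import defaultdict

    # packets as (timestamp, src, dst, sport, dport, proto, nbytes) tuples
    packets = [
        (0.00, "10.0.0.1", "10.0.0.2", 5050, 80, "tcp", 512),
        (0.05, "10.0.0.1", "10.0.0.2", 5050, 80, "tcp", 1460),
        (0.90, "10.0.0.3", "10.0.0.2", 6060, 53, "udp", 64),
    ]

    flows = defaultdict(list)
    for ts, src, dst, sport, dport, proto, nbytes in packets:
        flows[(src, dst, sport, dport, proto)].append((ts, nbytes))

    def flow_features(pkts):
        # duration, packet count, total bytes, mean packet size for one flow
        times = [t for t, _ in pkts]
        sizes = [b for _, b in pkts]
        return {"dur": max(times) - min(times),
                "pkts": len(pkts),
                "bytes": sum(sizes),
                "mean_size": sum(sizes) / len(sizes)}

    for key, pkts in flows.items():
        print(key, flow_features(pkts))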
this data set is available for research purposes and can be accessed from the link . story_separator_special_tag in 1998 and again in 1999 , the lincoln laboratory of mit conducted a comparative evaluation of intrusion detection systems ( idss ) developed under darpa funding . while this evaluation represents a significant and monumental undertaking , there are a number of issues associated with its design and execution that remain unsettled . some methodologies used in the evaluation are questionable and may have biased its results . one problem is that the evaluators have published relatively little concerning some of the more critical aspects of their work , such as validation of their test data . the appropriateness of the evaluation techniques used needs further investigation . the purpose of this article is to attempt to identify the shortcomings of the lincoln lab effort in the hope that future efforts of this kind will be placed on a sounder footing . some of the problems that the article points out might well be resolved if the evaluators were to publish a detailed description of their procedures and the rationale that led to their adoption , but other problems would clearly remain./par > story_separator_special_tag the darpa/mit lincoln laboratory off-line intrusion detection evaluation data set is the most widely used public benchmark for testing intrusion detection systems . our investigation of the 1999 background network traffic suggests the presence of simulation artifacts that would lead to overoptimistic evaluation of network anomaly detection systems . the effect can be mitigated without knowledge of specific artifacts by mixing real traffic into the simulation , although the method requires that both the system and the real traffic be analyzed and possibly modified to ensure that the system does not model the simulated traffic independently of the real traffic . story_separator_special_tag the 1999 darpa/lincoln laboratory ids evaluation data has been widely used in the intrusion detection and networking community , even though it is known to have a number of artifacts . here we show that many of these artifacts , including the lack of damaged or unusual background packets and uniform host distribution , can be easily extracted using netadhict , a tool we developed for understanding networks . in addition , using netadhict we were able to identify extreme temporal variation in the data , a characteristic that was not identified in past analyses . these results illustrate the utility of netadhict in characterizing network traces for experimental purposes . story_separator_special_tag many consider the kdd cup 99 data sets to be outdated and inadequate . therefore , the extensive use of these data sets in recent studies to evaluate network intrusion detection systems is a matter of concern . we contribute to the literature by addressing these concerns . story_separator_special_tag real-world data is never perfect and can often suffer from corruptions ( noise ) that may impact interpretations of the data , models created from the data and decisions made based on the data . noise can reduce system performance in terms of classification accuracy , time in building a classifier and the size of the classifier . accordingly , most existing learning algorithms have integrated various approaches to enhance their learning abilities from noisy environments , but the existence of noise can still introduce serious negative impacts . 
a more reasonable solution might be to employ some preprocessing mechanisms to handle noisy instances before a learner is formed . unfortunately , rare research has been conducted to systematically explore the impact of noise , especially from the noise handling point of view . this has made various noise processing techniques less significant , specifically when dealing with noise that is introduced in attributes . in this paper , we present a systematic evaluation on the effect of noise in machine learning . instead of taking any unified theory of noise to evaluate the noise impacts , we differentiate noise into two categories : class noise and attribute noise , story_separator_special_tag abstract : this paper proposes a novel scheme that uses robust principal component classifier in intrusion detection problems where the training data may be unsupervised . assuming that anomalies can be treated as outliers , an intrusion predictive model is constructed from the major and minor principal components of the normal instances . a measure of the difference of an anomaly from the normal instance is the distance in the principal component space . the distance based on the major components that account for 50 % of the total variation and the minor components whose eigenvalues less than 0.20 is shown to work well . the experiments with kdd cup 1999 data demonstrate that the proposed method achieves 98.94 % in recall and 97.89 % in precision with the false alarm rate 0.92 % and outperforms the nearest neighbor method , density-based local outliers ( lof ) approach , and the outlier detection algorithm based on canberra metric . story_separator_special_tag intrusion detection systems have traditionally been based on the characterization of an attack and the tracking of the activity on the system to see if it matches that characterization . recently , new intrusion detection systems based on data mining are making their appearance in the field . this paper describes the design and experiences with the adam ( audit data analysis and mining ) system , which we use as a testbed to study how useful data mining techniques can be in intrusion detection . keywords intrusion detection , data mining , association rules , classifiers . story_separator_special_tag abstract outlier detection is of considerable interest in fields such as physical sciences , medical diagnosis , surveillance detection , fraud detection and network anomaly detection . the data mining and network management research communities are interested in improving existing score-based network traffic anomaly detection techniques because of ample scopes to increase performance . in this paper , we present a multi-step outlier-based approach for detection of anomalies in network-wide traffic . we identify a subset of relevant traffic features and use it during clustering and anomaly detection . to support outlier-based network anomaly identification , we use the following modules : a mutual information and generalized entropy based feature selection technique to select a relevant non-redundant subset of features , a tree-based clustering technique to generate a set of reference points and an outlier score function to rank incoming network traffic to identify anomalies . we also design a fast distributed feature extraction and data preparation framework to extract features from raw network-wide traffic . 
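a minimal sketch of the principal component classifier described above, scoring an instance by its standardized distance along the major components (covering roughly 50 % of the variance) plus the minor components (eigenvalues below 0.20); the synthetic correlated features stand in for normal traffic and are an assumption:

    import numpy as np

    rng = np.random.default_rng(5)
    base = rng.normal(size=(2000, 4))
    normal = np.hstack([base, base + 0.1 * rng.normal(size=(2000, 4))])

    mu = normal.mean(axis=0)
    evals, evecs = np.linalg.eigh(np.cov(normal - mu, rowvar=False))
    order = np.argsort(evals)[::-1]
    evals, evecs = evals[order], evecs[:, order]

    major = np.cumsum(evals) / evals.sum() <= 0.50  # ~50% of total variation
    minor = evals < 0.20                            # near-degenerate components

    def score(x):
        z = ((x - mu) @ evecs) ** 2 / evals         # squared standardized scores
        return z[:, major].sum(axis=1) + z[:, minor].sum(axis=1)

    attack = rng.normal(size=(5, 8)) * 2.0          # breaks learned correlations
    print(score(normal[:5]).round(1), score(attack).round(1))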
we evaluate our approach in terms of detection rate , false positive rate , precision , recall and f-measure using several high dimensional synthetic and real-world datasets and find the performance superior in comparison story_separator_special_tag redundant and irrelevant features in data have caused a long-term problem in network traffic classification . these features not only slow down the process of classification but also prevent a classifier from making accurate decisions , especially when coping with big data . in this paper , we propose a mutual information based algorithm that analytically selects the optimal feature for classification . this mutual information based feature selection algorithm can handle linearly and nonlinearly dependent data features . its effectiveness is evaluated in the cases of network intrusion detection . an intrusion detection system ( ids ) , named least square support vector machine based ids ( lssvm-ids ) , is built using the features selected by our proposed feature selection algorithm . the performance of lssvm-ids is evaluated using three intrusion detection evaluation datasets , namely kdd cup 99 , nsl-kdd and kyoto 2006+ dataset . the evaluation results show that our feature selection algorithm contributes more critical features for lssvm-ids to achieve better accuracy and lower computational cost compared with the state-of-the-art methods . story_separator_special_tag whenever an intrusion occurs , the security and value of a computer system is compromised . network-based attacks make it difficult for legitimate users to access various network services by purposely occupying or sabotaging network resources and services . this can be done by sending large amounts of network traffic , exploiting well-known faults in networking services , and by overloading network hosts . intrusion detection attempts to detect computer attacks by examining various data records observed in processes on the network and it is split into two groups , anomaly detection systems and misuse detection systems . anomaly detection is an attempt to search for malicious behavior that deviates from established normal patterns . misuse detection is used to identify intrusions that match known attack scenarios . our interest here is in anomaly detection and our proposed method is a scalable solution for detecting network-based anomalies . we use support vector machines ( svm ) for classification . the svm is one of the most successful classification algorithms in the data mining area , but its long training time limits its use . this paper presents a study for enhancing the training time of svm , specifically when dealing story_separator_special_tag the growth of the internet and , consequently , the number of interconnected computers , has exposed significant amounts of information to intruders and attackers . firewalls aim to detect violations according to a predefined rule-set and usually block potentially dangerous incoming traffic . however , with the evolution of attack techniques , it is more difficult to distinguish anomalies from normal traffic . different detection approaches have been proposed , including the use of machine learning techniques based on neural models such as self-organizing maps ( soms ) . in this paper , we present a classification approach that hybridizes statistical techniques and som for network anomaly detection . 
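a minimal sketch of mutual-information-based feature ranking, using scikit-learn's estimator and plain top-k selection (the cited algorithms additionally penalize redundancy between selected features):

    import numpy as np
    from sklearn.feature_selection import mutual_info_classif

    rng = np.random.default_rng(6)
    x = rng.normal(size=(2000, 20))
    y = (x[:, 3] + np.sin(x[:, 7]) > 0).astype(int)  # only features 3, 7 matter

    mi = mutual_info_classif(x, y, random_state=0)   # mi of each feature vs label
    top_k = np.argsort(mi)[::-1][:5]                 # keep the 5 most informative
    print(sorted(top_k.tolist()))                    # expected to include 3 and 7

note that mutual information, unlike the linear correlation coefficient, also credits the nonlinear dependence through feature 7, which is the motivation given above for preferring it.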
thus , while principal component analysis ( pca ) and fisher discriminant ratio ( fdr ) have been considered for feature selection and noise removal , probabilistic self-organizing maps ( psom ) aim to model the feature space and enable distinguishing between normal and anomalous connections . the detection capabilities of the proposed system can be modified without retraining the map , but only by modifying the units activation probabilities . this deals with fast implementations of intrusion detection systems ( ids ) necessary to cope with current link bandwidths story_separator_special_tag to exploit the strengths of misuse detection and anomaly detection , an intensive focus on intrusion detection combines the two . from a novel perspective , in this paper , we proposed a hybrid approach toward achieving a high detection rate with a low false positive rate . the approach is a two-level hybrid solution consisting of two anomaly detection components and a misuse detection component . in stage 1 , an anomaly detection method with low computing complexity is developed and employed to build the detection component . the k-nearest neighbors algorithm becomes crucial in building the two detection components for stage 2. in this hybrid approach , all of the detection components are well-coordinated . the detection component of stage 1 becomes involved in the course of building the two detection components of stage 2 that reduce the false positives and false negatives generated by the detection component of stage 1. experimental results on the kdd'99 dataset and the kyoto university benchmark dataset confirm that the proposed hybrid approach can effectively detect network anomalies with a low false positive rate . highlightsa novel two-level hybrid intrusion detection approach is proposed.a novel anomaly detection method based on change of story_separator_special_tag a network intrusion detection system ( nids ) helps system administrators to detect network security breaches in their organizations . however , many challenges arise while developing a flexible and efficient nids for unforeseen and unpredictable attacks . we propose a deep learning based approach for developing such an efficient and flexible nids . we use self-taught learning ( stl ) , a deep learning based technique , on nsl-kdd - a benchmark dataset for network intrusion . we present the performance of our approach and compare it with a few previous work . compared metrics include accuracy , precision , recall , and f-measure values . story_separator_special_tag intrusion detection plays an important role in ensuring information security , and the key technology is to accurately identify various attacks in the network . in this paper , we explore how to model an intrusion detection system based on deep learning , and we propose a deep learning approach for intrusion detection using recurrent neural networks ( rnn-ids ) . moreover , we study the performance of the model in binary classification and multiclass classification , and the number of neurons and different learning rate impacts on the performance of the proposed model . we compare it with those of j48 , artificial neural network , random forest , support vector machine , and other machine learning methods proposed by previous researchers on the benchmark data set . 
the experimental results show that rnn-ids is very suitable for modeling a classification model with high accuracy and that its performance is superior to that of traditional machine learning classification methods in both binary and multiclass classification . the rnn-ids model improves the accuracy of the intrusion detection and provides a new research method for intrusion detection . story_separator_special_tag intrusion detection is well-known as an essential component to secure the systems in information and communication technology ( ict ) . based on the type of analyzing events , two kinds of intrusion detection systems ( ids ) have been proposed : anomaly-based and misuse-based . in this paper , three-layer recurrent neural network ( rnn ) architecture with categorized features as inputs and attack types as outputs of rnn is proposed as misuse-based ids . the input features are categorized to basic features , content features , time-based traffic features , and host-based traffic features . the attack types are classified to denial-of-service ( dos ) , probe , remote-to-local ( r2l ) , and user-to-root ( u2r ) . for this purpose , in this study , we use the 41 features per connection defined by international knowledge discovery and data mining group ( kdd ) . the rnn has an extra output which corresponds to normal class ( no attack ) . the connections between the nodes of two hidden layers of rnn are considered partial . experimental results show that the proposed model is able to improve classification rate , particularly in r2l attacks . this story_separator_special_tag in this paper , we propose a novel intrusion detection system ( ids ) architecture utilizing both anomaly and misuse detection approaches . this hybrid intrusion detection system architecture consists of an anomaly detection module , a misuse detection module and a decision support system combining the results of these two detection modules . the proposed anomaly detection module uses a self-organizing map ( som ) structure to model normal behavior . deviation from the normal behavior is classified as an attack . the proposed misuse detection module uses j.48 decision tree algorithm to classify various types of attacks . the principle interest of this work is to benchmark the performance of the proposed hybrid ids architecture by using kdd cup 99 data set , the benchmark dataset used by ids researchers . a rule-based decision support system ( dss ) is also developed for interpreting the results of both anomaly and misuse detection modules . simulation results of both anomaly and misuse detection modules based on the kdd 99 data set are given . it is observed that the proposed hybrid approach gives better performance over individual approaches . story_separator_special_tag intrusion detection is an emerging area of research in the computer security and networks with the growing usage of internet in everyday life . most intrusion detection systems ( idss ) mostly use a single classifier algorithm to classify the network traffic data as normal behaviour or anomalous . however , these single classifier systems fail to provide the best possible attack detection rate with low false alarm rate . in this paper , we propose to use a hybrid intelligent approach using combination of classifiers in order to make the decision intelligently , so that the overall performance of the resultant model is enhanced . 
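a minimal sketch of an rnn classifier in the spirit of rnn-ids above (the layer sizes, sequence construction, and training loop are assumptions; the paper's exact architecture differs):

    import torch
    import torch.nn as nn

    class RNNIDS(nn.Module):
        def __init__(self, n_features=41, hidden=64, n_classes=5):
            super().__init__()
            self.rnn = nn.LSTM(n_features, hidden, batch_first=True)
            self.head = nn.Linear(hidden, n_classes)  # normal, dos, probe, r2l, u2r

        def forward(self, x):             # x: (batch, time, features)
            out, _ = self.rnn(x)
            return self.head(out[:, -1])  # classify from the last time step

    model = RNNIDS()
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.CrossEntropyLoss()

    # toy batch standing in for windows of 41 kdd-style connection features
    x = torch.randn(32, 10, 41)
    y = torch.randint(0, 5, (32,))
    for _ in range(3):                    # a few illustrative training steps
        opt.zero_grad()
        loss = loss_fn(model(x), y)
        loss.backward()
        opt.step()
    print(float(loss))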
the general procedure in this is to follow the supervised or un-supervised data filtering with classifier or clusterer first on the whole training dataset and then the output is applied to another classifier to classify the data . we use a 2-class classification strategy along with a 10-fold cross-validation method to produce the final classification results in terms of normal or intrusion . experimental results on the nsl-kdd dataset , an improved version of the kddcup 1999 dataset , show that our proposed approach is efficient with high detection rate and low false alarm rate . story_separator_special_tag nat . methods 14 , 757-758 ( 2017 ) ; published online 28 july 2017 ; corrected after print 28 july 2017 in the version of this article initially published , the expression ( g1 , g2 ) used to describe a sample subset in the figure 1 legend was incorrect . the correct expression is ( ig ( s1 ) , ig ( s2 ) ) . the error has been corrected in the html and pdf versions of the article . story_separator_special_tag this paper proposes latent representation models for improving network anomaly detection . well-known anomaly detection algorithms often suffer from challenges posed by network data , such as high dimension and sparsity , and a lack of anomaly data for training , model selection , and hyperparameter tuning . our approach is to introduce new regularizers to a classical autoencoder ( ae ) and a variational ae , which force normal data into a very tight area centered at the origin in the nonsaturating area of the bottleneck unit activations . these aes , trained on normal data , will push normal points toward the origin , whereas anomalies , which differ from normal data , will be put far away from the normal region . the models are very different from common regularized aes , sparse ae , and contractive ae , in which the regularized aes tend to make their latent representation less sensitive to changes of the input data . the bottleneck feature space is now used as a new data representation . a number of one-class learning algorithms are used for evaluating the proposed models . the experiments testify that our models help these classifiers to perform efficiently and story_separator_special_tag log-linear models are widely used probability models for statistical pattern recognition . typically , log-linear models are trained according to a convex criterion . in recent years , the interest in log-linear models has greatly increased . the optimization of log-linear model parameters is costly and therefore an important topic , in particular for large-scale applications . different optimization algorithms have been evaluated empirically in many papers . in this work , we analyze the optimization problem analytically and show that the training of log-linear models can be highly ill-conditioned . we verify our findings on two handwriting tasks . by making use of our convergence analysis , we obtain good results on a large-scale continuous handwriting recognition task with a simple and generic approach . story_separator_special_tag the log-transformation is widely used in biomedical and psychosocial research to deal with skewed data . this paper highlights serious problems in this classic approach for dealing with skewed data . despite the common belief that the log transformation can decrease the variability of data and make data conform more closely to the normal distribution , this is usually not the case .
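a minimal sketch of the regularized-autoencoder idea above: reconstruction loss plus a penalty pulling the bottleneck codes of normal data toward the origin, after which the latent norm (or a one-class learner on the codes) serves as the anomaly signal; the penalty form and its weight are assumptions:

    import torch
    import torch.nn as nn

    class RegularizedAE(nn.Module):
        def __init__(self, d_in=20, d_z=4):
            super().__init__()
            self.enc = nn.Sequential(nn.Linear(d_in, 16), nn.Tanh(),
                                     nn.Linear(16, d_z))
            self.dec = nn.Sequential(nn.Linear(d_z, 16), nn.Tanh(),
                                     nn.Linear(16, d_in))

        def forward(self, x):
            z = self.enc(x)
            return self.dec(z), z

    model = RegularizedAE()
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    x_normal = torch.randn(256, 20)       # stand-in for normal traffic features

    for _ in range(5):                    # a few illustrative training steps
        recon, z = model(x_normal)
        # reconstruction loss plus a pull of normal latents toward the origin
        loss = ((recon - x_normal) ** 2).mean() + 0.1 * (z ** 2).mean()
        opt.zero_grad(); loss.backward(); opt.step()

    def anomaly_score(x):
        with torch.no_grad():
            _, z = model(x)
        return z.norm(dim=1)              # far from the origin = more anomalous

    print(anomaly_score(x_normal[:3]), anomaly_score(torch.randn(3, 20) * 5))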
moreover , the results of standard statistical tests performed on log-transformed data are often not relevant for the original , non-transformed data . we demonstrate these problems by presenting examples that use simulated data . we conclude that if used at all , data transformations must be applied very cautiously . we recommend that in most circumstances researchers abandon these traditional methods of dealing with skewed data and , instead , use newer analytic methods that are not dependent on the distribution of the data , such as generalized estimating equations ( gee ) . story_separator_special_tag abstract this paper considers efficient backpropagation learning using dynamically optimal learning rate ( lr ) and momentum factor ( mf ) . a family of approaches exploiting the derivatives with respect to the lr and mf is presented , which does not need to explicitly compute the first two order derivatives in weight space , but rather makes use of the information gathered from the forward and backward procedures . the computational and storage burden for estimating the optimal lr and mf at most triples that of the standard backpropagation algorithm ( bpa ) ; however , the backpropagation learning procedure can be accelerated with remarkable savings in running time . extensive computer simulations provided in this paper indicate that at least an order of magnitude of savings in running time can be achieved using the present family of approaches . story_separator_special_tag in previous research the support vector data description is proposed to solve the problem of one-class classification . in one-class classification one set of data , called the target set , has to be distinguished from the rest of the feature space . this description should be constructed such that objects not originating from the target set , by definition the outlier class , are not accepted by the data description . in this paper the support vector data description is applied to the problem of image database retrieval . the user selects an example image region as the target class and resembling images from a database should be retrieved . this application shows some of the weaknesses of the svdd , particularly the dependence on the scaling of the features . by rescaling features and combining several descriptions on well-scaled feature sets , performance can be significantly improved . story_separator_special_tag anomalies are unusual and significant changes in a network 's traffic levels , which can often span multiple links . diagnosing anomalies is critical for both network operators and end users . it is a difficult problem because one must extract and interpret anomalous patterns from large amounts of high-dimensional , noisy data . in this paper we propose a general method to diagnose anomalies . this method is based on a separation of the high-dimensional space occupied by a set of network traffic measurements into disjoint subspaces corresponding to normal and anomalous network conditions .
we show that this separation can be performed effectively by principal component analysis . using only simple traffic measurements from links , we study volume anomalies and show that the method can : ( 1 ) accurately detect when a volume anomaly is occurring ; ( 2 ) correctly identify the underlying origin-destination ( od ) flow which is the source of the anomaly ; and ( 3 ) accurately estimate the amount of traffic involved in the anomalous od flow . we evaluate the method 's ability to diagnose ( i.e. , detect , identify , and quantify ) both existing and synthetically injected volume anomalies in real traffic story_separator_special_tag as network-based technologies become omnipresent , threat detection and prevention for these systems become increasingly important . one of the effective ways to achieve higher security is to use intrusion detection systems , which are software tools used to detect abnormal activities in the computer or network . one technical challenge in intrusion detection systems is the curse of high dimensionality . to overcome this problem , we propose a feature selection phase , which can be generally implemented in any intrusion detection system . in this work , we propose two feature selection algorithms and study the performance of using these algorithms compared to a mutual information-based feature selection method . these feature selection algorithms require the use of a feature goodness measure . we investigate using both a linear and a non-linear measure - the linear correlation coefficient and mutual information - for the feature selection . further , we introduce an intrusion detection system that uses an improved machine learning based method , the least squares support vector machine . experiments on the kdd cup 99 data set show that our proposed mutual information-based feature selection method results in detecting intrusions with higher accuracy , especially for remote to login ( story_separator_special_tag interconnected systems , such as web servers , database servers , cloud computing servers and so on , are now under threat from network attackers . as one of the most common and aggressive means , denial-of-service ( dos ) attacks cause serious impact on these computing systems . in this paper , we present a dos attack detection system that uses multivariate correlation analysis ( mca ) for accurate network traffic characterization by extracting the geometrical correlations between network traffic features . our mca-based dos attack detection system employs the principle of anomaly based detection in attack recognition . this makes our solution capable of detecting known and unknown dos attacks effectively by learning the patterns of legitimate network traffic only . furthermore , a triangle-area-based technique is proposed to enhance and to speed up the process of mca . the effectiveness of our proposed detection system is evaluated using the kdd cup 99 data set , and the influences of both non-normalized data and normalized data on the performance of the proposed detection system are examined . the results show that our system outperforms two other previously developed state-of-the-art approaches in terms of detection accuracy . story_separator_special_tag in the past twenty years , progress in intrusion detection has been steady but slow . the biggest challenge is to detect new attacks in real time . in this work , a deep learning approach for anomaly detection using a restricted boltzmann machine ( rbm ) and a deep belief network is implemented .
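a rough scikit-learn sketch of this rbm-based pipeline ; the layer size , learning rate and synthetic data below are illustrative assumptions , not the authors' c++ configuration :

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import BernoulliRBM
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import MinMaxScaler

X, y = make_classification(n_samples=2000, n_features=40, random_state=0)

# BernoulliRBM expects inputs in [0, 1]; the rbm performs the
# unsupervised feature reduction, logistic regression the fine-tuning
model = Pipeline([
    ("scale", MinMaxScaler()),
    ("rbm", BernoulliRBM(n_components=16, learning_rate=0.05,
                         n_iter=20, random_state=0)),
    ("lr", LogisticRegression(max_iter=1000)),
])
model.fit(X, y)
print(f"training accuracy: {model.score(X, y):.3f}")
```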
our method uses a one-hidden layer rbm to perform unsupervised feature reduction . the resultant weights from this rbm are passed to another rbm producing a deep belief network . the pre-trained weights are passed into a fine tuning layer consisting of a logistic regression ( lr ) classifier with a multi-class softmax . we have implemented the deep learning architecture in c++ in microsoft visual studio 2013 and we use the darpa kddcup'99 dataset to evaluate its performance . our architecture outperforms previous deep learning methods implemented by li and salama in both detection speed and accuracy . we achieve a detection rate of 97.9 % on the total 10 % kddcup'99 test dataset . by improving the training process of the simulation , we are also able to produce a low false negative rate of 2.47 % . although the deficiencies in the kddcup'99 story_separator_special_tag discrete values have important roles in data mining and knowledge discovery . they correspond to intervals of numbers , which are more concise to represent and specify , easier to use and comprehend , as they are closer to a knowledge-level representation than continuous values . many studies show that induction tasks can benefit from discretization : rules with discrete values are normally shorter and more understandable and discretization can lead to improved predictive accuracy . furthermore , many induction algorithms found in the literature require discrete features . all these prompt researchers and practitioners to discretize continuous features before or during a machine learning or data mining task . there are numerous discretization methods available in the literature . it is time for us to examine these seemingly different methods for discretization and find out how different they really are , what the key components of a discretization process are , and how we can improve the current level of research for new development as well as the use of existing methods . this paper aims at a systematic study of discretization methods with their history of development , effect on classification , and trade-off between speed and accuracy . contributions of this story_separator_special_tag discretization is an essential preprocessing technique used in many knowledge discovery and data mining tasks . its main goal is to transform a set of continuous attributes into discrete ones , by associating categorical values with intervals and thus transforming quantitative data into qualitative data . in this manner , symbolic data mining algorithms can be applied over continuous data and the representation of information is simplified , making it more concise and specific . the literature provides numerous proposals of discretization and some attempts to categorize them into a taxonomy can be found . however , in previous papers , there is a lack of consensus in the definition of the properties and no formal categorization has been established yet , which may be confusing for practitioners . furthermore , only a small set of discretizers have been widely considered , while many other methods have gone unnoticed . with the intention of alleviating these problems , this paper provides a survey of discretization methods proposed in the literature from a theoretical and empirical perspective . from the theoretical perspective , we develop a taxonomy based on the main properties pointed out in previous research , unifying the notation story_separator_special_tag many supervised machine learning algorithms require a discrete feature space .
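as a concrete illustration of the supervised , entropy-based discretization reviewed next , a small numpy routine that picks the single cut point minimizing the weighted class entropy of the induced partition ( the recursive , mdl-stopped variant is a straightforward extension ) :

```python
import numpy as np

def entropy(labels):
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return -(p * np.log2(p)).sum()

def best_entropy_cut(x, y):
    """Return the boundary that minimizes the weighted class entropy
    of the induced binary partition of continuous feature x."""
    order = np.argsort(x)
    x, y = x[order], y[order]
    best_cut, best_h = None, np.inf
    for i in range(1, len(x)):
        if x[i] == x[i - 1]:
            continue                     # no boundary between ties
        left, right = y[:i], y[i:]
        h = (len(left) * entropy(left) + len(right) * entropy(right)) / len(y)
        if h < best_h:
            best_h, best_cut = h, (x[i - 1] + x[i]) / 2
    return best_cut

x = np.array([1.0, 1.2, 2.9, 3.1, 3.3, 5.0])
y = np.array([0,   0,   1,   1,   1,   0  ])
print(best_entropy_cut(x, y))   # cut at 2.05 isolates the leading class run
```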
in this paper , we review previous work on continuous feature discretization , identify defining characteristics of the methods , and conduct an empirical evaluation of several methods . we compare binning , an unsupervised discretization method , to entropy-based and purity-based methods , which are supervised algorithms . we found that the performance of the naive-bayes algorithm significantly improved when features were discretized using an entropy-based method . in fact , over the 16 tested datasets , the discretized version of naive-bayes slightly outperformed c4.5 on average . we also show that in some cases , the performance of the c4.5 induction algorithm significantly improved if features were discretized in advance ; in our experiments , the performance never significantly degraded , an interesting phenomenon considering the fact that c4.5 is capable of locally discretizing features . story_separator_special_tag a reported weakness of c4.5 in domains with continuous attributes is addressed by modifying the formation and evaluation of tests on continuous attributes . an mdl-inspired penalty is applied to such tests , eliminating some of them from consideration and altering the relative desirability of all tests . empirical trials show that the modifications lead to smaller decision trees with higher predictive accuracies . results also confirm that a new version of c4.5 incorporating these changes is superior to recent approaches that use global discretization and that construct small trees with multi-interval splits . story_separator_special_tag since most real-world applications of classification learning involve continuous-valued attributes , properly addressing the discretization process is an important problem . this paper addresses the use of the entropy minimization heuristic for discretizing the range of a continuous-valued attribute into multiple intervals . story_separator_special_tag this paper argues that two commonly-used discretization approaches , fixed k-interval discretization and entropy-based discretization , have sub-optimal characteristics for naive-bayes classification . this analysis leads to a new discretization method , proportional k-interval discretization ( pkid ) , which adjusts the number and size of discretized intervals to the number of training instances , thus seeking an appropriate trade-off between the bias and variance of the probability estimation for naive-bayes classifiers . we justify pkid in theory , as well as test it on a wide cross-section of datasets . our experimental results suggest that in comparison to its alternatives , pkid provides naive-bayes classifiers with competitive classification performance for smaller datasets and better classification performance for larger datasets . story_separator_special_tag the kdd cup 99 dataset is a classical challenge for computer intrusion detection as well as machine learning researchers . due to the problematic nature of this dataset , several sophisticated machine learning algorithms have been tried by different authors . in this paper a new approach is proposed that consists of a combination of a discretizer , a filter method and a very simple classical classifier . the results obtained show the adequacy of the method , which achieves comparable or even better performance than that of other more complicated algorithms , but with a considerable reduction in the number of input features .
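a hedged scikit-learn sketch of this discretize-filter-classify recipe ; the bin count , the number of kept features and naive bayes as the `` very simple '' classifier are assumptions for illustration , not the paper's exact choices :

```python
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.naive_bayes import GaussianNB
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import KBinsDiscretizer

X, y = make_classification(n_samples=3000, n_features=40,
                           n_informative=8, random_state=0)

pipe = Pipeline([
    # discretizer: equal-frequency binning into ordinal codes
    ("disc", KBinsDiscretizer(n_bins=10, encode="ordinal",
                              strategy="quantile")),
    # filter: keep the k features most informative about the class
    ("filter", SelectKBest(mutual_info_classif, k=10)),
    # very simple classical classifier (gaussian nb as a stand-in)
    ("clf", GaussianNB()),
])
pipe.fit(X, y)
print(f"features kept: {pipe['filter'].get_support().sum()}, "
      f"train acc: {pipe.score(X, y):.3f}")
```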
the proposed method has also been tried on another two large datasets , maintaining the same behavior as on the kdd cup 99 dataset . story_separator_special_tag with increasing internet connectivity and traffic volume , recent intrusion incidents have reemphasized the importance of network intrusion detection systems for combating increasingly sophisticated network attacks . techniques such as pattern recognition and the data mining of network events are often used by intrusion detection systems to classify the network events as either normal events or attack events . our research study claims that the hidden naive bayes ( hnb ) model can be applied to intrusion detection problems that suffer from dimensionality , highly correlated features and high network data stream volumes . hnb is a data mining model that relaxes the naive bayes method 's conditional independence assumption . our experimental results show that the hnb model exhibits a superior overall performance in terms of accuracy , error rate and misclassification cost compared with the traditional naive bayes model , leading extended naive bayes models and the knowledge discovery and data mining ( kdd ) cup 1999 winner . our model performed better than other leading state-of-the-art models , such as svm , in predictive accuracy . the results also indicate that our model significantly improves the accuracy of detecting denial-of-service ( dos ) attacks . story_separator_special_tag in this paper , a new hybrid intrusion detection method that hierarchically integrates a misuse detection model and an anomaly detection model in a decomposition structure is proposed . first , a misuse detection model is built based on the c4.5 decision tree algorithm and then the normal training data is decomposed into smaller subsets using the model . next , multiple one-class svm models are created for the decomposed subsets . as a result , each anomaly detection model not only uses the known attack information indirectly but also builds the profiles of normal behavior very precisely . the proposed hybrid intrusion detection method was evaluated by conducting experiments with the nsl-kdd data set , which is a modified version of the well-known kdd cup 99 data set . the experimental results demonstrate that the proposed method is better than the conventional methods in terms of the detection rate for both unknown and known attacks while it maintains a low false positive rate . in addition , the proposed method significantly reduces the high time complexity of the training and testing processes . experimentally , the training and testing time of the anomaly detection model is shown to story_separator_special_tag in this paper we propose a new method to perform incremental discretization . the basic idea is to perform the task in two layers . the first layer receives the sequence of input data and keeps some statistics on the data using many more intervals than required . based on the statistics stored by the first layer , the second layer creates the final discretization . the proposed architecture processes streaming examples in a single scan , in constant time and space even for infinite sequences of examples . we experimentally demonstrate that incremental discretization is able to maintain the performance of learning algorithms in comparison to batch discretization . the proposed method is much more appropriate in incremental learning , and in problems where data flows continuously , as in most of the recent data mining applications .
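a minimal sketch of the two-layer idea under stated assumptions -- a fixed value range for the fine first-layer histogram and equal-frequency intervals as the second-layer output -- which is one simple instantiation rather than the paper's exact algorithm :

```python
import numpy as np

class TwoLayerDiscretizer:
    """Layer 1: fine fixed histogram updated per example in O(1).
    Layer 2: final equal-frequency cut points derived from layer-1 counts."""

    def __init__(self, lo, hi, fine_bins=1000, final_bins=5):
        self.edges = np.linspace(lo, hi, fine_bins + 1)
        self.counts = np.zeros(fine_bins, dtype=np.int64)
        self.final_bins = final_bins

    def update(self, x):            # single-scan, constant time and space
        i = np.clip(np.searchsorted(self.edges, x) - 1, 0, len(self.counts) - 1)
        self.counts[i] += 1

    def cut_points(self):           # layer 2: equal-frequency summary
        cdf = np.cumsum(self.counts)
        targets = cdf[-1] * np.arange(1, self.final_bins) / self.final_bins
        idx = np.searchsorted(cdf, targets)
        return self.edges[idx + 1]

rng = np.random.default_rng(2)
disc = TwoLayerDiscretizer(lo=0.0, hi=10.0)
for x in rng.gamma(2.0, 1.5, size=50_000):
    disc.update(x)
print(disc.cut_points())
```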
story_separator_special_tag abstract data quality is deemed a determinant in the knowledge extraction process . low-quality data normally imply low-quality models and decisions . discretization , as part of data preprocessing , is considered one of the most relevant techniques for improving data quality . in static discretization , output intervals are generated at once , and maintained throughout the whole process . however , many contemporary problems demand rapid approaches capable of self-adapting their discretization schemes to an ever-changing nature . other major issues for stream-based discretization , such as interval definition , labeling , or how the interaction between learning and discretization components is implemented , are also discussed in this paper . in order to address all the aforementioned problems , we propose a novel , online and self-adaptive discretization solution for streaming classification which aims at reducing the negative impact of fluctuations in evolving intervals . experiments with a long list of standard streaming datasets and discretizers have demonstrated that our proposal performs significantly more accurately than the other alternatives . in addition , our scheme is able to leverage class information without incurring excessive cost , and is ranked as one of the fastest supervised options . story_separator_special_tag current network intrusion detection systems lack adaptability to the frequently changing network environments . furthermore , intrusion detection in the new distributed architectures is now a major requirement . in this paper , we propose two online adaboost-based intrusion detection algorithms . in the first algorithm , a traditional online adaboost process is used where decision stumps are used as weak classifiers . in the second algorithm , an improved online adaboost process is proposed , and online gaussian mixture models ( gmms ) are used as weak classifiers . we further propose a distributed intrusion detection framework , in which a local parameterized detection model is constructed in each node using the online adaboost algorithm . a global detection model is constructed in each node by combining the local parametric models using a small number of samples in the node . this combination is achieved using an algorithm based on particle swarm optimization ( pso ) and support vector machines . the global model in each node is used to detect intrusions . experimental results show that the improved online adaboost process with gmms obtains a higher detection rate and a lower false alarm rate than the traditional online story_separator_special_tag nowadays it is very important to maintain a high level of security to ensure safe and trusted communication of information between various organizations . but secured data communication over the internet and any other network is always under threat of intrusions and misuses . so intrusion detection systems have become a necessary component of computer and network security . there are various approaches being utilized in intrusion detection , but unfortunately none of the systems so far is completely flawless . so , the quest for betterment continues . in this progression , we present an intrusion detection system ( ids ) that applies a genetic algorithm ( ga ) to efficiently detect various types of network intrusions . parameters and evolution processes for the ga are discussed in detail and implemented .
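to make the ga machinery concrete before the abstract continues , a toy generational loop evolving binary detection-rule masks ; the fitness function is a stand-in ( any detection-rate measure could be plugged in ) , and all population sizes and rates are illustrative assumptions :

```python
import numpy as np

rng = np.random.default_rng(3)
N_FEATURES, POP, GENS, P_MUT = 20, 40, 30, 0.02
target = rng.integers(0, 2, N_FEATURES)      # stand-in "ideal rule"

def fitness(pop):                            # placeholder objective:
    return (pop == target).mean(axis=1)      # agreement with the target mask

pop = rng.integers(0, 2, (POP, N_FEATURES))
for _ in range(GENS):
    f = fitness(pop)
    # fitness-proportionate (roulette-wheel) parent selection
    parents = pop[rng.choice(POP, size=POP, p=f / f.sum())]
    # single-point crossover between consecutive parent pairs
    children = parents.copy()
    for i, c in enumerate(rng.integers(1, N_FEATURES, POP // 2)):
        children[2 * i, c:] = parents[2 * i + 1, c:]
        children[2 * i + 1, c:] = parents[2 * i, c:]
    # bit-flip mutation
    flip = rng.random(children.shape) < P_MUT
    pop = np.where(flip, 1 - children, children)
print(f"best fitness after {GENS} generations: {fitness(pop).max():.2f}")
```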
this approach applies evolution theory to information evolution in order to filter the traffic data and thus reduce the complexity . to implement and measure the performance of our system we used the kdd99 benchmark dataset and obtained a reasonable detection rate . story_separator_special_tag this paper describes one of the first attempts to model the temporal structure of massive data streams in real-time using data stream clustering . recently , many data stream clustering algorithms have been developed which efficiently find a partition of the data points in a data stream . however , these algorithms disregard the information represented by the temporal order of the data points in the stream , which for many applications is an important part of the data stream . in this paper we propose a new framework called temporal relationships among clusters for data streams ( tracds ) which allows us to learn the temporal structure while clustering a data stream . we identify , organize and describe the clustering operations which are used by state-of-the-art data stream clustering algorithms . then we show that by defining a set of new operations to transform markov chains with states representing clusters dynamically , we can efficiently capture temporal ordering information . this framework allows us to preserve temporal relationships among clusters for any state-of-the-art data stream clustering algorithm with only minimal overhead . to investigate the usefulness of tracds , we evaluate the improvement of tracds over pure data stream story_separator_special_tag in this work a method for detecting distance-based outliers in data streams is presented . we deal with the sliding window model , where outlier queries are performed in order to detect anomalies in the current window . two algorithms are presented . the first one exactly answers outlier queries , but has larger space requirements . the second algorithm is directly derived from the exact one , has limited memory requirements and returns an approximate answer based on accurate estimations with a statistical guarantee . several experiments have been carried out , confirming the effectiveness of the proposed approach and the high quality of approximate solutions . story_separator_special_tag many distributed systems continuously gather , produce and elaborate data , often as data streams that can change over time . discovering anomalous data is fundamental to obtaining critical and actionable information such as intrusions , faults , and system failures . this paper proposes a multi-agent algorithm to detect anomalies in distributed data streams . as data items arrive from whatever source , they are associated with bio-inspired agents and randomly disseminated onto a virtual space . the loaded agents move on the virtual space in order to form a group following the flocking algorithm . the agents group on the basis of a predefined concept of similarity of their associated objects . only the agents associated with similar objects form a flock , whereas the agents associated with objects dissimilar to each other do not group in flocks . anomalies are objects associated with isolated agents or objects associated with agents belonging to flocks having a small number of elements . swarm intelligence features of the approach , such as adaptivity , parallelism , asynchronism , and decentralization , make the algorithm scalable to very large data sets and very large distributed systems .
experimental results for real story_separator_special_tag feature selection techniques have become an apparent need in many bioinformatics applications . in addition to the large pool of techniques that have already been developed in the machine learning and data mining fields , specific applications in bioinformatics have led to a wealth of newly proposed techniques . in this article , we make the interested reader aware of the possibilities of feature selection , providing a basic taxonomy of feature selection techniques , and discussing their use , variety and potential in a number of both common as well as upcoming bioinformatics applications . story_separator_special_tag in this paper , we propose a genetic algorithm ( ga ) to improve a support vector machine ( svm ) based intrusion detection system ( ids ) . svm is a relatively novel classification technique and has shown higher performance than traditional learning methods in many applications . so several security researchers have proposed svm based ids . we use fusions of ga and svm to enhance the overall performance of svm based ids . through fusions of ga and svm , the `` optimal detection model '' for the svm classifier can be determined . as the result of this fusion , svm based ids not only selects `` optimal parameters '' for svm but also an `` optimal feature set '' among the whole feature set . we demonstrate the feasibility of our method by performing several experiments on the kdd 1999 intrusion detection system competition dataset . story_separator_special_tag an intrusion detection system ( ids ) monitors the attacks occurring in the computer or network . anomaly intrusion detection plays an important role in ids to detect new attacks by detecting any deviation from the normal profile . in this paper , an intelligent algorithm with feature selection and decision rules applied to anomaly intrusion detection is proposed . the key idea is to take advantage of support vector machine ( svm ) , decision tree ( dt ) , and simulated annealing ( sa ) . in the proposed algorithm , svm and sa can find the best selected features to elevate the accuracy of anomaly intrusion detection . by analyzing the information from the kdd'99 dataset , dt and sa can obtain decision rules for new attacks and can improve the accuracy of classification . in addition , the best parameter settings for the dt and svm are automatically adjusted by sa . the proposed algorithm outperforms other existing approaches . simulation results demonstrate that the proposed algorithm is successful in anomaly intrusion detection . story_separator_special_tag as internet access widens , the ids ( intrusion detection system ) is becoming a very important component of network security to prevent unauthorized use and misuse of data . an ids routinely handles massive amounts of data traffic that contain redundant and irrelevant features , which impact the performance of the ids negatively . feature selection methods play an important role in eliminating unrelated and redundant features in ids . statistical analysis , neural networks , machine learning , data mining techniques , and support vector machine models are employed in some such methods . good feature selection leads to better classification accuracy . recently , bio-inspired optimization algorithms have been used for feature selection .
this work provides a survey of feature selection techniques for ids , including bio-inspired algorithms . story_separator_special_tag a central problem in machine learning is identifying a representative set of features from which to construct a classification model for a particular task . this thesis addresses the problem of feature selection for machine learning through a correlation based approach . the central hypothesis is that good feature sets contain features that are highly correlated with the class , yet uncorrelated with each other . a feature evaluation formula , based on ideas from test theory , provides an operational definition of this hypothesis . cfs ( correlation based feature selection ) is an algorithm that couples this evaluation formula with an appropriate correlation measure and a heuristic search strategy . cfs was evaluated by experiments on artificial and natural datasets . three machine learning algorithms were used : c4.5 ( a decision tree learner ) , ib1 ( an instance based learner ) , and naive bayes . experiments on artificial datasets showed that cfs quickly identifies and screens irrelevant , redundant , and noisy features , and identifies relevant features as long as their relevance does not strongly depend on other features . on natural domains , cfs typically eliminated well over half the features . in story_separator_special_tag cyber crimes and malicious network activities have posed serious threats to the entire internet and its users . this issue is becoming more critical , as network-based services are more widespread and closely related to our daily life . thus , it has raised serious concern among individual internet users , industry and the research community . a significant amount of work has been conducted to develop intelligent anomaly-based intrusion detection systems ( idss ) to address this issue . however , one technical challenge , namely reducing false alarms , has accompanied the development of anomaly-based idss since the 1990s . in this paper , we provide a solution to this challenge . a nonlinear correlation coefficient-based ( ncc ) similarity measure is proposed to help extract both linear and nonlinear correlations between network traffic records . this extracted correlative information is used in our proposed ids to detect malicious network behaviours . the effectiveness of the proposed ncc-based measure and the proposed ids are evaluated using the nsl-kdd dataset . the evaluation results demonstrate that the proposed ncc-based measure not only helps reduce the false alarm rate , but also helps discriminate normal and abnormal behaviours efficiently . story_separator_special_tag feature selection is an effective technique for dealing with dimensionality reduction . for classification , it is used to find an `` optimal '' subset of relevant features such that the overall accuracy of classification is increased while the data size is reduced and the comprehensibility is improved . feature selection methods contain two important aspects : evaluation of a candidate feature subset and search through the feature space . existing algorithms adopt various measures to evaluate the goodness of feature subsets . this work focuses on the inconsistency measure , according to which a feature subset is inconsistent if there exist at least two instances with the same feature values but with different class labels .
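this inconsistency criterion is easy to state in code ; a small numpy version , where the rate is the fraction of instances that disagree with the majority class of their identical-feature-value group :

```python
import numpy as np

def inconsistency_rate(X, y):
    """Fraction of instances not in the majority class of the group of
    instances sharing exactly the same (discrete) feature values."""
    inconsistent = 0
    _, inverse = np.unique(X, axis=0, return_inverse=True)
    for g in np.unique(inverse):
        labels = y[inverse == g]
        _, counts = np.unique(labels, return_counts=True)
        inconsistent += labels.size - counts.max()
    return inconsistent / y.size

X = np.array([[0, 1], [0, 1], [0, 1], [1, 0]])
y = np.array([0, 0, 1, 1])
print(inconsistency_rate(X, y))   # 1 of 4 instances is inconsistent -> 0.25
```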
we compare the inconsistency measure with other measures and study different search strategies , such as exhaustive , complete , heuristic and random search , that can be applied to this measure . we conduct an empirical study to examine the pros and cons of these search methods , give some guidelines on choosing a search method , and compare the classifier error rates before and after feature selection . story_separator_special_tag feature interaction presents a challenge to feature selection for classification . a feature by itself may have little correlation with the target concept , but when it is combined with some other features , they can be strongly correlated with the target concept . unintentional removal of these features can result in poor classification performance . handling feature interaction can be computationally intractable . recognizing the presence of feature interaction , we propose to efficiently handle feature interaction to achieve efficient feature selection and present extensive experimental results of evaluation . story_separator_special_tag 1. overview and descriptive statistics . populations , samples , and processes . pictorial and tabular methods in descriptive statistics . measures of location . measures of variability . 2. probability . sample spaces and events . axioms , interpretations , and properties of probability . counting techniques . conditional probability . independence . 3. discrete random variables and probability distributions . random variables . probability distributions for discrete random variables . expected values . the binomial probability distribution . hypergeometric and negative binomial distributions . the poisson probability distribution . 4. continuous random variables and probability distributions . probability density functions . cumulative distribution functions and expected values . the normal distribution . the exponential and gamma distributions . other continuous distributions . probability plots . 5. joint probability distributions and random samples . jointly distributed random variables . expected values , covariance , and correlation . statistics and their distributions . the distribution of the sample mean . the distribution of a linear combination . 6. point estimation . some general concepts of point estimation . methods of point estimation . 7. statistical intervals based on a single sample . basic properties of confidence intervals . large-sample confidence story_separator_special_tag the process of monitoring the events occurring in a computer system or network and analyzing them for signs of intrusions is known as intrusion detection , and a system performing it is an intrusion detection system ( ids ) . this paper presents two hybrid approaches for modeling ids . decision trees ( dt ) and support vector machines ( svm ) are combined as a hierarchical hybrid intelligent system model ( dt-svm ) and as an ensemble approach combining the base classifiers . the hybrid intrusion detection model combines the individual base classifiers and other hybrid machine learning paradigms to maximize detection accuracy and minimize computational complexity . empirical results illustrate that the proposed hybrid systems provide more accurate intrusion detection systems . story_separator_special_tag anomaly detection is a critical issue in network intrusion detection systems ( nidss ) . most anomaly based nidss employ supervised algorithms , whose performances highly depend on attack-free training data .
however , this kind of training data is difficult to obtain in real-world network environments . moreover , with changing network environments or services , patterns of normal traffic will change . this leads to a high false positive rate of supervised nidss . unsupervised outlier detection can overcome the drawbacks of supervised anomaly detection . therefore , we apply an efficient data mining algorithm , the random forests algorithm , in anomaly based nidss . without attack-free training data , the random forests algorithm can detect outliers in datasets of network traffic . in this paper , we discuss our framework of anomaly based network intrusion detection . in the framework , patterns of network services are built by the random forests algorithm over traffic data . intrusions are detected by determining outliers related to the built patterns . we present a modification of the outlier detection algorithm of random forests . we also report our experimental results over the kdd'99 dataset . the results show that the story_separator_special_tag random forests are a combination of tree predictors such that each tree depends on the values of a random vector sampled independently and with the same distribution for all trees in the forest . the generalization error for forests converges a.s. to a limit as the number of trees in the forest becomes large . the generalization error of a forest of tree classifiers depends on the strength of the individual trees in the forest and the correlation between them . using a random selection of features to split each node yields error rates that compare favorably to adaboost ( y. freund & r. schapire , machine learning : proceedings of the thirteenth international conference , 148-156 ) , but are more robust with respect to noise . internal estimates monitor error , strength , and correlation and these are used to show the response to increasing the number of features used in the splitting . internal estimates are also used to measure variable importance . these ideas are also applicable to regression . story_separator_special_tag principal component analysis ( pca ) is a multivariate technique that analyzes a data table in which observations are described by several inter-correlated quantitative dependent variables . its goal is to extract the important information from the table , to represent it as a set of new orthogonal variables called principal components , and to display the pattern of similarity of the observations and of the variables as points in maps . the quality of the pca model can be evaluated using cross-validation techniques such as the bootstrap and the jackknife . pca can be generalized as correspondence analysis ( ca ) in order to handle qualitative variables and as multiple factor analysis ( mfa ) in order to handle heterogeneous sets of variables . mathematically , pca depends upon the eigen-decomposition of positive semi-definite matrices and upon the singular value decomposition ( svd ) of rectangular matrices . story_separator_special_tag abstract handling redundant and irrelevant features in high-dimension datasets has posed a long-term challenge for network anomaly detection . eliminating such features with spectral information not only speeds up the classification process but also helps classifiers make accurate decisions during attack recognition time , especially when coping with large-scale and heterogeneous data .
a novel hybrid dimensionality reduction technique is proposed for intrusion detection combining the approaches of information gain ( ig ) and principal component analysis ( pca ) with an ensemble classifier based on support vector machine ( svm ) , instance-based learning algorithms ( ibk ) , and multilayer perceptron ( mlp ) . the performance of this ig-pca-ensemble method was evaluated based on three well-known datasets , namely iscx 2012 , nsl-kdd , and kyoto 2006+ . experimental results show that the proposed hybrid dimensionality reduction method with the ensemble of the base learners contributes more critical features and significantly outperforms individual approaches , achieving high accuracy and low false alarm rates . a comparative analysis of our approach relative to related work finds that the proposed ig-pca-ensemble method exhibits better performance regarding classification accuracy , detection rate , and false alarm rate story_separator_special_tag a fundamental problem in neural network research , as well as in many other disciplines , is finding a suitable representation of multivariate data , i.e . random vectors . for reasons of computational and conceptual simplicity , the representation is often sought as a linear transformation of the original data . in other words , each component of the representation is a linear combination of the original variables . well-known linear transformation methods include principal component analysis , factor analysis , and projection pursuit . independent component analysis ( ica ) is a recently developed method in which the goal is to find a linear representation of non-gaussian data so that the components are statistically independent , or as independent as possible . such a representation seems to capture the essential structure of the data in many applications , including feature extraction and signal separation . in this paper , we present the basic theory and applications of ica , and our recent work on the subject . story_separator_special_tag most current intrusion detection systems ( ids ) examine all data features to detect intrusion . existing intrusion detection approaches also have some limitations , namely the impossibility of processing large amounts of audit data for real-time operation and low detection and recognition accuracy . to overcome these limitations , we apply modular neural network models to detect and recognize attacks in computer networks . they are based on the combination of principal component analysis ( pca ) neural networks and multilayer perceptrons ( mlp ) . pca networks are employed for important data extraction and to reduce high dimensional data vectors . we present two pca neural networks for feature extraction : linear pca ( lpca ) and nonlinear pca ( npca ) . mlp is employed to detect and recognize attacks using feature-extracted data instead of original data . the proposed approaches are tested with the help of the kdd-99 dataset . the experimental results demonstrate that the designed models are promising in terms of accuracy and computational time for real world intrusion detection . story_separator_special_tag network intrusion detection systems ( nidss ) play a crucial role in defending computer networks . however , there are concerns regarding the feasibility and sustainability of current approaches when faced with the demands of modern networks .
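returning to the ig-pca-ensemble recipe described above , a condensed scikit-learn sketch with information gain approximated by mutual information ; all component counts and the synthetic data are illustrative assumptions , not the paper's tuned setup :

```python
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.ensemble import VotingClassifier
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import Pipeline
from sklearn.svm import SVC

X, y = make_classification(n_samples=2000, n_features=50,
                           n_informative=10, random_state=0)

ensemble = VotingClassifier([
    ("svm", SVC(probability=True)),          # svm base learner
    ("ibk", KNeighborsClassifier()),         # instance-based learner (ibk)
    ("mlp", MLPClassifier(max_iter=500)),    # multilayer perceptron
], voting="soft")

pipe = Pipeline([
    ("ig", SelectKBest(mutual_info_classif, k=20)),   # information-gain-style filter
    ("pca", PCA(n_components=10)),                    # project the kept features
    ("clf", ensemble),
])
pipe.fit(X, y)
print(f"train accuracy: {pipe.score(X, y):.3f}")
```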
more specifically , these concerns relate to the increasing levels of required human interaction and the decreasing levels of detection accuracy . this paper presents a novel deep learning technique for intrusion detection , which addresses these concerns . we detail our proposed nonsymmetric deep autoencoder ( ndae ) for unsupervised feature learning . furthermore , we also propose our novel deep learning classification model constructed using stacked ndaes . our proposed classifier has been implemented in graphics processing unit ( gpu ) -enabled tensorflow and evaluated using the benchmark kdd cup 99 and nsl-kdd datasets . promising results have been obtained from our model thus far , demonstrating improvements over existing approaches and the strong potential for use in modern nidss . story_separator_special_tag we present a new machine learning framework called `` self-taught learning '' for using unlabeled data in supervised classification tasks . we do not assume that the unlabeled data follows the same class labels or generative distribution as the labeled data . thus , we would like to use a large number of unlabeled images ( or audio samples , or text documents ) randomly downloaded from the internet to improve performance on a given image ( or audio , or text ) classification task . such unlabeled data is significantly easier to obtain than in typical semi-supervised or transfer learning settings , making self-taught learning widely applicable to many practical learning problems . we describe an approach to self-taught learning that uses sparse coding to construct higher-level features using the unlabeled data . these features form a succinct input representation and significantly improve classification performance . when using an svm for classification , we further show how a fisher kernel can be learned for this representation . story_separator_special_tag intrusion detection is a necessary step to identify unusual access or attacks to secure internal networks . in general , intrusion detection can be approached by machine learning techniques . in the literature , advanced techniques by hybrid learning or ensemble methods have been considered , and related work has shown that they are superior to the models using single machine learning techniques . this paper proposes a hybrid learning model based on the triangle area based nearest neighbors ( tann ) in order to detect attacks more effectively . in tann , k-means clustering is first used to obtain cluster centers corresponding to the attack classes , respectively . then , the triangle area formed by two cluster centers and one data point from the given dataset is calculated to form a new feature signature of the data . finally , the k-nn classifier is used to classify similar attacks based on the new feature represented by triangle areas . by using kdd-cup '99 as the simulation dataset , the experimental results show that tann can effectively detect intrusion attacks and provide higher accuracy and detection rates , and a lower false alarm rate than three baseline models based on support story_separator_special_tag this paper presents a taxonomy of intrusion detection systems that is then used to survey and classify a number of research prototypes . the taxonomy consists of a classification first of the detection principle , and second of certain operational aspects of the intrusion detection system as such . the systems are also grouped according to the increasing difficulty of the problem they attempt to address .
these classifications are used predictively , pointing towards a number of areas of future research in the field of intrusion detection . story_separator_special_tag continuous , dynamic and short-term learning is an effective learning strategy when operating in dynamic and adversarial environments , where concept drift constantly occurs and attacks rapidly change over time . in an on-line , stream learning model , data arrives as a stream of sequentially ordered samples , and older data is no longer available to revise earlier suboptimal modeling decisions as the fresh data arrives . stream approaches work in a limited amount of time , and have the advantage of performing predictions at any point in time during the stream . we focus on a particularly challenging problem , that of continually learning detection models capable of recognizing cyber-attacks and system intrusions in a highly dynamic and adversarial environment such as the open internet . we consider adaptive learning algorithms for the analysis of continuously evolving network data streams , using ( dynamic ) sliding windows -- representing the system memory -- to periodically re-learn , automatically adapting to concept drifts in the underlying data . by continuously learning and detecting concept drifts to adapt memory length , we show that adaptive learning algorithms can realize high detection accuracy of evolving network attacks over dynamic network data story_separator_special_tag in recent years , a variety of research areas have contributed to a set of related problems with rare event , anomaly , novelty and outlier detection terms as the main actors . these multiple research areas have created a mix-up between terminology and problems . in some research , similar problems have been named differently , while in some other works , the same term has been used to describe different problems . this confusion between terms and problems causes the repetition of research and hinders the advance of the field . therefore , a standardization is imperative . the goal of this paper is to underline the differences between each term , and organize the area by looking at all these terms under the umbrella of supervised classification . therefore , a one-to-one assignment of terms to learning scenarios is proposed . in fact , each learning scenario is associated with the term most frequently used in the literature . in order to validate this proposal , a set of experiments retrieving papers from google scholar , acm digital library and ieee xplore has been carried out . story_separator_special_tag knowledge discovery in databases ( kdd ) is an automatic , exploratory analysis and modeling of large data repositories . kdd is the organized process of identifying valid , novel , useful , and understandable patterns from large and complex data sets . data mining ( dm ) is the core of the kdd process , involving the use of algorithms that explore the data , develop the model and discover previously unknown patterns . the model is used for understanding phenomena from the data , analysis and prediction . story_separator_special_tag a taxonomy of weakly supervised classification problems ; weak supervision in learning and prediction stages ; problem structure : instance-label relationship ; organization of the field : similarities and differences among frameworks ; revealing unexplored challenging frameworks .
in recent years , different researchers in the machine learning community have presented new classification frameworks which go beyond the standard supervised classification in different aspects . specifically , a wide spectrum of novel frameworks that use partially labeled data in the construction of classifiers has been studied . with the objective of drawing up a description of the state-of-the-art , three identifying characteristics of these novel frameworks have been considered : ( 1 ) the relationship between instances and labels of a problem , which may be beyond the one-instance one-label standard , ( 2 ) the possible provision of partial class information for the training examples , and ( 3 ) the possible provision of partial class information also for the examples in the prediction stage . these three ideas have been formulated as axes of a comprehensive taxonomy that organizes the state-of-the-art . the proposed organization allows us both to understand similarities/differences among the different classification problems already presented in the literature as well as to discover story_separator_special_tag semi-supervised learning is a learning paradigm concerned with the study of how computers and natural systems such as humans learn in the presence of both labeled and unlabeled data . traditionally , learning has been studied either in the unsupervised paradigm ( e.g. , clustering , outlier detection ) where all the data is unlabeled , or in the supervised paradigm ( e.g. , classification , regression ) where all the data is labeled . the goal of semi-supervised learning is to understand how combining labeled and unlabeled data may change the learning behavior , and design algorithms that take advantage of such a combination . semi-supervised learning is of great interest in machine learning and data mining because it can use readily available unlabeled data to improve supervised learning tasks when the labeled data is scarce or expensive . semi-supervised learning also shows potential as a quantitative tool to understand human category learning , where most of the input is self-evidently unlabeled . in this introductory book , we present some popular semi-supervised learning models , including self-training , mixture models , co-training and multiview learning , graph-based methods , and semi-supervised support vector machines . for each model , we story_separator_special_tag of the dissertation concept-learning in the absence of counter-examples : an autoassociation-based approach to classification by nathalie japkowicz dissertation directors : stephen jose hanson and casimir kulikowski the overwhelming majority of research currently pursued within the framework of concept-learning concentrates on discrimination-based learning , an inductive learning paradigm that relies on both examples and counter-examples of the concept . this emphasis , however , can present a practical problem : there are real-world engineering problems for which counter-examples are both scarce and difficult to gather . for these problems , recognition-based learning systems are much more appropriate because they do not use counter-examples in the concept-learning phase . the purpose of this dissertation is to analyze a connectionist recognition-based learning system -- autoassociation-based classification -- and answer the following questions : what features of the autoassociator make it capable of performing classification in the absence of counter-examples ?
what causes the autoassociator to be significantly more efficient than mlp in certain domains ? what domain characteristics cause the autoassociator to be more accurate than mlp and mlp to be more accurate than the autoassociator ? the dissertation concludes that 1 ) autoassociation-based classification is possible in story_separator_special_tag application and development of specialized machine learning techniques is gaining increasing attention in the intrusion detection community . a variety of learning techniques proposed for different intrusion detection problems can be roughly classified into two broad categories : supervised ( classification ) and unsupervised ( anomaly detection and clustering ) . in this contribution we develop an experimental framework for comparative analysis of both kinds of learning techniques . in our framework we cast unsupervised techniques into a special case of classification , for which training and model selection can be performed by means of roc analysis . we then investigate both kinds of learning techniques with respect to their detection accuracy and ability to detect unknown attacks . story_separator_special_tag information security is an issue of serious global concern . the complexity , accessibility , and openness of the internet have served to increase the security risk of information systems tremendously . this paper concerns intrusion detection . we describe approaches to intrusion detection using neural networks and support vector machines . the key ideas are to discover useful patterns or features that describe user behavior on a system , and use the set of relevant features to build classifiers that can recognize anomalies and known intrusions , hopefully in real time . using a set of benchmark data from a kdd ( knowledge discovery and data mining ) competition designed by darpa , we demonstrate that efficient and accurate classifiers can be built to detect intrusions . we compare the performance of neural network based and support vector machine based systems for intrusion detection . story_separator_special_tag abstract the aim of an intrusion detection system ( ids ) is to detect various types of malicious network traffic and computer usage , which cannot be detected by a conventional firewall . many idss have been developed based on machine learning techniques . specifically , advanced detection approaches created by combining or integrating multiple learning techniques have shown better detection performance than general single learning techniques . the feature representation method is an important factor in enabling a pattern classifier to make correct classifications ; however , there have been very few related studies focusing on how to extract more representative features for normal connections and effective detection of attacks . this paper proposes a novel feature representation approach , namely the cluster center and nearest neighbor ( cann ) approach . in this approach , two distances are measured and summed , the first being the distance between each data sample and its cluster center , and the second being the distance between the data sample and its nearest neighbor in the same cluster . then , this new and one-dimensional distance based feature is used to represent each data sample for intrusion detection by a k-nearest neighbor ( k-nn ) classifier story_separator_special_tag summary with the tremendous growth of network-based services and sensitive information on networks , network security is gaining more and more importance than ever .
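the cluster-center-and-nearest-neighbor ( cann ) representation summarized a few abstracts above reduces to two distances per sample ; a schematic version under illustrative settings ( k-means with one center per class , euclidean distance , k-nn on the resulting one-dimensional feature ) :

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs
from sklearn.neighbors import KNeighborsClassifier

X, y = make_blobs(n_samples=600, centers=3, random_state=0)

# distance to the nearest cluster center ...
km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)
d_center = np.min(km.transform(X), axis=1)

# ... plus distance to the nearest neighbor in the same cluster
d_nn = np.empty(len(X))
for c in range(km.n_clusters):
    idx = np.flatnonzero(km.labels_ == c)
    D = np.linalg.norm(X[idx, None, :] - X[None, idx, :], axis=-1)
    np.fill_diagonal(D, np.inf)
    d_nn[idx] = D.min(axis=1)

# the summed distance is the new one-dimensional feature for k-nn
f = (d_center + d_nn).reshape(-1, 1)
clf = KNeighborsClassifier(n_neighbors=5).fit(f, y)
print(f"train accuracy on the 1-d cann feature: {clf.score(f, y):.3f}")
```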
intrusion poses a serious security risk in a network environment . the ever-growing new intrusion types pose a serious problem for their detection . the human labelling of the available network audit data instances is usually tedious , time consuming and expensive . in this paper , we apply an efficient data mining algorithm called naive bayes for anomaly based network intrusion detection . experimental results on the kdd cup 99 data set show the novelty of our approach in detecting network intrusion . it is observed that the proposed technique performs better in terms of false positive rate , cost , and computational time when applied to kdd 99 data sets compared to a back propagation neural network based approach . story_separator_special_tag 1. various aspects of memory ; 1.1 on the purpose and nature of biological memory ; 1.1.1 some fundamental concepts ; 1.1.2 the classical laws of association ; 1.1.3 on different levels of modelling ; 1.2 questions concerning the fundamental mechanisms of memory ; 1.2.1 where do the signals relating to memory act upon ? ; 1.2.2 what kind of encoding is used for neural signals ? ; 1.2.3 what are the variable memory elements ? ; 1.2.4 how are neural signals addressed in memory ? ; 1.3 elementary operations implemented by associative memory ; 1.3.1 associative recall ; 1.3.2 production of sequences from the associative memory ; 1.3.3 on the meaning of background and context ; 1.4 more abstract aspects of memory ; 1.4.1 the problem of infinite-state memory ; 1.4.2 invariant representations ; 1.4.3 symbolic representations ; 1.4.4 virtual images ; 1.4.5 the logic of stored knowledge ; 2. pattern mathematics ; 2.1 mathematical notations and methods ; 2.1.1 vector space concepts ; 2.1.2 matrix notations ; 2.1.3 further properties of matrices ; 2.1.4 matrix equations ; 2.1.5 projection operators ; 2.1.6 on matrix differential calculus ; 2.2 distance measures for patterns ; 2.2.1 measures of similarity and distance in vector spaces ; 2.2.2 measures of similarity and distance between symbol strings ; 2.2.3 more accurate distance measures for text ; 3. classical learning systems ; 3.1 story_separator_special_tag a novel multilevel hierarchical kohonen net ( k-map ) for an intrusion detection system is presented . each level of the hierarchical map is modeled as a simple winner-take-all k-map . one significant advantage of this multilevel hierarchical k-map is its computational efficiency . unlike other statistical anomaly detection methods such as the nearest neighbor approach , k-means clustering or probabilistic analysis that employ distance computation in the feature space to identify the outliers , our approach does not involve costly point-to-point computation in organizing the data into clusters . another advantage is the reduced network size . we use the classification capability of the k-map on selected dimensions of the data set in detecting anomalies . randomly selected subsets that contain both attacks and normal records from the kdd cup 1999 benchmark data are used to train the hierarchical net . we use a confidence measure to label the clusters . then we use the test set from the same kdd cup 1999 benchmark to test the hierarchical net .
we show that a hierarchical k-map in which each layer operates on a small subset of the feature space is superior to a single-layer k-map operating on the whole feature space story_separator_special_tag ensemble methods have been called the most influential development in data mining and machine learning in the past decade . they combine multiple models into one usually more accurate than the best of its components . ensembles can provide a critical boost to industrial challenges -- from investment timing to drug discovery , and fraud detection to recommendation systems -- where predictive accuracy is more vital than model interpretability . ensembles are useful with all modeling algorithms , but this book focuses on decision trees to explain them most clearly . after describing trees and their strengths and weaknesses , the authors provide an overview of regularization -- today understood to be a key reason for the superior performance of modern ensembling algorithms . the book continues with a clear description of two recent developments : importance sampling ( is ) and rule ensembles ( re ) . is reveals classic ensemble methods -- bagging , random forests , and boosting -- to be special cases of a single algorithm , thereby showing how to improve their accuracy and speed . res are linear rule models derived from decision tree ensembles . they are the most interpretable version of ensembles story_separator_special_tag this is the first textbook on pattern recognition to present the bayesian viewpoint . the book presents approximate inference algorithms that permit fast approximate answers in situations where exact answers are not feasible . it uses graphical models to describe probability distributions ; no other book applies graphical models to machine learning in this way . no previous knowledge of pattern recognition or machine learning concepts is assumed . familiarity with multivariate calculus and basic linear algebra is required , and some experience in the use of probabilities would be helpful though not essential , as the book includes a self-contained introduction to basic probability theory . story_separator_special_tag network operators are increasingly using analytic applications to improve the performance of their networks . telecommunications analytical applications typically use sql and complex event processing ( cep ) for data processing , network analysis and troubleshooting . such approaches are hindered as they require an in-depth knowledge of both the telecommunications domain and telecommunications data structures in order to create the required queries . valuable information contained in free form text data fields such as additional_info , user_text or problem_text is also often ignored . this work proposes an anomaly detection algorithm for maintenance and network troubleshooting ( adamant ) , a text analytic based network anomaly detection approach . once telecommunications data records have been indexed , adamant uses distance based outlier detection within sliding windows to detect abnormal terms at configurable time intervals . traditional approaches focus on a specific type of record and create specific cause and effect rules . with the adamant approach all free form text fields of alarms , logs , etc . are treated as text documents similar to twitter feeds . all documents within a window represent a snapshot of the network state that is processed by adamant .
the adamant approach story_separator_special_tag efficiently detecting outliers or anomalies is an important problem in many areas of science , medicine and information technology . applications range from data cleaning to clinical diagnosis , from detecting anomalous defects in materials to fraud and intrusion detection . over the past decade , researchers in data mining and statistics have addressed the problem of outlier detection using both parametric and non-parametric approaches in a centralized setting . however , there are still several challenges that must be addressed . first , most approaches to date have focused on detecting outliers in a continuous attribute space . however , almost all real-world data sets contain a mixture of categorical and continuous attributes . categorical attributes are typically ignored or incorrectly modeled by existing approaches , resulting in a significant loss of information . second , there have not been any general-purpose distributed outlier detection algorithms . most distributed detection algorithms are designed with a specific domain ( e.g . sensor networks ) in mind . third , the data sets being analyzed may be streaming or otherwise dynamic in nature . such data sets are prone to concept drift , and models of the data must be dynamic story_separator_special_tag outlier detection has attracted substantial attention in many applications and research areas ; some of the most prominent applications are network intrusion detection or credit card fraud detection . many of the existing approaches are based on calculating distances among the points in the dataset . these approaches cannot easily adapt to current datasets that usually contain a mix of categorical and continuous attributes , and may be distributed among different geographical locations . in addition , current datasets usually have a large number of dimensions . these datasets tend to be sparse , and traditional concepts such as euclidean distance or nearest neighbor become unsuitable . we propose a fast distributed outlier detection strategy intended for datasets containing mixed attributes . the proposed method takes into consideration the sparseness of the dataset , and is experimentally shown to be highly scalable with the number of points and the number of attributes in the dataset . experimental results show that the proposed outlier detection method compares very favorably with other state-of-the-art outlier detection strategies proposed in the literature and that the speedup achieved by its distributed version is very close to linear . story_separator_special_tag the open-source software rrdtool and cricket provide a solution to the problem of collecting , storing , and visualizing service network time series data for the real-time monitoring task . however , simultaneously monitoring all service network time series of interest is an impossible task even for the accomplished network technician . the solution is to integrate a mathematical model for automatic aberrant behavior detection in time series into the monitoring software . while there are many such models one might choose , the primary goal should be a model compatible with real-time monitoring . at webtv , the solution was to integrate a model based on exponential smoothing and holt-winters forecasting into the cricket/rrdtool architecture . while perhaps not optimal , this solution is flexible , efficient , and effective as a tool for automatic aberrant behavior detection .
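a stripped-down illustration of the exponential-smoothing approach above : this sketch smooths only a level term ( the full holt-winters method also smooths trend and seasonal components ) , and the smoothing constants and tolerance band are illustrative choices .

```python
# minimal aberrant-behavior detector in the spirit of the abstract above:
# an exponentially smoothed forecast plus a tolerance band scaled by a
# smoothed absolute deviation. alpha, gamma and the band width are illustrative,
# and only the level term is smoothed (no trend/seasonal terms).
import numpy as np

rng = np.random.default_rng(1)
series = 100 + 10 * np.sin(np.arange(200) / 10) + rng.normal(0, 2, 200)
series[150] += 40  # injected aberration

alpha, gamma, band = 0.3, 0.3, 5.0
level, dev = series[0], 3.0
for t in range(1, len(series)):
    forecast = level
    err = series[t] - forecast
    if abs(err) > band * dev:
        print(f"t={t}: observed {series[t]:.1f}, expected {forecast:.1f} -> aberrant")
    level = alpha * series[t] + (1 - alpha) * level   # update smoothed level
    dev = gamma * abs(err) + (1 - gamma) * dev        # update smoothed deviation
```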
story_separator_special_tag we are seeing an enormous increase in the availability of streaming , time-series data . largely driven by the rise of connected real-time data sources , this data presents technical challenges and opportunities . one fundamental capability for streaming analytics is to model each stream in an unsupervised fashion and detect unusual , anomalous behaviors in real-time . early anomaly detection is valuable , yet it can be difficult to execute reliably in practice . application constraints require systems to process data in real-time , not batches . streaming data inherently exhibits concept drift , favoring algorithms that learn continuously . furthermore , the massive number of independent streams in practice requires that anomaly detectors be fully automated . in this paper we propose a novel anomaly detection algorithm that meets these constraints . the technique is based on an online sequence memory algorithm called hierarchical temporal memory ( htm ) . we also present results using the numenta anomaly benchmark ( nab ) , a benchmark containing real-world data streams with labeled anomalies . the benchmark , the first of its kind , provides a controlled open-source environment for testing anomaly detection algorithms on streaming data . we story_separator_special_tag the use of anomaly detection ( ad ) within the network intrusion detection field of research , or network intrusion ad ( niad ) , depends on the proper use of similarity and distance measures , but the measures used are often not documented in published research . as a result , while the body of niad research has grown extensively , knowledge of the utility of similarity and distance measures within the field has not grown correspondingly . niad research covers a myriad of domains and employs a diverse array of techniques from simple $k$-means clustering through advanced multiagent distributed ad systems . this review presents an overview of the use of similarity and distance measures within niad research . the analysis provides a theoretical background in distance measures and a discussion of various types of distance measures and their uses . exemplary uses of distance measures in published research are presented , as is the overall state of the distance measure rigor in the field . finally , areas that require further focus on improving the distance measure rigor in the niad field are presented . story_separator_special_tag anomaly detection algorithms face several challenges , including processing speed , adapting to changes in dynamic environments , and dealing with noise in data . in this paper , a two-layer cluster-based anomaly detection structure is presented which is fast , noise-resilient and incremental . the proposed structure comprises three main steps . in the first step , the data are clustered . the second step is to represent each cluster in a way that enables the model to classify new instances . the summarization based on gaussian mixture model ( sgmm ) proposed in this paper represents each cluster as a gmm . in the third step , a two-layer structure efficiently updates clusters using gmm representation , while detecting and ignoring redundant instances . a new approach , called collective probabilistic labeling ( cpl ) , is presented to update clusters incrementally . this approach makes the updating phase noise-resistant and fast . an important step in the updating is the merging of new clusters with existing ones .
to this end , a new distance measure is proposed , which is a modified kullback-leibler distance between two gmms . in most real-time anomaly detection applications , story_separator_special_tag despite the large volume of research conducted in the field of intrusion detection , finding a perfect intrusion detection solution for critical applications is still a major challenge . this is mainly due to the continuous emergence of security threats which can bypass the outdated intrusion detection systems . the main objective of this paper is to propose an adaptive design of intrusion detection systems on the basis of extreme learning machines . the proposed system offers the capability of detecting known and novel attacks and being updated according to new trends of data patterns provided by security experts in a cost-effective manner . story_separator_special_tag in this research , we propose two new clustering algorithms , the improved competitive learning network ( icln ) and the supervised improved competitive learning network ( sicln ) , for fraud detection and network intrusion detection . the icln is an unsupervised clustering algorithm , which applies new rules to the standard competitive learning neural network ( scln ) . the network neurons in the icln are trained to represent the center of the data by a new reward-punishment update rule . this new update rule overcomes the instability of the scln . the sicln is a supervised version of the icln . in the sicln , the new supervised update rule uses the data labels to guide the training process to achieve a better clustering result . the sicln can be applied to both labeled and unlabeled data and is highly tolerant of missing or delayed labels . furthermore , the sicln is capable of reconstructing itself , and is thus completely independent of the initial number of clusters . to assess the proposed algorithms , we have performed experimental comparisons on both research data and real-world data in fraud detection and network intrusion detection . the results demonstrate story_separator_special_tag this thesis is a study of the computational complexity of machine learning from examples in the distribution-free model introduced by l. g. valiant ( v84 ) . in the distribution-free model , a learning algorithm receives positive and negative examples of an unknown target set ( or concept ) that is chosen from some known class of sets ( or concept class ) . these examples are generated randomly according to a fixed but unknown probability distribution representing nature , and the goal of the learning algorithm is to infer a hypothesis concept that closely approximates the target concept with respect to the unknown distribution . this thesis is concerned with proving theorems about learning in this formal mathematical model . we are interested in the phenomenon of efficient learning in the distribution-free model , in the standard polynomial-time sense . our results include general tools for determining the polynomial-time learnability of a concept class , an extensive study of efficient learning when errors are present in the examples , and lower bounds on the number of examples required for learning in our model . a centerpiece of the thesis is a series of results demonstrating the computational difficulty of story_separator_special_tag data stream mining is an active research area that has recently emerged to discover knowledge from large amounts of continuously generated data .
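the exact modification of the kullback-leibler distance used above is not reproduced here , so the sketch below shows the standard closed-form kl divergence between two gaussian components , the usual building block for such gmm merging criteria .

```python
# standard closed-form kl divergence between two gaussians -- the usual
# building block for gmm-to-gmm distances like the modified measure above
# (whose exact modification is not given here). parameters are illustrative.
import numpy as np

def kl_gauss(mu0, S0, mu1, S1):
    """kl( n(mu0, S0) || n(mu1, S1) ) for d-dimensional gaussians."""
    d = len(mu0)
    S1_inv = np.linalg.inv(S1)
    diff = mu1 - mu0
    return 0.5 * (np.trace(S1_inv @ S0)
                  + diff @ S1_inv @ diff
                  - d
                  + np.log(np.linalg.det(S1) / np.linalg.det(S0)))

mu_a, S_a = np.array([0.0, 0.0]), np.eye(2)
mu_b, S_b = np.array([0.5, 0.0]), 1.2 * np.eye(2)
# a symmetrized distance is often used when deciding whether to merge clusters
sym = 0.5 * (kl_gauss(mu_a, S_a, mu_b, S_b) + kl_gauss(mu_b, S_b, mu_a, S_a))
print(f"symmetrized kl distance: {sym:.3f}")
```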
in this context , several data stream clustering algorithms have been proposed to perform unsupervised learning . nevertheless , data stream clustering imposes several challenges to be addressed , such as dealing with nonstationary , unbounded data that arrive in an online fashion . the intrinsic nature of stream data requires the development of algorithms capable of performing fast and incremental processing of data objects , suitably addressing time and memory limitations . in this article , we present a survey of data stream clustering algorithms , providing a thorough discussion of the main design components of state-of-the-art algorithms . in addition , this work addresses the temporal aspects involved in data stream clustering , and presents an overview of the usually employed experimental methodologies . a number of references are provided that describe applications of data stream clustering in different domains , such as network intrusion detection , sensor networks , and stock market analysis . information regarding software packages and data repositories is also available to help researchers and practitioners . finally , some important story_separator_special_tag concept drift primarily refers to an online supervised learning scenario when the relation between the input data and the target variable changes over time . assuming a general knowledge of supervised learning in this article , we characterize adaptive learning processes ; categorize existing strategies for handling concept drift ; overview the most representative , distinct , and popular techniques and algorithms ; discuss evaluation methodology of adaptive algorithms ; and present a set of illustrative applications . the survey covers the different facets of concept drift in an integrated way to reflect on the existing scattered state of the art . thus , it aims at providing a comprehensive introduction to concept drift adaptation for researchers , industry analysts , and practitioners . story_separator_special_tag nowadays , with the advance of technology , many applications generate huge amounts of data streams at very high speed . examples include network traffic , web click streams , video surveillance , and sensor networks . data stream mining has become a hot research topic . its goal is to extract hidden knowledge/patterns from continuous data streams . unlike traditional data mining where the dataset is static and can be repeatedly read many times , data stream mining algorithms face many challenges and have to satisfy constraints such as bounded memory , single-pass , real-time response , and concept-drift detection . this paper presents a comprehensive survey of the state-of-the-art data stream mining algorithms with a focus on clustering and classification because of their ubiquitous usage . it identifies mining constraints , proposes a general model for data stream mining , and depicts the relationship between traditional data mining and data stream mining . furthermore , it analyzes the advantages as well as limitations of data stream algorithms and suggests potential areas for future research . story_separator_special_tag security analysis of learning algorithms is gaining increasing importance , especially since they have become a target of deliberate obstruction in certain applications . some security-hardened algorithms have been previously proposed for supervised learning ; however , very little is known about the behavior of anomaly detection methods in such scenarios .
in this contribution , we analyze the performance of a particular method , online centroid anomaly detection , in the presence of adversarial noise . our analysis addresses three key security-related issues : derivation of an optimal attack , analysis of its efficiency and constraints . experimental evaluation carried out on real http and exploit traces confirms the tightness of our theoretical bounds . story_separator_special_tag learning from data streams is a research area of increasing importance . nowadays , several stream learning algorithms have been developed . most of them learn decision models that continuously evolve over time , run in resource-aware environments , detect and react to changes in the environment generating data . one important issue , not yet conveniently addressed , is the design of experimental work to evaluate and compare decision models that evolve over time . there are no gold standards for assessing performance in non-stationary environments . this paper proposes a general framework for assessing predictive stream learning algorithms . we defend the use of predictive sequential methods for error estimation -- the prequential error . the prequential error allows us to monitor the evolution of the performance of models that evolve over time . nevertheless , it is known to be a pessimistic estimator in comparison to holdout estimates . to obtain more reliable estimators we need some forgetting mechanism . two viable alternatives are : sliding windows and fading factors . we observe that the prequential error converges to a holdout estimator when estimated over a sliding window or using fading factors . we present illustrative examples story_separator_special_tag most statistical and machine-learning algorithms assume that the data is a random sample drawn from a stationary distribution . unfortunately , most of the large databases available for mining today violate this assumption . they were gathered over months or years , and the underlying processes generating them changed during this time , sometimes radically . although a number of algorithms have been proposed for learning time-changing concepts , they generally do not scale well to very large databases . in this paper we propose an efficient algorithm for mining decision trees from continuously-changing data streams , based on the ultra-fast vfdt decision tree learner . this algorithm , called cvfdt , stays current while making the most of old data by growing an alternative subtree whenever an old one becomes questionable , and replacing the old with the new when the new becomes more accurate . cvfdt learns a model which is similar in accuracy to the one that would be learned by reapplying vfdt to a moving window of examples every time a new example arrives , but with $O(1)$ complexity per example , as opposed to $O(w)$ , where $w$ is story_separator_special_tag the most commonly reported model evaluation metric is the accuracy . this metric can be misleading when the data are imbalanced . in such cases , other evaluation metrics should be considered in addition to the accuracy . this study reviews alternative evaluation metrics for assessing the effectiveness of a model in highly imbalanced data . we used credit card clients in taiwan as a case study . the data set contains 30,000 instances ( 22.12 % risky and 77.88 % non-risky ) assessing the likelihood of a customer defaulting on a payment . three different techniques were used during the model building process .
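the prequential error with a fading factor described above reduces to the recursion $s_i = l_i + \alpha s_{i-1}$ , $b_i = 1 + \alpha b_{i-1}$ , with the faded estimate $s_i / b_i$ ; a test-then-train sketch on a toy stream follows ( the learner and $\alpha = 0.99$ are illustrative choices ) .

```python
# prequential (test-then-train) error with a fading factor, per the recursion
# s_i = l_i + a*s_{i-1}, b_i = 1 + a*b_{i-1}, estimate = s_i / b_i.
# the linear learner and the toy stream are illustrative stand-ins.
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(2)
clf = SGDClassifier()
alpha, S, B = 0.99, 0.0, 0.0
first = True
for t in range(2000):
    x = rng.normal(size=(1, 2))
    y = np.array([int(x[0, 0] + x[0, 1] > 0)])
    if not first:                              # test first ...
        loss = float(clf.predict(x)[0] != y[0])   # 0-1 loss on this example
        S = loss + alpha * S
        B = 1.0 + alpha * B
        if t % 500 == 0:
            print(f"t={t}: faded prequential error = {S / B:.3f}")
    clf.partial_fit(x, y, classes=[0, 1])      # ... then train
    first = False
```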
the first technique involved down-sampling the majority class in the training subset . the second used the original imbalanced data , whereas the third set prior probabilities to account for oversampling . the same sets of predictive models were then built for each technique after which the evaluation metrics were computed . the results suggest that model evaluation metrics might reveal more about the distribution of classes than they do about the actual performance of models when the data are imbalanced . moreover , some of the predictive models were identified to be very story_separator_special_tag in this paper , we present three datasets that have been built from network traffic traces using asnm ( advanced security network metrics ) features , designed in our previous work . the first dataset was built using a state-of-the-art dataset cdx 2009 that was collected during a cyber defense exercise , while the remaining two datasets were collected by us in 2015 and 2018 using publicly available network services containing buffer overflow and other high severity vulnerabilities . these two datasets contain several adversarial obfuscation techniques that were applied to malicious as well as legitimate traffic samples during the execution of their tcp network connections . adversarial obfuscation techniques were used for evading machine learning-based network intrusion detection classifiers . we show that the performance of such classifiers can be improved when partially augmenting their training data by samples obtained from obfuscation techniques . in detail , we utilized tunneling obfuscation in the http ( s ) protocol and non-payload-based obfuscations modifying various properties of network traffic by , e.g. , tcp segmentation , re-transmissions , corrupting and reordering of packets , etc . to the best of our knowledge , this is the first collection of network traffic data story_separator_special_tag every day , huge volumes of sensory , transactional , and web data are continuously generated as streams , which need to be analyzed online as they arrive . streaming data can be considered one of the main sources of what is called big data . while predictive modeling for data streams and big data have received a lot of attention over the last decade , many research approaches are typically designed for well-behaved controlled problem settings , overlooking important challenges imposed by real-world applications . this article presents a discussion on eight open challenges for data stream mining . our goal is to identify gaps between current research and meaningful applications , highlight open problems , and define new application-relevant research directions for data stream mining . the identified challenges cover the full cycle of knowledge discovery and involve such problems as : protecting data privacy , dealing with legacy systems , handling incomplete and delayed information , analysis of complex data , and evaluation of stream mining algorithms . the resulting analysis is illustrated by practical applications and provides general suggestions concerning lines of future research in data stream mining . story_separator_special_tag machine learning systems offer unparalleled flexibility in dealing with evolving input in a variety of applications , such as intrusion detection systems and spam e-mail filtering . however , machine learning algorithms themselves can be a target of attack by a malicious adversary .
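the point about accuracy on imbalanced data can be made concrete with a small sketch ; the data here are synthetic with a roughly 22/78 split echoing the case study above , and the model choice is illustrative .

```python
# accuracy vs. class-aware metrics on an imbalanced problem like the credit
# default case study above. the data are synthetic (~22/78 split); the model
# choice is illustrative.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import (accuracy_score, balanced_accuracy_score,
                             f1_score, roc_auc_score)

X, y = make_classification(n_samples=5000, weights=[0.78, 0.22], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
pred = clf.predict(X_te)
proba = clf.predict_proba(X_te)[:, 1]
print(f"accuracy          {accuracy_score(y_te, pred):.3f}")   # flattered by majority class
print(f"balanced accuracy {balanced_accuracy_score(y_te, pred):.3f}")
print(f"f1 (minority)     {f1_score(y_te, pred):.3f}")
print(f"roc auc           {roc_auc_score(y_te, proba):.3f}")
```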
this paper provides a framework for answering the question , `` can machine learning be secure ? '' novel contributions of this paper include a taxonomy of different types of attacks on machine learning techniques and systems , a variety of defenses against those attacks , a discussion of ideas that are important to security for machine learning , an analytical model giving a lower bound on the attacker 's work function , and a list of open problems . story_separator_special_tag defending a server against internet worms and defending a user 's email inbox against spam bear certain similarities . in both cases , a stream of samples arrives , and a classifier must automatically determine whether each sample falls into a malicious target class ( e.g. , worm network traffic , or spam email ) . a learner typically generates a classifier automatically by analyzing two labeled training pools : one of innocuous samples , and one of samples that fall in the malicious target class . learning techniques have previously found success in settings where the content of the labeled samples used in training is either random , or even constructed by a helpful teacher , who aims to speed learning of an accurate classifier . in the case of learning classifiers for worms and spam , however , an adversary controls the content of the labeled samples to a great extent . in this paper , we describe practical attacks against learning , in which an adversary constructs labeled samples that , when used to train a learner , prevent or severely delay generation of an accurate classifier . we show that even a delusive adversary , whose story_separator_special_tag in security-sensitive applications , the success of machine learning depends on a thorough vetting of its resistance to adversarial data . in one pertinent , well-motivated attack scenario , an adversary may attempt to evade a deployed system at test time by carefully manipulating attack samples . in this work , we present a simple but effective gradient-based approach that can be exploited to systematically assess the security of several , widely-used classification algorithms against evasion attacks . following a recently proposed framework for security evaluation , we simulate attack scenarios that exhibit different risk levels for the classifier by increasing the attacker 's knowledge of the system and her ability to manipulate attack samples . this gives the classifier designer a better picture of the classifier performance under evasion attacks , and allows them to perform a more informed model selection ( or parameter setting ) . we evaluate our approach on the relevant security task of malware detection in pdf files , and show that such systems can be easily evaded . we also sketch some countermeasures suggested by our analysis . story_separator_special_tag machine learning ( ml ) models , e.g. , deep neural networks ( dnns ) , are vulnerable to adversarial examples : malicious inputs modified to yield erroneous model outputs , while appearing unmodified to human observers . potential attacks include having malicious content like malware identified as legitimate or controlling vehicle behavior . yet , all existing adversarial example attacks require knowledge of either the model internals or its training data . we introduce the first practical demonstration of an attacker controlling a remotely hosted dnn with no such knowledge . indeed , the only capability of our black-box adversary is to observe labels given by the dnn to chosen inputs .
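a minimal version of a gradient-based evasion attack in the spirit of the approach above , applied to a logistic regression model whose input gradient is available in closed form ; the perturbation budget is an illustrative choice .

```python
# minimal gradient-based evasion sketch: for a logistic regression classifier
# p(y=1|x) = sigmoid(w.x + b), stepping against sign(w) lowers the score.
# the perturbation budget eps is illustrative, not from the paper.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X, y)

# pick a correctly classified "malicious" sample (class 1)
i = int(np.where((y == 1) & (clf.predict(X) == 1))[0][0])
x = X[i].copy()

eps = 0.5
x_adv = x - eps * np.sign(clf.coef_[0])   # one signed-gradient evasion step

print("original score :", clf.predict_proba(x.reshape(1, -1))[0, 1])
print("evasion score  :", clf.predict_proba(x_adv.reshape(1, -1))[0, 1])
```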
our attack strategy consists of training a local model to substitute for the target dnn , using inputs synthetically generated by an adversary and labeled by the target dnn . we use the local substitute to craft adversarial examples , and find that they are misclassified by the targeted dnn . to perform a real-world and properly-blinded evaluation , we attack a dnn hosted by metamind , an online deep learning api . we find that their dnn misclassifies 84.24 % of the adversarial examples crafted with story_separator_special_tag pattern classification systems are commonly used in adversarial applications , like biometric authentication , network intrusion detection , and spam filtering , in which data can be purposely manipulated by humans to undermine their operation . as this adversarial scenario is not taken into account by classical design methods , pattern classification systems may exhibit vulnerabilities , whose exploitation may severely affect their performance , and consequently limit their practical utility . extending pattern classification theory and design methods to adversarial settings is thus a novel and very relevant research direction , which has not yet been pursued in a systematic way . in this paper , we address one of the main open issues : evaluating at the design phase the security of pattern classifiers , namely , the performance degradation under potential attacks they may incur during operation . we propose a framework for empirical evaluation of classifier security that formalizes and generalizes the main ideas proposed in the literature , and give examples of its use in three real applications . reported results show that security evaluation can provide a more complete understanding of the classifier 's behavior in adversarial environments , and lead to better design choices
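the substitute-model strategy above can be sketched end to end on toy data ; the oracle , substitute , query budget , and step size below are all illustrative stand-ins , not the paper 's setup .

```python
# toy substitute-model attack: query a black-box "oracle" for labels, train a
# local substitute on those labels, then craft perturbations against the
# substitute and test them on the oracle. all components are illustrative.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
X, y = make_classification(n_samples=3000, n_features=10, random_state=1)
oracle = RandomForestClassifier(random_state=1).fit(X, y)   # black box to the attacker

# attacker: label a small query set with the oracle, then fit a local substitute
queries = X[:300] + rng.normal(scale=0.5, size=(300, 10))
substitute = LogisticRegression(max_iter=1000).fit(queries, oracle.predict(queries))

# craft perturbations against the differentiable substitute, evaluate on the oracle
x = X[:200].copy()
x_adv = x.copy()
target = oracle.predict(x) == 1                 # try to flip samples labeled 1
x_adv[target] -= 0.8 * np.sign(substitute.coef_[0])
flipped = np.mean(oracle.predict(x_adv) != oracle.predict(x))
print(f"fraction of oracle predictions changed: {flipped:.2f}")
```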
this article provides an update on the global cancer burden using the globocan 2020 estimates of cancer incidence and mortality produced by the international agency for research on cancer . worldwide , an estimated 19.3 million new cancer cases ( 18.1 million excluding nonmelanoma skin cancer ) and almost 10.0 million cancer deaths ( 9.9 million excluding nonmelanoma skin cancer ) occurred in 2020. female breast cancer has surpassed lung cancer as the most commonly diagnosed cancer , with an estimated 2.3 million new cases ( 11.7 % ) , followed by lung ( 11.4 % ) , colorectal ( 10.0 % ) , prostate ( 7.3 % ) , and stomach ( 5.6 % ) cancers . lung cancer remained the leading cause of cancer death , with an estimated 1.8 million deaths ( 18 % ) , followed by colorectal ( 9.4 % ) , liver ( 8.3 % ) , stomach ( 7.7 % ) , and female breast ( 6.9 % ) cancers . overall incidence was 2-fold to 3-fold higher in transitioned versus transitioning countries for both sexes , whereas mortality varied < 2-fold for men and little for women . story_separator_special_tag a large number of experimental discoveries , especially in the heavy quarkonium sector , that did not at all fit the expectations of the until-then very successful quark model has led to a renaissance of hadron spectroscopy . among various explanations of the internal structure of these excitations , hadronic molecules , being analogues of light nuclei , play a unique role , since for these , predictions can be made with controlled uncertainty . we review experimental evidence for various candidates of hadronic molecules , and methods of identifying such structures . nonrelativistic effective field theories are the suitable framework for studying hadronic molecules , and are discussed in both the continuum and finite volumes . also pertinent lattice qcd results are presented . further , we discuss the production mechanisms and decays of hadronic molecules , and comment on the reliability of certain assertions often made in the literature . story_separator_special_tag this paper proposes a method of modeling and simulation of photovoltaic arrays . the main objective is to find the parameters of the nonlinear i-v equation by adjusting the curve at three points : open circuit , maximum power , and short circuit . given these three points , which are provided by all commercial array data sheets , the method finds the best i-v equation for the single-diode photovoltaic ( pv ) model including the effect of the series and parallel resistances , and guarantees that the maximum power of the model matches the maximum power of the real array . with the parameters of the adjusted i-v equation , one can build a pv circuit model with any circuit simulator by using basic math blocks . the modeling method and the proposed circuit model are useful for power electronics designers who need a simple , fast , accurate , and easy-to-use modeling method for use in simulations of pv systems . in the first pages , the reader will find a tutorial on pv devices and will understand the parameters that compose the single-diode pv model .
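the single-diode model mentioned above is the implicit equation $i = i_{pv} - i_0 [ \exp ( ( v + r_s i ) / ( a v_t ) ) - 1 ] - ( v + r_s i ) / r_p$ ; the sketch below solves it by fixed-point iteration with illustrative parameter values , not values fitted from a datasheet .

```python
# single-diode pv model: i = ipv - i0*(exp((v + rs*i)/(a*vt)) - 1) - (v + rs*i)/rp.
# parameter values are illustrative, not from a specific commercial datasheet.
import numpy as np

ipv, i0 = 8.2, 1e-9        # photocurrent and diode saturation current [a]
a, rs, rp = 1.3, 0.2, 300  # ideality factor, series and parallel resistance
ns, k, q, T = 54, 1.380649e-23, 1.602176634e-19, 298.15
vt = ns * k * T / q        # thermal voltage of the series-connected cells

def current(v, iters=200):
    """solve the implicit i-v equation by fixed-point iteration."""
    i = ipv
    for _ in range(iters):
        i = ipv - i0 * (np.exp((v + rs * i) / (a * vt)) - 1) - (v + rs * i) / rp
    return i

for vk in np.linspace(0, 38, 9):
    ik = current(vk)
    print(f"v = {vk:5.2f} v   i = {ik:6.3f} a   p = {vk * ik:7.2f} w")
```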
the modeling method is then introduced and presented in detail story_separator_special_tag recently , we have shown that the $X(3872)$ state can be naturally generated as a bound state by incorporating the hadron interactions into the godfrey-isgur quark model using a friedrichs-like model combined with the quark pair creation model , in which the wave function for the $X(3872)$ as a combination of the bare $c\bar{c}$ state and the continuum states can also be obtained . under this scheme , we now investigate the isospin-breaking effect of $X(3872)$ in its decays to $J/\psi\pi^+\pi^-$ and $J/\psi\pi^+\pi^-\pi^0$ . by coupling its dominant continuum parts to $J/\psi\rho$ and $J/\psi\omega$ through the quark rearrangement process , one could obtain the reasonable ratio of $\mathcal{B}(X(3872)\to J/\psi\pi^+\pi^-\pi^0)/\mathcal{B}(X(3872)\to J/\psi\pi^+\pi^-) \approx 0.58$ -- $0.92$ . it is also shown that the $\bar{D}D^*$ invariant mass distributions in the $B\to \bar{D}D^*K$ decays could be understood qualitatively at the same time . this scheme may provide more insight into the enigmatic nature of the $X(3872)$ state . story_separator_special_tag we present a quark model calculation of the charmonium spectrum with self-energy corrections due to the coupling to the meson-meson continuum . the bare masses used in the calculation are computed within the relativized quark model by godfrey and isgur . the strong decay widths of $3S$ , $2P$ , $1D$ , and $2D$ $c\bar{c}$ states are also calculated , to set the values of the ${}^3P_0$ pair-creation model 's parameters we use to compute the vertex functions of the loop integrals . finally , the nature of the $X(3872)$ resonance is analyzed and the main possibilities ( $c\bar{c}$ state or $D\bar{D}^*$ molecule ) are discussed . according to our results , the $X(3872)$ is compatible with the meson $\chi_{c1}(2P)$ , with $J^{PC}$ story_separator_special_tag the $X(3872)$ is the first and the most interesting one amongst the abundant xyz states . its mass coincides exactly with the $D^0\bar{D}^{*0}$ threshold with an uncertainty of 180 kev . precise knowledge of its mass is crucial to understand the $X(3872)$ . however , whether it is above or below the $D^0\bar{D}^{*0}$ threshold is still unknown . we propose a completely new method to measure the $X(3872)$ mass precisely by measuring the $X(3872)$ line shape between 4010 and 4020 mev , which is strongly sensitive to the $X(3872)$ mass relative to the $D^0\bar{D}^{*0}$ threshold due to a triangle singularity . this method can be applied to experiments which produce copious $D^{*0}\bar{D}^{*0}$ pairs , such as electron-positron , proton-antiproton , and other experiments , and may lead to much more precise knowledge of the story_separator_special_tag the possibility of the $Y(4260)$ being the molecular state of $D\bar{D}_1(2420)+\mathrm{c.c.}$ is investigated in the one boson exchange model . it turns out that the potential of the $J^{PC}=1^{--}$ state formed by $D\bar{D}_1(2420)+\mathrm{c.c.}$ is attractive and strong enough to bind them together when the momentum cutoff $\Lambda \gtrsim 1.4$ gev .
to produce the $Y(4260)$ with correct binding energy , we need $\Lambda \approx 2.25$ gev . besides , $D\bar{D}_1(2420)+\mathrm{c.c.}$ story_separator_special_tag in 1825 , the actuary benjamin gompertz read a paper , `` on the nature of the function expressive of the law of human mortality , and on a new mode of determining the value of life contingencies '' , to the royal society in which he showed that over much of the adult human lifespan , age-specific mortality rates increased in an exponential manner . gompertz 's work played an important role in shaping the emerging statistical science that underpins the pricing of life insurance and annuities . latterly , as the subject of ageing itself became the focus of scientific study , the gompertz model provided a powerful stimulus to examine the patterns of death across the life course not only in humans but also in a wide range of other organisms . the idea that the gompertz model might constitute a fundamental law of mortality has given way to the recognition that other patterns exist , not only across the species range but also in advanced old age . nevertheless , gompertz 's way of representing the function expressive of the pattern of much of adult mortality retains considerable relevance for studying the factors that influence the intrinsic biology story_separator_special_tag two $Z_b$ hadrons with exotic quark structure $\bar{b}b\bar{d}u$ were discovered by the belle experiment . we present a lattice qcd study of the $\bar{b}b\bar{d}u$ system in the approximation of static $b$ quarks , where the total spin of heavy quarks is fixed to one . the energies of eigenstates are determined as a function of the separation $r$ between $b$ and $\bar{b}$ . the lower eigenstates are related to a bottomonium and a pion . the eigenstate dominated by $B\bar{B}^*$ has energy significantly below $m_B + m_{B^*}$ , which points to a sizable attraction for small $r$ . the attractive potential $V(r)$ between $b$ and $\bar{b}$ is extracted assuming that this eigenstate is related exclusively to $B\bar{B}^*$ . the schrodinger equation for $B\bar{B}^*$ within the extracted potential leads to a virtual bound state , whose mass depends on the parametrization of the lattice potential . for certain parametrizations , we find a virtual bound state slightly below the $B\bar{B}^*$ threshold and a narrow peak in the $B\bar{B}^*$ rate above threshold -- these features could story_separator_special_tag the most recent experimental data for all measured production and decay channels of the bottomonium-like states $Z_b(10610)$ and $Z_b(10650)$ are analysed simultaneously using solutions of the lippmann-schwinger equations which respect constraints from unitarity and analyticity . the interaction potential in the open-bottom channels $B^{(*)}\bar{B}^* + \mathrm{c.c.}$ contains short-range interactions as well as one-pion exchange . it is found that the long-range interaction does not affect the line shapes as long as only $S$ waves are considered . meanwhile , the line shapes can be visibly modified once $D$ waves , mediated by the strong tensor forces from the pion exchange potentials , are included . however , in the fit they get balanced largely by a momentum dependent contact term that appears to be needed also to render the results for the line shapes independent of the cut-off .
the resulting line shapes are found to be insensitive to various higher-order interactions included to verify stability of the results . both $Z_b$ states are found to story_separator_special_tag we consider a recent t-matrix analysis by albaladejo et al . ( phys lett b 755:337 , 2016 ) , which accounts for the $J/\psi\pi$ and $D^*\bar{D}$ coupled-channels dynamics , and which successfully describes the experimental information concerning the recently discovered $Z_c(3900)^\pm$ . within such scheme , the data can be similarly well described in two different scenarios , where $Z_c(3900)$ is either a resonance or a virtual state . to shed light on the nature of this state , we apply this formalism in a finite box with the aim of comparing with recent lattice qcd ( lqcd ) simulations . we see that the energy levels obtained for both scenarios agree well with those obtained in the single-volume lqcd simulation reported in prelovsek et al . ( phys rev d 91:014504 , 2015 ) , thus making it difficult to disentangle the two possibilities . we also study the volume dependence of the energy levels obtained with story_separator_special_tag motivated by multiple phenomenological considerations , we perform the first search for the existence of a $\bar{b}b\bar{b}b$ tetraquark bound state with a mass below the lowest noninteracting bottomonium-pair threshold using the first-principles lattice nonrelativistic qcd methodology . we use a full $S$-wave color/spin basis for the $\bar{b}b\bar{b}b$ operators in the three $0^{++}$ , $1^{+-}$ and $2^{++}$ channels . we employ four gluon field ensembles at multiple lattice spacing values ranging from $a \approx 0.06$ -- $0.12$ fm , all of which include u , d , s and c quarks in the sea , and one ensemble which has physical light-quark masses . additionally , we perform novel exploratory work with the objective of highlighting any signal of a near-threshold tetraquark , if it existed , by adding an auxiliary potential into the qcd interactions . with our results we find no evidence of a qcd bound tetraquark below the lowest noninteracting thresholds in the channels studied . story_separator_special_tag we develop the spectroscopy of $c\bar{c}c\bar{c}$ and other all-heavy tetraquark states in the dynamical diquark model . in the most minimal form of the model ( e.g. , each diquark appears only in the color-triplet combination ; the non-orbital spin couplings connect only quarks within each diquark ) , the spectroscopy is extremely simple . namely , the $S$-wave multiplets contain precisely 3 degenerate states ( $0^{++}$ , $1^{+-}$ , $2^{++}$ ) and the 7 $P$-wave states satisfy an equal-spacing rule when the tensor coupling is negligible . when comparing numerically to the recent lhcb results , we find the best interpretation is assigning $X(6900)$ to the $2S$ multiplet , while a lower state suggested at about 6740 mev fits well with the members of the $1P$ multiplet . we also predict the location of other multiplets ( $1S$ , $1D$ , etc . ) and discuss the significance of the story_separator_special_tag we use lattice qcd to investigate the spectrum of the $\bar{b}\bar{b}ud$ four-quark system with quantum numbers $I(J^P) = 0(1^+)$ .
we use five different gauge-link ensembles with $2+1$ flavors of domain-wall fermions , including one at the physical pion mass , and treat the heavy $\bar{b}$ quark within the framework of lattice nonrelativistic qcd . our work improves upon previous similar computations by considering in addition to local four-quark interpolators also nonlocal two-meson interpolators and by performing a lüscher analysis to extrapolate our results to infinite volume . we obtain a binding energy of $(-128 \pm 24 \pm 10)$ mev , corresponding to the mass $(10476 \pm 24 \pm 10)$ mev , which confirms the existence of a $\bar{b}\bar{b}ud$ tetraquark that is stable with respect to the strong and electromagnetic interactions . story_separator_special_tag background : a recent cluster of pneumonia cases in wuhan , china , was caused by a novel betacoronavirus , the 2019 novel coronavirus ( 2019-ncov ) . we report the epidemiological , clinical , laboratory , and radiological characteristics and treatment and clinical outcomes of these patients . methods : all patients with suspected 2019-ncov were admitted to a designated hospital in wuhan . we prospectively collected and analysed data on patients with laboratory-confirmed 2019-ncov infection by real-time rt-pcr and next-generation sequencing . data were obtained with standardised data collection forms shared by who and the international severe acute respiratory and emerging infection consortium from electronic medical records . researchers also directly communicated with patients or their families to ascertain epidemiological and symptom data . outcomes were also compared between patients who had been admitted to the intensive care unit ( icu ) and those who had not . findings : by jan 2 , 2020 , 41 admitted hospital patients had been identified as having laboratory-confirmed 2019-ncov infection . most of the infected patients were men ( 30 [ 73 % ] of 41 ) ; less than half had underlying diseases ( 13 [ 32 % ] ) , story_separator_special_tag qcd exhibits complex dynamics near $S$-wave two-body thresholds . for light mesons , we see this in the failure of quark models to explain the $f_0(500)$ and $K_0^*(700)$ masses . for charmonium , an unexpected $X(3872)$ state appears at the open-charm threshold . in heavy-light systems , analogous threshold effects appear for the lowest $J^P = 0^+$ and $1^+$ states in the $D_s$ and $B_s$ systems . here we describe how lattice qcd can be used to understand these threshold dynamics by smoothly varying the strange-quark mass when studying the heavy-light systems . small perturbations around the physical strange quark mass are used so as to always remain near the physical qcd dynamics . this calculation is a straightforward extension of those already in the literature and can be undertaken by multiple lattice qcd collaborations with minimal computational cost .
spectral algorithms , such as principal component analysis and spectral clustering , rely on the extremal eigenpairs of a matrix $A$ . however , these may be uninformative without preprocessing $A$ wit . story_separator_special_tag clustering is a solution for classifying enormous data when there is no prior knowledge about classes . with emerging new concepts like cloud computing and big data and their vast applications in recent years , research on unsupervised solutions like clustering algorithms to extract knowledge from this avalanche of data has increased . clustering time-series data has been used in diverse scientific areas to discover patterns which empower data analysts to extract valuable information from complex and massive datasets . in case of huge datasets , using supervised classification solutions is almost impossible , while clustering can solve this problem using unsupervised approaches . in this research work , the focus is on time-series data , which is one of the popular data types in clustering problems and is broadly used from gene expression data in biology to stock market analysis in finance . this review exposes the four main components of time-series clustering and aims to present an updated investigation of the trend of improvements in efficiency , quality and complexity of clustering time-series approaches during the last decade , and to illuminate new paths for future work . anatomy of time-series clustering is revealed by introducing story_separator_special_tag sas text miner uses the vector space model for representing text . in this framework , distinct terms in the collection correspond to variables and documents represent observations . for most collections , the number of variables needed to represent each document is well above what can easily be modeled . as a result , dimension reduction becomes a crucial aspect of text mining solutions . in this paper , we explore the purpose and role of the singular value decomposition ( svd ) as a dimension reduction tool for text mining . we explain the mathematical foundation from which it is derived , provide an intuitive explanation of how it works , and provide guidance for using it in text mining applications . for those familiar with principal components analysis ( pca ) , we include a discussion of the relationship of pca to the svd . story_separator_special_tag we present an efficient method for computing a-optimal experimental designs for infinite-dimensional bayesian linear inverse problems governed by partial differential equations ( pdes ) . specifically , we address the problem of optimizing the location of sensors ( at which observational data are collected ) to minimize the uncertainty in the parameters estimated by solving the inverse problem , where the uncertainty is expressed by the trace of the posterior covariance . computing optimal experimental designs ( oeds ) is particularly challenging for inverse problems governed by computationally expensive pde models with infinite-dimensional ( or , after discretization , high-dimensional ) parameters . to alleviate the computational cost , we exploit the problem structure and build a low-rank approximation of the parameter-to-observable map , preconditioned with the square root of the prior covariance operator . this relieves our method from expensive pde solves when evaluating the optimal experimental design objective function and its derivatives .
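the svd-based dimension reduction for text described above can be sketched in a few lines ; the corpus and the choice of two components are illustrative .

```python
# dimension reduction of a term-document representation with a truncated svd
# (latent semantic analysis). the tiny corpus and k=2 are illustrative.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.metrics.pairwise import cosine_similarity

docs = [
    "the graph laplacian encodes network structure",
    "spectral methods use the graph laplacian",
    "stocks and bonds moved on market news",
    "the market reacted to economic news",
]
X = TfidfVectorizer().fit_transform(docs)          # documents x terms
Z = TruncatedSVD(n_components=2, random_state=0).fit_transform(X)
print(cosine_similarity(Z).round(2))               # block structure: {0,1} vs {2,3}
```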
moreover , we employ a randomized trace estimator for efficient evaluation of the oed objective function . we control the sparsity of the sensor configuration by employing a sequence of penalty functions that successively approximate the $\ell_0$-`` norm '' ; this story_separator_special_tag the graph laplacian is a standard tool in data science , machine learning , and image processing . the corresponding matrix inherits the complex structure of the underlying network and is in certain applications densely populated . this makes computations , in particular matrix-vector products , with the graph laplacian a hard task . a typical application is the computation of a number of its eigenvalues and eigenvectors . standard methods become infeasible as the number of nodes in the graph is too large . we propose the use of the fast summation based on the nonequispaced fast fourier transform ( nfft ) to perform the dense matrix-vector product with the graph laplacian fast without ever forming the whole matrix . the enormous flexibility of the nfft algorithm allows us to embed the accelerated multiplication into lanczos-based eigenvalue routines or iterative linear system solvers and even to consider kernels other than the standard gaussian . we illustrate the feasibility of our approach on a number of test problems from image segmentation to semi-supervised learning based on graph-based pdes . in particular , we compare our approach with the nyström method . moreover , we present and test an enhanced , story_separator_special_tag graph convolutional networks ( gcns ) have proven to be successful tools for semi-supervised learning on graph-based datasets . for sparse graphs , linear and polynomial filter functions have yielded impressive results . for large non-sparse graphs , however , network training and evaluation becomes prohibitively expensive . by introducing low-rank filters , we gain significant runtime acceleration and simultaneously improved accuracy . we further propose an architecture change mimicking techniques from model order reduction in what we call a reduced-order gcn . moreover , we present how our method can also be applied to hypergraph datasets and how hypergraph convolution can be implemented efficiently . story_separator_special_tag a microscopic diffusional theory for the motion of a curved antiphase boundary is presented . the interfacial velocity is found to be linearly proportional to the mean curvature of the boundary , but unlike earlier theories the constant of proportionality does not include the specific surface free energy , yet the diffusional dissipation of free energy is shown to be equal to the reduction in total boundary free energy . the theory is incorporated into a model for antiphase domain coarsening . experimental measurements of domain coarsening kinetics in fe-al alloys were made over a temperature range where the specific surface free energy was varied by more than two orders of magnitude . the results are consistent with the theory ; in particular , the domain coarsening kinetics do not have the temperature dependence of the specific surface free energy . story_separator_special_tag the goal of the lapack project is to design and implement a portable linear algebra library for efficient use on a variety of high-performance computers .
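several of the abstracts above revolve around fast computations with the graph laplacian ; the sketch below builds $L = D - W$ from a gaussian kernel on a toy point cloud and reads off cluster structure from the fiedler vector ( the kernel width and data are illustrative ) .

```python
# build the graph laplacian l = d - w from a gaussian kernel on a toy point
# cloud and read off cluster structure from the fiedler vector. kernel width
# and data are illustrative. small and dense here, so an exact eigensolve is
# fine; at scale one would use a lanczos-based iterative solver driven by fast
# matrix-vector products with l (the setting of the abstracts above).
import numpy as np

rng = np.random.default_rng(0)
pts = np.vstack([rng.normal(0, 0.3, (30, 2)), rng.normal(3, 0.3, (30, 2))])

d2 = ((pts[:, None, :] - pts[None, :, :]) ** 2).sum(-1)   # pairwise squared distances
W = np.exp(-d2 / (2 * 0.5 ** 2))                          # gaussian kernel weights
np.fill_diagonal(W, 0.0)
L = np.diag(W.sum(axis=1)) - W

vals, vecs = np.linalg.eigh(L)
print("smallest eigenvalues:", np.round(vals[:3], 4))
labels = (vecs[:, 1] > 0).astype(int)                     # fiedler vector splits the blobs
print("cluster sizes:", np.bincount(labels))
```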
the library is based on the widely used linpack and eispack packages for solving linear equations , eigenvalue problems , and linear least-squares problems , but extends their functionality in a number of ways . the major methodology for making the algorithms run faster is to restructure them to perform block matrix operations ( e.g. , matrix-matrix multiplication ) in their inner loops . these block operations may be optimized to exploit the memory hierarchy of a specific architecture . the lapack project is also working on new algorithms that yield higher relative accuracy for a variety of linear algebra problems . story_separator_special_tag an interpretation of dr. cornelius lanczos ' iteration method , which he has named `` minimized iterations '' , is discussed in this article , expounding the method as applied to the solution of the characteristic matrix equations both in homogeneous and nonhomogeneous form . this interpretation leads to a variation of the lanczos procedure which may frequently be advantageous by virtue of reducing the volume of numerical work in practical applications . both methods employ essentially the same algorithm , requiring the generation of a series of orthogonal functions through which a simple matrix equation of reduced order is established . the reduced matrix equation may be solved directly in terms of certain polynomial functions obtained in conjunction with the generated orthogonal functions , and the convergence of the solution may be observed as the order of the reduced matrix is successively increased with the order of the original matrix as a limit . the method of minimized iterations is recommended as a rapid means for determining a small number of the larger eigenvalues and modal columns of a large matrix and as a desirable alternative for various series expansions of the fredholm problem . 1. the conventional iterative story_separator_special_tag we give a randomized algorithm in deterministic time $O(n \log m)$ for estimating the score vector of matches between a text string of length $n$ and a pattern string of length $m$ , i.e . , the vector obtained when the pattern is slid along the text , and the number of matches is counted for each position . a direct application is approximate string matching . the randomized algorithm uses convolution to find an estimator of the scores ; the variance of the estimator is particularly small for scores that are close to $m$ , i.e . , for approximate occurrences of the pattern in the text . no assumption is made about the probabilistic characteristics of the input , or about the size of the alphabet . the solution extends to string matching with classes , class complements , `` never match '' and `` always match '' symbols , to the weighted case and to higher dimensions . story_separator_special_tag triangle counting is an important problem in graph mining with several real-world applications . interesting metrics , such as the clustering coefficient and the transitivity ratio , involve computing the number of triangles . furthermore , several interesting graph mining applications rely on computing the number of triangles in a large-scale graph . however , exact triangle counting is expensive and memory consuming , and current approximation algorithms are unsatisfactory and not practical for very large-scale graphs . in this paper we present a new highly-parallel randomized algorithm for approximating the number of triangles in an undirected graph .
our algorithm uses a well-known relation between the number of triangles and the trace of the cubed adjacency matrix . a monte-carlo simulation is used to estimate this quantity . each sample requires $O(|E|)$ time and $O(\epsilon^{-2} \log(1/\delta) \, \rho(G)^2)$ samples are required to guarantee an $(\epsilon , \delta)$-approximation , where $\rho(G)$ is a measure of the triangle sparsity of $G$ ( $\rho(G)$ is not necessarily small ) . our algorithm requires only $O(|V|)$ space in order to work efficiently . story_separator_special_tag we analyze the convergence of randomized trace estimators . starting in 1989 , several algorithms have been proposed for estimating the trace of a matrix by $\frac{1}{m} \sum_{i=1}^{m} z_i^T A z_i$ , where the $z_i$ are random vectors ; different estimators use different distributions for the $z_i$ , all of which lead to $E ( \frac{1}{m} \sum_{i=1}^{m} z_i^T A z_i ) = \mathrm{trace}(A)$ . these algorithms are useful in applications in which there is no explicit representation of $A$ but rather an efficient method to compute $z^T A z$ given $z$ . existing results only analyze the variance of the different estimators . in contrast , we analyze the number of samples $m$ required to guarantee that with probability at least $1 - \delta$ , the relative error in the estimate is at most $\epsilon$ . we argue that such bounds are much more useful in applications than the variance . we found that these bounds rank the estimators differently than the variance ; this suggests that minimum-variance estimators may not be the best . we also make two additional contributions to this area . the first is a specialized bound for projection matrices , whose trace ( rank ) needs to be computed in electronic structure calculations . the second story_separator_special_tag new restarted lanczos bidiagonalization methods for the computation of a few of the largest or smallest singular values of a large matrix are presented . restarting is carried out by augmentation of krylov subspaces that arise naturally in the standard lanczos bidiagonalization method . the augmenting vectors are associated with certain ritz or harmonic ritz vectors . computed examples show the new methods to be competitive with available schemes . story_separator_special_tag in this paper we present an algorithm for computing a partial sum of eigenvalues of a large symmetric positive definite matrix pair . we show that this computational task is intimately connected to computing a bilinear form $u^T f(A) u$ for a properly defined matrix $A$ , a vector $u$ and a function $f$ . compared to existing techniques which compute individual eigenvalues and then sum them up , the new algorithm is generally less accurate but requires significantly less memory and cpu time . in the application of electronic structure calculations in molecular dynamics , the new algorithm has achieved a speedup factor of [ ... ] for small size problems to [ ... ] for large size problems . relative accuracy within [ ... ] to [ ... ] is satisfactory . previously intractable large size problems have been solved story_separator_special_tag the candecomp/parafac ( cp ) decomposition is a leading method for the analysis of multiway data . the standard alternating least squares algorithm for the cp decomposition ( cp-als ) involves a series . story_separator_special_tag approximations of expressions of the form $I_f := \mathrm{trace}(W^T f(A) W)$ , where $A \in \mathbb{R}^{m \times m}$ is a large symmetric matrix , $W \in \mathbb{R}^{m \times k}$ with $k \ll m$ , and $f$ is a function , can be computed without evaluating $f(A)$ by applying a few steps of the global block lanczos method to $A$ with initial block-vector $W$ . this yields a partial global lanczos decomposition of $A$ .
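the two ingredients above -- the identity that the number of triangles equals $\mathrm{trace}(A^3)/6$ and the randomized estimator $\frac{1}{m} \sum_i z_i^T A z_i$ -- combine into a short sketch ; the graph , probe distribution ( rademacher ) , and sample count are illustrative .

```python
# hutchinson-style trace estimation applied to triangle counting via
# trace(a^3)/6. rademacher probes; the random graph and sample count m are
# illustrative, and the estimate carries monte-carlo error.
import numpy as np

rng = np.random.default_rng(0)
n, p = 200, 0.05
A = (rng.random((n, n)) < p).astype(float)
A = np.triu(A, 1)
A = A + A.T                                       # random undirected graph

exact = np.trace(A @ A @ A) / 6.0

m, acc = 2000, 0.0
for _ in range(m):
    z = rng.choice([-1.0, 1.0], size=n)           # rademacher probe vector
    acc += z @ (A @ (A @ (A @ z)))                # z^T a^3 z without forming a^3
est = acc / (6.0 * m)

print(f"exact triangles: {exact:.0f}, randomized estimate: {est:.0f}")
```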
we show that for suitable functions $f$ upper and lower bounds for $I_f$ can be determined by exploiting the connection between the global block lanczos method and gauss-type quadrature rules . our approach generalizes techniques advocated by golub and meurant for the standard lanczos method ( with block size one ) to the global block lanczos method . we describe applications to the computation of upper and lower bounds of the trace of $f(A)$ and consider , in particular , the computation of upper and lower bounds for the estrada index , which arises in network analysis . we also discuss an application to machine learning . story_separator_special_tag united states . air force office of scientific research ( computational mathematics grant fa9550-12-1-0420 ) story_separator_special_tag networks are a fundamental tool for understanding and modeling complex systems in physics , biology , neuroscience , engineering , and social science . many networks are known to exhibit rich , lower-order connectivity patterns that can be captured at the level of individual nodes and edges . however , higher-order organization of complex networks at the level of small network subgraphs remains largely unknown . here , we develop a generalized framework for clustering networks on the basis of higher-order connectivity patterns . this framework provides mathematical guarantees on the optimality of obtained clusters and scales to networks with billions of edges . the framework reveals higher-order organization in a number of networks , including information propagation units in neuronal networks and hub structure in transportation networks . results show that networks exhibit rich higher-order organizational structures that are exposed by clustering based on higher-order connectivity patterns . story_separator_special_tag bounds for entries of matrix functions based on gauss-type quadrature rules are applied to adjacency matrices associated with graphs . this technique allows to develop inexpensive and accurate upper and lower bounds for certain quantities ( estrada index , subgraph centrality , communicability ) that describe properties of networks . story_separator_special_tag the development and use of low-rank approximate nonnegative matrix factorization ( nmf ) algorithms for feature extraction and identification in the fields of text mining and spectral data analysis are presented . the evolution and convergence properties of hybrid methods based on both sparsity and smoothness constraints for the resulting nonnegative matrix factors are discussed . the interpretability of nmf outputs in specific contexts is discussed , along with opportunities for future work in the modification of nmf algorithms for large-scale and time-varying data sets . story_separator_special_tag image inpainting is the filling in of missing or damaged regions of images using information from surrounding areas . we outline here the use of a model for binary inpainting based on the cahn-hilliard equation , which allows for fast , efficient inpainting of degraded text , as well as super-resolution of high contrast images story_separator_special_tag this paper is a republication of an mms paper [ a. l. bertozzi and a. flenner , multiscale model . simul. , 10 ( 2012 ) , pp . 1090 -- 1118 ] describing a new class of algorithms for classification of high dimensional data . these methods combine ideas from spectral methods on graphs with nonlinear edge/region detection methods traditionally used in the pde-based imaging community .
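a minimal low-rank nmf in the sense surveyed above , using the classical multiplicative updates for the frobenius objective $\| X - W H \|_F$ ; the rank and iteration count are illustrative .

```python
# low-rank nmf via the classical lee-seung multiplicative updates minimizing
# ||x - wh||_f. the data, rank r, and iteration count are illustrative.
import numpy as np

rng = np.random.default_rng(0)
X = rng.random((50, 40))          # nonnegative data matrix
r = 5                             # target rank
W, H = rng.random((50, r)), rng.random((r, 40))

eps = 1e-10                       # guards against division by zero
for _ in range(200):
    H *= (W.T @ X) / (W.T @ W @ H + eps)
    W *= (X @ H.T) / (W @ H @ H.T + eps)

err = np.linalg.norm(X - W @ H, "fro") / np.linalg.norm(X, "fro")
print(f"relative reconstruction error: {err:.3f}")
```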
the algorithms use the ginzburg -- landau functional , extended to a graphical framework , which has classical pde connections to total variation minimization . convex splitting algorithms allow us to quickly find minimizers of the proposed model and take advantage of fast spectral solvers of linear graph-theoretic problems . we review the diverse computational examples presented in the original paper , involving both basic clustering and semisupervised learning for different applications . case studies include feature identification in images , segmentation in social networks , and segmentation of shapes in high dimensional datasets . since the pape . story_separator_special_tag this article explained how nodes in a network graph can infer information about the network topology or its topology related properties , based on in-network distributed learning , i.e. , without relying on an external observer who has a complete overview over the network . some key concepts from the field of sgt were reviewed , with a focus on those that allow for a simple distributed implementation , i.e. , eigenvector or katz centrality , algebraic connectivity , and the fiedler vector . this paper also explained how the nodes themselves can quantify their individual network-wide influence , as well as identify densely connected node clusters and the sparse bridge links between them . the addressed concepts , as well as more advanced concepts from the field of sgt , are believed to be crucial catalysts in the design of topology-aware distributed algorithms . examples were provided on how these techniques can be exploited in several nontrivial distributed signal processing tasks . story_separator_special_tag data mining is the science of finding unexpected , valuable , or interesting structures in large data sets . it is an interdisciplinary activity , taking ideas and methods from statistics , machine lear . story_separator_special_tag random projections have recently emerged as a powerful method for dimensionality reduction . theoretical results indicate that the method preserves distances quite nicely ; however , empirical results are sparse . we present experimental results on using random projection as a dimensionality reduction tool in a number of cases , where the high dimensionality of the data would otherwise lead to burden-some computations . our application areas are the processing of both noisy and noiseless images , and information retrieval in text documents . we show that projecting the data onto a random lower-dimensional subspace yields results comparable to conventional dimensionality reduction methods such as principal component analysis : the similarity of data vectors is preserved well under random projection . however , using random projections is computationally significantly less expensive than using , e.g. , principal component analysis . we also show experimentally that using a sparse random matrix gives additional computational savings in random projection . story_separator_special_tag the partial least squares ( pls ) method computes a sequence of approximate solutions $ x_k \\in { \\cal k } _k ( a^ta , a^tb ) $ , $ k = 1,2 , \\ldots\\ , $ , to the least squares problem $ \\min_x\\|ax - b\\|_2 $ . if carried out to completion , the method always terminates with the pseudoinverse solution $ x^\\dagger = a^\\dagger b $ . two direct pls algorithms are analyzed . the first uses the golub -- kahan householder algorithm for reducing $ a $ to upper bidiagonal form . 
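a minimal sketch of the random-projection dimensionality reduction that the abstract above compares against principal component analysis ; the gaussian projection matrix and the $ 1/\\sqrt{k} $ scaling are one standard choice ( the paper also considers sparse random matrices as a cheaper alternative ) .

import numpy as np

def random_projection(X, k, rng=None):
    # project n points in d dimensions (rows of X) onto a random
    # k-dimensional subspace; by johnson--lindenstrauss-type results,
    # pairwise distances are approximately preserved, at o(n*d*k) cost
    # versus the o(n*d^2) of an exact pca
    rng = np.random.default_rng(rng)
    R = rng.standard_normal((X.shape[1], k)) / np.sqrt(k)
    return X @ R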
the second is the nipals pls algorithm , due to wold et al. , which is based on rank-reducing orthogonal projections . the householder algorithm is known to be mixed forward-backward stable . numerical results are given that support the conjecture that the nipals pls algorithm shares this stability property . we draw attention to a flaw in some descriptions and implementations of this algorithm , related to a similar problem in gram -- schmidt orthogonalization , that spoils its otherwise excellent stability . for large-scale sparse or structured problems , the iterative algorithm . story_separator_special_tag in the past years , network theory has successfully characterized the interaction among the constituents of a variety of complex systems , ranging from biological to technological , and social systems . however , up until recently , attention was almost exclusively given to networks in which all components were treated on equivalent footing , while neglecting all the extra information about the temporal- or context-related properties of the interactions under study . only in the last years , taking advantage of the enhanced resolution in real data sets , network scientists have directed their interest to the multiplex character of real-world systems , and explicitly considered the time-varying and multilayer nature of networks . we offer here a comprehensive review on both structural and dynamical organization of graphs made of diverse relationships ( layers ) between its constituents , and cover several relevant issues , from a full redefinition of the basic structural measures , to understanding how the multilayer nature of the network affects processes and dynamics . story_separator_special_tag in an influential paper , freeman ( 1979 ) identified three aspects of centrality : betweenness , nearness , and degree . perhaps because they are designed to apply to networks in which relations are binary valued ( they exist or they do not ) , these types of centrality have not been used in interlocking directorate research , which has almost exclusively used formula ( 2 ) below to compute centrality . conceptually , this measure , of which $ c ( \\alpha , \\beta ) $ is a generalization , is closest to being a nearness measure when $ \\beta $ is positive . in any case , there is no discrepancy between the measures for the four networks whose analysis forms the heart of this paper . the rank orderings by the story_separator_special_tag we explore methods for approximating the commute time and katz score between a pair of nodes . these methods are based on the approach of matrices , moments , and quadrature developed in the numerical linear algebra community . they rely on the lanczos process and provide upper and lower bounds on an estimate of the pairwise scores . we also explore methods to approximate the commute times and katz scores from a node to all other nodes in the graph . here , our approach for the commute times is based on a variation of the conjugate gradient algorithm , and it provides an estimate of all the diagonals of the inverse of a matrix . our technique for the katz scores is based on exploiting an empirical localization property of the katz matrix . we adapt algorithms used for personalized pagerank computing to these katz scores and theoretically show that this approach is convergent . we evaluate these methods on 17 real-world graphs ranging in size from 1000 to 1,000,000 nodes . our results show that our pairwise c .
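for reference , the all-pairs katz matrix discussed above can be written in closed form as $ k = ( i - \\alpha a ) ^{-1} - i $ ; the dense-solve sketch below is only this baseline definition , not the cited paper's lanczos/quadrature bounds or its localized personalized-pagerank-style iterations .

import numpy as np

def katz_scores(A, alpha=0.05):
    # all-pairs katz scores K = (I - alpha*A)^{-1} - I; the series
    # sum_{k>=1} alpha^k A^k converges only when alpha < 1/lambda_max(A),
    # so alpha must be chosen accordingly
    n = A.shape[0]
    return np.linalg.solve(np.eye(n) - alpha * A, np.eye(n)) - np.eye(n)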
story_separator_special_tag diffuse interface methods have recently been introduced for the task of semisupervised learning . the underlying model is well known in materials science but was extended to graphs using a ginzburg -- landau functional and the graph laplacian . we here generalize the previously proposed model by a nonsmooth potential function . additionally , we show that the diffuse interface method can be used for the segmentation of data coming from hypergraphs . for this we show that the graph laplacian in almost all cases is derived from hypergraph information . additionally , we show that the formerly introduced hypergraph laplacian coming from a relaxed optimization problem is well suited to be used within the diffuse interface method . we present computational experiments for graph and hypergraph laplacians . story_separator_special_tag we present an efficient block-diagonal approximation to the gauss-newton matrix for feedforward neural networks . our resulting algorithm is competitive against state-of-the-art first order optimisation methods , with sometimes significant improvement in optimisation performance . unlike first-order methods , for which hyperparameter tuning of the optimisation parameters is often a laborious process , our approach can provide good performance even when used with default settings . a side result of our work is that for piecewise linear transfer functions , the network objective function can have no differentiable local maxima , which may partially explain why such transfer functions facilitate effective optimisation . story_separator_special_tag during the last decade , the data sizes have grown faster than the speed of processors . in this context , the capabilities of statistical machine learning methods are limited by the computing time rather than the sample size . a more precise analysis uncovers qualitatively different tradeoffs for the case of small-scale and large-scale learning problems . the large-scale case involves the computational complexity of the underlying optimization algorithm in non-trivial ways . unlikely optimization algorithms such as stochastic gradient descent show amazing performance for large-scale problems . in particular , second order stochastic gradient and averaged stochastic gradient are asymptotically efficient after a single pass on the training set . story_separator_special_tag we consider the problem of selecting the `` best '' subset of exactly $ k $ columns from an $ m \\times n $ matrix $ a $ . in particular , we present and analyze a novel two-stage algorithm that runs in $ o ( \\min \\{ mn^2 , m^2n \\} ) $ time and returns as output an $ m \\times k $ matrix $ c $ consisting of exactly $ k $ columns of $ a $ . in the first stage ( the randomized stage ) , the algorithm randomly selects $ o ( k \\log k ) $ columns according to a judiciously-chosen probability distribution that depends on information in the top-$ k $ right singular subspace of $ a $ . in the second stage ( the deterministic stage ) , the algorithm applies a deterministic column-selection procedure to select and return exactly $ k $ columns from the set of columns selected in the first stage . let $ c $ be the $ m \\times k $ matrix containing those $ k $ columns , let $ p_c $ denote the projection matrix onto the span of those columns , and let $ a_k $ denote the `` best '' rank-$ k $ approximation to the matrix $ a $ as computed with the singular value decomposition . then , we prove that [ equation ] with probability at least 0.7.
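a sketch of the randomized stage of the two-stage column-selection algorithm above : sample columns with probabilities built from the top-$ k $ right-singular-subspace leverage scores . the deterministic second stage , the exact $ o ( k \\log k ) $ sample count , and sampling with replacement are all simplified away here .

import numpy as np

def leverage_score_column_sample(A, k, c, rng=None):
    # leverage scores of the columns of A with respect to the top-k
    # right singular subspace; they sum to k, so normalizing gives a
    # probability distribution over columns
    rng = np.random.default_rng(rng)
    _, _, Vt = np.linalg.svd(A, full_matrices=False)
    lev = np.sum(Vt[:k, :] ** 2, axis=0)
    p = lev / lev.sum()
    idx = rng.choice(A.shape[1], size=c, replace=False, p=p)
    return A[:, idx], idx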
this spectral story_separator_special_tag the cur decomposition of an $ m \\times n $ matrix $ a $ finds an $ m \\times c $ matrix $ c $ with a subset of $ c < n $ columns of $ a , $ together with an $ r \\times n $ matrix $ r $ with a subset of $ r < m $ rows of $ a , $ as well as a $ c \\times r $ low-rank matrix $ u $ such that the matrix $ c u r $ approximates the matrix $ a , $ that is , $ || a - cur ||_f^2 \\le ( 1+\\epsilon ) || a - a_k||_f^2 $ , where $ ||.||_f $ denotes the frobenius norm and $ a_k $ is the best $ m \\times n $ matrix of rank $ k $ constructed via the svd . we present input-sparsity-time and deterministic algorithms for constructing such a cur decomposition where $ c=o ( k/\\epsilon ) $ and $ r=o ( k/\\epsilon ) $ and rank $ ( u ) = k $ . up to constant factors , our algorithms are simultaneously optimal in $ c , r , $ story_separator_special_tag randomized algorithms for very large matrix problems have received a great deal of attention in recent years . much of this work was motivated by problems in large-scale data analysis , and this work was performed by individuals from many different research communities . this monograph will provide a detailed overview of recent work on the theory of randomized matrix algorithms as well as the application of those ideas to the solution of practical problems in large-scale data analysis . an emphasis will be placed on a few simple core ideas that underlie not only recent theoretical advances but also the usefulness of these tools in large-scale data applications . crucial in this context is the connection with the concept of statistical leverage . this concept has long been used in statistical regression diagnostics to identify outliers ; and it has recently proved crucial in the development of improved worst-case matrix algorithms that are also amenable to high-quality numerical implementation and that are useful to domain scientists . randomized methods solve problems such as the linear least-squares problem and the low-rank matrix approximation problem by constructing and operating on a randomized sketch of the input matrix . depending on the story_separator_special_tag we show how to turn any classifier that classifies well under gaussian noise into a new classifier that is certifiably robust to adversarial perturbations under the $ \\ell_2 $ norm . this `` randomized smoothing '' technique has been proposed recently in the literature , but existing guarantees are loose . we prove a tight robustness guarantee in $ \\ell_2 $ norm for smoothing with gaussian noise . we use randomized smoothing to obtain an imagenet classifier with e.g . a certified top-1 accuracy of 49 % under adversarial perturbations with $ \\ell_2 $ norm less than 0.5 ( =127/255 ) . no certified defense has been shown feasible on imagenet except for smoothing . on smaller-scale datasets where competing approaches to certified $ \\ell_2 $ robustness are viable , smoothing delivers higher certified accuracies . our strong empirical results suggest that randomized smoothing is a promising direction for future research into adversarially robust classification . code and models are available at this http url . story_separator_special_tag we study data-driven methods for community detection in graphs . this estimation problem is typically formulated in terms of the spectrum of certain operators , as well as via posterior inference under certain probabilistic graphical models . 
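a monte-carlo sketch of the randomized-smoothing prediction rule described above , $ g ( x ) = \\arg\\max_c \\mathrm{pr} ( f ( x+\\varepsilon ) = c ) $ with $ \\varepsilon \\sim n ( 0 , \\sigma^2 i ) $ . the paper's certified radius ( roughly $ \\sigma \\phi^{-1} ( p_a ) $ for the top-class probability $ p_a $ ) additionally requires confidence bounds and an abstention rule , which this sketch omits .

import numpy as np

def smoothed_predict(base_classifier, x, sigma=0.25, n=1000, rng=None):
    # estimate the smoothed classifier's prediction by majority vote of
    # the base classifier over n gaussian perturbations of the input x
    rng = np.random.default_rng(rng)
    counts = {}
    for _ in range(n):
        label = base_classifier(x + sigma * rng.standard_normal(x.shape))
        counts[label] = counts.get(label, 0) + 1
    return max(counts, key=counts.get)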
focusing on random graph families such as the stochastic block model , recent research has unified these two approaches , and identified both statistical and computational signal-to-noise detection thresholds . we embed the resulting class of algorithms within a generic family of graph neural networks and show that they can reach those detection thresholds in a purely data-driven manner , without access to the underlying generative models and with no parameter assumptions . the resulting model is also tested on real datasets , requiring less computational steps and performing significantly better than rigid parametric models . story_separator_special_tag abstract : convolutional neural networks are extremely efficient architectures in image and audio recognition tasks , thanks to their ability to exploit the local translational invariance of signal classes over their domain . in this paper we consider possible generalizations of cnns to signals defined on more general domains without the action of a translation group . in particular , we propose two constructions , one based upon a hierarchical clustering of the domain , and another based on the spectrum of the graph laplacian . we show through experiments that for low-dimensional graphs it is possible to learn convolutional layers with a number of parameters independent of the input size , resulting in efficient deep architectures . story_separator_special_tag acoustic-based music recommender systems have received increasing interest in recent years . due to the semantic gap between low level acoustic features and high level music concepts , many researchers have explored collaborative filtering techniques in music recommender systems . traditional collaborative filtering music recommendation methods only focus on user rating information . however , there are various kinds of social media information , including different types of objects and relations among these objects , in music social communities such as last.fm and pandora . this information is valuable for music recommendation . however , there are two challenges to exploit this rich social media information : ( a ) there are many different types of objects and relations in music social communities , which makes it difficult to develop a unified framework taking into account all objects and relations . ( b ) in these communities , some relations are much more sophisticated than pairwise relation , and thus can not be simply modeled by a graph . in this paper , we propose a novel music recommendation algorithm by using both multiple kinds of social media information and music acoustic-based content . instead of graph , we use story_separator_special_tag due to their low losses , dielectric metamaterials provide an ideal resolution to construct ultra-narrowband absorbers . to improve the sensing performance , we present numerically a near-infrared ultra-narrowband absorber by putting ultra-sparse dielectric nanowire grids on metal substrate in this paper . the simulation results show that the absorber has an absorption rate larger than 0.99 with full width at half-maximum ( fwhm ) of 0.38 nm . the simulation field distribution also indicates that the ultra-narrowband absorption is originated from the low loss in the guided-mode resonance . 
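the spectral construction for graph cnns mentioned above filters a graph signal in the laplacian eigenbasis , $ y = u \\, g ( \\lambda ) \\, u^t x $ ; a dense sketch follows ( in the learned setting $ g $ would be parametrized , whereas here it is just a user-supplied function of the eigenvalues ) .

import numpy as np

def spectral_graph_filter(A, x, g):
    # filter the graph signal x through g applied to the spectrum of the
    # combinatorial laplacian L = D - A: y = U g(eigvals) U^T x
    d = A.sum(axis=1)
    L = np.diag(d) - A
    lam, U = np.linalg.eigh(L)
    return U @ (g(lam) * (U.T @ x))

for example , spectral_graph_filter(A, x, lambda lam: np.exp(-lam)) performs one step of heat-kernel smoothing .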
thanks to the ultra-narrow absorption bandwidths and the electric field mainly distributed out of the ultra-sparse dielectric nanowire grids , our absorber has a high sensitivity $ s $ of 1052 nm/riu and a large figure of merit ( fom ) of 2768 , which means that this ultra-narrowband absorber can be applied as a high-performance refractive index sensor . story_separator_special_tag fractional differential equations are becoming increasingly used as a modelling tool for processes associated with anomalous diffusion or spatial heterogeneity . however , the presence of a fractional differential operator causes memory ( time fractional ) or nonlocality ( space fractional ) issues that impose a number of computational constraints . in this paper we develop efficient , scalable techniques for solving fractional-in-space reaction diffusion equations using the finite element method on both structured and unstructured grids via robust techniques for computing the fractional power of a matrix times a vector . our approach is showcased by solving the fractional fisher and fractional allen -- cahn reaction-diffusion equations in two and three spatial dimensions , and analyzing the speed of the traveling wave and size of the interface in terms of the fractional power of the underlying laplacian operator . story_separator_special_tag it is shown that the free energy of a volume $ v $ of an isotropic system of nonuniform composition or density is given by $ n_v \\int_v [ f_0 ( c ) + \\kappa ( \\nabla c ) ^2 ] \\, dv $ , where $ n_v $ is the number of molecules per unit volume , $ \\nabla c $ the composition or density gradient , $ f_0 $ the free energy per molecule of a homogeneous system , and $ \\kappa $ a parameter which , in general , may be dependent on $ c $ and temperature , but for a regular solution is a constant which can be evaluated . this expression is used to determine the properties of a flat interface between two coexisting phases . in particular , we find that the thickness of the interface increases with increasing temperature and becomes infinite at the critical temperature $ t_c $ , and that at a temperature $ t $ just below $ t_c $ the interfacial free energy is proportional to $ ( t_c - t ) ^{3/2} $ . the predicted interfacial free energy and its temperature dependence are found to be in agreement with existing experimental data . the possibility of using optical measurements of the interface thickness to provide an additional check of story_separator_special_tag matrix factorization techniques have been frequently applied in information retrieval , computer vision , and pattern recognition . among them , nonnegative matrix factorization ( nmf ) has received considerable attention due to its psychological and physiological interpretation of naturally occurring data whose representation may be parts based in the human brain . on the other hand , from the geometric perspective , the data is usually sampled from a low-dimensional manifold embedded in a high-dimensional ambient space . one then hopes to find a compact representation , which uncovers the hidden semantics and simultaneously respects the intrinsic geometric structure . in this paper , we propose a novel algorithm , called graph regularized nonnegative matrix factorization ( gnmf ) , for this purpose . in gnmf , an affinity graph is constructed to encode the geometrical information and we seek a matrix factorization , which respects the graph structure . our empirical study shows encouraging results of the proposed algorithm in comparison to the state-of-the-art algorithms on real-world problems .
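a sketch of multiplicative updates for the graph-regularized nmf objective just described , $ || x - w h ||_f^2 + \\lambda \\, \\mathrm{tr} ( h l h^t ) $ with $ l = d - s $ built from an affinity matrix $ s $ over the columns of $ x $ . initialization , convergence checks , and the paper's exact normalization conventions are simplified assumptions .

import numpy as np

def gnmf(X, S, k, lam=1.0, n_iter=200, eps=1e-9, rng=None):
    # X: m x n nonnegative data (columns are points), S: n x n symmetric
    # nonnegative affinity matrix over the points, k: target rank
    rng = np.random.default_rng(rng)
    m, n = X.shape
    W = rng.random((m, k)) + eps
    H = rng.random((k, n)) + eps
    D = np.diag(S.sum(axis=1))          # degree matrix of the affinity graph
    for _ in range(n_iter):
        # standard lee--seung update for W, graph-regularized update for H:
        # the laplacian penalty splits as H@S (numerator) and H@D (denominator)
        W *= (X @ H.T) / (W @ (H @ H.T) + eps)
        H *= (W.T @ X + lam * (H @ S)) / (W.T @ W @ H + lam * (H @ D) + eps)
    return W, H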
story_separator_special_tag generative adversarial network ( gan ) and its variants exhibit state-of-the-art performance in the class of generative models . to capture higher-dimensional distributions , the common learning procedure requires high computational complexity and a large number of parameters . the problem of employing such a massive framework arises when deploying it on a platform with limited computational power such as mobile phones . in this paper , we present a new generative adversarial framework by representing each layer as a tensor structure connected by multilinear operations , aiming to reduce the number of model parameters by a large factor while preserving the generative performance and sample quality . to learn the model , we employ an efficient algorithm which alternatively optimizes both discriminator and generator . experimental outcomes demonstrate that our model can achieve a high compression rate for model parameters , up to 35 times when compared to the original gan for the mnist dataset . story_separator_special_tag recursive spectral bisection ( rsb ) is a heuristic technique for finding a minimum cut graph bisection . to use this method the second eigenvector of the laplacian of the graph is computed and from it a bisection is obtained . the most common method is to use the median of the components of the second eigenvector to induce a bisection . we prove here that this median cut method is optimal in the sense that the partition vector induced by it is the closest partition vector , in any $ l_s $ norm , for $ s \\ge 1 $ , to the second eigenvector . moreover , we prove that the same result also holds for any $ m $ -partition , that is , a partition into $ m $ and $ ( n-m ) $ vertices , when using the $ m $ th largest or smallest components of the second eigenvector . story_separator_special_tag a dimension reduction method called discrete empirical interpolation is proposed and shown to dramatically reduce the computational complexity of the popular proper orthogonal decomposition ( pod ) method for constructing reduced-order models for time dependent and/or parametrized nonlinear partial differential equations ( pdes ) . in the presence of a general nonlinearity , the standard pod-galerkin technique reduces dimension in the sense that far fewer variables are present , but the complexity of evaluating the nonlinear term remains that of the original problem . the original empirical interpolation method ( eim ) is a modification of pod that reduces the complexity of evaluating the nonlinear term of the reduced model to a cost proportional to the number of reduced variables obtained by pod . we propose a discrete empirical interpolation method ( deim ) , a variant that is suitable for reducing the dimension of systems of ordinary differential equations ( odes ) of a certain type . as presented here , it is applicable to odes arising from finite difference discretization of time dependent pdes and/or parametrically dependent steady state problems . however , the approach extends to arbitrary systems of nonlinear odes with minor modification . our contribution story_separator_special_tag we introduce a fast algorithm for entry-wise evaluation of the gauss-newton hessian ( gnh ) matrix for the multilayer perceptron . the algorithm has a precomputation step and a sampling step .
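the median-cut rule analyzed in the recursive-spectral-bisection abstract above takes only a few lines of numpy ; this sketch uses a dense eigensolver for clarity , whereas production codes would use a sparse lanczos solver for the fiedler vector .

import numpy as np

def spectral_bisection(A):
    # median cut on the fiedler vector: split the vertices at the median
    # entry of the eigenvector for the second-smallest eigenvalue of the
    # combinatorial laplacian L = D - A (assumes a connected graph)
    d = A.sum(axis=1)
    L = np.diag(d) - A
    _, vecs = np.linalg.eigh(L)
    fiedler = vecs[:, 1]
    return fiedler <= np.median(fiedler)   # boolean partition indicator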
while it generally requires $ o ( nn ) $ work to compute an entry ( and the entire column ) in the gnh matrix for a neural network with $ n $ parameters and $ n $ data points , our fast sampling algorithm reduces the cost to $ o ( n+d/\\epsilon^2 ) $ work , where $ d $ is the output dimension of the network and $ \\epsilon $ is a prescribed accuracy ( independent of $ n $ ) . one application of our algorithm is constructing the hierarchical-matrix ( \\hmatrix { } ) approximation of the gnh matrix for solving linear systems and eigenvalue problems . while it generally requires $ o ( n^2 ) $ memory and $ o ( n^3 ) $ work to store and factorize the gnh matrix , respectively , the \\hmatrix { } approximation requires only $ \\bigo ( n r_o ) $ memory footprint and $ \\bigo ( n r_o^2 ) $ work to be factorized story_separator_special_tag we introduce a new family of deep neural network models . instead of specifying a discrete sequence of hidden layers , we parameterize the derivative of the hidden state using a neural network . the output of the network is computed using a black-box differential equation solver . these continuous-depth models have constant memory cost , adapt their evaluation strategy to each input , and can explicitly trade numerical precision for speed . we demonstrate these properties in continuous-depth residual networks and continuous-time latent variable models . we also construct continuous normalizing flows , a generative model that can train by maximum likelihood , without partitioning or ordering the data dimensions . for training , we show how to scalably backpropagate through any ode solver , without access to its internal operations . this allows end-to-end training of odes within larger models . story_separator_special_tag the recent advent of graph signal processing ( gsp ) has spurred intensive studies of signals that live naturally on irregular data kernels described by graphs ( e.g. , social networks , wireless sensor networks ) . though a digital image contains pixels that reside on a regularly sampled 2-d grid , if one can design an appropriate underlying graph connecting pixels with weights that reflect the image structure , then one can interpret the image ( or image patch ) as a signal on a graph , and apply gsp tools for processing and analysis of the signal in graph spectral domain . in this paper , we overview recent graph spectral techniques in gsp specifically for image/video processing . the topics covered include image compression , image restoration , image filtering , and image segmentation . story_separator_special_tag tensors have found application in a variety of fields , ranging from chemometrics to signal processing and beyond . in this paper , we consider the problem of multilinear modeling of sparse count data . our goal is to develop a descriptive tensor factorization model of such data , along with appropriate algorithms and theory . to do so , we propose that the random variation is best described via a poisson distribution , which better describes the zeros observed in the data as compared to the typical assumption of a gaussian distribution . under a poisson assumption , we fit a model to observed data using the negative log-likelihood score . we present a new algorithm for poisson tensor factorization called candecomp -- parafac alternating poisson regression ( cp-apr ) that is based on a majorization-minimization approach . it can be shown that cp-apr is a generalization of the lee -- seung multiplicative updates .
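the last sentence above notes that cp-apr generalizes the lee -- seung multiplicative updates ; for reference , here is the matrix ( two-way ) special case : the multiplicative updates minimizing the generalized kl divergence $ d ( x \\,||\\, w h ) $ , i.e . the poisson log-likelihood objective . this is a sketch of the matrix baseline only , not the cp-apr tensor algorithm itself .

import numpy as np

def kl_nmf(X, k, n_iter=200, eps=1e-9, rng=None):
    # lee--seung multiplicative updates for the generalized kl divergence
    # D(X || WH); for count data this is maximum likelihood under a
    # poisson model X_ij ~ poisson((WH)_ij)
    rng = np.random.default_rng(rng)
    W = rng.random((X.shape[0], k)) + eps
    H = rng.random((k, X.shape[1])) + eps
    for _ in range(n_iter):
        WH = W @ H + eps
        W *= ((X / WH) @ H.T) / (H.sum(axis=1) + eps)
        WH = W @ H + eps
        H *= (W.T @ (X / WH)) / (W.sum(axis=0)[:, None] + eps)
    return W, H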
we show how to prevent the algorithm from converging to non-kkt points and prove convergence of cp-apr under mild conditions . story_separator_special_tag in this paper we review basic and emerging models and associated algorithms for large-scale tensor networks , especially tensor train ( tt ) decompositions , using novel mathematical and graphical representations . we discuss the concept of tensorization ( i.e. , creating very high-order tensors from lower-order original data ) and super compression of data achieved via quantized tensor train ( qtt ) networks . the purpose of tensorization and quantization is to achieve , via low-rank tensor approximations , `` super '' compression and meaningful , compact representation of structured data . the main objective of this paper is to show how tensor networks can be used to solve a wide class of big data optimization problems ( that are far from tractable by classical numerical methods ) by applying tensorization and performing all operations using relatively small size matrices and tensors and applying iteratively optimized and approximative tensor contractions . keywords : tensor networks , tensor train ( tt ) decompositions , matrix product states ( mps ) , matrix product operators ( mpo ) , basic tensor operations , tensorization , distributed representation of data , optimization problems for very large-scale problems : generalized eigenvalue decomposition ( gevd story_separator_special_tag modern applications in engineering and data science are increasingly based on multidimensional data of exceedingly high volume , variety , and structural richness . however , standard machine learning algorithms typically scale exponentially with data volume and complexity of cross-modal couplings - the so called curse of dimensionality - which is prohibitive to the analysis of large-scale , multi-modal and multi-relational datasets . given that such data are often efficiently represented as multiway arrays or tensors , it is therefore timely and valuable for the multidisciplinary machine learning and data analytic communities to review low-rank tensor decompositions and tensor networks as emerging tools for dimensionality reduction and large scale optimization problems . our particular emphasis is on elucidating that , by virtue of the underlying low-rank approximations , tensor networks have the ability to alleviate the curse of dimensionality in a number of applied areas . in part 1 of this monograph we provide innovative solutions to low-rank tensor network decompositions and easy to interpret graphical representations of the mathematical operations on tensor networks . such a conceptual insight allows for seamless migration of ideas from the flat-view matrices to tensor network operations and vice versa , and provides a platform for further developments , practical applications , and non-euclidean extensions . it also permits the introduction of various story_separator_special_tag this monograph builds on tensor networks for dimensionality reduction and large-scale optimization : part 1 low-rank tensor decompositions by discussing tensor network models for super-compressed higher-order representation of data/parameters and cost functions , together with an outline of their applications in machine learning and data analytics .
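to make the tensor-train ( matrix product state ) format surveyed above concrete : a d-way tensor is stored as a chain of third-order cores of shape $ r_k \\times n_k \\times r_{k+1} $ with boundary ranks $ r_0 = r_d = 1 $ . the toy sketch below contracts the cores back into the full tensor ; it illustrates the storage convention only and is not a tt decomposition algorithm .

import numpy as np

def tt_reconstruct(cores):
    # contract a tensor train: cores[k] has shape (r_k, n_k, r_{k+1});
    # successive tensordot calls sum over the shared rank indices
    full = cores[0]                                   # (1, n_0, r_1)
    for core in cores[1:]:
        full = np.tensordot(full, core, axes=([-1], [0]))
    return full.squeeze(axis=(0, -1))                 # drop boundary ranks

storage is sum_k r_k * n_k * r_{k+1} numbers instead of prod_k n_k , which is the compression the monographs above exploit .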
a particular emphasis is on elucidating , through graphical illustrations , that by virtue of the underlying low-rank tensor approximations and sophisticated contractions of core tensors , tensor networks have the ability to perform distributed computations on otherwise prohibitively large volume of data/parameters , thereby alleviating the curse of dimensionality . the usefulness of this concept is illustrated over a number of applied areas , including generalized regression and classification , generalized eigenvalue decomposition and in the optimization of deep neural networks . the monograph focuses on tensor train ( tt ) and hierarchical tucker ( ht ) decompositions and their extensions , and on demonstrating the ability of tensor networks to provide scalable solutions for a variety of otherwise intractable large-scale optimization problems . tensor networks for dimensionality reduction and large-scale optimization parts 1 and 2 can be used as stand-alone texts , or together as a comprehensive review of the exciting story_separator_special_tag in order to perform highly qualified neutron-gamma discrimination in mixed radiation field , we investigate the application of blind source separation methods based on nonnegative matrix and tensor factorization algorithms as new and robust neutron-gamma discrimination software-based approaches . these signal processing tools have allowed to recover original source components from real-world mixture signals which have been recorded at the output of the stilbene scintillation detector . the computation of the performance index of separability of each tested nonnegative algorithm has allowed to select second-order nmf algorithm and ntf-2 model as the most efficient techniques for discriminating neutrons and gammas . furthermore , the neutron-gamma discrimination is highlighted through the computation of the cross-correlation function . the performance of the blind source separation methods has been quantified through the obtained results that prove a good neutron-gamma separation . story_separator_special_tag this book provides a broad survey of models and efficient algorithms for nonnegative matrix factorization ( nmf ) . this includes nmfs various extensions and modifications , especially nonnegative tensor factorizations ( ntf ) and nonnegative tucker decompositions ( ntd ) . nmf/ntf and their extensions are increasingly used as tools in signal and image processing , and data analysis , having garnered interest due to their capability to provide new insights and relevant information about the complex latent relationships in experimental data sets . it is suggested that nmf can provide meaningful components with physical interpretations ; for example , in bioinformatics , nmf and its extensions have been successfully applied to gene expression , sequence analysis , the functional characterization of genes , clustering and text mining . as such , the authors focus on the algorithms that are most useful in practice , looking at the fastest , most robust , and suitable for large-scale models . key features : acts as a single source reference guide to nmf , collating information that is widely dispersed in current literature , including the authors own recently developed techniques in the subject area . uses generalized cost functions such story_separator_special_tag parametrized families of pdes arise in various contexts such as inverse problems , control and optimization , risk assessment , and uncertainty quantification . 
in most of these applications , the number of parameters is large or perhaps even infinite . thus , the development of numerical methods for these parametric problems is faced with the possible curse of dimensionality . this article is directed at ( i ) identifying and understanding which properties of parametric equations allow one to avoid this curse and ( ii ) developing and analyzing effective numerical methods which fully exploit these properties and , in turn , are immune to the growth in dimensionality . the first part of this article studies the smoothness and approximability of the solution map , that is , the map $ a\\mapsto u ( a ) $ where $ a $ is the parameter value and $ u ( a ) $ is the corresponding solution to the pde . it is shown that for many relevant parametric pdes , the parametric smoothness of this map is typically holomorphic and also highly anisotropic in that the relevant parameters are of widely varying importance in describing the solution . story_separator_special_tag many pattern recognition tasks , including estimation , classification , and the finding of similar objects , make use of linear models . the fundamental operation in such tasks is the computation of the dot product between a query vector and a large database of instance vectors . often we are interested primarily in those instance vectors which have high dot products with the query . we present a random sampling based algorithm that enables us to identify , for any given query vector , those instance vectors which have large dot products , while avoiding explicit computation of all dot products . we provide experimental results that demonstrate considerable speedups for text retrieval tasks . our approximate matrix multiplication algorithm is applicable to products of $ k \\ge 2 $ matrices and is of independent interest . our theoretical and experimental analysis demonstrates that in many scenarios , our method dominates standard matrix multiplication . story_separator_special_tag the support-vector network is a new learning machine for two-group classification problems . the machine conceptually implements the following idea : input vectors are non-linearly mapped to a very high-dimension feature space . in this feature space a linear decision surface is constructed . special properties of the decision surface ensure high generalization ability of the learning machine . the idea behind the support-vector network was previously implemented for the restricted case where the training data can be separated without errors . we here extend this result to non-separable training data . high generalization ability of support-vector networks utilizing polynomial input transformations is demonstrated . we also compare the performance of the support-vector network to various classical learning algorithms that all took part in a benchmark study of optical character recognition . story_separator_special_tag a cur approximation of a matrix $ a $ is a particular type of low-rank approximation $ a\\approx cur $ , where $ c $ and $ r $ consist of columns and rows of $ a $ , respectively . one way to obtain such an appr . story_separator_special_tag we propose a modular extension of the backpropagation algorithm for computation of the block diagonal of the training objective 's hessian to various levels of refinement . the approach compartmentalizes the otherwise tedious construction of the hessian into local modules .
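as a deliberately simplified companion to the support-vector abstract above , here is a primal sketch : subgradient descent on the regularized hinge loss for a linear classifier . the original support-vector network is trained in its dual form with kernels and optimizes the margin exactly , so treat the update rule and hyperparameters here as illustrative assumptions only .

import numpy as np

def train_linear_svm(X, y, lam=1e-2, epochs=100, lr=1e-2, rng=None):
    # X: n x d features, y: labels in {-1, +1}; minimizes
    # lam/2 * ||w||^2 + mean(max(0, 1 - y_i (w @ x_i + b)))
    rng = np.random.default_rng(rng)
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(epochs):
        for i in rng.permutation(len(y)):
            margin = y[i] * (X[i] @ w + b)
            active = margin < 1                 # hinge subgradient is active
            w -= lr * (lam * w - (y[i] * X[i] if active else 0))
            b += lr * (y[i] if active else 0.0)
    return w, b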
it is applicable to feedforward neural network architectures , and can be integrated into existing machine learning libraries with relatively little overhead , facilitating the development of novel second-order optimization methods . our formulation subsumes several recently proposed block-diagonal approximation schemes as special cases . our pytorch implementation is included with the paper . story_separator_special_tag let g be a graph on n vertices , and let $ \\lambda_1 , \\lambda_2 , \\ldots , \\lambda_n $ be its eigenvalues . the estrada index of g is a recently introduced graph invariant , defined as $ ee = \\sum_{i=1}^{n} e^{\\lambda_i} $ . we establish lower and upper bounds for $ ee $ in terms of the number of vertices and number of edges . also some inequalities between $ ee $ and the energy of g are obtained . story_separator_special_tag we discuss a multilinear generalization of the singular value decomposition . there is a strong analogy between several properties of the matrix and the higher-order tensor decomposition ; uniqueness , link with the matrix eigenvalue decomposition , first-order perturbation effects , etc. , are analyzed . we investigate how tensor symmetries affect the decomposition and propose a multilinear generalization of the symmetric eigenvalue decomposition for pair-wise symmetric tensors . story_separator_special_tag graph-based semi-supervised learning for classification endorses a nice interpretation in terms of diffusive random walks , where the regularisation factor in the original optimisation formulation plays the role of a restarting probability . recently , a new type of biased random walks for characterising certain dynamics on networks have been defined and rely on the $ \\alpha $ -th power of the standard laplacian matrix $ l $ , with $ \\alpha > 0 $ . in particular , these processes embed long range transitions , the levy flights , that are capable of one-step jumps between far-distant states ( nodes ) of the graph . the present contribution envisions to build upon these volatile random walks to propose a new version of graph based semi-supervised learning algorithms whose classification outcome could benefit from the dynamics induced by the fractional transition matrix . story_separator_special_tag in this work , we are interested in generalizing convolutional neural networks ( cnns ) from low-dimensional regular grids , where image , video and speech are represented , to high-dimensional irregular domains , such as social networks , brain connectomes or words ' embedding , represented by graphs . we present a formulation of cnns in the context of spectral graph theory , which provides the necessary mathematical background and efficient numerical schemes to design fast localized convolutional filters on graphs . importantly , the proposed technique offers the same linear computational complexity and constant learning complexity as classical cnns , while being universal to any graph structure . experiments on mnist and 20news demonstrate the ability of this novel deep learning system to learn local , stationary , and compositional features on graphs . story_separator_special_tag let g = ( v , e ) be a finite and simple graph with $ \\lambda_1 , \\lambda_2 , \\ldots , \\lambda_n $ as its eigenvalues . the estrada index of g is $ ee ( g ) = \\sum_{i=1}^{n} e^{\\lambda_i} $ . for a positive integer k , a connected graph g is called a strict k-quasi tree if there exists a set u of vertices of size k such that $ g \\setminus u $ is a tree and k is as small as possible with this property .
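the fast localized filters in the graph-cnn abstract above avoid the eigendecomposition used by the earlier spectral sketch : the filter is a degree-k chebyshev polynomial in the rescaled laplacian , applied with sparse matrix-vector products only . a minimal version follows ; for simplicity the largest eigenvalue is computed exactly here ( in practice it would be estimated ) and the learned coefficients theta are just an input vector with at least two entries .

import numpy as np

def chebyshev_graph_filter(L, x, theta):
    # y = sum_k theta[k] * T_k(L_hat) @ x, with L_hat = 2 L / lmax - I,
    # using the chebyshev recurrence T_k = 2 L_hat T_{k-1} - T_{k-2}
    lmax = np.linalg.eigvalsh(L).max()
    L_hat = (2.0 / lmax) * L - np.eye(L.shape[0])
    t_prev, t_cur = x, L_hat @ x                 # T_0 x and T_1 x
    y = theta[0] * t_prev + theta[1] * t_cur
    for k in range(2, len(theta)):
        t_prev, t_cur = t_cur, 2 * (L_hat @ t_cur) - t_prev
        y += theta[k] * t_cur
    return y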
in this paper , we define point attaching strict k-quasi tree graphs and obtain the graph with minimum estrada index among point attaching strict k-quasi tree graphs with k even cycles . story_separator_special_tag due to the fact that much of today 's data can be represented as graphs , there has been a demand for generalizing neural network models for graph data . one recent direction that has shown fruitful results , and therefore growing interest , is the usage of graph convolutional neural networks ( gcns ) . they have been shown to provide a significant improvement on a wide range of tasks in network analysis , one of which being node representation learning . the task of learning low-dimensional node representations has been shown to increase performance on a plethora of other tasks from link prediction and node classification , to community detection and visualization . simultaneously , signed networks ( or graphs having both positive and negative links ) have become ubiquitous with the growing popularity of social media . however , since previous gcn models have primarily focused on unsigned networks ( or graphs consisting of only positive links ) , it is unclear how they could be applied to signed networks due to the challenges presented by negative links . the primary challenges are based on negative links having not only a different semantic meaning as compared to positive links story_separator_special_tag we introduce a new method for solving nonlinear continuous optimization problems with chance constraints . our method is based on a reformulation of the probabilistic constraint as a quantile function . the quantile function is approximated via a differentiable sample average approximation . we provide theoretical statistical guarantees of the approximation , and illustrate empirically that the reformulation can be directly used by standard nonlinear optimization solvers in the case of single chance constraints . furthermore , we propose an s $ \\ell_1 $ qp-type trust-region method to solve instances with joint chance constraints . we demonstrate the performance of the method on several problems , and show that it scales well with the sample size and that the smoothing can be used to counteract the bias in the chance constraint approximation induced by the sample approximation . story_separator_special_tag current nonnegative matrix factorization ( nmf ) deals with the $ x = f g^t $ type . we provide a systematic analysis and extensions of nmf to the symmetric $ w = h h^t $ , and the weighted $ w = h s h^t $ . we show that ( 1 ) $ w = h h^t $ is equivalent to kernel k-means clustering and the laplacian-based spectral clustering . ( 2 ) $ x = f g^t $ is equivalent to simultaneous clustering of rows and columns of a bipartite graph . algorithms are given for computing these symmetric nmfs . story_separator_special_tag an important application of graph partitioning is data clustering using a graph model - the pairwise similarities between all data objects form a weighted graph adjacency matrix that contains all necessary information for clustering . in this paper , we propose a new algorithm for graph partitioning with an objective function that follows the min-max clustering principle . the relaxed version of the optimization of the min-max cut objective function leads to the fiedler vector in spectral graph partitioning . theoretical analyses of min-max cut indicate that it leads to balanced partitions , and lower bounds are derived .
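for the symmetric factorization $ w \\approx h h^t $ above , one commonly used damped multiplicative update in the spirit of ding et al . is sketched below ; the step size , iteration count , and initialization are assumptions rather than the paper's prescriptions .

import numpy as np

def symmetric_nmf(W, k, n_iter=500, beta=0.5, eps=1e-9, rng=None):
    # damped multiplicative update for min ||W - H H^T||_F^2 with H >= 0;
    # beta = 0.5 interpolates between the current iterate and the raw
    # multiplicative step, which helps avoid oscillation
    rng = np.random.default_rng(rng)
    H = rng.random((W.shape[0], k))
    for _ in range(n_iter):
        H *= (1 - beta) + beta * (W @ H) / (H @ (H.T @ H) + eps)
    return H

the rows of H can then be read as soft cluster memberships , consistent with the kernel-k-means equivalence stated in the abstract .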
the min-max cut algorithm is tested on newsgroup data sets and is found to out-perform other current popular partitioning/clustering methods . the linkage-based refinements to the algorithm further improve the quality of clustering substantially . we also demonstrate that a linearized search order based on linkage differential is better than that based on the fiedler vector , providing another effective partitioning method . story_separator_special_tag we present several new variations on the theme of nonnegative matrix factorization ( nmf ) . considering factorizations of the form $ x = f g^t $ , we focus on algorithms in which $ g $ is restricted to containing nonnegative entries , but allowing the data matrix $ x $ to have mixed signs , thus extending the applicable range of nmf methods . we also consider algorithms in which the basis vectors of $ f $ are constrained to be convex combinations of the data points . this is used for a kernel extension of nmf . we provide algorithms for computing these new factorizations and we provide supporting theoretical analysis . we also analyze the relationships between our algorithms and clustering algorithms , and consider the implications for sparseness of solutions . finally , we present experimental results that explore the properties of these new methods . story_separator_special_tag for applications as varied as bayesian neural networks , determinantal point processes , elliptical graphical models , and kernel learning for gaussian processes ( gps ) , one must compute a log determinant of an n by n positive definite matrix , and its derivatives -- leading to prohibitive o ( n^3 ) computations . we propose novel o ( n ) approaches to estimating these quantities from only fast matrix vector multiplications ( mvms ) . these stochastic approximations are based on chebyshev , lanczos , and surrogate models , and converge quickly even for kernel matrices that have challenging spectra . we leverage these approximations to develop a scalable gaussian process approach to kernel learning . we find that lanczos is generally superior to chebyshev for kernel learning , and that a surrogate approach can be highly efficient and accurate with popular kernels . story_separator_special_tag in putting together this issue of cise , we knew three things : it would be difficult to list just 10 algorithms ; it would be fun to assemble the authors and read their papers ; and , whatever we came up with in the end , it would be controversial . we tried to assemble the 10 algorithms with the greatest influence on the development and practice of science and engineering in the 20th century . following is our list ( in chronological order ) : metropolis algorithm for monte carlo ; simplex method for linear programming ; krylov subspace iteration methods ; the decompositional approach to matrix computations ; the fortran optimizing compiler ; qr algorithm for computing eigenvalues ; quicksort algorithm for sorting ; fast fourier transform ; integer relation detection ; fast multipole method . story_separator_special_tag more than 50 years ago , john tukey called for a reformation of academic statistics . in `` the future of data analysis '' , he pointed to the existence of an as-yet unrecognized science , whose subject of interest was learning from data , or data analysis .
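a compact sketch of the lanczos route to $ \\log\\det ( a ) = \\mathrm{trace} ( \\log a ) $ advertised in the log-determinant abstract above : hutchinson probes combined with gauss quadrature read off the lanczos tridiagonal matrix ( stochastic lanczos quadrature ) . reorthogonalization , variance control , derivatives , and the paper's surrogate-model variant are all omitted .

import numpy as np

def slq_logdet(A, num_probes=30, steps=25, rng=None):
    # estimate trace(log A) for symmetric positive definite A using only
    # matrix-vector products: for each normalized rademacher probe v,
    # n * v^T log(A) v is approximated by gauss quadrature on the
    # lanczos tridiagonal matrix T built from A and v
    rng = np.random.default_rng(rng)
    n = A.shape[0]
    est = 0.0
    for _ in range(num_probes):
        v = rng.choice([-1.0, 1.0], size=n)
        v /= np.linalg.norm(v)
        alphas, betas = [], []
        q_prev, q, beta = np.zeros(n), v, 0.0
        for _ in range(steps):
            w = A @ q - beta * q_prev
            alpha = q @ w
            w -= alpha * q
            beta = np.linalg.norm(w)
            alphas.append(alpha)
            betas.append(beta)
            if beta < 1e-12:
                break
            q_prev, q = q, w / beta
        T = np.diag(alphas) + np.diag(betas[:-1], 1) + np.diag(betas[:-1], -1)
        lam, U = np.linalg.eigh(T)
        # quadrature weights are squared first components of the ritz vectors
        est += n * float(np.sum(U[0, :] ** 2 * np.log(np.maximum(lam, 1e-12))))
    return est / num_probes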
ten to 20 years ago , john chambers , jeff wu , bill cleveland , and leo breiman independently once again urged academic statistics to expand its boundaries beyond the classical domain of theoretical statistics ; chambers called for more emphasis on data preparation and presentation rather than statistical modeling ; and breiman called for emphasis on prediction rather than inference . cleveland and wu even suggested the catchy name data science for this envisioned field . a recent and growing phenomenon has been the emergence of data science programs at major universities , including uc berkeley , nyu , mit , and most prominently , the university of michigan , which in september 2015 announced a $ 100m data science initiative that aims to hire 35 new faculty . teaching in these new programs has significant overlap in curricular subject matter with traditional statistics courses ; yet many academic statisticians perceive story_separator_special_tag we interpret non-negative matrix factorization geometrically , as the problem of finding a simplicial cone which contains a cloud of data points and which is contained in the positive orthant . we show that under certain conditions , basically requiring that some of the data are spread across the faces of the positive orthant , there is a unique such simplicial cone . we give examples of synthetic image articulation databases which obey these conditions ; these require separated support and factorial sampling . for such databases there is a generative model in terms of 'parts ' and nmf correctly identifies the 'parts ' . we show that our theoretical results are predictive of the performance of published nmf code , by running the published algorithms on one of our synthetic image articulation databases . story_separator_special_tag motivated by applications in which the data may be formulated as a matrix , we consider algorithms for several common linear algebra problems . these algorithms make more efficient use of computational resources , such as the computation time , random access memory ( ram ) , and the number of passes over the data , than do previously known algorithms for these problems . in this paper , we devise two algorithms for the matrix multiplication problem . suppose $ a $ and $ b $ ( which are $ m\\times n $ and $ n\\times p $ , respectively ) are the two input matrices . in our main algorithm , we perform $ c $ independent trials , where in each trial we randomly sample an element of $ \\ { 1,2 , \\ldots , n\\ } $ with an appropriate probability distribution $ { \\cal p } $ on $ \\ { 1,2 , \\ldots , n\\ } $ . we form an $ m\\times c $ matrix $ c $ consisting of the sampled columns of $ a $ , each scaled appropriately , and we form a $ c\\times n $ matrix $ r story_separator_special_tag the statistical leverage scores of a matrix $ a $ are the squared row-norms of the matrix containing its ( top ) left singular vectors and the coherence is the largest leverage score . these quantities are of interest in recently-popular problems such as matrix completion and nystrom-based low-rank matrix approximation as well as in large-scale statistical data analysis applications more generally ; moreover , they are of interest since they define the key structural nonuniformity that must be dealt with in developing fast randomized matrix algorithms . 
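a sketch of the sampling-based approximate matrix multiplication just described : pick $ c $ column-row index pairs with probabilities proportional to $ || a_{ ( i ) } || \\, || b^{ ( i ) } || $ , then rescale each sampled outer product so the estimate is unbiased . the norm-proportional probabilities are the variance-minimizing choice analyzed in this line of work .

import numpy as np

def sampled_matmul(A, B, c, rng=None):
    # unbiased estimate of A @ B from c sampled outer products:
    # E[ outer(A[:, i], B[i, :]) / (c * p[i]) summed over samples ] = A @ B
    rng = np.random.default_rng(rng)
    norms = np.linalg.norm(A, axis=0) * np.linalg.norm(B, axis=1)
    p = norms / norms.sum()
    idx = rng.choice(A.shape[1], size=c, p=p)
    return sum(np.outer(A[:, i], B[i, :]) / (c * p[i]) for i in idx)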
our main result is a randomized algorithm that takes as input an arbitrary $ n \\times d $ matrix $ a $ , with $ n \\gg d $ , and that returns as output relative-error approximations to all $ n $ of the statistical leverage scores . the proposed algorithm runs ( under assumptions on the precise values of $ n $ and $ d $ ) in $ o ( n d \\log n ) $ time , as opposed to the $ o ( nd^2 ) $ time required by the naive algorithm that involves computing an orthogonal basis for the range of $ a $ . our analysis story_separator_special_tag matrices are ubiquitous in computer science , statistics , and applied mathematics . an $ m \\times n $ matrix can encode information about m objects ( each described by n features ) , or the behavior of a discretized differential operator on a finite element mesh ; an $ n \\times n $ positive-definite matrix can encode the correlations between all pairs of n objects , or the edge-connectivity between all pairs of nodes in a social network ; and so on . motivated largely by technological developments that generate extremely large scientific and internet datasets , recent years have witnessed exciting developments in the theory and practice of matrix algorithms . particularly remarkable is the use of randomization typically assumed to be a property of the input data due to , for example , noise in the data generation mechanisms as an algorithmic or computational resource for the development of improved algorithms for fundamental matrix problems such as matrix multiplication , least-squares ( ls ) approximation , low-rank matrix approximation , and laplacian-based linear equation solvers . randomized numerical linear algebra ( randnla ) is an interdisciplinary research area that exploits randomization as a computational resource to develop story_separator_special_tag least squares approximation is a technique to find an approximate solution to a system of linear equations that has no exact solution . in a typical setting , one lets $ n $ be the number of constraints and $ d $ be the number of variables , with $ n \\gg d $ . then , existing exact methods find a solution vector in $ o ( nd^2 ) $ time . we present two randomized algorithms that provide very accurate relative-error approximations to the optimal value and the solution vector of a least squares approximation problem more rapidly than existing exact algorithms . both of our algorithms preprocess the data with the randomized hadamard transform . one then uniformly randomly samples constraints and solves the smaller problem on those constraints , and the other performs a sparse random projection and solves the smaller problem on those projected coordinates . in both cases , solving the smaller problem provides relative-error approximations , and , if $ n $ is sufficiently larger than $ d $ , the approximate solution can be computed in $ o ( nd \\log d ) $ time . story_separator_special_tag a new regression technique based on vapnik 's concept of support vectors is introduced . we compare support vector regression ( svr ) with a committee regression technique ( bagging ) based on regression trees and ridge regression done in feature space . on the basis of these experiments , it is expected that svr will have advantages in high dimensionality space because svr optimization does not depend on the dimensionality of the input space . story_separator_special_tag we introduce an economical gram -- schmidt orthogonalization on the extended krylov subspace originated by actions of a symmetric matrix and its inverse .
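a sketch-and-solve version of the randomized least-squares idea above , with a dense gaussian sketch standing in for the randomized hadamard transform ; the gaussian variant costs $ o ( ncd ) $ rather than the $ o ( nd \\log d ) $ of the structured transform , so this is a readability-first stand-in , not the paper's algorithm .

import numpy as np

def sketched_least_squares(A, b, c, rng=None):
    # compress the n x d problem min ||Ax - b|| down to c rows with a
    # random sketch S, then solve the small c x d problem exactly;
    # c should be somewhat larger than d for a good approximation
    rng = np.random.default_rng(rng)
    S = rng.standard_normal((c, A.shape[0])) / np.sqrt(c)
    x, *_ = np.linalg.lstsq(S @ A, S @ b, rcond=None)
    return x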
an error bound for a family of problems arising from the elliptic method of lines is derived . the bound shows that , for the same approximation quality , the diagonal variant of the extended subspaces requires about the square root of the dimension of the standard krylov subspaces using only positive or negative matrix powers . an example of an application to the solution of a 2.5-d elliptic problem attests to the computational efficiency of the method for large-scale problems . story_separator_special_tag hyperbolic cross approximation is a special type of multivariate approximation . recently , driven by applications in engineering , biology , medicine and other areas of science , new challenging problems have appeared . the common feature of these problems is high dimensions . we present here a survey on classical methods developed in multivariate approximation theory , which are known to work very well for moderate dimensions and which have potential for applications in really high dimensions . the theory of hyperbolic cross approximation and the related theory of functions with mixed smoothness have been under detailed study for more than 50 years . it is now well understood that this theory is important both for theoretical study and for practical applications . it is also understood that both theoretical analysis and construction of practical algorithms are very difficult problems . this explains why many fundamental problems in this area are still unsolved . only a few survey papers and monographs on the topic are published . this and recently discovered deep connections between the hyperbolic cross approximation ( and related sparse grids ) and other areas of mathematics such as probability , discrepancy , and numerical integration motivated us to write story_separator_special_tag multiple linear regression is considered and the partial least-squares method ( pls ) for computing a projection onto a lower-dimensional subspace is analyzed . the equivalence of pls to lanczos bidiagonalization is a basic part of the analysis . singular value analysis , krylov subspaces , and shrinkage factors are used to explain why , in many cases , pls gives a faster reduction of the residual than standard principal components regression . it is also shown why in some cases the dimension of the subspace , given by pls , is not as small as desired . story_separator_special_tag fast and robust decomposition of a matrix representing a spatial grid through time . rapid approximation for robust principal component analysis . competitive performance in terms of recall and precision for motion detection . gpu accelerated implementation allows faster computation . this paper introduces a fast algorithm for randomized computation of a low-rank dynamic mode decomposition ( dmd ) of a matrix . here we consider this matrix to represent the development of a spatial grid through time , e.g . data from a static video source . dmd was originally introduced in the fluid mechanics community , but is also suitable for motion detection in video streams and its use for background subtraction has received little previous investigation . in this study we present a comprehensive evaluation of background subtraction , using the randomized dmd and compare the results with leading robust principal component analysis algorithms . the results are convincing and show the random dmd is an efficient and powerful approach for background modeling , allowing processing of high resolution videos in real-time .
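a minimal exact-dmd sketch for snapshot pairs $ y \\approx a x $ , the deterministic core that the randomized dmd above accelerates with sketching ; rank selection heuristics and the randomization itself are left out , and variable names are illustrative .

import numpy as np

def dmd(X, Y, r):
    # exact dmd (schmid-style): X and Y are m x n snapshot matrices with
    # Y approximately equal to A @ X; return the leading r dmd
    # eigenvalues and modes of the best-fit linear operator A
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    U, s, V = U[:, :r], s[:r], Vt[:r].T
    A_tilde = (U.T @ Y @ V) / s          # r x r projection of A
    lam, W = np.linalg.eig(A_tilde)
    modes = (Y @ V / s) @ W              # exact dmd modes
    return lam, modes

for background subtraction , the mode whose eigenvalue is closest to 1 captures the static background , and the remaining modes capture motion .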
story_separator_special_tag we perform importance sampling for a randomized matrix multiplication algorithm by drineas , kannan , and mahoney and derive probabilities that minimize the expected value ( with regard to the distributions of the matrix elements ) of the variance . we compare these optimized probabilities with uniform probabilities and derive conditions under which the actual variance of the optimized probabilities is lower . numerical experiments with query matching in information retrieval applications illustrate that the optimized probabilities produce more accurate matchings than the uniform probabilities and that they can also be computed efficiently . story_separator_special_tag a novel approach to describe the 3d structure of small/medium-sized and large molecules is introduced . a vector and an index are defined on the basis of considering second line graphs with edges weighted by the dihedral angles of the molecule . they measure the 3d ` compactness ' or folding of the molecular structures , giving maximum values for the most folded structures . we have ranked five protein models according to their degree of folding . the similarity among these proteins has been determined , showing that the most folded proteins are not similar among them , while the less folded ones are similar to each other . story_separator_special_tag a fundamental problem in the study of complex networks is to provide quantitative measures of correlation and information flow between different parts of a system . to this end , several notions of communicability have been introduced and applied to a wide variety of real-world networks in recent years . several such communicability functions are reviewed in this paper . it is emphasized that communication and correlation in networks can take place through many more routes than the shortest paths , a fact that may not have been sufficiently appreciated in previously proposed correlation measures . in contrast to these , the communicability measures reviewed in this paper are defined by taking into account all possible routes between two nodes , assigning smaller weights to longer ones . this point of view naturally leads to the definition of communicability in terms of matrix functions , such as the exponential , resolvent , and hyperbolic functions , in which the matrix argument is either the adjacency matrix or the graph laplacian associated with the network . considerable insight on communicability can be gained by modeling a network as a system of oscillators and deriving physical interpretations , both classical and story_separator_special_tag the emerging field of network science deals with the tasks of modeling , comparing , and summarizing large data sets that describe complex interactions . because pairwise affinity data can be stored in a two-dimensional array , graph theory and applied linear algebra provide extremely useful tools . here , we focus on the general concepts of centrality , communicability , and betweenness , each of which quantifies important features in a network . some recent work in the mathematical physics literature has shown that the exponential of a network 's adjacency matrix can be used as the basis for defining and computing specific versions of these measures .
we introduce here a general class of measures based on matrix functions , and show that a particular case involving a matrix resolvent arises naturally from graph-theoretic arguments . we also point out connections between these measures and the quantities typically computed when spectral methods are used for data mining tasks such as clustering and ordering . we finish with computational examples showing the new matrix resolvent version applied to real networks . story_separator_special_tag we introduce a new centrality measure that characterizes the participation of each node in all subgraphs in a network . smaller subgraphs are given more weight than larger ones , which makes this measure appropriate for characterizing network motifs . we show that the subgraph centrality $ c_s ( i ) $ can be obtained mathematically from the spectra of the adjacency matrix of the network . this measure is better able to discriminate the nodes of a network than alternate measures such as degree , closeness , betweenness , and eigenvector centralities . we study eight real-world networks for which $ c_s ( i ) $ displays useful and desirable properties , such as clear ranking of nodes and scale-free characteristics . compared with the number of links per node , the ranking introduced by $ c_s ( i ) $ ( for the nodes in the protein interaction network of s. cerevisiae ) is more highly correlated with the lethality of individual proteins removed from the proteome . story_separator_special_tag with applications using smartpls ( www.smartpls.com ) , the primary software used in partial least squares structural equation modeling ( pls-sem ) , this practical guide provides concise instructions on how to use this evolving statistical technique to conduct research and obtain solutions . featuring the latest research , new examples , and expanded discussions throughout , the second edition is designed to be easily understood by those with limited statistical and mathematical training who want to pursue research opportunities in new ways . story_separator_special_tag as a new approach to train generative models , \emph { generative adversarial networks } ( gans ) have achieved considerable success in image generation . this framework has also recently been applied to data with graph structures . we propose labeled-graph generative adversarial networks ( lggan ) to train deep generative models for graph-structured data with node labels . we test the approach on various types of graph datasets , such as collections of citation networks and protein graphs . experiment results show that our model can generate diverse labeled graphs that match the structural characteristics of the training data and outperforms all alternative approaches in quality and generality . to further evaluate the quality of the generated graphs , we use them on a downstream task of graph classification , and the results show that lggan can faithfully capture the important aspects of the graph structure .
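the subgraph centrality abstract above defines $ c_s ( i ) $ through the spectrum of the adjacency matrix ; in matrix-function form it is the i-th diagonal entry of the exponential of the adjacency matrix , while the off-diagonal entries give the communicability measures reviewed above . a minimal numpy/scipy sketch :

import numpy as np
from scipy.linalg import expm

def subgraph_centrality(A):
    # c_s(i) = [exp(A)]_ii : closed walks at node i , with length-k walks
    # damped by 1 / k! , so smaller subgraphs weigh more
    return np.diag(expm(A))

def communicability(A):
    # G_pq = [exp(A)]_pq accounts for all routes between nodes p and q
    return expm(A)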
story_separator_special_tag this is a survey of the method of graph cuts and its applications to graph clustering of weighted unsigned and signed graphs . i provide a fairly thorough treatment of the method of normalized graph cuts , a deeply original method due to shi and malik , including complete proofs . the main thrust of this paper is the method of normalized cuts . i give a detailed account for k = 2 clusters , and also for k > 2 clusters , based on the work of yu and shi . i also show how both graph drawing and normalized cut k-clustering can be easily generalized to handle signed graphs , which are weighted graphs in which the weight matrix w may have negative coefficients . intuitively , negative coefficients indicate distance or dissimilarity . the solution is to replace the degree matrix by the matrix in which absolute values of the weights are used , and to replace the laplacian by the laplacian with the new degree matrix of absolute values . as far as i know , the generalization of k-way normalized clustering to signed graphs is new . finally , i show how the method of story_separator_special_tag volume 2 : xi . complex symmetric , skew-symmetric , and orthogonal matrices : 1. some formulas for complex orthogonal and unitary matrices 2. polar decomposition of a complex matrix 3. the normal form of a complex symmetric matrix 4. the normal form of a complex skew-symmetric matrix 5. the normal form of a complex orthogonal matrix xii . singular pencils of matrices : 1. introduction 2. regular pencils of matrices 3. singular pencils . the reduction theorem 4. the canonical form of a singular pencil of matrices 5. the minimal indices of a pencil . criterion for strong equivalence of pencils 6. singular pencils of quadratic forms 7. application to differential equations xiii . matrices with non-negative elements : 1. general properties 2. spectral properties of irreducible non-negative matrices 3. reducible matrices 4. the normal form of a reducible matrix 5. primitive and imprimitive matrices 6. stochastic matrices 7. limiting probabilities for a homogeneous markov chain with a finite number of states 8. totally non-negative matrices 9. oscillatory matrices xiv . applications of the theory of matrices to the investigation of systems of linear differential equations : 1. systems of linear differential equations with variable coefficients . general concepts story_separator_special_tag nonnegative matrix factorization ( nmf ) has become a widely used tool for the analysis of high-dimensional data as it automatically extracts sparse and meaningful features from a set of nonnegative data vectors . we first illustrate this property of nmf on three applications , in image processing , text mining and hyperspectral imaging -- this is the why . then we address the problem of solving nmf , which is np-hard in general . we review some standard nmf algorithms , and also present a recent subclass of nmf problems , referred to as near-separable nmf , that can be solved efficiently ( that is , in polynomial time ) , even in the presence of noise -- this is the how . finally , we briefly describe some problems in mathematics and computer science closely related to nmf via the nonnegative rank . story_separator_special_tag let $ g $ be a graph whose eigenvalues are $ \lambda_1 , \lambda_2 , \ldots , \lambda_n $ . the estrada index of $ g $ is equal to $ \sum_{i=1}^{n} e^{\lambda_i} $ . we point out certain classes of graphs whose characteristic polynomials are closely connected to the chebyshev polynomials of the second kind . various relations , in particular approximations , for the estrada index of these graphs are obtained .
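the estrada index is the trace of the same matrix exponential that appears in the communicability and subgraph-centrality abstracts above , so it can be computed directly from the adjacency spectrum . a minimal numpy sketch :

import numpy as np

def estrada_index(A):
    # ee(g) = sum_i exp(lambda_i) over the eigenvalues of the adjacency
    # matrix ; eigvalsh applies because A is symmetric for undirected graphs
    return np.exp(np.linalg.eigvalsh(A)).sum()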
story_separator_special_tag a numerically stable and fairly fast scheme is described to compute the unitary matrices u and v which transform a given matrix a into a diagonal form $ \sigma = u^* a v $ , thus exhibiting $ a $ 's singular values on $ \sigma $ 's diagonal . the scheme first transforms a to a bidiagonal matrix j , then diagonalizes j . the scheme described here is complicated but does not suffer from the computational difficulties which occasionally afflict some previously known methods . some applications are mentioned , in particular the use of the pseudo-inverse $ a^i = v \sigma^i u^* $ to solve least squares problems in a way which dampens spurious oscillation and cancellation . story_separator_special_tag this is a tutorial on some basic non-asymptotic methods and concepts in random matrix theory . the reader will learn several tools for the analysis of the extreme singular values of random matrices with independent rows or columns . many of these methods sprung off from the development of geometric functional analysis since the 1970 's . they have applications in several fields , most notably in theoretical computer science , statistics and signal processing . a few basic applications are covered in this text , particularly for the problem of estimating covariance matrices in statistics and for validating probabilistic constructions of measurement matrices in compressed sensing . these notes are written particularly for graduate students and beginning researchers in different areas , including functional analysts , probabilists , theoretical statisticians , electrical engineers , and theoretical computer scientists . story_separator_special_tag the simultaneous solution of $ ax = b $ and $ a^t y = g $ , where $ a $ is a non-singular matrix , is required in a number of situations . darmofal and lu have proposed a method based on the quasi-minimal residual algorithm ( qmr ) . we will introduce a technique for the same purpose based on the lsqr method and show how its performance can be improved when using the generalized lsqr method . we further show how preconditioners can be introduced to enhance the speed of convergence and discuss different preconditioners that can be used . the scattering amplitude $ g^t x $ , a widely used quantity in signal processing for example , has a close connection to the above problem since $ x $ represents the solution of the forward problem and $ g $ is the right-hand side of the adjoint system . we show how this quantity can be efficiently approximated using gauss quadrature and introduce a block-lanczos process that approximates the scattering amplitude , and which can also be used with preconditioning . story_separator_special_tag a large proportion of the scientific calculations performed on computers involves matrices . partly , this is because of the ubiquity of matrices in the mathematics of scientific problems , but it is also partly due to the fact that the use of matrices is ideally suited to the iterative type of calculation in which computers realize their full power . story_separator_special_tag most numerical integration techniques consist of approximating the integrand by a polynomial in a region or regions and then integrating the polynomial exactly . often a complicated integrand can be factored into a non-negative `` weight '' function and another function better approximated by a polynomial , thus $ \int_a^b g ( t ) \, dt = \int_a^b \omega ( t ) f ( t ) \, dt \approx \sum_{i=1}^{n} w_i f ( t_i ) $ . hopefully , the quadrature rule $ \{ w_j , t_j \}_{j=1}^{n} $ corresponding to the weight function $ \omega ( t ) $ is available in tabulated form , but more likely it is not . we present here two algorithms for generating the gaussian quadrature rule defined by the weight function when : a ) the three term recurrence relation is known for the orthogonal polynomials generated by $ \omega ( t ) $ , and b ) the moments of the weight function are known or can be calculated .
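route a ) in the abstract above is the golub-welsch construction : the gauss nodes are the eigenvalues of the symmetric tridiagonal jacobi matrix built from the three-term recurrence , and the weights come from the first components of its eigenvectors . a minimal numpy sketch for the legendre weight on [ -1 , 1 ] , where the recurrence coefficients and $ \mu_0 = 2 $ are the standard values :

import numpy as np

def gauss_legendre(n):
    # jacobi matrix for the legendre three-term recurrence :
    # off-diagonal entries beta_k = k / sqrt(4 k^2 - 1)
    k = np.arange(1, n)
    beta = k / np.sqrt(4.0 * k**2 - 1.0)
    J = np.diag(beta, 1) + np.diag(beta, -1)
    nodes, V = np.linalg.eigh(J)
    weights = 2.0 * V[0, :]**2          # mu_0 = integral of the weight = 2
    return nodes, weights

nodes, weights = gauss_legendre(5)
print(weights @ nodes**4, 2.0 / 5.0)    # exact for polynomials up to degree 9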
story_separator_special_tag deep learning allows computational models that are composed of multiple processing layers to learn representations of data with multiple levels of abstraction . these methods have dramatically improved the state-of-the-art in speech recognition , visual object recognition , object detection and many other domains such as drug discovery and genomics . deep learning discovers intricate structure in large data sets by using the backpropagation algorithm to indicate how a machine should change its internal parameters that are used to compute the representation in each layer from the representation in the previous layer . deep convolutional nets have brought about breakthroughs in processing images , video , speech and audio , whereas recurrent nets have shone light on sequential data such as text and speech . story_separator_special_tag we develop a novel , fundamental , and surprisingly simple randomized iterative method for solving consistent linear systems . our method has six different but equivalent interpretations : sketch-and-project , constrain-and-approximate , random intersect , random linear solve , random update , and random fixed point . by varying its two parameters -- a positive definite matrix ( defining geometry ) , and a random matrix ( sampled in an independent and identically distributed fashion in each iteration ) -- we recover a comprehensive array of well-known algorithms as special cases , including the randomized kaczmarz method , randomized newton method , randomized coordinate descent method , and random gaussian pursuit . we naturally also obtain variants of all these methods using blocks and importance sampling . however , our method allows for a much wider selection of these two parameters , which leads to a number of new specific methods . we prove exponential convergence of the expected norm of the error in a single theorem , from w . story_separator_special_tag during the last years , low-rank tensor approximation has been established as a new tool in scientific computing to address large-scale linear and multilinear algebra problems , which would be intractable by classical techniques . this survey attempts to give a literature overview of current developments in this area , with an emphasis on function-related tensors .
story_separator_special_tag a classical problem in matrix computations is the efficient and reliable approximation of a given matrix by a matrix of lower rank . the truncated singular value decomposition ( svd ) is known to provide the best such approximation for any given fixed rank . however , the svd is also known to be very costly to compute . among the different approaches in the literature for computing low-rank approximations , randomized algorithms have attracted researchers ' attention recently due to their surprising reliability and computational efficiency in different application areas . typically , such algorithms are shown to compute with very high probability low-rank approximations that are within a constant factor from optimal , and are known to perform even better in many practical situations . in this paper , we present a novel error analysis that considers randomized algorithms within the subspace iteration framework and show with very high probability that highly accurate low-rank approximations as well as singular values ca . story_separator_special_tag given an $ m \times n $ matrix $ m $ with $ m > n $ , it is shown that there exists a permutation $ \pi $ and an integer $ k $ such that the qr factorization $ m \pi = q \left( \begin{smallmatrix} a_k & b_k \\ & c_k \end{smallmatrix} \right) $ reveals the numerical rank of $ m $ : the $ k \times k $ upper-triangular matrix $ a_k $ is well conditioned , $ \| c_k \|_2 $ is small , and $ b_k $ is linearly dependent on $ a_k $ with coefficients bounded by a low-degree polynomial in $ n $ . existing rank-revealing qr ( rrqr ) algorithms are related to such factorizations and two algorithms are presented for computing them . the new algorithms are nearly as efficient as qr with column pivoting for most problems and take $ o ( m n^2 ) $ floating-point operations in the worst case . story_separator_special_tag community detection in real-world graphs has been shown to benefit from using multi-aspect information , e.g. , in the form of means of communication between nodes in the network . an orthogonal line of work , broadly construed as semi-supervised learning , approaches the problem by introducing a small percentage of node assignments to communities and propagates that knowledge throughout the graph . in this paper we introduce smacd , a novel semi-supervised multi-aspect community detection . to the best of our knowledge , smacd is the first approach to incorporate multi-aspect graph information and semi-supervision , while being able to discover communities . we extensively evaluate smacd 's performance in comparison to state-of-the-art approaches across six real and two synthetic datasets , and demonstrate that smacd , through combining semi-supervision and multi-aspect edge information , outperforms the baselines . story_separator_special_tag residual neural networks ( resnets ) are a promising class of deep neural networks that have shown excellent performance for a number of learning tasks , e.g. , image classification and recognition . ma . story_separator_special_tag low-rank tensor approximations are very promising for compression of deep neural networks . we propose a new simple and efficient iterative approach , which alternates low-rank factorization with smart rank selection and fine-tuning . we demonstrate the efficiency of our method compared to non-iterative ones . our approach improves the compression rate while maintaining the accuracy for a variety of tasks .
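the subspace-iteration analysis above concerns the following standard scheme : sketch the range with a random matrix , optionally apply a few power iterations , then compute a small svd . a minimal numpy sketch ( the oversampling p and iteration count q are illustrative defaults ) :

import numpy as np

def randomized_svd(A, r, p=10, q=2, seed=0):
    # randomized range finder with q steps of subspace iteration ,
    # re-orthonormalizing between applications of A and A.T for stability
    rng = np.random.default_rng(seed)
    Q, _ = np.linalg.qr(A @ rng.standard_normal((A.shape[1], r + p)))
    for _ in range(q):
        Q, _ = np.linalg.qr(A.T @ Q)
        Q, _ = np.linalg.qr(A @ Q)
    U, s, Vt = np.linalg.svd(Q.T @ A, full_matrices=False)
    return (Q @ U)[:, :r], s[:r], Vt[:r]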
story_separator_special_tag matrix functions are a central topic of linear algebra , and problems of their numerical approximation appear increasingly often in scientific computing . we review various rational krylov methods for the computation of large-scale matrix functions . emphasis is put on the rational arnoldi method and variants thereof , namely , the extended krylov subspace method and the shift-and-invert arnoldi method , but we also discuss the nonorthogonal generalized leja point ( or pain ) method . the issue of optimal pole selection for rational krylov methods applied for approximating the resolvent and exponential function , and functions of markov type , is treated in some detail . story_separator_special_tag matrix functions are a central topic of linear algebra , and problems requiring their numerical approximation appear increasingly often in scientific computing . we review various limited-memory methods for the approximation of the action of a large-scale matrix function on a vector . emphasis is put on polynomial methods , whose memory requirements are known or prescribed a priori . methods based on explicit polynomial approximation or interpolation , as well as restarted arnoldi methods , are treated in detail . an overview of existing software is also given , as well as a discussion of challenging open problems . story_separator_special_tag deep neural networks have become invaluable tools for supervised machine learning , e.g . classification of text or images . while often offering superior results over traditional techniques and successfully expressing complicated patterns in data , deep architectures are known to be challenging to design and train such that they generalize well to new data . critical issues with deep architectures are numerical instabilities in derivative-based learning algorithms commonly called exploding or vanishing gradients . in this paper , we propose new forward propagation techniques inspired by systems of ordinary differential equations ( ode ) that overcome this challenge and lead to well-posed learning problems for arbitrarily deep networks . the backbone of our approach is our interpretation of deep learning as a parameter estimation problem of nonlinear dynamical systems . given this formulation , we analyze stability and well-posedness of deep learning and use this new understanding to develop new network architectures . we relate the exploding and vanishing gradient phenomenon to the stability of the discrete ode and present several strategies for stabilizing deep learning for very deep networks . while our new architectures restrict the solution space , several numerical experiments show their competitiveness with state-of-the-art networks story_separator_special_tag this paper focuses on a set of structural properties which characterize the gahuku gama , a social and cultural unit in the eastern central highlands of new guinea . gahuku gama society has been described as a finite network , the members of which are social groups connected by bonds of traditional warfare and alliance . the aims of the paper are ( 1 ) to formalize some ordinary english terms which have been applied to this system ; ( 2 ) to elucidate these properties through the application of graph theoretic concepts and theorems ; ( 3 ) to use certain of these theorems in the prediction of empirical facts of local grouping ; and ( 4 ) to suggest that this approach could usefully be adopted in relation to similar sociocultural systems in highland new guinea and elsewhere . story_separator_special_tag the nonnegative matrix factorization ( nmf ) has been a popular model for a wide range of signal processing and machine learning problems . it is usually formulated as a nonconvex cost minimization problem . this work settles the convergence issue of a popular algorithm based on the alternating direction method of multipliers proposed in boyd et al. ( 2011 ) .
we show that the algorithm converges globally to the set of kkt solutions whenever a certain penalty parameter is greater than 1. we further extend the algorithm and its analysis to the problem where the observation matrix contains missing values . numerical experiments on real and synthetic data sets demonstrate the effectiveness of the algorithms under investigation . story_separator_special_tag scipy is an open-source scientific computing library for the python programming language . since its initial release in 2001 , scipy has become a de facto standard for leveraging scientific algorithms in python , with over 600 unique code contributors , thousands of dependent packages , over 100,000 dependent repositories and millions of downloads per year . in this work , we provide an overview of the capabilities and development practices of scipy 1.0 and highlight some recent technical developments . this perspective describes the development and capabilities of scipy 1.0 , an open source scientific computing library for the python programming language . story_separator_special_tag low-rank matrix approximations , such as the truncated singular value decomposition and the rank-revealing qr decomposition , play a central role in data analysis and scientific computing . this work surveys and extends recent research which demonstrates that randomization offers a powerful tool for performing low-rank matrix approximation . these techniques exploit modern computational architectures more fully than classical methods and open the possibility of dealing with truly massive data sets . this paper presents a modular framework for constructing randomized algorithms that compute partial matrix decompositions . these methods use random sampling to identify a subspace that captures most of the action of a matrix . the input matrix is then compressed either explicitly or implicitly to this subspace , and the reduced matrix is manipulated deterministically to obtain the desired low-rank factorization . in many cases , this approach beats its classical competitors in terms of accuracy , robustness , and/or speed . these claims are supported by extensive numerical experiments and a detailed error analysis . the specific benefits of randomized techniques depend on the computational environment . consider the model problem of finding the $ k $ dominant components of the singular value decomposition of an story_separator_special_tag the cur decomposition is a factorization of a low-rank matrix obtained by selecting certain column and row submatrices of it . we perform a thorough investigation of what happens to such decompositi . story_separator_special_tag the truncated singular value decomposition ( svd ) is considered as a method for regularization of ill-posed linear least squares problems . in particular , the truncated svd solution is compared with the usual regularized solution . necessary conditions are defined in which the two methods will yield similar results . this investigation suggests the truncated svd as a favorable alternative to standard-form regularization in case of ill-conditioned matrices with a well-determined rank . story_separator_special_tag data science is not only a synthetic concept to unify statistics , data analysis and their related methods but also comprises its results . it includes three phases , design for data , collection of data , and analysis on data . fundamental concepts and various methods based on it are discussed with a heuristic example .
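the truncated-svd regularization abstract above replaces the ill-conditioned inverse by an inverse restricted to the k dominant singular triplets . a minimal numpy sketch :

import numpy as np

def tsvd_solve(A, b, k):
    # truncated-svd solution of min || A x - b || : invert only the k
    # well-determined singular values and discard the noisy remainder
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return Vt[:k].T @ ((U[:, :k].T @ b) / s[:k])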
story_separator_special_tag with advances in data collection technologies , tensor data is assuming increasing prominence in many applications and the problem of supervised tensor learning has emerged as a topic of critical significance in the data mining and machine learning community . conventional methods for supervised tensor learning mainly focus on learning kernels by flattening the tensor into vectors or matrices ; however , structural information within the tensors will be lost . in this paper , we introduce a new scheme to design structure-preserving kernels for supervised tensor learning . specifically , we demonstrate how to leverage the naturally available structure within the tensorial representation to encode prior knowledge in the kernel . we propose a tensor kernel that can preserve tensor structures based upon dual-tensorial mapping . the dual-tensorial mapping function can map each tensor instance in the input space to another tensor in the feature space while preserving the tensorial structure . theoretically , our approach is an extension of the conventional kernels in the vector space to tensor space . we applied our novel kernel in conjunction with svm to real-world tensor classification problems including brain fmri classification for three different diseases ( i.e. , alzheimer 's disease , story_separator_special_tag in the context of supervised tensor learning , preserving the structural information and exploiting the discriminative nonlinear relationships of tensor data are crucial for improving the performance of learning tasks . based on tensor factorization theory and kernel methods , we propose a novel kernelized support tensor machine ( kstm ) which integrates kernelized tensor factorization with maximum-margin criterion . specifically , the kernelized factorization technique is introduced to approximate the tensor data in kernel space such that the complex nonlinear relationships within tensor data can be explored . further , dual structural preserving kernels are devised to learn the nonlinear boundary between tensor data . as a result of joint optimization , the kernels obtained in kstm exhibit better generalization power to discriminative analysis . the experimental results on real-world neuroimaging datasets show the superiority of kstm over the state-of-the-art techniques . story_separator_special_tag deep learning 's recent successes have mostly relied on convolutional networks , which exploit fundamental statistical properties of images , sounds and video data : the local stationarity and multi-scale compositional structure , that allows expressing long range interactions in terms of shorter , localized interactions . however , there exist other important examples , such as text documents or bioinformatic data , that may lack some or all of these strong statistical regularities . in this paper we consider the general question of how to construct deep architectures with small learning complexity on general non-euclidean domains , which are typically unknown and need to be estimated from the data . in particular , we develop an extension of spectral networks which incorporates a graph estimation procedure , that we test on large-scale classification problems , matching or improving over dropout networks with far fewer parameters to estimate .
story_separator_special_tag lanczos bidiagonalization is a competitive method for computing a partial singular value decomposition of a large sparse matrix , that is , when only a subset of the singular values and corresponding singular vectors are required . however , a straightforward implementation of the algorithm has the problem of loss of orthogonality between computed lanczos vectors , and some reorthogonalization technique must be applied . also , an effective restarting strategy must be used to prevent excessive growth of the cost of reorthogonalization per iteration . on the other hand , if the method is to be implemented on a distributed-memory parallel computer , then additional precautions are required so that parallel efficiency is maintained as the number of processors increases . in this paper , we present a lanczos bidiagonalization procedure implemented in slepc , a software library for the solution of large , sparse eigenvalue problems on parallel computers . the solver is numerically robust and scales well up to hundreds of processors . story_separator_special_tag i newton 's method and the gradient method.- 1 introduction.- 2 fundamental concepts.- 3 iterative methods for solving g ( x ) = 0.- 4 convergence theorems.- 5 minimization of functions by newton 's method.- 6 gradient methods-the quadratic case.- 7 general descent methods.- 8 iterative methods for solving linear equations.- 9 constrained minima.- ii conjugate direction methods.- 1 introduction.- 2 quadratic functions on en.- 3 basic properties of quadratic functions.- 4 minimization of a quadratic function f on k-planes.- 5 method of conjugate directions ( cd-method ) .- 6 method of conjugate gradients ( cg-algorithm ) .- 7 gradient partan.- 8 cg-algorithms for nonquadratic functions.- 9 numerical examples.- 10 least square solutions.- iii conjugate gram-schmidt processes.- 1 introduction.- 2 a conjugate gram-schmidt process.- 3 cgs-cg-algorithms.- 4 a connection of cgs-algorithms with gaussian elimination.- 5 method of parallel displacements.- 6 methods of parallel planes ( parp ) .- 7 modifications of parallel displacements algorithms.- 8 cgs-algorithms for nonquadratic functions.- 9 cgs-cg-routines for nonquadratic functions.- 10 gauss-seidel cgs-routines.- 11 the case of nonnegative components.- 12 general linear inequality constraints.- iv conjugate gradient algorithms.- 1 introduction.- 2 conjugate gradient algorithms.- 3 the normalized cg-algorithm.- 4 termination.- 5 clustered eigenvalues.- 6 nonnegative hessians.- story_separator_special_tag multilayered artificial neural networks are becoming a pervasive tool in a host of application fields . at the heart of this deep learning revolution are familiar concepts from applied and computati . story_separator_special_tag a catalogue of software for computing matrix functions and their fréchet derivatives is presented . for a wide variety of languages and for software ranging from commercial products to open source packages we describe what matrix function codes are available and which algorithms they implement .
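the slepc abstract above refers to reorthogonalization in lanczos bidiagonalization ; a minimal golub-kahan-lanczos sketch follows , with full ( rather than the cheaper selective ) reorthogonalization as an illustrative simplification .

import numpy as np

def lanczos_bidiag(A, k, seed=0):
    # golub-kahan-lanczos : A V = U B with B lower bidiagonal , built from
    # alpha ( diagonal ) and beta ( subdiagonal ) ; full reorthogonalization
    m, n = A.shape
    rng = np.random.default_rng(seed)
    U = np.zeros((m, k + 1)); V = np.zeros((n, k))
    alpha = np.zeros(k); beta = np.zeros(k)
    u = rng.standard_normal(m)
    U[:, 0] = u / np.linalg.norm(u)
    for j in range(k):
        v = A.T @ U[:, j]
        if j > 0:
            v -= beta[j - 1] * V[:, j - 1]
        v -= V[:, :j] @ (V[:, :j].T @ v)          # reorthogonalize against V
        alpha[j] = np.linalg.norm(v); V[:, j] = v / alpha[j]
        u = A @ V[:, j] - alpha[j] * U[:, j]
        u -= U[:, :j + 1] @ (U[:, :j + 1].T @ u)  # reorthogonalize against U
        beta[j] = np.linalg.norm(u); U[:, j + 1] = u / beta[j]
    return U, V, alpha, beta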
story_separator_special_tag krylov subspace methods for approximating the action of matrix exponentials are analyzed in this paper . we derive error bounds via a functional calculus of arnoldi and lanczos methods that reduces the study of krylov subspace approximations of functions of matrices to that of linear systems of equations . as a side result , we obtain error bounds for galerkin-type krylov methods for linear equations , namely , the biconjugate gradient method and the full orthogonalization method . for krylov approximations to matrix exponentials , we show superlinear error decay from relatively small iteration numbers onwards , depending on the geometry of the numerical range , the spectrum , or the pseudospectrum . the convergence to $ \exp ( \tau a ) v $ is faster than that of corresponding krylov methods for the solution of linear equations $ ( i - \tau a ) x = v $ , which usually arise in the numerical solution of stiff ordinary differential equations ( odes ) . we therefore propose a new class of time integration methods for large systems of nonlinear differential equations which use krylov approximations to the exponential function of the jacobian instead of solving linear or nonlinear systems of equations in story_separator_special_tag for the accurate approximation of the minimal singular triple ( singular value and left and right singular vector ) of a large sparse matrix , we may use two separate search spaces , one for the left , and one for the right singular vector . in lanczos bidiagonalization , for example , such search spaces are constructed . in siam j. sci . comput. , 23 ( 2 ) ( 2002 ) , pp . 606 -- 628 , the author proposes a jacobi-davidson type method for the singular value problem , where solutions to certain correction equations are used to expand the search spaces . as noted in the mentioned paper , the standard galerkin subspace extraction works well for the computation of large singular triples , but may lead to unsatisfactory approximations to small and interior triples . to overcome this problem for the smallest triples , we propose three harmonic and a refined approach . all methods are derived in a number of different ways . some of these methods can also be applied when we are interested in the largest or interior singular triples . theoretical results as well as numerical experiments indicate that story_separator_special_tag list of tables and figures . preface . foreword . section i : the taxonomy , educational objectives and student learning . 1. introduction . 2. the structure , specificity , and problems of objectives . section ii : the revised taxonomy structure . 3. the taxonomy table . 4. the knowledge dimension . 5. the cognitive process dimension . section iii : the taxonomy in use . 6. using the taxonomy table . 7. introduction to the vignettes . 8. nutrition vignette . 9. macbeth vignette . 10. addition facts vignette . 11. parliamentary acts vignette . 12. volcanoes ? here ? vignette . 13. report writing vignette . 14. addressing long-standing problems in classroom instruction . appendices . appendix a : summary of the changes from the original framework . appendix b : condensed version of the original taxonomy of educational objectives : cognitive domain . references . credits . index .
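the krylov matrix-exponential analysis above underlies the following textbook construction : build an arnoldi basis , exponentiate the small hessenberg projection , and lift the result back . a minimal numpy/scipy sketch ( no breakdown handling or restarting , which the limited-memory survey above discusses ) :

import numpy as np
from scipy.linalg import expm

def arnoldi_expm_v(A, v, tau, k):
    # approximate exp(tau * A) v from the k-dimensional krylov subspace
    # span { v , Av , ... , A^{k-1} v }
    n = v.size
    V = np.zeros((n, k + 1)); H = np.zeros((k + 1, k))
    beta = np.linalg.norm(v)
    V[:, 0] = v / beta
    for j in range(k):
        w = A @ V[:, j]
        for i in range(j + 1):                  # modified gram-schmidt
            H[i, j] = V[:, i] @ w
            w -= H[i, j] * V[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        V[:, j + 1] = w / H[j + 1, j]
    e1 = np.zeros(k); e1[0] = 1.0
    # exponentiate only the small projected matrix , then lift back
    return beta * V[:, :k] @ (expm(tau * H[:k, :k]) @ e1)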
story_separator_special_tag kernel machines such as kernel svm and kernel ridge regression usually construct high quality models ; however , their use in real-world applications remains limited due to the high prediction cost . in this paper , we present two novel insights for improving the prediction efficiency of kernel machines . first , we show that by adding `` pseudo landmark points '' to the classical nystrom kernel approximation in an elegant way , we can significantly reduce the prediction error without much additional prediction cost . second , we provide a new theoretical analysis on bounding the error of the solution computed by using the nystrom kernel approximation method , and show that the error is related to the weighted k-means objective function where the weights are given by the model computed from the original kernel . this theoretical insight suggests a new landmark point selection technique for the situation where we have knowledge of the original model . based on these two insights , we provide a divide-and-conquer framework for improving the prediction speed . first , we divide the whole problem into smaller local subproblems to reduce the problem size . in the second phase , we develop a story_separator_special_tag advances in causal modeling techniques have made it possible for researchers to simultaneously examine theory and measures . however , researchers must use these new techniques appropriately . in addition to dealing with the methodological concerns associated with more traditional methods of analysis , researchers using causal modeling approaches must understand their underlying assumptions and limitations . most researchers are well equipped with a basic understanding of lisrel-type models . in contrast , current familiarity with pls in the strategic management area is low . the current paper reviews four recent studies in the strategic management area which use pls . the review notes that the technique has been applied inconsistently , and at times inappropriately , and suggests standards for evaluating future pls applications . copyright © 1999 john wiley & sons , ltd . story_separator_special_tag an unbiased stochastic estimator of tr ( i - a ) , where a is the influence matrix associated with the calculation of laplacian smoothing splines , is described . the estimator is similar to one recently developed by girard but satisfies a minimum variance criterion and does not require the simulation of a standard normal variable . it uses instead simulations of the discrete random variable which takes the values 1 , -1 each with probability 1/2 . bounds on the variance of the estimator , similar to those established by girard , are obtained using elementary methods . the estimator can be used to approximately minimize generalised cross validation ( gcv ) when using discretized iterative methods for fitting laplacian smoothing splines to very large data sets . simulated examples show that the estimated trace values , using either the estimator presented here or the estimator of girard , perform almost as well as the exact values when applied to the minimization of gcv for n as small as a few hundred , where n is the number .
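the estimator described above is the rademacher ( hutchinson-style ) trace estimator : for probe vectors $ z $ with independent $ \pm 1 $ entries , $ e [ z^t m z ] = tr ( m ) $ . a minimal matrix-free numpy sketch :

import numpy as np

def hutchinson_trace(matvec, n, num_probes=50, seed=0):
    # unbiased trace estimate from rademacher probes ; matvec applies the
    # matrix implicitly , e.g . lambda v : v - A @ v to estimate tr ( i - a )
    rng = np.random.default_rng(seed)
    total = 0.0
    for _ in range(num_probes):
        z = rng.choice([-1.0, 1.0], size=n)
        total += z @ matvec(z)
    return total / num_probes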
story_separator_special_tag this article provides an update on the global cancer burden using the globocan 2020 estimates of cancer incidence and mortality produced by the international agency for research on cancer . worldwide , an estimated 19.3 million new cancer cases ( 18.1 million excluding nonmelanoma skin cancer ) and almost 10.0 million cancer deaths ( 9.9 million excluding nonmelanoma skin cancer ) occurred in 2020. female breast cancer has surpassed lung cancer as the most commonly diagnosed cancer , with an estimated 2.3 million new cases ( 11.7 % ) , followed by lung ( 11.4 % ) , colorectal ( 10.0 % ) , prostate ( 7.3 % ) , and stomach ( 5.6 % ) cancers . lung cancer remained the leading cause of cancer death , with an estimated 1.8 million deaths ( 18 % ) , followed by colorectal ( 9.4 % ) , liver ( 8.3 % ) , stomach ( 7.7 % ) , and female breast ( 6.9 % ) cancers . overall incidence was from 2-fold to 3-fold higher in transitioned versus transitioning countries for both sexes , whereas mortality varied < 2-fold for men and little for women . story_separator_special_tag statistics an introduction to statistical learning with applications in r an introduction to statistical learning provides an accessible overview of the field of statistical learning , an essential toolset for making sense of the vast and complex data sets that have emerged in fields ranging from biology to finance to marketing to astrophysics in the past twenty years . this book presents some of the most important modeling and prediction techniques , along with relevant applications . topics include linear regression , classification , resampling methods , shrinkage approaches , tree-based methods , support vector machines , clustering , and more . color graphics and real-world examples are used to illustrate the methods presented . since the goal of this textbook is to facilitate the use of these statistical learning techniques by practitioners in science , industry , and other fields , each chapter contains a tutorial on implementing the analyses and methods presented in r , an extremely popular open source statistical software platform . two of the authors co-wrote the elements of statistical learning ( hastie , tibshirani and friedman , 2nd edition 2009 ) , a popular reference book story_separator_special_tag dynamic time warping ( dtw ) , which finds the minimum path by providing non-linear alignments between two time series , has been widely used as a distance measure for time series classification and clustering . however , dtw does not account for the relative importance regarding the phase difference between a reference point and a testing point . this may lead to misclassification especially in applications where the shape similarity between two sequences is a major consideration for an accurate recognition . therefore , we propose a novel distance measure , called a weighted dtw ( wdtw ) , which is a penalty-based dtw . our approach penalizes points with higher phase difference between a reference point and a testing point in order to prevent minimum distance distortion caused by outliers . the rationale underlying the proposed distance measure is demonstrated with some illustrative examples . a new weight function , called the modified logistic weight function ( mlwf ) , is also proposed to systematically assign weights as a function of the phase difference between a reference point and a testing point . by applying different weights to adjacent points , the proposed algorithm can enhance the detection story_separator_special_tag principal component analysis ( pca ) is a multivariate technique that analyzes a data table in which observations are described by several inter-correlated quantitative dependent variables . its goal is to extract the important information from the table , to represent it as a set of new orthogonal variables called principal components , and to display the pattern of similarity of the observations and of the variables as points in maps . the quality of the pca model can be evaluated using cross-validation techniques such as the bootstrap and the jackknife . pca can be generalized as correspondence analysis ( ca ) in order to handle qualitative variables and as multiple factor analysis ( mfa ) in order to handle heterogeneous sets of variables .
mathematically , pca depends upon the eigen-decomposition of positive semi-definite matrices and upon the singular value decomposition ( svd ) of rectangular matrices . copyright © 2010 john wiley & sons , inc . story_separator_special_tag large datasets are increasingly common and are often difficult to interpret . principal component analysis ( pca ) is a technique for reducing the dimensionality of such datasets , increasing interpretability but at the same time minimizing information loss . it does so by creating new uncorrelated variables that successively maximize variance . finding such new variables , the principal components , reduces to solving an eigenvalue/eigenvector problem , and the new variables are defined by the dataset at hand , not a priori , hence making pca an adaptive data analysis technique . it is adaptive in another sense too , since variants of the technique have been developed that are tailored to various different data types and structures . this article will begin by introducing the basic ideas of pca , discussing what it can and can not do . it will then describe some variants of pca and their application . story_separator_special_tag in this paper , we present algorithms for the approximation of multivariate periodic functions by trigonometric polynomials . the approximation is based on sampling of multivariate functions on rank-1 lattices . to this end , we study the approximation of periodic functions of a certain smoothness . our considerations include functions from periodic sobolev spaces of generalized mixed smoothness . recently an algorithm for the trigonometric interpolation on generalized sparse grids for this class of functions was investigated by griebel and hamaekers ( 2014 ) . the main advantage of our method is that the algorithm is based mainly on a single one-dimensional fast fourier transform , and that the arithmetic complexity of the algorithm depends only on the cardinality of the support of the trigonometric polynomial in the frequency domain . therefore , we investigate trigonometric polynomials with frequencies supported on hyperbolic crosses and energy norm based hyperbolic crosses in more detail . furthermore , we present an algorithm for sampling multivariate functions on perturbed rank-1 lattices and show the numerical stability of the suggested method . numerical results are presented up to dimension d = 10 , which confirm the theoretical findings . story_separator_special_tag in k-means clustering , we are given a set of n data points in d-dimensional space $ r^d $ and an integer k and the problem is to determine a set of k points in $ r^d $ , called centers , so as to minimize the mean squared distance from each data point to its nearest center . a popular heuristic for k-means clustering is lloyd 's ( 1982 ) algorithm . we present a simple and efficient implementation of lloyd 's k-means clustering algorithm , which we call the filtering algorithm . this algorithm is easy to implement , requiring a kd-tree as the only major data structure . we establish the practical efficiency of the filtering algorithm in two ways . first , we present a data-sensitive analysis of the algorithm 's running time , which shows that the algorithm runs faster as the separation between clusters increases . second , we present a number of empirical studies both on synthetically generated data and on real data sets from applications in color quantization , data compression , and image segmentation .
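the filtering algorithm above accelerates the assignment step of lloyd 's iteration with a kd-tree ; the plain iteration it accelerates just alternates assignment and re-centering . a minimal numpy sketch of the plain version :

import numpy as np

def lloyd_kmeans(X, k, iters=100, seed=0):
    # plain lloyd iteration : assign each point to its nearest center ,
    # then move each center to the mean of its cluster
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        d = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        labels = d.argmin(1)
        new = np.array([X[labels == j].mean(0) if np.any(labels == j)
                        else centers[j] for j in range(k)])
        if np.allclose(new, centers):
            break
        centers = new
    return centers, labels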
story_separator_special_tag any image can be represented as a function defined on a discrete weighted graph whose vertices are image pixels . each pixel can be linked to other pixels via graph edges with corresponding weights derived from similarities between image pixels ( graph vertices ) measured in some appropriate fashion . image structure is encoded in the laplacian matrix derived from these similarity weights . taking advantage of this graph-based point of view , we present a general regularization framework for image denoising . a number of well-known existing denoising methods like bilateral , nlm , and lark , can be described within this formulation . moreover , we present an analysis for the filtering behavior of the proposed method based on the spectral properties of laplacian matrices . some of the well established iterative approaches for improving kernel-based denoising like diffusion and boosting iterations are special cases of our general framework . the proposed approach provides a better understanding of enhancement mechanisms in self similarity-based methods , which can be used for their further improvement . experimental results verify the effectiveness of this approach for the task of image denoising . story_separator_special_tag this paper presents a standardized notation and terminology to be used for three- and multiway analyses , especially when these involve ( variants of ) the candecomp/parafac model and the tucker model . the notation also deals with basic aspects such as symbols for different kinds of products , and terminology for three- and higher-way data . the choices for terminology and symbols to be used have to some extent been based on earlier ( informal ) conventions . simplicity and reduction of the possibility of confusion have also played a role in the choices made . copyright ( c ) 2000 john wiley & sons , ltd . story_separator_special_tag we present a scalable approach for semi-supervised learning on graph-structured data that is based on an efficient variant of convolutional neural networks which operate directly on graphs . we motivate the choice of our convolutional architecture via a localized first-order approximation of spectral graph convolutions . our model scales linearly in the number of graph edges and learns hidden layer representations that encode both local graph structure and features of nodes . in a number of experiments on citation networks and on a knowledge graph dataset we demonstrate that our approach outperforms related methods by a significant margin .
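the localized first-order approximation in the gcn abstract above reduces each layer to multiplication by a renormalized adjacency matrix , $ h' = \mathrm{relu} ( \hat d^{-1/2} ( a + i ) \hat d^{-1/2} h w ) $ . a minimal numpy sketch of one propagation step :

import numpy as np

def gcn_layer(A, H, W):
    # renormalization trick : add self-loops , then symmetrically normalize
    A_hat = A + np.eye(A.shape[0])
    d = A_hat.sum(axis=1)
    A_norm = A_hat / np.sqrt(np.outer(d, d))
    return np.maximum(A_norm @ H @ W, 0.0)    # relu activation

stacking two such layers , with a trainable weight matrix W per layer , gives the citation-network classifier the abstract evaluates .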
story_separator_special_tag low rank approximation of matrices has been well studied in the literature . singular value decomposition , qr decomposition with column pivoting , rank revealing qr factorization , interpolative decomposi . story_separator_special_tag in most natural and engineered systems , a set of entities interact with each other in complicated patterns that can encompass multiple types of relationships , change in time , and include other types of complications . such systems include multiple subsystems and layers of connectivity , and it is important to take such `` multilayer '' features into account to try to improve our understanding of complex systems . consequently , it is necessary to generalize `` traditional '' network theory by developing ( and validating ) a framework and associated tools to study multilayer systems in a comprehensive fashion . the origins of such efforts date back several decades and arose in multiple disciplines , and now the study of multilayer networks has become one of the most important directions in network science . in this paper , we discuss the history of multilayer networks ( and related concepts ) and review the exploding body of work on such networks . to unify the disparate terminology in the large body of recent work , we discuss a general framework for multilayer networks , construct a dictionary of terminology to relate the numerous existing concepts to each other , story_separator_special_tag for large square matrices a and functions f , the numerical approximation of the action of f ( a ) to a vector v has received considerable attention in the last two decades . in this paper we investigate the extended krylov subspace method , a technique that was recently proposed to approximate f ( a ) v for a symmetric . we provide a new theoretical analysis of the method , which improves the original result for a symmetric , and gives a new estimate for a nonsymmetric . numerical experiments confirm that the new error estimates correctly capture the linear asymptotic convergence rate of the approximation . by using recent algorithmic improvements , we also show that the method is computationally competitive with respect to other enhancement techniques . story_separator_special_tag this survey provides an overview of higher-order tensor decompositions , their applications , and available software . a tensor is a multidimensional or $ n $ -way array . decompositions of higher-order tensors ( i.e. , $ n $ -way arrays with $ n \geq 3 $ ) have applications in psychometrics , chemometrics , signal processing , numerical linear algebra , computer vision , numerical analysis , data mining , neuroscience , graph analysis , and elsewhere . two particular tensor decompositions can be considered to be higher-order extensions of the matrix singular value decomposition : candecomp/parafac ( cp ) decomposes a tensor as a sum of rank-one tensors , and the tucker decomposition is a higher-order form of principal component analysis . there are many other tensor decompositions , including indscal , parafac2 , candelinc , dedicom , and paratuck2 as well as nonnegative variants of all of the above . the n-way toolbox , tensor toolbox , and multilinear engine are examples of software packages for working with tensors . story_separator_special_tag as the netflix prize competition has demonstrated , matrix factorization models are superior to classic nearest neighbor techniques for producing product recommendations , allowing the incorporation of additional information such as implicit feedback , temporal effects , and confidence levels . story_separator_special_tag spectral algorithms are classic approaches to clustering and community detection in networks .
however , for sparse networks the standard versions of these algorithms are suboptimal , in some cases completely failing to detect communities even when other algorithms such as belief propagation can do so . here , we present a class of spectral algorithms based on a nonbacktracking walk on the directed edges of the graph . the spectrum of this operator is much better-behaved than that of the adjacency matrix or other commonly used matrices , maintaining a strong separation between the bulk eigenvalues and the eigenvalues relevant to community structure even in the sparse case . we show that our algorithm is optimal for graphs generated by the stochastic block model , detecting communities all of the way down to the theoretical limit . we also show the spectrum of the nonbacktracking operator for some real-world networks , illustrating its advantages over traditional spectral clustering . story_separator_special_tag it was only a matter of time before deep neural networks ( dnns ) -- deep learning -- made their mark in turbulence modelling , or more broadly , in the general area of high-dimensional , complex dynamical systems . in the last decade , dnns have become a dominant data mining tool for big data applications . although neural networks have been applied previously to complex fluid flows , the article featured here ( ling et al. , j. fluid mech. , vol . 807 , 2016 , pp . 155 -- 166 ) is the first to apply a true dnn architecture , specifically to reynolds averaged navier stokes turbulence models . as one often expects with modern dnns , performance gains are achieved over competing state-of-the-art methods , suggesting that dnns may play a critically enabling role in the future of modelling complex flows . story_separator_special_tag the present investigation designs a systematic method for finding the latent roots and the principal axes of a matrix , without reducing the order of the matrix . it is characterized by a wide field of applicability and great accuracy , since the accumulation of rounding errors is avoided , through the process of `` minimized iterations '' . moreover , the method leads to a well convergent successive approximation procedure by which the solution of integral equations of the fredholm type and the solution of the eigenvalue problem of linear differential and integral operators may be accomplished . story_separator_special_tag we propose a simple two-step approach for speeding up convolution layers within large convolutional neural networks based on tensor decomposition and discriminative fine-tuning . given a layer , we use non-linear least squares to compute a low-rank cp-decomposition of the 4d convolution kernel tensor into a sum of a small number of rank-one tensors . at the second step , this decomposition is used to replace the original convolutional layer with a sequence of four convolutional layers with small kernels . after such replacement , the entire network is fine-tuned on the training data using standard backpropagation process . we evaluate this approach on two cnns and show that it is competitive with previous approaches , leading to higher obtained cpu speedups at the cost of lower accuracy drops for the smaller of the two networks . thus , for the 36-class character classification cnn , our approach obtains a 8.5x cpu speedup of the whole network with only minor accuracy drop ( 1 % from 91 % to 90 % ) .
for the standard imagenet architecture ( alexnet ) , the approach speeds up the second convolution layer by a factor of 4x at the cost story_separator_special_tag background the initial cases of novel coronavirus ( 2019-ncov ) infected pneumonia ( ncip ) occurred in wuhan , hubei province , china , in december 2019 and january 2020. we analyzed data on the first 425 confirmed cases in wuhan to determine the epidemiologic characteristics of ncip . methods we collected information on demographic characteristics , exposure history , and illness timelines of laboratory-confirmed cases of ncip that had been reported by january 22 , 2020. we described characteristics of the cases and estimated the key epidemiologic time-delay distributions . in the early period of exponential growth , we estimated the epidemic doubling time and the basic reproductive number . results among the first 425 patients with confirmed ncip , the median age was 59 years and 56 % were male . the majority of cases ( 55 % ) with onset before january 1 , 2020 , were linked to the huanan seafood wholesale market , as compared with 8.6 % of the subsequent cases . the mean incubation period was 5.2 days ( 95 % confidence interval [ ci ] , 4.1 to 7.0 ) , with the 95th percentile of the distribution at 12.5 days story_separator_special_tag non-negative matrix factorization ( nmf ) has previously been shown to be a useful decomposition for multivariate data . two different multiplicative algorithms for nmf are analyzed . they differ only slightly in the multiplicative factor used in the update rules . one algorithm can be shown to minimize the conventional least squares error while the other minimizes the generalized kullback-leibler divergence . the monotonic convergence of both algorithms can be proven using an auxiliary function analogous to that used for proving convergence of the expectation-maximization algorithm . the algorithms can also be interpreted as diagonally rescaled gradient descent , where the rescaling factor is optimally chosen to ensure convergence . story_separator_special_tag graph and hypergraph matching are important problems in computer vision . they are successfully used in many applications requiring 2d or 3d feature matching , such as 3d reconstruction and object recognition . while graph matching is limited to using pairwise relationships , hypergraph matching permits the use of relationships between sets of features of any order . consequently , it carries the promise to make matching more robust to changes in scale , deformations and outliers . in this paper we make two contributions . first , we present the first semi-supervised algorithm for learning the parameters that control the hypergraph matching model and demonstrate experimentally that it significantly improves the performance of current state-of-the-art methods . second , we propose a novel efficient hypergraph matching algorithm , which outperforms the state-of-the-art , and , when used in combination with other higher-order matching algorithms , it consistently improves their performance . story_separator_special_tag we study online social networks in which relationships can be either positive ( indicating relations such as friendship ) or negative ( indicating relations such as opposition or antagonism ) . such a mix of positive and negative links arises in a variety of online settings ; we study datasets from epinions , slashdot and wikipedia .
we find that the signs of links in the underlying social networks can be predicted with high accuracy , using models that generalize across this diverse range of sites . these models provide insight into some of the fundamental principles that drive the formation of signed links in networks , shedding light on theories of balance and status from social psychology ; they also suggest social computing applications by which the attitude of one user toward another can be estimated from evidence provided by their relationships with other members of the surrounding social network . story_separator_special_tag the rise of graph-structured data such as social networks , regulatory networks , citation graphs , and functional brain networks , in combination with the resounding success of deep learning in various applications , has raised interest in generalizing deep learning models to non-euclidean domains . in this paper , we introduce a new spectral domain convolutional architecture for deep learning on graphs . the core ingredient of our model is a new class of parametric rational complex functions ( cayley polynomials ) allowing efficient computation of spectral filters on graphs that specialize on frequency bands of interest . our model generates rich spectral filters that are localized in space , scales linearly with the size of the input data for sparsely connected graphs , and can handle different constructions of laplacian operators . extensive experimental results show the superior performance of our approach , in comparison to other spectral domain convolutional architectures , on spectral image classification , community detection , vertex classification , and matrix completion tasks . story_separator_special_tag we study sparse solutions of optimal control problems governed by pdes with uncertain coefficients . we propose two formulations , one where the solution is a deterministic control optimizing the mean . story_separator_special_tag the nystrom method is an efficient technique for the eigenvalue decomposition of large kernel matrices . however , to ensure an accurate approximation , a sufficient number of columns have to be sampled . on very large data sets , the singular value decomposition ( svd ) step on the resultant data submatrix can quickly dominate the computations and become prohibitive . in this paper , we propose an accurate and scalable nystrom scheme that first samples a large column subset from the input matrix , but then only performs an approximate svd on the inner submatrix using the recent randomized low-rank matrix approximation algorithms . theoretical analysis shows that the proposed algorithm is as accurate as the standard nystrom method that directly performs a large svd on the inner submatrix . on the other hand , its time complexity is only as low as performing a small svd . encouraging results are obtained on a number of large-scale data sets for low-rank approximation . moreover , as the most computationally expensive steps can be easily distributed and there is minimal data transfer among the processors , significant speedup can be further obtained with the use of story_separator_special_tag we investigate a scalable $ m $ -channel critically sampled filter bank for graph signals , where each of the $ m $ filters is supported on a different subband of the graph laplacian spectrum . for analysis , the graph signal is filtered on each subband and downsampled on a corresponding set of vertices .
however , the classical synthesis filters are replaced with interpolation operators . for small graphs , we use a full eigendecomposition of the graph laplacian to partition the graph vertices such that the $ m $ -th set comprises a uniqueness set for signals supported on the $ m $ -th subband . the resulting transform is critically sampled , the dictionary atoms are orthogonal to those supported on different bands , and graph signals are perfectly reconstructable from their analysis coefficients . we also investigate fast versions of the proposed transform that scale efficiently for large , sparse graphs . issues that arise in this context include designing the filter bank to be more amenable to polynomial approximation , estimating the number of samples required for each band , performing nonuniform random sampling story_separator_special_tag non-negative matrix factorization ( nmf ) has been one of the most popular methods for feature learning in the field of machine learning and computer vision . most existing works directly apply nmf on high-dimensional image datasets for computing the effective representation of the raw images . however , the common essential information of a given class of images is hidden in their low rank parts . for obtaining an effective low-rank data representation , in this paper we propose a non-negative low-rank matrix factorization ( nlmf ) method for image clustering . for the purpose of improving its robustness for the data in a manifold structure , we further propose a graph regularized nlmf by incorporating the manifold structure information into our proposed objective function . finally , we develop an efficient alternating iterative algorithm to learn the low-dimensional representation of low-rank parts of images for clustering . alternatively , we also incorporate robust principal component analysis into our proposed scheme . experimental results on four image datasets reveal that our proposed methods outperform four representative methods . story_separator_special_tag in physics , it is sometimes desirable to compute the so-called `` density of states '' ( dos ) , also known as the `` spectral density '' , of a real symmetric matrix $ a $ . the spectral density can be viewed as a probability density distribution that measures the likelihood of finding eigenvalues near some point on the real line . the most straightforward way to obtain this density is to compute all eigenvalues of $ a $ . but this approach is generally costly and wasteful , especially for matrices of large dimension . there exist alternative methods that allow us to estimate the spectral density function at much lower cost . the major computational cost of these methods is in multiplying $ a $ with a number of vectors , which makes them appealing for large-scale problems where products of the matrix $ a $ with arbitrary vectors are relatively inexpensive . this paper defines the problem of estimating the spectral density carefully , and discusses how to measure the accuracy of an approximate spectral density . it then surveys a few known methods for estimating the spectral density , and proposes some story_separator_special_tag nonnegative matrix factorization ( nmf ) is a popular technique for finding parts-based , linear representations of nonnegative data . it has been successfully applied in a wide range of applications such as pattern recognition , information retrieval , and computer vision .
however , nmf is essentially an unsupervised method and can not make use of label information . in this paper , we propose a novel semi-supervised matrix decomposition method , called constrained nonnegative matrix factorization ( cnmf ) , which incorporates the label information as additional constraints . specifically , we show how explicitly combining label information improves the discriminating power of the resulting matrix decomposition . we explore the proposed cnmf method with two cost function formulations and provide the corresponding update solutions for the optimization problems . empirical experiments demonstrate the effectiveness of our novel algorithm in comparison to the state-of-the-art approaches through a set of evaluations based on real-world applications . story_separator_special_tag graph convolution network ( gcn ) has been recognized as one of the most effective graph models for semi-supervised learning , but it extracts merely the first-order or few-order neighborhood information through information propagation , which suffers a performance drop-off for deeper structures . existing approaches that deal with the higher-order neighbors tend to take advantage of powers of the adjacency matrix . in this paper , we assume a seemingly trivial condition that the higher-order neighborhood information may be similar to that of the first-order neighbors . accordingly , we present an unsupervised approach to describe such similarities and learn the weight matrices of higher-order neighbors automatically through lasso that minimizes the feature loss between the first-order and higher-order neighbors , based on which we formulate the new convolutional filter for gcn to learn the better node representations . our model , called higher-order weighted gcn ( hwgcn ) , has achieved the state-of-the-art results on a number of node classification tasks over cora , citeseer and pubmed datasets . story_separator_special_tag recovering images from corrupted observations is necessary for many real-world applications . in this paper , we propose a unified framework to perform progressive image recovery based on hybrid graph laplacian regularized regression . we first construct a multiscale representation of the target image by laplacian pyramid , then progressively recover the degraded image in the scale space from coarse to fine so that the sharp edges and texture can be eventually recovered . on one hand , within each scale , a graph laplacian regularization model represented by implicit kernel is learned , which simultaneously minimizes the least square error on the measured samples and preserves the geometrical structure of the image data space . in this procedure , the intrinsic manifold structure is explicitly considered using both measured and unmeasured samples , and the nonlocal self-similarity property is utilized as a fruitful resource for abstracting a priori knowledge of the images . on the other hand , between two successive scales , the proposed model is extended to a projected high-dimensional feature space through explicit kernel mapping to describe the interscale correlation , in which the local structure regularity is learned and propagated from coarser to finer scales story_separator_special_tag nonnegative matrix factorization ( nmf ) -based models possess good representation ability for a target matrix , which is critically important in collaborative filtering ( cf ) -based recommender systems .
however , current nmf-based cf recommenders suffer from the problem of high computational and storage complexity , as well as slow convergence rate , which prevents them from industrial usage in the context of big data . to address these issues , this paper proposes an alternating direction method ( adm ) -based nonnegative latent factor ( anlf ) model . the main idea is to implement the adm-based optimization with regard to each single feature , to obtain a high convergence rate as well as low complexity . both computational and storage costs of anlf are linear with the size of given data in the target matrix , which ensures high efficiency when dealing with extremely sparse matrices usually seen in cf problems . as demonstrated by the experiments on large , real data sets , anlf also ensures fast convergence and high prediction accuracy , as well as the maintenance of nonnegativity constraints . moreover , it is simple and easy to implement for real applications of learning systems . story_separator_special_tag feedforward neural networks such as multilayer perceptrons are popular tools for nonlinear regression and classification problems . from a bayesian perspective , a choice of a neural network model can be viewed as defining a prior probability distribution over non-linear functions , and the neural network 's learning process can be interpreted in terms of the posterior probability distribution over the unknown function . ( some learning algorithms search for the function with maximum posterior probability , while monte carlo methods draw samples from this posterior distribution . ) in the limit of large but otherwise standard networks , neal ( 1996 ) has shown that the prior distribution over non-linear functions implied by the bayesian neural network falls in a class of probability distributions known as gaussian processes . the hyperparameters of the neural network model determine the characteristic length scales of the gaussian process . neal 's observation motivates the idea of discarding parameterized networks and working directly with gaussian processes . computations in which the parameters of the network are optimized are then replaced by simple matrix operations using the covariance matrix of the gaussian process . in this chapter i will review work on this idea story_separator_special_tag multi-label learning has received significant attention in the research community over the past few years : this has resulted in the development of a variety of multi-label learning methods . in this paper , we present an extensive experimental comparison of 12 multi-label learning methods using 16 evaluation measures over 11 benchmark datasets . we selected the competing methods based on their previous usage by the community , the representation of different groups of methods and the variety of basic underlying machine learning methods . similarly , we selected the evaluation measures to be able to assess the behavior of the methods from a variety of view-points . in order to make conclusions independent from the application domain , we use 11 datasets from different domains . furthermore , we compare the methods by their efficiency in terms of time needed to learn a classifier and time needed to produce a prediction for an unseen example . we analyze the results from the experiments using friedman and nemenyi tests for assessing the statistical significance of differences in performance .
the results of the analysis show that for multi-label classification the best performing methods overall are random forests of predictive clustering trees story_separator_special_tag principal components analysis and , more generally , the singular value decomposition are fundamental data analysis tools that express a data matrix in terms of a sequence of orthogonal or uncorrelated vectors of decreasing importance . unfortunately , being linear combinations of up to all the data points , these vectors are notoriously difficult to interpret in terms of the data and processes generating the data . in this article , we develop cur matrix decompositions for improved data analysis . cur decompositions are low-rank matrix decompositions that are explicitly expressed in terms of a small number of actual columns and/or actual rows of the data matrix . because they are constructed from actual data elements , cur decompositions are interpretable by practitioners of the field from which the data are drawn ( to the extent that the original data are ) . we present an algorithm that preferentially chooses columns and rows that exhibit high statistical leverage and , thus , in a very precise statistical sense , exert a disproportionately large influence on the best low-rank fit of the data matrix . by selecting columns and rows in this manner , we obtain improved relative-error and constant-factor approximation guarantees story_separator_special_tag motivated by numerous applications in which the data may be modeled by a variable subscripted by three or more indices , we develop a tensor-based extension of the matrix cur decomposition . the tensor-cur decomposition is most relevant as a data analysis tool when the data consist of one mode that is qualitatively different from the others . in this case , the tensor-cur decomposition approximately expresses the original data tensor in terms of a basis consisting of underlying subtensors that are actual data elements and thus that have a natural interpretation in terms of the processes generating the data . assume the data may be modeled as a $ ( 2+1 ) $ -tensor , i.e. , an $ m \times n \times p $ tensor $ \mathcal { a } $ in which the first two modes are similar and the third is qualitatively different . we refer to each of the $ p $ different $ m \times n $ matrices as slabs and each of the $ mn $ different $ p $ -vectors as fibers . in this case , the tensor-cur algorithm computes an approximation to the data tensor $ \mathcal { a } $ story_separator_special_tag we introduce a general-dimensional , kernel-independent , algebraic fast multipole method and apply it to kernel regression . the motivation for this work is the approximation of kernel matrices , which appear in mathematical physics , approximation theory , non-parametric statistics , and machine learning . existing fast multipole methods are asymptotically optimal , but the underlying constants scale quite badly with the ambient space dimension . we introduce a method that mitigates this shortcoming ; it only requires kernel evaluations and scales well with the problem size , the number of processors , and the ambient dimension , as long as the intrinsic dimension of the dataset is small . we test the performance of our method on several synthetic datasets . as a highlight , our largest run was on an image dataset with 10 million points in 246 dimensions .
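to make the cur construction above concrete , the following is a minimal python sketch of a leverage-score cur decomposition , assuming a dense numpy matrix and exact rank- $ k $ leverage scores computed from an svd ; the function name and the sample sizes c and r are illustrative assumptions , not part of the original papers .

```python
# a sketch , not a reference implementation : columns and rows are sampled
# with probability proportional to their rank-k leverage scores .
import numpy as np

def cur_decomposition(A, k, c, r, seed=0):
    """Sample c columns and r rows of A by rank-k leverage scores,
    then fit the small core matrix U = C^+ A R^+."""
    rng = np.random.default_rng(seed)
    U_k, _, Vt_k = np.linalg.svd(A, full_matrices=False)
    col_scores = (Vt_k[:k, :] ** 2).sum(axis=0) / k   # column leverage scores
    row_scores = (U_k[:, :k] ** 2).sum(axis=1) / k    # row leverage scores
    col_scores /= col_scores.sum()                    # guard against round-off
    row_scores /= row_scores.sum()
    cols = rng.choice(A.shape[1], size=c, replace=False, p=col_scores)
    rows = rng.choice(A.shape[0], size=r, replace=False, p=row_scores)
    C, R = A[:, cols], A[rows, :]
    U = np.linalg.pinv(C) @ A @ np.linalg.pinv(R)     # core matrix
    return C, U, R

rng = np.random.default_rng(1)
A = rng.standard_normal((200, 30)) @ rng.standard_normal((30, 150))  # rank 30
C, U, R = cur_decomposition(A, k=30, c=60, r=60)
print(np.linalg.norm(A - C @ U @ R) / np.linalg.norm(A))  # small relative error
```

sampling somewhat more columns and rows than the target rank , as here , typically keeps the relative error of the fit small on a low-rank input .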
story_separator_special_tag the purpose of this text is to provide an accessible introduction to a set of recently developed algorithms for factorizing matrices . these new algorithms attain high practical speed by reducing the dimensionality of intermediate computations using randomized projections . the algorithms are particularly powerful for computing low-rank approximations to very large matrices , but they can also be used to accelerate algorithms for computing full factorizations of matrices . a key competitive advantage of the algorithms described is that they require less communication than traditional deterministic methods . story_separator_special_tag this survey describes probabilistic algorithms for linear algebraic computations , such as factorizing matrices and solving linear systems . it focuses on techniques that have a proven track record for real-world problems . the paper treats both the theoretical foundations of the subject and practical computational issues . topics include norm estimation , matrix approximation by sampling , structured and unstructured random embeddings , linear regression problems , low-rank approximation , subspace iteration and krylov methods , error estimation and adaptivity , interpolatory and cur factorizations , nystrom approximation of positive semidefinite matrices , single-view ( streaming ) algorithms , full rank-revealing factorizations , solvers for linear systems , and approximation of kernel matrices that arise in machine learning and in scientific computing . story_separator_special_tag signed networks contain both positive and negative kinds of interactions like friendship and enmity . the task of node classification in non-signed graphs has proven to be beneficial in many real world applications , yet extensions to signed networks remain largely unexplored . in this paper we introduce the first analysis of node classification in signed social networks via diffuse interface methods based on the ginzburg-landau functional together with different extensions of the graph laplacian to signed networks . we show that blending the information from both positive and negative interactions leads to performance improvement in real signed social networks , consistently outperforming the current state of the art . story_separator_special_tag multilayer graphs encode different kinds of interactions between the same set of entities . when one wants to cluster such a multilayer graph , the natural question arises of how one should merge the information from different layers . we introduce in this paper a one-parameter family of matrix power means for merging the laplacians from different layers and analyze it in expectation in the stochastic block model . we show that this family allows us to recover ground truth clusters under different settings and verify this in real world data . while computing the matrix power mean can be very expensive for large graphs , we introduce a numerical scheme to efficiently compute its eigenvectors for the case of large sparse graphs . story_separator_special_tag signed networks allow us to model positive and negative relationships . we analyze existing extensions of spectral clustering to signed networks . it turns out that existing approaches do not recover the ground truth clustering in several situations where either the positive or the negative network structures contain no noise .
our analysis shows that these problems arise as existing approaches take some form of arithmetic mean of the laplacians of the positive and negative part . as a solution we propose to use the geometric mean of the laplacians of positive and negative part and show that it outperforms the existing approaches . while the geometric mean of matrices is computationally expensive , we show that eigenvectors of the geometric mean can be computed efficiently , leading to a numerical scheme for sparse matrices which is of independent interest . story_separator_special_tag principal component analysis ( pca ) is a widely used method of reducing the dimensionality of high-dimensional data , often followed by visualizing two of the components on the scatterplot . although widely used , the method is lacking an easy-to-use web interface that scientists with little programming skills could use to make plots of their own data . the same applies to creating heatmaps : it is possible to add conditional formatting for excel cells to show colored heatmaps , but for more advanced features such as clustering and experimental annotations , more sophisticated analysis tools have to be used . we present a web tool called clustvis that aims to have an intuitive user interface . users can upload data from a simple delimited text file that can be created in a spreadsheet program . it is possible to modify data processing methods and the final appearance of the pca and heatmap plots by using drop-down menus , text boxes , sliders etc . appropriate defaults are given to reduce the time needed by the user to specify input parameters . as an output , users can download the pca plot and heatmap in one of the preferred file formats story_separator_special_tag in this paper we study how to compute an estimate of the trace of the inverse of a symmetric matrix by using gauss quadrature and the modified chebyshev algorithm . as auxiliary polynomials we use the shifted chebyshev polynomials . since this can be too costly in computer storage for large matrices we also propose to compute the modified moments with a stochastic approach due to hutchinson ( commun stat simul 18:1059-1076 , 1989 ) . story_separator_special_tag in this article , the author presents a practical and accessible framework to understand some of the basic underpinnings of these methods , with the intention of leading the reader to a broad understanding of how they interrelate . the author also illustrates connections between these techniques and more classical ( empirical ) bayesian approaches . the proposed framework is used to arrive at new insights and methods , both practical and theoretical . in particular , several novel optimality properties of algorithms in wide use such as block-matching and three-dimensional ( 3-d ) filtering ( bm3d ) , and methods for their iterative improvement ( or nonexistence thereof ) are discussed . a general approach is laid out to enable the performance analysis and subsequent improvement of many existing filtering algorithms . while much of the material discussed is applicable to the wider class of linear degradation models beyond noise ( e.g. , blur ) , to keep matters focused , we consider the problem of denoising here . story_separator_special_tag introduction harvey j. miller and jiawei han spatiotemporal data mining paradigms and methodologies john f. roddick and brian g.
lees fundamentals of spatial data warehousing for geographic knowledge discovery yvan bedard and jiawei han analysis of spatial data with map cubes : highway traffic data chang-tien lu , arnold p. boedihardjo , and shashi shekhar new ! data quality issues and geographic knowledge discovery marc gervais , yvan bedard , marie-andree levesque , eveline bernier , and rodolphe devillers spatial classification and prediction models for geospatial data mining shashi shekhar , ranga raju vatsavai , and sanjay chawla an overview of clustering methods in geographic data analysis jiawei han , jae-gil lee , and micheline kamber new ! computing medoids in large spatial datasets kyriakos mouratidis , dimitris papadias , spiros papadimitriou new ! looking for a relationship ? try gwr a. stewart fotheringham , martin charlton , and urska demsar leveraging the power of spatial data mining to enhance the applicability of gis technology donato malerba , antonietta lanza , and annalisa appice visual exploration and explanation in geography : analysis with light mark gahegan new ! multivariate spatial clustering and geovisualization diansheng guo new ! toward knowledge discovery story_separator_special_tag in principle , the exponential of a matrix could be computed in many ways . methods involving approximation theory , differential equations , the matrix eigenvalues , and the matrix characteristic polynomial . story_separator_special_tag many machine learning algorithms require the summation of gaussian kernel functions , an expensive operation if implemented straightforwardly . several methods have been proposed to reduce the computational complexity of evaluating such sums , including tree and analysis based methods . these achieve varying speedups depending on the bandwidth , dimension , and prescribed error , making the choice between methods difficult for machine learning tasks . we provide an algorithm that combines tree methods with the improved fast gauss transform ( ifgt ) . as originally proposed , the ifgt suffers from two problems : ( 1 ) the taylor series expansion does not perform well for very low bandwidths , and ( 2 ) parameter selection is not trivial and can drastically affect performance and ease of use . we address the first problem by employing a tree data structure , resulting in four evaluation methods whose performance varies based on the distribution of sources and targets and input parameters such as desired accuracy and bandwidth . to solve the second problem , we present an online tuning approach that results in a black box method that automatically chooses the evaluation method and its parameters to yield the best performance story_separator_special_tag this paper studies a generalization of the standard continuous-time consensus protocol , obtained by replacing the laplacian matrix of the communication graph with the so-called deformed laplacian . the deformed laplacian is a second-degree matrix polynomial in the real variable s which reduces to the standard laplacian for s equal to unity . the stability properties of the ensuing deformed consensus protocol are studied in terms of parameter s for some special families of undirected and directed graphs , and for arbitrary graph topologies by leveraging the spectral theory of quadratic eigenvalue problems . examples and simulation results are provided to illustrate our theoretical findings .
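as a small illustration of the deformed laplacian just described , here is a python sketch assuming one common parameterization , l ( s ) = i - s a + s^2 ( d - i ) , which is a second-degree matrix polynomial in s and reduces to the standard laplacian d - a at s = 1 ; the parameterization , the toy graph , and the euler step size are illustrative assumptions .

```python
import numpy as np

def deformed_laplacian(A, s):
    """Assumed form L(s) = I - s*A + s^2*(D - I); equals D - A at s = 1."""
    n = A.shape[0]
    D = np.diag(A.sum(axis=1))
    return np.eye(n) - s * A + s ** 2 * (D - np.eye(n))

# adjacency matrix of a 4-cycle
A = np.array([[0., 1., 0., 1.],
              [1., 0., 1., 0.],
              [0., 1., 0., 1.],
              [1., 0., 1., 0.]])
assert np.allclose(deformed_laplacian(A, 1.0), np.diag(A.sum(1)) - A)

# at s = 1 the protocol x' = -L(s) x is the standard consensus dynamics
x = np.array([1.0, 2.0, 3.0, 4.0])
L = deformed_laplacian(A, 1.0)
for _ in range(5000):            # forward euler with dt = 0.01
    x = x - 0.01 * (L @ x)
print(x)                         # states approach the average , 2.5
```

for s away from 1 the row sums of l ( s ) are in general no longer zero , so the dynamics and their stability genuinely depend on s , which is the regime the paper analyzes .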
story_separator_special_tag this paper provides an introduction to support vector machines , kernel fisher discriminant analysis , and kernel principal component analysis , as examples for successful kernel-based learning methods . we first give a short background about vapnik-chervonenkis theory and kernel feature spaces and then proceed to kernel based learning in supervised and unsupervised scenarios including practical and algorithmic considerations . we illustrate the usefulness of kernel algorithms by discussing applications such as optical character recognition and dna analysis . story_separator_special_tag spectral divide and conquer algorithms solve the eigenvalue problem for all the eigenvalues and eigenvectors by recursively computing an invariant subspace for a subset of the spectrum and using it to decouple the problem into two smaller subproblems . a number of such algorithms have been developed over the last 40 years , often motivated by parallel computing and , most recently , with the aim of achieving minimal communication costs . however , none of the existing algorithms has been proved to be backward stable , and they all have a significantly higher arithmetic cost than the standard algorithms currently used . we present new spectral divide and conquer algorithms for the symmetric eigenvalue problem and the singular value decomposition that are backward stable , achieve lower bounds on communication costs recently derived by ballard , demmel , holtz , and schwartz , and have operation counts within a small constant factor of those for the standard algorithms . the new algorithms are built on the polar decomposition . story_separator_special_tag the kaczmarz method is an iterative algorithm for solving systems of linear equations ax=b . theoretical convergence rates for this algorithm were largely unknown until recently when work was done on a randomized version of the algorithm . it was proved that for overdetermined systems , the randomized kaczmarz method converges with expected exponential rate , independent of the number of equations in the system . here we analyze the case where the system ax=b is corrupted by noise , so we consider the system where ax is approximately b + r where r is an arbitrary error vector . we prove that in this noisy version , the randomized method reaches an error threshold dependent on the matrix a with the same rate as in the error-free case . we provide examples showing our results are sharp in the general context . story_separator_special_tag there has been considerable recent interest in algorithms for finding communities in networks : groups of vertices within which connections are dense , but between which connections are sparser . here we review the progress that has been made towards this end . we begin by describing some traditional methods of community detection , such as spectral bisection , the kernighan-lin algorithm and hierarchical clustering based on similarity measures . none of these methods , however , is ideal for the types of real-world network data with which current research is concerned , such as internet and web data and biological and social networks . we describe a number of more recent algorithms that appear to work well with these data , including algorithms based on edge betweenness scores , on counts of short loops in networks and on voltage differences in resistor networks .
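as a concrete instance of the spectral bisection method mentioned in this review , here is a minimal python sketch that splits a graph according to the signs of the fiedler vector ; the toy two-clique graph is an illustrative assumption .

```python
import numpy as np

def spectral_bisection(A):
    """Bisect a graph by the sign pattern of the Fiedler vector,
    the eigenvector of the second-smallest Laplacian eigenvalue."""
    L = np.diag(A.sum(axis=1)) - A
    _, vecs = np.linalg.eigh(L)   # eigenpairs in ascending order
    return vecs[:, 1] >= 0        # boolean community labels

# two 4-cliques joined by a single bridge edge
A = np.zeros((8, 8))
A[:4, :4] = 1.0
A[4:, 4:] = 1.0
np.fill_diagonal(A, 0.0)
A[3, 4] = A[4, 3] = 1.0
print(spectral_bisection(A))      # separates the two cliques ( up to sign )
```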
story_separator_special_tag despite many empirical successes of spectral clustering methods ( algorithms that cluster points using eigenvectors of matrices derived from the data ) , there are several unresolved issues . first , there are a wide variety of algorithms that use the eigenvectors in slightly different ways . second , many of these algorithms have no proof that they will actually compute a reasonable clustering . in this paper , we present a simple spectral clustering algorithm that can be implemented using a few lines of matlab . using tools from matrix perturbation theory , we analyze the algorithm , and give conditions under which it can be expected to do well . we also show surprisingly good experimental results on a number of challenging clustering problems .
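a minimal python sketch in the spirit of the algorithm described above : gaussian affinities , the normalized matrix d^ { -1/2 } a d^ { -1/2 } , its top $ k $ eigenvectors , row normalization , and k-means on the embedding . the bandwidth sigma , the plain k-means loop , and the function name are illustrative assumptions rather than the authors ' exact recipe .

```python
import numpy as np

def spectral_clustering(X, k, sigma=1.0, iters=50, seed=0):
    """Cluster rows of X using eigenvectors of the normalized affinity."""
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)   # pairwise sq. dists
    A = np.exp(-d2 / (2.0 * sigma ** 2))
    np.fill_diagonal(A, 0.0)
    Dmh = np.diag(1.0 / np.sqrt(A.sum(axis=1)))
    M = Dmh @ A @ Dmh                      # normalized affinity matrix
    _, vecs = np.linalg.eigh(M)
    Y = vecs[:, -k:]                       # top-k eigenvectors as embedding
    Y = Y / np.linalg.norm(Y, axis=1, keepdims=True)      # row normalization
    rng = np.random.default_rng(seed)      # a plain k-means loop on Y
    centers = Y[rng.choice(len(Y), size=k, replace=False)]
    for _ in range(iters):
        labels = ((Y[:, None] - centers[None]) ** 2).sum(-1).argmin(1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = Y[labels == j].mean(axis=0)
    return labels

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.0, 0.3, (30, 2)), rng.normal(3.0, 0.3, (30, 2))])
print(spectral_clustering(X, k=2))         # two well-separated blobs
```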
machine learning techniques based on neural networks are achieving remarkable results in a wide variety of domains . often , the training of models requires large , representative datasets , which may be crowdsourced and contain sensitive information . the models should not expose private information in these datasets . addressing this goal , we develop new algorithmic techniques for learning and a refined analysis of privacy costs within the framework of differential privacy . our implementation and experiments demonstrate that we can train deep neural networks with non-convex objectives , under a modest privacy budget , and at a manageable cost in software complexity , training efficiency , and model quality . story_separator_special_tag legacy encryption systems depend on sharing a key ( public or private ) among the peers involved in exchanging an encrypted message . however , this approach poses privacy concerns . especially with popular cloud services , the control over the privacy of the sensitive data is lost . even when the keys are not shared , the encrypted material is shared with a third party that does not necessarily need to access the content . moreover , untrusted servers , providers , and cloud operators can keep identifying elements of users long after users end the relationship with the services . indeed , homomorphic encryption ( he ) , a special kind of encryption scheme , can address these concerns as it allows any third party to operate on the encrypted data without decrypting it in advance . although this extremely useful feature of the he scheme has been known for over 30 years , the first plausible and achievable fully homomorphic encryption ( fhe ) scheme , which allows any computable function to be performed on the encrypted data , was introduced by craig gentry in 2009. even though this was a major achievement , different implementations so far story_separator_special_tag sharing and working on sensitive data in distributed settings from healthcare to finance is a major challenge due to security and privacy concerns . secure multiparty computation ( smc ) is a viable panacea for this , allowing distributed parties to make computations while the parties learn nothing about their data , but the final result . although smc is instrumental in such distributed settings , it does not by itself guarantee that no information about individuals leaks to adversaries . differential privacy ( dp ) can be utilized to address this ; however , achieving smc with dp is not a trivial task , either . in this paper , we propose a novel secure multiparty distributed differentially private ( sm-ddp ) protocol to achieve secure and private computations in a multiparty environment . specifically , with our protocol , we simultaneously achieve smc and dp in distributed settings focusing on linear regression on horizontally distributed data . that is , parties do not see each others ' data and further , can not infer information about individuals from the final constructed statistical model . any statistical model function that allows independent calculation of local story_separator_special_tag this paper presents a new privacy-preserving smart metering system . our scheme is private under the differential privacy model and therefore provides strong and provable guarantees . with our scheme , an ( electricity ) supplier can periodically collect data from smart meters and derive aggregated statistics without learning anything about the activities of individual households .
for example , a supplier can not tell from a user 's trace whether or when he watched tv or turned on heating . our scheme is simple , efficient and practical . processing cost is very limited : smart meters only have to add noise to their data and encrypt the results with an efficient stream cipher . story_separator_special_tag accurate computer methods are evaluated which transform uniformly distributed random numbers into quantities that follow gamma , beta , poisson , binomial and negative-binomial distributions . all algorithms are designed for variable parameters . the known convenient methods are slow when the parameters are large . therefore new procedures are introduced which can cope efficiently with parameters of all sizes . some algorithms require sampling from the normal distribution as an intermediate step . in the reported computer experiments the normal deviates were obtained from a recent method which is also described . story_separator_special_tag local differential privacy ( ldp ) is a distributed variant of differential privacy ( dp ) in which the obfuscation of the sensitive information is done at the level of the individual records , and in general it is used to sanitize data that are collected for statistical purposes . ldp has the advantage that it does not need to assume a trusted third party . on the other hand , ldp in general requires more noise than dp to achieve the same level of protection , with negative consequences on the utility . in practice , utility becomes acceptable only on very large collections of data , and this is the reason why ldp is especially successful among big companies such as apple and google , which can count on a huge number of users . in this talk , we propose a variant of ldp suitable for metric spaces , such as location data or energy consumption data , and we show that it provides a much higher utility for the same level of privacy . furthermore , we discuss algorithms to extract the best possible statistical information from the data obfuscated with this metric variant of ldp . story_separator_special_tag we investigate the framework of privacy amplification by iteration , recently proposed by feldman et al. , from an information-theoretic lens . we demonstrate that differential privacy guarantees of iterative mappings can be determined by a direct application of contraction coefficients derived from strong data processing inequalities for f-divergences . in particular , by generalizing dobrushin 's contraction coefficient for total variation distance to an f-divergence known as the $ e_\gamma $ -divergence , we derive tighter bounds on the differential privacy parameters of the projected noisy stochastic gradient descent algorithm with hidden intermediate updates . story_separator_special_tag cloud computing helps reduce costs , increase business agility and deploy solutions with a high return on investment for many types of applications . however , data security is of premium importance to many users and often restrains their adoption of cloud technologies . various approaches , i.e . , data encryption , anonymization , replication and verification , help enforce different facets of data security . secret sharing is a particularly interesting cryptographic technique . its most advanced variants indeed simultaneously enforce data privacy , availability and integrity , while allowing computation on encrypted data .
the aim of this paper is thus to wholly survey secret sharing schemes with respect to data security , data access and costs in the pay-as-you-go paradigm . story_separator_special_tag differential privacy comes equipped with multiple analytical tools for the design of private data analyses . one important tool is the so-called `` privacy amplification by subsampling '' principle , which ensures that a differentially private mechanism run on a random subsample of a population provides higher privacy guarantees than when run on the entire population . several instances of this principle have been studied for different random subsampling methods , each with an ad-hoc analysis . in this paper we present a general method that recovers and improves prior analyses , yields lower bounds and derives new instances of privacy amplification by subsampling . our method leverages a characterization of differential privacy as a divergence which emerged in the program verification community . furthermore , it introduces new tools , including advanced joint convexity and privacy profiles , which might be of independent interest . story_separator_special_tag differential privacy provides a robust quantifiable methodology to measure and control the privacy leakage of data analysis algorithms . a fundamental insight is that by forcing algorithms to be randomized , their privacy leakage can be characterized by measuring the dissimilarity between output distributions produced by applying the algorithm to pairs of datasets differing in one individual . after the introduction of differential privacy , several variants of the original definition have been proposed by changing the measure of dissimilarity between distributions , including concentrated , zero-concentrated and renyi differential privacy . the first contribution of this paper is to introduce the notion of the privacy profile of a mechanism . this profile captures all valid $ ( \varepsilon , \delta ) $ differential privacy parameters satisfied by a given mechanism , and contrasts with the usual approach of providing guarantees in terms of a single point on this curve . we show that knowledge of this curve is equivalent to knowledge of the privacy guarantees with respect to the alternative definitions listed above . this sheds further light on the connections between different privacy definitions , and suggests that these should be considered alternative but otherwise equivalent points of view . the second contribution of story_separator_special_tag privacy-preserving data aggregation has been widely studied to meet the requirement of timely monitoring measurements of users while protecting individuals ' privacy in smart grid communications . in this paper , a new secure data aggregation scheme , named differentially private data aggregation with fault tolerance ( dpaft ) , is proposed , which can achieve differential privacy and fault tolerance simultaneously . specifically , inspired by the idea of the diffie-hellman key exchange protocol , an artful constraint relation is constructed for data aggregation . with this novel constraint , dpaft can support fault tolerance of malfunctioning smart meters efficiently and flexibly . in addition , dpaft is also enhanced to resist against differential attacks , which most of the existing data aggregation schemes suffer from . by improving the basic boneh-goh-nissim cryptosystem to be more applicable to the practical scenarios , dpaft can resist much stronger adversaries , i.e.
, user 's privacy can be protected in the honest-but-curious model . extensive performance evaluations are further conducted to illustrate that dpaft outperforms the state-of-the-art data aggregation schemes in terms of storage cost , computation complexity , utility of differential privacy story_separator_special_tag the contingency table is a work horse of official statistics , the format of reported data for the us census , bureau of labor statistics , and the internal revenue service . in many settings such as these privacy is not only ethically mandated , but frequently legally as well . consequently there is an extensive and diverse literature dedicated to the problems of statistical disclosure control in contingency table release . however , all current techniques for reporting contingency tables fall short on at least one of privacy , accuracy , and consistency ( among multiple released tables ) . we propose a solution that provides strong guarantees for all three desiderata simultaneously . our approach can be viewed as a special case of a more general approach for producing synthetic data : any privacy-preserving mechanism for contingency table release begins with raw data and produces a ( possibly inconsistent ) privacy-preserving set of marginals . from these tables alone , and hence without weakening privacy , we will find and output the `` nearest '' consistent set of marginals . interestingly , this set is no farther than the tables of the raw data , and consequently the additional error introduced by story_separator_special_tag spectral sparsification in dynamic graph streams.- the online stochastic generalized assignment problem.- on the np-hardness of approximating ordering constraint satisfaction problems.- approximating large frequency moments with pick-and-drop sampling.- generalizing the layering method of indyk and woodruff : recursive sketches for frequency-based vectors on streams.- capacitated network design on undirected graphs.- scheduling subset tests : one-time , continuous , and how they relate.- on the total perimeter of homothetic convex bodies in a convex container.- partial interval set cover - trade-offs between scalability and optimality.- online square-into-square packing.- online non-clairvoyant scheduling to simultaneously minimize all convex functions.- shrinking maxima , decreasing costs : new online packing and covering problems.- multiple traveling salesmen in asymmetric metrics.- approximate indexability and bandit problems with concave rewards and delayed feedback.- the approximability of the binary paintshop problem.- approximation algorithms for movement repairmen.- improved hardness of approximating chromatic number.- a pseudo-approximation for the genus of hamiltonian graphs.- a local computation approximation scheme to maximum matching.- sketching earth-mover distance on graph metrics.- online multidimensional load balancing.- a new regularity lemma and faster approximation algorithms for low threshold rank graphs.- interdiction problems on planar graphs.- conditional random fields , planted constraint satisfaction and entropy concentration.- finding heavy hitters story_separator_special_tag efficient probabilistically checkable proofs and applications to approximation ( m. bellare , s. goldwasser , c. lund , a. russell ) : we construct multi-prover proof systems for np which use only a constant number of provers to simultaneously achieve low error , low randomness and low answer size .
as a consequence , we obtain asymptotic improvements to approximation hardness results for a wide range of optimization problems including minimum set cover , dominating set , maximum clique , chromatic number , and quartic programming ; and constant factor improvements on the hardness results for maxsnp problems . in particular , we show that approximating minimum set cover within any constant is np-complete ; approximating minimum set cover within c log n , for c < 1/8 , implies np $ \subseteq $ dtime ( $ n^ { \log \log n } $ ) ; approximating the maximum of a quartic program within any constant is np-complete ; approximating maximum clique or chromatic number within $ n^ { 1/29 } $ implies np $ \subseteq $ bpp ; and approximating max-3 sat within 113/112 is np-complete . story_separator_special_tag this paper describes a method of dense probabilistic encryption . previous probabilistic encryption methods require large numbers of random bits and produce large amounts of ciphertext for the encryption of each bit of plaintext . this paper develops a method of probabilistic encryption in which the ratio of ciphertext size to plaintext size and the proportion of random bits to plaintext can both be made arbitrarily close to one . the methods described here have applications which are not in any apparent way possible with previous methods . these applications include simple and efficient protocols for noninteractive verifiable secret sharing and a method for conducting practical and verifiable secret-ballot elections . story_separator_special_tag the large-scale monitoring of computer users ' software activities has become commonplace , e.g. , for application telemetry , error reporting , or demographic profiling . this paper describes a principled systems architecture -- encode , shuffle , analyze ( esa ) -- for performing such monitoring with high utility while also protecting user privacy . the esa design , and its prochlo implementation , are informed by our practical experiences with an existing , large deployment of privacy-preserving software monitoring . story_separator_special_tag we introduce the notion of restricted sensitivity as an alternative to global and smooth sensitivity to improve accuracy in differentially private data analysis . the definition of restricted sensitivity is similar to that of global sensitivity except that instead of quantifying over all possible datasets , we take advantage of any beliefs about the dataset that a querier may have , to quantify over a restricted class of datasets . specifically , given a query f and a hypothesis h about the structure of a dataset d , we show generically how to transform f into a new query f_h whose global sensitivity ( over all datasets including those that do not satisfy h ) matches the restricted sensitivity of the query f. moreover , if the belief of the querier is correct ( i.e. , d is in h ) then f_h ( d ) = f ( d ) . if the belief is incorrect , then f_h ( d ) may be inaccurate . we demonstrate the usefulness of this notion by considering the task of answering queries regarding social-networks , which we model as a combination of a graph and a labeling of its vertices .
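the sensitivity-based calibration underlying this line of work can be shown in a few lines : the laplace mechanism releases the query answer plus laplace noise with scale sensitivity / epsilon . the sketch below is a generic illustration ; substituting a smaller restricted-sensitivity bound for the global one is precisely where the approach above gains accuracy . the function name and the toy counting query are assumptions for the example .

```python
import numpy as np

def laplace_mechanism(value, sensitivity, epsilon, seed=None):
    """Release value + Lap(sensitivity/epsilon); satisfies
    epsilon-differential privacy for a query with this sensitivity."""
    rng = np.random.default_rng(seed)
    return value + rng.laplace(loc=0.0, scale=sensitivity / epsilon)

# a counting query has global sensitivity 1 : one person changes the
# count by at most 1 , so the noise scale is 1 / epsilon .
data = np.array([1, 0, 1, 1, 0, 1])
print(laplace_mechanism(data.sum(), sensitivity=1.0, epsilon=0.5))
```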
story_separator_special_tag we consider a statistical database in which a trusted administrator introduces noise to the query responses with the goal of maintaining privacy of individual database entries . in such a database , a query consists of a pair ( s , f ) where s is a set of rows in the database and f is a function mapping database rows to { 0 , 1 } . the true answer is $ \sum_ { i \in s } f ( d_i ) $ , and a noisy version is released as the response to the query . results of dinur , dwork , and nissim show that a strong form of privacy can be maintained using a surprisingly small amount of noise -- much less than the sampling error -- provided the total number of queries is sublinear in the number of database rows . we call this query and ( slightly ) noisy reply the sulq ( sub-linear queries ) primitive . the assumption of sublinearity becomes reasonable as databases grow increasingly large . we extend this work in two ways . first , we modify the privacy analysis to real-valued functions f and arbitrary row types , as a consequence greatly improving the bounds on noise required story_separator_special_tag in this article , we demonstrate that , ignoring computational constraints , it is possible to release synthetic databases that are useful for accurately answering large classes of queries while preserving differential privacy . specifically , we give a mechanism that privately releases synthetic data useful for answering a class of queries over a discrete domain with error that grows as a function of the size of the smallest net approximately representing the answers to that class of queries . we show that this in particular implies a mechanism for counting queries that gives error guarantees that grow only with the vc-dimension of the class of queries , which itself grows at most logarithmically with the size of the query class . we also show that it is not possible to release even simple classes of queries ( such as intervals and their generalizations ) over continuous domains with worst-case utility guarantees while preserving differential privacy . in response to this , we consider a relaxation of the utility guarantee and give a privacy preserving polynomial time algorithm that for any halfspace query will provide an answer that is accurate for some small perturbation of the query . this algorithm does not story_separator_special_tag in this work , we consider distributed private learning . for this purpose , companies collect statistics about telemetry , usage and frequent settings from their users without disclosing individual values . we focus on rank-based statistics , specifically , the median which is more robust to outliers than the mean . local differential privacy , where each user shares locally perturbed data with an untrusted server , is often used in private learning but does not provide the same accuracy as the central model , where noise is applied only once by a trusted server . existing solutions to compute the differentially private median provide good accuracy only for large amounts of users ( local model ) , by using a trusted third party ( central model ) , or for a very small data universe ( secure multi-party computation ) . we present a multi-party computation to efficiently compute the exponential mechanism for the median , which also supports , e.g. , general rank-based statistics ( e.g. , pth percentile , interquartile range ) and convex optimizations for machine learning .
our approach is efficient ( practical running time ) , scalable ( sublinear in the data universe size story_separator_special_tag hardness amplification and error correction.- optimal error correction against computationally bounded noise.- hardness amplification of weakly verifiable puzzles.- on hardness amplification of one-way functions.- graphs and groups.- cryptography in subgroups of .- efficiently constructible huge graphs that preserve first order properties of random graphs.- simulation and secure computation.- comparing two notions of simulatability.- relaxing environmental security : monitored functionalities and client-server computation.- handling expected polynomial-time strategies in simulation-based security proofs.- security of encryption.- adaptively-secure , non-interactive public-key encryption.- adaptive security of symbolic encryption.- chosen-ciphertext security of multiple encryption.- steganography and zero knowledge.- public-key steganography with active attacks.- upper and lower bounds on black-box steganography.- fair-zero knowledge.- secure computation i.- how to securely outsource cryptographic computations.- secure computation of the mean and related statistics.- keyword search and oblivious pseudorandom functions.- secure computation ii.- evaluating 2-dnf formulas on ciphertexts.- share conversion , pseudorandom secret-sharing and applications to secure computation.- toward privacy in public databases.- quantum cryptography and universal composability.- the universal composable security of quantum key distribution.- universally composable privacy amplification against quantum adversaries.- a universally composable secure channel based on the kem-dem framework.- cryptographic primitives and security.- sufficient conditions for collision-resistant hashing.- the relationship between password-authenticated key exchange and other cryptographic story_separator_special_tag candidate multilinear maps from ideal lattices.- lossy codes and a new variant of the learning-with-errors problem.- a toolkit for ring-lwe cryptography.- regularity of lossy rsa on subdomains and its applications.- efficient cryptosystems from 2k-th power residue symbols.- deterministic public-key encryption for adaptively chosen plaintext distributions.- how to watermark cryptographic functions.- security evaluations beyond computing power : how to analyze side-channel attacks you can not mount ?
.- masking against side-channel attacks : a formal security proof.- leakage-resilient cryptography from minimal assumptions.- faster index calculus for the medium prime case application to 1175-bit and 1425-bit finite fields.- fast cryptography in genus 2.- graph-theoretic algorithms for the `` isomorphism of polynomials '' problem.- cryptanalysis of full ripemd-128.- new collision attacks on sha-1 based on optimal joint local-collision analysis.- improving local collisions : new attacks on reduced sha-256.- dynamic proofs of retrievability via oblivious ram .- message-locked encryption and secure deduplication.- batch fully homomorphic encryption over the integers.- practical homomorphic macs for arithmetic circuits.- streaming authenticated data structures.- improved key recovery attacks on reduced-round aes in the single-key setting.- new links between differential and linear cryptanalysis.- towards key-length extension with optimal security : cascade encryption and xor-cascade encryption.- ideal-cipher ( ir ) reducibility story_separator_special_tag we prove new upper and lower bounds on the sample complexity of $ ( \\epsilon , \\delta ) $ differentially private algorithms for releasing approximate answers to threshold functions . a threshold function $ c_x $ over a totally ordered domain $ x $ evaluates to $ c_x ( y ) = 1 $ if $ y \\le x $ , and evaluates to 0 otherwise . we give the first nontrivial lower bound for releasing thresholds with $ ( \\epsilon , \\delta ) $ differential privacy , showing that the task is impossible over an infinite domain $ x $ , and moreover requires sample complexity $ n = \\omega ( \\log^* |x| ) $ , which grows with the size of the domain . inspired by the techniques used to prove this lower bound , we give an algorithm for releasing thresholds with $ n \\le 2^ { ( 1 + o ( 1 ) ) \\log^* |x| } $ samples . this improves the previous best upper bound of $ 8^ { ( 1 + o ( 1 ) ) \\log^* |x| } $ ( beimel et al. , random '13 ) . our sample complexity upper and lower bounds also apply to the tasks of learning distributions with respect to kolmogorov distance and of properly pac learning thresholds with differential privacy . the lower bound gives the first separation story_separator_special_tag differential privacy ( dp ) has received increasing attention as a rigorous privacy framework . many existing studies employ traditional dp mechanisms ( e.g. , the laplace mechanism ) as primitives to continuously release private data for protecting privacy at each time point ( i.e. , event-level privacy ) , which assume that the data at different time points are independent , or that adversaries do not have knowledge of correlation between data . however , continuously generated data tend to be temporally correlated , and such correlations can be acquired by adversaries . in this paper , we investigate the potential privacy loss of a traditional dp mechanism under temporal correlations . first , we analyze the privacy leakage of a dp mechanism under temporal correlation that can be modeled using a markov chain . our analysis reveals that the event-level privacy loss of a dp mechanism may increase over time . we call the unexpected privacy loss temporal privacy leakage ( tpl ) . although tpl may increase over time , we find that its supremum may exist in some cases . second , we design efficient algorithms for calculating tpl . third , we propose data releasing story_separator_special_tag we study the problem of answering \\emph { $ k $ -way marginal } queries on a database $ d \\in ( \\ { 0,1\\ } ^d ) ^n $ , while preserving differential privacy .
the answer to a $ k $ -way marginal query is the fraction of the database 's records $ x \\in \\ { 0,1\\ } ^d $ with a given value in each of a given set of up to $ k $ columns . marginal queries enable a rich class of statistical analyses on a dataset , and designing efficient algorithms for privately answering marginal queries has been identified as an important open problem in private data analysis . for any $ k $ , we give a differentially private online algorithm that runs in time $ \\min \\ { \\exp ( d^ { 1-\\omega ( 1/\\sqrt { k } ) } ) , \\exp ( d / \\log^ { 0.99 } d ) \\ } $ per query and answers any ( possibly superpolynomially long and adaptively chosen ) sequence of $ k $ -way marginal queries up to error at most $ \\pm .01 $ on every query , provided story_separator_special_tag privacy-preserving machine learning algorithms are crucial for the increasingly common setting in which personal data , such as medical or financial records , are analyzed . we provide general techniques to produce privacy-preserving approximations of classifiers learned via ( regularized ) empirical risk minimization ( erm ) . these algorithms are private under the $ \\epsilon $ -differential privacy definition due to dwork et al . ( 2006 ) . first we apply the output perturbation ideas of dwork et al . ( 2006 ) to erm classification . then we propose a new method , objective perturbation , for privacy-preserving machine learning algorithm design . this method entails perturbing the objective function before optimizing over classifiers . if the loss and regularizer satisfy certain convexity and differentiability criteria , we prove theoretical results showing that our algorithms preserve privacy , and provide generalization bounds for linear and nonlinear kernels . we further present a privacy-preserving technique for tuning the parameters in general machine learning algorithms , thereby providing end-to-end privacy guarantees for the training process . we apply these results to produce privacy-preserving analogues of regularized logistic regression and support vector machines . we obtain encouraging results from story_separator_special_tag a technique based on public key cryptography is presented that allows an electronic mail system to hide who a participant communicates with as well as the content of the communication - in spite of an unsecured underlying telecommunication system . the technique does not require a universally trusted authority . one correspondent can remain anonymous to a second , while allowing the second to respond via an untraceable return address . the technique can also be used to form rosters of untraceable digital pseudonyms from selected applications . applicants retain the exclusive ability to form digital signatures corresponding to their pseudonyms . elections in which any interested party can verify that the ballots have been properly counted are possible if anonymously mailed ballots are signed with pseudonyms from a roster of registered voters . another use allows an individual to correspond with a record-keeping organization under a unique pseudonym , which appears in a roster of acceptable clients . story_separator_special_tag we consider the problem of designing scalable , robust protocols for computing statistics about sensitive data . specifically , we look at how best to design differentially private protocols in a distributed setting , where each user holds a private datum .
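stepping back to the erm abstract above : the output perturbation route admits a compact sketch . we use the sensitivity bound 2 / ( n * lam ) from chaudhuri et al . for l2-regularized erm with 1-lipschitz loss and unit-norm features , but everything else here -- the gradient-descent trainer , the names , and the per-coordinate laplace simplification -- is our own :

```python
import numpy as np

def train_logreg(x, y, lam, lr=0.5, iters=500):
    # plain gradient descent on l2-regularized logistic loss, y in {-1, +1}
    n, d = x.shape
    w = np.zeros(d)
    for _ in range(iters):
        margins = y * (x @ w)
        grad = -(x * (y / (1.0 + np.exp(margins)))[:, None]).mean(axis=0) + lam * w
        w -= lr * grad
    return w

def output_perturbation(x, y, lam, epsilon, rng=np.random.default_rng()):
    # assumes ||x_i|| <= 1; the erm minimizer then moves by at most
    # 2 / (n * lam) when one example changes, so noise scale 2/(n*lam*eps).
    # simplification: per-coordinate laplace noise; the paper instead draws
    # a noise vector with gamma-distributed norm and uniform direction
    n = x.shape[0]
    w = train_logreg(x, y, lam)
    return w + rng.laplace(0.0, 2.0 / (n * lam * epsilon), size=w.shape)

# usage on toy data with rows scaled into the unit ball
rng = np.random.default_rng(0)
x = rng.normal(size=(500, 5))
x /= np.maximum(1.0, np.linalg.norm(x, axis=1))[:, None]
y = np.where(x @ np.ones(5) > 0, 1.0, -1.0)
print(output_perturbation(x, y, lam=0.1, epsilon=1.0))
```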
the literature has mostly considered two models : the central model , in which a trusted server collects users ' data in the clear , which allows greater accuracy ; and the local model , in which users individually randomize their data , and need not trust the server , but accuracy is limited . attempts to achieve the accuracy of the central model without a trusted server have so far focused on variants of cryptographic multiparty computation ( mpc ) , which limits scalability . story_separator_special_tag we present an overview of the field of anonymous communications , from its establishment in 1981 by david chaum to today . key systems are presented categorized according to their underlying principles : semi-trusted relays , mix systems , remailers , onion routing , and systems to provide robust mixing . we include extended discussions of the threat models and usage models that different schemes provide , and the trade-offs between the security properties offered and the communication characteristics different systems support . story_separator_special_tag through interviews and content analyses , this article conducted a comparative study of the drafting of china 's internet security law ( isl ) and e-commerce law ( ecl ) . although both had multiparty par . story_separator_special_tag shortly after it was first introduced in 2006 , differential privacy became the flagship data privacy definition . since then , numerous variants and extensions were proposed to adapt it to different scenarios and attacker models . in this work , we propose a systematic taxonomy of these variants and extensions . we list all data privacy definitions based on differential privacy , and partition them into seven categories , depending on which aspect of the original definition is modified . these categories act like dimensions : variants from the same category can not be combined , but variants from different categories can be combined to form new definitions . we also establish a partial ordering of relative strength between these notions by summarizing existing results . furthermore , we list which of these definitions satisfy some desirable properties , like composition , post-processing , and convexity , by either providing a novel proof or collecting existing ones . story_separator_special_tag a sample of n iid random variables with a given unknown density is given . we discuss several issues related to the problem of generating a new sample of iid random variables with almost the same density . in particular , we look at sample independence , consistency , sample indistinguishability , moment matching and generator efficiency . we also introduce the notion of a replacement number , the minimum number of observations in a given sample that have to be replaced to obtain a sample with a given density . story_separator_special_tag we examine the tradeoff between privacy and usability of statistical databases . we model a statistical database by an n-bit string $ d_1 , \\ldots , d_n $ , with a query being a subset $ q \\subseteq [ n ] $ to be answered by $ \\sum_ { i \\in q } d_i $ . our main result is a polynomial reconstruction algorithm of data from noisy ( perturbed ) subset sums . applying this reconstruction algorithm to statistical databases we show that in order to achieve privacy one has to add perturbation of magnitude $ \\omega ( \\sqrt { n } ) $ . that is , smaller perturbation always results in a strong violation of privacy .
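the reconstruction argument just summarized can be made concrete with a small linear program : find any candidate bit vector consistent with all noisy subset-sum answers , and when the perturbation is small the candidate provably agrees with the true data on almost all bits . the sketch below ( scipy solver , our variable names and parameters ) is illustrative , not the paper 's exact algorithm :

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
n, m, bound = 64, 8 * 64, 2                 # db size, #queries, noise bound e

secret = rng.integers(0, 2, n)              # hidden bit vector d_1 .. d_n
queries = rng.integers(0, 2, (m, n))        # random subsets q as 0/1 rows
answers = queries @ secret + rng.integers(-bound, bound + 1, m)

# lp feasibility: x in [0,1]^n with |sum_{i in q} x_i - a_q| <= e for all q
a_ub = np.vstack([queries, -queries])
b_ub = np.concatenate([answers + bound, -(answers - bound)])
res = linprog(c=np.zeros(n), A_ub=a_ub, b_ub=b_ub,
              bounds=[(0, 1)] * n, method="highs")
estimate = (res.x > 0.5).astype(int)        # round the lp solution
print("fraction of bits recovered:", (estimate == secret).mean())
```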
we show that this result is tight by exemplifying access algorithms for statistical databases that preserve privacy while adding perturbation of magnitude $ \\tilde { o } ( \\sqrt { n } ) $ . for time-$ t $ bounded adversaries we demonstrate a privacy-preserving access algorithm whose perturbation magnitude is $ \\approx \\sqrt { t } $ . story_separator_special_tag k-anonymity and $ \\epsilon $ -differential privacy are two mainstream privacy models , the former introduced to anonymize data sets and the latter to limit the knowledge gain that results from including one individual in the data set . whereas basic k-anonymity only protects against identity disclosure , t-closeness was presented as an extension of k-anonymity that also protects against attribute disclosure . we show here that , if not quite equivalent , t-closeness and $ \\epsilon $ -differential privacy are strongly related to one another when it comes to anonymizing data sets . specifically , k-anonymity for the quasi-identifiers combined with $ \\epsilon $ -differential privacy for the confidential attributes yields stochastic t-closeness ( an extension of t-closeness ) , with t a function of k and $ \\epsilon $ . conversely , t-closeness can yield $ \\epsilon $ -differential privacy when $ t = \\exp ( \\epsilon / 2 ) $ and the assumptions made by t-closeness about the prior and posterior views of the data hold . story_separator_special_tag differential privacy is a recent notion of privacy tailored to the problem of statistical disclosure control : how to release statistical information about a set of people without compromising the privacy of any individual [ 7 ] . we describe new work [ 10 , 9 ] that extends differentially private data analysis beyond the traditional setting of a trusted curator operating , in perfect isolation , on a static dataset . we ask how can we guarantee differential privacy , even against an adversary that has access to the algorithm 's internal state , e.g. , by subpoena ? an algorithm that achieves this is said to be pan-private . how can we guarantee differential privacy when the algorithm must continually produce outputs ? we call this differential privacy under continual observation . we also consider these requirements in conjunction . story_separator_special_tag in the information realm , loss of privacy is usually associated with failure to control access to information , to control the flow of information , or to control the purposes for which information is employed . differential privacy arose in a context in which ensuring privacy is a challenge even if all these control problems are solved : privacy-preserving statistical analysis of data . the problem of statistical disclosure control -- revealing accurate statistics about a set of respondents while preserving the privacy of individuals -- has a venerable history , with an extensive literature spanning statistics , theoretical computer science , security , databases , and cryptography ( see , for example , the excellent survey [ 1 ] , the discussion of related work in [ 2 ] and the journal of official statistics 9 ( 2 ) , dedicated to confidentiality and disclosure control ) . this long history is a testament to the importance of the problem . statistical databases can be of enormous social value ; they are used for apportioning resources , evaluating medical therapies , understanding the spread of disease , improving economic utility , and informing us about ourselves as a species . the story_separator_special_tag in this work we provide efficient distributed protocols for generating shares of random noise , secure against malicious participants .
the purpose of the noise generation is to create a distributed implementation of the privacy-preserving statistical databases described in recent papers [ 14,4,13 ] . in these databases , privacy is obtained by perturbing the true answer to a database query by the addition of a small amount of gaussian or exponentially distributed random noise . the computational power of even a simple form of these databases , when the query is just of the form $ \\sum_i f ( d_i ) $ , that is , the sum over all rows $ i $ in the database of a function $ f $ applied to the data in row $ i $ , has been demonstrated in [ 4 ] . a distributed implementation eliminates the need for a trusted database administrator . the results for noise generation are of independent interest . the generation of gaussian noise introduces a technique for distributing shares of many unbiased coins with fewer executions of verifiable secret sharing than would be needed using previous approaches ( reduced by a factor of n ) . the generation of exponentially distributed story_separator_special_tag we continue a line of research initiated in dinur and nissim ( 2003 ) ; dwork and nissim ( 2004 ) ; and blum et al . ( 2005 ) on privacy-preserving statistical databases . consider a trusted server that holds a database of sensitive information . given a query function $ f $ mapping databases to reals , the so-called { \\em true answer } is the result of applying $ f $ to the database . to protect privacy , the true answer is perturbed by the addition of random noise generated according to a carefully chosen distribution , and this response , the true answer plus noise , is returned to the user . previous work focused on the case of noisy sums , in which $ f = \\sum_i g ( x_i ) $ , where $ x_i $ denotes the $ i $ th row of the database and $ g $ maps database rows to $ [ 0,1 ] $ . we extend the study to general functions $ f $ , proving that privacy can be preserved by calibrating the standard deviation of the noise according to the { \\em sensitivity } of story_separator_special_tag differential privacy is a recent notion of privacy tailored to privacy-preserving data analysis [ 11 ] . up to this point , research on differentially private data analysis has focused on the setting of a trusted curator holding a large , static , data set ; thus every computation is a `` one-shot '' object : there is no point in computing something twice , since the result will be unchanged , up to any randomness introduced for privacy . however , many applications of data analysis involve repeated computations , either because the entire goal is one of monitoring , e.g. , of traffic conditions , search trends , or incidence of influenza , or because the goal is some kind of adaptive optimization , e.g. , placement of data to minimize access costs . in these cases , the algorithm must permit continual observation of the system 's state . we therefore initiate a study of differential privacy under continual observation . we identify the problem of maintaining a counter in a privacy preserving manner and show its wide applicability to many different problems . story_separator_special_tag we consider private data analysis in the setting in which a trusted and trustworthy curator , having obtained a large data set containing private information , releases to the public a `` sanitization '' of the data set that simultaneously protects the privacy of the individual contributors of data and offers utility to the data analyst .
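the calibration result summarized above is one line of code once the sensitivity is known : for a query with l1-sensitivity $ \\delta f $ , releasing the true answer plus laplace noise of scale $ \\delta f / \\epsilon $ is $ \\epsilon $ -differentially private . a minimal illustration ( function names and the toy data are ours ) :

```python
import numpy as np

def laplace_mechanism(true_answer, sensitivity, epsilon,
                      rng=np.random.default_rng()):
    # epsilon-dp release of a numeric query with known l1-sensitivity
    return true_answer + rng.laplace(0.0, sensitivity / epsilon)

ages = np.array([34, 29, 41, 58, 23, 37])

# a counting query ("how many are over 30?") changes by at most 1
# when one row changes, so its sensitivity is 1
noisy_count = laplace_mechanism((ages > 30).sum(), sensitivity=1, epsilon=0.5)

# a mean over ages clipped to [0, 100], for a fixed database size n under
# substitution of one row, moves by at most 100 / n
noisy_mean = laplace_mechanism(np.clip(ages, 0, 100).mean(),
                               sensitivity=100 / len(ages), epsilon=0.5)
print(noisy_count, noisy_mean)
```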
the sanitization may be in the form of an arbitrary data structure , accompanied by a computational procedure for determining approximate answers to queries on the original data set , or it may be a `` synthetic data set '' consisting of data items drawn from the same universe as items in the original data set ; queries are carried out as if the synthetic data set were the actual input . in either case the process is non-interactive ; once the sanitization has been released the original data and the curator play no further role . for the task of sanitizing with a synthetic dataset output , we map the boundary between computational feasibility and infeasibility with respect to a variety of utility measures . for the ( potentially easier ) task of sanitizing with unrestricted output format , we show a story_separator_special_tag consider a database of $ n $ people , each represented by a bit-string of length $ d $ corresponding to the setting of $ d $ binary attributes . a $ k $ -way marginal query is specified by a subset $ s $ of $ k $ attributes , and a $ |s| $ -dimensional binary vector $ \\beta $ specifying their values . the result for this query is a count of the number of people in the database whose attribute vector restricted to $ s $ agrees with $ \\beta $ . privately releasing approximate answers to a set of $ k $ -way marginal queries is one of the most important and well-motivated problems in differential privacy . information theoretically , the error complexity of marginal queries is well-understood : the per-query additive error is known to be at least $ \\omega ( \\min\\ { \\sqrt { n } , d^ { \\frac { k } { 2 } } \\ } ) $ and at most $ \\tilde { o } ( \\min\\ { \\sqrt { n } d^ { 1/4 } , d^ { \\frac { k } { 2 } } \\ story_separator_special_tag in a recent paper dinur and nissim considered a statistical database in which a trusted database administrator monitors queries and introduces noise to the responses with the goal of maintaining data privacy [ 5 ] . under a rigorous definition of breach of privacy , dinur and nissim proved that unless the total number of queries is sub-linear in the size of the database , a substantial amount of noise is required to avoid a breach , rendering the database almost useless . as databases grow increasingly large , the possibility of being able to query only a sub-linear number of times becomes realistic . we further investigate this situation , generalizing the previous work in two important directions : multi-attribute databases ( previous work dealt only with single-attribute databases ) and vertically partitioned databases , in which different subsets of attributes are stored in different databases . in addition , we show how to use our techniques for data mining on published noisy statistics . story_separator_special_tag the problem of privacy-preserving data analysis has a long history spanning multiple disciplines . as electronic data about individuals becomes increasingly detailed , and as technology enables ever more powerful collection and curation of these data , the need increases for a robust , meaningful , and mathematically rigorous definition of privacy , together with a computationally rich class of algorithms that satisfy this definition .
differential privacy is such a definition . after motivating and discussing the meaning of differential privacy , the preponderance of this monograph is devoted to fundamental techniques for achieving differential privacy , and application of these techniques in creative combinations , using the query-release problem as an ongoing example . a key point is that , by rethinking the computational goal , one can often obtain far better results than would be achieved by methodically replacing each step of a non-private computation with a differentially private implementation . despite some astonishingly powerful computational results , there are still fundamental limitations not just on what can be achieved with differential privacy but on what can be achieved with any method that protects against a complete breakdown in privacy . virtually all the algorithms discussed herein maintain differential story_separator_special_tag boosting is a general method for improving the accuracy of learning algorithms . we use boosting to construct improved { \\em privacy-preserving synopses } of an input database . these are data structures that yield , for a given set $ q $ of queries over an input database , reasonably accurate estimates of the responses to every query in $ q $ , even when the number of queries is much larger than the number of rows in the database . given a { \\em base synopsis generator } that takes a distribution on $ q $ and produces a `` weak '' synopsis that yields `` good '' answers for a majority of the weight in $ q $ , our { \\em boosting for queries } algorithm obtains a synopsis that is good for all of $ q $ . we ensure privacy for the rows of the database , but the boosting is performed on the { \\em queries } . we also provide the first synopsis generators for arbitrary sets of arbitrary low-sensitivity queries , { \\it i.e . } , queries whose answers do not vary much under the addition or deletion of a story_separator_special_tag privacy-preserving distributed data mining is the study of mining on distributed data owned by multiple data owners in a non-secure environment , where the mining protocol does not reveal any sensitive information to the data owners , the individual privacy is preserved , and the output mining model is practically useful . in this thesis , we propose a secure two-party protocol for building a privacy-preserving decision tree classifier over distributed data using differential privacy . we utilize secure multiparty computation to ensure that the protocol is privacy-preserving . our algorithm also utilizes parallel and sequential compositions , and applies a distributed exponential mechanism to ensure that the output is differentially-private . we implemented our protocol in a distributed environment on real-life data , and the experimental results show that the protocol produces decision tree classifiers with high utility while being reasonably efficient and scalable . story_separator_special_tag a new signature scheme is proposed , together with an implementation of the diffie-hellman key distribution scheme that achieves a public key cryptosystem . the security of both systems relies on the difficulty of computing discrete logarithms over finite fields . story_separator_special_tag randomized aggregatable privacy-preserving ordinal response , or rappor , is a technology for crowdsourcing statistics from end-user client software , anonymously , with strong privacy guarantees .
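the randomized-response primitive that rappor builds on ( abstract above , continuing below ) fits in a few lines ; this is the textbook one-bit version with our names and parameters , not google 's full bloom-filter pipeline :

```python
import numpy as np

def randomized_response(bit, epsilon, rng=np.random.default_rng()):
    # report truthfully with probability e^eps / (e^eps + 1); this is
    # epsilon-dp in the local model since both outputs stay plausible
    p_truth = np.exp(epsilon) / (np.exp(epsilon) + 1)
    return bit if rng.random() < p_truth else 1 - bit

def estimate_frequency(reports, epsilon):
    # unbiased estimate of the true fraction of ones, inverting
    # e[report] = (2p - 1) * pi + (1 - p)
    p = np.exp(epsilon) / (np.exp(epsilon) + 1)
    return (np.mean(reports) - (1 - p)) / (2 * p - 1)

rng = np.random.default_rng(1)
true_bits = rng.random(100_000) < 0.3            # 30% of users hold a 1
reports = [randomized_response(b, 1.0, rng) for b in true_bits]
print(estimate_frequency(reports, 1.0))          # close to 0.3
```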
in short , rappors allow the forest of client data to be studied , without permitting the possibility of looking at individual trees . by applying randomized response in a novel manner , rappor provides the mechanisms for such collection as well as for efficient , high-utility analysis of the collected data . in particular , rappor permits statistics to be collected on the population of client-side strings with strong privacy guarantees for each client , and without linkability of their reports . this paper describes and motivates rappor , details its differential-privacy and utility guarantees , discusses its practical deployment and properties in the face of different attack models , and , finally , gives results of its application to both synthetic and real-world data . story_separator_special_tag many commonly used learning algorithms work by iteratively updating an intermediate solution using one or a few data points in each iteration . analysis of differential privacy for such algorithms often involves ensuring privacy of each step and then reasoning about the cumulative privacy cost of the algorithm . this is enabled by composition theorems for differential privacy that allow releasing of all the intermediate results . in this work , we demonstrate that for contractive iterations , not releasing the intermediate results strongly amplifies the privacy guarantees . we describe several applications of this new analysis technique to solving convex optimization problems via noisy stochastic gradient descent . for example , we demonstrate that a relatively small number of non-private data points from the same distribution can be used to close the gap between private and non-private convex optimization . in addition , we demonstrate that we can achieve guarantees similar to those obtainable using the privacy-amplification-by-sampling technique in several natural settings where that technique can not be applied . story_separator_special_tag an ( n , k ) -bit-fixing source is a distribution $ x $ over $ \\ { 0,1 \\ } ^n $ such that there is a subset of k variables in $ x_1 , \\ldots , x_n $ which are uniformly distributed and independent of each other , and the remaining n - k variables are fixed . a deterministic bit-fixing source extractor is a function $ e : \\ { 0,1 \\ } ^n \\rightarrow \\ { 0,1 \\ } ^m $ which on an arbitrary ( n , k ) -bit-fixing source outputs m bits that are statistically-close to uniform . recently , kamp and zuckerman ( 2003 ) gave a construction of a deterministic bit-fixing source extractor that extracts $ \\omega ( k^2 / n ) $ bits , and requires $ k > \\sqrt { n } $ . in this paper we give constructions of deterministic bit-fixing source extractors that extract $ ( 1 - o ( 1 ) ) k $ bits whenever $ k > ( \\log n ) ^c $ for some universal constant $ c > 0 $ . thus , our constructions extract almost all the randomness from bit-fixing sources and work even when k is small . for $ k \\gg \\sqrt { n } $ story_separator_special_tag a mechanism for releasing information about a statistical database with sensitive data must resolve a trade-off between utility and privacy . privacy can be rigorously quantified using the framework of { \\em differential privacy } , which requires that a mechanism 's output distribution is nearly the same whether or not a given database row is included or excluded . the goal of this paper is strong and general utility guarantees , subject to differential privacy .
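the noisy stochastic gradient iteration discussed in the amplification-by-iteration abstract above has a very small core : clip each per-example gradient , add gaussian noise , and release only the final iterate . a toy least-squares version follows ; the hyperparameters , projection radius , and names are our own assumptions , not the paper 's analysis :

```python
import numpy as np

def noisy_sgd(x, y, clip=1.0, sigma=2.0, lr=0.05, epochs=5,
              rng=np.random.default_rng()):
    # noisy projected sgd on least squares; only the final iterate is
    # released, which is exactly the regime the amplification result studies
    n, d = x.shape
    w = np.zeros(d)
    for _ in range(epochs):
        for i in rng.permutation(n):
            g = (x[i] @ w - y[i]) * x[i]                       # per-example gradient
            g *= min(1.0, clip / (np.linalg.norm(g) + 1e-12))  # clip to norm <= clip
            w -= lr * (g + rng.normal(0.0, sigma * clip, d))   # gaussian noise
            w *= min(1.0, 10.0 / (np.linalg.norm(w) + 1e-12))  # project to a ball
    return w
```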
we pursue mechanisms that guarantee near-optimal utility to every potential user , independent of its side information ( modeled as a prior distribution over query results ) and preferences ( modeled via a loss function ) . our main result is : for each fixed count query and differential privacy level , there is a { \\em geometric mechanism } $ m^* $ -- a discrete variant of the simple and well-studied laplace mechanism -- that is { \\em simultaneously expected loss-minimizing } for every possible user , subject to the differential privacy constraint . this is an extremely strong utility guarantee : { \\em every } potential user $ u $ , no matter what its side information and preferences , story_separator_special_tag this paper proposes an encryption scheme that possesses the following property : an adversary , who knows the encryption algorithm and is given the ciphertext , can not obtain any information about the clear-text . any implementation of a public key cryptosystem , as proposed by diffie and hellman in [ 8 ] , should possess this property . our encryption scheme follows the ideas in the number theoretic implementations of a public key cryptosystem due to rivest , shamir and adleman [ 13 ] , and rabin [ 12 ] . story_separator_special_tag key distribution is the process of sharing the key between the parties who intend to communicate with each other such that any unintended party would not intercept the key . the classical approach is less secure , as the key can be intercepted within a fixed interval . therefore a more secure approach , called quantum key distribution , is devised , which can detect intrusion while communication is going on . this paper aims at surveying some of these quantum key distribution algorithms ( keywords : quantum computing , quantum key distribution , three-party authentication ) . the most important aspect of any encryption technique is the key used for ciphering the plain text . the security of the whole cryptosystem depends on the key used in encryption . every algorithm devised for encryption is worthless if the key used is not strong and secure . a strong , unique and untraceable key strengthens the cryptosystem , whereas a weak key destroys its integrity and makes it vulnerable . therefore key distribution is an inextricable part of any encryption algorithm , and it must be secure enough to prevent any attempt to compromise the system , so this paper aims at studying various quantum key story_separator_special_tag this paper considers the problem of secure data aggregation ( mainly summation ) in a distributed setting , while ensuring differential privacy of the result . we study secure multiparty addition protocols using well known security schemes : shamir 's secret sharing , perturbation-based , and various encryptions .
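the two-sided geometric noise underlying the mechanism $ m^* $ above can be sampled as the difference of two ordinary geometric variables , which has probability proportional to $ \\alpha^ { |k| } $ with $ \\alpha = e^ { -\\epsilon } $ . a minimal sketch ( names ours ) , leaving the utility-optimality argument to the paper :

```python
import numpy as np

def two_sided_geometric(epsilon, rng=np.random.default_rng()):
    # pr[z = k] is proportional to alpha^|k| with alpha = e^(-epsilon);
    # the difference of two iid geometric(1 - alpha) variables on {1, 2, ...}
    # has exactly this distribution
    alpha = np.exp(-epsilon)
    return rng.geometric(1 - alpha) - rng.geometric(1 - alpha)

def geometric_mechanism(count, epsilon, rng=np.random.default_rng()):
    # integer-valued epsilon-dp release of a count query (sensitivity 1)
    return count + two_sided_geometric(epsilon, rng)

print(geometric_mechanism(42, epsilon=0.5))
```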
we supplement our study with our new enhanced encryption scheme eft , which is efficient and fault tolerant . differential privacy of the final result is achieved by either the distributed laplace or the geometric mechanism ( respectively dlpa or dgpa ) , while approximated differential privacy is achieved by diluted mechanisms . distributed random noise is generated collectively by all participants , which draw random variables from one of several distributions : gamma , gauss , geometric , or their diluted versions . we introduce a new distributed privacy mechanism with noise drawn from the laplace distribution , which achieves smaller redundant noise with efficiency . we compare complexity and security characteristics of the protocols with different differential privacy mechanisms and security schemes . more importantly , we implemented all protocols and present an experimental comparison on their performance and scalability in a real distributed environment . based on the evaluations , we identify our security scheme and laplace story_separator_special_tag this paper considers the problem of secure data aggregation in a distributed setting while preserving differential privacy for the aggregated data . in particular , we focus on the secure sum aggregation . security is guaranteed by secure multiparty computation protocols using well known security schemes : shamir 's secret sharing , perturbation-based , and various encryption schemes . differential privacy of the final result is achieved by the distributed laplace perturbation mechanism ( dlpa ) . partial random noise is generated by all participants , which draw random variables from gamma or gaussian distributions , such that the aggregated noise follows the laplace distribution to satisfy differential privacy . we also introduce a new efficient distributed noise generation scheme with partial noise drawn from laplace distributions . we compare the protocols with different privacy mechanisms and security schemes in terms of their complexity and security characteristics . more importantly , we implemented all protocols , and present an experimental comparison on their performance and scalability in a real distributed environment . story_separator_special_tag in this work we demonstrate that allowing differentially private leakage can significantly improve the concrete performance of secure 2-party computation ( 2pc ) protocols . specifically , we focus on the private set intersection ( psi ) protocol of rindal and rosulek ( ccs 2017 ) , which is the fastest psi protocol with security against malicious participants . we show that if differentially private leakage is allowed , the cost of the protocol can be reduced by up to 63 % , depending on the desired level of differential privacy . on the technical side , we introduce a security model for differentially-private leakage in malicious-secure 2pc . we also introduce two new and improved mechanisms for differentially private histogram overestimates , the main technical challenge for differentially-private psi . story_separator_special_tag suppose we would like to know all answers to a set of statistical queries c on a data set up to small error , but we can only access the data itself using statistical queries . a trivial solution is to exhaustively ask all queries in c . can we do any better ?
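the distributed laplace idea used by the two aggregation papers above rests on the infinite divisibility of the laplace distribution : laplace noise of scale s equals the sum over n parties of differences of two gamma ( 1/n , s ) variables . a plain simulation of the noise-share arithmetic ( no secret sharing or encryption , names ours ) :

```python
import numpy as np

def laplace_noise_share(n_parties, scale, rng):
    # each party draws the difference of two gamma(1/n, scale) variables;
    # summed over all n parties this is exactly laplace(scale) noise
    return rng.gamma(1.0 / n_parties, scale) - rng.gamma(1.0 / n_parties, scale)

rng = np.random.default_rng(7)
n, epsilon = 50, 1.0
values = rng.integers(0, 2, n)                 # each party holds one bit
scale = 1.0 / epsilon                          # sum query has sensitivity 1
shares = [values[i] + laplace_noise_share(n, scale, rng) for i in range(n)]
noisy_sum = sum(shares)                        # what the aggregator would see
print(noisy_sum, values.sum())
```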
we show that the number of statistical queries necessary and sufficient for this task is -- up to polynomial factors -- equal to the agnostic learning complexity of c in kearns ' statistical query ( sq ) model . this gives a complete answer to the question when running time is not a concern . we then show that the problem can be solved efficiently ( allowing arbitrary error on a small fraction of queries ) whenever the answers to c can be described by a submodular function . this includes many natural concept classes , such as graph cuts and boolean disjunctions and conjunctions . while interesting from a learning theoretic point of view , our main applications are in privacy-preserving data analysis : here , our second result leads to the first algorithm that efficiently releases differentially private answers to all boolean story_separator_special_tag local differential privacy ( ldp ) is popularly used in practice for privacy-preserving data collection . although existing ldp protocols offer high utility for large user populations ( 100,000 or more users ) , they perform poorly in scenarios with small user populations ( such as those in the cybersecurity domain ) and lack perturbation mechanisms that are effective for both ordinal and non-ordinal item sequences while protecting sequence length and content simultaneously . in this paper , we address the small user population problem by introducing the concept of condensed local differential privacy ( cldp ) as a specialization of ldp , and develop a suite of cldp protocols that offer desirable statistical utility while preserving privacy . our protocols support different types of client data , ranging from ordinal data types in finite metric spaces ( numeric malware infection statistics ) , to non-ordinal items ( os versions , transaction categories ) , and to sequences of ordinal and non-ordinal items . extensive experiments are conducted on multiple datasets , including datasets that are an order of magnitude smaller than those used in existing approaches , which show that the proposed cldp protocols yield high utility . furthermore , case studies story_separator_special_tag we present new theoretical results on differentially private data release useful with respect to any target class of counting queries , coupled with experimental results on a variety of real world data sets . specifically , we study a simple combination of the multiplicative weights approach of [ hardt and rothblum , 2010 ] with the exponential mechanism of [ mcsherry and talwar , 2007 ] . the multiplicative weights framework allows us to maintain and improve a distribution approximating a given data set with respect to a set of counting queries . we use the exponential mechanism to select those queries most incorrectly tracked by the current distribution . combining the two , we quickly approach a distribution that agrees with the data set on the given set of queries up to small error . the resulting algorithm and its analysis is simple , but nevertheless improves upon previous work in terms of both error and running time . we also empirically demonstrate the practicality of our approach on several data sets commonly used in the statistical community for contingency table release . story_separator_special_tag this work considers computationally efficient privacy-preserving data release . we study the task of analyzing a database containing sensitive information about individual participants .
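the multiplicative-weights / exponential-mechanism loop described in the release abstract above is short enough to sketch in full . this is our simplified reading over a small histogram domain ; the names and the naive per-round privacy accounting are ours , not the paper 's exact algorithm :

```python
import numpy as np

def mwem(hist, queries, epsilon, rounds, rng=np.random.default_rng()):
    # hist: true histogram over a small discrete domain (numpy vector);
    # queries: 0/1 matrix, one linear counting query per row
    n = hist.sum()
    synth = np.full(hist.shape, n / hist.size, dtype=float)  # uniform start
    eps_t = epsilon / rounds                 # naive sequential composition
    for _ in range(rounds):
        # exponential mechanism: prefer the query the synthetic data gets most wrong
        err = np.abs(queries @ hist - queries @ synth)
        log_w = (eps_t / 2.0) * err / 2.0    # the error score has sensitivity 1
        log_w -= log_w.max()
        probs = np.exp(log_w)
        probs /= probs.sum()
        q = queries[rng.choice(len(queries), p=probs)]
        # laplace measurement of the chosen query with the other half budget
        measured = q @ hist + rng.laplace(0.0, 2.0 / eps_t)
        # multiplicative weights update toward the noisy measurement
        synth *= np.exp(q * (measured - q @ synth) / (2.0 * n))
        synth *= n / synth.sum()
    return synth
```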
given a set of statistical queries on the data , we want to release approximate answers to the queries while also guaranteeing differential privacy -- protecting each participant 's sensitive data . our focus is on computationally efficient data release algorithms ; we seek algorithms whose running time is polynomial , or at least sub-exponential , in the data dimensionality . our primary contribution is a computationally efficient reduction from differentially private data release for a class of counting queries , to learning thresholded sums of predicates from a related class . we instantiate this general reduction with a variety of algorithms for learning thresholds . these instantiations yield several new results for differentially private data release . as two examples , taking $ \\ { 0,1 \\ } ^d $ to be the data domain ( of dimension d ) , we obtain differentially private algorithms for : ( * ) releasing all k-way conjunctions . for any given k , the resulting data release algorithm has bounded error as long as the database is of size at least story_separator_special_tag modern cyber physical systems ( cpss ) have been widely used in our daily lives because of the development of information and communication technologies ( ict ) . with the provision of cpss , the security and privacy threats associated with these systems are also increasing . passive attacks are being used by intruders to get access to private information of cpss . in order to make cpss data more secure , certain privacy preservation strategies such as encryption and k-anonymity have been presented in the past . however , with the advances in cpss architecture , these techniques also need certain modifications . meanwhile , differential privacy emerged as an efficient technique to protect cpss data privacy . in this paper , we present a comprehensive survey of differential privacy techniques for cpss . in particular , we survey the application and implementation of differential privacy in four major applications of cpss , namely energy systems , transportation systems , healthcare and medical systems , and the industrial internet of things ( iiot ) . furthermore , we present open issues , challenges , and future research directions for differential privacy techniques for cpss . this survey can serve as story_separator_special_tag in the field of social survey of misconduct and legal consultation , the features of confidentiality , integrity , deniable authentication , and non-repudiation are needed for the sake of preserving privacy . for this special kind of application scenario , we propose an efficient deniable authentication encryption scheme . our scheme can achieve the four secure features in a single logical step . compared with the latest scheme , our scheme reduces the computational cost of encryption by about 30 % , reduces the computational cost of decryption by about 50 % , and reduces the length of ciphertext by about 33 % . its security is shown in the random oracle model . story_separator_special_tag the logistic model is a very elementary and important model in the field of machine learning . in this article , an efficient differentially private logistic classification mechanism is proposed . the proposed mechanism is better than the objective function perturbation mechanism in terms of running time and accuracy .
regarding accuracy , the proposed mechanism 's accuracy is almost the same as that of the no-differential-privacy ( non-dp ) mechanism , and the proposed mechanism is better than the objective function perturbation mechanism in both the test accuracy and the train accuracy . as for the running time of the training model , the proposed mechanism is better than the objective function mechanism and is the same as the non-dp mechanism . story_separator_special_tag the objective of machine learning is to extract useful information from data , while privacy is preserved by concealing information . thus it seems hard to reconcile these competing interests . however , they frequently must be balanced when mining sensitive data . for example , medical research represents an important application where it is necessary both to extract useful information and protect patient privacy . one way to resolve the conflict is to extract general characteristics of whole populations without disclosing the private information of individuals . in this paper , we consider differential privacy , one of the most popular and powerful definitions of privacy . we explore the interplay between machine learning and differential privacy , namely privacy-preserving machine learning algorithms and learning-based data release mechanisms . we also describe some theoretical results that address what can be learned differentially privately and upper bounds of loss functions for differentially private algorithms . finally , we present some open questions , including how to incorporate public data , how to deal with missing data in private datasets , and whether , as the number of observed samples grows arbitrarily large , differentially private machine learning algorithms can be story_separator_special_tag differential privacy promises to enable general data analytics while protecting individual privacy , but existing differential privacy mechanisms do not support the wide variety of features and databases used in real-world sql-based analytics systems . this paper presents the first practical approach for differential privacy of sql queries . using 8.1 million real-world queries , we conduct an empirical study to determine the requirements for practical differential privacy , and discuss limitations of previous approaches in light of these requirements . to meet these requirements we propose elastic sensitivity , a novel method for approximating the local sensitivity of queries with general equijoins . we prove that elastic sensitivity is an upper bound on local sensitivity and can therefore be used to enforce differential privacy using any local sensitivity-based mechanism . we build flex , a practical end-to-end system to enforce differential privacy for sql queries using elastic sensitivity . we demonstrate that flex is compatible with any existing database , can enforce differential privacy for real-world sql queries , and incurs negligible ( 0.03 % ) performance overhead . story_separator_special_tag the aim of this paper is twofold : to introduce the mathematics of stochastic differential equations ( sdes ) for forest dynamics modeling and to describe how such a model can be applied to aid our understanding of tree height distribution corresponding to a given diameter using the large dataset provided by the lithuanian national forest inventory ( lnfi ) . tree height-diameter dynamics were examined with ornstein-uhlenbeck family mixed effects sdes .
dynamics of tree height , volume and their coefficients of variation , quantile regression curves of the tree height , and height-diameter ratio were demonstrated using newly developed tree height distributions for a given diameter . the parameters were estimated by considering a discrete sample of the diameter and height and by using an approximated maximum likelihood procedure . all models were evaluated using a validation dataset . the dataset provided by the lnfi ( 2006 -- 2010 ) of scots pine trees is used in this study to estimate parameters and validate our modeling technique . the verification indicated that the newly developed models are able to accurately capture the behavior of tree height distribution corresponding to a given diameter . all of the results were story_separator_special_tag learning problems form an important category of computational tasks that generalizes many of the computations researchers apply to large real-life data sets . we ask : what concept classes can be learned privately , namely , by an algorithm whose output does not depend too heavily on any one input or specific training example ? more precisely , we investigate learning algorithms that satisfy differential privacy , a notion that provides strong confidentiality guarantees in contexts where aggregate information is released about a database containing sensitive information about individuals . we demonstrate that , ignoring computational constraints , it is possible to privately agnostically learn any concept class using a sample size approximately logarithmic in the cardinality of the concept class . therefore , almost anything learnable is learnable privately : specifically , if a concept class is learnable by a ( non-private ) algorithm with polynomial sample complexity and output size , then it can be learned privately using a polynomial number of samples . we also present a computationally efficient private pac learner for the class of parity functions . local ( or randomized response ) algorithms are a practical class of private algorithms that have received extensive story_separator_special_tag marginal ( contingency ) tables are the method of choice for government agencies releasing statistical summaries of categorical data . in this paper , we derive lower bounds on how much distortion ( noise ) is necessary in these tables to ensure the privacy of sensitive data . we extend a line of recent work on impossibility results for private data analysis [ 9 , 12 , 13 , 15 ] to a natural and important class of functionalities . consider a database consisting of n rows ( one per individual ) , each row comprising d binary attributes . for any subset $ t $ of attributes of size $ |t| = k $ , the marginal table for $ t $ has $ 2^k $ entries ; each entry counts how many times in the database a particular setting of these attributes occurs . we provide lower bounds for releasing all $ \\binom { d } { k } $ k-attribute marginal tables under several different notions of privacy . ( 1 ) we give efficient polynomial time attacks which allow an adversary to reconstruct sensitive information given insufficiently perturbed marginal table releases . in particular , for a constant k , we obtain a tight bound of $ \\tilde { \\theta } ( \\min \\ { n , d^ { k-1 } \\ } ) $ story_separator_special_tag in this paper , we study the problem of learning in the presence of classification noise in the probabilistic learning model of valiant and its variants .
in order to identify the class of robust learning algorithms in the most general way , we formalize a new but related model of learning from statistical queries . intuitively , in this model , a learning algorithm is forbidden to examine individual examples of the unknown target function , but is given access to an oracle providing estimates of probabilities over the sample space of random examples . one of our main results shows that any class of functions learnable from statistical queries is in fact learnable with classification noise in valiant 's model , with a noise rate approaching the information-theoretic barrier of 1/2 . we then demonstrate the generality of the statistical query model , showing that practically every class learnable in valiant 's model and its variants can also be learned in the new model ( and thus can be learned in the presence of noise ) . a notable exception to this statement is the class of parity functions , which we prove is not learnable from statistical story_separator_special_tag piracy in digital content distribution systems is usually identified as the illegal reception of the material by an unauthorized ( pirate ) device . a well known method for discouraging piracy in this setting is the usage of a traitor tracing scheme that enables the recovery of the identities of the subscribers who collaborated in the construction of the pirate decoder ( the traitors ) . an important type of tracing which we deal with here is black-box traitor tracing , which reveals the traitors ' identity using only black-box access to the pirate decoder . the only existing general scheme which is successful in general black-box traitor tracing was introduced by chor , fiat , and naor . still , this scheme employs a pirate decoder model that , despite its generality , is not intended to apply to all settings . in particular , it is assumed that ( 1 ) the pirate decoder is resettable , i.e . the tracer is allowed to reset the pirate decoder to its initial state after each trial ( but in many settings this is not possible : the pirate decoder is history-recording ) , and that ( 2 ) the pirate decoder is available , story_separator_special_tag scientific collaborations benefit from sharing information and data from distributed sources , but protecting privacy is a major concern . researchers , funders , and the public in general are getting increasingly worried about the potential leakage of private data . advanced security methods have been developed to protect the storage and computation of sensitive data in a distributed setting . however , they do not protect against information leakage from the outcomes of data analyses . to address this aspect , studies on differential privacy ( a state-of-the-art privacy protection framework ) demonstrated encouraging results , but most of them do not apply to distributed scenarios . combining security and privacy methodologies is a natural way to tackle the problem , but naive solutions may lead to poor analytical performance . in this paper , we introduce a novel strategy that combines differential privacy methods and homomorphic encryption techniques to achieve the best of both worlds .
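the bridge between kearns ' sq model ( completed above ) and differential privacy is that an sq oracle can be answered privately : each query asks for the expectation of a bounded predicate , a quantity with sensitivity 1/n , so laplace noise suffices . a minimal oracle in that spirit -- the class , names , and budget bookkeeping are our own construction :

```python
import numpy as np

class PrivateSQOracle:
    # answers statistical queries e[phi(x)] with phi in [0, 1] under a
    # per-query epsilon budget; the empirical mean has sensitivity 1/n
    def __init__(self, data, epsilon_per_query, rng=None):
        self.data = data
        self.eps = epsilon_per_query
        self.rng = rng or np.random.default_rng()
        self.spent = 0.0                      # total budget used so far

    def query(self, phi):
        vals = np.clip([phi(x) for x in self.data], 0.0, 1.0)
        self.spent += self.eps                # sequential composition adds up
        noise = self.rng.laplace(0.0, 1.0 / (len(self.data) * self.eps))
        return float(np.mean(vals) + noise)

# usage: estimate pr[x > 0] over a private sample
oracle = PrivateSQOracle(np.random.default_rng(3).normal(size=1000), 0.1)
print(oracle.query(lambda x: 1.0 if x > 0 else 0.0), "budget:", oracle.spent)
```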
using logistic regression ( a popular model in biomedicine ) , we demonstrated the practicability of building secure and privacy-preserving models with high efficiency ( less than 3 min ) and good accuracy [ < 1 % of difference in the area story_separator_special_tag massive increases in the availability of informative social science data are making dramatic progress possible in analyzing , understanding , and addressing many major societal problems . yet the same forces pose severe challenges to the scientific infrastructure supporting data sharing , data management , informatics , statistical methodology , and research ethics and policy , and these are collectively holding back progress . i address these changes and challenges and suggest what can be done . story_separator_special_tag many applications that employ data mining techniques involve mining data that include private and sensitive information about the subjects . one way to enable effective data mining while preserving privacy is to anonymize the data set that includes private information about subjects before being released for data mining . one way to anonymize a data set is to manipulate its content so that the records adhere to k-anonymity . two common manipulation techniques used to achieve k-anonymity of a data set are generalization and suppression . generalization refers to replacing a value with a less specific but semantically consistent value , while suppression refers to not releasing a value at all . generalization is more commonly applied in this domain since suppression may dramatically reduce the quality of the data mining results if not properly used . however , generalization presents a major drawback as it requires a manually generated domain hierarchy taxonomy for every quasi-identifier in the data set on which k-anonymity has to be performed . in this paper , we propose a new method for achieving k-anonymity named k-anonymity of classification trees using suppression ( kactus ) . in kactus , efficient multidimensional suppression is performed , i.e. story_separator_special_tag this book describes the inferential and modeling advantages that this distribution , together with its generalizations and modifications , offers . the exposition systematically unfolds with many examples , tables , illustrations , and exercises . a comprehensive index and extensive bibliography also make this book an ideal text for a senior undergraduate and graduate seminar on statistical distributions , or for a short half-term academic course in statistics , applied probability , and finance . story_separator_special_tag a central challenge in differential privacy is to design computationally efficient non-interactive algorithms that can answer large numbers of statistical queries on a sensitive dataset . that is , we would like to design a differentially private algorithm that takes a dataset $ d \\in x^n $ consisting of some small number of elements n from some large data universe x , and efficiently outputs a summary that allows a user to efficiently obtain an answer to any query in some large family q . story_separator_special_tag the imperative for improving health in the world 's poorest regions lies in research , yet there is no question that low participation , a lack of trained staff , and limited opportunities for data sharing in developing countries impede advances in medical practice and public health knowledge .
extensive studies are essential to develop new treatments and to identify better ways to manage healthcare issues . recent rapid advances in the availability and uptake of digital technologies , especially of mobile networks , have the potential to overcome several barriers to collaborative research in remote places with limited access to resources . many research groups are already taking advantage of these technologies for data sharing and capture , and these initiatives indicate that increasing acceptance and use of digital technology could promote rapid improvements in global medical science . story_separator_special_tag as big data has become a main impetus for the next generation of the it industry , data privacy has received considerable attention in recent years . to deal with the privacy challenges , differential privacy has been widely discussed and related private mechanisms have been proposed as privacy-enhancing techniques . however , with today 's differential privacy techniques , it is difficult to generate a sanitized dataset that can suit every machine learning task . in order to adapt to various tasks and budgets , different kinds of privacy mechanisms have to be implemented , which inevitably incur enormous costs for computation and interaction . to this end , in this paper , we propose two novel schemes for outsourcing differential privacy . the first scheme efficiently achieves outsourcing differential privacy by using our preprocessing method and secure building blocks . to support the queries from multiple evaluators , we give the second scheme that employs a trusted execution environment to aggregately implement privacy mechanisms on multiple queries . during data publishing , our proposed schemes allow providers to go off-line after uploading their datasets , so that they achieve a low communication cost , which is one of the critical requirements for story_separator_special_tag the k-anonymity privacy requirement for publishing microdata requires that each equivalence class ( i.e. , a set of records that are indistinguishable from each other with respect to certain `` identifying '' attributes ) contains at least k records . recently , several authors have recognized that k-anonymity can not prevent attribute disclosure . the notion of l-diversity has been proposed to address this ; l-diversity requires that each equivalence class has at least l well-represented values for each sensitive attribute . in this paper we show that l-diversity has a number of limitations . in particular , it is neither necessary nor sufficient to prevent attribute disclosure . we propose a novel privacy notion called t-closeness , which requires that the distribution of a sensitive attribute in any equivalence class is close to the distribution of the attribute in the overall table ( i.e. , the distance between the two distributions should be no more than a threshold t ) . we choose to use the earth mover 's distance measure for our t-closeness requirement . we discuss the rationale for t-closeness and illustrate its advantages through examples and experiments . story_separator_special_tag this paper aims at answering the following two questions in privacy-preserving data analysis and publishing : what formal privacy guarantee ( if any ) does $ k $ -anonymization provide ? how to benefit from the adversary 's uncertainty about the data ? we have found that random sampling provides a connection that helps answer these two questions , as sampling can create uncertainty .
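the sampling-creates-uncertainty intuition above has a standard quantitative form : an epsilon-dp mechanism run on a random subsample that includes each row with probability q is roughly ln ( 1 + q ( e^epsilon - 1 ) ) -dp . a wrapper sketch follows ; the names are ours , and the paper 's actual result couples sampling with safe k-anonymization rather than this generic amplification bound :

```python
import numpy as np

def subsample_and_run(data, mechanism, q, rng=np.random.default_rng()):
    # poisson-subsample each row with probability q, then run any dp mechanism
    mask = rng.random(len(data)) < q
    return mechanism(data[mask])

def amplified_epsilon(epsilon, q):
    # an epsilon-dp mechanism on a q-subsample is ln(1 + q(e^eps - 1))-dp,
    # which is roughly q * epsilon when epsilon is small
    return np.log1p(q * np.expm1(epsilon))

# e.g. a 1-dp counting mechanism run on a 5% sample is ~0.05-dp overall
data = np.arange(1000)
noisy = subsample_and_run(
    data, lambda d: (d > 500).sum() + np.random.laplace(0, 1), 0.05)
print(noisy, amplified_epsilon(1.0, 0.05))
```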
the main result of the paper is that $k$-anonymization , when done `` safely '' , and when preceded with a random sampling step , satisfies $(\epsilon, \delta)$-differential privacy with reasonable parameters . this result illustrates that `` hiding in a crowd of $k$ '' indeed offers some privacy guarantees . this result also suggests an alternative approach to output perturbation for satisfying differential privacy : namely , adding a random sampling step in the beginning and pruning results that are too sensitive to change of a single tuple . regarding the second question , we provide both positive and negative results . on the positive side , we show that adding a random-sampling pre-processing step to a differentially-private algorithm can greatly amplify the story_separator_special_tag we propose a new notion of secure multiparty computation aided by a computationally powerful but untrusted cloud server . in this notion that we call on-the-fly multiparty computation ( mpc ) , the cloud can non-interactively perform arbitrary , dynamically chosen computations on data belonging to arbitrary sets of users chosen on-the-fly . all users ' input data and intermediate results are protected from snooping by the cloud as well as other users . this extends the standard notion of fully homomorphic encryption ( fhe ) , where users can only enlist the cloud 's help in evaluating functions on their own encrypted data . in on-the-fly mpc , each user is involved only when initially uploading his ( encrypted ) data to the cloud , and in a final output decryption phase when outputs are revealed ; the complexity of both is independent of the function being computed and the total number of users in the system . when users upload their data , they need not decide in advance which function will be computed , nor who they will compute with ; they need only retroactively approve the eventually chosen functions and on whose data the functions were evaluated . story_separator_special_tag publishing data about individuals without revealing sensitive information about them is an important problem . in recent years , a new definition of privacy called k-anonymity has gained popularity . in a k-anonymized dataset , each record is indistinguishable from at least $k-1$ other records with respect to certain identifying attributes . in this article , we show using two simple attacks that a k-anonymized dataset has some subtle but severe privacy problems . first , an attacker can discover the values of sensitive attributes when there is little diversity in those sensitive attributes . this is a known problem . second , attackers often have background knowledge , and we show that k-anonymity does not guarantee privacy against attackers using background knowledge . we give a detailed analysis of these two attacks , and we propose a novel and powerful privacy criterion called l-diversity that can defend against such attacks . in addition to building a formal foundation for l-diversity , we show in an experimental evaluation that l-diversity is practical and can be implemented efficiently . story_separator_special_tag a normal random variable x may be generated in terms of uniform random variables $u_1$ , $u_2$ , $\ldots$ in the following simple way : 86 percent of the time , put $x = 2 ( u_1 + u_2 + u_3 - 1.5 )$ , 11 percent o . story_separator_special_tag we provide a new version of our ziggurat method for generating a random variable from a given decreasing density .
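the marsaglia-bray abstract above is truncated , but its main branch is self-contained : $2 ( u_1 + u_2 + u_3 - 1.5 )$ already has mean 0 and variance 1 , and the remaining branches ( the omitted 14 percent ) only correct the tails . a sketch of just that branch , which is not the full generator :

```python
import random

def approx_normal():
    """Main branch of the Marsaglia-Bray generator: 2*(U1+U2+U3-1.5).
    Mean 0 and variance 1 (each uniform contributes 1/12), roughly normal
    by the central limit theorem; the full method uses this only ~86% of
    the time and mixes in corrections for the tails."""
    return 2.0 * (random.random() + random.random() + random.random() - 1.5)

sample = [approx_normal() for _ in range(100_000)]
mean = sum(sample) / len(sample)
var = sum((x - mean) ** 2 for x in sample) / len(sample)
print(round(mean, 3), round(var, 3))  # close to 0 and 1
```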
it is faster and simpler than the original , and will produce , for example , normal or exponential variates at the rate of 15 million per second with a c version on a 400 mhz pc . it uses two tables , integers $k_i$ and reals $w_i$ . some 99 % of the time , the required x is produced by : generate a random 32-bit integer j and let i be the index formed from the rightmost 8 bits of j . if $j < k_i$ , return $x = j \times w_i$ . we illustrate with c code that provides for inline generation of both normal and exponential variables , with a short procedure for setting up the necessary tables . story_separator_special_tag as they grapple with increasingly large data sets , biologists and computer scientists uncork new bottlenecks . story_separator_special_tag the $\gamma_2$ norm of a real $m \times n$ matrix $a$ is the minimum number $t$ such that the column vectors of $a$ are contained in a $0$-centered ellipsoid $e \subseteq \mathbb{R}^m$ which in turn is contained in the hypercube $[-t, t]^m$ . we prove that this classical quantity approximates the hereditary discrepancy $\mathrm{herdisc}\,a$ as follows : $\gamma_2 ( a ) = O ( \log m ) \cdot \mathrm{herdisc}\,a$ and $\mathrm{herdisc}\,a = O ( \sqrt{\log m}\, ) \cdot \gamma_2 ( a )$ . since $\gamma_2$ is polynomial-time computable , this gives a polynomial-time approximation algorithm for hereditary discrepancy . both inequalities are shown to be asymptotically tight . we then demonstrate on several examples the power of the $\gamma_2$ norm as a tool for proving lower and upper bounds in discrepancy theory . most notably , we prove a new lower bound of story_separator_special_tag we explore a new security model for secure computation on large datasets . we assume that two servers have been employed to compute on private data that was collected from many users , and , in order to improve the efficiency of their computation , we establish a new tradeoff with privacy . specifically , instead of claiming that the servers learn nothing about the input values , we claim that what they do learn from the computation preserves the differential privacy of the input . leveraging this relaxation of the security model allows us to build a protocol that leaks some information in the form of access patterns to memory , while also providing a formal bound on what is learned from the leakage . we then demonstrate that this leakage is useful in a broad class of computations . we show that computations such as histograms , pagerank and matrix factorization , which can be performed in common graph-parallel frameworks such as mapreduce or pregel , benefit from our relaxation . we implement a protocol for securely executing graph-parallel computations , and evaluate the performance on the three examples just mentioned above . we demonstrate marked improvement over story_separator_special_tag we study the role that privacy-preserving algorithms , which prevent the leakage of specific information about participants , can play in the design of mechanisms for strategic agents , which must encourage players to honestly report information . specifically , we show that the recent notion of differential privacy , in addition to its own intrinsic virtue , can ensure that participants have limited effect on the outcome of the mechanism , and as a consequence have limited incentive to lie .
more precisely , mechanisms with differential privacy are approximately dominant-strategy under arbitrary player utility functions , are automatically resilient to coalitions , and easily allow repeatability . we study several special cases of the unlimited supply auction problem , providing new results for digital goods auctions , attribute auctions , and auctions with arbitrary structural constraints on the prices . as an important prelude to developing a privacy-preserving auction mechanism , we introduce and study a generalization of previous privacy work that accommodates the high sensitivity of the auction setting , where a single participant may dramatically alter the optimal fixed price , and a slight change in the offered price may take the revenue from optimal story_separator_special_tag ldp ( local differential privacy ) has been widely studied to estimate statistics of personal data ( e.g . , distribution underlying the data ) while protecting users ' privacy . although ldp does not require a trusted third party , it regards all personal data as equally sensitive , which causes excessive obfuscation and hence a loss of utility . in this paper , we introduce the notion of uldp ( utility-optimized ldp ) , which provides a privacy guarantee equivalent to ldp only for sensitive data . we first consider the setting where all users use the same obfuscation mechanism , and propose two mechanisms providing uldp : utility-optimized randomized response and utility-optimized rappor . we then consider the setting where the distinction between sensitive and non-sensitive data can be different from user to user . for this setting , we propose a personalized uldp mechanism with semantic tags to estimate the distribution of personal data with high utility while keeping secret what is sensitive for each user . we show theoretically and experimentally that our mechanisms provide much higher utility than the existing ldp mechanisms when there are a lot of non-sensitive data . we also show that when most story_separator_special_tag a range counting problem is specified by a set $P$ of size $|P| = n$ of points in $\mathbb{R}^d$ , an integer weight $x_p$ associated to each point $p \in P$ , and a range space $\mathcal{R} \subseteq 2^P$ . given a query range $R \in \mathcal{R}$ , the target output is $R ( \vec{x} ) = \sum_{p \in R} x_p$ . range counting for different range spaces is a central problem in computational geometry . we study $(\epsilon, \delta)$-differentially private algorithms for range counting . our main results are for the range space given by hyperplanes , that is , the halfspace counting problem . we present an $(\epsilon, \delta)$-differentially private algorithm for halfspace counting in $d$ dimensions which achieves $O ( n^{1-1/d} )$ average squared error . this contrasts with the $\Omega ( n )$ lower bound established by the story_separator_special_tag in this work , we study trade-offs between accuracy and privacy in the context of linear queries over histograms . this is a rich class of queries that includes contingency tables and range queries , and has been a focus of a long line of work . for a set of $d$ linear queries over a database $x \in \mathbb{R}^n$ , we seek to find the differentially private mechanism that has the minimum mean squared error . for pure differential privacy , an $O ( \log^2 d )$ approximation to the optimal mechanism is known .
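the mechanism-design abstract above is from the paper that introduced the exponential mechanism , which it applies to unlimited-supply digital-goods auctions : a fixed price is sampled with probability proportional to $\exp ( \epsilon \cdot \mathrm{revenue} / 2\Delta )$ . a minimal sketch ; the price grid and the crude sensitivity bound $\Delta$ are illustrative assumptions :

```python
import numpy as np

def exponential_mechanism(scores, eps, sensitivity, rng):
    """Sample index i with probability proportional to exp(eps*score_i/(2*sensitivity))."""
    logits = eps * np.asarray(scores, dtype=float) / (2.0 * sensitivity)
    logits -= logits.max()                      # for numerical stability
    probs = np.exp(logits) / np.exp(logits).sum()
    return rng.choice(len(scores), p=probs)

rng = np.random.default_rng(0)
bids = np.array([0.3, 0.9, 1.0, 4.0])           # each bidder's value for the good
prices = np.linspace(0.1, 4.0, 40)              # candidate fixed prices
revenue = [float(p * (bids >= p).sum()) for p in prices]
# one bidder entering or leaving changes any price's revenue by at most
# the largest price, so that is used here as a (loose) sensitivity bound
i = exponential_mechanism(revenue, eps=1.0, sensitivity=float(prices.max()), rng=rng)
print(f"chosen price {prices[i]:.2f}, revenue {revenue[i]:.2f}")
```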
our first contribution is to give an $O ( \log^2 d )$ approximation guarantee for the case of $(\epsilon, \delta)$-differential privacy . our mechanism is simple , efficient and adds correlated gaussian noise to the answers . we prove its approximation guarantee relative to the hereditary discrepancy lower bound of muthukrishnan and nikolov , using tools from convex geometry . we next consider this question in the case when the number of queries exceeds the number of individuals in the database , i.e . when $d > n \triangleq$ story_separator_special_tag we introduce a new , generic framework for private data analysis . the goal of private data analysis is to release aggregate information about a data set while protecting the privacy of the individuals whose information the data set contains . our framework allows one to release functions f of the data with instance-based additive noise . that is , the noise magnitude is determined not only by the function we want to release , but also by the database itself . one of the challenges is to ensure that the noise magnitude does not leak information about the database . to address that , we calibrate the noise magnitude to the smooth sensitivity of f on the database x , a measure of the variability of f in the neighborhood of the instance x . the new framework greatly expands the applicability of output perturbation , a technique for protecting individuals ' privacy by adding a small amount of random noise to the released statistics . to our knowledge , this is the first formal analysis of the effect of instance-based noise in the context of data privacy . our framework raises many interesting algorithmic questions . namely , to apply the framework one must compute or approximate the smooth sensitivity of f on story_separator_special_tag climate data are dramatically increasing in volume and complexity , just as the users of these data in the scientific community and the public are rapidly increasing in number . a new paradigm of more open , user-friendly data access is needed to ensure that society can reduce vulnerability to climate variability and change , while at the same time exploiting opportunities that will occur . story_separator_special_tag private set intersection ( psi ) allows two parties to compute the intersection of private sets while revealing nothing more than the intersection itself . psi needs to be applied to large data sets in scenarios such as measurement of ad conversion rates , data sharing , or contact discovery . existing psi protocols do not scale up well , and therefore some applications use insecure solutions instead . we describe a new approach for designing psi protocols based on permutation-based hashing , which makes it possible to reduce the length of items mapped to bins while ensuring that no collisions occur . we denote this approach as phasing , for permutation-based hashing set intersection . phasing can dramatically improve the performance of psi protocols whose overhead depends on the length of the representations of input items . we apply phasing to design a new approach for circuit-based psi protocols . the resulting protocol is up to 5 times faster than the previously best sort-compare-shuffle circuit of huang et al . ( ndss 2012 ) . we also apply phasing to the ot-based psi protocol of pinkas et al .
( usenix security 2014 ) , which is the fastest psi protocol story_separator_special_tag private set intersection ( psi ) allows two parties to compute the intersection of their sets without revealing any information about items that are not in the intersection . it is one of the best studied applications of secure computation and many psi protocols have been proposed . however , the variety of existing psi protocols makes it difficult to identify the solution that performs best in a respective scenario , especially since they were not compared in the same setting . in addition , existing psi protocols are several orders of magnitude slower than an insecure naive hashing solution , which is used in practice . in this article , we review the progress made on psi protocols and give an overview of existing protocols in various security models . we then focus on psi protocols that are secure against semi-honest adversaries and take advantage of the most recent efficiency improvements in oblivious transfer ( ot ) extension , propose significant optimizations to previous psi protocols , and suggest a new psi protocol whose runtime is superior to that of existing protocols . we compare the performance of the protocols , both theoretically and experimentally , by implementing all protocols on story_separator_special_tag we propose the first differentially private aggregation algorithm for distributed time-series data that offers good practical utility without any trusted server . this addresses two important challenges in participatory data-mining applications where ( i ) individual users collect temporally correlated time-series data ( such as location traces , web history , personal health data ) , and ( ii ) an untrusted third-party aggregator wishes to run aggregate queries on the data . to ensure differential privacy for time-series data despite the presence of temporal correlation , we propose the fourier perturbation algorithm ( fpak ) . standard differential privacy techniques perform poorly for time-series data . to answer n queries , such techniques can result in a noise of $\Theta ( n )$ to each query answer , making the answers practically useless if n is large . our fpak algorithm perturbs the discrete fourier transform of the query answers . for answering n queries , fpak improves the expected error from $\Theta ( n )$ to roughly $\Theta ( k )$ , where k is the number of fourier coefficients that can ( approximately ) reconstruct all the n query answers . our experiments show that $k \ll n$ for many real-life data-sets story_separator_special_tag origin and development : review of directive 95/46 . for almost 15 years , directive 95/46 stood strong as the central instrument of data protection regulation in the eu . the european commission assessed its implementation in 2003 and 2007 , both times concluding there was no need for revisions . in 2010 , however , the commission announced that the time for revisions had come . the commission argued that while the objectives and principles underlying directive 95/46 remained sound , revisions were necessary in order to meet the challenges of technological developments and globalisation . a changing environment : formal preparations for the review began in july 2009 , when the european commission launched a public consultation on the legal framework for the fundamental right to protection of personal data .
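a rough sketch of the fourier perturbation idea ( fpak ) described above : transform the n query answers , keep the first k coefficients of the discrete fourier transform , perturb them , and invert . the laplace noise scale here is a placeholder ; the paper calibrates it to the sensitivity of the retained coefficients :

```python
import numpy as np

def fpa_k(answers, k, noise_scale, rng):
    """Fourier perturbation sketch: noise only k retained DFT coefficients
    instead of all n answers, trading a little reconstruction bias for
    much less noise on smooth (temporally correlated) series."""
    coeffs = np.fft.rfft(answers)
    kept = np.zeros_like(coeffs)
    kept[:k] = coeffs[:k]
    # Laplace noise on real and imaginary parts (scale is illustrative)
    kept[:k] += rng.laplace(0, noise_scale, k) + 1j * rng.laplace(0, noise_scale, k)
    return np.fft.irfft(kept, n=len(answers))

rng = np.random.default_rng(1)
t = np.arange(256)
answers = 100 + 10 * np.sin(2 * np.pi * t / 64)   # smooth series of query answers
released = fpa_k(answers, k=8, noise_scale=2.0, rng=rng)
print(float(np.abs(released - answers).mean()))   # small average error
```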
the consultation revealed concerns regarding the impact of new technologies on data protection , as well as a desire for a more comprehensive and coherent approach to data protection . perhaps more significantly , 2009 was also the year when the lisbon treaty entered into force . article 16 of the lisbon treaty provided the eu with a legal basis to enact comprehensive data protection legislation across union story_separator_special_tag encryption is a well known technique for preserving the privacy of sensitive information . one of the basic , apparently inherent , limitations of this technique is that an information system working with encrypted data can at most store or retrieve the data for the user ; any more complicated operations seem to require that the data be decrypted before being operated on . this limitation follows from the choice of encryption functions used , however , and although there are some truly inherent limitations on what can be accomplished , we shall see that it appears likely that there exist encryption functions which permit encrypted data to be operated on without preliminary decryption of the operands , for many sets of interesting operations . these special encryption functions we call privacy homomorphisms ; they form an interesting subset of arbitrary encryption schemes ( called privacy transformations ) . story_separator_special_tag an encryption method is presented with the novel property that publicly revealing an encryption key does not thereby reveal the corresponding decryption key . this has two important consequences : couriers or other secure means are not needed to transmit keys , since a message can be enciphered using an encryption key publicly revealed by the intended recipient . only he can decipher the message , since only he knows the corresponding decryption key . a message can be signed using a privately held decryption key . anyone can verify this signature using the corresponding publicly revealed encryption key . signatures can not be forged , and a signer can not later deny the validity of his signature . this has obvious applications in electronic mail and electronic funds transfer systems . a message is encrypted by representing it as a number m , raising m to a publicly specified power e , and then taking the remainder when the result is divided by the publicly specified product , n , of two large secret prime numbers p and q. decryption is similar ; only a different , secret , power d is used , where e * d = story_separator_special_tag this report reviews the strengths and weaknesses of the eu data protection directive and proposes avenues for improvement . the ideas presented here provide some ideas on how to improve the data protection regime for european citizens . story_separator_special_tag the internet has undergone dramatic changes in the past 15 years , and now forms a global communication platform that billions of users rely on for their daily activities . while this transformation has brought tremendous benefits to society , it has also created new threats to online privacy , ranging from profiling of users for monetizing personal information to nearly omnipotent governmental surveillance . as a result , public interest in systems for anonymous communication has drastically increased . 
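a toy walk-through of the rsa rule summarized above , with the classic textbook parameters p = 61 , q = 53 ( far too small to be secure , chosen only so the arithmetic is easy to check ) :

```python
p, q = 61, 53
n = p * q                    # 3233, public modulus
phi = (p - 1) * (q - 1)      # 3120
e = 17                       # public encryption exponent, coprime to phi
d = pow(e, -1, phi)          # 2753, secret exponent with e*d = 1 (mod phi)

m = 65                       # message encoded as a number below n
c = pow(m, e, n)             # encrypt: c = m^e mod n  -> 2790
assert pow(c, d, n) == m     # decrypt: c^d mod n recovers m

s = pow(m, d, n)             # sign with the secret exponent
assert pow(s, e, n) == m     # anyone can verify with the public exponent
print(n, d, c, s)
```

( `pow(e, -1, phi)` needs python 3.8+ ; on earlier versions the modular inverse must be computed with the extended euclidean algorithm . )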
several such systems have been proposed in the literature , each of which offers anonymity guarantees in different scenarios and under different assumptions , reflecting the plurality of approaches for how messages can be anonymously routed to their destination . understanding this space of competing approaches with their different guarantees and assumptions is vital for users to understand the consequences of different design options . in this work , we survey previous research on designing , developing , and deploying systems for anonymous communication . to this end , we provide a taxonomy for clustering all prevalently considered approaches ( including mixnets , dc-nets , onion routing , and dht-based protocols ) with respect to their unique routing characteristics , story_separator_special_tag consider a data holder , such as a hospital or a bank , that has a privately held collection of person-specific , field structured data . suppose the data holder wants to share a version of the data with researchers . how can a data holder release a version of its private data with scientific guarantees that the individuals who are the subjects of the data can not be re-identified while the data remain practically useful ? the solution provided in this paper includes a formal protection model named k-anonymity and a set of accompanying policies for deployment . a release provides k-anonymity protection if the information for each person contained in the release can not be distinguished from at least k-1 individuals whose information also appears in the release . this paper also examines re-identification attacks that can be realized on releases that adhere to k-anonymity unless accompanying policies are respected . the k-anonymity protection model is important because it forms the basis on which the real-world systems known as datafly , µ-argus and k-similar provide guarantees of privacy protection . story_separator_special_tag we study the problem of releasing $k$-way marginals of a database $d \in ( \{0,1\}^d )^n$ , while preserving differential privacy . the answer to a $k$-way marginal query is the fraction of $d$ 's records $x \in \{0,1\}^d$ with a given value in each of a given set of up to $k$ columns . marginal queries enable a rich class of statistical analyses of a dataset , and designing efficient algorithms for privately releasing marginal queries has been identified as an important open problem in private data analysis ( cf . barak et al . , pods '07 ) . we give an algorithm that runs in time $d^{O ( \sqrt{k} )}$ and releases a private summary capable of answering any $k$-way marginal query with at most $\pm 0.01$ error on every query as long as $n \geq d^{O ( \sqrt{k} )}$ . to our knowledge , ours is the first algorithm capable of privately story_separator_special_tag a central problem in differentially private data analysis is how to design efficient algorithms capable of answering large numbers of counting queries on a sensitive database . counting queries are of the form `` what fraction of individual records in the database satisfy the property $q$ ? '' we prove that if one-way functions exist , then there is no algorithm that takes as input a database $d \in ( \{0,1\}^d )^n$ , and $k = \tilde{\Theta} ( n^2 )$ arbitrary efficiently computable counting queries , runs in time $\mathrm{poly} ( d , n )$ , and returns an approximate answer to each query , while satisfying differential privacy .
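the k-way marginal defined above is just the fraction of rows matching a value assignment on up to k columns . below , a non-private evaluation plus the obvious single-query laplace release ; the noise scale $1 / ( \epsilon n )$ is the textbook calibration for one counting query , not the summary-release algorithm of the paper :

```python
import numpy as np

def marginal(data, cols, vals):
    """Fraction of rows x in {0,1}^d with x[cols] == vals."""
    mask = np.all(data[:, cols] == np.asarray(vals), axis=1)
    return float(mask.mean())

def laplace_marginal(data, cols, vals, eps, rng):
    # changing one row moves the fraction by at most 1/n,
    # so Laplace noise of scale 1/(eps*n) gives eps-DP for this one query
    return marginal(data, cols, vals) + rng.laplace(0, 1.0 / (eps * len(data)))

rng = np.random.default_rng(2)
data = rng.integers(0, 2, size=(10_000, 8))     # n rows over d = 8 binary columns
true = marginal(data, [0, 3, 5], [1, 0, 1])     # a 3-way marginal query
noisy = laplace_marginal(data, [0, 3, 5], [1, 0, 1], eps=0.5, rng=rng)
print(round(true, 4), round(noisy, 4))
```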
we also consider the complexity of answering simple counting queries , and make some progress in this direction by showing that the above result holds even when we require that the queries are computable by constant-depth ( $ac^0$ ) circuits . our result is almost tight because it is known that $\tilde{\Omega} ( n^2 )$ counting queries can be answered efficiently while satisfying differential privacy . story_separator_special_tag assuming the existence of one-way functions , we show that there is no polynomial-time differentially private algorithm $\mathcal{A}$ that takes a database $d \in ( \{0,1\}^d )^n$ and outputs a synthetic database $\hat{d}$ all of whose two-way marginals are approximately equal to those of d . ( a two-way marginal is the fraction of database rows $x \in \{0,1\}^d$ with a given pair of values in a given pair of columns . ) this answers a question of barak et al . ( pods '07 ) , who gave an algorithm running in time $\mathrm{poly} ( n , 2^d )$ . our proof combines a construction of hard-to-sanitize databases based on digital signatures ( by dwork et al . , stoc '09 ) with encodings based on the pcp theorem . we also present both negative and positive results for generating relaxed synthetic data , where the fraction of rows in d satisfying a predicate c is estimated by applying c to each row story_separator_special_tag we describe a very simple somewhat homomorphic encryption scheme using only elementary modular arithmetic , and use gentry 's techniques to convert it into a fully homomorphic scheme . compared to gentry 's construction , our somewhat homomorphic scheme merely uses addition and multiplication over the integers rather than working with ideal lattices over a polynomial ring . the main appeal of our approach is the conceptual simplicity . we reduce the security of our somewhat homomorphic scheme to finding an approximate integer gcd , i.e . , given a list of integers that are near-multiples of a hidden integer , output that hidden integer . we investigate the hardness of this task , building on earlier work of howgrave-graham . story_separator_special_tag in this work , we investigate if statistical privacy can enhance the performance of oram mechanisms while providing rigorous privacy guarantees . we propose a formal and rigorous framework for developing oram protocols with statistical security , viz . , a differentially private oram ( dp-oram ) . we present root oram , a family of dp-orams that provide a tunable , multi-dimensional trade-off between the desired bandwidth overhead , local storage and system security . we theoretically analyze root oram to quantify both its security and performance . we experimentally demonstrate the benefits of root oram and find that ( 1 ) root oram can reduce local storage overhead by about 2x for reasonable values of the privacy budget , significantly enhancing performance in memory-limited platforms such as trusted execution environments , and ( 2 ) root oram allows tunable trade-offs between bandwidth , storage , and privacy , reducing bandwidth overheads by up to 2x-10x ( at the cost of increased storage/statistical privacy ) , enabling significant reductions in oram access latencies for cloud environments .
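a toy version of the somewhat homomorphic scheme over the integers sketched above , in its simplest symmetric form : a bit m is encrypted as c = p*q + 2*r + m for a secret odd p , a random q and small noise r , so adding or multiplying ciphertexts adds or multiplies the hidden bits ( mod 2 ) for as long as the accumulated noise stays below p . the parameter sizes are illustrative and wildly insecure :

```python
import random

P = random.getrandbits(64) | (1 << 63) | 1   # secret odd key (toy size)

def enc(m, noise_bits=8, q_bits=128):
    q = random.getrandbits(q_bits)
    r = random.getrandbits(noise_bits)
    return P * q + 2 * r + m                 # ciphertext of the bit m

def dec(c):
    return (c % P) % 2                       # valid while the noise term < P

a, b = enc(1), enc(1)
assert dec(a + b) == (1 + 1) % 2             # homomorphic XOR
assert dec(a * b) == 1 * 1                   # homomorphic AND (noise grows fast)
print(dec(a), dec(b), dec(a + b), dec(a * b))
```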
we also analyze the privacy guarantees of dp-orams through the lens of the information-theoretic metrics of shannon entropy and min-entropy [ 16 ] story_separator_special_tag differential privacy ( dp ) has arisen as the state-of-the-art metric for quantifying individual privacy when sensitive data are analyzed , and it is starting to see practical deployment in organizations such as the us census bureau , apple , google , etc . there are two popular models for deploying differential privacy : standard differential privacy ( sdp ) , where a trusted server aggregates all the data and runs the dp mechanisms , and local differential privacy ( ldp ) , where each user perturbs their own data and the perturbed data is analyzed . due to security concerns arising from aggregating raw data at a single server , several real-world deployments in industry have embraced the ldp model . however , systems based on the ldp model tend to have poor utility , `` a gap '' in the utility achieved as compared to systems based on the sdp model . in this work , we survey and synthesize emerging directions of research at the intersection of differential privacy and cryptography . first , we survey solutions that combine cryptographic primitives like secure computation and anonymous communication with differential privacy to give alternatives to the ldp model story_separator_special_tag local differential privacy ( ldp ) is a recently proposed privacy standard for collecting and analyzing data , which has been used , e.g . , in the chrome browser , ios and macos . in ldp , each user perturbs her information locally , and only sends the randomized version to an aggregator who performs analyses , which protects both the users and the aggregator against private information leaks . although ldp has attracted much research attention in recent years , the majority of existing work focuses on applying ldp to complex data and/or analysis tasks . in this paper , we point out that the fundamental problem of collecting multidimensional data under ldp has not been addressed sufficiently , and there remains much room for improvement even for basic tasks such as computing the mean value over a single numeric attribute under ldp . motivated by this , we first propose novel ldp mechanisms for collecting a numeric attribute , whose accuracy is at least no worse ( and usually better ) than existing solutions in terms of worst-case noise variance . then , we extend these mechanisms to multidimensional data that can contain both numeric and categorical attributes story_separator_special_tag for various reasons individuals in a sample survey may prefer not to confide to the interviewer the correct answers to certain questions . in such cases the individuals may elect not to reply at all or to reply with incorrect answers . the resulting evasive answer bias is ordinarily difficult to assess . in this paper it is argued that such bias is potentially removable through allowing the interviewee to maintain privacy through the device of randomizing his response . a randomized response method for estimating a population proportion is presented as an example . unbiased maximum likelihood estimates are obtained and their mean square errors are compared with the mean square errors of conventional estimates under various assumptions about the underlying population . story_separator_special_tag how to achieve differential privacy in the distributed setting , where the dataset is distributed among mutually distrustful parties , is an important problem .
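warner 's randomized response scheme from the abstract above , with its unbiased estimator : each respondent answers the sensitive membership question truthfully with probability p and answers about the complement otherwise , and the true proportion is recovered from the observed `` yes '' rate . the simulation parameters are illustrative :

```python
import random

def respond(is_member, p=0.75):
    """Answer about 'I am in group A' w.p. p, about the complement otherwise."""
    truthful = random.random() < p
    return is_member if truthful else not is_member

def estimate(answers, p=0.75):
    lam = sum(answers) / len(answers)     # observed proportion of 'yes'
    # E[lam] = pi*(2p-1) + (1-p), so invert:
    return (lam + p - 1) / (2 * p - 1)

true_pi = 0.30
answers = [respond(random.random() < true_pi) for _ in range(200_000)]
print(round(estimate(answers), 3))        # close to 0.30
```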
we consider under what conditions a protocol can inherit the differential privacy property of a function it computes . the heart of the problem is the secure multiparty computation of a randomized function . a notion of obliviousness is introduced , which captures the key security problems when computing a randomized function from a deterministic one in the distributed setting . by this observation , a sufficient and necessary condition for securely computing a randomized function from a deterministic one is given . the above result can not only be used to determine whether a protocol computing a differentially private function is secure , but also be used to construct a secure one . then we prove that the differential privacy property of a function can be inherited by the protocol computing it if the protocol securely computes it . a composition theorem of differentially private protocols is also presented . finally , we construct protocols of the gaussian mechanism and the laplace mechanism , which inherit the differential privacy property . story_separator_special_tag with the fast development of information technology , a tremendous amount of data have been generated and collected for research and analysis purposes . as an increasing number of users are growing concerned about their personal information , privacy preservation has become an urgent problem to be solved and has attracted significant attention . local differential privacy ( ldp ) , as a strong privacy tool , has been widely deployed in the real world in recent years . it breaks the shackles of the trusted third party , and allows users to perturb their data locally , thus providing much stronger privacy protection . this survey provides a comprehensive and structured overview of the local differential privacy technology . we summarise and analyze state-of-the-art research in ldp and compare a range of methods in the context of answering a variety of queries and training different machine learning models . we discuss the practical deployment of local differential privacy and explore its application in various domains . furthermore , we point out several research gaps , and discuss promising future research directions . story_separator_special_tag two millionaires wish to know who is richer ; however , they do not want to find out inadvertently any additional information about each other 's wealth . how can they carry out such a conversation ? this is a special case of the following general problem . suppose m people wish to compute the value of a function $f ( x_1 , x_2 , x_3 , \ldots , x_m )$ , which is an integer-valued function of m integer variables $x_i$ of bounded range . assume initially person $p_i$ knows the value of $x_i$ and no other x 's . is it possible for them to compute the value of f , by communicating among themselves , without unduly giving away any information about the values of their own variables ? the millionaires problem corresponds to the case when $m = 2$ and $f ( x_1 , x_2 ) = 1$ if $x_1 < x_2$ , and 0 otherwise . in this paper , we will give a precise formulation of this general problem and describe three ways of solving it by use of one-way functions ( i.e . , functions which are easy to evaluate but hard to invert ) . story_separator_special_tag in this paper we introduce a new tool for controlling the knowledge transfer process in cryptographic protocol design . it is applied to solve a general class of problems which include most of the two-party cryptographic problems in the literature .
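the distributed-dp abstract above builds protocols for the laplace and gaussian mechanisms ; their centralized versions are one-liners , shown here with the textbook calibrations ( the gaussian sigma below is the standard one , valid for $\epsilon < 1$ ) :

```python
import numpy as np

rng = np.random.default_rng(3)

def laplace_mechanism(value, sensitivity, eps):
    """eps-DP release of a real query with L1 sensitivity `sensitivity`."""
    return value + rng.laplace(0, sensitivity / eps)

def gaussian_mechanism(value, sensitivity, eps, delta):
    """(eps, delta)-DP release with the standard sigma (requires eps < 1)."""
    sigma = sensitivity * np.sqrt(2 * np.log(1.25 / delta)) / eps
    return value + rng.normal(0, sigma)

count = 1234.0  # e.g. a counting query, sensitivity 1
print(laplace_mechanism(count, 1.0, eps=0.5))
print(gaussian_mechanism(count, 1.0, eps=0.5, delta=1e-5))
```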
specifically , we show how two parties a and b can interactively generate a random integer $n = p \cdot q$ such that its secret , i.e . , the prime factors $( p , q )$ , is hidden from either party individually but is recoverable jointly if desired . this can be utilized to give a protocol for two parties with private values i and j to compute any polynomially computable functions $f ( i , j )$ and $g ( i , j )$ with minimal knowledge transfer and a strong fairness property . as a special case , a and b can exchange a pair of secrets $s_a$ , $s_b$ , e.g . the factorization of an integer and a hamiltonian circuit in a graph , in such a way that $s_a$ becomes computable by b when and only when $s_b$ becomes computable by a . all these results are proved assuming only that the problem of story_separator_special_tag one of the biggest concerns of big data is privacy . however , the study on big data privacy is still at a very early stage . we believe the forthcoming solutions and theories of big data privacy will take root in the in-place research output of the privacy discipline . motivated by these factors , we extensively survey the existing research outputs and achievements of the privacy field from both application and theoretical angles , aiming to pave a solid starting ground for interested readers to address the challenges in the big data case . we first present an overview of the battle ground by defining the roles and operations of privacy systems . second , we review the milestones of the current two major research categories of privacy : data clustering and privacy frameworks . third , we discuss the effort of privacy study from the perspectives of different disciplines , respectively . fourth , the mathematical description , measurement , and modeling of privacy are presented . we summarize the challenges and opportunities of this promising topic at the end of this paper , hoping to shed light on the exciting and almost uncharted land . story_separator_special_tag the best existing pairing-based traitor tracing schemes have $O ( \sqrt{n} )$-sized parameters , a bound which has stood since 2006 . this intuitively seems to be consistent with the fact that pairings allow for degree-2 computations , yielding a quadratic compression . story_separator_special_tag $\epsilon$-differential privacy is the state-of-the-art model for releasing sensitive information while protecting privacy . numerous methods have been proposed to enforce $\epsilon$-differential privacy in various analytical tasks , e.g . , regression analysis . existing solutions for regression analysis , however , are either limited to non-standard types of regression or unable to produce accurate regression results . motivated by this , we propose the functional mechanism , a differentially private method designed for a large class of optimization-based analyses . the main idea is to enforce $\epsilon$-differential privacy by perturbing the objective function of the optimization problem , rather than its results . as case studies , we apply the functional mechanism to address the two most widely used regression models , namely , linear regression and logistic regression . both theoretical analysis and thorough experimental evaluations show that the functional mechanism is highly effective and efficient , and it significantly outperforms existing solutions . story_separator_special_tag the internet of connected vehicles ( iov ) is expected to enable intelligent traffic management , intelligent dynamic information services , intelligent vehicle control , etc .
however , vehicles ' data privacy is argued to be a major barrier to the application and development of iov , and has therefore attracted a wide range of attention . local differential privacy ( ldp ) is the relaxed version of the privacy standard , differential privacy , and it can protect users ' data privacy against an untrusted third party in the worst adversarial setting . therefore , ldp has the potential to protect vehicles ' data privacy in the practical iov scenario , although vehicles exhibit unique features , e.g . , high mobility , short connection times , etc . to this end , in this paper , we first give an overview of the existing ldp techniques and present thorough comparisons of these works in terms of advantages , disadvantages , and computation cost , in order to get the readers well acquainted with ldp . thereafter , we investigate the potential applications of ldp in securing iov in detail . last , we outline several future research directions of ldp in iov , story_separator_special_tag differential privacy is an essential and prevalent privacy model that has been widely explored in recent decades . this survey provides a comprehensive and structured overview of two research directions : differentially private data publishing and differentially private data analysis . we compare the diverse release mechanisms of differentially private data publishing given a variety of input data in terms of query type , the maximum number of queries , efficiency , and accuracy . we identify two basic frameworks for differentially private data analysis and list the typical algorithms used within each framework . the results are compared and discussed based on output accuracy and efficiency . further , we propose several possible directions for future research and possible applications .
a new proof is presented of the serpe-fierz equivalence theorem for the free neutrino case . the interaction of neutrinos with their sources is seen to restrict the freedom in the neutrino description . it is proved that the only two-state theory for the neutrino which does not give rise to double $\beta$-decay is the weyl-lee-yang theory . story_separator_special_tag a massive dirac neutrino has a magnetic moment , which causes its spin to precess in a magnetic field . this reduces the effective weak cross sections for relativistic neutrinos . an estimate on the basis of phenomenological considerations as well as the standard electroweak theory indicates that massive neutrinos from supernovae and neutron stars may contain significant mixtures of negative- and positive-helicity states . story_separator_special_tag experimental and theoretical studies of flavour conversion in solar , atmospheric , reactor and accelerator neutrino fluxes give strong evidence of non-zero neutrino mass . a massive neutrino can have non-trivial electromagnetic properties [ 1 ] . for a recent review on neutrino electromagnetic properties see [ 2 ] . the neutrino dipole magnetic moment ( along with the electric dipole moment ) is the most well studied among neutrino electromagnetic properties . the effective lagrangian that governs the neutrino coupling to the electromagnetic field can be written in the form story_separator_special_tag the main goal of the paper is to give a short review on neutrino electromagnetic properties . in the introductory part of the paper a summary of what we really know about neutrinos is given : we discuss the basics of neutrino mass and mixing as well as the phenomenology of neutrino oscillations . this is important for the following discussion on neutrino electromagnetic properties that starts with a derivation of the neutrino electromagnetic vertex function in the most general form , which follows from the requirement of lorentz invariance , for both the dirac and majorana cases . then , the problem of the definition and calculation of neutrino form factors within gauge models is considered . in particular , we discuss the neutrino electric charge form factor and charge radius , dipole magnetic and electric and anapole form factors . available experimental constraints on neutrino electromagnetic properties are also discussed , and the recently obtained experimental limits on neutrino magnetic moments are reviewed . the most important neutrino electromagnetic processes involving a direct neutrino coupling with photons ( such as neutrino radiative decay , neutrino cherenkov radiation , spin light of neutrino and plasmon decay into a neutrino-antineutrino pair in story_separator_special_tag in this paper , we discuss the main theoretical aspects and experimental effects of neutrino electromagnetic properties . we start with a general description of the electromagnetic form factors of dirac and majorana neutrinos . then , we discuss the theory and phenomenology of the magnetic and electric dipole moments , summarizing the experimental results and the theoretical predictions . we discuss also the phenomenology of a neutrino charge radius and radiative decay . finally , we describe the theory of neutrino spin and spin-flavor precession in a transverse magnetic field and we summarize its phenomenological applications .
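for the spin precession invoked in the abstracts above , the commonly quoted two-level result for a neutrino of magnetic moment $\mu_\nu$ crossing a transverse field $b_\perp$ over a length $l$ is , in natural units ( a standard textbook expression , not a result of these particular papers ) :

```latex
% spin(-flavor) precession probability in a transverse magnetic field
P_{\nu_L \to \nu_R}(L) = \sin^2\!\left( \mu_\nu B_\perp L \right)
```

appreciable depolarization therefore requires $\mu_\nu b_\perp l \sim 1$ , which is why supernovae and neutron stars , with their extreme fields and path lengths , appear above as the natural probes .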
story_separator_special_tag we review the theory and phenomenology of neutrino electromagnetic interactions , which give us powerful tools to probe the physics beyond the standard model . after a derivation of the general structure of the electromagnetic interactions of dirac and majorana neutrinos in the one-photon approximation , we discuss the effects of neutrino electromagnetic interactions in terrestrial experiments and in astrophysical environments . we present the experimental bounds on neutrino electromagnetic properties and we confront them with the predictions of theories beyond the standard model . story_separator_special_tag to help develop a picture of majorana neutrinos , we study their electromagnetic properties . we show that cpt invariance forbids a majorana neutrino from having a magnetic or electric dipole moment . then , by considering the process $\gamma \to \nu\bar{\nu}$ , we find the most general expression for the matrix element of the electromagnetic current of a majorana neutrino . the result is verified in a way which leads us to explore the behavior under parity of such a particle . next , we see how electromagnetic properties which follow from one-loop diagrams conform to our general results . finally , we show how the striking electromagnetic differences between majorana and dirac neutrinos can become invisible as the neutrino mass goes to zero . story_separator_special_tag the electromagnetic properties of majorana neutrinos are studied in general terms , with a careful discussion of the difference between the cases of majorana and dirac neutrinos . some peculiarities associated with the majorana character of the neutrinos are noted ; for example , it is shown that for two majorana neutrinos with the same cp parity their transition magnetic moment is of the type $\sigma_{\mu\nu}\gamma_5$ and not $\sigma_{\mu\nu}$ , in contrast to the situation for the diagonal magnetic moment of dirac neutrinos ( or charged leptons ) . we also indicate how the electromagnetic form factors in the majorana case can be obtained from those calculated as if the neutrinos were dirac particles . story_separator_special_tag dispersion relations allow for a coherent description of the nucleon electromagnetic form factors measured over a large range of momentum transfer , $q^2 \simeq 0 \ldots 35$ gev$^2$ . including constraints from unitarity and perturbative qcd , we present a novel parametrisation of the absorptive parts of the various isoscalar and isovector nucleon form factors . using the current world data , we obtain results for the electromagnetic form factors , nucleon radii and meson couplings . we stress the importance of measurements at large momentum transfer to test the predictions of perturbative qcd . story_separator_special_tag experimentally it has been known for a long time that the electric charges of the observed particles appear to be quantized . an approach to understanding electric charge quantization that can be used for gauge theories with explicit $u(1)$ factors -- such as the standard model and its variants -- is pedagogically reviewed and discussed in this article . this approach uses the allowed invariances of the lagrangian and their associated anomaly cancellation equations . we demonstrate that charge may be de-quantized in the three-generation standard model with massless neutrinos , because differences in family-lepton numbers are anomaly-free . we also review the relevant experimental limits .
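for scale , the reviews above quote the standard-model one-loop prediction for the magnetic moment of a massive dirac neutrino ( the fujikawa-shrock result ) ; written out , with $\mu_B$ the bohr magneton :

```latex
% one-loop Dirac-neutrino magnetic moment in the minimally extended standard model
\mu_\nu = \frac{3 e G_F m_\nu}{8 \sqrt{2}\, \pi^2}
        \approx 3.2 \times 10^{-19} \left( \frac{m_\nu}{1\ \text{eV}} \right) \mu_B
```

this sits many orders of magnitude below the experimental limits quoted later in this section , which is why any moment observed near $10^{-11}\,\mu_B$ would signal physics beyond the standard model .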
our approach to charge quantization suggests that the minimal standard model should be extended so that family-lepton-number differences are explicitly broken . we briefly discuss some candidate extensions ( e.g . the minimal standard model augmented by majorana right-handed neutrinos ) . story_separator_special_tag within weinberg 's model of weak and electromagnetic interactions , we calculate the static quantities of the charged intermediate bosons . we also prove that the neutrino charge remains zero in second order , and discuss its charge radius . finally , an unambiguous calculation of the muon $g-2$ is presented . all calculations are done using the n-dimensional regularization procedure of 't hooft and veltman . our results support the claim that weinberg 's model is renormalizable . story_separator_special_tag we present a computation of the charge and the magnetic moment of the neutrino in the recently developed electro-weak background field method and in the linear $R_\xi^L$ gauge . first , we deduce a formal ward-takahashi identity which implies the immediate cancellation of the neutrino electric charge . this ward-takahashi identity is as simple as that for qed . the computation of the ( proper and improper ) one-loop vertex diagrams contributing to the neutrino electric charge is also presented in an arbitrary gauge , checking in this way the ward-takahashi identity previously obtained . finally , the calculation of the magnetic moment of the neutrino , in the minimal extension of the standard model with massive dirac neutrinos , is presented , showing its gauge-parameter and gauge-structure independence explicitly . story_separator_special_tag we consider a massive dirac neutrino 's electric charge and magnetic moment within the context of the standard model supplemented with an su(2)-singlet right-handed neutrino in an arbitrary $R_\xi$ gauge . using the dimensional-regularization scheme we start with calculations of the one-loop contributions to the neutrino electromagnetic vertex function , exactly accounting for the masses of the neutrino and the other particles . we examine the decomposition of the massive neutrino electromagnetic vertex function and show that it contains only the four form factors . then we get closed integral expressions for the different contributions to the neutrino electric charge and magnetic form factors . these calculations enable us to follow the dependence on the neutrino and corresponding charged-lepton masses and on the gauge-fixing parameters . for several one-loop contributions to the neutrino charge and magnetic moment that were calculated previously by other authors with mistakes , we find the correct results . we show that the neutrino charge for a massive neutrino is a gauge-independent and vanishing value in the first two orders of the expansion over the neutrino mass parameter b . in the particular choice of the 't hooft-feynman gauge we also demonstrate that the neutrino charge is zero for arbitrary mass story_separator_special_tag electromagnetic form factors of a massive neutrino are studied in a minimally extended standard model in an arbitrary $R_\xi$ gauge and taking into account the dependence on the masses of all interacting particles . the contributions from all feynman diagrams to the charge , magnetic , and anapole form factors , in which the dependence on the masses of all particles as well as on gauge parameters is accounted for exactly , are obtained for the first time in explicit form .
the asymptotic behavior of the magnetic form factor for large negative squares of the momentum of an external photon is analyzed and an expression for the anapole moment of a massive neutrino is derived . the results are generalized to the case of mixing between various generations of the neutrino . explicit expressions are obtained for the charge , magnetic , and electric dipole and anapole transition form factors as well as for the transition electric dipole moment . story_separator_special_tag we consider the constraints from supernova 1987a on particles with small couplings to the standard model . we discuss a model with a fermion coupled to a dark photon , with various mass relations in the dark sector ; millicharged particles ; dark-sector fermions with inelastic transitions ; the hadronic qcd axion ; and an axion-like particle that couples to standard model fermions with couplings proportional to their mass . in the fermion cases , we develop a new diagnostic for assessing when such a particle is trapped at large mixing angles . our bounds for a fermion coupled to a dark photon constrain small couplings and masses $\lesssim 200$ mev , and do not decouple for low fermion masses . they exclude parameter space that is otherwise unconstrained by existing accelerator-based and direct-detection searches . in addition , our bounds are complementary to proposed laboratory searches for sub-gev dark matter , and do not constrain several benchmark-model targets in parameter space for which the dark matter obtains the correct relic abundance from interactions with the standard model . for a millicharged particle , we exclude charges between $10^{-9}$ and a few $\times 10^{-6}$ in units of the electron charge , also story_separator_special_tag we show how the crucial gauge cancellations leading to a physical definition of an effective neutrino charge radius persist in the presence of non-vanishing fermion masses . an explicit one-loop calculation demonstrates that , as happens in the massless case , the pinch technique rearrangement of the feynman amplitudes , together with the judicious exploitation of the fundamental current relation $J^{(3)} = 2 ( J_Z + \sin^2\theta_W J_\gamma )$ , leads to a completely gauge-independent definition of the effective neutrino charge radius . using the formalism of the nielsen identities it is further proved that the same cancellation mechanism operates unaltered to all orders in perturbation theory . story_separator_special_tag we discuss the electromagnetic properties and decays of dirac and majorana neutrinos in a general class of gauge theories . specific results for the standard su(2)$_l$ × u(1) and a ( not necessarily left-right symmetric ) su(2)$_l$ × su(2)$_r$ × u(1) theory are analyzed . story_separator_special_tag the theory of neutrino mixing and neutrino oscillations , as well as the properties of massive neutrinos ( dirac and majorana ) , are reviewed . more specifically , the following topics are discussed in detail : ( i ) the possible types of neutrino mass terms ; ( ii ) oscillations of neutrinos ; ( iii ) the implications of cp invariance for the mixing and oscillations of neutrinos in vacuum ; ( iv ) possible varieties of massive neutrinos ( dirac , majorana , pseudo-dirac ) ; ( v ) the physical differences between massive dirac and massive majorana neutrinos and the possibilities of distinguishing experimentally between them ; ( vi ) the electromagnetic properties of massive neutrinos .
some of the proposed mechanisms of neutrino mass generation in gauge theories of the electroweak interaction and in grand unified theories are also discussed . the lepton-number-nonconserving processes $\mu \to e\gamma$ and $\mu \to 3e$ in theories with massive neutrinos are considered . the basic elements of the theory of neutrinoless double-$\beta$ story_separator_special_tag it is stressed that if neutrinos are massive they are probably of `` majorana '' type . this implies that their magnetic-moment form factor vanishes identically , so that the previously discussed phenomenon of spin rotation in a magnetic field would not appear to take place . we point out that majorana neutrinos can , however , have transition moments . this enables an inhomogeneous magnetic field to rotate both spin and `` flavor '' of a neutrino . in this case the spin rotation changes particle to antiparticle . the spin-flavor-rotation effect is worked out in detail . we also discuss the parametrization and calculation of the electromagnetic form factors of majorana neutrinos . our discussion takes into account the somewhat unusual quantum theory of massive majorana particles . story_separator_special_tag general formulas are given for the decay rate $\nu_2 \to \nu_1 + \gamma$ in the su(2) × u(1) model for neutrinos with a small mass . the emphasis is on distinguishing between the cases of dirac and majorana neutrinos . possible enhancements of the rate due to methods of eluding the glashow-iliopoulos-maiani suppression and due to charged higgs bosons are considered . story_separator_special_tag the search for the effects of heavy fermions in the extension of the standard model with a fourth generation is part of the experimental program of the tevatron and lhc experiments . besides being directly produced , these states drastically affect the production and decay properties of the higgs boson . in this note , we first reemphasize the known fact that in the case of a light and long-lived fourth neutrino , the present collider searches do not permit one to exclude a higgs boson with a mass below the ww threshold . in a second step , we show that the recent results from the atlas and cms collaborations , which observe an excess in the $\gamma\gamma$ and $4\ell^\pm$ search channels corresponding to a higgs boson with a mass $m_h \simeq 125$ gev , can not rule out the fourth-generation possibility if the $h \to \gamma\gamma$ decay rate is evaluated when naively implementing the leading $O ( G_F m^2 )$ electroweak corrections . including the exact next-to-leading-order electroweak corrections leads to a strong suppression of the $h \to \gamma\gamma$ rate and makes this channel unobservable with present data . finally , we point out that the observation by the tevatron story_separator_special_tag this review has four parts . in part i , we describe the reactions that produce neutrinos in the sun and the expected flux of those neutrinos on the earth . we then discuss the detection of these neutrinos , and how the results obtained differ from the theoretical expectations , leading to what is known as the solar neutrino problem . in part ii , we show how neutrino oscillations can provide a solution to the solar neutrino problem . this includes vacuum oscillations , as well as matter-enhanced oscillations .
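the commonly quoted width for the radiative decay discussed above , in terms of the transition magnetic and electric dipole moments $\mu_{21}$ and $\epsilon_{21}$ ( a standard textbook form , offered here as context rather than as the papers ' full result ) :

```latex
% radiative decay width of a heavier neutrino nu_2 into nu_1 + photon
\Gamma ( \nu_2 \to \nu_1 \gamma )
  = \frac{ \mu_{21}^{2} + \epsilon_{21}^{2} }{ 8 \pi }
    \left( \frac{ m_2^{2} - m_1^{2} }{ m_2 } \right)^{\!3}
```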
in part iii , we discuss the possibility of time variation of the neutrino flux and how a magnetic moment of the neutrino can explain the phenomenon . we also discuss particle physics models which can give rise to the required values of magnetic moments . in part iv , we present some concluding remarks and outlook for the near future . story_separator_special_tag we derive model-independent , `` naturalness '' upper bounds on the magnetic moments $\mu_\nu$ of dirac neutrinos generated by physics above the scale of electroweak symmetry breaking . in the absence of fine-tuning of effective operator coefficients , we find that current information on neutrino mass implies that $| \mu_\nu | \lesssim 10^{-14}$ bohr magnetons . this bound is several orders of magnitude stronger than those obtained from analyses of solar and reactor neutrino data and astrophysical observations . story_separator_special_tag current experimental and observational limits on the neutrino magnetic moment are reviewed . implications of the recent results from the solar and reactor neutrino experiments for the value of the neutrino magnetic moment are discussed . it is shown that spin-flavor precession in the sun is suppressed . story_separator_special_tag a general mechanism , based on very simple considerations of spin , is proposed , which has the effect of enhancing the neutrino magnetic dipole moment relative to the neutrino mass . story_separator_special_tag finite neutrino magnetic moments are consequences of nonzero neutrino masses . the particle physics aspects of the neutrino electromagnetic interactions are reviewed . the astrophysical bounds and the results from recent direct experiments are reviewed , with emphasis on the reactor neutrino experiments . future projects and prospects are surveyed . story_separator_special_tag it has been suggested that an apparent correlation of the flux of detected solar neutrinos with solar activity is due to a neutrino magnetic moment . here several terrestrial experiments that might observe the magnetic moment are considered , with emphasis on those employing reactor neutrinos . the neutrino charge radius , and prospects for observing it , are also discussed . an appendix collects all relevant neutrino scattering cross sections . story_separator_special_tag this paper proposes a method of modeling and simulation of photovoltaic arrays . the main objective is to find the parameters of the nonlinear i-v equation by adjusting the curve at three points : open circuit , maximum power , and short circuit .
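for reference , the nonlinear i-v relation being fitted is the usual single-diode model ( written here in the common textbook notation , which may differ from the paper 's ) :

$$ i = i_{pv} - i_0 \left[ \exp\!\left( \frac{ v + r_s i }{ v_t \, a } \right) - 1 \right] - \frac{ v + r_s i }{ r_p } , $$

where $i_{pv}$ is the photovoltaic current , $i_0$ the diode saturation current , $a$ the ideality constant , $v_t$ the thermal voltage of the array , and $r_s$ , $r_p$ the series and parallel resistances ; the open-circuit , maximum-power and short-circuit points fix these parameters .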
given these three points , which are provided by all commercial array data sheets , the method finds the best i-v equation for the single-diode photovoltaic ( pv ) model including the effect of the series and parallel resistances , and guarantees that the maximum power of the model matches the maximum power of the real array . with the parameters of the adjusted i-v equation , one can build a pv circuit model with any circuit simulator by using basic math blocks . the modeling method and the proposed circuit model are useful for power electronics designers who need a simple , fast , accurate , and easy-to-use modeling method for use in simulations of pv systems . in the first pages , the reader will find a tutorial on pv devices and will understand the parameters that compose the single-diode pv model . the modeling method is then introduced and presented in detail story_separator_special_tag abstract the munu detector was designed to study $\bar\nu_e \, e^-$ elastic scattering at low energy . the central component is a time projection chamber filled with cf$_4$ gas , surrounded by an anti-compton detector . the experiment was carried out at the bugey ( france ) nuclear reactor . in this letter we present the final analysis of the data recorded at 3 bar and 1 bar pressure . both the energy and the scattering angle of the recoil electron are measured . from the 3 bar data a new upper limit on the neutrino magnetic moment , $\mu_\nu < 9 \times 10^{-11} \mu_b$ at 90 % c.l . , was derived . at 1 bar electron tracks down to 150 kev were reconstructed , demonstrating the potentiality of the experimental technique for future applications in low energy neutrino physics . story_separator_special_tag a search for neutrino magnetic moments was carried out at the kuo-sheng nuclear power station at a distance of 28 m from the 2.9 gw reactor core . with a high purity germanium detector of mass 1.06 kg surrounded by scintillating nai ( tl ) and csi ( tl ) crystals as anti-compton detectors , a detection threshold of 5 kev and a background level of 1 kg$^{-1}$ kev$^{-1}$ day$^{-1}$ near threshold were achieved . details of the reactor neutrino source , experimental hardware , background understanding , and analysis methods are presented . based on 570.7 and 127.8 days of reactor on and off data , respectively , at an average reactor-on electron antineutrino flux of $6.4 \times 10^{12}$ cm$^{-2}$ s$^{-1}$ , the limit on the neutrino magnetic moment $\mu_{\nu_e} < 7.4 \times 10^{-11} \mu_b$ at 90 % confidence level was derived . indirect bounds on the $\nu_e$ radiative story_separator_special_tag a search for a nonzero neutrino magnetic moment has been conducted using 1496 live days of solar neutrino data from super-kamiokande-i . specifically , we searched for distortions to the energy spectrum of recoil electrons arising from magnetic scattering due to a nonzero neutrino magnetic moment . in the absence of a clear signal , we found $\mu_\nu \le 3.6 \times 10^{-10} \mu_b$ at 90 % c.l . by fitting to the super-kamiokande day-night spectra . the fitting took into account the effect of neutrino oscillation on the shapes of energy spectra . with additional information from other solar neutrino and kamland experiments constraining the oscillation region , a limit of $\mu_\nu \le 1.1 \times 10^{-10} \mu_b$ at 90 % c.l . was obtained .
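the signature all of these searches exploit is the magnetic-moment contribution to neutrino-electron scattering , which in the standard treatment adds incoherently to the weak cross section :

$$ \frac{ d \sigma_\mu }{ d t } = \frac{ \pi \alpha^2 }{ m_e^2 } \left( \frac{ \mu_\nu }{ \mu_b } \right)^{\!2} \left( \frac{ 1 }{ t } - \frac{ 1 }{ e_\nu } \right) , $$

with $t$ the electron recoil energy and $e_\nu$ the neutrino energy ; the $1 / t$ growth at small recoil is why low detection thresholds ( munu , texono , gemma ) and low-energy spectral distortions ( super-kamiokande ) drive the sensitivity .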
story_separator_special_tag using the new limit on the neutrino anomalous magnetic moment recently obtained by the gemma experiment on measurements of the cross section for the reactor antineutrino scattering on free electrons , we get a new direct upper bound on the neutrino millicharge , $| q_\nu | < 1.5 \times 10^{-12} e_0$ . this is a factor of 2 more stringent constraint than the previous bound obtained from the texono reactor experiment data that is included in the review of particle properties 2012 . we predict that with data from the ongoing new phase of the gemma experiment the upper bound on the neutrino millicharge will be reduced to $| q_\nu | < 3.7 \times 10^{-13} e_0$ within two years . we also predict that with the next phase of the considered experiment the upper bound on the millicharge will be reduced by an order of magnitude over the present bound and reach the level $| q_\nu | < 1.8 \times 10^{-13} e_0$ within approximately four years . story_separator_special_tag preface acknowledgments 1 : the energy-loss argument 2 : anomalous stellar energy losses bounded by observations 3 : particles interacting with electrons and baryons 4 : processes in a nuclear medium 5 : two-photon coupling of low-mass bosons 6 : particle dispersion and decays in media 7 : nonstandard neutrinos 8 : neutrino oscillations 9 : oscillations of trapped neutrinos 10 : solar neutrinos 11 : supernova neutrinos 12 : radiative particle decays from distant sources 13 : what have we learned from sn 1987a ? 14 : axions 15 : miscellaneous exotica 16 : neutrinos : the bottom line app . a. units and dimensions app . b. neutrino coupling constants app . c. numerical neutrino energy-loss rates app . d. characteristics of stellar plasmas references acronyms symbols subject index story_separator_special_tag a new type of electromagnetic radiation by a neutrino with non-zero magnetic ( and/or electric ) moment moving in background matter and electromagnetic field is considered . this radiation originates from the quantum spin flip transitions and we have named it the `` spin light of neutrino '' ( $sl\nu$ ) . an initially unpolarized neutrino beam ( an equal mixture of $\nu_l$ and $\nu_r$ ) can be converted to a totally polarized beam composed of only $\nu_r$ by the neutrino spin light in matter and electromagnetic fields . the quasi-classical theory of this radiation is developed on the basis of the generalized bargmann-michel-telegdi equation . the considered radiation is important for environments with high effective densities , $n$ , because the total radiation power is proportional to $n^4$ . the spin light of neutrino , in contrast to the cherenkov or transition radiation of neutrino in matter , does not vanish in the case that the refractive index of matter is equal to unity . the specific features of this new radiation are : ( story_separator_special_tag abstract on the basis of the exact solutions of the modified dirac equation for a massive neutrino moving in matter we develop the quantum theory of the spin light of neutrino ( $sl\nu$ ) .
the expression for the emitted photon energy is derived as a function of the density of matter for different matter compositions . the dependence of the photon energy on the helicities of the initial and final neutrino states is shown explicitly . the rate and radiation power of the $sl\nu$ in matter are obtained with the emitted photon linear and circular polarizations being accounted for . the developed quantum approach to the $sl\nu$ in matter ( which is similar to the furry representation of electrodynamics ) can be used in the studies of other processes with neutrinos in the presence of matter . story_separator_special_tag the quantum theory of the spin light of neutrino ( $sl\nu$ ) exactly accounting for the effect of the background matter is developed . contrary to the already performed studies of the $sl\nu$ , in this paper we derive expressions for the $sl\nu$ rate and power and also for the emitted photon 's energy that are valid for an arbitrary value of the matter density , including the case of a very dense matter . the spatial distribution of the radiation power and the dependence of the emitted photon 's energy on the direction of radiation are also studied in detail for the first time . we analyze the $sl\nu$ polarization properties and show that in a wide range of the neutrino momentum and density of matter the $sl\nu$ radiation is nearly totally circularly polarized . conditions for the effective $sl\nu$ photon propagation in the electron plasma are discussed . story_separator_special_tag abstract we develop the theory of spin light of neutrino in matter ( $sl\nu$ ) and include the effect of plasma influence on the emitted photon . we use the special technique based on exact solutions of particle wave equations in matter to perform all the relevant calculations , and track how the plasmon mass enters the process characteristics , including the neutrino energy spectrum , $sl\nu$ rate and power . the new feature it induces is the existence of a process threshold , for which we have found the exact expression , and the dependence of the rate and power on this threshold condition . the $sl\nu$ spatial distribution accounting for the above effects has also been obtained . these results might be of interest in connection with the recently reported hints of ultra-high energy neutrinos with $e = 1 - 10$ pev observed by icecube . story_separator_special_tag the combined effect of matter and magnetic fields on neutrino spin and flavor precession is examined . we find a potential new kind of resonant solar-neutrino conversion , $\nu_{el} \rightarrow \nu_{\mu r}$ or $\nu_{\tau r}$ ( for dirac neutrinos ) or $\nu_e \rightarrow \bar\nu_\mu$ or $\bar\nu_\tau$ ( for majorana neutrinos ) . such a resonance could help account for the lower than expected solar-neutrino $\nu_e$ flux and/or indications of an anticorrelation between fluctuations in the $\nu_e$ flux and sunspot activity . consequences of spin-flavor precession for supernova neutrinos are also briefly discussed . story_separator_special_tag abstract it is shown that in the presence of matter there can occur resonant amplification of the flavor-changing neutrino spin rotation in transverse magnetic fields , which is roughly analogous to the mikheyev-smirnov-wolfenstein effect in neutrino oscillations . possible consequences for solar neutrinos are briefly discussed .
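the effect underlying both papers is elementary : in a transverse magnetic field $b_\perp$ a dirac magnetic moment rotates the helicity , with vacuum flip probability

$$ p ( \nu_l \to \nu_r ) = \sin^2 ( \mu_\nu \, b_\perp \, l ) $$

over a path length $l$ ( natural units , constant field ) ; mass splittings and matter potentials add a phase mismatch that suppresses the conversion , and the resonance discussed above is the density at which this mismatch cancels .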
story_separator_special_tag neutrino dipole moments $\mu_\nu$ would increase the core mass of red giants at the helium flash by $\delta m_c = 0.015 \, m_\odot \times \mu_\nu / ( 10^{-12} \mu_b )$ ( where $\mu_b$ is the bohr magneton ) because of enhanced neutrino losses . existing measurements of the bolometric magnitudes of the brightest red giants in 26 globular clusters , number counts of horizontal-branch stars and red giants in 15 globular clusters , and statistical parallax determinations of field rr lyr luminosities yield $\delta m_c = 0.009 \pm 0.012 \, m_\odot$ , so that conservatively $\mu_\nu < 3 \times 10^{-12} \mu_b$ . story_separator_special_tag new exact solutions of the modified dirac equation describing a neutrino with nontrivial electromagnetic properties in extreme background conditions are obtained . within the quasi-classical treatment the effective lorentz force that describes the neutrino propagation in magnetized rotating matter is introduced . we predict the effect of the spatial separation of different types of relativistic neutrinos and antineutrinos ( different in flavors and energies ) by the magnetized rotating matter of a star . low energy neutrinos can even be trapped inside the star . we also predict two new phenomena : a new type of neutrino electromagnetic radiation ( termed light of ( milli ) charged neutrino , $lc\nu$ ) and a new mechanism of star angular velocity shift due to neutrinos escaping the star ( termed neutrino star turning mechanism , $st\nu$ ) . the possible impact of the $st\nu$ mechanism on a supernova explosion yields a new astrophysical limit on the neutrino millicharge , $q_\nu < 1.3 \times 10^{-19} e_0$ . in addition , the $st\nu$ mechanism can also be used to explain the origin of pulsar anti-glitches and ordinary glitches as well .
determinations of the motion of the sun with respect to the extra-galactic nebulae have involved a k term of several hundred kilometers which appears to be variable . explanations of this paradox have been sought in a correlation between apparent radial velocities and distances , but so far the results have not been convincing . the present paper is a re-examination of the question , based on only those nebular distances which are believed to be fairly reliable . distances of extra-galactic nebulae depend ultimately upon the application of absolute-luminosity criteria to involved stars whose types can be recognized . these include , among others , cepheid variables , novae , and blue stars involved in emission nebulosity . numerical values depend upon the zero point of the period-luminosity relation among cepheids , the other criteria merely check the order of the distances . this method is restricted to the few nebulae which are well resolved by existing instruments . a study of these nebulae , together with those in which any stars at all can be recognized , indicates the probability of an approximately uniform upper limit to the absolute luminosity of stars , in the late-type spirals and story_separator_special_tag the radboud university nijmegen in collaboration with the nova optical infrared instrumentation group at astron is currently leading the development and realization of the blackgem observing facility . the blackgem science team aims to be the first to catch the optical counterpart of a gravitational wave event . the blackgem project will put an array of three medium-sized optical telescopes at the la silla site of the european southern observatory in chile . it is uniquely equipped to achieve a combination of wide-field and high sensitivity through its array-like approach . each blackgem unit telescope is a modified dall-kirkham-type telescope consisting of a 65cm primary mirror , a 21cm spherical secondary mirror and a triplet corrector lens . the spatial resolution on the sky will be 0.56 asec/pixel and the total field-of-view per telescope is 2.7 square degrees . the main requirement is to achieve a 5-sigma sensitivity of 23rd magnitude within a 5-minute exposure under 15 m/s wind gust conditions . this demands a very stable optical system with tight control of all the error contributions . this has been realized with a spreadsheet based integrated instrument model . the model contains all relevant telescope instrument parameters and environmental story_separator_special_tag shifting the focus of type ia supernova ( sn ia ) cosmology to the near-infrared ( nir ) is a promising way to significantly reduce the systematic errors , as the strategy minimizes our reliance on the empirical width-luminosity relation and uncertain dust laws . observations in the nir are also crucial for our understanding of the origins and evolution of these events , further improving their cosmological utility . any future experiments in the rest-frame nir will require knowledge of the sn ia nir spectroscopic diversity , which is currently based on a small sample of observed spectra . along with the accompanying paper , phillips et al . ( 2018 ) , we introduce the carnegie supernova project-ii ( csp-ii ) , to follow up nearby sne ia in both the optical and the nir . in particular , this paper focuses on the csp-ii nir spectroscopy program , describing the survey strategy , instrumental setups , data reduction , sample characteristics , and future analyses on the data set . 
in collaboration with the harvard-smithsonian center for astrophysics ( cfa ) supernova group , we obtained 661 nir spectra of 157 sne ia . within this sample story_separator_special_tag we present our current best estimate of the plausible observing scenarios for the advanced ligo , advanced virgo and kagra gravitational-wave detectors over the next several years , with the intention of providing information to facilitate planning for multi-messenger astronomy with gravitational waves . we estimate the sensitivity of the network to transient gravitational-wave signals for the third ( o3 ) , fourth ( o4 ) and fifth ( o5 ) observing runs , including the planned upgrades of the advanced ligo and advanced virgo detectors . we study the capability of the network to determine the sky location of the source for gravitational-wave signals from the inspiral of binary systems of compact objects , that is , binary neutron star , neutron star - black hole , and binary black hole systems . the ability to localize the sources is given as a sky-area probability , luminosity distance , and comoving volume . the median sky localization area ( 90 % credible region ) is expected to be a few hundred square degrees for all types of binary systems during o3 with the advanced ligo and virgo ( hlv ) network . the median sky localization area will improve to story_separator_special_tag japanese encephalitis ( je ) is one of the most common zoonoses , caused by the japanese encephalitis virus ( jev ) , with high mortality and disability rates . to take timely preventive and control measures , early and rapid detection of je rna is necessary , but owing to the characteristically brief and low viraemia , je rna detection remains challenging . in this study , a real-time nucleic acid sequence-based amplification ( rt-nasba ) assay was developed for rapid and simultaneous detection of jev . four primer pairs were designed using a multiple genome alignment of all jev strains from genbank . the nasba assay was established and optimal reaction conditions were confirmed using primers and a probe targeting the ns1 gene of jev . the specificity and sensitivity of the assay were compared with rt-pcr by using serial rna and virus cultivation dilutions . the results showed that the jev rt-nasba assay was established , and robust signals could be observed in 10 min with high specificity . the limit of detection of rt-nasba was 6 copies per reaction . the assay was thus 100 to 1,000 times more sensitive than rt-pcr . the cross-reaction test was performed with other porcine pathogens , and negative story_separator_special_tag in this paper we set bounds on the radiation content of the universe and neutrino properties by using the wmap ( wilkinson microwave anisotropy probe ) five-year cmb ( cosmic microwave background ) measurements complemented with most of the existing cmb and lss ( large scale structure ) data ( wmap5+all ) , imposing also self-consistent bbn ( big bang nucleosynthesis ) constraints on the primordial helium abundance .
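the effective number of relativistic species used below parametrizes the radiation density after $e^+ e^-$ annihilation in the standard way ,

$$ \rho_r = \rho_\gamma \left[ 1 + \frac{7}{8} \left( \frac{4}{11} \right)^{4/3} n_{eff} \right] , $$

so that the three standard-model neutrinos give $n_{eff} \simeq 3.046$ ; extra relics , decays between bbn and recombination , or a lepton asymmetry all shift the fitted value .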
we consider lepton asymmetric cosmological models parametrized by the neutrino degeneracy parameter and the variation of the relativistic degrees of freedom , $n_{eff}^{oth}$ , due to possible other physical processes occurring between the bbn and structure formation epochs . we get a mean value of the effective number of relativistic neutrino species of $n_{eff} = 2.98$ ( allowed ranges 2.27 - 3.60 and 1.65 - 4.37 ) , providing an important improvement over the similar result obtained from wmap5+bao+sn+hst ( bao : baryonic acoustic oscillations ; sn : supernovae ; hst : hubble space telescope ) data ( komatsu et al ( wmap collaboration ) , 2008 astrophys . j. suppl . submitted [ 0803.0547 ] ) . we also find a strong correlation between $\omega_m h^2$ and $z_{eq}$ , showing that we observe $n_{eff}$ mainly via the effect of $z_{eq}$ , rather than via
the aim of this paper is to give the first general and abstract treatment of the algebraic properties of lie algebroids . the concept of lie algebroid was introduced in 1967 by pradines [ 18 ] , as the basic infinitesimal invariant of a differentiable groupoid ; the construction of the lie algebroid of a differentiable groupoid is a direct generalization of the construction of the lie algebra of a lie group , and [ 18 ] described a full lie theory for differentiable groupoids and lie algebroids , encompassing many phenomena in the foundations of differential geometry . ( a more detailed account and further references are given in [ 13 ] . ) however , at this stage the algebraic properties of lie algebroids were not pursued . in [ 13 , iii , section 2 , iv , section 1 ] , one of us gave a fairly detailed account of the abstract algebra of lie algebroids over a fixed base : lie algebroids are vector bundles with several additional structures , and a category of lie algebroids on a given base , and morphisms which preserve that base , has properties similar to those of the category of story_separator_special_tag the most important examples of a double vector bundle are provided by the iterated tangent and cotangent functors : $ttm$ , $tt^*m$ , $t^*tm$ , and $t^*t^*m$ . we introduce the notions of the dual double vector bundle and the dual double vector bundle morphism . theorems on canonical isomorphisms are formulated and proved . several examples are given . story_separator_special_tag we introduce and study some mixed product poisson structures on product manifolds associated to poisson lie groups and lie bialgebras . for quasitriangular lie bialgebras , our construction is equivalent to that of fusion products of quasi-poisson g-manifolds introduced by alekseev , kosmann-schwarzbach , and meinrenken . our primary examples include four series of holomorphic poisson structures on products of flag varieties and related spaces of complex semi-simple lie groups . story_separator_special_tag we show that to any poisson manifold and , more generally , to any triangular lie bialgebroid in the sense of mackenzie and xu , there correspond two differential gerstenhaber algebras in duality , one of which is canonically equipped with an operator generating the graded lie algebra bracket , i.e . with the structure of a batalin-vilkovisky algebra . story_separator_special_tag we survey the many instances of the derived bracket construction in differential geometry , lie algebroid and courant algebroid theories , and their properties . we recall and compare the constructions of buttin and vinogradov , and we prove that the vinogradov bracket is the skew-symmetrization of a derived bracket . odd ( resp. , even ) poisson brackets on supermanifolds are derived brackets of canonical even ( resp. , odd ) poisson brackets on their cotangent bundle ( resp. , parity-reversed cotangent bundle ) .
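the derived bracket construction surveyed here can be stated in one line : given a graded lie ( super ) algebra with bracket $[ \cdot , \cdot ]$ and an odd derivation $d$ with $d^2 = 0$ , one sets ( in one common sign convention )

$$ [ a , b ]_d := ( -1 )^{ |a| + 1 } \, [ d a , b ] , $$

which is in general only a loday ( leibniz ) bracket ; its skew-symmetrization is the vinogradov-type bracket mentioned above , and the courant bracket arises in this way on the cotangent side .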
lie algebras have analogous properties , and the theory of lie algebroids unifies the results valid for manifolds on the one hand , and for lie algebras on the other . we outline the role of derived brackets in the theory of `` poisson structures with background '' . story_separator_special_tag in his study of dirac structures , a notion which includes both poisson structures and closed 2-forms , t. courant introduced a bracket on the direct sum of vector fields and 1-forms . this bracket does not satisfy the jacobi identity except on certain subspaces . in this paper we systematize the properties of this bracket in the definition of a courant algebroid . this structure on a vector bundle $e \rightarrow m$ consists of an antisymmetric bracket on the sections of $e$ whose `` jacobi anomaly '' has an explicit expression in terms of a bundle map $e \rightarrow tm$ and a field of symmetric bilinear forms on $e$ . when $m$ is a point , the definition reduces to that of a lie algebra carrying an invariant nondegenerate symmetric bilinear form . for any lie bialgebroid $( a , a^* )$ over $m$ ( a notion defined by mackenzie and xu ) , there is a natural courant algebroid structure on $a \oplus a^*$ which is the drinfel 'd double of a lie bialgebra when $m$ story_separator_special_tag we complete the construction of the double lie algebroid of a double lie groupoid begun in the first paper of this title . we show that the lie algebroid structure of an la-groupoid may be prolonged to the lie algebroid of its lie groupoid structure ; in the case of a double groupoid this prolonged structure for either la-groupoid is canonically isomorphic to the lie algebroid structure associated with the other ; this extends many canonical isomorphisms associated with iterated tangent and cotangent structures . we also show that the cotangent of a double lie groupoid is a symplectic double groupoid , and that the side groupoids of any symplectic double groupoid are poisson groupoids in duality . thus any double lie groupoid gives rise to a dual pair of poisson groupoids . story_separator_special_tag the core diagram of a double lie algebroid consists of the core of the double lie algebroid , together with the two core-anchor maps to the sides of the double lie algebroid . if these two core-anchors are surjective , then the double lie algebroid and its core diagram are called transitive . this paper establishes an equivalence between transitive double lie algebroids and transitive core diagrams over a fixed base manifold . in other words , it proves that a transitive double lie algebroid is completely determined by its core diagram . the comma double lie algebroid associated to a morphism of lie algebroids is defined .
if the latter morphism is one of the core-anchors of a transitive core diagram , then the comma double algebroid can be quotiented out by the second core-anchor , yielding a transitive double lie algebroid , which is the one that is equivalent to the transitive core diagram.brown 's and mackenzie 's equivalence of transitive core diagrams ( of lie groupoids ) with transitive double lie groupoids is then used in order to show that a transitive double lie algebroid with integrable sides and core is automatically integrable to a transitive double lie story_separator_special_tag we show that the manin triple characterization of lie bialgebras in terms of the drinfel d double may be extended to arbitrary poisson manifolds and indeed lie bialgebroids by using double cotangent bundles , rather than the direct sum structures ( courant algebroids ) utilized for similar purposes by liu , weinstein and xu . this is achieved in terms of an abstract notion of double lie algebroid ( where double is now used in the ehresmann sense ) which unifies many iterated constructions in differential geometry . story_separator_special_tag we prove that the cotangent of a double lie groupoid s has itself a double groupoid structure with sides the duals of associated lie algebroids , and double base the dual of the lie algebroid of the core of s. using this , we prove a result outlined by weinstein in 1988 , that the side groupoids of a general symplectic double groupoid are poisson groupoids in duality . further , we prove that any double lie groupoid gives rise to a pair of poisson groupoids ( and thus of lie bialgebroids ) in duality . to handle the structures involved effectively we extend to this context the dualities and canonical isomorphisms for tangent and cotangent structures of the author and ping xu . story_separator_special_tag this text is meant to be a brief overview of the topics announced in the title and is based on my talk in vienna ( august/september 2007 ) . it does not contain new results ( except probably for a remark concerning q-manifold homology , which i wish to elaborate elsewhere ) . `` mackenzie theory '' stands for the rich circle of notions that have been put forward by kirill mackenzie ( solo or in collaboration ) : double structures such as double lie groupoids and double lie algebroids , lie bialgebroids and their doubles , nontrivial dualities for double and multiple vector bundles , etc . `` q-manifolds '' are ( super ) manifolds with a homological vector field , i.e. , a self-commuting odd vector field . they may have an extra z-grading ( called weight ) not necessarily linked with the z_2-grading ( parity ) . i discuss double lie algebroids ( discovered by mackenzie ) and explain how this quite complicated fundamental notion is equivalent to a very simple one if the language of q-manifolds is used . in particular , it shows how the two seemingly different notions of a `` drinfeld double '' story_separator_special_tag the canonical involution of a double ( =iterated ) tangent bundle may be dualized in different ways to yield relations between the tulczyjew diffeomorphism , the poisson anchor associated with the standard symplectic structure on the cotangent space , and the reversal diffeomorphism . we show that the constructions which yield these maps extend very generally to the double lie algebroids of double lie groupoids , where they play a crucial role in the relations between double lie algebroids and lie bialgebroids . 
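for concreteness , the bracket in courant 's original example ( sections of $tm \oplus t^*m$ ) , whose `` jacobi anomaly '' the courant algebroid axioms above encode , reads in one common convention

$$ [ x + \xi , y + \eta ] = [ x , y ] + \mathcal l_x \eta - \mathcal l_y \xi - \tfrac{1}{2} \, d ( \iota_x \eta - \iota_y \xi ) , $$

with the symmetric pairing $\langle x + \xi , y + \eta \rangle = \tfrac{1}{2} ( \iota_x \eta + \iota_y \xi )$ and anchor the projection onto $tm$ .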
story_separator_special_tag we recall the basic theory of double vector bundles and the canonical pairing of their duals , introduced by the author and by konieczna and urbanski . we then show that the relationship between a double vector bundle and its two duals can be understood simply in terms of an associated cotangent triple vector bundle structure . in particular , we show that the dihedral group of the triangle acts on this triple via forms of the isomorphisms r , introduced by the author and ping xu . we then consider the three duals of a general triple vector bundle and show that the corresponding group is neither the dihedral group of the square nor the symmetry group on four symbols . story_separator_special_tag the word ` double ' was used by ehresmann to mean ` an object x in the category of all x ' . double categories , double groupoids and double vector bundles are instances , but the notion of lie algebroid can not readily be doubled in the ehresmann sense , since a lie algebroid bracket can not be defined diagrammatically . in this paper we use the duality of double vector bundles to define a notion of double lie algebroid , and we show that this abstracts the infinitesimal structure ( at second order ) of a double lie groupoid . we further show that the cotangent of either lie algebroid in a lie bialgebroid has a double lie algebroid structure , and that a pair of lie algebroid structures on dual vector bundles forms a lie bialgebroid if and only if the structures which they canonically induce on their cotangents form a double lie algebroid . in particular , the drinfel 'd double of a lie bialgebra has a double lie algebroid structure . we also show that matched pairs of lie algebroids , as used by j.-h. lu in the classification of poisson group actions , are story_separator_special_tag lie bialgebras arise as infinitesimal invariants of poisson lie groups . a lie bialgebra is a lie algebra g with a lie algebra structure on the dual g which is compatible with the lie algebra g in a certain sense . for a poisson group g , the multiplicative poisson structure induces a lie algebra structure on the lie algebra dual g which makes ( g , g ) into a lie bialgebra . in fact , there is a one-one correspondence between poisson lie groups and lie bialgebras if the lie groups are assumed to be simply connected [ 7 ] , [ 16 ] , [ 19 ] . the importance of poisson lie groups themselves arises in part from their role as classical limits of quantum groups [ 8 ] and in part because they provide a class of poisson structures for which the realization problem is tractable [ 15 ] . poisson groupoids were introduced by weinstein [ 24 ] as a generalization of both poisson lie groups and the symplectic groupoids which arise in the integration of arbitrary poisson manifolds [ 4 ] , [ 11 ] . he noted that the lie algebroid dual story_separator_special_tag this paper is devoted to studying some properties of the courant algebroids : we explain the so-called `` conducting bundle construction '' and use it to attach the courant algebroid to dixmier-douady gerbe ( following ideas of p. severa ) . we remark that wznw-poisson condition of klimcik and strobl ( math.sg/0104189 ) is the same as dirac structure in some particular courant algebroid . 
we propose the construction of the lie algebroid on the loop space starting from the lie algebroid on the manifold and conjecture that this construction applied to the dirac structure above should give the lie algebroid of symmetries in the wznw-poisson $\sigma$-model ; we show that it is indeed true in the particular case of the poisson $\sigma$-model . story_separator_special_tag optically addressable spins in wide-bandgap semiconductors are a promising platform for exploring quantum phenomena . while colour centres in three-dimensional crystals such as diamond and silicon carbide were studied in detail , they were not observed experimentally in two-dimensional ( 2d ) materials . here , we report spin-dependent processes in the 2d material hexagonal boron nitride ( hbn ) . we identify fluorescence lines associated with a particular defect , the negatively charged boron vacancy ( $\mathrm v_{\mathrm b}^-$ ) , showing a triplet ( s = 1 ) ground state and zero-field splitting of ~3.5 ghz . we establish that this centre exhibits optically detected magnetic resonance at room temperature and demonstrate its spin polarization under optical pumping , which leads to optically induced population inversion of the spin ground state , a prerequisite for coherent spin-manipulation schemes . our results constitute a step forward in establishing 2d hbn as a prime platform for scalable quantum technologies , with potential for spin-based quantum information and sensing applications . an ensemble of spins associated with an intrinsic defect of two-dimensional hexagonal boron story_separator_special_tag we define graded manifolds as a version of supermanifolds endowed with an additional $\mathbb z$-grading in the structure sheaf , called weight ( not linked with parity ) . examples are ordinary supermanifolds , vector bundles over supermanifolds , double vector bundles , iterated constructions like $ttm$ , etc . i give a construction of doubles for graded $qs$- and graded $qp$-manifolds ( graded manifolds endowed with a homological vector field and a schouten/poisson bracket ) . the relation with drinfeld 's lie bialgebras and their doubles is explained . graded $qs$-manifolds can be considered , roughly , as `` generalized lie bialgebroids '' . the double for them is closely related with the analog of drinfeld 's double for lie bialgebroids recently suggested by roytenberg . lie bialgebroids , as a generalization of lie bialgebras over some base manifold , were defined by mackenzie and p. xu . graded $qp$-manifolds give an odd version of all this ; in particular , they contain `` odd analogs story_separator_special_tag we give a construction of homotopy algebras based on higher derived brackets . more precisely , the data include a lie superalgebra with a projector on an abelian subalgebra satisfying a certain axiom , and an odd element $\Delta$ . given this , we introduce an infinite sequence of higher brackets on the image of the projector , and explicitly calculate their jacobiators in terms of $\Delta^2$ . this allows one to control the higher jacobi identities in terms of the order of $\Delta^2$ . examples include stasheff 's strongly homotopy lie algebras and variants of homotopy batalin-vilkovisky algebras . there is a generalization with $[ \Delta , \cdot ]$ replaced by an arbitrary odd derivation . we discuss applications and links with other constructions .
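voronov 's higher brackets are explicit enough to quote : with $p$ the projector onto the abelian subalgebra and $\Delta$ the odd element ,

$$ \{ a_1 , \dots , a_n \} := p \big[ \dots \big[ [ \Delta , a_1 ] , a_2 \big] , \dots , a_n \big] , $$

and the $n$-th jacobiator is a similar expression built from $\Delta^2$ , so $\Delta^2 = 0$ makes the image of $p$ a strongly homotopy lie algebra .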
we show that the field equations for the supercoordinates and the self-dual antisymmetric tensor field derived from the recently constructed kappa-invariant action for the m theory five-brane are equivalent to the equations of motion obtained in the doubly supersymmetric geometrical approach at the worldvolume component level . story_separator_special_tag recently an action based on lie 3-algebras was proposed to describe m2-branes . we study the case of infinite dimensional lie 3-algebras based on the nambu-poisson structure of three dimensional manifolds . we show that the model contains self-dual 2-form gauge fields in 6 dimensions , and the result may be interpreted as the m5-brane world-volume action . story_separator_special_tag we investigate the bagger-lambert-gustavsson model associated with the nambu-poisson algebra as a theory describing a single m5-brane . we argue that the model is a gauge theory associated with the volume-preserving diffeomorphisms of the three-dimensional internal space . we derive gauge transformations , actions , supersymmetry transformations , and equations of motion in terms of six-dimensional fields . the equations of motion are written in gauge-covariant form , and the equations for tensor fields have a manifest self-dual structure . we demonstrate that the double dimensional reduction of the model reproduces the non-commutative u(1) gauge theory on a d4-brane with a small non-commutativity parameter . we establish relations between parameters in the blg model and those in m-theory . this shows that the model describes an m5-brane in a large c-field background . story_separator_special_tag recently a three-dimensional field theory was derived that is consistent with all the symmetries expected of the worldvolume action for multiple m2-branes . in this note we examine several physical predictions of this model and show that they are in agreement with expected m2-brane dynamics . in particular , we discuss the quantization of the chern-simons coefficient , the vacuum moduli space , a massive deformation leading to fuzzy three-sphere vacua , and a possible large n limit . in this large n limit , the fuzzy funnel solution correctly reproduces the mass of an m5-brane . story_separator_special_tag we give two independent arguments why the classical membrane fields should be loops . the first argument comes from how we may construct selfdual strings in the m5 brane from a loop space version of the nahm equations . the second argument is that there appears to be no infinite set of finite-dimensional lie algebras ( such as $su(n)$ for any $n$ ) that satisfies the algebraic structure of the membrane theory . story_separator_special_tag in this article we give a concise review of recent progress in our understanding of lie 3-algebras and their application to the bagger-lambert-gustavsson model describing multiple m2-branes in m theory . story_separator_special_tag we show that there exists a cut-off version of the nambu-poisson bracket which defines a finite dimensional lie 3-algebra . the algebra still satisfies the fundamental identity and thus produces the n=8 supersymmetric blg type equation of motion for multiple m2 branes . by counting the number of moduli and the degrees of freedom , we derive an entropy formula which scales as $n^{3/2}$ , as expected for multiple m2 branes . story_separator_special_tag in a previous paper we provided a consistent quantization of open strings ending on d-branes with a background $b$ field .
in this letter , we show that the same result can also be obtained using the more traditional method of dirac 's constrained quantization . we also extend the discussion to the fermionic sector . story_separator_special_tag in this note we explain how world-volume geometries of d-branes can be reconstructed within the microscopic framework where d-branes are described through boundary conformal field theory . we extract the ( non-commutative ) world-volume algebras from the operator product expansions of open string vertex operators . for branes in a flat background with constant non-vanishing b-field , the operator products are computed perturbatively to all orders in the field strength . the resulting series coincides with kontsevich 's presentation of the moyal product . after extending these considerations to fermionic fields we conclude with some remarks on the generalization of our approach to curved backgrounds . story_separator_special_tag we extend earlier ideas about the appearance of noncommutative geometry in string theory with a nonzero b-field . we identify a limit in which the entire string dynamics is described by a minimally coupled ( supersymmetric ) gauge theory on a noncommutative space , and discuss the corrections away from this limit . our analysis leads us to an equivalence between ordinary gauge fields and noncommutative gauge fields , which is realized by a change of variables that can be described explicitly . this change of variables is checked by comparing the ordinary dirac-born-infeld theory with its noncommutative counterpart . we obtain a new perspective on noncommutative gauge theory on a torus , its t-duality , and morita equivalence . we also discuss the d0/d4 system , the relation to m-theory in dlcq , and a possible noncommutative version of the six-dimensional ( 2,0 ) theory . story_separator_special_tag we investigate the deformation of d-brane world-volumes in curved backgrounds . we calculate the leading corrections to the boundary conformal field theory involving the background fields , and in particular we study the correlation functions of the resulting system . this allows us to obtain the world-volume deformation , identifying the open string metric and the noncommutative deformation parameter . the picture that unfolds is the following : when the gauge invariant combination $\omega = b + f$ is constant one obtains the standard moyal deformation of the brane world-volume . similarly , when $d\omega = 0$ one obtains the noncommutative kontsevich deformation , physically corresponding to a curved brane in a flat background . when the background is curved , $h = d\omega \neq 0$ , we find that the relevant algebraic structure is still based on the kontsevich expansion , which now defines a nonassociative star product with an $a_\infty$ homotopy associative algebraic structure . we then recover , within this formalism , some known results of matrix theory in curved backgrounds . in particular , we show how the effective action obtained in this framework describes , as expected , the dielectric effect of d-branes . the story_separator_special_tag based on results about open string correlation functions , a nonassociative algebra was proposed in a recent paper for d-branes in a background with nonvanishing $h$ . we show that our associative algebra obtained by quantizing the endpoints of an open string in an earlier work can also be used to reproduce the same correlation functions .
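the moyal product referred to throughout these abstracts is the constant-$\theta$ deformation

$$ ( f \star g ) ( x ) = f ( x ) \, \exp\!\left( \frac{ i }{ 2 } \theta^{ij} \overleftarrow{\partial}_i \overrightarrow{\partial}_j \right) g ( x ) = f g + \frac{ i }{ 2 } \theta^{ij} \partial_i f \, \partial_j g + o ( \theta^2 ) , $$

with $\theta^{ij}$ constant and antisymmetric ; in the zero-slope limit of seiberg and witten , $\theta$ is set by the inverse of the $b$-field along the brane .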
the novelty of this algebra is that functions on the d-brane do not form a closed algebra . this poses a problem for defining gauge transformations on such noncommutative spaces . we propose a resolution by generalizing the description of gauge transformations which naturally involves global symmetries . this can be understood in the context of matrix theory . story_separator_special_tag we investigate , in a certain decoupling limit , the effect of having a constant c-field on the m-theory five-brane using an open membrane probe . we define an open membrane metric for the five-brane that remains non-degenerate in the limit . the canonical quantisation of the open membrane boundary leads to a noncommutative loop space which is a functional analogue of the noncommutative geometry that occurs for d-branes . story_separator_special_tag we analyze open membranes immersed in a magnetic three-form field-strength $c$ . while cylindrical membranes in the absence of $c$ behave like tensionless strings , when the $c$ flux is present the strings polarize into thin membrane ribbons , locally orthogonal to the momentum density , thus providing the strings with an effective tension . the effective dynamics of the ribbons can be described by a simple deformation of the schild action for null strings . interactions become non-local due to the polarization , and lead to a deformation of the string field theory , whereby string vertices receive a phase factor proportional to the volume swept out by the ribbons . in a particular limit , this reduces to the non-commutative loop space found previously . story_separator_special_tag in recent years there has been some progress in understanding how one might model the interactions of branes in m-theory despite not having a fundamental perturbative description . the goal of this review is to describe different approaches to m-theory branes and their interactions . this includes : a review of m-theory branes themselves and their properties ; brane interactions ; the self-dual string and its properties ; the role of anomalies in learning about brane systems ; the recent work of basu and harvey with subsequent developments ; and how these complementary approaches might fit together . story_separator_special_tag we construct a simple physical model of a particle moving on the infinite noncommutative 2-plane . the model consists of a pair of opposite charges moving in a strong magnetic field . in addition , the charges are connected by a spring . in the limit of large magnetic field , the charges are frozen into the lowest landau level . interactions of such particles include the moyal bracket phases characteristic of field theory on noncommutative space . this simple system arises in lightcone quantization of open strings attached to d-branes in an antisymmetric tensor background . we use the model to work out the general form of lightcone vertices from string splitting . we then consider feynman diagrams in uncompactified noncommutative yang-mills theories and find that for all planar diagrams the commutative and noncommutative theories are the same . this means the large n theories are equivalent in the 't hooft limit . the convergence of non-planar diagrams is improved . story_separator_special_tag based on an explicit computation of the scattering amplitude of four open membranes in a constant 3-form background , we construct a toy model of the field theory for open membranes in the large c field limit . it is a generalization of the noncommutative field theories which describe open strings in a constant 2-form flux .
the noncommutativity due to the b-field background is now replaced by a nonassociative triplet product . the triplet product satisfies the consistency conditions of lattice 3d gravity , which is inherent in the world-volume theory of open membranes . we show the uv/ir mixing of the toy model by computing some feynman diagrams . inclusion of the internal degree of freedom is also possible through the idea of the cubic matrix . story_separator_special_tag taking the liouville theorem as a guiding principle , we propose a possible generalization of classical hamiltonian dynamics to a three-dimensional phase space . the equation of motion involves two hamiltonians and three canonical variables . the fact that the euler equations for a rotator can be cast into this form suggests the potential usefulness of the formalism . in this article we study its general properties and the problem of quantization . story_separator_special_tag we outline basic principles of a canonical formalism for the nambu mechanics , a generalization of hamiltonian mechanics proposed by yoichiro nambu in 1973 . it is based on the notion of the nambu bracket , which generalizes the poisson bracket ( a binary operation on classical observables on the phase space ) to a multiple operation of higher order $n \geq 3$ . nambu dynamics is described by the phase flow given by the nambu-hamilton equations of motion , a system of odes which involves $n - 1$ hamiltonians . we introduce the fundamental identity for the nambu bracket ( a generalization of the jacobi identity ) as a consistency condition for the dynamics . we show that the nambu bracket structure defines a hierarchy of infinite families of subordinated structures of lower order , including the poisson bracket structure , which satisfy certain matching conditions . the notion of the nambu bracket enables one to define nambu-poisson manifolds ( phase spaces for the nambu mechanics ) , which turn out to be more rigid than poisson manifolds ( phase spaces for the hamiltonian mechanics ) . we introduce the analog of the action form and the action principle for the nambu mechanics . in its formulation , dynamics of loops ( ( $n - 2$ )-dimensional chains for the general story_separator_special_tag the notion of $n$-ary algebras , that is , vector spaces with a multiplication concerning $n$ arguments , $n \geq 3$ , became fundamental since the works of nambu . here we first present general notions concerning $n$-ary algebras and associative $n$-ary algebras . then we will be interested in the notion of $n$-lie algebras , initiated by filippov , which is attached to the nambu algebras . we study the particular case of nilpotent or filiform $n$-lie algebras to obtain a beginning of classification . this notion of $n$-lie algebra admits a natural generalization in strong homotopy $n$-lie algebras , in which the maurer-cartan calculus is well adapted . story_separator_special_tag motivated by the recent proposal of an n = 8 supersymmetric action for multiple m2-branes , we study the lie 3-algebra in detail . in particular , we focus on the fundamental identity and the relation with the nambu-poisson bracket . some new algebras not known in the literature are found . next we consider cubic matrix representations of lie 3-algebras . we show how to obtain higher dimensional representations by tensor products for a generic 3-algebra . a criterion of reducibility is presented . we also discuss the application of lie 3-algebras to the membrane physics , including the basu-harvey equation and the bagger-lambert model .
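the canonical nambu bracket behind these constructions is the jacobian determinant $\{ f , g , h \} = \epsilon^{ijk} \, \partial_i f \, \partial_j g \, \partial_k h$ on $\mathbb r^3$ , and the fundamental identity states that $\{ f_1 , f_2 , \cdot \}$ acts as a derivation of the bracket itself . a minimal sympy check of the identity ( the polynomial test functions below are illustrative choices , not taken from any of the papers ) :

    # check that the canonical nambu 3-bracket on r^3 satisfies the
    # fundamental identity ; test functions are arbitrary polynomials .
    import sympy as sp

    x, y, z = sp.symbols('x y z')

    def nb(f, g, h):
        # canonical nambu bracket : jacobian determinant of (f, g, h)
        return sp.Matrix([[sp.diff(w, v) for v in (x, y, z)]
                          for w in (f, g, h)]).det()

    f1, f2 = x*y, z**2 + x           # the two 'hamiltonians'
    g1, g2, g3 = x + y*z, y**2, x*z  # arbitrary observables

    lhs = nb(f1, f2, nb(g1, g2, g3))
    rhs = (nb(nb(f1, f2, g1), g2, g3)
           + nb(g1, nb(f1, f2, g2), g3)
           + nb(g1, g2, nb(f1, f2, g3)))

    print(sp.simplify(lhs - rhs))    # prints 0 : the identity holds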
story_separator_special_tag n-lie algebra structures on smooth function algebras , given by means of multi-differential operators , are studied . necessary and sufficient conditions for the sum and the wedge product of two $n$-poisson structures to be again multi-poisson are found . it is proven that the canonical $n$-vector on the dual of an n-lie algebra g is n-poisson iff dim ( g ) is not greater than n + 1 . the problem of compatibility of two n-lie algebra structures is analyzed and the compatibility relations connecting hereditary structures of a given n-lie algebra are obtained . ( n + 1 )-dimensional n-lie algebras are classified and their `` elementary particle-like '' structure is discovered . some simple applications to dynamics are discussed . story_separator_special_tag the paper provides a survey of known results on geometric aspects related to nambu-poisson brackets . story_separator_special_tag it is frequently useful to construct dual descriptions of theories containing antisymmetric tensor fields by introducing a new potential whose curl gives the dual field strength , thereby interchanging field equations with bianchi identities . we describe a general procedure for constructing actions containing both potentials at the same time , such that the dual relationship of the field strengths arises as an equation of motion . the price for doing this is the sacrifice of manifest lorentz invariance or general coordinate invariance , though both symmetries can be realized nonetheless . there are various examples of global symmetries that have been realized as symmetries of field equations but not actions . these can be elevated to symmetries of the action by our method . the main example that we focus on is the low-energy effective action description of the heterotic string theory compactified on a six-torus to four dimensions . we show that the sl ( 2 , r ) symmetry , whose sl ( 2 , z ) subgroup has been conjectured to be an exact symmetry of the full string theory , can be realized on the action in a way that brings out a remarkable similarity story_separator_special_tag we reveal non-manifest gauge and so ( 1,5 ) lorentz symmetries in the lagrangian description of a six-dimensional free chiral field derived from the bagger-lambert-gustavsson model in arxiv:0804.3629 and make this formulation covariant with the use of a triplet of auxiliary scalar fields . we consider the coupling of this self-dual construction to gravity and its supersymmetrization . in the case of the non-linear model of arxiv:0805.2898 we solve the equations of motion of the gauge field , prove that its non-linear field strength is self-dual and find a gauge-covariant form of the non-linear action . issues of the relation of this model to the known formulations of the m5-brane worldvolume theory are discussed . story_separator_special_tag abstract we obtain a bps soliton of the m-theory fivebrane 's equations of motion representing a supersymmetric self-dual string . the resulting solution is then dimensionally reduced and used to obtain 0-brane and ( p - 3 )-brane solitons on dp-branes . story_separator_special_tag we study bps solutions for a self-dual string and a neutral string in the m5-brane worldvolume theory with constant three-form field . we further generalize such solitons to superpose with a calibrated surface . we also study a traveling wave on a calibrated surface in the constant three-form field background .
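the self-duality constraint that recurs in these abstracts is , for the 3-form field strength $h$ of the chiral 2-form on the six-dimensional worldvolume ( linearized case , lorentzian signature ) ,

$$ h_{\mu\nu\rho} = \frac{ 1 }{ 3! } \, \epsilon_{\mu\nu\rho\alpha\beta\gamma} \, h^{\alpha\beta\gamma} , $$

which halves the degrees of freedom and is precisely the condition that obstructs a conventional manifestly covariant action , motivating the non-manifestly covariant and auxiliary-field formulations above .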
story_separator_special_tag we analyze bps equations for string-like configurations derived from the m5-brane worldvolume action with a nambu-poisson structure constructed in ref . [ 1 , 2 ] . we solve the bps equations up to the first order in the parameter g which characterizes the strength of the nambu-poisson bracket . we compare our solutions to previously constructed bps string solitons in the conventional description of the m5-brane in a constant three-form background via the seiberg-witten map , and find agreement . story_separator_special_tag we show that the nahm equation which describes a fuzzy d3-brane in the presence of a b-field can be derived as a boundary condition of the f1-strings ending on the d3-brane , and that the modifications of the original nahm equation by a b-field can be understood in terms of the noncommutative geometry of the d3-brane . naturally this is consistent with the alternative derivation by quantising the open strings in the b-field background . we then consider a configuration of multiple m2-branes ending on an m5-brane with a constant 3-form c-field . by analogy with the case of strings ending on a d3-brane with a constant b-field , one can expect that this system can be described in terms of the boundary of the m2-branes moving within a certain kind of quantum geometry on the m5-brane worldvolume . by repeating our analysis , we show that the analogue of the b-field modified nahm equation , the c-field modified basu-harvey equation , can also be understood as a boundary condition of the m2-branes . we then compare this to the m5-brane bion description and show that the two descriptions match provided we postulate a new type of quantum geometry on the story_separator_special_tag we present the light-cone gauge fixed lagrangian for the m5-brane ; it has a residual 'exotic ' gauge invariance with the group of 5-volume preserving diffeomorphisms , sdiff5 , as gauge group . for an m5-brane whose topology involves a closed 3-manifold m3 , we find an infinite tension limit that yields an so ( 8 ) -invariant ( 1 + 2 ) -dimensional field theory with 'exotic ' sdiff3 gauge invariance . we show that this field theory is the carrollian limit of the nambu bracket realization of the 'blg ' model for multiple m2-branes . story_separator_special_tag we develop a general formalism for the construction , in d-dimensional minkowski space , of gauge theories for which the gauge group is the infinite-dimensional group sdiffn of volume-preserving diffeomorphisms of some closed n-dimensional manifold . we then focus on the d = 3 sdiff3 superconformal gauge theory describing a condensate of m2-branes ; in particular , we derive its n = 8 superfield equations from a pure-spinor superspace action , and we describe its relationship to the d = 3 sdiff2 super-yang-mills theory describing a condensate of d2-branes .
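schematically , and suppressing normalizations and convention-dependent factors used in the original papers , the two boundary conditions compared above take the following forms : for f1-strings ending on a d3-brane , the nahm equation

$$ \frac{ d x^i }{ d s } = \frac{ i }{ 2 } \, \epsilon^{ijk} \, [ x^j , x^k ] , \qquad i , j , k = 1 , 2 , 3 , $$

and , for m2-branes ending on an m5-brane , the basu-harvey equation , in which the matrix commutator is replaced by a quantum 3-bracket ,

$$ \frac{ d x^i }{ d s } \propto \epsilon^{ijkl} \, [ x^j , x^k , x^l ] , \qquad i , j , k , l = 1 , \dots , 4 . $$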
spam is considered an invasion of privacy . its changeable structures and variability raise the need for new spam classification techniques . the present study proposes using bayesian additive regression trees ( bart ) for spam classification and evaluates its performance against other classification methods , including logistic regression , support vector machines , classification and regression trees , neural networks , random forests , and naive bayes . bart in its original form is not designed for such problems , hence we modify bart and make it applicable to classification problems . we evaluate the classifiers using three spam datasets : ling-spam , pu1 , and spambase , to determine the predictive accuracy and the false positive rate . story_separator_special_tag parametric model-based regression imputation is commonly applied to missing-data problems , but is sensitive to misspecification of the imputation model . little and an ( 2004 ) proposed a semiparametric approach called penalized spline propensity prediction ( pspp ) , where the variable with missing values is modeled by a penalized spline ( p-spline ) of the response propensity score , which is the logit of the estimated probability of being missing given the observed variables . variables other than the response propensity are included parametrically in the imputation model . however , they only considered point estimation based on single imputation with pspp . we consider here three approaches to standard error estimation incorporating the uncertainty due to nonresponse : ( a ) standard errors based on the asymptotic variance of the pspp estimator , ignoring sampling error in estimating the response propensity ; ( b ) standard errors based on the bootstrap method ; and ( c ) multiple imputation-based standard errors using draws . story_separator_special_tag on september 14 , 2015 at 09:50:45 utc the two detectors of the laser interferometer gravitational-wave observatory simultaneously observed a transient gravitational-wave signal . the signal sweeps upwards in frequency from 35 to 250 hz with a peak gravitational-wave strain of $ 1.0 \times 10^{-21} $ . it matches the waveform predicted by general relativity for the inspiral and merger of a pair of black holes and the ringdown of the resulting single black hole . the signal was observed with a matched-filter signal-to-noise ratio of 24 and a false alarm rate estimated to be less than 1 event per 203,000 years , equivalent to a significance greater than $ 5.1 \sigma $ . the source lies at a luminosity distance of $ 410^{+160}_{-180} $ mpc corresponding to a redshift $ z = 0.09^{+0.03}_{-0.04} $ . in the source frame , the initial black hole masses are $ 36^{+5}_{-4} \, m_\odot $ and $ 29^{+4}_{-4} \, m_\odot $ , and the final black hole mass is $ 62^{+4}_{-4} \, m_\odot $ , with $ 3.0^{+0.5}_{-0.5} \, m_\odot c^2 $ radiated in gravitational waves story_separator_special_tag this report provides a descriptive comparison of data from the strategic highway research program 2 ( shrp 2 ) naturalistic driving study ( nds ) sample and national data . the primary objective of the shrp 2 nds is to support analyses relating crash risk to driver , vehicle , roadway , and environmental characteristics . since age is one of the most important driver characteristics , this objective is best supported by adequate sample sizes across all age groups . the national population of drivers has the greatest number of drivers in the middle age groups and progressively fewer in the younger and older ages .
in contrast , the nds oversampled younger and older drivers . in addition , the nds oversampled newer-model-year vehicles because these vehicles provided useful data through their vehicle networks . it is important for users of the nds data to have information on the relationship of the nds sample to the national population . in general , many statistics taken directly from the nds sample will not be nationally representative unless they are adjusted to account for relevant characteristics of the nds sample . story_separator_special_tag survey researchers routinely conduct studies that use different methods of data collection and inference . but for at least the past 60 years , the probability-sampling framework has been used in most surveys . more recently , concerns about coverage and nonresponse coupled with rising costs have led some to wonder whether non-probability sampling methods might be an acceptable alternative , at least under some conditions ( groves 2006 ; savage and burrows 2007 ) . a wide range of non-probability designs exist and are being used in various settings , including case-control studies , clinical trials , evaluation research story_separator_special_tag the goal of this article is to construct doubly robust ( dr ) estimators in ignorable missing data and causal inference models . in a missing data model , an estimator is dr if it remains consistent when either ( but not necessarily both ) a model for the missingness mechanism or a model for the distribution of the complete data is correctly specified . because with observational data one can never be sure that either a missingness model or a complete data model is correct , perhaps the best that can be hoped for is to find a dr estimator . dr estimators , in contrast to standard likelihood-based or ( nonaugmented ) inverse probability-weighted estimators , give the analyst two chances , instead of only one , to make a valid inference . in a causal inference model , an estimator is dr if it remains consistent when either a model for the treatment assignment mechanism or a model for the distribution of the counterfactual data is correctly specified . because with observational data one can never be sure that a model for the treatment assignment mechanism or a model for the counterfactual data is correct , inference story_separator_special_tag executive summary . introduction to the study . what are big data ? errors impairing inference from big data sources . framework for focusing on selectivity . selectivity issues for different data sources . methods for correcting for selectivity . story_separator_special_tag the ability to conduct surveys using opt-in web respondents has raised concerns about whether these samples are valid . probability sampling theory is not applicable because the units are not subject to being sampled with a known and non-zero probability of selection . frameworks have been proposed for web opt-in surveys , but these generally have features that are not well suited for general-purpose surveys . this paper proposes a model-based framework for making inferences from non-probability samples that we refer to as a compositional approach . the paper outlines the assumptions required for making inferences from these types of samples , and suggests some evaluation measures for assessing the assumptions . story_separator_special_tag the second strategic highway research program is conducting the largest-ever naturalistic driving study at six sites .
the high-technology program will support a comprehensive assessment of how driver behavior and performance interact with roadway , environmental , vehicular , and human factors and how these interactions affect collision risk . the information will support new and improved countermeasures to prevent traffic collisions and injuries . story_separator_special_tag considerable recent interest has focused on doubly robust estimators for a population mean response in the presence of incomplete data , which involve models for both the propensity score and the regression of outcome on covariates . the usual doubly robust estimator may yield severely biased inferences if neither of these models is correctly specified and can exhibit nonnegligible bias if the estimated propensity score is close to zero for some observations . we propose alternative doubly robust estimators that achieve comparable or improved performance relative to existing methods , even with some estimated propensity scores close to zero . story_separator_special_tag let $ \mathcal{s} = \{ s \} $ be the set of subsets of $ \{ 1 , \dots , N \} $ such that each $ s \in \mathcal{s} $ contains $ n $ elements . we consider in this paper only designs $ p ( s ) $ with fixed sample size , that is , $ p ( s ) > 0 $ only if $ s \in \mathcal{s} $ and $ \sum_{ s \in \mathcal{s} } p ( s ) = 1 $ , where $ \sum_{ s \in \mathcal{s} } $ denotes summation over $ s \in \mathcal{s} $ . let $ p ( s ) $ be independent of the $ y_k $ , where $ y_k $ is the value of the variable of interest for the population unit labelled $ k $ ( $ k = 1 , \dots , N $ ) . we consider a superpopulation model : $ y_1 , \dots , y_N $ is a realization of $ Y_1 , \dots , Y_N $ with joint distribution $ \xi $ . if $ \mathcal{e} ( \cdot ) $ denotes expectation with respect to $ \xi $ , let , for $ k , l = 1 , \dots , N $ , $ \xi $ be further specified by story_separator_special_tag we establish a general framework for statistical inferences with nonprobability survey samples when relevant auxiliary information is available from a probability survey sample . we develop a rigorous story_separator_special_tag we develop a bayesian `` sum-of-trees '' model , named bart , where each tree is constrained by a prior to be a weak learner . fitting and inference are accomplished via an iterative backfitting mcmc algorithm . this model is motivated by ensemble methods in general , and boosting algorithms in particular . like boosting , each weak learner ( i.e. , each weak tree ) contributes a small amount to the overall model . however , our procedure is defined by a statistical model : a prior and a likelihood , while boosting is defined by an algorithm . this model-based approach enables a full and accurate assessment of uncertainty in model predictions , while remaining highly competitive in terms of predictive accuracy . story_separator_special_tag we develop a bayesian `` sum-of-trees '' model where each tree is constrained by a regularization prior to be a weak learner , and fitting and inference are accomplished via an iterative bayesian backfitting mcmc algorithm that generates samples from a posterior . effectively , bart is a nonparametric bayesian regression approach which uses dimensionally adaptive random basis elements . motivated by ensemble methods in general , and boosting algorithms in particular , bart is defined by a statistical model : a prior and a likelihood . this approach enables full posterior inference including point and interval estimates of the unknown regression function as well as the marginal effects of potential predictors . by keeping track of predictor inclusion frequencies , bart can also be used for model-free variable selection .
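the sum-of-trees model that underlies bart in the two abstracts above can be written compactly as

$$ Y = \sum_{ j = 1 }^{ m } g ( x ; T_j , M_j ) + \varepsilon , \qquad \varepsilon \sim N ( 0 , \sigma^2 ) , $$

where $ T_j $ is the $ j $ -th binary tree , $ M_j $ is its set of leaf values , and $ g ( x ; T_j , M_j ) $ returns the leaf value that tree $ j $ assigns to $ x $ ; the regularization prior shrinks each $ g $ toward a small contribution , so that each tree acts as a weak learner .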
bart 's many features are illustrated with a bake-off against competing methods on 42 different data sets , with a simulation experiment and on a drug discovery classification problem . story_separator_special_tag in this paper i review three key technology-related trends : 1 ) big data , 2 ) non-probability samples , and 3 ) mobile data collection . i focus on the implications of these trends for survey research and the research profession . with regard to big data , i review a number of concerns that need to be addressed , and argue for a balanced and careful evaluation of the role that big data can play in the future . i argue that these developments are unlikely to replace traditional survey data collection , but will supplement surveys and expand the range of research methods . i also argue for the need for the survey research profession to adapt to changing circumstances . story_separator_special_tag more and more data are being produced by an increasing number of electronic devices physically surrounding us and on the internet . the large amount of data and the high frequency at which they are produced have resulted in the introduction of the term big data . because these data reflect many different aspects of our daily lives and because of their abundance and availability , big data sources are very interesting from an official statistics point of view . this article discusses the exploration of both opportunities and challenges for official statistics associated with the application of big data . experiences gained with analyses of large amounts of dutch traffic loop detection records and dutch social media messages are described to illustrate the topics characteristic of the statistical analysis and use of big data . story_separator_special_tag suppose that a forecaster sequentially assigns probabilities to events . he is well calibrated if , for example , of those events to which he assigns a probability 30 percent , the long-run proportion that actually occurs turns out to be 30 percent . we prove a theorem to the effect that a coherent bayesian expects to be well calibrated , and consider its destructive implications for the theory of coherence . story_separator_special_tag outside of the survey sampling literature , samples are often assumed to be generated by a simple random sampling process that produces independent and identically distributed ( iid ) samples . many statistical methods are developed largely in this iid world . application of these methods to data from complex sample surveys without making allowance for the survey design features can lead to erroneous inferences . hence , much time and effort have been devoted to developing the statistical methods to analyze complex survey data and account for the sample design . this issue is particularly important when generating synthetic populations using finite population bayesian inference , as is often done in missing data or disclosure risk settings , or when combining data from multiple surveys . by extending previous work in the finite population bayesian bootstrap literature , we propose a method to generate synthetic populations from a posterior predictive distribution in a fashion that inverts the complex sampling design features and generates simple random samples from a superpopulation point of view , adjusting the complex data so that they can be analyzed as simple random samples .
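as a minimal illustration of the finite population bayesian bootstrap idea described above , the sketch below generates synthetic populations for the simple-random-sampling special case only ; the design-inverting weighted version in the abstract additionally requires the survey weights and design features . the function name and the simulated inputs are illustrative assumptions :

```python
import numpy as np

def bayesian_bootstrap_population(y, pop_size, rng):
    # rubin's bayesian bootstrap: dirichlet(1, ..., 1) weights over the
    # observed units approximate a draw from the posterior predictive
    # distribution of the population under simple random sampling.
    probs = rng.dirichlet(np.ones(len(y)))
    return rng.choice(y, size=pop_size, replace=True, p=probs)

rng = np.random.default_rng(0)
y = rng.normal(10.0, 2.0, size=150)  # a hypothetical srs of n = 150
synthetic = [bayesian_bootstrap_population(y, 10_000, rng) for _ in range(200)]
posterior_means = np.array([pop.mean() for pop in synthetic])
print(posterior_means.mean(), posterior_means.std())  # posterior mean and sd
```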
we consider a simulation study with a stratified , clustered unequal-probability story_separator_special_tag although selecting a probability sample has been the standard for decades when making inferences from a sample to a finite population , incentives are increasing to use nonprobability samples . in a world of big data , large amounts of data are available that are faster and easier to collect than are probability samples . design-based inference , in which the distribution for inference is generated by the random mechanism used by the sampler , cannot be used for nonprobability samples . one alternative is quasi-randomization , in which pseudo-inclusion probabilities are estimated based on covariates available for sample and nonsample units . another is superpopulation modeling for the analytic variables collected on the sample units , in which the model is used to predict values for the nonsample units . we discuss the pros and cons of each approach . story_separator_special_tag this paper proposes a regression model where the response is beta distributed , using a parameterization of the beta law that is indexed by mean and dispersion parameters . the proposed model is useful for situations where the variable of interest is continuous and restricted to the interval ( 0 , 1 ) and is related to other variables through a regression structure . the regression parameters of the beta regression model are interpretable in terms of the mean of the response and , when the logit link is used , of an odds ratio , unlike the parameters of a linear regression that employs a transformed response . estimation is performed by maximum likelihood . we provide closed-form expressions for the score function , for fisher 's information matrix and its inverse . hypothesis testing is performed using approximations obtained from the asymptotic normality of the maximum likelihood estimator . some diagnostic measures are introduced . finally , practical applications that employ real data are presented and discussed . story_separator_special_tag the need for new methods to deal with big data is a common theme in most scientific fields , although its definition tends to vary with the context . statistical ideas are an essential part of this , and as a partial response , a thematic program on statistical inference , learning and models in big data was held in 2015 in canada , under the general direction of the canadian statistical sciences institute , with major funding from , and most activities located at , the fields institute for research in mathematical sciences . this paper gives an overview of the topics covered , describing challenges and strategies that seem common to many different areas of application and including some examples of applications to make these challenges and strategies more concrete . story_separator_special_tag the general principles of bayesian data analysis imply that models for survey responses should be constructed conditional on all variables that affect the probability of inclusion and nonresponse , which are also the variables used in survey weighting and clustering . however , such models can quickly become very complicated , with potentially thousands of poststratification cells . it is then a challenge to develop general families of multilevel probability models that yield reasonable bayesian inferences . we discuss these issues in the context of several ongoing public health and social surveys .
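the poststratification step implicit in the multilevel modeling strategy sketched above combines cell-level model estimates $ \hat{\theta}_j $ with known population cell counts $ N_j $ :

$$ \hat{\theta}^{\mathrm{ps}} = \frac{ \sum_{ j = 1 }^{ J } N_j \, \hat{\theta}_j }{ \sum_{ j = 1 }^{ J } N_j } , $$

where the cells $ j = 1 , \dots , J $ are formed by crossing the variables that affect inclusion and nonresponse ; the multilevel model exists precisely to stabilize the $ \hat{\theta}_j $ when there are thousands of sparse cells .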
this work is currently open-ended , and we conclude with thoughts on how research could proceed to solve these problems . story_separator_special_tag although survey research is a young field relative to many scientific domains , it has already experienced three distinct stages of development . in the first era ( 1930-1960 ) , the founders of the field invented the basic components of the design of data collection and the tools to produce the statistical information from surveys . as they were inventing the method , they were also building the institutions that conduct surveys in the private , academic , and government sectors . the second era ( 1960-1990 ) witnessed a vast growth in the use of the survey method . this growth was aided by the needs of the u.s. federal government to monitor the effects of investments in human and physical infrastructure , the growth of the quantitative social sciences , and the use of quantitative information to study consumer behaviors . the third era ( 1990 and forward ) witnessed the declines in survey participation rates , the growth of alternative modes of data collection , the weakening of sampling frames , and the growth of continuously produced process data from digital systems in all sectors , but especially those emanating from the internet . throughout story_separator_special_tag we propose an estimator that is more robust than doubly robust estimators , based on weighting complete cases using weights other than inverse probability when estimating the population mean of a response variable subject to ignorable missingness . we allow multiple models for both the propensity score and the outcome regression . our estimator is consistent if any of the multiple models is correctly specified . such multiple robustness against model misspecification is a significant improvement over double robustness , which allows only one propensity score model and one outcome regression model . our estimator attains the semiparametric efficiency bound when one propensity score model and one outcome regression model are correctly specified , without requiring knowledge of which models are correct . story_separator_special_tag nonresponse is a very common phenomenon in survey sampling . nonignorable nonresponse , that is , a response mechanism that depends on the values of the variable subject to nonresponse , is the most difficult type of nonresponse to handle . this article develops a robust estimation approach to estimating equations ( ees ) by incorporating the modelling of nonignorably missing data , the generalized method of moments ( gmm ) method and the imputation of ees via the observed data rather than the imputed missing values when some responses are subject to nonignorable missingness . based on a particular semiparametric logistic model for nonignorable missing responses , this paper proposes modified ees to calculate the conditional expectation under nonignorable missingness . we can apply the gmm to infer the parameters . the advantage of our method is that it replaces nonparametric kernel-smoothing with a parametric sampling importance resampling ( sir ) procedure to avoid nonparametric kernel-smoothing problems with high-dimensional covariates . the proposed method is shown to be more robust than some current approaches in simulations . story_separator_special_tag causal inference in observational studies typically requires making comparisons between groups that are dissimilar .
for instance , researchers investigating the role of a prolonged duration of breastfeeding on child outcomes may be forced to make comparisons between women with substantially different characteristics on average . in the extreme , there may exist neighborhoods of the covariate space where there are not sufficient numbers of both groups of women ( those who breastfed for prolonged periods and those who did not ) to make inferences about those women . this is referred to as lack of common support . problems can arise when we try to estimate causal effects for units that lack common support ; thus we may want to avoid inference for such units . if ignorability is satisfied with respect to a set of potential confounders , then identifying whether , or for which units , the common support assumption holds is an empirical question . however , in the high-dimensional covariate space often required to satisfy ignorability , such identification may not be trivial . existing methods used to address this problem often require reliance on parametric assumptions and most , if not all , ignore the information story_separator_special_tag this article explores some of the challenges that arise when trying to implement propensity score strategies to answer a causal question using data with a large number of covariates . we discuss choices in propensity score estimation strategies , matching and weighting implementation strategies , balance diagnostics , and final analysis models . we demonstrate the wide range of estimates that can result from different combinations of these choices . finally , an alternative estimation strategy is presented that may have benefits in terms of simplicity and reliability . these issues are explored in the context of an empirical example that uses data from the early childhood longitudinal study , kindergarten cohort to investigate the potential effect of grade retention after the 1st-grade year on subsequent cognitive outcomes . story_separator_special_tag propensity score methods are an important tool to help reduce confounding in non-experimental studies and produce more accurate causal effect estimates . most propensity score methods assume that covariates are measured without error . however , covariates are often measured with error . recent work has shown that ignoring such error could lead to bias in treatment effect estimates . in this paper , we consider an additional complication : that of differential measurement error across treatment groups , such as can occur if a covariate is measured differently in the treatment and control groups . we propose two flexible bayesian approaches for handling differential measurement error when estimating average causal effects using propensity score methods . we consider three scenarios : systematic ( i.e. , a location shift ) , heteroscedastic ( i.e. , different variances ) , and mixed ( both systematic and heteroscedastic ) measurement errors . we also explore various prior choices ( i.e. , weakly informative or point mass ) on the sensitivity parameters related to the differential measurement error . we present results from simulation studies evaluating the performance of the proposed methods and apply these approaches to an example estimating the effect of story_separator_special_tag background : the purpose of this study was to examine the association between secondary task involvement and risk of crash and near-crash involvement among older drivers using naturalistic driving data .
methods : data from drivers aged 70 years or older in the strategic highway research program ( shrp2 ) naturalistic driving study database were utilized . the personal vehicle of study participants was equipped with four video cameras enabling recording of the driver and the road environment . secondary task involvement during a crash or near-crash event was compared to periods of noncrash involvement in a case-crossover study design . conditional logistic regression was used to generate odds ratios ( ors ) and 95 % confidence intervals ( ci ) . results : overall , engaging in any secondary task was not associated with crash ( or = 0.94 , 95 % ci 0.68-1.29 ) or near-crash ( or = 1.08 , 95 % ci 0.79-1.50 ) risk . the risk of a major crash event with cell phone use was 3.79 times higher than the risk with no cell phone use ( 95 % ci 1.00-14.37 ) . other glances into the interior of the vehicle were associated with an increased risk of story_separator_special_tag applications frequently involve logistic regression analysis with clustered data where there are few positive outcomes in some of the independent variable categories . for example , an application is given here that analyzes the association of asthma with various demographic variables and risk factors using data from the third national health and nutrition examination survey , a weighted multi-stage cluster sample . although there are 742 asthma cases in all ( out of 18 395 individuals ) , for one of the categories of one of the independent variables there are only 25 asthma cases ( out of 695 individuals ) . generalized wald and score hypothesis tests , which use appropriate cluster-level variance estimators , and a bootstrap hypothesis test have been proposed for testing logistic regression coefficients with cluster samples . when there are few positive outcomes , simulations presented in this paper show that these tests can sometimes have either inflated or very conservative levels . a simulation-based method is proposed for testing logistic regression coefficients with cluster samples when there are few positive outcomes . this testing methodology is shown to compare favorably with the generalized wald and score tests and the bootstrap hypothesis test story_separator_special_tag when outcomes are missing for reasons beyond an investigator 's control , there are two different ways to adjust a parameter estimate for covariates that may be related both to the outcome and to missingness . one approach is to model the relationships between the covariates and the outcome and use those relationships to predict the missing values . another is to model the probabilities of missingness given the covariates and incorporate them into a weighted or stratified estimate . doubly robust ( dr ) procedures apply both types of model simultaneously and produce a consistent estimate of the parameter if either of the two models has been correctly specified . in this article , we show that dr estimates can be constructed in many ways . we compare the performance of various dr and non-dr estimates of a population mean in a simulated example where both models are incorrect but neither is grossly misspecified . methods that use inverse probabilities as weights , whether they are dr or not , are sensitive to misspecification of the propensity model when some estimated propensities are small . many dr methods perform better than simple inverse-probability weighting .
none of the dr methods story_separator_special_tag a two-step bayesian propensity score approach is introduced that incorporates prior information in the propensity score equation and outcome equation without the problems associated with simultaneous bayesian propensity score approaches . the corresponding variance estimators are also provided . the two-step bayesian propensity score is provided for three methods of implementation : propensity score stratification , weighting , and optimal full matching . three simulation studies and one case study are presented to elaborate the proposed two-step bayesian propensity score approach . results of the simulation studies reveal that greater precision in the propensity score equation yields better recovery of the frequentist-based treatment effect . a slight advantage is shown for the bayesian approach in small samples . results also reveal that greater precision around the wrong treatment effect can lead to seriously distorted results . however , greater precision around the correct treatment effect parameter yields quite good results , with slight improvement seen with greater precision in the propensity score equation . a comparison of coverage rates for the conventional frequentist approach and proposed bayesian approach is also provided . the case study reveals that credible intervals are wider than frequentist confidence intervals when priors are non-informative . story_separator_special_tag randomized experiments are considered the gold standard for causal inference , as they can provide unbiased estimates of treatment effects for the experimental participants . however , researchers and policymakers are often interested in using a specific experiment to inform decisions about other target populations . in education research , increasing attention is being paid to the potential lack of generalizability of randomized experiments , as the experimental participants may be unrepresentative of the target population of interest . this paper examines whether generalization may be assisted by statistical methods that adjust for observed differences between the experimental participants and members of a target population . the methods examined include approaches that reweight the experimental data so that participants more closely resemble the target population and methods that utilize models of the outcome . two simulation studies and one empirical analysis investigate and compare the methods ' performance . one simulation uses purely simulated data while the other utilizes data from an evaluation of a school-based dropout prevention program . our simulations suggest that machine learning methods outperform regression-based methods when the required structural ( ignorability ) assumptions are satisfied . when these assumptions are violated , all of the story_separator_special_tag statistical inference with missing data requires assumptions about the population or about the response probability . doubly robust ( dr ) estimators use both relationships to estimate the parameters of interest , so that they are consistent even when one of the models is misspecified . in this paper , we propose a method of computing propensity scores that leads to dr estimation . in addition , we discuss dr variance estimation so that the resulting inference is doubly robust . some asymptotic properties are discussed . results from two limited simulation studies are also presented .
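a minimal sketch of the aipw form of doubly robust estimation that recurs in the abstracts above , with generic scikit-learn models standing in for the propensity and outcome models ; the function name , model choices , and data layout are illustrative assumptions , not taken from any cited paper :

```python
import numpy as np
from sklearn.linear_model import LinearRegression, LogisticRegression

def aipw_mean(x, y, r):
    # x: (n, p) covariates for all units; r: (n,) response indicator (0/1);
    # y: (n,) outcomes, used only where r == 1.
    ps = LogisticRegression().fit(x, r).predict_proba(x)[:, 1]   # response propensity
    m = LinearRegression().fit(x[r == 1], y[r == 1]).predict(x)  # outcome regression
    resid = np.zeros(len(r), dtype=float)
    resid[r == 1] = (y[r == 1] - m[r == 1]) / ps[r == 1]         # ipw residual correction
    return np.mean(m + resid)  # consistent if either model is correctly specified
```

the estimator reduces to the outcome-regression prediction when the residual term vanishes , and to pure inverse-probability weighting when the outcome model predicts identically zero ; as several of the abstracts above caution , it can still behave poorly when some estimated propensities are near zero .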
story_separator_special_tag the authors propose a new ratio imputation method using response probabilities . their estimator can be justified either under the response model or under the imputation model ; it is thus doubly protected against the failure of either of these models . the authors also propose a variance estimator that can be justified under the two models . their methodology is applicable whether the response probabilities are estimated or known . a small simulation study illustrates their technique . imputation is a commonly used method of compensating for item nonresponse in sample surveys . reasons for conducting imputation are to facilitate analyses using complete data analysis methods , to ensure that the results obtained by different analyses are consistent with one another , and to reduce nonresponse bias . kalton ( 1983 ) and groves , dillman , eltinge & little ( 2002 ) provide a comprehensive overview of imputation methods in survey sampling . many imputation methods such as ratio imputation or regression imputation use auxiliary story_separator_special_tag this paper presents theoretical results on combining non-probability and probability survey samples through mass imputation , an approach originally proposed by rivers ( 2007 ) as sample matching without rigorous theoretical justification . under suitable regularity conditions , we establish the consistency of the mass imputation estimator and derive its asymptotic variance formula . variance estimators are developed using either linearization or bootstrap . the finite-sample performance of the mass imputation estimator is investigated through simulation studies and an application to analyzing a non-probability sample collected by the pew research centre . story_separator_special_tag combining information from two or more independent surveys is a problem frequently encountered in survey sampling . we consider the case of two independent surveys , where a large sample from survey 1 collects only auxiliary information and a much smaller sample from survey 2 provides information on both the variables of interest and the auxiliary variables . we propose a model-assisted projection method of estimation based on a working model , but the reference distribution is design-based . we generate synthetic or proxy values of a variable of interest by first fitting the working model , relating the variable of interest to the auxiliary variables , to the data from survey 2 and then predicting the variable of interest associated with the auxiliary variables observed in survey 1 . the projection estimator of a total is simply obtained from the survey 1 weights and associated synthetic values . we identify the conditions for the projection estimator to be asymptotically unbiased . domain estimation using the projection method is also considered . replication variance estimators are obtained by augmenting the synthetic data file for survey 1 with additional synthetic columns associated with the columns of replicate weights . results from a story_separator_special_tag the statistical challenges in using big data for making valid statistical inference in the finite population have been well documented in the literature .
these challenges are due primarily to statistical bias arising from undercoverage of the big data source in representing the population of interest and from measurement errors in the variables available in the data set . by stratifying the population into a big data stratum and a missing data stratum , we can estimate the missing data stratum by using a fully responding probability sample , and hence the population as a whole , by using a data integration estimator . by expressing the data integration estimator as a regression estimator , we can handle measurement errors in the variables in the big data and also in the probability sample . we also propose a fully nonparametric classification method for identifying the overlapping units and develop a bias-corrected data integration estimator under misclassification errors . finally , we develop a two-step regression data integration estimator to deal with measurement errors in the probability sample . an advantage of the approach advocated in this paper is that we do not have to make unrealistic missing at random assumptions for the story_separator_special_tag combining information from different sources is an important practical problem in survey sampling . using a hierarchical area-level model , we establish a framework to integrate auxiliary information to improve state-level area estimates . the best predictors are obtained by the conditional expectations of latent variables given observations , and an estimate of the mean squared prediction error is discussed . sponsored by the national agricultural statistics service of the us department of agriculture , the proposed model is applied to the planted crop acreage estimation problem by combining information from three sources , including the june area survey obtained by a probability-based sampling of lands , administrative data about the planted acreage and the cropland data layer , which is a commodity-specific classification product derived from remote sensing data . the proposed model combines the available information at a sub-state level called the agricultural statistics district and aggregates to improve state-level estimates of planted acreages for different crops . supplementary materials accompanying this paper appear on-line . story_separator_special_tag the development of big data is set to be a significant disruptive innovation in the production of official statistics , offering a range of opportunities , challenges and risks to the work of national statistical institutions ( nsis ) . this paper provides a synoptic overview of these issues in detail , mapping out the various pros and cons of big data for producing official statistics , examining the work to date by nsis in formulating a strategic and operational response to big data , and plotting some suggestions with respect to on-going change management needed to address the use of big data for official statistics . story_separator_special_tag two distinct types of models are used for handling nonresponse in survey sampling theory . in a response ( or quasi-randomization ) model , the propensity of survey response is modeled as a random process , an additional phase of sample selection . in a parametric ( or superpopulation ) model , the survey data are themselves modeled . these two models can be used simultaneously in the estimation of a population mean so that one provides some protection against the potential for failure in the other . two different estimators are discussed in this article .
the first is a regression estimator that is both unbiased under the parametric model and nearly quasi-design unbiased under the response model . the second is a direct expansion estimator with imputed missing values . the imputed values are such that the estimator is both nearly quasi-design unbiased and unbiased under the combination of the parametric model and the original sampling design . the article includes a discussion of variance estimation with the g . story_separator_special_tag calibration weighting can be used to adjust for unit nonresponse and/or coverage errors under appropriate quasi-randomization models . alternative calibration adjustments that are asymptotically identical in a purely sampling context can diverge when used in this manner . introducing instrumental variables into calibration weighting makes it possible for nonresponse ( say ) to be a function of a set of characteristics other than those in the calibration vector . when the calibration adjustment has a nonlinear form , a variant of the jackknife can remove the need for iteration in variance estimation . story_separator_special_tag when calibration weighting is used to adjust for unit nonresponse in a sample survey , the response/nonresponse mechanism is often assumed to be a function of a set of covariates , which we call model variables . these model variables usually also serve as the benchmark variables in the calibration equation . in principle , however , the model variables do not have to coincide with the benchmark variables . since the model-variable values need only be known for the respondents , this allows the treatment of what is usually considered nonignorable nonresponse in the prediction approach to survey sampling . one can invoke either a quasi-randomization or prediction approach to justify calibration weighting as a means for adjusting for nonresponse . both frameworks rely on unverifiable model assumptions , and both require large samples to produce nearly unbiased estimators even when those assumptions hold . we will explore these issues theoretically using a joint framework and with an empirical study . story_separator_special_tag big data pose several interesting and new challenges to statisticians and others who want to extract information from data . as groves pointedly commented , the era is appropriately called big data as opposed to big information , because there is a lot of work for analysts before information can be gained from auxiliary traces of some process that is going on in society . the analytic challenges most often discussed are those related to three of the v 's that are used to characterize big data . the volume of truly massive data requires expansion of processing techniques that match modern hardware infrastructure , cloud computing with appropriate optimization mechanisms , and re-engineering of storage systems . the velocity of the data calls for algorithms that allow learning and updating on a continuous basis , and of course the computing infrastructure to do so . finally , the variety of the data structures requires statistical methods that more easily allow for the combination of different data types collected at different levels , sometimes with a temporal and geographic structure .
however , when it comes to privacy and confidentiality , the challenges of extracting ( meaningful ) information story_separator_special_tag propensity score adjustment ( psa ) has been suggested as an adjustment approach for volunteer panel web survey data . psa attempts to decrease , if not remove , the biases arising from noncoverage , nonprobability sampling , and nonresponse in volunteer panel web surveys . although psa is an appealing method , its application in web survey practice is not well documented , and its effectiveness is not well understood . this study attempts to provide an overview of the psa application by demystifying its performance for web surveys . findings are three-fold : ( a ) psa decreases bias but increases variance , ( b ) it is critical to include covariates that are highly related to the study outcomes , and ( c ) the role of nondemographic variables does not seem critical to improving psa . story_separator_special_tag collecting data using probability samples can be expensive , and response rates for many household surveys are decreasing . the increasing availability of large data sources opens new opportunities for statisticians to use the information in survey data more efficiently by combining survey data with information from these other sources . we review some of the work done to date on statistical methods for combining information from multiple data sources , discuss the limitations and challenges for different methods that have been proposed , and describe research that is needed for combining survey estimates . story_separator_special_tag missing attributes are ubiquitous in causal inference , as they are in most applied statistical work . in this paper , we consider various sets of assumptions under which causal inference is possible despite missing attributes and discuss corresponding approaches to average treatment effect estimation , including generalized propensity score methods and multiple imputation . across an extensive simulation study , we show that no single method systematically outperforms others . we find , however , that doubly robust modifications of standard methods for average treatment effect estimation with missing data repeatedly perform better than their non-doubly robust baselines ; for example , doubly robust generalized propensity score methods beat inverse-weighting with the generalized propensity score . this finding is reinforced in an analysis of an observational study on the effect on mortality of tranexamic acid administration among patients with traumatic brain injury in the context of critical care management . here , doubly robust estimators recover confidence intervals that are consistent with evidence from randomized trials , whereas non-doubly robust estimators do not . story_separator_special_tag the national public health system of brazil ( sistema único de saúde or sus ) is based on the principles of integrating health promotion , protection and rehabilitation for everyone [ 1 ] . it offers primary , secondary and tertiary care , including oral health care at each level . the family health strategy ( fhs ) , as a means of delivering care , was introduced in 2011 to administer multidisciplinary teams of physicians , nurses , auxiliary nurses , community health workers and occasionally dentists .
the teams are organised geographically to cover populations of up to 1000 households each , focused on community-based care with domiciliary visits [ 1 ] . through this strategy almost 100 million brazilians ( ~50 % of the population ) of all age groups get dental care annually with a high level of satisfaction [ 2 , 3 ] . geriatric dentistry became a speciality recognised by the brazilian dental council in 2001 and , in 2016 , there were 271 registered dental geriatricians . in 2016 , there were more than 279 000 general dentists in brazil , mostly working in private practice , and in the story_separator_special_tag statisticians are increasingly posed with thought-provoking and even paradoxical questions , challenging our qualifications for entering the statistical paradises created by big data . by developing measures for data quality , this article suggests a framework to address such a question : which one should i trust more : a 1 % survey with 60 % response rate or a self-reported administrative dataset covering 80 % of the population ? a 5-element euler-formula-like identity shows that for any dataset of size $ n $ , probabilistic or not , the difference between the sample average $ \bar{x}_n $ and the population average $ \bar{x}_N $ is the product of three terms : ( 1 ) a data quality measure , $ \rho_{R , X} $ , the correlation between $ x_j $ and the response/recording indicator $ r_j $ ; ( 2 ) a data quantity measure , $ \sqrt{ ( N - n ) / n } $ , where $ N $ is the population size ; and ( 3 ) a problem difficulty measure , $ \sigma_X $ , the standard deviation of $ x $ . this decomposition provides multiple insights : ( i ) probabilistic sampling ensures high data quality by controlling $ \rho_{R , X} $ at the level of $ N^{-1/2} $ ; ( ii ) when we lose this control story_separator_special_tag big data are a big challenge for finite population inference . lack of control over data-generating processes by researchers in the absence of a known random selection mechanism may lead to biased estimates . further , larger sample sizes increase the relative contribution of selection bias to squared or absolute error . one approach to mitigate this issue is to treat big data as a random sample and estimate the pseudo-inclusion probabilities through a benchmark survey with a set of relevant auxiliary variables common to the big data . since the true propensity model is usually unknown , and big data tend to be poor in the variables that fully govern the selection mechanism , the use of flexible non-parametric models seems to be essential . traditionally , a weighted logistic model is recommended to account for the sampling weights in the benchmark survey when estimating the propensity scores . however , handling weights is a hurdle when seeking a broader range of predictive methods . to further protect against model misspecification , we propose using an alternative pseudo-weighting approach that allows us to fit more flexible modern predictive tools such as bayesian additive regression trees ( bart ) , story_separator_special_tag the results of observational studies are often disputed because of nonrandom treatment assignment . for example , patients at greater risk may be overrepresented in some treatment group . this paper discusses the central role of propensity scores and balancing scores in the analysis of observational studies . the propensity score is the ( estimated ) conditional probability of assignment to a particular treatment given a vector of observed covariates .
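to make the propensity score just defined concrete , a minimal sketch of estimating it and using it for subclassification , one of the applications this abstract goes on to list ; the function, its quintile default, and the assumption that every stratum contains both treated and control units are illustrative, not from the cited paper :

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def subclassification_ate(x, t, y, n_strata=5):
    # estimated propensity score e(x) = p(t = 1 | x)
    e = LogisticRegression().fit(x, t).predict_proba(x)[:, 1]
    # subclassify units on quantiles of the estimated score
    edges = np.quantile(e, np.linspace(0.0, 1.0, n_strata + 1))
    strata = np.clip(np.searchsorted(edges, e, side="right") - 1, 0, n_strata - 1)
    ate = 0.0
    for s in range(n_strata):
        in_s = strata == s  # assumes both treated and control units fall in stratum s
        diff = y[in_s & (t == 1)].mean() - y[in_s & (t == 0)].mean()
        ate += diff * in_s.mean()  # weight stratum effects by stratum share
    return ate
```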
both large and small sample theory show that adjustment for the scalar propensity score is sufficient to remove bias due to all observed covariates . applications include : matched sampling on the univariate propensity score , which is equal percent bias reducing under more general conditions than required for discriminant matching ; multivariate adjustment by subclassification on balancing scores , where the same subclasses are used to estimate treatment effects for all outcome variables and in all subpopulations ; and visual representation of multivariate adjustment by a two-dimensional plot . story_separator_special_tag consider a study whose design calls for the study subjects to be followed from enrollment ( time $ t = 0 $ ) to time $ t = T $ , at which point a primary endpoint of interest $ y $ is to be measured . the design of the study also calls for measurements on a vector $ v ( t ) $ of covariates to be made at one or more times $ t $ during the interval $ [ 0 , T ) $ . we are interested in making inferences about the marginal mean $ \mu_0 $ of $ y $ when some subjects drop out of the study at random times $ q $ prior to the common fixed end of follow-up time $ T $ . the purpose of this article is to show how to make inferences about $ \mu_0 $ when the continuous drop-out time $ q $ is modeled semiparametrically and no restrictions are placed on the joint distribution of the outcome and other measured variables . in particular , we consider two models for the conditional hazard of drop-out given $ ( \bar{v} ( t ) , y ) $ , where $ \bar{v} ( t ) $ denotes the history of the process $ v ( t ) $ through time $ t $ , $ t \in [ 0 , T ) $ . story_separator_special_tag a systematic literature review of papers on big data in healthcare published between 2010 and 2015 was conducted . this paper reviews the definition , process , and use of big data in healthcare management . unstructured data are growing much faster than semi-structured and structured data . 90 percent of big data is in the form of unstructured data . the major steps of big data management in the healthcare industry are data acquisition , data storage , data management , data analysis and data visualization . recent research targets big data visualization tools . in this paper the authors analysed the effective tools used for visualization of big data and suggest new visualization tools to manage big data in the healthcare industry . this article will be helpful for understanding the processes and use of big data in healthcare management . story_separator_special_tag high-dimensional data provide many potential confounders that may bolster the plausibility of the ignorability assumption in causal inference problems . propensity score methods are powerful causal inference tools , which are popular in health care research and are particularly useful for high-dimensional data . recent interest has surrounded a bayesian treatment of propensity scores in order to flexibly model the treatment assignment mechanism and summarize posterior quantities while incorporating variance from the treatment model . we discuss methods for bayesian propensity score analysis of binary treatments , focusing on modern methods for high-dimensional bayesian regression and the propagation of uncertainty . we introduce a novel and simple estimator for the average treatment effect that capitalizes on conjugacy of the beta and binomial distributions .
through simulations , we show the utility of horseshoe priors and bayesian additive regression trees paired with our new estimator , while demonstrating the importance of including variance from the treatment regression model . an application to cardiac stent data with almost 500 confounders and 9000 patients illustrates the approaches and facilitates comparison with existing alternatives . as measured by a falsifiability endpoint , we improved confounder adjustment compared with past observational research on the same problem story_separator_special_tag the rise of big data changes the context in which organisations producing official statistics operate . big data provides opportunities , but in order to make optimal use of big data , a number of challenges have to be addressed . this stimulates increased collaboration between national statistical institutes , big data holders , businesses and universities . in time , this may lead to a shift in the role of statistical institutes in the provision of high-quality and impartial statistical information to society . in this paper , the changes in context , the opportunities , the challenges and the way to collaborate are addressed . the collaboration between the various stakeholders will involve each partner building on and contributing different strengths . for national statistical offices , traditional strengths include , on the one hand , the ability to collect data and combine data sources with statistical products and , on the other hand , their focus on quality , transparency and sound methodology . in the big data era of competing and multiplying data sources , they continue to have a unique knowledge of official statistical production methods . and their impartiality and respect for privacy as story_separator_special_tag as connected autonomous vehicles ( cavs ) enter the fleet , there will be a long period when these vehicles will have to interact with human drivers . one of the challenges for cavs is that human drivers do not communicate their decisions well . fortunately , the kinematic behavior of a human-driven vehicle may be a good predictor of driver intent within a short time frame . we analyzed the kinematic time series data ( e.g. , speed ) for a set of drivers making left turns at intersections to predict whether the driver would stop before executing the turn . we used principal components analysis ( pca ) to generate independent dimensions that explain the variation in vehicle speed before a turn . these dimensions remained relatively consistent throughout the maneuver , allowing us to compute independent scores on these dimensions for different time windows throughout the approach to the intersection . we then linked these pca scores to whether a driver would stop before executing a left turn using random intercept bayesian additive regression trees . five additional road and observable vehicle characteristics were included to enhance prediction . our model achieved an area under the story_separator_special_tag the development of driverless vehicles has spurred the need to predict human driving behavior to facilitate interaction between driverless and human-driven vehicles . predicting human driving movements can be challenging , and poor prediction models can lead to accidents between the driverless and human-driven vehicles . we used the vehicle speed obtained from a naturalistic driving dataset to predict whether a human-driven vehicle would stop before executing a left turn .
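a minimal sketch of the pca step in the kinematics analysis above , assuming speed traces resampled onto a common time grid ; the array shapes , values , and variable names are invented for illustration :

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(1)
# hypothetical data: one row per intersection approach, columns = vehicle
# speed samples resampled onto a common time grid before the turn
speeds = rng.normal(15.0, 3.0, size=(400, 50))

pca = PCA(n_components=3)
scores = pca.fit_transform(speeds)    # per-approach scores on 3 dimensions
print(pca.explained_variance_ratio_)  # share of speed variation per dimension
# the scores (plus road/vehicle covariates) can then feed a stop / no-stop classifier
```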
in a preliminary analysis , we found that bart produced less variable and higher auc values compared to a variety of other state-of-the-art binary predictor methods . however , bart assumes independent observations , but our dataset consists of multiple observations clustered by driver . although methods extending bart to clustered or longitudinal data are available , they lack readily available software and can only be applied to clustered continuous outcomes . we extend bart to handle correlated binary observations by adding a random intercept and use a simulation study to determine bias , root mean squared error , 95 % coverage , and average length of 95 % credible interval in a correlated data setting . we then successfully applied our random intercept bart model to our clustered story_separator_special_tag examples of `` doubly robust '' estimators for missing data include augmented inverse probability weighting ( aipwt ) models ( robins et al. , 1994 ) and penalized splines of propensity prediction ( pspp ) models ( zhang and little , 2009 ) . doubly-robust estimators have the property that , if either the response propensity or the mean is modeled correctly , a consistent estimator of the population mean is obtained . however , doubly-robust estimators can perform poorly when modest misspecification is present in both models ( kang and schafer , 2007 ) . here we consider extensions of the aipwt and pspp models that use bayesian additive regression trees ( bart ; chipman et al. , 2010 ) to provide highly robust propensity and mean model estimation . we term these `` robust-squared '' in the sense that the propensity score , the means , or both can be estimated with minimal model misspecification , and applied to the doubly-robust estimator . we consider their behavior via simulations where propensities and/or mean models are misspecified . we apply our proposed method to impute missing instantaneous velocity ( delta-v ) values from the 2014 national automotive sampling system story_separator_special_tag drawing inferences about the effects of treatments and actions is a common challenge in economics , epidemiology , and other fields . we adopt rubin 's potential outcomes framework for causal inference and propose two methods serving complementary purposes . one can be used to estimate average causal effects , assuming no confounding given measured covariates . the other can be used to assess how the estimates might change under various departures from no confounding . both methods are developed from a nonparametric likelihood perspective . the propensity score plays a central role and is estimated through a parametric model . under the assumption of no confounding , the joint distribution of covariates and each potential outcome is estimated as a weighted empirical distribution . expectations from the joint distribution are estimated as weighted averages or , equivalently to first order , regression estimates . the likelihood estimator is at least as efficient and the regression estimator is at least as efficient and r . story_separator_special_tag three approaches to estimation from nonprobability samples are quasi-randomization , superpopulation modeling , and doubly robust estimation . in the first , the sample is treated as if it were obtained via a probability mechanism , but unlike in probability sampling , that mechanism is unknown .
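the doubly robust idea recurring in these abstracts ( aipwt and its relatives ) fits in a few lines . a sketch under the assumption that a response propensity ps and mean-model predictions yhat have already been fitted , e.g . by logistic regression and bart respectively ; the function and variable names are hypothetical .

```python
import numpy as np

def aipw_mean(y, observed, ps, yhat):
    """augmented inverse-probability-weighted (doubly robust) estimate of a
    population mean: consistent if either the response propensity ps or the
    outcome predictions yhat are correctly specified."""
    # residuals are defined only where the outcome was actually observed
    resid = np.where(observed == 1, y - yhat, 0.0)
    # model prediction for everyone, plus a weighted correction term
    return np.mean(yhat + observed * resid / ps)
```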
pseudo selection probabilities of being in the sample are estimated by using the sample in combination with some external data set that covers the desired population . in the superpopulation approach , observed values of analysis variables are treated as if they had been generated by some model . the model is estimated from the sample and , along with external population control data , is used to project the sample to the population . the specific techniques are the same or similar to ones commonly employed for estimation from probability samples and include binary regression , regression trees , and calibration . when quasi-randomization and superpopulation modeling are combined , this is referred to as doubly robust estimation . this article reviews some of the estimation options and compares them in a series of simulation studies . story_separator_special_tag panels of persons who volunteer to participate in web surveys are used to make estimates for entire populations , including persons who have no access to the internet . one method of adjusting a volu . story_separator_special_tag election forecasts have traditionally been based on representative polls , in which randomly sampled individuals are asked for whom they intend to vote . while representative polling has historically proven to be quite effective , it comes at considerable financial and time costs . moreover , as response rates have declined over the past several decades , the statistical benefits of representative sampling have diminished . in this paper , we show that with proper statistical adjustment , non-representative polls can be used to generate accurate election forecasts , and often faster and at less expense than traditional survey methods . we demonstrate this approach by creating forecasts from a novel and highly non-representative survey dataset : a series of daily voter intention polls for the 2012 presidential election conducted on the xbox gaming platform . after adjusting the xbox responses via multilevel regression and poststratification , we obtain estimates in line with forecasts from leading poll analysts , which were based on aggregating hundreds of traditional polls conducted during the election cycle . we conclude by arguing that non-representative polling shows promise not only for election forecasting , but also for measuring public opinion on a broad range of story_separator_special_tag there is growing interest in using routinely collected data from health care databases to study the safety and effectiveness of therapies in `` real-world '' conditions , as it can provide complementary evidence to that of randomized controlled trials . causal inference from health care databases is challenging because the data are typically noisy , high dimensional , and most importantly , observational . it requires methods that can estimate heterogeneous treatment effects while controlling for confounding in high dimensions . bayesian additive regression trees , causal forests , causal boosting , and causal multivariate adaptive regression splines are off-the-shelf methods that have shown good performance for estimation of heterogeneous treatment effects in observational studies of continuous outcomes . however , it is not clear how these methods would perform in health care database studies where outcomes are often binary and rare and data structures are complex . in this study , we evaluate these methods in simulation studies that recapitulate key characteristics of comparative effectiveness studies .
we focus on the conditional average effect of a binary treatment on a binary outcome using the conditional risk difference as an estimand . to emulate health care database studies , we story_separator_special_tag suppose that the finite population consists of n identifiable units . associated with the ith unit are the study variable , yi , and a vector of auxiliary variables , xi . the values x1 , x2 , ... , xn are known for the entire population ( i.e. , complete ) but yi is known only if the ith unit is selected in the sample . one of the fundamental questions is how to effectively use the complete auxiliary information at the estimation stage . in this article , a unified model-assisted framework has been attempted using a proposed model-calibration technique . the proposed model-calibration estimators can handle any linear or nonlinear working models and reduce to the conventional calibration estimators of deville and sarndal and/or the generalized regression estimators in the linear model case . the pseudoempirical maximum likelihood estimator of chen and sitter , when used in this setting , gives an estimator that is asymptotically equivalent to the model-calibration estimator but with positive weights . some existing estimators . story_separator_special_tag multiple data sources are becoming increasingly available for statistical analyses in the era of big data . as an important example in finite-population inference , we consider an imputation approach to combining a probability sample with big observational data . unlike the usual imputation for missing data analysis , we create imputed values for all elements in the probability sample . such mass imputation is attractive in the context of survey data integration ( kim and rao , 2012 ) . we extend mass imputation as a tool for data integration of survey data and big non-survey data . the mass imputation methods and their statistical properties are presented . the matching estimator of rivers ( 2007 ) is also covered as a special case . variance estimation with mass-imputed data is discussed . the simulation results demonstrate that the proposed estimators outperform existing competitors in terms of robustness and efficiency . story_separator_special_tag we consider integrating a non-probability sample with a probability sample which provides high dimensional representative covariate information of the target population . we propose a two-step approach for variable selection and finite population inference . in the first step , we use penalized estimating equations with folded concave penalties to select important variables and show selection consistency for general samples . in the second step , we focus on a doubly robust estimator of the finite population mean and re-estimate the nuisance model parameters by minimizing the asymptotic squared bias of the doubly robust estimator . this estimating strategy mitigates the possible first-step selection error and renders the doubly robust estimator root-n consistent if either the sampling probability or the outcome model is correctly specified . story_separator_special_tag we study bayesian inference for the population total in probability-proportional-to-size ( pps ) sampling . the sizes of non-sampled units are not required for the usual horvitz-thompson or hajek estimates , and this information is rarely included in public use data files .
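for reference , the horvitz-thompson and hajek estimators named in the pps abstract above , as a minimal sketch assuming the inclusion probabilities pi of the sampled units are known :

```python
import numpy as np

def horvitz_thompson_total(y, pi):
    """unbiased estimate of a population total: weight each sampled
    value by the inverse of its inclusion probability."""
    return np.sum(y / pi)

def hajek_mean(y, pi):
    """ratio-adjusted (hajek) estimate of the population mean; often more
    stable than dividing the horvitz-thompson total by the population size."""
    w = 1.0 / pi
    return np.sum(w * y) / np.sum(w)
```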
zheng and little ( 2003 ) showed that including the non-sampled sizes as predictors in a spline model can result in improved point estimates of the finite population total . in little and zheng ( 2007 ) , the spline model is combined with a bayesian bootstrap ( bb ) model for the sizes , for point estimation when the sizes are only known for the sampled units . we further develop their methods by ( a ) including an unknown parameter to model heteroscedastic error variance in the spline model , an important modeling feature in the pps setting ; and ( b ) developing an improved bayesian method for including summary information about the aggregate size of non-sampled units . simulation studies suggest that the resulting bayesian method , which includes information on the number and total size of the non-sampled units , recovers most of the information in the individual sizes of the non-sampled story_separator_special_tag doubly robust ( dr ) estimators of the mean with missing data are compared . an estimator is dr if either the regression of the missing variable on the observed variables or the missing data mechanism is correctly specified . one method is to include the inverse of the propensity score as a linear term in the imputation model [ d. firth and k.e . bennett , robust models in probability sampling , j. r. statist . soc . ser . b . 60 ( 1998 ) , pp . 3-21 ; d.o . scharfstein , a. rotnitzky , and j.m . robins , adjusting for nonignorable drop-out using semiparametric nonresponse models ( with discussion ) , j. am . statist . assoc . 94 ( 1999 ) , pp . 1096-1146 ; h. bang and j.m . robins , doubly robust estimation in missing data and causal inference models , biometrics 61 ( 2005 ) , pp . 962-972 ] . another method is to calibrate the predictions from a parametric model by adding a mean of the weighted residuals [ j.m . robins , a. rotnitzky , and l.p. zhao , estimation of regression coefficients when story_separator_special_tag observational data with clustered structure may have confounding at one or more levels which , when combined , critically undermine result validity . we propose using multilevel models in bayesian propensity score analysis to account for cluster- and individual-level confounding in the estimation of both the propensity score and , in turn , the treatment effect . in addition , our approach includes confounders in the outcome model for more flexibility in modeling the outcome-covariate surface , minimizing the influence of the feedback effect in bayesian joint modeling of the propensity score model and the outcome model . in an extensive simulation study , we compare several propensity score analysis approaches with varying complexity of multilevel modeling structures . with each of the proposed propensity score models , a random intercept outcome model augmented with covariate adjustment maintains the property of the propensity score as a balancing score and outperforms a single-level outcome model . to illustrate the proposed models , a case study is considered , which investigates the impact of lipid screening on lipid management in youth from three different health care systems . story_separator_special_tag methods based on the propensity score comprise one set of valuable tools for comparative effectiveness research and for estimating causal effects more generally .
these methods typically consist of two distinct stages : ( 1 ) a propensity score stage where a model is fit to predict the propensity to receive treatment ( the propensity score ) , and ( 2 ) an outcome stage where responses are compared in treated and untreated units having similar values of the estimated propensity score . traditional techniques conduct estimation in these two stages separately ; estimates from the first stage are treated as fixed and known for use in the second stage . bayesian methods have natural appeal in these settings because separate likelihoods for the two stages can be combined into a single joint likelihood , with estimation of the two stages carried out simultaneously . one key feature of joint estimation in this context is `` feedback '' between the outcome stage and the propensity score stage , meaning that quantities in a model for the outcome contribute information to posterior distributions of quantities in the model for the propensity score . we provide a rigorous assessment of bayesian propensity
word sense disambiguation ( wsd ) systems automatically choose the intended meaning of a word in context . in this article we present a wsd algorithm based on random walks over large lexical knowledge bases ( lkb ) . we show that our algorithm performs better than other graph-based methods when run on a graph built from wordnet and extended wordnet . our algorithm and lkb combination compares favorably to other knowledge-based approaches in the literature that use similar knowledge on a variety of english data sets and a data set in spanish . we include a detailed analysis of the factors that affect the algorithm . the algorithm and the lkbs used are publicly available , and the results easily reproducible . story_separator_special_tag contents preface acknowledgements 1 introduction 2 user interfaces for search by marti hearst 3 modeling 4 retrieval evaluation 5 relevance feedback and query expansion 6 documents : languages & properties with gonzalo navarro and nivio ziviani 7 queries : languages & properties with gonzalo navarro 8 text classification with marcos gonccalves 9 indexing and searching with gonzalo navarro 10 parallel and distributed ir with eric brown 11 web retrieval with yoelle maarek 12 web crawling with carlos castillo 13 structured text retrieval with mounia lalmas 14 multimedia information retrieval by dulce poncele'on and malcolm slaney 15 enterprise search by david hawking 16 library systems by edie rasmussen 17 digital libraries by marcos gonccalves a open source search engines with christian middleton b biographies bibliography index story_separator_special_tag knowledge graphs are graphical representations of large databases of facts , which typically suffer from incompleteness . inferring missing relations ( links ) between entities ( nodes ) is the task of link prediction . a recent state-of-the-art approach to link prediction , conve , implements a convolutional neural network to extract features from concatenated subject and relation vectors . whilst results are impressive , the method is unintuitive and poorly understood . we propose a hypernetwork architecture that generates simplified relation-specific convolutional filters that ( i ) outperforms conve and all previous approaches across standard datasets ; and ( ii ) can be framed as tensor factorization and thus set within a well established family of factorization models for link prediction . we thus demonstrate that convolution simply offers a convenient computational means of introducing sparsity and parameter tying to find an effective trade-off between non-linear expressiveness and the number of parameters to learn . story_separator_special_tag knowledge graphs are structured representations of real world facts . however , they typically contain only a small subset of all possible facts . link prediction is a task of inferring missing facts based on existing ones . we propose tucker , a relatively straightforward but powerful linear model based on tucker decomposition of the binary tensor representation of knowledge graph triples . tucker outperforms previous state-of-the-art models across standard link prediction datasets , acting as a strong baseline for more elaborate models . we show that tucker is a fully expressive model , derive sufficient bounds on its embedding dimensionalities and demonstrate that several previously introduced linear models can be viewed as special cases of tucker . story_separator_special_tag in this paper , we train a semantic parser that scales up to freebase .
instead of relying on annotated logical forms , which is especially expensive to obtain at large scale , we learn from question-answer pairs . the main challenge in this setting is narrowing down the huge number of possible logical predicates for a given question . we tackle this problem in two ways : first , we build a coarse mapping from phrases to predicates using a knowledge base and a large text corpus . second , we use a bridging operation to generate additional predicates based on neighboring predicates . on the dataset of cai and yates ( 2013 ) , despite not having annotated logical forms , our system outperforms their state-of-the-art parser . additionally , we collected a more realistic and challenging dataset of question-answer pairs , on which our system improves over a natural baseline . story_separator_special_tag freebase is a practical , scalable tuple database used to structure general human knowledge . the data in freebase is collaboratively created , structured , and maintained . freebase currently contains more than 125,000,000 tuples , more than 4000 types , and more than 7000 properties . public read/write access to freebase is allowed through an http-based graph-query api using the metaweb query language ( mql ) as a data query and manipulation language . mql provides an easy-to-use object-oriented interface to the tuple data in freebase and is designed to facilitate the creation of collaborative , web-based data-oriented applications . story_separator_special_tag many knowledge bases ( kbs ) are now readily available and encompass colossal quantities of information thanks to either a long-term funding effort ( e.g . wordnet , opencyc ) or a collaborative process ( e.g . freebase , dbpedia ) . however , each of them is based on a different rigid symbolic framework which makes it hard to use their data in other systems . it is unfortunate because such rich structured knowledge might lead to a huge leap forward in many other areas of ai like natural language processing ( word-sense disambiguation , natural language understanding , ... ) , vision ( scene classification , image semantic annotation , ... ) or collaborative filtering . in this paper , we present a learning process based on an innovative neural network architecture designed to embed any of these symbolic representations into a more flexible continuous vector space in which the original knowledge is kept and enhanced . these learnt embeddings would allow data from any kb to be easily used in recent machine learning methods for prediction and information retrieval . we illustrate our method on wordnet and freebase and also present a way to adapt it to story_separator_special_tag large-scale relational learning becomes crucial for handling the huge amounts of structured data generated daily in many application domains ranging from computational biology or information retrieval , to natural language processing . in this paper , we present a new neural network architecture designed to embed multi-relational graphs into a flexible continuous vector space in which the original data is kept and enhanced . the network is trained to encode the semantics of these graphs in order to assign high probabilities to plausible components . we empirically show that it reaches competitive performance in link prediction on standard datasets from the literature as well as on data from a real-world knowledge base ( wordnet ) .
in addition , we present how our method can be applied to perform word-sense disambiguation in a context of open-text semantic parsing , where the goal is to learn to assign a structured meaning representation to almost any sentence of free text , demonstrating that it can scale up to tens of thousands of nodes and thousands of types of relation . story_separator_special_tag we consider the problem of embedding entities and relationships of multi-relational data in low-dimensional vector spaces . our objective is to propose a canonical model which is easy to train , contains a reduced number of parameters and can scale up to very large databases . hence , we propose transe , a method which models relationships by interpreting them as translations operating on the low-dimensional embeddings of the entities . despite its simplicity , this assumption proves to be powerful since extensive experiments show that transe significantly outperforms state-of-the-art methods in link prediction on two knowledge bases . besides , it can be successfully trained on a large scale data set with 1m entities , 25k relationships and more than 17m training samples . story_separator_special_tag we introduce kbgan , an adversarial learning framework to improve the performances of a wide range of existing knowledge graph embedding models . because knowledge graphs typically only contain positive facts , sampling useful negative training examples is a nontrivial task . replacing the head or tail entity of a fact with a uniformly randomly selected entity is a conventional method for generating negative facts , but the majority of the generated negative facts can be easily discriminated from positive facts , and will contribute little towards the training . inspired by generative adversarial networks ( gans ) , we use one knowledge graph embedding model as a negative sample generator to assist the training of our desired model , which acts as the discriminator in gans . this framework is independent of the concrete form of generator and discriminator , and therefore can utilize a wide variety of knowledge graph embedding models as its building blocks . in experiments , we adversarially train two translation-based models , transe and transd , each with assistance from one of the two probability-based models , distmult and complex . we evaluate the performances of kbgan on the link prediction task , using story_separator_special_tag incorporating knowledge graph ( kg ) into recommender system is promising in improving the recommendation accuracy and explainability . however , existing methods largely assume that a kg is complete and simply transfer the `` knowledge '' in kg at the shallow level of entity raw data or embeddings . this may lead to suboptimal performance , since a practical kg can hardly be complete , and it is common that a kg has missing facts , relations , and entities . thus , we argue that it is crucial to consider the incomplete nature of kg when incorporating it into recommender system . in this paper , we jointly learn the model of recommendation and knowledge graph completion . distinct from previous kg-based recommendation methods , we transfer the relation information in kg , so as to understand the reasons that a user likes an item . 
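the translation principle behind transe , summarized above , can be stated directly : a valid triple ( h , r , t ) should satisfy h + r ≈ t . a minimal sketch of the score and of the margin ranking loss used in training ; embedding dimensions and the negative-sampling procedure are assumed .

```python
import numpy as np

def transe_score(h, r, t, norm=1):
    """transe plausibility: negative distance between h + r and t,
    so higher scores mean more plausible triples."""
    return -np.linalg.norm(h + r - t, ord=norm)

def margin_loss(pos_score, neg_score, margin=1.0):
    """hinge loss pushing a true triple's score above a corrupted one's
    by a margin; corruptions swap the head or tail for a random entity."""
    return max(0.0, margin - pos_score + neg_score)
```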
as an example , if a user has watched several movies directed by ( relation ) the same person ( entity ) , we can infer that the director relation plays a critical role when the user makes the decision , thus help to understand the user 's preference at a finer story_separator_special_tag we consider here the problem of building a never-ending language learner ; that is , an intelligent computer agent that runs forever and that each day must ( 1 ) extract , or read , information from the web to populate a growing structured knowledge base , and ( 2 ) learn to perform this task better than on the previous day . in particular , we propose an approach and a set of design principles for such an agent , describe a partial implementation of such a system that has already learned to extract a knowledge base containing over 242,000 beliefs with an estimated precision of 74 % after running for 67 days , and discuss lessons learned from this preliminary attempt to build a never-ending learning agent . story_separator_special_tag knowledge graph embedding aims at representing entities and relations in a knowledge graph as dense , low-dimensional and real-valued vectors . it can efficiently measure semantic correlations of entities and relations in knowledge graphs , and improve the performance of knowledge acquisition , fusion and inference . among the various embedding models that have appeared in recent years , the translation-based models such as transe , transh , transr and transparse achieve state-of-the-art performance . however , the translation principle applied in these models is too strict and can not deal with complex entities and relations very well . in this paper , by introducing parameter vectors into the translation principle which treats each relation as a translation from the head entity to the tail entity , we propose a novel dynamic translation principle which supports flexible translation between the embeddings of entities and relations . we use this principle to improve the transe , transr and transparse models respectively and build new models named transe-dt , transr-dt and transparse-dt correspondingly . experimental results show that our dynamic translation principle achieves great improvement in both the link prediction task and the triple classification task . story_separator_special_tag inferring missing links in knowledge graphs ( kg ) has attracted a lot of attention from the research community . in this paper , we tackle a practical query answering task involving predicting the relation of a given entity pair . we frame this prediction problem as an inference problem in a probabilistic graphical model and aim at resolving it from a variational inference perspective . in order to model the relation between the query entity pair , we assume that there exists an underlying latent variable ( paths connecting two nodes ) in the kg , which carries the equivalent semantics of their relations . however , due to the intractability of connections in large kgs , we propose to use variational inference to maximize the evidence lower bound . more specifically , our framework ( diva ) is composed of three modules , i.e . a posterior approximator , a prior ( path finder ) , and a likelihood ( path reasoner ) . by using variational inference , we are able to incorporate them closely into a unified architecture and jointly optimize them to perform kg reasoning .
with active interactions among these sub-modules , diva is story_separator_special_tag our goal is to combine the rich multi-step inference of symbolic logical reasoning with the generalization capabilities of neural networks . we are particularly interested in complex reasoning about entities and relations in text and large-scale knowledge bases ( kbs ) . neelakantan et al . ( 2015 ) use rnns to compose the distributed semantics of multi-hop paths in kbs ; however , for multiple reasons , the approach lacks accuracy and practicality . this paper proposes three significant modeling advances : ( 1 ) we learn to jointly reason about relations , entities , and entity-types ; ( 2 ) we use neural attention modeling to incorporate multiple paths ; ( 3 ) we learn to share strength in a single rnn that represents logical composition across all relations . on a large-scale freebase+clueweb prediction task , we achieve 25 % error reduction , and a 53 % error reduction on sparse relations due to shared strength . on chains of reasoning in wordnet we reduce error in mean quantile by 84 % versus previous state-of-the-art . story_separator_special_tag knowledge bases ( kb ) , both automatically and manually constructed , are often incomplete ; many valid facts can be inferred from the kb by synthesizing existing information . a popular approach to kb completion is to infer new relations by combinatory reasoning over the information found along other paths connecting a pair of entities . given the enormous size of kbs and the exponential number of paths , previous path-based models have considered only the problem of predicting a missing relation given two entities , or evaluating the truth of a proposed triple . additionally , these methods have traditionally used random paths between fixed entity pairs or more recently learned to pick paths between them . we propose a new algorithm , minerva , which addresses the much more difficult and practical task of answering questions where the relation is known , but only one of the two entities is given . since random walks are impractical in a setting with unknown destination and combinatorially many paths from a start node , we present a neural reinforcement learning approach which learns how to navigate the graph conditioned on the input query to find predictive paths . on a comprehensive evaluation on seven knowledge story_separator_special_tag link prediction for knowledge graphs is the task of predicting missing relationships between entities . previous work on link prediction has focused on shallow , fast models which can scale to large knowledge graphs . however , these models learn less expressive features than deep , multi-layer models -- which potentially limits performance . in this work , we introduce conve , a multi-layer convolutional network model for link prediction , and report state-of-the-art results for several established datasets . we also show that the model is highly parameter efficient , yielding the same performance as distmult and r-gcn with 8x and 17x fewer parameters . analysis of our model suggests that it is particularly effective at modelling nodes with high indegree -- which are common in highly-connected , complex knowledge graphs such as freebase and yago3 . in addition , it has been noted that the wn18 and fb15k datasets suffer from test set leakage , due to inverse relations from the training set being present in the test set -- however , the extent of this issue has so far not been quantified .
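link prediction numbers like those reported for conve above come from a ranking protocol : score the true answer against every candidate entity , filtering out candidates that form other known-true triples . a sketch with an assumed score function and entity list :

```python
import numpy as np

def filtered_rank(score_fn, h, r, t, entities, known_triples):
    """rank of the true tail t among all candidates, skipping candidates
    that form other known-true triples (the 'filtered' setting)."""
    true_score = score_fn(h, r, t)
    rank = 1
    for e in entities:
        if e == t or (h, r, e) in known_triples:
            continue  # skip the answer itself and other valid answers
        if score_fn(h, r, e) > true_score:
            rank += 1
    return rank

def mrr_and_hits(ranks, k=10):
    """mean reciprocal rank and hits@k over a collection of test ranks."""
    ranks = np.asarray(ranks, dtype=float)
    return np.mean(1.0 / ranks), np.mean(ranks <= k)
```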
we find this problem to be severe : a simple rule-based model can achieve story_separator_special_tag we introduce a new language representation model called bert , which stands for bidirectional encoder representations from transformers . unlike recent language representation models ( peters et al. , 2018a ; radford et al. , 2018 ) , bert is designed to pre-train deep bidirectional representations from unlabeled text by jointly conditioning on both left and right context in all layers . as a result , the pre-trained bert model can be fine-tuned with just one additional output layer to create state-of-the-art models for a wide range of tasks , such as question answering and language inference , without substantial task-specific architecture modifications . bert is conceptually simple and empirically powerful . it obtains new state-of-the-art results on eleven natural language processing tasks , including pushing the glue score to 80.5 ( 7.7 point absolute improvement ) , multinli accuracy to 86.7 % ( 4.6 % absolute improvement ) , squad v1.1 question answering test f1 to 93.2 ( 1.5 point absolute improvement ) and squad v2.0 test f1 to 83.1 ( 5.1 point absolute improvement ) . story_separator_special_tag recent years have witnessed a proliferation of large-scale knowledge bases , including wikipedia , freebase , yago , microsoft 's satori , and google 's knowledge graph . to increase the scale even further , we need to explore automatic methods for constructing knowledge bases . previous approaches have primarily focused on text-based extraction , which can be very noisy . here we introduce knowledge vault , a web-scale probabilistic knowledge base that combines extractions from web content ( obtained via analysis of text , tabular data , page structure , and human annotations ) with prior knowledge derived from existing knowledge repositories . we employ supervised machine learning methods for fusing these distinct information sources . the knowledge vault is substantially bigger than any previously published structured knowledge repository , and features a probabilistic inference system that computes calibrated probabilities of fact correctness . we report the results of multiple studies that explore the relative utility of the different information sources and extraction methods . story_separator_special_tag we present kblrn , a framework for end-to-end learning of knowledge base representations from latent , relational , and numerical features . kblrn integrates feature types with a novel combination of neural representation learning and probabilistic product of experts models . to the best of our knowledge , kblrn is the first approach that learns representations of knowledge bases by integrating latent , relational , and numerical features . we show that instances of kblrn outperform existing methods on a range of knowledge base completion tasks . we contribute novel data sets enriching commonly used knowledge base completion benchmarks with numerical features . the data sets are available under a permissive bsd-3 license . we also investigate the impact numerical features have on the kb completion performance of kblrn . story_separator_special_tag identifying and linking named entities across information sources is the basis of knowledge acquisition and at the heart of web search , recommendations , and analytics .
an important problem in this context is cross-document co-reference resolution ( ccr ) : computing equivalence classes of textual mentions denoting the same entity , within and across documents . prior methods employ ranking , clustering , or probabilistic graphical models using syntactic features and distant features from knowledge bases . however , these methods exhibit limitations regarding run-time and robustness . this paper presents the crocs framework for unsupervised ccr , improving the state of the art in two ways . first , we extend the way knowledge bases are harnessed , by constructing a notion of semantic summaries for intra-document co-reference chains using co-occurring entity mentions belonging to different chains . second , we reduce the computational cost by a new algorithm that embeds sample-based bisection , using spectral clustering or graph partitioning , in a hierarchical clustering process . this allows scaling up ccr to large corpora . experiments with three datasets show significant gains in output quality , compared to the best prior methods , and the run-time efficiency story_separator_special_tag knowledge graphs are useful for many artificial intelligence ( ai ) tasks . however , knowledge graphs often have missing facts . to populate the graphs , knowledge graph embedding models have been developed . knowledge graph embedding models map entities and relations in a knowledge graph to a vector space and predict unknown triples by scoring candidate triples . transe is the first translation-based method and it is well known because of its simplicity and efficiency for knowledge graph completion . it employs the principle that the differences between entity embeddings represent their relations . the principle seems very simple , but it can effectively capture the rules of a knowledge graph . however , transe has a problem with its regularization . transe forces entity embeddings to be on a sphere in the embedding vector space . this regularization warps the embeddings and makes it difficult for them to fulfill the abovementioned principle . the regularization also adversely affects the accuracy of link prediction . on the other hand , regularization is important because entity embeddings diverge under negative sampling without it . this paper proposes a novel embedding model , toruse , to solve the regularization story_separator_special_tag we consider the problem of open-domain question answering ( open qa ) over massive knowledge bases ( kbs ) . existing approaches use either manually curated kbs like freebase or kbs automatically extracted from unstructured text . in this paper , we present oqa , the first approach to leverage both curated and extracted kbs . a key technical challenge is designing systems that are robust to the high variability in both natural language questions and massive kbs . oqa achieves robustness by decomposing the full open qa problem into smaller sub-problems including question paraphrasing and query reformulation . oqa solves these sub-problems by mining millions of rules from an unlabeled question corpus and across multiple kbs . oqa then learns to integrate these rules by performing discriminative training on question-answer pairs using a latent-variable structured perceptron algorithm . we evaluate oqa on three benchmark question sets and demonstrate that it achieves up to twice the precision and recall of a state-of-the-art open qa system .
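the toruse abstract above argues that normalizing embeddings onto a sphere warps the translation principle ; placing them on a torus instead lets every coordinate wrap around , so no normalization is needed . a sketch of an l1 distance on the n-torus , an assumed but standard construction :

```python
import numpy as np

def torus_l1(x, y):
    """l1 distance on [0, 1)^n with wrap-around: per coordinate, take
    the shorter of the two ways around the circle."""
    d = np.abs((x - y) % 1.0)
    return np.sum(np.minimum(d, 1.0 - d))
```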
story_separator_special_tag inferring commonsense knowledge is a key challenge in machine learning . due to the sparsity of training data , previous work has shown that supervised methods for commonsense knowledge mining underperform when evaluated on novel data . in this work , we develop a method for generating commonsense knowledge using a large , pre-trained bidirectional language model . by transforming relational triples into masked sentences , we can use this model to rank a triple 's validity by the estimated pointwise mutual information between the two entities . since we do not update the weights of the bidirectional model , our approach is not biased by the coverage of any one commonsense knowledge base . though we do worse on a held-out test set than models explicitly trained on a corresponding training set , our approach outperforms these methods when mining commonsense knowledge from new sources , suggesting that our unsupervised technique generalizes better than current supervised approaches . story_separator_special_tag part 1 the lexical database : nouns in wordnet , george a. miller modifiers in wordnet , katherine j. miller a semantic network of english verbs , christiane fellbaum design and implementation of the wordnet lexical database and searching software , randee i. tengi . part 2 : automated discovery of wordnet relations , marti a. hearst representing verb alterations in wordnet , karen t. kohl et al the formalization of wordnet by methods of relational concept analysis , uta e. priss . part 3 applications of wordnet : building semantic concordances , shari landes et al performance and confidence in a semantic annotation task , christiane fellbaum et al wordnet and class-based probabilities , philip resnik combining local context and wordnet similarity for word sense identification , claudia leacock and martin chodorow using wordnet for text retrieval , ellen m. voorhees lexical chains as representations of context for the detection and correction of malapropisms , graeme hirst and david st-onge temporal indexing through lexical chaining , reem al-halimi and rick kazman color-x - using knowledge from wordnet for conceptual modelling , j.f.m . burg and r.p . van de riet knowledge processing on an extended wordnet , sanda m. story_separator_special_tag knowledge graph embedding refers to projecting entities and relations in knowledge graph into continuous vector spaces . current state-of-the-art models are translation-based models , which build embeddings by treating a relation as a translation from head entity to tail entity . however , previous models are too strict to model the complex and diverse entities and relations ( e.g . symmetric/transitive/one-to-many/many-to-many relations ) . to address these issues , we propose a new principle to allow flexible translation between entity and relation vectors . we can design a novel score function to favor flexible translation for each translation-based model without increasing model complexity . to evaluate the proposed principle , we incorporate it into previous methods and conduct triple classification on benchmark datasets . experimental results show that the principle can remarkably improve the performance compared with several state-of-the-art baselines . story_separator_special_tag knowledge embedding , which projects triples in a given knowledge base to d-dimensional vectors , has attracted considerable research efforts recently .
most existing approaches treat the given knowledge base as a set of triplets and learn the representation of each triplet separately . however , in fact , triples are connected and depend on each other . in this paper , we propose a graph aware knowledge embedding method ( gake ) , which formulates a knowledge base as a directed graph , and learns representations for any vertices or edges by leveraging the graph 's structural information . we introduce three types of graph context for embedding : neighbor context , path context , and edge context , each of which reflects properties of knowledge from different perspectives . we also design an attention mechanism to learn the representative power of different vertices or edges . to validate our method , we conduct several experiments on two tasks . experimental results suggest that our method outperforms several state-of-the-art knowledge embedding models . story_separator_special_tag in 2007 , ibm research took on the grand challenge of building a computer system that could compete with champions at the game of jeopardy ! . in 2011 , the open-domain question-answering ( qa ) system , dubbed watson , beat the two highest ranked players in a nationally televised two-game jeopardy ! match . this paper provides a brief history of the events and ideas that positioned our team to take on the jeopardy ! challenge , build watson , ibm watson , and ultimately triumph . it describes both the nature of the qa challenge represented by jeopardy ! and our overarching technical approach . the main body of this paper provides a narrative of the deepqa processing pipeline to introduce the articles in this special issue and put them in context of the overall system . finally , this paper summarizes our main results , describing how the system , as a holistic combination of many diverse algorithmic techniques , performed at champion levels , and it briefly discusses the team 's future research plans . story_separator_special_tag performing link prediction in knowledge bases ( kbs ) with embedding-based models , like with the model transe ( bordes et al. , 2013 ) which represents relationships as translations in the embedding space , has shown promising results in recent years . most of these works are focused on modeling single relationships and hence do not take full advantage of the graph structure of kbs . in this paper , we propose an extension of transe that learns to explicitly model composition of relationships via the addition of their corresponding translation vectors . we show empirically that this allows us to improve performance for predicting single relationships as well as compositions of pairs of them . story_separator_special_tag this paper tackles the problem of endogenous link prediction for knowledge base completion . knowledge bases can be represented as directed graphs whose nodes correspond to entities and edges to relationships . previous attempts either consist of powerful systems with high capacity to model complex connectivity patterns , which unfortunately usually end up overfitting on rare relationships , or of approaches that trade capacity for simplicity in order to fairly model all relationships , frequent or not . in this paper , we propose tatec , a happy medium obtained by complementing a high-capacity model with a simpler one , both pre-trained separately and then combined .
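the compositional extension of transe described above models a path of relations by adding their translation vectors . under that assumption , a path query scores as follows ; the names are illustrative :

```python
import numpy as np

def path_score(h, relations, t, norm=1):
    """compositional transe-style score: translations along a relation
    path compose additively, so h + r1 + ... + rk should land near t."""
    return -np.linalg.norm(h + np.sum(relations, axis=0) - t, ord=norm)
```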
we present several variants of this model with different kinds of regularization and combination strategies and show that this approach outperforms existing methods on different types of relationships by achieving state-of-the-art results on four benchmarks from the literature . story_separator_special_tag we explore some of the practicalities of using random walk inference methods , such as the path ranking algorithm ( pra ) , for the task of knowledge base completion . we show that the random walk probabilities computed ( at great expense ) by pra provide no discernible benefit to performance on this task , so they can safely be dropped . this allows us to define a simpler algorithm for generating feature matrices from graphs , which we call subgraph feature extraction ( sfe ) . in addition to being conceptually simpler than pra , sfe is much more efficient , reducing computation by an order of magnitude , and more expressive , allowing for much richer features than paths between two nodes in a graph . we show experimentally that this technique gives substantially better performance than pra and its variants , improving mean average precision from .432 to .528 on a knowledge base completion task using the nell kb . story_separator_special_tag much work in recent years has gone into the construction of large knowledge bases ( kbs ) , such as freebase , dbpedia , nell , and yago . while these kbs are very large , they are still very incomplete , necessitating the use of inference to fill in gaps . prior work has shown how to make use of a large text corpus to augment random walk inference over kbs . we present two improvements to the use of such large corpora to augment kb inference . first , we present a new technique for combining kb relations and surface text into a single graph representation that is much more compact than graphs used in prior work . second , we describe how to incorporate vector space similarity into random walk inference over kbs , reducing the feature sparsity inherent in using surface text . this allows us to combine distributional similarity with symbolic logical inference in novel and effective ways . with experiments on many relations from two separate kbs , we show that our methods significantly outperform prior work on kb inference , both in the size of problem our methods can handle and in the story_separator_special_tag embedding knowledge graphs into continuous vector spaces has recently attracted increasing interest . most existing methods perform the embedding task using only fact triples . logical rules , although containing rich background information , have not been well studied in this task . this paper proposes a novel method of jointly embedding knowledge graphs and logical rules . the key idea is to represent and model triples and rules in a unified framework . specifically , triples are represented as atomic formulae and modeled by the translation assumption , while rules are represented as complex formulae and modeled by t-norm fuzzy logics . embedding then amounts to minimizing a global loss over both atomic and complex formulae . in this manner , we learn embeddings compatible not only with triples but also with rules , which will certainly be more predictive for knowledge acquisition and inference . we evaluate our method with link prediction and triple classification tasks . experimental results show that joint embedding brings significant and consistent improvements over state-of-the-art methods .
particularly , it enhances the prediction of new facts which can not even be directly inferred by pure logical inference , demonstrating the capability of our method story_separator_special_tag path queries on a knowledge graph can be used to answer compositional questions such as ` what languages are spoken by people living in lisbon ? ' . however , knowledge graphs often have missing facts ( edges ) which disrupt path queries . recent models for knowledge base completion impute missing facts by embedding knowledge graphs in vector spaces . we show that these models can be recursively applied to answer path queries , but that they suffer from cascading errors . this motivates a new compositional training objective , which dramatically improves all models ' ability to answer path queries , in some cases more than doubling accuracy . on a standard knowledge base completion task , we also demonstrate that compositional training acts as a novel form of structural regularization , reliably improving performance across all base models ( reducing errors by up to 43 % ) and achieving new state-of-the-art results . story_separator_special_tag the representation of a knowledge graph ( kg ) in a latent space recently has attracted more and more attention . to this end , some proposed models ( e.g. , transe ) embed entities and relations of a kg into a `` point '' vector space by optimizing a global loss function which ensures the scores of positive triplets are higher than negative ones . we notice that these models always regard all entities and relations in the same manner and ignore their ( un ) certainties . in fact , different entities and relations may contain different certainties , which makes identical certainty insufficient for modeling . therefore , this paper switches to density-based embedding and proposes kg2e for explicitly modeling the certainty of entities and relations , which learns the representations of kgs in the space of multi-dimensional gaussian distributions . each entity/relation is represented by a gaussian distribution , where the mean denotes its position and the covariance ( currently with diagonal covariance ) can properly represent its certainty . in addition , compared with the symmetric measures used in point-based methods , we employ the kl-divergence for scoring triplets , which is a natural asymmetry story_separator_special_tag modeling the complex interactions between users and items as well as amongst items themselves is at the core of designing successful recommender systems . one classical setting is predicting users ' personalized sequential behavior ( or ` next-item ' recommendation ) , where the challenges mainly lie in modeling ` third-order ' interactions between a user , her previously visited item ( s ) , and the next item to consume . existing methods typically decompose these higher-order interactions into a combination of pairwise relationships , by way of which user preferences ( user-item interactions ) and sequential patterns ( item-item interactions ) are captured by separate components . in this paper , we propose a unified method , transrec , to model such third-order relationships for large-scale sequential prediction . methodologically , we embed items into a ` transition space ' where users are modeled as translation vectors operating on item sequences . empirically , this approach outperforms the state-of-the-art on a wide spectrum of real-world datasets . data and code are available at this https url .
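kg2e , summarized above , represents each entity and relation as a diagonal-covariance gaussian and scores a triple by the asymmetric kl divergence between the distribution of h - t and the relation 's distribution . a sketch of that kl term for diagonal gaussians ; the sign convention for turning it into an energy is an assumption .

```python
import numpy as np

def kg2e_kl(mu_e, var_e, mu_r, var_r):
    """kl( N(mu_e, diag var_e) || N(mu_r, diag var_r) ), where mu_e and
    var_e describe the gaussian of the entity pair h - t. lower kl means
    a more plausible triple."""
    return 0.5 * np.sum(var_e / var_r
                        + (mu_r - mu_e) ** 2 / var_r
                        - 1.0
                        + np.log(var_r) - np.log(var_e))
```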
story_separator_special_tag many data such as social networks , movie preferences or knowledge bases are multi-relational , in that they describe multiple relations between entities . while there is a large body of work focused on modeling these data , modeling these multiple types of relations jointly remains challenging . further , existing approaches tend to break down when the number of these types grows . in this paper , we propose a method for modeling large multi-relational datasets , with possibly thousands of relations . our model is based on a bilinear structure , which captures various orders of interaction of the data , and also shares sparse latent factors across different relations . we illustrate the performance of our approach on standard tensor-factorization datasets where we attain , or outperform , state-of-the-art results . finally , an nlp application demonstrates our scalability and the ability of our model to learn efficient and semantically meaningful verb representations . story_separator_special_tag knowledge graphs are useful resources for numerous ai applications , but they are far from completeness . previous work such as transe , transh and transr/ctransr regards a relation as a translation from head entity to tail entity , and ctransr achieves state-of-the-art performance . in this paper , we propose a more fine-grained model named transd , which is an improvement of transr/ctransr . in transd , we use two vectors to represent a named symbol object ( entity and relation ) . the first one represents the meaning of a ( n ) entity ( relation ) , the other one is used to construct a mapping matrix dynamically . compared with transr/ctransr , transd not only considers the diversity of relations , but also entities . transd has fewer parameters and no matrix-vector multiplication operations , which makes it applicable to large-scale graphs . in experiments , we evaluate our model on two typical tasks including triplets classification and link prediction . evaluation results show that our approach outperforms state-of-the-art methods . story_separator_special_tag we model knowledge graphs for their completion by encoding each entity and relation into a numerical space . all previous work including trans ( e , h , r , and d ) ignores the heterogeneity ( some relations link many entity pairs and others do not ) and the imbalance ( the number of head entities and that of tail entities in a relation could be different ) of knowledge graphs . in this paper , we propose a novel approach transparse to deal with the two issues . in transparse , transfer matrices are replaced by adaptive sparse matrices , whose sparse degrees are determined by the number of entities ( or entity pairs ) linked by relations . in experiments , we design structured and unstructured sparse patterns for transfer matrices and analyze their advantages and disadvantages . we evaluate our approach on triplet classification and link prediction tasks . experimental results show that transparse outperforms trans ( e , h , r , and d ) significantly , and achieves state-of-the-art performance . story_separator_special_tag knowledge graphs contain knowledge about the world and provide a structured representation of this knowledge . current knowledge graphs contain only a small subset of what is true in the world . link prediction approaches aim at predicting new links for a knowledge graph given the existing links among the entities .
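transd , described above , builds a relation-specific mapping matrix m = r_p e_p^T + i from two projection vectors ; the matrix never has to be materialized , since applying it to an entity vector reduces to a dot product and a scaled addition . a sketch assuming equal entity and relation dimensions :

```python
import numpy as np

def transd_project(e, e_p, r_p):
    """project entity e into the relation space: (r_p e_p^T + I) e,
    computed without ever forming the matrix."""
    return r_p * np.dot(e_p, e) + e
```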
tensor factorization approaches have proved promising for such link prediction problems . proposed in 1927 , canonical polyadic ( cp ) decomposition is among the first tensor factorization approaches . cp generally performs poorly for link prediction as it learns two independent embedding vectors for each entity , whereas they are really tied . we present a simple enhancement of cp ( which we call simple ) to allow the two embeddings of each entity to be learned dependently . the complexity of simple grows linearly with the size of embeddings . the embeddings learned through simple are interpretable , and certain types of background knowledge can be incorporated into these embeddings through weight tying . we prove simple is fully expressive and derive a bound on the size of its embeddings for full expressivity . we show empirically that , despite its simplicity , simple outperforms several story_separator_special_tag we present a scalable approach for semi-supervised learning on graph-structured data that is based on an efficient variant of convolutional neural networks which operate directly on graphs . we motivate the choice of our convolutional architecture via a localized first-order approximation of spectral graph convolutions . our model scales linearly in the number of graph edges and learns hidden layer representations that encode both local graph structure and features of nodes . in a number of experiments on citation networks and on a knowledge graph dataset we demonstrate that our approach outperforms related methods by a significant margin . story_separator_special_tag we present a method for training a semantic parser using only a knowledge base and an unlabeled text corpus , without any individually annotated sentences . our key observation is that multiple forms of weak supervision can be combined to train an accurate semantic parser : semantic supervision from a knowledge base , and syntactic supervision from dependency-parsed sentences . we apply our approach to train a semantic parser that uses 77 relations from freebase in its knowledge representation . this semantic parser extracts instances of binary relations with state-of-the-art accuracy , while simultaneously recovering much richer semantic structures , such as conjunctions of multiple relations with partially shared arguments . we demonstrate recovery of this richer structure by extracting logical forms from natural language queries against freebase . on this task , the trained semantic parser achieves 80 % precision and 56 % recall , despite never having seen an annotated logical form . story_separator_special_tag large knowledge graphs increasingly add value to various applications that require machines to recognize and understand queries and their semantics , as in search or question answering systems . latent variable models have increasingly gained attention for the statistical modeling of knowledge graphs , showing promising results in tasks related to knowledge graph completion and cleaning . besides storing facts about the world , schema-based knowledge graphs are backed by rich semantic descriptions of entities and relation-types that allow machines to understand the notion of things and their semantic relationships . in this work , we study how type-constraints can generally support the statistical modeling with latent variable models . more precisely , we integrated prior knowledge in form of type-constraints in various state of the art latent variable approaches . 
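simple , introduced above , repairs cp decomposition by tying each entity 's head-role and tail-role embeddings through a relation and its inverse , averaging two cp terms per triple . a sketch with assumed embedding lookups :

```python
import numpy as np

def simple_score(h_head, h_tail, r, r_inv, t_head, t_tail):
    """simple score: 0.5 * (<h_head, r, t_tail> + <t_head, r_inv, h_tail>),
    where <.,.,.> is the sum of the elementwise triple product."""
    return 0.5 * (np.sum(h_head * r * t_tail)
                  + np.sum(t_head * r_inv * h_tail))
```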
our experimental results show that prior knowledge on relation-types significantly improves these models up to 77 % in link-prediction tasks . the achieved improvements are especially prominent when a low model complexity is enforced , a crucial requirement when these models are applied to very large datasets . unfortunately , type-constraints are neither always available nor always complete , e.g. , they can become fuzzy when entities lack proper typing story_separator_special_tag the problem of knowledge base completion can be framed as a 3rd-order binary tensor completion problem . in this light , the canonical tensor decomposition ( cp ) ( hitchcock , 1927 ) seems like a natural solution ; however , current implementations of cp on standard knowledge base completion benchmarks are lagging behind their competitors . in this work , we attempt to understand the limits of cp for knowledge base completion . first , we motivate and test a novel regularizer , based on tensor nuclear p-norms . then , we present a reformulation of the problem that makes it invariant to arbitrary choices in the inclusion of predicates or their reciprocals in the dataset . these two methods combined allow us to beat the current state of the art on several datasets with a cp decomposition , and obtain even better results using the more advanced complex model . story_separator_special_tag scientific literature with rich metadata can be represented as a labeled directed graph . this graph representation enables a number of scientific tasks such as ad hoc retrieval or named entity recognition ( ner ) to be formulated as typed proximity queries in the graph . one popular proximity measure is called random walk with restart ( rwr ) , and much work has been done on the supervised learning of rwr measures by associating each edge label with a parameter . in this paper , we describe a novel learnable proximity measure which instead uses one weight per edge label sequence : proximity is defined by a weighted combination of simple `` path experts '' , each corresponding to following a particular sequence of labeled edges . experiments on eight tasks in two subdomains of biology show that the new learning method significantly outperforms the rwr model ( both trained and untrained ) . we also extend the method to support two additional types of experts to model intrinsic properties of entities : query-independent experts , which generalize the pagerank measure , and popular entity experts which allow rankings to be adjusted for particular entities that are especially important story_separator_special_tag we consider the problem of performing learning and inference in a large scale knowledge base containing imperfect knowledge with incomplete coverage . we show that a soft inference procedure based on a combination of constrained , weighted , random walks through the knowledge base graph can be used to reliably infer new beliefs for the knowledge base . more specifically , we show that the system can learn to infer different target relations by tuning the weights associated with random walks that follow different paths through the graph , using a version of the path ranking algorithm ( lao and cohen , 2010b ) . we apply this approach to a knowledge base of approximately 500,000 beliefs extracted imperfectly from the web by nell , a never-ending language learner ( carlson et al. , 2010 ) .
this new system improves significantly over nell 's earlier horn-clause learning and inference method : it obtains nearly double the precision at rank 100 , and the new learning method is also applicable to many more inference tasks . story_separator_special_tag the dbpedia community project extracts structured , multilingual knowledge from wikipedia and makes it freely available on the web using semantic web and linked data technologies . the project extracts knowledge from 111 different language editions of wikipedia . the largest dbpedia knowledge base which is extracted from the english edition of wikipedia consists of over 400 million facts that describe 3.7 million things . the dbpedia knowledge bases that are extracted from the other 110 wikipedia editions together consist of 1.46 billion facts and describe 10 million additional things . the dbpedia project maps wikipedia infoboxes from 27 different language editions to a single shared ontology consisting of 320 classes and 1,650 properties . the mappings are created via a world-wide crowd-sourcing effort and enable knowledge from the different wikipedia editions to be combined . the project publishes releases of all dbpedia knowledge bases for download and provides sparql query access to 14 out of the 111 language editions via a global network of local dbpedia chapters . in addition to the regular releases , the project maintains a live knowledge base which is updated whenever a page in wikipedia changes . dbpedia sets 27 million rdf links pointing story_separator_special_tag fast and efficient learning over large bodies of commonsense knowledge is a key requirement for cognitive systems . semantic web knowledge bases provide an important new resource of ground facts from which plausible inferences can be learned . this paper applies structured logistic regression with analogical generalization ( slogan ) to make use of structural as well as statistical information to achieve rapid and robust learning . slogan achieves state-of-the-art performance in a standard triplet classification task on two data sets and , in addition , can provide understandable explanations for its answers . story_separator_special_tag representation learning of knowledge bases ( kbs ) aims to embed both entities and relations into a low-dimensional space . most existing methods only consider direct relations in representation learning . we argue that multiple-step relation paths also contain rich inference patterns between entities , and propose a path-based representation learning model . this model considers relation paths as translations between entities for representation learning , and addresses two key challenges : ( 1 ) since not all relation paths are reliable , we design a path-constraint resource allocation algorithm to measure the reliability of relation paths . ( 2 ) we represent relation paths via semantic composition of relation embeddings . experimental results on real-world datasets show that , as compared with baselines , our model achieves significant and consistent improvements on knowledge base completion and relation extraction from text . story_separator_special_tag knowledge graph completion aims to perform link prediction between entities . in this paper , we consider the approach of knowledge graph embeddings . recently , models such as transe and transh build entity and relation embeddings by regarding a relation as translation from head entity to tail entity . 
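the translation intuition just mentioned reduces to a one-line scoring function ; a minimal sketch with illustrative names , assuming the l2 distance :

    import numpy as np

    def transe_score(h, r, t):
        # transe : a valid triple should satisfy h + r ≈ t ,
        # so plausibility is the negative translation error
        return -np.linalg.norm(h + r - t)

    h = np.array([0.2, 0.5, -0.1])
    r = np.array([0.1, -0.2, 0.3])
    t = np.array([0.3, 0.3, 0.2])
    print(transe_score(h, r, t))   # closer to 0 means more plausible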
we note that these models simply put both entities and relations within the same semantic space . in fact , an entity may have multiple aspects and various relations may focus on different aspects of entities , which makes a common space insufficient for modeling . in this paper , we propose transr to build entity and relation embeddings in separate entity space and relation spaces . afterwards , we learn embeddings by first projecting entities from entity space to corresponding relation space and then building translations between projected entities . in experiments , we evaluate our models on three tasks including link prediction , triple classification and relational fact extraction . experimental results show significant and consistent improvements compared to state-of-the-art baselines including transe and transh . the source code of this paper can be obtained from https://github.com/mrlyk423/relation_extraction . story_separator_special_tag relational inference is a crucial technique for knowledge base population . the central problem in the study of relational inference is to infer unknown relations between entities from the facts given in the knowledge bases . two popular models have been put forth recently to solve this problem , which are the latent factor models and the random-walk models , respectively . however , each of them has their pros and cons , depending on their computational efficiency and inference accuracy . in this paper , we propose a hierarchical random-walk inference algorithm for relational learning in large scale graph-structured knowledge bases , which not only maintains the computational simplicity of the random-walk models , but also provides better inference accuracy than related works . the improvements come from two basic assumptions we proposed in this paper . firstly , we assume that although a relation between two entities is syntactically directional , the information conveyed by this relation is equally shared between the connected entities , thus all of the relations are semantically bidirectional . secondly , we assume that the topology structures of the relation-specific subgraphs in knowledge bases can be exploited to improve the performance of the story_separator_special_tag large-scale multi-relational embedding refers to the task of learning the latent representations for entities and relations in large knowledge graphs . an effective and scalable solution for this problem is crucial for the true success of knowledge-based inference in a broad range of applications . this paper proposes a novel framework for optimizing the latent representations with respect to the analogical properties of the embedded entities and relations . by formulating the learning objective in a differentiable fashion , our model enjoys both theoretical power and computational scalability , and significantly outperformed a large number of representative baseline methods on benchmark datasets . furthermore , the model offers an elegant unification of several well-known methods in multi-relational embedding , which can be proven to be special instantiations of our framework . story_separator_special_tag we consider the problem of embedding knowledge graphs ( kgs ) into continuous vector spaces . existing methods can only deal with explicit relationships within each triple , i.e. , local connectivity patterns , but can not handle implicit relationships across different triples , i.e. , contextual connectivity patterns .
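the separate-spaces idea behind transr , described above , amounts to one extra projection before the translation check ; a minimal sketch with illustrative names , where m_r maps the entity space of dimension d into a relation space of dimension k :

    import numpy as np

    def transr_score(h, r, t, M_r):
        # project both entities into the relation-specific space , then translate
        h_r = M_r @ h
        t_r = M_r @ t
        return -np.linalg.norm(h_r + r - t_r)

    rng = np.random.default_rng(0)
    d, k = 8, 4
    h, t = rng.normal(size=(2, d))
    r = rng.normal(size=k)
    M_r = rng.normal(size=(k, d))
    print(transr_score(h, r, t, M_r))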
this paper proposes context-dependent kg embedding , a two-stage scheme that takes into account both types of connectivity patterns and obtains more accurate embeddings . we evaluate our approach on the tasks of link prediction and triple classification , and achieve significant and consistent improvements over state-of-the-art methods . story_separator_special_tag knowledge base ( kb ) completion aims to infer missing facts from existing ones in a kb . among various approaches , path ranking ( pr ) algorithms have received increasing attention in recent years . pr algorithms enumerate paths between entity pairs in a kb and use those paths as features to train a model for missing fact prediction . due to their good performances and high model interpretability , several methods have been proposed . however , most existing methods suffer from scalability ( high ram consumption ) and feature explosion ( trains on an exponentially large number of features ) problems . this paper proposes a context-aware path ranking ( c-pr ) algorithm to solve these problems by introducing a selective path exploration strategy . c-pr learns global semantics of entities in the kb using word embedding and leverages the knowledge of entity semantics to enumerate contextually relevant paths using bidirectional random walk . experimental results on three large kbs show that the path features ( fewer in number ) discovered by c-pr not only improve predictive performance but also are more interpretable than existing baselines . story_separator_special_tag the recently introduced continuous skip-gram model is an efficient method for learning high-quality distributed vector representations that capture a large number of precise syntactic and semantic word relationships . in this paper we present several extensions that improve both the quality of the vectors and the training speed . by subsampling of the frequent words we obtain significant speedup and also learn more regular word representations . we also describe a simple alternative to the hierarchical softmax called negative sampling . an inherent limitation of word representations is their indifference to word order and their inability to represent idiomatic phrases . for example , the meanings of `` canada '' and `` air '' can not be easily combined to obtain `` air canada '' . motivated by this example , we present a simple method for finding phrases in text , and show that learning good vector representations for millions of phrases is possible . story_separator_special_tag the goal of this project is to provide lexical resources for natural language research . the primary emphases are on the further development and dissemination of the on-line lexical database , wordnet . a secondary goal is to learn how to develop contextual representations for different senses of a polysemous word , where a contextual representation is comprised of topical and local context for each sense . story_separator_special_tag the recent proliferation of knowledge graphs ( kgs ) coupled with incomplete or partial information , in the form of missing relations ( links ) between entities , has fueled a lot of research on knowledge base completion ( also known as relation prediction ) . several recent works suggest that convolutional neural network ( cnn ) based models generate richer and more expressive feature embeddings and hence also perform well on relation prediction .
however , we observe that these kg embeddings treat triples independently and thus fail to cover the complex and hidden information that is inherently implicit in the local neighborhood surrounding a triple . to this effect , our paper proposes a novel attention-based feature embedding that captures both entity and relation features in any given entity 's neighborhood . additionally , we also encapsulate relation clusters and multi-hop relations in our model . our empirical study offers insights into the efficacy of our attention-based model and we show marked performance gains in comparison to state-of-the-art methods on all datasets . story_separator_special_tag word sense disambiguation ( wsd ) is traditionally considered an ai-hard problem . a break-through in this field would have a significant impact on many relevant web-based applications , such as web information retrieval , improved access to web services , information extraction , etc . early approaches to wsd , based on knowledge representation techniques , have been replaced in the past few years by more robust machine learning and statistical techniques . the results of recent comparative evaluations of wsd systems , however , show that these methods have inherent limitations . on the other hand , the increasing availability of large-scale , rich lexical knowledge resources seems to provide new challenges to knowledge-based approaches . in this paper , we present a method , called structural semantic interconnections ( ssi ) , which creates structural specifications of the possible senses for each word in a context and selects the best hypothesis according to a grammar g , describing relations between sense specifications . sense specifications are created from several available lexical resources that we integrated in part manually , in part with the help of automatic procedures . the ssi algorithm has been applied to different semantic story_separator_special_tag knowledge base ( kb ) completion adds new facts to a kb by making inferences from existing facts , for example by inferring with high likelihood nationality ( x , y ) from bornin ( x , y ) . most previous methods infer simple one-hop relational synonyms like this , or use as evidence a multi-hop relational path treated as an atomic feature , like bornin ( x , z ) - > containedin ( z , y ) . this paper presents an approach that reasons about conjunctions of multi-hop relations non-atomically , composing the implications of a path using a recursive neural network ( rnn ) that takes as inputs vector embeddings of the binary relation in the path . not only does this allow us to generalize to paths unseen at training time , but also , with a single high-capacity rnn , to predict new relation types not seen when the compositional model was trained ( zero-shot learning ) . we assemble a new dataset of over 52m relational triples , and show that our method improves over a traditional classifier by 11 % , and a method leveraging pre-trained embeddings by 7 % . story_separator_special_tag knowledge bases of real-world facts about entities and their relationships are useful resources for a variety of natural language processing tasks . however , because knowledge bases are typically incomplete , it is useful to be able to perform link prediction , i.e. , predict whether a relationship not in the knowledge base is likely to be true .
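the non-atomic path composition described above , a recurrent network consuming relation embeddings along a path , can be sketched as follows ; the recurrence and all names here are a simplified assumption , not the exact architecture from the paper .

    import numpy as np

    def compose_path(rel_vecs, W, b):
        # fold the relation embeddings along a path into one vector :
        # state <- tanh(W [state ; next_rel] + b)
        state = rel_vecs[0]
        for v in rel_vecs[1:]:
            state = np.tanh(W @ np.concatenate([state, v]) + b)
        return state

    def path_implies(rel_vecs, target_rel, W, b):
        # similarity between the composed path and the relation to be predicted
        return compose_path(rel_vecs, W, b) @ target_rel

    rng = np.random.default_rng(0)
    d = 8
    W, b = rng.normal(size=(d, 2 * d)), np.zeros(d)
    path = rng.normal(size=(3, d))    # e.g. embeddings for bornin -> containedin
    target = rng.normal(size=d)       # e.g. the embedding of nationality
    print(path_implies(path, target, W, b))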
this paper combines insights from several previous link prediction models into a new embedding model stranse that represents each entity as a low-dimensional vector , and each relation by two matrices and a translation vector . stranse is a simple combination of the se and transe models , but it obtains better link prediction performance on two benchmark datasets than previous embedding models . thus , stranse can serve as a new baseline for the more complex models in the link prediction task . story_separator_special_tag knowledge bases are useful resources for many natural language processing tasks , however , they are far from complete . in this paper , we define a novel entity representation as a mixture of its neighborhood in the knowledge base and apply this technique on transe , a well-known embedding model for knowledge base completion . experimental results show that the neighborhood information significantly helps to improve the results of the transe model , leading to better performance than obtained by other state-of-the-art embedding models on three benchmark datasets for triple classification , entity prediction and relation prediction tasks . story_separator_special_tag in this paper , we propose a novel embedding model , named convkb , for knowledge base completion . our model convkb advances state-of-the-art models by employing a convolutional neural network , so that it can capture global relationships and transitional characteristics between entities and relations in knowledge bases . in convkb , each triple ( head entity , relation , tail entity ) is represented as a 3-column matrix where each column vector represents a triple element . this 3-column matrix is then fed to a convolution layer where multiple filters are operated on the matrix to generate different feature maps . these feature maps are then concatenated into a single feature vector representing the input triple . the feature vector is multiplied with a weight vector via a dot product to return a score . this score is then used to predict whether the triple is valid or not . experiments show that convkb obtains better link prediction and triple classification results than previous state-of-the-art models on benchmark datasets wn18rr , fb15k-237 , wn11 and fb13 .
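a minimal sketch of the convkb scoring pipeline described above , with illustrative names ; relu is assumed as the nonlinearity , and each filter is a 1 x 3 row convolved over the d x 3 triple matrix :

    import numpy as np

    def convkb_score(h, r, t, filters, w):
        A = np.stack([h, r, t], axis=1)                   # d x 3 triple matrix
        # each 1x3 filter slides over the rows , giving one d-length feature map
        feats = [np.maximum(A @ f, 0.0) for f in filters]
        v = np.concatenate(feats)                         # single feature vector
        return v @ w                                      # dot product -> score

    rng = np.random.default_rng(0)
    d, n_filters = 8, 3
    h, r, t = rng.normal(size=(3, d))
    filters = rng.normal(size=(n_filters, 3))
    w = rng.normal(size=n_filters * d)
    print(convkb_score(h, r, t, filters, w))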
we further apply our convkb to a search personalization problem which aims to tailor the search results to each specific user story_separator_special_tag in this paper , we introduce an embedding model , named capse , exploring a capsule network to model relationship triples ( subject , relation , object ) . our capse represents each triple as a 3-column matrix where each column vector represents the embedding of an element in the triple . this 3-column matrix is then fed to a convolution layer where multiple filters are operated to generate different feature maps . these feature maps are reconstructed into corresponding capsules which are then routed to another capsule to produce a continuous vector . the length of this vector is used to measure the plausibility score of the triple . our proposed capse obtains better performance than previous state-of-the-art embedding models for knowledge graph completion on two benchmark datasets wn18rr and fb15k-237 , and outperforms strong search personalization baselines on search17 . story_separator_special_tag relational learning is becoming increasingly important in many areas of application . here , we present a novel approach to relational learning based on the factorization of a three-way tensor . we show that unlike other tensor approaches , our method is able to perform collective learning via the latent components of the model and provide an efficient algorithm to compute the factorization . we substantiate our theoretical considerations regarding the collective learning capabilities of our model by the means of experiments on both a new dataset and a dataset commonly used in entity resolution . furthermore , we show on common benchmark datasets that our approach achieves better or on-par results , if compared to current state-of-the-art relational learning solutions , while it is significantly faster to compute . story_separator_special_tag relational machine learning studies methods for the statistical analysis of relational , or graph-structured , data . in this paper , we provide a review of how such statistical models can be `` trained '' on large knowledge graphs , and then used to predict new facts about the world ( which is equivalent to predicting new edges in the graph ) . in particular , we discuss two fundamentally different kinds of statistical relational models , both of which can scale to massive datasets . the first is based on latent feature models such as tensor factorization and multiway neural networks . the second is based on mining observable patterns in the graph . we also show how to combine these latent and observable models to get improved modeling power at decreased computational cost . finally , we discuss how such statistical models of graphs can be combined with text-based information extraction methods for automatically constructing knowledge graphs from the web . to this end , we also discuss google 's knowledge vault project as an example of such combination . story_separator_special_tag learning embeddings of entities and relations is an efficient and versatile method to perform machine learning on relational data such as knowledge graphs . in this work , we propose holographic embeddings ( hole ) to learn compositional vector space representations of entire knowledge graphs . the proposed method is related to holographic models of associative memory in that it employs circular correlation to create compositional representations . 
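the circular correlation just mentioned is cheap to compute with the fft ; a minimal sketch of the hole score , with illustrative names and a sigmoid link as in the paper :

    import numpy as np

    def circular_correlation(h, t):
        # [h * t]_k = sum_i h_i t_{(i + k) mod d} , computed in o(d log d)
        return np.fft.ifft(np.conj(np.fft.fft(h)) * np.fft.fft(t)).real

    def hole_score(h, r, t):
        # probability that the triple holds : sigma(r . (h * t))
        return 1.0 / (1.0 + np.exp(-(r @ circular_correlation(h, t))))

    rng = np.random.default_rng(0)
    h, r, t = rng.normal(size=(3, 8))
    print(hole_score(h, r, t))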
by using correlation as the compositional operator , hole can capture rich interactions but simultaneously remains efficient to compute , easy to train , and scalable to very large datasets . in extensive experiments we show that holographic embeddings are able to outperform state-of-the-art methods for link prediction in knowledge graphs and relational learning benchmark datasets . story_separator_special_tag we present discriminative gaifman models , a novel family of relational machine learning models . gaifman models learn feature representations bottom up from representations of locally connected and bounded-size regions of knowledge bases ( kbs ) . considering local and bounded-size neighborhoods of knowledge bases renders logical inference and learning tractable , mitigates the problem of overfitting , and facilitates weight sharing . gaifman models sample neighborhoods of knowledge bases so as to make the learned relational models more robust to missing objects and relations which is a common situation in open-world kbs . we present the core ideas of gaifman models and apply them to large-scale relational learning problems . we also discuss the ways in which gaifman models relate to some existing relational machine learning approaches . story_separator_special_tag recent methods for learning vector space representations of words have succeeded in capturing fine-grained semantic and syntactic regularities using vector arithmetic , but the origin of these regularities has remained opaque . we analyze and make explicit the model properties needed for such regularities to emerge in word vectors . the result is a new global log-bilinear regression model that combines the advantages of the two major model families in the literature : global matrix factorization and local context window methods . our model efficiently leverages statistical information by training only on the nonzero elements in a word-word cooccurrence matrix , rather than on the entire sparse matrix or on individual context windows in a large corpus . the model produces a vector space with meaningful substructure , as evidenced by its performance of 75 % on a recent word analogy task . it also outperforms related models on similarity tasks and named entity recognition . story_separator_special_tag in this paper we present an extension of a machine learning based coreference resolution system which uses features induced from different semantic knowledge sources . these features represent knowledge mined from wordnet and wikipedia , as well as information about semantic role labels . we show that semantic features indeed improve the performance on different referring expression types such as pronouns and common nouns . story_separator_special_tag a capsule is a group of neurons whose activity vector represents the instantiation parameters of a specific type of entity such as an object or object part . we use the length of the activity vector to represent the probability that the entity exists and its orientation to represent the instantiation parameters . active capsules at one level make predictions , via transformation matrices , for the instantiation parameters of higher-level capsules . when multiple predictions agree , a higher level capsule becomes active . we show that a discriminatively trained , multi-layer capsule system achieves state-of-the-art performance on mnist and is considerably better than a convolutional net at recognizing highly overlapping digits .
to achieve these results we use an iterative routing-by-agreement mechanism : a lower-level capsule prefers to send its output to higher level capsules whose activity vectors have a big scalar product with the prediction coming from the lower-level capsule . story_separator_special_tag knowledge graphs enable a wide variety of applications , including question answering and information retrieval . despite the great effort invested in their creation and maintenance , even the largest ( e.g. , yago , dbpedia or wikidata ) remain incomplete . we introduce relational graph convolutional networks ( r-gcns ) and apply them to two standard knowledge base completion tasks : link prediction ( recovery of missing facts , i.e . subject-predicate-object triples ) and entity classification ( recovery of missing entity attributes ) . r-gcns are related to a recent class of neural networks operating on graphs , and are developed specifically to handle the highly multi-relational data characteristic of realistic knowledge bases . we demonstrate the effectiveness of r-gcns as a stand-alone model for entity classification . we further show that factorization models for link prediction such as distmult can be significantly improved through the use of an r-gcn encoder model to accumulate evidence over multiple inference steps in the graph , demonstrating a large improvement of 29.8 % on fb15k-237 over a decoder-only baseline . story_separator_special_tag a method for knowledge base completion includes encoding a knowledge base comprising entities and relations between the entities into embeddings for the entities and embeddings for the relations . the embeddings for the entities are encoded based on a graph convolutional network ( gcn ) with different weights for at least some different types of the relations , which gcn is called a weighted gcn ( wgcn ) . the method further includes decoding the embeddings by a convolutional network for relation prediction . the convolutional network is configured to apply one dimensional ( 1d ) convolutional filters on the embeddings , which convolutional network is called conv-transe . the method further includes at least partially completing the knowledge base based on the relation prediction . story_separator_special_tag recent studies on knowledge base completion , the task of recovering missing relationships based on recorded relations , demonstrate the importance of learning embeddings from multi-step relations . however , due to the size of knowledge bases , learning multi-step relations directly on top of observed triplets could be costly . hence , a manually designed procedure is often used when training the models . in this paper , we propose implicit reasonets ( irns ) , which is designed to perform multi-step inference implicitly through a controller and shared memory . without a human-designed inference procedure , irns use training data to learn to perform multi-step inference in an embedding neural space through the shared memory and controller . while the inference procedure does not explicitly operate on top of observed triplets , our proposed model outperforms all previous approaches on the popular fb15k benchmark by more than 5.7 % . story_separator_special_tag with the large volume of new information created every day , determining the validity of information in a knowledge graph and filling in its missing parts are crucial tasks for many researchers and practitioners .
to address this challenge , a number of knowledge graph completion methods have been developed using low-dimensional graph embeddings . although researchers continue to improve these models using an increasingly complex feature space , we show that simple changes in the architecture of the underlying model can outperform state-of-the-art models without the need for complex feature engineering . in this work , we present a shared variable neural network model called proje that fills in missing information in a knowledge graph by learning joint embeddings of the knowledge graph 's entities and edges , and through subtle , but important , changes to the standard loss function . in doing so , proje has a parameter size that is smaller than 11 out of 15 existing methods while performing 37 % better than the current-best method on standard datasets . we also show , via a new fact checking task , that proje is capable of accurately determining the veracity of many declarative statements . story_separator_special_tag knowledge graphs ( kgs ) have been applied to many tasks including web search , link prediction , recommendation , natural language processing , and entity linking . however , most kgs are far from complete and are growing at a rapid pace . to address these problems , knowledge graph completion ( kgc ) has been proposed to improve kgs by filling in their missing connections . unlike existing methods which hold a closed-world assumption , i.e. , where kgs are fixed and new entities can not be easily added , in the present work we relax this assumption and propose a new open-world kgc task . as a first attempt to solve this task we introduce an open-world kgc model called conmask . this model learns embeddings of the entity 's name and parts of its text-description to connect unseen entities to the kg . to mitigate the presence of noisy text descriptions , conmask uses a relationship-dependent content masking to extract relevant snippets and then trains a fully convolutional neural network to fuse the extracted snippets with entities in the kg . experiments on large data sets , both old and new , show that conmask performs story_separator_special_tag knowledge bases are an important resource for question answering and other tasks but often suffer from incompleteness and lack of ability to reason over their discrete entities and relationships . in this paper we introduce an expressive neural tensor network suitable for reasoning over relationships between two entities . previous work represented entities as either discrete atomic units or with a single entity vector representation . we show that performance can be improved when entities are represented as an average of their constituting word vectors . this allows sharing of statistical strength between , for instance , facts involving the `` sumatran tiger '' and `` bengal tiger . '' lastly , we demonstrate that all models improve when these word vectors are initialized with vectors learned from unsupervised large corpora . we assess the model by considering the problem of predicting additional true relations between entities given a subset of the knowledge base . our model outperforms previous models and can classify unseen relationships in wordnet and freebase with an accuracy of 86.2 % and 90.0 % , respectively . story_separator_special_tag we present yago , a light-weight and extensible ontology with high coverage and quality .
yago builds on entities and relations and currently contains more than 1 million entities and 5 million facts . this includes the is-a hierarchy as well as non-taxonomic relations between entities ( such as haswonprize ) . the facts have been automatically extracted from wikipedia and unified with wordnet , using a carefully designed combination of rule-based and heuristic methods described in this paper . the resulting knowledge base is a major step beyond wordnet : in quality by adding knowledge about individuals like persons , organizations , products , etc . with their semantic relationships - and in quantity by increasing the number of facts by more than an order of magnitude . our empirical evaluation of fact correctness shows an accuracy of about 95 % . yago is based on a logically clean model , which is decidable , extensible , and compatible with rdfs . finally , we show how yago can be further extended by state-of-the-art information extraction techniques . story_separator_special_tag we study the problem of learning representations of entities and relations in knowledge graphs for predicting missing links . the success of such a task heavily relies on the ability of modeling and inferring the patterns of ( or between ) the relations . in this paper , we present a new approach for knowledge graph embedding called rotate , which is able to model and infer various relation patterns including : symmetry/antisymmetry , inversion , and composition . specifically , the rotate model defines each relation as a rotation from the source entity to the target entity in the complex vector space . in addition , we propose a novel self-adversarial negative sampling technique for efficiently and effectively training the rotate model . experimental results on multiple benchmark knowledge graphs show that the proposed rotate model is not only scalable , but also able to infer and model various relation patterns and significantly outperform existing state-of-the-art models for link prediction . story_separator_special_tag convolutional networks are at the core of most state-of-the-art computer vision solutions for a wide variety of tasks . since 2014 very deep convolutional networks started to become mainstream , yielding substantial gains in various benchmarks . although increased model size and computational cost tend to translate to immediate quality gains for most tasks ( as long as enough labeled data is provided for training ) , computational efficiency and low parameter count are still enabling factors for various use cases such as mobile vision and big-data scenarios . here we explore ways to scale up networks in ways that aim at utilizing the added computation as efficiently as possible by suitably factorized convolutions and aggressive regularization . we benchmark our methods on the ilsvrc 2012 classification challenge validation set and demonstrate substantial gains over the state of the art : 21.2 % top-1 and 5.6 % top-5 error for single frame evaluation using a network with a computational cost of 5 billion multiply-adds per inference and while using less than 25 million parameters . with an ensemble of 4 models and multi-crop evaluation , we report 3.5 % top-5 error on the validation set ( 3.6 % error on the story_separator_special_tag embedding models for entities and relations are extremely useful for recovering missing facts in a knowledge base . intuitively , a relation can be modeled by a matrix mapping entity vectors .
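the matrix view just stated is the heart of the bilinear family of embedding models ; a minimal sketch contrasting a full relation matrix ( rescal-style ) with its diagonal restriction ( distmult-style ) , with illustrative names :

    import numpy as np

    def bilinear_score(h, M_r, t):
        # full bilinear form : the relation is a d x d matrix acting on entities
        return h @ M_r @ t

    def distmult_score(h, r_diag, t):
        # diagonal restriction : only d parameters per relation , but the score
        # becomes symmetric in h and t , so antisymmetric relations suffer
        return np.sum(h * r_diag * t)

    rng = np.random.default_rng(0)
    d = 8
    h, t, r_diag = rng.normal(size=(3, d))
    M_r = rng.normal(size=(d, d))
    print(bilinear_score(h, M_r, t), distmult_score(h, r_diag, t))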
however , relations reside on low-dimensional sub-manifolds in the parameter space of arbitrary matrices : for one reason , composition of two relations m1 , m2 may match a third m3 ( e.g . composition of relations currency_of_country and country_of_film usually matches currency_of_film_budget ) , which imposes compositional constraints to be satisfied by the parameters ( i.e . m1 * m2 = m3 ) . in this paper we investigate a dimension reduction technique by training relations jointly with an autoencoder , which is expected to better capture compositional constraints . we achieve state-of-the-art results on knowledge base completion tasks with strongly improved mean rank , and show that joint training with an autoencoder leads to interpretable sparse codings of relations , helps discovering compositional constraints and benefits from compositional training . our source code is released at github.com/tianran/glimvec . story_separator_special_tag link prediction on knowledge graphs is useful in numerous application areas such as semantic search , question answering , entity disambiguation , enterprise decision support , recommender systems and so on . while many of these applications require a reasonably quick response and may operate on data that is constantly changing , existing methods often lack speed and adaptability to cope with these requirements . this is aggravated by the fact that knowledge graphs are often extremely large and may easily contain millions of entities rendering many of these methods impractical . in this paper , we address the weaknesses of current methods by proposing random semantic tensor ensemble ( rste ) , a scalable ensemble-enabled framework based on tensor factorization . our proposed approach samples a knowledge graph tensor in its graph representation and performs link prediction via ensembles of tensor factorization . our experiments on both publicly available datasets and real world enterprise/sales knowledge bases have shown that our approach is not only highly scalable , parallelizable and memory efficient , but also able to increase the prediction accuracy significantly across all datasets . story_separator_special_tag in this paper we show the surprising effectiveness of a simple observed features model in comparison to latent feature models on two benchmark knowledge base completion datasets , fb15k and wn18 . we also compare latent and observed feature models on a more challenging dataset derived from fb15k , and additionally coupled with textual mentions from a web-scale corpus . we show that the observed features model is most effective at capturing the information present for entity pairs with textual relations , and a combination of the two combines the strengths of both model types . story_separator_special_tag modeling relation paths has offered significant gains in embedding models for knowledge base ( kb ) completion . however , enumerating paths between two entities is very expensive , and existing approaches typically resort to approximation with a sampled subset . this problem is particularly acute when text is jointly modeled with kb relations and used to provide direct evidence for facts mentioned in it . in this paper , we propose the first exact dynamic programming algorithm which enables efficient incorporation of all relation paths of bounded length , while modeling both relation types and intermediate nodes in the compositional path representations . we conduct a theoretical analysis of the efficiency gain from the approach .
experiments on two datasets show that it addresses representational limitations in prior approaches and improves accuracy in kb completion . story_separator_special_tag in statistical relational learning , the link prediction problem is key to automatically understand the structure of large knowledge bases . as in previous studies , we propose to solve this problem through latent factorization . however , here we make use of complex valued embeddings . the composition of complex embeddings can handle a large variety of binary relations , among them symmetric and antisymmetric relations . compared to state-of-the-art models such as neural tensor network and holographic embeddings , our approach based on complex embeddings is arguably simpler , as it only uses the hermitian dot product , the complex counterpart of the standard dot product between real vectors . our approach is scalable to large datasets as it remains linear in both space and time , while consistently outperforming alternative approaches on standard link prediction benchmarks . story_separator_special_tag conventional network representation learning ( nrl ) models learn low-dimensional vertex representations by simply regarding each edge as a binary or continuous value . however , there exists rich semantic information on edges and the interactions between vertices usually preserve distinct meanings , which are largely neglected by most existing nrl models . in this work , we present a novel translation-based nrl model , transnet , by regarding the interactions between vertices as a translation operation . moreover , we formalize the task of social relation extraction ( sre ) to evaluate the capability of nrl methods on modeling the relations between vertices . experimental results on sre demonstrate that transnet significantly outperforms other baseline methods by 10 % to 20 % on hits @ 1. the source code and datasets can be obtained from https://github.com/thunlp/transnet . story_separator_special_tag most existing knowledge graphs suffer from incompleteness , which can be alleviated by inferring missing links based on known facts . one popular way to accomplish this is to generate low-dimensional embeddings of entities and relations , and use these to make inferences . conve , a recently proposed approach , applies convolutional filters on 2d reshapings of entity and relation embeddings in order to capture rich interactions between their components . however , the number of interactions that conve can capture is limited . in this paper , we analyze how increasing the number of these interactions affects link prediction performance , and utilize our observations to propose interacte . interacte is based on three key ideas : feature permutation , a novel feature reshaping , and circular convolution . through extensive experiments , we find that interacte outperforms state-of-the-art convolutional link prediction baselines on fb15k-237 . further , interacte achieves an mrr score that is 9 % , 7.5 % , and 23 % better than conve on the fb15k-237 , wn18rr and yago3-10 datasets respectively . the results validate our central hypothesis that increasing feature interaction is beneficial to link prediction performance . we make the source code story_separator_special_tag we present graph attention networks ( gats ) , novel neural network architectures that operate on graph-structured data , leveraging masked self-attentional layers to address the shortcomings of prior methods based on graph convolutions or their approximations .
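the hermitian dot product used by the complex-embedding approach above is a one-liner ; a minimal sketch with illustrative names :

    import numpy as np

    def complex_score(h, r, t):
        # re(<h, r, conj(t)>) ; the imaginary parts break the h/t symmetry ,
        # which is what lets the model score antisymmetric relations
        return np.real(np.sum(h * r * np.conj(t)))

    rng = np.random.default_rng(0)
    h, r, t = (rng.normal(size=8) + 1j * rng.normal(size=8) for _ in range(3))
    print(complex_score(h, r, t))
    # swapping head and tail changes the score unless r is real-valued :
    print(complex_score(t, r, h))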
by stacking layers in which nodes are able to attend over their neighborhoods ' features , we enable ( implicitly ) specifying different weights to different nodes in a neighborhood , without requiring any kind of costly matrix operation ( such as inversion ) or depending on knowing the graph structure upfront . in this way , we address several key challenges of spectral-based graph neural networks simultaneously , and make our model readily applicable to inductive as well as transductive problems . our gat models have achieved or matched state-of-the-art results across four established transductive and inductive graph benchmarks : the cora , citeseer and pubmed citation network datasets , as well as a protein-protein interaction dataset ( wherein test graphs remain unseen during training ) . story_separator_special_tag recent research has shown that the performance of search personalization depends on the richness of user profiles which normally represent the user 's topical interests . in this paper , we propose a new embedding approach to learning user profiles , where users are embedded on a topical interest space . we then directly utilize the user profiles for search personalization . experiments on query logs from a major commercial web search engine demonstrate that our embedding approach improves the performance of the search engine and also achieves better search performance than other strong baselines . story_separator_special_tag learning the representations of a knowledge graph has attracted significant research interest in the field of intelligent web . by regarding each relation as one translation from head entity to tail entity , translation-based methods including transe , transh and transr are simple , effective and achieving the state-of-the-art performance . however , they still suffer the following issues : ( i ) low performance when modeling 1-to-n , n-to-1 and n-to-n relations . ( ii ) limited performance due to the structure sparseness of the knowledge graph . in this paper , we propose a novel knowledge graph representation learning method by taking advantage of the rich context information in a text corpus . the rich textual context information is incorporated to expand the semantic structure of the knowledge graph and each relation is enabled to own different representations for different head and tail entities to better handle 1-to-n , n-to-1 and n-to-n relations . experiments on multiple benchmark datasets show that our proposed method successfully addresses the above issues and significantly outperforms the state-of-the-art methods . story_separator_special_tag we deal with embedding a large scale knowledge graph composed of entities and relations into a continuous vector space . transe is a promising method proposed recently , which is very efficient while achieving state-of-the-art predictive performance . we discuss some mapping properties of relations which should be considered in embedding , such as reflexive , one-to-many , many-to-one , and many-to-many . we note that transe does not do well in dealing with these properties . some complex models are capable of preserving these mapping properties but sacrifice efficiency in the process . to make a good trade-off between model capacity and efficiency , in this paper we propose transh which models a relation as a hyperplane together with a translation operation on it . in this way , we can well preserve the above mapping properties of relations with almost the same model complexity of transe . 
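the hyperplane mechanism just described takes only a few lines ; a minimal sketch with illustrative names , where w_r is normalized to unit length as the model requires :

    import numpy as np

    def transh_score(h, t, w_r, d_r):
        # project both entities onto the relation-specific hyperplane ,
        # then apply the usual translation check between the projections
        w = w_r / np.linalg.norm(w_r)
        h_p = h - (w @ h) * w
        t_p = t - (w @ t) * w
        return -np.linalg.norm(h_p + d_r - t_p)

    rng = np.random.default_rng(0)
    h, t, w_r, d_r = rng.normal(size=(4, 8))
    print(transh_score(h, t, w_r, d_r))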
additionally , as a practical knowledge graph is often far from completed , how to construct negative examples to reduce false negative labels in training is very important . utilizing the one-to-many/many-to-one mapping property of a relation , we propose a simple trick to reduce the possibility of false negative labeling story_separator_special_tag knowledge bases ( kbs ) are often greatly incomplete , necessitating a demand for kb completion . the path ranking algorithm ( pra ) is one of the most promising approaches to this task . previous work on pra usually follows a single-task learning paradigm , building a prediction model for each relation independently with its own training data . it ignores meaningful associations among certain relations , and might not get enough training data for less frequent relations . this paper proposes a novel multi-task learning framework for pra , referred to as coupled pra ( cpra ) . it first devises an agglomerative clustering strategy to automatically discover relations that are highly correlated to each other , and then employs a multi-task learning strategy to effectively couple the prediction of such relations . as such , cpra takes into account relation association and enables implicit data sharing among them . we empirically evaluate cpra on benchmark data created from freebase . experimental results show that cpra can effectively identify coherent clusters in which relations are highly correlated . by further coupling such relations , cpra significantly outperforms pra , in terms of both predictive accuracy and model interpretability story_separator_special_tag knowledge graph ( kg ) embedding is to embed components of a kg including entities and relations into continuous vector spaces , so as to simplify the manipulation while preserving the inherent structure of the kg . it can benefit a variety of downstream tasks such as kg completion and relation extraction , and hence has quickly gained massive attention . in this article , we provide a systematic review of existing techniques , including not only the state-of-the-arts but also those with latest trends . particularly , we make the review based on the type of information used in the embedding task . techniques that conduct embedding using only facts observed in the kg are first introduced . we describe the overall framework , specific model design , typical training procedures , as well as pros and cons of such techniques . after that , we discuss techniques that further incorporate additional information besides facts . we focus specifically on the use of entity types , relation paths , textual descriptions , and logical rules . finally , we briefly introduce how kg embedding can be applied to and benefit a wide variety of downstream tasks such as kg story_separator_special_tag deep inference on a large-scale knowledge base ( kb ) needs a mass of formulas , but it is almost impossible to create all formulas manually . data-driven methods have been proposed to mine formulas from kbs automatically , where random sampling and approximate calculation are common techniques to handle big data . among a series of methods , random walk is believed to be suitable for knowledge graph data . however , a pure random walk without goals still has a poor efficiency of mining useful formulas , and even introduces lots of noise which may mislead inference . although several heuristic rules have been proposed to direct random walks , they do not work well due to the diversity of formulas . 
to this end , we propose a novel goal-directed inference formula mining algorithm , which directs random walks by the specific inference target at each step . the algorithm is more inclined to visit beneficial structures to infer the target , so it can increase efficiency of random walks and avoid noise simultaneously . the experiments on both wordnet and freebase prove that our approach has high efficiency and performs best on the task story_separator_special_tag over the past few years , massive amounts of world knowledge have been accumulated in publicly available knowledge bases , such as freebase , nell , and yago . yet despite their seemingly huge size , these knowledge bases are greatly incomplete . for example , over 70 % of people included in freebase have no known place of birth , and 99 % have no known ethnicity . in this paper , we propose a way to leverage existing web-search-based question-answering technology to fill in the gaps in knowledge bases in a targeted way . in particular , for each entity attribute , we learn the best set of queries to ask , such that the answer snippets returned by the search engine are most likely to contain the correct value for that attribute . for example , if we want to find frank zappa 's mother , we could ask the query ` who is the mother of frank zappa ' . however , this is likely to return ` the mothers of invention ' , which was the name of his band . our system learns that it should ( in this case ) add disambiguating terms story_separator_special_tag security researchers have used natural language processing ( nlp ) and deep learning techniques for programming code analysis tasks such as automated bug detection and vulnerability prediction or classification . these studies mainly generate the input vectors for the deep learning models based on the nlp embedding methods . nevertheless , while there are many existing embedding methods , the structures of neural networks are diverse and usually heuristic . this makes it difficult to select effective combinations of neural models and the embedding techniques for training the code vulnerability detectors . to address this challenge , we extended a benchmark system to analyze the compatibility of four popular word embedding techniques with four different neural networks , including the standard bidirectional long short-term memory ( bi-lstm ) , the bi-lstm with an attention mechanism , the convolutional neural network ( cnn ) , and the classic deep neural network ( dnn ) . we trained and tested the models by using two types of vulnerable function datasets written in c code . our results revealed that the bi-lstm model combined with the fasttext embedding technique showed the most efficient detection rate on a real-world but not on an artificially constructed story_separator_special_tag recently , knowledge graph embedding , which projects symbolic entities and relations into continuous vector space , has become a new , hot topic in artificial intelligence . this paper proposes a novel generative model ( transg ) to address the issue of multiple relation semantics that a relation may have multiple meanings revealed by the entity pairs associated with the corresponding triples . the new model can discover latent semantics for a relation and leverage a mixture of relation-specific component vectors to embed a fact triple .
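the mixture idea just described can be sketched as a weighted sum of per-component translation terms ; the gaussian form and all names here are a simplified assumption about the generative model , not the authors ' exact parameterization .

    import numpy as np

    def transg_style_score(h, t, components, weights, sigma=1.0):
        # each semantic component r_k of the relation contributes one
        # translation term , weighted by its mixture coefficient pi_k
        return sum(pi * np.exp(-np.sum((h + r_k - t) ** 2) / sigma ** 2)
                   for r_k, pi in zip(components, weights))

    rng = np.random.default_rng(0)
    h, t = rng.normal(size=(2, 8))
    components = rng.normal(size=(3, 8))   # three latent semantics of one relation
    weights = np.array([0.5, 0.3, 0.2])
    print(transg_style_score(h, t, components, weights))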
to the best of our knowledge , this is the first generative model for knowledge graph embedding , and for the first time , the issue of multiple relation semantics is formally discussed . extensive experiments show that the proposed model achieves substantial improvements against the state-of-the-art baselines . story_separator_special_tag knowledge representation is an important , long-standing topic in ai , and there has been a large body of work on knowledge graph embedding , which projects symbolic entities and relations into low-dimensional , real-valued vector space . however , most embedding methods merely concentrate on data fitting and ignore the explicit semantic expression , leading to uninterpretable representations . thus , traditional embedding methods have limited potentials for many applications such as question answering , and entity classification . to this end , this paper proposes a semantic representation method for knowledge graph ( ksr ) , which imposes a two-level hierarchical generative process that globally extracts many aspects and then locally assigns a specific category in each aspect for every triple . since both aspects and categories are semantics-relevant , the collection of categories in each aspect is treated as the semantic representation of this triple . extensive experiments show that our model outperforms other state-of-the-art baselines substantially . story_separator_special_tag knowledge bases are important resources for a variety of natural language processing tasks but suffer from incompleteness . we propose a novel embedding model , itransf , to perform knowledge base completion . equipped with a sparse attention mechanism , itransf discovers hidden concepts of relations and transfers statistical strength through the sharing of concepts . moreover , the learned associations between relations and concepts , which are represented by sparse attention vectors , can be interpreted easily . we evaluate itransf on two benchmark datasets wn18 and fb15k for knowledge base completion and obtain improvements on both the mean rank and hits @ 10 metrics , over all baselines that do not use additional information . story_separator_special_tag the goal of knowledge graph embedding ( kge ) is to learn how to represent the low dimensional vectors for entities and relations based on the observed triples . the conventional shallow models are limited in their expressiveness . conve ( dettmers et al. , 2018 ) takes advantage of cnn and improves the expressive power with parameter efficient operators by increasing the interactions between head and relation embeddings . however , there is no structural information in the embedding space of conve , and the performance is still limited by the number of interactions . the recent kbgat ( nathani et al. , 2019 ) provides another way to learn embeddings by adaptively utilizing structural information . in this paper , we take the benefits of conve and kbgat together and propose a relation-aware inception network with joint local-global structural information for knowledge graph embedding ( reinceptione ) . specifically , we first explore the inception network to learn query embedding , which aims to further increase the interactions between head and relation embeddings . then , we propose to use a relation-aware attention mechanism to enrich the query embedding with the local neighborhood and global entity information .
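since several of the surrounding abstracts build on conve 's 2d-reshaping idea , a minimal sketch of that scoring shape may help ; this is an illustrative approximation ( numpy only , a single filter , relu , and d = 16 reshaped to 4 x 4 ) rather than the published architecture .

    import numpy as np

    def conv2d_valid(x, k):
        # tiny valid cross-correlation , stride 1 , no padding
        H, W = x.shape
        kh, kw = k.shape
        return np.array([[np.sum(x[i:i + kh, j:j + kw] * k)
                          for j in range(W - kw + 1)]
                         for i in range(H - kh + 1)])

    def conve_style_score(h, r, t, kernel, W_proj, shape=(4, 4)):
        # stack 2d reshapings of head and relation , convolve , flatten ,
        # project back to embedding size , then match against the tail
        grid = np.vstack([h.reshape(shape), r.reshape(shape)])   # 8 x 4
        fmap = np.maximum(conv2d_valid(grid, kernel), 0.0)       # 6 x 2 after a 3x3 filter
        hidden = np.maximum(W_proj @ fmap.ravel(), 0.0)
        return hidden @ t

    rng = np.random.default_rng(0)
    h, r, t = rng.normal(size=(3, 16))
    kernel = rng.normal(size=(3, 3))
    W_proj = rng.normal(size=(16, 12))    # 12 = 6 * 2 flattened feature map
    print(conve_style_score(h, r, t, kernel, W_proj))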
story_separator_special_tag link prediction is critical for the application of incomplete knowledge graphs ( kgs ) in downstream tasks . as a family of effective approaches for link prediction , embedding methods try to learn low-rank representations for both entities and relations such that the bilinear form defined therein is a well-behaved scoring function . despite their successful performance , existing bilinear forms overlook the modeling of relation compositions , resulting in a lack of interpretability for reasoning on kgs . to fill this gap , we propose a new model called dihedral , named after the dihedral symmetry group . this new model learns knowledge graph embeddings that can capture relation compositions by nature . furthermore , our approach models the relation embeddings with parameters taking discrete values , thereby decreasing the solution space drastically . our experiments show that dihedral is able to capture all desired properties such as ( skew- ) symmetry , inversion and ( non- ) abelian composition , outperforms existing bilinear-form-based approaches , and is comparable to or better than deep learning models such as conve . story_separator_special_tag we consider learning representations of entities and relations in kbs using the neural-embedding approach . we show that most existing models , including ntn ( socher et al. , 2013 ) and transe ( bordes et al. , 2013b ) , can be generalized under a unified learning framework , where entities are low-dimensional vectors learned from a neural network and relations are bilinear and/or linear mapping functions . under this framework , we compare a variety of embedding models on the link prediction task . we show that a simple bilinear formulation achieves new state-of-the-art results for the task ( achieving a top-10 accuracy of 73.2 % vs. 54.7 % by transe on freebase ) . furthermore , we introduce a novel approach that utilizes the learned relation embeddings to mine logical rules such as `` bornincity ( a , b ) and cityincountry ( b , c ) => nationality ( a , c ) '' . we find that embeddings learned from the bilinear objective are particularly good at capturing relational semantics and that the composition of relations is characterized by matrix multiplication . more interestingly , we demonstrate that our embedding-based rule extraction approach successfully story_separator_special_tag we study the problem of learning probabilistic first-order logical rules for knowledge base reasoning . this learning problem is difficult because it requires learning the parameters in a continuous space as well as the structure in a discrete space . we propose a framework , neural logic programming , that combines the parameter and structure learning of first-order logical rules in an end-to-end differentiable model . this approach is inspired by a recently-developed differentiable logic called tensorlog [ 5 ] , where inference tasks can be compiled into sequences of differentiable operations . we design a neural controller system that learns to compose these operations . empirically , our method outperforms prior work on multiple knowledge base benchmark datasets , including freebase and wikimovies . story_separator_special_tag large-scale knowledge graphs ( kgs ) such as freebase are generally incomplete . reasoning over multi-hop ( mh ) kg paths is thus an important capability that is needed for question answering or other nlp tasks that require knowledge about the world . mh-kg reasoning includes diverse scenarios , e.g .
, given a head entity and a relation path , predict the tail entity ; or given two entities connected by some relation paths , predict the unknown relation between them . we present rops , recurrent one-hop predictors , that predict entities at each step of mh-kb paths by using recurrent neural networks and vector representations of entities and relations , with two benefits : ( i ) modeling mh-paths of arbitrary lengths while updating the entity and relation representations by the training signal at each step ; ( ii ) handling different types of mh-kg reasoning in a unified framework . our models show state-of-the-art performance on two important multi-hop kg reasoning tasks : knowledge base completion and path query answering . story_separator_special_tag this paper proposes a novel translation-based knowledge graph embedding that preserves the logical properties of relations such as transitivity and symmetricity . the embedding space generated by existing translation-based embeddings does not represent transitive and symmetric relations precisely , because they ignore the role of entities in triples . thus , we introduce a role-specific projection which maps an entity to distinct vectors according to its role in a triple . that is , a head entity is projected onto an embedding space by a head projection operator , and a tail entity is projected by a tail projection operator . this idea is applied to transe , transr , and transd to produce lpptranse , lpptransr , and lpptransd , respectively . according to the experimental results on link prediction and triple classification , the proposed logical property preserving embeddings show state-of-the-art performance at both tasks . these results prove that it is critical to preserve logical properties of relations while embedding knowledge graphs , and that the proposed method does so effectively . story_separator_special_tag among different recommendation techniques , collaborative filtering usually suffers from limited performance due to the sparsity of user-item interactions . to address this issue , auxiliary information is usually used to boost the performance . due to the rapid collection of information on the web , the knowledge base provides heterogeneous information including both structured and unstructured data with different semantics , which can be consumed by various applications . in this paper , we investigate how to leverage the heterogeneous information in a knowledge base to improve the quality of recommender systems . first , by exploiting the knowledge base , we design three components to extract items ' semantic representations from structural content , textual content and visual content , respectively . to be specific , we adopt a heterogeneous network embedding method , termed transr , to extract items ' structural representations by considering the heterogeneity of both nodes and relationships . we apply stacked denoising auto-encoders and stacked convolutional auto-encoders , which are two types of deep learning based embedding techniques , to extract items ' textual representations and visual representations , respectively . finally , we propose our final integrated framework , which is story_separator_special_tag visual relations , such as `` person ride bike '' and `` bike next to car '' , offer a comprehensive scene understanding of an image , and have already shown their great utility in connecting computer vision and natural language .
however , due to the challenging combinatorial complexity of modeling subject-predicate-object relation triplets , very little work has been done to localize and predict visual relations . inspired by the recent advances in relational representation learning of knowledge bases and convolutional object detection networks , we propose a visual translation embedding network ( vtranse ) for visual relation detection . vtranse places objects in a low-dimensional relation space where a relation can be modeled as a simple vector translation , i.e. , subject + predicate $\approx$ object . we propose a novel feature extraction layer that enables object-relation knowledge transfer in a fully-convolutional fashion that supports training and inference in a single forward/backward pass . to the best of our knowledge , vtranse is the first end-to-end relation detection network . we demonstrate the effectiveness of vtranse over other state-of-the-art methods on two large-scale datasets : visual relationship and visual genome . note that even though story_separator_special_tag the rapid development of knowledge graphs ( kgs ) , such as freebase and wordnet , has changed the paradigm for ai-related applications . however , even though these kgs are impressively large , most of them suffer from incompleteness , which leads to performance degradation of ai applications . most existing research focuses on knowledge graph embedding ( kge ) models . nevertheless , those models simply embed entities and relations into latent vectors without leveraging the rich information from the relation structure . indeed , relations in kgs conform to a three-layer hierarchical relation structure ( hrs ) , i.e. , semantically similar relations can make up relation clusters and some relations can be further split into several fine-grained sub-relations . relation clusters , relations and sub-relations fit in the top , the middle and the bottom layer of the three-layer hrs , respectively . to this end , in this paper , we extend the existing kge models transe , transh and distmult to learn knowledge representations by leveraging the information from the hrs . particularly , our approach is capable of extending other kge models as well . finally , the experimental results clearly validate the effectiveness story_separator_special_tag in this work , we move beyond the traditional complex-valued representations , introducing more expressive hypercomplex representations to model entities and relations for knowledge graph embeddings . more specifically , quaternion embeddings , hypercomplex-valued embeddings with three imaginary components , are utilized to represent entities . relations are modelled as rotations in the quaternion space . the advantages of the proposed approach are : ( 1 ) latent inter-dependencies ( between all components ) are aptly captured with the hamilton product , encouraging a more compact interaction between entities and relations ; ( 2 ) quaternions enable expressive rotation in four-dimensional space and have more degrees of freedom than rotation in the complex plane ; ( 3 ) the proposed framework is a generalization of complex-valued embeddings to hypercomplex space while offering better geometrical interpretations , concurrently satisfying the key desiderata of relational representation learning ( i.e. , modeling symmetry , anti-symmetry and inversion ) . experimental results demonstrate that our method achieves state-of-the-art performance on four well-established knowledge graph completion benchmarks .
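the quaternion rotation idea above can be made concrete in a few lines . the following sketch is our illustration , not the authors' code ; the embeddings are random stand-ins , and the score rotates the head by a unit relation quaternion via the hamilton product , then matches it against the tail with an inner product .

```python
import numpy as np

def hamilton(q, p):
    # hamilton product of quaternion arrays shaped (..., 4): a + b*i + c*j + d*k
    a1, b1, c1, d1 = np.moveaxis(q, -1, 0)
    a2, b2, c2, d2 = np.moveaxis(p, -1, 0)
    return np.stack([
        a1 * a2 - b1 * b2 - c1 * c2 - d1 * d2,
        a1 * b2 + b1 * a2 + c1 * d2 - d1 * c2,
        a1 * c2 - b1 * d2 + c1 * a2 + d1 * b2,
        a1 * d2 + b1 * c2 - c1 * b2 + d1 * a2,
    ], axis=-1)

def quaternion_style_score(head, rel, tail):
    # rotate the head by the normalised relation quaternion, then match the tail;
    # each entity here is a set of k quaternions, i.e. an array of shape (k, 4)
    rel = rel / np.linalg.norm(rel, axis=-1, keepdims=True)
    return float(np.sum(hamilton(head, rel) * tail))

rng = np.random.default_rng(0)
h, r, t = rng.normal(size=(3, 8, 4))  # toy 8-quaternion embeddings
print(quaternion_style_score(h, r, t))
```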
story_separator_special_tag markov logic networks ( mlns ) , which elegantly combine logic rules and probabilistic graphical models , can be used to address many knowledge graph problems . however , inference in mlns is computationally intensive , making the industrial-scale application of mlns very difficult . in recent years , graph neural networks ( gnns ) have emerged as efficient and effective tools for large-scale graph problems . nevertheless , gnns do not explicitly incorporate prior logic rules into the models , and may require many labeled examples for a target task . in this paper , we explore the combination of mlns and gnns , and use graph neural networks for variational inference in mlns . we propose a gnn variant , named expressgnn , which strikes a nice balance between the representation power and the simplicity of the model . our extensive experiments on several benchmark datasets demonstrate that expressgnn leads to effective and efficient probabilistic logic reasoning . story_separator_special_tag we consider the problem of learning and inference in a large-scale knowledge graph containing incomplete knowledge . we show that a simple neural network module for relational reasoning over paths extracted from the knowledge base can be used to reliably infer new facts for missing links . in our work , we used the path ranking algorithm to extract relation paths from the knowledge graph and used them to build training data . in order to learn the characteristics of relations , a detour path between nodes was created as training data using the extracted relation paths . using this , we trained a model that can predict whether a given triple ( head entity , relation , tail entity ) is valid or not . experiments show that our model obtains better link prediction , relation prediction and triple classification results than previous state-of-the-art models on the benchmark datasets wn18rr , fb15k-237 , wn11 and fb13 .
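a tiny sketch of the bilinear view discussed a few abstracts above may help ( hypothetical toy vectors , not an implementation of any specific system ) : each relation is a matrix m_r , a triple ( h , r , t ) is scored as h^T m_r t , and a two-hop path such as bornincity followed by cityincountry composes by matrix multiplication , which is exactly the property the rule-mining result exploits .

```python
import numpy as np

rng = np.random.default_rng(1)
dim = 30
h, t = rng.normal(size=(2, dim))  # toy head and tail entity vectors

# distmult-style: relation matrices restricted to diagonals
M_born_in_city = np.diag(rng.normal(size=dim))
M_city_in_country = np.diag(rng.normal(size=dim))

def bilinear_score(head, M, tail):
    # score(h, r, t) = h^T M_r t
    return float(head @ M @ tail)

# relation composition as matrix multiplication: the two-hop path
# bornincity -> cityincountry acts like one composed relation (~ nationality)
M_path = M_born_in_city @ M_city_in_country
print(bilinear_score(h, M_path, t))
```

note that with diagonal matrices this composition is commutative , which is precisely the limitation that the dihedral model above addresses by moving to a non-abelian group of relation matrices .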
algorithm selection methods can be sped up substantially by incorporating multi-objective measures that give preference to algorithms that are both promising and fast to evaluate . in this paper , we introduce such a measure , a3r , and incorporate it into two algorithm selection techniques : average ranking and active testing . average ranking combines algorithm rankings observed on prior datasets to identify the best algorithms for a new dataset . the aim of the second method is to iteratively select algorithms to be tested on the new dataset , learning from each new evaluation to intelligently select the next best candidate . we show how both methods can be upgraded to incorporate a multi-objective measure a3r that combines accuracy and runtime . it is necessary to establish the correct balance between accuracy and runtime , as otherwise time will be wasted by conducting less informative tests . the correct balance can be set by an appropriate parameter setting within function a3r that trades off accuracy and runtime . our results demonstrate that the upgraded versions of average ranking and active testing lead to much better mean interval loss values than their accuracy-based counterparts . story_separator_special_tag we study the problem of learning associative memory -- a system which is able to retrieve a remembered pattern based on its distorted or incomplete version . attractor networks provide a sound model of associative memory : patterns are stored as attractors of the network dynamics and associative retrieval is performed by running the dynamics starting from a query pattern until it converges to an attractor . in such models the dynamics are often implemented as an optimization procedure that minimizes an energy function , such as in the classical hopfield network . in general it is difficult to derive a writing rule for a given dynamics and energy that is both compressive and fast . thus , most research in energy-based memory has been limited either to tractable energy models not expressive enough to handle complex high-dimensional objects such as natural images , or to models that do not offer fast writing . we present a novel meta-learning approach to energy-based memory models ( ebmm ) that allows one to use an arbitrary neural architecture as an energy model and quickly store patterns in its weights . we demonstrate experimentally that our ebmm approach can build compressed memories for story_separator_special_tag preface . 1. introduction : distributions and inference for categorical data . 1.1 categorical response data . 1.2 distributions for categorical data . 1.3 statistical inference for categorical data . 1.4 statistical inference for binomial parameters . 1.5 statistical inference for multinomial parameters . notes . problems . 2. describing contingency tables . 2.1 probability structure for contingency tables . 2.2 comparing two proportions . 2.3 partial association in stratified 2 x 2 tables . 2.4 extensions for i x j tables . notes . problems . 3. inference for contingency tables . 3.1 confidence intervals for association parameters . 3.2 testing independence in two way contingency tables . 3.3 following up chi squared tests . 3.4 two way tables with ordered classifications . 3.5 small sample tests of independence . 3.6 small sample confidence intervals for 2 x 2 tables . 3.7 extensions for multiway tables and nontabulated responses . notes . problems . 4. introduction to generalized linear models . 4.1 generalized linear model .
4.2 generalized linear models for binary data . 4.3 generalized linear models for counts . 4.4 moments and likelihood for generalized linear models . 4.5 inference for generalized linear models . 4.6 fitting story_separator_special_tag this paper introduces a new method for learning algorithm evaluation and selection , with empirical results based on classification . the empirical study has been conducted among 8 algorithms/classifiers with 100 different classification problems . we evaluate the algorithms ' performance in terms of a variety of accuracy and complexity measures . consistent with the no free lunch theorem , we do not expect to identify the single algorithm that performs best on all datasets . rather , we aim to determine the characteristics of datasets that lend themselves to superior modelling by certain learning algorithms . our empirical results are used to generate rules , using the rule-based learning algorithm c5.0 , to describe which types of algorithms are suited to solving which types of classification problems . most of the rules are generated with a high confidence rating . story_separator_special_tag appropriate choice of a kernel is the most important ingredient of kernel-based learning methods such as the support vector machine ( svm ) . automatic kernel selection is a key issue given the number of kernels available , and the current trial-and-error nature of selecting the best kernel for a given problem . this paper introduces a new method for automatic kernel selection , with empirical results based on classification . the empirical study has been conducted among five kernels with 112 different classification problems , using the popular kernel-based statistical learning algorithm svm . we evaluate the kernels ' performance in terms of accuracy measures . we then focus on answering the question : which kernel is best suited to which type of classification problem ? our meta-learning methodology involves measuring the problem characteristics using classical , distance and distribution-based statistical information . we then combine these measures with the empirical results to present a rule-based method to select the most appropriate kernel for a classification problem . the rules are generated by the decision tree algorithm c5.0 and are evaluated with 10-fold cross-validation . all generated rules offer high accuracy ratings . story_separator_special_tag the move from hand-designed features to learned features in machine learning has been wildly successful . in spite of this , optimization algorithms are still designed by hand . in this paper we show how the design of an optimization algorithm can be cast as a learning problem , allowing the algorithm to learn to exploit structure in the problems of interest in an automatic way . our learned algorithms , implemented by lstms , outperform generic , hand-designed competitors on the tasks for which they are trained , and also generalize well to new tasks with similar structure . we demonstrate this on a number of tasks , including simple convex problems , training neural networks , and styling images with neural art . story_separator_special_tag forecasting is a critical activity for numerous organizations . it is often costly and complex for reasons which include : a multiplicity of forecasting methods and possible combinations ; the absence of an overall best forecasting method ; and the context-dependence of applicable methods , based on available models , data characteristics , and the environment .
in recent years , artificial intelligence ( ai ) -based techniques have been developed to support various operations management activities . this research describes the use of one such ai technique , namely rule induction , to improve forecasting accuracy . specifically , the proposed methodology involves training a rule induction-based expert system ( es ) with a set of time series data ( the training set ) . inputs to the es include selected time series features , and for each time series , the most accurate forecasting method from those available . subsequently , the es is used to recommend the most accurate forecasting method for a new set of time series ( the testing set ) . the results of this experiment , which appear promising , are presented , together with guidelines for the methodology 's use . story_separator_special_tag modeling a collection of similar regression or classification tasks can be improved by making the tasks 'learn from each other ' . in machine learning , this subject is approached through 'multitask learning ' , where parallel tasks are modeled as multiple outputs of the same network . in multilevel analysis this is generally implemented through the mixed-effects linear model where a distinction is made between 'fixed effects ' , which are the same for all tasks , and 'random effects ' , which may vary between tasks . in the present article we will adopt a bayesian approach in which some of the model parameters are shared ( the same for all tasks ) and others more loosely connected through a joint prior distribution that can be learned from the data . we seek in this way to combine the best parts of both the statistical multilevel approach and the neural network machinery . the standard assumption expressed in both approaches is that each task can learn equally well from any other task . in this article we extend the model by allowing more differentiation in the similarities between tasks . one such extension is to make the prior story_separator_special_tag hyperparameter learning has traditionally been a manual task because of the limited number of trials . today 's computing infrastructures allow bigger evaluation budgets , thus opening the way for algorithmic approaches . recently , surrogate-based optimization was successfully applied to hyperparameter learning for deep belief networks and to weka classifiers . the methods combined brute force computational power with model building about the behavior of the error function in the hyperparameter space , and they could significantly improve on manual hyperparameter tuning . what may make experienced practitioners even better at hyperparameter optimization is their ability to generalize across similar learning problems . in this paper , we propose a generic method to incorporate knowledge from previous experiments when simultaneously tuning a learning algorithm on new problems at hand . to this end , we combine surrogate-based ranking and optimization techniques for surrogate-based collaborative tuning ( scot ) . we demonstrate scot in two experiments where it outperforms standard tuning techniques and single-problem surrogate-based optimization . story_separator_special_tag we develop an object classification method that can learn a novel class from a single training example . in this method , experience with already learned classes is used to facilitate the learning of novel classes . our classification scheme employs features that discriminate between class and non-class images . 
for a novel class , new features are derived by selecting features that proved useful for already learned classification tasks , and adapting these features to the new classification task . this adaptation is performed by replacing the features from already learned classes with similar features taken from the novel class . a single example of a novel class is sufficient to perform feature adaptation and achieve useful classification performance . experiments demonstrate that the proposed algorithm can learn a novel class from a single training example , using 10 additional familiar classes . the performance is significantly improved compared to using no feature adaptation . the robustness of the proposed feature adaptation concept is demonstrated by similar performance gains across 107 widely varying object categories . story_separator_special_tag this chapter contains sections titled : the problem , the generalized delta rule , simulation results , some further generalizations , conclusion story_separator_special_tag in this paper , we present a framework where a learning rule can be optimized within a parametric learning rule space . we define what we call parametric learning rules and present a theoretical study of their generalization properties when estimated from a set of learning tasks and tested over another set of tasks . we corroborate the results of this study with practical experiments . story_separator_special_tag deep learning algorithms seek to exploit the unknown structure in the input distribution in order to discover good representations , often at multiple levels , with higher-level learned features defined in terms of lower-level features . the objective is to make these higher-level representations more abstract , with their individual features more invariant to most of the variations that are typically present in the training distribution , while collectively preserving as much as possible of the information in the input . ideally , we would like these representations to disentangle the unknown factors of variation that underlie the training distribution . such unsupervised learning of representations can be exploited usefully under the hypothesis that the input distribution p ( x ) is structurally related to some task of interest , say predicting p ( y | x ) . this paper focuses on the context of the unsupervised and transfer learning challenge , on why unsupervised pre-training of representations can be useful , and how it can be exploited in the transfer learning scenario , where we care about predictions on examples that are not from the same distribution as the training distribution . story_separator_special_tag this paper investigates the use of meta-learning to estimate the predictive accuracy of a classifier . we present a scenario where meta-learning is seen as a regression task and consider its potential in connection with three strategies of dataset characterization . we show that it is possible to estimate classifier performance with a high degree of confidence and gain knowledge about the classifier through the regression models generated . we exploit the results of the models to predict the ranking of the inducers . we also show that the best strategy for performance estimation is not necessarily the best one for ranking generation . story_separator_special_tag arguably , model selection is one of the major obstacles , and a key enabler once solved , to the widespread use of machine learning/data mining technology in business .
landmarking is a novel and promising metalearning approach to model selection . it uses accuracy estimates from simple and efficient learners to describe tasks and subsequently construct meta-classifiers that predict which one of a set of more elaborate learning algorithms is appropriate for a given problem . experiments show that landmarking compares favourably with the traditional statistical approach to meta-learning . story_separator_special_tag meta-learning , as applied to model selection , consists of inducing mappings from tasks to learners . traditionally , tasks are characterised by the values of pre-computed meta-attributes , such as statistical and information-theoretic measures , induced decision trees ' characteristics and/or landmarkers ' performances . in this position paper , we propose to ( meta- ) learn directly from induced decision trees , rather than rely on an ad hoc set of pre-computed characteristics . such meta-learning is possible within the typed higher-order inductive learning framework we have developed . story_separator_special_tag the task of algorithm selection involves choosing an algorithm from a set of algorithms on a per-instance basis in order to exploit the varying performance of algorithms over a set of instances . the algorithm selection problem is attracting increasing attention from researchers and practitioners in ai . years of fruitful applications in a number of domains have resulted in a large amount of data , but the community lacks a standard format or repository for this data . this situation makes it difficult to share and compare different approaches effectively , as is done in other , more established fields . it also unnecessarily hinders new researchers who want to work in this area . to address this problem , we introduce a standardized format for representing algorithm selection scenarios and a repository that contains a growing number of data sets from the literature . our format has been designed to be able to express a wide variety of different scenarios . demonstrating the breadth and power of our platform , we describe a set of example experiments that build and evaluate algorithm selection models through a common interface . the results display the potential of algorithm selection to story_separator_special_tag this is the first textbook on pattern recognition to present the bayesian viewpoint . the book presents approximate inference algorithms that permit fast approximate answers in situations where exact answers are not feasible . it uses graphical models to describe probability distributions , at a time when no other books applied graphical models to machine learning . no previous knowledge of pattern recognition or machine learning concepts is assumed . familiarity with multivariate calculus and basic linear algebra is required , and some experience in the use of probabilities would be helpful though not essential as the book includes a self-contained introduction to basic probability theory . story_separator_special_tag we present a meta-learning method to support selection of candidate learning algorithms . it uses a k-nearest neighbor algorithm to identify the datasets that are most similar to the one at hand . the distance between datasets is assessed using a relatively small set of data characteristics , which was selected to represent properties that affect algorithm performance .
the performance of the candidate algorithms on those datasets is used to generate a recommendation to the user in the form of a ranking . the performance is assessed using a multicriteria evaluation measure that takes not only accuracy but also time into account . as it is not common in machine learning to work with rankings , we had to identify and adapt existing statistical techniques to devise an appropriate evaluation methodology . using that methodology , we show that the meta-learning method presented leads to significantly better rankings than the baseline ranking method . the evaluation methodology is general and can be adapted to other ranking problems . although here we have concentrated on ranking classification algorithms , the meta-learning framework presented can provide assistance in the selection of combinations of methods or more complex problem solving strategies story_separator_special_tag metalearning is the study of principled methods that exploit metaknowledge to obtain efficient models and solutions by adapting machine learning and data mining processes . while the variety of machine learning and data mining techniques now available can , in principle , provide good model solutions , a methodology is still needed to guide the search for the most appropriate model in an efficient way . metalearning provides one such methodology that allows systems to become more effective through experience . this book discusses several approaches to obtaining knowledge concerning the performance of machine learning and data mining algorithms . it shows how this knowledge can be reused to select , combine , compose and adapt both algorithms and models to yield faster , more effective solutions to data mining problems . it can thus help developers improve their algorithms and also develop learning systems that can improve themselves . the book will be of interest to researchers and graduate students in the areas of machine learning , data mining and artificial intelligence . story_separator_special_tag hinton [ 6 ] proposed that generalization in artificial neural nets should improve if nets learn to represent the domain 's underlying regularities . abu-mostafa 's hints work [ 1 ] shows that the outputs of a backprop net can be used as inputs through which domain-specific information can be given to the net . we extend these ideas by showing that a backprop net learning many related tasks at the same time can use these tasks as inductive bias for each other and thus learn better . we identify five mechanisms by which multitask backprop improves generalization and give empirical evidence that multitask backprop generalizes better in real domains . story_separator_special_tag multitask learning is an approach to inductive transfer that improves learning for one task by using the information contained in the training signals of other related tasks . it does this by learning tasks in parallel while using a shared representation ; what is learned for each task can help other tasks be learned better . in this thesis we demonstrate multitask learning for a dozen problems . we explain how multitask learning works and show that there are many opportunities for multitask learning in real domains . we show that in some cases features that would normally be used as inputs work better if used as multitask outputs instead .
we present suggestions for how to get the most out of multitask learning in artificial neural nets , present an algorithm for multitask learning with case-based methods like k-nearest neighbor and kernel regression , and sketch an algorithm for multitask learning in decision trees . multitask learning improves generalization performance , can be applied in many different kinds of domains , and can be used with different learning algorithms . we conjecture there will be many opportunities for its use on real-world problems . story_separator_special_tag common inductive learning strategies offer the tools for knowledge acquisition , but possess some inherent limitations due to the use of fixed bias during the learning process . to overcome limitations of such base-learning approaches , a novel research trend explores the potential of meta-learning , which is oriented to the development of mechanisms based on a dynamic search for bias . this could lead to an improvement of the base-learner performance on specific learning tasks , by profiting from the accumulated past experience . as a significant set of i/o data is needed for efficient base-learning , appropriate meta-data characterization is of crucial importance for useful meta-learning . in order to characterize meta-data , firstly a collection of meta-features discriminating among different base-level tasks should be identified . this paper focuses on the characterization of meta-data , through an analysis of meta-features that can capture the properties of specific tasks to be solved at base level . this kind of approach represents a first step toward the development of a meta-learning system , capable of suggesting the proper bias for base-learning different specific task domains . story_separator_special_tag this paper explores how an evolutionary process can produce systems that learn . a general framework for the evolution of learning is outlined , and is applied to the task of evolving mechanisms suitable for supervised learning in single-layer neural networks . dynamic properties of a network 's information-processing capacity are encoded genetically , and these properties are subjected to selective pressure based on their success in producing adaptive behavior in diverse environments . as a result of selection and genetic recombination , various successful learning mechanisms evolve , including the well-known delta rule . the effect of environmental diversity on the evolution of learning is investigated , and the role of different kinds of emergent phenomena in genetic and connectionist systems is discussed . story_separator_special_tag recent work has shown that extrapolating learning curves to determine when to terminate a training build is effective in reducing the number of epochs of training required for finding a good performing hyper-parameter configuration . however , the current technique uses the information only from the current build to make the prediction . we propose using a simple regression-based extrapolation model that uses the trajectories from previous builds to make predictions for new builds . this can be used to terminate poorly performing builds and thus speed up hyper-parameter search with performance comparable to non-augmented hyper-parameter optimization techniques . we compare the predictions made by our model against those of the existing extrapolation technique in different tasks .
we incorporate our approach into a pre-existing termination criterion , and integrate that criterion into an existing hyper-parameter optimization toolkit . we analyze the performance of our approach and contrast it against a baseline in terms of quality of prediction in three different tasks . we show that our approach yields builds with performance comparable to the non-augmented version with fewer epochs , and outperforms an existing parametric extrapolation story_separator_special_tag we learn recurrent neural network optimizers trained on simple synthetic functions by gradient descent . we show that these learned optimizers exhibit a remarkable degree of transfer in that they can be used to efficiently optimize a broad range of derivative-free black-box functions , including gaussian process bandits , simple control objectives , global optimization benchmarks and hyper-parameter tuning tasks . up to the training horizon , the learned optimizers learn to trade off exploration and exploitation , and compare favourably with heavily engineered bayesian optimization packages for hyper-parameter tuning . story_separator_special_tag the label ranking problem consists of learning a model that maps instances to total orders over a finite set of predefined labels . this paper introduces new methods for label ranking that complement and improve upon existing approaches . more specifically , we propose extensions of two methods that have been used extensively for classification and regression so far , namely instance-based learning and decision tree induction . the unifying element of the two methods is a procedure for locally estimating predictive probability models for label rankings . story_separator_special_tag the problem of aggregating a set of ordinal rankings of n alternatives has given rise to a number of consensus models . among the most common of these models are those due to borda and kendall , which amount to using average ranks , and the l1 and l2 distance models . a common criticism of these approaches is their use of ordinal rank position numbers directly as the values of being ranked at those levels . this paper presents a general framework for associating value or worth with ordinal ranks , and develops models for deriving a consensus based on this framework . it is shown that the lp distance models using this framework are equivalent to the conventional ordinal models for any p ≥ 1 . this observation can be seen as a form of validation of the practice of using ordinal data in a manner for which it was presumably not designed . in particular , it establishes the robustness of the simple borda , kendall and median ranking models . story_separator_special_tag this paper investigates algorithms to automatically adapt the learning rate of neural networks ( nns ) . starting with stochastic gradient descent , a large variety of learning methods has been proposed for the nn setting . however , these methods are usually sensitive to the initial learning rate which has to be chosen by the experimenter . we investigate several features and show how an adaptive controller can adjust the learning rate without prior knowledge of the learning problem at hand .
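as a concrete instance of the consensus models just discussed , a borda-style aggregation can be written in a few lines ( a minimal sketch with made-up rankings ) : scoring each item by n minus its rank position and sorting by total score is equivalent to ranking by average rank .

```python
import numpy as np

def borda_consensus(rankings):
    # each ranking lists item indices best-first; an item at position p in a
    # ranking of n items scores n - p, and the consensus sorts by total score
    n = len(rankings[0])
    scores = np.zeros(n)
    for ranking in rankings:
        for position, item in enumerate(ranking):
            scores[item] += n - position
    return list(np.argsort(-scores))

# three judges rank four algorithms (0..3), best first
print(borda_consensus([[0, 1, 2, 3], [1, 0, 3, 2], [0, 2, 1, 3]]))
```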
story_separator_special_tag first iberian conference on pattern recognition and image analysis ibpria'2003.- solids characterization using modeling wave structures.- a probabilistic model for the cooperative modular neural network.- robust learning algorithm for the mixture of experts.- a robust and effective learning algorithm for feedforward neural networks based on the influence function.- regularization of 3d cylindrical surfaces.- non-rigid registration of vessel structures in ivus images.- underwater cable tracking by visual feedback.- a hierarchical clustering strategy and its application to proteomic interaction data.- a new optimal classifier architecture to avoid the dimensionality curse.- learning from imbalanced sets through resampling and weighting.- morphological recognition of olive grove patterns.- combining multi-variate statistics and dempster-shafer theory for edge detection in multi-channel sar images.- high-level clothes description based on colour-texture and structural features.- a new method for detection and initial pose estimation based on mumford-shah segmentation functional.- tracking heads using piecewise planar models.- support vector machines for crop classification using hyperspectral data.- vehicle license plate segmentation in natural images.- high-accuracy localization of an underwater robot in a structured environment using computer vision.- determine the composition of honeybee pollen by texture classification.- automatic word codification for the recontra connectionist translator.- the encara system for face detection and normalization.- prediction and discrimination story_separator_special_tag automatic machine learning is a growing area of machine learning that has a similar objective to the area of hyper-heuristics : to automatically recommend optimized pipelines , algorithms or appropriate parameters to specific tasks without much dependency on user knowledge . the background knowledge required to solve the task at hand is actually embedded into a search mechanism that builds personalized solutions to the task . following this idea , this paper proposes recipe ( resilient classification pipeline evolution ) , a framework based on grammar-based genetic programming that builds customized classification pipelines . the framework is flexible enough to receive different grammars and can be easily extended to other machine learning tasks . recipe overcomes the drawbacks of previous evolutionary-based frameworks , such as generating invalid individuals , and organizes a high number of possible suitable data pre-processing and classification methods into a grammar . results of f-measure obtained by recipe are compared to those of two state-of-the-art methods , and shown to be as good as or better than those previously reported in the literature . recipe represents a first step towards a complete framework for dealing with different machine learning tasks with the minimum required human intervention . story_separator_special_tag while methods for comparing two learning algorithms on a single data set have been scrutinized for quite some time already , the issue of statistical tests for comparisons of more algorithms on multiple data sets , which is even more essential to typical machine learning studies , has been all but ignored . this article reviews the current practice and then theoretically and empirically examines several suitable tests .
based on that , we recommend a set of simple , yet safe and robust non-parametric tests for statistical comparisons of classifiers : the wilcoxon signed ranks test for comparison of two classifiers and the friedman test with the corresponding post-hoc tests for comparison of more classifiers over multiple data sets . results of the latter can also be neatly presented with the newly introduced cd ( critical difference ) diagrams . story_separator_special_tag ensemble methods are learning algorithms that construct a set of classifiers and then classify new data points by taking a ( weighted ) vote of their predictions . the original ensemble method is bayesian averaging , but more recent algorithms include error-correcting output coding , bagging , and boosting . this paper reviews these methods and explains why ensembles can often perform better than any single classifier . some previous studies comparing ensemble methods are reviewed , and some new experiments are presented to uncover the reasons that adaboost does not overfit rapidly . story_separator_special_tag in many reinforcement learning applications , the set of possible actions can be partitioned by the programmer into subsets of similar actions . this paper presents a technique for exploiting this form of prior information to speed up model-based reinforcement learning . we call it an action refinement method , because it treats each subset of similar actions as a single abstract action early in the learning process and then later refines the abstract action into individual actions as more experience is gathered . our method estimates the transition probabilities p ( s ' | s , a ) for an action a by combining the results of executions of action a with executions of other actions in the same subset of similar actions . this is a form of smoothing of the probability estimates that trades increased bias for reduced variance . the paper derives a formula for optimal smoothing which shows that the degree of smoothing should decrease as the amount of data increases . experiments show that probability smoothing is better than two simpler action refinement methods on a synthetic maze problem . action refinement is most useful in problems , such as robotics , where training experiences are story_separator_special_tag we evaluate whether features extracted from the activation of a deep convolutional network trained in a fully supervised fashion on a large , fixed set of object recognition tasks can be re-purposed to novel generic tasks . our generic tasks may differ significantly from the originally trained tasks and there may be insufficient labeled or unlabeled data to conventionally train or adapt a deep architecture to the new tasks . we investigate and visualize the semantic clustering of deep convolutional features with respect to a variety of such tasks , including scene recognition , domain adaptation , and fine-grained recognition challenges . we compare the efficacy of relying on various network levels to define a fixed feature , and report novel results that significantly outperform the state-of-the-art on several important vision challenges . we are releasing decaf , an open-source implementation of these deep convolutional activation features , along with all associated network parameters to enable vision researchers to conduct experimentation with deep representations across a range of visual concept learning paradigms .
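the decaf recipe above amounts to freezing the network and training a simple classifier on its activations . a minimal sketch follows ; the 4096-d feature matrix is a random stand-in for real convnet activations ( so the printed accuracy is meaningless ) , and scikit-learn's logistic regression stands in for whatever linear classifier one prefers .

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
features = rng.normal(size=(200, 4096))  # stand-in for frozen convnet activations
labels = rng.integers(0, 5, size=200)    # 5 novel target classes

# the transfer recipe: keep the network fixed, train only a simple
# classifier on top of the extracted activations
clf = LogisticRegression(max_iter=1000).fit(features[:150], labels[:150])
print("held-out accuracy:", clf.score(features[150:], labels[150:]))
```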
story_separator_special_tag in this work , we propose to use the zoomed ranking approach to rank and select time series models . zoomed ranking , originally proposed to generate a ranking of candidate algorithms for a given classification problem based on performance information from previous problems , is employed here for model selection . the problem of model selection in zoomed ranking is solved in two distinct phases . in the first phase , we select a subset of problems from the instance base that are similar to the new problem at hand . this selection is made using the k-nearest neighbor algorithm , whose distance function uses the characteristics of the series . in the second phase , the ranking of candidate models is generated based on performance information ( accuracy and execution time ) of the models on the series selected in the previous phase . our experiments using zoomed ranking revealed encouraging results . story_separator_special_tag we introduce alphad3m , an automatic machine learning ( automl ) system based on meta reinforcement learning using sequence models with self play . alphad3m is based on edit operations performed over machine learning pipeline primitives , providing explainability . we compare alphad3m with state-of-the-art automl systems : autosklearn , autostacker , and tpot , on openml datasets . alphad3m achieves competitive performance while being an order of magnitude faster , reducing computation time from hours to minutes , and is explainable by design . story_separator_special_tag deep reinforcement learning ( deep rl ) has been successful in learning sophisticated behaviors automatically ; however , the learning process requires a huge number of trials . in contrast , animals can learn new tasks in just a few trials , benefiting from their prior knowledge about the world . this paper seeks to bridge this gap . rather than designing a `` fast '' reinforcement learning algorithm , we propose to represent it as a recurrent neural network ( rnn ) and learn it from data . in our proposed method , rl^2 , the algorithm is encoded in the weights of the rnn , which are learned slowly through a general-purpose ( `` slow '' ) rl algorithm . the rnn receives all information a typical rl algorithm would receive , including observations , actions , rewards , and termination flags ; and it retains its state across episodes in a given markov decision process ( mdp ) . the activations of the rnn store the state of the `` fast '' rl algorithm on the current ( previously unseen ) mdp . we evaluate rl^2 experimentally on both small-scale and story_separator_special_tag the optimization of algorithm ( hyper- ) parameters is crucial for achieving peak performance across a wide range of domains , ranging from deep neural networks to solvers for hard combinatorial problems . the resulting algorithm configuration ( ac ) problem has attracted much attention from the machine learning community . however , the proper evaluation of new ac procedures is hindered by two key hurdles . first , ac benchmarks are hard to set up . second and even more significantly , they are computationally expensive : a single run of an ac procedure involves many costly runs of the target algorithm whose performance is to be optimized in a given ac benchmark scenario . one common workaround is to optimize cheap-to-evaluate artificial benchmark functions ( e.g. , branin ) instead of actual algorithms ; however , these have different properties than realistic ac problems .
here , we propose an alternative benchmarking approach that is similarly cheap to evaluate but much closer to the original ac problem : replacing expensive benchmarks by surrogate benchmarks constructed from ac benchmarks . these surrogate benchmarks approximate the response surface corresponding to true target algorithm performance using a regression model , and story_separator_special_tag deep learning has enabled remarkable progress over the last years on a variety of tasks , such as image recognition , speech recognition , and machine translation . one crucial aspect of this progress is novel neural architectures . currently employed architectures have mostly been developed manually by human experts , which is a time-consuming and error-prone process . because of this , there is growing interest in automated neural architecture search methods . we provide an overview of existing work in this field of research and categorize them according to three dimensions : search space , search strategy , and performance estimation strategy . story_separator_special_tag past empirical work has shown that learning multiple related tasks from data simultaneously can be advantageous in terms of predictive performance relative to learning these tasks independently . in this paper we present an approach to multi-task learning based on the minimization of regularization functionals similar to existing ones , such as the one for support vector machines ( svms ) , that have been successfully used in the past for single-task learning . our approach allows us to model the relation between tasks in terms of a novel kernel function that uses a task-coupling parameter . we implement an instance of the proposed approach similar to svms and test it empirically using simulated as well as real data . the experimental results show that the proposed method performs better than existing multi-task learning methods and largely outperforms single-task learning using svms . story_separator_special_tag we study the problem of learning many related tasks simultaneously using kernel methods and regularization . the standard single-task kernel methods , such as support vector machines and regularization networks , are extended to the case of multi-task learning . our analysis shows that the problem of estimating many task functions with regularization can be cast as a single-task learning problem if a family of multi-task kernel functions we define is used . these kernels model relations among the tasks and are derived from a novel form of regularizers . specific kernels that can be used for multi-task learning are provided and experimentally tested on two real data sets . in agreement with past empirical work on multi-task learning , the experiments show that learning multiple related tasks simultaneously using the proposed approach can significantly outperform standard single-task learning , particularly when there are many related tasks but few data per task . story_separator_special_tag learning to recognize object classes is one of the most important functionalities of vision . it is estimated that humans are able to learn tens of thousands of visual categories in their life . given the photometric and geometric variabilities displayed by objects as well as the high degree of intra-class variability , we hypothesize that humans achieve such a feat by using knowledge and information accumulated throughout the learning process .
in recent years , a handful of pioneering papers have applied various forms of knowledge transfer algorithms to the problem of learning object classes . we first review some of these papers by loosely grouping them into three categories : transfer through prior parameters , transfer through shared features or parts , and transfer through contextual information . in the second half of the paper , we detail a recent algorithm proposed by the author . this incremental learning scheme uses information from object classes previously learned in the form of prior models to train a new object class model . training images can be presented in an incremental way . we present experimental results tested with this model on story_separator_special_tag learning visual models of object categories notoriously requires hundreds or thousands of training examples . we show that it is possible to learn much information about a category from just one , or a handful , of images . the key insight is that , rather than learning from scratch , one can take advantage of knowledge coming from previously learned categories , no matter how different these categories might be . we explore a bayesian implementation of this idea . object categories are represented by probabilistic models . prior knowledge is represented as a probability density function on the parameters of these models . the posterior model for an object category is obtained by updating the prior in the light of one or more observations . we test a simple implementation of our algorithm on a database of 101 diverse object categories . we compare category models learned by an implementation of our bayesian approach to models learned by maximum likelihood ( ml ) and maximum a posteriori ( map ) methods . we find that on a database of more than 100 categories , the bayesian approach produces informative models when the number of training examples is story_separator_special_tag bayesian optimization has become a standard technique for hyperparameter optimization , including data-intensive models such as deep neural networks that may take days or weeks to train . we consider the setting where previous optimization runs are available , and we wish to use their results to warm-start a new optimization run . we develop an ensemble model that can incorporate the results of past optimization runs , while avoiding the poor scaling that comes with putting all results into a single gaussian process model . the ensemble combines models from past runs according to estimates of their generalization performance on the current optimization . results from a large collection of hyperparameter optimization benchmark problems and from optimization of a production computer vision platform at facebook show that the ensemble can substantially reduce the time it takes to obtain near-optimal configurations , and is useful for warm-starting expensive searches or running quick re-optimizations . story_separator_special_tag model selection and hyperparameter optimization is crucial in applying machine learning to a novel dataset . recently , a sub-community of machine learning has focused on solving this problem with sequential model-based bayesian optimization ( smbo ) , demonstrating substantial successes in many applications . however , for expensive algorithms the computational overhead of hyperparameter optimization can still be prohibitive .
in this paper we explore the possibility of speeding up smbo by transferring knowledge from previous optimization runs on similar datasets ; specifically , we propose to initialize smbo with a small number of configurations suggested by a metalearning procedure . the resulting simple mi-smbo technique can be trivially applied to any smbo method , allowing us to perform experiments on two quite different smbo methods with complementary strengths applied to optimize two machine learning frameworks on 57 classification datasets . we find that our initialization procedure mildly improves the state of the art in low-dimensional hyperparameter optimization and substantially improves the state of the art in the more complex problem of combined model selection and hyperparameter optimization . story_separator_special_tag the success of machine learning in a broad range of applications has led to an ever-growing demand for machine learning systems that can be used off the shelf by non-experts . to be effective in practice , such systems need to automatically choose a good algorithm and feature preprocessing steps for a new dataset at hand , and also set their respective hyperparameters . recent work has started to tackle this automated machine learning ( automl ) problem with the help of efficient bayesian optimization methods . building on this , we introduce a robust new automl system based on scikit-learn ( using 15 classifiers , 14 feature preprocessing methods , and 4 data preprocessing methods , giving rise to a structured hypothesis space with 110 hyperparameters ) . this system , which we dub auto-sklearn , improves on existing automl methods by automatically taking into account past performance on similar datasets , and by constructing ensembles from the models evaluated during the optimization . our system won the first phase of the ongoing chalearn automl challenge , and our comprehensive analysis on over 100 diverse datasets shows that it substantially outperforms the previous state of the art in automl story_separator_special_tag bayesian optimization has become a standard technique for hyperparameter optimization of machine learning algorithms . we consider the setting where previous optimization runs are available , and we wish to use their results to warm-start a new optimization run . we develop a new ensemble model for bayesian optimization that can incorporate the results of past optimization runs , while avoiding the poor scaling that comes with putting all results into a single gaussian process model . our experiments show that the ensemble can substantially reduce optimization time compared to standard gaussian process models and improves over the current state-of-the-art model for warm-starting bayesian optimization . story_separator_special_tag meta-learning is an approach for solving the algorithm selection problem , which is how to choose the best algorithm for a certain task . this task corresponds to a dataset in machine learning and data mining . the main challenge in meta-learning is to engineer a meta-feature description for datasets . in the paper we apply meta-learning for feature selection . we found a meta-feature set which showed the best result in predicting proper feature selection algorithms . we also suggested a novel approach to engineer meta-features for data preprocessing algorithms , which is based on estimating the best parametrization of processing algorithms on small subsamples . 
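the warm-starting idea described above is easy to sketch : measure how similar the new dataset is to previously seen ones in meta-feature space , and seed the optimizer with the configurations that worked best on the nearest neighbors . the snippet below is a minimal illustration with made-up meta-features ( e.g . number of examples , features , classes ) and configurations ; it is not the mi-smbo implementation itself .

```python
# minimal sketch of meta-learning-based warm-starting : pick the datasets whose
# meta-features are closest to the new dataset and seed the optimizer with the
# best known configurations from those past runs . all names / data are illustrative .
import numpy as np

def warm_start_configs(new_meta, past_meta, past_best_configs, k=3):
    # normalize meta-features so no single one dominates the distance
    mu, sigma = past_meta.mean(axis=0), past_meta.std(axis=0) + 1e-12
    z_past = (past_meta - mu) / sigma
    z_new = (new_meta - mu) / sigma
    dist = np.linalg.norm(z_past - z_new, axis=1)
    nearest = np.argsort(dist)[:k]
    # one suggested initial configuration per similar dataset
    return [past_best_configs[i] for i in nearest]

past_meta = np.array([[1000, 20, 2], [50000, 300, 10], [1200, 25, 2]])  # n , d , classes
past_best = [{"C": 1.0, "gamma": 0.1}, {"C": 100.0, "gamma": 0.001}, {"C": 2.0, "gamma": 0.05}]
print(warm_start_configs(np.array([900, 18, 2]), past_meta, past_best, k=2))
```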
story_separator_special_tag we describe a framework for learning an object classifier from a single example . this goal is achieved by emphasizing the relevant dimensions for classification using available examples of related classes . learning to accurately classify objects from a single training example is often infeasible due to overfitting effects . however , if the instance representation ensures that the distance between any two instances of the same class is smaller than the distance between any two instances from different classes , then a nearest neighbor classifier could achieve perfect performance with a single training example . we therefore suggest a two-stage strategy . first , learn a metric over the instances that achieves the distance criterion mentioned above , from available examples of other related classes . then , using the single examples , define a nearest neighbor classifier where distance is evaluated by the learned class relevance metric . finding a metric that emphasizes the relevant dimensions for classification might not be possible when restricted to linear projections . we therefore make use of a kernel-based metric learning algorithm . our setting encodes object instances as sets of locality-based descriptors and adopts an appropriate image kernel story_separator_special_tag learning to learn is a powerful paradigm for enabling models to learn from data more effectively and efficiently . a popular approach to meta-learning is to train a recurrent model to read in a training dataset as input and output the parameters of a learned model , or output predictions for new test inputs . alternatively , a more recent approach to meta-learning aims to acquire deep representations that can be effectively fine-tuned , via standard gradient descent , to new tasks . in this paper , we consider the meta-learning problem from the perspective of universality , formalizing the notion of learning algorithm approximation and comparing the expressive power of the aforementioned recurrent models to the more recent approaches that embed gradient descent into the meta-learner . in particular , we seek to answer the following question : does deep representation combined with standard gradient descent have sufficient capacity to approximate any learning algorithm ? we find that this is indeed true , and further find , in our experiments , that gradient-based meta-learning consistently leads to learning strategies that generalize more widely compared to those represented by recurrent models . story_separator_special_tag we propose an algorithm for meta-learning that is model-agnostic , in the sense that it is compatible with any model trained with gradient descent and applicable to a variety of different learning problems , including classification , regression , and reinforcement learning . the goal of meta-learning is to train a model on a variety of learning tasks , such that it can solve new learning tasks using only a small number of training samples . in our approach , the parameters of the model are explicitly trained such that a small number of gradient steps with a small amount of training data from a new task will produce good generalization performance on that task . in effect , our method trains the model to be easy to fine-tune .
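the inner / outer loop just described can be sketched compactly . the snippet below is a first-order variant ( second-order terms are dropped ) on toy one-parameter linear regression tasks , so the gradients stay analytic ; it is an illustrative sketch , not the authors ' implementation .

```python
# first-order sketch of a maml-style inner / outer loop on toy linear regression
# tasks y = w_task * x . the inner step adapts a copy of the parameters to one
# task ; the outer step moves the shared initialization toward parameters that
# adapt well after one gradient step . second-order terms are dropped here .
import numpy as np

rng = np.random.default_rng(0)
theta = np.zeros(1)          # shared initialization (meta-parameters)
alpha, beta = 0.1, 0.01      # inner and outer learning rates

def grad(theta, x, y):
    # gradient of mean squared error for the linear model y_hat = theta * x
    return 2 * np.mean((theta * x - y) * x)

for step in range(2000):
    w_task = rng.uniform(-2, 2)              # sample a task
    x_s, x_q = rng.normal(size=10), rng.normal(size=10)
    y_s, y_q = w_task * x_s, w_task * x_q    # support / query sets
    theta_adapted = theta - alpha * grad(theta, x_s, y_s)   # inner adaptation step
    theta = theta - beta * grad(theta_adapted, x_q, y_q)    # first-order outer step
print(theta)  # an initialization chosen for fast adaptation , not for any one task
```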
we demonstrate that this approach leads to state-of-the-art performance on two few-shot image classification benchmarks , produces good results on few-shot regression , and accelerates fine-tuning for policy gradient reinforcement learning with neural network policies . story_separator_special_tag landmarking is a novel technique for data characterization in metalearning . while conventional approaches typically describe a database with its statistical measurements and properties , landmarking proposes to enrich such a description with quick and easy-to-obtain performance measures of simple learning algorithms . in this paper , we will discuss two novel aspects of landmarking . first , we investigate relative landmarking , which tries to exploit the relative order of the landmark measures instead of their absolute value . second , we propose to use subsampling estimates as a different way for efficiently obtaining landmarks . in general , our results are mostly negative . the most interesting result is a surprisingly simple rule that predicts quite accurately when it is worth to boost decision trees . story_separator_special_tag in order to achieve state-of-the-art performance , modern machine learning techniques require careful data pre-processing and hyperparameter tuning . moreover , given the ever increasing number of machine learning models being developed , model selection is becoming increasingly important . automating the selection and tuning of machine learning pipelines , which can include different data pre-processing methods and machine learning models , has long been one of the goals of the machine learning community . in this paper , we propose to solve this meta-learning task by combining ideas from collaborative filtering and bayesian optimization . specifically , we use a probabilistic matrix factorization model to transfer knowledge across experiments performed in hundreds of different datasets and use an acquisition function to guide the exploration of the space of possible ml pipelines . in our experiments , we show that our approach quickly identifies high-performing pipelines across a wide range of datasets , significantly outperforming the current state-of-the-art . story_separator_special_tag while many problems could benefit from recent advances in machine learning , significant time and expertise are required to design customized solutions to each problem . prior attempts to automate machine learning have focused on generating multi-step solutions composed of primitive steps for feature engineering and modeling , but using already clean and featurized data and carefully curated primitives . however , cleaning and featurization are often the most time-consuming steps in a data science pipeline . we present a novel approach that works with naturally occurring data of any size and type , and with diverse third-party data processing and modeling primitives that can lead to better quality solutions . the key idea is to generate multi-step pipelines ( or workflows ) by factoring the search for solutions into phases that apply a different expert-like strategy designed to improve performance . this approach is implemented in the p4ml system , and demonstrates superior performance over other systems on a variety of raw datasets . story_separator_special_tag we present a preliminary analysis of the fundamental viability of meta-learning , revisiting the no free lunch ( nfl ) theorem . 
the analysis shows that given some simple and very basic assumptions , the nfl theorem is of little relevance to research in machine learning . we augment the basic nfl framework to illustrate that the notion of an ultimate learning algorithm is well defined . we show that , although cross-validation still is not a viable way to construct general-purpose learning algorithms , meta-learning offers a natural alternative . we still have to pay for our lunch , but the cost is reasonable : the necessary fundamental assumptions are ones we all make anyway . story_separator_special_tag any sufficiently complex system acts as a black box when it becomes easier to experiment with than to understand . hence , black-box optimization has become increasingly important as systems have become more complex . in this paper we describe google vizier , a google-internal service for performing black-box optimization that has become the de facto parameter tuning engine at google . google vizier is used to optimize many of our machine learning models and other systems , and also provides core capabilities to google 's cloud machine learning hypertune subsystem . we discuss our requirements , infrastructure design , underlying algorithms , and advanced features such as transfer learning and automated early stopping that the service provides . story_separator_special_tag support vector machines ( svms ) have achieved very good performance on different learning problems . however , the success of svms depends on the adequate choice of the values of a number of parameters ( e.g. , the kernel and regularization parameters ) . in the current work , we propose the combination of meta-learning and search algorithms to deal with the problem of svm parameter selection . in this combination , given a new problem to be solved , meta-learning is employed to recommend svm parameter values based on parameter configurations that have been successfully adopted in previous similar problems . the parameter values returned by meta-learning are then used as initial search points by a search technique , which will further explore the parameter space . in this proposal , we envisioned that the initial solutions provided by meta-learning are located in good regions of the search space ( i.e . they are closer to optimum solutions ) . hence , the search algorithm would need to evaluate a lower number of candidate solutions when looking for an adequate solution . in this work , we investigate the combination of meta-learning with two search algorithms : particle story_separator_special_tag meta-learning allows an intelligent agent to leverage prior learning episodes as a basis for quickly improving performance on a novel task . bayesian hierarchical modeling provides a theoretical framework for formalizing meta-learning as inference for a set of parameters that are shared across tasks . here , we reformulate the model-agnostic meta-learning algorithm ( maml ) of finn et al . ( 2017 ) as a method for probabilistic inference in a hierarchical bayesian model . in contrast to prior methods for meta-learning via hierarchical bayes , maml is naturally applicable to complex function approximators through its use of a scalable gradient descent procedure for posterior inference . furthermore , the identification of maml as hierarchical bayes provides a way to understand the algorithm 's operation as a meta-learning procedure , as well as an opportunity to make use of computational strategies for efficient inference . 
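one classical measure in this family is the maximum fisher discriminant ratio ( often called f1 ) , which asks how well any single feature separates the two classes . the sketch below is written from the standard definition rather than from the paper 's code , with synthetic data for illustration .

```python
# sketch of one standard data complexity measure , the maximum fisher
# discriminant ratio (f1) : how well the best single feature separates the two
# classes . low values suggest a geometrically harder class boundary .
import numpy as np

def fisher_ratio_f1(x, y):
    a, b = x[y == 0], x[y == 1]
    num = (a.mean(axis=0) - b.mean(axis=0)) ** 2
    den = a.var(axis=0) + b.var(axis=0) + 1e-12
    return np.max(num / den)

rng = np.random.default_rng(0)
easy_x = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(5, 1, (50, 2))])
hard_x = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(0.5, 1, (50, 2))])
y = np.repeat([0, 1], 50)
print(fisher_ratio_f1(easy_x, y), fisher_ratio_f1(hard_x, y))  # large vs. small
```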
we use this opportunity to propose an improvement to the maml algorithm that makes use of techniques from approximate inference and curvature estimation . story_separator_special_tag we extend the capabilities of neural networks by coupling them to external memory resources , which they can interact with by attentional processes . the combined system is analogous to a turing machine or von neumann architecture but is differentiable end-to-end , allowing it to be efficiently trained with gradient descent . preliminary results demonstrate that neural turing machines can infer simple algorithms such as copying , sorting , and associative recall from input and output examples . story_separator_special_tag in this work , we proposed the use of support vector machines ( svm ) to predict the performance of machine learning algorithms based on features of the learning problems . this work is related to the meta-regression approach , which has been successfully applied to predict learning performance , supporting algorithm selection . experiments were performed in a case study in which svms with different kernel functions were used to predict the performance of multi-layer perceptron ( mlp ) networks . the svms obtained better results in the evaluated task , when compared to different algorithms that have been applied as meta-regressors in previous work . story_separator_special_tag an open problem in reinforcement learning is discovering hierarchical structure . hexq , an algorithm which automatically attempts to decompose and solve a model-free factored mdp hierarchically is described . by searching for aliased markov sub-space regions based on the state variables the algorithm uses temporal and state abstraction to construct a hierarchy of interlinked smaller mdps . story_separator_special_tag meta-learning for model selection , as reported in the symbolic machine learning community , can be described as follows . first , it is cast as a purely data-driven predictive task . second , it typically relies on a mapping of dataset characteristics to some measure of generalization performance ( e.g. , error ) . third , it tends to ignore the role of algorithm parameters by relying mostly on default settings . this paper describes a case-based system for model selection which combines knowledge and data in selecting a ( set of ) algorithm ( s ) to recommend for a given task . the knowledge consists mainly of the similarity measures used to retrieve records of past learning experiences as well as profiles of learning algorithms incorporated into the conceptual meta-model . in addition to the usual dataset characteristics and error rates , the case base includes objects describing the evaluation strategy and the learner parameters used . these have two major roles : they ensure valid and meaningful comparisons between independently reported findings , and they facilitate replication of past experiments . finally , the case-based meta-learner can be used not only as a predictive tool but story_separator_special_tag we studied a number of measures that characterize the difficulty of a classification problem , focusing on the geometrical complexity of the class boundary . we compared a set of real-world problems to random labelings of points and found that real problems contain structures in this measurement space that are significantly different from the random sets . distributions of problems in this space show that there exist at least two independent factors affecting a problem 's difficulty . 
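the linear-time marginal computation is the technical core of that approach ; as a much cruder stand-in , one can already get a feel for hyperparameter importance by fitting an off-the-shelf random forest to ( configuration , performance ) pairs and reading off its impurity-based importances . the sketch below does exactly that on synthetic data and should not be confused with the functional anova method itself .

```python
# crude stand-in for the idea above : fit a random forest on
# (hyperparameter configuration , observed performance) pairs gathered during
# optimization and inspect which hyperparameters explain performance variance .
# this is not the paper's linear-time marginal / functional anova algorithm .
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
configs = rng.uniform(0, 1, size=(200, 4))       # 4 hyperparameters in [0 , 1]
# synthetic truth : performance depends almost entirely on hyperparameter 0
perf = (configs[:, 0] - 0.3) ** 2 + 0.01 * rng.normal(size=200)

model = RandomForestRegressor(n_estimators=100, random_state=0).fit(configs, perf)
for i, imp in enumerate(model.feature_importances_):
    print(f"hyperparameter {i}: importance {imp:.3f}")
```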
we suggest using this space to describe a classifier 's domain of competence . this can guide static and dynamic selection of classifiers for specific problems as well as subproblems formed by confinement , projection , and transformations of the feature vectors . story_separator_special_tag this paper introduces the application of gradient descent methods to meta-learning . the concept of `` meta-learning '' , i.e . of a system that improves or discovers a learning algorithm , has been of interest in machine learning for decades because of its appealing applications . previous meta-learning approaches have been based on evolutionary methods and , therefore , have been restricted to small models with few free parameters . we make meta-learning in large systems feasible by using recurrent neural networks with their attendant learning routines as meta-learning systems . our system derived complex , well-performing learning algorithms from scratch . in this paper we also show that our approach performs non-stationary time series prediction . story_separator_special_tag learning to store information over extended time intervals by recurrent backpropagation takes a very long time , mostly because of insufficient , decaying error backflow . we briefly review hochreiter 's ( 1991 ) analysis of this problem , then address it by introducing a novel , efficient , gradient-based method called long short-term memory ( lstm ) . truncating the gradient where this does not do harm , lstm can learn to bridge minimal time lags in excess of 1000 discrete-time steps by enforcing constant error flow through constant error carousels within special units . multiplicative gate units learn to open and close access to the constant error flow . lstm is local in space and time ; its computational complexity per time step and weight is o ( 1 ) . our experiments with artificial data involve local , distributed , real-valued , and noisy pattern representations . in comparisons with real-time recurrent learning , backpropagation through time , recurrent cascade correlation , elman nets , and neural sequence chunking , lstm leads to many more successful runs , and learns much faster . lstm also solves complex , artificial long-time-lag tasks that have never been solved by story_separator_special_tag the performance of many machine learning methods depends critically on hyperparameter settings . sophisticated bayesian optimization methods have recently achieved considerable successes in optimizing these hyperparameters , in several cases surpassing the performance of human experts . however , blind reliance on such methods can leave end users without insight into the relative importance of different hyperparameters and their interactions . this paper describes efficient methods that can be used to gain such insight , leveraging random forest models fit on the data already gathered by bayesian optimization . we first introduce a novel , linear-time algorithm for computing marginals of random forest predictions and then show how to leverage these predictions within a functional anova framework , to quantify the importance of both single hyperparameters and of interactions between hyperparameters . we conducted experiments with prominent machine learning frameworks and state-of-the-art solvers for combinatorial problems .
we show that our methods provide insight into the relationship between hyperparameter settings and performance , and demonstrate that , even in very high-dimensional cases , most performance variation is attributable to just a few hyperparameters . story_separator_special_tag perhaps surprisingly , it is possible to predict how long an algorithm will take to run on a previously unseen input , using machine learning techniques to build a model of the algorithm 's runtime as a function of problem-specific instance features . such models have many important applications and over the past decade , a wide variety of techniques have been studied for building such models . in this extended abstract of our 2014 ai journal article of the same title , we summarize existing models and describe new model families and various extensions . in a comprehensive empirical analysis using 11 algorithms and 35 instance distributions spanning a wide range of hard combinatorial problems , we demonstrate that our new models yield substantially better runtime predictions than previous approaches in terms of their generalization to new problem instances , to new algorithms from a parameterized space , and to both simultaneously . story_separator_special_tag in many engineering optimization problems , the number of function evaluations is severely limited by time or cost . these problems pose a special challenge to the field of global optimization , since existing methods often require more function evaluations than can be comfortably afforded . one way to address this challenge is to fit response surfaces to data collected by evaluating the objective and constraint functions at a few points . these surfaces can then be used for visualization , tradeoff analysis , and optimization . in this paper , we introduce the reader to a response surface methodology that is especially good at modeling the nonlinear , multimodal functions that often occur in engineering . we then show how these approximating functions can be used to construct an efficient global optimization algorithm with a credible stopping rule . the key to using response surfaces for global optimization lies in balancing the need to exploit the approximating surface ( by sampling where it is minimized ) with the need to improve the approximation ( by sampling where prediction error may be high ) . striking this balance requires solving certain auxiliary problems which have previously been considered intractable , story_separator_special_tag the goal of this thesis is to provide support to the analyst in selecting the appropriate classification algorithm for a specific problem , taking into consideration the nature of the problem . we make no distinction between an algorithm and the representational model of the algorithm ; we consider the learning algorithm as the entity to be selected . story_separator_special_tag to address the problem of algorithm selection for the classification task , we equip a relational case base with new similarity measures that are able to cope with multirelational representations . the proposed approach builds on notions from clustering and is closely related to ideas developed in similarity-based relational learning . the results provide evidence that the relational representation coupled with the appropriate similarity measure can improve performance . the ideas presented are pertinent not only for meta-learning representational issues , but for all domains with similar representation requirements .
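the exploitation / exploration balance described in the response surface abstract above is usually operationalized with an acquisition function such as expected improvement , which is large where the predicted mean is low ( exploitation ) or the predictive uncertainty is high ( exploration ) . the following sketch , written against scikit-learn 's gaussian process regressor on a toy one-dimensional function , illustrates the loop ; it is not the ego algorithm with its stopping rule .

```python
# sketch of expected improvement over a gaussian process surrogate , with a toy
# one-dimensional "expensive" black box . minimization convention throughout .
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor

def expected_improvement(x_cand, gp, y_best):
    mu, sigma = gp.predict(x_cand, return_std=True)
    sigma = np.maximum(sigma, 1e-12)
    z = (y_best - mu) / sigma
    return (y_best - mu) * norm.cdf(z) + sigma * norm.pdf(z)

f = lambda x: np.sin(3 * x) + 0.3 * x ** 2          # stand-in objective
x_obs = np.array([[-2.0], [0.0], [2.0]])
y_obs = f(x_obs).ravel()
for _ in range(10):
    gp = GaussianProcessRegressor(normalize_y=True).fit(x_obs, y_obs)
    x_cand = np.linspace(-3, 3, 601).reshape(-1, 1)
    ei = expected_improvement(x_cand, gp, y_obs.min())
    x_next = x_cand[np.argmax(ei)].reshape(1, -1)    # evaluate where ei is maximal
    x_obs = np.vstack([x_obs, x_next])
    y_obs = np.append(y_obs, f(x_next).ravel())
print(x_obs[np.argmin(y_obs)], y_obs.min())
```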
story_separator_special_tag the selection of an appropriate inducer is crucial for performing effective classification . in previous work we presented a system called noemon which relied on a mapping between dataset characteristics and inducer performance to propose inducers for specific datasets . instance-based learning was used to create that mapping . here we extend and refine the set of data characteristics ; we also use a wider range of base-level inducers and a much larger collection of datasets to create the meta-models . we compare the performance of meta-models produced by instance-based learners , decision trees and boosted decision trees . the results show that decision tree and boosted decision tree models enhance the performance of the system . story_separator_special_tag in psychological work the problem of comparing two different rankings of the same set of individuals may be divided into two types . in the first type the individuals have a given order a which is objectively defined with reference to some quality , and a characteristic question is : if an observer ranks the individuals in an order b , does a comparison of b with a suggest that he possesses a reliable judgment of the quality , or , alternatively , is it probable that b could have arisen by chance ? in the second type no objective order is given . two observers consider the individuals and rank them in orders a and b. the question now is , are these orders sufficiently alike to indicate similarity of taste in the observers , or , on the other hand , are a and b incompatible within assigned limits of probability ? an example of the first type occurs in the familiar experiments wherein an observer has to arrange a known set of weights in ascending order of weight ; the second type would arise if two observers had to rank a set of musical compositions in story_separator_special_tag knowledge discovery in databases ( kdd ) has evolved a lot during the last years and reached a mature stage offering plenty of operators to solve complex data analysis tasks . however , the user support for building workflows has not progressed accordingly . the large number of operators currently available in kdd systems makes it difficult for users to successfully analyze data . in addition , the correctness of workflows is not checked before execution . hence , the execution of a workflow frequently stops with an error after several hours of runtime . this paper presents our tools , eproplan and eida , which solve the above problems by supporting the whole life-cycle of ( semi- ) automatic workflow generation . our modeling tool eproplan allows us to describe operators and build a task/method decomposition grammar to specify the desired workflows . additionally , our intelligent discovery assistant , eida , allows us to place workflows into data mining ( dm ) tools or workflow engines for execution . story_separator_special_tag hyperparameter optimization aims to find the optimal hyperparameter configuration of a machine learning model , which provides the best performance on a validation dataset . manual search usually gets stuck in a local hyperparameter configuration , and heavily depends on human intuition and experience . a simple alternative to manual search is random/grid search on a space of hyperparameters , which still requires extensive evaluations of validation error in order to find the best configuration .
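as a point of reference for the methods discussed next , the random search baseline itself is only a few lines , e.g . with scikit-learn ; the dataset and search ranges below are arbitrary illustrations .

```python
# minimal sketch of random search over a hyperparameter space , the cheap
# baseline discussed above . each sampled configuration costs one cross-validation .
from scipy.stats import loguniform
from sklearn.datasets import load_digits
from sklearn.model_selection import RandomizedSearchCV
from sklearn.svm import SVC

x, y = load_digits(return_X_y=True)
search = RandomizedSearchCV(
    SVC(),
    param_distributions={"C": loguniform(1e-2, 1e3), "gamma": loguniform(1e-5, 1e-1)},
    n_iter=20,
    cv=3,
    random_state=0,
)
search.fit(x, y)
print(search.best_params_, search.best_score_)
```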
bayesian optimization that is a global optimization method for black-box functions is now popular for hyperparameter optimization , since it greatly reduces the number of validation error evaluations required , compared to random/grid search . bayesian optimization generally finds the best hyperparameter configuration from random initialization without any prior knowledge . this motivates us to let bayesian optimization start from the configurations that were successful on similar datasets , which are able to remarkably minimize the number of evaluations . in this paper , we propose deep metric learning to learn meta-features over datasets such that the similarity over them is effectively measured by euclidean distance between their associated meta-features . to this end , we introduce a siamese network composed of story_separator_special_tag we address the problem of finding the parameter settings that will result in optimal performance of a given learning algorithm using a particular dataset as training data . we describe a wrapper method , considering determination of the best parameters as a discrete function optimization problem . the method uses best-first search and crossvalidation to wrap around the basic induction algorithm : the search explores the space of parameter values , running the basic algorithm many times on training and holdout sets produced by crossvalidation to get an estimate of the expected error of each parameter setting . thus , the final selected parameter settings are tuned for the specific induction algorithm and dataset being studied . we report experiments with this method on 33 datasets selected from the uci and statlog collections using c4.5 as the basic induction algorithm . at a 90 % confidence level , our method improves the performance of c4.5 on nine domains , degrades performance on one , and is statistically indistinguishable from c4.5 on the rest . on the sample of datasets used for comparison , our method yields an average 13 % relative decrease in error rate . we expect to see story_separator_special_tag describing a learning task is crucial , not only for metalearning but also to gain insight in this learning task . the paper evaluates the performance of a recent method for assessing quality standards for case bases when used for a supervised meta-learning . empirical results on real-world data show this approach in combination with others as a promising one . story_separator_special_tag deep learning ( dl ) methods have gained considerable attention since 2014. in this chapter we briefly review the state of the art in dl and then give several examples of applications from diverse areas of application . we will focus on convolutional neural networks ( cnns ) , which have since the seminal work of krizhevsky et al . ( imagenet classification with deep convolutional neural networks . advances in neural information processing systems 25 , pp . 1097 1105 , 2012 ) revolutionized image classification and even started surpassing human performance on some benchmark data sets ( ciresan et al. , multi-column deep neural network for traffic sign classification , 2012a ; he et al. , delving deep into rectifiers : surpassing human-level performance on imagenet classification . corr , vol . 1502.01852 , 2015a ) . while deep neural networks have become popular primarily for image classification tasks , they can also be successfully applied to other areas and problems with some local structure in the data . 
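the wrapper method described above reduces to a simple recipe : search over candidate parameter settings and score each one by cross-validation on the training data . the sketch below substitutes scikit-learn 's decision tree for c4.5 and exhaustive search for best-first search , so it is a simplified illustration rather than a reimplementation .

```python
# sketch of the wrapper idea : treat parameter selection as search , scoring each
# candidate setting by a cross-validation (holdout) estimate of its error .
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

x, y = load_breast_cancer(return_X_y=True)
best_score, best_params = -1.0, None
for max_depth in [2, 4, 8, None]:
    for min_samples_leaf in [1, 5, 20]:
        clf = DecisionTreeClassifier(max_depth=max_depth,
                                     min_samples_leaf=min_samples_leaf,
                                     random_state=0)
        score = cross_val_score(clf, x, y, cv=5).mean()   # holdout-style estimate
        if score > best_score:
            best_score, best_params = score, (max_depth, min_samples_leaf)
print(best_params, best_score)
```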
we will first present a classical application of cnns on image-like data , in particular , phenotype classification of cells based on their morphology , and then extend the story_separator_special_tag it is a known fact that good parameter settings affect the performance of many machine learning algorithms . support vector machines ( svm ) and neural networks are particularly affected . in this paper , we concentrate on svm and discuss some ways to set its parameters . the first approach uses small samples , while the second one exploits meta-learning and past results . both methods have been thoroughly evaluated . we show that both approaches enable us to obtain quite good results with significant savings in experimentation time . story_separator_special_tag the information deviation between any two finite measures cannot be increased by any statistical operations ( markov morphisms ) . it is invariant if and only if the morphism is sufficient for these two measures . story_separator_special_tag we propose a method for producing ensembles of predictors based on holdout estimations of their generalization performances . this approach uses a prior directly on the performance of predictors taken from a finite set of candidates and attempts to infer which one is best . using bayesian inference , we can thus obtain a posterior that represents our uncertainty about that choice and construct a weighted ensemble of predictors accordingly . this approach has the advantage of not requiring that the predictors be probabilistic themselves , can deal with arbitrary measures of performance and does not assume that the data was actually generated from any of the predictors in the ensemble . since the problem of finding the best ( as opposed to the true ) predictor among a class is known as agnostic pac-learning , we refer to our method as agnostic bayesian learning . we also propose a method to address the case where the performance estimate is obtained from k-fold cross-validation . while being efficient and easily adjustable to any loss function , our experiments confirm that the agnostic bayes approach is state of the art compared to common baselines such as model selection based on story_separator_special_tag recent progress in artificial intelligence has renewed interest in building systems that learn and think like people . many advances have come from using deep neural networks trained end-to-end in tasks such as object recognition , video games , and board games , achieving performance that equals or even beats that of humans in some respects . despite their biological inspiration and performance achievements , these systems differ from human intelligence in crucial ways . we review progress in cognitive science suggesting that truly human-like learning and thinking machines will have to reach beyond current engineering trends in both what they learn and how they learn it . specifically , we argue that these machines should ( 1 ) build causal models of the world that support explanation and understanding , rather than merely solving pattern recognition problems ; ( 2 ) ground learning in intuitive theories of physics and psychology to support and enrich the knowledge that is learned ; and ( 3 ) harness compositionality and learning-to-learn to rapidly acquire and generalize knowledge to new tasks and situations .
we suggest concrete challenges and promising routes toward these goals that can combine the strengths of recent neural network story_separator_special_tag this paper is concerned with the problem of predicting the relative performance of classification algorithms . it focuses on methods that use results on small samples and discusses the shortcomings of previous approaches . a new variant is proposed that exploits , as some previous approaches , meta-learning . the method requires that experiments be conducted on few samples . the information gathered is used to identify the nearest learning curve for which the sampling procedure was carried out fully . this in turn permits generating a prediction regarding the relative performance of the algorithms . experimental evaluation shows that the method competes well with previous approaches and provides a quite good and practical solution to this problem . story_separator_special_tag this paper concerns the problem of predicting the relative performance of classification algorithms . our approach requires that experiments are conducted on small samples . the information gathered is used to identify the nearest learning curve for which the sampling procedure was fully carried out . this allows the generation of a prediction regarding the relative performance of the algorithms . the method automatically establishes how many samples are needed and their sizes . this is done iteratively by taking into account the results of all previous experiments - both on other datasets and on the new dataset obtained so far . experimental evaluation has shown that the method achieves better performance than previous approaches . story_separator_special_tag given the large amount of data mining algorithms , their combinations ( e.g . ensembles ) and possible parameter settings , finding the most adequate method to analyze a new dataset becomes an ever more challenging task . this is because in many cases testing all possibly useful alternatives quickly becomes prohibitively expensive . in this paper we propose a novel technique , called active testing , that intelligently selects the most useful cross-validation tests . it proceeds in a tournament-style fashion , in each round selecting and testing the algorithm that is most likely to outperform the best algorithm of the previous round on the new dataset . this ` most promising ' competitor is chosen based on a history of prior duels between both algorithms on similar datasets . each new cross-validation test will contribute information to a better estimate of dataset similarity , and thus better predict which algorithms are most promising on the new dataset . we have evaluated this approach using a set of 292 algorithm-parameter combinations on 76 uci datasets for classification . the results show that active testing will quickly yield an algorithm whose performance is very close to the optimum , after relatively story_separator_special_tag currently many classification algorithms exist and there is no algorithm that would outperform all the others in all tasks . therefore it is of interest to determine which classification algorithm is the best one for a given task . although direct comparisons can be made for any given problem using a cross-validation evaluation , it is desirable to avoid this , as the computational costs are significant . we describe a method which relies on relatively fast pairwise comparisons involving two algorithms .
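the tournament scheme of active testing can be sketched with a made-up history matrix of past per-dataset performances ; the dataset-similarity weighting of the actual method is simplified away here , so this is only an illustration of the control flow .

```python
# sketch of tournament-style active testing : keep a current best algorithm and
# repeatedly cross-validate only the competitor that most often beat the current
# best on previously seen datasets . history matrix and evaluate() are stand-ins .
import numpy as np

rng = np.random.default_rng(0)
history = rng.uniform(0.6, 0.9, size=(50, 6))   # past accuracy : datasets x algorithms

def evaluate(algo):                 # stand-in for a cross-validation test
    true_perf = [0.70, 0.72, 0.81, 0.78, 0.74, 0.80]
    return true_perf[algo] + 0.01 * rng.normal()

best, tested = 0, {0: evaluate(0)}
for _ in range(3):                  # a few rounds of duels
    untested = [a for a in range(6) if a not in tested]
    # competitor that most frequently beat the current best in the history
    win_rate = [(history[:, a] > history[:, best]).mean() for a in untested]
    challenger = untested[int(np.argmax(win_rate))]
    tested[challenger] = evaluate(challenger)
    if tested[challenger] > tested[best]:
        best = challenger
print("selected algorithm:", best)
```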
this method exploits sampling landmarks , that is , information about learning curves besides classical data characteristics . one key feature of this method is an iterative procedure for extending the series of experiments used to gather new information in the form of sampling landmarks . metalearning also plays a vital role . the comparisons between various pairs of algorithms are repeated and the result is represented in the form of a partially ordered ranking . evaluation is done by comparing the partial order of algorithms that has been predicted to the partial order representing the supposedly correct result . the results of our analysis show that the method has good performance and could be of help in story_separator_special_tag metalearning attracted considerable interest in the machine learning community in the last years . yet , some disagreement remains on what does or what does not constitute a metalearning problem and in which contexts the term is used . this survey aims at giving an all-encompassing overview of the research directions pursued under the umbrella of metalearning , reconciling different definitions given in scientific literature , listing the choices involved when designing a metalearning system and identifying some of the future research challenges in this domain . story_separator_special_tag in this paper , we present a framework for metalearning that adopts the use of regression-based landmarkers . each such landmarker exploits the correlations between the various patterns of performance for a given set of algorithms so as to construct a regression function that represents the pattern of performance of one algorithm from that set . the idea is that the independent variables utilised by these regression functions , i.e . the landmarkers , correspond to the performance of a subset of the given algorithms . in this manner , we may control the number of algorithms being landmarked ; the more that are landmarked , the fewer independent variables or evidence we have to make those approximations , and the less accurate the landmarkers are . we investigate the ability of such landmarkers in combination with metalearners to learn how to predict the most accurate algorithm from a given set . while our results show that the accuracy of the meta-learning solutions increases as the quality of the metaattributes improves , i.e . when fewer algorithm performance measurements are landmarked and instead evaluated as independent variables , we find that , in general , the results are still poor . however , we find that when a story_separator_special_tag algorithm design is a laborious process and often requires many iterations of ideation and validation . in this paper , we explore automating algorithm design and present a method to learn an optimization algorithm , which we believe to be the first method that can automatically discover a better algorithm . we approach this problem from a reinforcement learning perspective and represent any particular optimization algorithm as a policy . we learn an optimization algorithm using guided policy search and demonstrate that the resulting algorithm outperforms existing hand-engineered algorithms in terms of convergence speed and/or the final objective value . story_separator_special_tag learning to optimize is a recently proposed framework for learning optimization algorithms using reinforcement learning . in this paper , we explore learning an optimization algorithm for training shallow neural nets .
such high-dimensional stochastic optimization problems present interesting challenges for existing reinforcement learning algorithms . we develop an extension that is suited to learning optimization algorithms in this setting and demonstrate that the learned optimization algorithm consistently outperforms other known optimization algorithms even on unseen tasks and is robust to changes in stochasticity of gradients and the neural net architecture . more specifically , we show that an optimization algorithm trained with the proposed method on the problem of training a neural net on mnist generalizes to the problems of training neural nets on the toronto faces dataset , cifar-10 and cifar-100 . story_separator_special_tag this article provides an overview of rank aggregation methods and algorithms , with an emphasis on modern biological applications . rank aggregation methods have traditionally been used extensively in marketing and advertisement research , and in applied psychology in general . in recent years , rank aggregation methods have emerged as an important tool for combining information from different internet search engines or from different omics-scale biological studies . we discuss three classes of methods , namely distributional based , heuristic , and stochastic search . thurstone 's original scaling and its extensions represent the first class of methods that are most appropriate for aggregating many short ranked lists . aggregating results from consumer rankings of products falls into this category of problems . its application to biological problems is also being explored . on the other hand , heuristic algorithms and stochastic search methods are applicable to the situation of aggregating a small number of long lists , the so-called high-level meta-analysis scenario . combining results from different search engines/criteria and a number of omics-scale biological applications fall into this category . heuristic algorithms are deterministic in nature , ranging from simple arithmetic averages of ranks to markov story_separator_special_tag providing user support for the application of data mining algorithms in the field of knowledge discovery in databases ( kdd ) is an important issue . based on ideas from the fields of statistics , machine learning and knowledge engineering we provided a general framework for defining user support . the general framework contains a combined top-down and bottom-up strategy to tackle this problem . in the current paper we describe the algorithm selection tool ( ast ) that is one component in our framework . story_separator_special_tag in meta-learning , classification problems can be described by a variety of features , including complexity measures . these measures allow capturing the complexity of the frontier that separates the classes . for regression problems , on the other hand , there is a lack of such measures . this paper presents and analyses measures devoted to estimating the complexity of the function that should be fitted to the data in regression problems . as case studies , they are employed as meta-features in three meta-learning setups : ( i ) the first one predicts the regression function type of some synthetic datasets ; ( ii ) the second one is designed to tune the parameter values of support vector regressors ; and ( iii ) the third one aims to predict the performance of various regressors for a given dataset .
the results show the suitability of the new measures to describe the regression datasets and their utility in the meta-learning tasks considered . in cases ( ii ) and ( iii ) the achieved results are similar to or better than those obtained by the use of classical meta-features in meta-learning . story_separator_special_tag machine learning studies automatic algorithms that improve themselves through experience . it is widely used for analyzing and extracting value from large biomedical data sets , or big biomedical data , advancing biomedical research , and improving healthcare . before a machine learning model is trained , the user of a machine learning software tool typically must manually select a machine learning algorithm and set one or more model parameters termed hyper-parameters . the algorithm and hyper-parameter values used can greatly impact the resulting model 's performance , but their selection requires special expertise as well as many labor-intensive manual iterations . to make machine learning accessible to layman users with limited computing expertise , computer science researchers have proposed various automatic selection methods for algorithms and/or hyper-parameter values for a given supervised machine learning problem . this paper reviews these methods , identifies several of their limitations in the big biomedical data environment , and provides preliminary thoughts on how to address these limitations . these findings establish a foundation for future research on automatically selecting algorithms and hyper-parameter values for analyzing big biomedical data . story_separator_special_tag many classification algorithms , such as neural networks and support vector machines , have a range of hyper-parameters that may strongly affect the predictive performance of the models induced by them . hence , it is recommended to define the values of these hyper-parameters using optimization techniques . while these techniques usually converge to a good set of values , they typically have a high computational cost , because many candidate sets of values are evaluated during the optimization process . it is often not clear whether this will result in parameter settings that are significantly better than the default settings . when training time is limited , it may help to know when these parameters should definitely be tuned . in this study , we use meta-learning to predict when optimization techniques are expected to lead to models whose predictive performance is better than those obtained by using default parameter settings . hence , we can choose to employ optimization techniques only when they are expected to improve performance , thus reducing the overall computational cost . we evaluate these meta-learning techniques on more than one hundred data sets . the experimental results show that it is possible to story_separator_special_tag supervised classification is the most studied task in machine learning . among the many algorithms used for this task , decision tree algorithms are a popular choice , since they are robust and efficient to construct . moreover , they have the advantage of producing comprehensible models and satisfactory accuracy levels in several application domains . like most of the machine learning methods , these algorithms have some hyper-parameters whose values directly affect the performance of the induced models .
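the meta-level setup described above ( predicting whether tuning will beat the defaults before spending compute on it ) can be sketched as an ordinary classification problem over dataset meta-features ; everything below , including the meta-data, is synthetic and illustrative .

```python
# sketch of the meta-level setup : learn , from past experiments , a classifier
# that maps dataset meta-features to whether hyperparameter tuning beat the
# default setting , and query it before spending compute on a new dataset .
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
meta_x = rng.uniform(size=(300, 5))            # meta-features of past datasets
# synthetic ground truth : tuning helped mainly on "complex" datasets
tuning_helped = (meta_x[:, 0] + meta_x[:, 1] > 1.0).astype(int)

meta_clf = RandomForestClassifier(random_state=0).fit(meta_x, tuning_helped)
new_dataset_meta = rng.uniform(size=(1, 5))
if meta_clf.predict(new_dataset_meta)[0]:
    print("tuning is predicted to pay off -> run the optimizer")
else:
    print("defaults are predicted to suffice -> skip tuning")
```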
due to the high number of possibilities for these hyper-parameter values , several studies use optimization techniques to find a good set of solutions in order to produce classifiers with good predictive performance . this study investigates how sensitive decision trees are to a hyper-parameter optimization process . four different tuning techniques were explored to adjust the hyper-parameters of the j48 decision tree algorithm . in total , experiments using 102 heterogeneous datasets analyzed the tuning effect on the induced models . the experimental results show that , even though the average improvement over all datasets is low , in most cases the improvement is statistically significant . story_separator_special_tag machine learning algorithms have been investigated in several scenarios , one of them being data classification . the predictive performance of the models induced by these algorithms is usually strongly affected by the values used for their hyper-parameters . different approaches to define these values have been proposed , such as the use of default values and optimization techniques . although default values can result in models with good predictive performance , different implementations of the same machine learning algorithms use different default values , leading to models with clearly different predictive performance for the same dataset . optimization techniques have been used to search for hyper-parameter values able to maximize the predictive performance of induced models for a given dataset , but with the drawback of a high computational cost . a compromise is to use an optimization technique to search for values that are suitable for a wide spectrum of datasets . this paper investigates the use of meta-learning to recommend default values for the induction of support vector machine models for a new classification dataset . we compare the default values suggested by the weka and libsvm tools with default values optimized by meta-heuristics on a large story_separator_special_tag mantovani , r. g. use of meta-learning for hyperparameter tuning of classification problems . 2018 . 155 p. doctoral thesis ( doctorate in sciences , computer science and computational mathematics ) , instituto de ciências matemáticas e de computação , universidade de são paulo , são carlos , sp , 2018 . machine learning solutions have been successfully used to solve many simple and complex problems . however , their development process still relies on human experts to perform tasks such as data preprocessing , feature engineering and model selection . as the complexity of these tasks increases , so does the demand for automated solutions , namely automated machine learning ( automl ) . most algorithms employed in these systems have hyperparameters whose configuration may directly affect their predictive performance . therefore , hyperparameter tuning is a recurring task in automl systems . this thesis investigated how to efficiently automate hyperparameter tuning by means of meta-learning . to this end , large-scale experiments were performed tuning the hyperparameters of different classification algorithms , and an enhanced experimental methodology was adopted throughout the thesis to explore and learn the hyperparameter profiles for different classification algorithms .
the results also showed that in many cases story_separator_special_tag survey of previous comparisons and theoretical work ; descriptions of methods ; dataset descriptions ; criteria for comparison and methodology ( including validation ) ; empirical results ; machine learning on machine learning . story_separator_special_tag the support vector machine algorithm is sensitive to the choice of parameter settings . if these are not set correctly , the algorithm may have substandard performance . it has been shown that meta-learning can be used to support the selection of svm parameters . however , it is very dependent on the quality of the dataset and the meta-features used to characterize the dataset . as an alternative , a recent technique called active testing characterizes a dataset based on the pairwise performance differences between possible solutions . this approach selects the most useful cross-validation tests . each new cross-validation test will contribute information to a better estimate of dataset similarity , and thus better predict which algorithms are most promising on the new dataset . in this paper we propose the application of active testing for the svm parameter problem . we test it on the problem of setting the rbf kernel parameters for classification problems and we compare its similarity strategy with one based on data characteristics . the results showed that the variants of active testing that rely on cross-validation tests to estimate dataset similarity provide better solutions than those that rely on data characteristics story_separator_special_tag deep neural networks excel in regimes with large amounts of data , but tend to struggle when data is scarce or when they need to adapt quickly to changes in the task . in response , recent work in meta-learning proposes training a meta-learner on a distribution of similar tasks , in the hopes of generalization to novel but related tasks by learning a high-level strategy that captures the essence of the problem it is asked to solve . however , many recent meta-learning approaches are extensively hand-designed , either using architectures specialized to a particular application , or hard-coding algorithmic components that constrain how the meta-learner solves the task . we propose a class of simple and generic meta-learner architectures that use a novel combination of temporal convolutions and soft attention ; the former to aggregate information from past experience and the latter to pinpoint specific pieces of information . in the most extensive set of meta-learning experiments to date , we evaluate the resulting simple neural attentive learner ( or snail ) on several heavily-benchmarked tasks . on all tasks , in both supervised and reinforcement learning , snail attains state-of-the-art performance by significant margins . story_separator_special_tag focusing on portfolio algorithm selection , this paper presents a hybrid machine learning approach , combining collaborative filtering and surrogate latent factor modeling . collaborative filtering , popularized by the netflix challenge , aims at selecting the items that a user will most probably like , based on the previous movies she liked , and the movies that have been liked by other users . as first noted by stern et al . ( 2010 ) , algorithm selection can be formalized as a collaborative filtering problem , by considering that a problem instance `` prefers '' the algorithms with better performance on this particular instance .
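that formalization is easy to sketch : fill in a sparse instance-by-algorithm performance matrix by latent factorization and recommend the algorithm with the highest predicted score . the snippet below uses plain gradient-descent matrix factorization on synthetic data and stands in for , rather than reproduces , the cited systems .

```python
# sketch of algorithm selection as collaborative filtering : factorize a sparse
# (problem instance x algorithm) performance matrix into latent features and
# recommend the algorithm with the highest predicted score for a new instance .
import numpy as np

rng = np.random.default_rng(0)
true_u, true_v = rng.normal(size=(30, 3)), rng.normal(size=(8, 3))
perf = true_u @ true_v.T                    # hidden full performance matrix
mask = rng.uniform(size=perf.shape) < 0.3   # only ~30 % of runs were executed

u, v = rng.normal(0, 0.1, (30, 3)), rng.normal(0, 0.1, (8, 3))
for _ in range(2000):
    err = mask * (perf - u @ v.T)           # error only on observed entries
    u += 0.01 * (err @ v - 0.01 * u)        # gradient steps with l2 shrinkage
    v += 0.01 * (err.T @ u - 0.01 * v)
pred = u @ v.T
print("recommended algorithm for instance 0:", np.argmax(pred[0]))
```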
a main difference between collaborative filtering approaches and mainstream algorithm selection is to extract latent features to describe problem instances and algorithms , whereas algorithm selection most often relies on the initial descriptive features . a main contribution of the present paper concerns the so-called cold-start issue , when facing a brand new instance . in contrast with stern et al . ( 2010 ) , ars learns a non-linear mapping from the initial features onto the latent features , thereby supporting the recommendation of a good algorithm for the new problem story_separator_special_tag algorithm selection ( as ) , selecting the algorithm best suited for a particular problem instance , is acknowledged to be a key issue to make the best out of algorithm portfolios . this paper presents a collaborative filtering approach to as . collaborative filtering , popularized by the netflix challenge , aims to recommend the items that a user will most probably like , based on the previous items she liked , and the items that have been liked by other users . as first noted by stern et al . [ 47 ] , algorithm selection can be formalized as a collaborative filtering problem , by considering that a problem instance likes better the algorithms that achieve better performance on this particular instance . two merits of collaborative filtering ( cf ) compared to the mainstream algorithm selection ( as ) approaches are the following . firstly , mainstream as requires extensive and computationally expensive experiments to learn a performance model , with all algorithms launched on all problem instances , whereas cf can exploit a sparse matrix , with a few algorithms launched on each problem instance . secondly , as learns a performance model as a story_separator_special_tag knowledge discovery in databases is a complex process that involves many different data processing and learning operators . today 's knowledge discovery support systems can contain several hundred operators . a major challenge is to assist the user in designing workflows which are not only valid but also - ideally - optimize some performance measure associated with the user goal . in this paper we present such a system . the system relies on a meta-mining module which analyses past data mining experiments and extracts meta-mining models which associate dataset characteristics with workflow descriptors in view of workflow performance optimization . the meta-mining model is used within a data mining workflow planner , to guide the planner during the workflow planning . we learn the meta-mining models using a similarity learning approach , and extract the workflow descriptors by mining the workflows for generalized relational patterns accounting also for domain knowledge provided by a data mining ontology . we evaluate the quality of the data mining workflows that the system produces on a collection of real world datasets coming from biology and show that it produces workflows that are significantly better than alternative methods that can only do workflow selection story_separator_special_tag we consider the problem of learning bayes net structures for related tasks . we present a formalism for learning related bayes net structures that takes advantage of the similarity between tasks by biasing toward learning similar structures for each task . heuristic search is used to find a high scoring set of structures ( one for each task ) , where the score for a set of structures is computed in a principled way . 
experiments on synthetic problems generated from the alarm and insurance networks show that learning the structures for related tasks using the proposed method yields better results than learning the structures independently . story_separator_special_tag the presence of computationally demanding problems and the current inability to automatically transfer experience from the application of past experiments to new ones delay the evolution of knowledge itself . in this paper we present the automated data scientist , a system that employs meta-learning for hyperparameter selection and builds a rich ensemble of models through forward model selection in order to automate binary classification tasks . preliminary evaluation shows that the system is capable of coping with classification problems of medium complexity . story_separator_special_tag quantitative structure activity relationships ( qsars ) are functions that predict bioactivity from compound structure . although almost every form of statistical and machine learning method has been applied to learning qsars , there is no single best way of learning qsars . therefore , currently the qsar scientist has little to guide her/him on which qsar approach to choose for a specific problem . the aim of this work is to introduce meta-qsar , a meta-learning approach aimed at learning which qsar method is most appropriate for a particular problem . for the preliminary results presented here , we used chembl , a publicly available chemoinformatic database , to systematically run extensive comparative qsar experiments . we further apply meta-learning in order to generalise these results . story_separator_special_tag as the field of data science continues to grow , there will be an ever-increasing demand for tools that make machine learning accessible to non-experts . in this paper , we introduce the concept of tree-based pipeline optimization for automating one of the most tedious parts of machine learning -- pipeline design . we implement an open source tree-based pipeline optimization tool ( tpot ) in python and demonstrate its effectiveness on a series of simulated and real-world benchmark data sets . in particular , we show that tpot can design machine learning pipelines that provide a significant improvement over a basic machine learning analysis while requiring little to no input or prior knowledge from the user . we also address the tendency for tpot to design overly complex pipelines by integrating pareto optimization , which produces compact pipelines without sacrificing classification accuracy . as such , this work represents an important step toward fully automating machine learning pipeline design . story_separator_special_tag a major assumption in many machine learning and data mining algorithms is that the training and future data must be in the same feature space and have the same distribution . however , in many real-world applications , this assumption may not hold . for example , we sometimes have a classification task in one domain of interest , but we only have sufficient training data in another domain of interest , where the latter data may be in a different feature space or follow a different data distribution . in such cases , knowledge transfer , if done successfully , would greatly improve the performance of learning by avoiding expensive data-labeling efforts . in recent years , transfer learning has emerged as a new learning framework to address this problem .
this survey focuses on categorizing and reviewing the current progress on transfer learning for classification , regression , and clustering problems . in this survey , we discuss the relationship between transfer learning and other related machine learning techniques such as domain adaptation , multitask learning and sample selection bias , as well as covariate shift . we also explore some potential future issues in transfer learning story_separator_special_tag active learning ( al ) aims to enable training high-performance classifiers with low annotation cost by predicting which subset of unlabelled instances would be most beneficial to label . the importance of al has motivated extensive research , proposing a wide variety of manually designed al algorithms with diverse theoretical and intuitive motivations . in contrast to this body of research , we propose to treat active learning algorithm design as a meta-learning problem and learn the best criterion from data . we model an active learning algorithm as a deep neural network that inputs the base learner state and the unlabelled point set and predicts the best point to annotate next . training this active query policy network with reinforcement learning produces the best non-myopic policy for a given dataset . the key challenge in achieving a general solution to al then becomes that of learner generalisation , particularly across heterogeneous datasets . we propose a multi-task dataset-embedding approach that allows dataset-agnostic active learners to be trained . our evaluation shows that al algorithms trained in this way can directly generalise across diverse problems . story_separator_special_tag bayesian optimization ( bo ) is a model-based approach for gradient-free black-box function optimization . typically , bo is powered by a gaussian process ( gp ) , whose algorithmic complexity is cubic in the number of evaluations . hence , gp-based bo cannot leverage large amounts of past or related function evaluations , for example , to warm start the bo procedure . we develop a multiple adaptive bayesian linear regression model as a scalable alternative whose complexity is linear in the number of observations . the multiple bayesian linear regression models are coupled through a shared feedforward neural network , which learns a joint representation and transfers knowledge across machine learning problems . story_separator_special_tag landmarking is a novel approach to describing tasks in meta-learning . previous approaches to meta-learning mostly considered only statistics-inspired measures of the data as a source for the definition of meta-attributes . contrary to such approaches , landmarking tries to determine the location of a specific learning problem in the space of all learning problems by directly measuring the performance of some simple and efficient learning algorithms themselves . in the experiments reported we show how such a use of landmark values can help to distinguish between areas of the learning space favouring different learners . experiments , both with artificial and real-world databases , show that landmarking selects , with a moderate but reasonable level of success , the best performing of a set of learning algorithms . story_separator_special_tag the selection of metafeatures for metalearning ( mtl ) is often an ad hoc process . the lack of a proper motivation for choosing one metafeature rather than another is questionable and may cause the loss of valuable information for a given problem ( e.g.
, use of class entropy and not attribute entropy ) . we present a framework to systematically generate metafeatures in the context of mtl . this framework decomposes a metafeature into three components : meta-function , object and post-processing . the automatic generation of metafeatures is triggered by the selection of a meta-function used to systematically generate metafeatures from all possible combinations of object and post-processing alternatives . we performed experiments addressing the problem of algorithm selection on classification datasets . results show that the sets of systematic metafeatures generated from our framework are more informative than the non-systematic ones and the set regarded as state-of-the-art . story_separator_special_tag machine learning ( ml ) has been successfully applied to a wide range of domains and applications . one of the techniques behind most of these successful applications is ensemble learning ( el ) , the field of ml that gave birth to methods such as random forests or boosting . the complexity of applying these techniques together with the market scarcity of ml experts has created the need for systems that enable a fast and easy drop-in replacement for ml libraries . automated machine learning ( automl ) is the field of ml that attempts to answer these needs . typically , these systems rely on optimization techniques such as bayesian optimization to lead the search for the best model . our approach differs from these systems by making use of the most recent advances on metalearning and a learning to rank approach to learn from metadata . we propose autobagging , an automl system that automatically ranks 63 bagging workflows by exploiting past performance and dataset characterization . results on 140 classification datasets from the openml platform show that autobagging can yield better performance than the average rank method and achieve results that are not statistically different story_separator_special_tag having access to massive amounts of data does not necessarily imply that induction algorithms must use them all . samples often provide the same accuracy with far less computational cost . however , the correct sample size is rarely obvious . we analyze methods for progressive sampling -- using progressively larger samples as long as model accuracy improves . we explore several notions of efficient progressive sampling . we analyze efficiency relative to induction with all instances ; we show that a simple , geometric sampling schedule is asymptotically optimal , and we describe how best to take into account prior expectations of accuracy convergence . we then describe the issues involved in instantiating an efficient progressive sampler , including how to detect convergence . finally , we provide empirical results comparing a variety of progressive sampling methods . we conclude that progressive sampling can be remarkably efficient . story_separator_special_tag an introduction to pattern classification ; some notes on applied mathematics for machine learning ; bayesian inference : an introduction to principles and practice in machine learning ; gaussian processes in machine learning ; unsupervised learning ; monte carlo methods for absolute beginners ; stochastic learning ; an introduction to statistical learning theory ; concentration inequalities . story_separator_special_tag besides the classification performance , the training time is a second important factor that affects the suitability of a classification algorithm regarding an unknown dataset .
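a crude way to encode this accuracy/time trade-off is a scored selection rule : among models whose accuracy is within a tolerance of the best , take the fastest . the sketch below is an invented illustration ( the candidates and the 5 % tolerance are made-up numbers , not taken from any paper cited here ) .

```python
# candidates : ( name , cross-validated accuracy , training time in seconds )
candidates = [("svm-grid", 0.86, 21600.0),
              ("rf-default", 0.85, 420.0),
              ("knn", 0.79, 35.0)]

def pick(candidates, tolerance=0.05):
    # among models within `tolerance` of the best accuracy , take the fastest
    best_acc = max(acc for _, acc, _ in candidates)
    close_enough = [c for c in candidates if c[1] >= best_acc - tolerance]
    return min(close_enough, key=lambda c: c[2])

print(pick(candidates))  # -> ('rf-default', 0.85, 420.0)
```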
an algorithm with a slightly lower accuracy may be preferred if its training time is significantly lower . additionally , an estimation of the required training time of a pattern recognition task is very useful if the result has to be available in a certain amount of time . meta-learning is often used to predict the suitability or performance of classifiers using different learning schemes and features . especially landmarking features have been used very successfully in the past . the accuracies of simple learners are used to predict the performance of a more sophisticated algorithm . in this work , we investigate the quantitative prediction of the training time for several target classifiers . different sets of meta-features are evaluated according to their suitability for predicting actual run-times of a parameter optimization by a grid search . additionally , we adapted the concept of landmarking to time prediction . instead of their accuracies , the run-times of simple learners are used as feature values . we evaluated the approach on real world datasets story_separator_special_tag the performance of most of the classification algorithms on a particular dataset is highly dependent on the learning parameters used for training them . different approaches like grid search or genetic algorithms are frequently employed to find suitable parameter values for a given dataset . grid search has the advantage of finding more accurate solutions in general at the cost of higher computation time . genetic algorithms , on the other hand , are able to find good solutions in less time , but the accuracy of these solutions is usually lower than those of grid search . this paper uses ideas from meta-learning and case-based reasoning to provide good starting points to the genetic algorithm . the presented approach reaches the accuracy of grid search at a significantly lower computational cost . we performed extensive experiments for optimizing learning parameters of the support vector machine ( svm ) and the random forest classifiers on over 100 datasets from uci and statlib repositories . for the svm classifier , grid search achieved an average accuracy of 81 % and took six hours for training , whereas the standard genetic algorithm obtained 74 % accuracy in close to one hour of story_separator_special_tag in few-shot classification , we are interested in learning algorithms that train a classifier from only a handful of labeled examples . recent progress in few-shot classification has featured meta-learning , in which a parameterized model for a learning algorithm is defined and trained on episodes representing different classification problems , each with a small labeled training set and its corresponding test set . in this work , we advance this few-shot classification paradigm towards a scenario where unlabeled examples are also available within each episode . we consider two situations : one where all unlabeled examples are assumed to belong to the same set of classes as the labeled examples of the episode , and the more challenging situation where examples from other distractor classes are also provided . to address this paradigm , we propose novel extensions of prototypical networks ( snell et al. , 2017 ) that are augmented with the ability to use unlabeled examples when producing prototypes . these models are trained in an end-to-end way on episodes , to learn to leverage the unlabeled examples successfully .
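the core computation behind such prototype-based models is simple : average the embeddings of each class to get a prototype , then optionally refine the prototypes with soft assignments of the unlabeled embeddings . the sketch below shows one soft k-means refinement step in the spirit of that extension , with random vectors standing in for learned embeddings ; the temperature and the weight given to the original prototype are our own simplifications .

```python
import numpy as np

def prototypes(support, labels, n_classes):
    # standard prototypical-network prototype : mean embedding per class
    return np.stack([support[labels == c].mean(axis=0) for c in range(n_classes)])

def refine(protos, unlabeled, temperature=1.0):
    # soft-assign unlabeled embeddings to prototypes , then recompute weighted
    # means ; each original prototype is kept as a pseudo-point with weight 1
    d = ((unlabeled[:, None, :] - protos[None, :, :]) ** 2).sum(-1)
    w = np.exp(-d / temperature)
    w /= w.sum(axis=1, keepdims=True)          # ( n_unlabeled , n_classes )
    num = protos + w.T @ unlabeled
    den = 1.0 + w.sum(axis=0)[:, None]
    return num / den

rng = np.random.default_rng(0)
support = rng.normal(size=(10, 4))                       # stand-in embeddings
labels = np.array([0, 0, 0, 0, 0, 1, 1, 1, 1, 1])
unlabeled = rng.normal(size=(20, 4))
print(refine(prototypes(support, labels, 2), unlabeled).shape)  # (2, 4)
```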
we evaluate these methods on versions of the omniglot and miniimagenet benchmarks , adapted to story_separator_special_tag in this paper , we introduce factorization machines ( fm ) , which are a new model class that combines the advantages of support vector machines ( svm ) with factorization models . like svms , fms are a general predictor working with any real valued feature vector . in contrast to svms , fms model all interactions between variables using factorized parameters . thus they are able to estimate interactions even in problems with huge sparsity ( like recommender systems ) where svms fail . we show that the model equation of fms can be calculated in linear time and thus fms can be optimized directly . so unlike nonlinear svms , a transformation in the dual form is not necessary and the model parameters can be estimated directly without the need of any support vector in the solution . we show the relationship to svms and the advantages of fms for parameter estimation in sparse settings . on the other hand there are many different factorization models like matrix factorization , parallel factor analysis or specialized models like svd++ , pitf or fpmc . the drawback of these models is that they are not applicable to general prediction tasks story_separator_special_tag work on metalearning for algorithm selection has often been criticized because it mostly considers only the default parameter settings of the candidate base learning algorithms . many have indeed argued that the choice of parameter values can have a significant impact on accuracy . yet little empirical evidence exists to provide definitive support for that argument . recent experiments do suggest that parameter optimization may indeed have an impact . however , the distribution of performance differences has a long tail , suggesting that in most cases parameter optimization has little effect on accuracy . in this paper , we revisit some of these results and use metalearning to characterize the situations when parameter optimization is likely to cause a significant increase in accuracy . in so doing , we show that 1 ) a relatively simple and efficient landmarker carries significant predictive power , and 2 ) metalearning for algorithm selection should be effected in two phases , the first in which one determines whether parameter optimization is likely to increase accuracy , and the second in which algorithm selection actually takes place . story_separator_special_tag meta-learning is increasingly used to support the recommendation of machine learning algorithms and their configurations . such recommendations are made based on meta-data , consisting of performance evaluations of algorithms on prior datasets , as well as characterizations of these datasets . these characterizations , also called meta-features , describe properties of the data which are predictive for the performance of machine learning algorithms trained on them . unfortunately , despite being used in a large number of studies , meta-features are not uniformly described and computed , making many empirical studies irreproducible and hard to compare . this paper aims to remedy this by systematizing and standardizing data characterization measures used in meta-learning , and performing an in-depth analysis of their utility .
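as a concrete illustration of what such data characterization measures look like , the sketch below computes a few classic meta-features with numpy . the particular selection is ours , chosen for brevity ; it is not the measure set of any specific tool or paper cited here .

```python
import numpy as np

def meta_features(X, y):
    # a handful of classic characterization measures ( illustrative choice only )
    n, p = X.shape
    _, counts = np.unique(y, return_counts=True)
    pk = counts / n
    corr = np.abs(np.corrcoef(X, rowvar=False))[np.triu_indices(p, 1)]
    z = (X - X.mean(axis=0)) / X.std(axis=0)
    return {
        "n_instances": n,
        "n_features": p,
        "dimensionality": p / n,
        "class_entropy": -(pk * np.log2(pk)).sum(),
        "mean_abs_feature_corr": corr.mean(),
        "mean_skewness": (z ** 3).mean(),
    }

rng = np.random.default_rng(0)
print(meta_features(rng.normal(size=(100, 5)), rng.integers(0, 3, size=100)))
```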
moreover , it presents mfe , a new tool for extracting meta-features from datasets , identifies more subtle reproducibility issues in the literature , and proposes guidelines for data characterization that strengthen reproducible empirical research in meta-learning . story_separator_special_tag until recently , statistical theory has been restricted to the design and analysis of sampling experiments in which the size and composition of the samples are completely determined before the experimentation begins . the reasons for this are partly historical , dating back to the time when the statistician was consulted , if at all , only after the experiment was over , and partly intrinsic in the mathematical difficulty of working with anything but a fixed number of independent random variables . a major advance now appears to be in the making with the creation of a theory of the sequential design of experiments , in which the size and composition of the samples are not fixed in advance but are functions of the observations themselves . story_separator_special_tag eighty pilots participated in a study of variables influencing the transfer process . posttraining performance was assessed in a flight simulation under 1 of 2 conditions . those in the maximum performance condition were made aware of the skill to be assessed and the fact that their teammates were confederates , whereas those in the typical performance condition were not . the results indicated that ( a ) simulator ratings correlated with a measure of transfer to the cockpit for those in the typical condition only ; ( b ) team leader support , manipulated in a pretask brief , moderated the disparity between maximum and typical performance ; ( c ) team climate mediated the impact of support on performance in the typical condition ; ( d ) those with a stronger predisposition toward the trained skill viewed their climate as more supportive ; and ( e ) perceptions of team climate were better predictors of performance for those with a more external locus of control . story_separator_special_tag the paper describes the application of neural networks as learning rules for the training of neural networks . the learning rule is part of the neural network architecture . as a result the learning rule is non-local and globally distributed within the network . the learning rules are evolved using an evolution strategy . the survival of a learning rule is based on its performance in training neural networks on a set of tasks . training algorithms will be evolved for single-layer artificial neural networks . experimental results show that a learning rule of this type is very capable of generating an efficient training algorithm . story_separator_special_tag the selection of the optimal ensemble of classifiers in the multiple-classifier selection technique is undecidable in many cases and is potentially subject to a trial-and-error search . this paper introduces a quantitative meta-learning approach based on neural networks and rough set theory for the selection of the best predictive model . this approach depends directly on the characteristic meta-features of the input data sets . the employed meta-features are the degree of discreteness and the distribution of the features in the input data set , the fuzziness of these features related to the target class labels and finally the correlation and covariance between the different features .
the experimental work considering these criteria is applied to twenty-nine data sets using different classification techniques , including support vector machines , decision tables and bayesian belief models . the measures of these criteria and the best-performing classification technique are used to build a meta data set . the role of the neural network is to perform a black-box prediction of the optimal , best-fitting classification technique . the role of the rough set theory is the generation of the decision rules that control this prediction approach . story_separator_special_tag one of the challenges of data mining is finding hyperparameters for a learning algorithm that will produce the best model for a given dataset . hyperparameter optimization automates this process , but it can still take significant time . it has been found that hyperparameter optimization does not always result in induced models with significant improvement over default values , yet no systematic analysis of the role of hyperparameter optimization in machine learning has been conducted . we use metalearning to inform the decision of whether to optimize hyperparameters based on expected performance improvement and computational cost . story_separator_special_tag despite recent breakthroughs in the applications of deep neural networks , one setting that presents a persistent challenge is that of `` one-shot learning . '' traditional gradient-based networks require a lot of data to learn , often through extensive iterative training . when new data is encountered , the models must inefficiently relearn their parameters to adequately incorporate the new information without catastrophic interference . architectures with augmented memory capacities , such as neural turing machines ( ntms ) , offer the ability to quickly encode and retrieve new information , and hence can potentially obviate the downsides of conventional models . here , we demonstrate the ability of a memory-augmented neural network to rapidly assimilate new data , and leverage this data to make accurate predictions after only a few samples . we also introduce a new method for accessing an external memory that focuses on memory content , unlike previous methods that additionally use memory location-based focusing mechanisms . story_separator_special_tag in machine learning , hyperparameter optimization is a challenging task that is usually approached by experienced practitioners or in a computationally expensive brute-force manner such as grid-search . therefore , recent research proposes to use observed hyperparameter performance on already solved problems ( i.e . data sets ) in order to speed up the search for promising hyperparameter configurations in the sequential model based optimization framework . in this paper , we propose multilayer perceptrons as surrogate models as they are able to model highly nonlinear hyperparameter response surfaces . however , since interactions of hyperparameters , data sets and metafeatures are only implicitly learned in the subsequent layers , we improve the performance of multilayer perceptrons by means of an explicit factorization of the interaction weights and call the resulting model a factorized multilayer perceptron . additionally , we evaluate different ways of obtaining predictive uncertainty , which is a key ingredient for a decent tradeoff between exploration and exploitation .
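predictive uncertainty matters because the acquisition function that drives the search consumes both the surrogate 's mean and its spread . expected improvement is the standard example : a candidate with a worse mean but high uncertainty can score higher than a safe , well-known one . the sketch below is the textbook formula ( for minimization ) , not the specific acquisition of the paper above ; the numbers are invented .

```python
import math

def expected_improvement(mu, sigma, best, xi=0.01):
    # ei for minimization : expected amount by which a candidate with predictive
    # mean `mu` and std `sigma` improves on the incumbent value `best`
    if sigma <= 0.0:
        return 0.0
    z = (best - mu - xi) / sigma
    pdf = math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)
    cdf = 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))
    return (best - mu - xi) * cdf + sigma * pdf

# an uncertain candidate can beat one with a better mean but almost no spread
print(expected_improvement(mu=0.20, sigma=0.10, best=0.25))    # ~0.063
print(expected_improvement(mu=0.18, sigma=0.001, best=0.25))   # ~0.060
```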
our experimental results on two public meta data sets demonstrate the efficiency of our approach compared to a variety of published baselines . for reproduction purposes , we make our data sets and all the program code publicly available story_separator_special_tag previous algorithms for supervised sequence learning are based on dynamic recurrent networks . this paper describes an alternative class of gradient-based systems consisting of two feedforward nets that learn to deal with temporal sequences using fast weights : the first net learns to produce context-dependent weight changes for the second net whose weights may vary very quickly . the method offers the potential for stm storage efficiency : a single weight ( instead of a full-fledged unit ) may be sufficient for storing temporal information . various learning methods are derived . two experiments with unknown time delays illustrate the approach . one experiment shows how the system can be used for adaptive temporary variable binding . story_separator_special_tag a recurrent neural network is presented which ( in principle ) can , besides learning to solve problems posed by the environment , also use its own weights as input data and learn new ( arbitrarily complex ) algorithms for modifying its own weights in response to the environmental input and evaluations . the network uses subsets of its input and output units for observing its own errors and for explicitly analysing and manipulating all of its own weights , including those weights responsible for analyzing and manipulating weights . this effectively embeds a chain of meta-networks and meta-meta-...-networks into the network itself . story_separator_special_tag we study task sequences that allow for speeding up the learner 's average reward intake through appropriate shifts of inductive bias ( changes of the learner 's policy ) . to evaluate long-term effects of bias shifts setting the stage for later bias shifts we use the success-story algorithm ( ssa ) . ssa is occasionally called at times that may depend on the policy itself . it uses backtracking to undo those bias shifts that have not been empirically observed to trigger long-term reward accelerations ( measured up until the current ssa call ) . bias shifts that survive ssa represent a lifelong success history . until the next ssa call , they are considered useful and build the basis for additional bias shifts . ssa allows for plugging in a wide variety of learning algorithms . we plug in ( 1 ) a novel , adaptive extension of levin search and ( 2 ) a method for embedding the learner 's policy modification strategy within the policy itself ( incremental self-improvement ) . our inductive transfer case studies involve complex , partially observable environments where traditional reinforcement learning fails . story_separator_special_tag feature selection , as a preprocessing step to machine learning , is effective in reducing dimensionality , removing irrelevant data , increasing learning accuracy , and improving result comprehensibility . however , the recent increase of dimensionality of data poses a severe challenge to many existing feature selection methods with respect to efficiency and effectiveness . in this work , we introduce a novel concept , predominant correlation , and propose a fast filter method which can identify relevant features as well as redundancy among relevant features without pairwise correlation analysis .
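the usual information-theoretic building block for such filters is symmetrical uncertainty , a normalized information gain between two discrete variables . the sketch below uses it in a simplified relevance-then-redundancy filter ; note that , unlike the method described above , this naive version does compare candidate pairs directly , and the relevance threshold is an invented value .

```python
import numpy as np

def entropy(x):
    _, c = np.unique(x, return_counts=True)
    p = c / c.sum()
    return -(p * np.log2(p)).sum()

def symmetrical_uncertainty(a, b):
    # 2 * information gain / ( h(a) + h(b) ) , lies in [0 , 1]
    ha, hb = entropy(a), entropy(b)
    joint = entropy([f"{u}|{v}" for u, v in zip(a, b)])
    gain = ha + hb - joint
    return 0.0 if ha + hb == 0 else 2.0 * gain / (ha + hb)

def select(X, y, delta=0.1):
    # keep features relevant to the class , then drop any whose correlation with
    # an already-kept feature is at least its correlation with the class
    order = sorted(range(X.shape[1]), key=lambda j: -symmetrical_uncertainty(X[:, j], y))
    kept = []
    for j in order:
        su_y = symmetrical_uncertainty(X[:, j], y)
        if su_y >= delta and all(symmetrical_uncertainty(X[:, j], X[:, k]) < su_y for k in kept):
            kept.append(j)
    return kept

rng = np.random.default_rng(0)
y = rng.integers(0, 2, size=200)
X = np.stack([y, y, rng.integers(0, 2, size=200)], axis=1)  # two copies of y + noise
print(select(X, y))  # -> [0] : the duplicate and the noise feature are dropped
```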
the efficiency and effectiveness of our method are demonstrated through extensive comparisons with other methods using real-world data of high dimensionality story_separator_special_tag research and industry increasingly make use of large amounts of data to guide decision-making . to do this , however , data needs to be analyzed in typically nontrivial refinement processes , which require technical expertise about methods and algorithms , experience with how a precise analysis should proceed , and knowledge about an exploding number of analytic approaches . to alleviate these problems , a plethora of different systems have been proposed that intelligently help users to analyze their data . this article provides a first survey of almost 30 years of research on intelligent discovery assistants ( idas ) . it explicates the types of help idas can provide to users and the kinds of ( background ) knowledge they leverage to provide this help . furthermore , it provides an overview of the systems developed over the past years , identifies their most important features , and sketches an ideal future ida as well as the challenges on the road ahead . story_separator_special_tag recent results indicate that the generic descriptors extracted from convolutional neural networks are very powerful . this paper adds to the mounting evidence that this is indeed the case . we report on a series of experiments conducted for different recognition tasks using the publicly available code and model of the overfeat network which was trained to perform object classification on ilsvrc13 . we use features extracted from the overfeat network as a generic image representation to tackle the diverse range of recognition tasks of object image classification , scene recognition , fine grained recognition , attribute detection and image retrieval applied to a diverse set of datasets . we selected these tasks and datasets as they gradually move further away from the original task and data the overfeat network was trained to solve . astonishingly , we report consistently superior results compared to the highly tuned state-of-the-art systems in all the visual classification tasks on various datasets . for instance retrieval it consistently outperforms low memory footprint methods except for the sculptures dataset . the results are achieved using a linear svm classifier ( or l2 distance in the case of retrieval ) applied to a feature representation story_separator_special_tag chaos is one of the most important phenomena based on complex nonlinear dynamics . in this paper , we study the chaotic t system , which is derived from the lorenz chaotic system . considering the master and slave systems , we design a controller to synchronize these two systems . in this paper , according to unknown and uncertain system parameters , a controller is designed for synchronization via hybrid adaptive and gbm methods . story_separator_special_tag the algorithm selection problem [ rice 1976 ] seeks to answer the question : which algorithm is likely to perform best for my problem ? recognizing the problem as a learning task in the early 1990s , the machine learning community has developed the field of meta-learning , focused on learning about learning algorithm performance on classification problems .
but there has been only limited generalization of these ideas beyond classification , and many related attempts have been made in other disciplines ( such as ai and operations research ) to tackle the algorithm selection problem in different ways , introducing different terminology , and overlooking the similarities of approaches . in this sense , there is much to be gained from a greater awareness of developments in meta-learning , and how these ideas can be generalized to learn about the behaviors of other ( nonlearning ) algorithms . in this article we present a unified framework for considering the algorithm selection problem as a learning problem , and use this framework to tie together the cross-disciplinary developments in tackling the algorithm selection problem . we discuss the generalization of meta-learning concepts to algorithms focused on tasks including sorting , story_separator_special_tag we propose prototypical networks for the problem of few-shot classification , where a classifier must generalize to new classes not seen in the training set , given only a small number of examples of each new class . prototypical networks learn a metric space in which classification can be performed by computing distances to prototype representations of each class . compared to recent approaches for few-shot learning , they reflect a simpler inductive bias that is beneficial in this limited-data regime , and achieve excellent results . we provide an analysis showing that some simple design decisions can yield substantial improvements over recent approaches involving complicated architectural choices and meta-learning . we further extend prototypical networks to zero-shot learning and achieve state-of-the-art results on the cu-birds dataset . story_separator_special_tag when facing the need to select the most appropriate algorithm to apply on a new data set , data analysts often follow an approach which can be related to test-driving cars to decide which one to buy : apply the algorithms on a sample of the data to quickly obtain rough estimates of their performance . these estimates are used to select one or a few of those algorithms to be tried out on the full data set . we describe sampling-based landmarks ( sl ) , a systematization of this approach , building on earlier work on landmarking and sampling . sl are estimates of the performance of algorithms on a small sample of the data that are used as predictors of the performance of those algorithms on the full set . we also describe relative landmarks ( rl ) , which address the inability of earlier landmarks to assess relative performance of algorithms . rl aggregate landmarks to obtain predictors of relative performance . our experiments indicate that the combination of these two improvements , which we call sampling-based relative landmarks , is better for ranking than traditional data characterization measures . story_separator_special_tag the support vector machine algorithm is sensitive to the choice of parameter settings . if these are not set correctly , the algorithm may have a substandard performance . suggesting a good setting is thus an important problem . we propose a meta-learning methodology for this purpose and exploit information about the past performance of different settings . the methodology is applied to set the width of the gaussian kernel .
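a minimal rendering of that methodology 's spirit is nearest-neighbour transfer over meta-features : describe the new dataset , find the most similar past datasets , and reuse the kernel widths that worked there . everything in the sketch ( meta-features , widths , distance ) is invented for illustration .

```python
import numpy as np

# hypothetical meta-data from past experiments : per dataset , some meta-features
# ( n_instances , n_features , class entropy ) and the best gaussian-kernel width
meta_X = np.array([[1000.0, 20.0, 0.9],
                   [200.0, 5.0, 0.4],
                   [5000.0, 100.0, 1.3]])
best_width = np.array([0.5, 2.0, 0.1])

def recommend_width(new_meta, k=2):
    # z-score the meta-features , take the k nearest past datasets , average widths
    mu, sd = meta_X.mean(axis=0), meta_X.std(axis=0)
    dist = np.linalg.norm((meta_X - mu) / sd - (new_meta - mu) / sd, axis=1)
    return best_width[np.argsort(dist)[:k]].mean()

print(recommend_width(np.array([800.0, 15.0, 0.8])))
```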
we carry out an extensive empirical evaluation , including comparisons with other methods ( fixed default ranking ; selection based on cross-validation and a heuristic method commonly used to set the width of the svm kernel ) . we show that our methodology can select settings with low error while providing significant savings in time . further work should be carried out to see how the methodology could be adapted to different parameter setting tasks . story_separator_special_tag meta-learning techniques can be very useful for supporting non-expert users in the algorithm selection task . in this work , we investigate the use of different components in an unsupervised meta-learning framework . in such a scheme , the system aims to predict , for a new learning task , the ranking of the candidate clustering algorithms according to the knowledge previously acquired . in the context of unsupervised meta-learning techniques , we analyzed two different sets of meta-features , nine different candidate clustering algorithms and two learning methods as meta-learners . such analysis showed that the system , using mlp and svr meta-learners , was able to successfully associate the proposed sets of dataset characteristics to the performance of the new candidate algorithms . in fact , a hypothesis test showed that the correlation between the predicted and ideal rankings was significantly higher than that of the default ranking method . in this sense , we could also validate the use of the proposed sets of meta-features for describing the artificial learning tasks . story_separator_special_tag bayesian optimization is a prominent method for optimizing expensive-to-evaluate black-box functions that is widely applied to tuning the hyperparameters of machine learning algorithms . despite its successes , the prototypical bayesian optimization approach - using gaussian process models - does not scale well to either many hyperparameters or many function evaluations . attacking this lack of scalability and flexibility is thus one of the key challenges of the field . we present a general approach for using flexible parametric models ( neural networks ) for bayesian optimization , staying as close to a truly bayesian treatment as possible . we obtain scalability through stochastic gradient hamiltonian monte carlo , whose robustness we improve via a scale adaptation . experiments including multi-task bayesian optimization with 21 tasks , parallel optimization of deep neural networks and deep reinforcement learning show the power and flexibility of this approach . story_separator_special_tag we consider the task of assigning experts from a portfolio of specialists in order to solve a set of tasks . we apply a bayesian model which combines collaborative filtering with a feature-based description of tasks and experts to yield a general framework for managing a portfolio of experts . the model learns an embedding of tasks and problems into a latent space in which affinity is measured by the inner product . the model can be trained incrementally and can track non-stationary data , following potentially changing expert and task characteristics . the approach allows us to use a principled decision theoretic framework for expert selection , enabling the user to choose a utility function that best suits their objectives . the model component for taking into account the performance feedback data is pluggable , allowing flexibility . we apply the model to manage a portfolio of algorithms to solve hard combinatorial problems .
this is a well studied area and we demonstrate a large improvement on the state of the art in one domain ( constraint solving ) , and in a second domain ( combinatorial auctions ) we created a portfolio that performed significantly better than any single algorithm . story_separator_special_tag a basic step for each data-mining or machine learning task is to determine which model to choose based on the problem and the data at hand . in this paper we investigate when non-linear classifiers outperform linear classifiers by means of a large scale experiment . we benchmark linear and non-linear versions of three types of classifiers ( support vector machines ; neural networks ; and decision trees ) , and analyze the results to determine on what type of datasets the non-linear version performs better . to the best of our knowledge , this work is the first principled and large-scale attempt to support the common assumption that non-linear classifiers excel only when large amounts of data are available . story_separator_special_tag people from a variety of industrial domains are beginning to realise that appropriate use of machine learning techniques for their data mining projects could bring great benefits . end-users now have to face the new problem of how to choose a combination of data processing tools and algorithms for a given dataset . this problem is usually termed the full model selection ( fms ) problem . extending our previous work [ 10 ] , in this paper we introduce a framework for designing fms algorithms . under this framework , we propose a novel algorithm combining genetic algorithms ( ga ) and particle swarm optimization ( pso ) named gps ( which stands for ga-pso-fms ) , in which a ga is used for searching the optimal structure for a data mining solution , and pso is used for searching optimal parameters for a particular structure instance . given a classification dataset , gps outputs an fms solution as a directed acyclic graph consisting of diverse data mining operators that are available to the problem . experimental results demonstrate the benefit of the algorithm . we also present , with detailed analysis , two model-tree-based variants story_separator_special_tag in this paper , we present a novel meta-feature generation method in the context of meta-learning , which is based on rules that compare the performance of individual base learners in a one-against-one manner . in addition to these new meta-features , we also introduce a new meta-learner called approximate ranking tree forests ( art forests ) that performs very competitively when compared with several state-of-the-art meta-learners . our experimental results are based on a large collection of datasets and show that the proposed new techniques can improve the overall performance of meta-learning for algorithm ranking significantly . a key point in our approach is that each performance figure of any base learner for any specific dataset is generated by optimising the parameters of the base learner separately for each dataset . story_separator_special_tag bayesian optimization has recently been proposed as a framework for automatically tuning the hyperparameters of machine learning models and has been shown to yield state-of-the-art performance with impressive ease and efficiency . in this paper , we explore whether it is possible to transfer the knowledge gained from previous optimizations to new tasks in order to find optimal hyperparameter settings more efficiently .
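a common construction for sharing information across tasks in a gaussian process is a product kernel : an ordinary kernel over hyperparameter settings multiplied by a task-similarity term , so that evaluations on related datasets inform predictions on the current one . the sketch below shows that generic construction ( the inter-task correlation matrix is assumed , not learned , and this is not claimed to be the exact model of the paper above ) .

```python
import numpy as np

def rbf(a, b, length_scale=1.0):
    d = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d / length_scale ** 2)

def multitask_kernel(X, tasks, B, length_scale=1.0):
    # k( (x , s) , (x' , t) ) = k_rbf( x , x' ) * b[s , t] , where b encodes
    # task similarity ; the schur product of two psd matrices stays psd
    return rbf(X, X, length_scale) * B[np.ix_(tasks, tasks)]

X = np.array([[0.1], [0.2], [0.15]])      # hyperparameter settings evaluated so far
tasks = np.array([0, 0, 1])               # which dataset each evaluation came from
B = np.array([[1.0, 0.7],
              [0.7, 1.0]])                # assumed inter-task correlation
K = multitask_kernel(X, tasks, B) + 1e-8 * np.eye(len(X))
print(np.linalg.cholesky(K).shape)        # (3 , 3) : valid covariance matrix
```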
our approach is based on extending multi-task gaussian processes to the framework of bayesian optimization . we show that this method significantly speeds up the optimization process when compared to the standard single-task approach . we further propose a straightforward extension of our algorithm in order to jointly minimize the average error across multiple tasks and demonstrate how this can be used to greatly speed up k-fold cross-validation . lastly , we propose an adaptation of a recently developed acquisition function , entropy search , to the cost-sensitive , multi-task setting . we demonstrate the utility of this new acquisition function by leveraging a small dataset to explore hyper-parameter settings for a large dataset . our algorithm dynamically chooses which dataset to query in order to yield the most information per unit cost . story_separator_special_tag in this paper we develop a dynamic form of bayesian optimization for machine learning models with the goal of rapidly finding good hyperparameter settings . our method uses the partial information gained during the training of a machine learning model in order to decide whether to pause training and start a new model , or resume the training of a previously-considered model . we specifically tailor our method to machine learning problems by developing a novel positive-definite covariance kernel to capture a variety of training curves . furthermore , we develop a gaussian process prior that scales gracefully with additional temporal observations . finally , we provide an information-theoretic framework to automate the decision process . experiments on several common machine learning models show that our approach is extremely effective in practice . story_separator_special_tag most research on machine learning has focused on scenarios in which a learner faces a single , isolated learning task . the lifelong learning framework assumes instead that the learner encounters a multitude of related learning tasks over its lifetime , providing the opportunity for the transfer of knowledge . this paper studies lifelong learning in the context of binary classification . it presents the invariance approach , in which knowledge is transferred via a learned model of the invariances of the domain . results on learning to recognize objects from color images demonstrate superior generalization capabilities if invariances are learned and used to bias subsequent learning . story_separator_special_tag over the past three decades or so , research on machine learning and data mining has led to a wide variety of algorithms that learn general functions from experience . as machine learning is maturing , it has begun to make the successful transition from academic research to various practical applications . generic techniques such as decision trees and artificial neural networks , for example , are now being used in various commercial and industrial applications ( see e.g. , [ langley , 1992 ; widrow et al. , 1994 ] ) . story_separator_special_tag when considering new datasets for analysis with machine learning algorithms , we encounter the problem of choosing the algorithm which is best suited for the task at hand . the aim of meta-level learning is to relate the performance of different machine learning algorithms to the characteristics of the dataset . the relation is induced on the basis of empirical data about the performance of machine learning algorithms on the different datasets . 
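the simplest instantiation of this idea is nearest-neighbour ranking : characterize the new dataset , find the most similar previously seen datasets , and average the algorithm rankings observed there . the meta-data below are invented for illustration .

```python
import numpy as np

# hypothetical meta-data : per past dataset , two meta-features and the observed
# ranks of three candidate algorithms ( 1 = best )
feats = np.array([[1000.0, 10.0], [100000.0, 300.0], [500.0, 4.0]])
ranks = np.array([[1, 2, 3],
                  [3, 1, 2],
                  [1, 3, 2]])  # columns : algorithm a , b , c

def recommend_ranking(new_feats, k=2):
    # average the rankings observed on the k most similar past datasets
    mu, sd = feats.mean(axis=0), feats.std(axis=0)
    dist = np.linalg.norm((feats - mu) / sd - (new_feats - mu) / sd, axis=1)
    nearest = np.argsort(dist)[:k]
    return ranks[nearest].mean(axis=0).argsort()  # algorithm indices , best first

print(recommend_ranking(np.array([800.0, 8.0])))  # -> [0 1 2]
```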
story_separator_special_tag a novel class of applications of predictive clustering trees is addressed , namely ranking . predictive clustering trees , as implemented in clus , allow for predicting multiple target variables . this approach makes sense especially if the target variables are not independent of each other . this is typically the case in ranking , where the ( relative ) performance of several approaches on the same task has to be predicted from a given description of the task . we propose to use predictive clustering trees for ranking . compared to existing ranking approaches , which are instance-based , our approach also allows for an explanation of the predicted rankings . we illustrate our approach on the task of ranking machine learning algorithms , where the ( relative ) performance of the learning algorithms on a dataset has to be predicted from a given dataset description . story_separator_special_tag one of the challenges in machine learning is to find a classifier and parameter settings that work well on a given dataset . evaluating all possible combinations typically takes too much time , hence many solutions have been proposed that attempt to predict which classifiers are most promising to try . as the first recommended classifier is not always the correct choice , multiple recommendations should be made , making this a ranking problem rather than a classification problem . even though this is a well studied problem , there is currently no good way of evaluating such rankings . we advocate the use of loss time curves , as used in the optimization literature . these visualize the amount of budget ( time ) needed to converge to an acceptable solution . we also investigate a method that utilizes the measured performances of classifiers on small samples of data to make such recommendations , and adapt it so that it works well in loss time space . experimental results show that this method converges extremely fast to an acceptable solution . story_separator_special_tag ensembles of classifiers are among the best performing classifiers available in many data mining applications , including the mining of data streams . rather than training one classifier , multiple classifiers are trained , and their predictions are combined according to a given voting schedule . an important prerequisite for ensembles to be successful is that the individual models are diverse . one way to vastly increase the diversity among the models is to build a heterogeneous ensemble , comprised of fundamentally different model types . however , most ensembles developed specifically for the dynamic data stream setting rely on only one type of base-level classifier , most often hoeffding trees . we study the use of heterogeneous ensembles for data streams . we introduce the online performance estimation framework , which dynamically weights the votes of individual classifiers in an ensemble . using an internal evaluation on recent training data , it measures how well ensemble members performed on this data and dynamically updates their weights . experiments over a wide range of data streams show performance that is competitive with state of the art ensemble techniques , including online bagging and leveraging bagging , while being significantly faster . story_separator_special_tag with the advent of automated machine learning , automated hyperparameter optimization methods are by now routinely used in data mining .
however , this progress is not yet matched by equal progress on automatic analyses that yield information beyond performance-optimizing hyperparameter settings . in this work , we aim to answer the following two questions : given an algorithm , what are generally its most important hyperparameters , and what are typically good values for these ? we present methodology and a framework to answer these questions based on meta-learning across many datasets . we apply this methodology using the experimental meta-data available on openml to determine the most important hyperparameters of support vector machines , random forests and adaboost , and to infer priors for all their hyperparameters . the results , obtained fully automatically , provide a quantitative basis to focus efforts in both manual algorithm design and in automated hyperparameter optimization . the conducted experiments confirm that the hyperparameters selected by the proposed method are indeed the most important ones and that the obtained priors also lead to statistically significant improvements in hyperparameter optimization . story_separator_special_tag we explore the possibilities of meta-learning on data streams , in particular algorithm selection . in a first experiment we calculate the characteristics of a small sample of a data stream , and try to predict which classifier performs best on the entire stream . this yields promising results and interesting patterns . in a second experiment , we build a meta-classifier that predicts , based on measurable data characteristics in a window of the data stream , the best classifier for the next window . the results show that this meta-algorithm is very competitive with state of the art ensembles , such as ozabag , ozaboost and leveraged bagging . the results of all experiments are made publicly available in an online experiment database , for the purpose of verifiability , reproducibility and generalizability . story_separator_special_tag many sciences have made significant breakthroughs by adopting online tools that help organize , structure and mine information that is too detailed to be printed in journals . in this paper , we introduce openml , a place for machine learning researchers to share and organize data in fine detail , so that they can work more effectively , be more visible , and collaborate with others to tackle harder problems . we discuss how openml relates to other examples of networked science and what benefits it brings for machine learning research , individual scientists , as well as students and practitioners . story_separator_special_tag matrix factorization ( mf ) is one of the most popular techniques for product recommendation , but is known to suffer from serious cold-start problems . item cold-start problems are particularly acute in settings such as tweet recommendation where new items arrive continuously . in this paper , we present a meta-learning strategy to address item cold-start when new items arrive continuously . we propose two deep neural network architectures that implement our meta-learning strategy . the first architecture learns a linear classifier whose weights are determined by the item history while the second architecture learns a neural network whose biases are instead adjusted . we evaluate our techniques on the real-world problem of tweet recommendation . on production data at twitter , we demonstrate that our proposed techniques significantly beat the mf baseline and also outperform production models for tweet recommendation . 
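to see what `` a linear classifier whose weights are determined by the item history '' can mean in code , here is a toy version : a meta-learned matrix maps an item 's engagement-history vector to the weights of a per-item logistic scorer over user features . the map is random here purely for illustration ; in the approach described it would be trained end-to-end across many items .

```python
import numpy as np

rng = np.random.default_rng(0)
d_hist, d_user = 16, 8

# stand-in for the meta-learned history-to-weights map ( trained in practice )
H = 0.1 * rng.normal(size=(d_user, d_hist))

def item_classifier(item_history):
    # produce a fresh linear classifier conditioned on this item 's history ,
    # so a brand-new item gets usable weights without per-item retraining
    w = H @ item_history
    return lambda user: 1.0 / (1.0 + np.exp(-(w @ user)))  # p( user engages )

score = item_classifier(rng.normal(size=d_hist))   # cold-start item
print(score(rng.normal(size=d_user)))
```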
story_separator_special_tag this paper illustrates an approach to understand differences in accuracy performance among learning algorithms . the study proceeds in two steps : 1 ) providing a characterization of real-world domains , and 2 ) analyzing the internal mechanism of two learning algorithms , c4.5trees and c4.5rules . a functional view of the internal components of these two algorithms is related to the characteristics of the domain under study ; the analysis helps to predict differences in accuracy behavior . empirical results obtained over a set of real-world domains correlate well with the predictions . this two-step approach advocates the view of meta-learning as the quest for a theory that can explain the class of domains for which a learning algorithm will output accurate predictions . story_separator_special_tag a new and distinct cultivar of geranium plant named hwd campana , characterized by its semi-double purple flowers ; compact plant size ; freely branching habit ; and a large number of umbels per plant . story_separator_special_tag learning from a few examples remains a key challenge in machine learning . despite recent advances in important domains such as vision and language , the standard supervised deep learning paradigm does not offer a satisfactory solution for learning new concepts rapidly from little data . in this work , we employ ideas from metric learning based on deep neural features and from recent advances that augment neural networks with external memories . our framework learns a network that maps a small labelled support set and an unlabelled example to its label , obviating the need for fine-tuning to adapt to new class types . we then define one-shot learning problems on vision ( using omniglot , imagenet ) and language tasks . our algorithm improves one-shot accuracy on imagenet from 87.6 % to 93.2 % and from 88.0 % to 93.8 % on omniglot compared to competing approaches . we also demonstrate the usefulness of the same model on language modeling by introducing a one-shot task on the penn treebank . story_separator_special_tag in recent years deep reinforcement learning ( rl ) systems have attained superhuman performance in a number of challenging task domains . however , a major limitation of such applications is their demand for massive amounts of training data . a critical present objective is thus to develop deep rl methods that can adapt rapidly to new tasks . in the present work we introduce a novel approach to this challenge , which we refer to as deep meta-reinforcement learning . previous work has shown that recurrent networks can support meta-learning in a fully supervised context . we extend this approach to the rl setting . what emerges is a system that is trained using one rl algorithm , but whose recurrent dynamics implement a second , quite separate rl procedure . this second , learned rl algorithm can differ from the original one in arbitrary ways . importantly , because it is learned , it is configured to exploit structure in the training domain . we unpack these points in a series of seven proof-of-concept experiments , each of which examines a key aspect of deep meta-rl . we consider prospects for extending and scaling up the approach , story_separator_special_tag the performance of many machine learning algorithms depends on their hyperparameter settings . the goal of this study is to determine whether it is important to tune a hyperparameter or whether it can be safely set to a default value .
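the central quantity in this line of work is tuning risk : the average performance lost across datasets when a hyperparameter is left at its default rather than tuned . with invented numbers , the computation is a one-liner :

```python
import numpy as np

# per-dataset accuracies with the hyperparameter tuned vs left at its default
# ( all values invented for illustration )
tuned = np.array([0.91, 0.84, 0.78, 0.95])
default = np.array([0.90, 0.84, 0.71, 0.95])

tuning_risk = (tuned - default).mean()
print(f"tuning risk: {tuning_risk:.3f}")  # 0.020 -> ~2 accuracy points lost on average
```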
we present a methodology to determine the importance of tuning a hyperparameter based on a non-inferiority test and tuning risk : the performance loss that is incurred when a hyperparameter is not tuned , but set to a default value . because our methods require the notion of a default parameter , we present a simple procedure that can be used to determine reasonable default parameters . we apply our methods in a benchmark study using 59 datasets from openml . our results show that leaving particular hyperparameters at their default value is non-inferior to tuning these hyperparameters . in some cases , leaving the hyperparameter at its default value even outperforms tuning it using a search procedure with a limited number of iterations . story_separator_special_tag in automated machine learning ( automl ) , the process of engineering machine learning applications with respect to a specific problem is ( partially ) automated . various automl tools have already been introduced to provide out-of-the-box machine learning functionality . more specifically , by selecting machine learning algorithms and optimizing their hyperparameters , these tools produce a machine learning pipeline tailored to the problem at hand . except for tpot , all of these tools restrict the maximum number of processing steps of such a pipeline . however , as tpot follows an evolutionary approach , it suffers from performance issues when dealing with larger datasets . in this paper , we present an alternative approach leveraging hierarchical planning to configure machine learning pipelines that are unlimited in length . we evaluate our approach and find its performance to be competitive with other automl tools , including tpot . story_separator_special_tag hyperparameter optimization is often done manually or by using a grid search . however , recent research has shown that automatic optimization techniques are able to accelerate this optimization process and find hyperparameter configurations that lead to better models . currently , transferring knowledge from previous experiments to a new experiment is of particular interest because it has been shown to further improve hyperparameter optimization . we propose to transfer knowledge by means of an initialization strategy for hyperparameter optimization . in contrast to the current state of the art initialization strategies , our strategy is neither limited to hyperparameter configurations that have been evaluated on previous experiments nor does it need meta-features . the initial hyperparameter configurations are derived by optimizing for a meta-loss formally defined in this paper . this loss depends on the hyperparameter response function of the data sets that were investigated in past experiments . since this function is unknown and only a few observations are given , the meta-loss is not differentiable . we propose to approximate the response function by a differentiable plug-in estimator . then , we are able to learn the initial hyperparameter configuration sequence by applying gradient-based optimization . story_separator_special_tag the optimization of hyperparameters is often done manually or exhaustively but recent work has shown that automatic methods can optimize hyperparameters faster and even achieve better final performance . sequential model-based optimization ( smbo ) is the current state of the art framework for automatic hyperparameter optimization .
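the smbo loop itself is compact : evaluate a few initial configurations , then repeatedly fit a surrogate to the evaluation history , pick the candidate the acquisition function likes best , and evaluate it . the sketch below shows that shape with a deliberately crude nearest-neighbour surrogate and a distance-based uncertainty stand-in ; these components are toys of our own , not those proposed by any paper cited here .

```python
import numpy as np

def smbo(objective, space, n_init=5, n_iter=20, seed=0):
    rng = np.random.default_rng(seed)
    X = list(rng.choice(space, size=n_init))   # initialization component
    y = [objective(x) for x in X]
    for _ in range(n_iter):
        best = min(y)
        def acquisition(c):
            d = np.abs(np.array(X) - c)
            mu = np.array(y)[d.argmin()]       # toy surrogate : nearest neighbour
            sigma = d.min()                    # toy uncertainty : distance to data
            return (best - mu) + sigma         # favour good and unexplored regions
        x_next = max(space, key=acquisition)   # candidate selection
        X.append(x_next)
        y.append(objective(x_next))            # expensive evaluation
    return X[int(np.argmin(y))], min(y)

space = np.linspace(0.0, 4.0, 81)
print(smbo(lambda x: (x - 1.3) ** 2, space))   # should end up near x = 1.3
```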
currently , it consists of three components : a surrogate model , an acquisition function and an initialization technique . we propose to add a fourth component , a way of pruning the hyperparameter search space , which is a common way of accelerating the search in many domains but has not yet been applied to hyperparameter optimization . we propose to discard regions of the search space that are unlikely to contain better hyperparameter configurations by transferring knowledge from past experiments on other data sets as well as taking into account the evaluations already done on the current data set . pruning as a new component for smbo is an orthogonal contribution but nevertheless we compare it to surrogate models that learn across data sets and extensively investigate the impact of pruning with and without initialization for various state of the art surrogate models . the experiments are conducted on two newly story_separator_special_tag algorithm selection as well as hyperparameter optimization are tedious tasks that have to be dealt with when applying machine learning to real-world problems . sequential model-based optimization ( smbo ) , based on so-called surrogate models , has been employed to allow for faster and more direct hyperparameter optimization . a surrogate model is a machine learning regression model which is trained on the meta-level instances in order to predict the performance of an algorithm on a specific data set given the hyperparameter settings and data set descriptors . gaussian processes , for example , make good surrogate models as they provide probability distributions over labels . recent work on smbo also includes meta-data , i.e . observed hyperparameter performances on other data sets , into the process of hyperparameter optimization . this can , for example , be accomplished by learning transfer surrogate models on all available instances of meta-knowledge ; however , the increasing amount of meta-information can make gaussian processes infeasible , as they require the inversion of a large covariance matrix which grows with the number of instances . consequently , instead of learning a joint surrogate model on all of the meta-data , we propose story_separator_special_tag we show that all algorithms that search for an extremum of a cost function perform exactly the same , when averaged over all possible cost functions . in particular , if algorithm a outperforms algorithm b on some cost functions , then loosely speaking there must exist exactly as many other functions where b outperforms a. starting from this we analyze a number of the other a priori characteristics of the search problem , like its geometry and its information-theoretic aspects . this analysis allows us to derive mathematical benchmarks for assessing a particular search algorithm 's performance . we also investigate minimax aspects of the search problem , the validity of using characteristics of a partial search over a cost function to predict future behavior of the search algorithm on that cost function , and time-varying cost functions . we conclude with some discussion of the justifiability of biologically-inspired search methods . story_separator_special_tag algorithm selection and hyperparameter tuning remain two of the most challenging tasks in machine learning . the number of machine learning applications is growing much faster than the number of machine learning experts , hence we see an increasing demand for efficient automation of learning processes .
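the smbo abstracts above enumerate four components : an initialization , a surrogate model , an acquisition function , and pruning of the search space . below , a hedged sketch of how the pieces fit together , with a gaussian-process surrogate , expected improvement , and a simple prune rule ( discard candidates the surrogate scores far below the incumbent ) standing in for the transfer-based criterion of the paper .

```python
# illustrative smbo loop (minimization); configs are fixed-length numeric tuples.
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor

def smbo_minimize(objective, pool, init_configs, n_iter=30, prune_z=-3.0):
    X = [tuple(c) for c in init_configs]                 # 1) initialization
    y = [objective(c) for c in X]
    pool = [tuple(c) for c in pool if tuple(c) not in X]
    for _ in range(n_iter):
        if not pool:
            break
        gp = GaussianProcessRegressor(normalize_y=True)  # 2) surrogate model
        gp.fit(np.array(X), np.array(y))
        mu, sd = gp.predict(np.array(pool), return_std=True)
        sd = np.maximum(sd, 1e-9)
        z = (min(y) - mu) / sd
        ei = sd * (z * norm.cdf(z) + norm.pdf(z))        # 3) acquisition (EI)
        keep = z > prune_z                               # 4) pruning
        pool = [c for c, k in zip(pool, keep) if k]
        ei = ei[keep]
        if not pool:
            break
        nxt = pool.pop(int(np.argmax(ei)))
        X.append(nxt)
        y.append(objective(nxt))
    return X[int(np.argmin(y))], min(y)
```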
here , we introduce oboe , an algorithm for time-constrained model selection and hyperparameter tuning . taking advantage of similarity between datasets , oboe finds promising algorithm and hyperparameter configurations through collaborative filtering . our system explores these models under time constraints , so that rapid initializations can be provided to warm-start more fine-grained optimization methods . one novel aspect of our approach is a new heuristic for active learning in time-constrained matrix completion based on optimal experiment design . our experiments demonstrate that oboe delivers state-of-the-art performance faster than competing approaches on a test bed of supervised learning problems . story_separator_special_tag we propose a fast and effective algorithm for automatic hyperparameter tuning that can generalize across datasets . our method is an instance of sequential model-based optimization ( smbo ) that transfers information by constructing a common response surface for all datasets , similar to bardenet et al . ( 2013 ) . the time complexity of reconstructing the response surface at every smbo iteration in our method is linear in the number of trials ( significantly less than previous work with comparable performance ) , allowing the method to realistically scale to many more datasets . specifically , we use deviations from the per-dataset mean as the response values . we empirically show the superiority of our method on a large number of synthetic and real-world datasets for tuning hyperparameters of logistic regression and ensembles of classifiers . story_separator_special_tag many deep neural networks trained on natural images exhibit a curious phenomenon in common : on the first layer they learn features similar to gabor filters and color blobs . such first-layer features appear not to be specific to a particular dataset or task , but general in that they are applicable to many datasets and tasks . features must eventually transition from general to specific by the last layer of the network , but this transition has not been studied extensively . in this paper we experimentally quantify the generality versus specificity of neurons in each layer of a deep convolutional neural network and report a few surprising results . transferability is negatively affected by two distinct issues : ( 1 ) the specialization of higher layer neurons to their original task at the expense of performance on the target task , which was expected , and ( 2 ) optimization difficulties related to splitting networks between co-adapted neurons , which was not expected . in an example network trained on imagenet , we demonstrate that either of these two issues may dominate , depending on whether features are transferred from the bottom , middle , or top of
social media enable promising new approaches to measuring economic activity and analyzing economic behavior at high frequency and in real time using information independent from standard survey and administrative sources . this paper uses data from twitter to create indexes of job loss , job search , and job posting . signals are derived by counting job-related phrases in tweets such as `` lost my job . '' the social media indexes are constructed from the principal components of these signals . the university of michigan social media job loss index tracks initial claims for unemployment insurance at medium and high frequencies and predicts 15 to 20 percent of the variance of the prediction error of the consensus forecast for initial claims . the social media indexes provide real-time indicators of events such as hurricane sandy and the 2013 government shutdown . comparing the job loss index with the search and posting indexes indicates that the beveridge curve has been shifting inward since 2011. the university of michigan social media job loss index is updated weekly and is available at http : //econprediction.eecs.umich.edu/ . story_separator_special_tag every day , millions of users reveal their interests on facebook , which are then monetized via targeted advertisement marketing campaigns . in this paper , we explore the use of demographically rich facebook ads audience estimates for tracking non-communicable diseases around the world . across 47 countries , we compute the audiences of marker interests , and evaluate their potential in tracking health conditions associated with tobacco use , obesity , and diabetes , compared to the performance of placebo interests . despite its huge potential , we find that , for modeling prevalence of health conditions across countries , differences in these interest audiences are only weakly indicative of the corresponding prevalence rates . within the countries , however , our approach provides interesting insights on trends of health awareness across demographic groups . finally , we provide a temporal error analysis to expose the potential pitfalls of using facebook 's marketing api as a black box . story_separator_special_tag big data nowadays is a fashionable topic , independently of what people mean when they use this term . but being big is just a matter of volume , although there is no clear agreement on the size threshold . on the other hand , it is easy to capture large amounts of data using a brute force approach . so the real goal should not be big data but to ask ourselves , for a given problem , what is the right data and how much of it is needed . for some problems this would imply big data , but for the majority of the problems much less data is and will be needed . in this talk we explore the trade-offs involved and the main problems that come with big data using the web as a case study : scalability , redundancy , bias , noise , spam , and privacy . story_separator_special_tag online contract labor portals ( i.e. , crowdsourcing ) have recently emerged as attractive alternatives to university participant pools for the purposes of collecting survey data for behavioral research .
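a minimal sketch , not the authors ' pipeline , of the index construction described in the twitter job-loss abstract above : count job-related phrases per period and take the first principal component of the standardized signals . the phrase list is illustrative .

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

PHRASES = ["lost my job", "laid off", "looking for a job"]  # example signals

def job_loss_index(weekly_tweets):
    """weekly_tweets: list of lists of lowercased tweet texts, one list per week."""
    counts = np.array([[sum(p in t for t in week) for p in PHRASES]
                       for week in weekly_tweets], dtype=float)
    signals = StandardScaler().fit_transform(counts)      # standardize each signal
    return PCA(n_components=1).fit_transform(signals).ravel()  # first component
```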
however , prior research has not provided a thorough examination of crowdsourced data for organizational psychology research . we found that , as compared with a traditional university participant pool , crowdsourcing respondents were older , were more ethnically diverse , and had more work experience . additionally , the reliability of the data from the crowdsourcing sample was as good as or better than the corresponding university sample . moreover , measurement invariance generally held across these groups . we conclude that the use of these labor portals is an efficient and appropriate alternative to a university participant pool , despite small differences in personality and socially desirable responding across the samples . the risks and advantages of crowdsourcing are outlined , and an overview of practical and ethical guidelines is provided . story_separator_special_tag the body of content available on twitter undoubtedly contains a diverse range of political insight and commentary . but , to what extent is this representative of an electorate ? can we model political sentiment effectively enough to capture the voting intentions of a nation during an election campaign ? we use the recent irish general election as a case study for investigating the potential to model political sentiment through mining of social media . our approach combines sentiment analysis using supervised learning and volume-based measures . we evaluate against the conventional election polls and the final election result . we find that social analytics using both volume-based measures and sentiment analysis are predictive and we make a number of observations related to the task of monitoring public sentiment during an election campaign , including examining a variety of sample sizes , time periods , as well as methods for qualitatively exploring the underlying content . story_separator_special_tag background : the fields of demography , sociology , and socio-psychology have been increasingly drawing on social network theories , which posit that individual fertility decision-making depends in part on the fertility behavior of other members of the population , and on the structure of the interactions between individuals . after reviewing this literature , we highlight the benefits of taking a social network perspective on fertility and family research . objective : we review the literature that addresses the extent to which social mechanisms , such as social learning , social pressure , social contagion , and social support , influence childbearing decisions . conclusions : we find that all of the social mechanisms reviewed influence the beliefs and norms individuals hold regarding childbearing , their perceptions of having children , and the context of opportunities and constraints in which childbearing choices are made . the actual impact of these mechanisms on fertility tempo and quantum strongly depends on the structure of social interaction . 1 . introduction : demographers are interested in population fertility and its dynamics . changes in the tempo and quantum of fertility are macro phenomena ; i.e. , they are the aggregate result of the childbearing behavior of individual actors . efforts to better explain fertility dynamics inevitably story_separator_special_tag in this work we present a thorough quantitative analysis of information consumption patterns of qualitatively different information on facebook .
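a toy illustration of the two signal families the irish-election abstract above combines : per-party tweet volume share and a sentiment-weighted share . the `sentiment` callback stands in for the supervised classifier ; names and the mixing weight are assumptions .

```python
from collections import Counter

def predicted_vote_shares(tweets, parties, sentiment, alpha=0.5):
    """tweets: (party, text) pairs; sentiment(text) -> +1 / -1."""
    volume = Counter(party for party, _ in tweets)
    positive = Counter(party for party, text in tweets if sentiment(text) > 0)
    n_vol = max(sum(volume.values()), 1)
    n_pos = max(sum(positive.values()), 1)
    return {p: alpha * volume[p] / n_vol + (1 - alpha) * positive[p] / n_pos
            for p in parties}
```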
pages are categorized , according to their topics and the communities of interests they pertain to , in a ) alternative information sources ( diffusing topics that are neglected by science and mainstream media ) ; b ) online political activism ; and c ) mainstream media . we find similar information consumption patterns despite the very different nature of the contents . then , we classify users according to their interaction patterns among the different topics and measure how they responded to the injection of 2788 items of false information ( parodistic imitations of alternative stories ) . we find that users prominently interacting with alternative information sources , i.e . those more exposed to unsubstantiated claims , are more prone to interact with intentional and parodistic false claims . story_separator_special_tag knowing users ' views and demographic traits offers a great potential for personalizing web search results or related services such as query suggestion and query completion . such signals , however , are often only available for a small fraction of search users , namely those who log in with their social network account and allow its use for personalization of search results . in this paper , we offer a solution to this problem by showing how user demographic traits such as age and gender , and even political and religious views can be efficiently and accurately inferred based on their search query histories . this is accomplished in two steps : we first train predictive models based on the publicly available mypersonality dataset containing users ' facebook likes and their demographic information . we then match facebook likes with search queries using open directory project categories . finally , we apply the model trained on facebook likes to large-scale query logs of a commercial search engine while explicitly taking into account the difference between the trait distributions in the two datasets . we find that the accuracy of classifying age and gender , expressed by the area under the roc curve story_separator_special_tag youth unemployment rates are still at alarming levels in many countries , among them italy . direct consequences include poverty , social exclusion , and criminal behaviours , while the negative impact on future employability and wages can not be overlooked . in this study , we employ survey data together with social media data , and in particular likes on facebook pages , to analyse personality , moral values , as well as cultural elements of the young unemployed population in italy . our findings show that there are small but significant differences in personality and moral values , with unemployed males being less agreeable and unemployed females more open to new experiences . at the same time , the unemployed have a more collectivist point of view , placing more value on the in-group loyalty , authority , and purity foundations . interestingly , topic modelling analysis did not reveal major differences in interests and cultural elements of the unemployed . utilisation patterns emerged though ; the employed seem to use facebook to connect with local activities , while the unemployed use it mostly for entertainment purposes and as a source of news , making them susceptible to mis/disinformation .
we story_separator_special_tag amazon 's mechanical turk ( mturk ) is a relatively new website that contains the major elements required to conduct research : an integrated participant compensation system ; a large participant pool ; and a streamlined process of study design , participant recruitment , and data collection . in this article , we describe and evaluate the potential contributions of mturk to psychology and other social sciences . findings indicate that ( a ) mturk participants are slightly more demographically diverse than are standard internet samples and are significantly more diverse than typical american college samples ; ( b ) participation is affected by compensation rate and task length , but participants can still be recruited rapidly and inexpensively ; ( c ) realistic compensation rates do not affect data quality ; and ( d ) the data obtained are at least as reliable as those obtained via traditional methods . overall , mturk can be used to obtain high-quality data inexpensively and rapidly . story_separator_special_tag among those who have recently lost a job , social networks in general and online ones in particular may be useful to cope with stress and find new employment . this study focuses on the psychological and practical consequences of facebook use following job loss . by pairing longitudinal surveys of facebook users with logs of their online behavior , we examine how communication with different kinds of ties predicts improvements in stress , social support , bridging social capital , and whether they find new jobs . losing a job is associated with increases in stress , while talking with strong ties is generally associated with improvements in stress and social support . weak ties do not provide these benefits . bridging social capital comes from both strong and weak ties . surprisingly , individuals who have lost a job feel greater stress after talking with strong ties . contrary to the `` strength of weak ties '' hypothesis , communication with strong ties is more predictive of finding employment within three months . story_separator_special_tag social networks have become a central feature of everyday life . most young people are members of at least one online social network , and they naturally provide a great deal of personal information as a condition for participation in the rich online social lives these networks afford . increasingly , this information is being used as evidence in criminal and even civil legal proceedings . these latter uses , by actors involved in the justice system , are typically justified on the grounds that social network information is essentially public in nature , and thus does not generate a subjective expectation of privacy necessary to support a civil rights-based privacy protection . this justification , however , is based on the perceptions of individuals who are outside the online social network community , rather than reflecting the norms and privacy practices of participants in online social networks . this project takes a user-centric approach to the question of whether online social spaces are public venues , examining the information-related practices of social network participants , focusing on how they treat their own information and that of others posted in online social spaces . our results reveal that online story_separator_special_tag the digital traces that we leave online are increasingly fruitful sources of data for social scientists , including those interested in demographic research .
the collection and use of digital data also presents numerous statistical , computational , and ethical challenges , motivating the development of new research approaches to address these burgeoning issues . in this article , we argue that researchers with formal training in demography - those who have a history of developing innovative approaches to using challenging data - are well positioned to contribute to this area of work . we discuss the benefits and challenges of using digital trace data for social and demographic research , and we review examples of current demographic literature that creatively use digital trace data to study processes related to fertility , mortality , and migration . focusing on facebook data for advertisers - a novel digital census that has largely been untapped by demographers - we provide illustrative and empirical examples of how demographic researchers can manage issues such as bias and representation when using digital trace data . we conclude by offering our perspective on the road ahead regarding demography and its role in the data revolution . story_separator_special_tag much behavioral research involves comparing the central tendencies of different groups , or of the same subjects under different conditions , and the usual analysis is some form of mean comparison . this article suggests that an ordinal statistic , d , is often more appropriate . d compares the number of times a score from one group or condition is higher than one from the other , compared with the reverse . compared to mean comparisons , d is more robust and equally or more powerful ; it is invariant under transformation ; and it often conforms more closely to the experimenter 's research hypothesis . it is suggested that inferences from d be based on sample estimates of its variance rather than on the more traditional assumption of identical distributions story_separator_special_tag the five-factor model is a dimensional representation of personality structure that has recently gained widespread acceptance among personality psychologists . this article describes the five factors ( neuroticism , extraversion , openness , agreeableness , and conscientiousness ) ; summarizes evidence on their consensual validity , comprehensiveness , universality , heritability , and longitudinal stability ; and reviews several approaches to the assessment of the factors and their defining traits . in research , measures of the five factors can be used to analyze personality disorder scales and to profile the traits of personality-disordered patient groups ; findings may be useful in diagnosing individuals . as an alternative to the current categorical system for diagnosing personality disorders , it is proposed that axis ii be used for the description of personality in terms of the five factors and for the diagnosis of personality-related problems in affective , interpersonal , experiential , attitudinal , and motivational . story_separator_special_tag amazon mechanical turk ( amt ) is an online crowdsourcing service where anonymous online workers complete web-based tasks for small sums of money . the service has attracted attention from experimental psychologists interested in gathering human subject data more efficiently . however , relative to traditional laboratory studies , many aspects of the testing environment are not under the experimenter 's control . in this paper , we attempt to empirically evaluate the fidelity of the amt system for use in cognitive behavioral experiments .
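the ordinal statistic d described above ( often called cliff 's delta ) has a direct definition : the share of cross-group pairs where the first group 's score is higher , minus the share where it is lower . a small o ( n*m ) sketch :

```python
def cliffs_d(xs, ys):
    """d in [-1, 1]; 0 means neither group tends to score higher."""
    greater = sum(x > y for x in xs for y in ys)
    less = sum(x < y for x in xs for y in ys)
    return (greater - less) / (len(xs) * len(ys))

# example: cliffs_d([3, 4, 5], [1, 2, 5]) == (6 - 2) / 9
```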
these types of experiment differ from simple surveys in that they require multiple trials , sustained attention from participants , comprehension of complex instructions , and millisecond accuracy for response recording and stimulus presentation . we replicate a diverse body of tasks from experimental psychology , including the stroop , switching , flanker , simon , posner cuing , attentional blink , subliminal priming , and category learning tasks using participants recruited using amt . while most of the replications were qualitatively successful and validated the approach of collecting data anonymously online using a web-browser , others revealed disparity between laboratory results and online results . a number of important lessons were encountered in the process of story_separator_special_tag there is a large body of research on utilizing online activity as a survey of political opinion to predict real world election outcomes . there is considerably less work , however , on using this data to understand topic-specific interest and opinion amongst the general population and specific demographic subgroups , as currently measured by relatively expensive surveys . here we investigate this possibility by studying a full census of all twitter activity during the 2012 election cycle along with the comprehensive search history of a large panel of internet users during the same period , highlighting the challenges in interpreting online and social media activity as the results of a survey . as noted in existing work , the online population is a non-representative sample of the offline world ( e.g. , the u.s. voting population ) . we extend this work to show how demographic skew and user participation are non-stationary and difficult to predict over time . in addition , the nature of user contributions varies substantially around important events . furthermore , we note subtle problems in mapping what people are sharing or consuming online to specific sentiment or opinion measures around a particular topic . story_separator_special_tag is social media a valid indicator of political behavior ? there is considerable debate about the validity of data extracted from social media for studying offline behavior . to address this issue , we show that there is a statistically significant association between tweets that mention a candidate for the u.s. house of representatives and his or her subsequent electoral performance . we demonstrate this result with an analysis of 542,969 tweets mentioning candidates selected from a random sample of 3,570,054,618 tweets , as well as federal election commission data from 795 competitive races in the 2010 and 2012 u.s. congressional elections . this finding persists even when controlling for incumbency , district partisanship , media coverage of the race , time , and demographic variables such as the district 's racial and gender composition . our findings show that reliable data about political behavior can be extracted from social media . story_separator_special_tag migrant assimilation is a major challenge for european societies , in part because of the sudden surge of refugees in recent years and in part because of long-term demographic trends . in this paper , we use facebook data for advertisers to study the levels of assimilation of arabic-speaking migrants in germany , as seen through the interests they express online . our results indicate a gradient of assimilation along demographic lines , language spoken and country of origin .
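an illustrative sketch ( an assumption , not the paper 's exact measure ) of scoring assimilation from advertising audience counts as in the migrant-assimilation abstract above : compare a migrant group 's distribution over marker interests with the host population 's via cosine similarity .

```python
import numpy as np

def assimilation_score(migrant_counts, host_counts):
    """aligned arrays of facebook audience sizes per marker interest."""
    m = np.asarray(migrant_counts, dtype=float)
    h = np.asarray(host_counts, dtype=float)
    m, h = m / m.sum(), h / h.sum()                  # interest shares
    return float(m @ h / (np.linalg.norm(m) * np.linalg.norm(h)))
```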
given the difficulty of collecting timely migration data , in particular for traits related to cultural assimilation , the methods that we develop and the results that we provide open new lines of research that computational social scientists are well-positioned to address . story_separator_special_tag gender equality in access to the internet and mobile phones has become increasingly recognised as a development goal . monitoring progress towards this goal , however , is challenging due to the limited availability of gender-disaggregated data , particularly in low-income countries . in this data-sparse context , we examine the potential of a source of digital trace big data - facebook 's advertisement audience estimates , which provide aggregate data on facebook users by demographic characteristics , covering the platform 's over 2 billion users - to measure and nowcast digital gender gaps . we generate a unique country-level dataset combining online indicators of facebook users by gender , age and device type , offline indicators related to a country 's overall development and gender gaps , and official data on gender gaps in internet and mobile access where available . using this dataset , we predict internet and mobile phone gender gaps from official data using online indicators , as well as online and offline indicators . we find that the online facebook gender gap indicators are highly correlated with official statistics on internet and mobile phone gender gaps . for internet gender gaps , models using facebook data do better than story_separator_special_tag with the increasing sophistication and ubiquity of the internet , behavioral research is on the cusp of a revolution that will do for population sampling what the computer did for stimulus control and measurement . it remains a common assumption , however , that data from self-selected web samples must involve a trade-off between participant numbers and data quality . concerns about data quality are heightened for performance-based cognitive and perceptual measures , particularly those that are timed or that involve complex stimuli . in experiments run with uncompensated , anonymous participants whose motivation for participation is unknown , reduced conscientiousness or lack of focus could produce results that would be difficult to interpret due to decreased overall performance , increased variability of performance , or increased measurement noise . here , we addressed the question of data quality across a range of cognitive and perceptual tests . for three key performance metrics - mean performance , performance variance , and internal reliability - the results from self-selected web samples did not differ systematically from those obtained from traditionally recruited and/or lab-tested samples . these findings demonstrate that collecting data from uncompensated , anonymous , unsupervised , self-selected participants need not reduce story_separator_special_tag this paper - first published on-line in november 2008 - draws on data from an early version of the google flu trends search engine to estimate the levels of flu in a population . it introduces a computational model that converts raw search query data into a region-by-region real-time surveillance system that accurately estimates influenza activity with a lag of about one day - one to two weeks faster than the conventional reports published by the centers for disease prevention and control .
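the gender-gap and flu-surveillance abstracts above share one methodological core : regress an official statistic on aggregate online indicators , then nowcast it where ( or when ) official data are missing . a minimal sketch ; the indicator construction and names are illustrative .

```python
import numpy as np
from sklearn.linear_model import LinearRegression

def fb_gender_gap(female_audience, male_audience):
    # a typical online indicator: ratio of female to male platform users
    return female_audience / male_audience

def nowcast(online_indicators, official_values, unobserved_indicators):
    """fit where official data exist (2d rows of indicators); predict elsewhere."""
    model = LinearRegression().fit(np.asarray(online_indicators),
                                   np.asarray(official_values))
    return model.predict(np.asarray(unobserved_indicators))
```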
this report introduces a computational model based on internet search queries for real-time surveillance of influenza-like illness ( ili ) , which reproduces the patterns observed in ili data from the centers for disease control and prevention . seasonal influenza epidemics are a major public health concern , causing tens of millions of respiratory illnesses and 250,000 to 500,000 deaths worldwide each year . in addition to seasonal influenza , a new strain of influenza virus against which no previous immunity exists and that demonstrates human-to-human transmission could result in a pandemic with millions of fatalities . early detection of disease activity , when followed by a rapid response , can reduce the impact of both seasonal story_separator_special_tag when time is limited , researchers may be faced with the choice of using an extremely brief measure of the big-five personality dimensions or using no measure at all . to meet the need for a very brief measure , 5- and 10-item inventories were developed and evaluated . although somewhat inferior to standard multi-item instruments , the instruments reached adequate levels in terms of : ( a ) convergence with widely used big-five measures in self , observer , and peer reports , ( b ) test-retest reliability , ( c ) patterns of predicted external correlates , and ( d ) convergence between self and observer ratings . on the basis of these tests , a 10-item measure of the big-five dimensions is offered for situations where very short measures are needed , personality is not the primary topic of interest , or researchers can tolerate the somewhat diminished psychometric properties associated with very brief measures . story_separator_special_tag the rapid growth of the internet provides a wealth of new research opportunities for psychologists . internet data collection methods , with a focus on self-report questionnaires from self-selected samples , are evaluated and compared with traditional paper-and-pencil methods . six preconceptions about internet samples and data quality are evaluated by comparing a new large internet sample ( n = 361,703 ) with a set of 510 published traditional samples . internet samples are shown to be relatively diverse with respect to gender , socioeconomic status , geographic region , and age . moreover , internet findings generalize across presentation formats , are not adversely affected by nonserious or repeat responders , and are consistent with findings from traditional methods . it is concluded that internet methods can contribute to many areas of psychology . story_separator_special_tag how and why do moral judgments vary across the political spectrum ? to test moral foundations theory ( j. haidt & j. graham , 2007 ; j. haidt & c. joseph , 2004 ) , the authors developed several ways to measure people 's use of 5 sets of moral intuitions : harm/care , fairness/reciprocity , ingroup/loyalty , authority/respect , and purity/sanctity . across 4 studies using multiple methods , liberals consistently showed greater endorsement and use of the harm/care and fairness/reciprocity foundations compared to the other 3 foundations , whereas conservatives endorsed and used the 5 foundations more equally .
this difference was observed in abstract assessments of the moral relevance of foundation-related concerns such as violence or loyalty ( study 1 ) , moral judgments of statements and scenarios ( study 2 ) , `` sacredness '' reactions to taboo trade-offs ( study 3 ) , and use of foundation-related words in the moral texts of religious sermons ( study 4 ) . these findings help to illuminate the nature and intractability of moral disagreements in the american `` culture war . '' story_separator_special_tag total survey error is a conceptual framework describing statistical error properties of sample survey statistics . early in the history of sample surveys , it arose as a tool to focus on implications . story_separator_special_tag researchers in moral psychology and social justice have agreed that morality is about matters of harm , rights , and justice . on this definition of morality , conservative opposition to social justice programs appears to be immoral , and has been explained as a product of various non-moral processes such as system justification or social dominance orientation . in this article we argue that , from an anthropological perspective , the moral domain is usually much broader , encompassing many more aspects of social life and valuing institutions as much or more than individuals . we present theoretical and empirical reasons for believing that there are five psychological systems that provide the foundations for the world 's many moralities . the five foundations are psychological preparations for detecting and reacting emotionally to issues related to harm/care , fairness/reciprocity , ingroup/loyalty , authority/respect , and purity/sanctity . political liberals have moral intuitions primarily based upon the first two foundations , and therefore misunderstand the moral motivations of political conservatives , who generally rely upon all five foundations . story_separator_special_tag why do people vary in their views of human nature and their visions of the good society ? why do many people categorize themselves as liberal , conservative , libertarian , socialist , and so on ? some researchers try to answer these questions by starting with people 's self-identifications and then moving down , examining traits ( such as openness to experience ) that underlie and predict endorsement of an ideological label ( see jost , glaser , kruglanski , & sulloway , 2003 , and sibley & duckitt , 2008 , for reviews ) . in contrast , others find it more informative to move up from such labels , examining the network of meanings , strivings , and personal narratives that unite the individuals who endorse a label ( e.g. , conover & feldman , 1981 ; geertz , 1964 ; smith , 2003 ; sowell , 1995 , 2007 ) . these two approaches are quite obviously complementary . in this article we attempt to integrate them by using two theories that were designed explicitly for such cross-level work : dan mcadams 's ( 1995 ; mcadams & pals , 2006 ) three-level account of personality story_separator_special_tag maps embellished with fantastical beasts , sixteenth-century wonder chambers filled with natural and technological marvels , even late-twentieth-century supermarket tabloids all attest to the human fascination with things that violate our basic ideas about reality . the study of morality and culture is therefore an intrinsically fascinating topic . people have created moralities as divergent as those of nazis and quakers , headhunters and jains .
and yet , when we look closely at the daily lives of people in divergent cultures , we can find elements that arise in nearly all of them - for example , reciprocity , loyalty , respect for ( some ) authority , limits on physical harm , and regulation of eating and sexuality . what are we to make of this pattern of similarity within profound difference ? social scientists have traditionally taken two approaches . the empiricist approach posits that moral knowledge , moral beliefs , moral action , and all the other stuff of morality are learned in childhood . there is no moral faculty or moral anything else built into the human mind , although there may be some innate learning mechanisms that enable the acquisition of later knowledge story_separator_special_tag the pervasive presence of location-sharing services made it possible for researchers to gain unprecedented access to the direct records of human activity in space and time . this article analyses geo-located twitter messages in order to uncover global patterns of human mobility . based on a dataset of almost a billion tweets recorded in 2012 , we estimate the volume of international travelers by country of residence . mobility profiles of different nations were examined based on such characteristics as mobility rate , radius of gyration , diversity of destinations , and inflow-outflow balance . temporal patterns disclose the universally valid seasons of increased international mobility and the particular character of international travels of different nations . our analysis of the community structure of the twitter mobility network reveals spatially cohesive regions that follow the regional division of the world . we validate our result using global tourism statistics and mobility models provided by other authors and argue that twitter is exceptionally useful for understanding and quantifying global mobility patterns . story_separator_special_tag background : the number of unmarried one-person households has increased rapidly among young adults living in the republic of korea since 2000. how this rise in solo living is related to psychological wellbeing is of importance to both individuals and society as a whole . objective : this study examined how living alone is related to psychological wellbeing and how this association differs across attitudes toward marriage among young adults aged 25-39. methods : we relied on repeated cross-sectional data from the korea social survey ( 2010 and 2012 ) to compare unmarried solo residents to both unmarried and married individuals living with family members . psychological wellbeing was measured in terms of life satisfaction and suicidal ideation over the past twelve months . results : in general , unmarried solo residents experienced greater life satisfaction than did unmarried family coresidents . of those with a positive attitude toward marriage , unmarried solo residents had lower life satisfaction than did married family coresidents . for those with a non-positive attitude toward marriage , however , there was no difference in the level of life satisfaction between unmarried solo residents and married family coresidents . suicidal ideation did not differ by story_separator_special_tag social media platforms provide active communication channels during mass convergence and emergency events such as disasters caused by natural hazards . as a result , first responders , decision makers , and the public can use this information to gain insight into the situation as it unfolds .
in particular , many social media messages communicated during emergencies convey timely , actionable information . processing social media messages to obtain such information , however , involves solving multiple challenges , including : handling information overload , filtering credible information , and prioritizing different classes of messages . these challenges can be mapped to classical information processing operations such as filtering , classifying , ranking , aggregating , extracting , and summarizing . we survey the state of the art regarding computational methods to process social media messages , focusing on their application in emergency response scenarios . we examine the particularities of this setting , and then methodically examine a series of key sub-problems ranging from the detection of events to the creation of actionable and useful summaries . story_separator_special_tag personal electronic devices , including smartphones , give access to behavioural signals that can be used to learn about the characteristics and preferences of individuals . in this study , we explore the connection between demographic and psychological attributes and the digital behavioural records , for a cohort of 7633 people , closely representative of the us population with respect to gender , age , geographical distribution , education , and income . along with the demographic data , we collected self-reported assessments on validated psychometric questionnaires for moral traits and basic human values , and combined this information with passively collected multi-modal digital data from web browsing behaviour and smartphone usage . a machine learning framework was then designed to infer both the demographic and psychological attributes from the behavioural data . in a cross-validated setting , our models predicted demographic attributes with good accuracy as measured by the weighted auroc score ( area under the receiver operating characteristic ) , but were less performant for the moral traits and human values . these results call for further investigation , since they are still far from unveiling individuals ' psychological fabric . this connection , along with the most predictive story_separator_special_tag privacy has been identified as a hot-button issue in the literature on social network sites ( snss ) . while considerable research has been conducted with teenagers and young adults , scant attention has been paid to differences among adult age groups regarding privacy management behavior . with a multidimensional approach to privacy attitudes , we investigate facebook use , privacy attitudes , online privacy literacy , disclosure , and privacy protective behavior on facebook across three adult age groups ( 18-40 , 41-65 , and 65+ ) . the sample consisted of an online convenience sample of 518 adult facebook users . comparisons suggested that although age groups were comparable in terms of general internet use and online privacy literacy , younger groups were more likely to use snss more frequently , use facebook for social interaction purposes , and have larger networks . also , younger adults were more likely to self-disclose and engage in privacy protective behaviors on facebook . in terms of privacy attitudes , older age groups were more likely to be concerned about the privacy of other individuals . in general , all dimensions of privacy attitudes ( i.e.
, belief that privacy is a story_separator_special_tag we show that easily accessible digital records of behavior , facebook likes , can be used to automatically and accurately predict a range of highly sensitive personal attributes , including : sexual orientation , ethnicity , religious and political views , personality traits , intelligence , happiness , use of addictive substances , parental separation , age , and gender . the analysis presented is based on a dataset of over 58,000 volunteers who provided their facebook likes , detailed demographic profiles , and the results of several psychometric tests . the proposed model uses dimensionality reduction for preprocessing the likes data , which are then entered into logistic/linear regression to predict individual psychodemographic profiles from likes . the model correctly discriminates between homosexual and heterosexual men in 88 % of cases , african americans and caucasian americans in 95 % of cases , and between democrat and republican in 85 % of cases . for the personality trait openness , prediction accuracy is close to the test-retest accuracy of a standard personality test . we give examples of associations between attributes and likes and discuss implications for online personalization and privacy . story_separator_special_tag numerous crowdsourcing platforms are now available to support research as well as commercial goals . however , crowdsourcing is not yet widely adopted by researchers for generating , processing or analyzing research data . this study develops a deeper understanding of the circumstances under which crowdsourcing is a useful , feasible or desirable tool for research , as well as the factors that may influence researchers ' decisions around adopting crowdsourcing technology . we conducted semi-structured interviews with 18 researchers in diverse disciplines , spanning the humanities and sciences , to illuminate how research norms and practitioners ' dispositions were related to uncertainties around research processes , data , knowledge , delegation and quality . the paper concludes with a discussion of the design implications for future crowdsourcing systems to support research . story_separator_special_tag recent widespread adoption of electronic and pervasive technologies has enabled the study of human behavior at an unprecedented level , uncovering universal patterns underlying human activity , mobility , and interpersonal communication . in the present work , we investigate whether deviations from these universal patterns may reveal information about the socioeconomic status of geographical regions . we quantify the extent to which deviations in diurnal rhythm , mobility patterns , and communication styles across regions relate to their unemployment incidence . for this , we examine a country-scale publicly articulated social media dataset , where we quantify individual behavioral features from over 19 million geo-located messages distributed among more than 340 different spanish economic regions , inferred by computing communities of cohesive mobility fluxes . we find that regions exhibiting more diverse mobility fluxes , earlier diurnal rhythms , and more correct grammatical styles display lower unemployment rates . as a result , we provide a simple model able to produce accurate , easily interpretable reconstruction of regional unemployment incidence from their social-media digital fingerprints alone .
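a hedged sketch of the prediction pipeline the facebook-likes abstract above describes : dimensionality reduction on the sparse user-like matrix , followed by logistic regression on the reduced components . the component count is illustrative , not the paper 's setting .

```python
from sklearn.decomposition import TruncatedSVD
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

def likes_trait_model(user_like_matrix, trait_labels, n_components=100):
    """user_like_matrix[i, j] = 1 if user i liked page j (scipy sparse matrix)."""
    pipeline = make_pipeline(TruncatedSVD(n_components=n_components),
                             LogisticRegression(max_iter=1000))
    return pipeline.fit(user_like_matrix, trait_labels)
```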
our results show that cost-effective economic indicators can be built based on publicly available social media datasets . story_separator_special_tag as people spend increasing proportions of their daily lives using social media , such as twitter and facebook , they are being invited to support myriad political causes by sharing , liking , endorsing , or downloading . chain reactions caused by these tiny acts of participation form a growing part of collective action today , from neighborhood campaigns to global political movements . political turbulence reveals that , in fact , most attempts at collective action online do not succeed , but some give rise to huge mobilizations , even revolutions . drawing on large-scale data generated from the internet and real-world events , this book shows how mobilizations that succeed are unpredictable , unstable , and often unsustainable . to better understand this unruly new force in the political world , the authors use experiments that test how social media influence citizens deciding whether or not to participate . they show how different personality types react to social influences and identify which types of people are willing to participate at an early stage in a mobilization when there are few supporters or signals of viability . the authors argue that pluralism is the model of democracy that is emerging in story_separator_special_tag this chapter examines theories and research on the creative personality . the personality approach to creative individuals offers a unique perspective on creativity , with both advantages and disadvantages . the personality approach has an advantage over many other approaches in that the standardized assessment techniques are available . it is found that the most creative architects , unlike the other groups , would have liked to improve their interpersonal reactions and social relationships . autonomy in its various manifestations may play a pivotal role in all creative work . this may be because autonomy is functionally related to creativity . it is functional and necessary for all creativity . autonomy may also underlie and explain a range of other correlates of creativity . creative persons may have a tendency toward playfulness . this may be a reflection of their spontaneity and self-actualization . it is found that worldplay may involve a kind of fantasy life and daydreaming , which could be manifested in the construction of futuristic or other imaginary worlds and imaginary companions . persistence might be viewed as a prerequisite for creative accomplishment simply because important insights often demand a large investment of time . story_separator_special_tag amazon 's mechanical turk is an online labor market where requesters post jobs and workers choose which jobs to do for pay . the central purpose of this article is to demonstrate how to use this web site for conducting behavioral research and to lower the barrier to entry for researchers who could benefit from this platform . we describe general techniques that apply to a variety of types of research and experiments across disciplines . we begin by discussing some of the advantages of doing experiments on mechanical turk , such as easy access to a large , stable , and diverse subject pool , the low cost of doing experiments , and faster iteration between developing theory and executing experiments .
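the mechanical turk abstract above walks through the mechanics of posting a task ; below , a minimal sketch using boto3 's mturk client against the requester sandbox . the survey url , reward , and counts are placeholders , not values from the article .

```python
import boto3

EXTERNAL_QUESTION = """<ExternalQuestion
  xmlns="http://mechanicalturk.amazonaws.com/AWSMechanicalTurkDataSchemas/2006-07-14/ExternalQuestion.xsd">
  <ExternalURL>https://example.org/survey</ExternalURL>
  <FrameHeight>600</FrameHeight>
</ExternalQuestion>"""

client = boto3.client(
    "mturk", region_name="us-east-1",
    endpoint_url="https://mturk-requester-sandbox.us-east-1.amazonaws.com")

hit = client.create_hit(
    Title="Short decision-making study (about 5 minutes)",
    Description="Answer a brief series of choice questions.",
    Reward="0.50",                      # usd, passed as a string
    MaxAssignments=100,
    LifetimeInSeconds=3 * 24 * 3600,    # how long the task stays listed
    AssignmentDurationInSeconds=1800,   # time allotted per worker
    Question=EXTERNAL_QUESTION)
print(hit["HIT"]["HITId"])
```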
while other methods of conducting behavioral research may be comparable to or even better than mechanical turk on one or more of the axes outlined above , we will show that when taken as a whole mechanical turk can be a useful tool for many researchers . we will discuss how the behavior of workers compares with that of experts and laboratory subjects . then we will illustrate the mechanics of putting a task on mechanical turk story_separator_special_tag individual differences in personality may be described at three different levels . level i consists of those broad , decontextualized , and relatively nonconditional constructs called `` traits , '' which provide a dispositional signature for personality description . no description of a person is adequate without trait attributions , but trait attributions themselves yield little beyond a `` psychology of the stranger . '' at level ii ( called `` personal concerns '' ) , personality descriptions invoke personal strivings , life tasks , defense mechanisms , coping strategies , domain-specific skills and values , and a wide assortment of other motivational , developmental , or strategic constructs that are contextualized in time , place , or role . while dispositional traits and personal concerns appear to have near-universal applicability , level iii presents frameworks and constructs that may be uniquely relevant to adulthood only , and perhaps only within modern societies that put a premium on the individuation of the self . thus , in contemporary western societies , a full description of personality commonly requires a consideration of the extent to which a human life expresses unity and purpose , which are the hallmarks story_separator_special_tag despite impressive advances in recent years with respect to theory and research , personality psychology has yet to articulate clearly a comprehensive framework for understanding the whole person . in an effort to achieve that aim , the current article draws on the most promising empirical and theoretical trends in personality psychology today to articulate 5 big principles for an integrative science of the whole person . personality is conceived as ( a ) an individual 's unique variation on the general evolutionary design for human nature , expressed as a developing pattern of ( b ) dispositional traits , ( c ) characteristic adaptations , and ( d ) self-defining life narratives , complexly and differentially situated ( e ) in culture and social context . the 5 principles suggest a framework for integrating the big five model of personality traits with those self-defining features of psychological individuality constructed in response to situated social tasks and the human need to make meaning in culture . story_separator_special_tag background : family ties in europe are affected by demographic trends associated with parenting and partnering , such as a decline in fertility , an increase in childlessness , postponement of parenthood and of partnership formation , the rise of new relationship forms and divorce rates . it is unclear how the contemporary family structure and composition are associated with people 's mental wellbeing . objective : this article examines how ties with parents , siblings , a partner and children are associated with depressive mood of men and women in seven eastern and western european countries . methods : to test our hypotheses we made use of data from the generations and gender surveys .
we performed logistic regression analyses to study the associations between people 's family ties and depressive mood . results : our research findings show that family ties can diminish people 's depressive feelings . although we find some gender differences in these associations , we do not find support for the argument that family ties are more important for the mental wellbeing of women than of men . moreover , our findings support the hierarchical model of family relations in which new ties with partner and children in story_separator_special_tag background : studies of internal migration ask who moves , why they move , and what are the consequences to themselves , their origin , and their destination . by contrast , studies of those who stay for very long durations are less common , despite the fact that most people move relatively infrequently . objective : we argue that staying is the dominant , preferred state and that moving is simply an adjustment toward a desired state of stability ( or equilibrium ) . the core of our argument , already recognized in the literature , is that migration is risky . however , we extend the argument to loss aversion as developed within prospect theory . prospect theory posits that existing possessions , including the dwelling and existing commodities , are attributed a value well beyond their purchase price and that this extends the average period of staying among the loss-averse . methods : applying prospect theory has several challenges , including measurement of the reference point and potential degrees of gain and loss households face in deciding to change residence , as well as their own degree of loss aversion . the growing number of large panel sets should make it possible to story_separator_special_tag social data in digital form , which includes user-generated content , expressed or implicit relationships between people , and behavioral traces , are at the core of many popular applications and platforms , and drive the research agenda of many researchers . the promises of social data are many , including understanding `` what the world thinks '' about a social issue , brand , product , celebrity , or other entity , as well as enabling better decision making in a variety of fields including public policy , healthcare , and economics . many academics and practitioners have warned against the naive usage of social data . there are biases and inaccuracies at the source of the data , but also introduced during processing . there are methodological limitations and pitfalls , as well as ethical boundaries and unexpected consequences that are often overlooked . this survey recognizes that the rigor with which these issues are addressed by different researchers varies across a wide range . we present a framework for identifying a broad range of menaces in the research and practices around social data . story_separator_special_tag this paper examines the relationships between satisfaction with life in general , particular domains of life , the partner , and parental relationships with existing children , and subsequent fertility . the data are from 2,948 women and 2,622 men aged 15 to 44 years from a longitudinal survey of the household population in australia . for both sexes a strong positive relationship between prior satisfaction with life and fertility two years later is found . men 's satisfaction with their partner and with their partner 's relationship with existing children are positively related to fertility .
fertility is also related to age , parity , marital status , education , employment and birthplace . story_separator_special_tag this article summarizes expertise gleaned from the first years of internet-based experimental research and presents recommendations on : ( 1 ) ideal circumstances for conducting a study on the internet ; ( 2 ) what precautions have to be undertaken in web experimental design ; ( 3 ) which techniques have proven useful in web experimenting ; ( 4 ) which frequent errors and misconceptions need to be avoided ; and ( 5 ) what should be reported . procedures and solutions for typical challenges in web experimenting are discussed . topics covered include randomization , recruitment of samples , generalizability , dropout , experimental control , identity checks , multiple submissions , configuration errors , control of motivational confounding , and pre-testing . several techniques are explained , including `` warm-up , '' `` high hurdle , '' password methods , `` multiple site entry , '' randomization , and the use of incentives . the article concludes by proposing sixteen standards for internet-based experimenting . story_separator_special_tag on 3 november 1948 , the day after harry truman won the united states presidential elections , the chicago tribune published one of the most famous erroneous headlines in newspaper history : dewey defeats truman ( 1 , 2 ) . the headline was informed by telephone surveys , which had inadvertently undersampled truman supporters ( 1 ) . rather than permanently discrediting the practice of polling , this event led to the development of more sophisticated techniques and higher standards that produce the more accurate and statistically rigorous polls conducted today ( 3 ) . story_separator_special_tag in his 1963 book informal sociology , william bruce cameron wrote the often-misattributed quote `` not everything that can be counted counts , and not everything that counts can be counted . '' with this . story_separator_special_tag the authors regret errors were present in the published article . counts of some of the adverse events were erroneous . changes to the text include . last sentence of the abstract should read . adverse effects of ppe included heat ( 1266 , 51 % ) , thirst ( 1174 , 47 % ) , pressure areas ( 1088 , 44 % ) , headaches ( 696 , 28 % ) , inability to use the bathroom ( 661 , 27 % ) and extreme exhaustion ( 492 , 20 % ) . all but pressure areas were associated with longer shift durations . last sentence of the results section of the manuscript should read . all but pressure areas were associated with longer duration of shifts wearing ppe ( table 4 ) . table 1 the total number of community/urban type of hospital should read 740 instead of 741 . updated tables 3 and 4 should read as below : the authors would like to apologise for any inconvenience caused . story_separator_special_tag demonstrations that analyses of social media content can align with measurement from sample surveys have raised the question of whether survey research can be supplemented or even replaced with less costly and burdensome data mining of already-existing or `` found '' social media content . but just how trustworthy such measurement can be - say , to replace official statistics - is unknown . survey researchers and data scientists approach key questions from starting assumptions and analytic traditions that differ on , for example , the need for representative samples drawn from frames that fully cover the population .
new conversations between these scholarly communities are needed to understand the potential points of alignment and non-alignment . across these approaches , there are major differences in ( a ) how participants ( survey respondents and social media posters ) understand the activity they are engaged in ; ( b ) the nature of the data produced by survey responses and social media posts , and the inferences that are legitimate given the data ; and ( c ) practical and ethical considerations surrounding the use of the data . estimates are likely to align to differing degrees depending on the research topic and the story_separator_special_tag we analyzed 700 million words , phrases , and topic instances collected from the facebook messages of 75,000 volunteers , who also took standard personality tests , and found striking variations in language with personality , gender , and age . in our open-vocabulary technique , the data itself drives a comprehensive exploration of language that distinguishes people , finding connections that are not captured with traditional closed-vocabulary word-category analyses . our analyses shed new light on psychosocial processes yielding results that are face valid ( e.g. , subjects living in high elevations talk about the mountains ) , tie in with other research ( e.g. , neurotic people disproportionately use the phrase `` sick of '' and the word `` depressed '' ) , suggest new hypotheses ( e.g. , an active life implies emotional stability ) , and give detailed insights ( males use the possessive `` my '' when mentioning their `` wife '' or `` girlfriend '' more often than females use `` my '' with `` husband '' or `` boyfriend '' ) . to date , this represents the largest study , by an order of magnitude , of language and personality . story_separator_special_tag background rapid development and social change in asia have led many to assume that the proportion of elderly people living alone is rising and that they tend to live in destitute situations . these assumptions often lack empirical validation . objective we address the trends and correlates of solitary living among older persons in myanmar , vietnam , and thailand . we examine the extent to which this form of living arrangement equates with their financial stress , physical and social isolation , psychological distress , and met need for personal care . methods we analyze 2011-12 national surveys of older persons from the three countries . we employ descriptive and multivariate analyses using either binary logistic regression or multiple classification analysis . results there has been a modest upward trend in solo living among the elderly in the three countries over the last few decades . the prevalence of solo living remains low , accounting for less than one-tenth of all elders in each setting . a substantial proportion of solo-dwelling elders live in quasi-coresidence . solo living is not always associated with financial stress . although solitary dwellers report more psychological distress than others , our evidence story_separator_special_tag large-scale databases of human activity in social media have captured scientific and policy attention , producing a flood of research and discussion . this paper considers methodological and conceptual challenges for this emergent field , with special attention to the validity and representativeness of social media big data analyses .
persistent issues include the over-emphasis of a single platform , twitter , sampling biases arising from selection by hashtags , and vague and unrepresentative sampling frames . the socio-cultural complexity of user behavior aimed at algorithmic invisibility ( such as subtweeting , mock-retweeting , use of screen captures for text , etc . ) further complicates interpretation of big data social media . other challenges include accounting for field effects , i.e . broadly consequential events that do not diffuse only through the network under study but affect the whole society . the application of network methods from other fields to the study of human social activity may not always be appropriate . the paper concludes with a call to action on practical steps to improve our analytic capacity in this promising , rapidly-growing field . story_separator_special_tag we live in an increasingly interconnected world of techno-social systems , in which infrastructures composed of different technological layers are interoperating within the social component that drives their use and development . examples are provided by the internet , the world wide web , wifi communication technologies , and transportation and mobility infrastructures . the multiscale nature and complexity of these networks are crucial features in understanding and managing the networks . the accessibility of new data and the advances in the theory and modeling of complex networks are providing an integrated framework that brings us closer to achieving true predictive power of the behavior of techno-social systems . story_separator_special_tag in 1939 , george gallup 's american institute of public opinion published a pamphlet optimistically titled `` the new science of public opinion measurement '' . at the time , though , survey research was in its infancy , and only now , six decades later , can public opinion measurement be appropriately called a science , based in part on the development of the total survey error approach . herbert f. weisberg 's handbook presents a unified method for conducting good survey research centered on the various types of errors that can occur in surveys - from measurement and nonresponse error to coverage and sampling error . each chapter is built on theoretical elements drawn from specific disciplines , such as social psychology and statistics , and follows through with detailed treatments of the specific types of errors and their potential solutions . throughout , weisberg is attentive to survey constraints , including time and ethical considerations , as well as controversies within the field and the effects of new technology on the survey process - from internet surveys to those completed by phone , by mail , and in person . practitioners and students will find this comprehensive story_separator_special_tag we study several longstanding questions in media communications research , in the context of the microblogging service twitter , regarding the production , flow , and consumption of information . to do so , we exploit a recently introduced feature of twitter known as `` lists '' to distinguish between elite users - by which we mean celebrities , bloggers , and representatives of media outlets and other formal organizations - and ordinary users .
based on this classification , we find a striking concentration of attention on twitter , in that roughly 50 % of urls consumed are generated by just 20k elite users , where the media produces the most information , but celebrities are the most followed . we also find significant homophily within categories : celebrities listen to celebrities , while bloggers listen to bloggers , etc . ; however , bloggers in general rebroadcast more information than the other categories . next we re-examine the classical `` two-step flow '' theory of communications , finding considerable support for it on twitter . third , we find that urls broadcast by different categories of users or containing different types of content exhibit systematically different lifespans . and finally story_separator_special_tag data about migration flows are largely inconsistent across countries , typically outdated , and often inexistent . despite the importance of migration as a driver of demographic change , there is limited availability of migration statistics . generally , researchers rely on census data to indirectly estimate flows . however , little can be inferred for specific years between censuses and for recent trends . the increasing availability of geolocated data from online sources has opened up new opportunities to track recent trends in migration patterns and to improve our understanding of the relationships between internal and international migration . in this paper , we use geolocated data for about 500,000 users of the social network website `` twitter '' . the data are for users in oecd countries during the period may 2011 - april 2013 . we evaluated , for the subsample of users who have posted geolocated tweets regularly , the geographic movements within and between countries for independent periods of four months , respectively . since twitter users are not representative of the oecd population , we can not infer migration rates at a single point in time . however , we proposed a difference-in-differences approach to reduce story_separator_special_tag purpose internet data hold many promises for demographic research , but come with severe drawbacks due to several types of bias . the purpose of this paper is to review the literature that uses internet data for demographic studies and presents a general framework for addressing the problem of selection bias in non-representative samples . design/methodology/approach the authors propose two main approaches to reduce bias . when ground truth data are available , the authors suggest a method that relies on calibration of the online data against reliable official statistics . when no ground truth data are available , the authors propose a difference in differences approach to evaluate relative trends . findings the authors offer a generalization of existing techniques . although there is not a definite answer to the question of whether statistical inference can be made from non-representative samples , the authors show that , when certain assumptions are met , the authors can extract signal from noisy and biased d . story_separator_special_tag given the importance of demographic data for monitoring development , the lack of appropriate sources and indicators for measuring progress toward the achievement of targets like the united nations 2030 agenda for sustainable development is a significant cause of uncertainty .
as part of a larger effort to tackle the issue , in 2014 the united nations asked an independent expert advisory group to make recommendations to bring about a data revolution in sustainable development . data innovation , like new digital traces from a variety of technologies , is seen as a significant opportunity to inform policy evaluation and to improve estimates and projections . in this article , we contribute to the development of tools and methods that leverage new data sources for demographic research . we present an innovative approach to estimate stocks of migrants using a previously untapped data source : facebook 's advertising platform . this freely available source allows advertisers and researchers to query information about socio-demographic characteristics of facebook users , aggregated at various levels of geographic granularity . we have three main goals : i ) to present a new data source that is relevant for demographers ; ii ) to discuss
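the difference-in-differences logic invoked in the preceding abstracts can be made concrete in a few lines . the sketch below is a minimal illustration , assuming hypothetical out-migration rates for two countries in two periods ; the country codes and numbers are invented , and this is not any paper 's actual estimation pipeline .

```python
# a minimal sketch of the difference-in-differences idea described above :
# compare the change over time in an outcome for one group against the
# change for a comparison group , so that time-invariant selection bias
# in a non-representative online sample cancels out . all numbers and
# country codes below are hypothetical .
import pandas as pd

# hypothetical shares of geolocated users observed moving out of each
# country , measured in two windows
data = pd.DataFrame({
    "country": ["ie", "ie", "pt", "pt"],
    "period": ["2011", "2012", "2011", "2012"],
    "out_migration_rate": [0.042, 0.058, 0.035, 0.037],
})

pivot = data.pivot(index="country", columns="period", values="out_migration_rate")
trend = pivot["2012"] - pivot["2011"]   # within-country change over time
did = trend["ie"] - trend["pt"]         # difference in differences
print(f"relative increase for ie vs pt: {did:+.3f}")
```

because both countries share the same platform-selection bias in both periods , any time-invariant component of that bias cancels in the double difference , which is what makes relative trends recoverable from non-representative samples .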
while web search has become increasingly effective over the last decade , for many users ' needs the required answers may be spread across many documents , or may not exist on the web at all . yet , many of these needs could be addressed by asking people via popular community question answering ( cqa ) services , such as baidu knows , quora , or yahoo ! answers . in this paper , we perform the first large-scale analysis of how searchers become askers . for this , we study the logs of a major web search engine to trace the transformation of a large number of failed searches into questions posted on a popular cqa site . specifically , we analyze the characteristics of the queries , and of the patterns of search behavior that precede posting a question ; the relationship between the content of the attempted queries and of the posted questions ; and the subsequent actions the user performs on the cqa site . our work develops novel insights into searcher intent and behavior that lead to asking questions to the community , providing a foundation for more effective integration of automated web search story_separator_special_tag the rapidly increasing popularity of community-based question answering ( cqa ) services , e.g . yahoo ! answers , baidu zhidao , etc . , have attracted great attention from both academia and industry . besides the basic problems , like question searching and answer finding , it should be noted that the low participation rate of users in cqa services is a crucial problem which limits their development potential . in this paper , we focus on addressing this problem by recommending answer providers , in which a question is given as a query and a ranked list of users is returned according to the likelihood of answering the question . based on the intuitive idea for recommendation , we try to introduce a topic-level model to improve heuristic term-level methods , which are treated as the baselines . the proposed approach consists of two steps : ( 1 ) discovering latent topics in the content of questions and answers as well as latent interests of users to build user profiles ; ( 2 ) recommending question answerers for new arrival questions based on latent topics and the term-level model . specifically , we develop a general generative model for questions and story_separator_special_tag enterprise and web data processing and content aggregation systems often require extensive use of human-reviewed data ( e.g . for training and monitoring machine learning-based applications ) . today these needs are often met by in-house efforts or out-sourced offshore contracting . emerging applications attempt to provide automated collection of human-reviewed data at internet-scale . we conduct extensive experiments to study the effectiveness of one such application . we also study the feasibility of using yahoo ! answers , a general question-answering forum , for human-reviewed data collection . story_separator_special_tag the quality of user-generated content varies drastically from excellent to abuse and spam . as the availability of such content increases , the task of identifying high-quality content sites based on user contributions -- social media sites -- becomes increasingly important . social media in general exhibit a rich variety of information sources : in addition to the content itself , there is a wide array of non-content information available , such as links between items and explicit quality ratings from members of the community .
in this paper we investigate methods for exploiting such community feedback to automatically identify high quality content . as a test case , we focus on yahoo ! answers , a large community question/answering portal that is particularly rich in the amount and types of content and social interactions available in it . we introduce a general classification framework for combining the evidence from different sources of information that can be tuned automatically for a given social media type and quality definition . in particular , for the community question/answering domain , we show that our system is able to separate high-quality items from the rest with an accuracy close to that of story_separator_special_tag community question answering ( cqa ) service provides a platform for an increasing number of users to ask and answer for their own needs but unanswered questions still exist within a fixed period . to address this , the paper aims to route questions to the right answerers who have a top rank in accordance with their previous answering performance . in order to rank the answerers , we propose a framework called question routing ( qr ) which consists of four phases : ( 1 ) performance profiling , ( 2 ) expertise estimation , ( 3 ) availability estimation , and ( 4 ) answerer ranking . applying the framework , we conduct experiments with a yahoo ! answers dataset and the results demonstrate that on average each of 1,713 testing questions obtains at least one answer if it is routed to the top 20 ranked answerers . story_separator_special_tag understanding the social roles of the members of a group can help to understand the social context of the group . we present a method of applying social network analysis to support the task of characterizing authors in usenet newsgroups . we compute and visualize networks created by patterns of replies for each author in selected newsgroups and find that second-degree ego-centric networks give us clear distinctions between different types of authors and newsgroups . results show that newsgroups vary in terms of the populations of participants and the roles that they play . newsgroups can be characterized by populations that include question and answer newsgroups , conversational newsgroups , social support newsgroups , and flame newsgroups . this approach has applications for both researchers seeking to characterize different types of social cyberspaces as well as participants seeking to distinguish interaction partners and content authors . story_separator_special_tag we discuss the design , implementation and evaluation of two related visualizations of authors ' activities in usenet newsgroups . current usenet news browsers focus on messages and thread structures while disregarding valuable information about the authors of messages and the participants of the various discussions . newsgroup crowds graphically represents the population of authors in a particular newsgroup . authors are displayed according to the number of messages they contribute to each thread and the number of different days they appear in the space , illustrating and contrasting the interaction patterns of participants within the newsgroup . authorlines visualizes a particular author 's posting activity across all newsgroups over a period of one year . this visualization reveals temporal patterns of thread initiation and reply that can broadly characterize the roles authors play in usenet .
we report the results of a user study that explored the value of these interfaces for developing high-level awareness of the activity and population in these conversational spaces . we suggest that interfaces that convey information about the social histories of populations and individuals may support better selection and evaluation of newsgroup content . story_separator_special_tag social roles in online discussion forums can be described by patterned characteristics of communication between network members which we conceive of as `` structural signatures . '' this paper uses visualization methods to reveal these structural signatures and regression analysis to confirm the relationship between these signatures and their associated roles in usenet newsgroups . our analysis focuses on distinguishing the signatures of one role from others , the role of `` answer people . '' answer people are individuals whose dominant behavior is to respond to questions posed by other users . we found that answer people predominantly contribute one or a few messages to discussions initiated by others , are disproportionately tied to relative isolates , have few intense ties and have few triangles in their local networks . ols regression shows that these signatures are strongly correlated with role behavior and , in combination , provide a strongly predictive model for identifying role behavior ( r = .72 ) . to conclude , we consider strategies for further improving the identification of role behavior in online discussion settings and consider how the development of a taxonomy of author types could be extended to a taxonomy of newsgroups in particular story_separator_special_tag question answering ( q & a ) websites are now large repositories of valuable knowledge . while most q & a sites were initially aimed at providing useful answers to the question asker , there has been a marked shift towards question answering as a community-driven knowledge creation process whose end product can be of enduring value to a broad audience . as part of this shift , specific expertise and deep knowledge of the subject at hand have become increasingly important , and many q & a sites employ voting and reputation mechanisms as centerpieces of their design to help users identify the trustworthiness and accuracy of the content . to better understand this shift in focus from one-off answers to a group knowledge-creation process , we consider a question together with its entire set of corresponding answers as our fundamental unit of analysis , in contrast with the focus on individual question-answer pairs that characterized previous work . our investigation considers the dynamics of the community activity that shapes the set of answers , both how answers and voters arrive over time and how this influences the eventual outcome . for example , we observe significant assortativity in the reputations story_separator_special_tag yahoo answers ( ya ) is a large and diverse question-answer forum , acting not only as a medium for sharing technical knowledge , but as a place where one can seek advice , gather opinions , and satisfy one 's curiosity about a countless number of things . in this paper , we seek to understand ya 's knowledge sharing and activity . we analyze the forum categories and cluster them according to content characteristics and patterns of interaction among the users . while interactions in some categories resemble expertise sharing forums , others incorporate discussion , everyday advice , and support .
with such a diversity of categories in which one can participate , we find that some users focus narrowly on specific topics , while others participate across categories . this not only allows us to map related categories , but to characterize the entropy of the users ' interests . we find that lower entropy correlates with receiving higher answer ratings , but only for categories where factual expertise is primarily sought after . we combine both user attributes and answer characteristics to predict , within a given category , whether a particular answer will be story_separator_special_tag question answering ( q & a ) communities have been gaining popularity in the past few years . the success of such sites depends mainly on the contribution of a small number of expert users who provide a significant portion of the helpful answers , and so identifying users that have the potential of becoming strong contributors is an important task for owners of such communities . we present a study of the popular q & a website stackoverflow ( so ) , in which users ask and answer questions about software development , algorithms , math and other technical topics . the dataset includes information on 3.5 million questions and 6.9 million answers created by 1.3 million users in the years 2008-2012 . participation in activities on the site ( such as asking and answering questions ) earns users reputation , which is an indicator of the value of that user to the site . we describe an analysis of the so reputation system , and the participation patterns of high and low reputation users . the contributions of very high reputation users to the site indicate that they are the primary source of answers , and especially story_separator_special_tag computer systems that augment the process of finding , in an organization or worldwide , the appropriate expert for a given problem are becoming more feasible than ever as a result of the prevalence of corporate intranets and the internet . in this article , we investigate such systems in 2 parts . first , we explore the expert-finding problem in depth , review and analyze existing systems in this domain , and suggest a domain model that can serve as a framework for design and development decisions . second , on the basis of our analyses of the problem and solution spaces , we bring to light the gaps that remain to be addressed . finally , after this two-part investigation , we present our approach , called demoir ( dynamic expertise modeling from organizational information resources ) , which is a modular architecture for expert-finding systems that is based on a centralized expertise-modeling server but also incorporates decentralized components for expertise information gathering and exploitation . story_separator_special_tag online forums contain huge amounts of valuable user-generated content . in current forum systems , users have to passively wait for other users to visit the forum systems and read/answer their questions . the user experience for question answering suffers from this arrangement . in this paper , we address the problem of `` pushing '' the right questions to the right persons , the objective being to obtain quick , high-quality answers , thus improving user satisfaction . we propose a framework for the efficient and effective routing of a given question to the top-k potential experts ( users ) in a forum , by utilizing both the content and structures of the forum system .
first , we compute the expertise of users according to the content of the forum system - this is to estimate the probability of a user being an expert for a given question based on the previous question answering of the user . specifically , we design three models for this task , including a profile-based model , a thread-based model , and a cluster-based model . second , we re-rank the user expertise measured in probability by utilizing the structural relations among users story_separator_special_tag user-interactive question answering ( qa ) communities such as yahoo ! answers are growing in popularity . however , as these qa sites always have thousands of new questions posted daily , it is difficult for users to find the questions that are of interest to them . consequently , this may delay the answering of the new questions . this gives rise to question recommendation techniques that help users locate interesting questions . in this paper , we adopt the probabilistic latent semantic analysis ( plsa ) model for question recommendation and propose a novel metric to evaluate the performance of our approach . the experimental results show our recommendation approach is effective . story_separator_special_tag we present aardvark , a social search engine . with aardvark , users ask a question , either by instant message , email , web input , text message , or voice . aardvark then routes the question to the person in the user 's extended social network most likely to be able to answer that question . as compared to a traditional web search engine , where the challenge lies in finding the right document to satisfy a user 's information need , the challenge in a social search engine like aardvark lies in finding the right person to satisfy a user 's information need . further , while trust in a traditional search engine is based on authority , in a social search engine like aardvark , trust is based on intimacy . we describe how these considerations inform the architecture , algorithms , and user interface of aardvark , and how they are reflected in the behavior of aardvark users . story_separator_special_tag software engineering is a complex field with diverse specialties . with the growth of internet-based applications , information security plays an important role in the software development process . finding expert software engineers who have expertise in information security requires too much effort . stack overflow is the largest social q & a website in the field of software engineering . stack overflow contains developers ' posts and answers in different software engineering areas including information security . security related posts are asked in conjunction with various technologies , programming languages , tools and frameworks . in this paper , the content and metadata of stack overflow is analysed to find experts in diverse software engineering security related concepts using an information security ontology . story_separator_special_tag crowdsourcing platforms make it possible to propose simple human intelligence tasks to a large number of participants who carry out these tasks . the workers often receive a small amount of money , or the platforms include some other incentive mechanisms , for example they can increase the workers ' reputation score , if they complete the tasks correctly . we address the problem of identifying experts among participants , that is , workers , who tend to answer the questions correctly .
knowing who the reliable workers are could improve the quality of knowledge one can extract from responses . as opposed to other works in the literature , we assume that participants can give partial or incomplete responses , in case they are not sure that their answers are correct . we model such partial or incomplete responses with the help of belief functions , and we derive a measure that characterizes the expertise level of each participant . this measure is based on precision and exactitude degrees that represent two parts of the expertise level . the precision degree reflects the reliability level of the participants and the exactitude degree reflects the knowledge level of the participants . we also analyze our story_separator_special_tag with the growing volume and demand for data , a major concern for an organization is to discover what data there actually is , what it contains and how it is being used and by whom . the amount of data and the disparate systems used to handle this data increase in their number and complexity every year and unifying these systems becomes more and more complex . in this work we describe an intelligent search engine system , specifically designed to tackle the problem of information retrieval and sharing in a large multifaceted organization , that already has many systems in place for each department , which is an integral part of a joint operational data platform ( odp ) for data exploration and processing . story_separator_special_tag searching an organization 's document repositories for experts provides a cost effective solution for the task of expert finding . we present two general strategies to expert searching given a document collection which are formalized using generative probabilistic models . the first of these directly models an expert 's knowledge based on the documents that they are associated with , whilst the second locates documents on topic , and then finds the associated expert . forming reliable associations is crucial to the performance of expert finding systems . consequently , in our evaluation we compare the different approaches , exploring a variety of associations along with other operational parameters ( such as topicality ) . using the trec enterprise corpora , we show that the second strategy consistently outperforms the first . a comparison against other unsupervised techniques reveals that our second model delivers excellent performance . story_separator_special_tag we present a new method for information retrieval using hidden markov models ( hmms ) . we develop a general framework for incorporating multiple word generation mechanisms within the same model . we then demonstrate that an extremely simple realization of this model substantially outperforms standard tf-idf ranking on both the trec-6 and trec-7 ad hoc retrieval tasks . we go on to present a novel method for performing blind feedback in the hmm framework , a more complex hmm that models bigram production , and several other algorithmic refinements . together , these methods form a state-of-the-art retrieval system that ranked among the best on the trec-7 ad hoc retrieval task . story_separator_special_tag community question answering ( cqa ) has become a popular service for users to ask and answer questions . in recent years , the efficiency of cqa service is hindered by a sharp increase of questions in the community . this paper is concerned with the problem of question routing .
question routing in cqa aims to route new questions to the eligible answerers who can give high quality answers . however , the traditional methods suffer from the following two problems : ( 1 ) word mismatch between the new questions and the users ' answering history ; ( 2 ) high variance in perceived answer quality . to solve the above two problems , this paper proposes a novel joint learning method by taking both word mismatch and answer quality into a unified framework for question routing . we conduct experiments on a large-scale real-world data set from yahoo ! answers . experimental results show that our proposed method significantly outperforms the traditional query likelihood language model ( qllm ) as well as state-of-the-art cluster-based language model ( cblm ) and category-sensitive query likelihood language model ( tcslm ) . story_separator_special_tag in this article , we propose a method for finding active experts for a new question in order to improve the effectiveness of a question routing process . by active expert for a given question , we mean those experts who are active during the time of its posting . the proposed method uses the query likelihood language model , and two new measures , activeness and answering intensity . we compare the performance of the proposed method with its baseline query likelihood language model . we use a real-world dataset , called history , downloaded from the yahoo ! answers web portal for this purpose . in every comparing scenario , the proposed method is found to outperform the corresponding baseline model . story_separator_special_tag we explore the relation between classical probabilistic models of information retrieval and the emerging language modeling approaches . it has long been recognized that the primary obstacle to effective performance of classical models is the need to estimate a relevance model : probabilities of words in the relevant class . we propose a novel technique for estimating these probabilities using the query alone . we demonstrate that our technique can produce highly accurate relevance models , addressing important notions of synonymy and polysemy . our experiments show relevance models outperforming baseline language modeling systems on trec retrieval and tdt tracking tasks . the main contribution of this work is an effective formal method for estimating a relevance model with no training data . story_separator_special_tag previous research on cluster-based retrieval has been inconclusive as to whether it does bring improved retrieval effectiveness over document-based retrieval . recent developments in the language modeling approach to ir have motivated us to re-examine this problem within this new retrieval framework . we propose two new models for cluster-based retrieval and evaluate them on several trec collections . we show that cluster-based retrieval can perform consistently across collections of realistic size , and significant improvements over document-based retrieval can be obtained in a fully automatic manner and without relevance information provided by humans . story_separator_special_tag enterprise corpora contain evidence of what employees work on and therefore can be used to automatically find experts on a given topic . we present a general approach for representing the knowledge of a potential expert as a mixture of language models from associated documents . first we retrieve documents given the expert 's name using a generative probabilistic technique and weight the retrieved documents according to an expert-specific posterior distribution . then we model the expert indirectly through the set of associated documents , which allows us to exploit their underlying structure and complex language features . experiments show that our method has excellent performance on the trec 2005 expert search task and that it effectively collects and combines evidence for expertise in a heterogeneous collection . story_separator_special_tag this paper investigates a ground-breaking incorporation of question category to question routing ( qr ) in community question answering ( cqa ) services . the incorporation of question category was designed to estimate answerer expertise for routing questions to potential answerers . two category-sensitive language models ( lms ) were developed and evaluated on large-scale real-world data sets . results demonstrated that higher accuracies of routing questions with lower computational costs were achieved , relative to traditional query likelihood lm ( qllm ) , state-of-the-art cluster-based lm ( cblm ) and the mixture of latent dirichlet allocation and qllm ( ldalm ) . story_separator_special_tag obtaining answers from community-based question answering ( cqa ) services is typically a lengthy process . in this light , the authors propose an algorithm that recommends answer providers . a two-step . story_separator_special_tag language modeling approaches to information retrieval are attractive and promising because they connect the problem of retrieval with that of language model estimation , which has been studied extensively in other application areas such as speech recognition . the basic idea of these approaches is to estimate a language model for each document , and to then rank documents by the likelihood of the query according to the estimated language model . a central issue in language model estimation is smoothing , the problem of adjusting the maximum likelihood estimator to compensate for data sparseness . in this article , we study the problem of language model smoothing and its influence on retrieval performance . we examine the sensitivity of retrieval performance to the smoothing parameters and compare several popular smoothing methods on different test collections . experimental results show that not only is the retrieval performance generally sensitive to the smoothing parameters , but also the sensitivity pattern is affected by the query type , with performance being more sensitive to smoothing for verbose queries than for keyword queries . verbose queries also generally require more aggressive smoothing to achieve optimal performance . this suggests that smoothing plays two story_separator_special_tag community question answering ( cqa ) has become a very popular web service to provide a platform for people to share knowledge . in current cqa services , askers post their questions to the system and wait for answerers to answer them passively . this procedure leads to several drawbacks . since new questions are presented to all users in the system , the askers can not expect some experts to answer their questions . meanwhile , answerers have to visit many questions and then pick out only a small part of them to answer . to overcome those drawbacks , a probabilistic framework is proposed to predict best answerers for new questions .
by tracking answerers ' answering history , interests of answerers are modeled with the mixture of the language model and the latent dirichlet allocation model . user activity and authority information is also taken into consideration . experimental results show the proposed method can effectively push new questions to the best answerers . story_separator_special_tag community question answering ( cqa ) websites provide a rapidly growing source of information in many areas . this rapid growth , while offering new opportunities , puts forward new challenges . in most cqa implementations there is little effort in directing new questions to the right group of experts . this means that experts are not provided with questions matching their expertise , and therefore new matching questions may be missed and not receive a proper answer . we focus on finding experts for a newly posted question . we investigate the suitability of two statistical topic models for solving this issue and compare these methods against more traditional information retrieval approaches . we show that for a dataset constructed from the stackoverflow website , these topic models outperform other methods in retrieving a candidate set of best experts for a question . we also show that the segmented topic model gives consistently better performance compared to the latent dirichlet allocation model . story_separator_special_tag probabilistic latent semantic indexing is a novel approach to automated document indexing which is based on a statistical latent class model for factor analysis of count data . fitted from a training corpus of text documents by a generalization of the expectation maximization algorithm , the utilized model is able to deal with domain-specific synonymy as well as with polysemous words . in contrast to standard latent semantic indexing ( lsi ) by singular value decomposition , the probabilistic variant has a solid statistical foundation and defines a proper generative data model . retrieval experiments on a number of test collections indicate substantial performance gains over direct term matching methods as well as over lsi . in particular , the combination of models with different dimensionalities has proven to be advantageous . story_separator_special_tag a new method for automatic indexing and retrieval is described . the approach is to take advantage of implicit higher-order structure in the association of terms with documents ( semantic structure ) in order to improve the detection of relevant documents on the basis of terms found in queries . the particular technique used is singular-value decomposition , in which a large term-by-document matrix is decomposed into a set of ca . 100 orthogonal factors from which the original matrix can be approximated by linear combination . documents are represented by ca . 100 item vectors of factor weights . queries are represented as pseudo-document vectors formed from weighted combinations of terms , and documents with supra-threshold cosine values are returned . initial tests find this completely automatic method for retrieval to be promising . story_separator_special_tag with the fast development of web 2.0 , user-centric publishing and knowledge management platforms , such as wiki , blogs , and q & a systems attract a large number of users . given the availability of the huge amount of meaningful user generated content , incremental model-based recommendation techniques can be employed to improve users ' experience using automatic recommendations .
in this paper , we propose an incremental recommendation algorithm based on probabilistic latent semantic analysis ( plsa ) . the proposed algorithm can consider not only the users ' long-term and short-term interests , but also users ' negative and positive feedback . we compare the proposed method with several baseline methods using a real-world question & answer website called wenda . experiments demonstrate both the effectiveness and the efficiency of the proposed methods . story_separator_special_tag question recommendation that automatically recommends a new question to suitable users to answer is an appealing and challenging problem in the research area of community question answering ( cqa ) . unlike in general recommender systems where a user has only a single role , each user in cqa can play two different roles ( dual roles ) simultaneously : as an asker and as an answerer . to the best of our knowledge , this paper is the first to systematically investigate the distinctions between the two roles and their different influences on the performance of question recommendation in cqa . moreover , we propose a dual role model ( drm ) to model the dual roles of users effectively . with different independence assumptions , two variants of drm are achieved . finally , we present the drm based approach to question recommendation which provides a mechanism for naturally integrating the user relation between the answerer and the asker with the content relevance between the answerer and the question into a unified probabilistic framework . experiments using real-world data crawled from yahoo ! answers show that : ( 1 ) there are evident distinctions between the two story_separator_special_tag we propose a generative model for text and other collections of discrete data that generalizes or improves on several previous models including naive bayes/unigram , mixture of unigrams [ 6 ] , and hofmann 's aspect model , also known as probabilistic latent semantic indexing ( plsi ) [ 3 ] . in the context of text modeling , our model posits that each document is generated as a mixture of topics , where the continuous-valued mixture proportions are distributed as a latent dirichlet random variable . inference and learning are carried out efficiently via variational algorithms . we present empirical results on applications of this model to problems in text modeling , collaborative filtering , and text classification . story_separator_special_tag documents come naturally with structure : a section contains paragraphs which themselves contain sentences ; a blog page contains a sequence of comments and links to related blogs . structure , of course , implies something about shared topics . in this paper we take the simplest form of structure , a document consisting of multiple segments , as the basis for a new form of topic model . to make this computationally feasible , and to allow the form of collapsed gibbs sampling that has worked well to date with topic models , we use the marginalized posterior of a two-parameter poisson-dirichlet process ( or pitman-yor process ) to handle the hierarchical modelling . experiments using either paragraphs or sentences as segments show the method significantly outperforms standard topic models on either whole document or segment , and previous segmented models , based on the held-out perplexity measure .
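the topic-model approaches above ( plsa , lda , and the segmented variants ) share a common routing recipe : learn a low-dimensional topic representation , profile each answerer from their answering history , and match new questions against those profiles . the sketch below is a minimal , hedged illustration of that recipe using scikit-learn 's lda implementation ; the toy corpus , the user labels , and the cosine-similarity matching step are invented for illustration and are not a reconstruction of any single paper 's model .

```python
# a minimal sketch of topic-model-based question routing : fit lda on past
# answers , represent each answerer by the mean topic mixture of their
# answering history , and rank answerers for a new question by similarity
# of topic mixtures . the toy corpus is hypothetical .
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

answers = [
    "how do i join two tables in sql",            # answered by user a
    "sql index speeds up table scans",            # user a
    "css selector for the first child element",   # user b
    "center a div with css flexbox",              # user b
]
answerer = ["a", "a", "b", "b"]

vec = CountVectorizer()
X = vec.fit_transform(answers)
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)
doc_topics = lda.transform(X)                     # per-answer topic mixtures

# profile of each user = mean topic mixture of the answers they wrote
users = sorted(set(answerer))
profiles = {u: doc_topics[[i for i, a in enumerate(answerer) if a == u]].mean(axis=0)
            for u in users}

new_q = vec.transform(["which sql query lists all tables"])
q_topics = lda.transform(new_q)[0]

def cos(p, q):
    # cosine similarity between two topic-mixture vectors
    return float(np.dot(p, q) / (np.linalg.norm(p) * np.linalg.norm(q)))

ranking = sorted(users, key=lambda u: cos(profiles[u], q_topics), reverse=True)
print(ranking)  # with enough data , user "a" should rank first for an sql question
```

the abstracts above typically replace the similarity step with a query likelihood language model , smoothing , or additional activity and authority signals .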
story_separator_special_tag the two-parameter poisson-dirichlet distribution , denoted $\mathsf{PD}(\alpha,\theta)$ , is a probability distribution on the set of decreasing positive sequences with sum 1 . the usual poisson-dirichlet distribution with a single parameter $\theta$ , introduced by kingman , is $\mathsf{PD}(0,\theta)$ . known properties of $\mathsf{PD}(0,\theta)$ , including the markov chain description due to vershik , shmidt and ignatov , are generalized to the two-parameter case . the size-biased random permutation of $\mathsf{PD}(\alpha,\theta)$ is a simple residual allocation model proposed by engen in the context of species diversity , and rediscovered by perman and the authors in the study of excursions of brownian motion and bessel processes . for $0 < \alpha < 1$ , $\mathsf{PD}(\alpha,0)$ is the asymptotic distribution of ranked lengths of excursions of a markov chain away from a state whose recurrence time distribution is in the domain of attraction of a stable law of index $\alpha$ story_separator_special_tag community question answering ( cqa ) sites are becoming an increasingly important source of information where users can share knowledge on various topics . although these platforms bring new opportunities for users to seek help or provide solutions , they also pose many challenges with the ever growing size of the community . the sheer number of questions posted every day motivates the problem of routing questions to the appropriate users who can answer them . in this paper , we propose an approach to predict the best answerer for a new question on a cqa site . our approach considers both user interest and user expertise relevant to the topics of the given question . a user 's interests on various topics are learned by applying topic modeling to previous questions answered by the user , while the user 's expertise is learned by leveraging the collaborative voting mechanism of cqa sites . we have applied our model on a dataset extracted from stackoverflow , one of the biggest cqa sites . the results show that our approach outperforms the tf-idf based approach . story_separator_special_tag web-based communities have become important places for people to seek and share expertise . we find that networks in these communities typically differ in their topology from other online networks such as the world wide web . systems targeted to augment web-based communities by automatically identifying users with expertise , for example , need to adapt to the underlying interaction dynamics . in this study , we analyze the java forum , a large online help-seeking community , using social network analysis methods . we test a set of network-based ranking algorithms , including pagerank and hits , on this large-size social network in order to identify users with high expertise . we then use simulations to identify a small number of simple simulation rules governing the question-answer dynamic in the network . these simple rules not only replicate the structural characteristics and algorithm performance on the empirically observed java forum , but also allow us to evaluate how other algorithms may perform in communities with different characteristics . we believe this approach will be fruitful for practical algorithm design and implementation for online expertise-sharing communities . story_separator_special_tag new types of document collections are being developed by various web services .
the service providers keep track of non-textual features such as click counts . in this paper , we present a framework to use non-textual features to predict the quality of documents . we also show our quality measure can be successfully incorporated into the language modeling-based retrieval model . we test our approach on a collection of question and answer pairs gathered from a community-based question answering service where people ask and answer questions . experimental results using our quality measure show a significant improvement over our baseline . story_separator_special_tag the explosive growth and the widespread accessibility of the web have led to a surge of research activity in the area of information retrieval on the world wide web . the seminal papers of kleinberg [ 1998 , 1999 ] and brin and page [ 1998 ] introduced link analysis ranking , where hyperlink structures are used to determine the relative authority of a web page and produce improved algorithms for the ranking of web search results . in this article we work within the hubs and authorities framework defined by kleinberg and we propose new families of algorithms . two of the algorithms we propose use a bayesian approach , as opposed to the usual algebraic and graph theoretic approaches . we also introduce a theoretical framework for the study of link analysis ranking algorithms . the framework allows for the definition of specific properties of link analysis ranking algorithms , as well as for comparing different algorithms . we study the properties of the algorithms that we define , and we provide an axiomatic characterization of the indegree heuristic which ranks each node according to the number of incoming links . we conclude the article with an extensive story_separator_special_tag the network structure of a hyperlinked environment can be a rich source of information about the content of the environment , provided we have effective means for understanding it . we develop a set of algorithmic tools for extracting information from the link structures of such environments , and report on experiments that demonstrate their effectiveness in a variety of contexts on the world wide web . the central issue we address within our framework is the distillation of broad search topics , through the discovery of authoritative information sources on such topics . we propose and test an algorithmic formulation of the notion of authority , based on the relationship between a set of relevant authoritative pages and the set of hub pages that join them together in the link structure . our formulation has connections to the eigenvectors of certain matrices associated with the link graph ; these connections in turn motivate additional heuristics for link-based analysis . story_separator_special_tag question-answer portals such as naver and yahoo ! answers are quickly becoming rich sources of knowledge on many topics which are not well served by general web search engines . unfortunately , the quality of the submitted answers is uneven , ranging from excellent detailed answers to snappy and insulting remarks or even advertisements for commercial content . furthermore , user feedback for many topics is sparse , and can be insufficient to reliably identify good answers from the bad ones . hence , estimating the authority of users is a crucial task for this emerging domain , with potential applications to answer ranking , spam detection , and incentive mechanism design .
we present an analysis of the link structure of a general-purpose question answering community to discover authoritative users , and promising experimental results over a dataset of more than 3 million answers from a popular community qa site . we also describe structural differences between question topics that correlate with the success of link analysis for authority discovery . story_separator_special_tag question-answer portals such as naver and yahoo ! answers are growing in popularity . however , despite the increased popularity , the quality of answers is uneven , and while some users usually provide good answers , many others often provide bad answers . hence , estimating the authority , or the expected quality of users , is a crucial task for this emerging domain , with potential applications to answer ranking and to incentive mechanism design . we adapt a powerful link analysis methodology from the web domain as a first step towards estimating authority in question answer portals . our experimental results over more than 3 million answers from yahoo ! answers are promising , and warrant further exploration along the lines outlined in this poster . story_separator_special_tag in the original pagerank algorithm for improving the ranking of search-query results , a single pagerank vector is computed , using the link structure of the web , to capture the relative `` importance '' of web pages , independent of any particular search query . to yield more accurate search results , we propose computing a set of pagerank vectors , biased using a set of representative topics , to capture more accurately the notion of importance with respect to a particular topic . by using these ( precomputed ) biased pagerank vectors to generate query-specific importance scores for pages at query time , we show that we can generate more accurate rankings than with a single , generic pagerank vector . for ordinary keyword search queries , we compute the topic-sensitive pagerank scores for pages satisfying the query using the topic of the query keywords . for searches done in context ( e.g. , when the search query is performed by highlighting words in a web page ) , we compute the topic-sensitive pagerank scores using the topic of the context in which the query appeared . story_separator_special_tag stack overflow is a highly successful community question answering ( cqa ) service for software developers with more than three million users and more than ten thousand posts per day . the large volume of questions makes it difficult for users to find questions that they are interested in answering . in this paper , we propose a number of approaches to predict who will answer a new question using the characteristics of the question ( i.e . topic ) and users ( i.e . reputation ) , and the social network of stack overflow users ( i.e . interested in the same topic ) . specifically , our approach aims to identify a group of users ( candidates ) who have the potential to answer a new question by using a feature-based prediction approach and a social network based prediction approach . we develop predictive models to predict whether an identified candidate answers a new question . this prediction helps motivate the knowledge exchange in the community by routing relevant questions to potential answerers .
the evaluation results demonstrate the effectiveness of our predictive models , achieving 44 % precision , 59 % recall , and 49 % f-measure ( average story_separator_special_tag today , when searching for information on the www , one usually performs a query through a term-based search engine . these engines return , as the query 's result , a list of web pages whose contents match the query . for broad-topic queries , such searches often result in a huge set of retrieved documents , many of which are irrelevant to the user . however , much information is contained in the link-structure of the www . information such as which pages are linked to others can be used to augment search algorithms . in this context , jon kleinberg introduced the notion of two distinct types of web pages : hubs and authorities . kleinberg argued that hubs and authorities exhibit a mutually reinforcing relationship : a good hub will point to many authorities , and a good authority will be pointed at by many hubs . in light of this , he devised an algorithm aimed at finding authoritative pages . we present salsa , a new stochastic approach for link-structure analysis , which examines random walks on graphs derived from the link-structure . we show that both salsa and kleinberg 's mutual reinforcement approach story_separator_special_tag as the web has evolved into a data-rich repository , with the standard `` page view , '' current search engines are becoming increasingly inadequate for a wide range of query tasks . while we often search for various data `` entities '' ( e.g. , phone number , paper pdf , date ) , today 's engines only take us indirectly to pages . while entities appear in many pages , current engines only find each page individually . toward searching directly and holistically for finding information of finer granularity , we study the problem of entity search , a significant departure from traditional document retrieval . we focus on the core challenge of ranking entities , by distilling its underlying conceptual model , the impression model , and developing a probabilistic ranking framework , entityrank , that is able to seamlessly integrate both local and global information in ranking . we evaluate our online prototype over a 2tb web corpus , and show that entityrank performs effectively . story_separator_special_tag this paper focuses on the problem of identifying influential users of micro-blogging services . twitter , one of the most notable micro-blogging services , employs a social-networking model called `` following '' , in which each user can choose who she wants to `` follow '' to receive tweets from without requiring the latter to give permission first . in a dataset prepared for this study , it is observed that ( 1 ) 72.4 % of the users in twitter follow more than 80 % of their followers , and ( 2 ) 80.5 % of the users have 80 % of users they are following follow them back . our study reveals that the presence of `` reciprocity '' can be explained by the phenomenon of homophily . based on this finding , twitterrank , an extension of the pagerank algorithm , is proposed to measure the influence of users in twitter . twitterrank measures the influence taking both the topical similarity between users and the link structure into account . experimental results show that twitterrank outperforms the one twitter currently uses and other related algorithms , including the original pagerank and topic-sensitive pagerank .
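both topic-sensitive pagerank and twitterrank above bias the random-surfer teleportation toward a topic-specific set of nodes ; a minimal power-iteration sketch of that idea follows , with the graph representation , damping factor , and dangling-node handling as illustrative assumptions rather than details of either paper .

```python
# topic-biased pagerank by power iteration; teleportation mass is restricted
# to nodes associated with the topic rather than spread uniformly.
def topic_pagerank(graph, topic_nodes, damping=0.85, iterations=100):
    """graph: dict node -> list of out-links; topic_nodes: non-empty set for biased teleport."""
    nodes = set(graph) | {v for outs in graph.values() for v in outs}
    teleport = {n: 1.0 / len(topic_nodes) if n in topic_nodes else 0.0 for n in nodes}
    rank = {n: 1.0 / len(nodes) for n in nodes}
    for _ in range(iterations):
        new_rank = {n: (1 - damping) * teleport[n] for n in nodes}
        for u in nodes:
            outs = graph.get(u, [])
            if outs:
                share = damping * rank[u] / len(outs)
                for v in outs:
                    new_rank[v] += share
            else:
                # dangling node: return its mass to the topic set
                for v in nodes:
                    new_rank[v] += damping * rank[u] * teleport[v]
        rank = new_rank
    return rank
```

precomputing one such vector per representative topic , then mixing them per query , is the query-time strategy the topic-sensitive pagerank abstract describes .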
story_separator_special_tag the field of digital libraries ( dls ) coalesced in 1994 : the first digital library conferences were held that year , awareness of the world wide web was accelerating , and the national science foundation awarded $ 24 million ( u.s. ) for the digital library initiative ( dli ) . in this paper we examine the state of the dl domain after a decade of activity by applying social network analysis to the co-authorship network of the past acm , ieee , and joint acm/ieee digital library conferences . we base our analysis on a common binary undirected network model to represent the co-authorship network , and from it we extract several established network measures . we also introduce a weighted directed network model to represent the co-authorship network , for which we define authorrank as an indicator of the impact of an individual author in the network . the results are validated against conference program committee members in the same period . the results show clear advantages of pagerank and authorrank over degree , closeness and betweenness centrality metrics . we also investigate the amount and nature of international participation in joint conference on digital story_separator_special_tag question-answer forums ( qaf ) are significant platforms for disseminating informal information and play an important role in problem solving and learning . expert identification still has some limitations , and link analysis methods do not consider the community dimension . in this paper an authority analysis approach for identifying experts is proposed . this approach combines overlapping community detection ( ocd ) algorithms with ranking methods to compute the nodes ' expertise level in qafs . firstly , the graph resulting from a specific search query is computed and an ocd algorithm is applied to it . after identifying clusters of nodes , we change the updating rules of the original hyperlink-induced topic search ( hits ) and pagerank algorithms to take into account the effect of intra-cluster links and extra-cluster connections . people who are inside or overlapping a community possess a higher vision of the community 's context than nodes which are outside . we evaluated the proposed overlapping community-aware ranking algorithms and compared them with baseline approaches on online forums . results indicate that ocd improves expert identification accuracy and relevancy . story_separator_special_tag in this paper we propose a new two-phase algorithm for overlapping community detection ( ocd ) in social networks . in the first phase , called disassortative degree mixing , we identify nodes with high degrees through a random walk process on the row-normalized disassortative matrix representation of the network . in the second phase , we calculate how closely each node of the network is bound to the leaders via a cascading process called network coordination game . we implemented the algorithm and four additional ones as a web service on a federated peer-to-peer infrastructure . comparative test results for small and big real world networks demonstrated the correct identification of leaders , high precision and good time complexity . the web service is available as open source software . story_separator_special_tag we consider the problem of identifying authoritative users in yahoo ! answers . a common approach is to use link analysis techniques in order to provide a ranked list of users based on their degree of authority .
a major problem for such an approach is determining how many users should be chosen as authoritative from a ranked list . to address this problem , we propose a method for automatic identification of authoritative actors . in our approach , we propose to model the authority scores of users as a mixture of gamma distributions . the number of components in the mixture is estimated by the bayesian information criterion ( bic ) while the parameters of each component are estimated using the expectation-maximization ( em ) algorithm . this method allows us to automatically discriminate between authoritative and non-authoritative users . the suitability of our proposal is demonstrated in an empirical study using datasets from yahoo ! answers . story_separator_special_tag let d be a database of n objects where each object has m fields . the objects are given in m sorted lists ( where the ith list is sorted according to the ith field ) . our goal is to find the top k objects according to a monotone aggregation function t , while minimizing access to the lists . the problem arises in several contexts . in particular , fagin ( jcss 1999 ) considered it for the purpose of aggregating information in a multimedia database system . we are interested in instance optimality , i.e . that our algorithm will be as good as any other ( correct ) algorithm on any instance . we provide and analyze several instance optimal algorithms for the task , with various access costs and models . story_separator_special_tag how to improve authority ranking is a crucial research problem for expert finding . in this paper , we propose a novel framework for expert finding based on the authority information in the target category as well as the relevant categories . first , we develop a scalable method for measuring the relevancy between categories through topic models . then , we provide a link analysis approach for ranking user authority by considering the information in both the target category and the relevant categories . finally , the extensive experiments on two large-scale real-world q & a data sets clearly show that the proposed method outperforms the baseline methods with a significant margin . story_separator_special_tag the problem of expert finding aims at identifying experts with special skills or knowledge for some particular knowledge categories , i.e . knowledge domains , by ranking user authority . in recent years , this problem has become increasingly important with the popularity of knowledge sharing social networks . while many previous studies have examined authority ranking for expert finding , they focus on leveraging only the information in the target category for expert finding . it is not clear how to exploit the information in the relevant categories of a target category for improving the quality of authority ranking . to that end , in this paper , we propose an expert finding framework based on the authority information in the target category as well as the relevant categories . along this line , we develop a scalable method for measuring the relevancies between categories through topic models , which takes into consideration both content-based and user-interaction-based category similarities . also , we provide a topical link analysis approach , which is multiple-category-sensitive , for ranking user authority by considering the information in both the target category and the relevant categories .
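the sorted-lists abstract above is the setting of the classic threshold algorithm for monotone top-k aggregation ; the sketch below shows its characteristic interleaving of sorted access , random access , and the stopping threshold , with the data layout and tie handling as simplifying assumptions .

```python
import heapq

# threshold-algorithm sketch: do sorted access in parallel over the m lists,
# random-access the other fields of each newly seen object, and stop once the
# k-th best aggregate reaches the threshold formed from the last value seen
# in every list.
def threshold_topk(lists, scores, t, k):
    """lists: m lists of object ids, each sorted descending by that field;
    scores[i][obj]: value of field i for obj; t: monotone aggregation function."""
    m = len(lists)
    seen, top = {}, []  # top: min-heap of (aggregate, obj)
    for depth in range(max(len(lst) for lst in lists)):
        for i in range(m):
            if depth >= len(lists[i]):
                continue
            obj = lists[i][depth]
            if obj not in seen:
                seen[obj] = t([scores[j][obj] for j in range(m)])  # random access
                heapq.heappush(top, (seen[obj], obj))
                if len(top) > k:
                    heapq.heappop(top)
        # threshold: aggregate of the last value seen under sorted access in each list
        threshold = t([scores[i][lists[i][min(depth, len(lists[i]) - 1)]] for i in range(m)])
        if len(top) == k and top[0][0] >= threshold:
            break
    return sorted(top, reverse=True)
```

because t is monotone , no unseen object can beat the threshold , which is what justifies the early stop .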
finally , in story_separator_special_tag in this paper , we consider the problem of estimating the relative expertise score of users in community question and answering services ( cqa ) . previous approaches typically only utilize the explicit question answering relationship between askers and answerers and apply link analysis to address this problem . the implicit pairwise comparison between two users that is implied in the best answer selection is ignored . given a question and answering thread , it 's likely that the expertise score of the best answerer is higher than the asker 's and all other non-best answerers ' . the goal of this paper is to explore such pairwise comparisons inferred from best answer selections to estimate the relative expertise scores of users . formally , we treat each pairwise comparison between two users as a two-player competition with one winner and one loser . two competition models are proposed to estimate user expertise from pairwise comparisons . using the ntcir-8 cqa task data with 3 million questions and introducing answer quality prediction based evaluation metrics , the experimental results show that the pairwise comparison based competition model significantly outperforms link analysis based approaches ( pagerank and hits ) and pointwise story_separator_special_tag question and answer ( qa ) smartq incorporates a lightweight spammer detection method to identify potential spammers . in order to reduce the loads of experts , we propose a strategy to recommend suggested answers from similar questions to each new question . our trace-driven simulation on peersim demonstrates the effectiveness of smartq in providing good user experience . we then develop a real application of smartq and deploy it for use in a student group in clemson university . the user feedback shows that smartq can provide high-quality answers for users in a community . story_separator_special_tag question answering websites are becoming an ever more popular knowledge sharing platform . on such websites , people may ask any type of question and then wait for someone else to answer the question . however , in this manner , askers may not obtain correct answers from appropriate experts . recently , various approaches have been proposed to automatically find experts in question answering websites . in this paper , we propose a novel hybrid approach to effectively find experts for the category of the target question in question answering websites . our approach considers user subject relevance , user reputation and authority of a category in finding experts . a user 's subject relevance denotes the relevance of a user 's domain knowledge to the target question . a user 's reputation is derived from the user 's historical question-answering records , while user authority is derived from link analysis . moreover , our proposed approach has been extended to develop a question dependent approach that considers the relevance of historical questions to the target question in deriving user domain knowledge , reputation and authority . we used a dataset obtained from yahoo ! answer taiwan to evaluate story_separator_special_tag community-based question answering ( cqa ) services such as stack overflow and quora face the challenge of matching unsolved questions with high-expertise users in order to obtain high-quality answers , a task called question routing . many existing methods try to tackle this by learning user models from structure and topic information , and suffer from the sparsity issue of cqa data .
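the pairwise-comparison abstract above treats each best-answer selection as a two-player competition ; its exact competition models are not reproduced here , but an elo-style rating update is one simple , hypothetical instance of the idea , with the k-factor and base rating as assumptions .

```python
# one simple instance of a competition model for best-answer selections:
# an elo-style update in which the best answerer "beats" the asker and
# every non-best answerer. k and base are illustrative hyperparameters.
def update_ratings(ratings, winner, losers, k=16.0, base=1500.0):
    for loser in losers:
        r_w = ratings.get(winner, base)
        r_l = ratings.get(loser, base)
        expected_w = 1.0 / (1.0 + 10 ** ((r_l - r_w) / 400.0))
        ratings[winner] = r_w + k * (1.0 - expected_w)
        ratings[loser] = r_l - k * (1.0 - expected_w)
    return ratings

ratings = {}
# each resolved thread yields pairwise wins for the best answerer
update_ratings(ratings, winner="u_best", losers=["u_asker", "u_other1", "u_other2"])
print(sorted(ratings.items(), key=lambda kv: -kv[1]))
```

after many threads , the ratings induce exactly the kind of relative expertise ordering the abstract compares against pagerank and hits .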
in this paper , we propose a novel question routing method from the viewpoint of knowledge graph embedding . we integrate topic representations with network structure into a unified knowledge graph question routing framework , named kgqr . the extensive experiments carried out on stack overflow data suggest that kgqr outperforms other state-of-the-art methods . story_separator_special_tag community question answering ( cqa ) services enable users to ask and answer questions . in these communities , there are typically a small number of experts amongst the large population of users . we study which questions a user selects for answering and show that experts prefer answering questions where they have a higher chance of making a valuable contribution . we term this preferential selection question selection bias and propose a mathematical model to estimate it . our results show that using gaussian classification models we can effectively distinguish experts from ordinary users over their selection biases . in order to estimate these biases , only a small amount of data per user is required , which makes an early identification of expertise a possibility . further , our study of bias evolution reveals that they do not show significant changes over time , indicating that they emanate from the intrinsic characteristics of users . story_separator_special_tag question answering communities ( qa ) are sustained by a handful of experts who provide a large number of high quality answers . identifying these experts during the first few weeks of their joining the community can be beneficial as it would allow community managers to take steps to develop and retain these potential experts . in this paper , we explore approaches to identify potential experts as early as within the first two weeks of their association with the qa . we look at users ' behavior and estimate their motivation and ability to help others . these qualities enable us to build classification and ranking models to identify users who are likely to become experts in the future . our results indicate that the current experts can be effectively identified from their early behavior . we asked community managers to evaluate the potential experts identified by our algorithm and their analysis revealed that quite a few of these users were already experts or on the path of becoming experts . our retrospective analysis shows that some of these potential experts had already left the community , highlighting the value of early identification and engagement . story_separator_special_tag community-based question and answering ( cqa ) services have brought users to a new era of knowledge dissemination by allowing users to ask questions and to answer other users ' questions . however , due to the fast increase in posted questions and the lack of an effective way to find interesting questions , there is a serious gap between posted questions and potential answerers . this gap may degrade a cqa service 's performance as well as reduce users ' loyalty to the system . to bridge the gap , we present a new approach to question routing , which aims at routing questions to participants who are likely to provide answers . we consider the problem of question routing as a classification task , and develop a variety of local and global features which capture different aspects of questions , users , and their relations . our experimental results obtained from an evaluation over the yahoo !
answers dataset demonstrate the high feasibility of question routing . we also perform a systematic comparison of how different types of features contribute to the final results and show that question-user relationship features play a key role in improving the overall performance story_separator_special_tag this paper focuses on the problem of question routing ( qr ) in community question answering ( cqa ) , which aims to route newly posted questions to the potential answerers who are most likely to answer them . traditional methods to solve this problem only consider the text similarity features between the newly posted question and the user profile , while ignoring the important statistical features , including the question-specific statistical features and the user-specific statistical features . moreover , traditional methods are based on unsupervised learning , which makes it difficult to introduce rich features into them . this paper proposes a general framework based on learning to rank concepts for qr . training sets consisting of triples ( q , asker , answerers ) are first collected . then , by introducing the intrinsic relationships between the asker and the answerers in each cqa session to capture the intrinsic labels/orders of the users about their expertise degree on the question q , two different methods , including the svm-based and rankingsvm-based methods , are presented to learn the models with different example creation processes from the training set . finally , the potential answerers are story_separator_special_tag we focus on detecting potential topical experts in community question answering platforms early on in their lifecycle . we use a semi-supervised machine learning approach . we extract three types of features : ( i ) textual , ( ii ) behavioral , and ( iii ) time-aware , which we use to predict whether a user will become an expert in the long term . we compare our method to a machine learning method based on a state-of-the-art method in expertise retrieval . results on data from stack overflow demonstrate the utility of adding behavioral and time-aware features to the baseline method , with a net improvement in accuracy of 26 % for very early detection of expertise . story_separator_special_tag in community question answering ( cqa ) forums , there is typically a small fraction of users who provide high-quality posts and earn a very high reputation status from the community . these top contributors are critical to the community since they drive the development of the site and attract traffic from internet users . identifying these individuals could be highly valuable , but this is not an easy task . unlike publication or social networks , most cqa sites lack information regarding peers , friends , or collaborators , which can be an important indicator signaling future success or performance . in this paper , we attempt to perform this analysis by extracting different sets of features to predict future contribution . the experiment covers 376,000 users who remain active in stack overflow for at least one year and together contribute more than 21 million posts . one of the highlights of our approach is that we can identify rising stars after short observations . our approach achieves high accuracy , 85 % , when predicting whether a user will become a top contributor after a few weeks of observation . as a slightly different problem in which we story_separator_special_tag yahoo ! answers is currently one of the most popular question answering systems .
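several of the abstracts above frame early expert identification and question routing as supervised classification over behavioral features ; the sketch below shows the general shape of such a pipeline , with the three features , the labels , and the synthetic data invented purely for illustration .

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# a minimal feature-based classifier in the spirit of the early-expertise
# work above; the behavioral features and the label rule are hypothetical.
rng = np.random.default_rng(0)
n_users = 200
X = np.column_stack([
    rng.poisson(5, n_users),   # answers posted in the first two weeks
    rng.random(n_users),       # fraction of answers voted up
    rng.random(n_users),       # topical focus of answered questions
])
y = (X[:, 0] * X[:, 1] > 3).astype(int)  # stand-in "became an expert" label

clf = LogisticRegression().fit(X, y)
print(clf.predict_proba(X[:3])[:, 1])  # probability of becoming an expert
```

in practice the label would come from the user 's long-term status ( e.g . reputation after a year ) , and ranking models can replace the classifier when a scored list of candidates is needed .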
we claim however that its user experience could be significantly improved if it could route the `` right question '' to the `` right user . '' indeed , while some users would rush answering a question such as `` what should i wear at the prom ? , '' others would be upset simply being exposed to it . we argue here that community question answering sites in general , and yahoo ! answers in particular , need a mechanism that would expose users to questions they can relate to and possibly answer . we propose here to address this need via a multi-channel recommender system technology for associating questions with potential answerers on yahoo ! answers . one novel aspect of our approach is exploiting a wide variety of content and social signals users regularly provide to the system and organizing them into channels . content signals relate mostly to the text and categories of questions and associated answers , while social signals capture the various user interactions with questions , such as asking , answering , voting , etc . we fuse and generalize known recommendation story_separator_special_tag community question answering ( cqa ) services enable their users to exchange knowledge in the form of questions and answers . these communities thrive as a result of a small number of highly active users , typically called experts , who provide a large number of high-quality useful answers . expert identification techniques enable community managers to take measures to retain the experts in the community . there is further value in identifying the experts during the first few weeks of their participation as it would allow measures to nurture and retain them . in this article we address two problems : ( a ) how to identify current experts in cqa ? and ( b ) how to identify users who have the potential of becoming experts in the future ( potential experts ) ? in particular , we propose a probabilistic model that captures the selection preferences of users based on the questions they choose for answering . the probabilistic model allows us to run machine learning methods for identifying experts and potential experts . our results over several popular cqa datasets indicate that experts differ considerably from ordinary users in their selection preferences ; enabling us to predict experts story_separator_special_tag the value of question answering ( q & a ) communities is dependent on members of the community finding the questions they are most willing and able to answer . this can be difficult in communities with a high volume of questions . much previous work has attempted to address this problem by recommending questions similar to those already answered . however , this approach disregards the question selection behaviour of the answerers and how it is affected by factors such as question recency and reputation . in this paper , we identify the parameters that correlate with such a behaviour by analysing the users ' answering patterns in a q & a community . we then generate a model to predict which question a user is most likely to answer next . we train learning to rank ( ltr ) models to predict question selections using various user , question and thread feature sets . we show that answering behaviour can be predicted with a high level of success , and highlight the particular features that influence users ' question selections .
story_separator_special_tag the quality measures used in information retrieval are particularly difficult to optimize directly , since they depend on the model scores only through the sorted order of the documents returned for a given query . thus , the derivatives of the cost with respect to the model parameters are either zero , or are undefined . in this paper , we propose a class of simple , flexible algorithms , called lambdarank , which avoids these difficulties by working with implicit cost functions . we describe lambdarank using neural network models , although the idea applies to any differentiable function class . we give necessary and sufficient conditions for the resulting implicit cost function to be convex , and we show that the general method has a simple mechanical interpretation . we demonstrate significantly improved accuracy , over a state-of-the-art ranking algorithm , on several datasets . we also show that lambdarank provides a method for significantly speeding up the training phase of that ranking algorithm . although this paper is directed towards ranking , the proposed method can be extended to any non-smooth and multivariate cost functions . story_separator_special_tag the paper is concerned with learning to rank , which is to construct a model or a function for ranking objects . learning to rank is useful for document retrieval , collaborative filtering , and many other applications . several methods for learning to rank have been proposed , which take object pairs as 'instances ' in learning . we refer to them as the pairwise approach in this paper . although the pairwise approach offers advantages , it ignores the fact that ranking is a prediction task on a list of objects . the paper postulates that learning to rank should adopt the listwise approach in which lists of objects are used as 'instances ' in learning . the paper proposes a new probabilistic method for the approach . specifically it introduces two probability models , respectively referred to as permutation probability and top k probability , to define a listwise loss function for learning . neural network and gradient descent are then employed as model and algorithm in the learning method . experimental results on information retrieval show that the proposed listwise approach performs better than the pairwise approach . story_separator_special_tag community question answering ( cqa ) is a popular online service for people asking and answering questions . recently , with the accumulation of users and content on cqa platforms , their answer quality has aroused wide concern . expert finding has been proposed as one way to address such a problem , which aims at finding suitable answerers who can give high-quality answers . in this paper , we formalize expert finding as a learning to rank task by leveraging the user feedback on answers ( i.e. , the votes of answers ) as the `` relevance '' labels . to achieve this task , we present a listwise learning to rank approach , which is referred to as listef . in the listef approach , realizing that questions in cqa are relatively short and usually have tags attached , we propose a tagword topic model ( ttm ) to derive high-quality topical representations of questions . based on ttm , we develop a competition-based user expertise extraction ( coupe ) method to capture user expertise features for given questions . we adopt the widely used listwise learning to rank method lambdamart to train the ranking function .
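the listwise abstract above defines its loss through permutation and top-k probabilities ; the widely used top-1 special case reduces to a cross-entropy between two softmax distributions , sketched below with the scores and vote-count labels as illustrative values .

```python
import numpy as np

# top-1 listwise loss in the listnet style: compare the softmax of the
# ground-truth relevance labels with the softmax of the model scores for
# one query's list of objects.
def softmax(x):
    e = np.exp(x - np.max(x))
    return e / e.sum()

def listnet_top1_loss(scores, labels):
    p_true = softmax(labels)
    p_model = softmax(scores)
    return -np.sum(p_true * np.log(p_model + 1e-12))

scores = np.array([2.0, 0.5, 1.0])   # model scores for three answers
labels = np.array([3.0, 0.0, 1.0])   # e.g. vote counts used as relevance
print(listnet_top1_loss(scores, labels))
```

because the loss is smooth in the scores , plain gradient descent applies , which is exactly the property the listwise abstract exploits and which sorted-order metrics lack .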
finally , for story_separator_special_tag community-based question answering ( cqa ) services are becoming popular as the public gets used to looking for help and obtaining information . existing cqa services try to recommend someone for answering new questions . on the other hand , people are allowed to exchange information and experience using various collaborative tools . it would be interesting to combine the two approaches to increase the reliability of recommending an answerer . thus , relying on semantically modeled traces , we propose a comprehensive approach that recommends an answerer in a collaborative environment . from a global point of view , this approach consists in evaluating users by their performance in the cqa services and the corresponding knowledge sharing activities in which they participated in a collaborative context . by modeling and analyzing users ' behavior , we assess the competency of an answerer in a particular collaborative context . story_separator_special_tag we address the problem of ranking question answerers according to their credibility , characterized here by the probability that a given question answerer ( user ) will be awarded a best answer on a question given the answerer 's question-answering history . this probability is considered to be a hidden variable that can only be estimated statistically from specific observations associated with the user , namely the number b of best answers awarded , associated with the number n of questions answered . the more specific problem addressed is the potentially high degree of uncertainty associated with such credibility estimates when they are based on small numbers of answers . we address this problem by a kind of bayesian smoothing . the credibility estimate will consist of a mixture of the overall population statistics and those of the specific user . the greater the number of questions asked , the greater will be the contribution of the specific user statistics relative to those of the overall population . we use the predictive stochastic complexity ( psc ) as an accuracy measure to evaluate several methods that can be used for the estimation . we compare our story_separator_special_tag as the netflix prize competition has demonstrated , matrix factorization models are superior to classic nearest neighbor techniques for producing product recommendations , allowing the incorporation of additional information such as implicit feedback , temporal effects , and confidence levels . story_separator_special_tag there are users who generate significant amounts of domain knowledge in online forums or community question and answer ( cqa ) websites . existing literature defines them as experts . these users attain such statuses by providing multiple relevant answers to the question askers . past works have focused on recommending relevant posts to these users . with the rise of web forums where certified experts answer questions , strategies that are tailored towards addressing the new type of experts will be beneficial . in this paper , we identify a new type of user called designated experts ( i.e. , users designated as domain experts by the web administrators ) . these are the experts who are guaranteed by web administrators to be experts in a given domain . our focus is on how we can capture the unique behavior of designated experts in an online domain . we have noticed designated experts have different behaviors compared to cqa experts .
in particular , unlike existing cqas , only one designated expert responds to any given thread . to capture this intuition , we introduce a regularized matrix factorization algorithm that models this behavior . our results story_separator_special_tag relational learning is concerned with predicting unknown values of a relation , given a database of entities and observed relations among entities . an example of relational learning is movie rating prediction , where entities could include users , movies , genres , and actors . relations encode users ' ratings of movies , movies ' genres , and actors ' roles in movies . a common prediction technique given one pairwise relation , for example a # users x # movies ratings matrix , is low-rank matrix factorization . in domains with multiple relations , represented as multiple matrices , we may improve predictive accuracy by exploiting information from one relation while predicting another . to this end , we propose a collective matrix factorization model : we simultaneously factor several matrices , sharing parameters among factors when an entity participates in multiple relations . each relation can have a different value type and error distribution ; so , we allow nonlinear relationships between the parameters and outputs , using bregman divergences to measure error . we extend standard alternating projection algorithms to our model , and derive an efficient newton update for the projection . furthermore , we story_separator_special_tag community question answering ( cqa ) sites provide us online platforms to post questions or answers . generally , there are a great number of questions waiting to be answered by expert users . however , most answerers are ordinary users with just basic background knowledge in certain areas . to help askers get their preferred answers , a set of possible expert users should be recommended . there have been some studies on expert recommendation in cqa ; the latest work models user expertise under topics , where each topic is learnt based on the content and tags of questions and answers . practically , such topics are too general , whereas question tags can be more informative and valuable than the topic of each question . in this paper , we study user expertise under tags . experimental analysis on a large data set from stack overflow demonstrates that our method performs better than the up-to-date method . story_separator_special_tag many existing approaches to collaborative filtering can neither handle very large datasets nor easily deal with users who have very few ratings . in this paper we present the probabilistic matrix factorization ( pmf ) model which scales linearly with the number of observations and , more importantly , performs well on the large , sparse , and very imbalanced netflix dataset . we further extend the pmf model to include an adaptive prior on the model parameters and show how the model capacity can be controlled automatically . finally , we introduce a constrained version of the pmf model that is based on the assumption that users who have rated similar sets of movies are likely to have similar preferences . the resulting model is able to generalize considerably better for users with very few ratings . when the predictions of multiple pmf models are linearly combined with the predictions of restricted boltzmann machines models , we achieve an error rate of 0.8861 , which is nearly 7 % better than the score of netflix 's own system .
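map estimation in the pmf model above amounts to minimizing squared error on the observed ratings plus l2 penalties on the factor matrices ; a plain sgd sketch follows , with the rank , learning rate , and regularization strength as illustrative hyperparameters .

```python
import numpy as np

# map estimation for probabilistic matrix factorization: minimize squared
# error on observed entries plus l2 penalties, here by plain sgd.
def pmf_sgd(ratings, n_users, n_items, rank=8, lr=0.01, reg=0.05, epochs=50):
    """ratings: list of (user, item, value) triples for observed entries."""
    rng = np.random.default_rng(0)
    U = 0.1 * rng.standard_normal((n_users, rank))
    V = 0.1 * rng.standard_normal((n_items, rank))
    for _ in range(epochs):
        for u, i, r in ratings:
            err = r - U[u] @ V[i]
            U[u] += lr * (err * V[i] - reg * U[u])
            V[i] += lr * (err * U[u] - reg * V[i])
    return U, V

U, V = pmf_sgd([(0, 0, 4.0), (0, 1, 1.0), (1, 0, 4.5)], n_users=2, n_items=2)
print(U[1] @ V[1])  # predicted value for an unobserved pair
```

the l2 terms correspond to the gaussian priors on the factors , which is why the same recipe scales linearly with the number of observed ratings .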
story_separator_special_tag in this paper , we address the problem of authority identification in community question answering ( cqa ) . most of the existing approaches attempt to identify authorities in cqa by means of link analysis techniques . however , these traditional techniques only consider the link structure while ignoring the topic information about the users , giving rise to an increasing problem of topic drift . to solve the problem of topic drift , we propose a topical ranking method , which is an extension of the pagerank algorithm , to identify authorities in cqa . compared to the traditional link analysis techniques , our proposed method is more effective because it measures the authority scores by taking into account both the link structure and the topic information . we conduct experiments on a real world data set from yahoo ! answers . experimental results show that our proposed method significantly outperforms the traditional link analysis techniques and achieves the state-of-the-art performance for authority identification in cqa . story_separator_special_tag expert finding is important to the development of community question answering websites and e-learning . in this study , we propose a topic-sensitive probabilistic model to estimate the user authority ranking for each question , which is based on the link analysis technique and topical similarities between users and questions . most of the existing approaches focus on the user relationship only . compared to the existing approaches , our method is more effective because we consider the link structure and the topical similarity simultaneously . we use a real-world data set from zhihu ( a famous cqa website in china ) to conduct experiments . experimental results show that our algorithm outperforms other algorithms in user authority ranking . story_separator_special_tag community question answering ( cqa ) websites such as quora and stackoverflow provide a new way of asking and answering questions which are not well served by general web search engines . with the huge volume and ever-increasing number of users and questions , effective strategies for ranking experts for different questions need to be proposed . in this paper , we first analyze the network structure of the cqa website . based on this analysis , we further propose an expert finding method , newhits , which considers the topical similarity of the users and adapts well to the features of cqa . then , we apply the newhits algorithm to user authority ranking . comparison experiments with stackoverflow data are conducted and the experimental results demonstrate that the method we proposed performs better than traditional link analysis methods in user authority ranking . story_separator_special_tag community question answering ( cqa ) services provide an open platform for people to share their knowledge and have attracted great attention for their rapidly increasing popularity . as more knowledge is shared in cqa , how to use the repository for solving new questions has become a crucial problem . in this paper , we tackle the problem by finding experts from the question answering history first and then recommending the appropriate experts to answer the new questions . we develop the topic-level expert learning ( tel ) model to find experts at the topic level in cqa . our proposed model incorporates link analysis into content analysis .
the main difference between tel and other generative models is that tel can automatically adjust and update the sampling parameters during iterations in order to better model the experts at the topic level . the experiments are conducted on datasets crawled from yahoo ! answers and the results show that our method can effectively find experts to answer new questions and can better predict best responders for new questions . our model achieves significant improvement over the baseline methods on multiple metrics . beyond these metric performances , tel converges fast within story_separator_special_tag in this paper , we address the problem of expert finding in community question answering ( cqa ) . most of the existing approaches attempt to find experts in cqa by means of link analysis techniques . however , these traditional techniques only consider the link structure while ignoring the topical similarity among users ( askers and answerers ) as well as user expertise and user reputation . in this study , we propose a topic-sensitive probabilistic model , which is an extension of the pagerank algorithm , to find experts in cqa . compared to the traditional link analysis techniques , our proposed method is more effective because it finds the experts by taking into account both the link structure and the topical similarity among users . we conduct experiments on a real world data set from yahoo ! answers . experimental results show that our proposed method significantly outperforms the traditional link analysis techniques and achieves the state-of-the-art performance for expert finding in cqa . story_separator_special_tag community question answering ( cqa ) websites , where people share expertise on open platforms , have become large repositories of valuable knowledge . to bring the best value out of these knowledge repositories , it is critically important for cqa services to know how to find the right experts , retrieve archived similar questions and recommend best answers to new questions . to tackle this cluster of closely related problems in a principled approach , we proposed the topic expertise model ( tem ) , a novel probabilistic generative model with a gmm hybrid , to jointly model topics and expertise by integrating textual content model and link structure analysis . based on tem results , we proposed cqarank to measure user interests and expertise score under different topics . leveraging the question answering history based on long-term community reviews and voting , our method could find experts with both similar topical preference and high topical expertise . experiments carried out on stack overflow data , the largest cqa focused on computer programming , show that our method achieves significant improvement over existing methods on multiple metrics . story_separator_special_tag community question answering ( cqa ) services , which enable users to ask and answer questions , have become popular on the web . however , many questions usually can not be resolved by appropriate answerers effectively . to address this problem , we present a novel approach to recommend users who are most likely to be able to answer a new question . differently from previous methods , this approach simultaneously utilizes the inherent semantic relations among asker , question and answerer , and performs the answerer recommendation task based on tensor factorization .
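the tensor-factorization abstract above scores asker-question-answerer triples jointly ; a cp-style sketch is given below in which a triple 's score is the trilinear product of three embeddings , with the shapes , learning rate , and the omission of negative sampling as simplifying assumptions .

```python
import numpy as np

# a cp-style tensor factorization sketch for asker-question-answerer triples:
# the score of a triple is the trilinear product of the three embeddings.
# observed triples carry a target value; negative sampling is omitted.
def cp_sgd(triples, n_askers, n_questions, n_answerers, rank=8, lr=0.05, epochs=100):
    rng = np.random.default_rng(0)
    A = 0.1 * rng.standard_normal((n_askers, rank))
    Q = 0.1 * rng.standard_normal((n_questions, rank))
    W = 0.1 * rng.standard_normal((n_answerers, rank))
    for _ in range(epochs):
        for a, q, w, target in triples:
            pred = np.sum(A[a] * Q[q] * W[w])
            err = target - pred
            A[a] += lr * err * Q[q] * W[w]
            Q[q] += lr * err * A[a] * W[w]
            W[w] += lr * err * A[a] * Q[q]
    return A, Q, W

A, Q, W = cp_sgd([(0, 0, 0, 1.0), (0, 1, 1, 1.0)], n_askers=1, n_questions=2, n_answerers=2)
print(np.sum(A[0] * Q[0] * W[1]))  # score for recommending answerer 1 on question 0
```

ranking answerers for a new question then reduces to scoring all candidate third-mode indices against the asker and question embeddings .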
experimental results on two real-world cqa datasets show that the proposed method is able to recommend appropriate answerers for new questions and outperforms other state-of-the-art approaches . story_separator_special_tag social community detection is a growing field of interest in the area of social network applications , and many approaches have been developed , including graph partitioning , latent space models , block models and spectral clustering . most existing work purely focuses on network structure information which is , however , often sparse , noisy and lacking in interpretability . to improve the accuracy and interpretability of community discovery , we propose to infer users ' social communities by incorporating their spatiotemporal data and semantic information . technically , we propose a unified probabilistic generative model , user-community-geo-topic ( ucgt ) , to simulate the generative process of communities as a result of network proximities , spatiotemporal co-occurrences and semantic similarity . with a well-designed multi-component model structure and a parallel inference implementation to leverage the power of multicores and clusters , our ucgt model is expressive while remaining efficient and scalable to growing large-scale geo-social networking data . we deploy ucgt to two application scenarios of user behavior prediction : check-in prediction and social interaction prediction . extensive experiments on two large-scale geo-social networking datasets show that ucgt achieves better performance than existing state-of-the-art comparison methods . story_separator_special_tag community question answering ( qa ) portals contain questions and answers contributed by hundreds of millions of users . these databases of questions and answers are of great value if they can be used directly to answer questions from any user . in this research , we address this collaborative qa task by drawing knowledge from the crowds in community qa portals such as yahoo ! answers . despite their popularity , it is well known that answers in community qa portals have unequal quality . we therefore propose a quality-aware framework to design methods that select answers from a community qa portal considering answer quality in addition to answer relevance . besides using answer features for determining answer quality , we introduce several other quality-aware qa methods using answer quality derived from the expertise of answerers . such expertise can be question independent or question dependent . we evaluate our proposed methods using a database of 95k questions and 537k answers obtained from yahoo ! answers . our experiments have shown that answer quality can improve qa performance significantly . furthermore , question dependent expertise based methods are shown to outperform methods using answer features only . it is story_separator_special_tag community question answering ( cqa ) service enables its users to exchange knowledge in the form of questions and answers . by allowing the users to contribute knowledge , cqa not only satisfies the question askers but also provides valuable references to other users with similar queries . due to a large volume of questions , not all questions get fully answered . as a result , it can be useful to route a question to a potential answerer . in this paper , we present a question routing scheme which takes into account the answering , commenting and voting propensities of the users .
unlike prior work which focuses on routing a question to the most desirable expert , we focus on routing it to a group of users who would be willing to collaborate and provide useful answers to that question . through empirical evidence , we show that more answers and comments are desirable for improving the lasting value of a question-answer thread . as a result , our focus is on routing a question to a team of compatible users . we propose a recommendation model that takes into account the compatibility , topical expertise story_separator_special_tag cqa sites are dynamic environments where new users join constantly , or the activity levels or interests of existing users change over time . classic expertise estimation approaches , which were mostly developed for static datasets , can not effectively model changing expertise and interest levels in these sites . this paper proposes how available temporal information in cqa sites can be used to make these existing approaches more effective for expertise related applications like question routing . adapting two widely used expert finding approaches for question routing returned consistent and statistically significant improvements over the original approaches , which shows the effectiveness of the proposed temporal modeling . story_separator_special_tag expert finding for question answering is a challenging problem in community-based question answering ( cqa ) systems , arising in many real applications such as question routing and identification of best answers . in order to provide high-quality experts , many existing approaches learn the user model from users ' past question-answering activities in cqa systems . however , the past activities of users in most cqa systems are rather limited , and thus the user model may not be well inferred in practice . in this paper , we consider the problem of expert finding from the viewpoint of missing value estimation . we then employ users ' social networks for inferring the user model , and thus improve the performance of expert finding in cqa systems . in addition , we develop a novel graph-regularized matrix completion algorithm for inferring the user model . we further develop two efficient iterative procedures , grmc-egm and grmc-agm , to solve the optimization problem . grmc-egm utilizes the extended gradient method ( egm ) , while grmc-agm applies the accelerated proximal gradient search method ( agm ) , for the optimization . we evaluate our methods on the well-known question answering system quora , story_separator_special_tag expert finding for question answering is a challenging problem in community-based question answering ( cqa ) systems such as quora . the success of expert finding is important to many real applications such as question routing and identification of best answers . currently , many approaches to expert finding rely heavily on the past question-answering activities of the users in order to build user models . however , the past question-answering activities of most users in real cqa systems are rather limited . we call the users who have only answered a small number of questions the cold-start users . using the existing approaches , we find that it is difficult to address the cold-start issue in finding the experts . story_separator_special_tag large general-purpose community question-answering sites are becoming popular as a new venue for generating knowledge and helping users in their information needs .
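the grmc abstracts above combine completion of a sparsely observed user model with a graph regularizer built from users ' social networks ; the gradient step below sketches that shared objective on a factorized model , and is not the papers ' egm or agm solver .

```python
import numpy as np

# one gradient step on a factorized stand-in for graph-regularized matrix
# completion: squared error on observed entries of a user-by-topic expertise
# matrix plus a laplacian penalty tr(U^T L U) that pulls socially connected
# users toward similar rows.
def grmc_step(U, V, observed, laplacian, lr=0.01, reg=0.1):
    """observed: list of (user, topic, value); laplacian: user-user graph laplacian."""
    grad_U = reg * (laplacian @ U)   # gradient of the graph term
    grad_V = np.zeros_like(V)
    for u, t, r in observed:
        err = U[u] @ V[t] - r
        grad_U[u] += err * V[t]
        grad_V[t] += err * U[u]
    return U - lr * grad_U, V - lr * grad_V

rng = np.random.default_rng(0)
U, V = rng.standard_normal((3, 2)), rng.standard_normal((4, 2))
L = np.array([[1, -1, 0], [-1, 2, -1], [0, -1, 1]], dtype=float)  # path graph on 3 users
U, V = grmc_step(U, V, [(0, 1, 2.0), (2, 3, 4.0)], L)
```

the graph term is what lets cold-start users , who contribute few observed entries , inherit a usable row from their neighbors .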
in this paper we analyze the characteristics of knowledge generation and user participation behavior in the largest question-answering online community in south korea , naver knowledge-in . we collected and analyzed over 2.6 million question/answer pairs from fifteen categories between 2002 and 2007 , and have interviewed twenty-six users to gain insights into their motivations , roles , usage and expertise . we find altruism , learning , and competency are frequent motivations for top answerers to participate , but that participation is often highly intermittent . using a simple measure of user performance , we find that higher levels of participation correlate with better performance . we also observe that users are motivated in part through a point system to build a comprehensive knowledge database . these and other insights have significant implications for future knowledge generating online communities . story_separator_special_tag identifying the rising stars is an important but difficult human resource exercise in all organizations . rising stars are those who currently have relatively low profiles but may eventually emerge as prominent contributors to the organizations . in this paper , we propose a novel pubrank algorithm to identify rising stars in research communities by mining the social networks of researchers in terms of their co-authorship relationships . experimental results show that the pubrank algorithm can be used to effectively mine the bibliography networks to search for rising stars in the research communities . story_separator_special_tag this paper addresses the problem of finding rising stars in academic social networks . rising stars are the authors which have a low research profile in the beginning of their career but may become prominent contributors in the future . an effort for finding rising stars named pubrank is proposed , which considers mutual influence and static ranking of conferences or journals . in this work an improvement of pubrank is proposed by considering authors ' contribution-based mutual influence and dynamic publication venue scores . experimental results show that the proposed enhancements are useful and that better rising stars are found by our proposed methods in terms of average-citation-based performance evaluation . the effect of the parameter alpha and the damping factor is also studied in detail . story_separator_special_tag finding relevant experts in a specific field is often crucial for consulting , both in industry and in academia . the aim of this paper is to address the expert-finding task in a real world academic field . we present three models for expert finding based on the large-scale dblp bibliography and google scholar for data supplementation . the first , a novel weighted language model , models an expert candidate based on the relevance and importance of associated documents by introducing a document prior probability , and achieves much better results than the basic language model . the second , a topic-based model , represents each candidate as a weighted sum of multiple topics , whilst the third , a hybrid model , combines the language model and the topic-based model . we evaluate our system using a benchmark dataset based on human relevance judgments of how well the expertise of proposed experts matches a query topic . evaluation results show that our hybrid model outperforms other models in nearly all metrics . story_separator_special_tag expert finding in bibliographic networks has received increased interest in recent years .
this task concerns finding relevant researchers for a given topic . motivated by the observation that rarely do all coauthors contribute to a paper equally , in this paper , we propose a discriminative method to identify the leading authors contributing to a scientific publication . specifically , we cast the problem of expert finding in a bibliographic network as finding leading experts in a research group , which is easier to solve . according to some observations , we recognize three feature groups that can discriminate between relevant and irrelevant experts . experimental results on a real dataset , and an automatically generated one that is gathered from microsoft academic search , show that the proposed model significantly improves the performance of expert finding in terms of all common information retrieval evaluation metrics . story_separator_special_tag an essential part of an expert-finding task , such as matching reviewers to submitted papers , is the ability to model the expertise of a person based on documents . we evaluate several measures of the association between an author in an existing collection of research papers and a previously unseen document . we compare two language model based approaches with a novel topic model , author-persona-topic ( apt ) . in this model , each author can write under one or more `` personas , '' which are represented as independent distributions over hidden topics . examples of previous papers written by prospective reviewers are gathered from the rexa database , which extracts and disambiguates author mentions from documents gathered from the web . we evaluate the models using a reviewer matching task based on human relevance judgments determining how well the expertise of proposed reviewers matches a submission . we find that the apt topic model outperforms the other models . story_separator_special_tag the last two decades have seen an increasing interest in the task of question answering ( qa ) . earlier approaches focused on automated retrieval and extraction models . recent developments focus more on community-driven qa . this work addresses this task through cross-platform question routing . we study question types as well as the answers that can be gathered from different platforms . after developing new evaluation measures , we optimize for various constraints of the user needs . we consider models that work for the general public , before adapting them to some special demographics ( arab journalists ) . story_separator_special_tag synchronous social q & a systems exist on the web and in the enterprise to connect people with questions to people with answers in real-time . in such systems , askers ' desire for quick answers is in tension with the costs associated with interrupting numerous candidate answerers per question . supporting users of synchronous social q & a systems at various points in the question lifecycle ( from conception to answer ) helps askers make informed decisions about the likelihood of question success and helps answerers face fewer interruptions . for example , predicting that a question will not be well answered may lead the asker to rephrase or retract the question . similarly , predicting that an answer is not forthcoming during the dialog can prompt system behaviors such as finding other answerers to join the conversation . as another example , predictions of asker satisfaction can be assigned to completed conversations and used for later retrieval .
in this paper , we use data from an instant-messaging-based synchronous social q & a service deployed to an online community of over two thousand users to study the prediction of : ( i ) whether a question will be answered story_separator_special_tag blogs have become a means by which new ideas and information spread rapidly on the web . they often discuss the latest trends and echo with reactions on different events in the world . the collective wisdom present on the blogosphere is invaluable for market researchers and companies launching new products . in this paper , we validate the effectiveness of some of the influence models on the blogosphere . we validate the robustness of different heuristics in the presence of splogs , or spam blogs , on the web . experiments also show how pagerank based heuristics could be used to select an influential set of bloggers such that we could maximize the spread of information on the blogosphere . story_separator_special_tag models for the processes by which ideas and influence propagate through a social network have been studied in a number of domains , including the diffusion of medical and technological innovations , the sudden and widespread adoption of various strategies in game-theoretic settings , and the effects of `` word of mouth '' in the promotion of new products . recently , motivated by the design of viral marketing strategies , domingos and richardson posed a fundamental algorithmic problem for such social network processes : if we can try to convince a subset of individuals to adopt a new product or innovation , and the goal is to trigger a large cascade of further adoptions , which set of individuals should we target ? we consider this problem in several of the most widely studied models in social network analysis . the optimization problem of selecting the most influential nodes is np-hard here , and we provide the first provable approximation guarantees for efficient algorithms . using an analysis framework based on submodular functions , we show that a natural greedy strategy obtains a solution that is provably within 63 % of optimal for several classes of models ; our story_separator_special_tag content in microblogging systems such as twitter is produced by tens to hundreds of millions of users . this diversity is a notable strength , but also presents the challenge of finding the most interesting and authoritative authors for any given topic . to address this , we first propose a set of features for characterizing social media authors , including both nodal and topical metrics . we then show how probabilistic clustering over this feature space , followed by a within-cluster ranking procedure , can yield a final list of top authors for a given topic . we present results across several topics , along with results from a user study confirming that our method finds authors who are significantly more interesting and authoritative than those resulting from several baseline conditions . additionally , our algorithm is computationally feasible in near real-time scenarios , making it an attractive alternative for capturing the rapidly changing dynamics of microblogs . story_separator_special_tag a common method for finding information in an organization is to use social networks : ask people , following referrals until someone with the right information is found . another way is to automatically mine documents to determine who knows what .
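the influence-maximization abstract above analyzes a greedy strategy whose guarantee ( within 1 - 1/e , about 63 % , of optimal ) rests on submodularity of the expected spread ; a monte carlo sketch under the independent cascade model follows , with the propagation probability and run counts as illustrative assumptions .

```python
import random

# greedy influence maximization: repeatedly add the node with the largest
# marginal gain in expected spread, estimating spread by monte carlo
# simulation of the independent cascade model.
def simulate_cascade(graph, seeds, p=0.1, rng=random):
    active, frontier = set(seeds), list(seeds)
    while frontier:
        node = frontier.pop()
        for neighbor in graph.get(node, []):
            if neighbor not in active and rng.random() < p:
                active.add(neighbor)
                frontier.append(neighbor)
    return len(active)

def greedy_seeds(graph, k, p=0.1, runs=200):
    seeds = set()
    for _ in range(k):
        def gain(v):
            return sum(simulate_cascade(graph, seeds | {v}, p) for _ in range(runs)) / runs
        best = max((v for v in graph if v not in seeds), key=gain)
        seeds.add(best)
    return seeds

toy = {"a": ["b", "c"], "b": ["d"], "c": [], "d": []}
print(greedy_seeds(toy, k=2))
```

the monte carlo estimate stands in for the intractable exact expected spread , which is why the guarantee holds only up to the sampling error of the simulations .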
email documents seem particularly well suited to this task of `` expertise location '' , as people routinely communicate what they know . moreover , because people explicitly direct email to one another , social networks are likely to be contained in the patterns of communication . can these patterns be used to discover experts on particular topics ? is this approach better than mining message content alone ? to find answers to these questions , two algorithms for determining expertise from email were compared : a content-based approach that takes account only of email text , and a graph-based ranking algorithm ( hits ) that takes account both of text and communication patterns . an evaluation was done using email and explicit expertise ratings from two different organizations . the rankings given by each algorithm were compared to the explicit rankings with the precision and recall measures commonly used in information retrieval , as well as the story_separator_special_tag in this paper we study graph -- based ranking measures for the purpose of using them to rank email correspondents according to their degree of expertise on subjects of interest . while this complete expertise analysis consists of several steps , in this paper we focus on the analysis of digraphs whose nodes correspond to correspondents ( people ) , whose edges correspond to the existence of email correspondence between the people corresponding to the nodes they connect and whose edge directions point from the member of the pair whose relative expertise has been estimated to be higher . we perform our analysis on both synthetic and real data and we introduce a new error measure for comparing ranked lists . story_separator_special_tag a major problem in social network analysis and link discovery is the discovery of hidden organizational structure and selection of interesting influential members based on low-level , incomplete and noisy evidence data . to address such a challenge , we exploit an information theoretic model that combines information theory with statistical techniques from area of text mining and natural language processing . the entropy model identifies the most interesting and important nodes in a graph . we show how entropy models on graphs are relevant to study of information flow in an organization . we review the results of two different experiments which are based on entropy models . the first version of this model has been successfully tested and evaluated on the enron email dataset . story_separator_special_tag finding relevant expertise is a critical need in collaborative software engineering , particularly in geographically distributed developments . we introduce a tool , called expertise browser ( exb ) , that uses data from change management systems to locate people with desired expertise . it uses a quantification of experience , and presents evidence to validate this quantification as a measure of expertise . the tool enables developers , for example , to easily distinguish someone who has worked only briefly in a particular area of the code from someone who has more extensive experience , and to locate people with broad expertise throughout large parts of the product , such as modules or even subsystems . in addition , it allows a user to discover expertise profiles for individuals or organizations . data from a deployment of the tool in a large software development organization shows that newer , remote sites tend to use the tool for expertise location more frequently . 
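the email-expertise abstracts above rank correspondents with the graph-based hits algorithm ; the sketch below is a generic power-iteration implementation on a toy ( asker , answerer ) edge list , not the evaluated system :

```python
# plain hits power iteration ; edge direction ( asker -> answerer ) and the
# l2 normalization are assumptions made for this example .
def hits(edges, iters=50):
    nodes = {n for e in edges for n in e}
    hub = {n: 1.0 for n in nodes}
    for _ in range(iters):
        auth = {n: sum(hub[u] for u, v in edges if v == n) for n in nodes}
        norm = sum(a * a for a in auth.values()) ** 0.5 or 1.0
        auth = {n: a / norm for n, a in auth.items()}
        hub = {n: sum(auth[v] for u, v in edges if u == n) for n in nodes}
        norm = sum(h * h for h in hub.values()) ** 0.5 or 1.0
        hub = {n: h / norm for n, h in hub.items()}
    return auth, hub

# one directed edge per answered email in a toy organization
emails = [("ann", "bob"), ("carol", "bob"), ("ann", "dave"), ("bob", "dave")]
authority, _ = hits(emails)
print(sorted(authority.items(), key=lambda kv: -kv[1]))  # likely experts first
```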
larger , more established sites used the tool to find expertise profiles for people or organizations . we conclude by describing extensions that provide continuous awareness of ongoing work and an interactive , quantitative résumé . story_separator_special_tag searching an organization 's document repositories for experts is a frequently faced problem in intranet information management . this paper proposes a candidate-centered model which is referred to as the candidate description document ( cdd ) -based retrieval model . the expertise evidence about an expert candidate scattered over repositories is mined and aggregated automatically to form a profile called the candidate 's cdd , which represents his knowledge . we present the model from its foundations through its logical development and argue in favor of this model for expert finding . we devise and compare different strategies for exploring a variety of expertise evidence . the experiments on trec enterprise corpora demonstrate that the cdd-based model achieves significant and consistent improvements in performance through comparative studies with non-cdd methods . story_separator_special_tag expertise retrieval has been largely unexplored on data other than the w3c collection . at the same time , many intranets of universities and other knowledge-intensive organisations offer examples of relatively small but clean multilingual expertise data , covering broad ranges of expertise areas . we first present two main expertise retrieval tasks , along with a set of baseline approaches based on generative language modeling , aimed at finding expertise relations between topics and people . for our experimental evaluation , we introduce ( and release ) a new test set based on a crawl of a university site . using this test set , we conduct two series of experiments . the first is aimed at determining the effectiveness of baseline expertise retrieval methods applied to the new test set . the second is aimed at assessing refined models that exploit characteristic features of the new test set , such as the organizational structure of the university , and the hierarchical structure of the topics in the test set . expertise retrieval models are shown to be robust with respect to environments smaller than the w3c collection , and current techniques appear to be generalizable to other settings story_separator_special_tag in this paper we present the features of a question/answering ( q/a ) system that had unparalleled performance in the trec-9 evaluations . we explain the accuracy of our system through the unique characteristics of its architecture : ( 1 ) usage of a wide-coverage answer type taxonomy ; ( 2 ) repeated passage retrieval ; ( 3 ) lexico-semantic feedback loops ; ( 4 ) extraction of the answers based on machine learning techniques ; and ( 5 ) answer caching . experimental results show the effects of each feature on the overall performance of the q/a system and lead to general conclusions about q/a from large text collections . story_separator_special_tag a common task in many applications is to find persons who are knowledgeable about a given topic ( i.e. , expert finding ) . in this paper , we propose and develop a general probabilistic framework for studying the expert finding problem and derive two families of generative models ( candidate generation models and topic generation models ) from the framework . these models subsume most existing language models proposed for expert finding .
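as an illustration of the candidate generation family just mentioned , the sketch below scores a candidate by the smoothed likelihood of the query under the candidate 's associated documents ; the uniform document prior and jelinek-mercer smoothing are assumptions made for this example , not the paper 's exact estimation :

```python
# candidate generation sketch : score ( query | candidate ) as a mixture of
# smoothed document language models associated with that candidate .
from collections import Counter

def lm_prob(term, doc_counts, coll_counts, coll_len, lam=0.5):
    # jelinek-mercer smoothing between document and collection models
    doc_len = sum(doc_counts.values()) or 1
    return (1 - lam) * doc_counts[term] / doc_len + lam * coll_counts[term] / coll_len

def candidate_score(query, cand_docs, coll_counts, coll_len):
    score = 0.0
    prior = 1.0 / len(cand_docs)          # uniform p ( d | ca ) , an assumption
    for doc in cand_docs:
        counts = Counter(doc)
        p_q = 1.0
        for term in query:
            p_q *= lm_prob(term, counts, coll_counts, coll_len)
        score += p_q * prior
    return score

docs = {"alice": [["neural", "ranking"], ["expert", "search"]],
        "bob": [["database", "joins"]]}
coll = Counter(t for ds in docs.values() for d in ds for t in d)
n = sum(coll.values())
for cand, ds in docs.items():
    print(cand, candidate_score(["expert", "search"], ds, coll, n))
```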
we further propose several techniques to improve the estimation of the proposed models , including incorporating topic expansion , using a mixture model to model candidate mentions in the supporting documents , and defining an email count-based prior in the topic generation model . our experiments show that the proposed estimation strategies are all effective at improving retrieval accuracy . story_separator_special_tag in an expert search task , the users ' need is to identify people who have expertise relevant to a topic of interest . an expert search system predicts and ranks the expertise of a set of candidate persons with respect to the users ' query . in this paper , we propose a novel approach for predicting and ranking candidate expertise with respect to a query . we see the problem of ranking experts as a voting problem , which we model by adapting eleven data fusion techniques . we investigate the effectiveness of the voting approach and the associated data fusion techniques across a range of document weighting models , in the context of the trec 2005 enterprise track . the evaluation results show that the voting paradigm is very effective , without using any collection-specific heuristics . moreover , we show that improving the quality of the underlying document representation can significantly improve the retrieval performance of the data fusion techniques on an expert search task . in particular , we demonstrate that applying field-based weighting models improves the ranking of candidates . finally , we demonstrate that the relative performance of the adapted data fusion techniques for story_separator_special_tag question answering communities such as naver and yahoo ! answers have emerged as popular , and often effective , means of information seeking on the web . by posting questions for other participants to answer , information seekers can obtain specific answers to their questions . users of popular portals such as yahoo ! answers already have submitted millions of questions and received hundreds of millions of answers from other participants . however , it may also take hours -- and sometimes days -- until a satisfactory answer is posted . in this paper we introduce the problem of predicting information seeker satisfaction in collaborative question answering communities , where we attempt to predict whether a question author will be satisfied with the answers submitted by the community participants . we present a general prediction model , and develop a variety of content , structure , and community-focused features for this task . our experimental results , obtained from a large-scale evaluation over thousands of real questions and user ratings , demonstrate the feasibility of modeling and predicting asker satisfaction . we complement our results with a thorough investigation of the interactions and information seeking patterns in question answering communities story_separator_special_tag community question answering ( cqa ) systems , such as yahoo ! answers and stack overflow , represent a well-known example of collective intelligence . the existing cqa systems , despite their overall success and popularity , fail to answer a significant number of questions in the required time . one option for scaffolding collaboration in cqa systems is a recommendation of new questions to users who are suitable candidates for providing correct answers ( so-called question routing ) .
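the voting abstract above adapts data fusion techniques to expert search ; the following toy sketch shows two such fusion rules , combsum and combmnz , over an assumed document ranking and document-candidate associations :

```python
# combsum / combmnz voting for expert search : each retrieved document
# " votes " for its associated candidates ; the data here are toy values .
from collections import defaultdict

def vote(ranked_docs, doc_to_cands, mnz=False):
    scores, votes = defaultdict(float), defaultdict(int)
    for doc, rel in ranked_docs:
        for cand in doc_to_cands.get(doc, []):
            scores[cand] += rel          # combsum : sum of document scores
            votes[cand] += 1
    if mnz:                              # combmnz : scale by number of votes
        scores = {c: s * votes[c] for c, s in scores.items()}
    return sorted(scores.items(), key=lambda kv: -kv[1])

ranking = [("d1", 2.3), ("d2", 1.9), ("d3", 0.7)]
assoc = {"d1": ["alice"], "d2": ["alice", "bob"], "d3": ["bob"]}
print(vote(ranking, assoc, mnz=True))
```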
various methods have been proposed so far to find appropriate answerers , but almost all approaches heavily depend on previous users ' activities in a particular cqa system ( i.e . qa-data ) . in our work , we attempt to involve a whole community including users with no or minimal previous activity ( e.g . newcomers or lurkers ) . we propose a question routing method which analyses users ' non-qa data from a cqa system itself as well as from external services and platforms , such as blogs , micro-blogs or social networking sites , in order to estimate users ' interests and expertise early and more precisely . consequently , we can recommend new questions to a story_separator_special_tag motivated by several applications , we introduce various distance measures between `` top k lists . '' some of these distance measures are metrics , while others are not . for each of these latter distance measures , we show that it is `` almost '' a metric in the following two seemingly unrelated aspects : ( i ) it satisfies a relaxed version of the polygonal ( hence , triangle ) inequality , and ( ii ) there is a metric with positive constant multiples that bounds our measure above and below . this is not a coincidence -- we show that these two notions of almost being a metric are the same . based on the second notion , we define two distance measures to be equivalent if they are bounded above and below by constant multiples of each other . we thereby identify a large and robust equivalence class of distance measures . besides the applications to the task of identifying good notions of ( dis- ) similarity between two top k lists , our results imply polynomial-time constant-factor approximation algorithms for the rank aggregation problem with respect to a large class of distance measures . story_separator_special_tag recommender systems have been evaluated in many , often incomparable , ways . in this article , we review the key decisions in evaluating collaborative filtering recommender systems : the user tasks being evaluated , the types of analysis and datasets being used , the ways in which prediction quality is measured , the evaluation of prediction attributes other than quality , and the user-based evaluation of the system as a whole . in addition to reviewing the evaluation strategies used by prior researchers , we present empirical results from the analysis of various accuracy metrics on one content domain where all the tested metrics collapsed roughly into three equivalence classes . metrics within each equivalency class were strongly correlated , while metrics from different equivalency classes were uncorrelated . story_separator_special_tag community-based question and answering ( cqa ) services provide a convenient way for online users to share and exchange information and knowledge , which is highly valuable for information seeking . user interest and dedication act as the motivation to promote the interactive process of question and answering . in this paper , we aim to address a key issue about cqa systems : routing newly asked questions to appropriate users who may potentially provide high-quality answers . we incorporate answer quality and answer content to build a probabilistic question routing model . our proposed model is capable of 1 ) differentiating and quantifying the authority of users for different topics or categories ; 2 ) routing questions to users with expertise .
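for the top-k distance measures discussed above , one common instance is the kendall distance with a penalty parameter p for pairs appearing in only one list ; the sketch below uses the usual neutral choice p = 0.5 and is an illustration rather than the paper 's exact definition :

```python
# kendall-style distance between two top-k lists with penalty p for pairs
# whose relative order cannot be inferred from one of the lists .
from itertools import combinations

def kendall_topk(t1, t2, p=0.5):
    r1 = {x: i for i, x in enumerate(t1)}
    r2 = {x: i for i, x in enumerate(t2)}
    dist = 0.0
    for x, y in combinations(set(t1) | set(t2), 2):
        x1, y1, x2, y2 = x in r1, y in r1, x in r2, y in r2
        if x1 and y1 and x2 and y2:                # ranked in both lists
            dist += (r1[x] - r1[y]) * (r2[x] - r2[y]) < 0
        elif x1 and y1 and (x2 or y2):             # one element missing from t2
            present = x if x2 else y
            absent = y if x2 else x
            dist += r1[absent] < r1[present]       # t2 implicitly ranks it last
        elif x2 and y2 and (x1 or y1):             # one element missing from t1
            present = x if x1 else y
            absent = y if x1 else x
            dist += r2[absent] < r2[present]
        elif (x1 and y1) or (x2 and y2):           # pair seen in only one list
            dist += p
        else:                                      # split across the two lists
            dist += 1
    return dist

print(kendall_topk(["a", "b", "c"], ["b", "a", "d"]))  # 2.0 for these toy lists
```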
experimental results based on a large collection of data from wenwen demonstrate that our model is effective and has promising performance . story_separator_special_tag community-based question answering ( cqa ) services such as yahoo ! answers have been widely used by internet users to get answers to their inquiries . cqa services rely entirely on the contributions of their users . however , it is known that newcomers are prone to lose interest and leave the communities . thus , finding expert users in an early phase when they are still active is essential to improve the chances of motivating them to contribute to the communities further . in this paper , we propose a novel approach to discovering `` potentially '' contributive users from recently-joined users in cqa services . the likelihood of becoming a contributive user is defined by the user 's expertise as well as availability , which we call the answer affordance . the main technical difficulty lies in the fact that such recently-joined users do not have abundant information accumulated for many years . we utilize a user 's productive vocabulary to mitigate the lack of available information since the vocabulary is the most fundamental element that reveals his/her knowledge . extensive experiments were conducted with a huge data set of naver knowledge-in ( kin ) story_separator_special_tag point-of-interest ( poi ) recommendation has become an important means to help people discover attractive and interesting locations , especially when users travel out of town . however , extreme sparsity of the user-poi matrix creates a severe challenge . to cope with this challenge , a growing line of research has exploited the temporal effect , geographical-social influence , content effect and word-of-mouth effect . however , current research lacks an integrated analysis of the joint effect of the above factors to deal with the issue of data-sparsity , especially in the out-of-town recommendation scenario which has been ignored by most existing work . in light of the above , we propose a joint probabilistic generative model to mimic user check-in behaviors in a process of decision making , which strategically integrates the above factors to effectively overcome the data sparsity , especially for out-of-town users . to demonstrate the applicability and flexibility of our model , we investigate how it supports two recommendation scenarios in a unified way , i.e. , home-town recommendation and out-of-town recommendation . we conduct extensive experiments to evaluate the performance of our model on two real large-scale datasets in terms of both recommendation effectiveness story_separator_special_tag this book covers the major fundamentals of and the latest research on next-generation spatio-temporal recommendation systems in social media . it begins by describing the emerging characteristics of social media in the era of mobile internet , and explores the limitations to be found in current recommender techniques . the book subsequently presents a series of latent-class user models to simulate users ' behaviors in decision-making processes , which effectively overcome the challenges arising from temporal dynamics of users ' behaviors , user interest drift over geographical regions , data sparsity and cold start .
based on these well-designed user models , the book develops effective multi-dimensional index structures such as the metric-tree , and proposes efficient top-k retrieval algorithms to accelerate the process of online recommendation and support real-time recommendation . in addition , it offers methodologies and techniques for evaluating both the effectiveness and efficiency of spatio-temporal recommendation systems in social media . the book will appeal to a broad readership , from researchers and developers to undergraduate and graduate students . story_separator_special_tag a key functionality in collaborative question answering ( cqa ) systems is the assignment of questions from information seekers to potential answerers . an attractive solution is to automatically recommend the questions to the potential answerers with expertise or interest in the question topic . however , previous work has largely ignored a key problem in question recommendation - namely , whether the potential answerer is likely to accept and answer the recommended questions in a timely manner . this paper explores the contextual factors that influence answerer behavior in a large , popular cqa system , with the goal of informing the construction of question routing and recommendation systems . specifically , we consider when users tend to answer questions in a large-scale cqa system , and how answerers tend to choose the questions to answer . our results over a dataset of more than 1 million questions drawn from a real cqa system could help develop more realistic evaluation methods for question recommendation , and inform the design of future question recommender systems . story_separator_special_tag point-of-interest recommendation is an essential means to help people discover attractive locations , especially when people travel out of town or to unfamiliar regions . while a growing line of research has focused on modeling user geographical preferences for poi recommendation , these methods ignore the phenomenon of user interest drift across geographical regions , i.e. , users tend to have different interests when they travel in different regions , which discounts the recommendation quality of existing methods , especially for out-of-town users . in this paper , we propose a latent class probabilistic generative model spatial-temporal lda ( st-lda ) to learn region-dependent personal interests according to the contents of users ' checked-in pois in each region . as users ' check-in records left in out-of-town regions are extremely sparse , st-lda incorporates the crowd 's preferences by considering the public 's visiting behaviors at the target region . to further alleviate the issue of data sparsity , a social-spatial collective inference framework is built on st-lda to enhance the inference of region-dependent personal interests by effectively exploiting the social and spatial correlation information . besides , based on st-lda , we design an effective attribute pruning ( ap ) story_separator_special_tag what makes a good question recommendation system for community question-answering sites ? first , to maintain the health of the ecosystem , it needs to be designed around answerers , rather than exclusively for askers . next , it needs to scale to many questions and users , and be fast enough to route a newly-posted question to potential answerers within the few minutes before the asker 's patience runs out . it also needs to show each answerer questions that are relevant to his or her interests .
we have designed and built such a system for yahoo ! answers , but realized , when testing it with live users , that it was not enough . we found that those drawing-board requirements fail to capture users ' interests . the feature that they really missed was diversity . in other words , showing them just the main topics they had previously expressed interest in was simply too dull . adding the spice of topics slightly outside the core of their past activities significantly improved engagement . we conducted a large-scale online experiment in production in yahoo ! answers that showed that recommendations driven by relevance alone perform worse than a story_separator_special_tag in recent years , with the widespread usage of web 2.0 techniques , crowdsourcing plays an important role in offering human intelligence in various service websites , such as yahoo ! answer and quora . with the increasing amount of crowd-oriented service data , an important task is to analyze the latest hot topics and track topic evolution over time . however , the existing techniques in text mining can not work effectively due to the unique structure of crowd-oriented service data , task-response pairs , which consist of a task and its corresponding responses . in particular , existing approaches become ineffective with the ever-increasing crowd-oriented service data that accumulates over time . in this paper , we first study the problem of discovering topics over crowd-oriented service data . then we propose a new probabilistic topic model , the topic crowd service model ( tcs model ) , to effectively discover latent topics from massive crowd-oriented service data . in particular , in order to train tcs efficiently , we design a novel parameter inference algorithm , the bucket parameter estimation ( bpe ) , which utilizes belief propagation and a new sketching technique , called pairwise sketch story_separator_special_tag community question answering ( cqa ) services thrive as a result of a small number of highly active users , typically called experts , who provide a large number of high quality useful answers . understanding the temporal dynamics and interactions between experts can present key insights into how community members evolve over time . in this paper , we present a temporal study of experts in cqa and analyze the changes in their behavioral patterns over time . further , using unsupervised machine learning methods , we show the interesting evolution patterns that can help us distinguish experts from one another . using supervised classification methods , we show that the models based on evolutionary data of users can be more effective at expert identification than the models that ignore evolution . we run our experiments on two large online cqa services to show the generality of our proposed approach . story_separator_special_tag community question answering ( or cqa ) services ( also known as q/a social networks ) have become widespread in the last several years . they are seen as a potential alternative to search , as using q/a services avoids sifting through a large number of ( ranked ) search results , returned by a typical search engine , to get at the desired information . currently , `` best '' answers in cqa services are determined either manually or through a voting process . many cqa services calculate activity levels for users to approximate the notion of expertise .
as large numbers of cqa services are becoming available , it is important and challenging to predict `` best '' answers ( not necessarily answers by an expert ) using machine learning techniques . previous approaches , typically , extract a set of features ( primarily textual and non-textual ) from the data set and use them in a classification system to determine the `` best '' answer . this paper posits that temporal features , different from the ones proposed and used in the literature , are better-suited for q/a data sets and can be quite effective story_separator_special_tag with the rapid development of smartphones , spatial crowdsourcing platforms are getting popular . a foundational research problem in spatial crowdsourcing is to allocate micro-tasks to suitable crowd workers . most existing studies focus on offline scenarios , where all the spatiotemporal information of micro-tasks and crowd workers is given . however , they are impractical since micro-tasks and crowd workers in real applications appear dynamically and their spatiotemporal information can not be known in advance . in this paper , to address the shortcomings of existing offline approaches , we first identify a more practical micro-task allocation problem , called the global online micro-task allocation in spatial crowdsourcing ( goma ) problem . we first extend the state-of-the-art algorithm for the online maximum weighted bipartite matching problem to the goma problem as the baseline algorithm . although the baseline algorithm provides a theoretical guarantee for the worst case , its average performance in practice is not good enough since the worst case happens with a very low probability in the real world . thus , we consider the average performance of online algorithms , a.k.a . the online random order model . we propose a two-phase-based framework , based on which we present the tgoa algorithm story_separator_special_tag with the rapid rise of various e-commerce and social network platforms , users are generating large amounts of heterogeneous behavior data , such as purchase history , adding-to-favorite , adding-to-cart and click activities , and this kind of user behavior data is usually binary , only reflecting a user 's action or inaction ( i.e. , implicit feedback data ) . tensor factorization is a promising means of modeling heterogeneous user behaviors by distinguishing different behavior types . however , ambiguity arises in the interpretation of the unobserved user behavior records that mix both real negative examples and potential positive examples . existing tensor factorization models either ignore unobserved examples or treat all of them as negative examples , leading to either poor prediction performance or huge computation cost . in addition , the distribution of positive examples w.r.t . behavior types is heavily skewed . existing tensor factorization models would bias towards the type of behaviors with a large number of positive examples . in this paper , we propose a scalable probabilistic tensor factorization model ( sptf ) for heterogeneous behavior data and develop a novel negative sampling technique to optimize sptf by leveraging both observed and unobserved examples story_separator_special_tag given a task t , a pool of experts with different skills , and a social network g that captures social relationships and various interactions among these experts , we study the problem of finding a wise group of experts , a subset of , to perform the task .
we call this the expert group formation problem in this paper . in order to reduce various potential social influence among team members and avoid following the crowd , we require that the members of not only meet the skill requirements of the task , but also be diverse . to quantify the diversity of a group of experts , we propose one metric based on the social influence incurred by the subgraph in g that only involves . we analyze the problem of diverse expert group formation and show that it is np-hard . we explore its connections with existing combinatorial problems and propose novel algorithms for its approximate solution . to the best of our knowledge , this is the first work to study diversity in the social graph and facilitate its effect in the expert group formation problem . we conduct extensive experiments on the dblp dataset story_separator_special_tag we present polylens , a new collaborative filtering recommender system designed to recommend items for groups of users , rather than for individuals . a group recommender is more appropriate and useful for domains in which several people participate in a single activity , as is often the case with movies and restaurants . we present an analysis of the primary design issues for group recommenders , including questions about the nature of groups , the rights of group members , social value functions for groups , and interfaces for displaying group recommendations . we then report on our polylens prototype and the lessons we learned from usage logs and surveys from a nine-month trial that included 819 users . we found that users not only valued group recommendations , but were willing to yield some privacy to get the benefits of group recommendations . users valued an extension to the group recommender system that enabled them to invite non-members to participate , via email story_separator_special_tag social friendship has been shown beneficial for item recommendation for years . however , existing approaches mostly incorporate social friendship into recommender systems by heuristics . in this paper , we argue that social influence between friends can be captured quantitatively and propose a probabilistic generative model , called social influenced selection ( sis ) , to model the decision making of item selection ( e.g. , what book to buy or where to dine ) . based on sis , we mine the social influence between linked friends and the personal preferences of users through statistical inference . to address the challenges arising from multiple layers of hidden factors in sis , we develop a new parameter learning algorithm based on expectation maximization ( em ) . moreover , we show that the mined social influence and user preferences are valuable for group recommendation and viral marketing . finally , we conduct a comprehensive performance evaluation using real datasets crawled from last.fm and whrrl.com to validate our proposal . experimental results show that social influence captured based on our sis model is effective for enhancing both item recommendation and group recommendation , essential for viral marketing , and useful story_separator_special_tag increasingly , web recommender systems face scenarios where they need to serve suggestions to groups of users ; for example , when families share e-commerce or movie rental web accounts .
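as a concrete illustration of the social value functions mentioned in the group-recommendation abstracts above , the snippet below contrasts two classic aggregation strategies , average satisfaction and least misery , on made-up predicted ratings :

```python
# two common group aggregation strategies ; the ratings are toy values and
# the strategies shown are textbook variants , not any one paper 's method .
def group_scores(ratings, strategy="average"):
    """ratings : { item : { user : predicted rating } } -> ranked items ."""
    agg = min if strategy == "least_misery" else (lambda v: sum(v) / len(v))
    return sorted(((item, agg(list(users.values())))
                   for item, users in ratings.items()),
                  key=lambda kv: -kv[1])

preds = {"film_a": {"u1": 4.5, "u2": 2.0, "u3": 4.6},
         "film_b": {"u1": 3.5, "u2": 3.4, "u3": 3.6}}
print(group_scores(preds, "average"))       # film_a wins on mean rating
print(group_scores(preds, "least_misery"))  # film_b wins : nobody is unhappy
```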
research to date in this domain has proposed two approaches : computing recommendations for the group by merging any members ' ratings into a single profile , or computing ranked recommendations for each individual that are then merged via a range of heuristics . in doing so , none of the past approaches reasons about the preferences that arise in individuals when they are members of a group . in this work , we present a probabilistic framework , based on the notion of information matching , for group recommendation . this model defines group relevance as a combination of the item 's relevance to each user as an individual and as a member of the group ; it can then seamlessly incorporate any group recommendation strategy in order to rank items for a set of individuals . we evaluate the model 's efficacy at generating recommendations for both single individuals and groups using the movielens and moviepilot data sets . in both cases , we compare our results story_separator_special_tag an online community consists of a group of users who share a common interest , background , or experience , and their collective goal is to contribute toward the welfare of the community members . several websites allow their users to create and manage niche communities , such as yahoo ! groups , facebook groups , google+ circles , and webmd forums . these community services also exist within enterprises , such as ibm connections . question answering within these communities enables their members to exchange knowledge and information with other community members . however , the onus of finding the right community for question asking lies with an individual user . the overwhelming number of communities necessitates a good question routing strategy so that new questions get routed to an appropriately focused community and thus get resolved in a reasonable time frame . in this article , we consider the novel problem of routing a question to the right community and propose a framework for selecting and ranking the relevant communities for a question . we propose several novel features for modeling the three main entities of the system : questions , users , and communities . we story_separator_special_tag crowdsourcing has been shown to be effective in a wide range of applications , and is seeing increasing use . a large-scale crowdsourcing task often consists of thousands or millions of atomic tasks , each of which is usually a simple task such as binary choice or simple voting . to distribute a large-scale crowdsourcing task to limited crowd workers , a common practice is to pack a set of atomic tasks into a task bin and send it to a crowd worker in a batch . it is challenging to decompose a large-scale crowdsourcing task and execute batches of atomic tasks in a way that ensures reliable answers at a minimal total cost . large batches lead to unreliable answers to atomic tasks , while small batches incur unnecessary cost . in this paper , we investigate a general crowdsourcing task decomposition problem , called the smart large-scale task decomposer ( slade ) problem , which aims to decompose a large-scale crowdsourcing task to achieve the desired reliability at a minimal cost . we prove the np-hardness of the slade problem and propose solutions in both homogeneous and heterogeneous scenarios .
for the homogeneous slade problem , where all the atomic story_separator_special_tag by combining user preferences , redundancy analysis , and trust-network inference , the proposed trust model can augment candidate answers with information about target sources on the basis of connections with other web users and sources . experiments show that the model is more effective overall than trust analyses based on inference alone . story_separator_special_tag given the recent advancement of online social networking technologies , social question and answering has become an important venue for individuals to seek and share information . while studies have suggested the possibilities of routing questions to potential answerers for their help and the information provided , there has been little analysis aimed at identifying the characteristics that differentiate the possible responders from the nonresponders . to address this gap , in this work we present a model to predict potential responders in social q & a using only non-qa-based attributes . we build the classifier using features from two different aspects : features extracted from one 's social profile and style of posting . to evaluate our model , we collect over 20,000 questions posted on wenwo , a social q & a application based on weibo , along with all their responders . our experimental results over the collected dataset demonstrate the effectiveness of responder prediction based on non-qa features and propose potential implications for system design . story_separator_special_tag on top of an enterprise social platform , we are building a smart social qa system that automatically routes questions to suitable employees who are willing , able , and ready to provide answers . due to a lack of social qa history ( training data ) to start with , in this paper , we present an optimization-based approach that recommends both top-matched active ( seed ) and inactive ( prospect ) answerers for a given question . our approach includes three parts . first , it uses a predictive model to find top-ranked seed answerers by their fitness , including their ability and willingness , to answer a question . second , it uses distance metric learning to discover prospects most similar to the seeds identified in the first step . third , it uses a constraint-based approach to balance the selection of both seeds and prospects identified in the first two steps . as a result , not only does our solution route questions to top-matched active users , but it also engages inactive users to grow the pool of answerers . our real-world experiments that routed 114 questions to 684 people identified from 400,000+ employees included story_separator_special_tag in this paper , we introduce factorization machines ( fm ) which are a new model class that combines the advantages of support vector machines ( svm ) with factorization models . like svms , fms are a general predictor working with any real valued feature vector . in contrast to svms , fms model all interactions between variables using factorized parameters . thus they are able to estimate interactions even in problems with huge sparsity ( like recommender systems ) where svms fail . we show that the model equation of fms can be calculated in linear time and thus fms can be optimized directly . so unlike nonlinear svms , a transformation in the dual form is not necessary and the model parameters can be estimated directly without the need of any support vector in the solution .
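the linear-time claim in the factorization machine abstract above rests on the identity sum_{i<j} <v_i , v_j> x_i x_j = 0.5 * sum_f [ ( sum_i v_{i,f} x_i ) ^2 - sum_i v_{i,f}^2 x_i^2 ] ; the sketch below implements degree-2 fm prediction with that identity , using random placeholder parameters :

```python
# degree-2 factorization machine prediction in o ( k n ) time ; the
# parameters here are random placeholders , not learned values .
import numpy as np

def fm_predict(x, w0, w, V):
    """x : ( n , ) features ; w : ( n , ) linear weights ; V : ( n , k ) factors ."""
    linear = w0 + w @ x
    s = V.T @ x                    # ( k , ) sums over features per factor
    s2 = (V ** 2).T @ (x ** 2)     # ( k , ) sums of squares per factor
    return linear + 0.5 * np.sum(s ** 2 - s2)

rng = np.random.default_rng(0)
n, k = 6, 3
x = rng.random(n)
print(fm_predict(x, 0.1, rng.normal(size=n), rng.normal(size=(n, k))))
```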
we show the relationship to svms and the advantages of fms for parameter estimation in sparse settings . on the other hand , there are many different factorization models like matrix factorization , parallel factor analysis or specialized models like svd++ , pitf or fpmc . the drawback of these models is that they are not applicable for general prediction tasks story_separator_special_tag an up-to-date , self-contained introduction to a state-of-the-art machine learning approach , ensemble methods : foundations and algorithms shows how these accurate methods are used in real-world tasks . it gives you the necessary groundwork to carry out further research in this evolving field . after presenting background and terminology , the book covers the main algorithms and theories , including boosting , bagging , random forest , averaging and voting schemes , the stacking method , mixture of experts , and diversity measures . it also discusses multiclass extension , noise tolerance , error-ambiguity and bias-variance decompositions , and recent progress in information theoretic diversity . moving on to more advanced topics , the author explains how to achieve better performance through ensemble pruning and how to generate better clustering results by combining multiple clusterings . in addition , he describes developments of ensemble methods in semi-supervised learning , active learning , cost-sensitive learning , class-imbalance learning , and comprehensibility enhancement . story_separator_special_tag function estimation/approximation is viewed from the perspective of numerical optimization in function space , rather than parameter space . a connection is made between stagewise additive expansions and steepest-descent minimization . a general gradient descent boosting paradigm is developed for additive expansions based on any fitting criterion . specific algorithms are presented for least-squares , least absolute deviation , and huber-m loss functions for regression , and multiclass logistic likelihood for classification . special enhancements are derived for the particular case where the individual additive components are regression trees , and tools for interpreting such treeboost models are presented . gradient boosting of regression trees produces competitive , highly robust , interpretable procedures for both regression and classification , especially appropriate for mining less than clean data . connections between this approach and the boosting methods of freund and schapire and friedman , hastie and tibshirani are discussed . story_separator_special_tag tree boosting is a highly effective and widely used machine learning method . in this paper , we describe a scalable end-to-end tree boosting system called xgboost , which is used widely by data scientists to achieve state-of-the-art results on many machine learning challenges . we propose a novel sparsity-aware algorithm for sparse data and weighted quantile sketch for approximate tree learning . more importantly , we provide insights on cache access patterns , data compression and sharding to build a scalable tree boosting system . by combining these insights , xgboost scales beyond billions of examples using far fewer resources than existing systems . story_separator_special_tag a large family of algorithms - supervised or unsupervised ; stemming from statistics or geometry theory - has been designed to provide different solutions to the problem of dimensionality reduction .
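a compact rendering of the gradient-boosting view described above for squared loss , where each round fits a small regression tree to the current residuals ; the tree depth , learning rate and the scikit-learn dependency are choices made for this sketch , not the papers ' configurations :

```python
# least-squares gradient boosting : each tree fits the negative gradient
# ( here , the residuals ) of the loss at the current model .
import numpy as np
from sklearn.tree import DecisionTreeRegressor

def gbr_fit(X, y, n_rounds=50, lr=0.1, depth=2):
    pred = np.full(len(y), y.mean())   # start from the constant model
    trees = []
    for _ in range(n_rounds):
        residual = y - pred            # negative gradient of squared loss
        tree = DecisionTreeRegressor(max_depth=depth).fit(X, residual)
        pred += lr * tree.predict(X)
        trees.append(tree)
    return y.mean(), trees

def gbr_predict(X, base, trees, lr=0.1):
    return base + lr * sum(t.predict(X) for t in trees)

X = np.linspace(0, 6, 200).reshape(-1, 1)
y = np.sin(X.ravel())
base, trees = gbr_fit(X, y)
print(np.mean((gbr_predict(X, base, trees) - y) ** 2))  # training mse shrinks
```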
despite the different motivations of these algorithms , we present in this paper a general formulation known as graph embedding to unify them within a common framework . in graph embedding , each algorithm can be considered as the direct graph embedding or its linear/kernel/tensor extension of a specific intrinsic graph that describes certain desired statistical or geometric properties of a data set , with constraints from scale normalization or a penalty graph that characterizes a statistical or geometric property that should be avoided . furthermore , the graph embedding framework can be used as a general platform for developing new dimensionality reduction algorithms . by utilizing this framework as a tool , we propose a new supervised dimensionality reduction algorithm called marginal fisher analysis in which the intrinsic graph characterizes the intraclass compactness and connects each data point with its neighboring points of the same class , while the penalty graph connects the marginal points and characterizes the interclass separability . we show that mfa effectively overcomes the story_separator_special_tag point-of-interest ( poi ) recommendation has become an important way to help people discover attractive and interesting places , especially when they travel out of town . however , the extreme sparsity of user-poi matrix and cold-start issues severely hinder the performance of collaborative filtering-based methods . moreover , user preferences may vary dramatically with respect to the geographical regions due to different urban compositions and cultures . to address these challenges , we stand on recent advances in deep learning and propose a spatial-aware hierarchical collaborative deep learning model ( sh-cdl ) . the model jointly performs deep representation learning for pois from heterogeneous features and hierarchically additive representation learning for spatial-aware personal preferences . to combat data sparsity in spatial-aware user preference modeling , both the collective preferences of the public in a given target region and the personal preferences of the user in adjacent regions are exploited in the form of social regularization and spatial smoothing . to deal with the multimodal heterogeneous features of the pois , we introduce a late feature fusion strategy into our sh-cdl model . the extensive experimental analysis shows that our proposed model outperforms the state-of-the-art recommendation models , especially in story_separator_special_tag finding experts in specified areas is an important task and has attracted much attention in the information retrieval community . research on this topic has made significant progress in the past few decades and various techniques have been proposed . in this survey , we review the state-of-the-art methods in expert finding and summarize these methods into different categories based on their underlying algorithms and models . we also introduce the most widely used data collection for evaluating expert finding systems , and discuss future research directions .
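since several of the expert-finding abstracts above report results in terms of common information retrieval evaluation metrics, here is a small self-contained sketch of two of them , reciprocal rank and average precision , under an assumed binary notion of relevance :

```python
# reciprocal rank and average precision for a ranked list of candidates ;
# binary relevance judgments are assumed for simplicity .
def reciprocal_rank(ranked, relevant):
    for i, item in enumerate(ranked, 1):
        if item in relevant:
            return 1.0 / i
    return 0.0

def average_precision(ranked, relevant):
    hits, total = 0, 0.0
    for i, item in enumerate(ranked, 1):
        if item in relevant:
            hits += 1
            total += hits / i          # precision at each relevant rank
    return total / max(len(relevant), 1)

ranked = ["e3", "e1", "e7", "e2"]
relevant = {"e1", "e2"}
print(reciprocal_rank(ranked, relevant), average_precision(ranked, relevant))
```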
man inhabits a universe composed of a great variety of elements and their isotopes . in table i , a count of the stable and radioactive elements and isotopes is listed . ninety elements are found terrestrially and one more , technetium , is found in stars ; only promethium has not been found in nature . story_separator_special_tag nucleosynthesis in the s process takes place in the he-burning layers of low-mass asymptotic giant branch ( agb ) stars and during the he- and c-burning phases of massive stars . the s process contributes about half of the element abundances between cu and bi in solar system material . depending on stellar mass and metallicity the resulting s-abundance patterns exhibit characteristic features , which provide comprehensive information for our understanding of the stellar life cycle and for the chemical evolution of galaxies . the rapidly growing body of detailed abundance observations , in particular , for agb and post-agb stars , for objects in binary systems , and for the very faint metal-poor population represents exciting challenges and constraints for stellar model calculations . based on updated and improved nuclear physics data for the s-process reaction network , current models are aiming at an ab initio solution for the stellar physics related to convection and mixing processes . progress in the intimately related areas of observations , nuclear and atomic physics , and stellar modeling is reviewed and the corresponding interplay is illustrated by the general abundance patterns of story_separator_special_tag the unstable isotope 147pm represents an important branch point in the s-process reaction path . this paper reports on the successful determination of the stellar ( n , γ ) cross section via the activation technique . the experiment was difficult because the relatively short 147pm half-life of 2.62 yr forced the sample mass to be restricted to 28 ng or 10^14 atoms only . by means of a modular , high-efficiency ge clover array the low induced activity could be identified in spite of considerable backgrounds from various impurities . both partial cross sections feeding the 5.37 day ground state and the 41.3 day isomer in 148pm were determined independently , yielding a total ( n , γ ) cross section of 709 ± 100 mbarn at a thermal energy of kt = 30 kev . the ( n , γ ) cross sections of the additional branch point isotopes 147nd and 148pm as well as the effect of thermally excited states were obtained by detailed statistical model calculations . the present results allowed considerably refined analyses of the s-process branchings at a = 147/148 , which are probing the neutron density in the he-burning shell of low-mass asymptotic giant branch stars story_separator_special_tag observations of galactic gamma-ray activity have challenged the current understanding of nucleosynthesis in massive stars . recent measurements of 60fe abundances relative to 26alg have underscored the need for accurate nuclear information concerning the stellar production of 60fe . in light of this motivation , a first measurement of the stellar 60fe ( n , γ ) 61fe cross section , the predominant destruction mechanism of 60fe , has been performed by activation at the karlsruhe van de graaff accelerator . results show a maxwellian averaged cross section at kt = 25 kev of 9.9 ± 1.4 ( stat ) ± 2.8 ( syst ) mbarn , a significant reduction in uncertainty with respect to existing theoretical discrepancies .
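the maxwellian averaged cross sections quoted at kt = 25-30 kev in these abstracts follow the standard definition , stated here for reference :

```latex
\langle\sigma\rangle_{kT}
  = \frac{2}{\sqrt{\pi}}\,\frac{1}{(kT)^{2}}
    \int_{0}^{\infty} \sigma(E)\, E\, e^{-E/kT}\, dE
```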
this result will serve to significantly constrain models of 60fe nucleosynthesis in massive stars . story_separator_special_tag abstract the potential for measuring the radionuclides 41 ca and 55 fe was investigated with the 3 mv tandem accelerator at vera . interestingly , up to now , no applications have been published for 55 fe using the technique of ams . this is in strong contrast to 41 ca , which is routinely measured by medium and large tandem accelerators in various applications . using caf2 samples the quantification of 41 ca down to levels of a few 10^-13 for the isotope ratio 41 ca/ca has become possible with the use of the tof technique . both nuclides , 41 ca and 55 fe were found to be of interest in nuclear astrophysics . a first application of 41 ca detection at vera is the measurement of the 40 ca ( n , γ ) 41 ca cross section at stellar temperatures . similarly , of astrophysical interest is the production of 55 fe via neutron capture on 54 fe . to this end , different ca and fe blank and standard samples were investigated with the goal to establish an ams method for 41 ca and 55 fe measurements . indeed , low background levels for story_separator_special_tag abstract the technique of accelerator mass spectrometry ( ams ) offers a complementary tool for studying long-lived radionuclides in nuclear astrophysics : ( 1 ) as a tool for investigating nucleosynthesis in the laboratory ; and ( 2 ) via a direct search of live long-lived radionuclides in terrestrial archives as signatures of recent nearby supernova-events . a key ingredient to our understanding of nucleosynthesis is accurate cross-section data . ams was applied for measurements of the neutron-induced cross sections 13c ( n , γ ) and 14n ( n , p ) , both leading to the long-lived radionuclide 14c . solid samples were irradiated at karlsruhe institute of technology with neutrons closely resembling a maxwell boltzmann distribution for kt = 25 kev , and with neutrons of energies between 123 and 178 kev . after neutron activation the amount of 14c nuclides in the samples was measured by ams at the vera ( vienna environmental research accelerator ) facility . both reactions , 13c ( n , γ ) 14c and 14n ( n , p ) 14c , act as neutron poisons in s-process nucleosynthesis . however , previous experimental data are discordant . the new data for both story_separator_special_tag measurements of neutron radiative capture cross sections in the kev region were made using fast ( millimicrosecond ) time-of-flight techniques and a large liquid scintillator tank . two series of measurements were completed on a number of nuclides . these are determinations of ( 1 ) cross sections relative to that of indium at 30 kev and at 65 kev for 49 elements , and ( 2 ) cross sections as a function of neutron energy for the following nuclei : br , nb , pd , ag , cd , in , sb , i , pr , sm , gd , tb , dy , ho , er , tm , yb , lu , ta , w , pt , and au . curve fits , using the statistical model , were obtained for br , nb , ag , in , sb , i , pr , tb , ho , tm , lu , ta , and au . the results demonstrate the presence of the 2p giant resonance near a = 100 predicted by the optical model . the average nuclear parameters obtained are in good agreement with recent low-energy total cross-section results story_separator_special_tag recent theories concerning the formation of the elements are based on neutron capture , and are subject to quantitative experimental tests through radiative capture measurements [ 1-3 ] .
this communication reports 30 kev neutron capture measurements for the isotopes of samarium . 20-g samples of electromagnetically separated samarium isotopes as sm2o3 were used with the ornl 3-mv pulsed van de graaff , providing the neutrons via the lithium-7 ( p , n ) reaction . a moxon-rae counter detected the capture γ-rays as in an earlier tin isotope measurement [ 3 ] . story_separator_special_tag neutron capture cross sections were measured for seven tin isotopes ( 116 to 120 , 122 , and 124 ) with 30-kev neutrons . the abundances of the tin isotopes due to giant-star and supernova nucleosynthesis are calculated , and the product of the abundance and cross section is approximately constant for all the isotopes , in agreement with the predicted inverse proportionality . the increased importance of the r-process ( supernova neutron capture ) relative to the s-process ( giant star neutron capture ) is shown . story_separator_special_tag abstract a scintillation gamma ray detector is described whose efficiency for detecting neutron capture events depends only on the total gamma ray energy released . the detector is cheap and fast ; it has a low background and a low sensitivity to neutrons in the energy range studied ( up to 100 kev ) . a description is given of the use of the detector with the harwell electron linac neutron time of flight system , and results are presented of capture cross section measurements on rhodium , silver , indium , tantalum and gold , over the energy range 100 ev to 50 kev . story_separator_special_tag abstract neutron radiative capture cross sections have been measured using a moxon-rae detector , for neutrons near 30 kev . a time-of-flight system with less than 3 ns resolution and a 7 cm flight path has been shown applicable to small sample measurements . cross sections at 30 kev for mo , cd , sn , ta , w , pt and au and cross sections versus energy for fluorine , sulphur and yttrium are presented . story_separator_special_tag neutron-radiative-capture cross sections have been measured at various energies from 30-220 kev using a new `` total-energy-detector '' technique of high sensitivity , applicable to small samples of separated stable isotopes . samples studied are : v , fe , ni , 86sr , 87sr , y , nb , rh , ag , 122te , 123te , 124te , 125te , 126te , 128te , 130te , i , eu , 175lu , 176lu , w , au , 204pb , and 208pb story_separator_special_tag abstract a small-mass system has been developed for monitoring the flux of neutrons with energy up to 1 mev at the new time-of-flight facility at cern , n_tof . the monitor is based on a thin mylar foil with a 6 li deposit , placed in the neutron beam , and an array of silicon detectors , placed outside the beam , for detecting the products of the 6 li ( n , α ) 3 h reaction . the small amount of material on the beam ensures a minimal perturbation of the flux and minimizes the background related to scattered neutrons . moreover , a further reduction of the γ-ray background has been obtained by constructing the scattering chamber hosting the device in carbon fibre .
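the time-of-flight technique underlying the n_tof measurements above converts a measured flight time over a known path into neutron energy ; the helper below uses the non-relativistic approximation , and the 185 m path and 0.25 ms flight time are invented example numbers , not quoted facility values :

```python
# classical time-of-flight to neutron energy conversion ; constants are the
# neutron rest mass and the speed of light , both from standard references .
M_N_MEV = 939.565            # neutron rest mass in mev / c^2
C = 299792458.0              # speed of light in m / s

def neutron_energy_kev(t_s, path_m):
    v = path_m / t_s                              # flight speed in m / s
    return 0.5 * M_N_MEV * (v / C) ** 2 * 1e3     # kinetic energy , mev -> kev

print(neutron_energy_kev(2.5e-4, 185.0))  # ~ 2.9 kev for these example values
```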
a detailed description of the flux monitor is here presented , together with the characteristics of the device , in terms of efficiency , resolution and induced background . the use of the monitor in the measurement of neutron capture cross-sections at n_tof is discussed . story_separator_special_tag the slow neutron capture process in massive stars ( weak s process ) produces most of the s-process isotopes between iron and strontium . neutrons are provided by the 22ne ( α , n ) 25mg reaction , which is activated at the end of the convective he-burning core and in the subsequent convective c-burning shell . the s-process-rich material in the supernova ejecta carries the signature of these two phases . in the past years , new measurements of neutron capture cross sections of isotopes beyond iron significantly changed the predicted weak s-process distribution . the reason is that the variation of the maxwellian-averaged cross sections ( macs ) is propagated to heavier isotopes along the s path . in the light of these results , we present updated nucleosynthesis calculations for a 25 m⊙ star of population i ( solar metallicity ) in convective he-burning core and convective c-burning shell conditions . in comparison with previous simulations based on the bao et al . compilation , the new measurement of neutron capture cross sections leads to an increase of s-process yields from nickel up to selenium . the variation of the cross section of one isotope story_separator_special_tag the neutron sensitivity of the c6d6 detector setup used at n_tof for capture measurements has been studied by means of detailed geant4 simulations . a realistic software replica of the entire n_tof experimental hall , including the neutron beam line , sample , detector supports and the walls of the experimental area has been implemented in the simulations . the simulations have been analyzed in the same manner as experimental data , in particular by applying the pulse height weighting technique . the simulations have been validated against a measurement of the neutron background performed with a natc sample , showing an excellent agreement above 1 kev . at lower energies , an additional component in the measured natc yield has been discovered , which prevents the use of natc data for neutron background estimates at neutron energies below a few hundred ev . the origin and time structure of the neutron background have been derived from the simulations . examples of the neutron background for two different samples demonstrate the important role of accurate simulations of the neutron background in capture cross section measurements . story_separator_special_tag the ( n , γ ) reaction of the radioactive isotope 93zr has been measured at the n_tof high-resolution time-of-flight facility at cern . resonance parameters have been extracted in the neutron energy range up to 8 kev , yielding capture widths smaller ( 14 % ) than reported in an earlier experiment . these results are important for detailed nucleosynthesis calculations and for refined studies of waste transmutation concepts . story_separator_special_tag in this work we explore for the first time the applicability of using gamma-ray imaging in neutron capture measurements to identify and suppress spatially localized background . for this aim , a pinhole gamma camera is assembled , tested and characterized in terms of energy and spatial performance .
it consists of a monolithic cebr3 scintillating crystal coupled to a position-sensitive photomultiplier and read out through an integrated circuit , amic2gr . the pinhole collimator is a massive carved block of lead . a series of dedicated measurements with calibrated sources and with a neutron beam incident on a $^{197}$au sample have been carried out at n_tof , achieving an enhancement of a factor of two in the signal-to-background ratio when selecting only those events coming from the direction of the sample . story_separator_special_tag abstract a gamma ray telescope using the double compton process is described , which measures extraterrestrial gamma ray fluxes in the energy range 1 – 10 mev . the detector consists of two large plastic scintillator blocks , 1.20 m apart . a gamma ray event is identified by a compton collision in the upper detector , followed by a second scattering in the lower crystal , the sequence being verified by a time of flight measurement . the properties of the telescope were measured in the laboratory using the gamma ray sources $^{60}$co and $^{24}$na . the telescope has an opening half angle of 15° ( hwhm ) with an energy resolution of ±20 % and an absolute detection efficiency of about 0.5 % . also , it has an especially low sensitivity to undesired background gamma rays . a slightly modified version of the telescope will be used in balloon flights to measure the spectrum of diffuse primary gamma rays . story_separator_special_tag a new method for measuring ( n , γ ) cross sections aiming at enhanced signal-to-background ratio is presented . this new approach is based on the combination of the pulse-height weighting technique with a total energy detection system that features $\gamma$-ray imaging capability ( i-ted ) . the latter allows one to exploit compton imaging techniques to discriminate between true capture $\gamma$-rays arising from the sample under study and background $\gamma$-rays coming from contaminant neutron ( prompt or delayed ) captures in the surrounding environment . a general proof-of-concept detection system for this application is presented in this article together with a description of the imaging method and a conceptual demonstration based on monte carlo simulations . story_separator_special_tag abstract i-ted consists of both a total energy detector and a compton camera primarily intended for the measurement of neutron capture cross sections by means of the simultaneous combination of neutron time-of-flight ( tof ) and γ-ray imaging techniques . tof allows one to obtain a neutron-energy differential capture yield , whereas the imaging capability is intended for the discrimination of radiative background sources that have a spatial origin different from that of the capture sample under investigation . a distinctive feature of i-ted is the embedded dynamic electronic collimation ( dec ) concept , which allows for a trade-off between efficiency and image resolution . here we report on some general design considerations and first performance characterization measurements made with an i-ted demonstrator in order to explore its γ-ray detection and imaging capabilities . story_separator_special_tag abstract the compton camera , which shows the gamma-ray distribution by utilizing the kinematics of compton scattering , is a promising detector capable of imaging across a wide range of energy .
in this study , we aim to construct a small-animal molecular imaging system in a wide energy range by using the compton camera . we developed a compact medical compton camera based on a ce-doped gd3al2ga3o12 ( ce : gagg ) scintillator and multi-pixel photon counter ( mppc ) . a basic performance test confirmed that for 662 kev , the typical energy resolution was 7.4 % ( fwhm ) and the angular resolution was 4.5° ( fwhm ) . we then used the medical compton camera to conduct imaging experiments based on a 3-d imaging reconstruction algorithm using the multi-angle data acquisition method . the result confirmed that for a $^{137}$cs point source at a distance of 4 cm , the image had a spatial resolution of 3.1 mm ( fwhm ) . furthermore , we succeeded in producing a 3-d multi-color image of different simultaneous energy sources ( $^{22}$na [ 511 kev ] , $^{137}$cs [ 662 kev ] , and $^{54}$mn [ 834 kev ] ) . story_separator_special_tag abstract in nuclear-medical imaging , most clinically applied gamma rays have energies less than or equal to 511 kev . there is growing interest in the applications of radioisotopes emitting higher-energy gamma rays for pretherapeutic and therapeutic imaging . compton cameras have the capability of imaging gamma rays with a wide range of energies . since the sensitivity of compton cameras decreases with increasing gamma-ray energy , high sensitivity is required to image such radioisotopes . in this study , we developed a cost-effective compton camera using highly sensitive inorganic scintillators and a commercially available data acquisition system for a positron emission tomography camera . an imaging experiment of a $^{54}$mn point source was performed to demonstrate the imaging capability of the camera , and the source was successfully imaged . story_separator_special_tag we investigate the performance of large area radiation detectors , with high energy- and spatial-resolution , intended for the development of a total energy detector with gamma-ray imaging capability , so-called i-ted . this new development aims for an enhancement in detection sensitivity in time-of-flight neutron capture measurements , versus the commonly used c6d6 liquid scintillation total-energy detectors . in this work , we study in detail the impact of the readout photosensor on the energy response of large area ( 50 × 50 mm² ) monolithic lacl3 ( ce ) crystals , in particular when replacing a conventional mono-cathode photomultiplier tube by an 8 × 8 pixelated silicon photomultiplier . using the largest commercially available monolithic sipm array ( 25 cm² ) , with a pixel size of 6 × 6 mm² , we have measured an average energy resolution of 3.92 % fwhm at 662 kev for crystal thicknesses of 10 , 20 and 30 mm . the results are confronted with detailed monte carlo ( mc ) calculations , where both optical processes and properties have been included for the reliable tracking of the scintillation photons . after the experimental validation of the mc model , we use our mc code to explore story_separator_special_tag this paper presents a study of possible models to describe the relation between the scintillation light point-of-origin and the measured photodetector pixel signals in monolithic scintillation crystals . from these models the x , y and depth of interaction ( doi ) coordinates can be estimated simultaneously by nonlinear least-squares fitting .
the method depends only on the information embedded in the signals of individual events , and therefore does not need any prior position training or calibration . three possible distributions of the light sources were evaluated : an exact solid-angle-based distribution , an approximate solid-angle distribution and an extended approximate solid-angle-based distribution which includes internal reflection at side and bottom surfaces . the performance of the general model using these three distributions was studied using monte carlo simulated data of a 20 × 20 × 10 mm lutetium oxyorthosilicate ( lu2sio5 , or lso ) block read out by two hamamatsu s8550 avalanche photodiode arrays . the approximate solid-angle-based model had the best compromise between resolution and simplicity . this model was also evaluated using experimental data by positioning a narrow 1.2 mm full width at half maximum ( fwhm ) beam of 511 kev photons story_separator_special_tag abstract we report on the spatial response characterization of large lacl3 ( ce ) monolithic crystals optically coupled to 8 × 8 pixel silicon photomultiplier ( sipm ) sensors . a systematic study has been carried out for 511 kev γ-rays using three different crystal thicknesses of 10 mm , 20 mm and 30 mm , all of them with planar geometry and a base size of 50 × 50 mm² . in this work we investigate and compare two different approaches for the determination of the main γ-ray hit location . on one hand , methods based on the fit of an analytical model for the scintillation light distribution provide the best results in terms of linearity and field of view , with spatial resolutions close to 1 mm fwhm . on the other hand , position reconstruction techniques based on neural networks provide similar linearity and field-of-view , with an attainable spatial resolution of 3 mm fwhm . for the third space coordinate z or depth-of-interaction we have implemented an inverse linear calibration approach based on the cross-section of the measured scintillation-light distribution at a certain height . the detectors characterized in this work are intended for the development of so-called total energy detectors with compton imaging capability ( i-ted ) story_separator_special_tag abstract we present a readout and digitization asic featuring low-noise and low-power for time-of-flight ( tof ) applications using sipms . the circuit is designed in standard cmos 110 nm technology , has 64 independent channels and is optimized for time-of-flight measurement in positron emission tomography ( tof-pet ) . the input amplifier is a low impedance current conveyor based on a regulated common-gate topology . each channel has quad-buffered analogue interpolation tdcs ( time binning 20 ps ) and charge integration adcs with linear response at full scale ( 1500 pc ) . the signal amplitude can also be derived from the measurement of time-over-threshold ( tot ) . simulation results show that for a single photo-electron signal with charge 200 ( 550 ) fc generated by a sipm with ( 320 pf ) capacitance the circuit has 24 ( 30 ) db snr , 75 ( 39 ) ps r.m.s . resolution , and 4 ( 8 ) mw power consumption . the event rate is 600 khz per channel , with up to 2 mhz dark counts rejection .
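several of the detectors above ( i-ted and the medical compton cameras ) reconstruct the incidence direction of a gamma ray from two energy deposits via compton kinematics : the deposits fix the opening angle of a cone on which the source must lie . as a minimal illustration , the following python sketch computes that cone angle ; the function name and the $^{137}$cs example values are our own and are not taken from any of the papers above .

import numpy as np

M_E_C2 = 511.0  # electron rest energy in kev

def compton_cone_angle(e_scatter, e_absorb):
    # scattering angle (radians) of the incident photon, given the energy
    # deposited in the scatterer plane (e_scatter) and the energy absorbed
    # in the second plane (e_absorb), assuming the photon is fully absorbed
    # so that the incident energy is e0 = e_scatter + e_absorb
    e0 = e_scatter + e_absorb
    e1 = e_absorb  # photon energy after the compton scattering
    cos_theta = 1.0 - M_E_C2 * (1.0 / e1 - 1.0 / e0)
    if not -1.0 <= cos_theta <= 1.0:
        return None  # kinematically inconsistent event : reject it
    return np.arccos(cos_theta)

# example : a 662 kev (137cs) photon depositing 300 kev in the scatterer
theta = compton_cone_angle(300.0, 362.0)
print(np.degrees(theta))  # roughly 69 degrees : half-opening angle of the cone

intersecting many such cones ( one per event ) yields the source image , and rejecting events whose cone does not point back at the sample is precisely the background-suppression idea exploited by i-ted .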
the review summarizes much of particle physics and cosmology . using data from previous editions , plus 3,283 new measurements from 899 papers , we list , evaluate , and average measured properties of gauge bosons and the recently discovered higgs boson , leptons , quarks , mesons , and baryons . we summarize searches for hypothetical particles such as heavy neutrinos , supersymmetric and technicolor particles , axions , dark photons , etc . all the particle properties and search limits are listed in summary tables . we also give numerous tables , figures , formulae , and reviews of topics such as supersymmetry , extra dimensions , particle detectors , probability , and statistics . among the 112 reviews are many that are new or heavily revised including those on : dark energy , higgs boson physics , electroweak model , neutrino cross section measurements , monte carlo neutrino generators , top quark , dark matter , dynamical electroweak symmetry breaking , accelerator physics of colliders , high-energy collider parameters , big bang nucleosynthesis , astrophysical constants and cosmological parameters . story_separator_special_tag this note presents a combination of published and preliminary electroweak results from the four lep collaborations and the sld collaboration which were prepared for the 1998 summer conferences . averages are derived for hadronic and leptonic cross-sections , the leptonic forward-backward asymmetries , the polarisation asymmetries , the bb and cc partial widths and forward-backward asymmetries and the qq charge asymmetry . the major changes with respect to results presented in summer 1997 are updates to the measurements of the z lineshape , tau polarisation , w mass and triple-gauge-boson couplings from lep , and $a_{lr}$ from sld . the results are compared with precise electroweak measurements from other experiments . a significant update here is a new measurement of the mixing angle from the nutev collaboration . the parameters of the standard model are evaluated , first using the combined lep electroweak measurements , and then using the full set of electroweak results . the lep collaborations each take responsibility for the preliminary data of their own experiment . story_separator_special_tag we present a general analysis of extensions of the standard model which satisfy the criterion of minimal flavour violation ( mfv ) . we define this general framework by constructing a low-energy effective theory containing the standard model fields , with one or two higgs doublets and , as the only source of su ( 3 ) ^5 flavour symmetry breaking , the background values of fields transforming under the flavour group as the ordinary yukawa couplings . we analyse present bounds on the effective scale of dimension-six operators , which range between 1 and 10 tev , with the most stringent constraints imposed by $b \to x_s \gamma$ . in this class of theories , it is possible to relate predictions for fcnc processes in b physics to those in k physics . we compare the sensitivity of various experimental searches in probing the hypothesis of mfv . within the two-higgs-doublet scenario , we develop a general procedure to obtain all $\tan\beta$-enhanced higgs-mediated fcnc amplitudes , discussing in particular their impact in $b \to \ell^+ \ell^-$ , $\Delta m_b$ and $b \to x_s \gamma$ .
as a byproduct , we derive some two-loop story_separator_special_tag we obtain the bounds on arbitrary linear combinations of operators of dimension 6 in the standard model . we consider a set of 21 flavor- and cp-conserving operators . each of our 21 operators is tightly constrained by the standard set of electroweak measurements . we perform a fit to all relevant precision electroweak data and include neutrino scattering experiments , atomic parity violation , w mass , lep1 , sld , and lep2 data . our results provide an efficient way of obtaining bounds on weakly coupled extensions of the standard model . story_separator_special_tag currently two scenarios exist which explain su ( 2 ) × u ( 1 ) breaking : the higgs mechanism , and standard hypercolor schemes . in this paper , a third scenario called oblique hypercolor is proposed . a hyperquark condensate is formed which , although kinematically allowed to point in an su ( 2 ) × u ( 1 ) preserving direction , is forced by yukawa interactions of the hyperquarks to misalign by a small angle , breaking su ( 2 ) × u ( 1 ) . the low energy spectrum involves normal fermions with correct masses , a partially composite higgs boson , and physical charged scalars . story_separator_special_tag we construct models in which the higgs doublet whose vacuum expectation value breaks su ( 2 ) × u ( 1 ) is a bound state of massive strongly interacting fermions . the couplings of the composite higgs to ordinary fermions are induced by heavy gauge boson exchange in the manner of extended technicolor . other heavy gauge bosons generate a negative mass term for the higgs . story_separator_special_tag abstract we discuss the corrections to the relation $m_w = m_z \cos\theta_w$ in composite higgs models , and construct a model which has a custodial su ( 2 ) symmetry and in which the higgs potential is produced entirely by su ( 2 ) × u ( 1 ) and axial u ( 1 ) gauge interactions . story_separator_special_tag calculability conditions are discussed for local gauge theories with higgs-type symmetry breaking . we focus on the naturalness of $\mu e$ universality , the naturalness of the cabibbo angle $\theta$ , the naturalness of cp-violating phases , and the naturalness of the nonleptonic $\Delta i = \frac{1}{2}$ rule . in this context we examine many published gauge models and construct others to illuminate the questions at hand . we note that naturalness of $\mu e$ universality for charged currents does not necessarily imply universality for neutral currents ( natural `` restricted '' universality ) , and we emphasize the need for $\nu_e$-beam experiments . for su ( 2 ) × u ( 1 ) and su ( 2 ) × u ( 1 ) × u ( 1 ) we give first examples of how a nontrivial natural $\theta$ can appear . models with story_separator_special_tag we propose a new class of four-dimensional theories for natural electroweak symmetry breaking , relying neither on supersymmetry nor on strong dynamics at the tev scale . the new tev physics is perturbative , and radiative corrections to the higgs mass are finite . the softening of this mass occurs because the higgs is an extended object in theory space , resulting in an accidental symmetry .
a novel higgs potential emerges naturally , requiring a second light su ( 2 ) doublet scalar . story_separator_special_tag recently , a new class of realistic models for electroweak symmetry breaking have been constructed , without supersymmetry . these theories have naturally light higgs bosons and perturbative new physics at the tev scale . we describe these models in detail , and show that electroweak symmetry breaking can be triggered by a large top quark yukawa coupling . a rich spectrum of particles is predicted , with a pair of light higgs doublets accompanied by new light weak triplet and singlet scalars . the lightest of these new scalars is charged under a geometric discrete symmetry and is therefore stable , providing a new candidate for wimp dark matter . at tev energies , a plethora of new heavy scalars , gauge bosons and fermions are revealed , with distinctive quantum numbers and decay modes . story_separator_special_tag recently a new class of theories of electroweak symmetry breaking have been constructed . these models , based on deconstruction and the physics of theory space , provide the first alternative to weak-scale supersymmetry with naturally light higgs fields and perturbative new physics at the tev scale . the higgs is light because it is a pseudo-goldstone boson , and the quadratically divergent contributions to the higgs mass are cancelled by new tev scale `` partners '' of the same statistics . in this paper we present the minimal theory space model of electroweak symmetry breaking , with two sites and four link fields , and the minimal set of fermions . there are very few parameters and degrees of freedom beyond the standard model . below a tev , we have the standard model with two light higgs doublets , and an additional complex scalar weak triplet and singlet . at the tev scale , the new particles that cancel the 1-loop quadratic divergences in the higgs mass are revealed . the entire higgs potential needed for electroweak symmetry breaking - the quartic couplings as well as the familiar negative mass squared - can be generated by the top story_separator_special_tag we present an economical theory of natural electroweak symmetry breaking , generalizing an approach based on deconstruction . this theory is the smallest extension of the standard model to date that stabilizes the electroweak scale with a naturally light higgs and weakly coupled new physics at tev energies . the higgs is one of a set of pseudo goldstone bosons in an su ( 5 ) /so ( 5 ) nonlinear sigma model . the symmetry breaking scale $f$ is around a tev , with the cutoff $\lambda \lesssim 4\pi f \sim 10$ tev . a single electroweak doublet , the `` little higgs '' , is automatically much lighter than the other pseudo goldstone bosons . the quartic self-coupling for the little higgs is generated by the gauge and yukawa interactions with a natural size $o ( g^2 , \lambda_t^2 )$ , while the top yukawa coupling generates a negative mass squared triggering electroweak symmetry breaking . beneath the tev scale the effective theory is simply the minimal standard model . the new particle content at tev energies consists of one set of spin one bosons with the same quantum numbers story_separator_special_tag new theories of electroweak symmetry breaking have recently been constructed that stabilize the weak scale and do not rely upon supersymmetry . in these theories the higgs boson is a weakly coupled pseudo-goldstone boson .
in this note we study the class of theories that can be described by theory spaces and show that the fundamental group of theory space describes all the relevant classical physics in the low energy theory . the relationship between the low energy physics and the topological properties of theory space allows a systematic method for constructing theory spaces that give any desired low energy particle content and potential . this provides us with tools for analyzing and constructing new theories of electroweak symmetry breaking . story_separator_special_tag we construct an su ( 6 ) /sp ( 6 ) non-linear sigma model in which the higgses arise as pseudo-goldstone bosons . there are two higgs doublets whose masses have no one-loop quadratic sensitivity to the cutoff of the effective theory , which can be at around 10 tev . the higgs potential is generated by gauge and yukawa interactions , and is distinctly different from that of the minimal supersymmetric standard model . at the tev scale , the new bosonic degrees of freedom are a single neutral complex scalar and a second copy of su ( 2 ) × u ( 1 ) gauge bosons . additional vector-like pairs of colored fermions are also present . story_separator_special_tag little higgs theories are an exciting new possibility for physics at tev energies . in the standard model the higgs mass suffers from an instability under radiative corrections . this `` hierarchy problem '' motivates much of current physics beyond the standard model research . little higgs theories offer a new and very promising solution to this problem in which the higgs is naturally light as a result of non-linearly realized symmetries . this article reviews some of the underlying ideas and gives a pedagogical introduction to the little higgs . the examples provided are taken from the paper `` a little higgs from a simple group '' , by d.e . kaplan and m. schmaltz . story_separator_special_tag we present a model of electroweak symmetry breaking in which the higgs boson is a pseudo-nambu-goldstone boson . by embedding the standard model su ( 2 ) × u ( 1 ) into an su ( 4 ) × u ( 1 ) gauge group , one-loop quadratic divergences to the higgs mass from gauge and top loops are canceled automatically with the minimal particle content . the potential contains a higgs quartic coupling which does not introduce one-loop quadratic divergences . our theory is weakly coupled at the electroweak scale , it has new weakly coupled particles at the tev scale and a cutoff above 10 tev , all without fine tuning . we discuss the spectrum of the model and estimate the constraints from electroweak precision measurements . story_separator_special_tag in this paper we present a little higgs model that has custodial su ( 2 ) as an approximate symmetry . this theory is a simple modification of the `` minimal moose '' model with so ( 5 ) global symmetries protecting the higgs boson mass . this allows for a simple limit where tev physics makes small contributions to precision electroweak observables . the spectrum of particles and their couplings to standard model fields are studied in detail . at low energies this model has two higgs doublets and it favors a light higgs boson from precision electroweak bounds , though for different reasons than in the standard model . the limit on the breaking scale , f , is roughly 700 gev , with a top partner of 2 tev , w' and b' of 2.5 tev , and heavy higgs partners of 2 tev .
these particles are easily accessible at hadron colliders . story_separator_special_tag we propose a class of models with gauge mediation of supersymmetry breaking , inspired by simple brane constructions , where r-symmetry is very weakly broken . the gauge sector has an extended n = 2 supersymmetry and the two electroweak higgses form an n = 2 hypermultiplet , while quarks and leptons remain in n = 1 chiral multiplets . supersymmetry is broken via the d-term expectation value of a secluded u ( 1 ) and it is transmitted to the standard model via gauge interactions of messengers in n = 2 hypermultiplets : gauginos thus receive dirac masses . the model has several distinct experimental signatures with respect to ordinary models of gauge or gravity mediation realizations of the minimal supersymmetric standard model ( mssm ) . first , it predicts extra states such as a third chargino that can be observed at collider experiments . second , the absence of a d-flat direction in the higgs sector implies a lightest higgs behaving exactly as the standard model one and thus a reduction of the little fine-tuning in the low $\tan\beta$ region . this breaking of supersymmetry can be easily implemented in string theory models . story_separator_special_tag in this note , a `` littlest higgs '' model is presented which has an approximate custodial su ( 2 ) symmetry . the model is based on the coset space so ( 9 ) / ( so ( 5 ) × so ( 4 ) ) . the light pseudo-goldstone bosons of the theory include a single higgs doublet below a tev and a set of three su ( 2 ) _w triplets and an electroweak singlet in the tev range . all of these scalars obtain approximately custodial su ( 2 ) preserving vacuum expectation values . this model addresses a defect in the earlier so ( 5 ) × su ( 2 ) × u ( 1 ) moose model , with the only extra complication being an extended top sector . some of the precision electroweak observables are computed and do not deviate appreciably from standard model predictions . in an s-t oblique analysis , the dominant non-standard model contributions are the extended top sector and higgs doublet contributions . in conclusion , a wide range of higgs masses is allowed in a large region of parameter space consistent with naturalness , where large higgs masses story_separator_special_tag abstract the ads/cft correspondence allows one to relate 4d strongly coupled theories to weakly coupled theories in 5d ads . we use this correspondence to study a scenario in which the higgs appears as a composite pseudo-goldstone boson ( pgb ) of a strongly coupled theory . we show how a non-linearly realized global symmetry protects the higgs mass and guarantees the absence of quadratic divergences at any loop order . the gauge and yukawa interactions for the pgb higgs are simple to introduce in the 5d ads theory , and their one-loop contributions to the higgs potential are calculated using perturbation theory . these contributions are finite , giving a squared-mass to the higgs which is one-loop smaller than the mass of the first kaluza-klein state . we also show that if the symmetry breaking is caused by boundary conditions in the extra dimension , the pgb higgs corresponds to the fifth component of the bulk gauge boson . to make the model fully realistic , a tree-level higgs quartic coupling must be induced . we present a possible mechanism to generate it and discuss the conditions under which an unwanted large higgs mass term is avoided .
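a recurring technical point in the abstracts above is collective symmetry breaking : the higgs mass-squared is protected because no single coupling breaks all the global symmetries under which the higgs shifts . schematically ( our own illustrative summary of the standard one-loop estimates , not a formula quoted from any of these papers ) , the standard model top loop gives

$$ \delta m_h^2 \simeq - \frac{3 \lambda_t^2}{8 \pi^2} \, \Lambda^2 , $$

whereas in little higgs models a heavy top partner t' of the same statistics cancels the quadratically divergent piece , leaving only a logarithmic remainder ,

$$ \delta m_h^2 \simeq - \frac{3 \lambda_t^2}{8 \pi^2} \, m_{t'}^2 \, \ln \frac{\Lambda^2}{m_{t'}^2} . $$

with $m_{t'}$ of order 1 tev this lets the cutoff be pushed to $\Lambda \sim 4 \pi f \sim 10$ tev without fine-tuning , which is the numerical statement made repeatedly above .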
story_separator_special_tag constraints from precision electroweak measurements reveal no evidence for new physics up to 5 – 7 tev , whereas naturalness requires new particles at around 1 tev to address the stability of the electroweak scale . we show that this `` little hierarchy problem '' can be cured by introducing a symmetry for new particles at the tev scale . as an example , we construct a little higgs model with this new symmetry , dubbed t-parity , which naturally solves the little hierarchy problem and , at the same time , stabilizes the electroweak scale up to 10 tev . the model has many important phenomenological consequences , including consistency with the precision data without any fine-tuning , a stable weakly-interacting particle as the dark matter candidate , as well as collider signals completely different from existing little higgs models , but rather similar to the supersymmetric theories with conserved r-parity . story_separator_special_tag we describe a natural uv complete theory with a composite little higgs . below a tev we have the minimal standard model with a light higgs , and an extra neutral scalar . at the tev scale there are additional scalars , gauge bosons , and vector-like charge 2/3 quarks , whose couplings to the higgs greatly reduce the uv sensitivity of the higgs potential . stabilization of the higgs mass squared parameter , without fine-tuning , occurs due to a softly broken shift symmetry - the higgs is a pseudo nambu-goldstone boson . above the 10 tev scale the theory has new strongly coupled interactions . a perturbatively renormalizable uv completion , with softly broken supersymmetry at 10 tev , is explicitly worked out . our theory contains new particles which are odd under an exact `` dark matter parity '' , $( -1 ) ^{2s+3b+l}$ . we argue that such a parity is likely to be a feature of many theories of new tev scale physics . the lightest parity odd particle , or `` lpop '' , is most likely a neutral fermion , and may make a good dark matter candidate , story_separator_special_tag the current experimental lower bound on the higgs mass significantly restricts the allowed parameter space in most realistic supersymmetric models , with the consequence that these models exhibit significant fine-tuning . we propose a solution to this ` supersymmetric little hierarchy problem ' . we consider scenarios where the stop masses are relatively heavy - in the 500 gev to a tev range . radiative stability of the higgs soft mass against quantum corrections from the top quark yukawa coupling is achieved by imposing a global su ( 3 ) symmetry on this interaction . this global symmetry is only approximate - it is not respected by the gauge interactions . a subgroup of the global symmetry is gauged by the familiar su ( 2 ) of the standard model . the physical higgs is significantly lighter than the other scalars because it is the pseudo-goldstone boson associated with the breaking of this symmetry . radiative corrections to the higgs potential naturally lead to the right pattern of gauge and global symmetry breaking . we show that both the gauge and global symmetries can be embedded into a single su ( 6 ) grand unifying group , thereby maintaining the story_separator_special_tag little higgs theories are an attempt to address the `` little hierarchy problem , '' i.e. , the tension between the naturalness of the electroweak scale and the precision electroweak measurements showing no evidence for new physics up to 5 – 10 tev .
in little higgs theories , the higgs mass-squareds are protected at one-loop order from the quadratic divergences . this allows the cutoff of the theory to be raised up to ~ 10 tev , beyond the scales probed by the current precision data . however , strong constraints can still arise from the contributions of the new tev scale particles which cancel the one-loop quadratic divergences from the standard model fields , and hence re-introduce the fine-tuning problem . in this paper we show that a new symmetry , denoted as t-parity , under which all heavy gauge bosons and scalar triplets are odd , can remove all the tree-level contributions to the electroweak observables and therefore makes the little higgs theories completely natural . the t-parity can be manifestly implemented in a majority of little higgs models by following the most general construction of the low energy effective theory a la callan , coleman , wess story_separator_special_tag we present an ultra-violet extension of the simplest little higgs model . the model marries the simplest little higgs at low energies to two copies of the littlest higgs at higher energies . the result is a weakly coupled theory below 100 tev with a naturally light higgs . the higher cutoff suppresses the contributions of strongly coupled dynamics to dangerous operators such as those which induce flavor changing neutral currents and cp violation . we briefly survey the distinctive phenomenology of the model . story_separator_special_tag we show that the su ( 3 ) little higgs model has a region of parameter space in which electroweak symmetry breaking is natural and in which corrections to precision electroweak observables are sufficiently small . the model is anomaly free , generates a higgs mass near 150 gev , and predicts new gauge bosons and fermions at 1 tev . story_separator_special_tag we construct t-parity invariant extensions of the littlest higgs model , in which only linear representations of the full symmetry group are employed , without recourse to the non-linear representations introduced by coleman , callan , wess , and zumino ( ccwz ) . these models are based on the symmetry breaking pattern su ( 5 ) _l × h_r / so ( 5 ) , where h_r can be so ( 5 ) or other larger symmetry groups . the structure of the models in the su ( 5 ) _l sector is identical to the littlest higgs model based on su ( 5 ) /so ( 5 ) . since the full symmetry group is realized linearly , these models can be thought of as possible uv extensions of the t-invariant model using non-linear representations via ccwz , with which they share similar low energy phenomenology . we also comment on how to avoid constraints from four-fermion operators on t-invariant models with or without the ccwz construction . the electroweak data therefore place a very weak bound on the symmetry breaking scale , f > 450 gev . story_separator_special_tag abstract we study the idea of a composite higgs in the framework of a five-dimensional ads theory . we present the minimal model of the higgs as a pseudo-goldstone boson in which electroweak symmetry is broken dynamically via top loop effects , all flavour problems are solved , and contributions to electroweak precision observables are below experimental bounds . since the 5d theory is weakly coupled , we are able to fully determine the higgs potential and other physical quantities . the lightest resonances are expected to have a mass around 2 tev and should be discovered at the lhc .
the top sector is mostly composite and deviations from standard model couplings are expected . story_separator_special_tag the little higgs mechanism produces a light 100 gev higgs while raising the natural cutoff from 1 tev to 10 tev . we attempt an iterative little higgs mechanism to produce multiple factors of 10 between the cutoff and the 100 gev higgs mass in a perturbative theory . in the renormalizable sector of the theory , all quantum corrections to the higgs mass proportional to mass scales greater than 1 tev are absent ; this includes quadratically divergent , log-divergent , and finite loops at all orders . however , even loops proportional to scales just a factor of 10 above the higgs ( or any other scalar ) mass come with large numerical factors that reintroduce fine-tuning . top loops , for example , produce an expansion parameter of not 1/ ( 4π ) but 1/5 . the geometric increase in the number of fields at higher energies simply exacerbates this problem . we build a complete two-stage model up to 100 tev , show that direct sensitivity of the electroweak scale to the cutoff is erased , and estimate the tuning due to large numerical factors . we then discuss the possibility , in a toy model with story_separator_special_tag we implement the su ( 5 ) /so ( 5 ) littlest higgs theory in a slice of 5d anti-de sitter space bounded by a uv brane and an ir brane . in this model , there is a bulk su ( 5 ) gauge symmetry that is broken to so ( 5 ) on the ir brane , and the higgs boson is contained in the goldstones from this breaking . all of the interactions on the ir brane preserve the global symmetries that protect the higgs mass , but a radiative potential is generated through loops that stretch to the uv brane where there are explicit su ( 5 ) violating boundary conditions . like the original littlest higgs , this model exhibits collective breaking in that two interactions must be turned on in order to generate a higgs potential . in ads space , however , collective breaking does not appear in coupling constants directly but rather in the choice of uv brane boundary conditions . we match this ads construction to the known low energy structure of the littlest higgs and comment on some of the tensions inherent in the ads construction . we calculate the story_separator_special_tag the standard model has some intrinsic beauty in the sector of fermions and gauge bosons . its scalar sector , though minimal , is however haunted by the hierarchy problem . the fermionic spectrum also has two major problems , the flavor problem with its fundamental notion about why there are three families , and the phenomenological limitation of massless neutrinos . we present here a completed chiral fermionic sector model , based on a little higgs model , that has the plausible potential of addressing all these problems of the sm at an accessible energy scale , and comment briefly on its phenomenology . the focus here is not on the little higgs part , but rather on the electroweak quarks and leptons from the model , which of course form an important part of the full model . story_separator_special_tag the implementation of the little higgs mechanism to solve the hierarchy problem provides an interesting guiding principle to build particle physics models beyond the electroweak scale . most model building works , however , do not pay much attention to the fermionic sector .
through a case example , we illustrate how a complete and consistent fermionic sector of the tev effective field theory may actually be largely dictated by the gauge structure of the model . the completed fermionic sector has a specific flavor physics structure , and many phenomenological constraints on the model can thus be obtained beyond gauge , higgs , and top physics . we take a first look at some of the quark sector constraints . story_separator_special_tag we calculate the tree-level expressions for the electroweak precision observables in the su ( 5 ) /so ( 5 ) littlest higgs model . the sources for these corrections are the exchange of heavy gauge bosons and a triplet higgs vacuum expectation value ( vev ) . weak isospin violating contributions are present because there is no custodial su ( 2 ) global symmetry . the bulk of these weak isospin violating corrections arise from heavy gauge boson exchange while a smaller contribution comes from the triplet higgs vev . a global fit is performed to the experimental data and we find that throughout the parameter space the symmetry breaking scale is bounded by $f \gtrsim 4$ tev at 95 % c.l . stronger bounds on f are found for generic choices of the high energy gauge couplings . we find that even in the best case scenario one would need fine-tuning of less than a percent to get a higgs boson mass as light as 200 gev . story_separator_special_tag little higgs models offer a new way to address the hierarchy problem , and give rise to a weakly-coupled higgs sector . these theories predict the existence of new states which are necessary to cancel the quadratic divergences of the standard model . the simplest version of these models , the littlest higgs , is based on an su ( 5 ) /so ( 5 ) non-linear sigma model and predicts that four new gauge bosons , a weak isosinglet quark , $t'$ , with $q=2/3$ , as well as an isotriplet scalar field exist at the tev scale . we consider the contributions of these new states to precision electroweak observables , and examine their production at the tevatron . we thoroughly explore the parameter space of this model and find that small regions are allowed by the precision data where the model parameters take on their natural values . these regions are , however , excluded by the tevatron data . combined , the direct and indirect effects of these new states constrain the ` decay constant ' $f \gtrsim 3.5$ tev and $m_{t'} \gtrsim$ story_separator_special_tag we calculate the tree-level electroweak precision constraints on a wide class of little higgs models including variations of the littlest higgs su ( 5 ) /so ( 5 ) , su ( 6 ) /sp ( 6 ) , and su ( 4 ) ^4 /su ( 3 ) ^4 models . by performing a global fit to the precision data we find that for generic regions of the parameter space the bound on the symmetry breaking scale f is several tev , where we have kept the normalization of f constant in the different models . for example , the `` minimal '' implementation of su ( 6 ) /sp ( 6 ) is bounded by $f \gtrsim 3.0$ tev throughout most of the parameter space , and su ( 4 ) ^4 /su ( 3 ) ^4 is bounded by $f^2 \equiv f_1^2$ story_separator_special_tag we study precision electroweak constraints on the close cousin of the littlest higgs model , the su ( 6 ) /sp ( 6 ) model .
we identify a near-oblique limit in which the heavy w' and b' decouple from the light fermions , and then calculate oblique corrections , including one-loop contributions from the extended top sector and the two higgs doublets . we find regions of parameter space that give acceptably small precision electroweak corrections and only mild fine-tuning in the higgs potential , and also find that the mass of the lightest higgs boson is relatively unconstrained by precision electroweak data . the fermions from the extended top sector can be as light as $\simeq 1$ tev , and the w' can be as light as $\simeq 1.8$ tev . we include an independent breaking scale for the b' story_separator_special_tag recently a new class of composite higgs models have been developed which give rise to naturally light higgs bosons without supersymmetry . based on the chiral symmetries of theory space , involving replicated gauge groups and appropriate gauge symmetry breaking patterns , these models allow the scale of the underlying strong dynamics giving rise to the composite particles to be as large as of order 10 tev , without any fine tuning to prevent large corrections to higgs boson mass ( es ) of order 100 gev . in this paper we show that the size of flavor violating interactions arising generically from underlying flavor dynamics constrains the scale of the higgs boson compositeness to be greater than of order 75 tev , implying that significant fine-tuning is required . without fine-tuning , the low-energy structure of the composite higgs model alone is not sufficient to eliminate potential problems with flavor-changing neutral currents or excessive cp violation ; solving those problems requires additional information or assumptions about the symmetries of the underlying flavor or strong dynamics . we also consider the weaker , but more model-independent , bounds which arise from limits on weak isospin violation . story_separator_special_tag in the littlest higgs model with t-parity , new flavor-changing interactions between mirror fermions and the standard model ( sm ) fermions can induce various flavor-changing neutral-current decays for b-mesons , the z-boson , and the higgs boson . since all these decays induced in the littlest higgs with t-parity model are correlated , in this work we perform a collective study for these decays , namely , the z-boson decay $z \to b \bar{s}$ , the higgs-boson decay $h \to b \bar{s}$ , and the b-meson decays $b \to x_s \gamma$ , $b_s \to \mu^+ \mu^-$ , and $b \to x_s \mu^+ \mu^-$ . we find that under the current experimental constraints from the b-decays , the branching ratios of both $z \to b \bar{s}$ and $h \to b \bar{s}$ can still deviate from the sm predictions significantly . in the parameter space allowed by the b-decays , the branching ratio of $z \to b \bar{s}$ story_separator_special_tag little higgs theories , in which the higgs particle is realized as the pseudo-goldstone boson of an approximate global chiral symmetry , have generated much interest as possible alternatives to weak scale supersymmetry . in this paper we analyze precision electroweak observables in the minimal moose model and find that in order to be consistent with current experimental bounds , the gauge structure of this theory needs to be modified .
we then look for viable regions of parameter space in the modified theory by calculating the various contributions to the s and t parameters . story_separator_special_tag we perform a one-loop analysis of the rho parameter in the littlest higgs model , including the logarithmically enhanced contributions from both fermion and scalar loops . we find the one-loop contributions are comparable to the tree level corrections in some regions of parameter space . the fermion loop contribution dominates in the low cutoff scale f region . on the other hand , the scalar loop contribution dominates in the high cutoff scale f region and it grows with the cutoff scale f. this in turn implies an upper bound on the cutoff scale . a low cutoff scale is allowed for a non-zero triplet vev . constraints on various other parameters in the model are also discussed . the role of triplet scalars in constructing a consistent renormalization scheme is emphasized . story_separator_special_tag we study the low energy limit of a little higgs model with custodial symmetry . the method consists in eliminating the heavy fields using their classical equations of motion in the infinite mass limit . after the elimination of the heavy degrees of freedom we can directly read off deviations from the precision electroweak data . we also examine the effects on the low energy precision experiments . story_separator_special_tag the mechanism of electroweak symmetry breaking in little higgs models is analyzed in an effective field theory approach . this enables us to identify observable effects irrespective of the specific structure and content of the heavy degrees of freedom . we parameterize these effects in a common operator basis and present the complete set of anomalous contributions to gauge-boson , higgs , and fermion couplings . if the hypercharge assignments of the model retain their standard form , electroweak precision data are affected only via the s and t parameters and by contact interactions . as a proof of principle , we apply this formalism to the minimal model and consider the current constraints on the parameter space . finally , we show how the interplay of measurements at lhc and a linear collider could reveal the structure of these models . story_separator_special_tag abstract in the context of the littlest higgs ( lh ) model , we study the contributions of the new particles to the branching ratio $r_b$ . we find that the contributions mainly depend on the free parameters f , c and $x_l$ . the precision measurement value of $r_b$ gives severe constraints on these free parameters . story_separator_special_tag the littlest higgs model contains a new vector-like heavy quark in the up sector . there are two interesting features of its existence . one is that it extends the $3 \times 3$ ckm matrix in the standard model to a $4 \times 3$ matrix and the other is that it allows z-mediated flavor changing neutral currents at tree level in the up sector but not in the down sector . we examine a few possible windows in which the z-mediated flavor changing neutral currents in the littlest higgs model can be tested . story_separator_special_tag abstract we calculate the $k^0$ – $\bar{k}^0$ and $b_{d,s}^0$ – $\bar{b}_{d,s}^0$ mixing mass differences $\Delta m_k$ , $\Delta m_{d,s}$ and the cp-violating parameter $\varepsilon_k$ in the littlest higgs ( lh ) model . for f / v as low as 5 and the yukawa parameter $x_l = 0.8$ , the enhancement of $\Delta m_d$ amounts to at most 20 % . similar comments apply to $\Delta m_s$ and $\varepsilon_k$ . the correction to $\Delta m_k$ is negligible .
the dominant new contribution in this parameter range , calculated here for the first time , comes from the box diagrams with ( $w_l^\pm$ , $w_h^\pm$ ) exchanges and ordinary quarks that are only suppressed by the mass of $w_h^\pm$ but do not involve explicit $o ( v^2 / f^2 )$ factors . this contribution is strictly positive . the explicit $o ( v^2 / f^2 )$ corrections to the sm diagrams with ordinary quarks and two $w_l^\pm$ exchanges have to be combined with the box diagrams with a single heavy t quark exchange for the gim mechanism story_separator_special_tag abstract an alternate solution of the hierarchy problem in the standard model , namely the little higgs model , has been proposed lately . in this work the $b_d^0$ – $\bar{b}_d^0$ mass difference in the framework of the little higgs model is evaluated . the experimental limits on the mass difference are shown to provide meaningful constraints on the parameter space of the model . story_separator_special_tag we reconsider little-higgs corrections to precision data . in five models with global symmetries su ( 5 ) , su ( 6 ) , so ( 9 ) corrections are ( although not explicitly ) of ' universal ' type . we get simple expressions for the $\hat{s}$ , $\hat{t}$ , w , y parameters , which summarize all effects . in all models $w , y \ge 0$ and in almost all models $\hat{s} > ( w + y ) / 2$ . results differ from previous analyses , which are sometimes incomplete , sometimes incorrect , and because we add lep2 $ee \to f \bar{f}$ cross sections to the data set . depending on the model , the constraint on f ranges between 2 and 20 tev . we next study the simplest little-higgs model ( and propose a related model ) which is not universal and affects precision data due to the presence of an extra z' vector . by restricting the data set to the most accurate leptonic data we show how corrections to precision data generated by a generic z' can be encoded in four effective $\hat{s}$ story_separator_special_tag the little higgs model provides an alternative to traditional candidates for new physics at the tev scale . the new heavy gauge bosons predicted by this model should be observable at the large hadron collider ( lhc ) . we discuss how the lhc experiments could test the little higgs model by studying the production and decay of these particles . story_separator_special_tag we study the low-energy phenomenology of the little higgs model . we first discuss the linearized effective theory of the `` littlest higgs model '' and study the low-energy constraints on the model parameters . we identify sources of the corrections to low-energy observables , discuss model-dependent arbitrariness , and outline some possible directions of extensions of the model in order to evade the precision electroweak constraints . we then explore the characteristic signatures to test the model in the current and future collider experiments . we find that the cern lhc has great potential to discover the new su ( 2 ) gauge bosons and the possible new u ( 1 ) gauge boson to the multi-tev mass scale . other states such as the colored vectorlike quark t and doubly charged higgs boson $\phi^{++}$ may also provide interesting signals . at a linear collider , precision measurements on the triple gauge boson couplings could be sensitive to the new physics scale of a few tev .
we provide a comprehensive list of the linearized interactions and vertices for the littlest story_separator_special_tag little higgs models , in which the higgs particle arises as a pseudo-goldstone boson , have a natural mechanism of electroweak symmetry breaking associated with the large value of the top quark yukawa coupling . the mechanism typically involves a new heavy su ( 2 ) _l singlet top quark , t. we discuss the relationship of the higgs boson and the two top quarks . we suggest experimental tests of the little higgs mechanism of electroweak symmetry breaking using the production and decay of the t at the large hadron collider . story_separator_special_tag we analyse the consequences of the little higgs model for double higgs boson production at the lhc and for the partial decay width $\Gamma ( h \to \gamma\gamma )$ . in particular , we study the sensitivity of these processes in terms of the parameters of the model . we find that a generic prediction is that the partial width $\Gamma ( h \to \gamma\gamma )$ is greatly suppressed with respect to the standard model due to a cancellation between the contributions from the charged vector particles . this is a robust prediction of the little higgs model . on the other hand , the little higgs model does not significantly change either single or double higgs production at hadron colliders . story_separator_special_tag in this talk i describe how to discover or rule out the existence of w' bosons at the cern large hadron collider as a function of arbitrary couplings and w' masses . if w' bosons are not found , i demonstrate the 95 % confidence-level exclusions that can be reached for several classes of models . in particular , w' bosons in the entire reasonable parameter space of little higgs models can be discovered or excluded in 1 year at the lhc . story_separator_special_tag in the context of the littlest higgs ( lh ) model , we consider the higgs-strahlung process $e^+ e^- \to zh$ . we find that the correction effects on the process mainly come from the heavy photon $a_h$ . if we take the mixing angle c in the range of 0.85 – 1 , the contributions of the heavy gauge boson $z_h$ can not be neglected . in most of the parameter space , the deviation of the total production cross section $\sigma_{tot}$ from its sm value is larger than 5 % , which may be observable in the future high energy $e^+ e^-$ collider ( lc ) experiments . the future lc experiments could test the lh model by measuring the cross section of the process $e^+ e^- \to zh$ . story_separator_special_tag we discuss possible searches for the new particles predicted by little higgs models at the lhc . by using a simulation of the atlas detector , we demonstrate how the predicted quark , gauge bosons and additional higgs bosons can be found and estimate the mass range over which their properties can be constrained . story_separator_special_tag we calculate the two body higgs boson decays in the framework of the littlest higgs model . the decay $h \to \gamma z$ is computed at one-loop level and , using previous results , we evaluate the branching fractions in the framework of the littlest higgs model . a wide range of the parameter space of the model is considered and possible deviations from the standard model are explored . story_separator_special_tag a generic feature of little higgs models is the presence of extra neutral gauge bosons .
in the littlest higgs model , the neutral extra gauge boson a_h is lightest among the extra particles and could be as light as a few hundred gev , and may be produced directly at an e^+ e^- linear collider . we study production and decay of a_h at the linear collider and compare them with those of z' bosons in supersymmetric e_6 models . story_separator_special_tag little higgs models have an enlarged global symmetry which makes the higgs boson a pseudo-goldstone boson . this symmetry typically contains spontaneously broken u ( 1 ) subgroups which provide light electroweak-singlet pseudoscalars . unless such particles are absorbed as the longitudinal component of z' states , they appear as pseudoscalars in the physical spectrum at the electroweak scale . we outline their significant impact on little higgs phenomenology and analyze a few possible signatures at the lhc and other future colliders in detail . in particular , their presence significantly affects the physics of the new heavy quark states predicted in little higgs models , and inclusive production at lhc may yield impressive diphoton resonances . story_separator_special_tag in the context of the littlest higgs ( lh ) model , we study single production of the new gauge bosons $b_h$ , $z_h$ and $w_h^\pm$ via $e^- \gamma$ collisions and discuss the possibility of detecting these new particles in the tev energy $e^+ e^-$ collider ( lc ) . we find that these new particles can not be detected via the $e^- u \bar{u}$ signal in all of the parameter space preferred by the electroweak precision data . however , the heavy gauge bosons $b_h$ and $z_h$ may be observed via the decay channel $b_h ( z_h )$
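as a rough guide to the spectra discussed above , the leading-order mass relations usually quoted for the littlest higgs heavy gauge bosons are ( our own summary of standard results , with s , c and s' , c' denoting the sines and cosines of the su ( 2 ) and u ( 1 ) mixing angles ; not taken verbatim from these abstracts )

$$ m_{w_h} \simeq \frac{g}{2 s c} \, f , \qquad m_{a_h} \simeq \frac{g'}{2 \sqrt{5} \, s' c'} \, f . $$

the extra factor of $\sqrt{5}$ , together with the smallness of g' , is what makes the heavy photon a_h the lightest of the new gauge bosons : for $f \sim 1$ tev and $s' c' \sim 1/2$ one finds $m_{a_h}$ of order a few hundred gev , consistent with the statements above .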
the aim of the paper is to analyse the extent and determinants of panel attrition in the european community household panel ( echp ) . the fact that , after five waves , the response rate in some countries has declined to about 50 % leads to concerns about the representativeness of the remaining participants . we find that the extent and determinants of panel attrition vary considerably across countries as well as across waves within one country . differences were also found when comparing attrition behaviour across different surveys running parallel in the same countries , as was the case for germany and the united kingdom ( uk ) . response rates are found to depend strongly on whether households moved during the sample period and whether the interviewer in the sample period changed . compared to these two influences , all other characteristics are of minor importance . despite these different attrition rates , neither is the analysis of income biased , nor is the ranking of national results disturbed . story_separator_special_tag in time series predictor evaluation , we observe that with respect to the model selection procedure there is a gap between evaluation of traditional forecasting procedures , on the one hand , and evaluation of machine learning techniques on the other hand . in traditional forecasting , it is common practice to reserve a part from the end of each time series for testing , and to use the rest of the series for training . thus full use is not made of the data , but theoretical problems with respect to temporal evolutionary effects and dependencies within the data as well as practical problems regarding missing values are eliminated . on the other hand , when evaluating machine learning and other regression methods used for time series forecasting , often cross-validation is used for evaluation , paying little attention to the fact that those theoretical problems invalidate the fundamental assumptions of cross-validation . to close this gap and examine the consequences of different model selection procedures in practice , we have developed a rigorous and extensive empirical study . six different model selection procedures , based on ( i ) cross-validation and ( ii ) evaluation using the story_separator_special_tag inferential statistics teach us that we need a random probability sample to infer from a sample to the general population . in online survey research , however , volunteer access panels , in which respondents self-select themselves into the sample , dominate the landscape . such panels are attractive due to their low costs . nevertheless , recent years have seen an increasing number of debates about the quality , in particular about errors in the representativeness and measurement , of such panels . in this article , we describe four probability-based online and mixed-mode panels for the general population , namely , the longitudinal internet studies for the social sciences ( liss ) panel in the netherlands , the german internet panel ( gip ) and the gesis panel in germany , and the longitudinal study by internet for the social sciences ( elipss ) panel in france . we compare them in terms of sampling strategies , offline recruitment procedures , and panel characteristics .
our aim is to provide an overview to the scientific community of the availability of such data sources to demonstrate the potential strategies for recruiting and maintaining probability-based online panels to practitioners and to story_separator_special_tag various open probability-based panel infrastructures have been established in recent years , allowing researchers to collect high-quality survey data . in this report , we describe the processes and deliverables of setting up the gesis panel , the first probability-based mixed-mode panel infrastructure in germany open for data collection to the academic research community . the reference population for the gesis panel is the german-speaking population aged between 18 and 70 years permanently residing in germany . in 2013 , approximately 5,000 panelists had been recruited from a random sample drawn from municipal population registers . we describe the outcomes of the sampling strategy and the multistep recruitment process , involving computer-aided personal interviews conducted at respondents ' homes . next , we describe the outcomes of the two self-administered survey modes ( online and paper-and-pencil ) of the gesis panel used for the initial profile survey and all subsequent bimonthly data collection waves . across all stages of setting up the gesis panel , we report sample composition discrepancies for key demographic variables between the gesis panel and established benchmark surveys . overall , the findings highlight the usefulness of pursuing a mixed-mode strategy when building a probability-based panel infrastructure story_separator_special_tag random forests are a combination of tree predictors such that each tree depends on the values of a random vector sampled independently and with the same distribution for all trees in the forest . the generalization error for forests converges a.s. to a limit as the number of trees in the forest becomes large . the generalization error of a forest of tree classifiers depends on the strength of the individual trees in the forest and the correlation between them . using a random selection of features to split each node yields error rates that compare favorably to adaboost ( y. freund & r. schapire , machine learning : proceedings of the thirteenth international conference , * * * , 148–156 ) , but are more robust with respect to noise . internal estimates monitor error , strength , and correlation and these are used to show the response to increasing the number of features used in the splitting . internal estimates are also used to measure variable importance . these ideas are also applicable to regression . story_separator_special_tag classification and regression trees are machine learning methods for constructing prediction models from data . the models are obtained by recursively partitioning the data space and fitting a simple prediction model within each partition . as a result , the partitioning can be represented graphically as a decision tree . classification trees are designed for dependent variables that take a finite number of unordered values , with prediction error measured in terms of misclassification cost . regression trees are for dependent variables that take continuous or ordered discrete values , with prediction error typically measured by the squared difference between the observed and predicted values .
this article gives an introduction to the subject by reviewing some widely available algorithms and comparing their capabilities , strengths , and weaknesses in two examples . © 2011 john wiley & sons , inc. wires data mining knowl discov 2011 , 1 , 14–23 , doi : 10.1002/widm.8 story_separator_special_tag using the high school and beyond longitudinal study , we investigate the participation patterns across four waves of data . because nonrespondents from one wave are recontacted at subsequent waves , both monotone and nonmonotone attrition patterns arise . we discuss correlates of these two types of attrition in an attempt to describe individuals who may be at risk of attrition . gender and incomplete participation in the base year ( respondents who exhibit item nonresponse on key variables ) are important predictors of later attrition . estimated effects of monotone and nonmonotone attrition on parameter estimates in regression models suggest that certain demographic effects will be biased due to sample attrition . the evidence for bias is neither pervasive nor consistent , but suggests a systematic inflation of the black-white achievement disparity . story_separator_special_tag machine learning techniques comprise an array of computer-intensive methods that aim at discovering patterns in data using flexible , often nonparametric , methods for modeling and variable selection . these methods offer an expansion to the more traditional methods , such as ols or logistic regression , which have been used by survey researchers and social scientists . many of the machine learning methods do not require the distributional assumptions of the more traditional methods , and many do not require explicit model specification prior to estimation . machine learning methods are beginning to be used for various aspects of survey research including responsive/adaptive designs , data processing and nonresponse adjustments and weighting . this special issue aims to familiarize survey researchers and social scientists with the basic concepts in machine learning and highlights five common methods . specifically , articles in this issue will offer an accessible introduction to : lasso models , support vector machines , neural networks , and classification and regression trees and random forests . in addition to a detailed description , each article will highlight how the respective method is being used in survey research along with an application of the method to a story_separator_special_tag we present results from a large-scale empirical comparison between ten learning methods : svms , neural nets , logistic regression , naive bayes , memory-based learning , random forests , decision trees , bagged trees , boosted trees , and boosted stumps . we evaluate the methods on binary classification problems using nine performance criteria : accuracy , squared error , cross-entropy , roc area , f-score , precision/recall breakeven point , average precision , lift , and calibration . because some models ( e.g . svms and boosted trees ) do not predict well-calibrated probabilities , we compare the performance of the algorithms both before and after calibrating their predictions with platt scaling and isotonic regression . before scaling , the models with the best overall performance are neural nets , bagged trees , and random forests . after scaling , the best models are boosted trees , random forests , and unscaled neural nets .
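the calibration step described in the comparison above can be reproduced with standard tooling . the sketch below is a minimal illustration , not the authors ' original setup : it wraps a random forest in scikit-learn 's CalibratedClassifierCV , which implements both platt scaling ( method='sigmoid' ) and isotonic regression ; the synthetic dataset , split , and estimator settings are placeholder assumptions .

# Minimal sketch of post-hoc probability calibration (Platt scaling vs.
# isotonic regression). Dataset, split, and model settings are illustrative.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier
from sklearn.calibration import CalibratedClassifierCV
from sklearn.metrics import brier_score_loss

X, y = make_classification(n_samples=5000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

base = RandomForestClassifier(n_estimators=200, random_state=0)
for method in ("sigmoid", "isotonic"):  # "sigmoid" is Platt scaling
    clf = CalibratedClassifierCV(base, method=method, cv=5)
    clf.fit(X_train, y_train)
    p = clf.predict_proba(X_test)[:, 1]
    print(method, "brier score:", round(brier_score_loss(y_test, p), 4))

on held-out data , the method with the lower brier score is the better calibrated of the two for the given base learner .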
story_separator_special_tag an approach to the construction of classifiers from imbalanced datasets is described . a dataset is imbalanced if the classification categories are not approximately equally represented . often real-world data sets are predominantly composed of `` normal '' examples with only a small percentage of `` abnormal '' or `` interesting '' examples . it is also the case that the cost of misclassifying an abnormal ( interesting ) example as a normal example is often much higher than the cost of the reverse error . under-sampling of the majority ( normal ) class has been proposed as a good means of increasing the sensitivity of a classifier to the minority class . this paper shows that a combination of our method of over-sampling the minority ( abnormal ) class and under-sampling the majority ( normal ) class can achieve better classifier performance ( in roc space ) than only under-sampling the majority class . this paper also shows that a combination of our method of over-sampling the minority class and under-sampling the majority class can achieve better classifier performance ( in roc space ) than varying the loss ratios in ripper or class priors in naive bayes story_separator_special_tag tree boosting is a highly effective and widely used machine learning method . in this paper , we describe a scalable end-to-end tree boosting system called xgboost , which is used widely by data scientists to achieve state-of-the-art results on many machine learning challenges . we propose a novel sparsity-aware algorithm for sparse data and weighted quantile sketch for approximate tree learning . more importantly , we provide insights on cache access patterns , data compression and sharding to build a scalable tree boosting system . by combining these insights , xgboost scales beyond billions of examples using far fewer resources than existing systems . story_separator_special_tag online panel surveys have changed social and market research . especially in applied market research , online panels are a very important tool for conducting surveys . in the early 2000s , nearly all online panels were based on self-selected samples of respondents who have access to the internet . these self-selected panels offer quick and cheap data collection . this comes at the price of low external validity . thus , self-selected panel respondents are likely to differ from the population to which the results from these respondents are meant to generalize . more importantly , the nature and size of these potential biases can never be properly assessed , because sampling theories do not apply to studies that do not rely on a random sampling scheme . this lack of external validity has continuously concerned academic researchers and people working in official statistics . story_separator_special_tag this paper aims to analyse predictors of attrition in a major uk longitudinal survey , the family and children study , and thus to contribute to a deeper understanding of the process and reasons for attrition as a social phenomenon . multilevel modelling techniques are used to analyse attrition across several waves accounting for clustering of sample members within interviewers . the models are guided by current conceptual frameworks and theories of survey participation . the analysis also explores the role of the interviewer in gaining cooperation in a longitudinal study , in particular investigating effects of changes of interviewers across waves .
an advantage of the data is that relatively rich information on both respondents and non-respondents is available from early waves and from interviewer observations story_separator_special_tag we evaluate 179 classifiers arising from 17 families ( discriminant analysis , bayesian , neural networks , support vector machines , decision trees , rule-based classifiers , boosting , bagging , stacking , random forests and other ensembles , generalized linear models , nearest-neighbors , partial least squares and principal component regression , logistic and multinomial regression , multiple adaptive regression splines and other methods ) , implemented in weka , r ( with and without the caret package ) , c and matlab , including all the relevant classifiers available today . we use 121 data sets , which represent the whole uci data base ( excluding the large-scale problems ) and our own real problems , in order to achieve significant conclusions about the classifier behavior , not dependent on the data set collection . the classifiers most likely to be the best are the random forest ( rf ) versions , the best of which ( implemented in r and accessed via caret ) achieves 94.1 % of the maximum accuracy , exceeding 90 % in 84.3 % of the data sets . however , the difference is not statistically significant with the second best , the story_separator_special_tag longitudinal or panel surveys offer unique benefits for social science research , but they typically suffer from attrition , which reduces sample size and can result in biased inferences . previous research tends to focus on the demographic predictors of attrition , conceptualizing attrition propensity as a stable , individual-level characteristic : some individuals ( e.g. , young , poor , residentially mobile ) are more likely to drop out of a study than others . we argue that panel attrition reflects both the characteristics of the individual respondent as well as her survey experience , a factor shaped by the design and implementation features of the study . in this article , we examine and compare the predictors of panel attrition in the 2008–2009 american national election study , an online panel , and the 2006–2010 general social survey , a face-to-face panel . in both cases , survey experience variables are predictive of panel attrition above and beyond the standard demographic predictors , but the particular measures of relevance differ across the two surveys . the findings inform statistical corrections for panel attrition bias and provide study design insights for future panel data collections . story_separator_special_tag many surveys of the u.s. household population are experiencing higher refusal rates . nonresponse can , but need not , induce nonresponse bias in survey estimates . recent empirical findings illustrate cases when the linkage between nonresponse rates and nonresponse biases is absent . despite this , professional standards continue to urge high response rates . statistical expressions of nonresponse bias can be translated into causal models to guide hypotheses about when nonresponse causes bias . alternative designs to measure nonresponse bias exist , providing different but incomplete information about the nature of the bias . a synthesis of research studies estimating nonresponse bias shows the bias often present . a logical question at this moment in history is what advantage probability sample surveys have if they suffer from high nonresponse rates .
since postsurvey adjustment for nonresponse requires auxiliary variables , the answer depends on the nature of the design and the quality of the auxiliary variables . story_separator_special_tag the emergence of cyber-physical-social systems ( cpss ) as a novel paradigm has revolutionized the relationship between humans , computers and the physical environment . in this paper , we survey the advancement of cpss through cyber-physical systems ( cps ) , cyber-social systems ( css ) and cpss , as well as related techniques . cpss are still in their infancy ; most recent studies are application-specific and lack a systematic design methodology . to exploit the design methodology for cpss , we review the existing system-level design methodologies in multiple application domains and further compare their performance characteristics and applicability for cpss . finally , we introduce our latest research advancement on system-level design methodology for cpss and summarize future challenges for designing cpss . story_separator_special_tag over the past few years surveys have expanded to new populations , have incorporated measurement of new and more complex substantive issues and have adopted new data collection tools . at the same time there has been a growing reluctance among many household populations to participate in surveys . these factors have combined to present survey designers and survey researchers with increased uncertainty about the performance of any given survey design at any particular point in time . this uncertainty has , in turn , challenged the survey practitioner 's ability to control the cost of data collection and quality of resulting statistics . the development of computer-assisted methods for data collection has provided survey researchers with tools to capture a variety of process data ( paradata ) that can be used to inform cost-quality trade-off decisions in real time . the ability to monitor continually the streams of process data and survey data creates the opportunity to alter the design during the course of data collection to improve survey cost efficiency and to achieve more precise , less biased estimates . we label such surveys as responsive designs . the paper defines responsive design and uses examples story_separator_special_tag during the past decade there has been an explosion in computation and information technology . with it have come vast amounts of data in a variety of fields such as medicine , biology , finance , and marketing . the challenge of understanding these data has led to the development of new tools in the field of statistics , and spawned new areas such as data mining , machine learning , and bioinformatics . many of these tools have common underpinnings but are often expressed with different terminology . this book describes the important ideas in these areas in a common conceptual framework . while the approach is statistical , the emphasis is on concepts rather than mathematics . many examples are given , with a liberal use of color graphics . it is a valuable resource for statisticians and anyone interested in data mining in science or industry . the book 's coverage is broad , from supervised learning ( prediction ) to unsupervised learning . the many topics include neural networks , support vector machines , classification trees and boosting , the first comprehensive treatment of this topic in any book .
this major new edition features many story_separator_special_tag in this paper we develop a theory of the survey response decision process and apply it to the analysis of field office policy measures in an attempt to see which of these are effective in reducing panel attrition . we use data from the health and retirement study ( hrs ) to assess the effectiveness of 1 ) reducing the length of the interview and 2 ) assigning the same initial interviewer wave after wave . there is virtually no evidence in the data that interview length affects subsequent wave response . assigning the same interviewer wave after wave , however , has a strong positive effect on response rates . story_separator_special_tag predictive modeling methods from the field of machine learning have become a popular tool across various disciplines for exploring and analyzing diverse data . these methods often do not require specific prior knowledge about the functional form of the relationship under study and are able to adapt to complex non-linear and non-additive interrelations between the outcome and its predictors while focusing specifically on prediction performance . this modeling perspective is beginning to be adopted by survey researchers in order to adjust or improve various aspects of data collection and/or survey management . to facilitate this strand of research , this paper ( 1 ) provides an introduction to prominent tree-based machine learning methods , ( 2 ) reviews and discusses previous and ( potential ) prospective applications of tree-based supervised learning in survey research , and ( 3 ) exemplifies the usage of these techniques in the context of modeling and predicting nonresponse in panel surveys . story_separator_special_tag attrition in longitudinal studies is a major threat to the representativeness of the data and the generalizability of the findings . typical approaches to address systematic nonresponse are either expensive and unsatisfactory ( e.g. , oversampling ) or rely on the unrealistic assumption of data missing at random ( e.g. , multiple imputation ) . thus , models that effectively predict who is most likely to drop out on subsequent occasions might offer the opportunity to take countermeasures ( e.g. , incentives ) . with the current study , we introduce a longitudinal model validation approach and examine whether attrition in two nationally representative longitudinal panel studies can be predicted accurately . we compare the performance of a basic logistic regression model to a more flexible , data-driven machine learning algorithm , gradient boosting machines . our results show almost no difference in accuracies for both modeling approaches , which contradicts claims of similar studies on survey attrition . prediction models could not be generalized across surveys and were less accurate when tested at a later survey wave . we discuss the implications of these findings for survey retention , the use of complex machine learning algorithms , and give some recommendations to story_separator_special_tag attrition is mostly caused by sample members who are not contacted or who refuse . on the one hand , it is well known that reasons to attrite due to non-contact are different from those that are due to refusal . on the other hand , non-contact most probably affects household attrition , while refusal can affect both households and individuals .
in this article , attrition on both the household and ( conditional on household participation ) the individual level is analysed in three panel surveys from the cross national equivalent file ( cnef ) : the german socio-economic panel ( gsoep ) , the british household panel study ( bhps ) , and the swiss household panel ( shp ) . to follow households over time we use a common rule in all three surveys . first , we find different attrition magnitudes and patterns both across the surveys and also on the household and the individual level . second , there is more evidence for reinforced rather than compensated household level selection effects if the individual level is also taken into account . story_separator_special_tag attrition is the process of dropout from a panel study . earlier studies into the determinants of attrition study respondents still in the survey and those who attrited at any given wave of data collection . in many panel surveys , the process of attrition is more subtle than being either in or out of the study . respondents often miss out on one or more waves , but might return after that . they start off responding infrequently , but more often later in the course of the study . using current analytical models , it is difficult to incorporate such response patterns in analyses of attrition . this article shows how to study attrition in a latent class framework . this allows the separation of different groups of respondents that each follow a different and distinct process of attrition . classifying attriting respondents enables us to formally test substantive theories of attrition and its effects on data accuracy more effectively . story_separator_special_tag research methods for graduate business and social science students john adams , hafiz t a khan , robert raeside and david white response books , a division of sage publications , new delhi , 2007 , pages : 270 ; price : rs . 395 ; isbn : 978-0-7619-3589 this book is a fair attempt at comprehensively covering various topics related to research methods in separate chapters , including research problem formulation , research design , data collection , data analysis , advanced statistical analysis and easy-to-comprehend snippets on report-writing . the book is divided into eight sections . section 1 is a general introduction with only one chapter . the authors provide answers to questions such as what is research , why research is conducted , who does research and how research is conducted . three major research types , namely descriptive , explanatory and predictive , are indicated . it is highlighted that the type of research approach selected depends on the nature of the research problem at hand . section 2 with its single chapter focuses on the research methodology . the authors differentiate between research method and methodology . while the former is story_separator_special_tag recent decades have seen a shift away from surveys in which all procedures are standardised towards a variety of approaches ( tailored , responsive , adaptive ) in which different sample members are treated differently . a particular variant of the non-standardised approach involves applying to each of a number of subgroups targeted design features that are identified in advance of field work and are not subsequently modified . targeted designs have mainly been implemented on panel surveys and mainly to address non-response and attrition .
this article provides a framework for targeted designs , discusses their objectives , reviews their development , and outlines possible future developments . story_separator_special_tag this meta-analysis quantifies the dose-response relationship between monetary incentives and response rates in household surveys . it updates and augments the existing meta-analyses on incentives by analyzing the latest experimental research , focusing specifically on general-population household surveys , and includes the three major data-collection modes ( mail , telephone , and in-person ) under the same analytic framework . using hierarchical regression modeling and literature from the past 21 years , the analysis finds a strong , nonlinear effect of incentives . survey mode and incentive delivery timing ( prepaid or promised ) also play important roles in the effectiveness of incentives . prepaid incentives offered in mail surveys had the largest per-dollar impact on response . incentive timing appears to play an important role in the effectiveness of incentives offered in telephone surveys but not in-person surveys . our model estimates a null effect of promised incentives in mail surveys ; however , given the dearth of experiments testing this type of incentive , we are unable to draw firm conclusions regarding their effectiveness . survey burden and survey year both were negatively correlated with response overall . however , neither significantly impacted the dose-response relationship . survey story_separator_special_tag machine learning is a field at the intersection of statistics and computer science that uses algorithms to extract information and knowledge from data . its applications increasingly find their way . story_separator_special_tag machines are increasingly doing intelligent things . face recognition algorithms use a large dataset of photos labeled as having a face or not to estimate a function that predicts the pre . story_separator_special_tag this paper presents micro-level evidence on the role of the socio-demographic characteristics of the population and the characteristics of the data collection process as predictors of survey response . our evidence is based on the public use files of the european community household panel ( echp ) , a longitudinal household survey covering the countries of the european union , whose attractive feature is the high level of comparability across countries and over time . we model the response process as the outcome of two sequential events : ( i ) contact between the interviewer and an eligible interviewee , and ( ii ) cooperation of the interviewee . our model allows for dependence between the ease of contact and the propensity to cooperate , taking into account the censoring problem caused by the fact that we observe whether a person is a respondent only if she has been contacted . story_separator_special_tag the central problem of longitudinal surveys is attrition . the national longitudinal survey of youth in 1979 ( nlsy79 ) , which this issue of the monthly labor review features , is the gold standard for sample retention against which longitudinal surveys are usually measured . however , we can not understand how the nlsy79 has done so well without considering what was done differently in the other cohorts of the nls and what we have learned by formal evaluations of attrition aversion measures that evolved over a quarter century of field work . the lessons here are hard-won and , to some , unconventional .
story_separator_special_tag designing a generic machine learning service based on a rest platform for prediction by using a combined tensorflow and scikit-learn approach in python . the data preprocessing being different for google tensorflow and scikit-learn , a factory pattern and a delegator pattern are used to make this generic . the generic machine learning service combines the use of a random forest classifier and an svm classifier from scikit-learn with a google tensorflow deep neural network for class-based classification . there is an abstract machine learning modelfactory that has the responsibility of abstracting out the internal details that are implementation story_separator_special_tag copyright © 1999–2012 r foundation for statistical computing . permission is granted to make and distribute verbatim copies of this manual provided the copyright notice and this permission notice are preserved on all copies . permission is granted to copy and distribute modified versions of this manual under the conditions for verbatim copying , provided that the entire resulting derived work is distributed under the terms of a permission notice identical to this one . permission is granted to copy and distribute translations of this manual into another language , under the above conditions for modified versions , except that this permission notice may be stated in a translation approved by the r core team . story_separator_special_tag household panels are special types of panels that aim to obtain information not only about the sampled persons in the panel , but also about their household context . this is typically necessary in order to assess important political and sociological questions such as poverty or parenthood that are defined on the household level and not on the personal level . in order to facilitate the evaluation of such studies , this means that all persons in the household are interviewed and the resulting household variables are then constructed by the agency that conducts the sample . story_separator_special_tag in light of the recent interest in using longitudinal panel data to study personality development , it is important to know if personality traits are related to panel attrition . we analyse the effects of personality on panel drop-out separately for an older subsample ( started in 1984 ) , a relatively young subsample ( started in 2000 ) , and a new subsample ( started in 2009 ) of the german socio-economic panel ( soep ) study . we found that openness slightly decreases the probability of panel drop-out in all three samples . for the older subsample only , we found a small negative effect of agreeableness on panel drop-out . we control for age , sex , education , migration background , and the number of inhabitants in the region of the respondents . story_separator_special_tag tables and figures . glossary . 1. introduction . 1.1 overview . 1.2 examples of surveys with nonresponse . 1.3 properly handling nonresponse . 1.4 single imputation . 1.5 multiple imputation . 1.6 numerical example using multiple imputation . 1.7 guidance for the reader . 2.
statistical background . 2.1 introduction . 2.2 variables in the finite population . 2.3 probability distributions and related calculations . 2.4 probability specifications for indicator variables . 2.5 probability specifications for ( x , y ) . 2.6 bayesian inference for a population quantity . 2.7 interval estimation . 2.8 bayesian procedures for constructing interval estimates , including significance levels and point estimates . 2.9 evaluating the performance of procedures . 2.10 similarity of bayesian and randomization-based inferences in many practical cases . 3. underlying bayesian theory . 3.1 introduction and summary of repeated-imputation inferences . 3.2 key results for analysis when the multiple imputations are repeated draws from the posterior distribution of the missing values . 3.3 inference for scalar estimands from a modest number of repeated completed-data means and variances . 3.4 significance levels for multicomponent estimands from a modest number of repeated completed-data story_separator_special_tag 1 introduction 1.1 what is data mining ? 1.2 motivating challenges 1.3 the origins of data mining 1.4 data mining tasks 1.5 scope and organization of the book 1.6 bibliographic notes 1.7 exercises 2 data 2.1 types of data 2.2 data quality 2.3 data preprocessing 2.4 measures of similarity and dissimilarity 2.5 bibliographic notes 2.6 exercises 3 exploring data 3.1 the iris data set 3.2 summary statistics 3.3 visualization 3.4 olap and multidimensional data analysis 3.5 bibliographic notes 3.6 exercises 4 classification : basic concepts , decision trees , and model evaluation 4.1 preliminaries 4.2 general approach to solving a classification problem 4.3 decision tree induction 4.4 model overfitting 4.5 evaluating the performance of a classifier 4.6 methods for comparing classifiers 4.7 bibliographic notes 4.8 exercises 5 classification : alternative techniques 5.1 rule-based classifier 5.2 nearest-neighbor classifiers 5.3 bayesian classifiers 5.4 artificial neural network ( ann ) 5.5 support vector machine ( svm ) 5.6 ensemble methods 5.7 class imbalance problem 5.8 multiclass problem 5.9 bibliographic notes 5.10 exercises 6 association analysis : basic concepts and algorithms 6.1 problem definition 6.2 frequent itemset generation 6.3 rule generation 6.4 compact representation of frequent itemsets 6.5 alternative methods for generating frequent itemsets story_separator_special_tag in evaluations of forecasting accuracy , including forecasting competitions , researchers have paid attention to the selection of time series and to the appropriateness of forecast-error measures . however , they have not formally analyzed choices in the implementation of out-of-sample tests , making it difficult to replicate and compare forecasting accuracy studies . in this paper , i ( 1 ) explain the structure of out-of-sample tests , ( 2 ) provide guidelines for implementing these tests , and ( 3 ) evaluate the adequacy of out-of-sample tests in forecasting software . the issues examined include series-splitting rules , fixed versus rolling origins , updating versus recalibration of model coefficients , fixed versus rolling windows , single versus multiple test periods , diversification through multiple time series , and design characteristics of forecasting competitions . for individual time series , the efficiency and reliability of out-of-sample tests can be improved by employing rolling-origin evaluations , recalibrating coefficients , and using multiple test periods .
the results of forecasting competitions would be more generalizable if based upon precisely described groups of time series , in which the series are homogeneous within group and heterogeneous between groups . few forecasting story_separator_special_tag using mobile phones to conduct survey interviews has gathered momentum recently . however , using mobile telephones in surveys poses many new challenges . one important challenge involves properly classifying final case dispositions to understand response rates and non-response error and to implement responsive survey designs . both purposes demand accurate assessments of the outcomes of individual call attempts . by looking at actual practices across three countries , we suggest how the disposition codes of the american association for public opinion research , which have been developed for telephone surveys , can be modified to fit mobile phones . adding an international dimension to these standard definitions will improve survey methods by making systematic comparisons across different contexts possible . copyright 2007 royal statistical society . story_separator_special_tag we propose a new method for estimation in linear models . the 'lasso ' minimizes the residual sum of squares subject to the sum of the absolute value of the coefficients being less than a constant . because of the nature of this constraint it tends to produce some coefficients that are exactly 0 and hence gives interpretable models . our simulation studies suggest that the lasso enjoys some of the favourable properties of both subset selection and ridge regression . it produces interpretable models like subset selection and exhibits the stability of ridge regression . there is also an interesting relationship with recent work in adaptive function estimation by donoho and johnstone . the lasso idea is quite general and can be applied in a variety of statistical models : extensions to generalized regression models and tree-based models are briefly described . story_separator_special_tag panel surveys suffer from attrition . most panel studies use propensity models or weighting class approaches to correct for non-random dropout . these models draw on variables measured in a previous wave or from paradata of the study . while it is plausible that they affect contactability and cooperativeness , panel studies usually can not assess the impact of events between waves on attrition . the amount of change in the population could be seriously underestimated if such events had an effect on participation in subsequent waves . the panel study pass is a novel dataset for labour market and poverty research . in pass , survey data on ( un ) employment histories , income and education of participants are linked to corresponding data from respondents ' administrative records . thus , change can be observed for attritors as well as for continued participants . these data are used to show that change in household composition , employment status or receipt of benefits has an influence on contact and cooperation rates in the following wave . a large part of the effect is due to lower contactability of households who moved . nevertheless , this effect can lead to story_separator_special_tag panel attrition is a process producing data absent from panel records due to survey non-participation or other data unavailability . i examine the nature and causes of attrition resulting from non-contact and survey refusal in the british household panel study .
focusing on non-response transitions amongst wave 1 respondents using discrete time transition models , i locate attrition at first non-response over the first 14 waves . physical impediments to contact , less time spent at home and high likelihood of geographic mobility are predictive of subsequent non-contact . refusals most often result from lack of interest in the survey and general low motivation to participate . story_separator_special_tag this study examines attrition in the swiss household panel ( psm ) survey . the question is whether attrition differs according to demographic characteristics and social integration . regular respondents are compared with irregular respondents as well as with those who drop out . it emerges that loyal respondents tend to be female , older , married , better educated and homeowners . they are also better integrated in society and in better health . demographic characteristics and social integration have an independent effect on response patterns , inducing a slight bias . people who renew their participation in the psm have characteristics similar to respondents who dropped out , which reduces response biases . finally , the implications of attrition for the use of the data are discussed . story_separator_special_tag this study examines the importance of change in characteristics and circumstances of households and household members for contact and cooperation patterns . the literature suggests that there might be an underrepresentation of change in panel studies , because respondents facing more changes would be more likely to drop out . we approach this problem by analysing whether previous changes are predictive of later attrition or temporary drop-out , using eleven waves of the swiss household panel ( 1999–2009 ) . our analyses support previous findings to some extent . changes in household composition , employment status and social involvement as well as moving are associated mainly with attrition and less with temporary drop-out . these changes affect obtaining cooperation rather than obtaining contact , and tend to increase attrition . story_separator_special_tag this chapter examines the factors that influence continued participation by sample members in longitudinal surveys . it is structured into two distinct parts . first , evidence from previous research that has modeled the response process within a multivariate framework is reviewed . second , estimates of predictors of response from a new national household panel survey , the household , income and labour dynamics in australia ( hilda ) survey , are presented . following other recent treatments in the literature , the estimation model treats survey participation as involving two sequential events , contact and response . story_separator_special_tag for over 35 years , a random sample of u.s. women has responded for free to a government survey that tracks their socioeconomic development . in 2003 an experiment was run to understand if providing monetary incentives of up to $ 40 would impact participation rates . providing incentives to respondents who previously refused to participate in the last survey round significantly boosted response rates , and resulted in longer interviews and more items answered . however , providing monetary incentives to previously willing respondents showed a mixed impact on response rates , interview times , and items answered .
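the discrete time transition models mentioned in the british household panel study abstract above are commonly fit as logistic regressions on person-wave data . the following is a hedged sketch under invented inputs , not that study 's actual specification : the data frame and the variable names ( wave , moved , interest ) are hypothetical placeholders .

# Hedged sketch of a discrete-time hazard model of panel attrition:
# each row is one person-wave at risk; `dropout` is 1 in the wave a
# respondent first fails to respond. Variable names are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 2000
person_waves = pd.DataFrame({
    "wave": rng.integers(1, 15, size=n),      # waves 1-14
    "moved": rng.integers(0, 2, size=n),      # moved since last wave
    "interest": rng.integers(0, 2, size=n),   # expressed interest in survey
})
logit_p = -2.0 + 0.8 * person_waves["moved"] - 0.6 * person_waves["interest"]
person_waves["dropout"] = rng.binomial(1, 1 / (1 + np.exp(-logit_p)))

# Discrete-time hazard: logit of first non-response, with a wave trend.
model = smf.logit("dropout ~ wave + moved + interest", data=person_waves).fit()
print(model.summary())

the fitted coefficients are log-odds of first non-response per person-wave ; positive values indicate a higher attrition hazard .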
story_separator_special_tag this paper is the first to use economic field experiments , with 267 children aged 8 to 11 from the third to fifth grades of a rural primary school in central china as subjects , to study the impact of success and failure in a competition on children 's follow-up goal-setting and competition entry decision-making . the study finds that after receiving feedback on competition results , winning children will set a goal that is lower than their own ability , and losers will set a goal that is similar to their own ability while increasing their efforts to improve their performance . ultimately losers achieve scores similar to the successful ones . in terms of the competition entry decision , success in a competition generates significant positive incentives for follow-up competition entry . children who succeed in the first round are significantly more likely to choose to compete with others in the follow-up competition and ultimately obtain higher scores . we also find that success and failure experiences have a more significant impact on girls ' goal-setting and competition entry decisions . winning girls become more conservative in subsequent rounds , such as setting a goal that is significantly lower than
a constant rebalanced portfolio is an investment strategy which keeps the same distribution of wealth among a set of stocks from period to period . recently there has been work on on-line investment strategies that are competitive with the best constant rebalanced portfolio determined in hindsight ( cover , 1991 , 1996 ; helmbold et al. , 1996 ; cover & ordentlich , 1996a , 1996b ; ordentlich & cover , 1996 ) . for the universal algorithm of cover ( cover , 1991 ) , we provide a simple analysis which naturally extends to the case of a fixed percentage transaction cost ( commission ) , answering a question raised in ( cover , 1991 ; helmbold et al. , 1996 ; cover & ordentlich , 1996a , 1996b ; ordentlich & cover , 1996 ; cover , 1996 ) . in addition , we present a simple randomized implementation that is significantly faster in practice . we conclude by explaining how these algorithms can be applied to other problems , such as combining the predictions of statistical language models , where the resulting guarantees are more striking . story_separator_special_tag a constant rebalanced portfolio is an investment strategy which keeps the same distribution of wealth among a set of stocks from day to day . there has been much work on cover 's universal algorithm , which is competitive with the best constant rebalanced portfolio determined in hindsight ( d. helmbold et al. , 1995 ; a. blum and a. kalai , 1999 ; t.m . cover and e. ordentlich , 1996 ) . while this algorithm has good performance guarantees , all known implementations are exponential in the number of stocks , restricting the number of stocks used in experiments . we present an efficient implementation of the universal algorithm that is based on non-uniform random walks that are rapidly mixing ( d. applegate and r. kannan , 1991 ) . this same implementation also works for non-financial applications of the universal algorithm , such as data compression ( t.m . cover , 1996 ) and language modeling ( a. kalai et al. , 1999 ) . story_separator_special_tag we consider the problem of sampling according to a distribution with log-concave density f over a convex body k ⊂ r^n . the sampling is done using a biased random walk and we give improved polynomial upper bounds on the time to get a sample point with distribution close to f . story_separator_special_tag we present a sequential investment algorithm , the μ-weighted universal portfolio with side information , which achieves , to first order in the exponent , the same wealth as the best side-information dependent investment strategy ( the best state-constant rebalanced portfolio ) determined in hindsight from observed market and side-information outcomes . this is an individual sequence result which shows the difference between the exponential growth rate of wealth of the best state-constant rebalanced portfolio and the universal portfolio with side information is uniformly less than ( d / ( 2n ) ) log ( n + 1 ) + ( k / n ) log 2 for every stock market and side-information sequence and for all time n . here d = k ( m - 1 ) is the number of degrees of freedom in the state-constant rebalanced portfolio with k states of side information and m stocks . the proof of this result establishes a close connection between universal investment and universal data compression . story_separator_special_tag the standard so-called experts algorithms are methods for utilizing a given set of `` experts '' to make good choices in a sequential decision-making problem .
in the standard setting of experts algorithms , the decision maker chooses repeatedly in the same `` state '' based on information about how the different experts would have performed if chosen to be followed . in this paper we seek to extend this framework by introducing state information . more precisely , we extend the framework by allowing an experts algorithm to rely on state information , namely , partial information about the cost function , which is revealed to the decision maker before the latter chooses an action . this extension is very natural in prediction problems . for illustration , an experts algorithm , which is supposed to predict whether the next day will be rainy , can be extended to predicting the same given the current temperature . we introduce new algorithms , which attain optimal performance in the new framework , and apply to more general settings than variants of regression that have been considered in the statistics literature . story_separator_special_tag convex programming involves a convex set f ⊂ r^n and a convex cost function c : f → r . the goal of convex programming is to find a point in f which minimizes c . in online convex programming , the convex set is known in advance , but in each step of some repeated optimization problem , one must select a point in f before seeing the cost function for that step . this can be used to model factory production , farm production , and many other industrial optimization problems where one is unaware of the value of the items produced until they have already been constructed . we introduce an algorithm for this domain . we also apply this algorithm to repeated games , and show that it is really a generalization of infinitesimal gradient ascent , and the results here imply that generalized infinitesimal gradient ascent ( giga ) is universally consistent . story_separator_special_tag we experimentally study on-line investment algorithms first proposed by agarwal and hazan and extended by hazan et al . which achieve almost the same wealth as the best constant-rebalanced portfolio determined in hindsight . these algorithms are the first to combine optimal logarithmic regret bounds with efficient deterministic computability . they are based on the newton method for offline optimization which , unlike previous approaches , exploits second order information . after analyzing the algorithm using the potential function introduced by agarwal and hazan , we present extensive experiments on actual financial data . these experiments confirm the theoretical advantage of our algorithms , which yield higher returns and run considerably faster than previous algorithms with optimal regret . additionally , we perform financial analysis using mean-variance calculations and the sharpe ratio . story_separator_special_tag we present a novel efficient algorithm for portfolio selection which theoretically attains two desirable properties : worst-case guarantee : the algorithm is universal in the sense that it asymptotically performs almost as well as the best constant rebalanced portfolio determined in hindsight from the realized market prices . furthermore , it attains the tightest known bounds on the regret , or the log-wealth difference relative to the best constant rebalanced portfolio . we prove that the regret of the algorithm is bounded by o ( log q ) , where q is the quadratic variation of the stock prices .
this is the first improvement upon cover 's ( 1991 ) seminal work that attains a regret bound of o ( log t ) , where t is the number of trading iterations . average-case guarantee : in the geometric brownian motion ( gbm ) model of stock prices , our algorithm attains tighter regret bounds , which are provably impossible in the worst case . hence , when the gbm model is a good approximation of the behavior of the market , the new algorithm has an advantage over previous ones , albeit retaining worst-case guarantees . we derive this algorithm story_separator_special_tag in an online decision problem , one makes a sequence of decisions without knowledge of the future . each period , one pays a cost based on the decision and observed state . we give a simple approach for doing nearly as well as the best single decision , where the best is chosen with the benefit of hindsight . a natural idea is to follow the leader , i.e . each period choose the decision which has done best so far . we show that by slightly perturbing the totals and then choosing the best decision , the expected performance is nearly as good as the best decision in hindsight . our approach , which is very much like hannan 's original game-theoretic approach from the 1950s , yields guarantees competitive with the more modern exponential weighting algorithms like weighted majority . more importantly , these follow-the-leader style algorithms extend naturally to a large class of structured online problems for which the exponential algorithms are inefficient . story_separator_special_tag in an online convex optimization problem a decision-maker makes a sequence of decisions , i.e. , chooses a sequence of points in euclidean space , from a fixed feasible set . after each point is chosen , it encounters a sequence of ( possibly unrelated ) convex cost functions . zinkevich ( icml 2003 ) introduced this framework , which models many natural repeated decision-making problems and generalizes many existing problems such as prediction from expert advice and cover 's universal portfolios . zinkevich showed that a simple online gradient descent algorithm achieves additive regret o ( √t ) , for an arbitrary sequence of t convex cost functions ( of bounded gradients ) , with respect to the best single decision in hindsight . in this paper , we give algorithms that achieve regret o ( log t ) for an arbitrary sequence of strictly convex functions ( with bounded first and second derivatives ) . this mirrors what has been done for the special cases of prediction from expert advice by kivinen and warmuth ( eurocolt 1999 ) , and universal portfolios by cover ( math . finance 1:1 --
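the online gradient descent algorithm of zinkevich referenced above is compact enough to sketch . the code below is a minimal illustration , not an implementation from any of the cited papers : it applies projected online gradient descent to portfolio selection , taking the loss at each step to be the negative log-return over the probability simplex . the synthetic price relatives , the 1/√t step size , and the euclidean simplex projection are standard textbook choices stated here as assumptions .

# Minimal sketch of projected online gradient descent (Zinkevich-style)
# applied to portfolio selection. Price relatives and step sizes are
# illustrative assumptions, not values from the cited papers.
import numpy as np

def project_to_simplex(v):
    """Euclidean projection of v onto the probability simplex."""
    u = np.sort(v)[::-1]
    css = np.cumsum(u) - 1.0
    rho = np.nonzero(u - css / np.arange(1, len(v) + 1) > 0)[0][-1]
    theta = css[rho] / (rho + 1.0)
    return np.maximum(v - theta, 0.0)

rng = np.random.default_rng(0)
T, n = 1000, 5
X = 1.0 + 0.01 * rng.standard_normal((T, n))  # synthetic price relatives

w = np.ones(n) / n          # start with the uniform portfolio
wealth = 1.0
for t, x in enumerate(X, start=1):
    wealth *= w @ x                 # realized one-period return
    grad = -x / (w @ x)             # gradient of the loss -log(w . x)
    eta = 1.0 / np.sqrt(t)          # step size giving O(sqrt(T)) regret
    w = project_to_simplex(w - eta * grad)

print("final wealth:", wealth)

with strictly convex losses such as this one , the logarithmic-regret algorithms discussed in the surrounding abstracts replace the 1/√t step size with curvature-adaptive updates .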
some robots can interact with humans using natural language , and identify service requests through human-robot dialog . however , few robots are able to improve their language capabilities from this experience . in this paper , we develop a dialog agent for robots that is able to interpret user commands using a semantic parser , while asking clarification questions using a probabilistic dialog manager . this dialog agent is able to augment its knowledge base and improve its language capabilities by learning from dialog experiences , e.g. , adding new entities and learning new ways of referring to existing entities . we have extensively evaluated our dialog system in simulation as well as with human participants through mturk and real-robot platforms . we demonstrate that our dialog agent performs better in efficiency and accuracy in comparison to baseline learning agents . demo video can be found at https://youtu.be/dfb3jbhbqye story_separator_special_tag robots frequently face complex tasks that require more than one action , where sequential decision-making ( sdm ) capabilities become necessary . the key contribution of this work is a robot sdm framework , called lcorpp , that supports the simultaneous capabilities of supervised learning for passive state estimation , automated reasoning with declarative human knowledge , and planning under uncertainty toward achieving long-term goals . in particular , we use a hybrid reasoning paradigm to refine the state estimator , and provide informative priors for the probabilistic planner . in experiments , a mobile robot is tasked with estimating human intentions using their motion trajectories , declarative contextual knowledge , and human-robot interaction ( dialog-based and motion-based ) . results suggest that , in efficiency and accuracy , our framework performs better than its no-learning and no-reasoning counterparts in an office environment . story_separator_special_tag contents : preface . j.r. anderson , c. lebiere , introduction . j.r. anderson , c. lebiere , knowledge representation . j.r. anderson , c. lebiere , m. lovett , performance . j.r. anderson , c. lebiere , learning . j.r. anderson , m. matessa , c. lebiere , visual interface . m.d . byrne , j.r. anderson , perception and action . j.r. anderson , d. bothell , c. lebiere , m. matessa , list memory . m. lovett , choice . c. lebiere , j.r. anderson , cognitive arithmetic . d.d . salvucci , j.r. anderson , analogy . c.d . schunn , j.r. anderson , scientific discovery . j.r. anderson , c. lebiere , reflections . story_separator_special_tag the long-anticipated revision of this # 1 selling book offers the most comprehensive , state-of-the-art introduction to the theory and practice of artificial intelligence for modern applications . intelligent agents . solving problems by searching . informed search methods . game playing . agents that reason logically . first-order logic . building a knowledge base . inference in first-order logic . logical reasoning systems . practical planning . planning and acting . uncertainty . probabilistic reasoning systems . making simple decisions . making complex decisions . learning from observations . learning with neural networks . reinforcement learning . knowledge in learning . agents that communicate . practical communication in english . perception . robotics . for computer professionals , linguists , and cognitive scientists interested in artificial intelligence .
story_separator_special_tag this paper focuses on the investigation and improvement of the knowledge representation language p-log that allows for both logical and probabilistic reasoning . we refine the definition of the language by eliminating some ambiguities and incidental decisions made in its original version and slightly modify the formal semantics to better match the intuitive meaning of the language constructs . we also define a new class of coherent ( i.e. , logically and probabilistically consistent ) p-log programs which facilitates their construction and proofs of correctness . there are a query answering algorithm , sound for programs from this class , and a prototype implementation which , due to their size , are not included in the paper . they , however , can be found in the dissertation of the first author . story_separator_special_tag life expectancy in sweden is high and the country performs well in comparisons related to disease-oriented indicators of health service outcomes and quality of care . the swedish health system is committed to ensuring the health of all citizens and abides by the principles of human dignity , need and solidarity , and cost-effectiveness . the state is responsible for overall health policy , while the funding and provision of services lies largely with the county councils and regions . the municipalities are responsible for the care of older and disabled people . the majority of primary care centres and almost all hospitals are owned by the county councils . health care expenditure is mainly tax funded ( 80 % ) and is equivalent to 9.9 % of gross domestic product ( gdp ) ( 2009 ) . only about 4 % of the population has voluntary health insurance ( vhi ) . user charges fund about 17 % of health expenditure and are levied on visits to professionals , hospitalization and medicines . the number of acute care hospital beds is below the european union ( eu ) average and sweden allocates more human resources to the health sector story_separator_special_tag deeper neural networks are more difficult to train . we present a residual learning framework to ease the training of networks that are substantially deeper than those used previously . we explicitly reformulate the layers as learning residual functions with reference to the layer inputs , instead of learning unreferenced functions . we provide comprehensive empirical evidence showing that these residual networks are easier to optimize , and can gain accuracy from considerably increased depth . on the imagenet dataset we evaluate residual nets with a depth of up to 152 layers , 8x deeper than vgg nets but still having lower complexity . an ensemble of these residual nets achieves 3.57 % error on the imagenet test set . this result won 1st place on the ilsvrc 2015 classification task . we also present analysis on cifar-10 with 100 and 1000 layers . the depth of representations is of central importance for many visual recognition tasks . solely due to our extremely deep representations , we obtain a 28 % relative improvement on the coco object detection dataset . deep residual nets are foundations of our submissions to ilsvrc & coco 2015 competitions , where we also won story_separator_special_tag several methods have been previously used to approximate free boundaries in finite-difference numerical simulations . a simple , but powerful , method is described that is based on the concept of a fractional volume of fluid ( vof ) .
this method is shown to be more flexible and efficient than other methods for treating complicated free boundary configurations . to illustrate the method , a description is given for an incompressible hydrodynamics code , sola-vof , that uses the vof technique to track free fluid surfaces . story_separator_special_tag in reinforcement learning ( rl ) , an agent is guided by the rewards it receives from the reward function . unfortunately , it may take many interactions with the environment to learn from sparse rewards , and it can be challenging to specify reward functions that reflect complex reward-worthy behavior . we propose using reward machines ( rms ) , which are automata-based representations that expose reward function structure , as a normal form representation for reward functions . we show how specifications of reward in various formal languages , including ltl and other regular languages , can be automatically translated into rms , easing the burden of complex reward function specification . we then show how the exposed structure of the reward function can be exploited by tailored q-learning algorithms and automated reward shaping techniques in order to improve the sample efficiency of reinforcement learning methods . experiments show that these rm-tailored techniques significantly outperform state-of-the-art ( deep ) rl algorithms , solving problems that otherwise cannot reasonably be solved by existing approaches . story_separator_special_tag on the basis of demonstration experience gained through demonstration learning , this paper proposes an interactive robot learning system , which uses wearable sensors that can detect surface electromyography signals ( semg ) and inertial information . gesture recognition and trajectory calculation are the main processes in our system . robot grasping and other hand actions can be marked and controlled through human gesture recognition . a comparison of 4 groups of feature extraction projects and multiple kernel relevance vector machine ( mkrvm ) based on multiple kernel expansion via kernel alignment was used to get better recognition performance . after hand trajectory estimation , the operator 's trajectory data can be encoded by a gaussian mixture model ( gmm ) and then generalized to deal with new situations by gaussian mixture regression ( gmr ) . to ensure the success of the crawling operation , the reinforcement learning algorithm is used to correct the grab attitude and position . this paper proposes a kind of q-learning algorithm based on statistical coding parameters in demonstration learning named gm_ql . by the introduction of the demonstration programming experience , as a priori knowledge of the crawling operation , story_separator_special_tag in partially observed environments , it can be useful for a human to provide the robot with declarative information that represents probabilistic relational constraints on properties of objects in the world , augmenting the robot 's sensory observations . for instance , a robot tasked with a search-and-rescue mission may be informed by the human that two victims are probably in the same room . an important question arises : how should we represent the robot 's internal knowledge so that this information is correctly processed and combined with raw sensory information ?
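to make the reward-machine entry above concrete , here is a minimal sketch of an automaton that emits reward on its transitions ; the states , events , and the two-step task are invented for illustration , not taken from the paper 's benchmarks .

```python
class RewardMachine:
    """A finite automaton over propositional events; transitions emit reward."""

    def __init__(self, transitions, initial_state):
        # transitions: {(state, event): (next_state, reward)}
        self.transitions = transitions
        self.state = initial_state

    def step(self, event):
        """Advance on an observed event and return the emitted reward."""
        self.state, reward = self.transitions.get(
            (self.state, event), (self.state, 0.0))  # unknown event: stay put
        return reward

# "reach the coffee machine, then the office": reward only on completion.
rm = RewardMachine({("u0", "at_coffee"): ("u1", 0.0),
                    ("u1", "at_office"): ("u2", 1.0)},
                   initial_state="u0")
```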
in this paper , we provide an efficient belief state representation that dynamically selects an appropriate factoring , combining aspects of the belief when they are correlated through information and separating them when they are not . this strategy works in open domains , in which the set of possible objects is not known in advance , and provides significant improvements in inference time over a static factoring , leading to more efficient planning for complex partially observed tasks . we validate our approach experimentally in two open-domain planning problems : a 2d discrete gridworld task and a 3d continuous cooking task . a supplementary video can story_separator_special_tag in 1978 , the acm special interest group on programming languages ( sigplan ) sponsored a conference on the history of programming languages ( hopl ) . papers were prepared and presentations made at a conference in los angeles , california . the program committee selected thirteen languages that met the criteria of having been in use for at least 10 years , had significant influence , and were still in use . the languages were : algol , apl , apt , basic , cobol , fortran , gpss , joss , jovial , lisp , pl/i , simula , and snobol . the results of that conference were recorded in history of programming languages , edited by richard l. wexelblat [ new york : academic press , 1981 ] . the second acm sigplan history of programming languages conference ( hopl-ii ) took place on april 20-23 , 1993 in cambridge , massachusetts . the papers prepared for that conference form the basis of this present volume , along with the transcripts of the presentations , a keynote address `` language design as design '' by fred brooks , a discussion of the period between hopl and hopl-ii by jean story_separator_special_tag we introduce dtproblog , a decision-theoretic extension of prolog and its probabilistic variant problog . dtproblog is a simple but expressive probabilistic programming language that allows the modeling of a wide variety of domains , such as viral marketing . in dtproblog , the utility of a strategy ( a particular choice of actions ) is defined as the expected reward for its execution in the presence of probabilistic effects . the key contribution of this paper is the introduction of exact , as well as approximate , solvers to compute the optimal strategy for a dtproblog program and the decision problem it represents , by making use of binary and algebraic decision diagrams . we also report on experimental results that show the effectiveness and the practical usefulness of the approach . story_separator_special_tag starcraft : broodwar ( sc : bw ) is a very popular commercial real-time strategy game ( rts ) which has been extensively used in ai research . despite being a popular test-bed , reinforcement learning ( rl ) has not been evaluated extensively . a successful attempt was made to show the use of rl in a small-scale combat scenario involving an overpowered agent battling against multiple enemy units [ 1 ] . however , the chosen scenario was very small and not representative of the complexity of the game in its entirety . in order to build an rl agent that can manage the complexity of the full game , more efficient approaches must be used to tackle the state-space explosion .
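the entry continues below with plan-based reward shaping for exactly this setting ; before it , here is a hedged sketch of the underlying potential-based update , where the potential is a planner-supplied progress measure . the function names and hyperparameters are assumptions for illustration , and potential-based shaping of this form is known to leave the optimal policy unchanged ( ng et al . , 1999 ) .

```python
import numpy as np

def shaped_q_update(Q, s, a, r, s_next, plan_progress, gamma=0.99, alpha=0.1):
    """One tabular Q-learning step with potential-based reward shaping.

    plan_progress(s) is the potential: e.g. how far through a high-level
    plan the state s is, as judged by a symbolic planner.
    """
    F = gamma * plan_progress(s_next) - plan_progress(s)  # shaping bonus
    td_target = r + F + gamma * np.max(Q[s_next])
    Q[s, a] += alpha * (td_target - Q[s, a])
```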
in this paper , we demonstrate how plan-based reward shaping can help an agent scale up to larger , more complex scenarios and significantly speed up the learning process as well as how high-level planning can be combined with learning , focusing on learning the starcraft strategy , battlecruiser rush . we empirically show that the agent with plan-based reward shaping is significantly better in terms of both the learnt policy and convergence speed story_separator_special_tag the history of learning for control has been an exciting back and forth between two broad classes of algorithms : planning and reinforcement learning . planning algorithms effectively reason over long horizons , but assume access to a local policy and distance metric over collision-free paths . reinforcement learning excels at learning policies and relative values of states , but fails to plan over long horizons . despite the successes of each method on various tasks , long horizon , sparse reward tasks with high-dimensional observations remain exceedingly challenging for both planning and reinforcement learning algorithms . frustratingly , these sorts of tasks are potentially the most useful , as they are simple to design ( a human only needs to provide an example goal state ) and avoid injecting bias through reward shaping . we introduce a general-purpose control algorithm that combines the strengths of planning and reinforcement learning to effectively solve these tasks . our main idea is to decompose the task of reaching a distant goal state into a sequence of easier tasks , each of which corresponds to reaching a particular subgoal . we use goal-conditioned rl to learn a policy to reach each waypoint and story_separator_special_tag creativity is another aspect of dealing with novelty . what is creativity ? your first thought may be that it is something different , thinking outside of the box . certainly , this is an important part of creativity . but psychologists view creativity as being a little more complex than just being something new . creativity is traditionally defined as something that is novel , good , and appropriate for the task . if you are asked a math question on a midterm and you draw a picture of an elephant , then this is doing something in a different way but not necessarily a creative way ( kaufman & sternberg , 2007 ) . research suggests that , to a large extent , people can become creative if they decide that is what they want to do ( chen , kasof , himsel , dmitrieva , dong , & xue , 2005 ) . creativity can occur anywhere . consider , for example , the following story : a politician and his wife decide to eat dinner in a fancy french restaurant in washington , dc . the waiter approaches their table and asks the wife what she would story_separator_special_tag probabilistic logic programs are logic programs in which some of the facts are annotated with probabilities . this paper investigates how classical inference and learning tasks known from the graphical model community can be tackled for probabilistic logic programs . several such tasks such as computing the marginals given evidence and learning from ( partial ) interpretations have not really been addressed for probabilistic logic programs before . the first contribution of this paper is a suite of efficient algorithms for various inference tasks . it is based on a conversion of the program and the queries and evidence to a weighted boolean formula .
this allows us to reduce the inference tasks to well-studied tasks such as weighted model counting , which can be solved using state-of-the-art methods known from the graphical model and knowledge compilation literature . the second contribution is an algorithm for parameter estimation in the learning from interpretations setting . the algorithm employs expectation maximization , and is built on top of the developed inference algorithms . the proposed approach is experimentally evaluated . the results show that the inference algorithms improve upon the state-of-the-art in probabilistic logic programming and that it is indeed possible story_separator_special_tag we describe a new problem solver called strips that attempts to find a sequence of operators in a space of world models to transform a given initial world model into a model in which a given goal formula can be proven to be true . strips represents a world model as an arbitrary collection of first-order predicate calculus formulas and is designed to work with models consisting of large numbers of formulas . it employs a resolution theorem prover to answer questions of particular models and uses means-ends analysis to guide it to the desired goal-satisfying model . story_separator_special_tag in this work we present isa , a novel approach for learning and exploiting subgoals in reinforcement learning ( rl ) . our method relies on inducing an automaton whose transitions are subgoals expressed as propositional formulas over a set of observable events . a state-of-the-art inductive logic programming system is used to learn the automaton from observation traces perceived by the rl agent . the reinforcement learning and automaton learning processes are interleaved : a new refined automaton is learned whenever the rl agent generates a trace not recognized by the current automaton . we evaluate isa in several gridworld problems and show that it performs similarly to a method for which automata are given in advance . we also show that the learned automata can be exploited to speed up convergence through reward shaping and transfer learning across multiple tasks . finally , we analyze the running time and the number of traces that isa needs to learn an automaton , and the impact that the number of observable events has on the learner 's performance . story_separator_special_tag deep reinforcement learning ( drl ) brings the power of deep neural networks to bear on the generic task of trial-and-error learning , and its effectiveness has been convincingly demonstrated on tasks such as atari video games and the game of go . however , contemporary drl systems inherit a number of shortcomings from the current generation of deep learning techniques . for example , they require very large datasets to work effectively , entailing that they are slow to learn even when such datasets are available . moreover , they lack the ability to reason on an abstract level , which makes it difficult to implement high-level cognitive functions such as transfer learning , analogical reasoning , and hypothesis-based reasoning . finally , their operation is largely opaque to humans , rendering them unsuitable for domains in which verifiability is important . in this paper , we propose an end-to-end reinforcement learning architecture comprising a neural back end and a symbolic front end with the potential to overcome each of these shortcomings .
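the weighted-model-counting reduction described in the probabilistic-logic-programming entry above can be made concrete with a toy enumerator ; real systems compile the formula ( e.g. to d-dnnf ) rather than enumerating worlds , so this sketch only shows the semantics , and the two-fact program is invented .

```python
from itertools import product

def weighted_model_count(weights, formula):
    """Sum the weights of all models of `formula` by brute force.

    weights: {var: probability of being true};
    formula: a predicate over a {var: bool} assignment.
    """
    total = 0.0
    variables = list(weights)
    for bits in product([False, True], repeat=len(variables)):
        world = dict(zip(variables, bits))
        if formula(world):
            w = 1.0
            for v, b in world.items():
                w *= weights[v] if b else 1.0 - weights[v]
            total += w
    return total

# P(alarm) for: 0.1::burglary. 0.2::earthquake. alarm :- burglary ; earthquake.
p = weighted_model_count({"burglary": 0.1, "earthquake": 0.2},
                         lambda w: w["burglary"] or w["earthquake"])
assert abs(p - 0.28) < 1e-12  # 1 - 0.9 * 0.8
```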
as proof-of-concept , we present a preliminary implementation of the architecture and apply it to several variants of a simple video game . we show that story_separator_special_tag lifelong machine learning ( or lifelong learning ) is an advanced machine learning paradigm that learns continuously , accumulates the knowledge learned in previous tasks , and uses it to help future learning . in the process , the learner becomes more and more knowledgeable and effective at learning . this learning ability is one of the hallmarks of human intelligence . however , the current dominant machine learning paradigm learns in isolation : given a training dataset , it runs a machine learning algorithm on the dataset to produce a model . it makes no attempt to retain the learned knowledge and use it in future learning . although this isolated learning paradigm has been very successful , it requires a large number of training examples , and is only suitable for well-defined and narrow tasks . in comparison , we humans can learn effectively with a few examples because we have accumulated so much knowledge in the past which enables us to learn with little data or effort . lifelong learning aims to achieve this capability . as statistical machine learning matures , it is time to make a major effort to break the isolated learning tradition and story_separator_special_tag interpolation is an important property of classical and many non-classical logics that has been shown to have interesting applications in computer science and ai . here we study the interpolation property for the non-monotonic system of equilibrium logic , establishing weaker or stronger forms of interpolation depending on the precise interpretation of the inference relation . these results also yield a form of interpolation for ground logic programs under the answer sets semantics . for disjunctive logic programs we also study the property of uniform interpolation that is closely related to the concept of variable forgetting . the first-order version of equilibrium logic has analogous interpolation properties whenever the collection of equilibrium models is ( first-order ) definable . since this is the case for so-called safe programs and theories , it applies to the usual situations that arise in practical answer set programming . story_separator_special_tag knowledge bases are an important resource for question answering and other tasks but often suffer from incompleteness and lack of ability to reason over their discrete entities and relationships . in this paper we introduce an expressive neural tensor network suitable for reasoning over relationships between two entities . previous work represented entities as either discrete atomic units or with a single entity vector representation . we show that performance can be improved when entities are represented as an average of their constituting word vectors . this allows sharing of statistical strength between , for instance , facts involving the `` sumatran tiger '' and `` bengal tiger . '' lastly , we demonstrate that all models improve when these word vectors are initialized with vectors learned from unsupervised large corpora . we assess the model by considering the problem of predicting additional true relations between entities given a subset of the knowledge base . our model outperforms previous models and can classify unseen relationships in wordnet and freebase with an accuracy of 86.2 % and 90.0 % , respectively .
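the entity-as-average-of-word-vectors idea from the entry above is simple to state in code ; the toy vectors and the single bilinear slice below are assumptions , since the full neural tensor network stacks several slices plus a standard layer and a nonlinearity .

```python
import numpy as np

# toy word vectors; both tiger entities will share the "tiger" vector
word_vecs = {"sumatran": np.array([0.2, 0.7]),
             "bengal":   np.array([0.3, 0.6]),
             "tiger":    np.array([0.9, 0.1])}

def entity_vec(name):
    """Represent an entity as the mean of its constituting word vectors."""
    return np.mean([word_vecs[w] for w in name.split("_")], axis=0)

e1, e2 = entity_vec("sumatran_tiger"), entity_vec("bengal_tiger")
W = np.eye(2)          # one relation-specific bilinear slice
score = e1 @ W @ e2    # higher score: relation more likely to hold
```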
story_separator_special_tag 1 introduction and overview i classical planning 2 representations for classical planning * 3 complexity of classical planning * 4 state-space planning * 5 plan-space planning ii neoclassical planning 6 planning-graph techniques * 7 propositional satisfiability techniques * 8 constraint satisfaction techniques iii heuristics and control strategies 9 heuristics in planning * 10 control rules in planning * 11 hierarchical task network planning * 12 control strategies in deductive planning iv planning with time and resources 13 time for planning * 14 temporal planning * 15 planning and resource scheduling v planning under uncertainty 16 planning based on markov decision processes * 17 planning based on model checking * 18 uncertainty with neo-classical techniques vi case studies and applications 19 space applications * 20 planning in robotics * 21 planning for manufacturability analysis * 22 emergency evacuation planning * 23 planning in the game of bridge vii conclusion 24 conclusion and other topics viii appendices a search procedures and computational complexity * b first order logic * c model checking story_separator_special_tag the architecture described in this paper encodes a theory of intentions based on the key principles of non-procrastination , persistence , and automatically limiting reasoning to relevant knowledge and observations . the architecture reasons with transition diagrams of any given domain at two different resolutions , with the fine-resolution description defined as a refinement of , and hence tightly-coupled to , a coarse-resolution description . for any given goal , nonmonotonic logical reasoning with the coarse-resolution description computes an activity , i.e. , a plan , comprising a sequence of abstract actions to be executed to achieve the goal . each abstract action is implemented as a sequence of concrete actions by automatically zooming to and reasoning with the part of the fine-resolution transition diagram relevant to the current coarse-resolution transition and the goal . each concrete action in this sequence is executed using probabilistic models of the uncertainty in sensing and actuation , and the corresponding fine-resolution outcomes are used to infer coarse-resolution observations that are added to the coarse-resolution history . the architecture 's capabilities are evaluated in the context of a simulated robot assisting humans in an office domain , on a physical robot ( baxter ) story_separator_special_tag enabling robots to learn tasks and follow instructions as easily as humans is important for many real-world robot applications . previous approaches have applied machine learning to teach the mapping from language to low dimensional symbolic representations constructed by hand , using demonstration trajectories paired with accompanying instructions . these symbolic methods lead to data-efficient learning . other methods map language directly to high-dimensional control behavior , which requires less design effort but is data-intensive . we propose to first learn symbolic abstractions from demonstration data and then map language to those learned abstractions . these symbolic abstractions can be learned with significantly less data than end-to-end approaches , and support partial behavior specification via natural language since they permit planning using traditional planners .
during training , our approach requires only a small number of demonstration trajectories paired with natural language without the use of a simulator and results in a representation capable of planning to fulfill natural language instructions specifying a goal or partial plan . we apply our approach to two domains , including a mobile manipulator , where a small number of demonstrations enable the robot to follow navigation commands like take left at the story_separator_special_tag yet another prolog ( yap ) is a prolog system originally developed in the mid-eighties that has been under almost constant development since then . this paper presents the general structure and design of the yap system , focusing on three important contributions to the logic programming community . first , it describes the main techniques used in yap to achieve an efficient prolog engine . second , most logic programming systems have a rather limited indexing algorithm . yap contributes to this area by providing a dynamic indexing mechanism , or just-in-time indexer . third , an important contribution of the yap system has been the integration of both or-parallelism and tabling in a single logic programming system . story_separator_special_tag one of the major difficulties in applying q-learning to real-world domains is the sharp increase in the number of learning steps required to converge towards an optimal policy as the size of the state space is increased . in this paper we propose a method , planq-learning , that couples a q-learner with a strips planner . the planner shapes the reward function , and thus guides the q-learner quickly to the optimal policy . we demonstrate empirically that this combination of high-level reasoning and low-level learning displays significant improvements in scaling-up behaviour as the state-space grows larger , compared to both standard q-learning and hierarchical q-learning methods . story_separator_special_tag in order to deal with uncertainty intelligently , we need to be able to represent it and reason about it . in this book , joseph halpern examines formal ways of representing uncertainty and considers various logics for reasoning about it . while the ideas presented are formalized in terms of definitions and theorems , the emphasis is on the philosophy of representing and reasoning about uncertainty . halpern surveys possible formal systems for representing uncertainty , including probability measures , possibility measures , and plausibility measures ; considers the updating of beliefs based on changing information and the relation to bayes ' theorem ; and discusses qualitative , quantitative , and plausibilistic bayesian networks . this second edition has been updated to reflect halpern 's recent research . new material includes a consideration of weighted probability measures and how they can be used in decision making ; analyses of the doomsday argument and the sleeping beauty problem ; modeling games with imperfect recall using the runs-and-systems approach ; a discussion of complexity-theoretic considerations ; the application of first-order conditional logic to security . reasoning about uncertainty is accessible and relevant to researchers and students in many fields , including story_separator_special_tag pct no . pct/ep93/01894 sec . 371 date jan. 25 , 1995 sec . 102 ( e ) date jan. 25 , 1995 pct filed jul . 17 , 1993 pct pub . no . wo94/02128 pct pub . date feb.
3 , 1994 . compounds of alpha , omega-dicarboxylic acids of the formula i ( i ) in which x and y , which can be the same or different , signify hydrogen , halogen , c1-c6-alkyl , c1-c6-alkoxy , hydroxyl , cyano , carboxyl , c1-c6-alkoxycarbonyl or carbamoyl , r1 and r2 , which can be the same or different , hydrogen or c1-c6-alkyl and q a linear saturated or unsaturated alkylene chain with 2-14 c-atoms in which one or more c-atoms can be replaced by cyclohexyl rings , phenyl or heterocycles , as well as of their in vivo-hydrolysable carboxylic acid derivatives for the preparation of medicaments with fibrinogen-lowering action . story_separator_special_tag in recent years , the combination of artificial intelligence and all walks of life has gradually become a social hot spot . the in-depth integration of artificial intelligence technology and education has had a profound impact on the traditional educational concept , educational system and teaching mode , and has become a key issue in china for some time to come . in this paper , the core journals in the field of artificial intelligence education in china in the past 30 years are statistically studied . this paper sorts out its publications , research institutions , subject distribution , research levels , fund projects , highly cited papers and high-yield authors in detail . the research status and hot spots in the main fields of artificial intelligence education are summarized and discussed , and the future research trends are considered , in order to provide a reference for follow-up research . story_separator_special_tag in many environments , robots have to handle partial observations , occlusions , and uncertainty . in this kind of setting , a partially observable markov decision process ( pomdp ) is the method of choice for planning actions . however , especially in the presence of non-expert users , there are still open challenges preventing mass deployment of pomdps in human environments . to this end , we present a novel approach that addresses both incorporating user objectives during task specification and asking humans for specific information during task execution ; allowing for mutual information exchange . in pomdps , the standard way of using a reward function to specify the task is challenging for experts and even more demanding for non-experts . we present a new pomdp algorithm that maximizes the probability of task success defined in the form of intuitive logic sentences . moreover , we introduce the use of targeted queries in the pomdp model , through which the robot can request specific information . in contrast , most previous approaches rely on asking for full state information which can be cumbersome for users . compared to previous approaches , our approach is applicable to large state story_separator_special_tag we consider the problem of constructing abstract representations for planning in high-dimensional , continuous environments . we assume an agent equipped with a collection of high-level actions , and construct representations provably capable of evaluating plans composed of sequences of those actions . we first consider the deterministic planning case , and show that the relevant computation involves set operations performed over sets of states . we define the specific collection of sets that is necessary and sufficient for planning , and use them to construct a grounded abstract symbolic representation that is provably suitable for deterministic planning .
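the set operations over states just described can be illustrated on a finite toy domain ; the dictionaries encoding deterministic actions and the three-state chain below are invented for the sketch . a plan is evaluable exactly when the start state lies in the goal 's preimage under the composed actions .

```python
def preimage(target, action):
    """States from which `action` (a dict s -> s') lands inside `target`."""
    return {s for s, s2 in action.items() if s2 in target}

def plan_preimage(goal, plan):
    """Regress a goal set backwards through a sequence of actions."""
    target = set(goal)
    for action in reversed(plan):
        target = preimage(target, action)
    return target

move = {"A": "B", "B": "C"}                          # deterministic toy action
assert plan_preimage({"C"}, [move, move]) == {"A"}   # plan succeeds only from A
```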
the resulting representation can be expressed in pddl , a canonical high-level planning domain language ; we construct such a representation for the playroom domain and solve it in milliseconds using an off-the-shelf planner . we then consider probabilistic planning , which we show requires generalizing from sets of states to distributions over states . we identify the specific distributions required for planning , and use them to construct a grounded abstract symbolic representation that correctly estimates the expected reward and probability of success of any plan . in addition , we show that learning the relevant probability distributions corresponds to story_separator_special_tag in development for thirty years , soar is a general cognitive architecture that integrates knowledge-intensive reasoning , reactive execution , hierarchical reasoning , planning , and learning from experience , with the goal of creating a general computational system that has the same cognitive abilities as humans . in contrast , most ai systems are designed to solve only one type of problem , such as playing chess , searching the internet , or scheduling aircraft departures . soar is both a software system for agent development and a theory of what computational structures are necessary to support human-level agents . over the years , both software system and theory have evolved . this book offers the definitive presentation of soar from theoretical and practical perspectives , providing comprehensive descriptions of fundamental aspects and new components . the current version of soar features major extensions , adding reinforcement learning , semantic memory , episodic memory , mental imagery , and an appraisal-based model of emotion . this book describes details of soar 's component memories and processes and offers demonstrations of individual components , components working in combination , and real-world applications . beyond these functional considerations , the book story_separator_special_tag in this paper we describe icarus , a cognitive architecture for physical agents that integrates ideas from a number of traditions , but that has been especially influenced by results from cognitive psychology . we review icarus ' commitments to memories and representations , then present its basic processes for performance and learning . we illustrate the architecture 's behavior on a task from in-city driving that requires interaction among its various components . in addition , we discuss icarus ' consistency with qualitative findings about the nature of human cognition . in closing , we consider the framework 's relation to other cognitive architectures that have been proposed in the literature . story_separator_special_tag the idaho national laboratory ( inl ) is funded through the department of energy ( doe ) office of nuclear energy and other customers who have direct contracts with the laboratory . the people , equipment , facilities and other infrastructure at the laboratory require continual investment to maintain and improve the laboratory 's capabilities . with ever-tightening federal and customer budgets , the ability to direct investments into the people , equipment , facilities and other infrastructure which are most closely aligned with the laboratory 's mission and customers ' goals grows increasingly more important . the ability to justify those investment decisions based on objective criteria that can withstand political , managerial and technical criticism also becomes increasingly more important .
the systems engineering tools of decision analysis , risk management and roadmapping , when properly applied to such problems , can provide defensible decisions . story_separator_special_tag deep reinforcement learning ( drl ) has gained great success by learning directly from high-dimensional sensory inputs , yet is notorious for the lack of interpretability . interpretability of the subtasks is critical in hierarchical decision-making as it increases the transparency of the black-box-style drl approach and helps the rl practitioners to understand the high-level behavior of the system better . in this paper , we introduce symbolic planning into drl and propose a framework of symbolic deep reinforcement learning ( sdrl ) that can handle both high-dimensional sensory inputs and symbolic planning . the task-level interpretability is enabled by relating symbolic actions to options . this framework features a planner-controller-meta-controller architecture , which takes charge of subtask scheduling , data-driven subtask learning , and subtask evaluation , respectively . the three components cross-fertilize each other and eventually converge to an optimal symbolic plan along with the learned subtasks , bringing together the advantages of long-term planning capability with symbolic knowledge and end-to-end reinforcement learning directly from a high-dimensional sensory input . experimental results validate the interpretability of subtasks , along with improved data efficiency compared with state-of-the-art approaches . story_separator_special_tag david wilkins and shelly hulse wilkins presented dismembered ( seattle : university of washington press , 2017 ) on friday , october 6 , 2017 at universite paris-diderot at the invitation of marine le puloch , a senior lecturer and specialist in indigenous issues in the americas . their book , devoted to the recent waves of disenrollment that have decimated several tribal groups in the united states , is the first academic work devoted to a phenomenon that has captivated the american media . story_separator_special_tag the ability to specify a task without having to write special software is an important and prominent feature for a mobile service robot deployed in a crowded office environment , working around and interacting with people . in this paper , we contribute an interactive approach for enabling the users to instruct tasks to a mobile service robot through verbal commands . the input is given as typed or spoken instructions , which are then mapped to the available sensing and actuation primitives on the robot . the main contributions of this work are the addition of conditionals on sensory information that allow the specified actions to be executed in a closed-loop manner , and a correction mode that allows an existing task to be modified or corrected at a later time by providing a replacement action during the test execution . we describe all the components of our approach along with the implementation details and illustrative examples in depth . we also discuss the extensibility of the presented approach , and point out potential future extensions . story_separator_special_tag we introduce blog , a formal language for defining probability models with unknown objects and identity uncertainty . a blog model describes a generative process in which some steps add objects to the world , and others determine attributes and relations on these objects .
subject to certain acyclicity constraints , a blog model specifies a unique probability distribution over first-order model structures that can contain varying and unbounded numbers of objects . furthermore , inference algorithms exist for a large class of blog models . story_separator_special_tag state estimation is the task of estimating the state of a partially observable dynamical system given a sequence of executed actions and observations . in logical settings , state estimation can be realized via logical filtering , which is exact but can be intractable . we propose logical smoothing , a form of backwards reasoning that works in concert with approximated logical filtering to refine past beliefs in light of new observations . we characterize the notion of logical smoothing together with an algorithm for backwards-forwards state estimation . we also present an approximation of our smoothing algorithm that is space efficient . we prove properties of our algorithms , and experimentally demonstrate their behaviour , contrasting them with state estimation methods for planning . smoothing and backwards-forwards reasoning are important techniques for reasoning about partially observable dynamical systems , introducing the logical analogue of effective techniques from control theory and dynamic programming . story_separator_special_tag the invention discloses a belgium raspberry soil acidity biological improver and a preparation method thereof . the belgium raspberry soil acidity biological improver is prepared through using the following raw materials , by weight , 35-45 parts of pine wood chip , 15-20 parts of tobacco leaf leftover , 10-15 parts of tea leave residue , 8-12 parts of beer residue , 5-10 parts of rice bran , 3-5 parts of garlic leaf , 2-4 parts of nidus vespae , 1-2 parts of orange peel , 2-3 parts of common threewingnut root , 1-2 parts of balsam pear leaf , 1-3 parts of rhododendron , 5-10 parts of meerschaum powder , 4-7 parts of bentonite , 3-5 parts of vulcanic ash , 20-30 parts of polyvinyl alcohol , 10-15 parts of hydroxyethyl cellulose and 4-8 parts of boric acid . the soil improver can improve the ph value of soil to make the soil reach blueberry growth conditions . the soil improver contains a plurality of water retention and sterilization components , organic matters and functional microbes , so the soil improver improves soil , increases the content of organic matters in the soil , improves the water retention and story_separator_special_tag the independent choice logic ( icl ) is part of a project to combine logic and decision/game theory into a coherent framework . the icl has a simple possible-worlds semantics characterised by independent choices and an acyclic logic program that specifies the consequences of these choices . this paper gives an abductive characterization of the icl . the icl is defined model-theoretically , but we show that it is naturally abductive : the set of explanations of a proposition g is a concise description of the worlds in which g is true . we give an algorithm for computing explanations and show it is sound and complete with respect to the possible-worlds semantics . what is unique about this approach is that the explanations of the negation of g can be derived from the explanations of g . the use of probabilities over choices in this framework and going beyond acyclic logic programs are also discussed .
story_separator_special_tag machine learning 's focus on ill-defined problems and highly flexible methods makes it ideally suited for knowledge discovery in databases ( kdd ) applications . among the ideas machine learning contributes to kdd are the importance of empirical validation , the impossibility of learning without a priori assumptions , and the utility of limited-search or limited-representation methods . machine learning provides methods for incorporating knowledge into the learning process , changing and combining representations , combatting the curse of dimensionality , and learning comprehensible models . kdd challenges for machine learning include scaling up its algorithms to large databases , using cost information in learning , automating data preprocessing , and enabling rapid development of applications . kdd opens up new directions for machine-learning research and brings new urgency to others . these directions include interfacing with the human user and the database system , learning from nonattribute-vector data , learning partial models , and learning continuously from an open-ended stream of data . story_separator_special_tag in order for robots to intelligently perform tasks with humans , they must be able to access a broad set of background knowledge about the environments in which they operate . unlike other approaches , which tend to manually define the knowledge of the robot , our approach enables robots to actively query the world wide web ( www ) to learn background knowledge about the physical environment . we show that our approach is able to search the web to infer the probability that an object , such as a `` coffee , '' can be found in a location , such as a `` kitchen . '' our approach , called objecteval , is able to dynamically instantiate a utility function using this probability , enabling robots to find arbitrary objects in indoor environments . our experimental results show that the interactive version of objecteval visits 28 % fewer locations than the version trained offline and 71 % fewer locations than a baseline approach which uses no background knowledge . story_separator_special_tag partially-observable markov decision processes ( pomdps ) provide a powerful model for sequential decision-making problems with partially-observed state and are known to have ( approximately ) optimal dynamic programming solutions . much work in recent years has focused on improving the efficiency of these dynamic programming algorithms by exploiting symmetries and factored or relational representations . in this work , we show that it is also possible to exploit the full expressive power of first-order quantification to achieve state , action , and observation abstraction in a dynamic programming solution to relationally specified pomdps . among the advantages of this approach are the ability to maintain compact value function representations , abstract over the space of potentially optimal actions , and automatically derive compact conditional policy trees that minimally partition relational observation spaces according to distinctions that have an impact on policy values . this is the first lifted relational pomdp solution that can optimally accommodate actions with a potentially infinite relational space of observation outcomes . story_separator_special_tag recently , there has been increasing interest in action model learning . however , most previous studies focused on learning effect-based action models .
on the other hand , a rule-based planning domain description language was proposed in the latest planning competition . that is the relational dynamic influence diagram language ( rddl ) . it uses rules to describe transitions instead of action models . in this paper , we build a system to learn planning domain descriptions in the rddl . there are three major parts of an rddl domain description : constraints , transitions and rewards . we first take advantage of the finite state machine analysis to identify constraints . then , we employ the inductive learning technique to learn transitions . finally , we use regression to fix rewards . the evaluation was performed on benchmarks from planning competitions . it showed that our system can learn domain descriptions in the rddl with low error rates . moreover , our system is developed based on classical approaches . this implies that the rddl is rooted in previous planning languages . therefore , more classical approaches could be useful in the rddl domains . story_separator_special_tag we consider cognitive factories with multiple teams of heterogeneous robots , and address two key challenges of these domains , hybrid reasoning for each team and finding an optimal global plan ( with minimum makespan ) for multiple teams . for hybrid reasoning , we propose modeling each team 's workspace taking into account capabilities of heterogeneous robots , embedding continuous external computations into discrete symbolic representation and reasoning , not only optimizing the makespans of local plans but also minimizing the total cost of robotic actions . to find an optimal global plan , we propose a semi-distributed approach that does not require exchange of information between teams and yet achieves an optimal coordination of teams that can help each other . we prove that the optimal coordination problem is np-complete , and describe a solution using automated reasoners . we experimentally evaluate our methods , and show their applications on a cognitive factory with dynamic simulations and a physical implementation . story_separator_special_tag autonomous robots are intelligent machines capable of performing tasks in the world by themselves , without explicit human control . examples range from autonomous helicopters to roomba , the robot vacuum cleaner . in this book , george bekey offers an introduction to the science and practice of autonomous robots that can be used both in the classroom and as a reference for industry professionals . he surveys the hardware implementations of more than 300 current systems , reviews some of their application areas , and examines the underlying technology , including control , architectures , learning , manipulation , grasping , navigation , and mapping . living systems can be considered the prototypes of autonomous systems , and bekey explores the biological inspiration that forms the basis of many recent developments in robotics . he also discusses robot control issues and the design of control architectures . after an overview of the field that introduces some of its fundamental concepts , the book presents background material on hardware , control ( from both biological and engineering perspectives ) , software architecture , and robot intelligence .
it then examines a broad range of implementations and applications , including locomotion story_separator_special_tag we propose a new family of policy gradient methods for reinforcement learning , which alternate between sampling data through interaction with the environment , and optimizing a `` surrogate '' objective function using stochastic gradient ascent . whereas standard policy gradient methods perform one gradient update per data sample , we propose a novel objective function that enables multiple epochs of minibatch updates . the new methods , which we call proximal policy optimization ( ppo ) , have some of the benefits of trust region policy optimization ( trpo ) , but they are much simpler to implement , more general , and have better sample complexity ( empirically ) . our experiments test ppo on a collection of benchmark tasks , including simulated robotic locomotion and atari game playing , and we show that ppo outperforms other online policy gradient methods , and overall strikes a favorable balance between sample complexity , simplicity , and wall-time . story_separator_special_tag this paper describes an architecture for an agent to learn and reason about affordances . in this architecture , answer set prolog , a declarative language , is used to represent and reason with incomplete domain knowledge that includes a representation of affordances as relations defined jointly over objects and actions . reinforcement learning and decision-tree induction based on this relational representation and observations of action outcomes , are used to interactively and cumulatively ( a ) acquire knowledge of affordances of specific objects being operated upon by specific agents ; and ( b ) generalize from these specific learned instances . the capabilities of this architecture are illustrated and evaluated in two simulated domains , a variant of the classic blocks world domain , and a robot assisting humans in an office environment . story_separator_special_tag reinforcement learning , one of the most active research areas in artificial intelligence , is a computational approach to learning whereby an agent tries to maximize the total amount of reward it receives when interacting with a complex , uncertain environment .
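the clipped surrogate at the heart of the ppo entry above fits in a few lines ; epsilon = 0.2 is a common choice but an assumption here , and a full implementation would add the value loss , an entropy bonus , and minibatched gradient ascent over several epochs .

```python
import numpy as np

def ppo_clip_objective(ratio, advantage, eps=0.2):
    """PPO's clipped surrogate, to be maximized.

    ratio: pi_new(a|s) / pi_old(a|s) per sample;
    advantage: estimated advantages for the same samples.
    """
    unclipped = ratio * advantage
    clipped = np.clip(ratio, 1.0 - eps, 1.0 + eps) * advantage
    return float(np.minimum(unclipped, clipped).mean())
```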
in reinforcement learning , richard sutton and andrew barto provide a clear and simple account of the key ideas and algorithms of reinforcement learning . their discussion ranges from the history of the field 's intellectual foundations to the most recent developments and applications . the only necessary mathematical background is familiarity with elementary concepts of probability . the book is divided into three parts . part i defines the reinforcement learning problem in terms of markov decision processes . part ii provides basic solution methods : dynamic programming , monte carlo methods , and temporal-difference learning . part iii presents a unified view of the solution methods and incorporates artificial neural networks , eligibility traces , and planning ; the two final chapters present case studies and consider the future of reinforcement learning . story_separator_special_tag reinforcement learning and symbolic planning have both been used to build intelligent autonomous agents . reinforcement learning relies on learning from interactions with the real world , which often requires an unfeasibly large amount of experience . symbolic planning relies on manually crafted symbolic knowledge , which may not be robust to domain uncertainties and changes . in this paper we present a unified framework peorl that integrates symbolic planning with hierarchical reinforcement learning ( hrl ) to cope with decision-making in a dynamic environment with uncertainties . symbolic plans are used to guide the agent 's task execution and learning , and the learned experience is fed back to symbolic knowledge to improve planning . this method leads to rapid policy search and robust symbolic plans in complex domains . the framework is tested on benchmark domains of hrl . story_separator_special_tag in order to be fully robust and responsive to a dynamically changing real-world environment , intelligent robots will need to engage in a variety of simultaneous reasoning modalities . in particular , in this paper we consider their needs to i ) reason with commonsense knowledge , ii ) model their nondeterministic action outcomes and partial observability , and iii ) plan toward maximizing long-term rewards . on one hand , answer set programming ( asp ) is good at representing and reasoning with commonsense and default knowledge , but is ill-equipped to plan under probabilistic uncertainty . on the other hand , partially observable markov decision processes ( pomdps ) are strong at planning under uncertainty toward maximizing long-term rewards , but are not designed to incorporate commonsense knowledge and inference . this paper introduces the corpp algorithm which combines p-log , a probabilistic extension of asp , with pomdps to integrate commonsense reasoning with planning under uncertainty . our approach is fully implemented and tested on a shopping request identification problem both in simulation and on a real robot . compared with existing approaches using p-log or pomdps individually , we observe significant improvements in both efficiency and story_separator_special_tag one camera and one low-cost inertial measurement unit ( imu ) form a monocular visual-inertial system ( vins ) , which is the minimum sensor suite ( in size , weight , and power ) for the metric six degrees-of-freedom ( dof ) state estimation . in this paper , we present vins-mono : a robust and versatile monocular visual-inertial state estimator . our approach starts with a robust procedure for estimator initialization .
a tightly coupled , nonlinear optimization-based method is used to obtain highly accurate visual-inertial odometry by fusing preintegrated imu measurements and feature observations . a loop detection module , in combination with our tightly coupled formulation , enables relocalization with minimum computation . we additionally perform 4-dof pose graph optimization to enforce the global consistency . furthermore , the proposed system can reuse a map by saving and loading it in an efficient way . the current and previous maps can be merged together by the global pose graph optimization . we validate the performance of our system on public datasets and real-world experiments and compare against other state-of-the-art algorithms . we also perform an onboard closed-loop autonomous flight on the micro-aerial-vehicle platform and port the story_separator_special_tag to operate in human-robot coexisting environments , intelligent robots need to simultaneously reason with commonsense knowledge and plan under uncertainty . markov decision processes ( mdps ) and partially observable mdps ( pomdps ) are good at planning under uncertainty toward maximizing long-term rewards ; p-log , a declarative programming language under answer set semantics , is strong in common-sense reasoning . in this paper , we present a novel algorithm called icorpp to dynamically reason about , and construct ( po ) mdps using p-log . icorpp successfully shields exogenous domain attributes from ( po ) mdps , which limits computational complexity and enables ( po ) mdps to adapt to the value changes these attributes produce . we conduct a number of experimental trials using two example problems in simulation and demonstrate icorpp on a real robot . results show significant improvements compared to competitive baselines . story_separator_special_tag deep reinforcement learning has been successfully used in many dynamic decision making domains , especially those with very large state spaces . however , it is also well-known that deep reinforcement learning can be very slow and resource intensive . the resulting system is often brittle and difficult to explain . in this paper , we attempt to address some of these problems by proposing a framework of rule-interposing learning ( ril ) that embeds high-level rules into the deep reinforcement learning . with some good rules , this framework not only can accelerate the learning process , but also keep it away from catastrophic explorations , thus making the system relatively stable even during the very early stage of training . moreover , given the rules are high level and easy to interpret , they can be easily maintained , updated and shared with other similar tasks .
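one simple way to realize the rule interposition described in the entry above is to let rules veto actions at selection time , so exploration never enters forbidden states ; the boolean-mask interface and the epsilon-greedy wrapper are assumptions for illustration , not the paper 's exact mechanism .

```python
import numpy as np

def masked_epsilon_greedy(q_row, allowed, epsilon, rng):
    """Pick an action epsilon-greedily among those a rule permits.

    q_row: Q-values for one state; allowed: boolean mask of legal actions.
    """
    legal = np.flatnonzero(allowed)
    if rng.random() < epsilon:
        return int(rng.choice(legal))            # explore, but only legally
    return int(legal[np.argmax(q_row[legal])])   # exploit among legal actions
```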
how do members ' and leaders ' social network structures help or hinder team effectiveness ? a meta-analysis of 37 studies of teams in natural contexts suggests that teams with densely configured inte . story_separator_special_tag we propose automated sport game models as a novel technical means for the analysis of team sport games . the basic idea is that automated sport game models are based on a conceptualization of key notions in such games and probabilistically derived from a set of previous games . in contrast to existing approaches , automated sport game models provide an analysis that is sensitive to their context and go beyond simple statistical aggregations allowing objective , transparent and meaningful concept definitions . based on automatically gathered spatio-temporal data by a computer vision system , a model hierarchy is built bottom up , where context-sensitive concepts are instantiated by the application of machine learning techniques . we describe the current state of implementation of the aspogamo system including its computer vision subsystem that realizes the idea of automated sport game models . their usage is exemplified with an analysis of the final of the soccer world cup 2006 . story_separator_special_tag in terms of analyzing soccer matches , two of the most important factors to consider are : 1 ) the formation the team played ( e.g. , 4-4-2 , 4-2-3-1 , 3-5-2 etc . ) , and 2 ) the manner in which they executed it ( e.g. , conservative - sitting deep , or aggressive - pressing high ) . despite the existence of ball and player tracking data , no current methods exist which can automatically detect and visualize formations . using an entire season of prozone data which consists of ball and player tracking information from a recent top-tier professional league , we showcase an automatic formation detection method by investigating the home advantage . in a paper we published recently , using an entire season of ball tracking data we showed that home teams had significantly more possession in the forward-third which correlated with more shots and goals while the shooting and passing proficiencies were the same . using our automatic formation analysis , we extend this analysis and show that while teams tend to play the same formation at home as they do away , the manner in which they execute the formation is significantly different . specifically , we story_separator_special_tag to the trained-eye , experts can often identify a team based on their unique style of play due to their movement , passing and interactions . in this paper , we present a method which can accurately determine the identity of a team from spatiotemporal player tracking data . we do this by utilizing a formation descriptor which is found by minimizing the entropy of role-specific occupancy maps . we show how our approach is significantly better at identifying different teams compared to standard measures ( i.e. , shots , passes etc. ) . we demonstrate the utility of our approach using an entire season of prozone player tracking data from a top-tier professional soccer league .
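the formation and identity methods above all depend on aligning players to a set of roles frame by frame . a minimal sketch of that alignment step , posed as an assignment problem and solved with the hungarian method ; the 4-4-2 template and the coordinates are invented for illustration , and the published approach additionally re-estimates the role templates by minimizing occupancy-map entropy , which this sketch omits .

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def assign_roles(player_xy, role_xy):
    """match the 10 outfield players to 10 formation roles for one frame .

    player_xy , role_xy : (10 , 2) arrays of pitch coordinates .
    returns roles[i] = role index assigned to player i .
    """
    # cost[i , j] = squared distance from player i to the mean position of role j
    cost = ((player_xy[:, None, :] - role_xy[None, :, :]) ** 2).sum(axis=2)
    players, roles = linear_sum_assignment(cost)   # hungarian method , O(n^3)
    return roles[np.argsort(players)]

# toy usage : a 4-4-2 role template and one frame of shuffled , noisy positions
rng = np.random.default_rng(0)
roles_442 = np.array([[10, y] for y in range(4)]
                     + [[30, y] for y in range(4)]
                     + [[50, 1], [50, 2]], dtype=float)
frame = rng.permutation(roles_442 + rng.normal(scale=0.5, size=roles_442.shape))
print(assign_roles(frame, roles_442))
```

solving the permutation per frame is what makes the representation immune to players swapping positions , which is the core difficulty these abstracts point at .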
story_separator_special_tag although the collection of player and ball tracking data is fast becoming the norm in professional sports , large-scale mining of such spatiotemporal data has yet to surface . in this paper , given an entire season 's worth of player and ball tracking data from a professional soccer league ( ≈ 400,000,000 data points ) , we present a method which can conduct both individual player and team analysis . due to the dynamic , continuous and multi-player nature of team sports like soccer , a major issue is aligning player positions over time . we present a `` role-based '' representation that dynamically updates each player 's relative role at each frame and demonstrate how this captures the short-term context to enable both individual player and team analysis . we discover role directly from data by utilizing a minimum entropy data partitioning method and show how this can be used to accurately detect and visualize formations , as well as analyze individual player behavior . story_separator_special_tag we propose a generative model for text and other collections of discrete data that generalizes or improves on several previous models including naive bayes/unigram , mixture of unigrams [ 6 ] , and hofmann 's aspect model , also known as probabilistic latent semantic indexing ( plsi ) [ 3 ] . in the context of text modeling , our model posits that each document is generated as a mixture of topics , where the continuous-valued mixture proportions are distributed as a latent dirichlet random variable . inference and learning are carried out efficiently via variational algorithms . we present empirical results on applications of this model to problems in text modeling , collaborative filtering , and text classification . story_separator_special_tag centrality measures , or at least popular interpretations of these measures , make implicit assumptions about the manner in which traffic flows through a network . for example , some measures count only geodesic paths , apparently assuming that whatever flows through the network only moves along the shortest possible paths . this paper lays out a typology of network flows based on two dimensions of variation , namely the kinds of trajectories that traffic may follow ( geodesics , paths , trails , or walks ) and the method of spread ( broadcast , serial replication , or transfer ) . measures of centrality are then matched to the kinds of flows that they are appropriate for . simulations are used to examine the relationship between type of flow and the differential importance of nodes with respect to key measurements such as speed of reception of traffic and frequency of receiving traffic . it is shown that the off-the-shelf formulas for centrality measures are fully applicable only for the specific flow processes they are designed for , and that when they are applied to other flow processes they get the wrong answer . it is noted that the most story_separator_special_tag quantitative analysis of sports performance has been shown to produce information that coaches can use within the coaching process to enhance performance . traditional methods for quantifying sport performances are limited in their capacity to describe the complex interactions of events that occur within a performance over time . in this paper , we outline a new approach to the analysis of time-based event records and real-time behaviour records on sport performance known as t-pattern detection . the relevant elements of the t-pattern detection process are explained and exemplar data from the analysis of 13 soccer matches are presented to highlight the potential of this form of analysis .
the results from soccer suggest that it is possible to identify new profiles for both individuals and teams based on the analysis of temporal behavioural patterns detected within the performances . story_separator_special_tag abstract in this article , we examine the space-time coordination dynamics of two basketball teams during competition . we identified six game sequences at random , from which the movement data of each player were obtained for analysis of team behaviours in both the longitudinal ( basket-to-basket ) and lateral ( side-to-side ) directions . the central position of a team was measured using its spatial ( geometric ) centre and dispersion using a stretch index , obtained from the mean distance of team members from the spatial centre . relative-phase analysis of the spatial centres demonstrated in-phase stabilities in both the longitudinal and lateral directions , with more stability in the longitudinal than lateral direction . as anticipated , this finding is consistent with the results of an analysis of individual playing dyads ( see companion article , this issue ) , as well as the more general principle of complex systems conforming to similar descriptions at different levels of analysis . phase relations for the stretch index demonstrated in-phase attraction in the longitudinal direction and no attraction to any values in the lateral direction . finally , the difference between the two stretch indexes at any instant story_separator_special_tag abstract we examined space-time patterns of basketball players during competition by analysing movement data obtained from six game sequences . strong in-phase relations in the longitudinal ( basket-to-basket ) direction were observed for all playing dyads , especially player-opponent dyads matched for playing position , indicating that these movements were very constrained by the game demands . similar findings for in-phase relations were observed for the most part in the lateral direction , the main exception being dyads comprising the two wing players from the same team . these dyads instead demonstrated strong attractions to anti-phase , a consequence perhaps of seeking to increase and decrease team width in tandem . single instances from select dyads and game sequences demonstrated further evidence of phase stabilities and phase transitions on some occasions . together , these findings demonstrate that space-time movement patterns of playing dyads in basketball , while unique , nonetheless conform to a uniform description in keeping with universal principles of dynamical self-organizing systems as hypothesized . story_separator_special_tag the aim of the present study was to determine the inter-observer reliability of prozone 's matchviewer system . two groups of trained observers independently analysed an english fa premier league soc . story_separator_special_tag for each point of a road network , let there be given the number of cars starting from it , and the destination of the cars . under these conditions one wishes to estimate the distribution of traffic flow . whether one street is preferable to another depends not only on the quality of the road , but also on the density of the flow . if every driver takes the path that looks most favorable to him , the resultant running times need not be minimal . furthermore , it is indicated by an example that an extension of the road network may cause a redistribution of the traffic that results in longer individual running times .
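the road-network example alluded to here is braess 's paradox , and the textbook four-road instance can be checked with a few lines of arithmetic ( the numbers below are the standard illustration , not taken from the paper itself ) .

```python
# braess 's paradox on the textbook network : routes start->a->end and start->b->end ,
# where start->a and b->end cost x/100 minutes for x cars , and a->end , start->b cost 45 .
N = 4000

# without a shortcut , symmetry splits traffic evenly and each driver needs :
t_without = (N / 2) / 100 + 45            # 65 minutes

# add a zero-cost shortcut a -> b . for any split , start->a->b->end is never slower
# for an individual driver , so at equilibrium all N drivers take it :
t_with = N / 100 + 0 + N / 100            # 80 minutes
# ( deviating is worse : start->a->end = 4000/100 + 45 = 85 , likewise the other route )

print(t_without, t_with)   # 65.0 80.0 : the extra road lengthens everyone 's trip
```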
story_separator_special_tag abstract traditional approaches to the quantification of team sports have proved to be limited in their ability to identify complex structural regularities that , despite being unobservable , nonetheless underlie the development of the sporting contest between opposing teams . this paper describes a method for detecting the dynamics of play in professional soccer through the analysis of temporal patterns ( t-patterns ) . the observation instrument used was sof-5 , which is especially designed for studying the dynamics of the game in soccer . the recording consisted of within-session monitoring using the match vision studio 3.0 software , while the theme software was used to detect and analyse t-patterns . these t-patterns revealed regularities in the playing style of the observed team , fc barcelona . the structures detected included a ball possession pattern , whereby the ball was first kept in the central zone before being played forward , through several moves , into the zones closest to the opposing team 's goal in o . story_separator_special_tag basketball games evolve continuously in space and time as players constantly interact with their teammates , the opposing team , and the ball . however , current analyses of basketball outcomes rely on discretized summaries of the game that reduce such interactions to tallies of points , assists , and similar events . in this paper , we propose a framework for using optical player tracking data to estimate , in real time , the expected number of points obtained by the end of a possession . this quantity , called `` expected possession value '' ( epv ) , derives from a stochastic process model for the evolution of a basketball possession ; we model this process at multiple levels of resolution , differentiating between continuous , infinitesimal movements of players , and discrete events such as shot attempts and turnovers . transition kernels are estimated using hierarchical spatiotemporal models that share information across players while remaining computationally tractable on very large data sets . in addition to estimating epv , these models reveal novel insights on players ' decision-making tendencies as a function of their spatial strategy . story_separator_special_tag in basketball , efficiency is primarily determined by the value from shooting the ball . effective field goal percentage ( efg ) is the advanced metric currently used to evaluate shooting . the problem is that efg confounds two different properties : the quality of a shot and the ability to make that shot . in this paper , we introduce and propose methodologies for deriving and evaluating two new metrics : ( 1 ) effective shot quality ( esq ) and ( 2 ) efg+ , which is efg minus esq , a measure of shooting ability above expectation . we discuss how this recharacterizes performance for teams and players , and how this type of analysis can affect analysis beyond shooting .
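the shooting metrics just described reduce to simple arithmetic over per-shot records . a minimal sketch , where the `exp_make` column is a hypothetical stand-in for a location-based shot-quality model ( the paper 's own esq model is richer than this ) .

```python
import numpy as np

# per-shot records : made = 1 if the attempt scored , is_three flags three-point
# attempts , exp_make is a shot-quality model 's make probability ( hypothetical ) .
made     = np.array([1, 0, 1, 1, 0, 0, 1, 0])
is_three = np.array([0, 0, 1, 0, 1, 1, 0, 0])
exp_make = np.array([.55, .40, .35, .60, .33, .30, .52, .45])

value = 1 + 0.5 * is_three                    # threes count 1.5x in efg terms
efg = (made * value).sum() / len(made)        # effective field goal percentage
esq = (exp_make * value).sum() / len(made)    # effective shot quality : expected efg
efg_plus = efg - esq                          # shooting ability above expectation

print(round(efg, 3), round(esq, 3), round(efg_plus, 3))
```

separating shot quality ( esq ) from shot making ( efg+ ) is the whole point of the decomposition : two players with identical efg can differ sharply once the difficulty of their shot diets is priced in .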
story_separator_special_tag sports analytics in general , and football ( soccer in usa ) analytics in particular , have evolved in recent years in an amazing way , thanks to automated or semi-automated sensing technologies that provide high-fidelity data streams extracted from every game . in this paper we propose a data-driven approach and show that there is a large potential to boost the understanding of football team performance . from observational data of football games we extract a set of pass-based performance indicators and summarize them in the h indicator . we observe a strong correlation between the proposed indicator and the success of a team , and therefore perform a simulation on the four major european championships ( 78 teams , almost 1500 games ) . the outcome of each game in the championship was replaced by a synthetic outcome ( win , loss or draw ) based on the performance indicators computed for each team . we found that the final rankings in the simulated championships are very close to the actual rankings in the real championships , and show that teams with high ranking error show extreme values of a defense/attack efficiency measure , the pezzali score . story_separator_special_tag the aim of this pilot study was to propose a set of network methods to measure the specific properties of football teams . these metrics were organized on `` meso '' and `` micro '' analysis levels . five official matches of the same team on the first portuguese football league were analyzed . overall , 577 offensive plays were analyzed from the five matches . from the adjacency matrices developed for each offensive play , the scaled connectivity , the clustering coefficient , and the centroid significance and centroid conformity were computed . results showed that the highest values of scaled connectivity were found in lateral defenders and central and midfielder players and the lowest values were found in the striker and goalkeeper . the highest values of clustering coefficient were generally found in midfielders and forwards . in addition , the centroid results showed that lateral and central defenders tend to be the centroid players in the attacking process . in sum , this study showed that network metrics can be a powerful tool to help coaches to understand the specific team 's properties , thus supporting decision-making and improving sports training based on match analysis . story_separator_special_tag the tactical behaviour of football players is fundamental in sport teams . despite this importance , the methods to measure such behaviour are very time-consuming for human operators . therefore , the aim of this case study was to propose a set of collective technological metrics to evaluate the attacking coverage provided by teammates to the player in possession of the ball . for this case study data was collected from three official matches of the same professional team . using the information about the cartesian position of players in the field provided by a tracking method , it was possible to propose four different technological metrics and ratios : i ) cover in support ; ii ) cover in vigilance ; iii ) attacking cover ; and iv ) depth mobility . using those metrics it was possible to observe that on average the observed team used cover in vigilance as well as depth mobility with higher regularity , thus suggesting a specific tactical behaviour . in summary , it was possible to apply all metrics to real data from three official matches , thus allowing a new technological method to improve the match analysis systems that use multiplayer story_separator_special_tag the aim of this study was to propose a set of network methods to measure the specific properties of a team . these metrics were organised at macro-analysis levels . the interactions between teammates were collected and then processed following the analysis levels herein announced . overall , 577 offensive plays were analysed from five matches . the network density showed an ambiguous relationship among the team , mainly during the 2nd half .
the mean values of density for all matches were 0.48 in the 1st half , 0.32 in the 2nd half and 0.34 for the whole match . the heterogeneity coefficient for the overall matches rounded to 0.47 and it was also observed that this increased in all matches in the 2nd half . the centralisation values showed that there was no 'star topology ' . the results suggest that each node ( i.e. , each player ) had nearly the same connectivity , mainly in the 1st half . nevertheless , the values increased in the 2nd half , showing a decreasing participation of all players at the same level . briefly , these metrics showed that it is possible to identify how players connect with story_separator_special_tag the study of teammates interaction on team sports has been growing in the last few years . nevertheless , no specific software has been developed so far to do this in a user-friendly manner . therefore , the aim of this study was to introduce a software called the performance analysis tool that allows the user to quickly record the teammates interaction and automatically generate the outputs in adjacency matrices that can then be imported by social network analysis software such as socnetv . moreover , it was also the aim of this study to process the data in a real-life scenario , thus the seven matches of the german national soccer team in the fifa world cup 2014 were used to test the software and then compute the network metrics . a dataset of 3032 passes between teammates in seven soccer matches was generated with the performance analysis tool software , which permitted a study of the network structure . the analysis of variance of centrality metrics between different tactical positions was made . the two-way m . story_separator_special_tag this paper analyzes the network of passes among the players of the spanish team during the last fifa world cup 2010 , where they emerged as the champion , with the objective of explaining the results obtained from the behavior at the complex network level . the team is considered a network with players as nodes and passes as ( directed ) edges . a temporal analysis of the resulting passes network is also done , looking at the number of passes , length of the chain of passes , and to network measures such as player centrality and clustering coefficient . results of the last three matches ( the decisive ones ) indicate that the clustering coefficient of the pass network remains high , indicating the elaborate style of the spanish team . the effectiveness of the opposing team in negating the spanish game is reflected in the change of several network measures over time , most importantly in drops of the clustering coefficient and passing length/speed , as well as in their being able in removing the most talented players from the central positions of the network . 
spain 's ability to restore their combinative game and move story_separator_special_tag vibratory power unit for vibrating conveyers and screens comprising an asynchronous polyphase motor , at least one pair of associated unbalanced masses disposed on the shaft of said motor , with the first mass of a pair of said unbalanced masses being rigidly fastened to said shaft and with said second mass of said pair being movably arranged relative to said first mass , means for controlling and regulating the conveying rate during conveyer operation by varying the rotational speed of said motor between predetermined minimum and maximum values , said second mass being movably outwardly by centrifugal force against the pressure of spring means , said spring means being prestressed in such a manner that said second mass is , at rotational motor speeds lower than said minimum speed , held in its initial position , and at motor speeds between said lower and upper values in positions which are radially offset with respect to the axis of said motor to an extent depending on the value of said rotational motor speed . story_separator_special_tag background teamwork is a fundamental aspect of many human activities , from business to art and from sports to science . recent research suggests that teamwork is of crucial importance to cutting-edge scientific research , but little is known about how teamwork leads to greater creativity . indeed , for many team activities , it is not even clear how to assign credit to individual team members . remarkably , at least in the context of sports , there is usually a broad consensus on who are the top performers and on what qualifies as an outstanding performance . methodology/principal findings in order to determine how individual features can be quantified , and as a test bed for other team-based human activities , we analyze the performance of players in the european cup 2008 soccer tournament . we develop a network approach that provides a powerful quantification of the contributions of individual players and of overall team performance . conclusions/significance we hypothesize that generalizations of our approach could be useful in other contexts where quantification of the contributions of individual team members is important . story_separator_special_tag abstract team sports represent complex systems : players interact continuously during a game , and exhibit intricate patterns of interaction , which can be identified and investigated at both individual and collective levels . we used voronoi diagrams to identify and investigate the spatial dynamics of players ' behavior in futsal . using this tool , we examined 19 plays of a sub-phase of a futsal game played in a reduced area ( 20 m^2 ) from which we extracted the trajectories of all players . results obtained from a comparative analysis of player 's voronoi area ( dominant region ) and nearest teammate distance revealed different patterns of interaction between attackers and defenders , both at the level of individual players and teams . we found that , compared to defenders , larger dominant regions were associated with attackers . furthermore , these regions were more variable in size among players from the same team but , at the player level , the attackers ' dominant regions were more regular than those associated with each of the defenders .
these findings support a formal description of the dynamic spatial interaction of the players , at least during the particular sub-phase of story_separator_special_tag we present a transformation that can be used to compute voronoi diagrams with a sweepline technique . the transformation is used to obtain simple algorithms for computing the voronoi diagram of point sites , of line segment sites , and of weighted point sites . all algorithms have o ( n log n ) worst case running time and use o ( n ) space . story_separator_special_tag although basketball is a dualistic sport , with all players competing on both offense and defense , almost all of the sport 's conventional metrics are designed to summarize offensive play . as a result , player valuations are largely based on offensive performances and to a much lesser degree on defensive ones . steals , blocks and defensive rebounds provide only a limited summary of defensive effectiveness , yet they persist because they summarize salient events that are easy to observe . due to the inefficacy of traditional defensive statistics , the state of the art in defensive analytics remains qualitative , based on expert intuition and analysis that can be prone to human biases and imprecision . fortunately , emerging optical player tracking systems have the potential to enable a richer quantitative characterization of basketball performance , particularly defensive performance . unfortunately , due to computational and methodological complexities , that potential remains unmet . this paper attempts to fill this void , combining spatial and spatio-temporal processes , matrix factorization techniques and hierarchical regression models with player tracking data to advance the state of defensive analytics in the nba . our approach detects , characterizes and quantifies story_separator_special_tag a family of new measures of point and graph centrality based on early intuitions of bavelas ( 1948 ) is introduced . these measures define centrality in terms of the degree to which a point falls on the shortest path between others and therefore has a potential for control of communication . they may be used to index centrality in any large or small network of symmetrical relations , whether connected or unconnected . story_separator_special_tag abstract the intuitive background for measures of structural centrality in social networks is reviewed and existing measures are evaluated in terms of their consistency with intuitions and their interpretability . three distinct intuitive conceptions of centrality are uncovered and existing measures are refined to embody these conceptions . three measures are developed for each concept , one absolute and one relative measure of the centrality of positions in a network , and one reflecting the degree of centralization of the entire network . the implications of these measures for the experimental study of small groups are examined .
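these centrality and centralization ideas are straightforward to compute on a passing network . a minimal sketch with networkx on an invented five-player pass matrix ; note that betweenness treats edge weights as distances , so pass counts are inverted before use .

```python
import networkx as nx

# toy directed passing network : ( passer , receiver , pass count ) ; counts are invented .
passes = [("gk", "cb", 12), ("cb", "cm", 30), ("cm", "lw", 18),
          ("cm", "st", 9), ("lw", "st", 14), ("st", "cm", 7)]
G = nx.DiGraph()
for u, v, w in passes:
    # shortest-path betweenness reads weights as distances , so frequently used
    # links are modeled as short , easy channels for the ball .
    G.add_edge(u, v, passes=w, dist=1.0 / w)

btw = nx.betweenness_centrality(G, weight="dist")   # who the ball must flow through

# freeman 's group centralization for degree : how star-like the whole network is
# ( computed here on the undirected view , using the normalized-degree formula ) .
deg = nx.degree_centrality(G.to_undirected())
n = G.number_of_nodes()
centralization = sum(max(deg.values()) - c for c in deg.values()) / (n - 2)

print(btw)
print(round(centralization, 3))   # 1.0 for a pure star , 0.0 for a fully even network
```

this is exactly the pairing several of the soccer studies above rely on : node-level centrality to find the players the ball flows through , and group centralization to ask whether play is concentrated on a few hubs .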
story_separator_special_tag abstract there is a need for a collective variable that captures the dynamics of team sports like soccer at match level . the centroid positions and surface areas of two soccer teams potentially describe the coordinated flow of attacking and defending in small-sided soccer games at team level . the aim of the present study was to identify an overall game pattern by establishing whether the proposed variables were linearly related between teams over the course of the game . in addition , we tried to identify patterns in the build-up of goals . a positive linear relation and a negative linear relation were hypothesized for the centroid positions and surface areas respectively . finally , we hypothesized that deviations from these patterns are present in the build-up of goals . ten young male elite soccer players ( mean age 17.3 , s=0.7 ) played three small-sided soccer games ( 4-a-side ) of 8 minutes as part of their regular training routine . an innovative player tracking system , local position measurement ( lpm ) , was us . story_separator_special_tag a quantitative method for evaluating sport teamwork is proposed . the sports considered here are team sports in which players can move freely in the field , and two teams compete against each other . for this kind of sports , each player 's dominant region has an important role in evaluating the teamwork . therefore , here we propose some approaches to quantitative evaluation based on the concept of a generalized voronoi diagram that divides space into dominant regions . we also construct a more realistic model of the player 's motion based on experiments , and apply it to the evaluation . © 2005 wiley periodicals , inc. syst comp jpn , 36 ( 6 ) : 49-58 , 2005 ; published online in wiley interscience . doi 10.1002/scj.20254 story_separator_special_tag this paper investigates spatial and visual analytics as means to enhance basketball expertise . we introduce courtvision , a new ensemble of analytical techniques designed to quantify , visualize , and communicate spatial aspects of nba performance with unprecedented precision and clarity . we propose a new way to quantify the shooting range of nba players and present original methods that measure , chart , and reveal differences in nba players ' shooting abilities . we conduct a case study , which applies these methods to 1 ) inspect spatially aware shot site performances for every player in the nba , and 2 ) to determine which players exhibit the most potent spatial shooting behaviors . we present evidence that steve nash and ray allen have the best shooting range in the nba . we conclude by suggesting that visual and spatial analysis represent vital new methodologies for nba analysts . story_separator_special_tag basketball is a dualistic sport : all players compete on both offense and defense , and the core strategies of basketball revolve around scoring points on offense and preventing points on defense . however , conventional basketball statistics emphasize offensive performance much more than defensive performance . in the basketball analytics community , we do not have enough metrics and analytical frameworks to effectively characterize defensive play . however , although measuring defense has traditionally been difficult , new player tracking data are presenting new opportunities to understand defensive basketball . this paper introduces new spatial and visual analytics capable of assessing and characterizing the nature of interior defense in the nba . we present two case studies that each focus on a different component of defensive play . our results suggest that the integration of spatial approaches and player tracking data promise to improve the status quo of defensive analytics but also reveal some important challenges associated with evaluating defense . story_separator_special_tag abstract a team is more than the sum of its individual players , and so implies a structure of relations on the set .
the q-analysis , or polyhedral dynamics , of atkin is chosen to define and operationalise intuitive notions of structure in a soccer match between liverpool and manchester united . the injection of q-holes , or obtrusive objects , by the defense of one team appears to contribute to the fragmentation and loss of the other . story_separator_special_tag abstract a defining feature of a work group is how its individual members interact . building on a dataset of 283,259 passes between professional soccer players , this study applies mixed-effects modeling to 76 repeated observations of the interaction networks and performance of 23 soccer teams . controlling for unobserved characteristics , such as the quality of the teams , the study confirms previous findings with panel data : networks characterized by high intensity ( controlling for interaction opportunities ) and low centralization are indeed associated with better team performance . story_separator_special_tag the eastern coast of algiers , which stretches over 15 km , is currently experiencing very intense socioeconomic and urban development that is causing severe disturbances to the coastal environment . the main issue of this study concerns itself with understanding the evolutionary trends of this system and assessing its state of vulnerability towards erosion phenomena . this work focuses on the historical study of the variation in the shoreline position by combining photogrammetry data and in situ dgps measurements ( differential global positioning system ) . data treatment was carried out using a geographic information system ( gis ) and the digital shoreline analysis system ( dsas ) geostatistical computing tool . these techniques have enabled identification of the erosion/accretion rates and description of the evolutionary trends over a period of 58 years by calculating the net rates of coastline changes over three time periods ( 1959-1980 , 1980-2003 and 2003-2017 ) . the results show that the net rate fluctuates between sites , with an overall tendency towards erosion ( 49 % of the coastline ) , associated with a significant variation in the average annual rates . the computed statistics show that the study area story_separator_special_tag earlier experimental studies by one of us ( kelso , 1981a , 1984 ) have shown that abrupt phase transitions occur in human hand movements under the influence of scalar changes in cycling frequency . beyond a critical frequency the originally prepared out-of-phase , antisymmetric mode is replaced by a symmetrical , in-phase mode involving simultaneous activation of homologous muscle groups . qualitatively , these phase transitions are analogous to gait shifts in animal locomotion as well as phenomena common to other physical and biological systems in which new modes or spatiotemporal patterns arise when the system is parametrically scaled beyond its equilibrium state ( haken , 1983 ) . in this paper a theoretical model , using concepts central to the interdisciplinary field of synergetics and nonlinear oscillator theory , is developed , which reproduces ( among other features ) the dramatic change in coordinative pattern observed between the hands .
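the relative-phase analyses used in the coordination studies above , and the in-phase/anti-phase modes of the haken-kelso-bunz model , can be estimated from tracking data with a hilbert transform . a minimal sketch on synthetic centroid signals ( real analyses would use measured longitudinal team positions ) .

```python
import numpy as np
from scipy.signal import hilbert

# synthetic longitudinal centroid positions of two teams , sampled at 10 hz :
# team b tracks team a with a small , noisy lag , like attacking/defending flow .
t = np.arange(0, 60, 0.1)
team_a = np.sin(2 * np.pi * 0.05 * t)
team_b = (np.sin(2 * np.pi * 0.05 * t - 0.3)
          + 0.05 * np.random.default_rng(1).normal(size=t.size))

# instantaneous phase of each mean-removed signal via the analytic signal
phase_a = np.unwrap(np.angle(hilbert(team_a - team_a.mean())))
phase_b = np.unwrap(np.angle(hilbert(team_b - team_b.mean())))

rel_phase = np.degrees(phase_a - phase_b)   # ~0 deg = in-phase , ~180 deg = anti-phase
print(rel_phase.mean(), rel_phase.std())    # in-phase attraction : mean near 0 , low spread
```

a histogram of `rel_phase` concentrated near 0 degrees is the signature of the in-phase stability these basketball and soccer studies report , while a drift toward 180 degrees would indicate the anti-phase mode .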
story_separator_special_tag a knowledgeable observer of a game of football ( soccer ) can make a subjective evaluation of the quality of passes made between players during the game . in this paper we consider the problem of producing an automated system to make the same evaluation of passes . we present a model that constructs numerical predictor variables from spatiotemporal match data using feature functions based on methods from computational geometry , and then learns a classification function from labelled examples of the predictor variables . in addition , we show that the predictor variables computed using methods from computational geometry are among the most important to the learned classifiers . story_separator_special_tag a probabilistic framework for representing and visually recognizing complex multi-agent action is presented . motivated by work in model-based object recognition and designed for the recognition of action from visual evidence , the representation has three components : ( 1 ) temporal structure descriptions representing the temporal relationships between agent goals , ( 2 ) belief networks for probabilistically representing and recognizing individual agent goals from visual evidence , and ( 3 ) belief networks automatically generated from the temporal structure descriptions that support the recognition of the complex action . we describe our current work on recognizing american football plays from noisy trajectory data . story_separator_special_tag videos of multi-player team sports provide a challenging domain for dynamic scene analysis . player actions and interactions are complex as they are driven by many factors , such as the short-term goals of the individual player , the overall team strategy , the rules of the sport , and the current context of the game . we show that constrained multi-agent events can be analyzed and even predicted from video . such analysis requires estimating the global movements of all players in the scene at any time , and is needed for modeling and predicting how the multi-agent play evolves over time on the field . to this end , we propose a novel approach to detect the locations of where the play evolution will proceed , e.g . where interesting events will occur , by tracking player positions and movements over time . we start by extracting the ground level sparse movement of players in each time-step , and then generate a dense motion field . using this field we detect locations where the motion converges , implying positions towards which the play is evolving . we evaluate our approach by analyzing videos of a variety of complex story_separator_special_tag the quantitative analysis of sports is a growing branch of science and , in many ways one that has developed through non-academic and non-traditionally peer-reviewed work . the aim of this paper is to bring to a peer-reviewed journal the generally accepted basics of the analysis of basketball , thereby providing a common starting point for future research in basketball . the possession concept , in particular the concept of equal possessions for opponents in a game , is central to basketball analysis . estimates of possessions have existed for approximately two decades , but the various formulas have sometimes created confusion . we hope that by showing how most previous formulas are special cases of our more general formulation , we shed light on the relationship between possessions and various statistics . also , we hope that our new estimates can provide a common basis for future possession estimation .
in addition to listing data sources for statistical research on basketball , we also discuss other concepts and methods , including offensive and defensive ratings , plays , per-minute statistics , pace adjustments , true shooting percentage , effective field goal percentage , rebound rates , four factors , story_separator_special_tag this paper has always been one of my favorite children , combining as it does elements of the duality of linear programming and combinatorial tools from graph theory . it may be of some interest to tell the story of its origin . story_separator_special_tag we consider the group motion segmentation problem and provide a solution for it . the group motion segmentation problem aims at analyzing motion trajectories of multiple objects in video and finding among them the ones involved in a group motion pattern . this problem is motivated by and serves as the basis for the multi-object activity recognition problem , which is currently an active research topic in event analysis and activity recognition . specifically , we learn a spatio-temporal driving force model to characterize a group motion pattern and design an approach for segmenting the group motion . we illustrate the approach using videos of american football plays , where we identify the offensive players , who follow an offensive motion pattern , from motions of all players in the field . experiments using gatech football play dataset validate the effectiveness of the segmentation algorithm . story_separator_special_tag while video-based activity analysis and recognition has received much attention , existing body of work mostly deals with single object/person case . coordinated multi-object activities , or group activities , present in a variety of applications such as surveillance , sports , and biological monitoring records , etc. , are the main focus of this paper . unlike earlier attempts which model the complex spatial temporal constraints among multiple objects with a parametric bayesian network , we propose a discriminative temporal interaction manifold ( dtim ) framework as a data-driven strategy to characterize the group motion pattern without employing specific domain knowledge . in particular , we establish probability densities on the dtim , whose element , the discriminative temporal interaction matrix , compactly describes the coordination and interaction among multiple objects in a group activity . for each class of group activity we learn a multi-modal density function on the dtim . a maximum a posteriori ( map ) classifier on the manifold is then designed for recognizing new activities . experiments on football play recognition demonstrate the effectiveness of the approach . story_separator_special_tag real-world ai systems have been recently deployed which can automatically analyze the plan and tactics of tennis players . as the game-state is updated regularly at short intervals ( i.e . point-level ) , a library of successful and unsuccessful plans of a player can be learnt over time . given the relative strengths and weaknesses of a player 's plans , a set of proven plans or tactics from the library that characterize a player can be identified . for low-scoring , continuous team sports like soccer , such analysis for multi-agent teams does not exist as the game is not segmented into `` discretized '' plays ( i.e . plans ) , making it difficult to obtain a library that characterizes a team 's behavior . 
additionally , as player tracking data is costly and difficult to obtain , we only have partial team tracings in the form of ball actions which makes this problem even more difficult . in this paper , we propose a method to overcome these issues by representing team behavior via play-segments , which are spatio-temporal descriptions of ball movement over fixed windows of time . using these representations we can characterize team story_separator_special_tag in this paper , we describe a method to represent and discover adversarial group behavior in a continuous domain . in comparison to other types of behavior , adversarial behavior is heavily structured as the location of a player ( or agent ) is dependent both on their teammates and adversaries , in addition to the tactics or strategies of the team . we present a method which can exploit this relationship through the use of a spatiotemporal basis model . as players constantly change roles during a match , we show that employing a `` role-based '' representation instead of one based on player `` identity '' can best exploit the playing structure . as vision-based systems currently do not provide perfect detection/tracking ( e.g . missed or false detections ) , we show that our compact representation can effectively `` denoise '' erroneous detections as well as enabling temporal analysis , which was previously prohibitive due to the dimensionality of the signal . to evaluate our approach , we used a fully instrumented field-hockey pitch with 8 fixed high-definition ( hd ) cameras and evaluated our approach on approximately 200,000 frames of data from a state-of-the-art real-time player story_separator_special_tag in this paper , we use ball and player tracking data from stats sportvu from the 2012-2013 nba season to analyze offensive and defensive formations of teams . we move beyond current analysis that uses only play-by-play event-driven statistics ( i.e. , rebounds , shots ) and look at the spatiotemporal changes in a team 's formation . a major concern , which also gives a clue to unlocking this problem , is that of permutations caused by the constant movement and interchanging of positions by players . in this paper , we use a method that represents a team via role which is immune to the problem of permutations . we demonstrate the utility of our approach by analyzing all the plays that resulted in a 3-point shot attempt in the 2012-2013 nba season . we analyzed close to 20,000 shots and found that when a player is open the shooting percentage is around 40 % , compared to a pressured shot which is close to 32 % . there is nothing groundbreaking behind this finding ( i.e. , putting more defensive pressure on the shooter reduces shooting percentages ) but finding how teams get shooters open is . using our method , we story_separator_special_tag in this paper , we present a method which accurately estimates the likelihood of chances in soccer using strategic features from an entire season of player and ball tracking data taken from a professional league . from the data , we analyzed the spatiotemporal patterns of the ten-second window of play before a shot for nearly 10,000 shots . from our analysis , we found that not only is the game phase important ( i.e. , corner , free-kick , open-play , counter attack etc . ) , the strategic features such as defender proximity , interaction of surrounding players , and speed of play , coupled with the shot location , have an impact on determining the likelihood of a team scoring a goal .
using our spatiotemporal strategic features , we can accurately measure the likelihood of each shot . we use this analysis to quantify the efficiency of each team and their strategy . story_separator_special_tag this paper leverages stats sportvu optical tracking data to deconstruct several previously hidden aspects of rebounding . we are able to move beyond the outcome of who got the rebound to discover the non-linear relationship between shot location and its impact on offensive rebound rates , implications of the height of where rebounds are obtained , and estimates of where players should move in order to improve rebounding rates . we also leverage machine-learning methods to estimate the predictability of rebounding . story_separator_special_tag in the uk , guidelines for self-treatment of symptomatic hypoglycaemia have been prepared by diabetes uk , and recommend initial treatment with 10-15 g of refined carbohydrate ( three to five glucose tablets ) , followed by the consumption of unrefined ( starch ) carbohydrate ( follow-up treatment ) [ 1 ] . these guidelines are often used in the education of people who are commencing treatment with insulin , in whom mild ( self-treated ) hypoglycaemia is a common side-effect [ 2 ] . however , the amount of carbohydrate recommended in the guidelines is arbitrary , does not embrace differing circumstances and varying severity of hypoglycaemia , and is not based on scientific evidence [ 3 ] . it is not known how many people in the uk follow the guidelines recommended by diabetes uk and how often symptomatic mild hypoglycaemia is undertreated or overtreated . either measure might have significant metabolic consequences by compromising subsequent glycaemic control , causing either rebound hyperglycaemia or increasing the risk of progression to more severe hypoglycaemia . to assess how people self-treat mild symptomatic hypoglycaemia , how this is determined by clinical and psychological factors , and the extent of story_separator_special_tag we develop a machine learning approach to represent and analyze the underlying spatial structure that governs shot selection among professional basketball players in the nba . typically , nba players are discussed and compared in a heuristic , imprecise manner that relies on unmeasured intuitions about player behavior . this makes it difficult to draw comparisons between players and make accurate player-specific predictions . modeling shot attempt data as a point process , we create a low dimensional representation of offensive player types in the nba . using non-negative matrix factorization ( nmf ) , an unsupervised dimensionality reduction technique , we show that a low-rank spatial decomposition summarizes the shooting habits of nba players . the spatial representations discovered by the algorithm correspond to intuitive descriptions of nba player types , and can be used to model other spatial effects , such as shooting accuracy .
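the nmf decomposition of shooting habits just described is a few lines with scikit-learn . a minimal sketch on synthetic shot counts ; a real analysis would bin actual shot locations on a much finer court grid .

```python
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(0)

# x[i , j] = number of shots by player i from court cell j . synthetic data :
# 60 players mixing three latent shot-type profiles over a 10 x 10 half-court grid .
archetypes = rng.gamma(2.0, 1.0, size=(3, 100))    # e.g . rim , mid-range , three-point
weights = rng.dirichlet(np.ones(3), size=60)       # each player blends the profiles
X = rng.poisson(weights @ archetypes * 20.0)

model = NMF(n_components=3, init="nndsvda", max_iter=500, random_state=0)
W = model.fit_transform(X)   # players x components : each player 's loading per shot type
H = model.components_        # components x cells : the recovered spatial shot bases

print(W.shape, H.shape)      # (60 , 3) and (3 , 100)
```

because both factors are constrained to be non-negative , each row of `H` reads directly as a shot-density surface and each row of `W` as a player 's mixture over those surfaces , which is what makes the decomposition interpretable as player types .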
story_separator_special_tag this paper describes a method for a real-time calculation of a dominant region diagram ( simply , a dominant region ) . the dominant region is proposed to analyze the features of group behaviors . it draws spheres of influence and is used to analyze teamwork in team sports such as soccer and handball . in robocup soccer , particularly in the small size league ( ssl ) , the dominant region takes an important role in analyzing the current situation in the game , and it is useful for evaluating the suitability of the current strategy . another advantage of its real-time calculation is that it makes it possible to predict the success or failure of passing . to let it work in a real environment , a real-time calculation of the dominant region is necessary . however , it takes 10 to 40 seconds to calculate the dominant region of the ssl 's field by using the algorithm proposed in [ 3 ] . therefore , this paper proposes a real-time calculation algorithm of the dominant region . the proposed algorithm computes an approximate dominant region . the basic idea is ( 1 ) to make a reachable story_separator_special_tag abstract statistical properties of position-dependent ball-passing networks in real football games are examined . we find that the networks have the small-world property , and their degree distributions are fitted well by a truncated gamma distribution function . in order to reproduce these properties of networks , a model based on a markov chain is proposed . story_separator_special_tag the scientific study of networks , including computer networks , social networks , and biological networks , has received an enormous amount of interest in the last few years . the rise of the internet and the wide availability of inexpensive computers have made it possible to gather and analyze network data on a large scale , and the development of a variety of new theoretical tools has allowed us to extract new knowledge from many different kinds of networks . the study of networks is broadly interdisciplinary and important developments have occurred in many fields , including mathematics , physics , computer and information sciences , biology , and the social sciences . this book brings together for the first time the most important breakthroughs in each of these fields and presents them in a coherent fashion , highlighting the strong interconnections between work in different areas . subjects covered include the measurement and structure of networks in many branches of science , methods for analyzing network data , including methods developed in physics , statistics , and sociology , the fundamentals of graph theory , computer algorithms , and spectral methods , mathematical models of networks , including random graph story_separator_special_tag this paper describes and evaluates the novel utility of network methods for understanding human interpersonal interactions within social neurobiological systems such as sports teams . we show how collective system networks are supported by the sum of interpersonal interactions that emerge from the activity of system agents ( such as players in a sports team ) . to test this idea we trialled the methodology in analyses of intra-team collective behaviours in the team sport of water polo . we observed that the number of interactions between team members resulted in varied intra-team coordination patterns of play , differentiating between successful and unsuccessful performance outcomes . future research on small-world networks methodologies needs to formalize measures of node connections in analyses of collective behaviours in sports teams , to verify whether a high frequency of interactions is needed between players in order to achieve competitive performance outcomes . story_separator_special_tag we showcase in this paper the use of some tools from network theory to describe the strategy of football teams . using passing data made available by fifa during the 2010 world cup , we construct for each team a weighted and directed network in which nodes correspond to players and arrows to passes .
the resulting network or graph provides a direct visual inspection of a team 's strategy , from which we can identify play pattern , determine hot-spots on the play and localize potential weaknesses . using different centrality measures , we can also determine the relative importance of each player in the game , the ` popularity ' of a player , and the effect of removing players from the game . story_separator_special_tag this article presents soccerstories , a visualization interface to support analysts in exploring soccer data and communicating interesting insights . currently , most analyses on such data relate to statistics on individual players or teams . however , soccer analysts we collaborated with consider that quantitative analysis alone does not convey the right picture of the game , as context , player positions and phases of player actions are the most relevant aspects . we designed soccerstories to support the current practice of soccer analysts and to enrich it , both in the analysis and communication stages . our system provides an overview+detail interface of game phases , and their aggregation into a series of connected visualizations , each visualization being tailored for actions such as a series of passes or a goal attempt . to evaluate our tool , we ran two qualitative user studies on recent games using soccerstories with data from one of the world 's leading live sports data providers . the first study resulted in a series of four articles on soccer tactics , by a tactics analyst , who said he would not have been able to write these otherwise . the second study story_separator_special_tag this paper proposes a novel , trajectory-based approach to the automatic recognition of complex multi-player behavior in a basketball game . first , a probabilistic play model is applied to the player-trajectory data in order to segment the play into game phases ( offense , defense , time out ) . in this way , both the temporal boundaries of the observed activity and its broader context are obtained . next , the team 's activity is analyzed in more detail by detecting the key elements of basketball play . following basketball theory , these key elements ( starting formation , screen , and move ) are the building blocks of basketball play , and therefore their temporal order is used to produce a semantic description of the observed activity . finally , the activity is recognized by comparing its semantic description with the descriptions of manually defined templates , stored in a database . the effectiveness and robustness of the proposed approach is demonstrated on two championship games and 71 examples of three types of basketball offense . story_separator_special_tag sports analysts live in a world of dynamic games flattened into tables of numbers , divorced from the rinks , pitches , and courts where they were generated . currently , these professional analysts use r , stata , sas , and other statistical software packages for uncovering insights from game data . quantitative sports consultants seek a competitive advantage both for their clients and for themselves as analytics becomes increasingly valued by teams , clubs , and squads . in order for the information visualization community to support the members of this blossoming industry , it must recognize where and how visualization can enhance the existing analytical workflow . 
in this paper , we identify three primary stages of today 's sports analyst 's routine where visualization can be beneficially integrated : 1 ) exploring a dataspace ; 2 ) sharing hypotheses with internal colleagues ; and 3 ) communicating findings to stakeholders . working closely with professional ice hockey analysts , we designed and built snapshot , a system to integrate visualization into the hockey intelligence gathering process . snapshot employs a variety of information visualization techniques to display shot data , yet given the importance of a specific hockey story_separator_special_tag basketball coaches at all levels use shot charts to study shot locations and outcomes for their own teams as well as upcoming opponents . shot charts are simple plots of the location and result of each shot taken during a game . although shot chart data are rapidly increasing in richness and availability , most coaches still use them purely as descriptive summaries . however , a team 's ability to defend a certain player could potentially be improved by using shot data to make inferences about the player 's tendencies and abilities . this article develops hierarchical spatial models for shot-chart data , which allow for spatially varying effects of covariates . our spatial models permit differential smoothing of the fitted surface in two spatial directions , which naturally correspond to polar coordinates : distance to the basket and angle from the line connecting the two baskets . we illustrate our approach using the 2003-2004 shot chart data for minnesota timberwolves guard sam cassell . story_separator_special_tag lie groups is intended as an introduction to the theory of lie groups and their representations at the advanced undergraduate or beginning graduate level . it covers the essentials of the subject starting from basic undergraduate mathematics . the correspondence between linear lie groups and lie algebras is developed in its local and global aspects . the classical groups are analysed in detail , first with elementary matrix methods , then with the help of the structural tools typical of the theory of semisimple groups , such as cartan subgroups , roots , weights , and reflections . the fundamental groups of the classical groups are worked out as an application of these methods . manifolds are introduced when needed , in connection with homogeneous spaces , and the elements of differential and integral calculus on manifolds are presented , with special emphasis on integration on groups and homogeneous spaces . representation theory starts from first principles , such as schur 's lemma and its consequences , and proceeds from there to the peter-weyl theorem , weyl 's character formula , and the borel-weil theorem , all in the context of linear groups . story_separator_special_tag in this final installment of the paper we consider the case where the signals or the messages or both are continuously variable , in contrast with the discrete nature assumed until now . to a considerable extent the continuous case can be obtained through a limiting process from the discrete case by dividing the continuum of messages and signals into a large but finite number of small regions and calculating the various parameters involved on a discrete basis . as the size of the regions is decreased these parameters in general approach as limits the proper values for the continuous case .
there are , however , a few new effects that appear and also a general change of emphasis in the direction of specialization of the general results to particular cases . story_separator_special_tag every basketball player takes and makes a unique spatial array of shots . in recent years , technology to measure the coordinates of these constellations has made analysis of them possible , and the possibility exists for distinguishing between different shooters at a level of spatial detail finer than the entire basketball court . this paper addresses the challenge of characterizing and visualizing relative spatial shooting effectiveness in basketball by developing metrics to assess spatial variability in shooting . several global and local measures are introduced and formal tests are proposed to enable the comparison of shooting effectiveness between players , groups of players , or other collections of shots . we propose an empirical bayesian smoothing rate estimate that uses a novel local spatial neighborhood tailored for basketball shooting . these measures are evaluated using data from the 2011 to 2012 nba basketball season in three distinct ways . first we contrast nonspatial and spatial shooting metrics for two players from that season and then extend the comparison to all players attempting at least 250 shots in that season , rating them in terms of shooting effectiveness . second , we identify players shooting significantly better story_separator_special_tag optimizing the performance of a basketball offense may be viewed as a network problem , wherein each play represents a `` pathway '' through which the ball and players may move from origin ( the in-bounds pass ) to goal ( the basket ) . effective field goal percentages from the resulting shot attempts can be used to characterize the efficiency of each pathway . inspired by recent discussions of the `` price of anarchy '' in traffic networks , this paper makes a formal analogy between a basketball offense and a simplified traffic network . the analysis suggests that there may be a significant difference between taking the highest-percentage shot each time down the court and playing the most efficient possible game . there may also be an analogue of braess 's paradox in basketball , such that removing a key player from a team can result in the improvement of the team 's offensive efficiency . story_separator_special_tag the authors present a method for visualization of an invisible feature in human group motion . this feature , called the `` dominant region '' , is a kind of dynamic sphere of influence . a dominant region is defined as a region where the person can arrive earlier than any other person , and can be formulated by replacing the distance function in the voronoi region with a time function . as an application of the dominant region , a motion analysis system of team ball games was developed . in this system , the dominant region is used for quantitative evaluation of basic teamwork . from the experiments using actual soccer game scenes , it was shown that inferior or superior areas for each player and each team in the game can be observed visually and that some basic factors for teamwork can be evaluated quantitatively by using the dominant region . these evaluation results almost correspond with those given by professionals . story_separator_special_tag we present a motion analysis system of soccer games . the purpose of this system is to evaluate the teamwork quantitatively based on the movement of all the players in a game .
space management and cooperative movement by the players are two major factors for teamwork evaluation . to quantify them from motion images , we propose two new features : `` minimum moving time pattern '' and `` dominant region '' . from experiments using actual game scenes , it is suggested that the proposed system can be a new tool for supporting the evaluation of teamwork . story_separator_special_tag to enable education providers to intervene with at-risk students in time , methods for predicting student performance are necessary . many works have proposed the use of the svm classifier . however , insufficient guidance was given on the choice of a suitable kernel . moreover , the svm classifier cannot be trained successfully when training data are not available for all possible grades . to solve these problems , we investigate the regression svm with various suitable kernels . with the rbf kernel and an $ \epsilon $ -parameter heuristic , we achieve better results on a public dataset from a mathematics course than were achieved with an svm in [ 3 ] . in the case where , in addition to the students ' personal data , previous grades were also known , the prediction of pass or fail was achieved with an accuracy of 90.57 % . this makes the regression svm practically applicable for the detection of at-risk students . story_separator_special_tag analyzing team tactics plays an important role in the professional soccer industry . recently , the progressing ability to track the mobility of ball and players makes it possible to accumulate extensive match logs , which opens an avenue for better tactical analysis . however , traditional methods for tactical analysis largely rely on the knowledge and manual labor of domain experts . to this end , in this paper we propose an unsupervised approach to automatically discerning the typical tactics , i.e. , tactical patterns , of soccer teams through mining the historical match logs . to be specific , we first develop a novel model named team tactic topic model ( t3m ) for learning the latent tactical patterns , which can model the locations and passing relations of players simultaneously . furthermore , we demonstrate several potential applications enabled by the proposed t3m , such as automatic tactical pattern discovery , pass segment annotation , and spatial analysis of player roles . finally , we implement an intelligent demo system to empirically evaluate our approach based on the data collected from la liga 2013-2014. indeed , by visualizing the results obtained from t3m , we can successfully story_separator_special_tag part i. introduction : networks , relations , and structure : 1. relations and networks in the social and behavioral sciences 2. social network data : collection and application part ii . mathematical representations of social networks : 3. notation 4. graphs and matrices part iii . structural and locational properties : 5. centrality , prestige , and related actor and group measures 6. structural balance , clusterability , and transitivity 7. cohesive subgroups 8. affiliations , co-memberships , and overlapping subgroups part iv . roles and positions : 9. structural equivalence 10. blockmodels 11. relational algebras 12. network positions and roles part v. dyadic and triadic methods : 13. dyads 14. triads part vi . statistical dyadic interaction models : 15.
statistical analysis of single relational networks 16. stochastic blockmodels and goodness-of-fit indices part vii . epilogue : 17. future directions . story_separator_special_tag due to the demand for better and deeper analysis in sports , organizations ( both professional teams and broadcasters ) are looking to use spatiotemporal data in the form of player tracking information to obtain an advantage over their competitors . however , due to the large volume of data , its unstructured nature , and lack of associated team activity labels ( e.g . strategic/tactical ) , effective and efficient strategies to deal with such data have yet to be deployed . a bottleneck restricting such solutions is the lack of a suitable representation ( i.e . ordering of players ) which is immune to the potentially infinite number of possible permutations of player orderings , in addition to the high dimensionality of the temporal signal ( e.g . a game of soccer lasts for 90 minutes ) . we leverage a recent method which utilizes a `` role-representation '' , together with a feature reduction strategy that uses a spatiotemporal bilinear basis model , to form a compact spatiotemporal representation . using this representation , we find the most likely formation patterns of a team associated with match events across nearly 14 hours of continuous player and ball tracking data story_separator_special_tag in highly dynamic and adversarial domains such as sports , short-term predictions are made by incorporating both local immediate as well as global situational information . for forecasting complex events , higher-order models such as the hidden conditional random field ( hcrf ) have been used to good effect , as they capture the long-term , high-level semantics of the signal . however , as the prediction is based solely on the hidden layer , fine-grained local information is not incorporated , which reduces the model 's predictive capability . in this paper , we propose an augmented-hidden conditional random field ( a-hcrf ) which incorporates the local observation within the hcrf , which boosts its forecasting performance . given an enormous amount of tracking data from vision-based systems , we show that our approach outperforms current state-of-the-art methods in forecasting short-term events in both soccer and tennis . additionally , as the tracking data is long-term and continuous , we show our model can be adapted to recent data which improves performance . story_separator_special_tag in 1970 , knuth , pratt , and morris [ 1 ] showed how to do basic pattern matching in linear time . related problems , such as those discussed in [ 4 ] , have previously been solved by efficient but sub-optimal algorithms . in this paper , we introduce an interesting data structure called a bi-tree . a linear time algorithm for obtaining a compacted version of a bi-tree associated with a given string is presented . with this construction as the basic tool , we indicate how to solve several pattern matching problems , including some from [ 4 ] in linear time . story_separator_special_tag we consider the problem of learning predictive models for in-game sports play prediction . focusing on basketball , we develop models for anticipating near-future events given the current game state . we employ a latent factor modeling approach , which leads to a compact data representation that enables efficient prediction given raw spatiotemporal tracking data .
we validate our approach using tracking data from the 2012-2013 nba season , and show that our model can make accurate in-game predictions . we provide a detailed inspection of our learned factors , and show that our model is interpretable and corresponds to known intuitions of basketball game play .
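to make the latent factor approach above concrete , here is a minimal sketch , not the authors ' implementation : game states are summarized as coarse court-occupancy histograms , factorized with nmf into a compact latent representation , and a linear classifier predicts a near-future event . the grid size , the factorization rank and the synthetic labels are all illustrative assumptions .

```python
# minimal sketch: latent-factor play prediction from tracking data.
# assumptions (not from the paper): a game state is summarized as a
# coarse court-occupancy histogram; nmf provides the latent factors;
# logistic regression predicts whether a shot occurs shortly after.
import numpy as np
from sklearn.decomposition import NMF
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def occupancy_features(xy, nx=10, ny=5):
    """Bin 10 player (x, y) positions into a flattened court-occupancy grid."""
    hist, _, _ = np.histogram2d(xy[:, 0], xy[:, 1],
                                bins=[nx, ny], range=[[0, 94], [0, 50]])
    return hist.ravel()

# toy data: 2000 game states, 10 players each, with placeholder labels
states = rng.uniform([0, 0], [94, 50], size=(2000, 10, 2))
X = np.array([occupancy_features(s) for s in states])
y = rng.integers(0, 2, size=2000)  # placeholder "shot within 2s" labels

# compact representation: rank-20 nonnegative factorization of the grid
nmf = NMF(n_components=20, init="nndsvda", max_iter=500, random_state=0)
Z = nmf.fit_transform(X)          # latent factors per game state

clf = LogisticRegression(max_iter=1000).fit(Z, y)
print("train accuracy:", clf.score(Z, y))
```

the design point this illustrates is the one the abstract makes : prediction runs on the small latent vector z rather than the raw trajectories , which keeps in-game inference cheap .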
a general construction of ergodic transformations with lebesgue component of finite multiplicity is proposed . all known examples with this property can be encompassed within the proposed construction . the spectral and combinatorial properties of the transformations are studied . it is shown that the construction permits one to obtain a continuum of spectrally nonisomorphic transformations with even-multiplicity lebesgue component . as a rule , the transformations have a continuous spectrum . it is proved that continuum many metrically nonisomorphic transformations having the same spectrum are contained in the proposed class . proof of all the results uses a combinatorial and approximation technique . bibliography : 15 titles . story_separator_special_tag the trimeric transcription factor ( tf ) nf-y regulates the ccaat box , a dna element enriched in promoters of genes overexpressed in many types of cancer . the regulatory nf-ya is present in two major isoforms , nf-yal ( long ) and nf-yas ( short ) . there is growing indication that nf-ya levels are increased in tumors . here , we report interrogation of all 576 rna-seq tcga ( the cancer genome atlas ) samples and geo ( gene expression omnibus ) datasets of lung adenocarcinoma ( luad ) . nf-yas is overexpressed in the three subtypes , proliferative , inflammatory , and tru ( terminal respiratory unit ) . ccaat is enriched in promoters of tumor differently expressed genes ( deg ) and in the proliferative/inflammatory intersection , matching with kegg ( kyoto encyclopedia of genes and genomes ) terms cell-cycle and signaling . increasing levels of nf-yas are observed from low to high cpg island methylator phenotypes ( cimp ) . we identified 166 genes overexpressed in luad cell lines with low nf-yas/nf-yal ratios : applying this centroid to tcga samples faithfully predicted the tumors ' isoform ratio . this signature lacks ccaat in promoters . finally , story_separator_special_tag for each $ n > 1 $ , we construct explicitly a rigid weakly mixing rank- $ n $ transformation with homogeneous spectrum of multiplicity $ n $ . the existence of such transformations was established recently by o. ageev via baire category arguments ( a new short category proof is also given here ) . as an application , for any subset $ m \subset \mathbb{n} $ containing 1 , a weakly mixing transformation whose essential range for the spectral multiplicity equals $ n \cdot m $ is constructed . story_separator_special_tag each subset $ e \subset \mathbb{n} $ is realized as the set of essential values of the multiplicity function for the koopman operator of an ergodic conservative infinite measure preserving transformation . story_separator_special_tag for a dynamical system ( x , b , t , $ \mu $ ) we investigate the connections between the metric invariants , the rank $ r ( t ) $ and the covering number $ f ^ { * } ( t ) $ , and the spectral property of having a simple spectrum . given a positive integer $ r \ge 2 $ and a real number $ b $ , $ 0 < b < 1 $ , such that $ r \cdot b \ge 1 $ , we construct examples of systems with $ r ( t ) = r $ , $ f ^ { * } ( t ) = b $ and having a simple spectrum . story_separator_special_tag despite many notable advances the general problem of classifying ergodic measure preserving transformations ( mpt ) has remained wide open . we show that the action of the whole group of mpt 's on ergodic actions by conjugation is turbulent in the sense of g. hjorth .
the types of classifications ruled out by this property include countable algebraic objects such as those that occur in the halmos-von neumann theorem classifying ergodic mpt 's with pure point spectrum . we treat both the classical case of z as well as the case of general countable amenable groups . story_separator_special_tag topological dynamics and ergodic theory usually have been treated independently . h. furstenberg , instead , develops the common ground between them by applying the modern theory of dynamical systems to combinatorics and number theory . originally published in 1981. the princeton legacy library uses the latest print-on-demand technology to again make available previously out-of-print books from the distinguished backlist of princeton university press . these editions preserve the original texts of these important books while presenting them in durable paperback and hardcover editions . the goal of the princeton legacy library is to vastly increase access to the rich scholarly heritage found in the thousands of books published by princeton university press since its founding in 1905 . story_separator_special_tag we present a simple proof of the fact that every countable group $ \gamma $ is weak rohlin , that is , there is in the polish space $ a_\gamma $ of measure preserving $ \gamma $ -actions an action $ t $ whose orbit in $ a_\gamma $ under conjugations is dense . in conjunction with earlier results this in turn yields a new characterization of non-kazhdan groups as those groups which admit such an action $ t $ which is also ergodic . story_separator_special_tag the purpose of this paper is to survey recent results in the spectral theory of ergodic dynamical systems . in addition we prove some known results using new methods and mention some new results , including the recent solution to rokhlin 's problem concerning ergodic transformations having a homogeneous spectrum of multiplicity two . we emphasize applications of ideas arising from the theory of joinings and markov intertwinings . story_separator_special_tag this paper addresses the following long-standing open question : if a stationary transformation $ t $ of a probability space $ ( x , \mathcal{b} , \mu ) $ obeys the property $ \lim_{n \to \infty} \mu ( a_1 \cap t^{-n} a_2 ) = \mu ( a_1 ) \mu ( a_2 ) $ for all measurable sets $ a_1 , a_2 $ , does it follow that $ \lim_{m , n \to \infty} \mu ( a_1 \cap t^{-m} a_2 \cap t^{-m-n} a_3 ) = \mu ( a_1 ) \mu ( a_2 ) \mu ( a_3 ) $ for all measurable sets $ a_1 , a_2 , a_3 $ ? here we answer the question affirmatively for a certain class of transformations . story_separator_special_tag first part . preliminary facts.- 1. sets , categories , topology.- 1.1. sets.- 1.2. categories and functors.- 1.3. the elements of topology.- 2. groups and homogeneous spaces.- 2.1. transformation groups and abstract groups.- 2.2. homogeneous spaces.- 2.3. principal types of groups.- 2.4. extensions of groups.- 2.5. cohomology of groups.- 2.6. topological groups and homogeneous spaces.- 3. rings and modules.- 3.1. rings.- 3.2. skew fields.- 3.3. modules over rings.- 3.4. linear spaces.- 3.5. algebra.- 4. elements of functional analysis.- 4.1. linear topological spaces.- 4.2. banach algebras.- 4.3. c * -algebras.- 4.4. commutative operator algebras.- 4.5. continuous sums of hilbert spaces and von neumann algebras.- 5. analysis on manifolds.- 5.1. manifolds.- 5.2. vector fields.- 5.3. differential forms.- 5.4. bundles.- 6. lie groups and lie algebras.- 6.1. lie group.- 6.2. lie algebras.- 6.3. the connection between lie groups and lie algebras.- 6.4. the exponential mapping.- second part . basic concepts and methods of the theory of representations.- 7. representations of groups.- 7.1. linear representations.- 7.2.
representations of topological groups in linear topological space.- 7.3. unitary representations.- 8. decomposition of representations.- 8.1. decomposition of finite representation.- 8.2. irreducible representation.- 8.3. completely reducible representations.- 8.4. decomposition of unitary representations.- 9. invariant integration.- 9.1. means and story_separator_special_tag for any pair ( m , r ) such that $ 2 \le m \le r < \infty $ , we construct an ergodic dynamical system having spectral multiplicity $ m $ and rank $ r $ . the essential range of the multiplicity function is described . if $ r \ge 2 $ , the pair ( m , r ) also has a weakly mixing realization . story_separator_special_tag this book treats some basic topics in the spectral theory of dynamical systems . the treatment is at a general level , but two more advanced theorems , one by h. helson and w. parry and the other by b. host , are presented . moreover , ornstein 's family of mixing rank one automorphisms is described with construction and proof . systems of imprimitivity and their relevance to ergodic theory are discussed , and baire category theorems of ergodic theory , scattered in the literature , are derived in a unified way . riesz products are considered and they are used to describe the spectral types and eigenvalues of rank one automorphisms . story_separator_special_tag 1. basics 2. induced representations 3. the imprimitivity theorem 4. mackey analysis 5. topologies on dual spaces 6. topological frobenius properties 7. further applications references index . story_separator_special_tag in this paper upper bounds of flows defined over a normed ring are given . to this end , we calculate some images of the autonomous operator acting on $ \aseqreal $ . story_separator_special_tag preface 1. visible and invisible structures on infinite-dimensional groups 2. spinor representation 3. representations of the complex classical categories 4. fermion fock space 5. the weil representation : finite-dimensional case 6. the weil representation : infinite-dimensional case 7. representations of the diffeomorphisms of a circle and the virasoro algebra 8. the heavy groups 9. infinite-dimensional classical groups and almost invariant structures 10. some algebraic constructions of measure theory appendix a the real classical categories appendix b semple complexes , hinges , and boundaries of symmetric spaces appendix c boson-fermion correspondence appendix d univalent functions and the grunsky operator appendix e characteristic livsic function appendix f examples , counterexamples , notes references index story_separator_special_tag one of the unsolved problems in ergodic theory is the following . let t be an invertible measure preserving transformation on the unit interval . when does t have a square root ? when can t be imbedded in a flow ? in his book on ergodic theory , halmos asked , ( 1 ) if every weakly mixing transformation had a square root , ( 2 ) if every bernoulli shift had a square root , and ( 3 ) if every bernoulli shift could be imbedded in a flow . chacon [ 1 ] showed that the answer to ( 1 ) was negative . we showed [ 5 ] , [ 6 ] that the answer to ( 2 ) and ( 3 ) was yes . these results seem to indicate that `` enough mixing '' forces t to have a square root or to be imbeddable in a flow . it is the purpose of this paper to give an example of a mixing transformation that has no square root . ( t is mixing if and only if story_separator_special_tag abstract we investigate the ergodic theory of poisson suspensions .
in the process , we establish close connections between finite and infinite measure-preserving ergodic theory . poisson suspensions thus provide a new approach to infinite-measure ergodic theory . the fields investigated here are mixing properties , spectral theory , and joinings . we also compare poisson suspensions to the apparently similar-looking gaussian dynamical systems . story_separator_special_tag it is proved that mixing transformations and flows of rank 1 have mixing of any multiplicity and minimal self-joinings of any order . story_separator_special_tag a metric on the set of mixing measure-preserving transformations is introduced , making it a complete separable metric space . dense and massive subsets of this space are investigated . a generic mixing transformation is proved to have a simple singular spectrum and to be a mixing of arbitrary order ; all its powers are disjoint . the convolution powers of the maximal spectral type for such transformations are mutually singular if the ratio of the corresponding exponents is greater than 2. it is shown that the conjugates of a generic mixing transformation are dense , as are also the conjugates of an arbitrary fixed cartesian product . bibliography : 28 titles .
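for reference , the standard textbook definitions behind these mixing abstracts , stated here as a hedged summary rather than a quotation of any one paper : a measure-preserving transformation $ t $ of a probability space $ ( x , \mathcal{b} , \mu ) $ is 2-fold mixing , respectively 3-fold mixing , when

```latex
% standard definitions; the A_i range over all measurable sets.
\lim_{n \to \infty} \mu\left(A_1 \cap T^{-n} A_2\right)
  = \mu(A_1)\,\mu(A_2)
% and, respectively,
\lim_{m, n \to \infty} \mu\left(A_1 \cap T^{-m} A_2 \cap T^{-m-n} A_3\right)
  = \mu(A_1)\,\mu(A_2)\,\mu(A_3)
```

mixing of arbitrary order , as in the generic-transformation result above , asks the analogous condition for every number of sets ; rokhlin 's question quoted earlier asks whether the first property already implies the second .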
wireless sensor networks ( wsns ) are mobile ad hoc networks in which sensors have limited resources and communication capabilities . secure communications in some wireless sensor networks are critical . key management is the fundamental security mechanism in wireless sensor networks . many key management schemes have been developed in recent years . in this paper , we present a survey and taxonomy of wireless sensor network key management . we classify proposed wireless sensor network key management schemes into three categories based on the encryption key mechanism . we then divide each category into several subcategories based on key pre-distribution and key establishment . story_separator_special_tag abstract integrating wireless sensor networks ( wsns ) with the internet of things ( iot ) builds a heterogeneous device network for connecting , sharing and storing information in order to make the application environment smart . it also helps in industrial automation , supporting prediction and the construction of a fault-tolerant environment . the automation of industry can be made possible with the emergence of new developments in sensor manufacturing and communication . in this paper , a cluster-tree based energy efficient data gathering ( cteedg ) protocol is proposed to increase the lifetime and throughput of wsns . the cteedg uses fuzzy logic to select the cluster head ( ch ) based on the information collected locally . in the inter-cluster communication phase , the tree topology is established between the clusters towards the base station ( bs ) , which ensures the availability of a congestion-free shortest path to the bs . in the simulation results , the proposed cteedg outperforms famacrow and dl-leach by 28.81 % and 38.28 % in terms of throughput , and the proposed method achieves 29.26 % and 49.29 % reductions in average energy consumption when compared story_separator_special_tag a wireless sensor network ( wsn ) is usually formed by the cooperation of a number of resource-limited sensor devices connected over wireless media . it has many applications in the military , health and industrial domains . due to the limitations of sensor devices , such networks are exposed to various kinds of attacks , and conventional defenses against these attacks are not suitable given the resource-constrained nature of these networks . therefore , security in wsns is a challenging task due to the inherent limitations of sensors , and it has become an important topic for researchers . in this paper we focus on secure routing protocols in wireless sensor networks and survey nineteen papers on this matter . we present the problems they address and the methodologies used , and develop a matrix which identifies each protocol , its features and the attacks it is resistant to . story_separator_special_tag technological progress in integrated , low-power , cmos communication devices and sensors makes a rich design space of networked sensors viable . they can be deeply embedded in the physical world and spread throughout our environment like smart dust . the missing elements are an overall system architecture and a methodology for systematic advance . to this end , we identify key requirements , develop a small device that is representative of the class , design a tiny event-driven operating system , and show that it provides support for efficient modularity and concurrency-intensive operation .
our operating system fits in 178 bytes of memory , propagates events in the time it takes to copy 1.25 bytes of memory , context switches in the time it takes to copy 6 bytes of memory and supports two level scheduling . the analysis lays a groundwork for future architectural advances . story_separator_special_tag we consider two of the most important design issues for distributed sensor networks in the battlefield : security for communication in such hostile terrain ; and energy efficiency because of the battery 's limited capacity and the impracticality of recharging . communication security is normally provided by encryption , i.e . data are encrypted before transmission and are decrypted first on reception . we exploit the secure sensor network design space for energy efficiency by investigating different microprocessors coupled with various public key algorithms . we propose a power control mechanism for sensors to operate in an energy-efficient fashion using the newly developed dynamic voltage scaling ( dvs ) technique . in particular we consider multiple voltage processors and insert additional information into the communication channel to guide the selection of proper voltages for data decryption/encryption and processing in order to reduce the total computational energy consumption . we experiment with several encryption standards on a broad range of embedded processors and simulate the behavior of the sensor network to show that the sensor 's lifetime can be extended substantially . story_separator_special_tag sensor webs consisting of nodes with limited battery power and wireless communications are deployed to collect useful information from the field . gathering sensed information in an energy efficient manner is critical to operate the sensor network for a long period of time . in w. heinzelman et al . ( proc . hawaii conf . on system sci. , 2000 ) , a data collection problem is defined where , in a round of communication , each sensor node has a packet to be sent to the distant base station . if each node transmits its sensed data directly to the base station then it will deplete its power quickly . the leach protocol presented by w. heinzelman et al . is an elegant solution where clusters are formed to fuse data before transmitting to the base station . by randomizing the cluster heads chosen to transmit to the base station , leach achieves a factor of 8 improvement compared to direct transmissions , as measured in terms of when nodes die . in this paper , we propose pegasis ( power-efficient gathering in sensor information systems ) , a near optimal chain-based protocol that is an improvement over story_separator_special_tag topology control in a sensor network balances load on sensor nodes and increases network scalability and lifetime . clustering sensor nodes is an effective topology control approach . we propose a novel distributed clustering approach for long-lived ad hoc sensor networks . our proposed approach does not make any assumptions about the presence of infrastructure or about node capabilities , other than the availability of multiple power levels in sensor nodes . we present a protocol , heed ( hybrid energy-efficient distributed clustering ) , that periodically selects cluster heads according to a hybrid of the node residual energy and a secondary parameter , such as node proximity to its neighbors or node degree . heed terminates in o ( 1 ) iterations , incurs low message overhead , and achieves fairly uniform cluster head distribution across the network .
we prove that , with appropriate bounds on node density and intracluster and intercluster transmission ranges , heed can asymptotically almost surely guarantee connectivity of clustered networks . simulation results demonstrate that our proposed approach is effective in prolonging the network lifetime and supporting scalable data aggregation . story_separator_special_tag extended network lifetime and load balancing are important requirements for many wsn applications . there are many clustering routing schemes for homogeneous proactive and reactive wsns , but they suffer from the problem of uneven load distribution and back transmission . this paper presents an energy-efficient load-balanced clustering scheme with an away cluster head ( ach ) scheme and a free association mechanism to overcome these problems . ch selection depends upon several criteria , i.e . residual energy and distance from other chs , to evenly distribute the load among selected chs ; the free association mechanism , on the other hand , is used to associate nodes into clusters , which avoids back transmission . thus this scheme prolongs the network lifetime . the proposed clustering scheme is simulated using matlab version 7.10.0.499 ( r2010a ) for leach and teen . simulation results show that the proposed clustering algorithm increases the network lifetime with less energy consumption . story_separator_special_tag this document describes hmac , a mechanism for message authentication using cryptographic hash functions . hmac can be used with any iterative cryptographic hash function , e.g. , md5 , sha-1 , in combination with a secret shared key . the cryptographic strength of hmac depends on the properties of the underlying hash function . story_separator_special_tag the purpose of this document is to make the sha-1 ( secure hash algorithm 1 ) hash algorithm conveniently available to the internet community . the united states of america has adopted the sha-1 hash algorithm described herein as a federal information processing standard . most of the text herein was taken by the authors from fips 180-1. only the c code implementation is `` original '' . story_separator_special_tag distributed sensor networks ( dsns ) are ad-hoc mobile networks that include sensor nodes with limited computation and communication capabilities . dsns are dynamic in the sense that they allow addition and deletion of sensor nodes after deployment to grow the network or replace failing and unreliable nodes . dsns may be deployed in hostile areas where communication is monitored and nodes are subject to capture and surreptitious use by an adversary . hence dsns require cryptographic protection of communications , sensor-capture detection , key revocation and sensor disabling . in this paper , we present a key-management scheme designed to satisfy both operational and security requirements of dsns . the scheme includes selective distribution and revocation of keys to sensor nodes as well as node re-keying without substantial computation and communication capabilities . it relies on probabilistic key sharing among the nodes of a random graph and uses simple protocols for shared-key discovery and path-key establishment , and for key revocation , re-keying , and incremental addition of nodes . the security and network connectivity characteristics supported by the key-management scheme are discussed and simulation experiments presented . story_separator_special_tag we consider routing security in wireless sensor networks .
many sensor network routing protocols have been proposed , but none of them have been designed with security as a goal . we propose security goals for routing in sensor networks , show how attacks against ad-hoc and peer-to-peer networks can be adapted into powerful attacks against sensor networks , introduce two classes of novel attacks against sensor networks , sinkholes and hello floods , and analyze the security of all the major sensor network routing protocols . we describe crippling attacks against all of them and suggest countermeasures and design considerations . this is the first such analysis of secure routing in sensor networks . story_separator_special_tag wireless sensor networks ( wsns ) use small nodes with constrained capabilities to sense , collect , and disseminate information in many types of applications . as sensor networks become wide-spread , security issues become a central concern , especially in mission-critical tasks . in this paper , we identify the threats and vulnerabilities to wsns and summarize the defense methods based on the networking protocol layer analysis first . then we give a holistic overview of security issues . these issues are divided into seven categories : cryptography , key management , attack detections and preventions , secure routing , secure location , secure data fusion , and other security issues . along the way we analyze the advantages and disadvantages of current secure schemes in each category . in addition , we also summarize the techniques and methods used in these categories , and point out the open research issues and directions in each area . story_separator_special_tag we describe leap+ ( localized encryption and authentication protocol ) , a key management protocol for sensor networks that is designed to support in-network processing , while at the same time restricting the security impact of a node compromise to the immediate network neighborhood of the compromised node . the design of the protocol is motivated by the observation that different types of messages exchanged between sensor nodes have different security requirements , and that a single keying mechanism is not suitable for meeting these different security requirements . leap+ supports the establishment of four types of keys for each sensor node : an individual key shared with the base station , a pairwise key shared with another sensor node , a cluster key shared with multiple neighboring nodes , and a global key shared by all the nodes in the network . leap+ also supports ( weak ) local source authentication without precluding in-network processing . our performance analysis shows that leap+ is very efficient in terms of computational , communication , and storage costs . we analyze the security of leap+ under various attack models and show that leap+ is very effective in defending against many sophisticated attacks story_separator_special_tag nowadays , healthcare systems ( hs ) such as hospitals face many problems , including privacy and information security ; for this reason , the reputation of hospitals can be damaged . the proposed system addresses the issue of privacy and information security in the medical industry using a group of sending scheme ( gss ) in a wireless sensing healthcare system . to solve this problem , a healthcare system ( hs ) framework is designed that collects the medical data from the body sensor network ( bsn ) . the body sensor network collects the medical data and then transmits them into the wireless sensor network ( wsn ) .
once the medical data are collected from the wireless sensor network ( wsn ) , they are sent to the server . the server maintains the medical data collected from the patients and sends the data both to the doctor attending the patient and to the patient 's relatives . this is done by the group of sending scheme ( gss ) , which uses a key distribution scheme . in this way , the patient 's relatives know what is happening inside the hospital . this scheme provides high security for transmitting the medical data story_separator_special_tag the invention relates to a method and apparatus for electrolytic refining of copper and the production of copper wires for electrical purposes on a continual basis which produces round copper wires directly from impure copper anodes and treats such wires in order to impart the desired characteristics as electrical conductors . the apparatus handles copper anodes of customary size , refining them at normal current densities of less than 55 amps per square foot onto starting wires of adequate tensile strength ; this is done continuously , the wire being provided to an electrolytic bath and , after withdrawal from the bath , the wires are finished by drawing and annealing . story_separator_special_tag we introduce tinysec , the first fully-implemented link layer security architecture for wireless sensor networks . in our design , we leverage recent lessons learned from design vulnerabilities in security protocols for other wireless networks such as 802.11b and gsm . conventional security protocols tend to be conservative in their security guarantees , typically adding 16 -- 32 bytes of overhead . with small memories , weak processors , limited energy , and 30 byte packets , sensor networks cannot afford this luxury . tinysec addresses these extreme resource constraints with careful design ; we explore the tradeoffs among different cryptographic primitives and use the inherent sensor network limitations to our advantage when choosing parameters to find a sweet spot for security , packet overhead , and resource requirements . tinysec is portable to a variety of hardware and radio platforms . our experimental results on a 36 node distributed sensor network application clearly demonstrate that software based link layer protocols are feasible and efficient , adding less than 10 % energy , latency , and bandwidth overhead . story_separator_special_tag wireless sensor network routing protocols always neglect the security problem at the design step , while plenty of solutions to this problem exist , one of which is using key management . researchers have proposed many key management schemes , but most of them were designed for flat wireless sensor networks , which are not fit for cluster-based wireless sensor networks ( e.g . leach ) . in this paper , we investigate adding security to cluster-based routing protocols for wireless sensor networks consisting of sensor nodes with severely limited resources , and propose a security solution for leach , a protocol in which the clusters are formed dynamically and periodically . our solution uses an improved random pair-wise keys ( rpk ) scheme , an optimized security scheme that relies on symmetric-key methods ; it is lightweight and preserves the core of the original leach . simulations show that the security of rleach has been improved , with less energy consumption and lighter overhead .
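as a concrete illustration of the random key predistribution idea underlying rleach and the probabilistic key sharing scheme described earlier , here is a minimal sketch ; the pool size , ring size and network size are illustrative assumptions , not values from the papers .

```python
# minimal sketch of random key predistribution (eschenauer-gligor style):
# each node draws a random key ring from a shared pool; two neighbors can
# communicate securely iff their rings intersect. pool/ring/network sizes
# are illustrative assumptions, not values from the abstracts above.
import random
from math import comb

POOL_SIZE = 10_000   # size of the global key pool
RING_SIZE = 250      # keys preloaded on each node
NUM_NODES = 100

random.seed(0)
rings = [frozenset(random.sample(range(POOL_SIZE), RING_SIZE))
         for _ in range(NUM_NODES)]

def shared_key(a, b):
    """shared-key discovery: return one common key id, or None."""
    common = rings[a] & rings[b]
    return min(common) if common else None

# analytical probability that two random rings share at least one key
p_connect = 1 - comb(POOL_SIZE - RING_SIZE, RING_SIZE) / comb(POOL_SIZE, RING_SIZE)
print(f"analytical connect probability: {p_connect:.3f}")
print("nodes 0 and 1 share key:", shared_key(0, 1))
```

the trade-off these schemes tune is visible in the last formula : a larger ring raises the chance that neighbors connect , but also raises how many links an adversary learns by capturing one node .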
story_separator_special_tag wireless sensor networks ( wsns ) are vulnerable to node capture attacks , in which an attacker can capture one or more sensor nodes and reveal all stored security information , enabling him to compromise part of the wsn communications . due to the large number of sensor nodes and the lack of information about the deployment and hardware capabilities of sensor nodes , key management in wireless sensor networks has become a complex task . limited memory resources and energy constraints are further issues for key management in wsns . hence an efficient key management scheme is necessary which reduces the impact of node capture attacks and consumes less energy . in this paper , we develop a cluster based technique for key management in wireless sensor networks . initially , clusters are formed in the network and the cluster heads are selected based on the energy cost , coverage and processing capacity . the sink assigns a cluster key to every cluster and an ebs key set to every cluster head . the ebs key set contains the pairwise keys for intra-cluster and inter-cluster communication . during data transmission towards the sink , the data is made to pass through two story_separator_special_tag one of the challenges in wireless sensor networks is the design of scalable , energy-efficient secure routing protocols . this paper describes sheer , a secure hierarchical energy-efficient routing protocol , which provides secure communication at the network layer . sheer uses a probabilistic broadcast mechanism and a three-level hierarchical clustering architecture to improve the network energy performance and increase its lifetime . to secure the routing mechanism from the inception of the network , sheer implements hikes [ 1 ] , a hierarchical key management and authentication scheme . simulation studies compare sheer and a secure version of leach using hikes . the simulation results show that sheer is more energy-efficient than the secure leach and has better scalability . story_separator_special_tag clustered sensor networks have recently been shown to increase system throughput , decrease system delay , and save energy while performing data aggregation . whereas those with rotating cluster heads , such as leach ( low-energy adaptive clustering hierarchy ) , also have advantages in terms of security , the dynamic nature of their communication makes most existing security solutions inadequate for them . in this paper , we investigate the problem of adding security to hierarchical ( cluster-based ) sensor networks where clusters are formed dynamically and periodically , such as leach . for this purpose , we show how random key predistribution , widely studied in the context of flat networks , and μtesla , a building block from spins , can both be used to secure communications in this type of network . we present our solution , and provide a detailed analysis of how different values for the various parameters in such a system impact a hierarchical network in terms of security and energy efficiency . to the best of our knowledge , ours is the first work that investigates security in hierarchical wsns with dynamic cluster formation . story_separator_special_tag wireless sensor networks are ad hoc networks comprised mainly of small sensor nodes with limited resources , and are rapidly emerging as a technology for large-scale , low-cost , automated sensing and monitoring of different environments of interest .
cluster-based communication has been proposed for these networks for various reasons , such as scalability and energy efficiency . in this paper , we investigate the problem of adding security to cluster-based communication protocols for homogeneous wireless sensor networks consisting of sensor nodes with severely limited resources , and propose a security solution for leach , a protocol where clusters are formed dynamically and periodically . our solution uses building blocks from spins , a suite of highly optimized security building blocks that rely solely on symmetric-key methods ; it is lightweight and preserves the core of the original leach . story_separator_special_tag most routing protocols in wireless sensor networks take network lifetime as the design target , but not security . this paper investigates routing security and proposes the ss-leach algorithm based on the leach algorithm . the ss-leach algorithm makes use of node self-localization technology and a key pre-distribution strategy . it improves the method of electing cluster-heads and forms dynamic stochastic multi-path cluster-head chains . the results of simulation demonstrate that the ss-leach algorithm not only prolongs the lifetime of wireless sensor networks effectively , but also strongly enhances routing security . story_separator_special_tag secure key management is crucial to meet the security goals and to prevent wireless sensor networks ( wsns ) being compromised by an adversary . owing to the ad-hoc nature and resource limitations of sensor networks , provisioning the right key management is challenging . in this paper we present a novel secure key management ( nskm ) module providing an efficient scalable post-distribution key establishment that allows the hierarchical clustering topology platform to provision acceptable security services . to the best of our knowledge this module is the first implemented security module for wireless sensor networks that provisions reasonable resistance against replay and node capture attacks . furthermore , it provides a highly lightweight and scalable scheme . it is also suitable for use in a wireless sensor network of thousands of sensor nodes . story_separator_special_tag key management is a critical security service in wireless sensor networks ( wsns ) . it is an essential cryptographic primitive upon which other security primitives are built . the most critical security requirements in wsns include authentication and confidentiality . these security requirements can be provided by key management , but this is difficult due to the ad hoc nature , intermittent connectivity , and resource limitations of the sensor networks . in this paper we propose an authenticated key management ( akm ) scheme for hierarchical networks based on random key pre-distribution . further , a secure cluster formation algorithm is proposed . the base station periodically refreshes the network key , which provides the following : a ) authenticated network communication , and b ) global and continuous authentication of each network entity . multiple levels of encryption are provided by using two keys : 1 ) a pair-wise shared key between nodes , and 2 ) a network key . the akm scheme is more resilient to node capture as compared to other random key pre-distribution schemes .
the proposed key management scheme can be applied for different routing and energy efficient data story_separator_special_tag in a distributed sensor network , a large number of sensors are deployed which communicate among themselves to self-organize a wireless ad hoc network . we propose an energy-efficient level-based hierarchical system . we strike a compromise between energy consumption and the shortest-path route by utilizing the number of neighbors ( nbr ) of a sensor and its level in the hierarchical clustering . in addition , we design a secure routing protocol for sensor networks ( srpsn ) to safeguard data packets passing over the sensor networks under different types of attacks . we build the secure route from the source node to the sink node . the sink node is guaranteed to receive correct information using our srpsn . we also propose a group key management scheme , which contains group communication policies , group membership requirements and an algorithm for generating a distributed group key for secure communication . story_separator_special_tag in this paper , we present a secure routing protocol for sensor networks ( secrout ) to safeguard sensor networks under different types of attacks . the secrout protocol uses symmetric cryptography to secure messages , and uses a small cache in sensor nodes to record the partial routing path ( previous and next nodes ) to the destination . it guarantees that the destination will be able to identify and discard tampered messages , ensuring that the messages received have not been tampered with . comparing the performance with the non-secure routing protocol aodv ( ad hoc on demand distance vector routing ) , the secrout protocol has only a small byte overhead ( less than 6 % ) , while its packet delivery ratio is almost the same as aodv 's and its packet latency is better than aodv 's after the route discovery . story_separator_special_tag abstract wireless sensor networks are often deployed in hostile environments and operated in an unattended mode . in order to protect the sensitive data and the sensor readings , secret keys should be used to encrypt the exchanged messages between communicating nodes . due to their expensive energy consumption and hardware requirements , asymmetric-key based cryptographic schemes are not suitable for resource-constrained wireless sensors . several symmetric-key pre-distribution protocols have been investigated recently to establish secure links between sensor nodes , but most of them are not scalable due to their linearly increasing communication and key storage overheads . furthermore , existing protocols cannot provide sufficient security when the number of compromised nodes exceeds a critical value . to address these limitations , we propose an improved key distribution mechanism for large-scale wireless sensor networks . based on a hierarchical network model and a bivariate polynomial-key generation mechanism , our scheme guarantees that two communicating parties can establish a unique pairwise key between them . compared with existing protocols , our scheme can provide sufficient security no matter how many sensors are compromised . fixed key storage overhead , full network connectivity , and low communication overhead can also be story_separator_special_tag wsns are usually deployed in a targeted area to monitor or sense the environment and , depending upon the application , sensor nodes transmit the data to the base station .
to relay the data , intermediate nodes communicate with each other , select an appropriate routing path and transmit data towards the base station . routing path selection depends on the routing protocol of the network . the base station should receive unaltered and fresh data . to fulfill this requirement , the routing protocol should be energy-efficient and secure . hierarchical or cluster-based routing protocols for wsns are the most energy-efficient among routing protocols . in this paper , we study different hierarchical routing techniques for wsns . further , we analyze and compare secure hierarchical routing protocols based on various criteria .
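since several of the abstracts above build on leach-style cluster-head rotation , a minimal sketch of that election step may help ; the threshold follows the commonly cited leach rotation rule , and the fraction p and the network size are illustrative assumptions .

```python
# minimal sketch of leach-style randomized cluster-head rotation.
# the threshold follows the commonly cited leach rule; p and the
# network size are illustrative assumptions.
import random

P = 0.05          # desired fraction of cluster heads per round
NUM_NODES = 100
random.seed(1)

last_elected = {n: -10**9 for n in range(NUM_NODES)}  # round of last election

def elect_cluster_heads(r):
    """one election round: each eligible node flips a biased coin."""
    period = int(1 / P)                  # epoch length in rounds
    threshold = P / (1 - P * (r % period))
    heads = []
    for n in range(NUM_NODES):
        eligible = (r - last_elected[n]) >= period   # not a head this epoch
        if eligible and random.random() < threshold:
            last_elected[n] = r
            heads.append(n)
    return heads

for r in range(5):
    print(f"round {r}: cluster heads = {elect_cluster_heads(r)}")
```

the rising threshold within an epoch is what spreads the energy-hungry cluster-head role evenly over nodes , which is the load-balancing property the secure variants above all try to preserve .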
cloud computing has led to an increase in the capability to store and record personal data ( microdata ) in the cloud . in most cases , data providers have little or no control , which has led to concern that personal data may be breached . microaggregation techniques seek to protect microdata in such a way that data can be published and mined without providing any private information that can be linked to specific individuals . an optimal microaggregation method must minimize the information loss resulting from this replacement process . the challenge is how to minimize the information loss during the microaggregation process . this paper presents a sorting framework for statistical disclosure control ( sdc ) to protect microdata in cloud computing . it consists of two stages . in the first stage , an algorithm sorts all records in a data set in a particular way to ensure that during microaggregation very dissimilar observations are never entered into the same cluster . in the second stage a microaggregation method is used to create $ k $ story_separator_special_tag statistical agencies that provide microdata for public use strive to keep the risk of disclosure of confidential information negligible . assessing the magnitude of the risk of disclosure is not easy , however . whether a data user or intruder attempts to obtain confidential information from a public-use file depends on the perceived costs of identifying a record , the perceived probability of success , and the information expected to be gained . in this article , a decision-theoretic framework for risk assessment is developed that includes the intruder 's objectives and strategy for compromising the data base and the information gained by the intruder . two kinds of microdata disclosure are distinguished : disclosure of a respondent 's identity and disclosure of a respondent 's attributes as a result of an unauthorized identification . a formula for the risk of identity disclosure is given , and a simple approximation to it is evaluated . story_separator_special_tag americans are urged to be more savvy about preventing their personal information from falling into strangers ' hands . we are advised to shred documents containing our social security numbers , never to give our credit card numbers to people calling us , and not to put our children 's names on our personal web sites . at the same time , more and more information is being collected about each of us , just as we go about our daily lives : buying things , paying our taxes , and using public services . since 9/11 , americans have become more aware of the vast amount of personal information available as government proposals to integrate and mine this information to thwart terrorism have gotten public scrutiny . most of these proposals involve gaining information about individuals by using the newest technology to link files from diverse sources . critics have asked , `` what is to keep this technology from being used for inappropriate or illicit purposes ? '' maintaining a long tradition , government statistical agencies , such as the u.s.
census bureau and the national center for education statistics , have established policies to protect the privacy of respondents story_separator_special_tag abstract a mathematical model is developed to provide a theoretical framework for a computer-oriented solution to the problem of recognizing those records in two files which represent identical persons , objects or events ( said to be matched ) . a comparison is to be made between the recorded characteristics and values in two records ( one from each file ) and a decision made as to whether or not the members of the comparison-pair represent the same person or event , or whether there is insufficient evidence to justify either of these decisions at stipulated levels of error . these three decisions are referred to as a link ( $ a_1 $ ) , a non-link ( $ a_3 $ ) , and a possible link ( $ a_2 $ ) . the first two decisions are called positive dispositions . the two types of error are defined as the error of the decision $ a_1 $ when the members of the comparison pair are in fact unmatched , and the error of the decision $ a_3 $ when the members of the comparison pair are , in fact , matched . the probabilities of these errors are defined as $ \mu $ and $ \lambda $ , respectively . story_separator_special_tag the growing expanse of e-commerce and the widespread availability of online databases raise many fears regarding loss of privacy and many statistical challenges . even with encryption and other nominal forms of protection for individual databases , we still need to protect against the violation of privacy through linkages across multiple databases . these issues parallel those that have arisen and received some attention in the context of homeland security . following the events of september 11 , 2001 , there has been heightened attention in the united states and elsewhere to the use of multiple government and private databases for the identification of possible perpetrators of future attacks , as well as an unprecedented expansion of federal government data mining activities , many involving databases containing personal information . we present an overview of some proposals that have surfaced for the search of multiple databases which supposedly do not compromise possible pledges of confidentiality to the individuals whose data are included . we also explore their link to the related literature on privacy-preserving data mining . in particular , we focus on the matching problem across databases and the concept of `` selective revelation '' and their confidentiality implications story_separator_special_tag statistical disclosure limitation is widely used by data collecting institutions to provide safe individual data . in this paper , we propose to combine two separate disclosure limitation techniques , blanking and the addition of independent noise , in order to protect the original data . the proposed approach yields a decrease in the probability of reidentifying/disclosing the individual information , and can be applied to linear as well as nonlinear regression models . we show how to combine the blanking method and the measurement error method , and how to estimate the model by the combination of the simulation-extrapolation ( simex ) approach proposed by [ 4 ] and the inverse probability weighting ( ipw ) approach going back to [ 8 ] . we produce monte-carlo evidence on how the reduction of data quality can be minimized by this masking procedure .
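the decision rule in the record linkage abstract above can be made concrete with the usual agreement/disagreement weights ; in this minimal sketch the per-field m- and u-probabilities and the two thresholds are illustrative assumptions , not values from the article .

```python
# minimal sketch of fellegi-sunter record linkage scoring: sum log2(m/u)
# weights over compared fields, then map the total weight to the three
# decisions a1 (link), a2 (possible link), a3 (non-link). the m/u values
# and thresholds are illustrative assumptions.
from math import log2

# per-field m-probabilities p(agree | matched) and u-probabilities
# p(agree | unmatched) -- illustrative values
FIELDS = {
    "surname":    (0.95, 0.01),
    "birth_year": (0.90, 0.05),
    "zip_code":   (0.85, 0.10),
}
UPPER, LOWER = 6.0, -4.0   # weight thresholds for a1 / a3

def match_weight(rec_a, rec_b):
    w = 0.0
    for field, (m, u) in FIELDS.items():
        if rec_a[field] == rec_b[field]:
            w += log2(m / u)               # agreement weight
        else:
            w += log2((1 - m) / (1 - u))   # disagreement weight
    return w

def decide(rec_a, rec_b):
    w = match_weight(rec_a, rec_b)
    if w >= UPPER:
        return "a1: link", w
    if w <= LOWER:
        return "a3: non-link", w
    return "a2: possible link", w

a = {"surname": "smith", "birth_year": 1970, "zip_code": "10001"}
b = {"surname": "smith", "birth_year": 1970, "zip_code": "10002"}
print(decide(a, b))
```

in the original formulation the two thresholds are chosen to bound the error probabilities $ \mu $ and $ \lambda $ ; the fixed values above simply stand in for that calibration step .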
story_separator_special_tag we construct a decision-theoretic formulation of data swapping in which quantitative measures of disclosure risk and data utility are employed to select one release from a possibly large set of candidates . the decision variables are the swap rate , swap attribute ( s ) and , possibly , constraints on the unswapped attributes . risk-utility frontiers , consisting of those candidates not dominated in ( risk , utility ) space by any other candidate , are a principal tool for reducing the scale of the decision problem . multiple measures of disclosure risk and data utility , including utility measures based directly on use of the swapped data for statistical inference , are introduced . their behavior and resulting insights into the decision problem are illustrated using data from the current population survey , the well-studied czech auto worker data , and data on schools and administrators generated by the national center for education statistics . story_separator_special_tag this article introduces the post randomisation method ( pram ) as a method for disclosure protection of the categorical variables in a microdata file . applying pram means that for each record in a microdata file the score on one or more categorical variables is changed ( independently of the other records ) according to a predetermined probability mechanism . since the original data file is perturbed , it will be difficult for an intruder to identify records as corresponding to certain individuals in the population . the records in the original file are thus protected , which is the main goal of applying pram . on the other hand , since the probability mechanism that is used when applying pram is known , characteristics of the ( latent ) true data can be estimated from the perturbed data file . hence it is still possible to perform all kinds of statistical analyses after pram has been applied . originally we developed pram as the categorical-variable analogue of noise addition to continuous variables ; see e.g. , fuller ( 1993 ) , hwang ( 1986 ) , and kim and winkler ( 1995 ) . only after we had story_separator_special_tag under given concrete exogenous conditions , the fraction of identifiable records in a microdata file without positive identifiers such as name and address is estimated . the effect of possible noise in the data , as well as the sample property of microdata files , is taken into account . using real microdata files , it is shown that there is no risk of disclosure if the information content of characteristics known to the investigator ( additional knowledge ) is limited . files with additional knowledge of large information content yield a high risk of disclosure . this can be eliminated only by massive modifications of the data records , which , however , involve large biases for complex statistical evaluations . in this case , the requirement for privacy protection and high-quality data perhaps may be fulfilled only if the linkage of such files with extensive additional knowledge is prevented by appropriate organizational and legal restrictions . story_separator_special_tag when statistical agencies release microdata to the public , malicious users ( intruders ) may be able to link records in the released data to records in external databases . releasing data in ways that fail to prevent such identifications may discredit the agency or , for some data , constitute a breach of law .
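a minimal sketch of the pram mechanism described above : each record 's category is resampled independently through a known transition matrix , which can later be used to estimate characteristics of the true data from the perturbed file . the categories and matrix below are assumed for illustration :

```python
import random

# assumed categories and transition matrix P[i][j] = P(released = j | true = i);
# each row sums to 1. because P is published, moments of the true data can be
# estimated back from the perturbed file.
categories = ["employed", "unemployed", "inactive"]
P = [[0.90, 0.05, 0.05],
     [0.10, 0.85, 0.05],
     [0.05, 0.05, 0.90]]

def pram(value, rng=random):
    """perturb one categorical score according to its row of P."""
    i = categories.index(value)
    return rng.choices(categories, weights=P[i], k=1)[0]

microdata = ["employed", "employed", "inactive", "unemployed"]
print([pram(v) for v in microdata])
```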
to limit disclosures , agencies often release altered versions of the data ; however , there usually remain risks of identification . this article applies and extends the framework developed by duncan and lambert for computing probabilities of identification for sampled units . it describes methods tailored specifically to data altered by recoding and topcoding variables , data swapping , or adding random noise ( and combinations of these common data alteration techniques ) that agencies can use to assess threats from intruders who possess information on relationships among variables and the methods of data alteration . using data from the current population survey , the article illustrates a step-by-step process . story_separator_special_tag in this paper we study the impact of statistical disclosure limitation in the setting of parameter estimation for a finite population . using a simulation experiment with microdata from the 2010 american community survey , we demonstrate a framework for applying risk-utility paradigms to microdata for a finite population , which incorporates a utility measure based on estimators with survey weights and risk measures based on record linkage techniques with composite variables . the simulation study shows that special caution is needed on variance estimation for finite populations with released data that are masked by statistical disclosure limitation . we also compare various disclosure limitation methods , including a modified version of microaggregation that accommodates survey weights . the results confirm previous findings that a two-stage procedure , microaggregation with adding noise , is effective in terms of data utility and disclosure risk . story_separator_special_tag an intruder seeks to match a microdata file to an external file using a record linkage technique . the identification risk is defined as the probability that a match is correct . the nature of this probability and its estimation are explored . some connections are made to the literature on disclosure risk based on the notion of population uniqueness . story_separator_special_tag this article considers the assessment of the risk of identification of respondents in survey microdata , in the context of applications at the united kingdom ( uk ) office for national statistics ( ons ) . the threat comes from the matching of categorical key variables between microdata records and external data sources and from the use of log-linear models to facilitate matching . while the potential use of such statistical models is well established in the literature , little consideration has been given to model specification or to the sensitivity of risk assessment to this specification . in numerical work not reported here , we have found that standard techniques for selecting log-linear models , such as chi-squared goodness-of-fit tests , provide little guidance regarding the accuracy of risk estimation for the very sparse tables generated by typical applications at ons , for example , tables with millions of cells formed by cross-classifying six key variables , with sample sizes of 10,000 or 100,000 . in this article we develop new criteria for assessing the specification of a log-linear model in relation to the accuracy of risk estimates . we find that , within a class of reasonable models , story_separator_special_tag this paper proposes criteria for evaluating the minimum amount of confidentiality provided in microdata releases .
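a minimal univariate sketch of the two-stage procedure ( microaggregation followed by independent noise ) reported as effective in the simulation study above ; the group size k = 3 , the toy income values , and the noise scale are assumptions , and production sdc tools handle multivariate records and survey weights :

```python
import random
import statistics

def microaggregate(values, k=3):
    """stage 1: sort, partition into groups of at least k records, and
    replace each value by its group mean."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    groups = [order[i:i + k] for i in range(0, len(order), k)]
    if len(groups) > 1 and len(groups[-1]) < k:
        groups[-2].extend(groups.pop())        # keep every group at size >= k
    out = [0.0] * len(values)
    for g in groups:
        mean = statistics.fmean(values[i] for i in g)
        for i in g:
            out[i] = mean
    return out

def add_noise(values, rel_sd=0.05, rng=random):
    """stage 2: independent gaussian noise scaled to the data spread."""
    sd = statistics.pstdev(values) * rel_sd
    return [v + rng.gauss(0.0, sd) for v in values]

incomes = [21.0, 23.5, 22.0, 40.0, 41.5, 39.0, 90.0, 95.0, 88.0, 91.0]
print(add_noise(microaggregate(incomes, k=3)))
```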
they were developed for use on business data or other data for which large amounts of similar information are publicly available . the paper also uses these criteria to compare microdata releases based on five releasing strategies -- adding random error , multiplying by random error , grouping , random rounding , and data swapping -- using data generated from the irs report : statistics of income -- 1977 , partnership returns . story_separator_special_tag the current policy emphasis on data-driven decision-making is creating the right incentives for government agencies around the world that have not traditionally disseminated their administrative data to do so . the literature on statistical disclosure control focuses on the technical aspects of a variety of methods designed to protect data confidentiality . there is , however , a void in the literature in regard to what other elements are necessary to create and sustain a successful initiative . this paper examines six case studies of individual-level datasets . it reviews current practice in several domains and summarizes recommendations from expert practitioners , including challenges for future initiatives . story_separator_special_tag statistical agencies often mask ( or distort ) microdata in public-use files so that the confidentiality of information associated with individual entities is preserved . the intent of many of the masking methods is to cause only minor distortions in some of the distributions of the data and possibly no distortion in a few aggregate or marginal statistics . in record linkage ( as in nearest neighbor methods ) , metrics are used to determine how close a value of a variable in a record is to the value of the corresponding variable in another record . if a sufficient number of variables in one record have values that are close to values in another record , then the records may be a match and correspond to the same entity . this paper shows that it is possible to create metrics for which re-identification is straightforward in many situations where masking is currently done . we begin by demonstrating how to quickly construct metrics for continuous variables that have been micro-aggregated one at a time using conventional methods . we extend the methods to situations where rank swapping is performed and discuss the situation where several continuous variables are micro-aggregated
story_separator_special_tag lux-zeplin ( lz ) is a next-generation dark matter direct detection experiment that will operate 4850 feet underground at the sanford underground research facility ( surf ) in lead , south dakota , usa . using a two-phase xenon detector with an active mass of 7 tonnes , lz will search primarily for low-energy interactions with weakly interacting massive particles ( wimps ) , which are hypothesized to make up the dark matter in our galactic halo . in this paper , the projected wimp sensitivity of lz is presented based on the latest background estimates and simulations of the detector . for a 1000 live day run using a 5.6-tonne fiducial mass , lz is projected to exclude at 90 % confidence level spin-independent wimp-nucleon cross sections above 1.4 × 10^-48 cm^2 for a 40 gev/c^2 mass wimp . additionally , a 5σ discovery potential is projected , reaching cross sections below the exclusion limits of recent experiments . for spin-dependent wimp-neutron ( -proton ) scattering , a sensitivity of 2.3 × 10^-43 cm^2 ( 7.1 × 10^-42 cm^2 ) for a 40 gev/c^2 mass wimp is expected . with underground installation well underway , lz is on story_separator_special_tag we report the first dark matter search results from xenon1t , a 2000-kg-target-mass dual-phase ( liquid-gas ) xenon time projection chamber in operation at the laboratori nazionali del gran sasso in italy and the first ton-scale detector of this kind . the blinded search used 34.2 live days of data acquired between november 2016 and january 2017 . inside the ( 1042 ± 12 ) kg fiducial mass and in the [ 5 , 40 ] kev_nr energy range of interest for weakly interacting massive particle ( wimp ) dark matter searches , the electronic recoil background was ( 1.93 ± 0.25 ) × 10^-4 events/ ( kg × day × kev_ee ) , the lowest ever achieved in such a dark matter detector . a profile likelihood analysis shows that the data are consistent with the background-only hypothesis . we derive the most stringent exclusion limits on the spin-independent wimp-nucleon interaction cross section for wimp masses above 10 gev/c^2 , with a minimum of 7.7 × 10^-47 cm^2 for 35 gev/c^2 wimps at 90 % c.l . story_separator_special_tag we report a new search for weakly interacting massive particles ( wimps ) using the combined low background data sets acquired in 2016 and 2017 from the pandax-ii experiment in china . the latest data set contains a new exposure of 77.1 live days , with the background reduced to a level of 0.8 × 10^-3 evt/kg/day , improved by a factor of 2.5 in comparison to the previous run in 2016 . no excess events are found above the expected background . with a total exposure of 5.4 × 10^4 kg day , the most stringent upper limit on the spin-independent wimp-nucleon cross section is set for a wimp with mass larger than 100 gev/c^2 , with the lowest 90 % c.l . exclusion at 8.6 × 10^-47 cm^2 at 40 gev/c^2 . story_separator_special_tag in an extended effective operator framework of isospin violating interactions with light mediators , we investigate the compatibility of the candidate signal of cdms-ii-si with the latest constraints from darkside-50 and xenon-1t , etc . we show that the constraints from darkside-50 , which utilizes argon as the target , are complementary to those from xenon-1t , which utilizes xenon .
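the exclusion limits quoted above come from counting experiments ; as a rough sketch ( ignoring recoil spectra , energy-dependent efficiencies , and the profile-likelihood machinery the real analyses use ) , zero observed events with negligible background cap the expected signal at 2.3 events at 90 % c.l . , which converts an exposure into a cross-section limit . every number below is an assumption for illustration :

```python
# toy conversion from exposure to a spin-independent cross-section limit.
# every input is an illustrative assumption, not a number from any experiment.
N90 = 2.3                          # poisson 90% c.l. upper limit: 0 events, no background

exposure_kg_days = 5.6e3 * 1000.0  # e.g. a 5.6-tonne target for 1000 live days (assumed)
efficiency = 0.5                   # assumed flat signal acceptance
rate_per_1e45 = 0.1                # assumed events/(kg day) at sigma = 1e-45 cm^2,
                                   # which a halo + nuclear-form-factor model would supply

mu_per_1e45 = rate_per_1e45 * exposure_kg_days * efficiency
sigma_limit_cm2 = 1e-45 * N90 / mu_per_1e45
print(f"toy 90% c.l. limit: {sigma_limit_cm2:.2e} cm^2")
```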
combining the results of the two experiments , we find that for isospin violating interactions with a light mediator there is no parameter space which can be compatible with the positive signals from cdms-ii-si . as a concrete example of this framework , we investigate the dark photon model in detail . we obtain the combined limits on the dark matter mass m_χ , the dark photon mass m_a' , and the kinetic mixing parameter ε in the dark photon model . darkside-50 gives more stringent upper limits in the region of mediator mass from 0.001 to 1 gev for m_χ ≳ 6 gev in the ( m_a' , ε ) plane , and more stringent constraints for m_χ ≲ 8 gev and ε ≳ 10^-8 in the ( m_χ , m_a' ) plane . story_separator_special_tag deap-3600 is a single-phase liquid argon ( lar ) direct-detection dark matter experiment , operating 2 km underground at snolab ( sudbury , canada ) . the detector consists of 3279 kg of lar contained in a spherical acrylic vessel . this paper reports on the analysis of a 758 tonne-day exposure taken over a period of 231 live-days during the first year of operation . no candidate signal events are observed in the wimp-search region of interest , which results in the leading limit on the wimp-nucleon spin-independent cross section on a lar target of 3.9 × 10^-45 cm^2 ( 1.5 × 10^-44 cm^2 ) for a 100 gev/c^2 ( 1 tev/c^2 ) wimp mass at 90 % c.l . in addition to a detailed background model , this analysis demonstrates the best pulse-shape discrimination in lar at threshold , employs a bayesian photoelectron-counting technique to improve the energy resolution and discrimination efficiency , and utilizes two position reconstruction algorithms based on the charge and photon detection time distributions observed in each photomultiplier tube . story_separator_special_tag the effects of astrophysical uncertainties on the exclusion limits at dark matter direct detection experiments are investigated for three scenarios : elastic , momentum dependent , and inelastically scattering dark matter . we find that varying the dark matter galactic escape velocity and the sun 's circular velocity can lead to significant variations in the exclusion limits for light ( ≲ 10 gev ) elastic and inelastic scattering dark matter . we also calculate the limits using 100 velocity distributions extracted from the via lactea ii and ghalo n-body simulations and find that a maxwell-boltzmann distribution with the same astrophysical parameters generally sets less constraining limits . the elastic and momentum dependent limits remain robust for masses ≳ 50 gev under variations of the astrophysical parameters and the form of the velocity distribution . story_separator_special_tag we present results of searches for vector and pseudoscalar bosonic super-weakly interacting massive particles ( wimps ) , which are dark matter candidates with masses at the kev-scale , with the xenon100 experiment . xenon100 is a dual-phase xenon time projection chamber operated at the laboratori nazionali del gran sasso . a profile likelihood analysis of data with an exposure of 224.6 live days × 34 kg showed no evidence for a signal above the expected background .
we thus obtain new and stringent upper limits in the ( 8 - 125 ) kev/c^2 mass range , excluding couplings to electrons with coupling constants of g_ae > 3 × 10^-13 for pseudo-scalar and α'/α > 2 × 10^-28 for vector super-wimps . story_separator_special_tag the preponderance of matter over antimatter in the early universe , the dynamics of the supernovae that produced the heavy elements necessary for life , and whether protons eventually decay -- these mysteries at the forefront of particle physics and astrophysics are key to understanding the early evolution of our universe , its current state , and its eventual fate . the deep underground neutrino experiment ( dune ) is an international world-class experiment dedicated to addressing these questions as it searches for leptonic charge-parity symmetry violation , stands ready to capture supernova neutrino bursts , and seeks to observe nucleon decay as a signature of a grand unified theory underlying the standard model . the dune far detector technical design report ( tdr ) describes the dune physics program and the technical designs of the single- and dual-phase dune liquid argon tpc far detector modules . this tdr is intended to justify the technical choices for the far detector that flow down from the high-level physics goals through requirements at all levels of the project . volume i contains an executive summary that introduces the dune science program , the far detector , and the strategy for its modular designs story_separator_special_tag we present the multiple particle identification ( mpid ) network , a convolutional neural network ( cnn ) for multiple object classification , developed by microboone . mpid provides the probabilities of e^- , γ , μ^- , π^± , and protons in a single liquid argon time projection chamber ( lartpc ) readout plane . the network extends the single particle identification network previously developed by microboone . mpid takes as input an image either cropped around a reconstructed interaction vertex or containing only activity connected to a reconstructed vertex , therefore relieving the tool from inefficiencies in vertex finding and particle clustering . the network serves as an important component in microboone 's deep learning based ν_e search analysis . in this paper , we present the network 's design , training , and performance on simulation and data from the microboone detector . story_separator_special_tag a search for neutrinoless double-β decay ( 0νββ ) in 136xe is performed with the full exo-200 dataset using a deep neural network to discriminate between 0νββ and background events . relative to previous analyses , the signal detection efficiency has been raised from 80.8 % to 96.4 ± 3.0 % , and the energy resolution of the detector at the q value of 136xe 0νββ has been improved from σ/e = 1.23 % to 1.15 ± 0.02 % with the upgraded detector . accounting for the new data , the median 90 % confidence level 0νββ half-life sensitivity for this analysis is 5.0 × 10^25 yr with a total 136xe exposure of 234.1 kg yr .
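the half-life sensitivities quoted above follow from simple decay counting : for t1/2 much longer than the live time , the expected signal count is n = ln2 × n_atoms × ε × t / t1/2 , so an upper limit on counts maps to a lower limit on t1/2 . a sketch with assumed inputs ( not the published analysis values ) :

```python
import math

N_A = 6.02214076e23   # avogadro's number, 1/mol
A_XE136 = 136.0       # approximate molar mass of xe-136, g/mol

def halflife_limit_yr(exposure_kg_yr, efficiency, n_up):
    """lower limit on t1/2 (years) from an isotope exposure (kg*yr), a signal
    efficiency, and an upper limit n_up on signal counts, using
    n = ln2 * n_atoms * eff * t / t_half in the t_half >> t limit."""
    atoms_times_years = exposure_kg_yr * 1000.0 / A_XE136 * N_A
    return math.log(2.0) * atoms_times_years * efficiency / n_up

# illustrative inputs only (not the published analysis values):
print(f"t1/2 > {halflife_limit_yr(234.1, 0.96, 10.0):.2e} yr")
```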
no statistically significant evidence for 0νββ is observed , leading to a lower limit on the 0νββ half-life of 3.5 × 10^25 yr at the 90 % confidence level . story_separator_special_tag motivated by the possibility of guiding daughter ions from double beta decay events to single-ion sensors for barium tagging , the next collaboration is developing a program of r & d to test radio frequency ( rf ) carpets for ion transport in high pressure xenon gas . this would require carpet functionality in regimes at higher pressures than have been previously reported , implying correspondingly larger electrode voltages than in existing systems . this mode of operation appears plausible for contemporary rf-carpet geometries due to the higher predicted breakdown strength of high pressure xenon relative to low pressure helium , the working medium in most existing rf carpet devices . in this paper we present the first measurements of the high voltage dielectric strength of xenon gas at high pressure and at the relevant rf frequencies for ion transport ( in the 10 mhz range ) , as well as new dc and rf measurements of the dielectric strengths of high pressure argon and helium gases at small gap sizes . we find breakdown voltages that are compatible with stable rf carpet operation given the gas , pressure , voltage , materials and geometry of interest . story_separator_special_tag the large underground xenon ( lux ) collaboration has designed and constructed a dual-phase xenon detector , in order to conduct a search for weakly interacting massive particles ( wimps ) , a leading dark matter candidate . the goal of the lux detector is to clearly detect ( or exclude ) wimps with a spin independent cross-section per nucleon of 2 × 10^-46 cm^2 , equivalent to 1 event / 100 kg / month in the inner 100-kg fiducial volume ( fv ) of the 370-kg detector . the overall background goals are set to have less than 1 background event characterized as a possible wimp in the fv in 300 days of running . this paper describes the design and construction of the lux detector . story_separator_special_tag many extensions of the standard model of particle physics suggest that neutrinos should be majorana-type fermions - that is , that neutrinos are their own anti-particles - but this assumption is difficult to confirm . observation of neutrinoless double-β decay ( 0νββ ) , a spontaneous transition that may occur in several candidate nuclei , would verify the majorana nature of the neutrino and constrain the absolute scale of the neutrino mass spectrum . recent searches carried out with 76ge ( the gerda experiment ) and 136xe ( the kamland-zen and exo ( enriched xenon observatory ) -200 experiments ) have established the lifetime of this decay to be longer than 10^25 years , corresponding to a limit on the neutrino mass of 0.2 - 0.4 electronvolts . here we report new results from exo-200 based on a large 136xe exposure that represents an almost fourfold increase from our earlier published data sets . we have improved the detector resolution and revised the data analysis . the half-life sensitivity we obtain is 1.9 × 10^25 years , an improvement by a factor of 2.7 on previous exo-200 results . we find no statistically significant evidence for 0νββ decay and set a half-life limit story_separator_special_tag next is a new experiment to search for neutrinoless double beta decay using a 100 kg radio-pure high-pressure gaseous xenon tpc .
the detector requires excellent energy resolution , which can be achieved in a xe tpc with electroluminescence readout . hamamatsu r8520-06sel photomultipliers are good candidates for the scintillation readout . the performance of this photomultiplier , used as vuv photosensor in a gas proportional scintillation counter , was investigated . initial results for the detection of primary and secondary scintillation produced as a result of the interaction of 5.9 kev x-rays in gaseous xenon , at room temperature and at pressures up to 3 bar , are presented . an energy resolution of 8.0 % was obtained for secondary scintillation produced by 5.9 kev x-rays . no significant variation of the primary scintillation was observed for different pressures ( 1 , 2 and 3 bar ) and for electric fields up to 0.8 v cm^-1 torr^-1 in the drift region , demonstrating negligible recombination luminescence . a primary scintillation yield of 81 ± 7 photons was obtained for 5.9 kev x-rays , corresponding to a mean energy of 72 ± 6 ev to produce a primary scintillation photon story_separator_special_tag measurements of double photoelectron emission ( dpe ) probabilities as a function of wavelength are reported for hamamatsu r8778 , r8520 , and r11410 vuv-sensitive photomultiplier tubes ( pmts ) . in dpe , a single photon strikes the pmt photocathode and produces two photoelectrons instead of a single one . it was found that the fraction of detected photons that result in dpe emission is a function of the incident photon wavelength , and manifests itself below ~250 nm . for the xenon scintillation wavelength of 175 nm , a dpe probability of 18 - 24 % was measured , depending on the tube and measurement method . this wavelength-dependent single photon response has implications for the energy calibration and photon counting of current and future liquid xenon detectors such as lux , lz , xenon100/1t , panda-x and xmass . story_separator_special_tag the xenon100 experiment , in operation at the laboratori nazionali del gran sasso in italy , is designed to search for dark matter weakly interacting massive particles ( wimps ) scattering off 62 kg of liquid xenon in an ultralow background dual-phase time projection chamber . in this letter , we present first dark matter results from the analysis of 11.17 live days of nonblind data , acquired in october and november 2009 . in the selected fiducial target of 40 kg , and within the predefined signal region , we observe no events and hence exclude spin-independent wimp-nucleon elastic scattering cross sections above 3.4 × 10^-44 cm^2 for 55 gev/c^2 wimps at 90 % confidence level . below 20 gev/c^2 , this result constrains the interpretation of the cogent and dama signals as being due to spin-independent , elastic , light mass wimp interactions . story_separator_special_tag we present constraints on weakly interacting massive particle ( wimp ) -nucleus scattering from the 2013 data of the large underground xenon dark matter experiment , including 1.4 × 10^4 kg day of search exposure . this new analysis incorporates several advances : single-photon calibration at the scintillation wavelength , improved event-reconstruction algorithms , a revised background model including events originating on the detector walls in an enlarged fiducial volume , and new calibrations from decays of an injected tritium source and from kinematically constrained nuclear recoils down to 1.1 kev .
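a minimal sketch of the photon-counting correction implied by the dpe measurements above : with dpe probability p , each detected photon yields on average 1 + p photoelectrons . the value of p and the pulse size below are assumptions :

```python
# correcting a photoelectron count for double photoelectron emission (dpe):
# with dpe probability p, each detected photon gives on average (1 + p)
# photoelectrons, so the detected-photon estimate is n_phe / (1 + p).
P_DPE = 0.21      # assumed value inside the reported 18 - 24 % range at 175 nm

def detected_photons(n_phe, p=P_DPE):
    return n_phe / (1.0 + p)

print(f"estimated detected photons: {detected_photons(121.0):.1f}")  # toy pulse size
```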
sensitivity , especially to low-mass wimps , is enhanced compared to our previous results , which modeled the signal only above a 3 kev minimum energy . under standard dark matter halo assumptions and in the mass range above 4 gev c^-2 , these new results give the most stringent direct limits on the spin-independent wimp-nucleon cross section . the 90 % c.l . upper limit has a minimum of 0.6 zb at 33 gev c^-2 wimp mass . story_separator_special_tag we report constraints on spin-independent weakly interacting massive particle ( wimp ) -nucleon scattering using a 3.35 × 10^4 kg day exposure of the large underground xenon ( lux ) experiment . a dual-phase xenon time projection chamber with 250 kg of active mass is operated at the sanford underground research facility under lead , south dakota ( usa ) . with roughly fourfold improvement in sensitivity for high wimp masses relative to our previous results , this search yields no evidence of wimp nuclear recoils . at a wimp mass of 50 gev c^-2 , wimp-nucleon spin-independent cross sections above 2.2 × 10^-46 cm^2 are excluded at the 90 % confidence level . when combined with the previously reported lux exposure , this exclusion strengthens to 1.1 × 10^-46 cm^2 at 50 gev c^-2 . story_separator_special_tag bosonic superweakly interacting massive particles ( super-wimps ) are a candidate for warm dark matter . with the absorption of such a boson by a xenon atom , these dark matter candidates would deposit an energy equivalent to their rest mass in the detector . this is the first direct detection experiment exploring the vector super-wimps in the mass range between 40 and 120 kev . with the use of 165.9 days of data , no significant excess above background was observed in the fiducial mass of 41 kg . the present limit for the vector super-wimps excludes the possibility that such particles constitute all of dark matter . the absence of a signal also provides the most stringent direct constraint on the coupling constant of pseudoscalar super-wimps to electrons . the unprecedented sensitivity was achieved exploiting the low background at a level of 10^-4 kg^-1 kevee^-1 day^-1 in the detector . story_separator_special_tag a wide range of observational evidence suggests that the matter content of the universe is dominated by a non-baryonic and non-luminous component : dark matter . one of the most favoured candidates for dark matter is a big-bang relic population of weakly interacting massive particles ( wimps ) . the darkside program aims at the direct detection of wimps with a dual-phase liquid argon tpc and a background-free exposure . the first phase of the experiment , darkside-50 , has been running since oct 2013 and has ( 46.4 ± 0.7 ) kg active mass . a first run , with an atmospheric argon fill ( aar ) , provided the most sensitive limit ever obtained by an argon-based experiment . the current run , with an underground argon fill ( uar , depleted in 39ar ) , represents a milestone towards the construction of darkside-20k , a low-background dual-phase tpc with a fiducial mass of 20 t . this work is mainly devoted to the description of g4ds , the darkside monte carlo simulation , and to its applications .
g4ds is a geant4-based simulation ; it provides the geometry description of each detector of the darkside program , story_separator_special_tag we report here methods and techniques for creating and improving a model that reproduces the scintillation and ionization response of a dual-phase liquid and gaseous xenon time-projection chamber . starting with the recent release of the noble element simulation technique ( nest v2.0 ) , electronic recoil data from the β decays of 3h and 14c in the large underground xenon ( lux ) detector were used to tune the model , in addition to external data sets that allow for extrapolation beyond the lux data-taking conditions . this paper also presents techniques used for modeling complicated temporal and spatial detector pathologies that can adversely affect data using a simplified model framework . the methods outlined in this report show an example of the robust applications possible with nest v2.0 , while also providing the final electronic recoil model and detector parameters that will be used in the new analysis package , the lux legacy analysis monte carlo application ( llama ) , for accurate reproduction of the lux data . as accurate background reproduction is crucial for the success of rare-event searches , such as story_separator_special_tag astronomical evidence indicates that 23 % of the energy density in the universe is comprised of some form of non-standard , non-baryonic matter that has yet to be observed . one of the predominant theories is that dark matter consists of wimps ( weakly interacting massive particles ) , so named because they do not interact electromagnetically or through the strong nuclear force . in direct dark matter detection experiments the goal is to look for evidence of collisions between wimps and other particles such as heavy nuclei . here , the challenge is to measure exceedingly rare interactions with very high precision . in recent years xenon has risen as a medium for particle detection , exhibiting a number of desirable qualities that make it well-suited for direct wimp searches . the lux ( large underground xenon ) experiment is a 350-kg xenon-based direct dark matter detection experiment currently deployed at the homestake mine in lead , south dakota , consisting of a two-phase ( liquid/gas ) xenon time projection chamber with a 100-kg fiducial mass . its projected sensitivity for 300 days of underground data acquisition is a cross-section of 7 × 10^-46 cm^2 for a wimp mass story_separator_special_tag the large underground xenon ( lux ) experiment is a dual-phase liquid xenon time projection chamber ( tpc ) operating at the sanford underground research facility in lead , south dakota .
a calibration of nuclear recoils in liquid xenon was performed in situ in the lux detector using a collimated beam of mono-energetic 2.45 mev neutrons produced by a deuterium-deuterium ( d-d ) fusion source . the nuclear recoil energy from the first neutron scatter in the tpc was reconstructed using the measured scattering angle defined by double-scatter neutron events within the active xenon volume . we measured the absolute charge ( q_y ) and light ( l_y ) yields at an average electric field of 180 v/cm for nuclear recoil energies spanning 0.7 to 74 kev and 1.1 to 74 kev , respectively . this calibration of the nuclear recoil signal yields will permit the further refinement of liquid xenon nuclear recoil signal models and , importantly for dark matter searches , clearly demonstrates measured ionization and scintillation signals in this medium at recoil energies down to o ( 1 kev ) story_separator_special_tag we study the basic integral equation in lindhard 's theory describing the energy given to atomic motion by nuclear recoils in a pure material when the atomic binding energy is taken into account . the numerical solution , which depends only on the slope of the velocity-proportional electronic stopping power and the binding energy , leads to an estimation of the ionization efficiency which is in good agreement with the available experimental measurements for si and ge . in this model , the quenching factor for nuclear recoils features a cut-off at an energy equal to twice the assumed binding energy . we argue that the model is a reasonable approximation for ge even for energies close to the cutoff , while for si it is valid up to recoil energies greater than ~500 ev . story_separator_special_tag the xenon10 experiment at the gran sasso national laboratory uses a 15 kg xenon dual phase time projection chamber to search for dark matter weakly interacting massive particles ( wimps ) . the detector measures simultaneously the scintillation and the ionization produced by radiation in pure liquid xenon to discriminate signal from background down to 4.5 kev nuclear-recoil energy . a blind analysis of 58.6 live days of data , acquired between october 6 , 2006 , and february 14 , 2007 , and using a fiducial mass of 5.4 kg , excludes previously unexplored parameter space , setting a new 90 % c.l . upper limit for the wimp-nucleon spin-independent cross section of 8.8 × 10^-44 cm^2 for a wimp mass of 100 gev/c^2 , and 4.5 × 10^-44 cm^2 for a wimp mass of 30 gev/c^2 . this result further constrains predictions of supersymmetric models . story_separator_special_tag liquid xenon ( lxe ) is an excellent material for experiments designed to detect dark matter in the form of weakly interacting massive particles ( wimps ) . a low energy detection threshold is essential for a sensitive wimp search . the understanding of the relative scintillation efficiency ( l_eff ) and ionization yield of low energy nuclear recoils in lxe is limited for energies below 10 kev . in this article , we present new measurements that extend the energy range down to 4 kev , finding that l_eff decreases with decreasing energy . we also measure the quenching of scintillation efficiency caused by the electric field in lxe , finding no significant field dependence .
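a sketch of the standard lindhard parametrization against which nuclear-recoil yields like those above are usually compared ; k = 0.166 is a commonly quoted choice for xenon but is itself a fit parameter , so treat these numbers as illustrative :

```python
def lindhard_factor(e_nr_kev, z=54, k=0.166):
    """fraction of a nuclear recoil's energy reaching electronic excitation.
    epsilon is the reduced energy 11.5 * e[kev] * z^(-7/3), and
    g(eps) = 3 eps^0.15 + 0.7 eps^0.6 + eps; l = k g / (1 + k g)."""
    eps = 11.5 * e_nr_kev * z ** (-7.0 / 3.0)
    g = 3.0 * eps ** 0.15 + 0.7 * eps ** 0.6 + eps
    return k * g / (1.0 + k * g)

for e in (1.0, 5.0, 25.0, 74.0):
    print(f"{e:5.1f} kev_nr -> l = {lindhard_factor(e):.3f}")
```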
story_separator_special_tag particle detectors that use liquid xenon ( lxe ) as detection medium are among the leading technologies in the search for dark matter weakly interacting massive particles ( wimps ) . a key enabling element has been the low-energy detection threshold for recoiling nuclei produced by the interaction of wimps in lxe targets . in these detectors , the nuclear recoil energy scale is based on the lxe scintillation signal and thus requires knowledge of the relative scintillation efficiency of nuclear recoils , l_eff . the uncertainty in l_eff at low energies is the largest systematic uncertainty in the reported results from lxe wimp searches at low masses . in the context of the xenon dark matter project , a new lxe scintillation detector has been designed and built specifically for the measurement of l_eff at low energies , with an emphasis on maximizing the scintillation light detection efficiency to obtain the lowest possible energy threshold . story_separator_special_tag a comprehensive model for explaining scintillation yield in liquid xenon is introduced . we unify various definitions of work function which abound in the literature and incorporate all available data on electron recoil scintillation yield . this results in a better understanding of electron recoil , and facilitates an improved description of nuclear recoil . an incident gamma energy range of o ( 1 kev ) to o ( 1 mev ) and electric fields between 0 and o ( 10 kv/cm ) are incorporated into this heuristic model . we show results from a geant4 implementation , but because the model has a few free parameters , implementation in any simulation package should be simple . we use a quasi-empirical approach , with an objective of improving detector calibrations and performance verification . the model will aid in the design and optimization of future detectors . this model is also easy to extend to other noble elements . in this paper we lay the foundation for an exhaustive simulation code which we call nest ( noble element simulation technique ) . story_separator_special_tag we have measured the energy dependence of the liquid xenon ( lxe ) scintillation yield for electrons with energies between 2.1 and 120.2 kev , using the compton coincidence technique . a lxe scintillation detector with a very high light detection efficiency was irradiated with 137cs γ rays , and the energy of the compton-scattered γ rays was measured with a high-purity germanium detector placed at different scattering angles . the excellent energy resolution of the high-purity germanium detector allows the selection of events with compton electrons of known energy in the lxe detector . we find that the scintillation yield initially increases as the electron energy decreases from 120 to about 60 kev , but then decreases by about 30 % from 60 to 2 kev . the scintillation yield was also measured with conversion electrons from the 32.1 and 9.4 kev transitions of the 83m-kr isomer , used as an internal calibration source .
we find that the scintillation yield of the 32.1 kev transition is compatible with that obtained from story_separator_special_tag the noble element simulation technique ( nest ) is an extensive collection of models explaining both the scintillation light and ionization yields of noble elements as a function of particle type ( nuclear recoils , electron recoils , alphas ) , electric field , and incident energy or energy loss ( de/dx ) . it is packaged as c++ code for geant4 that implements said models , overriding the default model , which does not account for certain complexities such as the reduction in yields for nuclear recoils ( nr ) compared to electron recoils ( er ) . we present here improvements to the existing nest models and updates to the code which make the package even more realistic and turn it into a more full-fledged monte carlo simulation . all available liquid xenon data on nr and er to date have been taken into consideration in arriving at the current models . furthermore , nest addresses the question of the magnitude of the light and charge yields of nuclear recoils , including their electric field dependence , thereby helping to understand the capabilities of liquid xenon detectors for detection or exclusion of a low-mass dark matter wimp . story_separator_special_tag we show for the first time that the quenching of electronic excitation from nuclear recoils in liquid xenon is well-described by lindhard theory , if the nuclear recoil energy is reconstructed using the combined ( scintillation and ionization ) energy scale proposed by shutt et al . we argue for the adoption of this perspective in favor of the existing preference for reconstructing nuclear recoil energy solely from primary scintillation . we show that signal partitioning into scintillation and ionization is well-described by the thomas-imel box model . we discuss the implications for liquid xenon detectors aimed at the direct detection of dark matter . story_separator_special_tag the wimp limit set by the xenon10 experiment in 2007 signals a new era in direct detection of dark matter , with several large-scale liquid target detectors now under construction . a major challenge in these detectors will be to understand backgrounds at the level necessary to claim a positive wimp signal . in liquid xenon , these backgrounds are dominated by electron recoils , which may be distinguished from the wimp signal ( nuclear recoils ) by their higher charge-to-light ratio . during the construction and operation of xenon10 , the prototype detector xed probed the physics of this discrimination . particle interactions in liquid xenon both ionize and excite xenon atoms , giving charge and scintillation signals , respectively . some fraction of ions recombine , reducing the charge signal and creating additional scintillation . the charge-to-light ratio , determined by the initial exciton-ion ratio and the ion recombination fraction , provides the basis for discrimination between electron and nuclear recoils . intrinsic fluctuations in the recombination fraction limit discrimination .
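a sketch of the combined ( scintillation plus ionization ) energy scale referenced above : e = w ( n_γ + n_e ) = w ( s1/g1 + s2/g2 ) . the w-value and the gains g1 , g2 are detector-specific placeholders here , not measured values :

```python
W_EV = 13.7   # assumed work function, ev per quantum (an often-quoted value)

def combined_energy_kev(s1_phd, s2_phd, g1=0.1, g2=20.0):
    """e = w * (n_gamma + n_e) = w * (s1/g1 + s2/g2). the gains g1
    (phd per photon) and g2 (phd per electron) are detector-specific;
    the defaults here are placeholders of a typical magnitude."""
    return W_EV * 1e-3 * (s1_phd / g1 + s2_phd / g2)

print(f"{combined_energy_kev(s1_phd=50.0, s2_phd=3000.0):.2f} kev")  # -> about 8.9 kev
```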
changes in recombination induce an exact anti-correlation between charge and light , and when calibrated this anti-correlation distinguishes recombination fluctuations from uncorrelated fluctuations in the measured signals . story_separator_special_tag lux , the world 's largest dual-phase xenon time-projection chamber , with a fiducial target mass of 118 kg and 10,091 kg-days of exposure thus far , is currently the most sensitive direct dark matter search experiment . the initial null-result limit on the spin-independent wimp-nucleon scattering cross-section was released in october 2013 , with a primary scintillation threshold of 2 phe , roughly 3 kevnr for lux . the detector has been deployed at the sanford underground research facility ( surf ) in lead , south dakota , and is the first experiment to achieve a limit on the wimp cross-section lower than 10^-45 cm^2 . here we present a more in-depth discussion of the novel energy scale employed to better understand the nuclear recoil light and charge yields , and of the calibration sources , including the new internal tritium source . we found the lux data to be in conflict with low-mass wimp signal interpretations of other results . story_separator_special_tag we report results of a search for light ( ≲ 10 gev ) particle dark matter with the xenon10 detector . the event trigger was sensitive to a single electron , with the analysis threshold of 5 electrons corresponding to 1.4 kev nuclear recoil energy . considering spin-independent dark matter-nucleon scattering , we exclude cross sections σ_n > 7 × 10^-42 cm^2 for a dark matter particle mass m_χ = 7 gev . we find that our data strongly constrain recent elastic dark matter interpretations of excess low-energy events observed by cogent and cresst-ii , as well as the dama annual modulation signal . story_separator_special_tag we report constraints on light dark matter ( dm ) models using ionization signals in the xenon1t experiment . we mitigate backgrounds with strong event selections , rather than requiring a scintillation signal , leaving an effective exposure of ( 22 ± 3 ) tonne day . above 0.4 kev_ee , we observe < 1 event/ ( tonne day kev_ee ) , which is more than 1000 times lower than in similar searches with other detectors . despite observing a higher rate at lower energies , no dm or cevns detection may be claimed because we cannot model all of our backgrounds . we thus exclude new regions in the parameter spaces for dm-nucleus scattering for dm masses m_χ within 3 - 6 gev/c^2 , dm-electron scattering for m_χ > 30 mev/c^2 , and absorption of dark photons and axionlike particles for m_χ within 0.186 - 1 kev/c^2 . story_separator_special_tag xenon10 is the first two-phase xenon time projection chamber ( tpc ) developed within the xenon dark matter search program . the tpc , with an active liquid xenon ( lxe ) mass of about 14 kg , was installed at the gran sasso underground laboratory ( lngs ) in italy , and operated for more than one year , with excellent stability and performance . results from a dark matter search with xenon10 have been published elsewhere . in this paper , we summarize the design and performance of the detector and its subsystems , based on calibration data using sources of gamma-rays and neutrons as well as background and monte carlo simulation data .
the results on the detector 's energy threshold , position resolution , and overall efficiency show a performance that exceeds design specifications , in view of the very low energy threshold achieved story_separator_special_tag xenon dual-phase time projection chambers designed to search for weakly interacting massive particles have so far shown a relative energy resolution which degrades with energy above ~200 kev due to saturation effects . this has limited their sensitivity in the search for rare events like the neutrinoless double-beta decay of 136xe at its q value , q_ββ ≈ 2.46 mev . for the xenon1t dual-phase time projection chamber , we demonstrate that the relative energy resolution at 1 σ/μ is as low as ( 0.80 ± 0.02 ) % in its one-ton fiducial mass , and for single-site interactions at q_ββ . we also present a new signal correction method to rectify the saturation effects of the signal readout system , resulting in more accurate position reconstruction and indirectly improving the energy resolution . the very good result achieved in xenon1t opens up new windows for the xenon dual-phase dark matter detectors to simultaneously search for other rare events . story_separator_special_tag we report results from searches for new physics with low-energy electronic recoil data recorded with the xenon1t detector . with an exposure of 0.65 tonne-years and an unprecedentedly low background rate of 76 ± 2 ( stat ) events/ ( tonne × year × kev ) between 1 and 30 kev , the data enable one of the most sensitive searches for solar axions , an enhanced neutrino magnetic moment using solar neutrinos , and bosonic dark matter . an excess over known backgrounds is observed at low energies and is most prominent between 2 and 3 kev . the solar axion model has a 3.4σ significance , and a three-dimensional 90 % confidence surface is reported for axion couplings to electrons , photons , and nucleons . this surface is inscribed in the cuboid defined by g_ae < 3.8 × 10^-12 , g_ae g_an^eff < 4.8 × 10^-18 , and g_ae g_aγ < 7.7 × 10^-22 gev^-1 , and excludes either g_ae = 0 or g_ae g_aγ = g_ae g_an^eff = 0 . the neutrino magnetic moment signal is similarly favored over background at 3.2σ , and a confidence interval of ( 1.4 , 2.9 ) × 10^-11 μ_b ( 90 % c.l . ) is reported . both results are in strong tension with stellar constraints . the excess can also be explained by decays of tritium at 3.2 story_separator_special_tag we report the measurement of the emission time profile of scintillation from gamma-ray induced events in the xmass-i 832 kg liquid xenon scintillation detector . the decay time constant was derived from a comparison of scintillation photon timing distributions between the observed data and simulated samples in order to take into account optical processes such as absorption and scattering in liquid xenon . calibration data from the radioactive sources 55fe , 241am , and 57co were used to obtain the decay time constant . assuming two decay components , τ1 and τ2 , the decay time constant τ2 increased from 27.9 ns to 37.0 ns as the gamma-ray energy increased from 5.9 kev to 122 kev . the accuracy of the measurement was better than 1.5 ns at all energy levels . a fast decay component with τ1 ~ 2 ns was necessary to reproduce the data .
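a sketch of the two-component scintillation pulse model behind the τ1 , τ2 measurement above , drawing photon emission times from a mixture of two exponentials ; the fast fraction and time constants are illustrative assumptions :

```python
import random

def sample_photon_time(tau1_ns=2.0, tau2_ns=30.0, fast_fraction=0.2, rng=random):
    """draw one photon emission time from a two-component exponential mixture:
    a fast component (weight fast_fraction, decay tau1) plus a slow one (tau2)."""
    tau = tau1_ns if rng.random() < fast_fraction else tau2_ns
    return rng.expovariate(1.0 / tau)

times = [sample_photon_time() for _ in range(10000)]
# crude check: the mean should approach f*tau1 + (1-f)*tau2 = 24.4 ns here
print(f"mean emission time: {sum(times) / len(times):.1f} ns")
```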
the energy dependencies of τ2 and the fraction of the fast decay component were studied as a function of the kinetic energy of electrons induced by gamma-rays . the obtained data almost reproduced previously reported results and extended them to the lower energy region relevant to direct dark matter searches . story_separator_special_tag liquid xenon ( lxe ) is expected to be an excellent target and detection medium to search for dark matter in the form of weakly interacting massive particles ( wimps ) . we have measured the scintillation efficiency of nuclear recoils with kinetic energy between 10.4 and 56.5 kev relative to that of 122 kev gamma rays from 57co . the scintillation yield of 56.5 kev recoils was also measured as a function of applied electric field , and compared to that of gamma rays and alpha particles . the xe recoils were produced by elastic scattering of 2.4 mev neutrons in liquid xenon at a variety of scattering angles . the relative scintillation efficiency is 0.130 ± 0.024 and 0.227 ± 0.016 for the lowest and highest energy recoils , respectively . this is about 15 % less than the value predicted by lindhard , based on nuclear quenching . our results are in good agreement with more recent theoretical predictions that consider the additional reduction of scintillation yield due to biexcitonic collisions story_separator_special_tag we report the first measurements of the absolute ionization yield of nuclear recoils in liquid xenon , as a function of energy and electric field . independent experiments were carried out with two dual-phase time-projection chamber prototypes , developed for the xenon dark matter project . we find that the charge yield increases with decreasing recoil energy , and exhibits only a weak field dependence . these results are the first unambiguous demonstration of the capability of dual-phase xenon detectors to discriminate between electron and nuclear recoils down to 20 kev , a key requirement for a sensitive dark matter search . story_separator_special_tag the response of liquid xenon to low-energy electronic recoils is relevant in the search for dark-matter candidates which interact predominantly with atomic electrons in the medium , such as axions or axionlike particles , as opposed to weakly interacting massive particles , which are predicted to scatter with atomic nuclei . recently , liquid-xenon scintillation light has been observed from electronic recoils down to 2.1 kev , but without the applied electric fields that are used in most xenon dark-matter searches . applied electric fields can reduce the scintillation yield by hindering the electron-ion recombination process that produces most of the scintillation photons . we present new results on liquid xenon 's scintillation emission in response to electronic recoils as low as 1.5 kev , with and without an applied electric field . at zero field , a reduced scintillation output per unit deposited energy is observed below 10 kev , dropping to nearly 40 % of its value at higher energies . with an applied electric field of 450 v/cm , we observe a reduction of the scintillation output to about 75 % relative to the value at zero field . we see no significant energy dependence of this value between story_separator_special_tag we report on a search for weakly interacting massive particles ( wimps ) using 278.8 days of data collected with the xenon1t experiment at lngs .
xenon1t utilizes a liquid xenon time projection chamber with a fiducial mass of ( 1.30 ± 0.01 ) ton , resulting in a 1.0 ton yr exposure . the energy region of interest , [ 1.4 , 10.6 ] kev_ee ( [ 4.9 , 40.9 ] kev_nr ) , exhibits an ultralow electron recoil background rate of [ 82 +5/-3 ( syst ) ± 3 ( stat ) ] events/ ( ton yr kev_ee ) . no significant excess over background is found , and a profile likelihood analysis parametrized in spatial and energy dimensions excludes new parameter space for the wimp-nucleon spin-independent elastic scatter cross section for wimp masses above 6 gev/c^2 , with a minimum of 4.1 × 10^-47 cm^2 at 30 gev/c^2 and a 90 % confidence level . story_separator_special_tag india is the world 's third-largest emitter of carbon dioxide and is developing rapidly . while india has pledged an emissions-intensity reduction as its contribution to the paris agreement , the country does not regularly report emissions statistics , making tracking progress difficult . moreover , all estimates of india 's emissions in global datasets represent its financial year , which is not aligned to the calendar year used by almost all other countries . here i compile monthly energy and industrial activity data allowing for the estimation of india 's co2 emissions by month and calendar year with a short lag . emissions show clear seasonal patterns , and the series allows for the investigation of short-lived but highly significant events , such as the near-record monsoon in 2019 and the covid-19 crisis in 2020 . data are available at https://doi.org/10.5281/zenodo.3894394 ( andrew , 2020a ) story_separator_special_tag the large underground xenon ( lux ) experiment is a dual-phase xenon time-projection chamber operating at the sanford underground research facility ( lead , south dakota ) . the lux cryostat was filled for the first time in the underground laboratory in february 2013 . we report results of the first wimp search data set , taken during the period from april to august 2013 , presenting the analysis of 85.3 live days of data with a fiducial volume of 118 kg . a profile-likelihood analysis technique shows our data to be consistent with the background-only hypothesis , allowing 90 % confidence limits to be set on spin-independent wimp-nucleon elastic scattering with a minimum upper limit on the cross section of 7.6 × 10^-46 cm^2 at a wimp mass of 33 gev/c^2 . we find that the lux data are in disagreement with low-mass wimp signal interpretations of the results from several recent direct detection experiments . story_separator_special_tag the search for neutrinoless double-β decay addresses the major physics goals of revealing the nature of the neutrino and setting an absolute scale for its mass . the observation of a positive 0νββ signal , the unique signature of majorana neutrinos , would have deep consequences in particle physics and cosmology . therefore , any claim of observing a positive signal shall require extremely robust evidence . next is a new double-β experiment which aims at building a 100 kg high pressure 136xe gas tpc , to be hosted in the canfranc underground laboratory ( lsc ) , in spain . this paper addresses the novel design concept of the next tpc , believed to provide a pathway for an optimized and robust double-β experiment .
story_separator_special_tag the ionization of liquefied noble gases by radiation is known to be accompanied by fluctuations much larger than predicted by poisson statistics . we have studied the fluctuations of both scintillation and ionization in liquid xenon and have measured , for the first time , a strong anti-correlation between the two at a microscopic level , with coefficient -0.80 < ρ_ep < -0.60 . this provides direct experimental evidence that electron-ion recombination is partially responsible for the anomalously large fluctuations and at the same time allows substantial improvement of calorimetric energy resolution . story_separator_special_tag a comprehensive model for describing the characteristics of pulsed signals , generated by particle interactions in xenon detectors , is presented . an emphasis is laid on two-phase time projection chambers , but the models presented are also applicable to single phase detectors . in order to simulate the pulse shape due to primary scintillation light , the effects of the ratio of singlet and triplet dimer state populations , as well as their corresponding decay times , and the recombination time are incorporated into the model . in a two-phase time projection chamber , when simulating the pulse caused by electroluminescence light , the ionization electron mean free path in gas , the drift velocity , singlet and triplet decay times , diffusion constants , and the electron trapping time have been implemented . this modeling has been incorporated into a complete software package , which realistically simulates the expected pulse shapes for these types of detectors . story_separator_special_tag this note presents revised detector parameters applicable to data from the first science run of the zeplin-iii dark matter experiment ; these datasets were acquired in 2008 and reanalysed in 2011 .
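a toy numerical illustration of the charge-light anti-correlation discussed above : recombination moves quanta between the scintillation and ionization channels while leaving their sum fixed , so the summed signal has a much better resolution than either channel alone . all parameters are assumed :

```python
import random
import statistics

# toy events: a fixed number of quanta is split into photons and electrons;
# a fluctuating recombination fraction moves quanta between the two channels,
# making them anti-correlated while leaving their sum nearly unchanged.
rng = random.Random(1)
N_QUANTA = 1000
light, charge, combined = [], [], []
for _ in range(20000):
    r = min(max(rng.gauss(0.5, 0.05), 0.0), 1.0)      # recombination fraction
    n_gamma = N_QUANTA * r + rng.gauss(0.0, 5.0)      # photons + small detector noise
    n_e = N_QUANTA * (1.0 - r) + rng.gauss(0.0, 5.0)  # electrons + noise
    light.append(n_gamma)
    charge.append(n_e)
    combined.append(n_gamma + n_e)

for name, x in (("light", light), ("charge", charge), ("combined", combined)):
    print(f"{name:8s} sigma/mu = {statistics.pstdev(x) / statistics.fmean(x):.4f}")
```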
this run demonstrated electron recoil discrimination in liquid xenon at the level of 1 part in 10,000 below 40 kev nuclear recoil energy , at an electric field of 3.8 kv/cm ; this remains the best discrimination reported for this medium to date . building on relevant measurements published in recent years , the calibration of the scintillation and ionisation responses for both electron and nuclear recoils , which had been mapped linearly to co-57 gamma-ray interactions , is converted here into optical parameters which are better suited to relate the data to the emerging liquid xenon response models . additional information is given on the fitting of the electron and nuclear recoil populations at low energy . the aim of this note is to support the further development of these models with valuable data acquired at high field . story_separator_special_tag dual-phase xenon detectors , as currently used in direct detection dark matter experiments , have observed elevated rates of background electron events in the low energy region . while this background negatively impacts detector performance in various ways , its origins have only been partially studied . in this paper we report a systematic investigation of the electron pathologies observed in the lux dark matter experiment . we characterize different electron populations based on their emission intensities and their correlations with preceding energy depositions in the detector . by studying the background under different experimental conditions , we identified the leading emission mechanisms , including photoionization and the photoelectric effect induced by the xenon luminescence , delayed emission of electrons trapped under the liquid surface , capture and release of drifting electrons by impurities , and grid electron emission . we discuss how these backgrounds can be mitigated in lux and future xenon-based dark matter experiments . story_separator_special_tag the search for dark matter , the missing mass of the universe , is one of the most active fields of study within particle physics . the xenon1t experiment recently observed a 3.5σ excess potentially consistent with dark matter , or with solar axions . here , we will use the noble element simulation technique ( nest ) software to simulate the xenon1t detector , reproducing the excess . we utilize different detector efficiency and energy reconstruction models , but they primarily impact sub-kev energies and cannot explain the xenon1t excess . however , using nest , we can reproduce their excess in multiple , unique ways , most easily via the addition of 31 ± 11 37ar decays . furthermore , this results in new , modified background models , reducing the significance of the excess to ≤ 2.2σ , at least using non-profile-likelihood-ratio ( plr ) methods . this is independent confirmation that the excess is a real effect , but potentially explicable by known physics . many cross-checks of our story_separator_special_tag author ( s ) : boulton , em ; bernard , e ; destefano , n ; edwards , bnv ; gai , m ; hertel , sa ; horn , m ; larsen , na ; tennyson , bp ; wahl , c ; mckinsey , dn | abstract : we calibrate a two-phase xenon detector at 0.27 kev in the charge channel and at 2.8 kev in both the light and charge channels using a 37ar source that is directly released into the detector .
we map the light and charge yields as a function of electric drift field . for the 2.8 kev peak , we calculate the thomas-imel box parameter for recombination and determine its dependence on drift field . for the same peak , we achieve an energy resolution , σ_e / e_mean , between 9.8 % and 10.8 % for 0.1 kv/cm to 2 kv/cm electric drift fields . story_separator_special_tag we report on the response of liquid xenon to low energy electronic recoils below 15 kev from beta decays of tritium at drift fields of 92 v/cm , 154 v/cm and 366 v/cm using the xenon100 detector . a data-to-simulation fitting method based on markov chain monte carlo is used to extract the photon yields and recombination fluctuations from the experimental data . the photon yields measured at the two lower fields are in agreement with those from literature ; additional measurements at a higher field of 366 v/cm are presented . the electronic and nuclear recoil discrimination as well as its dependence on the drift field and photon detection efficiency are investigated at these low energies . the results provide new measurements in the energy region of interest for dark matter searches using liquid xenon . story_separator_special_tag the lux experiment has performed searches for dark-matter particles scattering elastically on xenon nuclei , leading to stringent upper limits on the nuclear scattering cross sections for dark matter . here , for results derived from 1.4×10^4 kg days of target exposure in 2013 , details of the calibration , event-reconstruction , modeling , and statistical tests that underlie the results are presented . detector performance is characterized , including measured efficiencies , stability of response , position resolution , and discrimination between electron- and nuclear-recoil populations . models are developed for the drift field , optical properties , background populations , the electron- and nuclear-recoil responses , and the absolute rate of low-energy background events . innovations in the analysis include in situ measurement of the photomultipliers ' response to xenon scintillation photons , verification of fiducial mass with a low-energy internal calibration source , and new empirical models for low-energy signal yield based on large-sample , in situ calibrations . story_separator_special_tag the first searches for axions and axionlike particles with the large underground xenon experiment are presented . under the assumption of an axioelectric interaction in xenon , the coupling constant between axions and electrons g_ae is tested using data collected in 2013 with an exposure totaling 95 live days × 118 kg . a double-sided , profile likelihood ratio statistic test excludes g_ae larger than 3.5×10^-12 ( 90 % c.l . ) for solar axions . assuming the dine-fischler-srednicki-zhitnitsky theoretical description , the upper limit in coupling corresponds to an upper limit on axion mass of 0.12 ev/c^2 , while for the kim-shifman-vainshtein-zakharov description masses above 36.6 ev/c^2 are excluded . for galactic axionlike particles , values of g_ae larger than 4.2×10^-13 are excluded for particle masses in the range 1–16 kev/c^2 . these are the most stringent constraints to date for these interactions . story_separator_special_tag the dependence of the light and charge yield of liquid xenon on the applied electric field and recoil energy is important for dark matter detectors using liquid xenon time projection chambers .
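aside : the thomas-imel box model invoked in the 37ar calibration entry above is also the standard parametrization of the field-dependent recombination introduced in the preceding sentence . a minimal sketch , assuming an exciton-free toy and an illustrative box parameter ; the absolute yields below are not measurements , only the qualitative trend ( more recombination at higher energy ) is the point .

# --- sketch ( python ) : thomas-imel box recombination ---
# recombination fraction r = 1 - ln(1 + xi) / xi , with xi proportional to the
# number of ion pairs n_i times a lumped , field-dependent " box parameter " .
import math

def recombination_fraction(n_i, box_param):
    """thomas-imel recombination fraction for n_i electron-ion pairs."""
    xi = n_i * box_param / 4.0
    return 1.0 - math.log1p(xi) / xi

def charge_yield(e_kev, w_ev=13.7, box_param=0.05):
    """electrons per kev escaping recombination (toy: all quanta are ion pairs)."""
    n_i = e_kev * 1000.0 / w_ev
    r = recombination_fraction(n_i, box_param)
    return n_i * (1.0 - r) / e_kev

for e in (1.0, 3.0, 10.0, 30.0):
    print(f"{e:5.1f} kev -> {charge_yield(e):5.1f} electrons/kev")  # falls with energy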
few measurements have been made of this field dependence at recoil energies less than 10 kev . in this paper we present results of such measurements using a specialized detector . recoil energies are determined via the compton coincidence technique at four drift fields relevant for liquid xenon dark matter detectors : 0.19 , 0.48 , 1.02 , and 2.32 kv/cm . mean recoil energies down to 1 kev were measured with unprecedented precision . we find that the charge and light yield are anti-correlated above 3 kev , and that the field dependence becomes negligible below 6 kev . however , below 3 kev we find a charge yield significantly higher than expectation and a reconstructed energy deviating from linearity . story_separator_special_tag we present the first results of searches for axions and axionlike particles with the xenon100 experiment . the axion-electron coupling constant , g_ae , has been probed by exploiting the axioelectric effect in liquid xenon . a profile likelihood analysis of 224.6 live days × 34 kg exposure has shown no evidence for a signal . by rejecting g_ae larger than 7.7×10^-12 ( 90 % c.l . ) in the solar axion search , we set the best limit to date on this coupling . in the frame of the dfsz and ksvz models , we exclude qcd axions heavier than 0.3 and 80 ev/c^2 , respectively . for axionlike particles , under the assumption that they constitute the whole abundance of dark matter in our galaxy , we constrain g_ae to be lower than 1×10^-12 ( 90 % c.l . ) for masses between 5 and 10 kev/c^2 . story_separator_special_tag geant4 has been used throughout the nuclear and high-energy physics community to simulate energy depositions in various detectors and materials . these simulations have mostly been run with a source beam outside the detector . in the case of low-background physics , however , a primary concern is the effect on the detector from radioactivity inherent in the detector parts themselves . from this standpoint , there is no single source or beam , but rather a collection of sources with potentially complicated spatial extent . luxsim is a simulation framework used by the lux collaboration that takes a component-centric approach to event generation and recording . a new set of classes allows for multiple radioactive sources to be set within any number of components at run time , with the entire collection of sources handled within a single simulation run . various levels of information can also be recorded from the individual components , with these record levels also being set at run time . this flexibility in both source generation and information recording is possible without the need to recompile , reducing the complexity of code management and the proliferation of versions . within the code itself , story_separator_special_tag the xenon1t experiment searches for dark matter particles through their scattering off xenon atoms in a 2 metric ton liquid xenon target . the detector is a dual-phase time projection chamber , which measures simultaneously the scintillation and ionization signals produced by interactions in the target volume , to reconstruct energy and position , as well as the type of the interaction . the background rate in the central volume of the xenon1t detector is the lowest achieved so far with a liquid xenon-based direct detection experiment .
in this work we describe the response model of the detector , the background and signal models , and the statistical inference procedures used in the dark matter searches with a 1 metric ton × year exposure of xenon1t data , which lead to the best limit to date on the wimp-nucleon spin-independent elastic scatter cross section for wimp masses above 6 gev/c^2 . story_separator_special_tag motivated by the recent xenon1t results , we explore various new physics models that can be discovered through searches for electron recoils in o ( kev ) -threshold direct-detection experiments . first , we consider the absorption of axion-like particles , dark photons , and scalars , either as dark matter relics or being produced directly in the sun . in the latter case , we find that kev mass bosons produced in the sun provide an adequate fit to the data but are excluded by stellar cooling constraints . we address this tension by introducing a novel chameleon-like axion model , which can explain the excess while evading the stellar bounds . we find that absorption of bosonic dark matter provides a viable explanation for the excess only if the dark matter is a dark photon or an axion . in the latter case , photophobic axion couplings are necessary to avoid x-ray constraints . second , we analyze models of dark matter-electron scattering to determine which models might explain the excess . standard scattering of dark matter with electrons is generically in conflict with data from lower-threshold experiments . momentum-dependent story_separator_special_tag we apply deep neural networks ( dnn ) to data from the exo-200 experiment . in the studied cases , the dnn is able to reconstruct the relevant parameters - total energy and position - directly from raw digitized waveforms , with minimal exceptions . for the first time , the developed algorithms are evaluated on real detector calibration data . the accuracy of reconstruction either reaches or exceeds what was achieved by the conventional approaches developed by exo-200 over the course of the experiment . most existing dnn approaches to event reconstruction and classification in particle physics are trained on monte carlo simulated events . such algorithms are inherently limited by the accuracy of the simulation . we describe a unique approach that , in an experiment such as exo-200 , allows one to successfully perform certain reconstruction and analysis tasks by training the network on waveforms from experimental data , either reducing or eliminating the reliance on the monte carlo . story_separator_special_tag we propose an approach to rapidly find the upper limit of separability between datasets that is directly applicable to hep classification problems . the most common hep classification task is to use n values ( variables ) for an object ( event ) to estimate the probability that it is signal vs. background . most techniques first use known samples to identify differences in how signal and background events are distributed throughout the n-dimensional variable space , then use those differences to classify events of unknown type . qualitatively , the greater the differences , the more effectively one can classify events of unknown type . we will show that the mutual information ( mi ) between the n-dimensional signal-background mixed distribution and the answers for the known events tells us the upper limit of separation for that set of n variables .
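aside : a binned sketch of the mutual-information ceiling just described , on an illustrative one-dimensional signal/background mixture ; up to binning effects , no classifier built on x can extract more label information than this estimate .

# --- sketch ( python ) : mutual information between a variable and the s/b label ---
import numpy as np

rng = np.random.default_rng(1)
sig = rng.normal(1.0, 1.0, 50_000)       # toy signal distribution ( illustrative )
bkg = rng.normal(-1.0, 1.0, 50_000)      # toy background distribution
x = np.concatenate([sig, bkg])
y = np.concatenate([np.ones(50_000, int), np.zeros(50_000, int)])

def mutual_information(x, y, bins=60):
    """binned mi( x ; y ) in nats from the joint histogram."""
    joint, _, _ = np.histogram2d(x, y, bins=[bins, 2])
    p = joint / joint.sum()
    px = p.sum(axis=1, keepdims=True)
    py = p.sum(axis=0, keepdims=True)
    mask = p > 0
    return (p[mask] * np.log(p[mask] / (px @ py)[mask])).sum()

print("mi(x; label) =", mutual_information(x, y), "nats ( max = ln 2 =", np.log(2), ")")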
we will then compare that value to the jensen-shannon divergence between the output distributions from a classifier to test whether it has extracted all possible information from the input variables . we will also discuss speed improvements to a standard method for calculating mi . our approach story_separator_special_tag weakly interacting massive particles ( wimps ) are a leading candidate for dark matter and are expected to produce nuclear recoil ( nr ) events within liquid xenon time-projection chambers . we present a measurement of liquid xenon scintillation characteristics in the lux dark matter detector and develop a pulse-shape-based discrimination parameter to be used for particle identification . to accurately measure the scintillation characteristics , we develop a template-fitting method to reconstruct the detection time of photons . analyzing calibration data collected during the 2013-16 lux wimp search , we measure a singlet-to-triplet scintillation ratio for electron recoils ( er ) that is consistent with existing literature , and we make a first-ever measurement of the nr singlet-to-triplet ratio at recoil energies below 74 kev . a prompt-fraction discrimination parameter exploits the difference of the photon time spectra for nr and er events and is optimized to have the least number of er events that occur in the 50 % nr acceptance region . when this discriminator is used in conjunction with charge-to-light discrimination on the calibration data , the signal-to-noise ratio in the nr dark matter acceptance region increases by up to a factor of story_separator_special_tag we present the results of the three-month above-ground commissioning run of the large underground xenon ( lux ) experiment at the sanford underground research facility located in lead , south dakota , usa . lux is a 370 kg liquid xenon detector that will search for cold dark matter in the form of weakly interacting massive particles ( wimps ) . the commissioning run , conducted with the detector immersed in a water tank , validated the integration of the various sub-systems in preparation for the underground deployment . using the data collected , we report excellent light collection properties , achieving 8.4 photoelectrons per kev for 662 kev electron recoils without an applied electric field , measured in the center of the wimp target . we also find good energy and position resolution in relatively high-energy interactions from a variety of internal and external sources . finally , we have used the commissioning data to tune the optical properties of our simulation and report updated sensitivity projections for spin-independent wimp-nucleon scattering . story_separator_special_tag we present a novel analysis technique for liquid xenon time projection chambers that allows for a lower threshold by relying on events with a prompt scintillation signal consisting of single detected photons . the energy threshold of the lux dark matter experiment is primarily determined by the smallest scintillation response detectable , which previously required a twofold coincidence signal in its photomultiplier arrays , enforced in data analysis . the technique presented here exploits the double photoelectron emission effect observed in some photomultiplier models at vacuum ultraviolet wavelengths .
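aside : a minimal sketch of the double photoelectron emission ( dpe ) effect just described , with an assumed illustrative dpe probability ; it shows why a 2-phe analysis threshold retains single-photon s1s that a twofold photon coincidence would discard .

# --- sketch ( python ) : double photoelectron emission ( dpe ) counting ---
# each detected vuv photon yields 2 phe with probability p_dpe and 1 phe
# otherwise , so even a single photon can produce a 2-phe signature .
import numpy as np

rng = np.random.default_rng(2)
p_dpe = 0.2                                  # assumed dpe probability at ~175 nm

def detected_phe(n_photons, size):
    """phe count distribution for n detected photons."""
    return n_photons + rng.binomial(n_photons, p_dpe, size)

one_photon = detected_phe(1, 1_000_000)
print("p(2 phe | 1 photon) =", (one_photon == 2).mean())   # ~ p_dpe , i.e. ~20 %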
we demonstrate this analysis using an electron recoil calibration dataset and place new constraints on the spin-independent scattering cross section of weakly interacting massive particles ( wimps ) down to 2.5 gev/c^2 wimp mass using the 2013 lux dataset . this new technique is promising to enhance light wimp and astrophysical neutrino searches in next-generation liquid xenon experiments . story_separator_special_tag a difficult task with many particle detectors focusing on interactions below approximately 100 kev is to perform a calibration in the appropriate energy range that adequately probes all regions of the detector . because detector response can vary greatly in various locations within the device , a spatially uniform calibration is important . we present a new method for calibration of liquid xenon ( lxe ) detectors , using the short-lived 83mkr . this source has transitions at 9.4 and 32.1 kev , and as a noble gas like xe , it disperses uniformly in all regions of the detector . even for low source activities , the existence of the two transitions provides a method of identifying the decays that is free of background . we find that at decreasing energies , the lxe light yield increases , while the amount of electric field quenching is diminished . additionally , we show that if any long-lived radioactive backgrounds are introduced by this method , they will present less than 67×10^-6 events kg^-1 day^-1 kev^-1 in the next generation of lxe dark matter direct detection story_separator_special_tag we present measurements of the electron-recoil ( er ) response of the lux dark matter detector based upon 170 000 highly pure and spatially uniform tritium decays . we reconstruct the tritium energy spectrum using the combined energy model and find good agreement with expectations . we report the average charge and light yields of er events in liquid xenon at 180 and 105 v/cm and compare the results to the nest model . we also measure the mean charge recombination fraction and its fluctuations , and we investigate the location and width of the lux er band . these results provide input to a reanalysis of the lux run 3 weakly interacting massive particle search . story_separator_special_tag this work details the development of a three-dimensional ( 3d ) electric field model for the lux detector . the detector took data during two periods of searching for weakly interacting massive particles ( wimps ) . after the first period completed , a time-varying non-uniform negative charge developed in the polytetrafluoroethylene ( ptfe ) panels that define the radial boundary of the detector 's active volume . this caused electric field variations in the detector in time , depth and azimuth , generating an electrostatic radially-inward force on electrons on their way upward to the liquid surface . to map this behavior , 3d electric field maps of the detector 's active volume were built on a monthly basis . this was done by fitting a model built in comsol multiphysics to the uniformly distributed calibration data that were collected on a regular basis . the modeled average ptfe charge density increased over the course of the exposure from -3.6 to -5.5 μc/m^2 .
from our studies , we deduce that the electric field magnitude varied while the mean value of the field , ∼200 v/cm , remained constant throughout the exposure . story_separator_special_tag for the determination of the absolute scintillation yields , the number of scintillation photons per unit absorbed energy , for a variety of particles in liquid argon , a series of simultaneous ionization and scintillation measurements were performed . the results verified that scintillation yields for relativistic heavy particles from ne to la are constant despite their extensive range of linear energy transfer . such a constant level , called the `` flat top response '' level , manifests the maximum absolute scintillation yield in liquid argon . the maximum absolute scintillation yield is defined by the average energy to produce a single photon , w_ph ( max ) = 19.5 ± 1.0 ev . in liquid xenon , the existence of the same flat top response level was also found by conducting scintillation measurements on relativistic heavy particles . the w_ph ( max ) in liquid xenon was evaluated to be 13.8 ± 0.9 ev using the w_ph for 1 mev electrons , obtained experimentally . the ratio between the two maximum scintillation yields at the flat top response level obtained in liquid argon and xenon is in good agreement with the estimation by way of the energy resolutions of scintillation due to alpha particles in story_separator_special_tag title of dissertation : measurement of the electron recoil band of the lux dark matter detector with a tritium calibration source . attila dobi , doctor of philosophy , 2014 . dissertation directed by : professor carter hall , department of physics . the large underground xenon ( lux ) experiment has recently placed the most stringent limit on the spin-independent wimp-nucleon scattering cross-section . the wimp search limit was aided by an internal tritium source resulting in an unprecedented calibration and understanding of the electronic recoil background . here we discuss corrections to the signals in lux and the energy scale calibration , and present the methodology for extracting fundamental properties of electron recoils in liquid xenon . the tritium calibration is used to measure the ionization and scintillation yield of xenon down to 1 kev , and the results are compared to other experiments . recombination probability and its fluctuation are measured from 1 to 1000 kev , using betas from tritium and compton scatters from an external cs source . finally , the tritium source is described and the most recent results for er discrimination in lux are presented . story_separator_special_tag the xenon1t experiment at the laboratori nazionali del gran sasso is the most sensitive direct detection experiment for dark matter in the form of weakly interacting massive particles ( wimps ) with masses above 6 gev/c^2 scattering off nuclei . the detector employs a dual-phase time projection chamber with 2.0 tonnes of liquid xenon in the target . a one tonne × year exposure of science data was collected between october 2016 and february 2018 . this article reports on the performance of the detector during this period and describes details of the data analysis that led to the most stringent exclusion limits on various wimp-nucleon interaction models to date .
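aside : exclusion limits like the one just mentioned are set against an expected nuclear-recoil spectrum . a zeroth-order sketch using the lewin-smith exponential approximation follows , ignoring the nuclear form factor and halo tails ; all inputs are illustrative , and the point is only why these searches live at few-kev recoil energies .

# --- sketch ( python ) : toy wimp nuclear-recoil spectrum ---
# dr/de ~ exp( -e / ( e0 * r ) ) with e0 = 0.5 * m_chi * v0^2 and the
# kinematic factor r = 4 m_chi m_n / ( m_chi + m_n )^2 .
import math

v0_over_c = 220e3 / 3.0e8     # assumed halo velocity parameter over c
m_xe = 122.3                  # gev , mass of a xenon nucleus ( ~131 amu )
for m in (10.0, 50.0, 200.0):
    e0_kev = 0.5 * m * v0_over_c ** 2 * 1e6        # gev -> kev
    r = 4 * m * m_xe / (m + m_xe) ** 2
    mean_recoil = e0_kev * r
    frac = math.exp(-5.0 / mean_recoil)            # fraction above a 5 kev threshold
    print(f"m_chi = {m:5.0f} gev : <e_r> ~ {mean_recoil:5.1f} kev , above 5 kev : {frac:.3f}")
# a 10 gev wimp leaves almost nothing above 5 kev -- hence the push for the
# low thresholds and ionization-only analyses in the entries below .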
in particular , signal reconstruction , event selection and calibration of the detector response to signal-like and background-like interactions in xenon1t are discussed . story_separator_special_tag dual-phase xenon tpc detectors are a highly scalable and widely used technology to search for low-energy nuclear recoil signals from wimp dark matter or coherent nuclear scattering of ∼mev neutrinos . such experiments expect to measure o ( kev ) ionization or scintillation signals from such sources . however , at ∼1 kev and below , the signal calibrations in liquid xenon carry large uncertainties that directly impact the assumed sensitivity of existing and future experiments . in this work , we report a new measurement of the ionization yield of nuclear recoil signals in liquid xenon down to 0.3 kev -- the lowest-energy calibration reported to date -- at which energy the average event produces just 1.1 ionized electrons . between 2 and 6 kev , our measurements agree with existing measurements , but significantly improve the precision . at lower energies , we observe a decreasing trend that deviates from simple extrapolations of existing data . we also study the dependence of ionization yield on the applied drift field in liquid xenon between 220 v/cm and 6240 v/cm , allowing these story_separator_special_tag we report on a search for neutrinoless double-beta decay of 136xe with exo-200 . no signal is observed for an exposure of 32.5 kg yr , with a background of 1.5×10^-3 kg^-1 yr^-1 kev^-1 in the ±1σ region of interest . this sets a lower limit on the half-life of the neutrinoless double-beta decay , t_1/2^0ν ( 136xe ) > 1.6×10^25 yr ( 90 % c.l . ) , corresponding to effective majorana masses of less than 140–380 mev , depending on the matrix element calculation . story_separator_special_tag author ( s ) : akerib , ds ; akerlof , cw ; alqahtani , a ; et al . [ entry truncated : collaboration author list only ; no abstract text recovered ] story_separator_special_tag the xenon100 experiment , situated in the laboratori nazionali del gran sasso , aims at the direct detection of dark matter in the form of weakly interacting massive particles ( wimps ) , based on their interactions with xenon nuclei in an ultra-low background dual-phase time projection chamber . this paper describes the general methods developed for the analysis of the xenon100 data . these methods have been used in the 100.9 and 224.6 live days science runs from which results on spin-independent elastic , spin-dependent elastic and inelastic wimp-nucleon cross-sections have already been reported . story_separator_special_tag we report the observation of two-neutrino double-beta decay in 136xe with t_1/2 = 2.11 ± 0.04 ( stat ) ± 0.21 ( syst ) × 10^21 yr .
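aside : the half-life just quoted converts directly into a decay budget via n = ln 2 · n_atoms · t / t_1/2 . a one-line check of the scale involved , assuming a pure 136xe target for simplicity .

# --- sketch ( python ) : from half-life to expected decay counts ---
import math

N_A = 6.022e23
t_half_yr = 2.11e21                      # measured 2nbb half-life of 136xe ( above )
atoms_per_kg = 1000.0 / 136.0 * N_A      # pure 136xe assumed

rate = math.log(2) * atoms_per_kg / t_half_yr
print(f"~{rate:.0f} 2nbb decays per kg of 136xe per year")   # ~1.5e3 per kg yr :
# a large , irreducible spectrum that 0nbb searches must resolve away from the
# endpoint -- one reason energy resolution dominates the entries that follow .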
this second-order process , predicted by the standard model , has been observed for several nuclei but not for 136xe . the observed decay rate provides new input to matrix element calculations and to the search for the more interesting neutrinoless double-beta decay , the most sensitive probe for the existence of majorana particles and the measurement of the neutrino mass scale . story_separator_special_tag the energy resolution of the exo-200 detector is limited by electronics noise in the measurement of the scintillation response . here we present a new technique to extract optimal scintillation energy measurements for signals split across multiple channels in the presence of correlated noise . the implementation of these techniques improves the energy resolution of the detector at the neutrinoless double beta decay q-value from ( 1.9641 ± 0.0039 ) % to ( 1.5820 ± 0.0044 ) % . story_separator_special_tag searches for double beta decay of 134xe were performed with exo-200 , a single-phase liquid xenon detector designed to search for neutrinoless double beta decay of 136xe . using an exposure of 29.6 kg yr , the lower limits t_1/2^2νββ > 8.7×10^20 yr and t_1/2^0νββ > 1.1×10^23 yr at 90 % confidence level were derived , with corresponding half-life sensitivities of 1.2×10^21 yr and 1.9×10^23 yr . these limits exceed those in the literature for 134xe , improving by factors of nearly 10^5 and 2 for the two-antineutrino and neutrinoless modes , respectively . story_separator_special_tag exo-200 is an experiment designed to search for double beta decay of 136xe with a single-phase , liquid xenon detector . it uses an active mass of 110 kg of xenon enriched to 80.6 % in the isotope 136 in an ultra-low background time projection chamber capable of simultaneous detection of ionization and scintillation . this paper describes the exo-200 detector with particular attention to the most innovative aspects of the design that revolve around the reduction of backgrounds , the efficient use of the expensive isotopically enriched xenon , and the optimization of the energy resolution in a relatively large volume . story_separator_special_tag the search for neutrinoless double-beta decay ( 0νββ ) requires extremely low backgrounds and a good understanding of their sources and their influence on the rate in the region of parameter space relevant to the 0νββ signal . we report on studies of various β and γ backgrounds in the liquid-xenon-based exo-200 0νββ experiment . with this work we try to better understand the location and strength of specific background sources and compare the conclusions to radioassay results taken before and during detector construction . finally , we discuss the implications of these studies for exo-200 as well as for the next-generation , tonne-scale nexo detector . story_separator_special_tag a search for lorentz- and cpt-violating signals in the double beta decay spectrum of 136xe has been performed using an exposure of 100 kg yr with the exo-200 detector .
no significant evidence of the spectral modification due to isotropic lorentz violation was found , and a two-sided limit of -2.65×10^-5 gev < å_of^(3) < 7.60×10^-6 gev ( 90 % c.l . ) is placed on the relevant coefficient within the standard-model extension ( sme ) . this is the first experimental study of the effect of the sme-defined oscillation-free and momentum-independent neutrino coupling operator on the double beta decay process . story_separator_special_tag we report results from a systematic measurement campaign conducted to identify low radioactivity materials for the construction of the exo-200 double beta decay experiment . partial results from this campaign have already been reported in a 2008 paper by the exo collaboration . here we release the remaining data , collected since 2007 , to the public . the data reported were obtained using a variety of analytic techniques . the measurement sensitivities are among the best in the field . construction of the exo-200 detector has been concluded , and phase-i data was taken from 2011 to 2014 . the detector 's extremely low background implicitly verifies the measurements and the analysis assumptions made during construction and reported in this paper . story_separator_special_tag study of the majorana nature of neutrinos is very important in fundamental physics , because it would clarify the extreme smallness of the neutrino mass and open the physics of very high-energy scales , which might provide a key to solving the matter dominance of the current universe . neutrinoless double beta decay ( 0νββ ) of a nucleus is a unique process to test the majorana nature of neutrinos . kamland-zen ( kamland zero-neutrino double-beta decay ) is an experiment searching for 0νββ of the 136xe nucleus using the kamland detector in the kamioka mine in japan . the experiment has provided the most stringent limit on the 136xe 0νββ half-life of 1.07×10^26 yr ( 90 % c.l . ) , corresponding to an upper limit ( 90 % c.l . ) on the effective majorana neutrino mass of ⟨m_ββ⟩ < ( 61–165 ) mev . in this talk , the current status and future plans of the kamland-zen experiment are reported . story_separator_special_tag a search for double-beta decays of 136xe to excited states of 136ba has been performed with the first-phase data set of the kamland-zen experiment . the 0_1^+ , 2_1^+ and 2_2^+ transitions of 0νββ decay were evaluated in an exposure of 89.5 kg yr of 136xe , while the same transitions of 2νββ decay were evaluated in an exposure of 61.8 kg yr . no excess over background was found for all decay modes . the lower half-life limits for the 2_1^+ state transitions of 0νββ and 2νββ decay were improved to t_1/2^0ν ( 0^+ → 2_1^+ ) > 2.6 × 10^25 yr and t_1/2^2ν ( 0^+ → 2_1^+ ) > 4.6 × 10^23 yr ( 90 % c.l . ) , respectively . we report the first experimental lower half-life limits for the transitions to the 0_1^+ state of 136xe for 0νββ and 2νββ decay . they are t_1/2^0ν ( 0^+ → 0_1^+ ) > 2.4 × 10^25 yr and t_1/2^2ν story_separator_special_tag we present limits on majoron-emitting neutrinoless double-beta decay modes based on an exposure of 112.3 days with 125 kg of 136xe . in particular , a lower limit on the ordinary ( spectral index n = 1 ) majoron-emitting decay half-life of 136xe is obtained as t_1/2^0νββχ0 > 2.6×10^24 yr at 90 % c.l . , a factor of five more stringent than previous limits .
the corresponding upper limit on the effective majoron-neutrino coupling , using a range of available nuclear matrix calculations , is g_ee < ( 0.8–1.6 ) × 10^-5 . this excludes a previously unconstrained region of parameter space and strongly limits the possible contribution of ordinary majoron emission modes to 0νββ decay for neutrino masses in the inverted hierarchy scheme . story_separator_special_tag we present results from the kamland-zen double-beta decay experiment based on an exposure of 77.6 days with 129 kg of 136xe . the measured two-neutrino double-beta decay half-life of 136xe is t_1/2^2ν = 2.38 ± 0.02 ( stat ) ± 0.14 ( syst ) × 10^21 yr , consistent with a recent measurement by exo-200 . we also obtain a lower limit for the neutrinoless double-beta decay half-life , t_1/2^0ν > 5.7×10^24 yr at 90 % confidence level ( c.l . ) , which corresponds to almost a fivefold improvement over previous limits . story_separator_special_tag we present an improved search for neutrinoless double-beta ( 0νββ ) decay of 136xe in the kamland-zen experiment . owing to purification of the xenon-loaded liquid scintillator , we achieved a significant reduction of the 110mag contaminant identified in previous searches . combining the results from the first and second phase , we obtain a lower limit for the 0νββ decay half-life of t_1/2^0ν > 1.07×10^26 yr at 90 % c.l . , an almost sixfold improvement over previous limits . using commonly adopted nuclear matrix element calculations , the corresponding upper limits on the effective majorana neutrino mass are in the range 61–165 mev . for the most optimistic nuclear matrix elements , this limit reaches the bottom of the quasidegenerate neutrino mass region . story_separator_special_tag the enriched xenon observatory ( exo ) will search for double beta decays of 136xe . we report the results of a systematic study of trace concentrations of radioactive impurities in a wide range of raw materials and finished parts considered for use in the construction of exo-200 , the first stage of the exo experimental program . analysis techniques employed , and described here , include direct gamma counting , alpha counting , neutron activation analysis , and high-sensitivity mass spectrometry . story_separator_special_tag in this paper we report on the characterization of the hamamatsu vuv4 ( s/n : s13370-6152 ) vacuum ultra-violet ( vuv ) sensitive silicon photo-multipliers ( sipms ) as part of the development of a solution for the detection of liquid xenon scintillation light for the nexo experiment . various sipm features , such as dark noise , gain , correlated avalanches , direct crosstalk and photon detection efficiency ( pde ) , were measured in a dedicated setup at triumf . sipms were characterized in the range 163 k ≤ t ≤ 233 k . at an overvoltage of 3.1 ± 0.2 v and at t = 163 k we report a number of correlated avalanches ( cas ) per pulse , in the 1 μs interval following the trigger pulse , of 0.161 ± 0.005 . at the same settings the dark-noise ( dn ) rate is 0.137 ± 0.002 hz/mm^2 . both the number of cas and the dn rate are within nexo specifications .
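aside : the half-life limits in the kamland-zen entries above map to effective majorana masses through 1 / t_1/2 = g · |m|^2 · ( m_ββ / m_e )^2 . a worked sketch follows ; the phase-space factor and the nuclear-matrix-element spread are assumed illustrative inputs ( the nme spread is what widens the quoted mass ranges ) .

# --- sketch ( python ) : converting a 0nbb half-life limit to m_bb ---
import math

t_half_yr = 1.07e26            # 136xe 0nbb half-life limit quoted above
g_yr = 1.46e-14                # assumed phase-space factor for 136xe , 1/yr
m_e_milli_ev = 5.11e8          # electron mass , expressed in milli-ev

for nme in (2.5, 4.0, 6.7):    # assumed spread of nuclear matrix elements
    m_bb = m_e_milli_ev / (math.sqrt(t_half_yr * g_yr) * nme)
    print(f"|m| = {nme:.1f} -> m_bb < {m_bb:3.0f} milli-ev")
# with these assumed inputs the spread reproduces the quoted 61-165 range
# ( the " mev " in the de-cased abstracts above means milli-ev , not mev ) .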
the pde of the hamamatsu vuv4 was measured for two different devices at t = 233 k for a mean wavelength story_separator_special_tag understanding reflective properties of materials and photodetection efficiency ( pde ) of photodetectors is important for optimizing energy resolution and sensitivity of the next generation neutrinoless double beta decay , direct detection dark matter , and neutrino oscillation experiments that will use noble liquid gases , such as nexo , darwin , darkside-20k , and dune . little information is currently available about reflectivity and pde in liquid noble gases , because such measurements are difficult to conduct in a cryogenic environment and at short enough wavelengths . here we report a measurement of specular reflectivity and relative pde of hamamatsu vuv4 silicon photomultipliers ( sipms ) with 50 μm micro-cells , conducted with xenon scintillation light ( 175 nm ) in liquid xenon . the specular reflectivity at 15° incidence of three samples of vuv4 sipms is found to be 30.4 ± 1.4 % , 28.6 ± 1.3 % , and 28.0 ± 1.3 % , respectively . the pde at normal incidence differs by ± 8 % ( standard deviation ) among the three devices . the angular dependence of the reflectivity and pde was also measured for one of the sipms . both the reflectivity and pde decrease as the angle of incidence increases . story_separator_special_tag results are presented from radioactivity screening of two models of photomultiplier tubes designed for use in current and future liquid xenon experiments . the hamamatsu 5.6 cm diameter r8778 pmt , used in the lux dark matter experiment , has yielded a positive detection of four common radioactive isotopes : 238u , 232th , 40k , and 60co . screening of lux materials has rendered backgrounds from other detector materials subdominant to the r8778 contribution . a prototype hamamatsu 7.6 cm diameter r11410 mod pmt has also been screened , with benchmark isotope counts measured at < 0.4 238u / < 0.3 232th / < 8.3 40k / 2.0 ± 0.2 60co mbq/pmt . this represents a large reduction , equal to a change of ×124 in 238u / ×19 in 232th / ×18 in 40k per pmt , between r8778 and r11410 mod , concurrent with a doubling of the photocathode surface area ( 4.5 → 6.4 cm diameter ) . 60co measurements are comparable between the pmts , but can be significantly reduced in future r11410 mod units through further material selection . assuming pmt activity equal to the measured 90 % upper limits , monte carlo estimates indicate that replacement of r8778 pmts with r11410 mod pmts will change lux pmt story_separator_special_tag the low-background , vuv-sensitive 3-inch diameter photomultiplier tube r11410 has been developed by hamamatsu for dark matter direct detection experiments using liquid xenon as the target material . we present the results from the joint effort between the xenon collaboration and the hamamatsu company to produce a highly radio-pure photosensor ( version r11410-21 ) for the xenon1t dark matter experiment . after introducing the photosensor and its components , we show the methods and results of the radioactive contamination measurements of the individual materials employed in the photomultiplier production . we then discuss the adopted strategies to reduce the radioactivity of the various pmt versions . finally , we detail the results from screening 286 tubes with ultra-low background germanium detectors , as well as their implications for the expected electronic and nuclear recoil background of the xenon1t experiment .
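aside : radioassay numbers like the mbq/pmt values in the screening entries above feed background budgets by straight multiplication into decays per year . a sketch with an assumed pmt count and illustrative activities ( not a specific detector 's bill of materials ) :

# --- sketch ( python ) : from screened activity to a raw decay budget ---
SECONDS_PER_YEAR = 3.156e7
n_pmts = 120                                   # assumed pmt count ( illustrative )
activity_mbq = {"238u": 0.4, "232th": 0.3, "40k": 8.3, "60co": 2.0}  # per pmt

for iso, a in activity_mbq.items():
    decays = a * 1e-3 * n_pmts * SECONDS_PER_YEAR   # mbq -> bq , then decays/yr
    print(f"{iso:>5} : {decays:9.2e} decays/yr from all pmts")
# only a tiny fraction of these deposit single-scatter , low-energy events in
# the fiducial volume -- that fraction is what the monte carlo studies in the
# screening papers above are computing .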
story_separator_special_tag we present the design , data and results from the next prototype for double beta and dark matter ( next-dbdm ) detector , a high-pressure gaseous natural xenon electroluminescent time projection chamber ( tpc ) that was built at the lawrence berkeley national laboratory . it is a prototype of the planned next-100 136xe neutrinoless double beta decay ( 0νββ ) experiment , with the main objectives of demonstrating near-intrinsic energy resolution at energies up to 662 kev and of optimizing the next-100 detector design and operating parameters . energy resolutions of 1 % fwhm for 662 kev gamma rays were obtained at 10 and 15 atm , and 5 % fwhm for 30 kev fluorescence xenon x-rays . these results demonstrate that 0.5 % fwhm resolutions for the 2459 kev hypothetical neutrinoless double beta decay peak are realizable . this energy resolution is a factor 7–20 better than that of the current leading 0νββ experiments using liquid xenon and thus represents a significant advancement . we present also first results from a track imaging system consisting of 64 silicon photo-multipliers recently installed in next-dbdm that , along with the excellent energy resolution , demonstrates the key functionalities required for the next-100 story_separator_special_tag the next experiment aims to observe the neutrinoless double beta decay of 136xe in a high pressure gas tpc using electroluminescence ( el ) to amplify the signal from ionization . understanding the response of the detector is imperative in achieving a consistent and well understood energy measurement . the abundance of xenon k-shell x-ray emission during data taking has been identified as a multitool for the characterisation of the fundamental parameters of the gas as well as the equalisation of the response of the detector . the next-demo prototype is a ~1.5 kg volume tpc filled with natural xenon . it employs an array of 19 pmts as an energy plane and of 256 sipms as a tracking plane , with the tpc light tube and sipm surfaces being coated with tetraphenyl butadiene ( tpb ) , which acts as a wavelength shifter for the vuv scintillation light produced by xenon . this paper presents the measurement of the properties of the drift of electrons in the tpc , the effects of the el production region , and the extraction of position-dependent correction constants using k_α x-ray deposits story_separator_special_tag the charge and energy resolution response of a liquid xenon ionization chamber has been measured with 207bi electrons and gamma-rays . the applied electric field was varied in the range 0.06–12 kv/cm . after an initial steep rise , charge collection increases slowly above a few kv/cm and does not reach saturation within our range . this field dependence has been analyzed in view of different recombination models . the best agreement is obtained by including the contribution of delta electrons . the best energy resolution , measured at 12 kv/cm , is 5.9 % fwhm for gamma-rays of 570 kev . this value is consistent with the expected recombination fluctuations in the number of electron-ion pairs associated with low energy delta electron tracks . story_separator_special_tag liquid xenon ( lxe ) is employed in a number of current and future detectors for rare event searches .
we use the exo-200 experimental data to measure the absolute scintillation and ionization yields generated by interactions from th228 ( 2615 kev ) , ra226 ( 1764 kev ) , and co60 ( 1332 kev and 1173 kev ) calibration sources , over a range of electric fields . the w value that defines the recombination-independent energy scale is measured to be 11.5 ± 0.5 ( syst . ) ± 0.1 ( stat . ) ev . these data are also used to measure the recombination fluctuations in the number of electrons and photons produced by the calibration sources at the mev scale , which deviate from extrapolations of lower-energy data . additionally , a semiempirical model for the energy resolution of the detector is developed , which is used to constrain the recombination efficiency , i.e. , the fraction of recombined electrons that result in the emission of a detectable photon . detailed measurements of the absolute charge and light yields for mev-scale electron recoils are important for predicting the performance of future neutrinoless double-beta decay detectors . story_separator_special_tag the zeplin-iii experiment in the palmer underground laboratory at boulby uses a 12 kg two-phase xenon time-projection chamber to search for the weakly interacting massive particles ( wimps ) that may account for the dark matter of our galaxy . the detector measures both scintillation and ionization produced by radiation interacting in the liquid to differentiate between the nuclear recoils expected from wimps and the electron-recoil background signals down to 10 kev nuclear-recoil energy . an analysis of 847 kg days of data acquired between february 27 , 2008 , and may 20 , 2008 , has excluded a wimp-nucleon elastic scattering spin-independent cross section above 8.1×10^-8 pb at 60 gev/c^2 with a 90 % confidence limit . it has also demonstrated that the two-phase xenon technique is capable of better discrimination between electron and nuclear recoils at low energy than previously achieved by other xenon-based experiments . story_separator_special_tag we report results from an extensive set of measurements of the β-decay response in liquid xenon . these measurements are derived from high-statistics calibration data from injected sources of both 3h and 14c in the lux detector . the mean light-to-charge ratio is reported for 13 electric field values ranging from 43 to 491 v/cm , and for energies ranging from 1.5 to 145 kev . story_separator_special_tag the superconducting nanowire single-photon detector ( snspd ) is a quantum-limit superconducting optical detector based on the cooper-pair breaking effect by a single photon , which exhibits a higher detection efficiency , lower dark count rate , higher counting rate , and lower timing jitter when compared with those exhibited by its counterparts . snspds have been extensively applied in quantum information processing , including quantum key distribution and optical quantum computation . in this review , we present the requirements of single-photon detectors from quantum information , as well as the principle , key metrics , latest performance issues and other issues associated with snspd . the representative applications of snspds with respect to quantum information will also be covered . story_separator_special_tag liquid xenon particle detectors rely on excellent light collection efficiency for their performance .
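aside : a one-line geometric-series bounce model makes the dependence of light collection on wall reflectivity explicit , anticipating the ptfe measurements that follow ; the effective photosensor coverage per pass is an assumed illustrative value .

# --- sketch ( python ) : why wall reflectivity dominates light collection ---
# on each pass a photon reaches a photosensor with probability f , otherwise it
# hits ptfe and survives with reflectivity r ; summing the geometric series
# gives collection = f / ( 1 - ( 1 - f ) * r ) .
def collection(f, r):
    return f / (1.0 - (1.0 - f) * r)

f = 0.25                       # assumed effective photosensor coverage per pass
for r in (0.80, 0.90, 0.95, 0.99):
    print(f"ptfe reflectivity {r:.2f} -> collection {collection(f, r):.2f}")
# moving r from 0.80 to 0.99 lifts collection from ~0.63 to ~0.97 , which is
# why the angular reflectivity measurements below matter so much .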
this depends on the high reflectivity of polytetrafluoroethylene ( ptfe ) at the xenon scintillation wavelength of 178 nm , but the angular dependence of this reflectivity is not well understood . ibex is designed to directly measure the angular distribution of xenon scintillation light reflected off ptfe in liquid xenon . these measurements are fully described by a microphysical reflectivity model with few free parameters . dependence on ptfe type , surface finish , xenon pressure , and wavelength of incident light is explored . total internal reflection is observed , which results in the dominance of specular over diffuse reflection and a reflectivity near 100 % for high angles of incidence . story_separator_special_tag after a short review of previous attempts to observe and measure the near-infrared scintillation in liquid argon , we present new results obtained with nir , a dedicated cryostat at the fermilab proton assembly building ( pab ) . the new results give confidence that the near-infrared light can be used as the much needed light signal in large liquid argon time projection chambers . story_separator_special_tag we examine electron and nuclear recoil backgrounds from radioactivity in the zeplin-iii dark matter experiment at boulby . the rate of low-energy electron recoils in the liquid xenon wimp target is 0.75 ± 0.05 events/kg/day/kev , which represents a 20-fold improvement over the rate observed during the first science run . energy and spatial distributions agree with those predicted by component-level monte carlo simulations propagating the effects of the radiological contamination measured for materials employed in the experiment . neutron elastic scattering is predicted to yield 3.05 ± 0.5 nuclear recoils with energy 5–50 kev per year , which translates to an expectation of 0.4 events in a 1-year dataset in anti-coincidence with the veto detector for realistic signal acceptance . less obvious background sources are discussed , especially in the context of future experiments . these include contamination of scintillation pulses with cherenkov light story_separator_special_tag the large underground xenon ( lux ) dark matter experiment aims to detect rare low-energy interactions from weakly interacting massive particles ( wimps ) . the radiogenic backgrounds in the lux detector have been measured and compared with monte carlo simulation . measurements of lux high-energy data have provided direct constraints on all background sources contributing to the background model . the expected background rate from the background model for the 85.3 day wimp search run is ( 2.6 ± 0.2 stat ± 0.4 sys ) × 10^-3 events kev_ee^-1 kg^-1 day^-1 in a 118 kg fiducial volume . the observed background rate is ( 3.6 ± 0.4 stat ) × 10^-3 events kev_ee^-1 kg^-1 day^-1 , consistent with model projections . the expectation for the radiogenic background in a subsequent one-year run is presented . story_separator_special_tag we discuss an in-situ evaluation of the 85kr , 222rn , and 220rn background in pandax-i , a 120-kg liquid xenon dark matter direct detection experiment . combining with a simulation , their contributions to the low energy electron-recoil background in the dark matter search region are obtained . story_separator_special_tag in this paper , we describe the xenon100 data analyses used to assess the target-intrinsic background sources radon ( 222rn ) , thoron ( 220rn ) and krypton ( 85kr ) .
we detail the event selections of high-energy alpha particles and decay-specific delayed coincidences . we derive distributions of the individual radionuclides inside the detector and quantify their abundances during the main three science runs of the experiment over a period of ∼4 years , from january 2010 to january 2014 . we compare our results to external measurements of radon emanation and krypton concentrations , where we find good agreement . we report an observed reduction in concentrations of radon daughters that we attribute to the plating-out of charged ions on the negatively biased cathode . story_separator_special_tag dual-phase liquid xenon ( lxe ) detectors lead the direct search for particle dark matter . understanding the signal production process of nuclear recoils in lxe is essential for the interpretation of lxe based dark matter searches . up to now , only two experiments have simultaneously measured both the light and charge yield at different electric fields , neither of which attempted to evaluate the processes leading to light and charge production . in this paper , results from a neutron calibration of liquid xenon with simultaneous light and charge detection are presented for nuclear recoil energies from 3–74 kev , at electric fields of 0.19 , 0.49 , and 1.02 kv/cm . no significant field dependence of the yields is observed . story_separator_special_tag abstract of ultra-low energy calibration of the lux and lz dark matter detectors by dongqing huang , ph.d. , brown university , may 2020 . the large underground xenon ( lux ) experiment is a 250 kg active mass dual-phase time-projection chamber ( tpc ) operating at the 4850 ft level of the sanford underground research facility in lead , sd . various sources , including 127xe , d-d neutrons , 83mkr , tritium , and ambe neutrons , are used to perform calibrations of detector responses to electron recoils ( er ) and nuclear recoils ( nr ) . i will present an ultra-low energy calibration of er using an intrinsic 127xe source and of nr using a short-pulsed d-d neutron generator . the radioactive isotope 127xe was formed in the lux lxe volume due to cosmogenic activation before the detector was moved one mile underground . a measurement in the early stage of the lux ws2013 science run unveils 0.9 million 127xe atoms in the lux lxe volume , which provides an ideal source for low energy calibrations . 127xe decay is a form of electron capture in which a high energy gamma ( > 200 kev ) is emitted , story_separator_special_tag author ( s ) : edwards , bnv ; bernard , e ; boulton , em ; destefano , n ; gai , m ; horn , m ; larsen , n ; tennyson , b ; tvrznikova , l ; wahl , c ; mckinsey , dn | abstract : we present a measurement of the extraction efficiency of quasi-free electrons from the liquid into the gas phase in a two-phase xenon time-projection chamber . the measurements span a range of electric fields from 2.4 to 7.1 kv/cm in the liquid xenon , corresponding to 4.5 to 13.1 kv/cm in the gaseous xenon . extraction efficiency continues to increase at the highest extraction fields , implying that additional charge signal may be attained in two-phase xenon detectors through careful high-voltage engineering of the gate-anode region .
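aside : the extraction efficiency measured above is one factor in the s2 signal chain . a sketch of the full chain follows ; every operating-point number is an assumed illustrative value , not a measured one .

# --- sketch ( python ) : the s2 signal chain with extraction efficiency ---
# roughly , s2 = n_e * exp( -t_drift / tau_e ) * eee * g2 : electrons produced ,
# attenuated by impurities while drifting , extracted into the gas with
# efficiency eee , then amplified by electroluminescence ( gain g2 ) .
import math

def s2_phe(n_e, drift_us=200.0, tau_e_us=800.0, eee=0.95, g2=20.0):
    survive = math.exp(-drift_us / tau_e_us)   # electron-lifetime attenuation
    return n_e * survive * eee * g2

print(s2_phe(n_e=5))   # a ~5-electron event still gives an o(70) phe s2 , which
# is why the ionization-only analyses in the entries below can reach ~1 kev .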
story_separator_special_tag dual-phase xenon detectors are widely used in dark matter direct detection experiments , and have demonstrated the highest sensitivities to a variety of dark matter interactions . however , a key component of the dual-phase detector technology -- the efficiency of charge extraction from liquid xenon into gas -- has not been well characterized . in this paper , we report a new measurement of the electron extraction efficiency ( eee ) in a small xenon detector using two monoenergetic decay features of 37ar . by achieving stable operation at very high voltages , we measured the eee values at the highest extraction electric field strength reported to date . for the first time , an apparent saturation of the eee is observed over a large range of electric field ; between 7.5 and 10.4 kv/cm extraction field in the liquid xenon , the eee stays stable at the level of 1 % ( kv/cm ) ^-1 . in the story_separator_special_tag xenon10 is an experiment designed to directly detect particle dark matter . it is a dual-phase ( liquid/gas ) xenon time-projection chamber with 3d position imaging . particle interactions generate a primary scintillation signal ( s1 ) and ionization signal ( s2 ) , which are both functions of the deposited recoil energy and the incident particle type . we present a new precision measurement of the relative scintillation yield l_eff and the absolute ionization yield q_y for nuclear recoils in xenon . a dark matter particle is expected to deposit energy by scattering from a xenon nucleus . knowledge of l_eff is therefore crucial for establishing the energy threshold of the experiment ; this in turn determines the sensitivity to particle dark matter . our l_eff measurement is in agreement with recent theoretical predictions above 15 kev nuclear recoil energy , and the energy threshold of the measurement is 4 kev . a knowledge of the ionization yield q_y is necessary to establish the trigger threshold of the experiment . the ionization yield q_y is measured in two ways , both in agreement with previous measurements and with a story_separator_special_tag liquid xenon detectors such as xenon10 and xenon100 obtain a significant fraction of their sensitivity to light ( ≲ 10 gev ) particle dark matter by looking for nuclear recoils of only a few kev , just above the detector threshold . yet in this energy regime a correct treatment of the detector threshold and resolution remains unclear . the energy dependence of the scintillation yield of liquid xenon for nuclear recoils also bears heavily on detector sensitivity , yet numerous measurements have not succeeded in obtaining concordant results . in this article we show that the ratio of detected ionization to scintillation can be leveraged to constrain the scintillation yield . we also present a rigorous treatment of liquid xenon detector threshold and energy resolution . notably , the effective energy resolution differs significantly from a simple poisson distribution . we conclude with a calculation of dark matter exclusion limits , and show that existing data from liquid xenon detectors strongly constrain recent interpretations of light dark matter . story_separator_special_tag we show that the energy threshold for nuclear recoils in the xenon10 dark matter search data can be lowered to ~1 kev by using only the ionization signal .
in other words , we make no requirement that a valid event contain a primary scintillation signal . we therefore relinquish incident particle type discrimination , which is based on the ratio of ionization to scintillation in liquid xenon . this method compromises the detector 's ability to precisely determine the z coordinate of a particle interaction . however , we show for the first time that it is possible to discriminate bulk events from surface events based solely on the ionization signal . story_separator_special_tag scintillation and ionisation yields for nuclear recoils in liquid xenon above 10 kevnr ( nuclear recoil energy ) are deduced from data acquired using broadband am-be neutron sources . the nuclear recoil data from several exposures to two sources were compared to detailed simulations . energy-dependent scintillation and ionisation yields giving acceptable fits to the data were derived . efficiency and resolution effects are treated using a light collection monte carlo , measured photomultiplier response profiles and hardware trigger studies . a gradual fall in scintillation yield below 40 kevnr is found , together with a rising ionisation yield ; both are in agreement with the latest independent measurements . the analysis method is applied to the most recent zeplin-iii data , acquired with a significantly upgraded detector and a precision-calibrated am-be source , as well as to the earlier data from the first run in 2008 . a new method for deriving the recoil scintillation yield , which includes sub-threshold s1 events , is also presented which confirms the main analysis . story_separator_special_tag results from the nuclear recoil calibration of the xenon100 dark matter detector installed underground at the laboratori nazionali del gran sasso , italy are presented . data from measurements with an external 241am-be neutron source are compared with a detailed monte carlo simulation which is used to extract the energy-dependent charge yield qy and relative scintillation efficiency leff . a very good level of absolute spectral matching is achieved in both observable signal channels -- scintillation s1 and ionization s2 -- along with agreement in the two-dimensional particle discrimination space . the results confirm the validity of the derived signal acceptance in earlier reported dark matter searches of the xenon100 experiment . story_separator_special_tag we report a systematic determination of the responses of pandax-ii , a dual phase xenon time projection chamber detector , to low energy recoils . the electron recoil ( er ) and nuclear recoil ( nr ) responses are calibrated , respectively , with injected tritiated methane or $ ^ { 220 } $ rn source , and with $ ^ { 241 } $ am-be neutron source , within an energy range from $ 1-25 $ kev ( er ) and $ 4-80 $ kev ( nr ) , under the two drift fields of 400 and 317 v/cm . an empirical model is used to fit the light yield and charge yield for both types of recoils . the best fit models can well describe the calibration data . the systematic uncertainties of the fitted models are obtained via statistical comparison against the data . story_separator_special_tag the scintillation yields and decay shapes for recoil xe ions produced by wimps in liquid xenon have been examined . a quenching model based on a biexcitonic diffusion-reaction mechanism is proposed for electronic quenching .
the total predicted quenching , nuclear and electronic , is compared with experimental results reported for nuclear recoils from neutrons . model calculations give the average energy to produce a vuv photon , wph , to be 75 ev for 60 kev recoil xe ions . some aspects of ionization relating to liquid xenon wimp detectors are also discussed . story_separator_special_tag direct searches for low mass dark matter particles via scattering off target nuclei require detection of recoiling atoms with energies of ~1 kev or less . the amount of electronic excitation produced by such atoms is quenched relative to a recoiling electron of the same energy . the lindhard model of this quenching , as originally formulated , remains widely used after more than 50 years . the present work shows that for very small energies , a simplifying approximation of that model must be removed . implications for the sensitivity of direct detection experiments are discussed . story_separator_special_tag we provide a new way of constraining the relative scintillation efficiency leff for liquid xenon . using a simple estimate for the electronic and nuclear stopping powers together with an analysis of recombination processes we predict both the ionization and the scintillation yields . using presently available data for the ionization yield , we can use the correlation between these two quantities to constrain leff from below . moreover , we argue that more reliable data on the ionization yield would allow us to verify our assumptions on the atomic cross sections and to predict the value of leff . we conclude that the relative scintillation efficiency should not decrease at low nuclear recoil energies , which has important consequences for the robustness of exclusion limits for low wimp masses in liquid xenon dark matter searches . story_separator_special_tag we perform a theoretical study of the scintillation efficiency of the low energy region crucial for liquid xenon dark matter detectors . we develop a computer program to simulate the cascading process of the recoiling xenon nucleus in liquid xenon and calculate the nuclear quenching effect due to atomic collisions . we use the electronic stopping power extrapolated from experimental data to the low energy region , and take into account the effects of electron escape from electron ion pair recombination using the generalized thomas-imel model fitted to scintillation data . our result agrees well with the experiments from neutron scattering and vanishes rapidly as the recoil energy drops below 3 kev . story_separator_special_tag the ionization yield in the two-phase liquid xenon dark-matter detector has been studied in the kev nuclear-recoil energy region . the newly-obtained nuclear quenching as well as the recently-measured average energy required to produce an electron-ion pair are used to calculate the total electric charges produced . to estimate the fraction of the electron charges collected , the thomas-imel model is generalized to describe the field dependence for nuclear recoils in liquid xenon . with free parameters fitted to experimentally measured 56.5 kev nuclear recoils , the energy dependence of the ionization yield for nuclear recoils is predicted , which increases with decreasing recoil energy and reaches its maximum value at 2 – 3 kev . this prediction agrees well with existing data and may help to lower the energy detection threshold for nuclear recoils to ~1 kev .
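the lindhard-plus-thomas-imel picture used in the last two abstracts can be written down compactly . a minimal sketch in python ( the w value , exciton-to-ion ratio , and box-model coupling below are illustrative assumptions , not the fitted parameters of these papers ) :

import math

W_EV = 13.7         # assumed average energy per quantum in liquid xenon , ev
K_XE = 0.166        # lindhard k for xenon , 0.133 * z**(2/3) / sqrt(a)
NEX_NI = 0.2        # assumed exciton-to-ion ratio for nuclear recoils
TI_COUPLING = 0.01  # assumed thomas-imel coupling per ion pair

def lindhard(e_kev, z=54):
    """fraction of recoil energy given to electronic excitation ."""
    eps = 11.5 * e_kev * z ** (-7.0 / 3.0)
    g = 3.0 * eps ** 0.15 + 0.7 * eps ** 0.6 + eps
    return K_XE * g / (1.0 + K_XE * g)

def charge_yield(e_kev):
    """electrons per kev escaping recombination ( thomas-imel box model ) ."""
    n_quanta = lindhard(e_kev) * e_kev * 1000.0 / W_EV
    n_ions = n_quanta / (1.0 + NEX_NI)
    xi = TI_COUPLING * n_ions
    escape_fraction = math.log(1.0 + xi) / xi
    return n_ions * escape_fraction / e_kev

for e in (2.0, 5.0, 20.0, 50.0):
    print(f"{e:5.1f} kev : qy ~ {charge_yield(e):.1f} electrons/kev")

as in the abstract above , the escape fraction ln ( 1 + xi ) / xi rises as the recoil energy ( and hence the ion density ) falls , so the predicted ionization yield per unit energy grows toward low energies .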
story_separator_special_tag the xenon100 experiment , installed underground at the laboratori nazionali del gran sasso , aims to directly detect dark matter in the form of weakly interacting massive particles ( wimps ) via their elastic scattering off xenon nuclei . this paper presents a study on the nuclear recoil background of the experiment , taking into account neutron backgrounds from ( alpha , n ) reactions and spontaneous fission due to natural radioactivity in the detector and shield materials , as well as muon-induced neutrons . based on monte carlo simulations and using measured radioactive contaminations of all detector components , we predict the nuclear recoil backgrounds for the wimp search results published by the xenon100 experiment in 2011 and 2012 , 0.11 events and 0.17 events , respectively , and conclude that they do not limit the sensitivity of the experiment . story_separator_special_tag xenon nuclei . we present a comprehensive study of the predicted electronic recoil background coming from radioactive decays inside the detector and shield materials , and intrinsic radioactivity in the liquid xenon . based on geant4 monte carlo simulations using a detailed geometry together with the measured radioactivity of all detector components , we predict an electronic recoil background in the wimp-search energy range and 30 kg fiducial mass of less than $ 10^ { -2 } $ events kg $ ^ { -1 } $ day $ ^ { -1 } $ kev $ ^ { -1 } $ , consistent with the experiment 's design goal . the predicted background spectrum is in very good agreement with the data taken during the commissioning of the detector in fall 2009 . story_separator_special_tag deep underground environments are ideal for low background searches due to the attenuation of cosmic rays by passage through the earth . however , they are affected by backgrounds from gamma-rays emitted by 40k and the 238u and 232th decay chains in the surrounding rock . the lux-zeplin ( lz ) experiment will search for dark matter particle interactions with a liquid xenon tpc located within the davis campus at the sanford underground research facility , lead , south dakota , at the 4850-foot level . in order to characterise the cavern background , in-situ gamma-ray measurements were taken with a sodium iodide detector in various locations and with lead shielding . the integral count rates ( 0 – 3300 kev ) varied from 596 hz to 1355 hz for unshielded measurements , corresponding to a total flux from the cavern walls of 1.9 ± 0.4 cm $ ^ { -2 } $ s $ ^ { -1 } $ . the resulting activity in the walls of the cavern can be characterised as 220 ± 60 bq/kg of 40k , 29 ± 15 bq/kg of 238u , and 13 ± 3 bq/kg of 232th . story_separator_special_tag we present a systematic derivation and discussion of the practical formulae needed to design and interpret direct searches for nuclear recoil events caused by hypothetical weakly interacting dark matter particles . modifications to the differential energy spectrum arise from the earth 's motion , recoil detection efficiency , instrumental resolution and threshold , multiple target elements , spin-dependent and coherent factors , and nuclear form factor . we discuss the normalization and presentation of results to allow comparison between different target elements and with theoretical predictions . equations relating to future directional detectors are also included . story_separator_special_tag abstract of `` an absolute calibration of sub-1 kev nuclear recoils in liquid xenon using d-d neutron scattering kinematics in the lux detector '' by james richard verbus , ph.d.
, brown university , may 2016 . we propose a new technique for the calibration of nuclear recoils in large noble element dual-phase time projection chambers ( tpcs ) used to search for wimp dark matter in the local galactic halo . this technique provides a measurement of the low-energy nuclear recoil response of the target media using the measured scattering angle between multiple neutron interactions within the detector volume . several strategies for improving this calibration technique are discussed , including the creation of a new type of quasi-monoenergetic 272 kev neutron source . we report results from a time-of-flight-based measurement of the neutron energy spectrum produced by an adelphi technology , inc. dd108 neutron generator , confirming its suitability for the proposed calibration . the large underground xenon ( lux ) experiment is a dual-phase liquid xenon tpc operating at the sanford underground research facility in lead , south dakota . our proposed calibration technique for nuclear recoils in liquid xenon was performed in situ in the lux detector using a collimated story_separator_special_tag we report a comprehensive study of the energy response to low-energy recoils in dual-phase xenon-based dark matter experiments . a recombination model is developed to explain the recombination probability as a function of recoil energy at zero field and non-zero field . the role of e-ion recombination is discussed for both parent recombination and volume recombination . we find that the volume recombination under non-zero field is constrained by a plasma effect , which is caused by a high density of charge carriers along the ionization track forming a plasma-like cloud of charge that shields the interior from the influence of the external electric field . subsequently , the plasma time that determines the volume recombination probability at non-zero field is demonstrated to be different between electronic recoils and nuclear recoils due to the difference of ionization density between the two processes . we show a weak field-dependence of the plasma time for nuclear recoils and a stronger field-dependence of the plasma time for electronic recoils . as a result , the time-dependent recombination is implemented in the determination of charge and light yield with a generic model . our model agrees well with the available experimental data from xenon-based dark story_separator_special_tag direct dark matter detection experiments based on a liquid xenon target are leading the search for dark matter particles with masses above 5 gev/c $ ^2 $ , but have limited sensitivity to lighter masses because of the small momentum transfer in dark matter-nucleus elastic scattering . however , there is an irreducible contribution from inelastic processes accompanying the elastic scattering , which leads to the excitation and ionization of the recoiling atom ( the migdal effect ) or the emission of a bremsstrahlung photon . in this letter , we report on a probe of low-mass dark matter with masses down to about 85 mev/c $ ^2 $ by looking for electronic recoils induced by the migdal effect and bremsstrahlung using data from the xenon1t experiment . besides the approach of detecting both scintillation and ionization signals , we exploit an approach that uses ionization signals only , which allows for a lower detection threshold . this analysis significantly enhances the sensitivity of xenon1t to light dark matter previously beyond its reach .
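the kinematic suppression that motivates the migdal and bremsstrahlung searches above is easy to quantify . a minimal sketch of the maximum elastic nuclear-recoil energy in python ( the dark-matter velocity and the example masses are illustrative assumptions ) :

C_KM_S = 299792.458  # speed of light , km/s

def max_recoil_kev(m_dm_gev, m_nucleus_gev, v_km_s):
    """e_r_max = 2 mu^2 v^2 / m_n for elastic dark-matter -- nucleus scattering ."""
    mu = m_dm_gev * m_nucleus_gev / (m_dm_gev + m_nucleus_gev)  # reduced mass
    beta_sq = (v_km_s / C_KM_S) ** 2
    return 2.0 * mu ** 2 * beta_sq / m_nucleus_gev * 1e6  # gev -> kev

# a 0.5 gev/c^2 particle at ~ 750 km/s striking a xenon nucleus ( ~ 122 gev/c^2 )
print(max_recoil_kev(0.5, 122.0, 750.0))  # ~ 0.03 kev , far below a ~ 1 kev threshold

the electronic recoil from the migdal effect or a bremsstrahlung photon can carry much more visible energy than this elastic endpoint , which is what restores sensitivity at low mass .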
story_separator_special_tag we present an updated model of light and charge yields from nuclear recoils in liquid xenon with a simultaneously constrained parameter set . a global analysis is performed using measurements of electron and photon yields compiled from all available historical data , as well as measurements of the ratio of the two . these data sweep over energies from 1 – 300 kev and external applied electric fields from 0 – 4060 v/cm . the model is constrained by constructing global cost functions and using a simulated annealing algorithm and a markov chain monte carlo approach to optimize and find confidence intervals on all free parameters in the model . this analysis contrasts with previous work in that we do not unnecessarily exclude data sets nor impose artificially conservative assumptions , do not use spline functions , and reduce the number of parameters used in nest v0.98 . we report our results and the calculated best-fit charge and light yields . these quantities are crucial to understanding the response of liquid xenon detectors in the energy regime important for rare event searches such as the direct detection of dark matter particles . story_separator_special_tag the scattering of dark matter ( dm ) particles with sub-gev masses off nuclei is difficult to detect using liquid xenon-based dm search instruments because the energy transfer during nuclear recoils is smaller than the typical detector threshold . however , the tree-level dm-nucleus scattering diagram can be accompanied by simultaneous emission of a bremsstrahlung photon or a so-called `` migdal '' electron . these provide an electron recoil component to the experimental signature at higher energies than the corresponding nuclear recoil . the presence of this signature allows liquid xenon detectors to use both the scintillation and the ionization signals in the analysis where the nuclear recoil signal would not be otherwise visible . we report constraints on spin-independent dm-nucleon scattering for dm particles with masses of 0.4-5 gev/c $ ^2 $ using $ 1.4 \\times 10^4 $ kg day of search exposure from the 2013 data from the large underground xenon ( lux ) experiment for four different classes of mediators . this analysis extends the reach of liquid xenon-based dm search instruments to lower dm masses than has been achieved previously . story_separator_special_tag the coherent elastic scattering of neutrinos off nuclei has eluded detection for four decades , even though its predicted cross-section is the largest by far of all low-energy neutrino couplings . this mode of interaction provides new opportunities to study neutrino properties , and leads to a miniaturization of detector size , with potential technological applications . we observe this process at a 6.7-sigma confidence level , using a low-background , 14.6-kg csi [ na ] scintillator exposed to the neutrino emissions from the spallation neutron source ( sns ) at oak ridge national laboratory . characteristic signatures in energy and time , predicted by the standard model for this process , are observed in high signal-to-background conditions . improved constraints on non-standard neutrino interactions with quarks are derived from this initial dataset . story_separator_special_tag we report the first detection of coherent elastic neutrino-nucleus scattering ( cevns ) on argon using the cenns-10 liquid argon detector at the oak ridge national laboratory spallation neutron source .
two independent analyses prefer cevns over the background-only null hypothesis with greater than $ 3\\sigma $ significance . the measured cross section , averaged over the incident neutrino flux , is $ ( 2.2 \\pm 0.7 ) \\times 10^ { -39 } $ cm $ ^2 $ -- consistent with the standard model prediction . this is the lightest nucleus for which cevns has been observed , which allows us to verify the expected neutron-number dependence of the cross section and to better constrain non-standard neutrino interactions . story_separator_special_tag xenonnt is a dark matter direct detection experiment , utilizing 5.9 t of instrumented liquid xenon , located at the infn laboratori nazionali del gran sasso . in this work , we predict the experimental background and project the sensitivity of xenonnt to the detection of weakly interacting massive particles ( wimps ) . the expected average differential background rate in the energy region of interest , corresponding to ( 1 , 13 ) kev and ( 4 , 50 ) kev for electronic and nuclear recoils , amounts to $ 13.1 \\pm 0.6 $ ( kev t y ) $ ^ { -1 } $ and $ ( 2.2 \\pm 0.5 ) \\times 10^ { -3 } $ ( kev t y ) $ ^ { -1 } $ , respectively , in a 4 t fiducial mass . we compute unified confidence intervals using the profile construction method , in order to ensure proper coverage . with the exposure goal of 20 t y , the expected sensitivity to spin-independent wimp-nucleon interactions reaches a cross-section of $ 1.4 \\times 10^ { -48 } $ cm $ ^2 $ for a 50 gev/c $ ^2 $ mass wimp story_separator_special_tag the xenon100 dark matter experiment : design , construction , calibration and 2010 search results with improved measurement of the scintillation response of liquid xenon to low-energy nuclear recoils story_separator_special_tag the lux-zeplin dark matter search aims to achieve a sensitivity to the wimp-nucleon spin-independent cross-section down to $ ( 1 - 2 ) \\times 10^ { -12 } $ pb at a wimp mass of 40 gev/c $ ^2 $ . this paper describes the simulations framework that , along with radioactivity measurements , was used to support this projection , and also to provide mock data for validating reconstruction and analysis software . of particular note are the event generators , which allow us to model the background radiation , and the detector response physics used in the production of raw signals , which can be converted into digitized waveforms similar to data from the operational detector . inclusion of the detector response allows us to process simulated data using the same analysis routines as developed to process the experimental data . story_separator_special_tag the dama liquid xenon ( lxe ) experimental setup has been running deep underground at the gran sasso national laboratory ( lngs ) of the i.n.f.n . for several years , and various upgradings to improve its performance and sensitivity have been carried out . during the '80s preliminary efforts were carried out by some of us in the framework of the xelidon experiment , developing various kinds of lxe detectors . several prototype detectors were built and some related results published ( see e.g . ref . 1 ) .
in particular , these efforts allowed us to identify the best lxe detector for wimp search ; in fact , considering the required : i ) low background features ; ii ) stability and reproducibility of the operating conditions ; iii ) relatively high duty cycle ; iv ) high global efficiency , a pure liquid xenon scintillator employing the smallest number of ( easily and highly radio-purifiable ) materials was chosen . in addition , the use of kr-free gas is mandatory to avoid the relevant kr content normally present in commercial xenon , as is the suitable choice of the purifier materials story_separator_special_tag the scintillation efficiency of liquid xenon for nuclear recoils has been measured to be nearly constant in the recoil energy range from 140 kev down to 5 kev . the average ratio of the efficiency for recoils to that for gamma-rays is found to be 0.19 ± 0.02 . story_separator_special_tag we propose a new technique for the calibration of nuclear recoils in large noble element dual-phase time projection chambers used to search for wimp dark matter in the local galactic halo . this technique provides an in situ measurement of the low-energy nuclear recoil response of the target media using the measured scattering angle between multiple neutron interactions within the detector volume . the low-energy reach and reduced systematics of this calibration have particular significance for the low-mass wimp sensitivity of several leading dark matter experiments . multiple strategies for improving this calibration technique are discussed , including the creation of a new type of quasi-monoenergetic neutron source with a minimum possible peak energy of 272 kev . we report results from a time-of-flight-based measurement of the neutron energy spectrum produced by an adelphi technology , inc. dd108 neutron generator , confirming its suitability for the proposed nuclear recoil calibration . story_separator_special_tag geant4 is a toolkit for simulating the passage of particles through matter . it includes a complete range of functionality including tracking , geometry , physics models and hits . the physics processes offered cover a comprehensive range , including electromagnetic , hadronic and optical processes , a large set of long-lived particles , materials and elements , over a wide energy range starting , in some cases , from 250 ev and extending in others to the tev energy range . it has been designed and constructed to expose the physics models utilised , to handle complex geometries , and to enable its easy adaptation for optimal use in different sets of applications . the toolkit is the result of a worldwide collaboration of physicists and software engineers . it has been created exploiting software engineering and object-oriented technology and implemented in the c++ programming language . it has been used in applications in particle physics , nuclear physics , accelerator design , space engineering and medical physics . story_separator_special_tag geant4 is a software toolkit for the simulation of the passage of particles through matter . it is used by a large number of experiments and projects in a variety of application domains , including high energy physics , astrophysics and space science , medical physics and radiation protection . its functionality and modeling capabilities continue to be extended , while its performance is enhanced . an overview of recent developments in diverse areas of the toolkit is presented .
these include performance optimization for complex setups ; improvements for the propagation in fields ; new options for event biasing ; and additions and improvements in geometry , physics processes and interactive capabilities . story_separator_special_tag a new data analysis method based on physical observables for wimp dark matter searches with noble liquid xe dual-phase tpcs is presented . traditionally , the nuclear recoil energy from a scatter in the liquid target has been estimated by means of the initial prompt scintillation light ( s1 ) produced at the interaction vertex . the ionization charge ( c2 ) , or its secondary scintillation ( s2 ) , is combined with the primary scintillation in log10 ( s2/s1 ) vs. s1 only as a discrimination parameter against electron recoil background . arguments in favor of c2 as the more reliable nuclear recoil energy estimator than s1 are presented . the new phase space of log10 ( s1/c2 ) vs. c2 is introduced as more efficient for nuclear recoil acceptance and exhibiting superior energy resolution . this is achieved without compromising the discrimination power of the lxe tpc , nor its 3d event reconstruction and fiducialization capability , as is the case for analyses that exploit only the ionization channel . finally , the concept of two independent energy estimators for background rejection is presented : e2 as the primary ( based on c2 ) and e1 story_separator_special_tag we report results from tests of 83mkr as a calibration source in liquid argon and liquid neon . 83mkr atoms are produced in the decay of 83rb , and a clear 83mkr scintillation peak at 41.5 kev appears in both liquids when filling our detector through zeolite coated with 83rb . based on this scintillation peak , we observe 6.0 photoelectrons/kev in liquid argon with a resolution of 8.2 % ( sigma/e ) and 3.0 photoelectrons/kev in liquid neon with a resolution of 19 % ( sigma/e ) . the observed peak intensity subsequently decays with the 83mkr half-life after stopping the fill , and we find evidence that the spatial location of 83mkr atoms in the chamber can be resolved . 83mkr will be a useful calibration source for liquid argon and liquid neon dark matter and solar neutrino detectors . story_separator_special_tag we measure the liquid argon scintillation response to electronic recoils in the energy range of 2.82 to 1274.6 kev at null electric field . the single-phase detector with a large optical coverage used in this measurement yields $ 12.8 \\pm 0.3 $ ( $ 11.2 \\pm 0.3 $ ) photoelectrons/kev for 511.0 kev gamma-ray events based on a photomultiplier tube single photoelectron response modeling with a gaussian plus an additional exponential term ( with only a gaussian term ) .
it is exposed to a variety of calibration sources such as 22na and 241am gamma-ray emitters , and a 252cf fast neutron emitter that induces quasimonoenergetic gamma rays through a ( n , n' gamma ) reaction with 19f in polytetrafluoroethylene story_separator_special_tag we report the first results of darkside-50 , a direct search for dark matter operating in the underground laboratori nazionali del gran sasso ( lngs ) and searching for the rare nuclear recoils possibly induced by weakly interacting massive particles ( wimps ) . the dark matter detector is a liquid argon time projection chamber with a ( 46.4 ± 0.7 ) kg active mass , operated inside a 30 t organic liquid scintillator neutron veto , which is in turn installed at the center of a 1 kt water cherenkov veto for the residual flux of cosmic rays . we report here the null results of a dark matter search for a ( 1422 ± 67 ) kg d exposure with an atmospheric argon fill . this is the most sensitive dark matter search performed with an argon target , corresponding to a 90 % cl upper limit on the wimp-nucleon spin-independent cross section of $ 6.1 \\times 10^ { -44 } $ cm $ ^2 $ for a wimp mass of 100 gev/c $ ^2 $ . story_separator_special_tag this letter reports the first results of a direct dark matter search with the deap-3600 single-phase liquid argon ( lar ) detector . the experiment was performed 2 km underground at snolab ( sudbury , canada ) utilizing a large target mass , with the lar target contained in a spherical acrylic vessel of 3600 kg capacity . the lar is viewed by an array of pmts , which would register scintillation light produced by rare nuclear recoil signals induced by dark matter particle scattering . an analysis of 4.44 live days ( fiducial exposure of 9.87 ton day ) of data taken during the initial filling phase demonstrates the best electronic recoil rejection using pulse-shape discrimination in argon , with leakage $ < 1.2 \\times 10^ { -7 } $ ( 90 % c.l . ) between 15 and 31 kevee . no candidate signal events are observed , which results in the leading limit on weakly interacting massive particle ( wimp ) -nucleon spin-independent cross section on argon , $ < 1.2 \\times 10^ { -44 } $ cm $ ^2 $ for a 100 gev/c $ ^2 $ wimp mass ( 90 % c.l . ) . story_separator_special_tag the scintillation light yield of liquid argon from nuclear recoils relative to electronic recoils has been measured as a function of recoil energy from 10 kevr up to 250 kevr . the scintillation efficiency , defined as the ratio of the nuclear recoil scintillation response to the electronic recoil response , is $ 0.25 \\pm 0.01 \\pm 0.01 $ ( correlated ) above 20 kevr . story_separator_special_tag we report on the first measurement of 39ar in argon from underground natural gas reservoirs . the gas stored in the us national helium reserve was found to contain a low level of 39ar . the ratio of 39ar to stable argon was found to be $ 4 \\times 10^ { -17 } $ ( 84 % c.l . ) , less than 5 % of the value in atmospheric argon ( 39ar/ar $ = 8 \\times 10^ { -16 } $ ) . the total quantity of argon currently stored in the national helium reserve is estimated at 1000 tons . 39ar represents one of the most important backgrounds in argon detectors for wimp dark matter searches .
the findings reported demonstrate the possibility of constructing large multi-ton argon detectors with low radioactivity suitable for wimp dark matter searches . story_separator_special_tag the preponderance of matter over antimatter in the early universe , the dynamics of the supernovae that produced the heavy elements necessary for life , and whether protons eventually decay -- these mysteries at the forefront of particle physics and astrophysics are key to understanding the early evolution of our universe , its current state , and its eventual fate . dune is an international world-class experiment dedicated to addressing these questions as it searches for leptonic charge-parity symmetry violation , stands ready to capture supernova neutrino bursts , and seeks to observe nucleon decay as a signature of a grand unified theory underlying the standard model . central to achieving dune 's physics program is a far detector that combines the many tens-of-kiloton fiducial mass necessary for rare event searches with sub-centimeter spatial resolution in its ability to image those events , allowing identification of the physics signatures among the numerous backgrounds . in the single-phase liquid argon time-projection chamber ( lartpc ) technology , ionization charges drift horizontally in the liquid argon under the influence of an electric field towards a vertical anode , where they are read out with fine granularity . a photon detection system supplements story_separator_special_tag the microboone continuous readout stream is a parallel readout of the microboone liquid argon time projection chamber ( lartpc ) which enables detection of non-beam events such as those from a supernova neutrino burst . the low energies of the supernova neutrinos and the intense cosmic-ray background flux due to the near-surface detector location make triggering on these events very challenging . instead , microboone relies on a delayed trigger generated by snews ( the supernova early warning system ) for detecting supernova neutrinos . the continuous readout of the lartpc generates large data volumes , and requires the use of real-time compression algorithms ( zero suppression and huffman compression ) implemented in an fpga ( field-programmable gate array ) in the readout electronics . we present the results of the optimization of the data reduction algorithms , and their operational performance . to demonstrate the capability of the continuous stream to detect low-energy electrons , a sample of michel electrons from stopping cosmic-ray muons is reconstructed and compared to a similar sample from the lossless triggered readout stream . story_separator_special_tag the deep underground neutrino experiment ( dune ) , a 40-kton underground liquid argon time projection chamber experiment , will be sensitive to the electron-neutrino flavor component of the burst of neutrinos expected from the next galactic core-collapse supernova . such an observation will bring unique insight into the astrophysics of core collapse as well as into the properties of neutrinos . the general capabilities of dune for neutrino detection in the relevant few- to few-tens-of-mev neutrino energy range will be described . as an example , dune 's ability to constrain the $ \\nu_e $ spectral parameters of the neutrino burst will be considered .
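the microboone continuous-readout scheme described above pairs zero suppression with huffman coding to tame the data volume . a minimal sketch of the zero-suppression stage in python ( the threshold and padding values are hypothetical , not microboone 's actual fpga parameters ) :

def zero_suppress(waveform, baseline, threshold=8, pad=3):
    """keep only samples near threshold crossings ; drop the rest .
    returns a list of ( start_index , samples ) regions of interest ."""
    above = [abs(s - baseline) > threshold for s in waveform]
    regions, i = [], 0
    while i < len(waveform):
        if above[i]:
            start = max(0, i - pad)  # keep a few pre-samples for later fits
            j = i
            while j < len(waveform) and above[j]:
                j += 1
            stop = min(len(waveform), j + pad)  # and a few post-samples
            regions.append((start, waveform[start:stop]))
            i = stop
        else:
            i += 1
    return regions

# a flat baseline of 400 adc with one small pulse : only the pulse survives
wf = [400] * 20 + [430, 460, 445, 410] + [400] * 20
print(zero_suppress(wf, baseline=400))

the lossless huffman stage would then entropy-code the differences between retained samples , which cluster near zero for slowly varying waveforms .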
story_separator_special_tag we present a measurement of scintillation efficiency for a few tens of kev nuclear recoils ( nr ) with a liquid argon time projection chamber under electric fields ranging from 0 to 3 kv/cm . the calibration data are taken with a 252cf radioactive source . observed scintillation and electroluminescence spectra are simultaneously fit with spectra derived from geant4-based monte carlo simulation and an nr model . the scintillation efficiency extracted from the fit is reported as a function of recoil energy and electric field . this result can be used for designing the detector and for the interpretation of experimental data in searching for scintillation and ionization signals induced by wimp dark matter . story_separator_special_tag it has been demonstrated that the liquid argon scintillation detector is an excellent particle detector for use in various physics experiments , particularly those seeking evidence of wimp dark matter . the detector observes a primary scintillation signal ( s1 ) and/or a secondary electroluminescence signal ( s2 ) . it is crucial that the properties of the detector are understood comprehensively so that such projects reduce systematic uncertainty and improve sensitivity . this work covers the recent measurements , taken using the liquid argon test facility at waseda university , of the liquid argon scintillation and ionization responses for nuclear and electronic recoils . additionally , a basic study of the gaseous argon luminescence signal , which involves the s2 signal , is presented herein . the luminescence process of the s2 signal , based on this measurement , is then discussed along with a suggested new mechanism called neutral bremsstrahlung . story_separator_special_tag fano factors in liquid argon , krypton , xenon and xenon-doped liquid argon are estimated from the fano formula by using the parameters in the energy balance equation for the absorbed energy of ionizing radiation . as a result , it is shown that the values for liquid argon , krypton and xenon are smaller than those in the gas phase and the value for xenon-doped liquid argon is smaller than that for liquid argon , as in an argon-molecular gas mixture . in particular , the value for liquid xenon is extremely small , i.e . about 0.05 , which is comparable to that for ge ( li ) detectors . using the so obtained fano factor and an electronic noise level which can easily be achieved , the fwhm in the liquid xenon ionization chamber is estimated to be about 3 kev for 1 mev electrons . story_separator_special_tag scintillation yields ( scintillation intensity per unit absorbed energy ) in liquid argon for ionizing particles are reviewed as a function of let for the particles . the maximum scintillation yield , which is obtained for relativistic heavy ions from ne to la , is about 1.2 times larger than that for gamma rays in nai ( tl ) crystal . in the low let region , the scintillation yields for relativistic electrons , protons and he ions are 10 – 20 % lower than the maximum yield . this tendency can be explained by taking into account the existence of the electrons which have escaped from their parent ions . in the high let region , a quenching effect due to high ionization density is observed for alpha particles , fission fragments and relativistic au ions .
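the let dependence reviewed above is often parametrized with birks ' saturation law , in which the light yield per unit energy falls as 1 / ( 1 + kb de/dx ) . a minimal sketch in python ( the birks constant below is an illustrative assumption , not a fitted liquid argon value ) :

KB = 7.4e-4  # assumed birks constant , g / ( mev cm^2 )

def birks_relative_yield(dedx_mev_cm2_per_g):
    """fraction of the low-let light yield surviving at a given let ."""
    return 1.0 / (1.0 + KB * dedx_mev_cm2_per_g)

# low-let electrons are barely quenched ; densely ionizing recoils are :
for dedx in (3.0, 200.0, 2000.0):
    print(f"de/dx = {dedx:7.1f} mev cm^2/g -> relative yield {birks_relative_yield(dedx):.2f}")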
story_separator_special_tag the darkside staged program utilizes a two-phase time projection chamber ( tpc ) with liquid argon as the target material for the scattering of dark matter particles . efficient background reduction is achieved using low radioactivity underground argon as well as several experimental handles such as pulse shape , ratio of ionization over scintillation signal , 3d event reconstruction , and active neutron and muon vetoes . the darkside-10 prototype detector has proven high scintillation light yield , which is a particularly important parameter as it sets the energy threshold for the pulse shape discrimination technique . the darkside-50 detector system , currently in commissioning phase at the gran sasso underground laboratory , will reach a sensitivity to dark matter spin-independent scattering cross section of $ 10^ { -45 } $ cm $ ^2 $ within 3 years of operation . story_separator_special_tag we describe the first demonstration of a sub-kev electron recoil energy threshold in a dual-phase liquid argon time projection chamber . this is an important step in an effort to develop a detector capable of identifying the ionization signal resulting from nuclear recoils with energies of order a few kev and below . we obtained this result by observing the peaks in the energy spectrum at 2.82 kev and 0.27 kev , following the k- and l-shell electron capture decay of 37ar , respectively . the 37ar source preparation is described in detail , since it enables calibration that may also prove useful in dark matter direct detection experiments . an internally placed 55fe x-ray source simultaneously provided another calibration point at 5.9 kev . we discuss the ionization yield and electron recombination in liquid argon at those three calibration energies . story_separator_special_tag we report on wimp search results of the xenon100 experiment , combining three runs summing up to 477 live days from january 2010 to january 2014 . data from the first two runs were already published . a blind analysis was applied to the last run recorded between april 2013 and january 2014 prior to combining the results . the ultra-low electromagnetic background of the experiment , $ \\sim 5 \\times 10^ { -3 } $ events/ ( kevee kg day ) before electronic recoil rejection , together with the increased exposure of 48 kg yr improves the sensitivity . a profile likelihood analysis using an energy range of ( 6.6 – 43.3 ) kevnr sets a limit on the elastic , spin-independent wimp-nucleon scattering cross section for wimp masses above 8 gev/c $ ^2 $ , with a minimum of $ 1.1 \\times 10^ { -45 } $ cm $ ^2 $ at 50 gev/c $ ^2 $ and 90 % confidence level . we also report updated constraints on the elastic , spin-dependent wimp-nucleon story_separator_special_tag cryogenic noble liquids emerged in the previous decade as one of the best media to perform wimp dark matter searches , in particular due to the possibility to scale detector volumes to multiton sizes . the warp experiment was then developed as one of the first to implement the idea of coupling argon in liquid and gas phase , in order to discriminate gamma-interactions from nuclear recoils and then achieve reliable background rejection . since its construction , other projects spawned , employing argon and xenon and following its steps .
the warp 100l detector was assembled in 2008 at the gran sasso national laboratories ( lngs ) , as the final step of a years-long r & d programme , aimed at characterising the technology of argon in double phase for dark matter detection . though it never actually performed a physics run , a technical run was taken in 2011 , to characterise the detector response . story_separator_special_tag the darkside-50 direct-detection dark matter experiment is a dual-phase argon time projection chamber operating at laboratori nazionali del gran sasso . this paper reports on the blind analysis and spin-independent dark matter-nucleon coupling results from a 532.4 live-days exposure , using a target of low-radioactivity argon extracted from underground sources . the background-free result in the dark matter selection box gave no evidence for dark matter . the present blind analysis sets a 90 % c.l . upper limit on the dark matter-nucleon spin-independent cross section of $ 1.1 \\times 10^ { -44 } $ cm $ ^2 $ ( $ 3.8 \\times 10^ { -44 } $ cm $ ^2 $ , $ 3.4 \\times 10^ { -43 } $ cm $ ^2 $ ) for a wimp mass of 100 gev/c $ ^2 $ ( 1 tev/c $ ^2 $ , 10 tev/c $ ^2 $ ) . story_separator_special_tag a liquid argon time projection chamber , constructed for the argon response to ionization and scintillation ( aris ) experiment , is exposed to the highly collimated and quasimonoenergetic licorne neutron beam at the institut de physique nucleaire d'orsay ( ipno ) in order to study the scintillation response to nuclear and electronic recoils . an array of liquid scintillator detectors , arranged around the apparatus , tags scattered neutrons and selects nuclear recoil energies in the [ 7 , 120 ] kev energy range . the relative scintillation efficiency of nuclear recoils is measured to high precision at null field , and the ion-electron recombination probability is extracted for a range of applied electric fields . single-scattered compton electrons , produced by gammas emitted from the deexcitation of 7li* in coincidence with the beam pulse , along with calibration gamma sources , are used to extract the recombination probability as a function of energy and electron drift field . the aris results are compared with three recombination probability parametrizations ( thomas-imel , doke-birks , and paris ) , allowing for the definition of a fully comprehensive model of the liquid argon response to nuclear and electronic story_separator_special_tag the deep underground neutrino experiment ( dune ) , a 40-kton underground liquid argon time-projection-chamber detector , will have unique sensitivity to the electron flavor component of a core-collapse supernova neutrino burst . we present the expected capabilities of dune for measurements of neutrinos in the few-tens-of-mev range relevant for supernova detection and its corresponding sensitivity to both neutrino physics and supernova astrophysics . recent progress and some outstanding issues are highlighted . story_separator_special_tag we present a search for core-collapse supernovae in the milky way galaxy , using the miniboone neutrino detector . no evidence is found for core-collapse supernovae occurring in our galaxy in the period from december 14 , 2004 to july 31 , 2008 , corresponding to 98 % live time for collection . we set a limit on the core-collapse supernova rate out to a distance of 13.4 kpc to be less than 0.69 supernovae per year at 90 % c.l .
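a null search like the miniboone one above turns live time directly into a rate limit via poisson statistics . a minimal sketch of the arithmetic in python ( the effective live time below is an illustrative assumption , not the published exposure ) :

import math

# with zero observed events , the 90 % c.l . upper limit on the mean is
# the mu solving exp(-mu) = 0.10 , i.e . mu ~ 2.30 events .
N_UL_90 = -math.log(0.10)

live_years = 3.3  # assumed effective live time after the 98 % duty factor
print(f"rate < {N_UL_90 / live_years:.2f} supernovae per year at 90 % c.l .")
# ~ 0.70 / year , the same order as the published 0.69 / year limit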
story_separator_special_tag this document presents the conceptual design report ( cdr ) put forward by an international neutrino community to pursue the deep underground neutrino experiment at the long-baseline neutrino facility ( lbnf/dune ) , a groundbreaking science experiment for long-baseline neutrino oscillation studies and for neutrino astrophysics and nucleon decay searches . the dune far detector will be a very large modular liquid argon time-projection chamber ( lartpc ) located deep underground , coupled to the lbnf multi-megawatt wide-band neutrino beam . dune will also have a high-resolution and high-precision near detector . story_separator_special_tag the lariat liquid argon time projection chamber , placed in a tertiary beam of charged particles at the fermilab test beam facility , has collected large samples of pions , muons , electrons , protons , and kaons in the momentum range 300 – 1400 mev/c . this paper describes the main aspects of the detector and beamline , and also reports on calibrations performed for the detector and beamline components . story_separator_special_tag precise calorimetric reconstruction of 5 – 50 mev electrons in liquid argon time projection chambers ( lartpcs ) will enable the study of astrophysical neutrinos in dune and could enhance the physics reach of oscillation analyses . liquid argon scintillation light has the potential to improve energy reconstruction for low-energy electrons over charge-based measurements alone . here we demonstrate light-augmented calorimetry for low-energy electrons in a single-phase lartpc using a sample of michel electrons from decays of stopping cosmic muons in the lariat experiment at fermilab . michel electron energy spectra are reconstructed using both a traditional charge-based approach as well as a more holistic approach that incorporates both charge and light . a maximum-likelihood fitter , using lariat 's well-tuned simulation , is developed for combining these quantities to achieve optimal energy resolution . a sample of isolated electrons is simulated to better determine the energy resolution expected for astrophysical electron-neutrino charged-current interaction final states . in lariat , which has very low wire noise and an average light yield of 18 pe/mev , an energy resolution of $ \\sigma / e \\approx 9.3 \\% / \\sqrt { e } \\oplus 1.3 \\% $ is achieved . samples are then generated with varying wire noise levels and light yields to gauge story_separator_special_tag the measurement of the $ w $ value , the average energy expended per ion pair , in liquid argon for internal conversion electrons emitted from 207bi , is carried out by the electron-pulse method .
comparing with the $ w $ value ( 26.09 ev ) for alpha particles in a gas mixture of argon ( 95 % ) and methane ( 5 % ) , the $ w $ value in liquid argon is determined to be $ 23.6^ { +0.5 } _ { -0.3 } $ ev . this value is clearly smaller than that in gaseous argon ( 26.4 ev ) . such a reduction of the $ w $ value in liquid argon can be explained by assuming the existence of a conduction band in liquid argon as in solid argon . under this assumption , the $ w $ value in liquid argon is estimated by applying the energy-balance method , which was used for the rare gases by platzman . the calculated value thus obtained is in good agreement with the experimental result story_separator_special_tag we have measured the electric field dependence of electron-ion recombination in liquid argon and xenon . the observed relationship is incompatible with onsager 's geminate theory [ phys . rev . 54 , 554 ( 1938 ) ] of recombination so we have developed a new , single-parameter model to describe the data . the model is based on realistic assumptions about liquid argon and xenon and yields a simple , analytic result . this work is part of a program to build a high-resolution liquid-xenon detector . story_separator_special_tag there is a fundamental limit to the resolution that can be achieved in liquid-argon and liquid-xenon ionization detectors due to statistical fluctuations in the rate of charge collection along the track of a primary particle . the limit is typically much larger than the resolution expected from poisson statistics . we present resolution data in both argon and xenon , a model to describe the data , and a discussion of how the model parameters can be manipulated in order to achieve even greater resolution . story_separator_special_tag the energy resolution of liquid rare gas ( lrg ) ionization spectrometers is a very important and still not fully understood parameter . there is no doubt that at least part of the contribution to the overall energy resolution is determined by the nonlinearity of the free-ion yield . two approaches to defining the free-ion yield are discussed : the jaffe approach and birks ' law . experimental results known up to now are analyzed to extract parameters that can be used for energy resolution calculations . story_separator_special_tag electron recombination in liquid argon ( lar ) is studied by means of charged particle tracks collected in various icarus liquid argon tpc prototypes . the dependence of the recombination on the particle stopping power has been fitted with a birks functional dependence . the simulation of the process of electron recombination in monte carlo calculations is discussed . a quantitative comparison with previously published data is carried out . story_separator_special_tag larsoft is a set of detector-independent software tools for the simulation , reconstruction and analysis of data from liquid argon ( lar ) neutrino experiments . the common features of lar time projection chambers ( tpcs ) enable sharing of algorithm code across detectors of very different size and configuration . larsoft is currently used in production simulation and reconstruction by the argoneut , dune , lariat , microboone , and sbnd experiments . the software suite offers a wide selection of algorithms and utilities , including those for associated photo-detectors and the handling of auxiliary detectors outside the tpcs .
available algorithms cover the full range of simulation and reconstruction , from raw waveforms to high-level reconstructed objects , event topologies and classification . the common code within larsoft is contributed by adopting experiments , which also provide detector-specific geometry descriptions and code for the treatment of electronic signals . larsoft is also a collaboration of experiments , fermilab and associated software projects which cooperate in setting requirements , priorities , and schedules . in this talk , we outline the general architecture of the software and the interaction with external libraries and detector-specific code . we also describe story_separator_special_tag a re-encounter model is proposed for electron-ion recombination on low-let tracks in liquid argon . consistency with the measured escape probability for fields > 1 kv cm $ ^ { -1 } $ requires an encounter recombination probability of about 0.01 . the initial e-ion separation ( 1500 – 1800 nm ) is reconciled with thermalization by elastic collisions . due to homogeneous recombination the experimental values fall below the calculated ones for fields less than 1 kv cm $ ^ { -1 } $ . story_separator_special_tag a gridded ionization chamber was used to study the energy resolution in liquid argon with electrons from a 207bi radioactive source . argon was purified in the gas phase with a simple and reliable system , capable of reducing the impurity level below 1 ppb o2 equivalent , as inferred by a pulse shape analysis of the ionization signals . the electron spectrum was measured at different drift fields , up to 10.9 kv/cm . at this maximum field , a total energy resolution of 32 kev ( fwhm ) , corresponding to a noise-subtracted energy resolution of 26 kev ( fwhm ) , was obtained for the 976 kev conversion electron line . this value is the best reported so far in liquid argon but is still a factor of seven worse than the theoretical limit set by the fano factor . the reasons for this discrepancy are discussed . story_separator_special_tag mev-scale energy depositions by low-energy photons produced in neutrino-argon interactions have been identified and reconstructed in argoneut liquid argon time projection chamber ( lartpc ) data . story_separator_special_tag we present the results of a search for dark matter weakly interacting massive particles ( wimps ) in the mass range below 20 gev/c $ ^2 $ using a target of low-radioactivity argon with a 6786.0 kg d exposure . the data were obtained using the darkside-50 apparatus at laboratori nazionali del gran sasso . the analysis is based on the ionization signal , for which the darkside-50 time projection chamber is fully efficient at 0.1 kevee . the observed rate in the detector at 0.5 kevee is about 1.5 event/kevee/kg/d and is almost entirely accounted for by known background sources . we obtain a 90 % c.l . exclusion limit above 1.8 gev/c $ ^2 $ for the spin-independent cross section of dark matter wimps on nucleons , extending the exclusion region for dark matter below previous limits in the range 1.8 – 6 gev/c $ ^2 $ . story_separator_special_tag release of coherent collaboration data from the first detection of coherent elastic neutrino-nucleus scattering ( cevns ) on argon . this release corresponds with the results of `` analysis a '' published in akimov et al. , arxiv:2003.10630 [ nucl-ex ] .
data are shared in a binned , text-based format representing both `` signal '' and `` backgrounds '' along with associated uncertainties such that the included data can be used to perform independent analyses . this document describes the contents of the data release as well as guidance on the use of the data . the included example code in c++ ( root ) and python shows one possible use of the included data . story_separator_special_tag experiments searching for weakly interacting massive particles with noble gases such as liquid argon require very low detection thresholds for nuclear recoils . a determination of the scintillation efficiency is crucial to quantify the response of the detector at low energy . we report the results obtained with a small liquid argon cell using a monoenergetic neutron beam produced by a deuterium-deuterium fusion source . the light yield relative to electrons was measured for six argon recoil energies between 11 and 120 kev at zero electric drift field . story_separator_special_tag in the framework of developments for liquid argon dark matter detectors we assembled a laboratory setup to scatter neutrons on a small liquid argon target . the neutrons are produced mono-energetically ( ekin = 2.45 mev ) by nuclear fusion in a deuterium plasma and are collimated onto a 3 liquid argon cell operating in single-phase mode ( zero electric field ) . organic liquid scintillators are used to tag scattered neutrons and to provide a time-of-flight measurement . the setup is designed to study light pulse shapes and scintillation yields from nuclear and electronic recoils as well as from alpha particles at working points relevant for dark matter searches . liquid argon offers the possibility to scrutinise scintillation yields in noble liquids with respect to the population strength of the two fundamental excimer states . here we present experimental methods and first results from recent data towards such studies . story_separator_special_tag we have exposed a dual-phase liquid argon time projection chamber ( lar-tpc ) to a low energy pulsed narrow-band neutron beam , produced at the notre dame institute for structure and nuclear astrophysics , to study the scintillation light yield of recoiling nuclei . liquid scintillation counters were arranged to detect and identify neutrons scattered in the lar-tpc and to select the energy of the recoiling nuclei . we report the observation of a significant dependence ( up to 32 % ) of liquid argon scintillation from nuclear recoils of energies between 10.8 and 49.9 kev on the drift field . the field dependence is stronger at lower energies . since it is the first measurement of such an effect in liquid argon , this observation is important because , to date , estimates of the sensitivity of lar-tpc dark matter searches are based on the assumption that the electric field has only a small effect on the light yield from nuclear recoils . story_separator_special_tag we have measured the scintillation and ionization yield of recoiling nuclei in liquid argon as a function of applied electric field by exposing a dual-phase liquid argon time projection chamber ( lar-tpc ) to a low energy pulsed narrow band neutron beam produced at the notre dame institute for structure and nuclear astrophysics . liquid scintillation counters were arranged to detect and identify neutrons scattered in the tpc and to select the energy of the recoiling nuclei .
we report measurements of the scintillation yields for nuclear recoils with energies from 10.3 to 57.3 kev and for median applied electric fields from 0 to 970 v/cm . for the ionization yields , we report measurements from 16.9 to 57.3 kev and for electric fields from 96.4 to 486 v/cm . we also report the observation of an anticorrelation between scintillation and ionization from nuclear recoils , which is similar to the anticorrelation between scintillation and ionization from electron recoils . assuming that the energy loss partitions into excitons and ion pairs from story_separator_special_tag the scintillation yield for recoil ar ions of 5 to 250 kev energy in liquid argon has been evaluated for direct dark matter searches . lindhard theory is taken for estimating nuclear quenching . a theoretical model based on a biexcitonic diffusion-reaction mechanism is applied for electronic ( scintillation ) quenching . the electronic let ( linear energy transfer ) is evaluated and used to obtain the initial track structure due to recoil ar ions . the results are compared with experimental values reported for nuclear recoils from neutrons . the dependence of scintillation and ionization on the field is discussed , with notes on radiation physics and chemistry for dark matter searches . story_separator_special_tag the darkside-50 experiment at the laboratori nazionali del gran sasso is a search for dark matter using a dual phase time projection chamber with 50 kg of low radioactivity argon as target . light signals from interactions in the argon are detected by a system of 38 photo-multiplier tubes ( pmts ) , 19 above and 19 below the tpc volume inside the argon cryostat . we describe the electronics which processes the signals from the photo-multipliers , the trigger system which identifies events of interest , and the data-acquisition system which records the data for further analysis . the electronics include resistive voltage dividers on the pmts , custom pre-amplifiers mounted directly on the pmt voltage dividers in the liquid argon , and custom amplifier/discriminators ( at room temperature ) . after amplification , the pmt signals are digitized in caen waveform digitizers , and caen logic modules are used to construct the trigger . the data acquisition system for the tpc is based on the fermilab `` artdaq '' software . the system has been in operation since early 2014 . story_separator_special_tag scintillation efficiency of low-energy nuclear recoils in noble liquids plays a crucial role in interpreting results from some direct searches for weakly interacting massive particle ( wimp ) dark matter . however , the cause of a reduced scintillation efficiency relative to electronic recoils in noble liquids remains unclear at the moment . we attribute such a reduction of scintillation efficiency to two major mechanisms : ( 1 ) energy loss and ( 2 ) scintillation quenching . the former is commonly described by lindhard 's theory and the latter by birks ' saturation law . we propose to combine these two to explain the observed reduction of scintillation yield for nuclear recoils in noble liquids . birks ' constants kb for argon , neon and xenon determined from experimental data are used to predict the noble liquid scintillator 's response to low-energy nuclear recoils and low-energy electrons .
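a minimal sketch of the lindhard-times-birks combination proposed in the abstract above . the lindhard parametrization below is the standard one ; the birks constant and the stopping-power input are illustrative placeholders , not the fitted values from the paper :

```python
import numpy as np

def lindhard_factor(e_kev, z=18, a=40):
    # fraction of recoil energy given to electrons ( lindhard theory , argon defaults )
    eps = 11.5 * e_kev * z ** (-7.0 / 3.0)          # reduced energy
    g = 3.0 * eps ** 0.15 + 0.7 * eps ** 0.6 + eps  # standard parametrization of g(eps)
    k = 0.133 * z ** (2.0 / 3.0) / np.sqrt(a)
    return k * g / (1.0 + k * g)

def birks_quench(dedx, kb=7.4e-4):
    # birks' saturation law ; kb and the dE/dx units are placeholders here
    return 1.0 / (1.0 + kb * dedx)

# total scintillation efficiency relative to electrons ~ lindhard * birks
e_recoil = 50.0   # keV nuclear recoil
dedx = 1.0e3      # electronic stopping power , placeholder value
print(f"q_total ~ {lindhard_factor(e_recoil) * birks_quench(dedx):.3f}")
```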
we find that energy loss due to nuclear stopping power , which contributes little to ionization and excitation , is the dominant reduction mechanism in scintillation efficiency for nuclear recoils , but that significant additional quenching results from the nonlinear response of scintillation to the ionization density . story_separator_special_tag deap-3600 uses liquid argon contained in a spherical acrylic vessel as a target medium to perform a sensitive spin-independent dark matter search . argon scintillates in the vacuum ultraviolet spectrum , which requires wavelength shifting to convert the vuv photons to visible light so that they can be transmitted through the acrylic light guides and detected by the surrounding photomultiplier tubes . the wavelength shifter 1,1,4,4-tetraphenyl-1,3-butadiene was evaporatively deposited onto the inner surface of the acrylic vessel under vacuum . two evaporations were performed on the deap-3600 acrylic vessel with an estimated coating thickness of 3.00 ± 0.02 μm , which is successfully wavelength shifting with liquid argon in the detector . details on the wavelength shifter coating requirements , deposition source , testing , and final performance are presented . story_separator_special_tag a large number of current and future experiments in neutrino and dark matter detection use the scintillation light from noble elements as a mechanism for measuring energy deposition . the scintillation light from these elements is produced in the extreme ultraviolet ( euv ) range , from 60 - 200 nm . currently , the most practical technique for observing light at these wavelengths is to surround the scintillation volume with a thin film of tetraphenyl butadiene ( tpb ) to act as a fluor . the tpb film absorbs euv photons and reemits visible photons .
despite recent work in reading comprehension ( rc ) , progress has been mostly limited to english due to the lack of large-scale datasets in other languages . in this work , we introduce the first rc system for languages without rc training data . given a target language without rc training data and a pivot language with rc training data ( e.g . english ) , our method leverages existing rc resources in the pivot language by combining a competitive rc model in the pivot language with an attentive neural machine translation ( nmt ) model . we first translate the data from the target to the pivot language , and then obtain an answer using the rc model in the pivot language . finally , we recover the corresponding answer in the original language using soft-alignment attention scores from the nmt model . we create evaluation sets of rc data in two non-english languages , namely japanese and french , to evaluate our method . experimental results on these datasets show that our method significantly outperforms a back-translation baseline of a state-of-the-art product-level machine translation system . story_separator_special_tag there is a practically unlimited amount of natural language data available . still , recent work in text comprehension has focused on datasets which are small relative to current computing possibilities . this article makes a case for the community to move to larger data , and as a step in that direction it proposes the booktest , a new dataset similar to the popular children 's book test ( cbt ) but more than 60 times larger . we show that training on the new data improves the accuracy of our attention-sum reader model on the original cbt test data by a much larger margin than many recent attempts to improve the model architecture . on one version of the dataset our ensemble even exceeds the human baseline provided by facebook . we then show in our own human study that there is still space for further improvement . story_separator_special_tag machine reading comprehension is a challenging task and a hot topic in natural language processing . its goal is to develop systems to answer the questions regarding a given context . in this paper , we present a comprehensive survey on different aspects of machine reading comprehension systems , including their approaches , structures , input/outputs , and research novelties . we illustrate the recent trends in this field based on 241 reviewed papers from 2016 to 2020 . our investigations demonstrate that the focus of research has changed in recent years from answer extraction to answer generation , from single to multi-document reading comprehension , and from learning from scratch to using pre-trained embeddings . we also discuss the popular datasets and the evaluation metrics in this field . the paper ends with investigating the most cited papers and their contributions . story_separator_special_tag in this paper , we train a semantic parser that scales up to freebase . instead of relying on annotated logical forms , which are especially expensive to obtain at large scale , we learn from question-answer pairs . the main challenge in this setting is narrowing down the huge number of possible logical predicates for a given question . we tackle this problem in two ways : first , we build a coarse mapping from phrases to predicates using a knowledge base and a large text corpus . second , we use a bridging operation to generate additional predicates based on neighboring predicates .
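returning to the pivot-language rc method described above : its final step projects a pivot-language answer span back to the target language via the nmt soft-alignment attention . a minimal sketch , assuming the row-normalized attention matrix ( pivot positions by target positions ) is given :

```python
import numpy as np

def project_span(attn, pivot_start, pivot_end):
    # map each pivot answer token to its most-attended target position ,
    # then take the span covering all aligned positions
    aligned = attn[pivot_start : pivot_end + 1].argmax(axis=1)
    return int(aligned.min()), int(aligned.max())

# toy attention over 4 pivot tokens and 5 target tokens
attn = np.random.default_rng(0).random((4, 5))
attn /= attn.sum(axis=1, keepdims=True)  # rows are probability distributions
print(project_span(attn, 1, 2))
```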
on the dataset of cai and yates ( 2013 ) , despite not having annotated logical forms , our system outperforms their state-of-the-art parser . additionally , we collected a more realistic and challenging dataset of question-answer pairs , on which our system improves over a natural baseline . story_separator_special_tag training large-scale question answering systems is complicated because training sources usually cover a small portion of the range of possible questions . this paper studies the impact of multitask and transfer learning for simple question answering ; a setting for which the reasoning required to answer is quite easy , as long as one can retrieve the correct evidence given a question , which can be difficult in large-scale conditions . to this end , we introduce a new dataset of 100k questions that we use in conjunction with existing benchmarks . we conduct our study within the framework of memory networks ( weston et al. , 2015 ) because this perspective allows us to eventually scale up to more complex reasoning , and show that memory networks can be successfully trained to achieve excellent performance . story_separator_special_tag the dataset , provided by spinn3r.com , is a set of 44 million blog posts made between august 1st and october 1st , 2008 . each post includes the text as syndicated , as well as metadata such as the blog 's homepage , timestamps , etc . the data is formatted in xml and is further arranged into tiers approximating to some degree search engine ranking . the total size of the dataset is 142 gb uncompressed ( 27 gb compressed ) . this dataset spans a number of big news events ( the olympics ; both us presidential nominating conventions ; the beginnings of the financial crisis ; ... ) as well as everything else you might expect to find posted to blogs . to get access to the spinn3r dataset , please download and sign the usage agreement , and email it to dataset-request ( at ) icwsm.org . once your form is processed ( usually within 1-3 days ) , you will be sent a url and password where you can download the collection . here is a sample of blog posts from the collection . the xml format is described on the spinn3r website . story_separator_special_tag recently , multilingual question answering has become a crucial research topic , and it is receiving increased interest in the nlp community . however , the unavailability of large-scale datasets makes it challenging to train multilingual qa systems with performance comparable to the english ones . in this work , we develop the translate align retrieve ( tar ) method to automatically translate the stanford question answering dataset ( squad ) v1.1 to spanish . we then used this dataset to train spanish qa systems by fine-tuning a multilingual-bert model . finally , we evaluated our qa models with the recently proposed mlqa and xquad benchmarks for cross-lingual extractive qa . experimental results show that our models outperform the previous multilingual-bert baselines , achieving new state-of-the-art values of 68.1 f1 on the spanish mlqa corpus and 77.6 f1 on the spanish xquad corpus . the resulting synthetically generated squad-es v1.1 corpus , with almost 100 % of the data contained in the original english version , is , to the best of our knowledge , the first large-scale qa training resource for spanish . story_separator_special_tag understanding stories is a challenging reading comprehension problem for machines as it requires reading a large volume of text and following long-range dependencies .
in this paper , we introduce the shmoop corpus : a dataset of 231 stories that are paired with detailed multi-paragraph summaries for each individual chapter ( 7,234 chapters ) , where the summary is chronologically aligned with respect to the story chapter . from the corpus , we construct a set of common nlp tasks , including cloze-form question answering and a simplified form of abstractive summarization , as benchmarks for reading comprehension on stories . we then show that the chronological alignment provides a strong supervisory signal that learning-based methods can exploit , leading to significant improvements on these tasks . we believe that the unique structure of this corpus provides an important foothold towards making machine story comprehension more approachable . story_separator_special_tag commonsense reasoning is a critical ai capability , but it is difficult to construct challenging datasets that test common sense . recent neural question answering systems , based on large pre-trained models of language , have already achieved near-human-level performance on commonsense knowledge benchmarks . these systems do not possess human-level common sense , but are able to exploit limitations of the datasets to achieve human-level scores . we introduce the codah dataset , an adversarially-constructed evaluation dataset for testing common sense . codah forms a challenging extension to the recently-proposed swag dataset , which tests commonsense knowledge using sentence-completion questions that describe situations observed in video . to produce a more difficult dataset , we introduce a novel procedure for question acquisition in which workers author questions designed to target weaknesses of state-of-the-art neural question answering systems . workers are rewarded for submissions that models fail to answer correctly both before and after fine-tuning ( in cross-validation ) . we create 2.8k questions via this procedure and evaluate the performance of multiple state-of-the-art question answering systems on our dataset . we observe a significant gap between human performance , which is 95.3 % , and the performance of the story_separator_special_tag we present quac , a dataset for question answering in context that contains 14k information-seeking qa dialogs ( 100k questions in total ) . the dialogs involve two crowd workers : ( 1 ) a student who poses a sequence of freeform questions to learn as much as possible about a hidden wikipedia text , and ( 2 ) a teacher who answers the questions by providing short excerpts from the text . quac introduces challenges not found in existing machine comprehension datasets : its questions are often more open-ended , unanswerable , or only meaningful within the dialog context , as we show in a detailed qualitative evaluation . we also report results for a number of reference models , including a recent state-of-the-art reading comprehension architecture extended to model dialog context . our best model underperforms humans by 20 f1 , suggesting that there is significant room for future work on this data . dataset , baseline , and leaderboard available at this http url story_separator_special_tag we present a framework for question answering that can efficiently scale to longer documents while maintaining or even improving the performance of state-of-the-art models .
while most successful approaches for reading comprehension rely on recurrent neural networks ( rnns ) , running them over long documents is prohibitively slow because it is difficult to parallelize over sequences . inspired by how people first skim the document , identify relevant parts , and carefully read these parts to produce an answer , we combine a coarse , fast model for selecting relevant sentences and a more expensive rnn for producing the answer from those sentences . we treat sentence selection as a latent variable trained jointly from the answer only using reinforcement learning . experiments demonstrate state-of-the-art performance on a challenging subset of the wikireading dataset and on a new dataset , while speeding up the model by 3.5x-6.7x . story_separator_special_tag in this paper we study yes/no questions that are naturally occurring , meaning that they are generated in unprompted and unconstrained settings . we build a reading comprehension dataset , boolq , of such questions , and show that they are unexpectedly challenging . they often query for complex , non-factoid information , and require difficult entailment-like inference to solve . we also explore the effectiveness of a range of transfer learning baselines . we find that transferring from entailment data is more effective than transferring from paraphrase or extractive qa data , and that it , surprisingly , continues to be very beneficial even when starting from massive pre-trained language models such as bert . our best method trains bert on multinli and then re-trains it on our train set . it achieves 80.4 % accuracy compared to 90 % accuracy of human annotators ( and 62 % majority-baseline ) , leaving a significant gap for future work . story_separator_special_tag confidently making progress on multilingual modeling requires challenging , trustworthy evaluations . we present tydi qa , a question answering dataset covering 11 typologically diverse languages with . story_separator_special_tag the recent breakthroughs in the field of deep learning have led to state-of-the-art results in several nlp tasks such as question answering ( qa ) . nevertheless , the training requirements in cross-linguistic settings are not satisfied : the datasets suitable for training question answering systems for non-english languages are often not available , which represents a significant barrier for most neural methods . this paper explores the possibility of acquiring a large scale although lower quality dataset for an open-domain factoid question answering system in italian . it consists of more than 60 thousand question-answer pairs and was used to train a system able to answer factoid questions against the italian wikipedia . the paper describes the dataset and the experiments , inspired by an equivalent counterpart for english . these experiments show that the results achievable for italian are worse , even though they are already applicable to concrete qa tasks . story_separator_special_tag we introduce a new language representation model called bert , which stands for bidirectional encoder representations from transformers . unlike recent language representation models ( peters et al. , 2018a ; radford et al. , 2018 ) , bert is designed to pre-train deep bidirectional representations from unlabeled text by jointly conditioning on both left and right context in all layers .
as a result , the pre-trained bert model can be fine-tuned with just one additional output layer to create state-of-the-art models for a wide range of tasks , such as question answering and language inference , without substantial task-specific architecture modifications . bert is conceptually simple and empirically powerful . it obtains new state-of-the-art results on eleven natural language processing tasks , including pushing the glue score to 80.5 ( 7.7 point absolute improvement ) , multinli accuracy to 86.7 % ( 4.6 % absolute improvement ) , squad v1.1 question answering test f1 to 93.2 ( 1.5 point absolute improvement ) and squad v2.0 test f1 to 83.1 ( 5.1 point absolute improvement ) . story_separator_special_tag we present two new large-scale datasets aimed at evaluating systems designed to comprehend a natural language query and extract its answer from a large corpus of text . the quasar-s dataset consists of 37000 cloze-style ( fill-in-the-gap ) queries constructed from definitions of software entity tags on the popular website stack overflow . the posts and comments on the website serve as the background corpus for answering the cloze questions . the quasar-t dataset consists of 43000 open-domain trivia questions and their answers obtained from various internet sources . clueweb09 serves as the background corpus for extracting these answers . we pose these datasets as a challenge for two related subtasks of factoid question answering : ( 1 ) searching for relevant pieces of text that include the correct answer to a query , and ( 2 ) reading the retrieved text to answer the query . we also describe a retrieval system for extracting relevant sentences and documents from the corpus given a query , and include these in the release for researchers wishing to only focus on ( 2 ) . we evaluate several baselines on both datasets , ranging from simple heuristics to powerful neural models , story_separator_special_tag reading comprehension has recently seen rapid progress , with systems matching humans on the most popular datasets for the task . however , a large body of work has highlighted the brittleness of these systems , showing that there is much work left to be done . we introduce a new english reading comprehension benchmark , drop , which requires discrete reasoning over the content of paragraphs . in this crowdsourced , adversarially-created , 96k-question benchmark , a system must resolve references in a question , perhaps to multiple input positions , and perform discrete operations over them ( such as addition , counting , or sorting ) . these operations require a much more comprehensive understanding of the content of paragraphs than what was necessary for prior datasets . we apply state-of-the-art methods from both the reading comprehension and semantic parsing literature on this dataset and show that the best systems only achieve 32.7 % f1 on our generalized accuracy metric , while expert human performance is 96.0 % . we additionally present a new model that combines reading comprehension methods with simple numerical reasoning to achieve 47.0 % f1 . story_separator_special_tag many tasks aim to measure machine reading comprehension ( mrc ) , often focusing on question types presumed to be difficult . rarely , however , do task designers start by considering what systems should in fact comprehend . in this paper we make two key contributions . 
first , we argue that existing approaches do not adequately define comprehension ; they are too unsystematic about what content is tested . second , we present a detailed definition of comprehension , a `` template of understanding '' , for a widely useful class of texts , namely short narratives . we then conduct an experiment that strongly suggests existing systems are not up to the task of narrative understanding as we define it . story_separator_special_tag we publicly release a new large-scale dataset , called searchqa , for machine comprehension , or question-answering . unlike recently released datasets , such as deepmind cnn/dailymail and squad , the proposed searchqa was constructed to reflect a full pipeline of general question-answering . that is , we start not from an existing article and generate a question-answer pair , but start from an existing question-answer pair , crawled from j ! archive , and augment it with text snippets retrieved by google . following this approach , we built searchqa , which consists of more than 140k question-answer pairs with each pair having 49.6 snippets on average . each question-answer-context tuple of the searchqa comes with additional meta-data such as the snippet 's url , which we believe will be valuable resources for future research . we conduct human evaluation as well as test two baseline methods , one simple word selection and the other deep learning based , on the searchqa . we show that there is a meaningful gap between the human and machine performances . this suggests that the proposed dataset could well serve as a benchmark for question-answering . story_separator_special_tag it has become commonplace for people to share their opinions about all kinds of products by posting reviews online . it has also become commonplace for potential customers to do research about the quality and limitations of these products by posting questions online . we test the extent to which reviews are useful in question-answering by combining two amazon datasets and focusing our attention on yes/no questions . a manual analysis of 400 cases reveals that the reviews directly contain the answer to the question just over a third of the time . preliminary reading comprehension experiments with this dataset prove inconclusive , with accuracy in the range 50-66 % . story_separator_special_tag sberquad , a large-scale analog of stanford squad in the russian language , is a valuable resource that has not been properly presented to the scientific community . we fill this gap by providing a description , a thorough analysis , and baseline experimental results . story_separator_special_tag previous work on question-answering systems mainly focuses on answering individual questions , assuming they are independent and devoid of context . instead , we investigate sequential question answering , asking multiple related questions . we present qblink , a new dataset of fully human-authored questions . we extend existing strong question answering frameworks to include previous questions to improve the overall question-answering accuracy in open-domain question answering . the dataset is publicly available at http://sequential.qanta.org . story_separator_special_tag in this article , we propose a unified deep neural network framework for multilingual question answering ( qa ) . the proposed network deals with multilingual questions and answer snippets .
the input to the network is a pair of factoid question and snippet in the multilingual environment ( english and hindi ) , and the output is the relevant answer from the snippet . we begin by generating the snippet using a graph-based language-independent algorithm , which exploits the lexico-semantic similarity between the sentences . the soft alignment of the question words from the english and hindi languages has been used to learn the shared representation of the question . the learned shared representation of the question and the attention-based snippet representation are passed as input to the answer extraction layer of the network , which extracts the answer span from the snippet . evaluation on a standard multilingual qa dataset shows state-of-the-art performance , with 39.44 exact match ( em ) and 44.97 f1 . similarly , we achieve 50.11 exact match ( em ) and 53.77 f1 on the translated squad dataset . story_separator_special_tag every day , thousands of customers post questions on amazon product pages . after some time , if they are fortunate , a knowledgeable customer might answer their question . observing that many questions can be answered based upon the available product reviews , we propose the task of review-based qa . given a corpus of reviews and a question , the qa system synthesizes an answer . to this end , we introduce a new dataset and propose a method that combines information retrieval techniques for selecting relevant reviews ( given a question ) and `` reading comprehension '' models for synthesizing an answer ( given a question and review ) . our dataset consists of 923k questions , 3.6m answers and 14m reviews across 156k products . building on the well-known amazon dataset , we collect additional annotations , marking each question as either answerable or unanswerable based on the available reviews . a deployed system could first classify a question as answerable and then attempt to generate an answer . notably , unlike many popular qa datasets , here , the questions , passages , and answers are all extracted from real human interactions . we evaluate story_separator_special_tag this paper introduces dureader , a new large-scale , open-domain chinese machine reading comprehension ( mrc ) dataset , designed to address real-world mrc . dureader has three advantages over previous mrc datasets : ( 1 ) data sources : questions and documents are based on baidu search and baidu zhidao ; answers are manually generated . ( 2 ) question types : it provides rich annotations for more question types , especially yes-no and opinion questions , which leaves more opportunity for the research community . ( 3 ) scale : it contains 200k questions , 420k answers and 1m documents ; it is the largest chinese mrc dataset so far . experiments show that human performance is well above current state-of-the-art baseline systems , leaving plenty of room for the community to make improvements . to help the community make these improvements , both dureader and baseline systems have been posted online . we also organize a shared competition to encourage the exploration of more models . since the release of the task , there have been significant improvements over the baselines . story_separator_special_tag teaching machines to read natural language documents remains an elusive challenge .
machine reading systems can be tested on their ability to answer questions posed on the contents of documents that they have seen , but until now large scale training and test datasets have been missing for this type of evaluation . in this work we define a new methodology that resolves this bottleneck and provides large scale supervised reading comprehension data . this allows us to develop a class of attention-based deep neural networks that learn to read real documents and answer complex questions with minimal prior knowledge of language structure . story_separator_special_tag we present wikireading , a large-scale natural language understanding task and publicly-available dataset with 18 million instances . the task is to predict textual values from the structured knowledge base wikidata by reading the text of the corresponding wikipedia articles . the task contains a rich variety of challenging classification and extraction sub-tasks , making it well-suited for end-to-end models such as deep neural networks ( dnns ) . we compare various state-of-the-art dnn-based architectures for document classification , information extraction , and question answering . we find that models supporting a rich answer space , such as word or character sequences , perform best . our best-performing model , a word-level sequence to sequence model with a mechanism to copy out-of-vocabulary words , obtains an accuracy of 71.8 % . story_separator_special_tag we introduce a new test of how well language models capture meaning in children 's books . unlike standard language modelling benchmarks , it distinguishes the task of predicting syntactic function words from that of predicting lower-frequency words , which carry greater semantic content . we compare a range of state-of-the-art models , each with a different way of encoding what has been previously read . we show that models which store explicit representations of long-term contexts outperform state-of-the-art neural language models at predicting semantic content words , although this advantage is not observed for syntactic function words . interestingly , we find that the amount of text encoded in a single memory representation is highly influential to the performance : there is a sweet-spot , not too big and not too small , between single words and full sentences that allows the most meaningful information in a text to be effectively retained and recalled . further , the attention over such window-based memories can be trained effectively through self-supervision . we then assess the generality of this principle by applying it to the cnn qa benchmark , which involves identifying named entities in paraphrased summaries of news story_separator_special_tag understanding narratives requires reading between the lines , which in turn , requires interpreting the likely causes and effects of events , even when they are not mentioned explicitly . in this paper , we introduce cosmos qa , a large-scale dataset of 35,600 problems that require commonsense-based reading comprehension , formulated as multiple-choice questions . in stark contrast to most existing reading comprehension datasets where the questions focus on factual and literal understanding of the context paragraph , our dataset focuses on reading between the lines over a diverse collection of people 's everyday narratives , asking such questions as `` what might be the possible reason of ... ? '' , or `` what would have happened if ...
'' that require reasoning beyond the exact text spans in the context . to establish baseline performances on cosmos qa , we experiment with several state-of-the-art neural architectures for reading comprehension , and also propose a new architecture that improves over the competitive baselines . experimental results demonstrate a significant gap between machine ( 68.4 % ) and human performance ( 94 % ) , pointing to avenues for future research on commonsense machine comprehension . dataset , code story_separator_special_tag machine reading comprehension aims to teach machines to understand a text like a human and is a challenging new direction in artificial intelligence . datasets play an important role in describing or building an algorithm for machine reading comprehension . the type of answer required from the developed algorithm depends on the dataset . the datasets are classified into two types , namely datasets with extractive answers and datasets with descriptive answers . this article summarizes both types of datasets , with an example of each , to give better insight into datasets in machine reading comprehension and which datasets to use depending on the requirements . story_separator_special_tag we introduce pubmedqa , a novel biomedical question answering ( qa ) dataset collected from pubmed abstracts . the task of pubmedqa is to answer research questions with yes/no/maybe ( e.g . : do preoperative statins reduce atrial fibrillation after coronary artery bypass grafting ? ) using the corresponding abstracts . pubmedqa has 1k expert-annotated , 61.2k unlabeled and 211.3k artificially generated qa instances . each pubmedqa instance is composed of ( 1 ) a question which is either an existing research article title or derived from one , ( 2 ) a context which is the corresponding abstract without its conclusion , ( 3 ) a long answer , which is the conclusion of the abstract and , presumably , answers the research question , and ( 4 ) a yes/no/maybe answer which summarizes the conclusion . pubmedqa is the first qa dataset where reasoning over biomedical research texts , especially their quantitative contents , is required to answer the questions . our best performing model , multi-phase fine-tuning of biobert with long answer bag-of-word statistics as additional supervision , achieves 68.1 % accuracy , compared to single human performance of 78.0 % accuracy and majority-baseline of 55.2 % story_separator_special_tag we present triviaqa , a challenging reading comprehension dataset containing over 650k question-answer-evidence triples . triviaqa includes 95k question-answer pairs authored by trivia enthusiasts and independently gathered evidence documents , six per question on average , that provide high quality distant supervision for answering the questions . we show that , in comparison to other recently introduced large-scale datasets , triviaqa ( 1 ) has relatively complex , compositional questions , ( 2 ) has considerable syntactic and lexical variability between questions and corresponding answer-evidence sentences , and ( 3 ) requires more cross sentence reasoning to find answers . we also present two baseline algorithms : a feature-based classifier and a state-of-the-art neural network that performs well on squad reading comprehension . neither approach comes close to human performance ( 23 % and 40 % vs. 80 % ) , suggesting that triviaqa is a challenging testbed that is worth significant future study .
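several of the extractive-qa abstracts above report exact match ( em ) and token-level f1 . a minimal sketch of the two metrics in the style of the squad evaluation script ; normalization here is simplified to lowercasing and whitespace tokenization :

```python
from collections import Counter

def exact_match(pred, gold):
    # 1.0 if the normalized strings are identical , else 0.0
    return float(pred.strip().lower() == gold.strip().lower())

def f1_score(pred, gold):
    # token-level overlap between prediction and gold answer
    p, g = pred.lower().split(), gold.lower().split()
    overlap = sum((Counter(p) & Counter(g)).values())
    if overlap == 0:
        return 0.0
    precision, recall = overlap / len(p), overlap / len(g)
    return 2 * precision * recall / (precision + recall)

print(exact_match("the atlantic ocean", "The Atlantic Ocean"))  # 1.0
print(f1_score("atlantic ocean", "the atlantic ocean"))         # 0.8
```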
story_separator_special_tag we introduce the task of multi-modal machine comprehension ( m3c ) , which aims at answering multimodal questions given a context of text , diagrams and images . we present the textbook question answering ( tqa ) dataset that includes 1,076 lessons and 26,260 multi-modal questions , taken from middle school science curricula . our analysis shows that a significant portion of questions require complex parsing of the text and the diagrams and reasoning , indicating that our dataset is more complex compared to previous machine comprehension and visual question answering datasets . we extend state-of-the-art methods for textual machine comprehension and visual question answering to the tqa dataset . our experiments show that these models do not perform well on tqa . the presented dataset opens new challenges for research in question answering and reasoning across multiple modalities . story_separator_special_tag the machine reading task , where a computer reads a document and answers questions about it , is important in artificial intelligence research . recently , many models have been proposed to address it . word-level models , which have words as units of input and output , have proven to yield state-of-the-art results when evaluated on english datasets . however , in morphologically richer languages , many more unique words exist than in english due to highly productive prefix and suffix mechanisms . this may set back word-level models , since vocabulary sizes too big to allow for efficient computing may have to be employed . multiple alternative input granularities have been proposed to avoid large input vocabularies , such as morphemes , character n-grams , and bytes . bytes are advantageous as they provide a universal encoding format across languages , and allow for a small vocabulary size , which , moreover , is identical for every input language . in this work , we investigate whether bytes are suitable as input units across morphologically varied languages . to test this , we introduce two large-scale machine reading datasets in morphologically rich languages , turkish and russian . story_separator_special_tag we present a reading comprehension challenge in which questions can only be answered by taking into account information from multiple sentences . we solicit and verify questions and answers for this challenge through a 4-step crowdsourcing experiment . our challenge dataset contains 6,500+ questions for 1000+ paragraphs across 7 different domains ( elementary school science , news , travel guides , fiction stories , etc . ) , bringing linguistic diversity to the texts and to the question wordings . on a subset of our dataset , we found human solvers to achieve an f1-score of 88.1 % . we analyze a range of baselines , including a recent state-of-the-art reading comprehension system , and demonstrate the difficulty of this challenge , despite a high human performance . the dataset is the first to study multi-sentence inference at scale , with an open-ended set of question types that requires reasoning skills . story_separator_special_tag reading comprehension ( rc ) , in contrast to information retrieval , requires integrating information and reasoning about events , entities , and their relations across a full document . question answering . story_separator_special_tag we present the natural questions corpus , a question answering data set . questions consist of real anonymized , aggregated queries issued to the google search engine .
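the byte-level reading abstract above argues that utf-8 bytes give a universal input encoding with a fixed 256-symbol vocabulary , regardless of how morphologically rich the language is . a minimal illustration :

```python
# utf-8 bytes give the same 256-symbol vocabulary for any language
for word in ["kitap", "книга", "kitaplarımızdan"]:
    ids = list(word.encode("utf-8"))
    print(word, len(ids), ids[:8])

# every id lies in range(256) , so the input embedding table is identical
# for turkish , russian , or any other language
```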
an annotator is presented with a . story_separator_special_tag we present race , a new dataset for benchmark evaluation of methods in the reading comprehension task . collected from the english exams for middle and high school chinese students in the age range between 12 and 18 , race consists of nearly 28,000 passages and nearly 100,000 questions generated by human experts ( english instructors ) , and covers a variety of topics which are carefully designed for evaluating the students ' ability in understanding and reasoning . in particular , the proportion of questions that requires reasoning is much larger in race than that in other benchmark datasets for reading comprehension , and there is a significant gap between the performance of the state-of-the-art models ( 43 % ) and the ceiling human performance ( 95 % ) . we hope this new dataset can serve as a valuable resource for research and evaluation in machine comprehension . the dataset is freely available at this http url and the code is available at this https url . story_separator_special_tag machine reading comprehension gives a machine the capability to read/understand text and answer questions raised by the user from the given text . improving a machine 's understanding of text is an important task for many applications like question answering systems , information retrieval , document summarization , robotics etc . many researchers have been working in this area since the 1970s . to evaluate the capability of machine reading , researchers need good datasets . this paper intends to compile all the datasets available for machine reading comprehension systems . story_separator_special_tag recently , various datasets for question answering ( qa ) research have been released , such as squad , marco , wikiqa , mctest , and searchqa . however , such existing training resources for these tasks mostly support only english . in contrast , we study semi-automated creation of the korean question answering dataset ( k-quad ) , by using automatically translated squad and a qa system bootstrapped on a small qa pair set . as a naive approach for other languages , using only machine-translated squad shows limited performance due to translation errors . we study why such an approach fails and motivate the need to build seed resources to enable leveraging such resources . specifically , we annotate seed qa pairs of small size ( 4k ) for the korean language , and design how such a seed can be combined with translated english resources . this approach , combining the two resources , leads to 71.50 f1 on korean qa ( comparable to 77.3 f1 on squad ) . story_separator_special_tag the past few years have witnessed the rapid development of machine reading comprehension ( mrc ) , especially the challenging sub-task , multiple-choice reading comprehension ( mcrc ) . the release of large-scale datasets has promoted research in this field . yet previous methods have already achieved high accuracy on mcrc datasets , e.g . race . it is necessary to propose a more difficult dataset which requires more reasoning and inference , for evaluating the understanding capability of new methods . to respond to this demand , we present race-c , a new multi-choice reading comprehension dataset collected from college english examinations in china . further , we integrate it with race-m and race-h , collected by lai et al . ( 2017 ) from middle and high school exams respectively , to extend race into race++ .
based on race++ , we propose a three-stage curriculum learning framework , which exploits the fact that the difficulty level of these three sub-datasets is in ascending order . statistics show the higher difficulty level of our collected dataset , race-c , compared to race 's two sub-datasets , i.e. , race-m story_separator_special_tag machine reading comprehension ( mrc ) is a task that requires a machine to understand natural language and answer questions by reading a document . it is the core of automatic response technology such as chatbots and automated customer support systems . we present the korean question answering dataset ( korquad ) , a large-scale korean dataset for the extractive machine reading comprehension task . it consists of 70,000+ human-generated question-answer pairs on korean wikipedia articles . we release korquad1.0 and launch a challenge at this https url to encourage the development of multilingual natural language processing research . story_separator_special_tag machine reading comprehension ( mrc ) , which requires the machine to answer questions based on the given context , has gained increasingly wide attention with the incorporation of various deep learning techniques over the past few years . although the research of mrc based on deep learning is flourishing , there remains a lack of a comprehensive survey to summarize existing approaches and recent trends , which motivates our work presented in this article . specifically , we give a thorough review of this research field , covering different aspects including ( 1 ) typical mrc tasks : their definitions , differences and representative datasets ; ( 2 ) general architecture of neural mrc : the main modules and prevalent approaches to each of them ; and ( 3 ) new trends : some emerging focuses in neural mrc as well as the corresponding challenges . last but not least , looking back on what has been achieved so far , the survey also envisages what the future may hold by discussing the open issues left to be addressed . story_separator_special_tag reading comprehension is a well studied task , with huge training datasets in english . this work focuses on building reading comprehension systems for czech , without requiring any manually annotated czech training data . first of all , we automatically translated the squad 1.1 and squad 2.0 datasets to czech to create training and development data , which we release at http://hdl.handle.net/11234/1-3249 . we then trained and evaluated several bert and xlm-roberta baseline models . however , our main focus lies in cross-lingual transfer models . we report that an xlm-roberta model trained on english data and evaluated on czech achieves very competitive performance , only approximately 2 percentage points worse than a model trained on the translated czech data . this result is extremely good , considering the fact that the model has not seen any czech data during training . the cross-lingual transfer approach is very flexible and provides reading comprehension in any language for which we have enough monolingual raw texts . story_separator_special_tag we develop a recursive neural network ( rnn ) to extract answers to arbitrary natural language questions from supporting sentences , by training on a crowdsourced data set ( to be released upon presentation ) .
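the race++ abstract above trains in three stages whose sub-datasets are ordered by difficulty . one common way to implement such a curriculum is to fine-tune sequentially on the stages from easiest to hardest ; in this sketch , `fine_tune` is a hypothetical stand-in for one training pass :

```python
def fine_tune(model, dataset, epochs=1):
    # hypothetical stand-in : one pass of gradient updates over `dataset`
    for _ in range(epochs):
        for _batch in dataset:
            model["steps"] = model.get("steps", 0) + 1
    return model

# stages ordered easy -> hard , mirroring race-m , race-h , race-c
model = {}
curriculum = [("race-m", [1, 2]), ("race-h", [1, 2, 3]), ("race-c", [1])]
for name, data in curriculum:
    model = fine_tune(model, data)
    print(f"finished stage {name} , total steps = {model['steps']}")
```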
the rnn defines feature representations at every node of the parse trees of questions and supporting sentences , when applied recursively , starting with token vectors from a neural probabilistic language model . in contrast to prior work , we fix neither the types of the questions nor the forms of the answers ; the system classifies tokens to match a substring chosen by the question 's author . our classifier decides whether or not to follow each parse-tree node of a support sentence , by classifying its rnn embedding together with those of its siblings and the root node of the question , until reaching the tokens it selects as the answer . a novel co-training task for the rnn , on subtree recognition , boosts performance , along with a scheme to consistently handle words that are not well-represented in the language model . on our data set , we surpass an open source system epitomizing a classic pattern bootstrapping approach to story_separator_special_tag directly reading documents and being able to answer questions from them is an unsolved challenge . to avoid its inherent difficulty , question answering ( qa ) has been directed towards using knowledge bases ( kbs ) instead , which has proven effective . unfortunately , kbs often suffer from being too restrictive , as the schema can not support certain types of answers , and too sparse , e.g . wikipedia contains much more information than freebase . in this work we introduce a new method , key-value memory networks , that makes reading documents more viable by utilizing different encodings in the addressing and output stages of the memory read operation . to compare using kbs , information extraction or wikipedia documents directly in a single framework , we construct an analysis tool , wikimovies , a qa dataset that contains raw text alongside a preprocessed kb , in the domain of movies . our method reduces the gap between all three settings . it also achieves state-of-the-art results on the existing wikiqa benchmark . story_separator_special_tag representation and learning of commonsense knowledge is one of the foundational problems in the quest to enable deep language understanding . this issue is particularly challenging for understanding causal and correlational relationships between events . while this topic has received a lot of interest in the nlp community , research has been hindered by the lack of a proper evaluation framework . this paper attempts to address this problem with a new framework for evaluating story understanding and script learning : the ` story cloze test ' . this test requires a system to choose the correct ending to a four-sentence story . we created a new corpus of 50k five-sentence commonsense stories , rocstories , to enable this evaluation . this corpus is unique in two ways : ( 1 ) it captures a rich set of causal and temporal commonsense relations between daily events , and ( 2 ) it is a high quality collection of everyday life stories that can also be used for story generation . experimental evaluation shows that a host of baselines and state-of-the-art models based on shallow language understanding struggle to achieve a high score on the story cloze test . we discuss these story_separator_special_tag the lsdsem 17 shared task is the story cloze test , a new evaluation for story understanding and script learning . this test provides a system with a four-sentence story and two possible endings , and the system must choose the correct ending to the story .
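the story cloze test described above asks a system to pick the more plausible of two endings for a four-sentence story . a minimal scoring sketch ; `log_prob` is a hypothetical stand-in for any language model that can score text :

```python
def log_prob(text):
    # hypothetical stand-in : a real system would return a language-model
    # log-probability for `text` ; this toy prefers shorter continuations
    return -float(len(text))

def choose_ending(context, endings):
    # pick the ending that makes the completed story most probable
    return max(endings, key=lambda e: log_prob(context + " " + e))

story = "karen made cookies . she burned the first batch ."
endings = ["she baked another batch .", "she abandoned baking forever and moved away ."]
print(choose_ending(story, endings))
```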
successful narrative understanding ( getting closer to human performance of 100 % ) requires systems to link various levels of semantics to commonsense knowledge . a total of eight systems participated in the shared task , with a variety of approaches including . story_separator_special_tag machine reading comprehension ( mrc ) is the task of natural language processing which studies the ability to read and understand unstructured texts and then find the correct answers for questions . until now , there has been no mrc dataset for such a low-resource language as vietnamese . in this paper , we introduce vimmrc , a challenging machine comprehension corpus with multiple-choice questions , intended for research on the machine comprehension of vietnamese text . this corpus includes 2,783 multiple-choice questions and answers based on a set of 417 vietnamese texts used for teaching reading comprehension to 1st through 5th graders . answers may be extracted from the contents of single or multiple sentences in the corresponding reading text . a thorough analysis of the corpus and experimental results in this paper illustrate that our corpus vimmrc demands reasoning abilities beyond simple word matching . we proposed a boosted sliding window ( bsw ) method , which improves accuracy by 5.51 % over the best baseline method . we also measured human performance on the corpus and compared it to our mrc models . the performance gap between humans and our best experimental model indicates that story_separator_special_tag we introduce a large scale machine reading comprehension dataset , which we name ms marco . the dataset comprises 1,010,916 anonymized questions , sampled from bing 's search query logs , each with a human-generated answer , and 182,669 completely human rewritten generated answers . in addition , the dataset contains 8,841,823 passages , extracted from 3,563,535 web documents retrieved by bing , that provide the information necessary for curating the natural language answers . a question in the ms marco dataset may have multiple answers or no answers at all . using this dataset , we propose three different tasks with varying levels of difficulty : ( i ) predict if a question is answerable given a set of context passages , and extract and synthesize the answer as a human would , ( ii ) generate a well-formed answer ( if possible ) based on the context passages that can be understood with the question and passage context , and finally ( iii ) rank a set of retrieved passages given a question . the size of the dataset and the fact that the questions are derived from real user search queries distinguish ms marco from other well-known story_separator_special_tag we have constructed a new `` who-did-what '' dataset of over 200,000 fill-in-the-gap ( cloze ) multiple choice reading comprehension problems built from the ldc english gigaword newswire corpus . the wdw dataset has a variety of novel features . first , in contrast with the cnn and daily mail datasets ( hermann et al. , 2015 ) , we avoid using article summaries for question formation . instead , each problem is formed from two independent articles : an article given as the passage to be read and a separate article on the same events used to form the question . second , we avoid anonymization : each choice is a person named entity . third , the problems have been filtered to remove a fraction that are easily solved by simple baselines , while remaining 84 % solvable by humans .
we report performance benchmarks of standard systems and propose the wdw dataset as a challenge task for the community . story_separator_special_tag we introduce a large dataset of narrative texts and questions about these texts , intended to be used in a machine comprehension task that requires reasoning using commonsense knowledge . our dataset complements similar datasets in that we focus on stories about everyday activities , such as going to the movies or working in the garden , and that the questions require commonsense knowledge , or more specifically , script knowledge , to be answered . we show that our mode of data collection via crowdsourcing results in a substantial amount of such inference questions . the dataset forms the basis of a shared task on commonsense and script knowledge organized at semeval 2018 and provides challenging test cases for the broader natural language understanding community . story_separator_special_tag we introduce mcscript2.0 , a machine comprehension corpus for the end-to-end evaluation of script knowledge . mcscript2.0 contains approx . 20,000 questions on approx . 3,500 texts , crowdsourced based on a new collection process that results in challenging questions . half of the questions cannot be answered from the reading texts , but require the use of commonsense and , in particular , script knowledge . we give a thorough analysis of our corpus and show that while the task is not challenging to humans , existing machine comprehension models fail to perform well on the data , even if they make use of a commonsense knowledge base . the dataset is available at http://www.sfb1102.uni-saarland.de/?page_id=2582 story_separator_special_tag we propose a novel methodology to generate domain-specific large-scale question answering ( qa ) datasets by re-purposing existing annotations for other nlp tasks . we demonstrate an instance of this methodology in generating a large-scale qa dataset for electronic medical records by leveraging existing expert annotations on clinical notes for various nlp tasks from the community shared i2b2 datasets . the resulting corpus ( emrqa ) has 1 million question-logical-form pairs and 400,000+ question-answer evidence pairs . we characterize the dataset and explore its learning potential by training baseline models for question to logical form and question to answer mapping . story_separator_special_tag we introduce lambada , a dataset to evaluate the capabilities of computational models for text understanding by means of a word prediction task . lambada is a collection of narrative passages sharing the characteristic that human subjects are able to guess their last word if they are exposed to the whole passage , but not if they only see the last sentence preceding the target word . to succeed on lambada , computational models cannot simply rely on local context , but must be able to keep track of information in the broader discourse . we show that lambada exemplifies a wide range of linguistic phenomena , and that none of several state-of-the-art language models reaches accuracy above 1 % on this novel benchmark . we thus propose lambada as a challenging test set , meant to encourage the development of new models capable of genuine understanding of broad context in natural language text . story_separator_special_tag recent progress in pretraining language models on large textual corpora has led to a surge of improvements for downstream nlp tasks .
whilst learning linguistic knowledge , these models may also be storing relational knowledge present in the training data , and may be able to answer queries structured as `` fill-in-the-blank '' cloze statements . language models have many advantages over structured knowledge bases : they require no schema engineering , allow practitioners to query about an open class of relations , are easy to extend to more data , and require no human supervision to train . we present an in-depth analysis of the relational knowledge already present ( without fine-tuning ) in a wide range of state-of-the-art pretrained language models . we find that ( i ) without fine-tuning , bert contains relational knowledge competitive with traditional nlp methods that have some access to oracle knowledge , ( ii ) bert also does remarkably well on open-domain question answering against a supervised baseline , and ( iii ) certain types of factual knowledge are learned much more readily than others by standard language model pretraining approaches . the surprisingly strong ability of these models to recall factual knowledge story_separator_special_tag enabling a machine to read and comprehend natural language documents so that it can answer questions remains an elusive challenge . in recent years , the popularity of deep learning and the establishment of large-scale datasets have both promoted the prosperity of machine reading comprehension . this paper presents how to utilize neural networks to build a reader , introduces some classic models , and analyzes what improvements they make . further , we also point out the defects of existing models and future research directions . story_separator_special_tag extractive reading comprehension systems can often locate the correct answer to a question in a context document , but they also tend to make unreliable guesses on questions for which the correct answer is not stated in the context . existing datasets either focus exclusively on answerable questions , or use automatically generated unanswerable questions that are easy to identify . to address these weaknesses , we present squadrun , a new dataset that combines the existing stanford question answering dataset ( squad ) with over 50,000 unanswerable questions written adversarially by crowdworkers to look similar to answerable ones . to do well on squadrun , systems must not only answer questions when possible , but also determine when no answer is supported by the paragraph and abstain from answering . squadrun is a challenging natural language understanding task for existing models : a strong neural system that gets 86 % f1 on squad achieves only 66 % f1 on squadrun . we release squadrun to the community as the successor to squad . story_separator_special_tag
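a minimal sketch of the abstention decision that squadrun asks of systems : compare the best answer-span score against a no-answer score and abstain when the no-answer hypothesis wins by more than a threshold ( the spans , scores and threshold below are hypothetical placeholders , not the squadrun baselines ) :

```python
def answer_or_abstain(span_scores: dict, null_score: float, threshold: float = 0.0):
    """return the best span, or None when no answer is supported by the paragraph."""
    best_span, best_score = max(span_scores.items(), key=lambda kv: kv[1])
    if null_score - best_score > threshold:
        return None  # abstain: the no-answer hypothesis outscores every span
    return best_span

# toy scores for candidate spans plus the model's no-answer score
print(answer_or_abstain({"in 1998": 4.2, "paris": 3.1}, null_score=2.0))  # in 1998
print(answer_or_abstain({"in 1998": 1.0}, null_score=5.0))                # None
```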
humans gather information through conversations involving a series of interconnected questions and answers . for machines to assist in information gathering , it is therefore essential to enable them to answer conversational questions . we introduce coqa , a novel dataset for building conversational question answering systems . our dataset contains 127k questions with answers , obtained from 8k conversations about text passages from seven diverse domains . the questions are conversational , and the answers are free-form text with their corresponding evidence highlighted in the passage . we analyze coqa in depth and show that conversational questions have challenging phenomena not present in existing reading comprehension datasets , e.g. , coreference and pragmatic reasoning . we evaluate strong dialogue and reading comprehension models on coqa . the best system obtains an f1 score of 65.4 % , which is 23.4 points behind human performance ( 88.8 % ) , indicating there is ample room for improvement . we present coqa as a challenge to the community at https://stanfordnlp.github.io/coqa story_separator_special_tag we present mctest , a freely available set of stories and associated questions intended for research on the machine comprehension of text . previous work on machine comprehension ( e.g. , semantic modeling ) has made great strides , but primarily focuses either on limited-domain datasets , or on solving a more restricted goal ( e.g. , open-domain relation extraction ) . in contrast , mctest requires machines to answer multiple-choice reading comprehension questions about fictional stories , directly tackling the high-level goal of open-domain machine comprehension . reading comprehension can test advanced abilities such as causal reasoning and understanding the world , yet , by being multiple-choice , still provide a clear metric . because the stories are fictional , the answer typically can be found only in the story itself . the stories and questions are also carefully limited to those a young child would understand , reducing the world knowledge that is required for the task . we present the scalable crowd-sourcing methods that allow us to cheaply construct a dataset of 500 stories and 2000 questions . by screening workers ( with grammar tests ) and stories ( with grading ) , we have ensured that the data story_separator_special_tag the recent explosion in question answering research produced a wealth of both factoid reading comprehension ( rc ) and commonsense reasoning datasets . combining them presents a different kind of task : deciding not simply whether information is present in the text , but also whether a confident guess could be made for the missing information . we present quail , the first rc dataset to combine text-based , world knowledge and unanswerable questions , and to provide question type annotation that would enable diagnostics of the reasoning strategies by a given qa system . quail contains 15k multi-choice questions for 800 texts in 4 domains . crucially , it offers both general and text-specific questions , unlikely to be found in pretraining data . we show that quail poses substantial challenges to the current state-of-the-art systems , with a 30 % drop in accuracy compared to the most similar existing dataset .
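coqa above reports token-level f1 between predicted and reference answers , as do newsqa and squadrun ; a minimal sketch of that metric in the common squad style ( simplified to whitespace tokenization and lowercasing , skipping the usual normalization of articles and punctuation ) :

```python
from collections import Counter

def token_f1(prediction: str, reference: str) -> float:
    """squad-style token-overlap f1 between a predicted and a reference answer."""
    pred_tokens = prediction.lower().split()
    ref_tokens = reference.lower().split()
    common = Counter(pred_tokens) & Counter(ref_tokens)  # multiset intersection
    num_same = sum(common.values())
    if num_same == 0:
        return 0.0
    precision = num_same / len(pred_tokens)
    recall = num_same / len(ref_tokens)
    return 2 * precision * recall / (precision + recall)

# partial overlap earns partial credit
print(token_f1("in the garden", "the garden"))  # 0.8
```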
story_separator_special_tag most work in machine reading focuses on question answering problems where the answer is directly expressed in the text to read . however , many real-world question answering problems require the reading of text not because it contains the literal answer , but because it contains a recipe to derive an answer together with the reader 's background knowledge . one example is the task of interpreting regulations to answer `` can i ... ? '' or `` do i have to ... ? '' questions such as `` i am working in canada . do i have to carry on paying uk national insurance ? '' after reading a uk government website about this topic . this task requires both the interpretation of rules and the application of background knowledge . it is further complicated because , in practice , most questions are underspecified , and a human assistant will regularly have to ask clarification questions such as `` how long have you been working abroad ? '' when the answer can not be directly derived from the question and text . in this paper , we formalise this task and develop a crowd-sourcing strategy to collect story_separator_special_tag we propose duorc , a novel dataset for reading comprehension ( rc ) that motivates several new challenges for neural approaches in language understanding beyond those offered by existing rc datasets . duorc contains 186,089 unique question-answer pairs created from a collection of 7680 pairs of movie plots where each pair in the collection reflects two versions of the same movie ( one from wikipedia and the other from imdb ) written by two different authors . we asked crowdsourced workers to create questions from one version of the plot and a different set of workers to extract or synthesize answers from the other version . this unique characteristic of duorc , where questions and answers are created from different versions of a document narrating the same underlying story , ensures by design that there is very little lexical overlap between the questions created from one version and the segments containing the answer in the other version . further , since the two versions have different levels of plot detail , narration style , vocabulary , etc. , answering questions from the second version requires deeper language understanding and incorporating external background knowledge . additionally , the narrative style of story_separator_special_tag the ijcnlp-2017 multi-choice question answering ( mcqa ) task aims at exploring the performance of current question answering ( qa ) techniques via real-world complex questions collected from chinese senior high school entrance examination papers and the ck12 website . the questions are all 4-way multi-choice questions written in chinese and english , respectively , that cover a wide range of subjects , e.g. , biology , history , life science , etc. , and all questions are restricted to the elementary and middle school level . during the whole procedure of this task , 7 teams submitted 323 runs in total . this paper describes the collected data , the format and size of these questions , formal run statistics and results , and an overview and performance statistics of different methods . story_separator_special_tag we present dream , the first dialogue-based multiple-choice reading comprehension dataset .
collected from english-as-a-foreign-language examinations designed by human experts to evaluate the comprehension level of chinese learners of english , our dataset contains 10,197 multiple-choice questions for 6,444 dialogues . in contrast to existing reading comprehension datasets , dream is the first to focus on in-depth multi-turn multi-party dialogue understanding . dream is likely to present significant challenges for existing reading comprehension systems : 84 % of answers are non-extractive , 85 % of questions require reasoning beyond a single sentence , and 34 % of questions also involve commonsense knowledge . we apply several popular neural reading comprehension models that primarily exploit surface information within the text and find them to , at best , just barely outperform a rule-based approach . we next investigate the effects of incorporating dialogue structure and different kinds of general world knowledge into both rule-based and ( neural and non-neural ) machine learning-based reading comprehension models . experimental results on the dream dataset show the effectiveness of dialogue structure and general world knowledge . dream will be available at this https url . story_separator_special_tag we present a new dataset for machine comprehension in the medical domain . our dataset uses clinical case reports with around 100,000 gap-filling queries about these cases . we apply several baselines and state-of-the-art neural readers to the dataset , and observe a considerable gap in performance ( 20 % f1 ) between the best human and machine readers . we analyze the skills required for successful answering and show how reader performance varies depending on the applicable skills . we find that inferences using domain knowledge and object tracking are the most frequently required skills , and that recognizing omitted information and spatio-temporal reasoning are the most difficult for the machines . story_separator_special_tag when answering a question , people often draw upon their rich world knowledge in addition to the particular context . recent work has focused primarily on answering questions given some relevant document or context , and required very little general background . to investigate question answering with prior knowledge , we present commonsenseqa : a challenging new dataset for commonsense question answering . to capture common sense beyond associations , we extract from conceptnet ( speer et al. , 2017 ) multiple target concepts that have the same semantic relation to a single source concept . crowd-workers are asked to author multiple-choice questions that mention the source concept and discriminate in turn between each of the target concepts . this encourages workers to create questions with complex semantics that often require prior knowledge . we create 12,247 questions through this procedure and demonstrate the difficulty of our task with a large number of strong baselines . our best baseline is based on bert-large ( devlin et al. , 2018 ) and obtains 56 % accuracy , well below human performance , which is 89 % . story_separator_special_tag we introduce the movieqa dataset which aims to evaluate automatic story comprehension from both video and text . the dataset consists of 14,944 questions about 408 movies with high semantic diversity . the questions range from simpler `` who '' did `` what '' to `` whom '' , to `` why '' and `` how '' certain events occurred . 
each question comes with a set of five possible answers : a correct one and four deceiving answers provided by human annotators . our dataset is unique in that it contains multiple sources of information : video clips , plots , subtitles , scripts , and dvs [ 32 ] . we analyze our data through various statistics and methods . we further extend existing qa techniques to show that question-answering with such open-ended semantics is hard . we make this data set public along with an evaluation benchmark to encourage inspiring work in this challenging domain . story_separator_special_tag we present newsqa , a challenging machine comprehension dataset of over 100,000 human-generated question-answer pairs . crowdworkers supply questions and answers based on a set of over 10,000 news articles from cnn , with answers consisting of spans of text from the corresponding articles . we collect this dataset through a four-stage process designed to solicit exploratory questions that require reasoning . a thorough analysis confirms that newsqa demands abilities beyond simple word matching and recognizing textual entailment . we measure human performance on the dataset and compare it to several strong neural models . the performance gap between humans and machines ( 0.198 in f1 ) indicates that significant progress can be made on newsqa through future research . the dataset is freely available at this https url . story_separator_special_tag this paper presents reco , a human-curated chinese reading comprehension dataset on opinion . the questions in reco are opinion-based queries issued to a commercial search engine . the passages are provided by the crowdworkers who extract the support snippet from the retrieved documents . finally , an abstractive yes/no/uncertain answer is given by the crowdworkers . the release of reco consists of 300k questions , which to our knowledge is the largest in chinese reading comprehension . a prominent characteristic of reco is that in addition to the original context paragraph , we also provide the support evidence that could be directly used to answer the question . quality analysis demonstrates the challenge of reco : it requires various types of reasoning skills , such as causal inference and logical reasoning . current qa models that perform very well on many question answering problems , such as bert ( devlin et al . 2018 ) , only achieve 77 % accuracy on this dataset , a large margin behind human performance of nearly 92 % , indicating that reco presents a good challenge for machine reading comprehension . the codes , dataset and leaderboard will be freely available at https story_separator_special_tag to provide a survey on the existing tasks and models in machine reading comprehension ( mrc ) , this report reviews : 1 ) the dataset collection and performance evaluation of some representative simple-reasoning and complex-reasoning mrc tasks ; 2 ) the architecture designs , attention mechanisms , and performance-boosting approaches for developing neural-network-based mrc models ; 3 ) some recently proposed transfer learning approaches to incorporating text-style knowledge contained in external corpora into the neural networks of mrc models ; 4 ) some recently proposed knowledge base encoding approaches to incorporating graph-style knowledge contained in external knowledge bases into the neural networks of mrc models . besides , according to what has been achieved and what is still deficient , this report also proposes some open problems for future research .
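the attention mechanisms that the mrc surveys above single out can be illustrated with bare dot-product attention of question tokens over passage tokens ; the dimensions and random vectors below are toy stand-ins for learned encodings , not any specific model :

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

rng = np.random.default_rng(0)
passage = rng.normal(size=(30, 64))    # 30 passage token encodings, hidden size 64
question = rng.normal(size=(8, 64))    # 8 question token encodings

scores = question @ passage.T / np.sqrt(64)  # (8, 30) scaled similarities
weights = softmax(scores, axis=-1)           # each question token attends over the passage
context = weights @ passage                  # (8, 64) passage summary per question token
print(context.shape)
```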
story_separator_special_tag open domain question answering ( qa ) systems must interact with external knowledge sources , such as web pages , to find relevant information . information sources like wikipedia , however , are not well structured and are difficult to utilize in comparison with knowledge bases ( kbs ) . in this work we present a two-step approach to question answering from unstructured text , consisting of a retrieval step and a comprehension step . for comprehension , we present an rnn based attention model with a novel mixture mechanism for selecting answers from either retrieved articles or a fixed vocabulary . for retrieval we introduce a hand-crafted model and a neural model for ranking relevant articles . we achieve state-of-the-art performance on the wikimovies dataset , reducing the error by 40 % . our experimental results further demonstrate the importance of each of the introduced components . story_separator_special_tag we present a novel method for obtaining high-quality , domain-targeted multiple choice questions from crowd workers . generating these questions can be difficult without trading away originality , relevance or diversity in the answer options . our method addresses these problems by leveraging a large corpus of domain-specific text and a small set of existing questions . it produces model suggestions for document selection and answer distractor choice which aid the human question generation process . with this method we have assembled sciq , a dataset of 13.7k multiple choice science exam questions . we demonstrate that the method produces in-domain questions by providing an analysis of this new dataset and by showing that humans can not distinguish the crowdsourced questions from original questions . when using sciq as additional training data to existing questions , we observe accuracy improvements on real science exams . story_separator_special_tag most reading comprehension methods limit themselves to queries which can be answered using a single sentence , paragraph , or document . enabling models to combine disjoint pieces of textual evidence would extend the scope of machine comprehension methods . story_separator_special_tag one long-term goal of machine learning research is to produce methods that are applicable to reasoning and natural language , in particular building an intelligent dialogue agent . to measure progress towards that goal , we argue for the usefulness of a set of proxy tasks that evaluate reading comprehension via question answering . our tasks measure understanding in several ways : whether a system is able to answer questions via chaining facts , simple induction , deduction and many more . the tasks are designed to be prerequisites for any system that aims to be capable of conversing with a human . we believe many existing learning systems can currently not solve them , and hence our aim is to classify these tasks into skill sets , so that researchers can identify ( and then rectify ) the failings of their systems . we also extend and improve the recently introduced memory networks model , and show it is able to solve some , but not all , of the tasks . story_separator_special_tag cloze tests are widely adopted in language exams to evaluate students ' language proficiency . in this paper , we propose the first large-scale human-created cloze test dataset cloth , containing questions used in middle-school and high-school language exams .
with missing blanks carefully created by teachers and candidate choices purposely designed to be nuanced , cloth requires a deeper language understanding and a wider attention span than previously automatically-generated cloze datasets . we test the performance of dedicated baseline models , including a language model trained on the one billion word corpus , and show humans outperform them by a significant margin . we investigate the source of the performance gap , trace model deficiencies to some distinct properties of cloth , and identify the limited ability of comprehending the long-term context to be the key bottleneck . story_separator_special_tag with social media becoming increasingly popular , and lots of news and real-time events being reported on it , developing automated question answering systems is critical to the effectiveness of many applications that rely on real-time knowledge . while previous datasets have concentrated on question answering ( qa ) for formal text like news and wikipedia , we present the first large-scale dataset for qa over social media data . to ensure that the tweets we collected are useful , we only gather tweets used by journalists to write news articles . we then ask human annotators to write questions and answers upon these tweets . unlike other qa datasets like squad , in which the answers are extractive , we allow the answers to be abstractive . we show that two recently proposed neural models that perform well on formal texts are limited in their performance when applied to our dataset . in addition , even the fine-tuned bert model is still lagging behind human performance by a large margin . our results thus point to the need for improved qa systems targeting social media text . story_separator_special_tag understanding and reasoning about cooking recipes is a fruitful research direction towards enabling machines to interpret procedural text . in this work , we introduce recipeqa , a dataset for multimodal comprehension of cooking recipes . it comprises approximately 20k instructional recipes with multiple modalities such as titles , descriptions and aligned sets of images . with over 36k automatically generated question-answer pairs , we design a set of comprehension and reasoning tasks that require joint understanding of images and text , capturing the temporal flow of events and making sense of procedural knowledge . our preliminary results indicate that recipeqa will serve as a challenging test bed and an ideal benchmark for evaluating machine comprehension systems . the data and leaderboard are available at this http url . story_separator_special_tag we describe the wikiqa dataset , a new publicly available set of question and sentence pairs , collected and annotated for research on open-domain question answering . most previous work on answer sentence selection focuses on a dataset created using the trec-qa data , which includes editor-generated questions and candidate answer sentences selected by matching content words in the question . wikiqa is constructed using a more natural process and is more than an order of magnitude larger than the previous dataset . in addition , the wikiqa dataset also includes questions for which there are no correct sentences , enabling researchers to work on answer triggering , a critical component in any qa system . we compare several systems on the task of answer sentence selection on both datasets and also describe the performance of a system on the problem of answer triggering using the wikiqa dataset .
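wikiqa 's answer triggering reduces to ranking candidate sentences and answering only when the top score clears a threshold ; a minimal sketch with a toy word-overlap scorer ( the scoring function and threshold are placeholders , not the wikiqa baselines ) :

```python
def overlap_score(question: str, sentence: str) -> float:
    """fraction of question words that also appear in the candidate sentence."""
    q = set(question.lower().split())
    s = set(sentence.lower().split())
    return len(q & s) / len(q) if q else 0.0

def answer_or_trigger_nothing(question: str, candidates: list[str], threshold: float = 0.5):
    best = max(candidates, key=lambda s: overlap_score(question, s))
    if overlap_score(question, best) < threshold:
        return None  # answer triggering: no correct sentence exists
    return best

print(answer_or_trigger_nothing(
    "who is hamlet written by",
    ["hamlet is a tragedy written by william shakespeare .", "denmark is a country ."],
))  # returns the shakespeare sentence (overlap 0.8 >= 0.5)
```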
story_separator_special_tag existing question answering ( qa ) datasets fail to train qa systems to perform complex reasoning and provide explanations for answers . we introduce hotpotqa , a new dataset with 113k wikipedia-based question-answer pairs with four key features : ( 1 ) the questions require finding and reasoning over multiple supporting documents to answer ; ( 2 ) the questions are diverse and not constrained to any pre-existing knowledge bases or knowledge schemas ; ( 3 ) we provide sentence-level supporting facts required for reasoning , allowing qa systems to reason with strong supervision and explain the predictions ; ( 4 ) we offer a new type of factoid comparison questions to test qa systems ' ability to extract relevant facts and perform necessary comparison . we show that hotpotqa is challenging for the latest qa systems , and the supporting facts enable models to improve performance and make explainable predictions . story_separator_special_tag recent powerful pre-trained language models have achieved remarkable performance on most of the popular datasets for reading comprehension . it is time to introduce more challenging datasets to push the development of this field towards more comprehensive reasoning over text . in this paper , we introduce a new reading comprehension dataset requiring logical reasoning ( reclor ) extracted from standardized graduate admission examinations . as earlier studies suggest , human-annotated datasets usually contain biases , which are often exploited by models to achieve high accuracy without truly understanding the text . in order to comprehensively evaluate the logical reasoning ability of models on reclor , we propose to identify biased data points and separate them into an easy set , with the rest forming a hard set . empirical results show that state-of-the-art models have an outstanding ability to capture the biases contained in the dataset , achieving high accuracy on the easy set . however , they struggle on the hard set , with performance close to random guessing , indicating that more research is needed to substantially enhance the logical reasoning ability of current models . story_separator_special_tag given a partial description like `` she opened the hood of the car , '' humans can reason about the situation and anticipate what might come next ( `` then , she examined the engine '' ) . in this paper , we introduce the task of grounded commonsense inference , unifying natural language inference and commonsense reasoning . we present swag , a new dataset with 113k multiple choice questions about a rich spectrum of grounded situations . to address the recurring challenges of the annotation artifacts and human biases found in many existing datasets , we propose adversarial filtering ( af ) , a novel procedure that constructs a de-biased dataset by iteratively training an ensemble of stylistic classifiers , and using them to filter the data . to account for the aggressive adversarial filtering , we use state-of-the-art language models to massively oversample a diverse set of potential counterfactuals . empirical results demonstrate that while humans can solve the resulting inference problems with high accuracy ( 88 % ) , various competitive models struggle on our task . we provide comprehensive analysis that indicates significant opportunities for future research . story_separator_special_tag recent work by zellers et al .
( 2018 ) introduced a new task of commonsense natural language inference : given an event description such as `` a woman sits at a piano , '' a machine must select the most likely followup : `` she sets her fingers on the keys . '' with the introduction of bert , near human-level performance was reached . does this mean that machines can perform human level commonsense inference ? in this paper , we show that commonsense inference still proves difficult for even state-of-the-art models , by presenting hellaswag , a new challenge dataset . though its questions are trivial for humans ( > 95 % accuracy ) , state-of-the-art models struggle ( < 48 % ) . we achieve this via adversarial filtering ( af ) , a data collection paradigm wherein a series of discriminators iteratively select an adversarial set of machine-generated wrong answers . af proves to be surprisingly robust . the key insight is to scale up the length and complexity of the dataset examples towards a critical `` goldilocks '' zone wherein generated text is ridiculous to humans , yet often misclassified by state-of-the-art models . our story_separator_special_tag machine reading comprehension ( mrc ) is a challenging natural language processing ( nlp ) research field with wide real-world applications . the great progress of this field in recent years is mainly due to the emergence of large-scale datasets and deep learning . at present , many mrc models have already surpassed human performance on various benchmark datasets despite the obvious gap between existing mrc models and genuine human-level reading comprehension . this shows the need for improving existing datasets , evaluation metrics , and models to move current mrc models toward `` real '' understanding . to address the current lack of a comprehensive survey of existing mrc tasks , evaluation metrics , and datasets , herein , ( 1 ) we analyze 57 mrc tasks and datasets and propose a more precise classification method of mrc tasks with 4 different attributes ; ( 2 ) we summarize 9 evaluation metrics of mrc tasks , and 7 attributes and 10 characteristics of mrc datasets ; ( 3 ) we also discuss key open issues in mrc research and highlight future research directions . in addition , we have collected , organized , and published our data on story_separator_special_tag we present a large-scale dataset , record , for machine reading comprehension requiring commonsense reasoning . experiments on this dataset demonstrate that the performance of state-of-the-art mrc systems falls far behind human performance . record represents a challenge for future research to bridge the gap between human and machine commonsense reading comprehension . record is available at this http url story_separator_special_tag reading and understanding text is an important component in computer aided diagnosis in clinical medicine , and is also a major research problem in the field of nlp . in this work , we introduce a question-answering task called medqa to study answering questions in clinical medicine using knowledge in a large-scale document collection . the aim of medqa is to answer real-world questions with large-scale reading comprehension . we propose our solution seareader , a modular end-to-end reading comprehension model based on lstm networks and a dual-path attention architecture . the novel dual-path attention models information flow from two perspectives and has the ability to simultaneously read individual documents and integrate information across multiple documents .
in experiments our seareader achieved a large increase in accuracy on medqa over competing models . additionally , we develop a series of novel techniques to interpret the question answering process in seareader . story_separator_special_tag machine reading comprehension aims to teach machines to understand a text like a human and is a new and challenging direction in artificial intelligence . this article summarizes recent advances in mrc , mainly focusing on two aspects ( i.e. , corpora and techniques ) . the specific characteristics of various mrc corpora are listed and compared . the main ideas of some typical mrc techniques are also described . story_separator_special_tag books are a rich source of both fine-grained information ( how a character , an object or a scene looks ) and high-level semantics ( what someone is thinking and feeling , and how these states evolve through a story ) . this paper aims to align books to their movie releases in order to provide rich descriptive explanations for visual content that go semantically far beyond the captions available in current datasets . to align movies and books we exploit a neural sentence embedding that is trained in an unsupervised way from a large corpus of books , as well as a video-text neural embedding for computing similarities between movie clips and sentences in the book . we propose a context-aware cnn to combine information from multiple sources . we demonstrate good quantitative performance for movie/book alignment and show several qualitative examples that showcase the diversity of tasks our model can be used for .
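the book-movie alignment above rests on scoring similarities between clip embeddings and sentence embeddings ; a minimal sketch of that step using cosine similarity ( random vectors stand in for the learned video-text and sentence embeddings ) :

```python
import numpy as np

rng = np.random.default_rng(1)
clip_emb = rng.normal(size=(5, 128))   # stand-ins for learned movie-clip embeddings
sent_emb = rng.normal(size=(40, 128))  # stand-ins for learned book-sentence embeddings

clip_n = clip_emb / np.linalg.norm(clip_emb, axis=1, keepdims=True)
sent_n = sent_emb / np.linalg.norm(sent_emb, axis=1, keepdims=True)
sim = clip_n @ sent_n.T                # (5, 40) cosine similarities
best = sim.argmax(axis=1)              # most similar book sentence for each clip
print(best)
```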
as a result of advances in communication , computation , sensor and energy storage technologies , as well as carbon fiber-reinforced plastic materials , micro unmanned aerial vehicles ( uav ) are available at affordable prices . on this basis many new application areas , such as the in-depth reconnaissance and surveillance of major incidents , will be possible . uncontrolled emissions of liquid or gaseous contaminants in cases of volcanic eruptions , large fires , industrial incidents , or terrorist attacks can be analyzed by utilizing uav ( figure 1 ) . hence , the use of cognitive unmanned aerial systems ( uas ) for distributing mobile sensors in incident areas is in general a significant value added for remote sensing , reconnaissance , surveillance , and communication purposes . ( figure 1 : deployment scenario : chemical plume detection with an autonomous micro uav mesh network . ) in the near future police departments , fire brigades and other homeland security organizations will have access to medium- and small-size uav and will integrate them in their work flow . the use of non-military frequencies and civil communication technologies gains in importance for purposes of safety and security missions , since the frequency pool is story_separator_special_tag unmanned aerial vehicles ( uavs ) can be used in a wide range of applications . for example , uavs are utilized to observe critical areas in disaster situations without jeopardizing individuals or to extend the transmission range of communication networks . connecting a set of uavs with each other within a wireless mesh network helps to increase the coverage of the observable area . due to their limited performance and energy resources and , thereby , restricted communication capabilities , a tailored qos management scheme has to be used to optimize occurring data flows in these networks . we present a qos control scheme that works on the basis of process-patterns , each describing the context-dependent behavior of uavs according to the execution order of services for different situations . furthermore , the logical communication path is optimized within the mesh network based on each node 's areal position using a dynamic hierarchical communication structure . thereby , performance and fairness within a network of uavs can be increased on demand . story_separator_special_tag one of the most important design problems for multi-uav ( unmanned air vehicle ) systems is communication , which is crucial for cooperation and collaboration between the uavs . if all uavs are directly connected to an infrastructure , such as a ground base or a satellite , the communication between uavs can be realized through the infrastructure . however , this infrastructure-based communication architecture restricts the capabilities of multi-uav systems . ad-hoc networking between uavs can solve the problems arising from fully infrastructure-based uav networks . in this paper , flying ad-hoc networks ( fanets ) , which are ad hoc networks connecting uavs , are surveyed . the differences between fanets , manets ( mobile ad-hoc networks ) and vanets ( vehicle ad-hoc networks ) are clarified first , and then the main fanet design challenges are introduced . along with the existing fanet protocols , open research issues are also discussed .
story_separator_special_tag the integration of unmanned aircraft systems ( uas ) into the national airspace system ( nas ) poses a variety of technical challenges to uas developers and aviation regulators . in response to growing demand for access to civil airspace in the united states , the federal aviation administration ( faa ) has produced a roadmap identifying key areas requiring further research and development . one such technical challenge is the development of a detect and avoid system ( daa ) capable of providing a means of compliance with the see and avoid requirement in manned aviation . the purpose of the daa system is to support the pilot , situated at a ground control station ( gcs ) , in maintaining daa well clear of nearby aircraft through the use of gcs displays and alerts . in addition to its primary function of aiding the pilot in maintaining daa well clear , the daa system must also safely interoperate with existing nas systems and operations , such as the airspace management procedures of air traffic controllers ( atc ) and collision avoidance ( ca ) systems currently in use by manned aircraft , namely the traffic alert and collision story_separator_special_tag within the short span of a decade , wi-fi hotspots have revolutionized internet service provisioning . with the increasing popularity of and rising demand for more public wi-fi hotspots , network service providers are facing a daunting task . wi-fi hotspots typically require extensive wired infrastructure to access the backhaul network , which is often expensive and time consuming to provide in such situations . wireless mesh networks ( wmns ) offer an easy and economical alternative for providing broadband wireless internet connectivity and could be called the web-in-the-sky . in place of an underlying wired backbone , a wmn forms a wireless backhaul network , thus obviating the need for extensive cabling . they are based on multihop communication paradigms that dynamically form a connected network . however , multihop wireless communication is severely plagued by many limitations such as low throughput and limited capacity . in this article we point out key challenges that are impeding the rapid progress of this upcoming technology . we systematically examine each layer of the network and discuss the feasibility of some state-of-the-art technologies/protocols for adequately addressing these challenges . we also provide broader and deeper insight into many other issues that are story_separator_special_tag at present , uavs serving as tactical communications relays play an important role in battlefield communication support . the future development of uav ad hoc network technology will provide the necessary technology to ensure the cooperative engagement of multiple uavs and their further integration into the battlefield communications network . this article begins with an introduction to the uav and the ad hoc network , followed by the presentation of several key technologies of the uav ad hoc network in the battlefield and the difficulties and challenges in the course of the study . finally , the paper discusses the future development of the uav battlefield communications network . story_separator_special_tag in recent years , the capabilities and roles of unmanned aerial vehicles ( uavs ) have rapidly evolved , and their usage in military and civilian areas is extremely popular as a result of the advances in technology of robotic systems such as processors , sensors , communications , and networking technologies .
while this technology is progressing , the development and maintenance costs of uavs are decreasing . the focus is changing from the use of one large uav to the use of multiple uavs , which are integrated into teams that can coordinate to achieve high-level goals . this level of coordination requires new networking models that can be set up on highly mobile nodes such as the uavs in the fleet . such networking models allow any two nodes to communicate directly if they are in communication range , or indirectly through a number of relay nodes such as uavs . setting up an ad-hoc network between flying uavs is a challenging issue , and requirements can differ from traditional networks , mobile ad-hoc networks ( manets ) and vehicular ad-hoc networks ( vanets ) in terms of node mobility , connectivity , message routing , service quality , story_separator_special_tag we developed uavnet , a framework for the autonomous deployment of a flying wireless mesh network using small quadrocopter-based unmanned aerial vehicles ( uavs ) . the flying wireless mesh nodes are automatically interconnected to each other and build an ieee 802.11s wireless mesh network . the implemented uavnet prototype is able to autonomously interconnect two end systems by setting up an airborne relay , consisting of one or several flying wireless mesh nodes . the developed software includes basic functionality to control the uavs and to set up , deploy , manage , and monitor a wireless mesh network . our evaluations have shown that uavnet can significantly improve network performance . story_separator_special_tag with the advances in computation , sensor , communication and networking technologies , the utilization of unmanned aerial vehicles ( uavs ) in military and civilian areas has become extremely popular over the last two decades . since small uavs are relatively cheap , the focus is changing , and the usage of several small uavs is preferred over one large uav . this change in orientation is dramatic , and it is driving the development of new networking technologies between uavs , which can constitute swarm uav teams executing specific tasks with different levels of intra- and inter-vehicle communication , especially for coordination and control of the system . setting up a uav network not only extends operational scope and range but also enables quick and reliable response times . because uavs are highly mobile nodes for networking , setting up an ad-hoc network is a challenging issue , and this networking has some requirements which differ from those of traditional networks , mobile ad-hoc networks ( manets ) and vehicular ad-hoc networks ( vanets ) in terms of connectivity , routing , services , applications , etc . in this paper , it is aimed to point out the story_separator_special_tag we propose prophet+ , a routing scheme for opportunistic networks designed to maximize the successful data delivery rate and minimize transmission delay . prophet+ computes a deliverability value to determine the routing path for packets . deliverability is calculated using a weighted function consisting of evaluations of a node 's buffer size , power , location , and popularity , and the predictability value from prophet . even though the proposed prophet+ weights are chosen based on qualitative considerations , it is possible for prophet+ to perform even more efficiently in various environments by shifting the weights accordingly .
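a minimal sketch of a prophet+-style deliverability value as the weighted function just described ; the abstract says the real weights are chosen qualitatively , so the weight values and the assumption that every input is pre-normalized to [ 0 , 1 ] are placeholders of ours :

```python
def deliverability(buffer_free, power, location, popularity, predictability,
                   weights=(0.2, 0.2, 0.2, 0.2, 0.2)):
    """weighted sum of normalized node evaluations, each assumed to lie in [0, 1]."""
    factors = (buffer_free, power, location, popularity, predictability)
    return sum(w * f for w, f in zip(weights, factors))

# the candidate relay with the highest deliverability is chosen as the next hop
candidates = {"node_a": (0.9, 0.5, 0.7, 0.3, 0.6),
              "node_b": (0.4, 0.9, 0.6, 0.8, 0.7)}
next_hop = max(candidates, key=lambda n: deliverability(*candidates[n]))
print(next_hop)  # node_b (0.68 vs 0.60)
```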
our simulation illustrates that prophet+ can perform better than or equal to the prophet routing protocol if logical choices for the weights are used . story_separator_special_tag the control of networked multivehicle systems designed to perform complex coordinated tasks is currently an important and challenging field of research . this paper addresses a cooperative search problem where a team of uninhabited aerial vehicles ( uavs ) seeks to find targets of interest in an uncertain environment . we present a practical framework for online planning and control of a group of uavs for cooperative search based on two interdependent tasks : ( i ) incrementally updating `` cognitive maps '' used as the representation of the environment through new sensor readings ; ( ii ) continuously planning the path for each vehicle based on the information obtained through the search . we formulate the cooperative search problem and develop a decentralized strategy based on an opportunistic cooperative learning method , where the emergent coordination among vehicles is enabled by letting each vehicle consider other vehicles ' actions in its path planning procedure . by using the developed strategy , physically feasible paths for the vehicles to follow are generated , where constraints on aerial vehicles , including physical maneuverabilities , are considered and the dynamic nature of the environment is taken into account . we also present story_separator_special_tag advances in electronics and software are allowing the rapid development of small unmanned aerial vehicles ( uavs ) , capable of performing autonomous coordinated actions . developments in the area of lithium polymer batteries and carbon fiber-reinforced plastic materials have let uavs become an aerial platform that can be equipped with a variety of sensors such as cameras . furthermore , it is also possible to mount communication modules on the uav platform in order to let the uavs work as communication relays to build a wireless aerial backbone network . however , the cooperative operation between multiple autonomous unmanned aerial vehicles is usually constrained by sensor range , communication limits , and operational environments . stable communication systems of networked uavs and sensing nodes will be the key technologies for high-performance and remote operation in these applications . the topology of the uav ad-hoc network plays an important role in the system performance . this paper discusses state-of-the-art schemes that could be applied for topology control of uav ad-hoc networks . story_separator_special_tag flying ad hoc networks ( fanets ) are one of the most effective multi-uav communication architectures , thanks to their capability of transferring data simultaneously without any infrastructure . the fanet task allocation problem is the one-to-one assignment of agents to tasks so that the overall benefit of all the agents is maximized by taking delays and costs into account , given a set of agents and a set of tasks . a coordination-based task allocation system ensuring spatial and temporal coordination between uavs is essential for fanets . in this paper , a new multi-uav task planning heuristic is proposed for fanets to visit all target points in minimum time , while preserving network connectivity at all times . effectiveness in mission execution and cost efficiency in task allocation have been demonstrated by conducting a set of experiments on 2d terrains .
performance results validated the usage of our algorithms for the connected multi-uav task planning problem for fanets . story_separator_special_tag in this paper we examine mobile ad-hoc networks ( manets ) composed of unmanned aerial vehicles ( uavs ) . due to the high mobility of the nodes , these networks are very dynamic and existing routing protocols partly fail to provide reliable communication . we present predictive-olsr , an extension to the optimized link-state routing ( olsr ) protocol : it enables efficient routing in very dynamic conditions . the key idea is to exploit gps information to aid the routing protocol . predictive-olsr weights the expected transmission count ( etx ) metric , taking into account the relative speed between the nodes . we provide numerical results obtained by a mac-layer emulator that integrates a flight simulator to reproduce realistic flight conditions . these numerical results show that predictive-olsr significantly outperforms olsr and babel , providing reliable communication even in very dynamic conditions . story_separator_special_tag this paper explores the role of meshed airborne communication networks in the operational performance of small unmanned aircraft systems . small unmanned aircraft systems have the potential to create new applications and markets in civil domains , enable many disruptive technologies , and put considerable stress on air traffic control systems . we argue that of the existing networked communication architectures , only meshed ad hoc networking can meet the communication demands of the large number of small aircraft expected to be deployed in the future . experimental results using the heterogeneous unmanned aircraft system are presented to show that meshed airborne communication is feasible , that it extends the operational envelope of small unmanned aircraft at the expense of increased communication variability , and that net-centric operation of multiple cooperating aircraft is possible . additionally , the ability of airborne networks of small unmanned aircraft to exploit controlled mobility to improve performance is discussed . story_separator_special_tag a fundamental but challenging problem in the cooperation and control of multiple unmanned aerial vehicles ( uavs ) is efficient networking of the uavs over the wireless medium in rapidly changing environments . in this paper , we introduce four communication architectures for networking uavs and review some military communication standards applicable to uav communications . after discussing the pros and cons of each communication architecture , we conclude that a uav ad hoc network is the most appropriate architecture to network a team of uavs , while a multi-layer uav ad hoc network is more suitable for multiple groups of heterogeneous uavs . by comparing various legacy and next-generation military data link systems , we highlight that one important feature of the next-generation waveforms is their capability of internet protocol ( ip ) based ad hoc networking , which allows uavs to communicate with each other in single- or multi-layer uav ad hoc networks . story_separator_special_tag mesh wireless networks represent a solution to create adaptive networks during emergency situations . in this paper we propose a network solution , using wimax technology without fixed access points , based on unmanned aerial vehicles ( uavs ) to realize the network backbone in emergency scenarios .
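the predictive-olsr abstract above builds on etx , the expected transmission count : for forward and reverse delivery ratios df and dr of a link , etx = 1 / ( df * dr ) . the sketch below computes it and applies a speed-dependent penalty of the general kind the paper describes ; the penalty form and coefficient are hypothetical placeholders , not the paper 's weighting :

```python
def etx(delivery_fwd: float, delivery_rev: float) -> float:
    """expected transmission count of a link: 1 / (df * dr)."""
    return 1.0 / (delivery_fwd * delivery_rev)

def predictive_weight(delivery_fwd, delivery_rev, relative_speed, alpha=0.1):
    # penalize links between fast-diverging nodes; alpha is a made-up coefficient
    return etx(delivery_fwd, delivery_rev) * (1.0 + alpha * relative_speed)

print(etx(0.9, 0.8))                                    # ~1.39
print(predictive_weight(0.9, 0.8, relative_speed=5.0))  # ~2.08
```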
we analyze the coverage properties of uav nodes and we provide a methodology for network planning and uav deployment . starting from the area to be covered , we plan the network in terms of the number and positions of uavs by determining the single-cell radius and also setting the desired modulation scheme . we present our results obtained by considering different scenarios : a free-space scenario ; a random scenario , which is characterized by the presence of obstacles of random dimensions and positions ; and a manhattan scenario , which resembles the typical perpendicular street grid of manhattan in new york city . story_separator_special_tag this paper describes a methodology for wireless networking that allows interconnection to the internet through a gateway in order to interact and obtain product and service delivery . these wireless mesh networks ( wmns ) are based on routers which are programmed to work as nodes of a network . certain routers allow the programming of their firmware to form network nodes . communication is transmitted between nodes in the network and it is possible to cover long distances . the signal of a distant node hops from node to node until it reaches the gateway . this generates delays and congestion in the network . a path that contains nodes with a faster connection to the gateway can be designed as a solution . this is called a backbone , and it has a different channel frequency from the common nodes . the characteristics of these networks are their fast implementation and low cost . this makes them useful for rural areas , developing countries and remote regions . story_separator_special_tag airborne networks are special types of ad hoc wireless networks that can be used to enhance situational awareness , flight coordination , and flight efficiency in civil aviation . a unique challenge to the proper design of airborne networks for these applications is the dynamic coupling between airborne networks and flight vehicles . this coupling directly affects the operations and performances of both the networks and the vehicles . accordingly , an appropriate method of airborne network design must consider its interactions with vehicle flight maneuvers and vice versa . in this paper , fundamental issues in the systematic design of airborne networks for civil aviation are addressed . we particularly focus on the issues that cover various aspects of the design such as establishing the airborne network , maintaining the network , and evaluating the performances of the network . story_separator_special_tag the internet of things is a paradigm that allows the interaction of ubiquitous devices through a network to achieve common goals . this paradigm , like any man-made infrastructure , is subject to disasters , outages and other adversarial conditions . in these situations , provisioned communications fail , leaving this paradigm with little or no use . hence , network self-organization among these devices is needed to allow for communication resilience . this paper presents a survey of related work in the area of self-organization and discusses future research opportunities and challenges for self-organization in the internet of things . we begin this paper with a system perspective of the internet of things . we then identify and describe the key components of self-organization in the internet of things and discuss enabling technologies .
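the uav network planning step described at the top of this passage reduces , at its simplest , to coverage arithmetic : divide the target area by the area of one uav cell and inflate by an overlap factor . a minimal sketch ( the 1.21 overhead roughly matches the hexagonal covering density ; treating it as a constant is our simplification , not the paper 's planning method ) :

```python
import math

def uavs_needed(area_m2: float, cell_radius_m: float, overhead: float = 1.21) -> int:
    """disk-area lower bound inflated by an overlap factor for full coverage."""
    cell_area = math.pi * cell_radius_m ** 2
    return math.ceil(overhead * area_m2 / cell_area)

# 2 km x 2 km incident area, 500 m single-cell radius
print(uavs_needed(4_000_000, 500))  # 7
```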
finally we discuss possible tailoring of prior work from other related applications to suit the needs of self-organization in the internet of things paradigm . story_separator_special_tag a communication network with radio nodes organized in a mesh topology is called a wireless mesh network ( wmn ) . wmns are used for a variety of applications such as building automation , transportation , citywide wireless internet services , etc . wmns experience link failures due to application bandwidth demands , channel interference , etc . these failures cause performance degradation . reconfiguration is needed to protect the network from dynamic link failures . most existing algorithms are not able to deliver full improvement at the time of a dynamic link failure . since resource allocation requires global configuration changes , a greedy channel assignment algorithm might not be able to realize the full improvement . the proposed work reconfigures the network at the time of a dynamic link failure . an autonomous reconfiguration system ( ars ) is used to reconfigure the network . the system generates the necessary changes in channel assignment in order to recover from the link failure . the performance is evaluated using different types of quality parameters such as throughput , pdr , and delay . compared with existing schemes , this provides faster recovery . story_separator_special_tag in this paper , motivated by the vision that future internets will comprise infrastructure-based and infrastructure-less networks , we explore the use of the software-defined networking ( sdn ) paradigm in these so-called `` heterogeneous '' networked environments . to make the case for sdn in heterogeneous networks , we examine an application scenario in which sdn is a key enabling technology . we also identify the additional requirements imposed by the sdn paradigm and discuss the research challenges they raise . story_separator_special_tag vehicular ad hoc networks ( vanets ) have in recent years been viewed as one of the enabling technologies to provide a wide variety of services , such as vehicle road safety , enhanced traffic and travel efficiency , and convenience and comfort for passengers and drivers . however , current vanet architectures lack flexibility and make the deployment of services/protocols at large scale a hard task . in this paper , we demonstrate how software-defined networking ( sdn ) , an emerging network paradigm , can be used to provide the flexibility and programmability to networks and introduce new services and features to today 's vanets . we take the concept of sdn , which has mainly been designed for wired infrastructures , especially in the data center space , and propose an sdn-based vanet architecture and its operational mode to adapt sdn to vanet environments . we also discuss the benefits of a software-defined vanet and the services that can be provided . we demonstrate in simulation the feasibility of a software-defined vanet by comparing sdn-based routing with traditional manet/vanet routing protocols . we also show in simulation fallback mechanisms that must be provided to apply the sdn concept into story_separator_special_tag the software defined networking ( sdn ) paradigm promises to dramatically simplify network configuration and resource management . such features are extremely valuable to network operators and therefore , the industrial ( besides the academic ) research and development community is paying increasing attention to sdn .
although wireless equipment manufacturers are increasing their involvement in sdn-related activities , to date there is not a clear and comprehensive understanding of the opportunities offered by sdn in the most common networking scenarios involving wireless infrastructure-less communications , nor of how sdn concepts should be adapted to suit the characteristics of wireless and mobile communications . this paper is a first attempt to fill this gap as it aims at analyzing how sdn can be beneficial in wireless infrastructure-less networking environments , with special emphasis on wireless personal area networks ( wpan ) . furthermore , a possible approach ( called sdwn ) for such environments is presented and some design guidelines are provided . story_separator_special_tag with the rapid development of the mobile internet and the internet of things ( iot ) , mobile data traffic has exploded . wireless communication networks have entered the era of big data . anomalous users and their negative experience can be studied by analyzing user activities in wireless networks . in this paper , we propose a novel mobile big data ( mbd ) architecture consisting of four layers : a storage layer , a fusion layer , an analysis layer and an application layer . based on the mbd architecture , we present data-driven user experience prediction as a case study of applying the proposed mbd architecture in wireless networks . by leveraging machine learning algorithms , the proposed user experience prediction can pre-evaluate user experience through network performance and user behavior features in a data-driven fashion . first , we perform a preliminary analysis on consumer complaint records obtained from the network monitoring system of a major mobile network operator ( mno ) in china . second , up-sampling and down-sampling are combined to combat the severely imbalanced negative and positive samples . the results show that the proposed automated machine learning algorithm improves the prediction accuracy compared story_separator_special_tag this whitepaper proposes openflow : a way for researchers to run experimental protocols in the networks they use every day . openflow is based on an ethernet switch , with an internal flow-table , and a standardized interface to add and remove flow entries . our goal is to encourage networking vendors to add openflow to their switch products for deployment in college campus backbones and wiring closets . we believe that openflow is a pragmatic compromise : on one hand , it allows researchers to run experiments on heterogeneous switches in a uniform way at line-rate and with high port-density ; while on the other hand , vendors do not need to expose the internal workings of their switches . in addition to allowing researchers to evaluate their ideas in real-world traffic settings , openflow could serve as a useful campus component in proposed large-scale testbeds like geni . two buildings at stanford university will soon run openflow networks , using commercial ethernet switches and routers . we will work to encourage deployment at other schools ; and we encourage you to consider deploying openflow in your university network too story_separator_special_tag in this paper we propose to integrate software defined networking ( sdn ) principles in wireless mesh networks ( wmn ) formed by openflow switches .
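as an illustrative aside to the openflow abstract above : the core abstraction is an ordered flow table of match/action entries consulted per packet . the following minimal python sketch models that idea only ( it is not the openflow wire protocol or any controller api ; field names and actions are invented for illustration ) .

# minimal flow-table sketch: ordered match/action entries, first match wins.
# illustration of the idea only, not the openflow protocol itself.

class FlowEntry:
    def __init__(self, match, action, priority=0):
        self.match = match        # dict of header fields, e.g. {"dst": "10.0.0.2"}
        self.action = action      # e.g. ("forward", port) or ("drop",)
        self.priority = priority

class FlowTable:
    def __init__(self):
        self.entries = []

    def add(self, match, action, priority=0):
        self.entries.append(FlowEntry(match, action, priority))
        self.entries.sort(key=lambda e: -e.priority)   # highest priority first

    def lookup(self, packet):
        for entry in self.entries:
            if all(packet.get(k) == v for k, v in entry.match.items()):
                return entry.action
        return ("to_controller",)  # table miss: punt the packet to the controller

table = FlowTable()
table.add({"dst": "10.0.0.2"}, ("forward", 1), priority=10)
table.add({}, ("drop",), priority=0)  # wildcard default entry

print(table.lookup({"src": "10.0.0.1", "dst": "10.0.0.2"}))  # ('forward', 1)
print(table.lookup({"src": "10.0.0.1", "dst": "10.0.0.9"}))  # ('drop',)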
the use of a centralized network controller and the ability to set up arbitrary paths for data flows make sdn a handy tool to deploy fine-grained traffic engineering algorithms in wmns . however , centralized control may be harmful in multi-hop radio networks formed by commodity devices ( e.g . wireless community networks ) , in which node isolation and network fragmentation are not rare events . to exploit the pros and mitigate the cons , our framework uses the traditional openflow centralized controller to engineer the routing of data traffic , while it uses a distributed controller based on olsr to route : i ) openflow control traffic , and ii ) data traffic , in case of central controller failure . we implemented and tested our wireless mesh software defined network ( wmsdn ) , showing its applicability to a traffic engineering use-case in which the controller logic balances outgoing traffic among the internet gateways of the mesh . albeit simple , this use case allows showing a possible usage of sdn that improves story_separator_special_tag several protocols for routing and forwarding in wireless mesh networks ( wmn ) have been proposed , such as aodv , olsr or b.a.t.m.a.n . however , providing support for e.g . flow-based routing , where flows of one source take different paths through the network , is hard to implement in a unified way using traditional routing protocols . openflow is an emerging technology which makes network elements such as routers or switches programmable via a standardized interface . by using virtualization and flow-based routing , openflow enables a rapid deployment of novel packet forwarding and routing algorithms , focusing on fixed networks . we propose an architecture that integrates openflow with wmns and provides such flow-based routing and forwarding capabilities . to demonstrate the feasibility of our openflow based approach , we have implemented a simple solution to solve the problem of client mobility in a wmn which handles the fast migration of client addresses ( e.g . ip addresses ) between mesh access points and the interaction with re-routing , without the need for tunneling . measurements from a real mesh testbed ( kaumesh ) demonstrate the feasibility of our approach based on the evaluation of forwarding performance , control story_separator_special_tag emerging mega-trends ( e.g. , mobile , social , cloud , and big data ) in information and communication technologies ( ict ) are commanding new challenges to the future internet , for which ubiquitous accessibility , high bandwidth , and dynamic management are crucial . however , traditional approaches based on manual configuration of proprietary devices are cumbersome and error-prone , and they cannot fully utilize the capability of the physical network infrastructure . recently , software-defined networking ( sdn ) has been touted as one of the most promising solutions for the future internet . sdn is characterized by its two distinguished features , namely decoupling the control plane from the data plane and providing programmability for network application development . as a result , sdn is positioned to provide more efficient configuration , better performance , and higher flexibility to accommodate innovative network designs . this paper surveys the latest developments in this active research area of sdn . we first present a generally accepted definition for sdn with the aforementioned two characteristic features and potential benefits of sdn .
we then dwell on its three-layer architecture , including an infrastructure layer , a control layer , and an application layer . story_separator_special_tag conventional ad hoc routing protocols face challenges in airborne networks due to aircraft movement , which often results in intermittent links and can cause dramatic topology changes . in this paper , we propose a cluster-based reactive routing protocol to alleviate these problems . our solution takes advantage of mesh routers installed in unmanned aerial vehicles ( uavs ) or aircraft capable of hovering , when such airborne assets are available . as those mesh points usually have relatively stable connections among themselves , they play the role of cluster heads , forming a hierarchical routing structure . a simple self-organizing rule is introduced in cluster management to limit the cluster control overhead and route discovery flooding . in addition , a disruption tolerant mechanism ( dtm ) can be deployed in the routing protocol to increase resilience to temporary link or node failure . story_separator_special_tag delay tolerant networks ( dtns ) are a class of emerging networks that experience frequent and long-duration partitions . compared with conventional networks , their distinguishing feature is that there is no end-to-end connectivity between source and destination . the network topology may change dynamically and randomly , and the non-existence of an end-to-end path poses a number of challenges for routing in dtns . in this paper , we survey the state-of-the-art routing protocols and give a comparison of them with respect to the important challenging issues in dtns . the routing protocols are classified into two categories based on which property is used to find the destination : flooding families and forwarding families . the pros and cons as well as the performance of the routing protocols are discussed and compared . story_separator_special_tag the 1990s have seen a rapid growth of research interest in mobile ad hoc networking . the infrastructureless and dynamic nature of these networks demands a new set of networking strategies to be implemented in order to provide efficient end-to-end communication . this , along with the diverse applications of these networks in many different scenarios such as battlefield and disaster recovery , has seen manets being researched by many different organisations and institutes . manets employ the traditional tcp/ip structure to provide end-to-end communication between nodes . however , due to their mobility and the limited resources in wireless networks , each layer in the tcp/ip model requires redefinition or modification to function efficiently in manets . one interesting research area in manets is routing . routing in manets is a challenging task and has received a tremendous amount of attention from researchers . this has led to the development of many different routing protocols for manets , and each author of each proposed protocol argues that the strategy proposed provides an improvement over a number of different strategies considered in the literature for a given network scenario . therefore , it is quite difficult to determine which story_separator_special_tag wireless mesh networks ( wmns ) consist of mesh routers and mesh clients , where mesh routers have minimal mobility and form the backbone of wmns . they provide network access for both mesh and conventional clients .
the integration of wmns with other networks such as the internet , cellular , ieee 802.11 , ieee 802.15 , ieee 802.16 , sensor networks , etc. , can be accomplished through the gateway and bridging functions in the mesh routers . mesh clients can be either stationary or mobile , and can form a client mesh network among themselves and with mesh routers . wmns are anticipated to resolve the limitations and to significantly improve the performance of ad hoc networks , wireless local area networks ( wlans ) , wireless personal area networks ( wpans ) , and wireless metropolitan area networks ( wmans ) . they are undergoing rapid progress and inspiring numerous deployments . wmns will deliver wireless services for a large variety of applications in personal , local , campus , and metropolitan areas . despite recent advances in wireless mesh networking , many research challenges remain in all protocol layers . this paper presents a detailed story_separator_special_tag we consider the task of using one or more unmanned aerial vehicles ( uavs ) to relay messages between two distant ground nodes . for delay-tolerant applications like latency-insensitive bulk data transfer , we seek to maximize throughput by having a uav load data from a source ground node , carry the data while flying to the destination , and finally deliver the data to a destination ground node . we term this the `` load-carry-and-deliver '' ( lcad ) paradigm and compare it against the conventional multi-hop , store-and-forward paradigm . we identify and analyze several of the most important factors in constructing a throughput-maximizing framework subject to constraints on both application allowable delay and uav maneuverability . we report performance measurement results for ieee 802.11g devices in three flight tests , based on which we derive a statistical model for predicting throughput performance for lcad . due to the nature of commercial off-the-shelf systems , this methodology is of essential importance for allowing better flight-path design to achieve high throughput . story_separator_special_tag this document describes the optimized link state routing ( olsr ) protocol for mobile ad hoc networks . the protocol is an optimization of the classical link state algorithm tailored to the requirements of a mobile wireless lan . the key concept used in the protocol is that of multipoint relays ( mprs ) . mprs are selected nodes which forward broadcast messages during the flooding process . this technique substantially reduces the message overhead as compared to a classical flooding mechanism , where every node retransmits each message when it receives the first copy of the message . in olsr , link state information is generated only by nodes elected as mprs . thus , a second optimization is achieved by minimizing the number of control messages flooded in the network . as a third optimization , an mpr node may choose to report only links between itself and its mpr selectors . hence , contrary to the classic link state algorithm , partial link state information is distributed in the network . this information is then used for route calculation . olsr provides optimal routes ( in terms of number of hops ) . the protocol is story_separator_special_tag in highly dynamic airborne networks , multi-hop routing becomes increasingly difficult due to high mobility , intermittent links and varying link quality , and the need to scale .
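as an aside to the olsr abstract above : mpr selection is essentially a set-cover problem over the two-hop neighborhood . a minimal greedy python sketch in that spirit ( a common heuristic , not the rfc 's exact algorithm ; the toy topology is invented ) .

# greedy mpr selection sketch: choose a subset of one-hop neighbours that
# covers all two-hop neighbours. illustrative heuristic, not rfc 3626 verbatim.

def select_mprs(one_hop, two_hop_via):
    """one_hop: set of one-hop neighbour ids.
    two_hop_via: dict neighbour -> set of two-hop nodes reachable through it."""
    uncovered = set().union(*two_hop_via.values()) if two_hop_via else set()
    mprs = set()
    while uncovered:
        candidates = one_hop - mprs
        if not candidates:
            break
        # greedily pick the neighbour covering the most uncovered two-hop nodes
        best = max(candidates,
                   key=lambda n: len(two_hop_via.get(n, set()) & uncovered))
        gained = two_hop_via.get(best, set()) & uncovered
        if not gained:
            break  # remaining two-hop nodes are unreachable via any neighbour
        mprs.add(best)
        uncovered -= gained
    return mprs

two_hop_via = {"a": {"x", "y"}, "b": {"y", "z"}, "c": {"z"}}
print(select_mprs({"a", "b", "c"}, two_hop_via))  # e.g. {'a', 'b'}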
traditionally , airborne tactical networks have leveraged existing manet proactive , reactive , and hybrid routing protocols , with modifications for cross-layer information , to provide multihop routing . although there has been some success with utilizing these protocols individually in airborne networks , a proper comparison of all types of manet routing protocols at scale , with mobility patterns associated with airborne tactical networks , is lacking . in this paper , we compare a variety of proactive and reactive manet routing protocols such as aodv , olsr and ospf-mdr , under relative node velocities and mobility patterns associated with airborne networks . specifically , we evaluate each protocol in terms of routing overhead traffic , end-to-end message completion rate , and end-to-end delay , to examine the performance trade-offs . story_separator_special_tag babel is a loop-avoiding distance-vector routing protocol that is robust and efficient both in ordinary wired networks and in wireless mesh networks . this document describes the babel routing protocol , and obsoletes rfcs 6126 and 7557 . story_separator_special_tag mobile ad hoc networks ( manets ) can undergo dynamic changes in topology . the most distinctive characteristic which differentiates manets from other networks is that nodes are capable of changing location . the results of surveys over the last 5 years show that manets can overcome two difficulties : congestion and communication between nodes . different routing algorithms are used to improve performance by maximizing packet delivery ratio ( pdr ) and throughput while minimizing end-to-end delay and routing load . maximum pdr and throughput give the best performance . this review paper describes the performance of three routing protocols under changes in traffic , number of nodes and node mobility . to analyze the performance , reactive and proactive routing protocols are considered . the routing protocols include better approach to mobile ad hoc networks ( batman ) , dynamic source routing ( dsr ) and the optimized link state routing algorithm ( olsr ) . this paper systematically analyzes the performance of reactive and proactive routing protocols . story_separator_special_tag this study experimentally compares the performance of three different multi-hop ad hoc network routing protocols . traditional routing protocols have proven inadequate in wireless ad hoc networks , motivating the need for ad hoc specific routing protocols . this study tests link state , distance vector and biologically inspired approaches to routing using the olsr , babel and batman routing protocols . the importance of osi layers is also discussed . this study concludes that the routing protocol 's overhead is the largest determinant of performance in small multi-hop ad hoc networks . the results show that babel outperforms the olsr and batman routing protocols and that the osi layer of the routing protocol has little impact on performance . story_separator_special_tag 2nd ifip international symposium on wireless communications and information technology in developing countries , csir , pretoria , south africa , 6-7 october 2008 story_separator_special_tag the dynamic source routing protocol ( dsr ) is a simple and efficient routing protocol designed specifically for use in multi-hop wireless ad hoc networks of mobile nodes .
dsr allows the network to be completely self-organizing and self-configuring , without the need for any existing network infrastructure or administration . the protocol is composed of the two mechanisms of `` route discovery '' and `` route maintenance '' , which work together to allow nodes to discover and maintain source routes to arbitrary destinations in the ad hoc network . the use of source routing allows packet routing to be trivially loop-free , avoids the need for up-to-date routing information in the intermediate nodes through which packets are forwarded , and allows nodes forwarding or overhearing packets to cache the routing information in them for their own future use . all aspects of the protocol operate entirely on-demand , allowing the routing packet overhead of dsr to scale automatically to only that needed to react to changes in the routes currently in use . this document specifies the operation of the dsr protocol for routing unicast ip packets in multi-hop wireless ad hoc networks . story_separator_special_tag this paper describes an implementation of a wireless mobile ad hoc network with radio nodes mounted at fixed sites , on ground vehicles , and in small ( 10kg ) uavs . the ad hoc networking allows any two nodes to communicate either directly or through an arbitrary number of other nodes which act as relays . we envision two scenarios for this type of network . in the first , the uav acts as a prominent radio node that connects disconnected ground radios . in the second , the networking enables groups of uavs to communicate with each other to extend small uavs ' operational scope and range . the network consists of mesh network radios assembled from low-cost commercial off-the-shelf components . the radio is an ieee 802.11b ( wifi ) wireless interface and is controlled by an embedded computer . the network protocol is an implementation of the dynamic source routing ad hoc networking protocol . the radio is mounted either in an environmental enclosure for outdoor fixed and vehicle mounting or directly in our custom built uavs . a monitoring architecture has been embedded into the radios for de story_separator_special_tag the ad hoc on-demand distance vector ( aodv ) routing protocol is intended for use by mobile nodes in an ad hoc network . it offers quick adaptation to dynamic link conditions , low processing and memory overhead , and low network utilization , and determines unicast routes to destinations within the ad hoc network . it uses destination sequence numbers to ensure loop freedom at all times ( even in the face of anomalous delivery of routing control messages ) , avoiding problems ( such as `` counting to infinity '' ) associated with classical distance vector protocols . story_separator_special_tag the ietf manet working group mandate was to standardise ip routing protocols in manets . rfc 2501 specifies the charter for the working group . the rfcs still have unanswered questions concerning either implementation or deployment of the protocols . nevertheless , the working group identifies the proposed algorithms as a trial technology . aggressive research in this area has continued since then , with prominent studies on routing protocols such as aodv , dsr , tora and olsr . several studies have been done on the performance evaluation of routing protocols using different evaluation methods .
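returning to the aodv abstract above : loop freedom rests on accepting only routes with fresher destination sequence numbers , or equally fresh ones with fewer hops . a minimal python sketch of that acceptance rule ( simplified from the rfc ) .

# aodv-style route update rule sketch: prefer higher destination sequence
# numbers; break ties with hop count. simplified illustration, not rfc 3561.

def should_update(current, offered):
    """routes are dicts like {"seq": int, "hops": int}; current may be None."""
    if current is None:
        return True
    if offered["seq"] > current["seq"]:
        return True                      # fresher information always wins
    if offered["seq"] == current["seq"] and offered["hops"] < current["hops"]:
        return True                      # same freshness, shorter path
    return False

route = None
for offer in [{"seq": 5, "hops": 4}, {"seq": 5, "hops": 2}, {"seq": 4, "hops": 1}]:
    if should_update(route, offer):
        route = offer
print(route)  # {'seq': 5, 'hops': 2}: the stale seq-4 route was rejected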
different methods and simulation environments give different results , and consequently there is a need to broaden the spectrum to account for effects not taken into consideration in a particular environment . in this project , we evaluate the performance of the aodv , olsr , dsr and tora ad hoc routing protocols in opnet . we simulate a mobile ad hoc network with all nodes in the network receiving ftp traffic from a common source ( ftp server ) . in this way , the results of this analysis would also represent a situation where the manet receives traffic from another network via story_separator_special_tag this document describes the zone routing protocol ( zrp ) , a hybrid routing protocol suitable for a wide variety of mobile ad-hoc networks , especially those with large network spans and diverse mobility patterns . each node proactively maintains routes within a local region ( referred to as the routing zone ) . knowledge of the routing zone topology is leveraged by the zrp to improve the efficiency of a reactive route query/reply mechanism . the proactive maintenance of routing zones also helps improve the quality of discovered routes , by making them more robust to changes in network topology . the zrp can be configured for a particular network by proper selection of a single parameter , the routing zone radius . story_separator_special_tag if a large-scale disaster occurs , such as the east japan great earthquake , there is some possibility of informational isolation because of communication network disconnection or high congestion . in fact , the east japan great earthquake isolated many japanese coastal resident areas , and the lack of disaster information is considered to have affected the speed of rescue , evacuation , and the delivery of relief supplies . dtn ( delay tolerant networking ) is supposed to be one of the effective methods to transmit significant data even under poor network conditions . however , when dtn is applied to local areas such as the japanese northeastern coastal cities which were severely damaged by the earthquake , dtn might not work effectively . that is because there are some considerable problems , such as fewer roads , cars , and pedestrians than in urban areas . moreover , the scale of the area is likely wider than that of urban areas because it includes many non-residential areas such as rice fields , gardens , and woods . therefore , it is necessary to consider additional effective functions when dtn is applied for a disaster information story_separator_special_tag delay tolerant networks ( dtns ) represent a class of wireless networks that experience frequent and long lasting partitions due to the sparse distribution of nodes in the topology . a traditional tcp/ip setting assumes the definite existence of a contemporaneous end-to-end path between any source-destination pair in the network . any setting that violates this assumption may be considered as a potential application for the dtn architecture . to cope with this situation , dtn nodes utilize a store-carry-forward approach in which messages are buffered for extended intervals of time until an appropriate forwarding opportunity is recognized . numerous studies have tackled the challenging problem of routing in dtns .
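as a concrete aside to the store-carry-forward principle invoked above , a minimal protocol-agnostic python sketch : nodes buffer messages and hand over copies on contact ( epidemic-style copying is assumed here purely for illustration ) .

# store-carry-forward sketch: nodes buffer messages until a contact occurs,
# then deliver or replicate them. real dtn routers add queue limits,
# ttls and forwarding policies on top of this skeleton.

class DTNNode:
    def __init__(self, node_id):
        self.node_id = node_id
        self.buffer = []            # list of (dest, payload)

    def create(self, dest, payload):
        self.buffer.append((dest, payload))

    def on_contact(self, peer):
        """called when this node comes within radio range of `peer`."""
        delivered, kept = [], []
        for dest, payload in self.buffer:
            if dest == peer.node_id:
                delivered.append(payload)            # direct delivery
            else:
                peer.buffer.append((dest, payload))  # epidemic-style copy
                kept.append((dest, payload))         # keep our own copy too
        self.buffer = kept
        return delivered

a, b, c = DTNNode("a"), DTNNode("b"), DTNNode("c")
a.create("c", "hello")
a.on_contact(b)         # b now carries a copy; a keeps its own
print(b.on_contact(c))  # ['hello'] : delivered when b later meets c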
routing proposals include stochastic approaches such as random , spray-and-wait and epidemic routing , or deterministic approaches such as history-based , model-based , coding-based and variations of these approaches . the number of routing schemes in the literature is increasing rapidly without a clear mapping of which is more suitable for any of the vast array of potential dtn applications . this document surveys the main routing schemes in the dtn literature . it provides a detailed insight into the dtn approach and describes in some depth the policies and story_separator_special_tag delay tolerant networks ( dtns ) are sparse and highly mobile wireless ad hoc networks , where no contemporaneous end-to-end path may ever exist at any given time instant , and thus store-carry-forward schemes become a natural routing option . a lot of models have been proposed to analyze the unicast performance of such routing schemes in dtns , while few works consider the multicast scenario . in this paper , we develop a general continuous time markov chain-based theoretical framework to characterize the complicated message delivery process of dtn multicast scenarios , based on which analytical expressions are further derived for both the expected delivery delay and the expected delivery cost . the developed theoretical framework is general in the sense that : 1 ) it can be used to analyze dtn multicast performance under the common store-carry-forward routing schemes ; 2 ) it can also be used for the common mobility models ; 3 ) it covers some available models developed for dtn unicast as special cases . we then apply the theoretical framework to explore the delivery performance of two popular routing schemes , epidemic routing and two-hop relaying story_separator_special_tag in order to provide network connectivity in highly partitioned ad-hoc networks , we propose a routing strategy that incorporates an existing ad-hoc routing protocol , ad hoc on demand distance vector ( aodv ) , with disruption tolerant networking ( dtn ) a la store-carry-forward mechanisms , using unmanned aerial vehicles ( uavs ) as carriers . this paper focuses on the design , implementation , and evaluation of the routing strategy . the major contribution of this work is the implementation of our dtn aware routing protocol on top of existing and mostly unmodified aodv . we show the advantage of the dtn protocol through simulation using ns-2 . story_separator_special_tag preface . contributors . about the editor . 1. algorithms for mobile ad hoc networks ( azzedine boukerche , daniel camara , antonio a.f . loureiro , and carlos m.s . figueiredo ) . 2. establishing a communication infrastructure in ad hoc networks ( michel barbeau , evangelos kranakis , and ioannis lambadaris ) . 3. robustness control for network-wide broadcast in multihop wireless networks ( paul rogers and nael b. abu-ghazaleh ) . 4. encoding for efficient data distribution in multihop ad hoc networks ( luciana pelusi , andrea passarella , and marco conti ) . 5. a taxonomy of routing protocols for mobile ad hoc networks ( azzedine boukerche , mohammad z. ahmad , damla turgut , and begumhan turgut ) . 6. adaptive backbone multicast routing for mobile ad hoc networks ( chaiporn jaikaeo and chien-chung shen ) . 7. effect of interference on routing in multihop wireless networks ( vinay kolar and nael b. abu-ghazaleh ) . 8. routing protocols in intermittently connected mobile ad hoc networks and delay-tolerant networks ( zhensheng zhang ) . 9.
transport layer protocols for mobile ad hoc networks ( lap kong law , srikanth v. krishnamurthy , and michalis faloutsos story_separator_special_tag communication networks are traditionally assumed to be connected . however , emerging wireless applications such as vehicular networks , pocket-switched networks , etc. , coupled with volatile links , node mobility , and power outages , will require the network to operate despite frequent disconnections . to this end , opportunistic routing techniques have been proposed , where a node may store-and-carry a message for some time , until a new forwarding opportunity arises . although a number of such algorithms exist , most focus on relatively homogeneous settings of nodes . however , in many envisioned applications , participating nodes might include handhelds , vehicles , sensors , etc . these various `` classes '' have diverse characteristics and mobility patterns , and will contribute quite differently to the routing process . in this paper , we address the problem of routing in intermittently connected wireless networks comprising multiple classes of nodes . we show that proposed solutions , which perform well in homogeneous scenarios , are not as competent in this setting . to this end , we propose a class of routing schemes that can identify the nodes of `` highest utility '' for routing , improving the delay story_separator_special_tag in this paper , we proposed a new routing protocol for unmanned aerial vehicles ( uavs ) that are equipped with directional antennas . we named this protocol the directional optimized link state routing protocol ( dolsr ) . this protocol is based on the well known protocol called the optimized link state routing protocol ( olsr ) . we focused in our protocol on the multipoint relay ( mpr ) concept , which is the most important feature of this protocol . we developed a heuristic that allows the dolsr protocol to minimize the number of multipoint relays . with this new protocol the number of overhead packets will be reduced and the end-to-end delay of the network will also be minimized . we showed through simulation that our protocol outperformed the optimized link state routing protocol , the dynamic source routing ( dsr ) protocol and the ad-hoc on demand distance vector ( aodv ) routing protocol in reducing the end-to-end delay and enhancing the overall throughput . our evaluation of the previous protocols was based on the opnet network simulation tool . story_separator_special_tag much of the work on networking and communications is based on the premise that components interact in one of two ways : either they are connected via a stable wired or wireless network , or they make use of persistent storage repositories accessible to the communicating parties . a new generation of networks raises serious questions about the validity of these fundamental assumptions . in mobile ad hoc wireless networks , connections are transient and the availability of persistent storage is rare . this paper is concerned with achieving communication among mobile devices that may never find themselves in direct or indirect contact with each other at any point in time . a unique feature of our contribution is the idea of exploiting information associated with the motion and availability profiles of the devices making up the ad hoc network .
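as an aside to the motion-profile idea above : if profiles are known , contact times can be computed rather than discovered . a minimal python sketch under the simplifying assumption of constant-velocity 2-d motion ( real profiles would be waypoint sequences ) .

import math

# contact prediction sketch: given constant-velocity 2-d motion profiles,
# compute the earliest time two nodes come within radio range r.

def first_contact(p1, v1, p2, v2, r):
    dx, dy = p1[0] - p2[0], p1[1] - p2[1]       # relative position
    dvx, dvy = v1[0] - v2[0], v1[1] - v2[1]     # relative velocity
    a = dvx * dvx + dvy * dvy
    b = 2 * (dx * dvx + dy * dvy)
    c = dx * dx + dy * dy - r * r
    if c <= 0:
        return 0.0                              # already in range
    if a == 0:
        return None                             # relative velocity is zero: gap never closes
    disc = b * b - 4 * a * c
    if disc < 0:
        return None                             # closest approach still exceeds r
    t = (-b - math.sqrt(disc)) / (2 * a)
    return t if t >= 0 else None                # contact must lie in the future

# nodes 100 m apart approaching head-on at a combined 20 m/s, range 10 m
print(first_contact((0, 0), (10, 0), (100, 0), (-10, 0), 10))  # 4.5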
this is the starting point for an investigation into a range of possible solutions whose essential features are controlled by the manner in which motion profiles are acquired and the extent to which such knowledge is available across an ad hoc network . story_separator_special_tag in the last few years , there has been much research activity in mobile , wireless , ad hoc networks ( manets ) . manets are infrastructure-less , and nodes in the networks are constantly moving . in manets , nodes can directly communicate with each other if they enter each other 's communication range . a node can terminate packets or forward packets ( serve as a relay ) . thus , a packet traverses an ad hoc network by being relayed from one node to another , until it reaches its destination . as nodes are moving , this becomes a challenging task , since the topology of the network is in constant change . how to find a destination , how to route to that destination , and how to ensure robust communication in the face of constant topology change are major challenges in mobile ad hoc networks . routing in mobile ad hoc networks is a well-studied topic . to accommodate the dynamic topology of mobile ad hoc networks , an abundance of routing protocols have recently been proposed . for all these routing protocols , it is implicitly assumed that the network is connected and there is a contemporaneous story_separator_special_tag advances in micro-electro-mechanical systems ( mems ) have revolutionized the digital age to a point where animate and inanimate objects can be used as a communication channel . in addition , the ubiquity of mobile phones with increasing capabilities and ample resources means people are now effectively mobile sensors that can be used to sense the environment as well as data carriers . these objects , along with their devices , form a new kind of network that is characterized by frequent disconnections , resource constraints and unpredictable or stochastic mobility patterns . a key underpinning in these networks is routing or data dissemination protocols that are designed specifically to handle the aforementioned characteristics . therefore , there is a need to review state-of-the-art routing protocols , categorize them , and compare and contrast their approaches in terms of delivery rate , resource consumption and end-to-end delay . to this end , this paper reviews 63 unicast , multicast and coding-based routing protocols that are designed specifically to run in delay tolerant or challenged networks . we provide an extensive qualitative comparison of all protocols , story_separator_special_tag many dtn routing protocols use a variety of mechanisms , including discovering the meeting probabilities among nodes , packet replication , and network coding . the primary focus of these mechanisms is to increase the likelihood of finding a path with limited information , so these approaches have only an incidental effect on such routing metrics as maximum or average delivery latency . in this paper , we present rapid , an intentional dtn routing protocol that can optimize a specific routing metric such as worst-case delivery latency or the fraction of packets that are delivered within a deadline .
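as an aside to the rapid abstract above : the resource-allocation view turns each transfer opportunity into a scheduling problem over per-packet utilities . a toy python sketch of that step ( the utility model below is a deliberately simple stand-in , not rapid 's actual estimator ) .

# utility-driven replication sketch: on a transfer opportunity, replicate
# packets in order of marginal utility per byte until the budget runs out.

def marginal_utility(pkt):
    # toy model: expected delay reduction from one more replica,
    # shrinking with the number of copies already in the network
    return pkt["expected_delay"] / (pkt["replicas"] + 1)

def schedule_transfers(packets, bytes_available):
    order = sorted(packets,
                   key=lambda p: marginal_utility(p) / p["size"],
                   reverse=True)
    sent = []
    for pkt in order:
        if pkt["size"] <= bytes_available:
            bytes_available -= pkt["size"]
            pkt["replicas"] += 1
            sent.append(pkt["id"])
    return sent

packets = [
    {"id": "p1", "size": 400, "replicas": 1, "expected_delay": 120.0},
    {"id": "p2", "size": 100, "replicas": 4, "expected_delay": 300.0},
    {"id": "p3", "size": 500, "replicas": 0, "expected_delay": 60.0},
]
print(schedule_transfers(packets, 600))  # ['p2', 'p1']: utility-dense first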
the key insight is to treat dtn routing as a resource allocation problem that translates the routing metric into per-packet utilities which determine how packets should be replicated in the system . we evaluate rapid rigorously through a prototype of rapid deployed over a vehicular dtn testbed of 40 buses and simulations based on real traces . to our knowledge , this is the first paper to report on a routing protocol deployed on a real dtn at this scale . our results suggest that rapid significantly outperforms existing routing protocols for several metrics . we also show empirically that for small story_separator_special_tag we describe prioritized epidemic ( prep ) routing for opportunistic networks . prep prioritizes bundles based on costs to destination , source , and expiry time . costs are derived from per-link `` average availability '' information that is disseminated in an epidemic manner . prep maintains a gradient of replication density that decreases with increasing distance from the destination . simulation results show that prep outperforms aodv and epidemic routing by factors of about 4 and 1.4 respectively , with the gap widening with decreasing density and decreasing storage . we expect prep to be of greater value than other proposed solutions in highly disconnected and mobile networks where no schedule information or repeatable patterns exist . story_separator_special_tag disruption-tolerant networks ( dtns ) attempt to route network messages via intermittently connected nodes . routing in such environments is difficult because peers have little information about the state of the partitioned network and transfer opportunities between peers are of limited duration . in this paper , we propose maxprop , a protocol for effective routing of dtn messages . maxprop is based on prioritizing both the schedule of packets transmitted to other peers and the schedule of packets to be dropped . these priorities are based on the path likelihoods to peers according to historical data and also on several complementary mechanisms , including acknowledgments , a head-start for new packets , and lists of previous intermediaries . our evaluations show that maxprop performs better than protocols that have access to an oracle that knows the schedule of meetings between peers . our evaluations are based on 60 days of traces from a real dtn network we have deployed on 30 buses . our network , called umassdieselnet , serves a large geographic area between five colleges . we also evaluate maxprop on simulated topologies and show it performs well in a wide variety of dtn environments . story_separator_special_tag an ad hoc network is formed by a group of mobile hosts with a wireless network interface . previous research on communication in ad hoc networks has concentrated on routing algorithms which are designed for fully connected networks . the traditional approach to communication in a disconnected ad hoc network is to let the mobile computer wait passively for network reconnection . this method may lead to unacceptable transmission delays . we propose an approach that guarantees message transmission in minimal time . in this approach , mobile hosts actively modify their trajectories to transmit messages . we develop algorithms that minimize the trajectory modifications under two different assumptions : ( a ) the movements of all the nodes in the system are known and ( b ) the movements of the hosts in the system are not known .
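as an aside to the maxprop abstract above : path likelihoods from historical meeting data can be folded into a simple additive cost . a minimal python sketch ( simplified ; maxprop additionally uses incremental averaging , acknowledgments and buffer policies ) .

# maxprop-flavored path cost sketch: each node tracks how often it meets each
# peer; a path's cost is the sum over its hops of (1 - meeting probability).

from itertools import permutations

meet_prob = {                      # meet_prob[u][v]: estimated p(u meets v)
    "a": {"b": 0.8, "c": 0.1},
    "b": {"a": 0.8, "c": 0.6},
    "c": {"a": 0.1, "b": 0.6},
}

def path_cost(path):
    return sum(1.0 - meet_prob[u][v] for u, v in zip(path, path[1:]))

def best_path(src, dst, nodes):
    """brute force over intermediate nodes; fine for a toy three-node example
    (a real implementation would run dijkstra on the same costs)."""
    middles = [n for n in nodes if n not in (src, dst)]
    candidates = [(src, dst)]
    for r in range(1, len(middles) + 1):
        for mid in permutations(middles, r):
            candidates.append((src,) + mid + (dst,))
    return min(candidates, key=path_cost)

print(best_path("a", "c", ["a", "b", "c"]))  # ('a', 'b', 'c'): via b is likelier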
story_separator_special_tag mobile ad hoc networks ( manets ) provide rapidly deployable and self-configuring network capacity required in many critical applications , e.g. , battlefields , disaster relief and wide area sensing . in this paper we study the problem of efficient data delivery in sparse manets where network partitions can last for a significant period . previous approaches rely on the use of either long range communication , which leads to rapid draining of nodes ' limited batteries , or existing node mobility , which results in low data delivery rates and large delays . in this paper , we describe a message ferrying ( mf ) approach to address the problem . mf is a mobility-assisted approach which utilizes a set of special mobile nodes called message ferries ( or ferries for short ) to provide communication service for nodes in the deployment area . the main idea behind the mf approach is to introduce non-randomness in the movement of nodes and exploit such non-randomness to help deliver data . we study two variations of mf , depending on whether ferries or nodes initiate proactive movement . the mf design exploits mobility to improve data delivery performance and reduce energy consumption in story_separator_special_tag this article examines the evolution of routing protocols for intermittently connected ad hoc networks and discusses the trend toward social-based routing protocols . a survey of current routing solutions is presented , where routing protocols for opportunistic networks are classified based on the network graph employed . the need to capture performance trade-offs from a multi-objective perspective is highlighted . story_separator_special_tag with an increase in the number of daily uav flights and the number of digital video broadcast return channel satellite ( dvb-rcs ) suites in the central command ( centcom ) theater of operations , the demand for constant access to the operational picture has also increased . until recently , there have been limited solutions for enlarging access to dvb-rcs video feeds . with the advent of wireless technologies such as wifi , wimax , 3g , and lte , the opportunity to extend access should be considered . in particular , the ieee 802.21 standard , known as media independent handover services , could be the solution to not only extending the network beyond the reach of forward operating bases , but also allowing for no loss in connectivity , due to its ability to conduct seamless handovers while on the move . in this thesis , a proof of concept evaluation of the compatibility of the ieee 802.21 standard and the dvb-rcs system , using an open source implementation , is presented . this work is to determine if the standard is a viable solution for extending the services of story_separator_special_tag there has been significant progress in the field of vehicular ad hoc networks ( vanets ) over the last several years , which support vehicle-to-vehicle and vehicle-to-infrastructure communications . with these two types of communication modes , users can access the internet . however , internet connection for vehicular ad-hoc networks faces a great challenge : a vehicle moves so fast that it may cause frequent handoffs , which may cause packet delay and packet loss problems . this paper presents an overview of the steps involved in a vanet handoff process along with a handoff classification and reviews of some related studies which reduce handoff latency .
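as an aside to the message ferrying abstract above , a toy python sketch of one ferry round : the ferry visits stationary nodes on a fixed route , dropping off and picking up buffered messages ( the route itself , which real mf schemes optimize , is simply given here ) .

# message-ferry sketch: a ferry cycles through node locations, delivering
# messages addressed to each stop and collecting the outbound ones.

class GroundNode:
    def __init__(self, name):
        self.name = name
        self.outbox = []           # (dest, payload) awaiting pickup
        self.inbox = []

def ferry_round(route, nodes):
    bag = []                       # messages currently on the ferry
    for stop in route:
        node = nodes[stop]
        # drop off anything addressed to this stop
        node.inbox += [p for d, p in bag if d == stop]
        bag = [(d, p) for d, p in bag if d != stop]
        # pick up everything queued here
        bag += node.outbox
        node.outbox = []
    return bag                     # undelivered: carried into the next round

nodes = {n: GroundNode(n) for n in ("a", "b", "c")}
nodes["a"].outbox.append(("c", "sensor reading"))
nodes["c"].outbox.append(("a", "ack"))
leftover = ferry_round(["a", "b", "c"], nodes)
print(nodes["c"].inbox, leftover)  # ['sensor reading'] [('a', 'ack')]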
story_separator_special_tag in the near future , vehicles will be equipped with wave-compliant communication technology , enabling not only safety message exchange , but also infotainment and internet access . however , without complete market penetration , other technologies must still be used , such as ieee 802.11g/n to connect to public wi-fi hotspots in the city , or even 3g and 4g cellular networks . due to the high mobility of nodes in vehicular ad-hoc networks ( vanets ) , the connectivity time between nodes becomes very short ; therefore , it is essential to ensure the lowest handover time when moving between road side units ( rsus ) and other vehicles . in this paper , we implemented a multi-technology seamless handover mechanism for vehicular networks that integrates extended mobility protocols based on mipv6 and pmipv6 , with a mobility manager that provides seamless communication between vehicles and the infrastructure , electing the best technology to maintain the vehicle connected without breaking any active sessions . to validate and evaluate the proposed handover approaches , a real-world vehicular testbed was set up , combining three technologies : ieee 802.11p , ieee 802.11g and 3g ; handover metrics were obtained for all story_separator_special_tag nowadays , everything is moving towards infrastructure-less wireless environments to bring smartness to society . in this situation , it is necessary to bring smart technologies into the ad hoc network environment , as vehicular traffic is a foremost problem in modern cities and on highways . huge amounts of time and resources are wasted while traveling due to traffic congestion . vanets provide comfort and safety for passengers . moreover , various kinds of information , such as accident reports , road conditions , petrol station details , restaurant menus , and discount sales , can be provided to drivers and passengers . the speed and time with which a message is sent and received play an essential part in the intelligent transport system ( its ) . for this , vanets require efficient and reliable methods for data communication , and for gathering and retrieving information , to achieve seamless handoff . in this paper we discuss a vanet architecture consisting of clusters designed by mobile agents that hold the instantaneous conditions of the mobile nodes available in the vanet . for efficient data communication , an attempt has been made to create a new clustering concept with story_separator_special_tag vehicular ad hoc networks ( vanets ) have recently been attracting increasing attention from both the research and industry communities . the specific characteristics of vanets pose difficulties and challenges for network control techniques , particularly the mobility management of vehicles , which is essential in providing seamless communication and guaranteeing user qos requirements . in this paper , an overview of mobility management techniques in vanets is presented . the mobility scenario and the technical challenges in mobility management in vanets are discussed . recent studies on performance enhancement of mobility management protocols in vanets are summarized and the problem of optimal mobile gateway selection is discussed . finally , the related open research issues are discussed .
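several of the handoff abstracts above aim at avoiding ping-pong effects and latency . a minimal python sketch of the generic hysteresis rule used in handover decisions ( a textbook rule , not the specific mechanism of any paper above ; the threshold is invented ) .

# handover decision sketch: hysteresis avoids ping-ponging between access
# points whose signal strengths hover near each other.

HYSTERESIS_DB = 5.0

def choose_attachment(serving, rssi):
    """rssi: dict ap -> measured signal strength in dbm."""
    best = max(rssi, key=rssi.get)
    if serving is None:
        return best
    # hand over only if the best candidate beats the serving ap by a margin
    if best != serving and rssi[best] >= rssi[serving] + HYSTERESIS_DB:
        return best
    return serving

serving = None
for sample in [{"ap1": -60, "ap2": -75},
               {"ap1": -68, "ap2": -66},   # ap2 better, but within hysteresis
               {"ap1": -80, "ap2": -64}]:  # ap2 clearly better: hand over
    serving = choose_attachment(serving, sample)
    print(serving)
# prints: ap1, ap1, ap2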
story_separator_special_tag the goal of network mobility management is to effectively reduce the complexity of the handoff procedure and keep mobile devices connected to the internet . when users are about to leave an old subnet and enter a new subnet , the handoff procedure is executed on the mobile device , and it may break off real-time services , such as voip or mobile tv , because of the mobility of the device . because a vehicle moves so fast , it may suffer from handoff and packet loss problems . both of these problems lower the throughput of the network . to overcome these problems , we propose a novel network mobility protocol for vehicular ad hoc networks . on a highway , because every car is moving in a fixed direction at high speed , a car adopting our protocol can acquire an ip address from the vehicular ad hoc network through vehicle-to-vehicle communications . the vehicle can rely on the assistance of a vehicle in front to execute the pre-handoff procedure , or it may acquire a new ip address through multihop relays from a car on the lanes of the same or story_separator_special_tag measuring the performance of an implementation of a set of protocols and analyzing the results is crucial to understanding the performance and limitations of the protocols in a real network environment . based on this information , the protocols and their interactions can be improved to enhance the performance of the whole system . to this end , we have developed a network mobility testbed and implemented the network mobility ( nemo ) basic support protocol and have identified problems in the architecture which affect the handoff and routing performance . to address the identified handoff performance issues , we have proposed the use of make-before-break handoffs with two network interfaces for nemo . we have carried out a comparison study of handoffs with nemo and have shown that the proposed scheme provides near-optimal performance . further , we have extended a previously proposed route optimization ( ro ) scheme , optinets . we have compared the routing and header overheads using experiments and analysis and shown that the use of the extended optinets scheme reduces the overheads of nemo to a level comparable with mobile ipv6 ro . finally , this paper shows that the proposed handoff and ro story_separator_special_tag in recent years , multi-technology enabled terminals are becoming available . such multi-mode terminals pose new challenges to mobility management . in order to address some of these challenges , the ieee is currently working on a new specification on media independent handover services ( ieee 802.21 mih ) . the main aim of this specification is to improve the user experience of mobile terminals by enabling handovers between heterogeneous technologies while optimizing session continuity . in this article , we provide an overview of the current status of the ieee 802.21 specification . story_separator_special_tag nowadays , vehicular ad hoc networks are an emerging technology . mobility management is one of the most challenging research issues for vehicular ad hoc networks in supporting a variety of intelligent transportation system applications . vehicular ad hoc networks are gaining importance for inter-vehicle communication , because they allow communication among vehicles without any infrastructure , configuration effort , or the high costs of cellular networks .
besides local data exchange , vehicular applications may be used to access internet services . the access is provided by internet gateways located at the roadside . however , internet integration requires corresponding mobility support in the vehicular ad hoc network . in this paper we study the network mobility approach in vehicular ad hoc networks ; the model describes the movement of vehicles from one network to another . the proposed handover scheme reduces handover latency , packet loss and signaling overhead . story_separator_special_tag providing users of multi-interface devices the ability to roam between different access networks is becoming a key requirement for service providers . the availability of multiple mobile broadband access technologies , together with the increasing use of real-time multimedia applications , is creating strong demand for handover solutions that can seamlessly and securely transition user sessions across different access technologies . a key challenge to meeting this growing demand is to ensure handover performance , measured in terms of latency and loss . in addition , handover solutions must allow service providers , application providers , and other entities to implement handover policies based on a variety of operational and business requirements . therefore , standards are required that can facilitate seamless handover between such heterogeneous access networks and that can work with multiple mobility management mechanisms . the ieee 802.21 standard addresses this problem space by providing a media-independent framework and associated services to enable seamless handover between heterogeneous access technologies . in this article , we discuss how the ieee 802.21 standard framework and services are addressing the challenges of seamless mobility for multi-interface devices . in addition , we describe and discuss design considerations for a proof-of-concept story_separator_special_tag internet access and service utilization have been exploding on mobile devices , through the leverage of wlan , 3g and now lte connections . it is this explosion as well that is stressing the underlying fabric of the internet , and motivating new solutions , such as software defined networking ( sdn ) , to build the controlling support and extension capabilities of the future internet . however , sdn has yet to reach the necessary traction to be deployed , and has been directed more towards experimentation-supporting frameworks and away from wireless environments . this paper explores sdn mechanisms and augments them with media independent handover services from the ieee 802.21 standard , coupling them in a single framework for the dynamic optimized support of openflow path establishment and wireless connectivity establishment . the framework was implemented over open-source software in a physical testbed , with results showing the benefits that this solution brings in terms of performance and signaling overhead , when compared with more basic approaches . story_separator_special_tag reducing co2 emissions is an important global environmental issue . over recent years , wireless and mobile communications have become increasingly popular with consumers . an increasingly popular type of wireless access is the so-called wireless mesh network ( wmn ) , which provides wireless connectivity through much cheaper and more flexible backhaul infrastructure compared with wired solutions .
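as an aside to the ieee 802.21 abstracts above : handovers are driven by media-independent link events . the event names below ( link_up , link_down , link_going_down ) do appear in the standard ; the dispatcher and handler logic are an invented python illustration of how a mobility manager might consume them .

# media-independent event sketch: technology-specific drivers raise abstract
# link events; a mobility manager subscribes and reacts uniformly.

class EventBus:
    def __init__(self):
        self.handlers = {}

    def subscribe(self, event, fn):
        self.handlers.setdefault(event, []).append(fn)

    def publish(self, event, **info):
        for fn in self.handlers.get(event, []):
            fn(**info)

bus = EventBus()

def on_link_going_down(link, **_):
    # predictive trigger: start scanning / pre-registering before the loss
    print(f"{link}: preparing make-before-break handover")

def on_link_down(link, **_):
    print(f"{link}: lost; switching to the prepared target")

bus.subscribe("MIH_Link_Going_Down", on_link_going_down)
bus.subscribe("MIH_Link_Down", on_link_down)

bus.publish("MIH_Link_Going_Down", link="802.11g")
bus.publish("MIH_Link_Down", link="802.11g")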
the wireless mesh network ( wmn ) is an emerging new technology which is being adopted as the wireless internetworking solution for the near future . due to increased energy consumption in the information and communication technology ( ict ) industries , and its consequent environmental effects , energy efficiency has become a key factor in evaluating the performance of a communication network . this paper mainly focuses on a layered classification of the largest existing approaches dedicated to energy conservation . it also discusses the most interesting works on energy saving in wmns . story_separator_special_tag although establishing correct and efficient routes is an important design issue in mobile ad hoc networks ( manets ) , a more challenging goal is to provide energy efficient routes , because mobile nodes ' operation time is the most critical limiting factor . this article surveys and classifies the energy-aware routing protocols proposed for manets . they minimize either the active communication energy required to transmit or receive packets or the inactive energy consumed when a mobile node stays idle but listens to the wireless medium for any possible communication requests from other nodes . the transmission power control approach and the load distribution approach belong to the former category , and the sleep/power-down mode approach belongs to the latter category . while it is not clear whether any particular algorithm or class of algorithms is the best for all scenarios , each protocol has definite advantages/disadvantages and is well suited for certain situations . the purpose of this paper is to facilitate research efforts in combining the existing solutions to offer a more energy efficient routing mechanism . story_separator_special_tag an ad-hoc network of wireless static nodes is considered as it arises in a rapidly deployed , sensor-based , monitoring system . information is generated in certain nodes and needs to reach a set of designated gateway nodes . each node may adjust its power within a certain range that determines the set of possible one-hop-away neighbors . traffic forwarding through multiple hops is employed when the intended destination is not within immediate reach . the nodes have limited initial amounts of energy that is consumed at different rates depending on the power level and the intended receiver . we propose algorithms to select the routes and the corresponding power levels such that the time until the batteries of the nodes drain out is maximized . the algorithms are local and amenable to distributed implementation . when there is a single power level , the problem is reduced to a maximum flow problem with node capacities and the algorithms converge to the optimal solution . when there are multiple power levels then the achievable lifetime is close to the optimal ( that is computed by linear programming ) most of the time . it turns out that in order story_separator_special_tag we present the pulse protocol , which is designed for multi-hop wireless infrastructure access . while similar to the more traditional access point model , it is extended to operate across multiple hops . this is particularly useful for conference , airport , or large corporate deployments . in these types of environments , where users are highly mobile , energy efficiency becomes of great importance .
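as an aside to the lifetime-maximization abstract above : a common lightweight proxy for its objective is to prefer the path whose weakest node has the most residual energy . a minimal python sketch ( the cited work itself formulates the problem as a flow problem solved by linear programming ; the topology and energies below are invented ) .

# max-min residual-energy routing sketch: among candidate paths, prefer the
# one whose weakest intermediate node has the most energy left.

energy = {"s": 100, "a": 20, "b": 70, "c": 55, "t": 100}

paths = [("s", "a", "t"),        # short, but through a nearly drained node
         ("s", "b", "c", "t")]   # longer, but energy-balanced

def bottleneck(path):
    # endpoints excluded: only relays burn forwarding energy in this toy model
    return min(energy[n] for n in path[1:-1]) if len(path) > 2 else float("inf")

best = max(paths, key=bottleneck)
print(best, bottleneck(best))    # ('s', 'b', 'c', 't') 55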
the pulse protocol utilizes a periodic flood initiated at the network gateways which provides both routing and synchronization to the network . this synchronization is used to allow idle nodes to power off their radios for a large percentage of the time when they are not needed for packet forwarding . this results in substantial energy savings . through simulation we validate the performance of the routing protocol with respect to both packet delivery and energy savings . story_separator_special_tag in this paper , we model and characterize the performance of multihop radio networks in the presence of energy constraints and design routing algorithms to optimally utilize the available energy . the energy model allows vastly different energy sources in heterogeneous environments . the proposed algorithm is shown to achieve a competitive ratio ( i.e. , the ratio of the performance of any off-line algorithm that has knowledge of all past and future packet arrivals to the performance of our online algorithm ) that is asymptotically optimal with respect to the number of nodes in the network . the algorithm assumes no statistical information on packet arrivals and can easily be incorporated into existing routing frameworks ( e.g. , proactive or on-demand methodologies ) in a distributed fashion . simulation results confirm that the algorithm performs very well in terms of maximizing the throughput of an energy-constrained network . further , a new threshold-based scheme is proposed to reduce the routing overhead while incurring only minimal performance degradation . story_separator_special_tag as mobile computing requires more computation as well as communication activities , energy efficiency becomes the most critical issue for battery-operated mobile devices . specifically , in ad hoc networks where each node is responsible for forwarding neighbor nodes ' data packets , care has to be taken not only to reduce the overall energy consumption of all relevant nodes but also to balance individual battery levels . unbalanced energy usage will result in earlier node failure in overloaded nodes , and in turn may lead to network partitioning and reduced network lifetime . this paper presents a new routing algorithm , called local energy-aware routing ( lear ) , which achieves a trade-off between balanced energy consumption and shortest routing delay , and at the same time avoids the blocking and route cache problems . our performance study based on the glomosim simulator shows that , compared to dsr , the proposed lear improves the energy balance by 1.0-35 % depending on node mobility . story_separator_special_tag ad hoc networks are non-infrastructure networks which consist of mobile nodes . since the mobile nodes have limited battery power , it is very important to use energy efficiently in ad hoc networks . in order to maximize the lifetime of ad hoc networks , traffic should be sent via a route that can avoid nodes with low energy while minimizing the total transmission power . in addition , considering that the nodes of ad hoc networks are mobile , on-demand routing protocols are preferred for ad hoc networks . however , most existing power-aware routing algorithms do not meet these requirements . although some power-aware routing algorithms try to compromise between the two objectives , they are difficult to implement in an on-demand version . in this paper , we propose a novel on-demand power aware routing algorithm called dear .
dear prolongs its network lifetime by balancing minimum energy consumption against fair energy consumption without additional control packets . dear also improves its data packet delivery ratio . story_separator_special_tag the wireless mesh network is a new emerging technology that will make industrial network connectivity more efficient and profitable . mesh networks , which consist of static wireless nodes and mobile clients , have emerged as a key technology for new generation networks . quality of service ( qos ) is designed to promote and support real-time multimedia applications ( audio and video ) . however , guaranteeing qos in wireless networks is a more difficult problem than in a wired ip network . in this paper , we present an efficient routing protocol named qos-cluster based routing protocol ( q-cbrp ) to support qos in wireless mesh networks . story_separator_special_tag we introduce a geographical adaptive fidelity ( gaf ) algorithm that reduces energy consumption in ad hoc wireless networks . gaf conserves energy by identifying nodes that are equivalent from a routing perspective and turning off unnecessary nodes , keeping a constant level of routing fidelity . gaf moderates this policy using application- and system-level information ; nodes that source or sink data remain on and intermediate nodes monitor and balance energy use . gaf is independent of the underlying ad hoc routing protocol ; we simulate gaf over unmodified aodv and dsr . analysis and simulation studies of gaf show that it can consume 40 % to 60 % less energy than an unmodified ad hoc routing protocol . moreover , simulations of gaf suggest that network lifetime increases proportionally to node density ; in one example , a four-fold increase in node density leads to a network lifetime increase of 3 to 6 times ( depending on the mobility pattern ) . more generally , gaf is an example of adaptive fidelity , a technique proposed for extending the lifetime of self-configuring systems by exploiting redundancy to conserve energy while maintaining application fidelity . story_separator_special_tag this paper presents span , a power saving technique for multi-hop ad hoc wireless networks that reduces energy consumption without significantly diminishing the capacity or connectivity of the network . span builds on the observation that when a region of a shared-channel wireless network has a sufficient density of nodes , only a small number of them need be on at any time to forward traffic for active connections . span is a distributed , randomized algorithm where nodes make local decisions on whether to sleep , or to join a forwarding backbone as a coordinator . each node bases its decision on an estimate of how many of its neighbors will benefit from it being awake , and the amount of energy available to it . we give a randomized algorithm where coordinators rotate with time , demonstrating how localized node decisions lead to a connected , capacity-preserving global topology . improvement in system lifetime due to span increases as the ratio of idle-to-sleep energy consumption increases . our simulations show that with a practical energy model , system lifetime of an 802.11 network in power saving mode with span is a factor of two better than without . story_separator_special_tag the idea of virtual backbone routing for ad hoc wireless networks is to operate routing protocols over a virtual backbone .
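the routing-equivalence idea in the gaf abstract above is usually realized with a virtual grid . a minimal sketch follows , assuming the standard cell side of r / sqrt(5) , which makes any node in one cell able to reach any node in a horizontally or vertically adjacent cell ; the function name is hypothetical .

import math

def gaf_cell(x, y, radio_range):
    # virtual grid cell for gaf-style node equivalence . with a cell
    # side of r / sqrt(5) , every node in a cell can reach every node
    # in a vertically or horizontally adjacent cell , so keeping one
    # node awake per cell preserves routing fidelity
    side = radio_range / math.sqrt(5.0)
    return (int(x // side), int(y // side))

# nodes that map to the same cell are interchangeable for routing ;
# all but one of them ( e.g. the one with the most residual energy )
# can sleep without disconnecting the topology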
one purpose of virtual backbone routing is to alleviate the serious broadcast storm problem suffered by many existing on-demand routing protocols for route discovery . thus constructing a virtual backbone is very important . in our study , the virtual backbone is approximated by a minimum connected dominating set ( mcds ) in a unit-disk graph . this is an np-hard problem [ 6 ] . we propose a distributed approximation algorithm with performance ratio at most 8 . this algorithm has time complexity o ( n ) and message complexity o ( nδ ) , where n is the number of hosts and δ is the maximum degree . to our knowledge , this is the best ( time and message efficient ) distributed algorithm known so far . we first find a maximal independent set . then we use a steiner tree to connect all vertices in the set . the performance of our algorithm is confirmed by both simulation results and theoretical analysis . story_separator_special_tag energy efficient communication devices are essential to minimize the operational cost of future networks and to reduce the negative effects of global warming . in this paper we propose a novel energy reduction approach at the network level that takes load-dependent energy consumption information of communication equipment into account . case study calculations show that energy savings of more than 35 % , and corresponding operational cost savings , can be achieved by applying energy profile aware routing . story_separator_special_tag services are specified here by describing the service primitives and parameters that characterize each service . this definition is independent of any particular implementation . in particular , the phy-sap operations are defined and described as instantaneous ; however , this may be difficult to achieve in an implementation . while the plcp sublayer and the pmd sublayer are described separately , the separation and distinction between these sublayers is artificial , and is not meant to imply that the implementation must separate these functions . this distinction is made primarily to provide a point of reference from which to describe certain functional components and aspects of the pmd . the functions of the plcp can be subsumed by a pmd sublayer ; in this case , the pmd will incorporate the phy-sap as its interface , and will not offer a pmd_sap .
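the two-phase construction in the virtual-backbone abstract above ( a maximal independent set of dominators , then connector nodes that tie them together ) can be sketched centrally as follows ; the distributed leader election and message exchanges of the actual algorithm are abstracted away , and all names are assumptions .

from collections import deque

def connected_dominating_set(adj):
    # adj: node -> set of neighbours of a connected graph .
    # phase 1: grow a maximal independent set outward from a root so
    # that every new dominator lies two hops from an earlier one ;
    # interleaved phase 2: the one-hop node on that two-hop path
    # becomes a connector , keeping the dominating set connected
    root = next(iter(adj))
    mis = {root}
    covered = {root} | adj[root]
    connectors = set()
    fringe = deque(adj[root])
    while fringe:
        w = fringe.popleft()
        for x in adj[w]:
            if x not in covered:       # x is independent of all dominators
                mis.add(x)
                connectors.add(w)      # w bridges x to an earlier dominator
                covered.add(x)
                covered |= adj[x]
                fringe.extend(adj[x])
    return mis | connectors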
story_separator_special_tag this paper proposes s-mac , a medium access control ( mac ) protocol designed for wireless sensor networks . wireless sensor networks use battery-operated computing and sensing devices . a network of these devices will collaborate for a common application such as environmental monitoring . we expect sensor networks to be deployed in an ad hoc fashion , with nodes remaining largely inactive for a long time , but becoming suddenly active when something is detected . these characteristics of sensor networks and applications motivate a mac that is different from traditional wireless macs such as ieee 802.11 in several ways : energy conservation and self-configuration are primary goals , while per-node fairness and latency are less important . s-mac uses a few novel techniques to reduce energy consumption and support self-configuration . it enables low-duty-cycle operation in a multihop network . nodes form virtual clusters based on common sleep schedules to reduce control overhead and enable traffic-adaptive wake-up . s-mac uses in-channel signaling to avoid overhearing unnecessary traffic . finally , s-mac applies message passing to reduce contention latency for applications that require in-network data processing . the paper presents measurement results of s-mac performance on a sample sensor node story_separator_special_tag in this paper we describe t-mac , a contention-based medium access control protocol for wireless sensor networks . applications for these networks have some characteristics ( low message rate , insensitivity to latency ) that can be exploited to reduce energy consumption by introducing an active-sleep duty cycle . to handle load variations in time and location t-mac introduces an adaptive duty cycle in a novel way : by dynamically ending the active part of it . this reduces the amount of energy wasted on idle listening , in which nodes wait for potentially incoming messages , while still maintaining a reasonable throughput . we discuss the design of t-mac , and provide a head-to-head comparison with classic csma ( no duty cycle ) and s-mac ( fixed duty cycle ) through extensive simulations . under homogeneous load , t-mac and s-mac achieve similar reductions in energy consumption ( up to 98 % ) compared to csma . in a sample scenario with variable load , however , t-mac outperforms s-mac by a factor of 5 . preliminary energy-consumption measurements provide insight into the internal workings of the t-mac protocol . story_separator_special_tag this paper presents a tdma based energy efficient cognitive radio multichannel medium access control ( mac ) protocol called ecr-mac for wireless ad hoc networks . ecr-mac requires only a single half-duplex radio transceiver on each node that integrates the spectrum sensing at physical ( phy ) layer and the packet scheduling at mac layer . in addition to explicit frequency negotiation which is adopted by conventional multichannel mac protocols , ecr-mac introduces lightweight explicit time negotiation . this two-dimensional negotiation enables ecr-mac to exploit the advantage of both multiple channels and tdma , and achieve aggressive power savings by allowing nodes that are not involved in communication to go into doze mode .
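the listen/sleep schedules in the s-mac and t-mac abstracts above trade energy for latency ; a first-order model of the average radio power under a fixed duty cycle is sketched below , with all figures and names being assumptions for illustration .

def duty_cycle_power(p_active_mw, p_sleep_mw, listen_ms, frame_ms):
    # average radio power under a fixed listen/sleep schedule such as
    # s-mac 's ; t-mac shortens the listen window adaptively when the
    # channel stays idle , lowering the effective duty cycle further .
    # traffic-dependent transmit costs are deliberately ignored here
    duty = listen_ms / frame_ms
    return duty * p_active_mw + (1.0 - duty) * p_sleep_mw

# with assumed figures p_active = 60 mw and p_sleep = 0.1 mw , a 10 %
# duty cycle gives about 6.1 mw versus 60 mw for an always-on radio ,
# at the price of added per-hop latency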
the ieee 802.11 standard allows for the use of multiple channels available at the phy layer , but its mac protocol is designed only for a single channel . a single channel mac protocol does not work well in a multichannel environment , because of the multichannel hidden terminal problem . the proposed energy efficient ecr-mac protocol allows sus ( secondary users ) to identify and use the unused frequency spectrum in a way that constrains the level of interference to the primary users ( pus ) . extensive simulation story_separator_special_tag we study energy-efficient topologies and protocols in wireless sensor networks ( wsn ) and personal area networks ( pan ) . in both networks , energy-efficiency is the primary objective for designing communication protocols since the deployed devices are usually battery-powered and resource-constrained . we also identify network performance requirements ( e.g. , end-to-end delay , throughput , etc . ) , and improve energy-efficiency without sacrificing network performance . we identify wsn 's distinct requirements on the medium access control ( mac ) protocol and trade-offs among delay , energy , and throughput . based on these requirements and trade-offs , we propose ecr-mac ( for energy-efficient contention-resilient mac ) which employs a dynamic forwarder selection technique to improve both energy-efficiency and delay without requiring additional synchronization or radio hardware support , and efficiently handles spatially-correlated contention . besides conserving individual nodes ' energy in ecr-mac , we also study the energy hole problem in wsn , and propose a differential duty cycle assigning approach to balance energy consumption across the whole network . results show that ecr-mac provides significant energy-savings , low delay and high network throughput , and our differential duty cycle assigning approach can further improve network story_separator_special_tag based on wireless mesh network ( wmn ) theory , according to the characteristics of wireless sensor network ( wsn ) clustering topology , wsn and wmn are combined to construct wireless mesh sensor network ( wmsn ) topology . in order to improve energy efficiency , throughput and delay in wmsn , this paper designs a multi-hop tdma energy-efficient sleeping mac ( mt-mac ) protocol and evaluates its performance in matlab simulations . in mt-mac , a tdma frame is divided into a number of time slots for sensor nodes in wmsn to send or receive data packets . as the simulation results show , compared with the s-mac protocol , mt-mac not only saves 25 % of the network 's energy consumption , but also reduces the latency of the whole network by 45 % . story_separator_special_tag in wireless sensor networks , energy efficiency is crucial to achieving satisfactory network lifetime . to reduce the energy consumption significantly , a node should turn off its radio most of the time , except when it has to participate in data forwarding . we propose a new technique , called sparse topology and energy management ( stem ) , which efficiently wakes up nodes from a deep sleep state without the need for an ultra low-power radio . the designer can trade the energy efficiency of this sleep state for the latency associated with waking up the node . in addition , we integrate stem with approaches that also leverage excess network density . we show that our hybrid wakeup scheme results in energy savings of over two orders of magnitude compared to sensor networks without topology management .
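the energy-latency trade described in the stem abstract above can be captured with a toy model : a paging radio listens for a short window out of every sleep period , so monitoring power shrinks with longer periods while expected path-setup latency grows with them . the formulas and names below are simplifying assumptions , not the paper 's analysis .

def stem_tradeoff(sleep_period_s, listen_s, p_listen_mw, p_sleep_mw, hops):
    # monitoring power falls with the sleep period (lower duty cycle) ,
    # while waking a multi-hop path costs roughly half a period per hop
    duty = listen_s / sleep_period_s
    power_mw = duty * p_listen_mw + (1.0 - duty) * p_sleep_mw
    setup_latency_s = hops * sleep_period_s / 2.0
    return power_mw, setup_latency_s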
furthermore , the network designer is offered full flexibility in exploiting the energy-latency-density design space by selecting the appropriate parameter settings of our protocol . story_separator_special_tag in wireless sensor networks , efficient usage of energy helps in improving the network lifetime . as the battery of a sensor node , in most cases , can not be recharged or replaced after the deployment of the sensors , energy management becomes a critical issue in such networks . in order to detect an event , a sensor network spends the majority of its time monitoring its environment , during which a significant amount of energy can be saved by placing the radio in the low-power sleep mode . this can be achieved by using a dual frequency radio setup . however , such energy saving protocols increase the latency encountered in setting up a multihop path . we , in this paper , propose a reservation scheme , latency minimized energy efficient mac protocol ( leem ) , which is a novel hop-ahead reservation scheme in a dual frequency radio to minimize the latency in multihop data transmission by reserving the next hop 's channel a priori . thus , in a multihop sensor network , a packet can be forwarded to the next hop , as soon as it is received by a sensor story_separator_special_tag our main contribution is to propose a medium access protocol for mobile ad-hoc networks ( manet ) , specifically designed to maximize data throughput and to minimize power consumption in the face of mobility . our protocol , which we call power and mobility-aware wireless protocol ( pmaw ) , is a substantial extension of power aware multi-access protocol with signaling for ad hoc networks ( pamas ) . pamas itself is a modification of the well-known maca protocol with an additional focus on power savings . we expose conditions under which pamas fails to properly detect and avoid data collisions , many of which are caused by the introduction of true mobility into the system . pmaw has the following desirable characteristics : ( 1 ) it is power-aware : the end users will power themselves off to the largest extent possible in order to conserve energy and to avoid detection in hostile environments ; ( 2 ) it is mobility-aware : the power level in end-to-end connections is dynamically set to the minimum level that supports the desired signal-to-noise ratio ( snr ) . moreover , changes in the snr allow nodes to detect approaching nodes and to take preventive story_separator_special_tag the xtc ad-hoc network topology control algorithm introduced here shows three main advantages over previously proposed algorithms . first , it is extremely simple and strictly local . second , it does not assume the network graph to be a unit disk graph ; xtc proves correct also on general weighted network graphs . third , the algorithm does not require availability of node position information . instead , xtc operates with a general notion of order over the neighbors ' link qualities . in the special case of the network graph being a unit disk graph , the resulting topology proves to have bounded degree , to be a planar graph , and - on average-case graphs - to be a good spanner . story_separator_special_tag topology control not only achieves the objective of power saving but also increases the system throughput by increasing the spatial reuse of communication channels . however , there exists a hidden terminal problem due to asymmetric transmission radii among nodes after topology control .
in this paper , we propose a distributed protocol that deals with topology control at the network layer and the hidden terminal problem at the mac layer . each node in the network determines its power for data transmission and control packet transmission according to the received beacon messages from its neighbors . the proposed protocol works without location information and uses little control packet overhead to prevent the potential collisions due to the hidden terminals . simulations show that our protocol significantly decreases total power consumption in the network and has a better network throughput compared to previous work . story_separator_special_tag scarce resources of the wireless medium ( e.g. , bandwidth , battery power , and so on ) significantly restrict the progress of wireless local area networks ( wlans ) . heavy traffic load and high station density are most likely to incur collisions , and further consume bandwidth and energy . in this paper , a distributed power-saving protocol , power-efficient mac protocol ( pem ) , to avoid collisions and to save energy is proposed . pem takes advantage of power control techniques to reduce the interferences among transmission pairs and increase the spatial reuse of wlans . based on the concept of maximum independent set ( mis ) , a novel heuristic scheme with the aid of interference relationships is proposed to provide as many simultaneous transmission pairs as possible . in pem , all stations know when to wake up and when they can enter the doze state . thus , stations need not waste power on idle listening and can save much power . the network bandwidth can be efficiently utilized as well . to verify the performance of pem , numerous simulations are performed . the experimental results show that with the property of story_separator_special_tag wireless distributed microsensor systems will enable the reliable monitoring of a variety of environments for both civil and military applications . in this paper , we look at communication protocols , which can have significant impact on the overall energy dissipation of these networks .
based on our findings that the conventional protocols of direct transmission , minimum-transmission-energy , multi-hop routing , and static clustering may not be optimal for sensor networks , we propose leach ( low-energy adaptive clustering hierarchy ) , a clustering-based protocol that utilizes randomized rotation of local cluster base stations ( cluster-heads ) to evenly distribute the energy load among the sensors in the network . leach uses localized coordination to enable scalability and robustness for dynamic networks , and incorporates data fusion into the routing protocol to reduce the amount of information that must be transmitted to the base station . simulations show that leach can achieve as much as a factor of 8 reduction in energy dissipation compared with conventional routing protocols . in addition , leach is able to distribute energy dissipation evenly throughout the sensors , doubling the useful system lifetime for the networks we simulated . story_separator_special_tag recent developments in processor , memory and radio technology have enabled wireless sensor networks which are deployed to collect useful information from an area of interest . the sensed data must be gathered and transmitted to a base station where it is further processed for end-user queries . since the network consists of low-cost nodes with limited battery power , power efficient methods must be employed for data gathering and aggregation in order to achieve long network lifetimes . in an environment where in a round of communication each of the sensor nodes has data to send to a base station , it is important to minimize the total energy consumed by the system in a round so that the system lifetime is maximized . with the use of data fusion and aggregation techniques , while minimizing the total energy per round , if power consumption per node can be balanced as well , a near optimal data gathering and routing scheme can be achieved in terms of network lifetime . so far , besides the conventional protocol of direct transmission , two elegant protocols called leach and pegasis have been proposed to maximize the lifetime of a sensor network . in this paper story_separator_special_tag a novel energy reduction strategy to maximally exploit the dynamic workload variation is proposed for the offline voltage scheduling of preemptive systems . the idea is to construct a fully-preemptive schedule that leads to minimum energy consumption when the tasks take on approximately the average execution cycles yet still guarantees no deadline violation during the worst-case scenario . the end-time for each sub-instance of the tasks obtained from the schedule is used for the on-line dynamic voltage scaling ( dvs ) of the tasks . for the tasks that normally require a small number of cycles but occasionally a large number of cycles to complete , such a schedule provides more opportunities for slack utilization and hence results in larger energy saving . the concept is realized by formulating the problem as a non-linear programming ( nlp ) optimization problem . experimental results show that , by using the proposed scheme , the total energy consumption at runtime is reduced by as much as 60 % for randomly generated task sets when compared with the static scheduling approach using only the worst-case workload . story_separator_special_tag wireless mesh networks ( wmns ) have become a better alternative for extending wireless local area networks ( wlans ) to provide network coverage even to the most far-flung rural areas .
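the randomized cluster-head rotation in the leach abstract above is commonly implemented with a per-round threshold test ; the sketch below assumes the standard formulation , with p the desired cluster-head fraction and r the round number , and all identifiers are hypothetical .

import random

def is_cluster_head(p, r, served_recently):
    # leach-style election : a node that served as cluster head within
    # the last 1/p rounds sits out ; otherwise it elects itself with a
    # threshold that rises as the rotation cycle progresses , so every
    # node serves about once per cycle and the energy load evens out
    if served_recently:
        return False
    threshold = p / (1.0 - p * (r % int(1.0 / p)))
    return random.random() < threshold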
this has been implemented by using a meshed backbone network interconnecting the mesh access points ( maps ) that manage each of the wlans . the routing algorithms use the mesh backbone to establish inter-wlan routes which are subsequently used by the distantly distributed mesh clients to communicate in a multihop fashion . in this work , a fully scalable power-efficient localized distributed topology control algorithm is proposed to effectively construct such a meshed backbone network of access points . the performance of the algorithm is evaluated via several simulations carried out in the ns-2 simulation environment . the resultant network topology is shown to have ( 1 ) reduced average node degree which leads to reduced traffic interference , ( 2 ) increased throughput and ( 3 ) increased network lifetime . story_separator_special_tag a considerable amount of energy is consumed during transmission and reception of messages in a wireless mesh network ( wmn ) . reducing per-node transmission power would greatly increase the network lifetime via power conservation in addition to increasing the network capacity via better spatial bandwidth reuse . in this work , the problem of topology control in a hybrid wmn of heterogeneous wireless devices with varying maximum transmission ranges is considered . a localized distributed topology control algorithm is presented which calculates the optimal transmission power so that ( 1 ) network connectivity is maintained , ( 2 ) node transmission power is reduced to cover only the nearest neighbours , and ( 3 ) network lifetime is extended . simulations and analysis of results are carried out in the ns-2 environment to demonstrate the correctness and effectiveness of the proposed algorithm . story_separator_special_tag topology control ( tc ) is one of the most important techniques used in wireless ad hoc and sensor networks to reduce energy consumption ( which is essential to extend the network operational time ) and radio interference ( with a positive effect on the network traffic carrying capacity ) . the goal of this technique is to control the topology of the graph representing the communication links between network nodes with the purpose of maintaining some global graph property ( e.g. , connectivity ) , while reducing energy consumption and/or interference that are strictly related to the nodes ' transmitting range . in this article , we state several problems related to topology control in wireless ad hoc and sensor networks , and we survey state-of-the-art solutions which have been proposed to tackle them . we also outline several directions for further research which we hope will motivate researchers to undertake additional studies in this field . story_separator_special_tag wireless nodes equipped with multiple radio interfaces open up new fields of application , ranging from multi-channel usage in a cell to increase bandwidth to the creation of meshed multi-hop topologies . using multiple wireless cards demands more physical space and energy , and consequently decreases battery lifetime . virtualization of the wireless network interface , which means using a single wireless network interface to connect to more than one network simultaneously , seems to be a promising approach , since it allows us to realize the mentioned scenarios with only one radio interface . in this paper , we shed light on the state of the art and introduce new approaches to push this field beyond its current status .
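the power assignment objective in the topology-control abstracts above ( transmit just far enough to cover the nearest neighbours ) can be sketched as follows ; the k-nearest-neighbour rule and the path-loss exponent alpha are assumptions standing in for the papers ' actual criteria , and connectivity maintenance is not shown .

import math

def nearest_neighbour_power(positions, node, k=3, alpha=2.0):
    # pick a transmission range that just covers the k nearest
    # neighbours ; radiated energy is modeled as range ** alpha .
    # assumes at least one other node exists in `positions`
    x, y = positions[node]
    dists = sorted(math.hypot(px - x, py - y)
                   for other, (px, py) in positions.items() if other != node)
    rng = dists[min(k, len(dists)) - 1]
    return rng ** alpha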
story_separator_special_tag energy consumption of communication systems is becoming a fundamental issue and , among all the sectors , wireless access networks are largely responsible for the increase in consumption . in addition to the access segment , wireless technologies are also gaining popularity for the backhaul infrastructure of cellular systems , mainly due to their low cost and easy deployment . in this context , wireless mesh networks ( wmn ) are commonly considered the most suitable architecture because of their versatility , which allows flexible configurations . in this paper we combine the flexibility of wmn with the need for energy consumption reduction by presenting an optimization framework for network management that takes into account the trade-off between the network energy needs and the daily variations of the demand . a resolution approach and a thorough discussion on the details related to wmn energy management are also presented . story_separator_special_tag we present the design and evaluation of two forms of power management schemes that reduce the energy consumption of networks . the first is based on putting network components to sleep during idle times , reducing energy consumed in the absence of packets . the second is based on adapting the rate of network operation to the offered workload , reducing the energy consumed when actively processing packets . for real-world traffic workloads and topologies and using power constants drawn from existing network equipment , we show that even simple schemes for sleeping or rate-adaptation can offer substantial savings . for instance , our practical algorithms stand to halve energy consumption for lightly utilized networks ( 10-20 % ) . we show that these savings approach the maximum achievable by any algorithms using the same power management primitives . moreover , this energy can be saved without noticeably increasing loss and with a small and controlled increase in latency ( < 10ms ) . finally , we show that both sleeping and rate adaptation are valuable depending ( primarily ) on the power profile of network equipment and the utilization of the network itself . story_separator_special_tag ieee 802.11s is the standard for wlan ( wireless lan ) mesh networking . wireless mesh networks ( wmns ) have evolved as the key technology for next generation wireless networking . energy consumption is increasing at an exponential rate with heavy growth in the number of smart devices . with this ever-increasing demand for energy , coupled with the carbon dioxide produced by wireless devices in idle mode , it is essential to develop an environment-friendly , energy-efficient technology fit to be implemented in wireless mesh networks . this paper mainly focuses on the development of an energy efficient technology using modern electronic devices . story_separator_special_tag reducing energy consumption for data transmissions and prolonging network lifetime are crucial in the design of energy-efficient routing protocols . the proportion of successful data transmissions is significant for the reduction of data transmission and traffic load energy consumption , while the energy remaining in a node is important for prolonging network lifetime .
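a first-order model of the sleeping scheme in the power management abstract above is sketched below ; the parameters and the wake-overhead term are assumptions , and rate adaptation would instead scale p_active with the operating rate .

def power_with_and_without_sleep(util, p_active, p_idle, p_sleep, overhead=0.0):
    # a link is busy a `util` fraction of the time ; without sleeping
    # it idles the rest , with sleeping it sleeps the rest minus a
    # wake/sleep transition overhead expressed as a time fraction
    no_sleep = util * p_active + (1.0 - util) * p_idle
    awake = min(1.0, util + overhead)
    with_sleep = awake * p_active + (1.0 - awake) * p_sleep
    return no_sleep, with_sleep

# when p_sleep is far below p_idle and util is 10-20 % , with_sleep
# comes out at roughly half of no_sleep , in line with the savings
# reported for lightly utilized networks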
in this study , the authors propose an energy-efficient cross-layer design for the network layer and medium access control ( mac ) layer that reduces energy consumption and prolongs network lifetime . in the network layer , a minimum transmission energy consumption ( mtec ) routing protocol is proposed for selecting the mtec path for data transmission , based on the proportion of successful data transmissions , the number of channel events , the remaining energy of nodes and the traffic load of nodes . the authors design an adaptive contention window ( acw ) for the mac layer that gives nodes with high successful transmission rates a greater opportunity to contend for a channel , saving energy . they used simulations to compare the proposed cross-layer design ( mtec with acw ) with related protocols , including dynamic source routing , traffic-size aware and the story_separator_special_tag in wireless sensor networks ( wsns ) , severe energy constraints necessitate energy-efficient protocols to fulfill application objectives . in this paper , we propose a novel cross-layer energy-efficient protocol , cleep , which adopts a cross-layer strategy that considers the physical , mac , and network layers jointly . in the physical layer , we first coordinate the transmission power between two nodes and maintain the nodes ' neighbor tables periodically to save the transmission energy . then we construct the optimal routing path by exploiting the transmission power and neighbor tables of the physical layer , which minimizes the total energy consumption . finally , the mac layer makes use of the routing information to determine the node 's duty-cycle , in order to prolong the node 's sleep time . simulation reveals that cleep is energy-efficient and able to achieve significant performance improvement as well . story_separator_special_tag the mobile ad-hoc network is a growing type of wireless network characterized by decentralized and dynamic topology . one of the main challenges in mobile ad-hoc networks ( manets ) is their very limited power supply . to overcome this challenge , several power-aware routing protocols have been developed in recent years . this paper describes a survey on some of those energy aware routing protocols for mobile ad-hoc networks . the first category of power aware protocol schemes minimizes the total transmission power and the second category of schemes tries to increase the remaining battery level of every individual node to increase the lifetime of the entire network . balancing these two objectives is an important issue in power aware routing . this discussion focuses on different power saving algorithms and their development and modifications . after analyzing the existing works , it can be seen that there are still several directions ( using a dual threshold , passive energy saving , etc . ) that deserve more focus in the future . story_separator_special_tag the paper presents a cross-layer energy-efficient and reliable routing ( named cle2ar2 ) protocol to construct an energy efficient and reliable route and resist the variation of wireless channels for wireless ad hoc networks . cle2ar2 not only considers how to construct a route from the source to the destination , but also takes some important lower-layer factors , such as power strength , data transfer rate , and interference , into account to reflect the real situation of a wireless channel dynamically and instantly .
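a route metric in the spirit of the mtec abstract above can be sketched as a per-link cost that a shortest-path search then minimizes ; the weighting below is an illustrative assumption , not the paper 's exact formula .

def link_cost(tx_energy, success_rate, residual_energy, traffic_load,
              w_battery=1.0, w_load=1.0):
    # expected transmission energy grows as 1 / success_rate to account
    # for retransmissions ; links through depleted or congested nodes
    # are penalized so traffic steers around them
    expected_energy = tx_energy / max(success_rate, 1e-6)
    return (expected_energy
            + w_battery / max(residual_energy, 1e-6)
            + w_load * traffic_load)

# summing link_cost over a path and minimizing with dijkstra 's
# algorithm yields an energy-aware route selection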
based on these factors , a reliable route can be constructed such that the retransmission cost can be reduced and the energy consumption can be saved . simulation results also show that cle2ar2 can indeed construct an energy efficient and reliable route in comparison with the related work . story_separator_special_tag in parallel with steady research and development in ad hoc and wireless sensor networks , many testbeds have been implemented and deployed in the real world . furthermore , some research works have addressed design issues for deployment in three-dimensional space such as the sky or the ocean . since research challenges in three-dimensional space have not yet been explored as much as those in two-dimensional space , it is necessary to define the challenging tasks for providing reliable communication in three-dimensional space . in this survey , we aim to identify the unique properties of communication environments in three-dimensional space and give an overview of the state of the art in this research area . to achieve this , the survey is organized according to two representative example networks , airborne ad hoc networks ( aanets ) and underwater wireless sensor networks ( uwsns ) . for each network , we introduce and review the related research works , focusing on infrastructure , localization , topology design , and position-based routing . finally , open research issues are also discussed and presented . story_separator_special_tag in recent years , unmanned aerial vehicles ( uavs ) have been widely adopted in military and civilian applications . for small uavs , cooperation based on communication networks can effectively expand their working area . although uav networks are quite similar to traditional mobile ad hoc networks , the special characteristics of the uav application scenario have not been considered in the literature . in this paper , we propose a distributed gateway selection algorithm with dynamic network partition by taking into account the application characteristics of uav networks . in the proposed algorithm , the influence of the information asymmetry phenomenon on uav topology control is weakened by dividing the network into several subareas . during the operation of the network , the partition of the network can be adaptively adjusted to keep the whole network topology stable even though uavs are moving rapidly . meanwhile , the number of gateways can be completely controlled according to the system requirements . in particular , we define the stability of uav networks , build a network partition model , and design a distributed gateway selection algorithm . simulation results using our proposed scheme show that the faster story_separator_special_tag the design of a routing protocol for unmanned aeronautical ad-hoc networks ( uaanets ) is quite challenging , especially due to the high mobility of unmanned aerial vehicles ( uavs ) and the low network density . among all routing protocols that have been designed for uaanets , the reactive-greedy-reactive ( rgr ) protocol has been proposed as a promising routing protocol in high mobility and density-variable scenarios . while prior results have shown that rgr is competitive with aodv and other purely reactive routing protocols , it does not fully exploit all the unique features of uaanets or the knowledge maintained by intermediate nodes .
this report presents a number of different enhancements , namely scoped flooding , delayed route request , and mobility prediction , in order to improve the performance of rgr in terms of overhead , packet delivery ratio , and delay , respectively . these modifications were implemented in opnet , and simulation results show that we can reduce the protocol overhead by about 30 % , while at the same time increasing pdr by about 3-5 % and reducing packet latency . the results further show that there is still significant scope for further performance improvements story_separator_special_tag emerging networked systems require domain-specific routing protocols to cope with the challenges faced by the aeronautical environment . we present a geographic routing protocol aerorp for multihop routing in highly dynamic manets . the aerorp algorithm uses velocity-based heuristics to deliver the packets to destinations in a multi-mach speed environment . furthermore , we present the decision metrics used by the various aerorp operational modes to forward packets . the analysis of the ns-3 simulations shows aerorp has several advantages over other manet routing protocols in terms of pdr , accuracy , delay , and overhead . moreover , aerorp offers performance tradeoffs in the form of different aerorp modes . story_separator_special_tag many location based routing protocols have been developed for ad hoc networks . this paper presents the results of a detailed performance evaluation on two of these protocols : location-aided routing ( lar ) and distance routing effect algorithm for mobility ( dream ) . we compare the performance of these two protocols with the dynamic source routing ( dsr ) protocol and a minimum standard ( i.e. , a protocol that floods all data packets ) . we used ns-2 to simulate 50 nodes moving according to the random waypoint model . our main goal for the performance investigation was to stress the evaluated protocols with high data loads during both low and high speeds . our performance investigation produced the following conclusions . first , the added protocol complexity of dream does not appear to provide benefits over a flooding protocol . second , promiscuous mode operation improves the performance of dsr significantly . third , adding location information to dsr ( i.e. , similar to lar ) increases both the network load and the data packet delivery ratio ; our results indicate that the increase in performance is worth the increase in cost . lastly , our story_separator_special_tag an ad-hoc network is a collection of wireless nodes forming a temporary network without any established infrastructure . today , there exist various routing protocols for this environment . this paper compares the performance of some of them . furthermore , the idea of extending a cellular network with ad-hoc routing facilities is explored and the performance of some existing ad-hoc routing protocols is tested . the performance tests are done by using network simulator 2 ( ns2 ) . by doing the simulations , the need for a routing protocol adapted to the new situation is shown . story_separator_special_tag unmanned aerial vehicles ( uavs ) are an emerging technology offering new opportunities for innovative applications and efficient overall process management in the areas of public security , cellular networks and surveying . a key factor for the optimizations yielded by this technology is an advanced mesh network design for fast and reliable information sharing between uavs .
in this paper , we analyze the performance of four available mesh routing protocol implementations ( open80211s , batman , batman advanced and olsr ) in the context of swarming applications for uavs . the protocols are analyzed by means of goodput in one static and one mobile scenario using the same embedded hardware platform installed on uavs in current research projects . our results show that layer-2 protocols are better suited for mobile applications than layer-3 protocols . on the other hand , they often cause routing flippings , which are unwanted route changes , in static scenarios , imposing a small performance decrease . hence , given the aforementioned routing protocols , we currently recommend using open80211s or batman advanced to establish a reliable multi-hop mesh network for swarming applications . story_separator_special_tag almost all geographic routing protocols have been designed for 2-d. we present a novel geographic routing protocol , named multihop delaunay triangulation ( mdt ) , for 2-d , 3-d , and higher dimensions with these properties : 1 ) guaranteed delivery for any connected graph of nodes and physical links , and 2 ) low routing stretch from efficient forwarding of packets out of local minima . the guaranteed delivery property holds for node locations specified by accurate , inaccurate , or arbitrary coordinates . the mdt protocol suite includes a packet forwarding protocol together with protocols for nodes to construct and maintain a distributed mdt for routing . we present the performance of mdt protocols in 3-d and 4-d as well as performance comparisons of mdt routing versus representative geographic routing protocols for nodes in 2-d and 3-d. experimental results show that mdt provides the lowest routing stretch in the comparisons . furthermore , mdt protocols are specially designed to handle churn , i.e. , dynamic topology changes due to addition and deletion of nodes and links . experimental results show that mdt 's routing success rate is close to 100 % during churn , and node states story_separator_special_tag efficient geometric routing algorithms have been studied extensively in two-dimensional ad hoc networks , or simply 2d networks . these algorithms are efficient and have been proven to be worst-case optimal localized routing algorithms . however , few prior works have focused on efficient geometric routing in 3d networks due to the lack of an efficient method to limit the search once the greedy routing algorithm encounters a local minimum , like face routing does in 2d networks . in this paper , we tackle the problem of efficient geometric routing in 3d networks . we propose routing on hulls , a 3d analogue to face routing , and present the first 3d partial unit delaunay triangulation ( pudt ) algorithm to divide the entire network space into a number of closed subspaces . the proposed greedy-hull-greedy ( ghg ) routing is efficient because it bounds the local-minimum recovery process from the whole network to the surface structure ( hull ) of only one of the subspaces . story_separator_special_tag we reconsider the problem of geographic routing in wireless ad hoc networks . we are interested in local , memoryless routing algorithms , i.e . each network node bases its routing decision solely on its local view of the network , nodes do not store any message state , and the message itself can only carry information about o ( 1 ) nodes .
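the greedy step that the geographic routing abstracts above share , and the local minimum that forces a recovery mode , can be sketched as follows ( shown in 2-d for brevity ) ; the data layout and function name are assumptions .

import math

def greedy_next_hop(positions, neighbors, node, dest_xy):
    # forward to the neighbour closest to the destination coordinates ,
    # but only if it is strictly closer than the current node ;
    # returning none signals a local minimum , where protocols such as
    # mdt or ghg switch to their recovery procedure
    def dist_to_dest(n):
        x, y = positions[n]
        return math.hypot(x - dest_xy[0], y - dest_xy[1])
    best = min(neighbors[node], key=dist_to_dest, default=None)
    if best is None or dist_to_dest(best) >= dist_to_dest(node):
        return None
    return best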
in geographic routing schemes , each network node is assumed to know the coordinates of itself and all adjacent nodes , and each message carries the coordinates of its target . whereas many of the aspects of geographic routing have already been solved for 2d networks , little is known about higher-dimensional networks . it has been shown only recently that there is in fact no local memoryless routing algorithm for 3d networks that delivers messages deterministically . in this paper , we show that a cubic routing stretch constitutes a lower bound for any local memoryless routing algorithm , and propose and analyze several randomized geographic routing algorithms which work well for 3d network topologies . for unit ball graphs , we present a technique to locally capture the surface of holes in the network story_separator_special_tag geographic routing is of interest for sensor networks because a point-to-point primitive is an important building block for data-centric applications . while there is a significant body of work on geographic routing algorithms for two-dimensional ( 2d ) networks , geographic routing for practical three-dimensional ( 3d ) sensor networks is relatively unexplored . we show that existing 2d geographic routing algorithms like cldp/gpsr and gdstr perform poorly in practical 3d sensor network deployments and describe gdstr-3d , a new 3d geographic routing algorithm that uses 2-hop neighbor information in greedy forwarding and 2d convex hulls to aggregate node location information . we compare gdstr-3d to existing algorithms , including cldp/gpsr , gdstr , aodv , vrr and s4 , both in a real wireless sensor testbed and with tossim simulations to show that gdstr-3d is highly scalable , requires only a modest amount of storage and achieves routing stretch close to 1 . story_separator_special_tag mobile ad hoc routing protocols allow nodes with wireless adaptors to communicate with one another without any pre-existing network infrastructure . existing ad hoc routing protocols , while robust to rapidly changing network topology , assume the presence of a connected path from source to destination . given power limitations , the advent of short-range wireless networks , and the wide physical conditions over which ad hoc networks must be deployed , in some scenarios it is likely that this assumption is invalid . in this work , we develop techniques to deliver messages in the case where there is never a connected path from source to destination or when a network partition exists at the time a message is originated . to this end , we introduce epidemic routing , where random pair-wise exchanges of messages among mobile hosts ensure eventual message delivery . the goals of epidemic routing are to : i ) maximize message delivery rate , ii ) minimize message latency , and iii ) minimize the total resources consumed in message delivery . through an implementation in the monarch simulator , we show that epidemic routing achieves eventual delivery of 100 % of messages with story_separator_special_tag intermittently connected mobile networks are sparse wireless networks where most of the time there does not exist a complete path from the source to the destination . these networks fall into the general category of delay tolerant networks . there are many real networks that follow this paradigm , for example , wildlife tracking sensor networks , military networks , inter-planetary networks , etc . 
in this context , conventional routing schemes would fail . to deal with such networks , researchers have suggested the use of flooding-based routing schemes . while flooding-based schemes have a high probability of delivery , they waste a lot of energy and suffer from severe contention , which can significantly degrade their performance . furthermore , proposed efforts to significantly reduce the overhead of flooding-based schemes have often been plagued by large delays . with this in mind , we introduce a new routing scheme , called spray and wait , that `` sprays '' a number of copies into the network , and then `` waits '' till one of these nodes meets the destination . using theory and simulations we show that spray and wait outperforms all existing schemes with respect to both average message delivery delay and story_separator_special_tag definition : wireless networks with intermittent connectivity ( also called delay or disruption tolerant networks ) are characterized by sporadic availability of end-to-end paths between end hosts . existing tcp/ip packet routing protocols can not cope with the lack of reliable end-to-end connectivity . new routing mechanisms are necessary . the internet has been exceedingly successful in establishing a global communication network built on the concept of a common set of tcp/ip protocols . within the last ten years there have been tremendous research efforts spent adapting the tcp/ip protocol stack to various types of wireless and mobile networks . routing has been recognized as the most challenging problem in networks with a dynamic topology . protocols , such as aodv [ 15 ] , dsr [ 10 ] , olsr [ 4 ] and many others have been thoroughly analyzed in multiple scenarios . their main limitation comes from the fact that , by design , they work only if there is a contemporaneous end-to-end path between endpoints . these protocols are able to find a route only if the destination router can complete the route discovery protocol ( for on-demand routing protocols ) or successfully disseminate link story_separator_special_tag as a result of the high mobility of unmanned aerial vehicles ( uavs ) , designing a good routing protocol is challenging for unmanned aeronautical ad-hoc networks ( uaanets ) . geographic-based routing mechanisms are seen to be an interesting option for routing in uaanets due to the fact that location information of uavs is readily available . in this paper , a combined routing protocol , called the reactive-greedy-reactive ( rgr ) , is presented for uaanet applications , which combines the mechanisms of greedy geographic forwarding ( ggf ) and reactive routing . the proposed rgr employs location information of uavs as well as reactive end-to-end paths in the routing process . simulation results show that rgr outperforms existing protocols such as ad-hoc on-demand distance vector ( aodv ) in uaanet search missions in terms of delay and packet delivery ratio , yet its overhead is similar to traditional mechanisms . story_separator_special_tag in this paper , we address the problem of routing in intermittently connected networks . in such networks there is no guarantee that a fully connected path between source and destination exists at any time , rendering traditional routing protocols unable to deliver messages between hosts . there does , however , exist a number of scenarios where connectivity is intermittent , but where the possibility of communication is still desirable .
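the copy-limiting logic behind spray and wait , described above , fits in a few lines ; the sketch below shows the binary variant , in which a relay hands over half of its remaining copies , and the identifiers are hypothetical .

def on_encounter(my_copies, peer_is_destination):
    # binary spray and wait : with more than one copy , give half to
    # the encountered relay ( spray phase ) ; with a single copy left ,
    # hold it and deliver only on meeting the destination ( wait phase )
    if peer_is_destination:
        return my_copies, 'deliver'
    if my_copies > 1:
        handed_over = my_copies // 2
        return my_copies - handed_over, ('spray', handed_over)
    return my_copies, 'wait'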
thus , there is a need for a way to route through networks with these properties . we propose prophet , a probabilistic routing protocol for intermittently connected networks , and compare it through simulations to the previously presented epidemic routing protocol . we show that prophet is able to deliver more messages than epidemic routing with a lower communication overhead . story_separator_special_tag some forms of ad-hoc networks need to operate in extremely performance-challenged environments where end-to-end connectivity is rare . such environments can be found for example in very sparse mobile networks where nodes `` meet '' only occasionally and are able to exchange information , or in wireless sensor networks where nodes sleep most of the time to conserve energy . forwarding mechanisms in such networks usually resort to some form of intelligent flooding , as for example in probabilistic routing . we propose a communication algorithm that significantly reduces the overhead of probabilistic routing algorithms , making it a suitable building block for a delay-tolerant network architecture . our forwarding scheme is based on network coding . nodes do not simply forward packets they overhear but may send out information that is coded over the contents of several packets they received . we show by simulation that this algorithm achieves the reliability and robustness of flooding at a small fraction of the overhead . story_separator_special_tag natural disasters are an unexpected fact of life that may occur at unpredictable times and in unpredictable ways . the ability to mitigate and adapt to natural disasters after many devastating events is becoming a greater challenge for emergency response operators . inefficiencies in the technology used during rescue operations make communication between the rescuers problematic . the emerging role of gpr and gsm remote cameras makes it possible to capture and process the mission-critical data for the use of first responders . due to their portability and affordable cost , it is feasible to integrate them into environment monitoring tasks in critical care regions . a problem is that there is a need to switch between different access networks for providing effective mission critical communication . the ieee 802.21 standard provides a media independent framework that enables seamless handover between heterogeneous access technologies . we propose a life detection framework that assists rescue operators in detecting living humans and thereby provides smooth communication between them . this framework , together with a media independent handover scheme and a real-time data distribution service , operates in a reliable and timely manner in unpredictable environments . story_separator_special_tag unmanned aerial vehicles ( uavs ) are of increasing interest to researchers because of their diverse applications , such as military operations and search and rescue . the problem we have chosen to focus on is using a swarm of small , inexpensive uavs to discover static targets in a search space . though many different swarm models have been used for similar problems , our proposed model , the icosystem swarm game , to our knowledge has not been evaluated for this particular problem of target search . further , we propose to simulate the performance of this model in a semi-realistic communications environment .
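the delivery predictabilities that prophet , introduced above , maintains are updated with three well-known rules ( direct encounter , aging , and transitivity ) ; the sketch below uses the constants suggested in the original prophet proposal ( p_init = 0.75 , gamma = 0.98 , beta = 0.25 ) , and the dictionary-based encoding is an assumption .

P_INIT, GAMMA, BETA = 0.75, 0.98, 0.25

def on_meeting(p, a, b):
    # direct update when node a encounters node b
    old = p.get((a, b), 0.0)
    p[(a, b)] = old + (1.0 - old) * P_INIT

def age(p, pair, elapsed_units):
    # predictabilities decay while two nodes do not meet
    p[pair] = p.get(pair, 0.0) * (GAMMA ** elapsed_units)

def transitivity(p, a, b, c):
    # if a meets b often and b meets c often , a is a useful relay for c
    old = p.get((a, c), 0.0)
    p[(a, c)] = old + (1.0 - old) * p.get((a, b), 0.0) * p.get((b, c), 0.0) * BETA

# a message for destination d is then handed from a to b only when
# p[(b, d)] exceeds p[(a, d)]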
the challenge here is to find the optimal multi-hop configuration for the uavs , so that they can find the most targets , avoid collision with each other as much as possible , and still communicate efficiently . we implement this through a weighted shortest-path problem using dijkstra 's algorithm , with the weights being the transmission cost over distance . testing has shown that our multi-hop communications perform as well as an idealized communications environment in terms of target-finding and collision avoidance with other uavs . story_separator_special_tag with the extensive applications of unmanned aerial vehicles ( uavs ) , there is an urgent need for building uav fleet networks to enhance the overall operational efficiency , in which the architecture of mobile ad hoc network ( manet ) should be adopted . in this paper , we propose a novel routing protocol to address the issues of routing in uav fleet networks , referred to as cluster-based location-aided dynamic source routing ( cbladsr ) . cbladsr first forms a stable cluster architecture for the uav fleet and then performs route discovery and route maintenance using the geographic location of uavs . the clustering process utilizes a node-weight heuristic algorithm to elect cluster heads and form clusters , while the routing process is a combination of intra-cluster routing and inter-cluster routing , which employ short-range transmission and long-range transmission , respectively . simulation results have shown that cbladsr outperforms dsr and grp significantly in successful delivery ratio and average end-to-end delay , as well as in scalability and dynamic performance , which makes it more suitable for uav fleet networks . story_separator_special_tag unmanned aerial vehicles ( uavs ) , and unmanned aerial systems ( uas ) in general , need wireless networks in order to communicate . uas are very flexible and hence allow for a wide range of missions by utilizing different uavs according to the mission requirements . each of these missions also poses special needs and requirements on the communication network . in particular , mission scenarios calling for uav swarms increase the complexity and call for specialized communication solutions . this work focuses on these specialties and needs and describes the selection process , adaptation and implementation of an ad-hoc routing protocol tailored to a uav environment and a correspondingly adapted communication method . story_separator_special_tag vehicular communication networks , such as the 802.11p and wireless access in vehicular environments ( wave ) technologies , are becoming a fundamental platform for providing real-time access to safety and entertainment information . in particular , infotainment applications and , consequently , ip-based communications , are key to leverage market penetration and deployment costs of the 802.11p/wave network . however , the operation and performance of ip in 802.11p/wave are still unclear as the wave standard guidelines for being ip compliant are rather minimal . this paper studies the 802.11p/wave standard and its limitations for the support of infrastructure-based ip applications , and proposes the vehicular ip in wave ( vip-wave ) framework . vip-wave defines the ip configuration for extended and non-extended ip services , and a mobility management scheme supported by proxy mobile ipv6 over wave .
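the weighted shortest-path formulation in the swarm abstract above can be sketched directly ; the d ** alpha transmission-cost model ( path-loss exponent alpha ) and all identifiers are assumptions .

import heapq
import math

def min_cost_path(positions, neighbors, src, dst, alpha=2.0):
    # dijkstra 's algorithm with each hop weighted by a distance-based
    # transmission cost , so short hops are preferred over long ones
    best = {src: 0.0}
    heap = [(0.0, src, [src])]
    while heap:
        cost, node, path = heapq.heappop(heap)
        if node == dst:
            return cost, path
        if cost > best.get(node, float('inf')):
            continue  # stale queue entry
        for nxt in neighbors[node]:
            step = math.dist(positions[node], positions[nxt]) ** alpha
            if cost + step < best.get(nxt, float('inf')):
                best[nxt] = cost + step
                heapq.heappush(heap, (cost + step, nxt, path + [nxt]))
    return math.inf, None  # destination unreachable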
it also exploits multi-hop communications to improve the network performance along roads with different levels of infrastructure presence . furthermore , an analytical model considering mobility , handoff delays , collisions , and channel conditions is developed for evaluating the performance of ip communications in wave . extensive simulations are performed to demonstrate the accuracy of our analytical model and
an unoriented flow in a graph is an assignment of real numbers to the edges , such that the sum of the values of all edges incident with each vertex is zero . this is equivalent to a flow in a bidirected graph all of whose edges are extraverted . a nowhere-zero unoriented k-flow is an unoriented flow with values from the set { ±1 , … , ± ( k-1 ) } . it has been conjectured that if a graph has a nowhere-zero unoriented flow , then it admits a nowhere-zero unoriented 6-flow . we prove that this conjecture is true for hamiltonian graphs , with 6 replaced by 12 . story_separator_special_tag for a graph g , a zero-sum flow is an assignment of non-zero real numbers on the edges of g such that the total sum of all edges incident with any vertex of g is zero . a zero-sum k -flow for a graph g is a zero-sum flow with labels from the set { ±1 , … , ± ( k-1 ) } . in this paper for a graph g , a necessary and sufficient condition for the existence of a zero-sum flow is given . we conjecture that if g is a graph with a zero-sum flow , then g has a zero-sum 6 -flow . it is shown that the conjecture is true for 2 -edge connected bipartite graphs , and every r -regular graph with r even , r > 2 , or r = 3 . story_separator_special_tag let $ g $ be a graph . a zero-sum flow of $ g $ is an assignment of non-zero real numbers to the edges of $ g $ such that the sum of the values of all edges incident with each vertex is zero . let $ k $ be a natural number . a zero-sum $ k $ -flow is a flow with values from the set $ \{ \pm 1 , \ldots , \pm ( k-1 ) \} $ . it has been conjectured that every $ r $ -regular graph , $ r \geq 3 $ , admits a zero-sum $ 5 $ -flow . in this paper we provide an affirmative answer to this conjecture , except for $ r=5 $ . story_separator_special_tag answering a question raised in [ siam j . comput. , 10 ( 1981 ) , pp . 746-750 ] , we show that every bridgeless multigraph with v vertices and e edges can be covered by simple circuits whose total length is at most $ \min ( \tfrac { 5 } { 3 } e , e + \tfrac { 7 } { 3 } v - \tfrac { 7 } { 3 } ) $ . our proof supplies an efficient algorithm for finding such a cover . story_separator_special_tag zaslavsky proved in 2012 that , up to switching isomorphism , there are six different signed petersen graphs and that they could be told apart by their chromatic polynomials , by showing that the latter give distinct results when evaluated at 3. he conjectured that the six different signed petersen graphs also have distinct zero-free chromatic polynomials , and that both types of chromatic polynomials have distinct evaluations at any positive integer . we developed and executed a computer program ( running in sage ) that efficiently determines the number of proper k-colorings for a given signed graph ; our computations for the signed petersen graphs confirm zaslavsky 's conjecture . we also computed the chromatic polynomials of all signed complete graphs with up to five vertices . graph coloring problems are ubiquitous in many areas within and outside of mathematics . we are interested in certain enumerative questions about coloring signed graphs . a signed graph $ \Sigma = ( \Gamma , \sigma ) $ consists of a graph $ \Gamma = ( V , E ) $ and a signature $ \sigma : E \to \{ \pm \} $ . the underlying graph may have multiple edges and , besides the usual links and loops , also half edges ( with only story_separator_special_tag a nowhere-zero $ k $ -flow on a graph $ \gamma $ is a mapping from the edges of $ \gamma $ to the set $ \{ \pm 1 , \pm 2 , \ldots
, \pm ( k-1 ) \} \subset \mathbb{Z} $ such that , in any fixed orientation of $ \gamma $ , at each node the sum of the labels over the edges pointing towards the node equals the sum over the edges pointing away from the node . we show that the existence of an \emph{ integral flow polynomial } that counts nowhere-zero $ k $ -flows on a graph , due to kochol , is a consequence of a general theory of inside-out polytopes . the same holds for flows on signed graphs . we develop these theories , as well as the related counting theory of nowhere-zero flows on a signed graph with values in an abelian group of odd order . our results are of two kinds : polynomiality or quasipolynomiality of the flow counting functions , and reciprocity laws that interpret the evaluations of the flow polynomials at negative integers in terms of the combinatorics of the graph . story_separator_special_tag it is shown that the edges of a bridgeless graph g can be covered with cycles such that the sum of the lengths of the cycles is at most | e ( g ) | + min { ( 2/3 ) | e ( g ) | , ( 7/3 ) ( | v ( g ) | - 1 ) } . story_separator_special_tag graph theory is a flourishing discipline containing a body of beautiful and powerful theorems of wide applicability . its explosive growth in recent years is mainly due to its role as an essential structure underpinning modern applied mathematics - computer science , combinatorial optimization , and operations research in particular - but also to its increasing application in the more applied sciences . the versatility of graphs makes them indispensable tools in the design and analysis of communication networks , for instance . the primary aim of this book is to present a coherent introduction to the subject , suitable as a textbook for advanced undergraduate and beginning graduate students in mathematics and computer science . it provides a systematic treatment of the theory of graphs without sacrificing its intuitive and aesthetic appeal . commonly used proof techniques are described and illustrated , and a wealth of exercises - of varying levels of difficulty - are provided to help the reader master the techniques and reinforce their grasp of the material . a second objective is to serve as an introduction to research in graph theory . to this end , sections on more advanced topics are included , and a number story_separator_special_tag it is proved that every bidirected graph which can be provided with a nowhere-zero integral flow can also be provided with a nowhere-zero integral flow with absolute values less than 216. the connection between these flows and the local tensions on a graph which is 2-cell imbedded in a closed 2-manifold is explained . these local tensions will be studied in a subsequent paper . story_separator_special_tag we present a conceptually simple , flexible , and general framework for object instance segmentation . our approach efficiently detects objects in an image while simultaneously generating a high-quality segmentation mask for each instance . the method , called mask r-cnn , extends faster r-cnn by adding a branch for predicting an object mask in parallel with the existing branch for bounding box recognition . mask r-cnn is simple to train and adds only a small overhead to faster r-cnn , running at 5 fps . moreover , mask r-cnn is easy to generalize to other tasks , e.g. , allowing us to estimate human poses in the same framework .
we show top results in all three tracks of the coco suite of challenges , including instance segmentation , bounding-box object detection , and person keypoint detection . without bells and whistles , mask r-cnn outperforms all existing , single-model entries on every task , including the coco 2016 challenge winners . we hope our simple and effective approach will serve as a solid baseline and help ease future research in instance-level recognition . code has been made available at : https://github.com/facebookresearch/detectron . story_separator_special_tag this paper introduces circuit , bond , flow , and tension spaces and lattices for signed graphs , and studies the relations among these spaces and lattices . the key ingredient is to introduce circuit and bond characteristic vectors so that the desired spaces and lattices can be defined such that their dimensions and ranks match well with those of matroids of signed graphs . the main results can be stated as follows : ( 1 ) the classification of minimal directed cuts ; ( 2 ) the circuit space ( lattice ) equals the flow space ( lattice ) , and the bond space equals the tension space ; ( 3 ) the bond lattice equals the row lattice of the incidence matrix , and the reduced bond lattice equals the tension lattice ; and ( 4 ) for unbalanced signed graphs , the module of potentials is isomorphic to the module of tensions if the coefficient ring is 2-torsion free . story_separator_special_tag a passenger compartment mounted sensor senses crashes when located outside of the crush zone in response to velocity changes . the velocity change needed to fire the sensor is much greater than the velocity change associated with overcoming the sensor bias . story_separator_special_tag tutte observed that every nowhere-zero $ k $ -flow on a plane graph gives rise to a $ k $ -vertex-coloring of its dual , and vice versa . thus nowhere-zero integer flow and graph coloring can be viewed as dual concepts . jaeger further shows that if a graph $ g $ has a face- $ k $ -colorable 2-cell embedding in some orientable surface , then it has a nowhere-zero $ k $ -flow . however , if the surface is nonorientable , then a face- $ k $ -coloring corresponds to a nowhere-zero $ k $ -flow in a signed graph arising from $ g $ . graphs embedded in orientable surfaces are therefore a special case in which the corresponding signs are all positive . in this paper , we prove that if an 8-edge-connected signed graph admits a nowhere-zero integer flow , then it has a nowhere-zero 3-flow . our result extends thomassen 's 3-flow theorem on 8-edge-connected graphs to the family of all 8-edge-connected signed graphs . and it also improves zhu 's 3-flow theorem on 11-edge-connected signed graphs . story_separator_special_tag the signed relative clique number of a signed graph , where edges are assigned positive or negative signs , is the size of a largest subset x of vertices such that every two vertices are either adjacent or are part of a 4-cycle with an odd number of negative edges . the signed relative clique number is sandwiched between two other parameters of signed graphs , namely , the signed absolute clique number and the signed chromatic number , all three notions defined in [ r. naserasr , e. rollova , and e. sopena . homomorphisms of signed graphs . journal of graph theory , 2014 ] . thus , together with a result from [ p. ochem , a. pinlou , and s. sen. homomorphisms of signed planar graphs . arxiv preprint arxiv:1401.3308 , 2014 .
] , the lower bound of 8 and upper bound of 40 have already been proved for the signed relative clique number of the family of planar graphs . here we improve the upper bound to 15. furthermore , we determine the exact values of the signed relative clique number of the families of outerplanar graphs and triangle-free planar graphs . story_separator_special_tag a zero-sum k-flow for a graph g is a vector in the null-space of the 0 , 1 -incidence matrix of g such that its entries belong to { ±1 , … , ± ( k-1 ) } . akbari et al . ( 2009 ) conjectured that if g is a graph with a zero-sum flow , then g admits a zero-sum 6-flow . ( 2 , 3 ) -semiregular graphs are an important family in studying zero-sum flows . akbari et al . ( 2009 ) proved that if the zero-sum conjecture is true for any ( 2 , 3 ) -semiregular graph , then it is true for any graph . in this paper , we show that there is a polynomial time algorithm to determine whether a given ( 2 , 3 ) -graph g has a zero-sum 3-flow . in fact , we show that there is a polynomial time algorithm to determine whether a given ( 2 , 4 ) -graph g with n vertices has a zero-sum 3-flow , where the number of vertices of degree four is o ( log n ) . furthermore , we story_separator_special_tag bouchet conjectured that every bidirected graph which admits a nowhere-zero bidirected flow will admit a nowhere-zero bidirected 6-flow [ a. bouchet , nowhere-zero integer flows on a bidirected graph , j. combin . theory ser . b 34 ( 1983 ) 279-292 ] . he proved that this conjecture is true with 6 replaced by 216. zyka proved in his ph.d . dissertation that it is true with 6 replaced by 30. khelladi proved it is true with 6 replaced by 18 for 4-connected graphs [ a. khelladi , nowhere-zero integer chains and flows in bidirected graphs , j. combin . theory ser . b 43 ( 1987 ) 95-115 ] . in this paper , we prove that bouchet 's conjecture is true for 6-edge connected bidirected graphs . story_separator_special_tag a main purpose of this work is to give a good algorithm for a certain well-described class of integer linear programming problems , called matching problems ( or the matching problem ) . methods developed for simple matching [ 2 ] story_separator_special_tag we show that depth first search can be used to give a proper coloring of a connected signed graph g using at most $ \delta ( g ) $ colors , provided g is different from a balanced complete graph , a balanced cycle of odd length , and an unbalanced cycle of even length , thus giving a new , short proof of the generalization of brooks ' theorem to signed graphs , first proved by macajova , raspaud , and skoviera . story_separator_special_tag a signed graph ( g , σ ) is an undirected graph g together with an assignment of signs ( positive or negative ) to all its edges , where σ denotes the set of negative edges . two signatures are said to be equivalent if one can be obtained from the other by a sequence of resignings ( i.e . switching the sign of all edges incident to a given vertex ) . extending the notion of usual graph homomorphisms , homomorphisms of signed graphs were introduced , and have led to some extensions and strengthenings in the theory of graph colorings and homomorphisms . we study the complexity of deciding whether a given signed graph admits a homomorphism to a fixed target signed graph [ h , π ] , i.e . the ( h , π ) -coloring problem .
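the null-space view in the zero-sum abstract above makes checking a candidate zero-sum k-flow mechanical : the vector of edge labels must be annihilated by the 0 , 1 -incidence matrix while every entry stays in { ±1 , … , ± ( k-1 ) } . a dependency-free sketch of that check , with k4 as a worked example :

```python
def is_zero_sum_k_flow(edges, values, k):
    # edges: list of (u, v) pairs of an undirected graph;
    # values: one integer per edge; a zero-sum k-flow needs every value in
    # {+-1, ..., +-(k-1)} and a zero sum of edge values at every vertex
    assert len(edges) == len(values)
    if any(val == 0 or abs(val) >= k for val in values):
        return False
    vertex_sum = {}
    for (u, v), val in zip(edges, values):
        vertex_sum[u] = vertex_sum.get(u, 0) + val
        vertex_sum[v] = vertex_sum.get(v, 0) + val
    return all(s == 0 for s in vertex_sum.values())

# k4 (4 vertices, all 6 edges): weighting one perfect matching 2 and the
# other four edges -1 gives a zero-sum 3-flow, consistent with the
# r-regular results discussed above
k4_edges = [(0, 1), (2, 3), (0, 2), (1, 3), (0, 3), (1, 2)]
print(is_zero_sum_k_flow(k4_edges, [2, 2, -1, -1, -1, -1], 3))  # True
```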
we prove a dichotomy result for the class of all ( c_k , π ) -coloring problems ( where c_k is a cycle of length k ≥ 3 ) : ( c_k , π ) -coloring is np-complete , unless both k and the size of π are even . we conjecture that this dichotomy can be extended to all signed graphs story_separator_special_tag a perfect matching cover of a graph $ g $ is a set of perfect matchings of $ g $ such that each edge of $ g $ is contained in at least one member of it . berge conjectured that every bridgeless cubic graph has a perfect matching cover of order at most 5. the berge conjecture is largely open and it is even unknown whether a constant integer $ c $ exists such that every bridgeless cubic graph has a perfect matching cover of order at most $ c $ . in this paper , we show that a bridgeless cubic graph $ g $ has a perfect matching cover of order at most 11 if $ g $ has a 2-factor in which the number of odd circuits is 2 . story_separator_special_tag this paper studies the choosability of signed planar graphs . we prove that every signed planar graph is 5-choosable and that there is a signed planar graph which is not 4-choosable while the unsigned graph is 4-choosable . for each $ k \in \{ 3,4,5,6 \} $ , every signed planar graph without circuits of length $ k $ is 4-choosable . furthermore , every signed planar graph without circuits of length 3 and of length 4 is 3-choosable . we construct a signed planar graph with girth 4 which is not 3-choosable but the unsigned graph is 3-choosable . story_separator_special_tag bouchet conjectured in 1983 that each signed graph that admits a nowhere-zero flow has a nowhere-zero 6-flow . we prove that the conjecture is true for all signed series-parallel graphs . unlike the unsigned case , the restriction to series-parallel graphs is nontrivial ; in fact , the result is tight for infinitely many graphs . story_separator_special_tag general results on nowhere-zero integral chain groups are proved and then specialized to the case of flows in bidirected graphs . for instance , it is proved that every 4-connected ( resp . 3-connected and balanced triangle free ) bidirected graph which has at least an unbalanced circuit and a nowhere-zero flow can be provided with a nowhere-zero integral flow with absolute values less than 18 ( resp . 30 ) . this improves , for these classes of graphs , bouchet 's 216-flow theorem ( j. combin . theory ser . b 34 ( 1982 ) , 279-292 ) . we also approach his 6-flow conjecture by proving it for a class of 3-connected graphs . our method is inspired by seymour 's proof of the 6-flow theorem ( j. combin . theory ser . b 30 ( 1981 ) , 130-136 ) , and makes use of new connectedness properties of signed graphs . story_separator_special_tag the main theorem of this paper provides partial results on some major open problems in graph theory , such as tutte 's 3-flow conjecture ( from the 1970s ) that every 4-edge connected graph admits a nowhere-zero 3-flow , the conjecture of jaeger , linial , payan and tarsi ( 1992 ) that every 5-edge-connected graph is z_3-connected , jaeger 's circular flow conjecture ( 1984 ) that for every odd natural number k ≥ 3 , every ( 2k-2 ) -edge-connected graph has a modulo k-orientation , etc . it was proved recently by thomassen that , for every odd number k ≥ 3 , every ( 2k^2+k ) -edge-connected graph g has a modulo k-orientation ; and every 8-edge-connected graph g is z_3-connected and admits therefore a nowhere-zero 3-flow .
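the brooks-type abstract a few rows above colors vertices in an order derived from depth first search ; the mechanism is that , coloring in decreasing discovery order , every vertex except the dfs root still has an uncolored neighbor ( its parent ) when processed , so it sees at most deg - 1 used colors . a plain greedy sketch for ordinary graphs , standing in for the signed version ; it guarantees at most Δ+1 colors , with only the root able to force the extra one ( the Δ bound of brooks ' theorem needs the additional care described in that abstract ) :

```python
def dfs_greedy_coloring(adj, root=0):
    # collect a dfs discovery order, then color it in reverse: each
    # non-root vertex is colored while its dfs parent is still uncolored,
    # so it has at most deg-1 colored neighbors
    order, seen, stack = [], {root}, [root]
    while stack:
        u = stack.pop()
        order.append(u)
        for v in adj[u]:
            if v not in seen:
                seen.add(v)
                stack.append(v)
    color = {}
    for u in reversed(order):                 # root is colored last
        used = {color[v] for v in adj[u] if v in color}
        color[u] = next(c for c in range(len(adj)) if c not in used)
    return color

# a path on 4 vertices: maximum degree 2, and two colors suffice
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
print(dfs_greedy_coloring(adj))
```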
in the present paper , thomassen 's method is refined to prove the following : for every odd number k ≥ 3 , every ( 3k-3 ) -edge-connected graph has a modulo k-orientation . as a special case of the main result , every 6-edge-connected graph is z_3-connected and admits therefore a nowhere-zero 3-flow . story_separator_special_tag we introduce the concept of a signed circuit cover of a signed graph . a signed circuit cover is a natural analog of a circuit cover of a graph and is equivalent to a covering of the corresponding signed graphic matroid with circuits . as in the case of graphs , a signed graph has a signed circuit cover only when it admits a nowhere-zero integer flow . in the present article , we establish the existence of a universal coefficient q such that every signed graph g that admits a nowhere-zero integer flow has a signed circuit cover of total length at most q | e ( g ) | . we show that if g is bridgeless , then q ≤ 9 , and in the general case q ≤ 11 . story_separator_special_tag background : high-throughput profiling of dna methylation status of cpg islands is crucial to understand the epigenetic regulation of genes . the microarray-based infinium methylation assay by illumina is one platform for low-cost high-throughput methylation profiling . both beta-value and m-value statistics have been used as metrics to measure methylation levels . however , there are no detailed studies of their relations and their strengths and limitations . results : we demonstrate that the relationship between the beta-value and m-value methods is a logit transformation , and show that the beta-value method has severe heteroscedasticity for highly methylated or unmethylated cpg sites . in order to evaluate the performance of the beta-value and m-value methods for identifying differentially methylated cpg sites , we designed a methylation titration experiment . the evaluation results show that the m-value method provides much better performance in terms of detection rate ( dr ) and true positive rate ( tpr ) for both highly methylated and unmethylated cpg sites . imposing a minimum threshold of difference can improve the performance of the m-value method but not the beta-value method . we also provide guidance for how to select the threshold of methylation differences . conclusions : the beta-value has a more intuitive biological interpretation , story_separator_special_tag we determine the flow numbers of signed complete and signed complete bipartite graphs . story_separator_special_tag bouchet 's conjecture asserts that each signed graph which admits a nowhere-zero flow has a nowhere-zero 6-flow . we verify this conjecture for two basic classes of signed graphs - signed complete and signed complete bipartite graphs - by proving that each such flow-admissible graph admits a nowhere-zero 4-flow and we characterise those which have a nowhere-zero 2-flow and a nowhere-zero 3-flow . story_separator_special_tag let f_c ( g ) and f ( g ) be the circular and the integer flow number of a flow-admissible bidirected graph g , respectively . raspaud and zhu proved that f ( g ) ≤ 2 ⌈ f_c ( g ) ⌉ - 1 . this note shows that this result can not be improved . moreover , in the same paper , raspaud and zhu conjectured that f ( g ) - f_c ( g ) < 1 for every flow-admissible bidirected graph g . this conjecture was disproved by schubert and steffen , who showed that δ_f ≥ 2 , where δ_f = sup { f ( g ) - f_c ( g ) : g is a flow-admissible bidirected graph } .
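the methylation abstract above states that beta-values and m-values are related by a logit transformation ; in the usual convention this is m = log2 ( beta / ( 1 - beta ) ) , which stretches the compressed ends of the beta scale where the heteroscedasticity lives . a small conversion sketch ( the epsilon guard against beta of exactly 0 or 1 is an illustrative choice ) :

```python
import math

def beta_to_m(beta, eps=1e-6):
    # base-2 logit of the methylated fraction; eps guards beta of 0 or 1
    b = min(max(beta, eps), 1.0 - eps)
    return math.log2(b / (1.0 - b))

def m_to_beta(m):
    # inverse transform: beta = 2**m / (2**m + 1)
    return 2.0 ** m / (2.0 ** m + 1.0)

for beta in (0.1, 0.5, 0.9):
    m = beta_to_m(beta)
    print(f"beta={beta:.2f} -> m={m:+.2f} -> beta={m_to_beta(m):.2f}")
```

equal steps in beta near 0 or 1 map to large steps in m , which is one way to see why differential-methylation tests behave better on the m scale .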
our result implies that δ_f ≥ 3 . furthermore , if bouchet 's 6-flow conjecture is true , then δ_f = 3 . story_separator_special_tag we generalise to signed graphs a classical result of tutte [ canad . j. math . 8 ( 1956 ) , 13-28 ] stating that every integer flow can be expressed as a sum of characteristic flows of circuits . in our generalisation , the role of circuits is taken over by signed circuits of a signed graph which occur in two types - either balanced circuits or pairs of disjoint unbalanced circuits connected with a path intersecting them only at its ends . as an application of this result we show that a signed graph $ g $ admitting a nowhere-zero $ k $ -flow has a covering with signed circuits of total length at most $ 2 ( k-1 ) | e ( g ) | $ . story_separator_special_tag this paper is devoted to a detailed study of nowhere-zero flows on signed eulerian graphs . we generalise the well-known fact about the existence of nowhere-zero 2-flows in eulerian graphs by proving that every signed eulerian graph that admits an integer nowhere-zero flow has a nowhere-zero 4-flow . we also characterise signed eulerian graphs with flow number 2 , 3 , and 4 , as well as those that do not have an integer nowhere-zero flow . finally , we discuss the existence of nowhere-zero a-flows on signed eulerian graphs for an arbitrary abelian group a . story_separator_special_tag we characterise signed cubic graphs that admit a nowhere-zero 3 -flow , a nowhere-zero 4 -flow , and nowhere-zero flows with values in abelian groups of order 3 and 4 . most of our characterisations feature the concept of an antibalanced signature , one that is switching-equivalent to the all-negative signature . in particular , we prove that a signed cubic graph has a nowhere-zero 3 -flow if and only if it has a perfect matching and is antibalanced . our results suggest several interesting problems for further investigation . story_separator_special_tag we conjecture that every signed graph of unbalanced girth 2g , whose underlying graph is bipartite and planar , admits a homomorphism to the signed projective cube of dimension 2g-1 . our main result is to show that for a given g , this conjecture is equivalent to the corresponding case ( k = 2g ) of a conjecture of seymour claiming that every planar k-regular multigraph with no odd edge-cut of less than k edges is k-edge-colorable . to this end , we exhibit several properties of signed projective cubes and establish a folding lemma for planar even signed graphs . story_separator_special_tag we study the homomorphism relation between signed graphs where the underlying graph g is bipartite . we show that this notion captures the notions of chromatic number and graph homomorphisms . in particular we will study hadwiger 's conjecture in this setting . we show that for small values of the chromatic number there are natural strengthenings of this conjecture but such extensions will not work for larger chromatic numbers . story_separator_special_tag a signed graph [ g , σ ] is a graph g together with an assignment of signs + and - to all the edges of g , where σ is the set of negative edges . furthermore [ g , σ_1 ] and [ g , σ_2 ] are considered to be equivalent if the symmetric difference of σ_1 and σ_2 is an edge cut of g . naturally arising from matroid theory , several notions of graph theory , such as the theory of minors and the theory of nowhere-zero flows , have already been extended to signed graphs . in an unpublished manuscript , b.
guenin introduced the notion of signed graph homomorphisms where he showed how some well-known conjectures can be captured using this notion . a signed graph [ g , σ ] is said to map to [ h , σ_1 ] if there is an equivalent signed graph [ g , σ' ] of [ g , σ ] and a mapping φ : v ( g ) → v ( h ) such that ( i ) if xy ∈ e ( g ) then φ ( x ) φ ( y ) ∈ e ( h ) and ( ii ) xy ∈ σ' if and only if φ ( x ) φ ( y ) ∈ σ_1 . the chromatic number of a story_separator_special_tag the comments below apply to all printings of the book dated 2005 or earlier . the table following contains more than just a list of typing errors . some statements and proofs have been corrected , simplified , or clarified . moreover , the current status has been given for all the unsolved problems or conjectures that appear in chapter 14. for those changes that simply involve the insertion of extra words , the corrected text is given with the inserted words underlined . it is planned to update this table at regular intervals and , eventually , these changes should be incorporated into the next printing of the book . the reader is encouraged to send the author < oxley @ math.lsu.edu > corrections that do not appear in the table below . story_separator_special_tag the circular flow number f_c ( g , σ ) of a signed graph ( g , σ ) is the minimum r for which an orientation of ( g , σ ) admits a circular r-flow . we prove that the circular flow number of a signed graph ( g , σ ) is equal to the minimum imbalance ratio of an orientation of ( g , σ ) . we then use this result to prove that if g is 4-edge-connected and ( g , σ ) has a nowhere zero flow , then f_c ( g , σ ) ( as well as f ( g , σ ) ) is at most 4. if g is 6-edge-connected and ( g , σ ) has a nowhere zero flow , then f_c ( g , σ ) is strictly less than 4 . story_separator_special_tag although career management is a recent concern within the management of labour relations , the crisis of the traditional approach to this problem is now evident . having emerged to satisfy the need for senior and middle managers that companies faced within a dualist and strongly hierarchical system of labour relations , the professional career was conceived as a cumulative process of ascent through the hierarchical structure of organizations , in the image of traditional evolutionist conceptions of social life . however , the transformations under way in `` developed '' societies confront us today with the need to question this evolutionist approach . a diverse set of factors has contributed to these transformations , among which we highlight : organizational factors related to the growing flattening and slimming of structures and to the use of outsourcing , thereby reducing the number of opportunities for vertical progression ; and environmental factors resulting from rising unemployment , the importance of continuous training and the dynamism of the labour market story_separator_special_tag curcumin has been shown to be highly cytotoxic towards various cancer cell lines , but its water-insolubility and instability make its bioavailability exceedingly low and thus it generally demonstrates low anticancer activity in in vivo tests .
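the homomorphism definition reconstructed above quantifies over all switchings of the source graph , which for small instances can be checked by brute force : try every switching subset and every vertex map , and test that adjacency and the ( switched ) signs are both preserved . an exponential toy sketch , only meant to make the definition concrete ; all names and the edge encoding are illustrative :

```python
from itertools import combinations, product

def maps_to(g_edges, g_signs, h_edges, h_signs, g_vertices, h_vertices):
    # g_signs/h_signs: dict (u, v) -> +1/-1 with u < v; brute force over
    # switching subsets of g (flip edges with one end in the subset,
    # i.e. symmetric difference with an edge cut) and over vertex maps
    h_adj = set(h_edges)
    for r in range(len(g_vertices) + 1):
        for switched in combinations(g_vertices, r):
            flip = set(switched)
            resigned = {
                (u, v): (-s if (u in flip) != (v in flip) else s)
                for (u, v), s in g_signs.items()
            }
            for images in product(h_vertices, repeat=len(g_vertices)):
                phi = dict(zip(g_vertices, images))
                ok = True
                for (u, v), s in resigned.items():
                    e = tuple(sorted((phi[u], phi[v])))
                    # need a real (non-loop) edge of h with the same sign
                    if e[0] == e[1] or e not in h_adj or h_signs[e] != s:
                        ok = False
                        break
                if ok:
                    return True
    return False

# an all-positive triangle maps to itself via the identity map
tri = [(0, 1), (1, 2), (0, 2)]
signs = {e: +1 for e in tri}
print(maps_to(tri, signs, tri, signs, [0, 1, 2], [0, 1, 2]))  # True
```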
herein , we report a novel type of polymer-drug conjugate - the high molecular weight curcumin polymers ( polycurcumins ) made by condensation polymerization of curcumin . the polycurcumins as backbone-type conjugates have the advantages of high drug loading efficiency , fixed drug loading contents , stabilized curcumin in their backbones , and tailored water-solubility . the polycurcumins may have many potential applications and their antitumor activities are investigated in this work . the polycurcumins are cytotoxic to cancer cells , but a polyacetal-based polycurcumin ( pcurc 8 ) is highly cytotoxic to skov-3 and ovcar-3 ovarian cancer and mcf-7 breast cancer cell lines . it can be quickly taken up by cancer cells into their lysosomes , where pcurc 8 hydrolyzes and releases active curcumin . it arrests the skov-3 cell cycle at the g0/g1 phase in vitro and induces cell apoptosis partially through the caspase-3 dependent pathway . in vivo , intravenously ( i.v . ) injected pcurc story_separator_special_tag we study the flow spectrum s ( g ) and the integer flow spectrum s̄ ( g ) of signed ( 2t+1 ) -regular graphs . we show that if r ∈ s ( g ) , then r = 2 + 1/t or r ≥ 2 + 2/ ( 2t-1 ) . furthermore , 2 + 1/t ∈ s ( g ) if and only if g has a t -factor . if g has a 1-factor , then 3 ∈ s̄ ( g ) , and for every t ≥ 2 , there is a signed ( 2t+1 ) -regular graph ( h , σ ) with 3 ∈ s̄ ( h ) and h does not have a 1-factor . if g ( ≠ k_2^3 ) is a cubic graph which has a 1-factor , then { 3 , 4 } ⊆ s ( g ) ∩ s̄ ( g ) . furthermore , the following four statements are equivalent : ( 1 ) g has a 1-factor . ( 2 ) 3 ∈ s ( g ) . ( 3 ) 3 ∈ story_separator_special_tag a signed graph is a graph in which each edge is labeled with +1 or -1 . a ( proper ) vertex coloring of a signed graph is a mapping φ that assigns to each vertex v ∈ v ( g ) a color φ ( v ) ∈ z such that every edge vw of g satisfies φ ( v ) ≠ σ ( vw ) φ ( w ) , where σ ( vw ) is the sign of the edge vw . for an integer h ≥ 0 , let z_{2h} = { ±1 , ±2 , … , ±h } and z_{2h+1} = z_{2h} ∪ { 0 } . following macajova et al . ( 2016 ) , the chromatic number χ ( g ) of the signed graph g is the least integer k such that g admits a vertex coloring φ with im ( φ ) ⊆ z_k . as proved in macajova et al . ( 2016 ) , every signed graph g satisfies χ ( g ) ≤ Δ ( g ) + 1 and there are three types of signed connected simple graphs for which equality holds . we will extend this brooks type result by considering graphs having multiple edges . we will also prove a list version of this result by characterizing degree choosable signed graphs . furthermore story_separator_special_tag we prove that every graph with no isthmus has a nowhere-zero 6-flow , that is , a circulation in which the value of the flow through each edge is one of ±1 , ±2 , … , ±5 . this improves jaeger 's 8-flow theorem , and approaches tutte 's 5-flow conjecture . story_separator_special_tag a projective-planar signed graph has no two vertex-disjoint negative circles . we prove that every signed graph with no two vertex-disjoint negative circles and no balancing vertex is obtained by taking a projective-planar signed graph or a copy of -k_5 and then taking 1- , 2- , and 3-sums with balanced signed graphs . story_separator_special_tag we give a decomposition theorem for signed graphs whose frame matroids are binary and a decomposition theorem for signed graphs whose frame matroids are quaternary .
story_separator_special_tag the circular flow number f_c ( g ) of a graph g = ( v , e ) is the minimum r such that g admits a flow φ with 1 ≤ | φ ( e ) | ≤ r - 1 for each e ∈ e . we determine the circular flow number of some regular multigraphs . in particular , we characterize the bipartite ( 2t+1 ) -regular graphs ( t ≥ 1 ) . our results imply that there are gaps for possible circular flow numbers for ( 2t+1 ) -regular graphs , e.g. , there is no cubic graph g with 3 < f_c ( g ) < 4. we further show that there are snarks with circular flow number arbitrarily close to 4 , answering a question of x. zhu . story_separator_special_tag we show that , for each natural number k > 1 , every graph ( possibly with multiple edges but with no loops ) of edge-connectivity at least 2k^2+k has an orientation with any prescribed outdegrees modulo k provided the prescribed outdegrees satisfy the obvious necessary conditions . for k=3 the edge-connectivity 8 suffices . this implies the weak 3-flow conjecture proposed in 1988 by jaeger ( a natural weakening of tutte 's 3-flow conjecture which is still open ) and also a weakened version of the more general circular flow conjecture proposed by jaeger in 1982. it also implies the tree-decomposition conjecture proposed in 2006 by barat and thomassen when restricted to stars . finally , it is the currently strongest partial result on the ( 2 + ε ) -flow conjecture by goddyn and seymour . story_separator_special_tag two polynomials θ ( g , n ) and φ ( g , n ) connected with the colourings of a graph g or of associated maps are discussed . a result believed to be new is proved for the lesser-known polynomial φ ( g , n ) . attention is called to some unsolved problems concerning φ ( g , n ) which are natural generalizations of the four colour problem from planar graphs to general graphs . a polynomial χ ( g , x , y ) in two variables x and y , which can be regarded as generalizing both θ ( g , n ) and φ ( g , n ) , is studied . for a connected graph χ ( g , x , y ) is defined in terms of the spanning trees of g ( which include every vertex ) and in terms of a fixed enumeration of the edges . story_separator_special_tag as an analogous concept of a nowhere-zero flow for directed graphs , we consider zero-sum flows for undirected graphs in this article . for an undirected graph g , a zero-sum flow is an assignment of non-zero integers to the edges such that the sum of the values of all edges incident with each vertex is zero , and we call it a zero-sum k -flow if the absolute values of the edges are less than k . we define the zero-sum flow number of g as the least integer k for which g admits a zero-sum k -flow . in this paper , among others we calculate the zero-sum flow numbers for regular graphs and also the zero-sum flow numbers for cartesian products of regular graphs with paths . story_separator_special_tag as an analogous concept of a nowhere-zero flow for directed graphs , we consider zero-sum flows for undirected graphs in this article . for an undirected graph g , a zero-sum flow is an assignment of non-zero integers to the edges such that the sum of the values of all edges incident with each vertex is zero , and we call it a zero-sum k -flow if the absolute values of the edges are less than k . note that from an algebraic point of view finding such zero-sum flows is the same as finding nowhere-zero vectors in the null space of the incidence matrix of the graph .
we consider in more detail a combinatorial optimization problem , by defining the zero-sum flow number of g as the least integer k for which g admits a zero-sum k-flow . it is well known that grids are extremely useful in all areas of computer science . previously we studied flow numbers over hexagonal grids and obtained the optimal upper bound . in this paper , with new techniques we completely determine the zero-sum flow numbers for certain classes of triangular grid graphs , namely , regular triangular grids , triangular belts , fans , story_separator_special_tag as an analogous concept of nowhere-zero flows for directed and bi-directed graphs , we consider zero-sum flows for undirected graphs in this article . for an undirected graph g , a zero-sum k -flow is an assignment of non-zero integers , whose absolute values are less than k , to the edges , such that the sum of the values of all edges incident with each vertex is zero . furthermore we generalize the notion by considering a combinatorial optimization problem , which is to calculate the zero-sum minimum flow number of a graph g , namely , the least integer k for which g may admit a zero-sum k-flow . the zero-sum 6-flow conjecture was raised by akbari et al . in 2009 : if a graph has a zero-sum flow , it admits a zero-sum 6-flow . it turns out that this conjecture was proved to be equivalent to the classical bouchet 6-flow conjecture for bi-directed flows . in this paper , we study zero-sum minimum flow numbers of graphs induced from plane tilings by regular hexagons in an arbitrary way , namely , the hexagonal grid graphs . in particular we are able to verify the zero-sum 6-flow conjecture for story_separator_special_tag it was conjectured by bouchet that every bidirected graph which admits a nowhere-zero k-flow will admit a nowhere-zero 6-flow . he proved that the conjecture is true when 6 is replaced by 216. zyka improved the result with 6 replaced by 30. xu and zhang showed that the conjecture is true for 6-edge-connected graphs . and for 4-edge-connected graphs , raspaud and zhu proved it is true with 6 replaced by 4. in this paper , we show that bouchet 's conjecture is true with 6 replaced by 15 for 3-edge-connected graphs . story_separator_special_tag it was conjectured by a. bouchet that every bidirected graph which admits a nowhere-zero k-flow admits a nowhere-zero 6-flow . he proved that the conjecture is true when 6 is replaced by 216. o. zyka improved the result with 6 replaced by 30. r. xu and c. q. zhang showed that the conjecture is true for 6-edge-connected graphs , which is further improved by a. raspaud and x. zhu for 4-edge-connected graphs . the main result of this paper improves zyka 's theorem by showing the existence of a nowhere-zero 25-flow for all 3-edge-connected graphs . story_separator_special_tag a signed graph is a graph with a positive or negative sign on each edge . regarding each edge as two half edges , an orientation of a signed graph is an assignment of a direction to each of its half edges such that the two half edges of a positive edge receive the same direction and those of a negative edge receive opposite directions . a signed graph with such an orientation is called a bidirected graph . a nowhere-zero $ k $ -flow of a bidirected graph is an assignment of an integer from $ \{ - ( k-1 ) , \ldots , -1 , 1 , \ldots , ( k-1 ) \} $ to each of its half edges such that kirchhoff 's law is respected , that is , the total incoming flow is equal to the total outgoing flow at each vertex .
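the half-edge formulation just given can be exercised directly on small examples : give each edge one value and two directed half edges , then check kirchhoff 's law at every vertex together with the nowhere-zero range condition . a minimal sketch ; the encoding of a half edge's direction as ±1 at its endpoint is an illustrative choice , not taken from the abstract :

```python
def is_nowhere_zero_k_flow(half_edges, values, k):
    # half_edges: one ((u, dir_u), (v, dir_v)) pair per edge, where dir is
    # +1 if that half edge points into its endpoint and -1 if it points
    # away; in this endpoint encoding an ordinary (positive) edge gets one
    # of each, while a negative edge gets two alike.
    # values: one integer per edge, carried by both of its half edges.
    if any(val == 0 or abs(val) >= k for val in values):
        return False           # violates the nowhere-zero / range condition
    net = {}
    for ((u, du), (v, dv)), val in zip(half_edges, values):
        net[u] = net.get(u, 0) + du * val    # incoming counts +, outgoing -
        net[v] = net.get(v, 0) + dv * val
    return all(s == 0 for s in net.values())  # kirchhoff's law everywhere

# two vertices joined by two positive parallel edges, oriented as a
# circulation: value 1 on each edge is a nowhere-zero 2-flow
digon = [((0, +1), (1, -1)), ((0, -1), (1, +1))]
print(is_nowhere_zero_k_flow(digon, [1, 1], 2))  # True
```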
a signed graph is said to admit a nowhere-zero $ k $ -flow if it has an orientation such that the corresponding bidirected graph admits a nowhere-zero $ k $ -flow . it was conjectured by bouchet that every signed graph admitting a nowhere-zero $ k $ -flow for some story_separator_special_tag a graph with signed edges ( a signed graph ) is k-colorable if its vertices can be colored using only the colors 0 , ±1 , … , ±k so that the colors of the endpoints of a positive edge are unequal while those of a negative edge are not negatives of each other . consider the signed graphs without positive loops that embed in the klein bottle so that a closed walk preserves orientation iff its sign product is positive . all of them are 2-colorable but not all are 1-colorable , not even if one restricts to the signed graphs that embed in the projective plane . if the color 0 is excluded , all are 3-colorable but , even restricting to the projective plane , not necessarily 2-colorable . story_separator_special_tag we continue the study initiated in `` signed graph coloring '' of the chromatic and whitney polynomials of signed graphs . in this article we prove and apply to examples three types of general theorem which have no analogs for ordinary graph coloring . first is a balanced expansion theorem which reduces calculation of the chromatic and whitney polynomials to that of the simpler balanced polynomials . second is a group of formulas based on counting colorings by their magnitudes or their signs ; among them are a combinatorial interpretation of signed coloring ( which implies an equivalence between proper colorings of certain signed graphs and matchings in ordinary graphs ) and a signed-graphic switching formula ( which for instance gives the polynomials of a two-graph in terms of those of its associated ordinary graphs ) . third are addition/deletion formulas obtained by constructing one signed graph from another through adding and removing arcs ; one such formula expresses the chromatic polynomial as a combination of those of ordinary graphs , while another ( in one example ) yields a complementation formula for ordinary matchings . the examples treated are the sign-symmetric graphs ( among them in effect the classical story_separator_special_tag coloring a signed graph by signed colors , one has a chromatic polynomial with the same enumerative and algebraic properties as for ordinary graphs . new phenomena are the interpretability only of odd arguments and the existence of a second chromatic polynomial counting zero-free colorings . the generalization to voltage graphs is outlined . story_separator_special_tag in this paper we study connected signed graphs with 2 eigenvalues from several ( theoretical and computational ) perspectives . we give some basic results concerning the eigenvalues and cyclic structure of such signed graphs ; in particular , we complete the list of those that are 3- or 4-regular . there is a natural relation between signed graphs and systems of lines in a euclidean space that are pairwise orthogonal or at a fixed angle , with a special role for those with 2 eigenvalues . in this context we derive a relative bound for the number of such lines ( an extension of the similar bound related to unsigned graphs ) . we also determine all such graphs whose negative eigenvalue is not less than -2 , except for so-called exceptional signed graphs . using a computer search , we determine those with at most 10 vertices .
several constructions are given and the possible spectra of those with at most 30 vertices are listed . story_separator_special_tag the zero-free chromatic number χ* of a signed graph σ is the smallest positive number k for which the vertices can be colored using ±1 , ±2 , … , ±k so that endpoints of a positive edge are not colored the same and those of a negative edge are not colored oppositely . we establish the value of χ* for some special signed graphs and prove in general that χ* equals the minimum size of a vertex partition inducing an antibalanced subgraph of σ , and also the minimum chromatic number of the positive subgraph of any signed graph switching equivalent to σ . we characterize those signed graphs with the largest and smallest possible χ* , that is n , n - 1 , and 1 , and the simple ones with the maximum and minimum χ* , that is ⌈ n/2 ⌉ and 1 , where n is the number of vertices . we give tighter bounds on χ* in terms of the underlying graphs , but they are not sharp . we conclude by observing that determining χ* is an np-complete problem . story_separator_special_tag a signed graph is a graph whose edges are labeled by signs . this is a bibliography of signed graphs and related mathematics . several kinds of labelled graph have been called `` signed '' yet are mathematically very different . i distinguish four types : group-signed graphs : the edge labels are elements of a 2-element group and are multiplied around a polygon ( or along any walk ) . among the natural generalizations are larger groups and vertex signs . sign-colored graphs , in which the edges are labelled from a two-element set that is acted upon by the sign group : - interchanges labels , + leaves them unchanged . this is the kind of `` signed graph '' found in knot theory . the natural generalization is to more colors and more general groups or no group . weighted graphs , in which the edge labels are the elements +1 and -1 of the integers or another additive domain . weights behave like numbers , not signs ; thus i regard work on weighted graphs as outside the scope of the bibliography except ( to some extent ) when the author calls the weights story_separator_special_tag introduction to integer flows basic properties of integer flows nowhere-zero 4-flows nowhere-zero 3-flows nowhere-zero k-flows faithful cycle covers cycle double covers shortest cycle covers generalization and unification compatible decompositions . appendices : fundamental theories hints for exercises terminology . story_separator_special_tag the circuit double cover conjecture ( cdc conjecture ) is easy to state : for every 2-connected graph , there is a family $ \mathcal{f} $ of circuits such that every edge of the graph is covered by precisely two members of $ \mathcal{f} $ . the cdc conjecture ( and its numerous variants ) is considered by most graph theorists as one of the major open problems in the field . the cdc conjecture , tutte 's 5-flow conjecture , and the berge-fulkerson conjecture are three major snark family conjectures since they are all trivial for 3-edge-colorable cubic graphs and remain widely open for snarks . this chapter is a brief survey of the progress on this famous open problem . story_separator_special_tag this paper proves that for any positive integer $ k $ , every essentially $ ( 2k+1 ) $ -unbalanced $ ( 12k-1 ) $ -edge connected signed graph has circular flow number at most $ 2 + \frac{1}{k} $ .
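the chromatic counts discussed above ( zaslavsky 's chromatic polynomial evaluations and the zero-free variant ) can be reproduced for small signed graphs by brute force : with color set { 0 , ±1 , … , ±h } , a coloring is proper when phi ( u ) != sigma ( uv ) * phi ( v ) on every edge , and dropping the color 0 gives the zero-free count . a toy counter ; the encoding of edges and signs is illustrative :

```python
from itertools import product

def count_proper_colorings(edges, signs, n_vertices, h, zero_free=False):
    # colors {+-1, ..., +-h}, plus 0 unless zero_free; a coloring is
    # proper when phi(u) != sign(uv) * phi(v) for every edge uv
    palette = [c for c in range(-h, h + 1) if c != 0 or not zero_free]
    count = 0
    for phi in product(palette, repeat=n_vertices):
        if all(phi[u] != s * phi[v] for (u, v), s in zip(edges, signs)):
            count += 1
    return count

# a single negative edge on two vertices: the constraint is phi(u) != -phi(v)
print(count_proper_colorings([(0, 1)], [-1], 2, h=1))                  # 6 of 9
print(count_proper_colorings([(0, 1)], [-1], 2, h=1, zero_free=True))  # 2 of 4
```

evaluating such counts at several h and interpolating is exactly how the sage program mentioned above recovers the chromatic polynomials , since the counts are polynomial in the number of colors .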
the subject of this review is the solution environment of nonpolar molecules dissolved in liquid water , and the molecular description of the most likely encounters between such solutes in aqueous solution . these subjects have been traditionally discussed under the name `` hydrophobic effects . '' several reviews have been written previously about hydrophobic effects , and from a variety of different perspectives . the perspectives include the peculiarities of solution thermodynamic properties ( 1a , b , 2 ) , the formation of membranes and micelles ( 3 , 4 ) , and the influence of solution environment on the structure of proteins ( 5 , 6 ) . these are problems of obvious importance , and they have been the subject of research over many decades . this research has produced a rich array of thermodynamic and spectroscopic data on aqueous solutions and evolved speculations about the molecular-level structure of the systems . in fact , until the late 1970s these topics were often discussed in much the same ways as they were decades previously . however , at about that time a new category of molecular-level information bearing on hydrophobic effects began to be available story_separator_special_tag this paper reviews the molecular theory of hydrophobic effects relevant to biomolecular structure and assembly in aqueous solution . recent progress has resulted in simple , validated molecular statistical thermodynamic theories and clarification of confusing theories of decades ago . current work is resolving effects of wider variations of thermodynamic state , e.g . pressure denaturation of soluble proteins , and more exotic questions such as effects of surface chemistry in treating stability of macromolecular structures in aqueous solution story_separator_special_tag hydrophobicity manifests itself differently on large and small length scales . this review focuses on large-length-scale hydrophobicity , particularly on dewetting at single hydrophobic surfaces and drying in regions bounded on two or more sides by hydrophobic surfaces . we review applicable theories , simulations , and experiments pertaining to large-scale hydrophobicity in physical and biomolecular systems and clarify some of the critical issues pertaining to this subject . given space constraints , we can not review all the significant and interesting work in this active field . story_separator_special_tag self-assembled monolayers ( sams ) form when organic molecules spontaneously chemisorb on the surfaces of solids ( e.g . organic thiols and disulfides on gold , silver , and copper or carboxylic acids on the surface of alumina ) . the most robust and best characterized sams are those comprising alkanethiolates on gold . by variation of the length of the alkane chain and the identity of the functional group at its terminus , the thickness of the organic layer and the chemical properties of the exposed interface can be controlled with great precision . we and others have used these sams for studies in tribology , adhesion , wetting , and other fields . in this paper , we describe a technique for patterning the formation of sams , using an elastomeric stamp , that can routinely produce patterns with dimensions from 1 to 100 μm ; features as small as 0.2 μm ( 200 nm ) have been generated using this procedure , although these very small features are not always easily reproduced . these patterned surfaces have geometrically well-defined regions with different chemical and physical properties .
we demonstrate a number of uses for story_separator_special_tag the synthesis and properties of hydrophobic silica membranes are described . these membranes show very high gas permeance for small molecules , such as h2 , co2 , n2 , o2 , and ch4 , and permselectivities of 20-50 for these gases with respect to sf6 and larger alkanes like c3h8 and i-c4h10 . the membranes are prepared by repeated dip coating of supported γ-alumina membranes in a silica sol solution , followed by drying and calcining . the hydrophobic nature of the membranes is obtained by adding methyl-tri-ethoxy-silane ( mtes ) to the sol prepared by acid-catalysed hydrolysis and condensation of tetra-ethyl-ortho-silicate ( teos ) . the double silica membrane layer has a total thickness of 60 nm and a pore diameter of ca . 0.7 nm . the membranes are 10× more hydrophobic than the state-of-the-art silica membranes , which makes them more suitable for application in humid process streams . besides that , the very high permeances obtained for n2 and o2 , of 4 and 7 × 10^-7 mol/ ( m2 s pa ) respectively , offer perspectives on dedicated air purification in which larger impurity molecules are blocked by molecular sieving effects . story_separator_special_tag we describe a simple and inexpensive method to produce super-hydrophobic surfaces on aluminum and its alloy by oxidation and chemical modification . water or aqueous solutions ( ph = 1-14 ) have contact angles of 168 ± 2° and 161 ± 2° on the treated surfaces of al and al alloy , respectively . the super-hydrophobic surfaces are produced by the cooperation of binary structures at micro- and nanometer scales , thus reducing the energies of the surfaces . such super-hydrophobic properties will greatly extend the applications of aluminum and its alloy as lubricating materials . story_separator_special_tag a molecular-scale interpretation of interfacial processes is often downplayed in the analysis of traditional water treatment methods ; however , such a fundamental approach is perhaps critical for the realization of enhanced performance in traditional desalination and related treatments , and in the development of novel water treatment technologies . specifically , we examine in this article the molecular-scale processes that affect water and ion selectivity at the nanopore scale as inspired by nature , the behavior of a model polysaccharide as a biofilm , and the use of cluster-surfactant flocculants in viral sequestration . story_separator_special_tag neutron reflectivity ( nr ) was used to study the effectiveness of superhydrophobic ( sh ) films as corrosion inhibitors . a low-temperature , low-pressure technique was used to prepare a rough , highly porous organosilica aerogel-like film . uv/ozone treatments were used to control the surface coverage of hydrophobic organic ligands on the silica framework , allowing the contact angle with water to be continuously varied over the range of 160° ( sh ) to < 10° ( hydrophilic ) . thin ( 5000 å ) nano-porous films were layered onto aluminium surfaces and submerged in 5 wt % nacl in d2o . nr measurements were taken over time to observe interfacial changes in thickness , density , and roughness , and therefore monitor the corrosion of the metal . nr shows that the sh nature of the surface prevents infiltration of water into the porous sh film and thus limits the exposure of corrosive elements to the metal surface .
story_separator_special_tag since the first hydrogen water clathrates were synthesized , much attention has been placed on them as a means of storing h2 for fuel . the type ii hydrogen clathrate contains 136 water molecules forming 24 cages . sixteen of these cages have a small pentagonal dodecahedral ( d , 5^12 ) structure , while the remaining cages are large 16-hedra ( h , 5^12 6^4 ) . initial reports demonstrated a 5.3 % mass storing capacity , above the 2005 doe target ( 4.5 wt % ) for h2 fuel storage . because of this large capacity , the distribution of the h2 molecules through these cages has been the emphasis of several works . initial studies showed two guest h2 molecules occupying the small cages while four guest h2 molecules were stored in the large cage . recent experimental and theoretical molecular dynamics ( md ) simulations , however , suggested that only one h2 molecule is stored in the small cage , thereby reducing the mass ratio to 3.9 wt % . this work will reinvestigate the hydrogen storage in type-ii hydrate clathrates using ab initio quantum chemical calculations . story_separator_special_tag gas hydrates are crystalline inclusion compounds , where molecular cages of water trap lighter species under specific thermodynamic conditions . hydrates play an essential role in global energy systems , as both a hindrance when formed in traditional fuel production and a substantial resource when formed by nature . in both traditional and unconventional fuel production , hydrates share interfaces with a tremendous diversity of materials , including hydrocarbons , aqueous solutions , and inorganic solids . this article presents a state-of-the-art understanding of hydrate interfacial thermodynamics and growth kinetics , and the physicochemical controls that may be exerted on both . specific attention is paid to the molecular structure and interactions of water , guest molecules , and hetero-molecules ( e.g. , surfactants ) near the interface . gas hydrate nucleation and growth mechanics are also presented , based on studies using a combination of molecular modeling , vibrational spectroscopy , and x-ray and neutron diffraction . the fundamental physical and chemical knowledge and methods presented in this review may be of value in probing parallel systems of crystal growth in solid inclusion compounds , crystal growth modifiers , emulsion stabilization , and reactive particle flow in solid slurries story_separator_special_tag the protein folding problem consists of three closely related puzzles : ( a ) what is the folding code ? ( b ) what is the folding mechanism ? ( c ) can we predict the native structure of a protein from its amino acid sequence ? once regarded as a grand challenge , protein folding has seen great progress in recent years . now , foldable proteins and nonbiological polymers are being designed routinely and moving toward successful applications . the structures of small proteins are now often well predicted by computer methods . and , there is now a testable explanation for how a protein can fold so quickly : a protein solves its large global optimization problem as a series of smaller local optimization problems , growing and assembling the native structure from peptide fragments , local structures first . story_separator_special_tag biological organization may be viewed as consisting of two stages : biosynthesis and assembly .
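the storage capacities quoted in the clathrate abstract above follow from simple mass arithmetic on the type ii unit cell ( 136 waters , 16 small cages , 8 large cages ) ; a quick sanity check under the two occupancy scenarios discussed there gives roughly 5.0 and 3.8 wt % , in the neighborhood of the quoted 5.3 and 3.9 wt % figures ( the small residual gap presumably reflects details of the occupancy model not stated in the abstract ) :

```python
M_H2, M_H2O = 2.016, 18.015               # molar masses, g/mol
N_WATER, N_SMALL, N_LARGE = 136, 16, 8    # type ii unit cell contents

def h2_weight_percent(per_small, per_large):
    # mass fraction of hydrogen in a fully occupied unit cell
    m_h2 = (N_SMALL * per_small + N_LARGE * per_large) * M_H2
    m_total = m_h2 + N_WATER * M_H2O
    return 100.0 * m_h2 / m_total

print(f"{h2_weight_percent(2, 4):.1f} wt %")  # doubly filled small cages
print(f"{h2_weight_percent(1, 4):.1f} wt %")  # singly filled small cages
```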
the assembly process is largely under thermodynamic control ; that is , as a first approximation it represents a search by each structural molecule for its state of lowest chemical potential . the hydrophobic effect is a unique organizing force , based on repulsion by the solvent instead of attractive forces at the site of organization . it is responsible for assembly of membranes of cells and intracellular compartments , and the absence of strong attractive forces makes the membranes fluid and deformable . the spontaneous folding of proteins , however , involves directed polar bonds , leading to more rigid structures . intercellular organization probably involves polar bonds between cell surface proteins . story_separator_special_tag there are strong reasons to believe that the laws , principles and constraints of physics and chemistry are universal . it is much less clear how this universality translates into our understanding of the origins of life . conventionally , discussions of this topic focus on chemistry that must be sufficiently rich to seed life . although this is clearly a prerequisite for the emergence of living systems , i propose to focus instead on self-organization of matter into functional structures capable of reproduction , evolution and responding to environmental changes . in biology , most essential functions are largely mediated by noncovalent interactions ( interactions that do not involve making or breaking chemical bonds ) . forming chemical bonds is only a small part of what living systems do . there are specific implications of this point of view for universality . i will concentrate on one of these implications . the strength of non-covalent interactions must be properly tuned . if they were too weak , the system would exhibit undesired , uncontrolled response to natural fluctuations of physical and chemical parameters . if they were too strong , the kinetics of biological processes would be slow and the energetics costly . story_separator_special_tag the role of solute attractive forces on hydrophobic interactions is studied by coordinated development of theory and simulation results for ar atoms in water . we present a concise derivation of the local molecular field ( lmf ) theory for the effects of solute attractive forces on hydrophobic interactions , a derivation that clarifies the close relation of lmf theory to the exp approximation applied to this problem long ago . the simulation results show that the change from purely repulsive atomic solute interactions to include realistic attractive interactions diminishes the strength of hydrophobic bonds . for the ar-ar rdfs considered pointwise , the numerical results for the effects of solute attractive forces on hydrophobic interactions are of opposite sign and larger in magnitude than predicted by lmf theory . that comparison is discussed from the point of view of quasi-chemical theory , and it is suggested that the first reason for this difference is the incomplete evaluation within lmf theory of the hydration energy of the ar pair . with a recent suggestion for the system-size extrapolation of the required correlation function integrals , the ar-ar rdfs permit evaluation of osmotic second virial coefficients $ b_2 $ story_separator_special_tag in this work we investigate the fast anomalous diffusion of hydrogen molecules in water using car-parrinello molecular dynamics simulations . we employ voronoi polyhedra analysis to distinguish between void diffusion and void hopping .
our results indicate that a combination of both mechanisms is sufficient to explain anomalous diffusion . furthermore , we investigate the geometric and electronic structure of the first solvation shell . story_separator_special_tag we report on our studies of the structural properties of a hydrogen molecule dissolved in liquid water . the radial distribution function , coordination number and coordination number distribution are calculated using different representations of the interatomic forces within molecular dynamics ( md ) , monte carlo ( mc ) and ab initio molecular dynamics ( aimd ) simulation frameworks . although structural details differ in the radial distribution functions generated from the different force fields , all approaches agree that the average and most probable number of water molecules occupying the inner hydration sphere around hydrogen is 16 . furthermore , all results exclude the possibility of clathrate-like organization of water molecules around the hydrophobic molecular hydrogen solute . story_separator_special_tag the thermodynamic properties of hydrogen gas in liquid water are investigated using monte carlo molecular simulation and the quasichemical theory of liquids . the free energy of hydrogen hydration obtained by monte carlo simulations agrees well with the experimental result , indicating that the classical force fields used in this work provide an adequate description of intermolecular interactions in the aqueous hydrogen system . two estimates of the hydration free energy for hydrogen made within the framework of the quasichemical theory also agree reasonably well with experiment provided local anharmonic motions and distant interactions with explicit solvent are treated . both quasichemical estimates indicate that the hydration free energy results from a balance between chemical association and molecular packing . additionally , the results suggest that the molecular packing term is almost equally driven by unfavorable enthalpic and entropic components . story_separator_special_tag the aqueous hydrogen molecule is studied with molecular dynamics simulations at ambient temperature and pressure conditions , using a newly developed flexible and polarizable h2 molecule model . the design and implementation of this model , compatible with an existing flexible and polarizable force field for water , is presented in detail . the structure of the hydration layer suggests that first-shell water molecules accommodate the h2 molecule without major structural distortions and two-dimensional , radial-angular distribution functions indicate that as opposed to strictly tangential , the orientation of these water molecules is such that the solute is solvated with one of the free electron pairs of h2o . the calculated self-diffusion coefficient of h2 ( aq ) agrees very well with experimental results and the time dependence of mean square displacement suggests the presence of caging on a time scale corresponding to hydrogen bond network vibrations in liquid water . the orientational correlation function of h2 experiences an extremely short-scale decay , making the h2-h2o interaction potential essentially isotropic by virtue of rotational averaging .
the inclusion of explicit polarizability in the model allows for the calculation of raman spectra that agree very well with available experimental data on h2 story_separator_special_tag abstract solution equilibria are at the core of solvent-catalyzed reactions , solute separations , drug delivery , vapor partitioning and interfacial phenomena . molecular simulation using thermodynamic integration or perturbation theory allows the calculation of these equilibria from parameterized force field models ; however , the statistical many-body nature of solution environments inevitably complicates molecular interpretations of these phenomena . if our goal is molecular understanding in addition to prediction , then the statistical thermodynamic theories designed for mechanistic insight from structural analyses are especially important . in this report , we survey recent advances in the thermodynamic analysis of rigorous local structural models based on chemical structure . story_separator_special_tag normal hexane is adopted as a typical organic solvent for comparison with liquid water in modern theories of hydrophobic hydration , and detailed results are worked out here for the c-atom density in contact with a hard-sphere solute , g ( r ) , for the full range of solute radii . the intramolecular structure of an n-hexane molecule introduces qualitative changes in g ( r ) compared to scaled-particle models for liquid water . also worked out is a revised scaled-particle model implemented with molecular simulation results for liquid n-hexane . the classic scaled-particle model , acknowledging the intramolecular structure of an n-hexane molecule , is in qualitative agreement with the revised scaled-particle model results , and is consistent in sizing the methyl/methylene sites which compose n-hexane in the simulation model . the classic and revised scaled-particle models disagree for length scales greater than the radius of a methyl group , however . the liquid-vapor surface tension of n-hexane predicted by the classic s . story_separator_special_tag simple theoretical concepts and models have been helpful to understand the folding rates and routes of single-domain proteins . as reviewed in this article , a physical principle that appears to underlie these models is loop closure . story_separator_special_tag parallel-tempering md results for a ch3-(ch2-o-ch2)_m-ch3 chain in water are exploited as a database for analysis of collective structural characteristics of the peo globule with a goal of defining models permitting statistical thermodynamic analysis of dispersants of corexit type . the chain structure factor , relevant to neutron scattering from a deuterated chain in null water , is considered specifically . the traditional continuum-gaussian structure factor is inconsistent with the simple high-k behavior , but we consider a discrete-gaussian model that does achieve that consistency . shifting and scaling the discrete-gaussian model helps to identify the low-k to high-k transition near k ≈ 2π/0.6 nm when an empirically matched number of gaussian links is about one-third of the total number of effective atom sites . this short distance-scale boundary of 0.6 nm is directly verified with the r-space distributions , and this distance is thus identified with a natural size for coarsened monomers . the probability distrib .
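the discrete-gaussian chain model invoked in the last abstract above lends itself to a short numerical illustration . the sketch below evaluates the double-sum chain structure factor ; the link count and the 0.6 nm segment scale are illustrative stand-ins suggested by the abstract , not parameters taken from that work .

```python
import numpy as np

def discrete_gaussian_sk(k, n_links=10, b=0.6):
    """chain structure factor of a discrete gaussian chain:
        s(k) = (1/n) * sum_{i,j} exp(-k^2 b^2 |i - j| / 6)
    n_links and b (nm) are illustrative; the continuum limit of this
    double sum is the familiar debye function."""
    k = np.atleast_1d(np.asarray(k, dtype=float))
    i = np.arange(n_links)
    sep = np.abs(i[:, None] - i[None, :]).ravel()   # |i - j| for all link pairs
    return np.exp(-np.outer(k ** 2, sep) * b ** 2 / 6.0).sum(axis=1) / n_links

# usage: s = discrete_gaussian_sk(np.linspace(0.5, 20.0, 40)); note s(0) = n_links
```

at small k this sum reduces to the continuum gaussian result , while at large k it levels off toward 1 for discrete scatterers , which is one way to read the low-k to high-k transition the abstract describes .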
story_separator_special_tag the weighted histogram analysis method ( wham ) , an extension of ferrenberg and swendsen 's multiple histogram technique , has been applied for the first time on complex biomolecular hamiltonians . the method is presented here as an extension of the umbrella sampling method for free energy and potential of mean force calculations . this algorithm possesses the following advantages over methods that are currently employed : ( 1 ) it provides a built-in estimate of sampling errors thereby yielding objective estimates of the optimal location and length of additional simulations needed to achieve a desired level of precision ; ( 2 ) it yields the best value of free energies by taking into account all the simulations so as to minimize the statistical errors ; ( 3 ) in addition to optimizing the links between simulations , it also allows multiple overlaps of probability distributions for obtaining better estimates of the free energy differences . by recasting the ferrenberg-swendsen multiple histogram equations in a form suitable for molecular mechanics type hamiltonians , we have demonstrated the feasibility and robustness of this method by applying it to a test problem of the generation of the potential of story_separator_special_tag expressions of some thermodynamic functions as correlation-function integrals , such as the ornstein-zernike integral , the kirkwood-buff integrals , and the integral formulas for virial coefficients , are recalled . it is noted , as has been remarked before , that the choice of molecular centers from which intermolecular distances are measured is arbitrary and that different choices lead to different forms of the correlation functions but that the integrals must be independent of those choices . this is illustrated with the second virial coefficients of hard spheres in one , two , and three dimensions , with that of gaseous propane in three dimensions , and with computer simulations of the pair correlations in water and in a dilute aqueous solution of propane . story_separator_special_tag molecular dynamics simulations of water with both multi-kr and single kr atomic solutes are carried out to implement quasi-chemical theory evaluation of the hydration free energy of kr ( aq ) . this approach obtains free energy differences reflecting kr-kr interactions at higher concentrations . those differences are negative changes in hydration free energies with increasing concentrations at constant pressure . the changes are due to a slight reduction of packing contributions in the higher concentration case . the observed kr-kr distributions , analyzed with the extrapolation procedure of krüger et al. , yield a modestly attractive osmotic second virial coefficient , b2 ≈ -60 cm^3/mol . the thermodynamic analysis interconnecting these two approaches shows that they are closely consistent with each other , providing support for both approaches . story_separator_special_tag we report methane's osmotic virial coefficient over the temperatures 275 to 370 k and pressures from 1 bar up to 5000 bar evaluated using molecular simulations of a united-atom description of methane in tip4p/2005 water . in the first half of this work , we describe an approach for calculating the water-mediated contribution to the methane-methane potential of mean force over all separations down to complete overlap .
the enthalpic , entropic , heat capacity , volumetric , compressibility , and thermal expansivity contributions to the water-mediated interaction free energy are subsequently extracted from these simulations by fitting to a thermodynamic expansion over all the simulated state points . in the second half of this work , methane's correlation functions are used to evaluate its osmotic second virial coefficient in the temperature-pressure plane . the virial coefficients evaluated from the mcmillan-mayer correlation function integral are shown to be in excellent agreement with those determined from the conc . story_separator_special_tag on the basis of a gaussian quasi-chemical model of hydration , a model of non van der waals character , we explore the role of attractive methane-water interactions in the hydration of methane and in the potential of mean force between two methane molecules in water . we find that the hydration of methane is dominated by packing and a mean-field energetic contribution . contributions beyond the mean-field term are unimportant in the hydration phenomena for a hydrophobic solute such as methane . attractive solute-water interactions make a net repulsive contribution to these pair potentials of mean force . with no conditioning , the observed distributions of binding energies are super-gaussian and can be effectively modeled by a gumbel ( extreme value ) distribution . this further supports the view that the characteristic form of the unconditioned distribution in the high-e tail is due to energetic interactions with a small number of molecules . generalized extreme value distributions also effectively model the results with minimal conditioning , but in those cases the distributions are sufficiently narrow that details of their shape aren't significant . story_separator_special_tag the hydrophobic interaction between two apolar ( lennard-jones ) spheres dissolved in a model of liquid water ( st2 water ) is simulated using the force bias monte carlo technique recently devised by the authors . importance sampling techniques are devised and used to give a relatively accurate determination of the potential of mean force of the two apolar spheres as a function of their separation . this determination shows that there are two relatively stable configurations for the spheres . in one configuration each member of the pair sits in its own water cage with one water molecule fitting between them . there is a free energy barrier separating this from the other stable configuration which is such that no water molecule sits between the spheres . this conclusion is shown to be quantitatively consistent with the recent semiempirical theory of pratt and chandler and is in disagreement with some previous monte carlo studies . story_separator_special_tag abstract a molecular model of poorly understood hydrophobic effects is heuristically developed using the methods of information theory . because primitive hydrophobic effects can be tied to the probability of observing a molecular-sized cavity in the solvent , the probability distribution of the number of solvent centers in a cavity volume is modeled on the basis of the two moments available from the density and radial distribution of oxygen atoms in liquid water . the modeled distribution then yields the probability that no solvent centers are found in the cavity volume . this model is shown to account quantitatively for the central hydrophobic phenomena of cavity formation and association of inert gas solutes .
the connection of information theory to statistical thermodynamics provides a basis for clarification of hydrophobic effects . the simplicity and flexibility of the approach suggest that it should permit applications to conformational equilibria of nonpolar solutes and hydrophobic residues in biopolymers . story_separator_special_tag an information theory model is used to construct a molecular explanation why hydrophobic solvation entropies measured in calorimetry of protein unfolding converge at a common temperature . the entropy convergence follows from the weak temperature dependence of occupancy fluctuations for molecular-scale volumes in water . the macroscopic expression of the contrasting entropic behavior between water and common organic solvents is the relative temperature insensitivity of the water isothermal compressibility . the information theory model provides a quantitative description of small molecule hydration and predicts a negative entropy at convergence . interpretations of entropic contributions to protein folding should account for this result . story_separator_special_tag water offers a large temperature domain of stable liquid , and the characteristic hydrophobic effects are first a consequence of the temperature insensitivity of equation-of-state features of the aqueous medium , compared to other liquids . on this basis , the known aqueous media and conditions offer low risk compared to alternatives as a matrix to which familiar molecular biological structures and processes have adapted . the current molecular-scale understanding of hydrophobic hydration is not conformant in detail with a standard structural entropy rationalization . that classic pictorial explanation may serve as a mnemonic , but isn't necessary . a more defensible view is that peculiar hydrophobic effects can be comprehended by examination of engineering parameters characterizing liquid water . story_separator_special_tag underlying assumptions have been examined in scaled-particle theory for the case of a rigid-sphere solute in liquid water . as a result , it has been possible to improve upon pierotti's corresponding analysis in a way that explicitly incorporates measured surface tensions and radial-distribution functions for pure water . it is pointed out along the way that potential energy nonadditivity should create an orientational bias for molecules in the liquid-vapor interface that is peculiar to water . some specific conclusions have been drawn about the solvation mode for the nonpolar rigid-sphere solute . story_separator_special_tag this letter considers several physical arguments about contributions to hydrophobic hydration of inert gases , constructs default models to test them within information theories , and gives information theory predictions using those default models with moment information drawn from simulation of liquid water . tested physical features include : packing or steric effects , the role of attractive forces that lower the solvent pressure , and the roughly tetrahedral coordination of water molecules in liquid water . packing effects ( hard sphere default model ) and packing effects plus attractive forces ( lennard-jones default model ) are ineffective in improving the prediction of hydrophobic hydration free energies of inert gases over the previously used gibbs and flat default models .
however , a conceptually simple cluster poisson model that incorporates tetrahedral coordination structure in the default model is one of the better performers for these predictions . these results provide a partial rationalization of the remarkable performance of the flat default model with two moments in previous applications . the cluster poisson default model thus will be the subject of further refinement . story_separator_special_tag hydrophobic hydration plays a crucial role in self-assembly processes over multiple length scales , from the microscopic origins of inert gas solubility in water , to the mesoscopic organization of proteins and surfactant structures , to macroscopic phase separation . many theoretical studies focus on the molecularly detailed interactions between oil and water , but the extrapolation of molecular-scale models to larger-length-scale hydration phenomena is sometimes not warranted . scaled particle theories are based upon an interpolative view of that microscopic ↔ macroscopic issue . this colloquium revisits the scaled particle theory proposed 30 years ago by stillinger [ j. solution chem . 2 , 141 ( 1973 ) ] , adopts a practical generalization , and considers the implications for hydrophobic hydration in light of our current understanding . the generalization is based upon identifying a molecular length , implicit in previous applications of scaled particle models , which provides an effective radius for joining microscopic and macroscopic descriptions . it will be demonstrated that the generalized theory correctly reproduces many of the anomalous thermodynamic properties of hydrophobic hydration for molecularly sized solutes , including solubility minima and story_separator_special_tag a microscopic theory is developed which can describe many of the structural and thermodynamic properties of infinitely dilute solutions of apolar solutes in liquid water . the theory is based on an integral equation for the pair correlation functions associated with spherical apolar species dissolved in water . it requires as input the experimentally determined oxygen-oxygen correlation function for pure liquid water . the theory is tested by computing thermodynamic properties for aqueous solutions of apolar solute species . the predictions of both the henry's law constant and the entropy of solution are in good agreement with experiment . the calculation of the latter quantity is essentially independent of any adjustable parameters . it is shown how the correlation functions we have calculated can be used to predict the solubility of more complicated , aspherical , and nonrigid solutes in liquid water . for the more complex molecules it is convenient to study the difference between the excess chemical potential o . story_separator_special_tag molecular dynamics simulations , 1.0 ns long , of methane particles in water show the existence of a tendency for aggregation of these solutes which increases with temperature in the range 319-351 k . this is measured by a rise in the contact-configuration peak in the methane-methane radial distribution function at the expense of a decrease in the solvent-separated configuration peak . the observed solute-solute interaction reaches a maximum at 351 k before decreasing in magnitude at 358 k . these observations are consistent with an entropy-driven hydrophobic attraction .
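the contact versus solvent-separated language of the methane aggregation abstract just above maps directly onto the potential of mean force w ( r ) = -k_b t ln g ( r ) . a minimal sketch , assuming a tabulated rdf is available :

```python
import numpy as np

KB_KCAL = 0.0019872  # boltzmann constant in kcal/(mol K)

def pmf_from_rdf(r, g_r, temperature=300.0):
    """w(r) = -k_B T ln g(r). for a methane-methane rdf, the contact and
    solvent-separated configurations appear as the first two minima of w(r),
    separated by a desolvation barrier."""
    g_safe = np.where(g_r > 0.0, g_r, np.nan)   # mask the excluded-volume core
    return -KB_KCAL * temperature * np.log(g_safe)
```

the temperature trend the abstract reports ( a growing contact peak between 319 and 351 k ) would show up here as a deepening contact minimum of w ( r ) relative to the solvent-separated one .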
story_separator_special_tag exact expressions for finite-volume kirkwood-buff ( kb ) integrals are derived for hyperspheres in one , two , and three dimensions . these integrals scale linearly with inverse system size . from this , accurate estimates of kb integrals for infinite systems are obtained , and it is shown that they converge much better than the traditional expressions . we show that this approach is very suitable for the computation of kb integrals from molecular dynamics simulations , as we obtain kb integrals for open systems by simulating closed systems . story_separator_special_tag molecular dynamics calculations have been performed for systems of two and several atomic solutes dissolved in a model for liquid water to study the hydrophobic association of nonpolar solutes in aqueous solutions . the parameters of the potentials were chosen to be appropriate for an aqueous solution of kr . in accordance with previous work , it is found that at infinite dilution a pair of atoms has two preferred configurations , one in which they are near neighbors and another in which they are separated by a water molecule , and that the latter is more stable . in the simulations of systems containing several atomic solutes , the atoms were found to form clusters in which the atom-atom distances corresponded to the formation of near-neighbor and solvent-separated pairs . on the average , the effect of the solvent is to tend to keep the solutes apart rather than to cause them to associate . this result contradicts the conventional wisdom on hydrophobic interaction , and a new analysis of experimental solubility data gives support to the result . story_separator_special_tag we examine in detail the theoretical underpinnings of previous successful applications of local molecular field ( lmf ) theory to charged systems . lmf theory generally accounts for the averaged effects of long-ranged components of the intermolecular interactions by using an effective or restructured external field . the derivation starts from the exact yvon-born-green hierarchy and shows that the approximation can be very accurate when the interactions averaged over are slowly varying at characteristic nearest-neighbor distances . application of lmf theory to coulomb interactions alone allows for great simplifications of the governing equations . lmf theory then reduces to a single equation for a restructured electrostatic potential that satisfies poisson 's equation defined with a smoothed charge density . because of this charge smoothing by a gaussian of width σ , this equation may be solved more simply than the detailed simulation geometry might suggest . proper choice of the smoothing length σ plays a major role in ensuring the accuracy of this approximation . we examine the results of a basic confinement of water between corrugated walls and justify the simple lmf equation used in a previous publication . we further generalize these results to confinements that include fixed story_separator_special_tag introduction . statistical mechanics and molecular distribution functions . computer `` experiments '' on liquids . diagrammatic expansions . distribution function theories . perturbation theories . time-dependent correlation functions and response functions . hydrodynamics and transport coefficients . microscopic theories of time-correlation functions . ionic liquids . simple liquid metals . molecular liquids . appendices . references . index .
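the finite-volume kirkwood-buff expressions summarized at the start of this block ( and the krüger et al. extrapolation used in the kr study earlier ) can be sketched in a few lines . the spherical-subvolume weight below is the standard three-dimensional form ; the extrapolation step is schematic :

```python
import numpy as np

def kb_integral_sphere(r, g_r, diameter):
    """finite-volume kb integral for a spherical subvolume of diameter L:
        G(L) = 4*pi * int_0^L dr r^2 (g(r) - 1) * (1 - 3x/2 + x^3/2),  x = r/L
    G(L) is asymptotically linear in 1/L, so the thermodynamic-limit KB
    integral is the intercept of G(L) versus 1/L."""
    mask = r <= diameter
    x = r[mask] / diameter
    weight = 1.0 - 1.5 * x + 0.5 * x ** 3          # finite-volume geometric weight
    integrand = 4.0 * np.pi * r[mask] ** 2 * (g_r[mask] - 1.0) * weight
    return np.trapz(integrand, r[mask])

# extrapolation sketch, with illustrative subvolume diameters:
# sizes = np.array([1.0, 1.5, 2.0, 3.0])
# g_l = [kb_integral_sphere(r, g_r, l) for l in sizes]
# g_inf = np.polyfit(1.0 / sizes, g_l, 1)[1]       # intercept at 1/L -> 0
```

this is the sense in which kb integrals for open systems can be obtained from closed-system simulations : the weight corrects the truncated integral for the finite observation volume .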
story_separator_special_tag perturbation theory is used to study the solvation of nonpolar molecules in water , supported by extensive computer simulations . two contributions to the solvent-mediated solute-water interactions are identified : a cavity potential of mean force that transforms by a simple translation when the solute size changes , and a solute-size-independent cavity-expulsion potential . the latter results in weak dewetting of the solute-water interface that can explain the approximate area dependence of solvation free energies with apparent surface tensions similar to macroscopic values . story_separator_special_tag we simulated the interface between liquid water and a stationary phase of tethered n-c18 alkyl chains at a thermodynamic state of low pressure and water vapor-liquid coexistence . the interfacial water ( oxygen atom ) density profile so obtained is compared with a precisely defined proximal density of water molecules ( oxygen atoms ) conditional on the alkyl chain configurations . though the conventional interfacial density profile takes a traditional monotonic form , the proximal radial distribution of oxygen atoms around a specific methyl ( methylene ) group closely resembles that for a solitary methane solute in liquid water . moreover , this proximal radial distribution function is sufficient to accurately reconstruct the water oxygen density profile of the oil-water interface . these observations provide an alternative interpretation to collective drying or vaporization interpretations of commonly observed oil-water interfacial profiles for which water penetration into the interfacial region plays a role .
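a minimal sketch of the proximal construction described in the last abstract : each water oxygen is binned by the distance to its nearest chain carbon . the normalization by accessible shell volumes , which the paper handles precisely , is deliberately left out here :

```python
import numpy as np

def proximal_histogram(oxygens, carbons, r_max=1.0, n_bins=100):
    """bin water oxygens by the distance to their *nearest* chain carbon.
    oxygens: (n_o, 3) array, carbons: (n_c, 3) array, same length units.
    dividing each bin count by its solvent-accessible shell volume (not done
    here) turns the counts into the proximal radial distribution function."""
    d = np.linalg.norm(oxygens[:, None, :] - carbons[None, :, :], axis=2)
    nearest = d.min(axis=1)              # proximal distance for each oxygen
    counts, edges = np.histogram(nearest, bins=n_bins, range=(0.0, r_max))
    return counts, edges
```

the point of the construction is that this conditional one-dimensional function , resembling the methane-in-water rdf , is enough to rebuild the full interfacial oxygen density profile .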
the present document has been developed within the 3rd generation partnership project ( 3gpp™ ) and may be further elaborated for the purposes of 3gpp . the present document has not been subject to any approval process by the 3gpp organizational partners and shall not be implemented . this specification is provided for future development work within 3gpp only . the organizational partners accept no liability for any use of this specification . specifications and reports for implementation of the 3gpp™ system should be obtained via the 3gpp organizational partners ' publications offices . no part may be reproduced except as authorized by written permission . the copyright and the foregoing restriction extend to reproduction in all media . umts is a trade mark of etsi registered for the benefit of its members 3gpp is a trade mark of etsi registered for the benefit of its members and of the 3gpp organizational partners lte is a trade mark of etsi currently being registered for the benefit of its members and of the 3gpp organizational partners gsm® and the gsm logo are registered and owned by the gsm association story_separator_special_tag this work presents a new architecture , multihop cellular network ( mcn ) , for wireless communications . mcn preserves the benefit of conventional single-hop cellular networks ( scn ) where the service infrastructure is constructed by fixed bases , and it also incorporates the flexibility of ad-hoc networks where wireless transmission through mobile stations in multiple hops is allowed . mcn can reduce the required number of bases or improve the throughput performance , while limiting path vulnerability encountered in ad-hoc networks . in addition , mcn and scn are analyzed , in terms of mean hop count , hop-by-hop throughput , end-to-end throughput , and mean number of channels ( i.e . simultaneous transmissions ) under different traffic localities and transmission ranges . numerical results demonstrate that the throughput of mcn exceeds that of scn ; the former also increases as the transmission range decreases . the above results can be accounted for by the different orders , linear and square , at which the mean hop count and mean number of channels increase , respectively . story_separator_special_tag spectrum sharing is a novel opportunistic strategy to improve spectral efficiency of wireless networks . much of the research to quantify such a gain is done under the premise that the spectrum is being used inefficiently by the primary network . our main result is that even in a spectrally efficient network , device to device users can exploit the network topology to render gains in additional throughput . the focus will be on providing ad-hoc multihop access to a network for device to device users , that are transparent to the primary wireless cellular network , while sharing the primary network 's resources . story_separator_special_tag in this article device-to-device ( d2d ) communication underlaying a 3gpp lte-advanced cellular network is studied as an enabler of local services with limited interference impact on the primary cellular network . the approach of the study is a tight integration of d2d communication into an lte-advanced network . in particular , we propose mechanisms for d2d communication session setup and management involving procedures in the lte system architecture evolution . moreover , we present numerical results based on system simulations in an interference limited local area scenario .
our results show that d2d communication can increase the total throughput observed in the cell area . story_separator_special_tag in this paper the possibility of device-to-device ( d2d ) communications as an underlay of an lte-a network is introduced . the d2d communication enables new service opportunities and reduces the enb load for short range data intensive peer-to-peer communication . the cellular network may establish a new type of radio bearer dedicated for d2d communications and stay in control of the session setup and the radio resources without routing the user plane traffic . the paper addresses critical issues and functional blocks to enable d2d communication as an add-on functionality to the lte sae architecture . unlike 3g spread spectrum cellular and ofdm wlan techniques , lte-a resource management is fast and operates in high time-frequency resolution . this could allow the use of non-allocated time-frequency resources , or even partial reuse of the allocated resources for d2d with enb controlled power constraints . the feasibility and the range of d2d communication , and its impact on the power margins of cellular communications are studied by simulations in two example scenarios . the results demonstrate that by tolerating a modest increase in interference , d2d communication with practical range becomes feasible . by tolerating higher interference power the d2d story_separator_special_tag in this paper , we introduce two innovative concepts which have not been present in cellular systems for imt-advanced so far : device-to-device ( d2d ) communication and network coding . both of them are promising techniques to increase the efficiency of cellular communication systems , especially from a network point of view . first , we study the potential gains from d2d communication as an underlay to the downlink of a cellular network in an interference limited multi-cell indoor scenario . our results show that multi-antenna receivers are required to achieve sufficient sinrs that allow device-to-device communication when the d2d connections re-use cellular resources within the cell . then we investigate applying network coding for cooperative transmission and relay-based communications in networks . for the cooperative transmission , we propose to combine wireless diversity and the capability of increasing max-flow of network coding . we show that designed non-binary network codes can substantially decrease outage probability and frame error rate ( fer ) . for the relay based networks , we propose a novel network coding protocol applied to uplink cellular traffic protocol with an efficient decoding approach at the receiver . we complement this method with user grouping in story_separator_special_tag a hybrid system of cellular mode and device-to-device ( d2d ) mode is considered in this paper , where the cellular uplink resource is reused by the d2d transmission . in order to maximize the overall system performance , the mutual interference between cellular and d2d sub-systems has to be addressed . here , two mechanisms are proposed to solve the problem : one is mitigating the interference from cellular transmission to d2d communication by an interference tracing approach . the other one is aiming to reduce the interference from d2d transmission to cellular communication by a tolerable interference broadcasting approach . both mechanisms can work independently or jointly to synergize the transmission in the hybrid system for efficient resource utilization .
in the end , simulation is conducted to study the performance of the proposed schemes , which shows satisfactory results . story_separator_special_tag an efficient receiving status feedback in multicast communications can improve the system performance considerably . in this paper , we propose a novel compressed hybrid automatic repeat request ( harq ) mechanism for the reliable multicast services in the cellular network controlled device-to-device ( d2d ) communications . closely located d2d cluster devices transmit the acknowledgement and negative acknowledgement ( ack/nack ) message to a cluster head through d2d links directly , after that the cluster head feeds back the whole d2d cluster receiving status using 2-bit ack/nack to the cellular network . performance analysis and numerical simulation results reveal that the proposed harq feedback mechanism is better than the conventional d2d multicast mechanisms in terms of the error probability and signaling overhead . story_separator_special_tag device-to-device ( d2d ) communications help improve the performance of wireless multicast services in cellular networks via cooperative retransmissions among multicast recipients within a cluster . resource utilization efficiency should be taken into account in the design of d2d communication systems . to maximize resource efficiency of d2d retransmissions , there is a tradeoff between multichannel diversity and multicast gain . in this paper , by analyzing the relationship between the number of relays and minimal time-frequency resource cost on retransmissions , we derive a closed-form probability density function ( pdf ) for an optimal number of d2d relays . motivated by the analysis , we then propose an intracluster d2d retransmission scheme with optimized resource utilization , which can adaptively select the number of cooperative relays performing multicast retransmissions and give an iterative subcluster partition algorithm to enhance retransmission throughput . exploiting both multichannel diversity and multicast gain , the proposed scheme achieves a significant gain in terms of resource utilization if compared with its counterparts with a fixed number of relays . story_separator_special_tag this article studies direct communications between user equipments in the lte-advanced cellular networks . different from traditional device-to-device ( d2d ) communication technologies such as bluetooth and wifi-direct , the operator controls the communication process to provide better user experience and make a profit accordingly . the related usage cases and business models are analyzed . some technical considerations are discussed , and a resource allocation and data transmission procedure is provided . story_separator_special_tag we propose a new scheme for increasing the throughput of video files in cellular communications systems . this scheme exploits ( i ) the redundancy of user requests as well as ( ii ) the considerable storage capacity of smartphones and tablets . users cache popular video files and - after receiving requests from other users - serve these requests via device-to-device localized transmissions . the file placement is optimal when a central control knows a priori the locations of wireless devices when file requests occur . however , even a purely random caching scheme shows only a minor performance loss compared to such a genie-aided scheme .
we then analyze the optimal collaboration distance , trading off frequency reuse with the probability of finding a requested file within the collaboration distance . we show that an improvement of spectral efficiency of one to two orders of magnitude is possible , even if there is not very high redundancy in video requests . story_separator_special_tag video is the main driver for the inexorable increase in wireless data traffic . in this paper we analyze a new architecture in which device-to-device ( d2d ) communications is used to drastically increase the capacity of cellular networks for video transmission . users cache popular video files and - after receiving requests from other users - serve these requests via d2d localized transmissions ; the short range of the d2d transmission enables frequency reuse within the cell . we analyze the scaling behavior of the throughput with the number of devices per cell . the user content request statistics , as well as the caching distribution , are modeled by a zipf distribution with parameters r and c , respectively . for the practically important case r c > 1 , we derive a closed form expression for the scaling behavior of the number of d2d links that coexist without interference . our analysis relies on a novel poisson approximation result for wireless networks obtained through the chen-stein method . story_separator_special_tag in this paper , we address a resource allocation problem , where a pair of device-to-device terminals are integrated into a time division duplex ( tdd ) cellular network . by introducing an incremental relay transmission scheme for the d2d communication , the d2d transmitter , traditionally believed to be the source of interference , is coordinated with other cellular user equipments ( cues ) in the uplink session . in consequence , both the d2d receiver and the central base station ( cbs ) are able to decode the message sent from the d2d transmitter . the cbs , in the following downlink session , may forward this message to the d2d receiver if the direct d2d link is in outage . we formulate and solve the cell throughput maximization problem for three transmission modes : cellular , underlay transmission , and incremental relay mode . simulation results show that the proposed incremental relay scheme not only achieves higher spectral efficiency , but also provides more reliable d2d transmission than the cellular relay and the underlay scheme . story_separator_special_tag wireless cellular networks feature two emerging technological trends . the first is the direct device-to-device ( d2d ) communications , which enables direct links between the wireless devices that reutilize the cellular spectrum and radio interface . the second is that of machine-type communications ( mtc ) , where the objective is to attach a large number of low-rate low-power devices , termed machine-type devices ( mtds ) to the cellular network . mtds pose new challenges to the cellular network , one of which is that the low transmission power can lead to outage problems for the cell-edge devices . another issue imminent to mtc is the massive access that can lead to overload of the radio interface . in this paper we explore the opportunity opened by d2d links for supporting mtds , since it can be desirable to carry the mtc traffic not through direct links to a base station , but through a nearby relay . mtc is modeled as a fixed-rate traffic with an outage requirement .
we propose two network-assisted d2d schemes that enable the cooperation between mtds and standard cellular devices , thereby meeting the mtc outage requirements while maximizing story_separator_special_tag mobile broadband demand keeps growing at an overwhelming pace . though emerging wireless technologies will provide more bandwidth , the increase in demand may easily consume the extra bandwidth . to alleviate this problem , we propose using the content available on individual devices as caches . particularly , when a user reaches areas with dense clusters of mobile devices , `` data spots '' , the operator can instruct the user to connect with other users sharing similar interests and serve the requests locally . this paper presents a feasibility study as well as a prototype implementation of this idea . story_separator_special_tag this paper proposes flashlinq - a synchronous peer-to-peer wireless phy/mac network architecture for distributed channel allocation . by leveraging the fine-grained parallel channel access of ofdm , flashlinq develops an analog energy-level based signaling scheme that enables sir ( signal to interference ratio ) based distributed scheduling . this new signaling mechanism and the corresponding allocation algorithms permit efficient channel-aware spatial resource allocation , leading to significant gains over a csma/ca system with rts/cts . flashlinq is a complete system architecture including ( i ) timing and frequency synchronization derived from cellular spectrum , ( ii ) peer discovery , ( iii ) link management , and ( iv ) channel-aware distributed power , data-rate and link scheduling . we implement flashlinq over licensed spectrum on a dsp/fpga platform . in this paper , we present performance results for flashlinq using both implementation and simulations . story_separator_special_tag one of the major features to be developed in 3gpp release 12 is direct device-to-device ( aka d2d ) communication , as a support for so-called 3gpp proximity services ( or prose ) . the main advantage of using d2d communication in 3gpp networks is the provision of higher data rates and radio resource usage rates , thanks to the proximity of communicating users . this paper reports a feasibility study of cellular-d2d reuse of radio resource , i.e . simultaneous use for cellular and d2d links , in the context of an lte network . based on real constraints from recent 3gpp standardization activities , the paper investigates the conditions for such an allocation scheme to serve the expected quality of service ( qos ) on both links . in this paper interference constraints are identified along with a possible associated lte system design ( i.e . lte network and user equipments ) , including message sequence charts ( mscs ) and related protocols . finally , results from simulation are shown and analyzed to draw measurable conclusions expressed in terms of probability of reuse . story_separator_special_tag abstract a primary technological challenge after a disaster is rapid deployment of temporary infrastructure to provide communications for disaster management workers . 3gpp release 12 defines proximity service ( prose ) to allow direct communications among users for long term evolution-advanced ( lte-a ) public safety . to support group communications , each group member has to receive data sent from the group application server ( gas ) maintained in prose .
this paper discusses the use of a vehicle-mounted mobile base station to provide group communications over a two-tier heterogeneous architecture with the aid of gas and prose links . in this method , each vehicle-mounted mobile base station ( bs ) works in a cell breathing manner to expand or shrink the cell coverage periodically . this ensures that the gas has up-to-date proximity knowledge of all users while the part of high-cost group data traffic due to the users located far from the bs can be offloaded to low-cost prose links instead . we formulate the minimum cost problem of the prose-based file distribution ( pfd ) in such architecture . we prove the minimum pfd problem is np-complete and then propose three source selection algorithms for story_separator_special_tag device-to-device communication is likely to be added to lte in 3gpp release 12 . in principle , exploiting direct communication between nearby mobile devices will improve spectrum utilization , overall throughput , and energy consumption , while enabling new peer-to-peer and location-based applications and services . d2d-enabled lte devices can also become competitive for fallback public safety networks , which must function when cellular networks are not available or fail . introducing d2d poses many challenges and risks to the long-standing cellular architecture , which is centered around the base station . we provide an overview of d2d standardization activities in 3gpp , identify outstanding technical challenges , draw lessons from initial evaluation studies , and summarize `` best practices '' in the design of a d2d-enabled air interface for lte-based cellular networks . story_separator_special_tag we consider device-to-device ( d2d ) communications underlaying a cellular network to accommodate local services . the system aims to optimize the overall cell throughput while giving priority to the cellular service . in this paper , we study the impact of a fading environment on a d2d enabled cellular network . the results show that the system experiences an increased cellular service outage probability and a decreased cell throughput . we also show that a conservative optimization scheme can effectively control the cellular service outage . even when using the conservative scheme the cell throughput increases significantly compared to cellular-only transmission which shows high potential of underlay d2d communications . story_separator_special_tag an innovative resource allocation scheme is proposed to improve the performance of device-to-device ( d2d ) communications as an underlay in downlink ( dl ) cellular networks . to optimize the system sum rate over the resource sharing of both d2d and cellular modes , we introduce a sequential second price auction as the allocation mechanism . in the auction , all the spectrum resources are considered as a set of resource units , which are auctioned off by groups of d2d pairs in sequence . we first formulate the value of each resource unit for each d2d pair , as a basis of the proposed auction . and then a detailed auction algorithm is explained using an n-ary tree . the equilibrium path of a sequential second price auction is obtained in the auction process , and the state value of the leaf node in the end of the path represents the final allocation . the simulation results show that the proposed auction algorithm leads to a good performance on the system sum rate , efficiency and fairness .
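the sequential second price auction in the abstract above can be caricatured in a few lines . this myopic version assumes truthful per-unit bids and exogenous valuations ; the paper's equilibrium analysis over an n-ary tree is not reproduced here :

```python
def sequential_second_price(units, valuations):
    """units: list of resource-unit ids. valuations[pair][unit]: value of a
    unit to a d2d pair (in the paper these come from achievable rates; here
    they are plain inputs). each unit goes to the highest bidder, who pays
    the second-highest bid, in sequence."""
    allocation, payments = {}, {}
    for unit in units:
        bids = sorted(((val[unit], pair) for pair, val in valuations.items()),
                      reverse=True)
        (_, winner), (second_bid, _) = bids[0], bids[1]  # needs >= 2 bidders
        allocation[unit] = winner
        payments[unit] = second_bid          # second-price payment rule
    return allocation, payments

# example with two hypothetical d2d pairs bidding on two resource units:
# sequential_second_price(["ru0", "ru1"],
#     {"d2d_a": {"ru0": 3.2, "ru1": 1.1}, "d2d_b": {"ru0": 2.5, "ru1": 2.0}})
```

in the sequential setting truthful bidding is generally not dominant , which is exactly why the paper searches the game tree for the equilibrium path instead of using this greedy rule .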
story_separator_special_tag it is expected that device-to-device ( d2d ) communication is allowed to underlay future cellular networks such as imt-advanced for spectrum efficiency . however , by reusing the uplink spectrum with the cellular system , the interference to d2d users has to be addressed to maximize the overall system performance . in this paper , a novel method to deal with the resource allocation and interference avoidance issues by utilizing the network peculiarity of a hybrid network to share the uplink resource is proposed and the implementation details are described in a real cellular system . simulation results show that satisfactory performance can be achieved by using the proposed mechanism . story_separator_special_tag this paper considers device-to-device ( d2d ) communications underlaying cellular networks with a multi-antenna base station ( bs ) . the bs serves its own cellular users while letting another remote terminal directly transmit signals to its nearby receiver via a d2d link . two transmit strategies including beamforming ( bf ) and interference cancellation ( ic ) are considered at the bs for performance evaluation in terms of achievable channel capacity . the capacity performance of two different cases with perfect and quantized channel knowledge at the transmitter is derived with closed-form expressions . based on these results , an adaptive transmission scheme to switch between bf and ic is proposed . numerical results verify the accuracy of the derived expressions and draw the operating regions of bf/ic strategies . story_separator_special_tag device-to-device ( d2d ) communications underlaying cellular networks have recently been considered as a promising means to improve the resource utilization of the cellular network and the user throughput between devices in proximity to each other . in this paper , we investigate the resource sharing problem to optimize the system performance in such a scenario . specifically , we formulate the interference relationships among different d2d communication links and cellular communication links as a novel interference-aware graph , and propose an interference-aware graph based resource sharing algorithm that can effectively obtain the near optimal resource assignment solutions at the base station ( bs ) but with low computational complexity . simulation results confirm that , with markedly reduced complexity , our proposed scheme achieves a network sum rate that approaches the one corresponding to the optimal resource sharing scheme obtained via exhaustive search . story_separator_special_tag future cellular networks such as imt-advanced are expected to allow underlaying direct device-to-device ( d2d ) communication for spectrally efficient support of e.g . rich multimedia local services . enabling d2d links in a cellular network presents a challenge in radio resource management due to the potentially severe interference it may cause to the cellular network . we propose a practical and efficient scheme for generating local awareness of the interference between the cellular and d2d terminals at the base station , which then exploits the multiuser diversity inherent in the cellular network to minimize the interference . system simulations demonstrate that substantial gains in cellular and d2d performance can be obtained using the proposed scheme .
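as a toy version of the interference-aware graph idea above , the sketch below greedily grants the least harmful ( d2d pair , cellular resource ) edge ; it is a stand-in illustrating the data structure , not the published algorithm :

```python
import numpy as np

def greedy_channel_reuse(interference):
    """interference[d, c]: edge weight in an interference graph linking d2d
    pair d with cellular user c's resource (higher means worse mutual
    interference if d reuses c's channel). repeatedly grant the cheapest
    remaining edge under one-to-one reuse constraints."""
    cost = np.array(interference, dtype=float)
    assignment = {}
    for _ in range(min(cost.shape)):
        d, c = np.unravel_index(np.argmin(cost), cost.shape)
        assignment[int(d)] = int(c)
        cost[d, :] = np.inf                 # one resource per d2d pair
        cost[:, c] = np.inf                 # one d2d pair per resource
    return assignment

# usage: greedy_channel_reuse(np.random.rand(4, 6)) maps 4 d2d pairs onto
# the 6 cellular resources with the smallest pairwise interference weights
```

the attraction of graph formulations like this is that near-optimal assignments can be found at the base station far more cheaply than by the exhaustive search the abstract uses as its benchmark .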
story_separator_special_tag a new interference management strategy is proposed to enhance the overall capacity of cellular networks ( cns ) and device-to-device ( d2d ) systems . we consider m out of k cellular user equipments ( cues ) and one d2d pair exploiting the same resources in the uplink ( ul ) period under the assumption of m multiple antennas at the base station ( bs ) . first , we use the conventional mechanism which limits the maximum transmit power of the d2d transmitter so as not to generate harmful interference from d2d systems to cns . second , we propose a d-interference limited area ( ila ) control scheme to manage interference from cns to d2d systems . the method does not allow the coexistence ( i.e. , use of the same resources ) of cues and a d2d pair if the cues are located in the d-ila defined as the area in which the interference to signal ratio ( isr ) at the d2d receiver is greater than the predetermined threshold , d . next , we analyze the coverage of the d-ila and derive the lower bound of the ergodic capacity in closed form . numerical results story_separator_special_tag spectrally-efficient and low-latency support of local media services is expected to be provided by enabling underlay direct device-to-device ( d2d ) communication mode in future cellular networks . interference alignment ( ia ) can enhance the capacity of a wireless network by providing more degrees of freedom . in this paper , we propose using ia techniques in a d2d underlay network to enhance spectral efficiency . we compare ia transmission and traditional point-to-point ( p2p ) transmission from the bit-error-rate ( ber ) and sum-rate points of view . furthermore , we propose three grouping schemes for the d2d users into groups of 3-pairs such that ia can be applied using a limited number of signal extensions . results demonstrate that although traditional p2p transmission can achieve better ber performance , ia transmission is still able to achieve gains in the sum rate . also , system simulations show that the cell total d2d sum rate can be improved using ia . a gain of up to 31.8 % is shown to be attainable at a reasonable transmit signal power level . story_separator_special_tag in this paper , a spectrum sharing protocol is proposed for device-to-device ( d2d ) communication overlaying cellular networks . specifically , the protocol allows the d2d users to communicate bi-directionally with each other while assisting the two-way communication between the cellular base station ( bs ) and the cellular user ( cu ) over the same time and frequency resources . the achievable rate region of the sum-rate of the d2d transmissions versus that of the cellular transmissions is evaluated . the pareto boundary of the region is found by optimizing the transmit power at bs and cu as well as the power splitting factor at the relay d2d node . we find through numerical results that the proposed two-way protocol with power control at the bs and cu is effective to improve the sum rate for both the d2d and cellular communication . story_separator_special_tag nowadays , the wi-fi direct technology is supported by most smartphones on the market , and provides a viable solution to guarantee opportunistic communication among groups of devices in a 1-hop range . however , the current specifications of the standard do not support the inter-group communication , which constitutes a key requirement for content-delivery applications like the public safety ones .
in this paper , we provide an in-depth analysis of the utilization of the wi-fi direct technology for safety message dissemination over emergency and post-disaster scenarios . three main contributions are provided . first , we show the experimental results of the wi-fi direct technology on a test-bed composed of multiple heterogeneous smartphones , and we analyze the main factors affecting the system performance , like the network setup overhead , the communication range and the network throughput . second , we investigate how to create multi-group peer-to-peer ( p2p ) networks by leveraging on the presence of p2p relay devices , which are in charge of offloading the data among different p2p groups , although being connected to only one p2p group at a time . an analytical model is proposed in order to derive the story_separator_special_tag popularity of bluetooth technology and the proposition of bluetooth specification version 4.0 make indoor location have a broad application prospect . the fuzzy theory is applied in indoor location based on bluetooth , and a fuzzy fingerprint location algorithm is proposed . the location process is divided into two parts : off-line and on-line . a fuzzy fingerprint database is established in the off-line stage , and real-time location of cell phone clients is realized in the on-line stage . simulation results show that the average location error is 1.36 m . compared with the traditional fingerprint calibration method , location precision is improved by 49 % and computation complexity is reduced to 1/c where c is the category number of fuzzy clustering . story_separator_special_tag opportunistic schedulers have been primarily proposed to enhance capacity of cellular networks . however , little is known about opportunistic scheduling with fairness and energy efficiency constraints . in this work , we show that adapting opportunistic scheduling can dramatically ameliorate energy efficiency for uplink transmissions , while achieving near-optimal throughput and high fairness . to achieve this goal , we propose a novel two-tier uplink forwarding scheme in which users cooperate , in particular by forming clusters of dual-radio mobiles in hybrid wireless networks . story_separator_special_tag opportunistic scheduling was initially proposed to exploit user channel diversity for network capacity enhancement . however , the achievable gain of opportunistic schedulers is generally restrained due to fairness considerations which impose a tradeoff between fairness and throughput . in this paper , we show via analysis and numerical simulations that opportunistic scheduling not only increases network throughput dramatically , but also increases energy efficiency and can be fair to the users when they cooperate , in particular by using d2d communications . we propose to leverage smartphone 's dual-radio interface capabilities to form clusters among mobile users . we design simple , scalable and energy-efficient d2d-assisted opportunistic strategies , which would incentivize mobile users to form clusters . we use a coalitional game theory approach to analyze the cluster formation mechanism , and show that proportional fair-based intra-cluster payoff distribution brings significant incentive to all mobile users regardless of their channel quality . story_separator_special_tag with the evolution of high-performance multi-radio smartphones , device-to-device ( d2d ) communications became an attractive solution for enhancing the performance of cellular networks .
although d2d communications have been widely studied within the past few years , the majority of the literature is confined to new theoretical proposals and does not consider implementation challenges . in fact , the implementation feasibility of d2d communications and its challenges are still a relevant research question . in this paper , we introduce a protocol that focuses on d2d communications using lte and wifi direct technologies . we also show that currently available wifi direct features permit deploying the d2d paradigm on top of the lte cellular infrastructure , without requiring any fundamental change in lte protocols . story_separator_special_tag to meet the increasing demand for wireless capacity , future networks are likely to consist of dense layouts of small cells . thus , the number of concurrent users served by each base station ( bs ) is likely to be small which results in diminished gains from opportunistic scheduling , particularly under dynamic traffic loads . we propose user-initiated bs-transparent traffic spreading that leverages user-to-user communication to increase bs scheduling flexibility . the proposed scheme is able to increase opportunistic gains and improve user performance . for a specified tradeoff between performance and power expenditure , we characterize the optimal policy by modeling the system as a markov decision process and also present a heuristic algorithm that yields significant performance gains . our simulations show that , in the performance-centric case , average file transfer delays are lowered by up to 20 % even in homogeneous scenarios , and up to 50 % with heterogeneous users . further , we show that the bulk of the performance improvement can be achieved with a small increase in power expenditure , e.g. , in an energy-sensitive case , up to 78 % of the performance improvement can be typically achieved at story_separator_special_tag the spectrum sensing problem has gained new aspects with cognitive radio and opportunistic spectrum access concepts . it is one of the most challenging issues in cognitive radio systems . in this paper , a survey of spectrum sensing methodologies for cognitive radio is presented . various aspects of the spectrum sensing problem are studied from a cognitive radio perspective and the multi-dimensional spectrum sensing concept is introduced . challenges associated with spectrum sensing are given and enabling spectrum sensing methods are reviewed . the paper explains the cooperative sensing concept and its various forms . external sensing algorithms and other alternative sensing methods are discussed . furthermore , statistical modeling of network traffic and utilization of these models for prediction of primary user behavior is studied . finally , sensing features of some current wireless standards are given . story_separator_special_tag the ever-increasing number of resource-constrained machine-type communication ( mtc ) devices is leading to the critical challenge of fulfilling diverse communication requirements in dynamic and ultra-dense wireless environments . among different application scenarios that the upcoming 5g and beyond cellular networks are expected to support , such as enhanced mobile broadband ( embb ) , massive machine type communications ( mmtcs ) , and ultra-reliable and low latency communications ( urllcs ) , the mmtc brings the unique technical challenge of supporting a huge number of mtc devices in cellular networks , which is the main focus of this paper .
the related challenges include quality of service ( qos ) provisioning , handling highly dynamic and sporadic mtc traffic , huge signalling overhead , and radio access network ( ran ) congestion . in this regard , this paper aims to identify and analyze the involved technical issues , to review recent advances , to highlight potential solutions and to propose new research directions . first , starting with an overview of mmtc features and qos provisioning issues , we present the key enablers for mmtc in cellular networks . along with the highlights on the inefficiency of the story_separator_special_tag we consider the problem of mode selection for device-to-device ( d2d ) communications in lte-advanced networks . we propose a solution based on a coalitional game among d2d links to select their communications modes . the solution is given by three coalitions which represent the groups of d2d links using cellular mode , reuse mode , and dedicated mode of transmission . the d2d links in the same coalition cooperatively select the subchannels and use the corresponding transmission mode such that the total power is minimized while their rate requirements are satisfied . the d2d links can make a decision to leave or join a coalition based on their individual transmission costs . the individual transmission cost of each d2d link is a function of the transmission power and the price of channel occupancy , which depends on the d2d link 's communications mode . we find stable coalitions as the solution of the mode selection problem . the stable coalitions represent the system states in which no d2d link can change its communication mode and achieve a lower transmission cost without making others worse off . a discrete-time markov chain-based analysis and a distributed algorithm are presented to obtain the stable story_separator_special_tag the zigbee specification is an emerging wireless technology designed to address the specific needs of low-cost , low-power wireless sensor networks and is built upon the physical and medium access control layers defined in the ieee 802.15.4 standard for wireless personal area networks ( wpans ) . a key component for the widespread success and applicability of zigbee-based networking solutions will be its ability to provide enhanced security mechanisms that can scale to hundreds of nodes . currently , however , an area of concern is the zigbee key management scheme , which uses a centralized approach that introduces well-known issues of limited scalability and a single point of vulnerability . moreover , zigbee key management uses a public key infrastructure . due to these limitations , we suggest replacing zigbee key management with a better candidate scheme that is decentralized , symmetric , and scalable while addressing security requirements . in this work , we investigate the feasibility of implementing the localized encryption and authentication protocol ( leap+ ) , a distributed symmetric-key-based key management scheme . leap+ is designed to support multiple types of keys based on the message type that is being exchanged . in this paper , we story_separator_special_tag device-to-device ( d2d ) communication underlaying cellular networks can enhance the network capacity and spectrum efficiency when sharing cellular resources . however , the severe interference between d2d and cellular networks may lead to performance degradation of both d2d and cellular communication .
in this paper , we concentrate on suppressing the interference between d2d users and cellular networks when d2d communication reuses the cellular resources in the downlink . an efficient resource allocation scheme is proposed to manage the interference between d2d and cellular networks . first , the mutual interference is kept within the constraints by adopting the interference-limited-area control method . after that , the resources are assigned to d2d users to improve the sum rate of cellular communication and d2d communication . in addition , simulation results are presented and analyzed . finally , we conclude that the proposed scheme can significantly improve the total capacity of cellular and d2d communication , while at the same time suppressing the mutual interference . story_separator_special_tag we consider rate splitting and interference cancelation in device-to-device ( d2d ) communications underlaying a cellular network . we assume that a transmitted message is split into a private and a public part , as in the han-kobayashi scheme . the private part is decodable only by the intended receiver , whereas the public part is in addition decodable by an interference victim . the receivers run a best-effort successive interference cancelation ( sic ) algorithm , canceling interfering public signals . we derive the optimal rate splitting factors for most of the categorized channel conditions in a two-link scenario . we use the rate splitting scheme for resource sharing in a two-link d2d underlay cellular network . the results show that rate splitting resource sharing achieves a higher sum rate than resource sharing schemes which are based on power control or orthogonal resource allocation , including the traditional cellular mode . story_separator_special_tag an ad hoc network is a collection of wireless mobile hosts forming a temporary network without the aid of any established infrastructure or centralized administration . in such an environment , it may be necessary for one mobile host to enlist the aid of other hosts in forwarding a packet to its destination , due to the limited range of each mobile host 's wireless transmissions . this paper presents a protocol for routing in ad hoc networks that uses dynamic source routing . the protocol adapts quickly to routing changes when host movement is frequent , yet requires little or no overhead during periods in which hosts move less frequently . based on results from a packet-level simulation of mobile hosts operating in an ad hoc network , the protocol performs well over a variety of environmental conditions such as host density and movement rates . for all but the highest rates of host movement simulated , the overhead of the protocol is quite low , falling to just 1 % of total data packets transmitted for moderate movement rates in a network of 24 mobile hosts . in all cases , the difference in length between the routes story_separator_special_tag we consider a power control scheme for maximizing the information capacity of the uplink in single-cell multiuser communications with frequency-flat fading , under the assumption that the users ' attenuations are measured perfectly . its main characteristics are that only one user transmits over the entire bandwidth at any particular time instant and that the users are allocated more power when their channels are good , and less when they are bad . moreover , these features are independent of the statistics of the fading .
numerical results are presented for the case of single-path rayleigh fading . we show that an increase in capacity over a perfectly power-controlled ( gaussian ) channel can be achieved , especially if the number of users is large . by examining the bit error rate with antipodal signalling , we show the inherent diversity in multiuser communications over fading channels . story_separator_special_tag a new achievable rate region for the general interference channel which extends previous results is presented and evaluated . the technique used is a generalization of superposition coding to the multivariable case . a detailed computation for the gaussian channel case clarifies to what extent the new region improves on previous ones . the capacity of a class of gaussian interference channels is also established . story_separator_special_tag in this paper , we describe linear successive interference cancellation ( sic ) based on matrix algebra . we show that linear sic schemes ( single stage and multistage ) correspond to linear matrix filtering that can be performed directly on the received chip-matched filtered signal vector without explicitly performing the interference cancellation . this leads to an analytical expression for calculating the resulting bit-error rate , which is of particular use for short-code systems . convergence issues are discussed , and the concept of ε-convergence is introduced to determine the number of stages required for practical convergence for both short and long codes . story_separator_special_tag device-to-device communication underlaying a cellular network enables local services with limited interference to the cellular network . in this paper we study the optimal selection of possible resource sharing modes with the cellular network in a single cell . based on the lessons learned from the single-cell studies , we propose a mode selection procedure for a multi-cell environment . our evaluation results of the proposed procedure show that it enables much more reliable device-to-device communication with limited interference to the cellular network compared to simpler mode selection procedures . a well-performing and practical mode selection is critical to enable the adoption of underlay device-to-device communication in cellular networks . story_separator_special_tag device-to-device communications with automated connectivity to sensors , machines and other users is an important enabler for a multitude of use cases , with local social networks as one example . in this paper we focus on the main challenges in building a seamless user experience . in particular , we present a novel device beaconing scheme to facilitate service and device discovery . moreover , the mechanism enables the exchange of small data packets and facilitates the connection setup of a suitable transport radio . we discuss the energy efficiency of the proposed device discovery mechanism and evaluate the capability to form a network in a residential scenario with different device densities . story_separator_special_tag mobile users ' data rate and quality of service are limited by the fact that , within the duration of any given call , they experience severe variations in signal attenuation , thereby necessitating the use of some type of diversity . in this two-part paper , we propose a new form of spatial diversity , in which diversity gains are achieved via the cooperation of mobile users . part i describes the user cooperation strategy , while part ii ( see ibid .
, p.1939-48 ) focuses on implementation issues and performance analysis . results show that , even though the interuser channel is noisy , cooperation leads not only to an increase in capacity for both users but also to a more robust system , where users ' achievable rates are less susceptible to channel variations . story_separator_special_tag for pt.i see ibid . , p.1927-38 . this is the second of a two-part paper on a new form of spatial diversity , where diversity gains are achieved through the cooperation of mobile users . part i described the user cooperation concept and proposed a cooperation strategy for a conventional code-division multiple-access ( cdma ) system . part ii investigates the cooperation concept further and considers practical issues related to its implementation . in particular , we investigate the optimal and suboptimal receiver design , and present performance analysis for the conventional cdma implementation proposed in part i. we also consider a high-rate cdma implementation and a cooperation strategy when assumptions about the channel state information at the transmitters are relaxed . we illustrate that , under all scenarios studied , cooperation is beneficial in terms of increasing system throughput and cell coverage , as well as decreasing sensitivity to channel variations . story_separator_special_tag an innovative auction-based allocation scheme is proposed to improve the performance of device-to-device ( d2d ) communications as an underlay in downlink ( dl ) cellular networks . to optimize the system sum rate over the resource sharing of both d2d and cellular modes , we introduce a reverse iterative combinatorial auction as the allocation mechanism . in the auction , all the spectrum resources are considered as a set of resource units , which compete to obtain business as bidders while packages of d2d pairs are auctioned off as goods in each auction round . we first formulate the valuation of each resource unit for packages of d2d links , and then a detailed non-monotonic descending price auction algorithm is explained . further , we prove that the proposed scheme is cheat-proof , converges in a finite number of iteration rounds , and has lower complexity compared to a traditional combinatorial allocation . the simulation results demonstrate that the algorithm efficiently leads to a good performance in terms of the system sum rate . story_separator_special_tag this paper proposes a power optimization scheme with joint resource allocation ( i.e . subcarrier and bit allocation ) and mode selection in an ofdma system with integrated d2d communications . through the proper control of the base station ( bs ) , users can communicate with each other either directly or via the bss as in traditional cellular networks . particularly , an optimization problem is formulated to minimize the total downlink transmission power constrained by users ' qos demands , while a heuristic scheme exploiting joint subcarrier allocation , adaptive modulation and mode selection is devised to solve the problem . simulation results show that our proposed scheme may not only conserve total downlink transmission power effectively , but also save the overall power consumption of bss significantly , compared with existing algorithms used in traditional ofdma systems . story_separator_special_tag we consider the problem of resource allocation ( in terms of subcarrier , bit and corresponding power ) for qos provisioning to real-time services in multiuser ofdm systems .
a novel dynamic subcarrier and bit allocation algorithm is proposed for real-time services in multiuser ofdm systems . the proposed algorithm takes proper advantage of the instantaneous channel gain in subcarrier and bit allocation without relying on the nonlinear optimization techniques that are usually used in subcarrier and bit allocation when the instantaneous channel gain is considered . therefore , it avoids the corresponding computational complexity of nonlinear optimization . the performance of the proposed algorithm is compared to other existing algorithms . story_separator_special_tag the orthogonal frequency division multiple access ( ofdma ) technique is considered in this paper . we study the subcarrier , bit and power allocation algorithms for it and propose a novel dynamic subcarrier and bit allocation algorithm - a qos-guaranteed adaptive resource allocation algorithm with low complexity for real-time services when the instantaneous channel gain is considered . the adaptive algorithm yields better system performance when used in the resource allocation scheme , with lower power and complexity in contrast to other existing algorithms story_separator_special_tag this paper proposes a power-efficient mode selection and power allocation scheme in a device-to-device ( d2d ) communication system as an underlay coexisting with cellular networks . the proposed scheme is performed based on an exhaustive search of all possible mode combinations of the devices , which consist of the mode indices for all devices in the system . specifically , the proposed scheme consists of two steps . first , we calculate the optimal power with respect to the maximum power-efficiency for all possible modes of each device . since the power-efficiency is not a concave function of the transmission power , we obtain the suboptimal solution by using the concavity of the lower and upper bounds for the power-efficiency . the power-efficiencies for all possible modes of each device are obtained by the suboptimal power allocation in the first step . in the second step , we select a mode sequence which has the maximal power-efficiency among all possible mode combinations of the devices based on the power-efficiencies obtained in the first step . then we can jointly obtain the suboptimal transmission power and the mode maximizing the power-efficiency . the proposed suboptimal scheme for the power allocation and mode story_separator_special_tag in a cellular network system , one way to increase its capacity is to allow direct communication between closely located user devices when they are communicating with each other , instead of conveying data from one device to the other via the radio and core network . the problem is then to decide when the network shall assign direct communication mode and when not . in previous works the decision has been made individually per communicating device pair , not taking into account other devices and the current state of the network . we derive means for obtaining the optimal communication mode for all devices in the system in terms of system equations . the system equations capture network information such as link gains , noise levels , signal-to-interference-and-noise-ratios , etc. , as well as the communication mode selection for the devices . using the derived equations , performance bounds for the cellular system where d2d communication is an additional communication mode are illustrated via simulations .
further , practical communication mode selection algorithms are used to evaluate their system performance against the achievable bounds . the analysis shows the usability of the system equations and the potential of having d2d operation integrated into a cellular system story_separator_special_tag device-to-device ( d2d ) communications underlaying a cellular infrastructure has recently been proposed as a means of increasing the cellular capacity , improving the user throughput and extending the battery lifetime of user equipments by facilitating the reuse of spectrum resources between d2d and cellular links . in network-assisted d2d communications , when two devices are in the proximity of each other , the network can not only help the devices to set the appropriate transmit power and schedule time and frequency resources but also determine whether communication should take place via the direct d2d link ( d2d mode ) or via the cellular base station ( cellular mode ) . in this paper we formulate the joint mode selection , scheduling and power control task as an optimization problem that we first solve assuming the availability of a central entity . we also propose a distributed suboptimal joint mode selection and resource allocation scheme that we benchmark with respect to the centralized optimal solution . we find that the distributed scheme performs close to the optimal scheme both in terms of resource efficiency and user fairness . story_separator_special_tag horn formulae play a prominent role in artificial intelligence and logic programming . in this paper we investigate the problem of optimal compression of propositional horn production rule knowledge bases . the standard approach to this problem , consisting in the removal of redundant rules from a knowledge base , leads to an `` irredundant '' but not necessarily optimal knowledge base . we prove here that the number of rules in any irredundant horn knowledge base involving n propositional variables is at most n - 1 times the minimum possible number of rules . in order to formalize the optimal compression problem , we define a boolean function of a knowledge base as being the function whose set of true points is the set of models of the knowledge base . in this way the optimal compression of production rule knowledge bases becomes a problem of boolean function minimization . in this paper we prove that the minimization of horn functions ( i.e . boolean functions associated with horn knowledge bases ) is . story_separator_special_tag device-to-device ( d2d ) communication underlaying cellular networks is seen as a promising technology for future communication systems , especially when relays are employed to extend the range of d2d links . however , the mutual interference between the spectrum-sharing d2d links and conventional cellular ( cc ) links is still the dominating performance-limiting factor , which forces those links to adopt higher transmission power . as energy saving is always one of the key concerns , we propose to minimize the total transmission energy consumption of a relay-assisted d2d link and a cc link sharing radio resources by jointly optimizing their transmission rate selection and power control ( trspc ) . we formulate the optimization problem as a non-convex non-linear programming ( nlp ) problem and then develop a game theory based distributed approach to solve it .
simulation results demonstrate that our game theory based trspc approach can substantially decrease the total transmission energy consumption compared with existing works . story_separator_special_tag device-to-device ( d2d ) communications as underlays of cellular networks facilitate diverse local services and reduce base station traffic . however , d2d communication may cause interference with the primary cellular network . to avoid this problem , the network should flexibly allocate its resources and select a proper mode for users . here , we formulate a joint mode selection and resource allocation problem to maximize the system throughput with a minimum required rate guarantee . a mode selection and resource allocation scheme based on particle swarm optimization ( pso-msra ) is proposed in which solutions are mapped onto particles and a fitness function embodies the constraints in a penalty function . simulation results show its superiority over other schemes in terms of throughput and minimum required rate guarantee . story_separator_special_tag we study an opportunistic subchannel scheduling and transmission mode selection problem for the ofdma system with device-to-device ( d2d ) communication . we allow d2d users to opportunistically select their transmission mode between two transmission modes : direct transmission between d2d users ( direct one-hop transmission ) and indirect transmission through the bs ( indirect two-hop transmission ) . we develop a framework with which opportunistic transmission mode selection can be modeled as opportunistic subchannel scheduling , which enables our problem to be reduced to an opportunistic subchannel scheduling problem . we formulate a stochastic optimization problem that aims to maximize the average sum-rate of the system , while satisfying the quality-of-service ( qos ) requirement of each user . by solving the problem , we develop an optimal opportunistic subchannel scheduling algorithm , which enables us to perform both subchannel scheduling and transmission mode selection opportunistically . story_separator_special_tag in this paper , we consider the fair resource allocation problem for device-to-device ( d2d ) communications in orthogonal frequency division multiple access ( ofdma ) -based wireless cellular networks . in particular , we propose a two-phase solution approach where resource allocation for cellular downlink and uplink flows with max-min fairness is performed in the first phase and resource allocation for d2d flows with rate protection for cellular flows is conducted in the second phase . we present both optimal formulations and low-complexity algorithms to solve the corresponding problems in the two phases . we also analyze the complexity of both solutions . finally , we present numerical results to demonstrate the efficacy of the proposed algorithms in exploiting the spatial spectrum opportunities for d2d communications . story_separator_special_tag we address resource sharing between the cellular network and a device-to-device ( d2d ) underlay communication , assuming that the cellular network has control over the transmit power and the radio resources of d2d links . we show that by proper power control , the interference between the two services can be coordinated to benefit the overall performance . in addition , we consider a scenario with prioritized cellular communication and an upper limit on the maximum transmission rate of all links . we derive the optimum power allocation for the considered resource sharing modes .
the results show that cellular service can be effectively guaranteed while having a sum rate comparable to the no-power-control case in most of the cell area . story_separator_special_tag we consider device-to-device ( d2d ) communication underlaying cellular networks to improve local services . the system aims to optimize the throughput over the shared resources while fulfilling prioritized cellular service constraints . optimum resource allocation and power control between the cellular and d2d connections that share the same resources are analyzed for different resource sharing modes . optimality is discussed under practical constraints such as minimum and maximum spectral efficiency restrictions , and maximum transmit power or energy limitation . it is found that in most of the considered cases , optimum power control and resource allocation for the considered resource sharing modes can either be solved in closed form or searched from a finite set . the performance of the d2d underlay system is evaluated in both a single-cell scenario , and a manhattan grid environment with multiple winner ii a1 office buildings . the results show that by proper resource management , d2d communication can effectively improve the total throughput without generating harmful interference to cellular networks . story_separator_special_tag an important and difficult problem in computer vision is to determine 2d image feature correspondences over a set of images . in this paper , two new affinity measures for image points and lines from different images are presented , and are used to construct unweighted and weighted bipartite graphs . it is shown that the image feature matching problem can be reduced to an unweighted matching problem in the bipartite graphs . it is further shown that the problem can be formulated as the general maximum-weight bipartite matching problem , thus generalizing the above unweighted bipartite matching technique . story_separator_special_tag device-to-device ( d2d ) communication as an underlay of a cellular network empowers user-driven rich multimedia applications and has also proven to be network-efficient in offloading enodeb traffic . however , d2d transmitters may cause a significant amount of interference to the primary cellular network when radio resources are shared between them . during the downlink ( dl ) phase , the primary cell ue ( user equipment ) may suffer from interference by the d2d transmitter . on the other hand , the immobile enodeb is the victim of interference by the d2d transmitter during the uplink ( ul ) phase when radio resources are allocated randomly . such interference can be avoided , or at least diminished , if radio resources are allocated intelligently with coordination from the enodeb . in this paper , we formulate the problem of radio resource allocation to the d2d communications as a mixed integer nonlinear programming ( minlp ) problem . such an optimization problem is notoriously hard to solve within the fast scheduling period of the long term evolution ( lte ) network . we therefore propose an alternative greedy heuristic algorithm that can lessen interference to the primary cellular network utilizing channel gain information . we also perform extensive simulation story_separator_special_tag a concept for the optimization of nonlinear functions using particle swarm methodology is introduced . the evolution of several paradigms is outlined , and an implementation of one of the paradigms is discussed .
benchmark testing of the paradigm is described , and applications , including nonlinear function optimization and neural network training , are proposed . the relationships between particle swarm optimization and both artificial life and genetic algorithms are described . story_separator_special_tag introduction and preliminaries . problems , algorithms , and complexity . linear algebra . linear algebra and complexity . lattices and linear diophantine equations . theory of lattices and linear diophantine equations . algorithms for linear diophantine equations . diophantine approximation and basis reduction . polyhedra , linear inequalities , and linear programming . fundamental concepts and results on polyhedra , linear inequalities , and linear programming . the structure of polyhedra . polarity , and blocking and anti-blocking polyhedra . sizes and the theoretical complexity of linear inequalities and linear programming . the simplex method . primal-dual , elimination , and relaxation methods . khachiyan 's method for linear programming . the ellipsoid method for polyhedra more generally . further polynomiality results in linear programming . integer linear programming . introduction to integer linear programming . estimates in integer linear programming . the complexity of integer linear programming . totally unimodular matrices : fundamental properties and examples . recognizing total unimodularity . further theory related to total unimodularity . integral polyhedra and total dual integrality . cutting planes . further methods in integer linear programming . references . indexes . story_separator_special_tag an adaptive subcarrier allocation and an adaptive modulation for multiuser orthogonal frequency-division multiplexing ( ofdm ) are considered . the optimal subcarrier and bit allocation problems , which were previously formulated as nonlinear optimizations , are reformulated as and solved by integer programming ( ip ) . a suboptimal approach that performs subcarrier allocation and bit loading separately is proposed . it is shown that the subcarrier allocation in this approach can be optimized by the linear-programming ( lp ) relaxation of ip , while the bit loading can be performed in a manner similar to single-user ofdm . in addition , a heuristic method for solving the lp problem is presented . the lp-based suboptimal and heuristic algorithms are considerably simpler to implement than the optimal ip , and their performances are close to those of the optimal approach story_separator_special_tag we consider how to efficiently employ d2d communications for secondary users ( sus ) in a cognitive cellular network . in this network , primary users ( pus ) transmit via the base station normally , while sus can employ multiple transmission modes . one is to transmit via the base station ( bs mode ) , and the other is to employ d2d communication ( d2d mode ) due to the scarce idle spectrum . the sus who have the potential to transmit to each other using d2d mode form a group . within this group , they can transmit to each other via bs mode or using d2d mode directly . outside this group , only bs mode is available . to investigate how to employ d2d mode in this network , we first define the utilities of sus employing bs mode and d2d mode , respectively , considering the achieved data rate , power consumption , price of unit bandwidth and the impact of interference . then we analyze the optimal power allocation for each mode .
to optimize sus ' strategies of mode selection , we adopt replicator dynamics in evolution theory to model the behaviors of sus . furthermore , story_separator_special_tag it has been predicted that the recent explosive growth in wireless data traffic will continue for the foreseeable future . to date , attempts to address this explosive growth have included increasing base station density through smaller cells and , to a lesser extent , improving spectral efficiency using close-to-capacity channel coding and mimo techniques . providing further capacity and coverage improvements through ever-shrinking cells could lead to large infrastructure costs and operating expenses . in this paper we propose a far less expensive alternate solution using a topology which employs mobile user equipment ( ue ) nodes as virtual infrastructure to enhance the cellular capacity while also improving network coverage . the improvements in capacity and coverage are achieved by enabling cellular-controlled direct device-to-device ( d2d ) links to carry relayed traffic . each terminal ue ( t-ue ) with an active connection could potentially be assigned a helper ue ( h-ue ) , depending on the network conditions and traffic requirements , to improve the system capacity . in addition , t-ues which are out of coverage can also be assigned a h-ue , which would extend the typical coverage of a base station story_separator_special_tag a new interference management scheme is proposed to improve the reliability of a device-to-device ( d2d ) communication in the uplink ( ul ) period without reducing the power of cellular user equipment ( ue ) . to improve the reliability of the d2d receiver , two conventional receive techniques and one proposed method are introduced . one of the conventional methods is demodulating the desired signal first ( mode1 ) , while the other is demodulating the interference first ( mode2 ) , and the proposed method exploits a retransmission of the interference from the base station ( bs ) ( mode3 ) . we derive their outage probabilities in closed form and explain the mechanism of receive mode selection , which selects the mode guaranteeing the minimum outage probability among the three modes . numerical results show that by applying the receive mode selection , the d2d receiver achieves a remarkable improvement in outage probability in the middle-interference regime from the usage of mode3 compared to the conventional ways of using only mode1 or mode2 . story_separator_special_tag in an attempt to utilize spectrum resources more efficiently , protocols sharing licensed spectrum with unlicensed users are receiving increased attention . from the perspective of cellular networks , spectrum underutilization makes spatial reuse a feasible complement to existing standards . interference management is a major component in designing these schemes , as it is critical that licensed users maintain their expected quality of service . we develop a distributed dynamic spectrum protocol in which ad-hoc device-to-device users opportunistically access the spectrum actively in use by cellular users . first , channel gain estimates are used to set feasible transmit powers for device-to-device users that keep the interference they cause within the allowed interference temperature . then network information is distributed by route discovery packets in a random access manner to help establish either a single-hop or multi-hop route between two device-to-device users .
we show that network information in the discovery packet can decrease the failure rate of the route discovery and reduce the number of necessary transmissions to find a route . using the found route , we show that two device-to-device users can communicate with a low probability of outage while only minimally affecting the cellular network , story_separator_special_tag the explosive growth of the mobile user population and multimedia services are causing a severe traffic overload problem in the cellular network . the third-generation partnership project ( 3gpp ) has defined data offloading as a critical area to cope with this problem . local ip access ( lipa ) and selected ip traffic offload ( sipto ) are considered to be the typical technologies to implement cellular data offloading . currently , device-to-device ( d2d ) communication has been proposed as a new data-offloading solution . the d2d communication underlying a cellular infrastructure reduces the radio access network load as well as the core network load . the cellular network needs to establish a new type of radio bearer for d2d communications , and this bearer can provide the direct traffic path between devices . in this article , we propose a d2d bearer control architecture for d2d communications supported by a long-term evolution-advanced ( lte-a ) infrastructure . this could be a feasible and efficient solution for offloading data from the cellular network . story_separator_special_tag establishing the capacity region of a gaussian interference network is an open problem in information theory . recent progress on this problem has led to the characterization of the capacity region of a general two-user gaussian interference channel within one bit . in this paper , we develop new , improved outer bounds on the capacity region . using these bounds , we show that treating interference as noise achieves the sum capacity of the two-user gaussian interference channel in a low-interference regime , where the interference parameters are below certain thresholds . we then generalize our techniques and results to gaussian interference networks with more than two users . in particular , we demonstrate that the total interference threshold , below which treating interference as noise achieves the sum capacity , increases with the number of users . story_separator_special_tag this paper presents the capacity region of frequency-selective gaussian interference channels under the condition of strong interference , assuming an average power constraint per user . first , a frequency-selective gaussian interference channel is modeled as a set of independent parallel memoryless gaussian interference channels . using non-frequency-selective results , the capacity region of frequency-selective gaussian interference channels under strong interference is expressed mathematically . exploiting structures inherent in the problem , a dual problem is constructed for each independent memoryless channel , in which both mathematical and numerical analysis are performed . furthermore , three suboptimal methods are compared to the capacity-achieving coding and power allocation scheme . iterative waterfilling , a suboptimal scheme , provides close-to-optimum performance and has a distributed coding and power allocation scheme , which is attractive in practice .
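the preceding abstract names iterative waterfilling as a practical distributed power allocation scheme . as a minimal sketch , the single-user water-filling step that each user would iterate is shown below ; the noise profile , power budget and bisection tolerance are illustrative assumptions , not values from the paper .

```python
def waterfill(noise, budget, tol=1e-9):
    """single-user water-filling: spread `budget` across parallel
    subchannels with effective noise-plus-interference levels `noise`;
    returns the per-subchannel powers max(mu - noise_k, 0)."""
    lo, hi = min(noise), max(noise) + budget   # bracket for the water level mu
    while hi - lo > tol:
        mu = 0.5 * (lo + hi)
        used = sum(max(mu - n, 0.0) for n in noise)
        if used > budget:
            hi = mu
        else:
            lo = mu
    mu = 0.5 * (lo + hi)
    return [max(mu - n, 0.0) for n in noise]

# in iterative water-filling, each user repeats this step while treating
# the other users' current transmissions as part of its own noise profile.
print(waterfill([0.1, 0.5, 1.0], budget=1.0))   # -> roughly [0.7, 0.3, 0.0]
```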
story_separator_special_tag device-to-device ( d2d ) communication as an underlay of a cellular network empowers rich multimedia applications , improves local communication , and enables local services , which would also bring about interference between cellular users and d2d terminals . in this paper , we study the challenges of interference management in a hybrid network where d2d communication reuses the uplink resources of cellular networks . we first introduce an interference coordination strategy for d2d communication . then we design an optimal channel reuse selection algorithm for the single-cell scenario based on the hungarian algorithm . we also propose a heuristic algorithm to reduce the computational complexity . our simulation results show that the heuristic algorithm performs close to the optimal algorithm with a significant decrease in computational complexity , and both of the proposed algorithms could improve the system performance . story_separator_special_tag the author presents a geometrical model which illuminates variants of the hungarian method for the solution of the assignment problem . story_separator_special_tag in this paper , we examine the performance of uniform backoff ( ub ) and binary exponential backoff ( beb ) algorithms with a retry limit , which can be used in the random-access channels of universal mobile telecommunication system ( umts ) -long term evolution ( lte ) and ieee 802.16 systems , under the assumption of a finite population under unsaturated traffic conditions . additionally , we consider access prioritization schemes to provide differential performance by controlling various system parameters . we show that controlling the persistence value as specified in umts is effective in both backoff algorithms . the performances with and without access prioritization schemes are presented in terms of throughput , mean , and variance of packet retransmission delay , packet-dropping probability , and system stability . finally , we consider a dynamic window assignment algorithm that is based on bayesian broadcasting , in which the base station adaptively controls the window size of the ub algorithm under unsaturated traffic conditions . results show that the proposed window assignment algorithm outperforms fixed window assignment in static and dynamic traffic conditions under the assumption of perfect orthogonality between random-access codes . story_separator_special_tag in this paper we examine the feasibility of semi-persistent scheduling ( sps ) for voice over ip ( voip ) by random access and evaluate its performance in terms of throughput of random access and traffic channels , and random access delay . we further investigate system stability issues and present methods to stabilize the system . to see the voip capacity gain , we show the maximum number of acceptable voip terminals without exceeding some front-end packet dropping ( i.e. , voice clipping ) probability . in addition , we examine the effect of the parameter called `` implicit release after '' in the lte standard on the system performance , which is used for silence period detection . our performance evaluation model based on equilibrium point analysis is compared to simulations . story_separator_special_tag in this letter , we consider the uplink random access problem in a wireless multimedia network ( wmn ) with audio , video , and best effort applications .
since these multimedia applications have different quality-of-service ( qos ) requirements , we model their utilities with concave , step , and quasi-concave functions . we assume that the access point performs admission control and assigns transmission probabilities to the users for random access , based on solving a non-convex network utility maximization problem . we propose a novel enumeration algorithm to obtain the global optimal solution by solving a number of computationally tractable convex optimization problems . we characterize the total number of iterations of the algorithm analytically . simulation results show that our proposed algorithm achieves a higher average network aggregate utility than a carrier sense multiple access ( csma ) scheme implemented in a slotted time system . story_separator_special_tag lte-advanced networks employ random access based on preambles transmitted according to multi-channel slotted aloha principles . the random access is controlled through a limit $ w $ on the number of transmission attempts and a timeout period for uniform backoff after a collision . we model the lte-advanced random access system by formulating the equilibrium condition for the ratio of the number of requests successful within the permitted number of transmission attempts to those successful in one attempt . we prove that for $ w \leq 8 $ there is only one equilibrium operating point and for $ w \geq 9 $ there are three operating points if the request load $ \rho $ is between load boundaries $ \rho_1 $ and $ \rho_2 $ . we analytically identify these load boundaries as well as the corresponding system operating points . we analyze the throughput and delay of successful requests at the operating points and validate the analytical results through simulations . story_separator_special_tag in this paper , we develop distributed random access scheduling schemes that exploit the time-varying nature of fading channels for multimedia traffic in multihop wireless networks . it should be noted that while centralized scheduling solutions can achieve optimal throughput under this setting , they incur high computational complexity and require centralized coordination with global channel information . the proposed solution not only achieves provable performance guarantees under a wide range of interference models , but also can be implemented in a distributed fashion using local information . to the best of our knowledge , this is the first distributed scheduling mechanism for fading channels that achieves provable performance guarantees . we show through simulations that the proposed schemes achieve better empirical performance than other known distributed scheduling schemes . story_separator_special_tag this paper analyzes the optimal d2d user allocation over multiple bands in heterogeneous networks . the heterogeneous networks contain one or several cellular systems , and d2d communication shares uplink resources with them . by allocating d2d users to different bands , the interference between d2d and cellular systems can be reduced and the d2d transmission capacity improved at the same time . utilizing stochastic geometry , the problem is formulated as maximizing the sum d2d transmission capacity on each band , with constraints that guarantee the outage probabilities of both cellular and d2d transmission . the primal problem is first proved to be convex and then solved by constructing the lagrange function and kkt conditions .
the optimal d2d user densities over the multiple bands are derived , and we propose a d2d scheduling algorithm based on this conclusion for the dynamic d2d access process . simulation results show the superiority of the optimal user allocation over the average allocation method . story_separator_special_tag in this paper we introduce a reliable multicast concept for device-to-device ( d2d ) communication integrated into a cellular network . in addition to the introduction of the basic concept , initial simulation results are presented as well . clustering closely located devices which have local communication needs is a feasible and efficient way of meeting the increasing data traffic requirements in the future cellular network . the reliable d2d multicast concept introduced in this paper is designed to be a scalable and efficient solution for local communication needs such as file transfer and even streaming services . due to the network involvement in controlling the local d2d communication , sufficient quality of service can be guaranteed . in addition , due to the flexible mode switching between direct and cellular modes in the integrated operation , service continuity can be provided . story_separator_special_tag this text presents a modern theory of analysis , control , and optimization for dynamic networks . mathematical techniques of lyapunov drift and lyapunov optimization are developed and shown to enable constrained optimization of time averages in general stochastic systems . the focus is on communication and queueing systems , including wireless networks with time-varying channels , mobility , and randomly arriving traffic . a simple drift-plus-penalty framework is used to optimize time averages such as throughput , throughput-utility , power , and distortion . explicit performance-delay tradeoffs are provided to illustrate the cost of approaching optimality . this theory is also applicable to problems in operations research and economics , where energy-efficient and profit-maximizing decisions must be made without knowing the future . topics in the text include the following : - queue stability theory - backpressure , max-weight , and virtual queue methods - primal-dual methods for non-convex stochastic utility maximization - universal scheduling theory for arbitrary sample paths - approximate and randomized scheduling theory - optimization of renewal systems and markov decision systems . detailed examples and numerous problem set questions are provided to reinforce the main concepts . table of contents : introduction / introduction to queues story_separator_special_tag device-to-device ( d2d ) communications underlaying a cellular infrastructure has been proposed as a means of taking advantage of the physical proximity of communicating devices , increasing resource utilization , and improving cellular coverage . relative to the traditional cellular methods , there is a need to design new peer discovery methods , physical layer procedures , and radio resource management algorithms that help realize the potential advantages of d2d communications . in this article we use the 3gpp long term evolution system as a baseline for d2d design , review some of the key design challenges , and propose solution approaches that allow cellular devices and d2d pairs to share spectrum resources and thereby increase the spectrum and energy efficiency of traditional cellular networks . simulation results illustrate the viability of the proposed design .
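the stochastic network optimization text summarized above builds on a drift-plus-penalty rule : in each slot , the controller picks the action minimizing a weighted penalty plus the queue-weighted drift . a minimal one-queue sketch follows ; the power levels , rate curve , arrival process and weight v are illustrative assumptions , not taken from the text .

```python
import math
import random

POWERS = [0.0, 0.5, 1.0, 2.0]   # candidate power levels (penalty = power)
V = 10.0                        # larger v trades queue backlog for lower power

def service(power, gain):
    # toy concave rate curve standing in for log(1 + snr)
    return math.log(1.0 + gain * power)

q = 0.0                         # queue backlog
for t in range(10000):
    gain = random.uniform(0.5, 2.0)       # time-varying channel state
    arrivals = random.uniform(0.0, 0.6)   # exogenous traffic
    # drift-plus-penalty rule: minimize v * penalty - q * service each slot
    p = min(POWERS, key=lambda x: V * x - q * service(x, gain))
    q = max(q + arrivals - service(p, gain), 0.0)
```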
story_separator_special_tag the focus of this paper is an integration framework between umts ( universal mobile telecommunications system ) and mobile ad-hoc networks ( manet ) , specifically designed to increase the effectiveness of cellular systems in supporting multicast transmissions . the aim is to overcome the scalability constraints of the cellular network by enriching it through multi-hop communications . for the purpose of reducing the adverse impact of multicast transmission in the wireless cellular environment and improving the system scalability , in this paper we propose a radio resource management ( rrm ) policy based on an mbms/manet integrated architecture . the proposed solution has been successfully tested through a comprehensive simulation campaign . the obtained results make us confident that a well-designed integration is a promising approach to overcoming the inadequacy of umts ( and beyond-umts networks as well ) in supporting multicast services efficiently . story_separator_special_tag device-to-device communication has been regarded as a promising technology to improve cellular system efficiency , as it allows mobile devices to communicate directly with each other on the licensed frequency resources under the control of a base station ( or enb ) . however , the benefits of d2d using license-exempt bands , such as ism bands , have not been sufficiently taken into account yet . the main challenges arise from the coexistence of d2d and wlan in the same frequency band and geographical area . in this paper , we propose a group-wise channel sensing and resource pre-allocation scheme that lets d2d users fairly contend for and utilize the ism-band resource alongside their wlan rivals , where d2d pairs with different qos or bandwidth requirements are grouped and pre-scheduled to approximately fill an overall flexible resource block , and then a representative d2d pair is appointed in each group to contend for the resource , avoiding intra-group collisions . by using the proposed scheme , the resource on the ism band can be used more efficiently , so that d2d applications can be applied in a much wider scope . story_separator_special_tag as wireless video is the fastest growing form of data traffic , methods for spectrally efficient on-demand wireless video streaming are essential to both service providers and users . a key property of video on-demand is the asynchronous content reuse , such that a few popular files account for a large part of the traffic but are viewed by users at different times . caching of content on wireless devices in conjunction with device-to-device ( d2d ) communications allows this property to be exploited , and provides a network throughput that is significantly in excess of both the conventional approach of unicasting from cellular base stations and the traditional d2d networks for regular data traffic . this paper presents in a tutorial and concise form some recent results on the throughput scaling laws of wireless networks with caching and asynchronous content reuse , contrasting the d2d approach with other alternative approaches such as conventional unicasting , harmonic broadcasting , and a novel coded multicasting approach based on caching in the user devices and network-coded transmission from the cellular base station only .
somewhat surprisingly , the d2d scheme with spatial reuse and simple decentralized random caching achieves the same near-optimal throughput story_separator_special_tag caching is a technique to reduce peak traffic rates by prefetching popular content into memories at the end users . conventionally , these memories are used to deliver requested content in part from a locally cached copy rather than through the network . the gain offered by this approach , which we term local caching gain , depends on the local cache size ( i.e. , the memory available at each individual user ) . in this paper , we introduce and exploit a second , global , caching gain not utilized by conventional caching schemes . this gain depends on the aggregate global cache size ( i.e. , the cumulative memory available at all users ) , even though there is no cooperation among the users . to evaluate and isolate these two gains , we introduce an information-theoretic formulation of the caching problem focusing on its basic structure . for this setting , we propose a novel coded caching scheme that exploits both local and global caching gains , leading to a multiplicative improvement in the peak rate compared to previously known schemes . in particular , the improvement can be on the order of the number of story_separator_special_tag in this paper , we focus on mobile wireless networks comprising a powerful communication center and a multitude of mobile users . we investigate the propagation of deadline-based content in the wireless network characterized by heterogeneous ( time-varying and user-dependent ) wireless channel conditions and heterogeneous user mobility , where communication could occur in a hybrid format ( e.g. , directly from the central controller or by exchange with other mobiles in a peer-to-peer manner ) . we show that exploiting double opportunities , i.e. , both time-varying channel conditions and mobility , can result in substantial performance gains . we develop a class of double opportunistic multicast schedulers and prove their optimality in terms of both utility and fairness under heterogeneous channel conditions and user mobility . extensive simulation results are provided to demonstrate that these algorithms can not only substantially boost the throughput of all users ( e.g. , by 50 % to 150 % ) , but also achieve different notions of fairness among individual users and groups of users . story_separator_special_tag the evolution of cellular wireless communications has involved the introduction of technologies such as multiple antennas , ofdm , higher spectral efficiency through better modulation , denser deployments and carrier aggregation . a different approach to enhancing the cellular network by using direct communication between ues is presented in this paper . direct device-to-device ( d2d ) communication can be used for several purposes including network traffic offloading , public safety , and social applications such as gaming . the architectural and protocol enhancements required to extend the current 3gpp lte-advanced system to incorporate d2d communication are described , including the logical functions of a d2d server in the core network , the procedures for devices to discover each other and obtain d2d services , the steps involved in establishing and maintaining a d2d call , and procedures for efficient mobility between a traditional cellular mode and a d2d mode of operation .
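the coded caching abstract above contrasts a local and a global gain . in the commonly cited formulation of that setup , with $ k $ users , $ n \geq k $ files and a per-user cache of $ m $ files ( the exact statement should be checked against the paper itself ) , the peak delivery rate of the coded scheme is

$$ r(m) \;=\; k \left( 1 - \frac{m}{n} \right) \cdot \frac{1}{1 + km/n} , $$

where $ k ( 1 - m/n ) $ is the uncoded rate after the local caching gain and $ 1/(1 + km/n) $ is the additional global gain from coded multicasting . for example , with $ k = n = 20 $ and $ m = 10 $ , the rate drops from 10 ( uncoded ) to 10/11 .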
story_separator_special_tag wireless technology advancements have made opportunistic scheduling a popular topic in recent times . however , opportunistic schedulers for wireless systems have been studied for nearly twenty years , but not implemented in real systems due to their high complexity and hardly achievable requirements . in contrast , today 's popularity of opportunistic schedulers extends to implementation proposals for next generation cellular technologies . motivated by such a novel interest towards opportunistic scheduling , we provide a taxonomy for opportunistic schedulers , which is based on the scheduling design objectives ; accordingly , we provide an extensive review of opportunistic scheduling proposals which have appeared in the literature over nearly two decades . the huge number of papers available in the literature propose different techniques to perform opportunistic scheduling , ranging from simple heuristic algorithms to complex mathematical models . some proposals are only designed to increase the total network capacity , while others enhance qos objectives such as throughput and fairness . interestingly , our survey helps to unveil two major issues : ( i ) the research in opportunistic scheduling is mature enough to jump from pure theory to implementation , and ( ii ) there are still under-explored and story_separator_special_tag we study the spatial distribution of transmit powers and signal to interference plus noise ratio ( sinr ) in device-to-device ( d2d ) networks . using homogeneous poisson point processes ( ppp ) , the cumulative distribution functions ( cdfs ) of the transmit power and sinr are analytically derived for a d2d network employing power control . then , computer simulations are performed for the same network architecture , and it is shown that device location modeling and analytical methods from stochastic geometry can enable us to obtain the transmit power and sinr distributions of a d2d network . story_separator_special_tag at the hamburg university of technology , germany , the modelling of communication networks using the omnet++ simulator and the inet framework is taught to master students . teaching the concepts of simulation and modelling while letting the students obtain hands-on experience during a 14-week ( 4h/week ) period ( single semester ) is a challenging task . the diversity of the pre-knowledge of the participating students and the duration of the course are the main challenges that need to be addressed when organising such a course . this paper discusses the structure of this course and the best practices followed . the course adopts a methodology where lectures on concepts are mixed with inet-based exercises that begin with simple topics and gradually move into advanced topics . story_separator_special_tag we study how direct communication within a group of devices , a cluster , can improve the performance of a conventional cellular system . the clusters are formed from devices that are close to and communicating with each other , for example , sharing data . the clusters share the radio resources with other devices in the system , thus creating a mixed network system comprising directly communicating devices and devices having radio links to and from the base stations . in this kind of system , the additional challenge is to decide when clusters shall use direct communication and when conventional cellular radio links to communicate with each other .
here , in addition to a description of the clustering concept , we provide new means to analyse achievable system performance when clustering communication is integrated into a cellular network and especially into an interference limited system . story_separator_special_tag device-to-device ( d2d ) communication underlaying a cellular network is considered to be a promising resource reuse technique , and it is developed primarily for local services . considering the differences in the quality of service ( qos ) between cellular and d2d users , the resource allocation scheme for d2d should flexibly allocate resources for d2d , rather than limiting one d2d user to share resources with one cellular user . a qos-based resource allocation scheme for d2d users in the context of an ofdm-based air interface is proposed . this scheme exploits the qos information , especially the rate target of users , to allocate sufficient resources as needed . in addition , the system efficiency can be improved as the proposed resource allocation scheme always selects the most efficient resources for d2d users . numerical results corroborate that the proposed scheme can fulfill d2d users ' qos requirements effectively , and meanwhile accommodate different qos requirements flexibly . story_separator_special_tag aura-net is a mobile communications system whose function realizes a new form of proximity-aware networking , and whose form points in the direction of a `` proximity-aware internetwork . '' the system is founded on an implementation of a `` wireless sense . '' the existence of such a sense , it is argued , is essential for realization of a vision of ubiquitous computing famously expounded by mark weiser [ 1 ] . moreover , current wireless technologies are ill-suited to enabling this vision . the proposed wireless technology ( flashlinq ) is described at a conceptual and tutorial level . story_separator_special_tag machine-to-machine ( m2m ) communications have emerged as a cutting edge technology for next-generation communications , and are undergoing rapid development and inspiring numerous applications . this article presents an investigation of the application of m2m communications in the smart grid . first , an overview of m2m communications is given . the enabling technologies and open research issues of m2m communications are also discussed . then we address the network design issue of m2m communications for a home energy management system ( hems ) in the smart grid . the network architecture for hems to collect status and power consumption demand from home appliances is introduced . then the optimal hems traffic concentration is presented and formulated as the optimal cluster formation . a dynamic programming algorithm is applied to obtain the optimal solution . the numerical results show that the proposed optimal traffic concentration can minimize the cost of hems . story_separator_special_tag this paper analyzes the applicability of existing communication technology to the smart grid . in particular it evaluates how networks , e.g . peer-to-peer ( p2p ) and decentralized virtual private networks ( vpn ) , can help set up an agent-based system . it is expected that applications on smart grid devices will become more powerful and be able to operate without a central control instance . we analyze which requirements agents and smart grid devices place on communication systems and validate promising approaches .
the main focus is to create a logical overlay network that provides direct communication between network nodes . we provide a comparison of different approaches to p2p networks and mesh-vpns . finally , the advantages of mesh-vpns for agent-based systems are worked out .
scintillation screens are widely used for transverse beam profile diagnostics at particle accelerators . the monitor principle relies on the fact that a charged particle crossing the screen material deposits energy which is converted into detectable light . the resulting photon emission leads to a direct image of the two-dimensional beam distribution and can be measured with standard optical techniques . simplicity and low cost make this kind of diagnostic very attractive . during the last years , scintillating screen monitors were mainly deployed in hadron and low energy electron machines . most recent experiences from modern linac-based light sources showed that optical transition radiation ( otr ) diagnostics commonly used as a standard profile measurement system might fail for high energy and high brilliance electron beams . this again makes the usage of scintillating screens very attractive . studies showed that the response of scintillating materials depends on many parameters such as particle energy , intensity , species and time structure of the beam . therefore , scintillating materials have to be tailored with respect to the application demands required at large accelerator facilities . measured properties , such as light yield or imaged beam shape , show a strong dependency story_separator_special_tag interactions of high-energy beam particles with residual gas offer a unique opportunity to measure the beam profile in a non-intrusive fashion . such a method was successfully pioneered at the lhcb experiment using a silicon microstrip vertex detector . during the recent large hadron collider shutdown at cern , a demonstrator beam-gas vertexing system based on eight scintillating-fibre modules was designed , constructed and installed on ring 2 to be operated as a pure beam diagnostics device . the detector signals are read out and collected with lhcb-type front-end electronics and a daq system consisting of a cpu farm . tracks and vertices will be reconstructed to obtain a beam profile in real time . here , first commissioning results are reported . the advantages and potential for future applications of this technique are discussed . story_separator_special_tag introduction to optics is now available in a re-issued edition from cambridge university press . designed to offer a comprehensive and engaging introduction to intermediate and upper level undergraduate physics and engineering students , this text also allows instructors to select specialized content to suit individual curricular needs and goals . specific features of the text , in terms of coverage beyond traditional areas , include extensive use of matrices in dealing with ray tracing , polarization , and multiple thin-film interference ; three chapters devoted to lasers ; a separate chapter on the optics of the eye ; and individual chapters on holography , coherence , fiber optics , interferometry , fourier optics , nonlinear optics , and fresnel equations .
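the matrix treatment of ray tracing mentioned in the optics text above is easy to demonstrate . a minimal sketch of paraxial ( abcd ) ray-transfer matrices , propagating a ray through free space and a thin lens in a standard 2f-2f imaging geometry ( the focal length and ray values are illustrative ) :

```python
import numpy as np

def free_space(d):
    # paraxial propagation over a distance d
    return np.array([[1.0, d], [0.0, 1.0]])

def thin_lens(f):
    # thin lens of focal length f
    return np.array([[1.0, 0.0], [-1.0 / f, 1.0]])

f = 100.0                       # mm
system = free_space(2 * f) @ thin_lens(f) @ free_space(2 * f)

ray_in = np.array([1.0, 0.02])  # height 1 mm, slope 0.02 rad
ray_out = system @ ray_in
print(system)                   # b element is 0: object plane imaged onto output plane
print(ray_out)                  # height -1 mm: unit magnification, inverted image
```

the vanishing b element of the composite matrix confirms the imaging condition , and a = -1 gives unit magnification with image inversion .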
story_separator_special_tag in one embodiment , a method includes receiving a first analog signal at a first input ; receiving a second analog signal at a second input ; mixing the first analog signal with a first oscillator signal having a first frequency ; mixing the second analog signal with a second oscillator signal having a second frequency ; converting a sum signal to a digital signal ; generating a first control signal based on a first digital value of a first function and the digital signal ; and generating a second control signal based on a second digital value of a second function and the digital signal . story_separator_special_tag mammals - order rodentia - family arvicolidae , in the virtual encyclopedia of spanish vertebrates , http : //www.vertebradosibericos.org/ . previous versions : 14-10-2004 ; 3-06-2005 ; 7-05-2007 ; 18-04-2008 ; 28-02-2012 story_separator_special_tag publisher summary this chapter presents a research review of television camera tubes . in an ideal tube , the sensitivity , resolution , and contrast discrimination would be limited only by the statistical fluctuations in the number of photons comprising the image . the tube itself should introduce no limitations in the process of generating the video signal . the use of a low-velocity scanning beam eliminates the spurious shading and loss of efficiency caused by the redistribution of secondary electrons in the iconoscope . since the electron beam is decelerated just prior to striking the target , a strong electric field in front of the target draws all photoelectrons and reflected beam electrons away from the target . a low-velocity electron beam from a gun whose cathode is held within a few volts of the target mesh scans the reverse side of the target , depositing electrons in the areas corresponding to the bright parts of the scene . the two-sided target has been a major fabrication problem as well as a source of some of the most desirable performance characteristics found in the image orthicon . the camera tubes based on photoconductivity are also elaborated . story_separator_special_tag image sensors have been in use for many years in the field of beam instrumentation . in particular cameras are widely used to take pictures of particle beams from which important parameters can be deduced . this paper will give an overview of the available image sensor technologies with particular focus on the aspects important for beam instrumentation : radiation hardness , high frame rates , fast shutters and low light intensities . the overview will also cover digital acquisition aspects including frame grabbers and digital cameras . story_separator_special_tag beam monitoring using cameras has evolved from qualitative beam observation to precision measurement . after a description of the two main tv standards , various sensors including tv tubes ( vidicon ) , solid state sensors ( interline and frame transfer ccds , cmos and cid x-y matrices ) , and fast shutter/intensifiers of the mcp type are reviewed . comparative resolution measurements for the various sensors are given . the two types of sensor acquisition hardware , frame grabbers and digital cameras , are described . finally , special image processing requirements for beam instrumentation are reviewed , including radiation hardness , spectral sensitivity , fast acquisition , and enlarged dynamic range .
story_separator_special_tag problem to be solved : to effectively expand a light receiving region , to secure infrared ray sensitivity , to improve quantum efficiency , to improve dynamic range by increasing capacitance , and to provide a charge coupled semiconductor device which can accommodate the miniaturization of an element . solution : n type channel regions 64a and 64b are separated from each other by a p type channel stopper region 63a , a p type potential barrier region 76 is formed on the channel regions 64a and 64b in such a manner that they come in contact with both sides of the channel stopper region 63a , and an n type polysilicon layer 75 is formed as an excess charge absorbing layer , in such a manner that it comes in contact with the surface of the channel stopper region 63a and each potential barrier region 76 . story_separator_special_tag charge transfer imaging devices are described which perform a pseudo-interlacing operation . a unit cell is provided which in its vertical dimension occupies the space corresponding to two lines in the display . means are provided for integrating charge under alternate phases of the charge transfer drive mechanism in alternate fields in order to shift the center of charge collection . the device may be in the form of an area imaging device of the frame transfer and store type , or a line imaging device . both charge coupled and bucket brigade devices may be constructed in accordance with the invention . story_separator_special_tag the design and fabrication of a 96-element 3-phase linear charge-coupled device are described . a transfer efficiency of 95 percent over 288 transfers at a 1-mhz clock rate was measured . the use of the device as an analog delay line is demonstrated and its imaging properties are illustrated with reproductions of black and white text and a picture with gray scale . the results demonstrate the feasibility of using self-scanned imaging devices in practical applications . configurations are presented for both an improved linear and an area imaging device . in both cases the problem of image smear , which occurs if stored charge is transferred along the light-sensitive region and if significant light integration takes place during this transfer , can be avoided . story_separator_special_tag this paper provides an overview of both ccd ( charge coupled device ) and cmos ( complementary metal oxide semiconductor ) imaging array technologies . ccds have been in existence for nearly 30 years and the technology has matured to the point where very large , consistent ( low numbers of defects ) devices can now be produced . however , ccds suffer from a number of drawbacks , including cost , complex power supplies and support electronics . cmos imaging arrays , on the other hand , are still in their infancy , but are set to develop rapidly and offer a number of potential benefits over ccds . this review provides an overview of both ccd and cmos imaging technology , and includes explanations of how images are captured and read out from the imaging arrays . also covered are issues such as performance characteristics , cost considerations and the future of imaging arrays . this review does not provide details of colour sensors , colour filter arrays and colour interpolation , etc. , as these will be the subject of a separate report .
introduction to ccds : the charge coupled device ( ccd ) was invented in story_separator_special_tag integration of 16 × 16 and 32 × 32 extended-gate ion-sensitive field-effect transistor ( isfet ) arrays with low-power consumption read-out circuitry has been reported . si3n4 film used as a ph sensitive layer is deposited on the 4 × 4 µm2 extended-gate electrodes of isfets in the integrated circuit by catalytic chemical vapor deposition ( cat-cvd ) . the average ph sensitivity is 41 mv/ph , and the distribution obeys a gaussian distribution with a standard deviation of 1.5 - 3.4 mv . the result is compared with that of a ph sensitivity measurement using an al2o3 layer formed by o2 plasma . story_separator_special_tag the basic principle of operation of each bit in a matrix of image detectors is described , together with the derivation of appropriate operating formulas . the necessary circuitry to make a functional two-dimensional array is also described , including the scanning circuitry integrally constructed with the photodetector matrix . the necessary design considerations for operation of the matrix are discussed in the context of the currently operating 10-by-10 array . extension of the principles to larger arrays is outlined in the two modes of array scanning considered , with special reference to spatial noise problems . the paper concludes with reference to other applications of the basic principle , such as card reading . story_separator_special_tag abstract : advanced charge transfer device ( ctd ) solid state array detectors offer a variety of powerful capabilities for improving spectrochemical analysis . the class of ctd detectors is divided into charge coupled devices ( ccds ) and charge-injection devices ( cids ) , with each subclass having different readout modes and capabilities . while both subclasses of ctd detectors , when properly operated , provide high quantum efficiency , ultra-low dark current , low readout noise , wide dynamic range and photon integration , cids and ccds possess differing capabilities suited to specific spectroscopic applications . performance characteristics of several selected devices are presented and contrasted with those of photomultiplier tubes , photodiode arrays and several other imaging detectors . the operating parameters of ctds including read noise , fixed pattern 'noise ' , binning , and integration are explained and evaluated for a variety of spectroscopic applications . techniques for expanding a detector 's operational dynamic range are discussed including random access integration ( rai ) , allowing optimization of the integration time for each different detector element based on the actual photon flux falling on each element during a specific measurement ; binning , allowing the story_separator_special_tag abstract we report on total ionizing dose effects on the x-ray soi pixel sensor , xrpix . xrpix has been developed as an imaging spectrometer for x-ray astronomical use in space . front- and back-illuminated ( fi and bi ) devices were irradiated with hard x-rays from an x-ray tube operated at 30 kv with a molybdenum target . we found that the degradation rate of the readout noise of the bi device was approximately three times slower than that of the fi device as a function of radiation exposure . those of both types of devices , however , were virtually identical when the readout noise was evaluated as a function of the absorbed dose at the buried oxide layer , d_box .
the pedestal and analog-to-digital conversion gain also displayed similar tendencies . these results demonstrate that bi type devices have a higher radiation tolerance as a focal plane sensor of an x-ray mirror and that the radiation tolerance of xrpix devices is governed by d_box . the readout noise was stable up to about 1 krad in d_box , increased by about 10 % at 10 krad in d_box , and continued to increase under further story_separator_special_tag the embedded local monitor board is a plug-on board to be used in lhc detectors for a range of different front-end control and monitoring tasks . it is based on the can serial bus system , is radiation tolerant and can be used in magnetic fields . the main features of the elmb are described and results of several radiation tests are presented . story_separator_special_tag the facility for antiproton and ion research ( fair ) with its wide range of beam parameters poses new challenges for standard beam instrumentation like precise beam imaging . to cover the various foreseen applications for standard scintillating screen based diagnostics , a new technical solution was required . cupid ( control unit for profile and image data ) is a new system for scintillating screen imaging , which is based on the data acquisition framework for fair . it includes digital image acquisition , remote control of the optical system ( focus and iris ; camera setup and power ) and a graphical user interface ( gui ) . cupid is also designed to work with different imaging devices like gige cameras or video cameras using frame grabber cards . in this paper we report on the first results with this novel system during routine beam operation . for imaging applications in the high radiation environment of the heavy ion synchrotrons , radiation-hard cameras are required . one possible candidate for such cameras at fair is the ccir megarad3 from thermo fisher scientific . we describe here our first results with this camera , which has been installed at the story_separator_special_tag the ionization profile monitor ( ipm ) in the sis18 is frequently used for machine development . the permanent availability and the elaborate software user interface make it easy and comfortable to use . in addition to the beam profile data the device records the data of synchrotron dc current , dipole ramp and accelerating rf properties . the trend curves of these data are shown correlated to the beam profile evolution for an entire synchrotron cycle from injection to extraction with 100 profiles/s . the reliable function is based on the optimized in-vacuum hardware design and the uv-light based calibration system . the permanent availability is based on the convenient software interface using the qt library . a new ipm generation was recently commissioned in the experimental storage ring ( esr ) at gsi and another at cosy at fz-julich . these monitors are enhancements of the heavy ion synchrotron ( sis18 ) multiwire ipm but are equipped with an especially developed large area 44 × 94 mm2 optical particle detector of rectangular shape that is read out by a digital camera through a viewport . story_separator_special_tag the beam induced fluorescence monitor was developed as a non-intercepting optical measurement device , dedicated to transverse beam profile monitoring at high current operation at the gsi heavy ion linear accelerator unilac . nowadays , bif monitors are installed at four different locations and handed over to the operating team as a standard diagnostic tool .
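the screen , ipm and bif monitors above all reduce a camera image to a transverse beam profile from which centre and width are extracted . a minimal sketch of that last step , fitting a gaussian to a projected profile , with synthetic data standing in for a real camera image ( the profile model and all numeric parameters are illustrative assumptions ) :

```python
import numpy as np
from scipy.optimize import curve_fit

def gauss(x, amp, mu, sigma, offset):
    return amp * np.exp(-0.5 * ((x - mu) / sigma) ** 2) + offset

# synthetic projected profile standing in for a camera image column sum
x = np.arange(256, dtype=float)                     # pixel index
rng = np.random.default_rng(1)
profile = gauss(x, 800.0, 130.0, 12.0, 50.0) + rng.normal(0, 10, x.size)

# rough starting values, then a least-squares fit for centre and rms width
p0 = (profile.max() - profile.min(), x[np.argmax(profile)], 10.0, profile.min())
popt, pcov = curve_fit(gauss, x, profile, p0=p0)
amp, mu, sigma, offset = popt
print(f"centre = {mu:.1f} px, rms width = {abs(sigma):.1f} px")
```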
story_separator_special_tag two high speed systems for spectrometer based frequency domain optical coherence tomography are presented . a device operating at 800 nm , based on the basler sprint cmos camera with line rates of up to 312,000 lps , and a device based on the goodrich sui lhd 1024 px camera at 1060 nm with 47,000 lps are applied in a clinical environment to normal subjects . the feasibility of clinical high and ultra high-resolution optical coherence tomography ( oct ) devices for retinal imaging at different wavelengths , capable of isotropic sampling with 70 to 600 frames per second at 512 depth scans/frame for widefield imaging and high density sampling at 1 gvoxel , is demonstrated .
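in the spectrometer-based frequency-domain oct systems above , depth information is recovered by fourier-transforming the spectral interferogram . a minimal sketch with a single reflector and a spectrum already sampled linearly in wavenumber ( real systems must first resample the spectrometer output to a linear k grid ; all values here are illustrative ) :

```python
import numpy as np

# spectral-domain oct toy model: one reflector at depth z produces a
# cosine fringe in wavenumber k; an fft of the spectrum recovers depth.
k = np.linspace(5.8e6, 6.3e6, 1024)        # wavenumber samples (1/m), ~1060 nm band
z = 0.4e-3                                 # reflector depth: 0.4 mm
spectrum = 1.0 + 0.5 * np.cos(2 * k * z)   # dc term + interference fringe

ascan = np.abs(np.fft.rfft(spectrum - spectrum.mean()))
dk = k[1] - k[0]
depth_axis = np.pi * np.arange(ascan.size) / (dk * k.size)  # fft bin -> depth
print(f"peak at {depth_axis[np.argmax(ascan)] * 1e3:.2f} mm")  # ~0.40 mm
```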
this paper includes both the motivation for multivariate quality control and a discussion of some of the techniques currently available . the emphasis focuses primarily on control charts and includes the t2-chart , the use of principal components and some recent developments , multivariate analogs of cusum charts and the use of the andrews procedure . some of the problems associated with multivariate acceptance sampling are presented , and the paper concludes with some recommendations for future research and development . story_separator_special_tag a new scheme for multivariate statistical quality control is investigated and characterized . the control scheme consists of three steps and it will identify any out-of-control samples , select the subset of variables that are out of control , and diagnose the out-of-control variables . a new control variable selection algorithm , the backward selection algorithm , and a new control variable diagnosis method , the hyperplane method , are proposed . it is shown by simulation that the control scheme is useful in cases where the process variables are correlated and where they are uncorrelated . story_separator_special_tag this article presents the design procedures and average run lengths for two multivariate cumulative sum ( cusum ) quality-control procedures . the first cusum procedure reduces each multivariate observation to a scalar and then forms a cusum of the scalars . the second cusum procedure forms a cusum vector directly from the observations . these two procedures are compared with each other and with the multivariate shewhart chart . other multivariate quality-control procedures are mentioned . robustness , the fast initial response feature for cusum schemes , and combined shewhart-cusum schemes are discussed . story_separator_special_tag the multivariate profile ( mp ) chart is a new control chart for simultaneous display of univariate and multivariate statistics . it is designed to analyze and display extended structures of statistical process control data for various cases of grouping , reference distribution , and use of nominal specifications . for each group of observations , the scaled deviations from reference values are portrayed together as a modified profile plot symbol . the vertical location of the symbol is determined by the multivariate distance of the vector of means from the reference values . the graphical display in the mp chart enjoys improved visual characteristics as compared with previously suggested methods . moreover , the perceptual tasks required by the use of the mp chart provide higher accuracy in retrieving the quantitative information . this graphical display is used to display other combined univariate and multivariate statistics , such as measures of dispersion , principal components , and cumulative sums story_separator_special_tag many quality control problems are multivariate in character since the quality of a given product or object consists simultaneously of more than one variable . a good multivariate quality control pro . story_separator_special_tag cumulative sum ( cusum ) procedures are among the most powerful tools for detecting a shift from a good quality distribution to a bad quality distribution . this article discusses the natural application of cusum procedures to the multivariate normal distribution . it discusses two cases , detecting a shift in the mean vector and detecting a shift in the covariance matrix .
as an example , the procedure is applied to measurements taken on optical fibers . story_separator_special_tag statistical process control methods for monitoring processes with multivariate measurements in both the product quality variable space and the process variable space are considered . traditional multivariate control charts based on chi-square and t2 statistics . story_separator_special_tag the most popular multivariate process monitoring and control procedure used in industry is the chi-square control chart . as with most shewhart-type control charts , the major disadvantage of the chi-square control chart is that it only uses the information contained in the most recently inspected sample ; as a consequence , it is not very efficient in detecting gradual or small shifts in the process mean vector . during the last decades , the performance improvement of the chi-square control chart has attracted continuous research interest . in this paper we introduce a simple modification of the chi-square control chart which makes use of the notion of runs to improve the sensitivity of the chart in the case of small and moderate process mean vector shifts . story_separator_special_tag a review of the literature on control charts for multivariate quality control ( mqc ) is given , with a concentration on developments occurring since the mid-1980s . multivariate cumulative sum ( cusum ) control procedures and a multivariate exponentially weighted moving average ( ewma ) control chart are reviewed and recommendations are made regarding their use . several recent articles that give methods for interpreting an out-of-control signal on a multivariate control chart are analyzed and discussed . other topics such as the use of principal components and regression adjustment of variables in mqc , as well as frequently used approximations in mqc , are discussed . story_separator_special_tag a multivariate exponentially weighted moving average control chart can be used to improve the detection of small shifts in multivariate statistical process control . recommendations are provided for the selection of parameters for such a chart . the recom . story_separator_special_tag this applied , self-contained text provides detailed coverage of the practical aspects of multivariate statistical process control ( mvspc ) based on the application of hotelling 's t2 statistic . mvspc is the application of multivariate statistical techniques to improve the quality and productivity of an industrial process . the authors , leading researchers in this area who have developed major software for this type of charting procedure , provide valuable insight into the t2 statistic . intentionally including only a minimal amount of theory , they lead readers through the construction and monitoring phases of the t2 control statistic using numerous industrial examples taken primarily from the chemical and power industries . these examples are applied to the construction of historical data sets to serve as a point of reference for the control procedure and are also applied to the monitoring phase , where emphasis is placed on signal location and interpretation in terms of the process variables .
specifically devoted to the t2 methodology , multivariate statistical process control with industrial applications is the only book available that concisely and thoroughly presents such topics as how to construct a historical data set ; how to check the necessary story_separator_special_tag cumulative sum ( cusum ) control charts have been widely used for monitoring the process mean . relatively little attention has been given to the use of cusum charts for monitoring the process variance . the properties of cusum charts based on the logarithm . story_separator_special_tag a persistent problem in multivariate control chart procedures is the interpretation of a signal . determining which variable or group of variables is contributing to the signal can be a difficult task for the practitioner . however , a procedure for decomp . story_separator_special_tag the productivity of an industrial processing unit often depends on equipment that changes over time . these changes may not be consistent , and , in many cases , may appear to occur in stages . although changes in the process levels within each stage may ap . story_separator_special_tag the t2 statistic in multivariate process control is a function of the residuals taken from a set of linear regressions of the process variables . these residuals are contained in the conditional t2 terms of the orthogonal decomposition of the statistic . story_separator_special_tag abstract the identification of the out of control variable , or variables , after a multivariate control chart signals , has been an appealing subject for many researchers in recent years . in this paper we propose a new method for approaching this problem based on principal components analysis . theoretical control limits are derived and a detailed investigation of the properties and the limitations of the new method is given . a graphical technique which can be applied in some of these limiting situations is also provided . story_separator_special_tag data visualization tools can provide very powerful information and insight when performing data analysis . in many situations , a set of data can be adequately analyzed through data visualization methods alone . in other situations , data visualization can be used for preliminary data analysis . in this paper , radial plots are developed as a sas-based data visualization tool that can improve one 's ability to monitor , analyze and control a process . using the program developed in this research , we present two examples of data analysis using radial plots ; the first example is based on data from a particle board manufacturing process and the second example is a business process for monitoring the time-varying level of stock return data . story_separator_special_tag we consider several distinct approaches for controlling the mean of a multivariate normal process including two new and distinct multivariate cusum charts , several multiple univariate cusum charts , and a shewhart chi-square control chart . the performances of th . story_separator_special_tag objective : to propose a computationally simple , fast , and reliable temporal method for early event detection in multiple data streams . introduction : current biosurveillance systems run multiple univariate statistical process control ( spc ) charts to detect increases in multiple data streams [ 1 ] . the method of using multiple univariate spc charts is easy to implement and easy to interpret .
by examining alarms from each control chart , it is easy to identify which data stream is causing the alarm . however , testing multiple data streams simultaneously can lead to multiple testing problems that inflate the combined false alarm probability . although methods such as the bonferroni correction can be applied to address the multiple testing problem by lowering the false alarm probability in each control chart , these approaches can be extremely conservative . biosurveillance systems often make use of variations of popular univariate spc charts such as the shewhart chart , the cumulative sum chart ( cusum ) , and the exponentially weighted moving average chart ( ewma ) . in these control charts an alarm is signaled when the charting statistic exceeds a pre-defined control limit . with the standard spc charts , the false alarm rate is specified using the story_separator_special_tag handbook of research methods in child development by paul henry mussen ( editor ) . new york : john wiley & sons , 1960. pp . vii + 1061 . $ 15.25 . on the premise that research in child development wants improvement in both quality and quantity , this book presents 22 chapters of help . counting the editor , there are 31 authors who , together with editorial consultants , comprise a partial who 's who in this domain . the work was to emphasize method over findings ; to be delimited to techniques actually employed ; and to exclude the clinical and anything otherwise adequately described , such as mental testing and the processing of statistical hypotheses . in general the delimitations were maintained , to the book 's advantage . the desired emphasis on method over content was difficult story_separator_special_tag abstract the use of multivariate quality control techniques is usually avoided by practitioners because of the complexity involved in the design , implementation , and maintenance of the control system . in this paper a new approach to multivariate control problems is proposed , the simulated minimax control chart . the new control chart consists of placing upper and lower control limits on the maximum and the minimum of the p correlated variables ' standardized sample means such that the chart has a fixed probability of type i error . the position of the control limits is determined by simulating the samples taken from a multivariate normal population . a comparison of the performance of the simulated minimax control chart and the chi-squared control chart in terms of the average run length ( arl ) is provided for two scenarios ( n = 5 , p = 2 , ρ = 0 and n = 5 , p = 2 , ρ = 0.5 ) under different shifts in the mean . the results show that the simulated minimax control chart has excellent arl properties as compared to the chi-squared control chart . thus , the simulated minimax control chart provides practitioners the advantage of interpreting the signals right from the chart , story_separator_special_tag summary in this article , we present a method for monitoring multivariate process data based on the gabriel biplot . in contrast to existing methods that are based on some form of dimension reduction , we use reduction to two dimensions for displaying the state of the process but all the data for determining whether it is in a state of statistical control . this approach allows us to detect changes in location , variation , and correlational structure accurately yet display a large amount of information concisely .
we illustrate the use of the biplot on an example of industrial data and also discuss some of the issues related to a practical implementation of the method . story_separator_special_tag multivariate quality control problems involve the evaluation of a process based on the simultaneous behavior of p variables . most multivariate quality control procedures evaluate the in-control or out-of-control condition based upon an overall statistic . story_separator_special_tag when p correlated process characteristics are being measured simultaneously , often individual observations are initially collected . the process data are monitored and special causes of variation are identified in order to establish control and to obtain . story_separator_special_tag abstract this paper discusses contribution plots for both the d-statistic and the q-statistic in multivariate statistical process control of batch processes . contributions of process variables to the d-statistic are generalized to any type of latent variable model with or without orthogonality constraints . the calculation of contributions to the q-statistic is discussed . control limits for both types of contributions are introduced to show the relative importance of a contribution compared to the contributions of the corresponding process variables in the batches obtained under normal operating conditions . the contributions are introduced for off-line monitoring of batch processes , but can easily be extended to on-line monitoring and to continuous processes , as is shown in this paper . story_separator_special_tag the performance of a product often depends on several quality characteristics . these characteristics may have interactions . in answering the question `` is the process in control ? '' , multivariate statistical process control methods take these interactions into account . in this paper , we review several of these multivariate methods and point out where to fill up gaps in the theory . the review includes multivariate control charts , multivariate cusum charts , a multivariate ewma chart , and multivariate process capability indices . the most important open question from a practical point of view is how to detect the variables that caused an out-of-control signal . theoretically , the statistical properties of the methods should be investigated more profoundly . story_separator_special_tag an overview is given of current research on control charting methods for process monitoring and improvement . a historical perspective and ideas for future research also are given . research topics include : variable sample size and sampling interval met . story_separator_special_tag it is a common practice to use , simultaneously , several one-sided or two-sided cusum procedures of the type proposed by page ( 1954 ) . in this article , this method of control is considered to be a single multivariate cusum ( mcusum ) procedure . methods are given for approximating parameters of the distribution of the minimum of the run lengths of the univariate cusum charts . using a new method of comparing multivariate control charts , it is shown that an mcusum procedure is often preferable to hotelling 's t2 procedure for the case in which the quality characteristics are bivariate normal random variables .
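most of the multivariate charts discussed in this block start from the same quadratic-form statistic . a minimal sketch of a phase-ii chi-square ( known-parameter hotelling t2 ) chart for p = 2 correlated characteristics , with an artificial mean shift injected halfway through ( the target mean , covariance and shift values are illustrative ) :

```python
import numpy as np
from scipy.stats import chi2

# chi-square control chart for p = 2 correlated quality characteristics,
# monitored one observation at a time with known in-control parameters.
mu0 = np.array([10.0, 5.0])
sigma0 = np.array([[1.0, 0.7], [0.7, 1.0]])
sigma_inv = np.linalg.inv(sigma0)
ucl = chi2.ppf(0.9973, df=2)          # in-control false-alarm rate ~ 0.27 %

rng = np.random.default_rng(7)
for t in range(20):
    shift = np.array([2.5, 0.0]) if t >= 10 else 0.0   # mean shift at t = 10
    x = rng.multivariate_normal(mu0 + shift, sigma0)
    d = x - mu0
    t2 = d @ sigma_inv @ d            # quadratic-form distance from target
    if t2 > ucl:
        print(f"sample {t}: t2 = {t2:.2f} > ucl = {ucl:.2f} -> signal")
```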
this paper gives a survey of the relationship between the fields of cryptography and machine learning , with an emphasis on how each field has contributed ideas and techniques to the other . some suggested directions for future cross-fertilization are also proposed . story_separator_special_tag the problem of privacy-preserving data analysis has a long history spanning multiple disciplines . as electronic data about individuals becomes increasingly detailed , and as technology enables ever more powerful collection and curation of these data , the need increases for a robust , meaningful , and mathematically rigorous definition of privacy , together with a computationally rich class of algorithms that satisfy this definition . differential privacy is such a definition . after motivating and discussing the meaning of differential privacy , the preponderance of this monograph is devoted to fundamental techniques for achieving differential privacy , and application of these techniques in creative combinations , using the query-release problem as an ongoing example . a key point is that , by rethinking the computational goal , one can often obtain far better results than would be achieved by methodically replacing each step of a non-private computation with a differentially private implementation . despite some astonishingly powerful computational results , there are still fundamental limitations not just on what can be achieved with differential privacy but on what can be achieved with any method that protects against a complete breakdown in privacy . virtually all the algorithms discussed herein maintain differential story_separator_special_tag private companies , government entities , and institutions such as hospitals routinely gather vast amounts of digitized personal information about the individuals who are their customers , clients , or patients . much of this information is private or sensitive , and a key technological challenge for the future is how to design systems and processing techniques for drawing inferences from this large-scale data while maintaining the privacy and security of the data and individual identities . individuals are often willing to share data , especially for purposes such as public health , but they expect that their identity or the fact of their participation will not be disclosed . in recent years , there have been a number of privacy models and privacy-preserving data analysis algorithms to answer these challenges . in this article , we will describe the progress made on differentially private machine learning and signal processing . story_separator_special_tag privacy-preserving multi-party machine learning allows multiple organizations to perform collaborative data analytics while guaranteeing the privacy of their individual datasets . using trusted sgx-processors for this task yields high performance , but requires a careful selection , adaptation , and implementation of machine-learning algorithms to provably prevent the exploitation of any side channels induced by data-dependent access patterns . we propose data-oblivious machine learning algorithms for support vector machines , matrix factorization , neural networks , decision trees , and k-means clustering . we show that our efficient implementation based on intel skylake processors scales up to large , realistic datasets , with overheads several orders of magnitude lower than with previous approaches based on advanced cryptographic multi-party computation schemes .
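the fundamental technique referred to in the differential privacy monograph above can be illustrated with the laplace mechanism for a counting query . a minimal sketch ( the dataset and epsilon values are illustrative ; a counting query has l1 sensitivity 1 , so laplace noise of scale 1/epsilon suffices ) :

```python
import numpy as np

def laplace_count(data, predicate, epsilon, rng):
    # epsilon-differentially private count: one person changes the true
    # count by at most 1, so noise with scale 1/epsilon gives epsilon-dp.
    true_count = sum(1 for row in data if predicate(row))
    return true_count + rng.laplace(loc=0.0, scale=1.0 / epsilon)

rng = np.random.default_rng(3)
ages = [23, 35, 41, 29, 62, 57, 44, 31]
for eps in (0.1, 1.0, 10.0):
    noisy = laplace_count(ages, lambda a: a >= 40, eps, rng)
    print(f"epsilon = {eps}: noisy count = {noisy:.1f}")   # true count is 4
```

smaller epsilon means stronger privacy and noisier answers ; the trade-off is the central theme of the privacy-preserving learning abstracts in this block .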
story_separator_special_tag the emergence of cloud computing brings users abundant opportunities to utilize the power of cloud to perform computation on data contributed by multiple users . these cloud data should be encrypted under multiple keys due to privacy concerns . however , existing secure computation techniques are either limited to a single key or still far from practical . in this paper , we design two efficient schemes for secure outsourced computation over cloud data encrypted under multiple keys . our schemes employ two non-colluding cloud servers to jointly compute polynomial functions over multiple users ' encrypted cloud data without learning the inputs , intermediate or final results , and require only minimal interactions between the two cloud servers but not the users . we demonstrate our schemes ' efficiency experimentally via applications in machine learning . our schemes are also applicable to privacy-preserving data aggregation such as in smart metering . story_separator_special_tag machine learning is widely used in practice to produce predictive models for applications such as image processing , speech and text recognition . these models are more accurate when trained on large amounts of data collected from different sources . however , the massive data collection raises privacy concerns . in this paper , we present new and efficient protocols for privacy preserving machine learning for linear regression , logistic regression and neural network training using the stochastic gradient descent method . our protocols fall in the two-server model where data owners distribute their private data among two non-colluding servers who train various models on the joint data using secure two-party computation ( 2pc ) . we develop new techniques to support secure arithmetic operations on shared decimal numbers , and propose mpc-friendly alternatives to non-linear functions such as sigmoid and softmax that are superior to prior work . we implement our system in c++ . our experiments validate that our protocols are several orders of magnitude faster than the state of the art implementations for privacy preserving linear and logistic regressions , and scale to millions of data samples with thousands of features . we also implement the first story_separator_special_tag we design a novel , communication-efficient , failure-robust protocol for secure aggregation of high-dimensional data . our protocol allows a server to compute the sum of large , user-held data vectors from mobile devices in a secure manner ( i.e . without learning each user 's individual contribution ) , and can be used , for example , in a federated learning setting , to aggregate user-provided model updates for a deep neural network . we prove the security of our protocol in the honest-but-curious and active adversary settings , and show that security is maintained even if an arbitrarily chosen subset of users drop out at any time . we evaluate the efficiency of our protocol and show , by complexity analysis and a concrete implementation , that its runtime and communication overhead remain low even on large data sets and client pools . for 16-bit input values , our protocol offers 1.73 × communication expansion for 2^10 users and 2^20-dimensional vectors , and 1.98 × expansion for 2^14 users and 2^24-dimensional vectors over sending data in the clear .
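the cancellation idea at the core of the secure aggregation abstract above can be sketched in a few lines : each pair of users agrees on a seed , the lower-id user adds the derived mask and the higher-id user subtracts it , so every mask vanishes in the server 's sum . this toy sketch omits the key agreement , dropout recovery and authentication that the actual protocol provides , and the prg stand-in and all sizes are illustrative :

```python
import numpy as np

DIM, MOD = 8, 2**16          # vector length, 16-bit modular arithmetic

def mask_stream(seed):
    # stand-in prg: expands a shared pairwise seed into a mask vector
    return np.random.default_rng(seed).integers(0, MOD, DIM)

users = {1: np.arange(DIM), 2: np.ones(DIM, dtype=int), 3: np.full(DIM, 5)}
seeds = {(u, v): hash((u, v)) % 2**32 for u in users for v in users if u < v}

masked = {}
for u, x in users.items():
    y = x.copy() % MOD
    for (a, b), s in seeds.items():
        if u == a:
            y = (y + mask_stream(s)) % MOD   # lower id adds the mask ...
        elif u == b:
            y = (y - mask_stream(s)) % MOD   # ... higher id subtracts it
    masked[u] = y

total = sum(masked.values()) % MOD           # pairwise masks cancel here
print(total)                                 # equals the sum of raw vectors
print(sum(users.values()) % MOD)
```

each individual masked vector looks uniformly random to the server , yet the aggregate is exact , which is precisely the property the federated learning setting needs .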
story_separator_special_tag distributed secure quantum machine learning ( dsqml ) enables a classical client with little quantum technology to delegate remote quantum machine learning to a quantum server with the privacy of the data preserved . moreover , dsqml can be extended to the more general case in which the client does not have enough data , and resorts to both the remote quantum server and remote databases to perform the secure machine learning . here we propose a dsqml protocol with which the client can classify two-dimensional vectors into different clusters , resorting to a remote small-scale photon quantum computation processor . the protocol is secure without leaking any relevant information to eve . any eavesdropper who attempts to intercept and disturb the learning process can be noticed . in principle , this protocol can be used to classify high dimensional vectors and may provide a new viewpoint and application for future big data . story_separator_special_tag the main promise of quantum computing is to efficiently solve certain problems that are prohibitively expensive for a classical computer . most problems with a proven quantum advantage involve the repeated use of a black box , or oracle , whose structure encodes the solution . one measure of the algorithmic performance is the query complexity , i.e. , the scaling of the number of oracle calls needed to find the solution with a given probability . few-qubit demonstrations of quantum algorithms , such as deutsch-jozsa and grover , have been implemented across diverse physical systems such as nuclear magnetic resonance , trapped ions , optical systems , and superconducting circuits . however , at the small scale , these problems can already be solved classically with a few oracle queries , limiting the obtained advantage . here we solve an oracle-based problem , known as learning parity with noise , on a five-qubit superconducting processor . executing classical and quantum algorithms using the same oracle , we observe a large gap in query count in favor of quantum processing . we find that this gap grows by orders of magnitude as a function of the error rates and story_separator_special_tag this thesis is a study of the computational complexity of machine learning from examples in the distribution-free model introduced by l. g. valiant ( v84 ) . in the distribution-free model , a learning algorithm receives positive and negative examples of an unknown target set ( or concept ) that is chosen from some known class of sets ( or concept class ) . these examples are generated randomly according to a fixed but unknown probability distribution representing nature , and the goal of the learning algorithm is to infer a hypothesis concept that closely approximates the target concept with respect to the unknown distribution . this thesis is concerned with proving theorems about learning in this formal mathematical model . we are interested in the phenomenon of efficient learning in the distribution-free model , in the standard polynomial-time sense . our results include general tools for determining the polynomial-time learnability of a concept class , an extensive study of efficient learning when errors are present in the examples , and lower bounds on the number of examples required for learning in our model . a centerpiece of the thesis is a series of results demonstrating the computational difficulty of story_separator_special_tag some security protocols or mechanisms have been designed for wireless sensor networks ( wsns ) .
however , an intrusion detection system ( ids ) should always be deployed in security critical applications to provide defense in depth . due to resource constraints , intrusion detection systems for traditional networks can not be used directly in wsns . several schemes have been proposed to detect intrusions in wireless sensor networks , but most of them aim at some specific attacks ( e.g . selective forwarding ) or attacks on particular layers , such as the media access layer or routing layer . in this paper , we present a framework for a machine learning based intrusion detection system for wireless sensor networks . our system will not be limited to particular attacks , while the machine learning algorithm helps to build the detection model from training data automatically , which saves the human labor of writing signatures of attacks or specifying the normal behavior of a sensor node . story_separator_special_tag in recent years , intrusion detection has emerged as an important technique for network security . due to the large volumes of security audit data as well as the complex and dynamic properties of intrusion behaviors , optimizing the performance of intrusion detection systems ( idss ) becomes an important open problem . in this paper , a general framework of adaptive intrusion detection based on machine learning is presented . in the framework , three perspectives of challenging problems are explored , which include feature extraction , classifier construction and pattern prediction for sequential data . it is illustrated that the three perspectives of research challenges are mainly suitable for machine learning methods using unsupervised , supervised and reinforcement learning algorithms , respectively . several recently developed machine learning algorithms , including a multi-class support vector machine with principal component analysis ( pca ) for feature reduction and a reinforcement learning algorithm for sequential prediction , are applied and evaluated both on network-based traffic data and on host-based program behaviors . experiments on the kdd99 intrusion detection data set and the system call data from the university of new mexico show very promising results for the machine learning approaches to story_separator_special_tag the rapid growth of internet usage has caused a problem for the internet protocol address space . to solve the address space issue of internet protocol version 4 , internet protocol version 6 was created to expand the availability of address spaces . internet protocol version 6 is designed to overcome the main limitations of internet protocol version 4 , including the lack of security and the exhaustion of internet protocol address space . internet protocol version 6 protocols are not as well supported by network intrusion detection systems as is the case with internet protocol version 4 protocols . several data mining techniques have been introduced to improve the classification mechanism of intrusion detection systems . in addition , extensive research indicates that there are no intrusion detection systems for internet protocol version 6 using advanced machine-learning techniques to ward off distributed denial of service attacks . with the increasing adoption of internet protocol version 6 , security issues unique to internet protocol version 6 become more urgent to address . unlike internet protocol version 4 , internet protocol version 6 relies on internet control message protocol version 6 in neighbor discovery .
this means that blocking internet control message protocol version 6 traffic story_separator_special_tag a method for identifying a botnet in a network , including analyzing historical network data using a pre-determined heuristic to determine values of a feature in the historical network data , obtaining a ground truth data set having labels assigned to data units in the historical network data identifying known malicious nodes in the network , analyzing the historical network data and the ground truth data set using a machine learning algorithm to generate a model representing the labels as a function of the values of the feature , analyzing real-time network data using the pre-determined heuristic to determine a value of the feature for a data unit in the real-time network data , assigning a label to the data unit by applying the model to the value of the feature , and categorizing the data unit as associated with the botnet based on the label . story_separator_special_tag with the rapid rise in the ubiquity and sophistication of internet technology and the accompanying growth in the number of network attacks , network intrusion detection has become increasingly important . anomaly-based network intrusion detection refers to finding exceptional or nonconforming patterns in network traffic data compared to normal behavior . finding these anomalies has extensive applications in areas such as cyber security , credit card and insurance fraud detection , and military surveillance for enemy activities . network anomaly detection : a machine learning perspective presents machine learning techniques in depth to help you more effectively detect and counter network intrusion . in this book , you 'll learn about : network anomalies and vulnerabilities at various layers ; the pros and cons of various machine learning techniques and algorithms ; a taxonomy of attacks based on their characteristics and behavior ; feature selection algorithms ; how to assess the accuracy , performance , completeness , timeliness , stability , interoperability , reliability , and other dynamic aspects of a network anomaly detection system ; practical tools for launching attacks , capturing packet or flow traffic , extracting features , detecting attacks , and evaluating detection performance ; important unresolved issues and research challenges that need story_separator_special_tag malware analysis forms a critical component of cyber defense mechanisms . in the last decade , a lot of research has been done using machine learning methods on both static and dynamic analysis . since the aim and objective of malware developers have changed from mere fame to political espionage or financial gain , malware is also evolving in its form and infection methods . one of the latest forms of malware is known as targeted malware , on which not much research has happened . targeted malware , which is a superset of the advanced persistent threat ( apt ) , is growing in volume and complexity in recent years . targeted cyber attacks ( through targeted malware ) play an increasingly malicious role in disrupting online social and financial systems . apts are designed to steal corporate / national secrets and/or harm national/corporate interests . it is difficult to recognize targeted malware with antivirus , ids , ips and custom malware detection tools . attackers leverage compelling social engineering techniques along with one or more zero day vulnerabilities for deploying apts .
along with these , the recent introduction of crypto story_separator_special_tag the proliferation of android-based mobile devices and mobile applications in the market has led malware authors to make mobile devices the next profitable target . with users now able to use mobile devices for various purposes such as web browsing , ubiquitous services , online banking , social networking , mms , etc. , more credential information is exposed to exploitation . applying a security solution that works in the desktop environment to mobile devices may not be proper , as mobile devices have limited storage , memory , cpu and power . hence , there is a need to develop mobile malware detection that can provide an effective solution to defend the mobile user from any malicious threat and at the same time address the limitations of the mobile device environment . to this end , this research focused on evaluating the best feature selection to be used with the best machine-learning classifiers . to find the best combination of both feature selection and classifier , five sets of different feature selections are applied to five different machine learning classifiers . the classifier outcome is evaluated using the true positive rate ( tpr story_separator_special_tag recent advances in cryptography promise to enable secure statistical computation on encrypted data , whereby a limited set of operations can be carried out without the need to first decrypt . we review these homomorphic encryption schemes in a manner accessible to statisticians and machine learners , focusing on pertinent limitations inherent in the current state of the art . these limitations restrict the kinds of statistics and machine learning algorithms which can be implemented and we review those which have been successfully applied in the literature . finally , we document a high performance r package implementing a recent homomorphic scheme in a general framework . story_separator_special_tag machine learning ( ml ) is a well-studied strategy in modeling physical unclonable functions ( pufs ) but reaches its limits when applied to instances of high complexity . to address this issue , side-channel attacks have recently been combined with modeling techniques to make attacks more efficient [ 25 ] [ 26 ] . in this work , we present an overview and survey of these so-called hybrid modeling and side-channel attacks on pufs , as well as of classical side channel techniques for pufs . a taxonomy is proposed based on the characteristics of different side-channel attacks . the practical reach of some published side-channel attacks is discussed . both challenges and opportunities for puf attackers are introduced . countermeasures against certain side-channel attacks are also analyzed . to better understand the side-channel attacks on pufs , three different methodologies of implementing side-channel attacks are compared . at the end of this paper , we bring forward some open problems for this research area . story_separator_special_tag novel methods , components , and systems for detecting malicious software in a proactive manner are presented . more specifically , we describe methods , components , and systems that leverage machine learning techniques to detect malicious software . the disclosed invention provides a significant improvement with regard to detection capabilities compared to previous approaches .
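the ml-based detection systems above ( the ids framework , the botnet labeler and the malware detector ) share one pipeline : extract features , train a supervised classifier on labeled data , then score new samples . a minimal sketch with synthetic stand-in features ; the feature semantics , class balance and model choice are all illustrative assumptions , not details from any of the abstracts :

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

# synthetic stand-in for per-sample features (e.g. packet count, mean size,
# duration, port entropy); real systems would extract these from traffic
# traces or binaries.
rng = np.random.default_rng(0)
benign = rng.normal(loc=0.0, scale=1.0, size=(500, 4))
malicious = rng.normal(loc=1.2, scale=1.0, size=(500, 4))
X = np.vstack([benign, malicious])
y = np.array([0] * 500 + [1] * 500)      # 0 = benign, 1 = malicious

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
print(classification_report(y_te, clf.predict(X_te),
                            target_names=["benign", "malicious"]))
```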
story_separator_special_tag web applications make life more convenient through online activities . many web applications have several kinds of user input ( e.g . personal information , a user 's comments on commercial goods , etc . ) for these activities . however , there are various vulnerabilities in input functions of web applications . the free accessibility of web applications makes it possible to attempt malicious actions . attacks exploiting these input vulnerabilities can be performed by injecting malicious web code , which enables one to perform various illegal actions , such as sql injection attacks ( sqlias ) and cross site scripting ( xss ) . these actions amount to theft or replacement of personal information , or phishing . many solutions have been devised for the malicious web code , such as amnesia [ 1 ] and sql check [ 2 ] , etc . these methods use a parser for the code , are limited to fixed and very small patterns , and are difficult to adapt to variations . machine learning methods can give leverage to cover a far broader range of malicious web code and are easy to adapt to variations and changes story_separator_special_tag machine learning evolved from a collection of powerful techniques in ai and has been extensively used in data mining , which allows the system to learn the useful structural patterns and models from training data . machine learning algorithms can be basically classified into four categories : supervised , unsupervised , semi-supervised and reinforcement learning . in this chapter , widely-used machine learning algorithms are introduced . each algorithm is briefly explained with some examples . story_separator_special_tag mutual learning of a pair of tree parity machines with continuous and discrete weight vectors is studied analytically . the analysis is based on a mapping procedure that maps the mutual learning in tree parity machines onto mutual learning in noisy perceptrons . the stationary solution of the mutual learning in the case of continuous tree parity machines depends on the learning rate where a phase transition from partial to full synchronization is observed . in the discrete case the learning process is based on a finite increment and a fully synchronized state is achieved in a finite number of steps . the synchronization of discrete parity machines is introduced in order to construct an ephemeral key-exchange protocol . the dynamic learning of a third tree parity machine ( an attacker ) that tries to imitate one of the two machines while the two still update their weight vectors is also analyzed . in particular , the synchronization times of the naive attacker and the flipping attacker recently introduced in ref . 9 are analyzed . all analytical results are found to be in good agreement with simulation results . story_separator_special_tag two neural networks that are trained on their mutual output synchronize to an identical time-dependent weight vector . this novel phenomenon can be used for the creation of a secure cryptographic secret key using a public channel . several models for this cryptographic system have been suggested , and have been tested for their security under different sophisticated attack strategies . the most promising models are networks that involve chaos synchronization . the synchronization process of mutual learning is described analytically using statistical physics methods .
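the tree parity machine synchronization underlying the key-exchange protocols above can be sketched in a few lines : both machines receive the same public random inputs and apply a hebbian update only when their outputs agree , repeating until their weight vectors coincide . the parameters k , n , l below are illustrative , not the values studied in the papers .

```python
import numpy as np

K, N, L = 3, 10, 3          # hidden units, inputs per unit, weight bound
rng = np.random.default_rng(0)

def output(w, x):
    # per-hidden-unit sign, then the parity (product) as the machine output
    sigma = np.sign(np.sum(w * x, axis=1))
    sigma[sigma == 0] = -1                  # break ties
    return sigma, int(np.prod(sigma))

wA = rng.integers(-L, L + 1, (K, N))
wB = rng.integers(-L, L + 1, (K, N))

steps = 0
while steps < 100_000 and not np.array_equal(wA, wB):
    x = rng.choice([-1, 1], (K, N))         # public random input
    sA, tA = output(wA, x)
    sB, tB = output(wB, x)
    if tA == tB:                            # hebbian update only on agreement
        for w, s, t in ((wA, sA, tA), (wB, sB, tB)):
            for k in range(K):
                if s[k] == t:               # only units that voted with the output
                    w[k] = np.clip(w[k] + t * x[k], -L, L)
    steps += 1

print(f"synchronized after {steps} exchanged outputs; key material: {wA.flatten()}")
```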
story_separator_special_tag in this paper , we explore the possibility that machine learning approaches to natural language processing ( nlp ) being developed in engineering-oriented computational linguistics ( cl ) may be able to provide specific scientific insights into the nature of human language . we argue that , in principle , machine learning ( ml ) results could inform basic debates about language , in one area at least , and that in practice , existing results may offer initial tentative support for this prospect . further , results from computational learning theory can inform arguments carried on within linguistic theory as well . story_separator_special_tag the objective of this work is to assess the robustness of machine learning based traffic classification for classifying encrypted traffic where ssh and skype are taken as good representatives of encrypted traffic . here what we mean by robustness is that the classifiers are trained on data from one network but tested on data from an entirely different network . to this end , five learning algorithms ( adaboost , support vector machine , naive bayesian , ripper and c4.5 ) are evaluated using flow based features , where ip addresses , source/destination ports and payload information are not employed . results indicate the c4.5 based approach performs much better than other algorithms on the identification of both ssh and skype traffic on totally different networks . story_separator_special_tag in this paper ( expanded from an invited talk at aisec 2010 ) , we discuss an emerging field of study : adversarial machine learning - the study of effective machine learning techniques against an adversarial opponent . in this paper , we : give a taxonomy for classifying attacks against online machine learning algorithms ; discuss application-specific factors that limit an adversary 's capabilities ; introduce two models for modeling an adversary 's capabilities ; explore the limits of an adversary 's knowledge about the algorithm , feature space , training , and input data ; explore vulnerabilities in machine learning algorithms ; discuss countermeasures against attacks ; introduce the evasion challenge ; and discuss privacy-preserving learning techniques . story_separator_special_tag steganography is the art of communicating a secret message , hiding the very existence of a secret message . this is typically done by hiding the message within a non-sensitive document . steganalysis is the art and science of detecting such hidden messages . the task in steganalysis is to take an object ( communication ) and classify it as either a steganogram or a clean document . most recent solutions apply classification algorithms from machine learning and pattern recognition , which tackle problems too complex for analytical solution by teaching computers to learn from empirical data . part 1 of the book is an introduction to steganalysis as part of the wider trend of multimedia forensics , as well as a practical tutorial on machine learning in this context . part 2 is a survey of a wide range of feature vectors proposed for steganalysis with performance tests and comparisons .
part 3 is an in-depth study of machine learning techniques and classifier algorithms , and presents a critical assessment of the experimental methodology and applications in steganalysis . key features : serves as a tutorial on the topic of steganalysis with brief introductions to much of the basic theory provided , and also presents a story_separator_special_tag we demonstrate that , by using a recently proposed leveled homomorphic encryption scheme , it is possible to delegate the execution of a machine learning algorithm to a computing service while retaining confidentiality of the training and test data . since the computational complexity of the homomorphic encryption scheme depends primarily on the number of levels of multiplications to be carried out on the encrypted data , we define a new class of machine learning algorithms in which the algorithm 's predictions , viewed as functions of the input data , can be expressed as polynomials of bounded degree . we propose confidential algorithms for binary classification based on polynomial approximations to least-squares solutions obtained by a small number of gradient descent steps . we present experimental validation of the confidential machine learning pipeline and discuss the trade-offs regarding computational complexity , prediction accuracy and cryptographic security . story_separator_special_tag cryptography is a process of protecting information and data from unauthorized access . nowadays , security is an important and basic issue while sending or receiving data over any network . cryptography is used to achieve availability , privacy and integrity . generally there are two categories of cryptography , i.e . symmetric and asymmetric . in this paper , we have proposed a new symmetric key algorithm based on counter propagation neural network ( cpn ) . story_separator_special_tag machine learning classification is used for numerous tasks nowadays , such as medical or genomics predictions , spam detection , face recognition , and financial predictions . due to privacy concerns , in some of these applications , it is important that the data and the classifier remain confidential . in this work , we construct three major classification protocols that satisfy this privacy constraint : hyperplane decision , naïve bayes , and decision trees . we also enable these protocols to be combined with adaboost . at the basis of these constructions is a new library of building blocks , which enables constructing a wide range of privacy-preserving classifiers ; we demonstrate how this library can be used to construct other classifiers than the three mentioned above , such as a multiplexer and a face detection classifier . we implemented and evaluated our library and our classifiers . our protocols are efficient , taking milliseconds to a few seconds to perform a classification when running on real medical datasets . story_separator_special_tag electronic devices may undergo attacks going beyond traditional cryptanalysis . side-channel analysis ( sca ) is an alternative attack that exploits information leaking from physical implementations of e.g . cryptographic devices to discover cryptographic keys or other secrets . this work comprehensively investigates the application of a machine learning technique in sca . the considered technique is a powerful kernel-based learning algorithm : the least squares support vector machine ( ls-svm ) .
the chosen side-channel is the power consumption and the target is a software implementation of the advanced encryption standard . in this study , the ls-svm technique is compared to template attacks . the results show that the choice of parameters of the machine learning technique strongly impacts the performance of the classification . in contrast , the number of power traces and time instants does not influence the results in the same proportion . this effect can be attributed to the usage of data sets with straightforward hamming weight leakages in this first study . story_separator_special_tag in cryptography , a side-channel attack is any attack based on the analysis of measurements related to the physical implementation of a cryptosystem . nowadays , the possibility of collecting a large amount of observations paves the way to the adoption of machine learning techniques , i.e. , techniques able to extract information and patterns from large datasets . the use of statistical techniques for side-channel attacks is not new . techniques like the template attack have shown their effectiveness in recent years . however , these techniques rely on parametric assumptions and are often limited to small dimensionality settings , which limits their range of application . this paper explores the use of machine learning techniques to relax such assumptions and to deal with high dimensional feature vectors . story_separator_special_tag in this paper , we apply a new cryptanalytic attack on des and triple-des . the implemented attack is a known-plaintext attack based on neural networks . in this attack we trained a neural network to retrieve plaintext from ciphertext without retrieving the key used in encryption . the attack was practically , and successfully , applied on des and triple-des . this attack required an average of 2^11 plaintext-ciphertext pairs to perform cryptanalysis of des in an average duration of 51 minutes . for the cryptanalysis of triple-des , an average of only 2^12 plaintext-ciphertext pairs was required in an average duration of 72 minutes . as compared to other attacks , this attack is an improvement in terms of the number of known plaintexts required , as well as the time required to perform the complete attack . story_separator_special_tag mobile devices can be maliciously exploited to violate the privacy of people . in most attack scenarios , the adversary takes the local or remote control of the mobile device , by leveraging a vulnerability of the system , hence sending back the collected information to some remote web service . in this paper , we consider a different adversary , who does not interact actively with the mobile device , but is able to eavesdrop on the network traffic of the device from the network side ( e.g. , controlling a wi-fi access point ) . the fact that the network traffic is often encrypted makes the attack even more challenging . in this paper , we investigate to what extent such an external attacker can identify the specific actions that a user is performing on her mobile apps . we design a system that achieves this goal using advanced machine learning techniques . we built a complete implementation of this system , and we also ran a thorough set of experiments , which show that our attack can achieve accuracy and precision higher than 95 % , for most of the considered actions . we compared our solution story_separator_special_tag template attack is the most common and powerful profiled side channel attack .
it relies on a realistic assumption regarding the noise of the device under attack : the probability density function of the data is a multivariate gaussian distribution . to relax this assumption , a recent line of research has investigated new profiling approaches mainly by applying machine learning techniques . the obtained results are commensurate , and in some particular cases better , compared to the template attack . in this work , we propose to continue this recent line of research by applying more sophisticated profiling techniques based on deep learning . our experimental results confirm the overwhelming advantages of the resulting new attacks when targeting both unprotected and protected cryptographic implementations . story_separator_special_tag we present a lightweight puf-based authentication approach that is practical in settings where a server authenticates a device , and for use cases where the number of authentications is limited over a device 's lifetime . our scheme uses a server-managed challenge/response pair ( crp ) lockdown protocol : unlike prior approaches , an adaptive chosen-challenge adversary with machine learning capabilities cannot obtain new crps without the server 's implicit permission . the adversary is faced with the problem of deriving a puf model with a limited amount of machine learning training data . our system-level approach allows a so-called strong puf to be used for lightweight authentication in a manner that is heuristically secure against today 's best machine learning methods through a worst-case crp exposure algorithmic validation . we also present a degenerate instantiation using a weak puf that is secure against computationally unrestricted adversaries , which includes any learning adversary , for practical device lifetimes and read-out rates . we validate our approach using silicon puf data , and demonstrate the feasibility of supporting 10 , 1,000 , and 1m authentications , including practical configurations that are not learnable with polynomial resources , e.g. , the story_separator_special_tag machine learning systems offer unparalleled flexibility in dealing with evolving input in a variety of applications , such as intrusion detection systems and spam e-mail filtering . however , machine learning algorithms themselves can be a target of attack by a malicious adversary . this paper provides a framework for answering the question , `` can machine learning be secure ? '' novel contributions of this paper include a taxonomy of different types of attacks on machine learning techniques and systems , a variety of defenses against those attacks , a discussion of ideas that are important to security for machine learning , an analytical model giving a lower bound on the attacker 's work function , and a list of open problems . story_separator_special_tag machine learning 's ability to rapidly evolve to changing and complex situations has helped it become a fundamental tool for computer security . that adaptability is also a vulnerability : attackers can exploit machine learning systems . we present a taxonomy identifying and analyzing attacks against machine learning systems . we show how these classes influence the costs for the attacker and defender , and we give a formal structure defining their interaction . we use our framework to survey and analyze the literature of attacks against machine learning systems . we also illustrate our taxonomy by showing how it can guide attacks against spambayes , a popular statistical spam filter .
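for reference , a compact sketch of the classical template attack that the profiled side-channel abstracts above take as their baseline : per-class gaussian templates ( a mean vector plus a pooled covariance ) are built from profiling traces , and a fresh attack trace is scored by gaussian log-likelihood . the traces here are synthetic , and the class and point-of-interest counts are illustrative .

```python
import numpy as np
from scipy.stats import multivariate_normal

rng = np.random.default_rng(1)
n_classes, n_points = 4, 5                # e.g. hamming-weight classes, POIs

# synthetic profiling traces: class-dependent means plus gaussian noise
means = rng.normal(0, 1, (n_classes, n_points))
profiling = {c: means[c] + rng.normal(0, 0.3, (200, n_points))
             for c in range(n_classes)}

# build templates: per-class mean and a pooled covariance matrix
mu = np.array([profiling[c].mean(axis=0) for c in range(n_classes)])
pooled = sum(np.cov(profiling[c], rowvar=False)
             for c in range(n_classes)) / n_classes

# attack phase: score a trace of unknown class under each template
true_class = 2
attack_trace = means[true_class] + rng.normal(0, 0.3, n_points)
scores = [multivariate_normal.logpdf(attack_trace, mu[c], pooled)
          for c in range(n_classes)]
print("guessed class:", int(np.argmax(scores)), "(true:", true_class, ")")
```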
finally , we discuss how our taxonomy suggests new lines of defenses . story_separator_special_tag in security-sensitive applications , the success of machine learning depends on a thorough vetting of its resistance to adversarial data . in one pertinent , well-motivated attack scenario , an adversary may attempt to evade a deployed system at test time by carefully manipulating attack samples . in this work , we present a simple but effective gradient-based approach that can be exploited to systematically assess the security of several , widely-used classification algorithms against evasion attacks . following a recently proposed framework for security evaluation , we simulate attack scenarios that exhibit different risk levels for the classifier by increasing the attacker 's knowledge of the system and her ability to manipulate attack samples . this gives the classifier designer a better picture of the classifier performance under evasion attacks , and allows him to perform a more informed model selection ( or parameter setting ) . we evaluate our approach on the relevant security task of malware detection in pdf files , and show that such systems can be easily evaded . we also sketch some countermeasures suggested by our analysis . story_separator_special_tag machine learning ( ml ) enables computers to learn how to recognise patterns , make unintended decisions , or react to a dynamic environment . the effectiveness of trained machines varies because of more suitable ml algorithms or because of superior training sets . although ml algorithms are known and publicly released , training sets may not be reasonably ascertainable and , indeed , may be guarded as trade secrets . in this paper we focus our attention on ml classifiers and on the statistical information that can be unconsciously or maliciously revealed from them . we show that it is possible to infer unexpected but useful information from ml classifiers . in particular , we build a novel meta-classifier and train it to hack other classifiers , obtaining meaningful information about their training sets . such information leakage can be exploited , for example , by a vendor to build more effective classifiers or to simply acquire trade secrets from a competitor 's apparatus , potentially violating its intellectual property rights . story_separator_special_tag advances in machine learning ( ml ) in recent years have enabled a dizzying array of applications such as data analytics , autonomous systems , and security diagnostics . ml is now pervasive : new systems and models are being deployed in every domain imaginable , leading to rapid and widespread deployment of software based inference and decision making . there is growing recognition that ml exposes new vulnerabilities in software systems , yet the technical community 's understanding of the nature and extent of these vulnerabilities remains limited . we systematize recent findings on ml security and privacy , focusing on attacks identified on these systems and defenses crafted to date . we articulate a comprehensive threat model for ml , and categorize attacks and defenses within an adversarial framework . key insights resulting from works both in the ml and security communities are identified and the effectiveness of approaches is related to structural elements of ml algorithms and the data used to train them . we conclude by formally exploring the opposing relationship between model accuracy and resilience to adversarial manipulation .
through these explorations , we show that there are ( possibly unavoidable ) tensions between model story_separator_special_tag
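the gradient-based evasion idea surveyed in the adversarial-learning abstracts above can be illustrated on a linear classifier , where the gradient of the decision value with respect to the input is available in closed form . the model , data and step size below are illustrative placeholders , not any paper 's exact setup .

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=500, n_features=20, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X, y)

# pick a sample the classifier flags as malicious (class 1)
i = int(np.where(clf.predict(X) == 1)[0][0])
x = X[i].copy()

# for a linear model f(x) = w.x + b, the gradient of the decision value
# w.r.t. the input is simply w; step against it to lower the score
w = clf.coef_[0]
eps = 0.05
for _ in range(100):
    if clf.predict(x.reshape(1, -1))[0] == 0:
        break
    x -= eps * np.sign(w)          # small signed step toward the benign side

print("evaded:", clf.predict(x.reshape(1, -1))[0] == 0,
      "| l2 perturbation:", np.linalg.norm(x - X[i]).round(3))
```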
we have developed a near-real-time computer system that can locate and track a subject 's head , and then recognize the person by comparing characteristics of the face to those of known individuals . the computational approach taken in this system is motivated by both physiology and information theory , as well as by the practical requirements of near-real-time performance and accuracy . our approach treats the face recognition problem as an intrinsically two-dimensional ( 2-d ) recognition problem rather than requiring recovery of three-dimensional geometry , taking advantage of the fact that faces are normally upright and thus may be described by a small set of 2-d characteristic views . the system functions by projecting face images onto a feature space that spans the significant variations among known face images . the significant features are known as `` eigenfaces , '' because they are the eigenvectors ( principal components ) of the set of faces ; they do not necessarily correspond to features such as eyes , ears , and noses . the projection operation characterizes an individual face by a weighted sum of the eigenface features , and so to recognize a particular face it is necessary only story_separator_special_tag we trained a large , deep convolutional neural network to classify the 1.2 million high-resolution images in the imagenet lsvrc-2010 contest into the 1000 different classes . on the test data , we achieved top-1 and top-5 error rates of 37.5 % and 17.0 % , respectively , which is considerably better than the previous state-of-the-art . the neural network , which has 60 million parameters and 650,000 neurons , consists of five convolutional layers , some of which are followed by max-pooling layers , and three fully connected layers with a final 1000-way softmax . to make training faster , we used non-saturating neurons and a very efficient gpu implementation of the convolution operation . to reduce overfitting in the fully connected layers we employed a recently developed regularization method called `` dropout '' that proved to be very effective . we also entered a variant of this model in the ilsvrc-2012 competition and achieved a winning top-5 test error rate of 15.3 % , compared to 26.2 % achieved by the second-best entry . story_separator_special_tag we propose a deep convolutional neural network architecture codenamed `` inception '' , which was responsible for setting the new state of the art for classification and detection in the imagenet large-scale visual recognition challenge 2014 ( ilsvrc 2014 ) . the main hallmark of this architecture is the improved utilization of the computing resources inside the network . this was achieved by a carefully crafted design that allows for increasing the depth and width of the network while keeping the computational budget constant . to optimize quality , the architectural decisions were based on the hebbian principle and the intuition of multi-scale processing . one particular incarnation used in our submission for ilsvrc 2014 is called googlenet , a 22 layers deep network , the quality of which is assessed in the context of classification and detection . story_separator_special_tag deeper neural networks are more difficult to train . we present a residual learning framework to ease the training of networks that are substantially deeper than those used previously . we explicitly reformulate the layers as learning residual functions with reference to the layer inputs , instead of learning unreferenced functions . 
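the residual reformulation just described , where the stacked layers learn a residual f(x) and an identity shortcut adds the input back so the block represents h(x) = f(x) + x , can be sketched as follows . the shapes and random weights are illustrative only.

```python
import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

def residual_block(x, w1, w2):
    # the stacked layers learn a residual f(x); the identity shortcut
    # adds x back, so the block computes relu(f(x) + x)
    f = w2 @ relu(w1 @ x)
    return relu(f + x)

rng = np.random.default_rng(0)
d = 8
x = rng.normal(size=d)
w1 = rng.normal(scale=0.1, size=(d, d))
w2 = rng.normal(scale=0.1, size=(d, d))
print(residual_block(x, w1, w2).shape)   # (8,) -- same shape as the input
```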
we provide comprehensive empirical evidence showing that these residual networks are easier to optimize , and can gain accuracy from considerably increased depth . on the imagenet dataset we evaluate residual nets with a depth of up to 152 layers -- 8x deeper than vgg nets but still having lower complexity . an ensemble of these residual nets achieves 3.57 % error on the imagenet test set . this result won the 1st place on the ilsvrc 2015 classification task . we also present analysis on cifar-10 with 100 and 1000 layers . the depth of representations is of central importance for many visual recognition tasks . solely due to our extremely deep representations , we obtain a 28 % relative improvement on the coco object detection dataset . deep residual nets are foundations of our submissions to ilsvrc & coco 2015 competitions , where we also won story_separator_special_tag in modern face recognition , the conventional pipeline consists of four stages : detect = > align = > represent = > classify . we revisit both the alignment step and the representation step by employing explicit 3d face modeling in order to apply a piecewise affine transformation , and derive a face representation from a nine-layer deep neural network . this deep network involves more than 120 million parameters using several locally connected layers without weight sharing , rather than the standard convolutional layers . thus we trained it on the largest facial dataset to date , an identity labeled dataset of four million facial images belonging to more than 4 , 000 identities . the learned representations coupling the accurate model-based alignment with the large facial database generalize remarkably well to faces in unconstrained environments , even with a simple classifier . our method reaches an accuracy of 97.35 % on the labeled faces in the wild ( lfw ) dataset , reducing the error of the current state of the art by more than 27 % , closely approaching human-level performance . story_separator_special_tag this paper designs a high-performance deep convolutional network ( deepid2+ ) for face recognition . it is learned with the identification-verification supervisory signal . by increasing the dimension of hidden representations and adding supervision to early convolutional layers , deepid2+ achieves new state-of-the-art on lfw and youtube faces benchmarks . through empirical studies , we have discovered three properties of its deep neural activations critical for the high performance : sparsity , selectiveness and robustness . ( 1 ) it is observed that neural activations are moderately sparse . moderate sparsity maximizes the discriminative power of the deep net as well as the distance between images . it is surprising that deepid2+ still can achieve high recognition accuracy even after the neural responses are binarized . ( 2 ) its neurons in higher layers are highly selective to identities and identity-related attributes . we can identify different subsets of neurons which are either constantly excited or inhibited when different identities or attributes are present . although deepid2+ is not taught to distinguish attributes during training , it has implicitly learned such high-level concepts . ( 3 ) it is much more robust to occlusions , although occlusion patterns are not story_separator_special_tag the goal of this paper is face recognition from either a single photograph or from a set of faces tracked in a video .
recent progress in this area has been due to two factors : ( i ) end to end learning for the task using a convolutional neural network ( cnn ) , and ( ii ) the availability of very large scale training datasets . we make two contributions : first , we show how a very large scale dataset ( 2.6m images , over 2.6k people ) can be assembled by a combination of automation and human in the loop , and discuss the trade off between data purity and time ; second , we traverse through the complexities of deep network training and face recognition to present methods and procedures to achieve comparable state of the art results on the standard lfw and ytf face benchmarks . story_separator_special_tag despite significant recent advances in the field of face recognition , implementing face verification and recognition efficiently at scale presents serious challenges to current approaches . in this paper we present a system , called facenet , that directly learns a mapping from face images to a compact euclidean space where distances directly correspond to a measure of face similarity . once this space has been produced , tasks such as face recognition , verification and clustering can be easily implemented using standard techniques with facenet embeddings as feature vectors . our method uses a deep convolutional network trained to directly optimize the embedding itself , rather than an intermediate bottleneck layer as in previous deep learning approaches . to train , we use triplets of roughly aligned matching / non-matching face patches generated using a novel online triplet mining method . the benefit of our approach is much greater representational efficiency : we achieve state-of-the-art face recognition performance using only 128-bytes per face . on the widely used labeled faces in the wild ( lfw ) dataset , our system achieves a new record accuracy of 99.63 % . on youtube faces db it achieves 95.12 % . our story_separator_special_tag this paper addresses deep face recognition ( fr ) problem under open-set protocol , where ideal face features are expected to have smaller maximal intra-class distance than minimal inter-class distance under a suitably chosen metric space . however , few existing algorithms can effectively achieve this criterion . to this end , we propose the angular softmax ( a-softmax ) loss that enables convolutional neural networks ( cnns ) to learn angularly discriminative features . geometrically , a-softmax loss can be viewed as imposing discriminative constraints on a hypersphere manifold , which intrinsically matches the prior that faces also lie on a manifold . moreover , the size of angular margin can be quantitatively adjusted by a parameter m. we further derive specific m to approximate the ideal feature criterion . extensive analysis and experiments on labeled face in the wild ( lfw ) , youtube faces ( ytf ) and megaface challenge 1 show the superiority of a-softmax loss in fr tasks . story_separator_special_tag one of the main challenges in feature learning using deep convolutional neural networks ( dcnns ) for large-scale face recognition is the design of appropriate loss functions that can enhance the discriminative power . centre loss penalises the distance between deep features and their corresponding class centres in the euclidean space to achieve intra-class compactness . 
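the triplet objective that facenet optimizes , as summarized earlier in this passage , pulls an anchor embedding toward a matching face and pushes it away from a non-matching one by a margin . a minimal numpy version with toy embeddings ( the margin value and the 128-dimensional unit-norm embeddings are illustrative ) :

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=0.2):
    # hinge on squared euclidean distances between embeddings
    d_pos = np.sum((anchor - positive) ** 2)
    d_neg = np.sum((anchor - negative) ** 2)
    return max(0.0, d_pos - d_neg + margin)

# toy unit-norm embeddings standing in for f(x) of three face crops
rng = np.random.default_rng(0)
emb = rng.normal(size=(3, 128))
a, p, n = (e / np.linalg.norm(e) for e in emb)
print(triplet_loss(a, p, n))
```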
sphereface assumes that the linear transformation matrix in the last fully connected layer can be used as a representation of the class centres in the angular space and therefore penalises the angles between deep features and their corresponding weights in a multiplicative way . recently , a popular line of research is to incorporate margins in well-established loss functions in order to maximise face class separability . in this paper , we propose an additive angular margin loss ( arcface ) to obtain highly discriminative features for face recognition . the proposed arcface has a clear geometric interpretation due to its exact correspondence to geodesic distance on a hypersphere . we present arguably the most extensive experimental evaluation against all recent state-of-the-art face recognition methods on ten face recognition benchmarks which includes a new large-scale image database with trillions of pairs and a large-scale story_separator_special_tag this paper proposes to learn a set of high-level feature representations through deep learning , referred to as deep hidden identity features ( deepid ) , for face verification . we argue that deepid can be effectively learned through challenging multi-class face identification tasks , whilst they can be generalized to other tasks ( such as verification ) and new identities unseen in the training set . moreover , the generalization capability of deepid increases as more face classes are to be predicted at training . deepid features are taken from the last hidden layer neuron activations of deep convolutional networks ( convnets ) . when learned as classifiers to recognize about 10 , 000 face identities in the training set and configured to keep reducing the neuron numbers along the feature extraction hierarchy , these deep convnets gradually form compact identity-related features in the top layers with only a small number of hidden neurons . the proposed features are extracted from various face regions to form complementary and over-complete representations . any state-of-the-art classifiers can be learned based on these high-level representations for face verification . 97.45 % verification accuracy on lfw is achieved with only weakly aligned faces story_separator_special_tag most face databases have been created under controlled conditions to facilitate the study of specific parameters on the face recognition problem . these parameters include such variables as position , pose , lighting , background , camera quality , and gender . while there are many applications for face recognition technology in which one can control the parameters of image acquisition , there are also many applications in which the practitioner has little or no control over such parameters . this database , labeled faces in the wild , is provided as an aid in studying the latter , unconstrained , recognition problem . the database contains labeled face photographs spanning the range of conditions typically encountered in everyday life . the database exhibits natural variability in factors such as pose , lighting , race , accessories , occlusions , and background . in addition to describing the details of the database , we provide specific experimental paradigms for which the database is suitable . this is done in an effort to make research performed with the database as consistent and comparable as possible .
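for reference , the additive angular margin objective summarized in the arcface abstract above is usually written as follows , with $\theta_j$ the angle between the embedding and the $j$-th class weight , $s$ a scale factor , $m$ the additive margin , and $y_i$ the ground-truth class of sample $i$ :

```latex
L = -\frac{1}{N}\sum_{i=1}^{N}
    \log\frac{e^{\,s\cos(\theta_{y_i} + m)}}
             {e^{\,s\cos(\theta_{y_i} + m)} + \sum_{j \neq y_i} e^{\,s\cos\theta_j}}
```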
we provide baseline results , including results of a state of the art face recognition story_separator_special_tag recognizing faces in unconstrained videos is a task of mounting importance . while obviously related to face recognition in still images , it has its own unique characteristics and algorithmic requirements . over the years several methods have been suggested for this problem , and a few benchmark data sets have been assembled to facilitate its study . however , there is a sizable gap between the actual application needs and the current state of the art . in this paper we make the following contributions . ( a ) we present a comprehensive database of labeled videos of faces in challenging , uncontrolled conditions ( i.e. , in the wild ) , the youtube faces database , along with benchmark , pair-matching tests . ( b ) we employ our benchmark to survey and compare the performance of a large variety of existing video face recognition techniques . finally , ( c ) we describe a novel set-to-set similarity measure , the matched background similarity ( mbgs ) . this similarity is shown to considerably improve performance on the benchmark tests . story_separator_special_tag in this paper , we provide an overview of the fundamentals of biometric identification , together with a description of the main biometric technologies currently in use , all of them within a common reference framework . a comparison on different qualitative parameters of these technologies is also given , so that the reader may have a clear perspective of advantages and disadvantages of each . a section on multibiometrics describes the state of the art in making these systems work coordinately . fusion at different conceptual levels is described . finally , a section on commercial issues provides the reader a perspective of the main companies currently involved in this field . story_separator_special_tag the growing use of access control systems based on face recognition sheds light on the need for even more accurate systems to detect face spoofing attacks . in this paper , an extensive analysis on face spoofing detection works published in the last decade is presented . the analyzed works are categorized by their fundamental parts , i.e. , descriptors and classifiers . this structured survey also brings a comparative performance analysis of the works considering the most important public data sets in the field . the methodology followed in this work is particularly relevant to observe the temporal evolution of the field and trends in the existing approaches , to discuss still open issues , and to propose new perspectives for the future of face spoofing detection . it is a comprehensive survey covering the topic of face spoofing detection . discussion about types of attacks , existing methods , a timeline and benchmarking . a thorough analysis about trends and perspectives is provided . story_separator_special_tag 3d mask face spoofing attacks have become a new challenge and attracted more research interest in recent years . however , due to the deficient number and limited variations of databases , few methods have been proposed to address it . meanwhile , most existing databases only concentrate on the anti-spoofing of different kinds of attacks and ignore the environmental changes in real world applications . in this paper , we build a new 3d mask anti-spoofing database with more variations to simulate the real world scenario .
the proposed database contains 12 masks from two companies with different appearance quality . 7 cameras from stationary and mobile devices and 6 lighting settings that cover typical illumination conditions are also included . therefore , each subject contains 42 ( 7 cameras * 6 lightings ) genuine and 42 mask sequences and the total size is 1008 videos . through the benchmark experiments , directions for future study are pointed out . we plan to release the database as a platform to evaluate methods under different variations . story_separator_special_tag resisting spoofing attempts via photographs and video playbacks is a vital issue for the success of face biometrics . yet , the `` liveness '' topic has only been partially studied in the past . in this paper we are suggesting a holistic liveness detection paradigm that collaborates with standard techniques in 2d face biometrics . the experiments show that many attacks are avertible via a combination of anti-spoofing measures . we have investigated the topic using real-time techniques and applied them to real-life spoofing scenarios in an indoor , yet uncontrolled environment . story_separator_special_tag face antispoofing has now attracted intensive attention , aiming to assure the reliability of face biometrics . we notice that currently most face antispoofing databases focus on data with little variation , which may limit the generalization performance of trained models since potential attacks in the real world are probably more complex . in this paper we release a face antispoofing database which covers a diverse range of potential attack variations . specifically , the database contains 50 genuine subjects , and fake faces are made from the high quality records of the genuine faces . three imaging qualities are considered , namely the low quality , normal quality and high quality . three fake face attacks are implemented , which include warped photo attack , cut photo attack and video attack . therefore each subject contains 12 videos ( 3 genuine and 9 fake ) , and the final database contains 600 video clips . a test protocol is provided , which consists of 7 scenarios for a thorough evaluation from all possible aspects . a baseline algorithm is also given for comparison , which explores the high frequency information in the facial region to determine the liveness . we story_separator_special_tag existing face liveness detection algorithms adopt behavioural challenge-response methods that require user cooperation . to be verified live , users are expected to obey some user-unfriendly requirements . in this paper , we present a multispectral face liveness detection method , which is user cooperation free . moreover , the system is adaptive to various user-system distances . using the lambertian model , we analyze multispectral properties of human skin versus non-skin , and the discriminative wavelengths are then chosen . reflectance data of genuine and fake faces at multi-distances are selected to form a training set . an svm classifier is trained to learn the multispectral distribution for a final genuine-or-fake classification . compared with previous works , the proposed method has the following advantages : ( a ) the requirement on the users ' cooperation is no longer needed , making the liveness detection user friendly and fast . ( b ) the system can work without a restricted distance requirement from the target being analyzed .
experiments are conducted on genuine versus planar face data , and genuine versus mask face data . furthermore a comparison with the visible challenge-response liveness detection method is also given . story_separator_special_tag rendering a face recognition system robust is vital in order to safeguard it against spoof attacks carried out using printed pictures of a victim ( also known as print attack ) or a replayed video of the person ( replay attack ) . a key property in distinguishing a live , valid access from printed media or replayed videos is by exploiting the information dynamics of the video content , such as blinking eyes , moving lips , and facial dynamics . we advance the state of the art in facial antispoofing by applying a recently developed algorithm called dynamic mode decomposition ( dmd ) as a general purpose , entirely data-driven approach to capture the above liveness cues . we propose a classification pipeline consisting of dmd , local binary patterns ( lbps ) , and support vector machines ( svms ) with a histogram intersection kernel . a unique property of dmd is its ability to conveniently represent the temporal information of the entire video as a single image with the same dimensions as those images contained in the video . the pipeline of dmd + lbp + svm proves to be efficient , convenient to use , story_separator_special_tag the vulnerability of biometric systems to external attacks using a physical artefact in order to impersonate the legitimate user has become a major concern over the last decade . such a threat , commonly known as spoofing , poses a serious risk to the integrity of biometric systems . the usual low-complexity and low-cost characteristics of these attacks make them accessible to the general public , rendering each user a potential intruder . the present study addresses the spoofing issue analysing the feasibility to perform low-cost attacks with self-manufactured three-dimensional ( 3d ) printed models to 2.5d and 3d face recognition systems . a new database with 2d , 2.5d and 3d real and fake data from 26 subjects was acquired for the experiments . results showed the high vulnerability of the three tested systems , including a commercial solution , to the attacks . story_separator_special_tag spoofing is the act of masquerading as a valid user by falsifying data to gain an illegitimate access . vulnerability of recognition systems to spoofing attacks ( presentation attacks ) is still an open security issue in biometrics domain and among all biometric traits , face is exposed to the most serious threat , since it is particularly easy to access and reproduce . in this paper , many different types of face spoofing attacks have been examined and various algorithms have been proposed to detect them . mainly focusing on 2d attacks forged by displaying printed photos or replaying recorded videos on mobile devices , a significant portion of these studies ground their arguments on the flatness of the spoofing material in front of the sensor . however , with the advancements in 3d reconstruction and printing technologies , this assumption can no longer be maintained . in this paper , we aim to inspect the spoofing potential of subject-specific 3d facial masks for different recognition systems and address the detection problem of this more complex attack type . 
in order to assess the spoofing performance of 3d masks against 2d , 2.5d , and 3d face recognition and story_separator_special_tag in recent years face recognition systems have been applied in various useful applications , such as surveillance , access control , criminal investigations , law enforcement , and others . however face biometric systems can be highly vulnerable to spoofing attacks where an impostor tries to bypass the face recognition system using a photo or video sequence . in this paper a novel liveness detection method , based on the 3d structure of the face , is proposed . processing the 3d curvature of the acquired data , the proposed approach allows a biometric system to distinguish a real face from a photo , increasing the overall performance of the system and reducing its vulnerability . in order to test the real capability of the methodology a 3d face database has been collected simulating spoofing attacks , therefore using photographs instead of real faces . the experimental results show the effectiveness of the proposed approach . story_separator_special_tag automatic facial expression recognition has attracted significant research attention since the 1990s due to its potential applications in human computer interaction . although facial expression recognition seems easy and straight forward to us , it is a challenging task for computers . due to the subtlety and variability of human facial expressions , existing methods based on 2d images/videos have limitations such as sensitivity to changes in recording conditions , and inability to represent easily-confused expressions etc . moreover , the human face is neither convex nor rigid , which means that some of the deformations are hard to record by single-view 2d images/videos . in order to represent facial expressions sufficiently , 3d data based methods for expression analysis have recently gained popularity due to the availability of low cost 3d recording devices . an ideal facial expression recognition system should be fully automatic , person-independent , and able to work with all types of facial expressions . existing research has focused on addressing individual aspects but a system that fulfills all these requirements is yet to be developed . this thesis investigates the facial expression recognition problem with emphasis on : ( 1 ) full automation , from story_separator_special_tag high-quality custom-made 3d masks are increasing becoming a serious threat to face-recognition systems . this threat is driven , in part , by the falling cost of manufacturing such masks . research in face presentation-attack detection ( pad ) in general , and also specifically for 3d-mask based attacks , has mostly concentrated on imagery in the visible-light range of wavelengths ( rgb ) . we look beyond imagery in the visible-light spectrum to find potentially easier solutions for the challenge of face presentation-attack detection ( pad ) . in particular , we explore the use of near-infrared ( nir ) and thermal imagery to detect print- , replay- , and 3d-mask-attacks . this preliminary study shows that both nir and thermal imagery can potentially simplify the task of face-pad . story_separator_special_tag in this paper we study presentation attack detection ( pad ) in face recognition systems against realistic artifacts such as 3d masks or good quality of photo attacks . 
in recent works , pulse detection based on remote photoplethysmography ( rppg ) has been shown to be an effective countermeasure in concrete setups , but still there is a need for a deeper understanding of when and how this kind of pad works in various practical conditions . related works analyze full video sequences ( usually over 60 seconds ) to distinguish between attacks and legitimate accesses . however , existing approaches may not be as effective as has been claimed in the literature in time-variable scenarios . in this paper we evaluate the performance of an existing state-of-the-art pad scheme based on rppg when analyzing short-time video sequences extracted from a longer video . results are reported using the 3d mask attack database ( 3dmad ) , and a self-collected dataset called heart rate database ( hr ) , including different video durations , spectrum bands , resolutions and frame rates . several conclusions can be drawn from this work : a ) pad performance based on rppg story_separator_special_tag with the wide applications of face recognition , spoofing attacks are becoming a big threat to its security . conventional face recognition systems usually adopt behavioral challenge-response or texture analysis methods to resist spoofing attacks , however , these methods require high user cooperation and are sensitive to the imaging quality and environments . in this chapter , we present a multi-spectral face recognition system working in vis ( visible ) and nir ( near infrared ) spectrums , which is robust to various spoofing attacks and user cooperation free . first , we introduce the structure of the system from several aspects including : imaging device , face landmarking , feature extraction , matching , vis , and nir sub-systems . then the performance of the multi-spectral system and each subsystem is evaluated and analyzed . finally , we describe the multi-spectral image-based anti-spoofing module , and report its performance under photo attacks . experiments on a spoofing database show the excellent performance of the proposed system both in recognition rate and anti-spoofing ability . compared with a conventional vis face recognition system , the multi-spectral system has two advantages : ( 1 ) by combining the vis and nir story_separator_special_tag face liveness detection in the visible light ( vis ) spectrum is facing great challenges . beyond the visible light spectrum , thermal ir ( tir ) has an intrinsic live signal itself . in this paper , we present a novel liveness detection approach based on the thermal ir spectrum . the live face is modeled in the cross-modality of the thermal ir and visible light spectrum . in our model , canonical correlation analysis between the visible and thermal ir face is exploited . the correlation of different face parts is also investigated to illustrate more correlative features and be helpful to improve live face detection ability . an extensive set of liveness detection experiments is presented to show the effectiveness of our approach and other correlation methods are also tested for comparison . story_separator_special_tag spoofing attacks are one of the security traits that biometric recognition systems are proven to be vulnerable to . when spoofed , a biometric recognition system is bypassed by presenting a copy of the biometric evidence of a valid user . among all biometric modalities , spoofing a face recognition system is particularly easy to perform : all that is needed is a simple photograph of the user .
in this paper , we address the problem of detecting face spoofing attacks . in particular , we inspect the potential of texture features based on local binary patterns ( lbp ) and their variations on three types of attacks : printed photographs , and photos and videos displayed on electronic screens of different sizes . for this purpose , we introduce replay-attack , a novel publicly available face spoofing database which contains all the mentioned types of attacks . we conclude that lbp , with 15 % half total error rate , shows moderate discriminability when confronted with a wide set of attack types . story_separator_special_tag automatic face recognition is now widely used in applications ranging from deduplication of identity to authentication of mobile payments . this popularity of face recognition has raised concerns about face spoof attacks ( also known as biometric sensor presentation attacks ) , where a photo or video of an authorized person 's face could be used to gain access to facilities or services . while a number of face spoof detection techniques have been proposed , their generalization ability has not been adequately addressed . we propose an efficient and rather robust face spoof detection algorithm based on image distortion analysis ( ida ) . four different features ( specular reflection , blurriness , chromatic moment , and color diversity ) are extracted to form the ida feature vector . an ensemble classifier , consisting of multiple svm classifiers trained for different face spoof attacks ( e.g. , printed photo and replayed video ) , is used to distinguish between genuine ( live ) and spoof faces . the proposed approach is extended to multiframe face spoof detection in videos using a voting-based scheme . we also collect a face spoof database , msu mobile face spoofing database ( msu story_separator_special_tag face anti-spoofing is crucial to protect face recognition systems from security breaches . previous deep learning approaches formulate face anti-spoofing as a binary classification problem . many of them struggle to grasp adequate spoofing cues and generalize poorly . in this paper , we argue the importance of auxiliary supervision to guide the learning toward discriminative and generalizable cues . a cnn-rnn model is learned to estimate the face depth with pixel-wise supervision , and to estimate rppg signals with sequence-wise supervision . the estimated depth and rppg are fused to distinguish live vs. spoof faces . further , we introduce a new face anti-spoofing database that covers a large range of illumination , subject , and pose variations . experiments show that our model achieves the state-of-the-art results on both intra- and cross-database testing . story_separator_special_tag user authentication is an important step to protect information and in this field face biometrics is advantageous . face biometrics is natural , easy to use and less human-invasive . unfortunately , recent work has revealed that face biometrics is vulnerable to spoofing attacks using low-tech equipment . this article assesses how well existing face anti-spoofing countermeasures can work under more realistic conditions . experiments carried out with two freely available video databases ( replay attack database and casia face anti-spoofing database ) show low generalization and possible database bias in the evaluated countermeasures . to generalize and deal with the diversity of attacks in a real world scenario we introduce two strategies that show promising results .
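a compact sketch of the lbp-histogram-plus-classifier pipeline that recurs throughout the anti-spoofing abstracts above : a uniform lbp histogram is extracted from each grayscale face crop and fed to an svm . the face crops here are synthetic stand-ins , so the resulting numbers are meaningless , but the feature extraction and classification steps are the standard ones .

```python
import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.svm import SVC

P, R = 8, 1                                   # standard LBP(8,1) operator

def lbp_histogram(gray_face):
    # uniform LBP with P=8 yields values in 0..P+1, i.e. P+2 histogram bins
    lbp = local_binary_pattern(gray_face, P, R, method="uniform")
    hist, _ = np.histogram(lbp, bins=P + 2, range=(0, P + 2), density=True)
    return hist

rng = np.random.default_rng(0)
# stand-ins for cropped grayscale face images: "live" vs. "recaptured"
live  = [rng.integers(0, 256, (64, 64)).astype(np.uint8) for _ in range(40)]
spoof = [(rng.integers(0, 256, (64, 64)) // 4 * 4).astype(np.uint8)
         for _ in range(40)]                  # crude stand-in for print texture

X = np.array([lbp_histogram(img) for img in live + spoof])
y = np.array([1] * len(live) + [0] * len(spoof))   # 1 = live, 0 = attack

clf = SVC(kernel="rbf").fit(X, y)
print("train accuracy:", clf.score(X, y))
```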
story_separator_special_tag though having achieved some progress , the hand-crafted texture features , e.g. , lbp [ 23 ] , lbp-top [ 11 ] are still unable to capture the most discriminative cues between genuine and fake faces . in this paper , instead of designing features ourselves , we rely on the deep convolutional neural network ( cnn ) to learn features of high discriminative ability in a supervised manner . combined with some data pre-processing , the face anti-spoofing performance improves drastically . in the experiments , over 70 % relative decrease of half total error rate ( hter ) is achieved on two challenging datasets , casia [ 36 ] and replay-attack [ 7 ] compared with the state-of-the-art . meanwhile , the experimental results from inter-tests between the two datasets indicate that the cnn can obtain features with better generalization ability . moreover , the nets trained using combined data from the two datasets show less bias between the two datasets . story_separator_special_tag research on face spoofing detection has mainly been focused on analyzing the luminance of the face images , hence discarding the chrominance information which can be useful for discriminating fake faces from genuine ones . in this work , we propose a new face anti-spoofing method based on color texture analysis . we analyze the joint color-texture information from the luminance and the chrominance channels using a color local binary pattern descriptor . more specifically , the feature histograms are extracted from each image band separately . extensive experiments on two benchmark datasets , namely casia face anti-spoofing and replay-attack databases , showed excellent results compared to the state-of-the-art . most importantly , our inter-database evaluation depicts that the proposed approach showed very promising generalization capabilities . story_separator_special_tag the face image is the most accessible biometric modality which is used for highly accurate face recognition systems , while it is vulnerable to many different types of presentation attacks . face anti-spoofing is a very critical step before feeding the face image to biometric systems . in this paper , we propose a novel two-stream cnn-based approach for face anti-spoofing , by extracting the local features and holistic depth maps from the face images . the local features help the cnn discriminate the spoof patches independent of the spatial face areas . on the other hand , the holistic depth map examines whether the input image has a face-like depth . extensive experiments are conducted on the challenging databases ( casia-fasd , msu-ussa , and replay attack ) , with comparison to the state of the art . story_separator_special_tag for face authentication to become widespread on mobile devices , robust countermeasures must be developed for face presentation-attack detection ( pad ) . existing databases for evaluating face-pad methods do not capture the specific characteristics of mobile devices . we introduce a new database , replay-mobile , for this purpose . this publicly available database includes 1,200 videos corresponding to 40 clients . besides the genuine videos , the database contains a variety of presentation-attacks . the database also provides three non-overlapping sets for training , validating and testing classifiers for the face-pad problem . this will help researchers in comparing new approaches to existing algorithms in a standardized fashion .
for this purpose , we also provide baseline results with state-of-the-art approaches based on image quality analysis and face texture analysis . story_separator_special_tag the vulnerability of face recognition systems to presentation attacks ( also known as direct attacks or spoof attacks ) has received a great deal of interest from the biometric community . the rapid evolution of face recognition systems into real-time applications has raised new concerns about their ability to resist presentation attacks , particularly in unattended application scenarios such as automated border control . the goal of a presentation attack is to subvert the face recognition system by presenting a facial biometric artifact . popular face biometric artifacts include a printed photo , the electronic display of a facial photo , replaying video using an electronic display , and 3d face masks . these have demonstrated a high security risk for state-of-the-art face recognition systems . however , several presentation attack detection ( pad ) algorithms ( also known as countermeasures or antispoofing methods ) have been proposed that can automatically detect and mitigate such targeted attacks . the goal of this survey is to present a systematic overview of the existing work on face presentation attack detection that has been carried out . this paper describes the various aspects of face presentation attacks , including different types of face story_separator_special_tag the main scope of this chapter is to serve as a brief introduction to face presentation attack detection . the next pages present the different presentation attacks that a face recognition system can confront , in which an attacker presents to the sensor , mainly a camera , an artifact ( generally a photograph , a video , or a mask ) to try to impersonate a genuine user . first , we introduce the current status of face recognition , its level of deployment , and the challenges it faces . in addition , we present the vulnerabilities and the possible attacks that a biometric system may be exposed to , thus showing the high importance of presentation attack detection methods . we review different types of presentation attack methods , from simpler to more complex ones , and in which cases they could be effective . later , we summarize the most popular presentation attack detection methods to deal with these attacks . finally , we introduce public datasets used by the research community for exploring the vulnerabilities of face biometrics and developing effective countermeasures against known spoofs . story_separator_special_tag we present a real-time liveness detection approach against photograph spoofing in face recognition , by recognizing spontaneous eyeblinks in a non-intrusive manner . the approach requires no extra hardware except for a generic webcam . eyeblink sequences often have a complex underlying structure . we formulate blink detection as inference in an undirected conditional graphical framework , and are able to learn compact and efficient observation and transition potentials from data . for the purpose of quick and accurate recognition of the blink behavior , eye closity , an easily-computed discriminative measure derived from the adaptive boosting algorithm , is developed , and then smoothly embedded into the conditional model . an extensive set of experiments is presented to show the effectiveness of our approach and how it outperforms the cascaded adaboost and hmm in the task of eyeblink detection .
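the blink detectors above learn models ( adaboost `` eye closity '' , conditional graphical models ) ; as a simpler stand-in , the sketch below thresholds the eye aspect ratio ( ear ) computed from six eye landmarks per frame . the landmark ordering and the 0.2 threshold are common conventions , not values from the papers :

```python
# minimal eye-aspect-ratio blink counter (a simpler heuristic stand-in).
import numpy as np

def eye_aspect_ratio(eye):
    """eye : (6, 2) array of landmarks ordered around the eye contour."""
    v1 = np.linalg.norm(eye[1] - eye[5])  # vertical lid distance
    v2 = np.linalg.norm(eye[2] - eye[4])
    h = np.linalg.norm(eye[0] - eye[3])   # horizontal eye width
    return (v1 + v2) / (2.0 * h)

def count_blinks(ear_sequence, closed_thresh=0.2, min_closed_frames=2):
    """count blinks as runs of consecutive low-ear frames."""
    blinks, run = 0, 0
    for ear in ear_sequence:
        if ear < closed_thresh:
            run += 1
        else:
            if run >= min_closed_frames:
                blinks += 1
            run = 0
    return blinks
```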
story_separator_special_tag this paper presents a blinking-based liveness detection method for human faces using conditional random fields ( crfs ) . our method only needs a web camera for capturing video clips . the blinking clue is a passive action and does not require the user to respond to any hint , such as speaking or moving the face . we model blinking activity by crfs , which accommodate long-range contextual dependencies within the observation sequence . the experimental results demonstrate that the proposed method is promising , and outperforms the cascaded adaboost and hmm methods . story_separator_special_tag biometrics is a rapidly developing technology that identifies a person based on his or her physiological or behavioral characteristics . to ensure correct authentication , the biometric system must be able to detect and reject the use of a copy of a biometric instead of the live biometric . this function is usually termed `` liveness detection '' . this paper describes a new method for live face detection . using structure and movement information of a live face , an effective live face detection algorithm is presented . compared to existing approaches , which concentrate on the measurement of 3d depth information , this method is based on the analysis of fourier spectra of a single face image or face image sequences . experimental results show that the proposed method has an encouraging performance . story_separator_special_tag a technique evaluating liveness in short face image sequences is presented . the intended purpose of the proposed system is to assist in a biometric authentication framework , by adding liveness awareness in a non-intrusive manner . analyzing the trajectories of single parts of a live face reveals valuable information for discriminating it from a spoofed one . the proposed system uses a lightweight novel optical flow , which is especially applicable in face motion estimation based on the structure tensor and a few frames . it uses a model-based local gabor decomposition and svm experts for face part detection . an alternative approach for face pan detection using optical flow pattern matching is introduced as well . experimental results on the proposed system are presented . story_separator_special_tag a technique evaluating liveness in face image sequences is presented . ensuring the actual presence of a live face , in contrast to a photograph ( playback attack ) , is a significant problem in face authentication , to the extent that anti-spoofing measures are highly desirable . the purpose of the proposed system is to assist in a biometric authentication framework , by adding liveness awareness in a non-intrusive manner . analyzing the trajectories of certain parts of a live face reveals valuable information for discriminating it from a spoofed one . the proposed system uses a lightweight novel optical flow , which is especially applicable in face motion estimation based on the structure tensor and inputs of a few frames . for reliable face part detection , the system utilizes a model-based local gabor decomposition and svm experts , where selected points from a retinotopic grid are used to form regional face models . the estimated optical flow is also exploited to detect a face part . the whole procedure , starting with three images as input and finishing in a liveness score , is executed in near real-time without special purpose hardware .
experimental results on the proposed system are presented . story_separator_special_tag it is a common spoof to use a photograph to fool a face recognition algorithm . in light of differences in the optical flow fields generated by movements of two-dimensional planes and three-dimensional objects , we propose a new liveness detection method for face recognition . under the assumption that the test region is a two-dimensional plane , we can obtain a reference field from the actual optical flow field data . then the degree of difference between the two fields can be used to distinguish between a three-dimensional face and a two-dimensional photograph . an empirical study shows that the proposed approach is both feasible and effective . story_separator_special_tag a robust face detection technique along with mouth localization , processing every frame in real time ( video rate ) , is presented . moreover , it is exploited for motion analysis onsite to verify `` liveness '' as well as to achieve lip reading of digits . a methodological novelty is the suggested quantized angle features ( `` quangles '' ) , which are designed for illumination invariance without the need for preprocessing ( e.g. , histogram equalization ) . this is achieved by using both the gradient direction and the double angle direction ( the structure tensor angle ) , and by ignoring the magnitude of the gradient . boosting techniques are applied in a quantized feature space . a major benefit is reduced processing time ( i.e. , the training of effective cascaded classifiers is feasible in very short time , less than 1 h for data sets of order 10^4 ) . scale invariance is implemented through the use of an image scale pyramid . we propose `` liveness '' verification barriers as applications for which a significant amount of computation is avoided when estimating motion . novel strategies to avert advanced spoofing attempts ( e.g. , story_separator_special_tag face biometric systems are vulnerable to spoofing attacks . such attacks can be performed in many ways , including presenting a falsified image , video or 3d mask of a valid user . a widely used approach for differentiating genuine faces from fake ones has been to capture their inherent differences in ( 2d or 3d ) texture using local descriptors . one limitation of these methods is that they may fail if an unseen attack type , e.g . a highly realistic 3d mask which resembles real skin texture , is used in spoofing . here we propose a robust anti-spoofing method by detecting the pulse from face videos . based on the fact that a pulse signal exists in a real living face but not in any mask or print material , the method could be a generalized solution for face liveness detection . the proposed method is evaluated first on the 3d mask spoofing database 3dmad to demonstrate its effectiveness in detecting 3d mask attacks . more importantly , our cross-database experiment with high quality real-f masks shows that the pulse based method is able to detect even the previously unseen mask type whereas texture based methods fail . story_separator_special_tag authentication of users by exploiting the face as a biometric is gaining widespread traction due to recent advances in face detection and recognition algorithms . while face recognition has made rapid advances in its performance , such face-based authentication systems remain vulnerable to biometric presentation attacks .
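to make the pulse-based cue used by the rppg methods above concrete , here is a minimal sketch that averages the green channel over a tracked face region and scores how dominant the spectral peak is within a plausible heart-rate band . the 0.7-4 hz band and the dominance score are illustrative choices only , not the papers ' exact pipelines :

```python
# minimal remote-ppg liveness cue : spectral peak dominance of the green channel.
import numpy as np

def rppg_pulse_score(face_frames, fps=30.0, f_lo=0.7, f_hi=4.0):
    """face_frames : (t, h, w, 3) rgb video of a tracked face region."""
    signal = face_frames[:, :, :, 1].mean(axis=(1, 2))  # mean green per frame
    signal = signal - signal.mean()
    spectrum = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fps)
    band = (freqs >= f_lo) & (freqs <= f_hi)
    # a live face should concentrate spectral energy at one heart-rate peak ;
    # masks and prints should yield a flat spectrum and a low score .
    return spectrum[band].max() / (spectrum[band].sum() + 1e-8)
```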
biometric presentation attacks are varied and the most common attacks include the presentation of a video or photograph on a display device , the presentation of a printed photograph , or the presentation of a face mask resembling the user to be authenticated . in this paper , we present ppgsecure , a novel methodology that relies on camera-based physiology measurements to detect and thwart such biometric presentation attacks . ppgsecure uses a photoplethysmogram ( ppg ) , which is an estimate of vital signs from the small color changes in the video observed due to minor pulsatile variations in the volume of blood flowing to the face . we demonstrate that the temporal frequency spectra of the estimated ppg signal for real live individuals are distinctly different from those of presentation attacks , and exploit these differences to detect presentation attacks . we demonstrate that ppgsecure achieves significantly better performance than the existing state of the art . story_separator_special_tag the 3d mask spoofing attack has been one of the main challenges in face recognition . among existing methods , texture-based approaches show powerful abilities and achieve encouraging results on 3d mask face anti-spoofing . however , these approaches may not be robust enough in application scenarios and could fail to detect imposters with hyper-real masks . in this paper , we propose a novel approach to 3d mask face anti-spoofing from a new perspective , by analysing the heartbeat signal through remote photoplethysmography ( rppg ) . we develop a novel local rppg correlation model to extract discriminative local heartbeat signal patterns so that an imposter can better be detected regardless of the material and quality of the mask . to further exploit the characteristic of the rppg distribution on real faces , we learn a confidence map through heartbeat signal strength to weight the local rppg correlation pattern for classification . experiments on both public and self-collected datasets validate that the proposed method achieves promising results under intra- and cross-dataset scenarios . story_separator_special_tag spoofing with a photograph or video is one of the most common ways to circumvent a face recognition system . in this paper , we present a real-time and non-intrusive method to address this based on individual images from a generic web camera . the task is formulated as a binary classification problem in which , however , the distributions of positive and negative samples largely overlap in the input space , and a suitable representation space is hence of importance . using the lambertian model , we propose two strategies to extract the essential information about different surface properties of a live human face or a photograph , in terms of latent samples . based on these , we develop two new extensions to the sparse logistic regression model which allow quick and accurate spoof detection . primary experiments on a large photo imposter database show that the proposed method gives preferable detection performance compared to others . story_separator_special_tag spoofing face recognition systems with photos or videos of someone else is not difficult . sometimes , all one needs is to display a picture on a laptop monitor or a printed photograph to the biometric system . in order to detect this kind of spoof , in this paper we present a solution that works either with printed or lcd displayed photographs , even under bad illumination conditions without extra devices or user involvement .
tests conducted on large databases show good improvements in classification accuracy as well as true positive and false positive rates compared to the state-of-the-art . story_separator_special_tag face recognition is an increasingly popular method for user authentication . however , face recognition is susceptible to playback attacks . therefore , a reliable way to detect malicious attacks is crucial to the robustness of the system . we propose and validate a novel physics-based method to detect images recaptured from printed material using only a single image . micro-textures present in printed paper manifest themselves in the specular component of the image . features extracted from this component allow a linear svm classifier to achieve a 2.2 % false acceptance rate and 13 % false rejection rate ( 6.7 % equal error rate ) . we also show that the classifier can generalize to contrast-enhanced recaptured images and lcd screen recaptured images without re-training , demonstrating the robustness of our approach . story_separator_special_tag current face biometric systems are vulnerable to spoofing attacks . a spoofing attack occurs when a person tries to masquerade as someone else by falsifying data and thereby gaining illegitimate access . inspired by image quality assessment , characterization of printing artifacts , and differences in light reflection , we propose to approach the problem of spoofing detection from a texture analysis point of view . indeed , face prints usually contain printing quality defects that can be well detected using texture features . hence , we present a novel approach based on analyzing facial image textures for detecting whether there is a live person in front of the camera or a face print . the proposed approach analyzes the texture of the facial images using multi-scale local binary patterns ( lbp ) . compared to many previous works , our proposed approach is robust , computationally fast and does not require user-cooperation . in addition , the texture features that are used for spoofing detection can also be used for face recognition . this provides a unique feature space for coupling spoofing detection and face recognition . extensive experimental analysis on a publicly available database showed excellent results compared to story_separator_special_tag there are several types of spoofing attacks on face recognition systems , such as photograph , video or mask attacks . recent studies show that face recognition systems are vulnerable to these attacks . in this paper , a countermeasure technique is proposed to protect face recognition systems against mask attacks . to the best of our knowledge , this is the first time a countermeasure is proposed to detect mask attacks . the reason for this delay is mainly due to the unavailability of public mask attack databases . in this study , a 2d+3d face mask attack database is used which was prepared for a research project in which the authors are all involved . the performance of the countermeasure is evaluated on both the texture images and the depth maps , separately . the results show that the proposed countermeasure gives satisfactory results using both the texture images and the depth maps . the performance of the countermeasure is observed to be slightly better when the technique is applied to texture images instead of depth maps , which proves that face texture provides more information than 3d face shape characteristics using the proposed approach .
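a minimal sketch of the texture + depth idea from the mask countermeasures above , done as score-level fusion of two per-modality classifiers . the fusion weight and the use of logistic regression are hypothetical choices for illustration , not the papers ' exact classifiers :

```python
# minimal score-level fusion of a texture-based and a depth-based classifier.
import numpy as np
from sklearn.linear_model import LogisticRegression

def train_modality(features, labels):
    """features : (n, d) array for one modality ; labels : 1 = live , 0 = spoof."""
    return LogisticRegression(max_iter=1000).fit(features, labels)

def fused_liveness_score(texture_clf, depth_clf, tex_feat, depth_feat, w=0.6):
    """weighted score-level fusion ; each score is p(live) under one modality."""
    s_tex = texture_clf.predict_proba(tex_feat.reshape(1, -1))[0, 1]
    s_dep = depth_clf.predict_proba(depth_feat.reshape(1, -1))[0, 1]
    return w * s_tex + (1.0 - w) * s_dep
```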
story_separator_special_tag photographs , videos or masks can be used to spoof face recognition systems . in this paper , a countermeasure is proposed to protect face recognition systems against 3d mask attacks . the reason for the lack of studies on countermeasures against mask attacks is mainly the unavailability of public databases dedicated to mask attacks . in this study , a 2d+3d mask attack database is used that was prepared for a research project in which the authors are all involved . the proposed countermeasure is based on the fusion of the information extracted from both the texture and the depth images in the mask database , and provides satisfactory results in protecting recognition systems against mask attacks . another contribution of this study is that the countermeasure is integrated into the selected baseline systems for 2d and 3d face recognition , which makes it possible to analyze the performance of the systems with/without attacks and with/without the countermeasure . story_separator_special_tag vulnerability to spoofing attacks is a serious drawback for many biometric systems . among all biometric traits , face is the one that is exposed to the most serious threat , since it is exceptionally easy to access . the limited work on fraud detection capabilities for face mainly centers around 2d attacks forged by displaying printed photos or replaying recorded videos on mobile devices . a significant portion of this work is based on the flatness of the facial surface in front of the sensor . in this study , we complicate the spoofing problem further by introducing the 3rd dimension and examine possible 3d attack instruments . a small database is constructed with six different types of 3d facial masks and experimented on to determine the right direction to study 3d attacks . spoofing performance for each type of mask is assessed and analysed thoroughly using two gabor-wavelet-based algorithms . story_separator_special_tag with the wide deployment of face recognition systems in applications from border control to mobile device unlocking , combating face spoofing attacks requires increased attention ; such attacks can be easily launched via printed photos , video replays and 3d masks . we address the problem of facial spoofing detection against replay attacks based on the analysis of aliasing in spoof face videos . the application domain of interest is mobile phone unlock . we analyze the moire pattern aliasing that commonly appears during the recapture of video or photo replays on a screen in different channels ( r , g , b and grayscale ) and regions ( the whole frame , detected face , and facial component between the nose and chin ) . multi-scale lbp and dsift features are used to represent the characteristics of moire patterns that differentiate a replayed spoof face from a live face ( face present ) . experimental results on the idiap replay-attack and casia databases as well as a database collected in our laboratory ( rafs ) , which is based on the msu-fsd database , show that the proposed approach is very effective in face spoof detection for both story_separator_special_tag research on non-intrusive software-based face spoofing detection schemes has been mainly focused on the analysis of the luminance information of the face images , hence discarding the chroma component , which can be very useful for discriminating fake faces from genuine ones . this paper introduces a novel and appealing approach for detecting face spoofing using colour texture analysis .
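a minimal sketch of the colour-texture idea : convert the face crop to ycbcr and hsv , compute a uniform-lbp histogram per channel , and concatenate . the channel and lbp parameter choices are illustrative assumptions , not the exact descriptors of the papers :

```python
# minimal colour-space lbp feature : per-channel uniform-lbp histograms , concatenated.
import numpy as np
from skimage.color import rgb2ycbcr, rgb2hsv
from skimage.feature import local_binary_pattern

def channel_lbp_hist(channel, points=8, radius=1):
    codes = local_binary_pattern(channel, points, radius, method="uniform")
    hist, _ = np.histogram(codes, bins=points + 2, range=(0, points + 2), density=True)
    return hist

def color_texture_feature(rgb_face):
    """rgb_face : (h, w, 3) float image in [0, 1] ; returns one feature vector."""
    bands = list(np.moveaxis(rgb2ycbcr(rgb_face), -1, 0)) + \
            list(np.moveaxis(rgb2hsv(rgb_face), -1, 0))
    return np.concatenate([channel_lbp_hist(b) for b in bands])
```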
we exploit the joint colour-texture information from the luminance and the chrominance channels by extracting complementary low-level feature descriptions from different colour spaces . more specifically , the feature histograms are computed over each image band separately . extensive experiments on the three most challenging benchmark data sets , namely , the casia face anti-spoofing database , the replay-attack database , and the msu mobile face spoof database , showed excellent results compared with the state of the art . more importantly , unlike most of the methods proposed in the literature , our proposed approach is able to achieve stable performance across all three benchmark data sets . the promising results of our cross-database evaluation suggest that the facial colour texture representation is more stable in unknown conditions compared with its gray-scale counterparts . story_separator_special_tag current face biometric systems are vulnerable to spoofing attacks . a spoofing attack occurs when a person tries to masquerade as someone else by falsifying data and thereby gaining illegitimate access . inspired by image quality assessment , characterisation of printing artefacts and differences in light reflection , the authors propose to approach the problem of spoofing detection from a texture analysis point of view . indeed , face prints usually contain printing quality defects that can be well detected using texture and local shape features . hence , the authors present a novel approach based on analysing facial images for detecting whether there is a live person in front of the camera or a face print . the proposed approach analyses the texture and gradient structures of the facial images using a set of low-level feature descriptors , a fast linear classification scheme and score-level fusion . compared to many previous works , the authors ' proposed approach is robust and does not require user cooperation . in addition , the texture features that are used for spoofing detection can also be used for face recognition . this provides a unique feature space for coupling spoofing detection and face recognition . extensive story_separator_special_tag spoofing attacks mainly include printing artifacts , electronic screens and ultra-realistic face masks or models . in this paper , we propose a component-based face coding approach for liveness detection . the proposed method consists of four steps : ( 1 ) locating the components of the face ; ( 2 ) coding the low-level features respectively for all the components ; ( 3 ) deriving the high-level face representation by pooling the codes with weights derived from the fisher criterion ; ( 4 ) concatenating the histograms from all components and feeding them into a classifier for identification . the proposed framework makes good use of micro differences between genuine faces and fake faces . meanwhile , the inherent appearance differences among different components are retained . extensive experiments on three published standard databases demonstrate that the method can achieve the best liveness detection performance on all three databases . story_separator_special_tag the face recognition community has finally started paying more attention to the long-neglected problem of spoofing attacks and the number of countermeasures is gradually increasing .
fairly good results have been reported on the publicly available databases but it is reasonable to assume that there exists no superior anti-spoofing technique due to the varying nature of attack scenarios and acquisition conditions . therefore , we propose to approach the problem of face spoofing as a set of attack-specific subproblems that are solvable with a proper combination of complementary countermeasures . inspired by how we humans can perform reliable spoofing detection based only on the available scene and context information , this work provides the first investigation in the research literature that attempts to detect the presence of a spoofing medium in the observed scene . we experiment with two publicly available databases consisting of several fake face attacks of different nature under varying conditions and imaging qualities . the experiments show excellent results beyond the state of the art . more importantly , our cross-database evaluation depicts that the proposed approach has promising generalization capabilities . story_separator_special_tag a new face anti-spoofing method based on general image quality assessment is presented . the proposed approach presents a very low degree of complexity , which makes it suitable for real-time applications , using 14 image quality features extracted from one image ( i.e. , the same acquired for face recognition purposes ) to distinguish between legitimate and impostor samples . the experimental results , obtained on two publicly available datasets , show very competitive results compared to other state-of-the-art methods tested on the same benchmarks . the findings presented in the work clearly suggest that the analysis of the general image quality of real face samples reveals highly valuable information that may be very efficiently used to discriminate them from fake images . story_separator_special_tag ensuring the actual presence of a real legitimate trait , in contrast to a fake self-manufactured synthetic or reconstructed sample , is a significant problem in biometric authentication , which requires the development of new and efficient protection measures . in this paper , we present a novel software-based fake detection method that can be used in multiple biometric systems to detect different types of fraudulent access attempts . the objective of the proposed system is to enhance the security of biometric recognition frameworks , by adding liveness assessment in a fast , user-friendly , and non-intrusive manner , through the use of image quality assessment . the proposed approach presents a very low degree of complexity , which makes it suitable for real-time applications , using 25 general image quality features extracted from one image ( i.e. , the same acquired for authentication purposes ) to distinguish between legitimate and impostor samples . the experimental results , obtained on publicly available data sets of fingerprint , iris , and 2d face , show that the proposed method is highly competitive compared with other state-of-the-art approaches and that the analysis of the general image quality of real biometric samples reveals highly story_separator_special_tag with the wide deployment of face recognition systems in applications from deduplication to mobile device unlocking , security against face spoofing attacks requires increased attention ; such attacks can be easily launched via printed photos , video replays , and 3d masks of a face .
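a minimal sketch of the image-quality-feature idea described above : compare the input face with a gaussian-smoothed copy of itself using full-reference quality measures , on the premise that recaptured faces degrade differently under smoothing . the particular measures and sigma here are assumptions for illustration , not the papers ' full feature sets :

```python
# minimal image-quality feature vector for liveness : full-reference measures
# between the face image and a blurred copy of itself.
import numpy as np
from scipy.ndimage import gaussian_filter
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def image_quality_features(gray_face, sigma=1.0):
    """gray_face : 2-d float array in [0, 1] ; returns a small feature vector."""
    smoothed = gaussian_filter(gray_face, sigma=sigma)
    psnr = peak_signal_noise_ratio(gray_face, smoothed, data_range=1.0)
    ssim = structural_similarity(gray_face, smoothed, data_range=1.0)
    mse = float(np.mean((gray_face - smoothed) ** 2))
    return np.array([psnr, ssim, mse])  # feed to any standard classifier
```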
we address the problem of face spoof detection against print ( photo ) and replay ( photo or video ) attacks based on the analysis of image distortion ( e.g . , surface reflection , moire pattern , color distortion , and shape deformation ) in spoof face images ( or video frames ) . the application domain of interest is smartphone unlock , given that a growing number of smartphones have face unlock and mobile payment capabilities . we build an unconstrained smartphone spoof attack database ( msu ussa ) containing more than 1000 subjects . both the print and replay attacks are captured using the front and rear cameras of a nexus 5 smartphone . we analyze the image distortion of the print and replay attacks using different : 1 ) intensity channels ( r , g , b , and grayscale ) ; 2 ) image regions story_separator_special_tag with the wide application of user authentication based on face recognition , face spoof attacks against face recognition systems are drawing increasing attention . while emerging approaches to face antispoofing have been reported in recent years , most of them are limited to non-realistic intra-database testing scenarios instead of cross-database testing scenarios . we propose a robust representation integrating deep texture features and a face movement cue such as eye-blink as countermeasures for presentation attacks like photos and replays . we learn deep texture features from both aligned facial images and whole frames , and use a frame difference based approach for eye-blink detection . a face video clip is classified as live if it is categorized as live using both cues . cross-database testing on public-domain face databases shows that the proposed approach significantly outperforms the state-of-the-art . story_separator_special_tag recently , deep convolutional neural networks have been successfully applied to many computer vision tasks and achieved promising results , so some works have introduced deep learning into face anti-spoofing . however , most approaches just use the final fully-connected layer to distinguish the real and fake faces . inspired by the idea that each convolutional kernel can be regarded as a part filter , we extract deep partial features from the convolutional neural network ( cnn ) to distinguish the real and fake faces . in our proposed approach , the cnn is first fine-tuned on the face spoofing datasets . then , the block principal component analysis ( pca ) method is utilized to reduce the dimensionality of features , which helps avoid the over-fitting problem . lastly , the support vector machine ( svm ) is employed to distinguish the real and fake faces . the experiments evaluated on two publicly available databases , replay-attack and casia , show the proposed method can obtain satisfactory results compared to the state-of-the-art methods . story_separator_special_tag many prior face anti-spoofing works develop discriminative models for recognizing the subtle differences between live and spoof faces . those approaches often regard the image as an indivisible unit , and process it holistically , without explicit modeling of the spoofing process . in this work , motivated by noise modeling and denoising algorithms , we identify a new problem of face de-spoofing , for the purpose of anti-spoofing : inversely decomposing a spoof face into a spoof noise and a live face , and then utilizing the spoof noise for classification .
a cnn architecture with proper constraints and supervisions is proposed to overcome the problem of having no ground truth for the decomposition . we evaluate the proposed method on multiple face anti-spoofing databases . the results show promising improvements due to our spoof noise modeling . moreover , the estimated spoof noise provides a visualization which helps to understand the spoof noise added by each spoof medium . story_separator_special_tag face recognition has evolved as a prominent biometric authentication modality . however , vulnerability to presentation attacks curtails its reliable deployment . automatic detection of presentation attacks is essential for the secure use of face recognition technology in unattended scenarios . in this work , we introduce a convolutional neural network ( cnn ) based framework for presentation attack detection , with deep pixel-wise supervision . the framework uses only frame-level information , making it suitable for deployment in smart devices with minimal computational and time overhead . we demonstrate the effectiveness of the proposed approach on public datasets for both intra- as well as cross-dataset experiments . the proposed approach achieves an hter of 0 % on the replay-mobile dataset and an acer of 0.42 % on protocol-1 of the oulu dataset , outperforming state-of-the-art methods . story_separator_special_tag user authentication is an important step to protect information and in this field face biometrics is advantageous . face biometrics is natural , easy to use and less human-invasive . unfortunately , recent work has revealed that face biometrics is vulnerable to spoofing attacks using cheap low-tech equipment . this article presents a countermeasure against such attacks based on the lbp-top operator , combining both space and time information into a single multiresolution texture descriptor . experiments carried out with the replay attack database show a half total error rate ( hter ) improvement from 15.16 % to 7.60 % . story_separator_special_tag user authentication is an important step to protect information , and in this context , face biometrics is potentially advantageous . face biometrics is natural , intuitive , easy to use , and less human-invasive . unfortunately , recent work has revealed that face biometrics is vulnerable to spoofing attacks using cheap low-tech equipment . this paper introduces a novel and appealing approach to detect face spoofing using the spatiotemporal ( dynamic texture ) extensions of the highly popular local binary pattern operator . the key idea of the approach is to learn and detect the structure and the dynamics of the facial micro-textures that characterise real faces but not fake ones . we evaluated the approach with two publicly available databases ( replay-attack database and casia face anti-spoofing database ) . the results show that our approach performs better than state-of-the-art techniques following the provided evaluation protocols of each database . story_separator_special_tag for a robust face biometric system , a reliable anti-spoofing approach must be deployed to circumvent the print and replay attacks . several techniques have been proposed to counter face spoofing ; however , a robust solution that is computationally efficient is still unavailable . this paper presents a new approach for spoofing detection in face videos using motion magnification . the eulerian motion magnification approach is used to enhance the facial expressions commonly exhibited by subjects in a captured video .
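a minimal single-scale sketch of eulerian motion magnification as just described : band-pass each pixel 's intensity over time and add an amplified copy back . full implementations filter a spatial pyramid ; the band and gain values here are simplifying assumptions :

```python
# minimal single-scale eulerian motion magnification via a temporal fft band-pass.
import numpy as np

def magnify_motion(frames, fps=30.0, f_lo=0.4, f_hi=3.0, gain=10.0):
    """frames : (t, h, w) grayscale float video in [0, 1] ; returns magnified video."""
    spectrum = np.fft.rfft(frames, axis=0)            # temporal spectrum per pixel
    freqs = np.fft.rfftfreq(frames.shape[0], d=1.0 / fps)
    mask = (freqs >= f_lo) & (freqs <= f_hi)          # keep motion-related band
    bandpassed = np.fft.irfft(spectrum * mask[:, None, None],
                              n=frames.shape[0], axis=0)
    return np.clip(frames + gain * bandpassed, 0.0, 1.0)
```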
next , two types of feature extraction algorithms are proposed : ( i ) a configuration of lbp that provides improved performance compared to other computationally expensive texture based approaches and ( ii ) a motion estimation approach using the hoof descriptor . on the print attack and replay attack spoofing datasets , the proposed framework improves the state-of-the-art performance , with the hoof descriptor in particular yielding near-perfect half total error rates of 0 % and 1.25 % , respectively . story_separator_special_tag recent advances in biometrics , information forensics , and security have improved the accuracy of biometric systems , mainly those based on facial information . however , an ever-growing challenge is the vulnerability of such systems to impostor attacks , in which users without access privileges try to authenticate themselves as valid users . in this work , we present a solution to video-based face spoofing of biometric systems . this type of attack is characterized by presenting a video of a real user to the biometric system . to the best of our knowledge , this is the first attempt at dealing with video-based face spoofing based on the analysis of global information that is invariant to video content . our approach takes advantage of noise signatures generated by the recaptured video to distinguish between fake and valid access . to capture the noise and obtain a compact representation , we use the fourier spectrum followed by the computation of the visual rhythm and extraction of the gray-level co-occurrence matrices , used as feature descriptors . results show the effectiveness of the proposed approach in distinguishing between valid and fake users for video-based spoofing with near-perfect classification results . story_separator_special_tag spoofing attacks or impersonation can be easily accomplished in a facial biometric system wherein users without access privileges attempt to authenticate themselves as valid users , in which an impostor needs only a photograph or a video with facial information of a legitimate user . even with recent advances in biometrics , information forensics and security , the vulnerability of facial biometric systems to spoofing attacks is still an open problem . even though several methods have been proposed for photo-based spoofing attack detection , attacks performed with videos have been vastly overlooked , which hinders the use of facial biometric systems in modern applications . in this paper , we present an algorithm for video-based spoofing attack detection through the analysis of global information which is invariant to content , since we discard video contents and analyze content-independent noise signatures present in the video related to the unique acquisition processes . our approach takes advantage of noise signatures generated by the recaptured video to distinguish between fake and valid access videos . for that , we use the fourier spectrum followed by the computation of video visual rhythms and the extraction of different characterization methods . for evaluation , story_separator_special_tag despite important recent advances , the vulnerability of biometric systems to spoofing attacks is still an open problem . spoof attacks occur when impostor users present synthetic biometric samples of a valid user to the biometric system seeking to deceive it . considering the case of face biometrics , a spoofing attack consists in presenting a fake sample ( e.g.
, photograph , digital video , or even a 3d mask ) to the acquisition sensor with the facial information of a valid user . in this paper , we introduce a low-cost , software-based method for detecting spoofing attempts in face recognition systems . our hypothesis is that during acquisition there will be inevitable artifacts left behind in the recaptured biometric samples , allowing us to create a discriminative signature of the video generated by the biometric sensor . to characterize these artifacts , we extract time-spectral feature descriptors from the video , which can be understood as low-level feature descriptors that gather temporal and spectral information across the biometric sample , and use the visual codebook concept to find mid-level feature descriptors computed from the low-level ones . such descriptors are more robust for detecting several kinds story_separator_special_tag temporal features are important for face anti-spoofing . unfortunately , existing methods have limitations in exploring such temporal features . in this work , we propose a deep neural network architecture combining long short-term memory ( lstm ) units with convolutional neural networks ( cnn ) . our architecture works well for face anti-spoofing by utilizing the lstm units ' ability to find long-range relations in input sequences as well as extracting local and dense features through convolution operations . our best model shows significant performance improvement over a general cnn architecture ( 5.93 % vs. 7.34 % ) , and hand-crafted features ( 5.93 % vs. 10.00 % ) on the casia dataset . story_separator_special_tag face anti-spoofing is an important task in full-stack face applications including face detection , verification , and recognition . previous approaches build models on datasets which do not simulate the real-world data well ( e.g. , small scale , insignificant variance , etc. ) . existing models may rely on auxiliary information , which prevents these anti-spoofing solutions from generalizing well in practice . in this paper , we present a data collection solution along with a data synthesis technique to simulate digital medium-based face spoofing attacks , which can easily help us obtain a large amount of training data well reflecting the real-world scenarios . through exploiting a novel spatio-temporal anti-spoof network ( stasn ) , we are able to push the performance on public face anti-spoofing datasets over state-of-the-art methods by a large margin . since the proposed model can automatically attend to discriminative regions , it makes analyzing the behaviors of the network possible . we conduct extensive experiments and show that the proposed model can distinguish spoof faces by extracting features from a variety of regions to seek out subtle evidence such as borders , moire patterns , reflection artifacts , etc . story_separator_special_tag face recognition , which is security-critical , has been widely deployed in our daily life . however , traditional face recognition technologies in practice can be spoofed easily , for example , by using a simple printed photo . in this paper , we propose a novel face liveness detection approach to counter spoofing attacks by recovering sparse 3d facial structure . given a face video or several images captured from more than two viewpoints , we detect facial landmarks and select key frames . then , the sparse 3d facial structure can be recovered from the selected key frames .
finally , a support vector machine ( svm ) classifier is trained to distinguish genuine and fake faces . compared with previous works , the proposed method has the following advantages . first , it gives perfect liveness detection results , which meets the security requirement of face biometric systems . second , it is independent of cameras or systems and works well on different devices . experiments with genuine faces versus planar photo faces and warped photo faces demonstrate the superiority of the proposed method over the state-of-the-art liveness detection methods . story_separator_special_tag face anti-spoofing is significant to the security of face recognition systems . previous works on depth-supervised learning have proved its effectiveness for face anti-spoofing . nevertheless , they only considered depth as auxiliary supervision in a single frame . different from these methods , we develop a new method to estimate depth information from multiple rgb frames and propose a depth-supervised architecture which can efficiently encode spatiotemporal information for presentation attack detection . it includes two novel modules : an optical flow guided feature block ( offb ) and a convolution gated recurrent units ( convgru ) module , which are designed to extract short-term and long-term motion to discriminate living and spoofing faces . extensive experiments demonstrate that the proposed approach achieves state-of-the-art results on four benchmark datasets , namely oulu-npu , siw , casia-mfsd , and replay-attack . story_separator_special_tag face anti-spoofing ( fas ) plays a vital role in face recognition systems . most state-of-the-art fas methods 1 ) rely on stacked convolutions and expert-designed networks , which are weak in describing detailed fine-grained information and easily become ineffective when the environment varies ( e.g. , different illumination ) , and 2 ) prefer to use long sequences as input to extract dynamic features , making them difficult to deploy in scenarios which need a quick response . here we propose a novel frame-level fas method based on central difference convolution ( cdc ) , which is able to capture intrinsic detailed patterns via aggregating both intensity and gradient information . a network built with cdc , called the central difference convolutional network ( cdcn ) , is able to provide more robust modeling capacity than its counterpart built with vanilla convolution . furthermore , over a specifically designed cdc search space , neural architecture search ( nas ) is utilized to discover a more powerful network structure ( cdcn++ ) , which can be assembled with a multiscale attention fusion module ( mafm ) for further boosting performance . comprehensive experiments are performed on six benchmark datasets to show story_separator_special_tag this paper presents a face liveness detection system against spoofing with photographs , videos , and 3d models of a valid user in a face recognition system . anti-spoofing clues inside and outside a face are both exploited in our system . the inside-face clues of spontaneous eyeblinks are employed for anti-spoofing of photographs and 3d models . the outside-face clues of scene context are used for anti-spoofing of video replays . the system does not need user collaboration , i.e . it runs in a non-intrusive manner .
in our system , the eyeblink detection is formulated as an inference problem of an undirected conditional graphical framework which models contextual dependencies in blink image sequences . the scene context clue is found by comparing the difference of regions of interest between the reference scene image and the input one , based on the similarity computed by local binary pattern descriptors on a series of fiducial points extracted in scale space . extensive experiments are carried out to show the effectiveness of our system . story_separator_special_tag a multi-cues integration framework is proposed using a hierarchical neural network . bottleneck representations are effective in multi-cues feature fusion . shearlet is utilized to perform face image quality assessment . motion-based face liveness features are automatically learned using autoencoders . many trait-specific countermeasures to face spoofing attacks have been developed for the security of face authentication . however , there is no superior face anti-spoofing technique to deal with every kind of spoofing attack in varying scenarios . in order to improve the generalization ability of face anti-spoofing approaches , an extendable multi-cues integration framework for face anti-spoofing using a hierarchical neural network is proposed , which can fuse image quality cues and motion cues for liveness detection . shearlet is utilized to develop an image quality-based liveness feature . dense optical flow is utilized to extract motion-based liveness features . a bottleneck feature fusion strategy can integrate different liveness features effectively . the proposed approach was evaluated on three public face anti-spoofing databases . a half total error rate ( hter ) of 0 % and an equal error rate ( eer ) of 0 % were achieved on both the replay-attack database and the 3d-mad database . an eer of 5.83 % was achieved on casia-fasd story_separator_special_tag while face recognition systems got a significant boost in terms of recognition performance in recent years , they are known to be vulnerable to presentation attacks . to date , most of the research in the field of face anti-spoofing or presentation attack detection has been considered as a two-class classification task : features of bona-fide samples versus features coming from spoofing attempts . the main focus has been on boosting the anti-spoofing performance for databases with identical types of attacks across both training and evaluation subsets . however , in realistic applications the types of attacks are likely to be unknown , potentially occupying a broad space in the feature domain . therefore , a failure to generalize on unseen types of attacks is one of the main potential challenges in existing anti-spoofing approaches . first , to demonstrate the generalization issues of two-class anti-spoofing systems we establish new evaluation protocols for existing publicly available databases . second , to unite the data collection efforts of various institutions we introduce a challenging aggregated database composed of 3 publicly available datasets : replay-attack , replay-mobile and msu mfsd , reporting the performance on it . third , considering existing limitations story_separator_special_tag face spoofing detection is commonly formulated as a two-class recognition problem where relevant features of both positive ( real access ) and negative samples ( spoofing attempts ) are utilized to train the system .
however , the diversity of spoofing attacks , any new means of spoofing attackers may invent ( previously unseen by the system ) , the problem of imaging sensor interoperability , and other environmental factors , in addition to the small sample size , make the problem quite challenging . considering these observations , in this paper , a number of propositions in the evaluation scenario , problem formulation , and solution are presented . first of all , a new evaluation protocol to study the effect of the occurrence of unseen attack types , where the train and test data are produced by different means , is proposed . the new evaluation protocol better reflects the realistic conditions in spoofing attempts where an attacker may come up with new means for spoofing . inter-database and intra-database experiments are incorporated into the evaluation scheme to account for the sensor interoperability problem . second , a new and more realistic formulation of the spoofing detection problem based on the story_separator_special_tag face anti-spoofing is designed to keep face recognition systems from recognizing fake faces as the genuine users . while advanced face anti-spoofing methods are developed , new types of spoof attacks are also being created and becoming a threat to all existing systems . we define the detection of unknown spoof attacks as zero-shot face anti-spoofing ( zsfa ) . previous works on zsfa only study 1-2 types of spoof attacks , such as print/replay attacks , which limits insight into this problem . in this work , we expand the zsfa problem to a wide range of 13 types of spoof attacks , including print attack , replay attack , 3d mask attacks , and so on . a novel deep tree network ( dtn ) is proposed to tackle the zsfa . the tree is learned to partition the spoof samples into semantic sub-groups in an unsupervised fashion . when a data sample arrives , whether a known or unknown attack , the dtn routes it to the most similar spoof cluster and makes the binary decision . in addition , to enable the study of zsfa , we introduce the first face anti-spoofing database that contains diverse story_separator_special_tag face presentation attacks have become an increasingly critical issue in the face recognition community . many face anti-spoofing methods have been proposed , but they cannot generalize well to `` unseen '' attacks . this work focuses on improving the generalization ability of face anti-spoofing methods from the perspective of domain generalization . we propose to learn a generalized feature space via a novel multi-adversarial discriminative deep domain generalization framework . in this framework , a multi-adversarial deep domain generalization is performed under a dual-force triplet-mining constraint . this ensures that the learned feature space is discriminative and shared by multiple source domains , and thus is more generalized to new face presentation attacks . an auxiliary face depth supervision is incorporated to further enhance the generalization ability . extensive experiments on four public datasets validate the effectiveness of the proposed method . story_separator_special_tag recent developments are reviewed in the computation of motion and structure of objects in a scene from a sequence of images . two distinct paradigms are highlighted : ( i ) the feature-based approach and ( ii ) the optical-flow-based approach . the comparative merits/demerits of these approaches are discussed .
the current status of research in these areas is reviewed and future research directions are indicated . story_separator_special_tag interactive graphics systems that are driven by visual input are discussed . the underlying computer vision techniques and a theoretical formulation that addresses issues of accuracy , computational efficiency , and compensation for display latency are presented . experimental results quantitatively compare the accuracy of the visual technique with traditional sensing . an extension to the basic technique to include structure recovery is discussed . story_separator_special_tag the problem of detection of orientation in finite dimensional euclidean spaces is solved in the least squares sense . the theory is developed for the case when such orientation computations are necessary at all local neighborhoods of the n-dimensional euclidean space . detection of orientation is shown to correspond to fitting an axis or a plane to the fourier transform of an n-dimensional structure . the solution of this problem is related to the solution of a well-known matrix eigenvalue problem . the computations can be performed in the spatial domain without actually doing a fourier transformation . along with the orientation estimate , a certainty measure , based on the error of the fit , is proposed . two applications in image analysis are considered : texture segmentation and optical flow . the theory is verified by experiments which confirm accurate orientation estimates and reliable certainty measures in the presence of noise . the comparative results indicate that the theory produces algorithms computing robust texture features as well as optical flow . story_separator_special_tag a series of studies demonstrated a possible relationship between eye-blink rate and central dopamine activity . first , apomorphine and other dopamine agonists acutely increased blink rate in monkeys , an effect blocked by sulpiride . secondly , parkinsonian patients with levodopa-induced dyskinesia exhibited twice the mean blink rate ( 21 blinks/min ) of other parkinsonians ( 11 blinks/min , p < 0.002 ) whereas the more symptomatic of the nondyskinetic patients had a very slow rate ( 3 blinks/min , p < 0.01 ) . thirdly , schizophrenic patients had an elevated mean blink rate ( 31 vs 23 blinks/min for normals , p < 0.05 ) which was normalized by neuroleptic treatment . thus , the correlation with central dopamine activity may also prove clinically useful in selected neuropsychiatric disorders . story_separator_special_tag this tutorial provides an overview of the basic theory of hidden markov models ( hmms ) as originated by l.e . baum and t. petrie ( 1966 ) and gives practical details on methods of implementation of the theory along with a description of selected applications of the theory to distinct problems in speech recognition . results from a number of original sources are combined to provide a single source for acquiring the background required to pursue this area of research further . the author first reviews the theory of discrete markov chains and shows how the concept of hidden states , where the observation is a probabilistic function of the state , can be used effectively . the theory is illustrated with two simple examples , namely coin-tossing and the classic balls-in-urns system . three fundamental problems of hmms are noted and several practical techniques for solving these problems are given .
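the first of the three classic hmm problems ( scoring an observation sequence against a model ) is solved by the forward algorithm ; a minimal sketch with per-step rescaling for numerical stability , using illustrative variable names :

```python
# minimal scaled forward algorithm for a discrete hmm.
import numpy as np

def forward_likelihood(pi, A, B, obs):
    """pi : (n,) initial state distribution ; A : (n, n) transition matrix ;
    B : (n, m) emission probabilities ; obs : sequence of symbol indices .
    returns log p(obs | model)."""
    alpha = pi * B[:, obs[0]]
    log_lik = np.log(alpha.sum())
    alpha /= alpha.sum()  # rescale to avoid underflow on long sequences
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]
        log_lik += np.log(alpha.sum())
        alpha /= alpha.sum()
    return log_lik
```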
the various types of hmms that have been studied , including ergodic as well as left-right models , are described . story_separator_special_tag this paper describes a machine learning approach for visual object detection which is capable of processing images extremely rapidly and achieving high detection rates . this work is distinguished by three key contributions . the first is the introduction of a new image representation called the `` integral image '' which allows the features used by our detector to be computed very quickly . the second is a learning algorithm , based on adaboost , which selects a small number of critical visual features from a larger set and yields extremely efficient classifiers . the third contribution is a method for combining increasingly more complex classifiers in a `` cascade '' which allows background regions of the image to be quickly discarded while spending more computation on promising object-like regions . the cascade can be viewed as an object specific focus-of-attention mechanism which unlike previous approaches provides statistical guarantees that discarded regions are unlikely to contain the object of interest . in the domain of face detection the system yields detection rates comparable to the best previous systems . used in real-time applications , the detector runs at 15 frames per second without resorting to image differencing or skin color story_separator_special_tag setting of the learning problem ; consistency of learning processes ; bounds on the rate of convergence of learning processes ; controlling the generalization ability of learning processes ; constructing learning algorithms ; what is important in learning theory ? story_separator_special_tag deep learning has been successfully applied to solve various complex problems ranging from big data analytics to computer vision and human-level control . deep learning advances , however , have also been employed to create software that can cause threats to privacy , democracy and national security . one of those deep learning-powered applications that recently emerged is deepfake . deepfake algorithms can create fake images and videos that humans cannot distinguish from authentic ones . the proposal of technologies that can automatically detect and assess the integrity of digital visual media is therefore indispensable . this paper presents a survey of algorithms used to create deepfakes and , more importantly , methods proposed to detect deepfakes in the literature to date . we present extensive discussions on challenges , research trends and directions related to deepfake technologies . by reviewing the background of deepfakes and state-of-the-art deepfake detection methods , this study provides a comprehensive overview of deepfake techniques and facilitates the development of new and more robust methods to deal with the increasingly challenging deepfakes . story_separator_special_tag this paper describes a speech recognition system that uses both acoustic and visual speech information to improve recognition performance in noisy environments . the system consists of three components : a visual module ; an acoustic module ; and a sensor fusion module . the visual module locates and tracks the lip movements of a given speaker and extracts relevant speech features . this task is performed with an appearance-based lip model that is learned from example images . visual speech features are represented by contour information of the lips and grey-level information of the mouth area .
the acoustic module extracts noise-robust features from the audio signal . finally the sensor fusion module is responsible for the joint temporal modeling of the acoustic and visual feature streams and is realized using multistream hidden markov models ( hmms ) . the multistream method allows the definition of different temporal topologies and levels of stream integration and hence enables the modeling of temporal dependencies more accurately than traditional approaches . we present two different methods to learn the asynchrony between the two modalities and how to incorporate them in the multistream models . the superior performance for the proposed system is story_separator_special_tag crack detection is a crucial problem in many tasks such as inspecting the condition of concrete pipes or tunnels , diagnosing structural damages , ensuring road safety and so on . thus , vision-based crack detection has attracted researchers recently , and many approaches for crack detection have been proposed . however , it remains a highly challenging task due to the intensity inhomogeneity of cracks and the complexity of the background . inspired by the fast development of deep convolutional neural networks ( cnns ) in image processing , we propose a multi-scale deep convolutional network based on an encoder-decoder architecture . more specifically , our network is based on the segnet network , a deep convolutional encoder-decoder architecture designed for pixel-wise semantic segmentation . we first discard the softmax layer in the segnet network , and then build enhanced modules based on the convolution feature maps from the encoder and decoder networks . furthermore , we adopt the focal loss function instead of the cross-entropy loss in the original segnet network to focus on learning the hard examples and down-weighting the numerous easy negatives . experimental results on public datasets show that our network achieves better results compared to other state-of-the-art story_separator_special_tag although generative adversarial networks ( gans ) have shown remarkable success in various tasks , they still face challenges in generating high quality images . in this paper , we propose stacked generative adversarial networks ( stackgans ) aimed at generating high-resolution photo-realistic images . first , we propose a two-stage generative adversarial network architecture , stackgan-v1 , for text-to-image synthesis . the stage-i gan sketches the primitive shape and colors of a scene based on a given text description , yielding low-resolution images . the stage-ii gan takes stage-i results and the text description as inputs , and generates high-resolution images with photo-realistic details . second , an advanced multi-stage generative adversarial network architecture , stackgan-v2 , is proposed for both conditional and unconditional generative tasks . our stackgan-v2 consists of multiple generators and multiple discriminators arranged in a tree-like structure ; images at multiple scales corresponding to the same scene are generated from different branches of the tree . stackgan-v2 shows more stable training behavior than stackgan-v1 by jointly approximating multiple distributions . extensive experiments demonstrate that the proposed stacked generative adversarial networks significantly outperform other state-of-the-art methods in generating photo-realistic images . story_separator_special_tag deepfake is a technique used to manipulate videos using computer code .
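the crack-detection abstract above replaces cross-entropy with the focal loss in order to down-weight easy negatives . a minimal numpy sketch of the binary focal loss in its commonly used form , with gamma and alpha values assumed rather than taken from that paper :

import numpy as np

def binary_focal_loss(p, y, gamma=2.0, alpha=0.25):
    # p : predicted foreground probabilities in (0, 1) ; y : 0/1 labels
    p = np.clip(p, 1e-7, 1 - 1e-7)
    pt = np.where(y == 1, p, 1 - p)          # probability assigned to the true class
    at = np.where(y == 1, alpha, 1 - alpha)  # class-balancing weight
    # the (1 - pt)**gamma factor shrinks the loss of well-classified examples
    return float(np.mean(-at * (1 - pt) ** gamma * np.log(pt)))

print(binary_focal_loss(np.array([0.9, 0.3]), np.array([1, 0])))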
it involves replacing the face of a person in a video with the face of another person . the automation of video manipulation means that deepfakes are becoming more prevalent and easier to implement . this can be credited to the emergence of apps like faceapp and fakeapp , which allow users to create their own deepfake videos using their smartphones . it has hence become essential to detect fake videos , to avoid the spread of false information . a recent study shows that the heart rate of fake videos can be used to distinguish original and fake videos . in the study presented , we obtained the heart rate of original videos and trained the state-of-the-art neural ordinary differential equations ( neural-ode ) model . we then created deepfake videos using commercial software . the average loss obtained for ten original videos is 0.010927 , and that for ten donor videos is 0.010041 . the trained neural-ode was able to predict the heart rate of our 10 deepfake videos generated using commercial software and 320 deepfake videos of the deepfake timit database . to the best of our knowledge , this is story_separator_special_tag the recent proliferation of fake portrait videos poses direct threats to society , law , and privacy [ 1 ] . believing the fake video of a politician , distributing fake pornographic content of celebrities , fabricating impersonated fake videos as evidence in courts are just a few real world consequences of deep fakes . we present a novel approach to detect synthetic content in portrait videos , as a preventive solution for the emerging threat of deep fakes . in other words , we introduce a deep fake detector . we observe that detectors blindly utilizing deep learning are not effective in catching fake content , as generative models produce formidably realistic results . our key assertion follows that biological signals hidden in portrait videos can be used as an implicit descriptor of authenticity , because they are neither spatially nor temporally preserved in fake content . to prove and exploit this assertion , we first engage several signal transformations for the pairwise separation problem , achieving 99.39 % accuracy . second , we utilize those findings to formulate a generalized classifier for fake content , by analyzing proposed signal transformations and corresponding feature sets . third , we story_separator_special_tag in this paper we propose a novel image representation called face x-ray for detecting forgery in face images . the face x-ray of an input face image is a greyscale image that reveals whether the input image can be decomposed into the blending of two images from different sources . it does so by showing the blending boundary for a forged image and the absence of blending for a real image . we observe that most existing face manipulation methods share a common step : blending the altered face into an existing background image . for this reason , face x-ray provides an effective way for detecting forgery generated by most existing face manipulation algorithms . face x-ray is general in the sense that it only assumes the existence of a blending step and does not rely on any knowledge of the artifacts associated with a specific face manipulation technique . indeed , the algorithm for computing face x-ray can be trained without fake images generated by any of the state-of-the-art face manipulation methods .
extensive experiments show that face x-ray remains effective when applied to forgery generated by unseen face manipulation techniques , while most existing face forgery detection story_separator_special_tag decision trees are attractive classifiers due to their high execution speed . but trees derived with traditional methods often can not be grown to arbitrary complexity without risking loss of generalization accuracy on unseen data . the limitation on complexity usually means suboptimal accuracy on training data . following the principles of stochastic modeling , we propose a method to construct tree-based classifiers whose capacity can be arbitrarily expanded for increases in accuracy for both training and unseen data . the essence of the method is to build multiple trees in randomly selected subspaces of the feature space . trees in different subspaces generalize their classification in complementary ways , and their combined classification can be monotonically improved . the validity of the method is demonstrated through experiments on the recognition of handwritten digits . story_separator_special_tag we introduce a new family of deep neural network models . instead of specifying a discrete sequence of hidden layers , we parameterize the derivative of the hidden state using a neural network . the output of the network is computed using a black-box differential equation solver . these continuous-depth models have constant memory cost , adapt their evaluation strategy to each input , and can explicitly trade numerical precision for speed . we demonstrate these properties in continuous-depth residual networks and continuous-time latent variable models . we also construct continuous normalizing flows , a generative model that can train by maximum likelihood , without partitioning or ordering the data dimensions . for training , we show how to scalably backpropagate through any ode solver , without access to its internal operations . this allows end-to-end training of odes within larger models . story_separator_special_tag lambert 's model for diffuse reflection is extensively used in computational vision . it is used explicitly by methods such as shape from shading and photometric stereo , and implicitly by methods such as binocular stereo and motion detection . for several real-world objects , the lambertian model can prove to be a very inaccurate approximation to the diffuse component . while the brightness of a lambertian surface is independent of viewing direction , the brightness of a rough diffuse surface increases as the viewer approaches the source direction . a comprehensive model is developed that predicts reflectance from rough diffuse surfaces . the model accounts for complex geometric and radiometric phenomena such as masking , shadowing , and interreflections between points on the surface . experiments have been conducted on real samples , such as plaster , clay , sand , and cloth . all these surfaces demonstrate significant deviation from lambertian behavior . the reflectance measurements obtained are in strong agreement with the reflectance predicted by the proposed model . the paper is concluded with a discussion on the implications of these results for machine vision . story_separator_special_tag recognition in uncontrolled situations is one of the most important bottlenecks for practical face recognition systems .
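the neural-ode abstract above parameterizes the derivative of the hidden state with a network and hands integration to a solver . a minimal sketch with a fixed-step euler integrator standing in for the paper 's black-box solver , and with random placeholder weights :

import numpy as np

rng = np.random.default_rng(0)
W_in, W_out = rng.normal(size=(8, 2)), rng.normal(size=(2, 8))  # placeholder weights

def f(h, t):
    # a tiny network defining dh/dt = f(h, t); t is unused in this toy example
    return W_out @ np.tanh(W_in @ h)

def odeint_euler(f, h0, t0=0.0, t1=1.0, steps=100):
    # fixed-step euler integration; the paper uses adaptive black-box solvers
    h, dt = h0, (t1 - t0) / steps
    for k in range(steps):
        h = h + dt * f(h, t0 + k * dt)
    return h

print(odeint_euler(f, np.array([1.0, 0.0])))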
we address this by combining the strengths of robust illumination normalization , local texture based face representations and distance transform based matching metrics . specifically , we make three main contributions : ( i ) we present a simple and efficient preprocessing chain that eliminates most of the effects of changing illumination while still preserving the essential appearance details that are needed for recognition ; ( ii ) we introduce local ternary patterns ( ltp ) , a generalization of the local binary pattern ( lbp ) local texture descriptor that is more discriminant and less sensitive to noise in uniform regions ; and ( iii ) we show that replacing local histogramming with a local distance transform based similarity metric further improves the performance of lbp/ltp based face recognition . the resulting method gives state-of-the-art performance on three popular datasets chosen to test recognition under difficult illumination conditions : face recognition grand challenge version 1 experiment 4 , extended yale-b , and cmu pie . story_separator_special_tag the paper introduces contrast enhancement methods , mainly contrast limited adaptive histogram equalization ( clahe ) , which enhances the dynamic range by limiting the height of the local histogram and thereby limits noise amplification . story_separator_special_tag the presence of highlights , which in dielectric inhomogeneous objects are linear combinations of specular and diffuse reflection components , is inevitable . a number of methods have been developed to separate these reflection components . to our knowledge , all methods that use a single input image require explicit color segmentation to deal with multicolored surfaces . unfortunately , for complex textured images , current color segmentation algorithms are still problematic to segment correctly . consequently , a method without explicit color segmentation becomes indispensable , and this paper presents such a method . the method is based solely on colors , particularly chromaticity , without requiring any geometrical parameter information . one of the basic ideas is to compare the intensity logarithmic differentiation of specular-free images and input images iteratively . the specular-free image is a pseudo-code of diffuse components that can be generated by shifting a pixel 's intensity and chromaticity nonlinearly while retaining its hue . all processes in the method are done locally , involving a maximum of only two pixels . the experimental results on natural images show that the proposed method is accurate and robust under known scene illumination chromaticity . unlike the existing methods that use a single image , our method is effective for textured objects with complex multicolored scenes . story_separator_special_tag presents a theoretically very simple , yet efficient , multiresolution approach to gray-scale and rotation invariant texture classification based on local binary patterns and nonparametric discrimination of sample and prototype distributions . the method is based on recognizing that certain local binary patterns , termed `` uniform , '' are fundamental properties of local image texture and their occurrence histogram is proven to be a very powerful texture feature . we derive a generalized gray-scale and rotation invariant operator presentation that allows for detecting the `` uniform '' patterns for any quantization of the angular space and for any spatial resolution and presents a method for combining multiple operators for multiresolution analysis .
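the clahe abstract above is terse ; the method itself is available off the shelf , e.g . in opencv . a short sketch , assuming an 8-bit grayscale image at a made-up path :

import cv2

gray = cv2.imread("input.png", cv2.IMREAD_GRAYSCALE)        # path assumed
# the clip limit caps each local histogram bin , which limits noise amplification ;
# equalization runs per tile on an 8x8 grid and is bilinearly interpolated
clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
equalized = clahe.apply(gray)
cv2.imwrite("clahe.png", equalized)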
the proposed approach is very robust in terms of gray-scale variations since the operator is , by definition , invariant against any monotonic transformation of the gray scale . another advantage is computational simplicity as the operator can be realized with a few operations in a small neighborhood and a lookup table . experimental results demonstrate that good discrimination can be achieved with the occurrence statistics of simple rotation invariant local binary patterns . story_separator_special_tag this paper presents a novel and efficient facial image representation based on local binary pattern ( lbp ) texture features . the face image is divided into several regions from which the lbp feature distributions are extracted and concatenated into an enhanced feature vector to be used as a face descriptor . the performance of the proposed method is assessed in the face recognition problem under different challenges . other applications and several extensions are also discussed story_separator_special_tag effective and real-time face detection has been made possible by using the method of rectangle haar-like features with adaboost learning since viola and jones ' work [ 12 ] . in this paper , we present the use of a new set of distinctive rectangle features , called multi-block local binary patterns ( mb-lbp ) , for face detection . the mb-lbp encodes rectangular regions ' intensities by local binary pattern operator , and the resulting binary patterns can describe diverse local structures of images . based on the mb-lbp features , a boosting-based learning method is developed to achieve the goal of face detection . to deal with the non-metric feature value of mb-lbp features , the boosting algorithm uses multibranch regression tree as its weak classifiers . the experiments show the weak classifiers based on mb-lbp are more discriminative than haar-like features and original lbp features . given the same number of features , the proposed face detector illustrates 15 % higher correct rate at a given false alarm rate of 0.001 than haar-like feature and 8 % higher than original lbp feature . this indicates that mb-lbp features can capture more information about the image structure and story_separator_special_tag dynamic texture ( dt ) is an extension of texture to the temporal domain . description and recognition of dts have attracted growing attention . in this paper , a novel approach for recognizing dts is proposed and its simplifications and extensions to facial image analysis are also considered . first , the textures are modeled with volume local binary patterns ( vlbp ) , which are an extension of the lbp operator widely used in ordinary texture analysis , combining motion and appearance . to make the approach computationally simple and easy to extend , only the co-occurrences of the local binary patterns on three orthogonal planes ( lbp-top ) are then considered . a block-based method is also proposed to deal with specific dynamic events such as facial expressions in which local information and its spatial locations should also be taken into account . in experiments with two dt databases , dyntex and massachusetts institute of technology ( mit ) , both the vlbp and lbp-top clearly outperformed the earlier approaches . the proposed block-based method was evaluated with the cohn-kanade facial expression database with excellent results . 
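the lbp abstracts above all build on the same 3x3 operator : threshold the eight neighbors at the center value and read off an 8-bit code . a minimal numpy sketch ( plain lbp , without the `` uniform '' or rotation-invariant mappings ) :

import numpy as np

def lbp_8neighbor(img):
    # compares each pixel's 8 neighbors with the center and packs the bits
    c = img[1:-1, 1:-1]
    shifts = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros_like(c, dtype=np.uint8)
    h, w = img.shape
    for bit, (dr, dc) in enumerate(shifts):
        n = img[1 + dr:h - 1 + dr, 1 + dc:w - 1 + dc]
        code += (n >= c).astype(np.uint8) * np.uint8(1 << bit)
    return code

img = np.random.default_rng(0).integers(0, 256, (6, 6))
hist = np.bincount(lbp_8neighbor(img).ravel(), minlength=256)  # the texture feature
print(hist.sum())  # equals the number of interior pixels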
the advantages of our approach include local processing , robustness story_separator_special_tag image content based retrieval is emerging as an important research area with application to digital libraries and multimedia databases . the focus of this paper is on the image processing aspects and in particular using texture information for browsing and retrieval of large image data . we propose the use of gabor wavelet features for texture analysis and provide a comprehensive experimental evaluation . comparisons with other multiresolution texture features using the brodatz texture database indicate that the gabor features provide the best pattern retrieval accuracy . an application to browsing large air photos is illustrated . story_separator_special_tag we study the question of feature sets for robust visual object recognition ; adopting linear svm based human detection as a test case . after reviewing existing edge and gradient based descriptors , we show experimentally that grids of histograms of oriented gradient ( hog ) descriptors significantly outperform existing feature sets for human detection . we study the influence of each stage of the computation on performance , concluding that fine-scale gradients , fine orientation binning , relatively coarse spatial binning , and high-quality local contrast normalization in overlapping descriptor blocks are all important for good results . the new approach gives near-perfect separation on the original mit pedestrian database , so we introduce a more challenging dataset containing over 1800 annotated human images with a large range of pose variations and backgrounds . story_separator_special_tag large scale nonlinear support vector machines ( svms ) can be approximated by linear ones using a suitable feature map . the linear svms are in general much faster to learn and evaluate ( test ) than the original nonlinear svms . this work introduces explicit feature maps for the additive class of kernels , such as the intersection , hellinger 's , and χ2 kernels , commonly used in computer vision , and enables their use in large scale problems . in particular , we : 1 ) provide explicit feature maps for all additive homogeneous kernels along with closed form expressions for all common kernels ; 2 ) derive corresponding approximate finite-dimensional feature maps based on a spectral analysis ; and 3 ) quantify the error of the approximation , showing that the error is independent of the data dimension and decays exponentially fast with the approximation order for selected kernels such as χ2 . we demonstrate that the approximations have indistinguishable performance from the full kernels yet greatly reduce the train/test times of svms . we also compare with two other approximation methods : nystrom 's approximation of perronnin et al . [ 1 ] , which is story_separator_special_tag the objective of this paper is to estimate 2d human pose as a spatial configuration of body parts in tv and movie video shots . such video material is uncontrolled and extremely challenging . we propose an approach that progressively reduces the search space for body parts , to greatly improve the chances that pose estimation will succeed . this involves two contributions : ( i ) a generic detector using a weak model of pose to substantially reduce the full pose search space ; and ( ii ) employing 'grabcut ' initialized on detected regions proposed by the weak model , to further prune the search space .
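the hog abstract above identifies fine orientation binning , coarse spatial binning and overlapping block normalization as the key ingredients ; all of these are exposed as parameters in existing implementations . a short sketch using scikit-image ( the test image is a stand-in ) :

from skimage import data
from skimage.feature import hog

image = data.astronaut()[:, :, 0]   # built-in test image used as a stand-in
features = hog(
    image,
    orientations=9,                 # fine orientation binning
    pixels_per_cell=(8, 8),         # relatively coarse spatial binning
    cells_per_block=(2, 2),         # overlapping descriptor blocks ...
    block_norm="L2-Hys",            # ... with strong local contrast normalization
)
print(features.shape)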
moreover , we also propose ( iii ) an integrated spatio-temporal model covering multiple frames to refine pose estimates from individual frames , with inference using belief propagation . the method is fully automatic and self-initializing , and explains the spatio-temporal volume covered by a person moving in a shot , by soft-labeling every pixel as belonging to a particular body part or to the background . we demonstrate upper-body pose estimation by an extensive evaluation over 70000 frames from four episodes of the tv series buffy the vampire slayer story_separator_special_tag in this paper , we propose a new descriptor for texture classification that is robust to image blurring . the descriptor utilizes phase information computed locally in a window for every image position . the phases of the four low-frequency coefficients are decorrelated and uniformly quantized in an eight-dimensional space . a histogram of the resulting code words is created and used as a feature in texture classification . ideally , the low-frequency phase components are shown to be invariant to centrally symmetric blur . although this ideal invariance is not completely achieved due to the finite window size , the method is still highly insensitive to blur . because only phase information is used , the method is also invariant to uniform illumination changes . according to our experiments , the classification accuracy of blurred texture images is much higher with the new method than with the well-known lbp or gabor filter bank methods . interestingly , it is also slightly better for textures that are not blurred . story_separator_special_tag this paper presents a method for recognizing scene categories based on approximate global geometric correspondence . this technique works by partitioning the image into increasingly fine sub-regions and computing histograms of local features found inside each sub-region . the resulting `` spatial pyramid '' is a simple and computationally efficient extension of an orderless bag-of-features image representation , and it shows significantly improved performance on challenging scene categorization tasks . specifically , our proposed method exceeds the state of the art on the caltech-101 database and achieves high accuracy on a large database of fifteen natural scene categories . the spatial pyramid framework also offers insights into the success of several recently proposed image descriptions , including torralba 's `` gist '' and lowe 's sift descriptors . story_separator_special_tag in daily life , we can see images of real-life objects on posters , television , or virtually any type of smooth physical surfaces . we seldom confuse these images with the objects per se mainly with the help of the contextual information from the surrounding environment and nearby objects . without this contextual information , distinguishing an object from an image of the object becomes subtle ; it is precisely an effect that a large immersive display aims at achieving . in this work , we study and address a problem that mirrors the above-mentioned recognition problem , i.e. , distinguishing images of true natural scenes and those from recapturing . being able to detect recaptured images , robot vision can be more intelligent and a single-image-based counter-measure for re-broadcast attack on a face authentication system becomes feasible . this work is timely as the face authentication system is getting common on consumer mobile devices such as smart phones and laptop computers .
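the spatial pyramid abstract above concatenates per-cell histograms over increasingly fine grids . a minimal numpy sketch over a map of quantized visual words ( the vocabulary and the per-level weighting of the original method are assumed away ) :

import numpy as np

def spatial_pyramid(words, vocab_size, levels=(1, 2, 4)):
    # words : 2d array of visual-word indices standing in for quantized local features
    feats = []
    h, w = words.shape
    for g in levels:                       # 1x1, 2x2 and 4x4 partitions
        for i in range(g):
            for j in range(g):
                cell = words[i * h // g:(i + 1) * h // g, j * w // g:(j + 1) * w // g]
                feats.append(np.bincount(cell.ravel(), minlength=vocab_size))
    v = np.concatenate(feats).astype(float)
    return v / max(v.sum(), 1.0)           # level weights omitted for brevity

words = np.random.default_rng(0).integers(0, 50, (64, 64))
print(spatial_pyramid(words, 50).shape)    # (1 + 4 + 16) * 50 bins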
in this work , we present a physical model for image recapturing and the features derived from the model are used in a recaptured image detector . our physics-based method out-performs a statistics-based method by a story_separator_special_tag we present a no-reference blur metric for images and video . the blur metric is based on the analysis of the spread of the edges in an image . its perceptual significance is validated through subjective experiments . the novel metric is near real-time , has low computational complexity and is shown to perform well over a range of image content . potential applications include optimization of source coding , network resource management and autofocus of an image capturing device . story_separator_special_tag in general , digital images can be classified into photographs and computer graphics . this taxonomy is very useful in many applications , such as web image search . however , there are no effective methods to perform this classification automatically . in this paper , we manage to solve this problem from two aspects . first , we propose some novel low-level features that can reveal perceptual differences between photographs and graphics . then , we adopt an effective algorithm to perform the classification . the experiments conducted on a large-scale image database indicate the effectiveness of our algorithm . story_separator_special_tag multimodal biometric systems consolidate the evidence presented by multiple biometric sources and typically provide better recognition performance compared to systems based on a single biometric modality . although information fusion in a multimodal system can be performed at various levels , integration at the matching score level is the most common approach due to the ease in accessing and combining the scores generated by different matchers . since the matching scores output by the various modalities are heterogeneous , score normalization is needed to transform these scores into a common domain , prior to combining them . in this paper , we have studied the performance of different normalization techniques and fusion rules in the context of a multimodal biometric system based on the face , fingerprint and hand-geometry traits of a user . experiments conducted on a database of 100 users indicate that the application of min-max , z-score , and tanh normalization schemes followed by a simple sum of scores fusion method results in better recognition performance compared to other methods . however , experiments also reveal that the min-max and z-score normalization techniques are sensitive to outliers in the data , highlighting the need for a robust story_separator_special_tag
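the score-normalization abstract above compares min-max and z-score normalization followed by sum-rule fusion ; both normalizations are one-liners . a minimal numpy sketch with made-up matcher scores :

import numpy as np

def min_max(s):
    # maps scores to [0, 1]; sensitive to outliers, as the abstract notes
    return (s - s.min()) / (s.max() - s.min())

def z_score(s):
    # centers at the mean and scales by the standard deviation
    return (s - s.mean()) / s.std()

face = np.array([0.2, 0.9, 0.4])            # made-up face-matcher scores
finger = np.array([120.0, 300.0, 180.0])    # heterogeneous fingerprint scores
fused = min_max(face) + min_max(finger)     # simple sum-of-scores fusion
print(fused)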
pushed by big data and deep convolutional neural networks ( cnns ) , the performance of face recognition is becoming comparable to human . using private large scale training datasets , several groups achieve very high performance on lfw , i.e. , 97 % to 99 % . while there are many open source implementations of cnns , no large scale face dataset is publicly available . the current situation in the field of face recognition is that data is more important than algorithms . to solve this problem , this paper proposes a semi-automatic way to collect face images from the internet and builds a large scale dataset containing about 10,000 subjects and 500,000 images , called casia-webface . based on the database , we use an 11-layer cnn to learn discriminative representation and obtain state-of-the-art accuracy on lfw and ytf . story_separator_special_tag recent work has shown that convolutional networks can be substantially deeper , more accurate , and efficient to train if they contain shorter connections between layers close to the input and those close to the output . in this paper , we embrace this observation and introduce the dense convolutional network ( densenet ) , which connects each layer to every other layer in a feed-forward fashion . whereas traditional convolutional networks with l layers have l connections , one between each layer and its subsequent layer , our network has l ( l+1 ) /2 direct connections . for each layer , the feature-maps of all preceding layers are used as inputs , and its own feature-maps are used as inputs into all subsequent layers . densenets have several compelling advantages : they alleviate the vanishing-gradient problem , strengthen feature propagation , encourage feature reuse , and substantially reduce the number of parameters . we evaluate our proposed architecture on four highly competitive object recognition benchmark tasks ( cifar-10 , cifar-100 , svhn , and imagenet ) . densenets obtain significant improvements over the state-of-the-art on most of them , whilst requiring less story_separator_special_tag our goal is to reveal temporal variations in videos that are difficult or impossible to see with the naked eye and display them in an indicative manner . our method , which we call eulerian video magnification , takes a standard video sequence as input , and applies spatial decomposition , followed by temporal filtering to the frames . the resulting signal is then amplified to reveal hidden information . using our method , we are able to visualize the flow of blood as it fills the face and also to amplify and reveal small motions . our technique can run in real time to show phenomena occurring at the temporal frequencies selected by the user . story_separator_special_tag system theoretic approaches to action recognition model the dynamics of a scene with linear dynamical systems ( ldss ) and perform classification using metrics on the space of ldss , e.g . binet-cauchy kernels . however , such approaches are only applicable to time series data living in a euclidean space , e.g .
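the eulerian video magnification abstract above amplifies a temporally band-passed signal . a much-simplified numpy/scipy sketch that filters raw pixels over time ( a real implementation filters each level of a spatial pyramid , and the band and gain here are assumed ) :

import numpy as np
from scipy.signal import butter, filtfilt

def magnify(frames, fps, lo=0.8, hi=3.0, alpha=50.0):
    # frames : (t, h, w) grayscale video ; lo/hi in hz select e.g. a pulse band
    b, a = butter(2, [lo / (fps / 2), hi / (fps / 2)], btype="band")
    band = filtfilt(b, a, frames, axis=0)   # temporal bandpass per pixel
    return frames + alpha * band            # amplify and add back

video = np.random.default_rng(0).random((120, 16, 16))
print(magnify(video, fps=30.0).shape)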
joint trajectories extracted from motion capture data or feature point trajectories extracted from video . much of the success of recent object recognition techniques relies on the use of more complex feature descriptors , such as sift descriptors or hog descriptors , which are essentially histograms . since histograms live in a non-euclidean space , we can no longer model their temporal evolution with ldss , nor can we classify them using a metric for ldss . in this paper , we propose to represent each frame of a video using a histogram of oriented optical flow ( hoof ) and to recognize human actions by classifying hoof time-series . for this purpose , we propose a generalization of the binet-cauchy kernels to nonlinear dynamical systems ( nlds ) whose output lives in a non-euclidean space , e.g . the space of histograms . this can story_separator_special_tag in this paper , a fast dct-based algorithm is proposed to efficiently locate text captions embedded on specific areas in a video sequence through visual rhythm , which can be quickly constructed by sampling certain portions of a dc image sequence and temporally accumulating the samples along time . our proposed approach is based on the observations that the text captions carrying important information suitable for indexing often appear on specific areas on video frames , from where sampling strategies are derived for a visual rhythm . our method then uses a combination of contrast and temporal coherence information on the visual rhythm to detect text frames such that each detected text frame represents consecutive frames containing identical text strings , thus significantly reducing the number of text frames needed to be examined for text localization from a video sequence . it then utilizes several important properties of text captions to locate the text caption from the detected frames . story_separator_special_tag the visual rhythm is a simplification of the video content represented by a 2d image . in this work , the video segmentation problem is transformed into a problem of pattern detection , where each video effect is transformed into a different pattern on the visual rhythm . to detect sharp video transitions ( cuts ) we use topological and morphological tools instead of using a dissimilarity measure . thus , we propose a method to detect sharp video transitions between two consecutive shots . we present a comparative analysis of our method with respect to some other methods . we also propose a variant of this method to detect the position of flashes in a video . story_separator_special_tag texture is one of the important characteristics used in identifying objects or regions of interest in an image , whether the image be a photomicrograph , an aerial photograph , or a satellite image . this paper describes some easily computable textural features based on gray-tone spatial dependencies , and illustrates their application in category-identification tasks of three different kinds of image data : photomicrographs of five kinds of sandstones , 1:20 000 panchromatic aerial photographs of eight land-use categories , and earth resources technology satellite ( erts ) multispectral imagery containing seven land-use categories . we use two kinds of decision rules : one for which the decision regions are convex polyhedra ( a piecewise linear decision rule ) , and one for which the decision regions are rectangular parallelepipeds ( a min-max decision rule ) .
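the hoof abstract above represents each frame by a magnitude-weighted histogram of flow directions , normalized to sum to one so that frames live on the probability simplex . a minimal numpy sketch ( the original paper 's left-right symmetric binning is omitted ) :

import numpy as np

def hoof(flow, bins=8):
    # flow : (h, w, 2) optical flow field ; each vector votes into an angle bin
    ang = np.arctan2(flow[..., 1], flow[..., 0])            # in (-pi, pi]
    mag = np.linalg.norm(flow, axis=-1)
    idx = ((ang + np.pi) / (2 * np.pi) * bins).astype(int) % bins
    h = np.bincount(idx.ravel(), weights=mag.ravel(), minlength=bins)
    return h / max(h.sum(), 1e-12)

flow = np.random.default_rng(0).normal(size=(32, 32, 2))
print(hoof(flow))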
in each experiment the data set was divided into two parts , a training set and a test set . test set identification accuracy is 89 percent for the photomicrographs , 82 percent for the aerial photographic imagery , and 83 percent for the satellite imagery . these results indicate that the easily computable textural features probably have a general applicability for story_separator_special_tag we describe an approach to object and scene retrieval which searches for and localizes all the occurrences of a user-outlined object in a video . the object is represented by a set of viewpoint invariant region descriptors so that recognition can proceed successfully despite changes in viewpoint , illumination and partial occlusion . the temporal continuity of the video within a shot is used to track the regions in order to reject unstable regions and reduce the effects of noise in the descriptors . the analogy with text retrieval is in the implementation where matches on descriptors are pre-computed ( using vector quantization ) , and inverted file systems and document rankings are used . the result is that retrieval is immediate , returning a ranked list of key frames/shots in the manner of google . the method is illustrated for matching in two full length feature films . story_separator_special_tag the decomposition of experimental data into dynamic modes using a data-based algorithm is applied to schlieren snapshots of a helium jet and to time-resolved piv-measurements of an unforced and harmonically forced jet . the algorithm relies on the reconstruction of a low-dimensional inter-snapshot map from the available flow field data . the spectral decomposition of this map results in an eigenvalue and eigenvector representation ( referred to as dynamic modes ) of the underlying fluid behavior contained in the processed flow fields . this dynamic mode decomposition allows the breakdown of a fluid process into dynamically relevant and coherent structures and thus aids in the characterization and quantification of physical mechanisms in fluid flow . story_separator_special_tag deformable model fitting has been actively pursued in the computer vision community for over a decade . as a result , numerous approaches have been proposed with varying degrees of success . a class of approaches that has shown substantial promise is one that makes independent predictions regarding locations of the model 's landmarks , which are combined by enforcing a prior over their joint motion . a common theme in innovations to this approach is the replacement of the distribution of probable landmark locations , obtained from each local detector , with simpler parametric forms . in this work , a principled optimization strategy is proposed where nonparametric representations of these likelihoods are maximized within a hierarchy of smoothed estimates . the resulting update equations are reminiscent of mean-shift over the landmarks but with regularization imposed through a global prior over their joint motion . extensions to handle partial occlusions and reduce computational complexity are also presented . through numerical experiments , this approach is shown to outperform some common existing methods on the task of generic face fitting . story_separator_special_tag large-pose face alignment is a very challenging problem in computer vision , which is used as a prerequisite for many important vision tasks , e.g. , face recognition and 3d face reconstruction .
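the dynamic mode decomposition abstract above reconstructs a low-dimensional linear inter-snapshot map and eigendecomposes it ; the standard svd-based formulation fits in a few lines . a minimal numpy sketch with random stand-in snapshots :

import numpy as np

def dmd(X, rank):
    # X : (n, m) snapshot matrix, one flow-field snapshot per column
    X1, X2 = X[:, :-1], X[:, 1:]
    U, s, Vh = np.linalg.svd(X1, full_matrices=False)
    U, s, Vh = U[:, :rank], s[:rank], Vh[:rank]
    A_tilde = U.conj().T @ X2 @ Vh.conj().T / s   # projected linear map
    eigvals, W = np.linalg.eig(A_tilde)           # dynamic eigenvalues
    modes = X2 @ Vh.conj().T / s @ W              # (exact) dynamic modes
    return eigvals, modes

snapshots = np.random.default_rng(0).random((64, 20))
vals, modes = dmd(snapshots, rank=5)
print(vals.shape, modes.shape)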
recently , there have been a few attempts to solve this problem , but still more research is needed to achieve highly accurate results . in this paper , we propose a face alignment method for large-pose face images , by combining the powerful cascaded cnn regressor method and 3dmm . we formulate the face alignment as a 3dmm fitting problem , where the camera projection matrix and 3d shape parameters are estimated by a cascade of cnn-based regressors . the dense 3d shape allows us to design pose-invariant appearance features for effective cnn learning . extensive experiments are conducted on the challenging databases ( aflw and afw ) , with comparison to the state of the art . story_separator_special_tag pose-invariant face alignment is a very challenging problem in computer vision , which is used as a prerequisite for many facial analysis tasks , e.g. , face recognition , expression recognition , and 3d face reconstruction . recently , there have been a few attempts to tackle this problem , but still more research is needed to achieve higher accuracy . in this paper , we propose a face alignment method that aligns an image with arbitrary poses , by combining the powerful cascaded cnn regressors , 3d morphable model ( 3dmm ) , and mirrorability constraint . the core of our proposed method is a novel 3dmm fitting algorithm , where the camera projection matrix parameters and 3d shape parameters are estimated by a cascade of cnn-based regressors . furthermore , we impose the mirrorability constraint during the cnn learning by employing a novel loss function inside the siamese network . the dense 3d shape enables us to design pose-invariant appearance features for effective cnn learning . extensive experiments are conducted on the challenging large-pose face databases ( aflw and afw ) , with comparison to the state of the art . story_separator_special_tag we propose a straightforward method that simultaneously reconstructs the 3d facial structure and provides dense alignment . to achieve this , we design a 2d representation called uv position map which records the 3d shape of a complete face in uv space , then train a simple convolutional neural network to regress it from a single 2d image . we also integrate a weight mask into the loss function during training to improve the performance of the network . our method does not rely on any prior face model , and can reconstruct full facial geometry along with semantic meaning . meanwhile , our network is very light-weight and takes only 9.8 ms to process an image , which is far faster than previous works . experiments on multiple challenging datasets show that our method surpasses other state-of-the-art methods on both reconstruction and alignment tasks by a large margin . story_separator_special_tag convolutional networks are powerful visual models that yield hierarchies of features . we show that convolutional networks by themselves , trained end-to-end , pixels-to-pixels , exceed the state-of-the-art in semantic segmentation . our key insight is to build `` fully convolutional '' networks that take input of arbitrary size and produce correspondingly-sized output with efficient inference and learning . we define and detail the space of fully convolutional networks , explain their application to spatially dense prediction tasks , and draw connections to prior models .
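the uv position map abstract above mentions a weight mask folded into the training loss ; in form this is just a spatially weighted mean squared error . a minimal numpy sketch ( the actual weights used by the paper are not reproduced here ) :

import numpy as np

def weighted_uv_loss(pred, gt, mask):
    # pred, gt : (h, w, 3) uv position maps ; mask : (h, w) per-pixel weights
    return float(np.mean(mask[..., None] * (pred - gt) ** 2))

rng = np.random.default_rng(0)
pred, gt = rng.random((256, 256, 3)), rng.random((256, 256, 3))
mask = np.ones((256, 256))
mask[96:160, 64:192] = 4.0   # assumed up-weighting of a central face region
print(weighted_uv_loss(pred, gt, mask))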
we adapt contemporary classification networks ( alexnet , the vgg net , and googlenet ) into fully convolutional networks and transfer their learned representations by fine-tuning to the segmentation task . we then define a novel architecture that combines semantic information from a deep , coarse layer with appearance information from a shallow , fine layer to produce accurate and detailed segmentations . our fully convolutional network achieves state-of-the-art segmentation of pascal voc ( 20 % relative improvement to 62.2 % mean iu on 2012 ) , nyudv2 , and sift flow , while inference takes one third of a second for a typical image . story_separator_special_tag this paper presents a method for face recognition across variations in pose , ranging from frontal to profile views , and across a wide range of illuminations , including cast shadows and specular reflections . to account for these variations , the algorithm simulates the process of image formation in 3d space , using computer graphics , and it estimates 3d shape and texture of faces from single images . the estimate is achieved by fitting a statistical , morphable model of 3d faces to images . the model is learned from a set of textured 3d scans of heads . we describe the construction of the morphable model , an algorithm to fit the model to images , and a framework for face identification . in this framework , faces are represented by model parameters for 3d shape and texture . we present results obtained with 4,488 images from the publicly available cmu-pie database and 1,940 images from the feret database . story_separator_special_tag a shadow generation method and apparatus that employs a depth buffer technique to increase the speed of calculation of visible shadows . the system employs pipelined processors to determine visible objects and shadows generated by those objects for one or more light sources . the technique determines whether a shadow exists at a given pixel by evaluating the parity of the number of intersections between shadow polygons and a line of sight extending from the viewpoint . pipeline processing is introduced to speed the process to result in rapid evaluation of a large number of objects and associated shadows . an alternate embodiment is presented which retains many of the speed advantages but allows the use of processors other than pipelined processors . determination of the effect of a shadow on a given point is further speeded by indexing the shadow effect resulting in a quantized shadow correction value that reduces the processing requirements . story_separator_special_tag a common technique to by-pass 2-d face recognition systems is to use photographs of spoofed identities . unfortunately , research in counter-measures to this type of attack has not kept up ; even if such threats have been known for nearly a decade , there seems to exist no consensus on best practices , techniques or protocols for developing and testing spoofing-detectors for face recognition . we attribute the reason for this delay , partly , to the unavailability of public databases and protocols to study solutions and compare results . to this purpose we introduce the publicly available print-attack database and exemplify how to use its companion protocol with a motion-based algorithm that detects correlations between the person 's head movements and the scene context . the results are to be used as basis for comparison to other counter-measure techniques .
the print-attack database contains 200 videos of real-accesses and 200 videos of spoof attempts using printed photographs of 50 different identities . story_separator_special_tag we present a comprehensive performance study of multiple appearance-based face recognition methodologies , on visible and thermal infrared imagery . we compare algorithms within the same imaging modality as well as between them . both identification and verification scenarios are considered , and appropriate performance statistics reported for each case . our experimental design is aimed at gaining full understanding of algorithm performance under varying conditions , and is based on monte carlo analysis of performance measures . this analysis reveals that under many circumstances , using thermal infrared imagery yields higher performance , while in other cases performance in both modalities is equivalent . performance increases further when algorithms on visible and thermal infrared imagery are fused . our study also provides a partial explanation for the multiple contradictory claims in the literature regarding performance of various algorithms on visible data sets . story_separator_special_tag this paper presents a method for extracting distinctive invariant features from images that can be used to perform reliable matching between different views of an object or scene . the features are invariant to image scale and rotation , and are shown to provide robust matching across a substantial range of affine distortion , change in 3d viewpoint , addition of noise , and change in illumination . the features are highly distinctive , in the sense that a single feature can be correctly matched with high probability against a large database of features from many images . this paper also describes an approach to using these features for object recognition . the recognition proceeds by matching individual features to a database of features from known objects using a fast nearest-neighbor algorithm , followed by a hough transform to identify clusters belonging to a single object , and finally performing verification through least-squares solution for consistent pose parameters . this approach to recognition can robustly identify objects among clutter and occlusion while achieving near real-time performance . story_separator_special_tag spoofing identities using photographs is one of the most common techniques to attack 2-d face recognition systems . there seem to exist no comparative studies of different techniques using the same protocols and data . the motivation behind this competition is to compare the performance of different state-of-the-art algorithms on the same database using a unique evaluation method . six different teams from universities around the world have participated in the contest . use of one or multiple techniques from motion , texture analysis and liveness detection appears to be the common trend in this competition . most of the algorithms are able to clearly separate spoof attempts from real accesses . the results suggest the investigation of more complex attacks . story_separator_special_tag as a crucial security problem , anti-spoofing in biometrics , and particularly for the face modality , has achieved great progress in the recent years . still , new threats arrive in the form of better , more realistic and more sophisticated spoofing attacks .
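the sift abstract above relies on matching distinctive descriptors with a fast nearest-neighbor search ; the usual recipe pairs this with lowe 's ratio test . a short opencv sketch ( image paths assumed ; requires opencv >= 4.4 for sift ) :

import cv2

img1 = cv2.imread("view1.png", cv2.IMREAD_GRAYSCALE)   # paths assumed
img2 = cv2.imread("view2.png", cv2.IMREAD_GRAYSCALE)

sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(img1, None)
kp2, des2 = sift.detectAndCompute(img2, None)

# ratio test : keep a match only when it clearly beats the second-best candidate
matcher = cv2.BFMatcher()
good = [m for m, n in matcher.knnMatch(des1, des2, k=2)
        if m.distance < 0.75 * n.distance]
print(len(good))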
the objective of the 2nd competition on counter measures to 2d face spoofing attacks is to challenge researchers to create counter measures effectively detecting a variety of attacks . the submitted propositions are evaluated on the replay-attack database and the achieved results are presented in this paper . story_separator_special_tag liveness detection is an indispensable guarantee for reliable face recognition , which has recently received enormous attention . in this paper we propose three scenic clues , which are non-rigid motion , face-background consistency and image banding effect , to conduct accurate and efficient face liveness detection . the non-rigid motion clue captures the facial motions that a genuine face can exhibit , such as blinking , and a low rank matrix decomposition based image alignment approach is designed to extract this non-rigid motion . the face-background consistency clue assumes that the motion of face and background has high consistency for fake facial photos but low consistency for genuine faces , and this consistency can serve as an efficient liveness clue , which is explored by a gmm based motion detection method . the image banding effect reflects the imaging quality defects introduced in fake face reproduction , which can be detected by wavelet decomposition . by fusing these three clues , we thoroughly explore sufficient clues for liveness detection . the proposed face liveness detection method achieves 100 % accuracy on the idiap print-attack database and the best performance on a self-collected face anti-spoofing database . story_separator_special_tag this paper presents a novel discriminative learning technique for label sequences based on a combination of the two most successful learning algorithms , support vector machines and hidden markov models which we call hidden markov support vector machine . the proposed architecture handles dependencies between neighboring labels using viterbi decoding . in contrast to standard hmm training , the learning procedure is discriminative and is based on a maximum/soft margin criterion . compared to previous methods like conditional random fields , maximum entropy markov models and label sequence boosting , hm-svms have a number of advantages . most notably , it is possible to learn non-linear discriminant functions via kernel functions . at the same time , hm-svms share the key advantages with other discriminative methods , in particular the capability to deal with overlapping features . we report experimental evaluations on two tasks , named entity recognition and part-of-speech tagging , that demonstrate the competitiveness of the proposed approach . story_separator_special_tag in spite of their remarkable success in signal processing applications , it is now widely acknowledged that traditional wavelets are not very effective in dealing with multidimensional signals containing distributed discontinuities such as edges . to overcome this limitation , one has to use basis elements with much higher directional sensitivity and of various shapes , to be able to capture the intrinsic geometrical features of multidimensional phenomena . this paper introduces a new discrete multiscale directional representation called the discrete shearlet transform . this approach , which is based on the shearlet transform , combines the power of multiscale methods with a unique ability to capture the geometry of multidimensional data and is optimally efficient in representing images containing edges .
we describe two different methods of implementing the shearlet transform . the numerical experiments presented in this paper demonstrate that the discrete shearlet transform is very competitive in denoising applications both in terms of performance and computational efficiency . story_separator_special_tag image and video quality measurements are crucial for many applications , such as acquisition , compression , transmission , enhancement , and reproduction . nowadays , no-reference ( nr ) image quality assessment ( iqa ) methods have drawn extensive attention because they do not rely on any information of the original images . however , most of the conventional nr-iqa methods are designed only for one or a set of predefined specific image distortion types , which are unlikely to generalize for evaluating images/videos distorted with other types of distortions . in order to estimate a wide range of image distortions , in this paper , we present an efficient general-purpose nr-iqa algorithm which is based on a new multiscale directional transform ( shearlet transform ) with a strong ability to localize distributed discontinuities . this is mainly based on the fact that distortion in a natural image leads to significant variation in the spread of discontinuities in all directions . thus , the statistical property of the distorted image is significantly different from that of natural images in fine scale shearlet coefficients , which are referred to as 'distorted parts ' . however , some 'natural parts ' are preserved in coarse scale shearlet story_separator_special_tag the focus of motion analysis has been on estimating a flow vector for every pixel by matching intensities . in my thesis , i will explore motion representations beyond the pixel level and new applications to which these representations lead . i first focus on analyzing motion from video sequences . traditional motion analysis suffers from the inappropriate modeling of the grouping relationship of pixels and from a lack of ground-truth data . using layers as the interface for humans to interact with videos , we build a human-assisted motion annotation system to obtain ground-truth motion , missing in the literature , for natural video sequences . furthermore , we show that with the layer representation , we can detect and magnify small motions to make them visible to human eyes . then we move to a contour representation to analyze the motion for textureless objects under occlusion . we demonstrate that simultaneous boundary grouping and motion analysis can solve challenging data , where the traditional pixel-wise motion analysis fails . in the second part of my thesis , i will show the benefits of matching local image structures instead of intensity values . we propose sift flow that establishes story_separator_special_tag most current speech recognition systems use hidden markov models ( hmms ) to deal with the temporal variability of speech and gaussian mixture models ( gmms ) to determine how well each state of each hmm fits a frame or a short window of frames of coefficients that represents the acoustic input . an alternative way to evaluate the fit is to use a feed-forward neural network that takes several frames of coefficients as input and produces posterior probabilities over hmm states as output . deep neural networks ( dnns ) that have many hidden layers and are trained using new methods have been shown to outperform gmms on a variety of speech recognition benchmarks , sometimes by a large margin .
this article provides an overview of this progress and represents the shared views of four research groups that have had recent successes in using dnns for acoustic modeling in speech recognition . story_separator_special_tag the ability of learning networks to generalize can be greatly enhanced by providing constraints from the task domain . this paper demonstrates how such constraints can be integrated into a backpropagation network through the architecture of the network . this approach has been successfully applied to the recognition of handwritten zip code digits provided by the u.s. postal service . a single network learns the entire recognition operation , going from the normalized image of the character to the final classification . story_separator_special_tag deep and recurrent neural networks ( dnns and rnns respectively ) are powerful models that were considered to be almost impossible to train using stochastic gradient descent with momentum . in this paper , we show that when stochastic gradient descent with momentum uses a well-designed random initialization and a particular type of slowly increasing schedule for the momentum parameter , it can train both dnns and rnns ( on datasets with long-term dependencies ) to levels of performance that were previously achievable only with hessian-free optimization . we find that both the initialization and the momentum are crucial since poorly initialized networks can not be trained with momentum and well-initialized networks perform markedly worse when the momentum is absent or poorly tuned . our success training these models suggests that previous attempts to train deep and recurrent neural networks from random initializations have likely failed due to poor initialization schemes . furthermore , carefully tuned momentum methods suffice for dealing with the curvature issues in deep and recurrent network training objectives without the need for sophisticated second-order methods . story_separator_special_tag neural machine translation is a recently proposed approach to machine translation . unlike the traditional statistical machine translation , the neural machine translation aims at building a single neural network that can be jointly tuned to maximize the translation performance . the models proposed recently for neural machine translation often belong to a family of encoder-decoders and consist of an encoder that encodes a source sentence into a fixed-length vector from which a decoder generates a translation . in this paper , we conjecture that the use of a fixed-length vector is a bottleneck in improving the performance of this basic encoder-decoder architecture , and propose to extend this by allowing a model to automatically ( soft- ) search for parts of a source sentence that are relevant to predicting a target word , without having to form these parts as a hard segment explicitly . with this new approach , we achieve a translation performance comparable to the existing state-of-the-art phrase-based system on the task of english-to-french translation . furthermore , qualitative analysis reveals that the ( soft- ) alignments found by the model agree well with our intuition . story_separator_special_tag very deep convolutional networks have been central to the largest advances in image recognition performance in recent years . one example is the inception architecture that has been shown to achieve very good performance at relatively low computational cost .
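the momentum abstract above pairs careful initialization with a slowly increasing momentum coefficient . a minimal numpy sketch of classical momentum on a toy quadratic , with a simple ramp standing in for the paper 's schedule :

import numpy as np

def sgd_momentum(grad, w0, lr=0.01, steps=200, mu_max=0.95):
    w, v = w0.astype(float), np.zeros_like(w0, dtype=float)
    for t in range(steps):
        mu = min(mu_max, t / (t + 10.0))   # slowly increasing momentum (a stand-in)
        v = mu * v - lr * grad(w)          # classical momentum update
        w = w + v
    return w

grad = lambda w: 2.0 * w                   # gradient of ||w||^2
print(sgd_momentum(grad, np.array([5.0, -3.0])))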
recently , the introduction of residual connections in conjunction with a more traditional architecture has yielded state-of-the-art performance in the 2015 ilsvrc challenge ; its performance was similar to the latest generation inception-v3 network . this raises the question of whether there is any benefit in combining the inception architecture with residual connections . here we give clear empirical evidence that training with residual connections accelerates the training of inception networks significantly . there is also some evidence of residual inception networks outperforming similarly expensive inception networks without residual connections by a thin margin . we also present several new streamlined architectures for both residual and non-residual inception networks . these variations improve the single-frame recognition performance on the ilsvrc 2012 classification task significantly . we further demonstrate how proper activation scaling stabilizes the training of very wide residual inception networks . with an ensemble of three residual and one inception-v4 , we achieve 3.08 percent top-5 error on the story_separator_special_tag designing architectures for deep neural networks requires expert knowledge and substantial computation time . we propose a technique to accelerate architecture selection by learning an auxiliary hypernet that generates the weights of a main model conditioned on that model 's architecture . by comparing the relative validation performance of networks with hypernet-generated weights , we can effectively search over a wide range of architectures at the cost of a single training run . to facilitate this search , we develop a flexible mechanism based on memory read-writes that allows us to define a wide range of network connectivity patterns , with resnet , densenet , and fractalnet blocks as special cases . we validate our method ( smash ) on cifar-10 and cifar-100 , stl-10 , modelnet10 , and imagenet32x32 , achieving competitive performance with similarly-sized hand-designed networks . our code is available at this https url story_separator_special_tag neural networks are powerful and flexible models that work well for many difficult learning tasks in image , speech and natural language understanding . despite their success , neural networks are still hard to design . in this paper , we use a recurrent network to generate the model descriptions of neural networks and train this rnn with reinforcement learning to maximize the expected accuracy of the generated architectures on a validation set . on the cifar-10 dataset , our method , starting from scratch , can design a novel network architecture that rivals the best human-invented architecture in terms of test set accuracy . our cifar-10 model achieves a test error rate of 3.65 , which is 0.09 percent better and 1.05x faster than the previous state-of-the-art model that used a similar architectural scheme . on the penn treebank dataset , our model can compose a novel recurrent cell that outperforms the widely-used lstm cell , and other state-of-the-art baselines . our cell achieves a test set perplexity of 62.4 on the penn treebank , which is 3.6 perplexity better than the previous state-of-the-art model . the cell can also be transferred to the character language modeling task on story_separator_special_tag developing neural network image classification models often requires significant architecture engineering .
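editorial aside ( a sketch by the editor , not code from any of the papers above ) : the activation-scaling trick mentioned in the residual-inception paragraph earlier is commonly written as out = x + s * f ( x ) with a small constant s ; a toy numpy version , with the branch function standing in for an inception block :

import numpy as np

def scaled_residual_block(x, branch, scale=0.1):
    # scale the residual branch before adding it back to the shortcut ;
    # small scales are reported to stabilize very wide residual networks
    return x + scale * branch(x)

rng = np.random.default_rng(1)
w = rng.normal(0, 0.02, (64, 64))
branch = lambda x: np.maximum(x @ w, 0.0)  # hypothetical stand-in for an inception branch
y = scaled_residual_block(rng.normal(size=(8, 64)), branch)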
in this paper , we study a method to learn the model architectures directly on the dataset of interest . as this approach is expensive when the dataset is large , we propose to search for an architectural building block on a small dataset and then transfer the block to a larger dataset . the key contribution of this work is the design of a new search space ( which we call the `` nasnet search space '' ) which enables transferability . in our experiments , we search for the best convolutional layer ( or `` cell '' ) on the cifar-10 dataset and then apply this cell to the imagenet dataset by stacking together more copies of this cell , each with their own parameters to design a convolutional architecture , which we name a `` nasnet architecture '' . we also introduce a new regularization technique called scheduleddroppath that significantly improves generalization in the nasnet models . on cifar-10 itself , a nasnet found by our method achieves 2.4 % error rate , which is state-of-the-art . although the cell is not searched for directly story_separator_special_tag neural networks have proven effective at solving difficult problems but designing their architectures can be challenging , even for image classification problems alone . our goal is to minimize human participation , so we employ evolutionary algorithms to discover such networks automatically . despite significant computational requirements , we show that it is now possible to evolve models with accuracies within the range of those published in the last year . specifically , we employ simple evolutionary techniques at unprecedented scales to discover models for the cifar-10 and cifar-100 datasets , starting from trivial initial conditions and reaching accuracies of 94.6 % ( 95.6 % for ensemble ) and 77.0 % , respectively . to do this , we use novel and intuitive mutation operators that navigate large search spaces ; we stress that no human participation is required once evolution starts and that the output is a fully-trained model . throughout this work , we place special emphasis on the repeatability of results , the variability in the outcomes and the computational requirements . story_separator_special_tag the effort devoted to hand-crafting neural network image classifiers has motivated the use of architecture search to discover them automatically . although evolutionary algorithms have been repeatedly applied to neural network topologies , the image classifiers thus discovered have remained inferior to human-crafted ones . here , we evolve an image classifier amoebanet-a that surpasses hand-designs for the first time . to do this , we modify the tournament selection evolutionary algorithm by introducing an age property to favor the younger genotypes . matching size , amoebanet-a has comparable accuracy to current state-of-the-art imagenet models discovered with more complex architecture-search methods . scaled to larger size , amoebanet-a sets a new state-of-the-art 83.9 % top-1 / 96.6 % top-5 imagenet accuracy . in a controlled comparison against a well known reinforcement learning algorithm , we give evidence that evolution can obtain results faster with the same hardware , especially at the earlier stages of the search . this is relevant when fewer compute resources are available . evolution is , thus , a simple method to effectively discover high-quality architectures . story_separator_special_tag this paper addresses the scalability challenge of architecture search by formulating the task in a differentiable manner .
unlike conventional approaches of applying evolution or reinforcement learning over a discrete and non-differentiable search space , our method is based on the continuous relaxation of the architecture representation , allowing efficient search of the architecture using gradient descent . extensive experiments on cifar-10 , imagenet , penn treebank and wikitext-2 show that our algorithm excels in discovering high-performance convolutional architectures for image classification and recurrent architectures for language modeling , while being orders of magnitude faster than state-of-the-art non-differentiable techniques . our implementation has been made publicly available to facilitate further research on efficient architecture search algorithms . story_separator_special_tag differentiable architecture search ( darts ) provided a fast solution in finding effective network architectures , but suffered from large memory and computing overheads in jointly training a super-network and searching for an optimal architecture . in this paper , we present a novel approach , namely , partially-connected darts , by sampling a small part of the super-network to reduce the redundancy in exploring the network space , thereby performing a more efficient search without compromising the performance . in particular , we perform operation search in a subset of channels while bypassing the held out part in a shortcut . this strategy may suffer from an undesired inconsistency in selecting the edges of the super-net caused by sampling different channels . we alleviate it using edge normalization , which adds a new set of edge-level parameters to reduce uncertainty in search . thanks to the reduced memory cost , pc-darts can be trained with a larger batch size and , consequently , enjoys both faster speed and higher training stability . experimental results demonstrate the effectiveness of the proposed method . specifically , we achieve an error rate of 2.57 % on cifar10 with merely 0.1 gpu-days for architecture search story_separator_special_tag with the widespread popularity of electronic devices , the emergence of biometric technology has brought significant convenience to user authentication compared with traditional password and pattern unlocking . among many biological characteristics , the face is a universal and irreplaceable feature that does not need too much cooperation and can significantly improve the user 's experience at the same time . face recognition is one of the main advertised functions of electronic devices , and hence is well worth researching in computer vision . previous work in this field has focused on two directions : adapting the loss function to improve recognition accuracy in traditional deep convolutional neural networks ( resnet ) ; combining the latest loss function with a lightweight system ( mobilenet ) to reduce network size at the minimal expense of accuracy . but none of these has changed the network structure . with the development of automl , neural architecture search ( nas ) has shown excellent performance in the benchmark of image classification . in this paper , we integrate nas technology into face recognition to customize a more suitable network . we adopt the framework of neural architecture search , which trains child and controller story_separator_special_tag deep neural networks have achieved great success for video analysis and understanding . however , designing a high-performance neural architecture requires substantial efforts and expertise .
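to make the continuous relaxation described above concrete , here is a minimal editorial sketch ( hypothetical , and not the official darts implementation ) : each edge of the cell computes a softmax-weighted mixture of candidate operations , so the architecture weights alpha become continuous parameters trainable by gradient descent .

import numpy as np

def softmax(a):
    e = np.exp(a - a.max())
    return e / e.sum()

# hypothetical candidate operations on one edge of the cell
ops = [
    lambda x: x,                  # identity / skip connection
    lambda x: np.zeros_like(x),   # "zero" op , i.e. the edge is pruned
    lambda x: np.maximum(x, 0.0), # relu , a stand-in for conv + relu
]

alpha = np.zeros(len(ops))        # architecture parameters , one per op

def mixed_op(x, alpha):
    # continuous relaxation : the output is a convex combination of all ops ,
    # so gradients with respect to alpha exist and alpha can be learned
    w = softmax(alpha)
    return sum(wi * op(x) for wi, op in zip(w, ops))

y = mixed_op(np.array([-1.0, 2.0]), alpha)
# after the search , each edge is discretized to its argmax ( alpha ) operation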
in this paper , we make the first attempt to let an algorithm automatically design neural networks for video action recognition tasks . specifically , a spatio-temporal network is developed in a differentiable space modeled by a directed acyclic graph , thus a gradient-based strategy can be performed to search an optimal architecture . nonetheless , it is computationally expensive , since the computational burden to evaluate each architecture candidate is still heavy . to alleviate this issue , for the video input we introduce a temporal segment approach to reduce the computational cost without losing global video information . for the architecture , we explore an efficient search space by introducing pseudo 3d operators . experiments show that our architecture outperforms popular neural architectures , under the training from scratch protocol , on the challenging ucf101 dataset , surprisingly , with only around one percent of the parameters of its manually designed counterparts . story_separator_special_tag prevailing deep convolutional neural networks ( cnns ) for person re-identification ( reid ) are usually built upon resnet or vgg backbones , which were originally designed for classification . because reid is different from classification , the architecture should be modified accordingly . we propose to automatically search for a cnn architecture that is specifically suitable for the reid task . there are three aspects to be tackled . first , body structural information plays an important role in reid but it is not encoded in backbones . second , neural architecture search ( nas ) automates the process of architecture design without human effort , but no existing nas methods incorporate the structure information of input images . third , reid is essentially a retrieval task but current nas algorithms are merely designed for classification . to solve these problems , we propose a retrieval-based search algorithm over a specifically designed reid search space , named auto-reid . our auto-reid enables the automated approach to find an efficient and effective cnn architecture for reid . extensive experiments demonstrate that the searched architecture achieves state-of-the-art performance while reducing parameters by 50 % and flops by 53 % compared to others . story_separator_special_tag current state-of-the-art convolutional architectures for object detection are manually designed . here we aim to learn a better architecture of feature pyramid network for object detection . we adopt neural architecture search and discover a new feature pyramid architecture in a novel scalable search space covering all cross-scale connections . the discovered architecture , named nas-fpn , consists of a combination of top-down and bottom-up connections to fuse features across scales . nas-fpn , combined with various backbone models in the retinanet framework , achieves better accuracy and latency tradeoff compared to state-of-the-art object detection models . nas-fpn improves mobile detection accuracy by 2 ap compared to state-of-the-art ssdlite with mobilenetv2 model in [ 32 ] and achieves 48.3 ap which surpasses mask r-cnn [ 10 ] detection accuracy with less computation time . story_separator_special_tag in this paper , we propose a customizable architecture search ( cas ) approach to automatically generate a network architecture for semantic image segmentation . the generated network consists of a sequence of stacked computation cells .
a computation cell is represented as a directed acyclic graph , in which each node is a hidden representation ( i.e. , feature map ) and each edge is associated with an operation ( e.g. , convolution and pooling ) , which transforms data to a new layer . during the training , the cas algorithm explores the search space for an optimized computation cell to build a network . the cells of the same type share one architecture but with different weights . in real applications , however , an optimization may need to be conducted under some constraints such as gpu time and model size . to this end , a cost corresponding to the constraint will be assigned to each operation . when an operation is selected during the search , its associated cost will be added to the objective . as a result , our cas is able to search an optimized architecture with customized constraints . the approach story_separator_special_tag in recent years , software-based face presentation attack detection ( pad ) methods have seen a great progress . however , most existing schemes are not able to generalize well in more realistic conditions . the objective of this competition is to evaluate and compare the generalization performances of mobile face pad techniques under some real-world variations , including unseen input sensors , presentation attack instruments ( pai ) and illumination conditions , on a larger scale oulu-npu dataset using its standard evaluation protocols and metrics . thirteen teams from academic and industrial institutions across the world participated in this competition . this time typical liveness detection based on physiological signs of life was totally discarded . instead , every submitted system relies practically on some sort of feature representation extracted from the face and/or background regions using hand-crafted , learned or hybrid descriptors . interesting results and findings are presented and discussed in this paper . story_separator_special_tag domain adaptation is an important emerging topic in computer vision . in this paper , we present one of the first studies of domain shift in the context of object recognition . we introduce a method that adapts object models acquired in a particular visual domain to new imaging conditions by learning a transformation that minimizes the effect of domain-induced changes in the feature distribution . the transformation is learned in a supervised manner and can be applied to categories for which there are no labeled examples in the new domain . while we focus our evaluation on object recognition tasks , the transform-based adaptation technique we develop is general and could be applied to nonimage data . another contribution is a new multi-domain object database , freely available for download . we experimentally demonstrate the ability of our method to improve recognition on categories with few or no target domain labels and moderate to large changes in the imaging conditions . story_separator_special_tag datasets are an integral part of contemporary object recognition research . they have been the chief reason for the considerable progress in the field , not just as source of large amounts of training data , but also as means of measuring and comparing performance of competing algorithms . at the same time , datasets have often been blamed for narrowing the focus of object recognition research , reducing it to a single benchmark performance number . 
indeed , some datasets , that started out as data capture efforts aimed at representing the visual world , have become closed worlds unto themselves ( e.g . the corel world , the caltech-101 world , the pascal voc world ) . with the focus on beating the latest benchmark numbers on the latest dataset , have we perhaps lost sight of the original purpose ? the goal of this paper is to take stock of the current state of recognition datasets . we present a comparison study using a set of popular datasets , evaluated based on a number of criteria including : relative data bias , cross-dataset generalization , effects of closed-world assumption , and sample value . the experimental results story_separator_special_tag the problem of domain generalization is to learn from multiple training domains , and extract a domain-agnostic model that can then be applied to an unseen domain . domain generalization ( dg ) has a clear motivation in contexts where there are target domains with distinct characteristics , yet sparse data for training . for example recognition in sketch images , which are distinctly more abstract and rarer than photos . nevertheless , dg methods have primarily been evaluated on photo-only benchmarks focusing on alleviating the dataset bias where both problems of domain distinctiveness and data sparsity can be minimal . we argue that these benchmarks are overly straightforward , and show that simple deep learning baselines perform surprisingly well on them . in this paper , we make two main contributions : firstly , we build upon the favorable domain shift-robust properties of deep learning methods , and develop a low-rank parameterized cnn model for end-to-end dg learning . secondly , we develop a dg benchmark dataset covering photo , sketch , cartoon and painting domains . this is both more practically relevant , and harder ( bigger domain shift ) than existing benchmarks . the results show that story_separator_special_tag in this paper , we tackle the problem of domain generalization : how to learn a generalized feature representation for an `` unseen '' target domain by taking the advantage of multiple seen source-domain data . we present a novel framework based on adversarial autoencoders to learn a generalized latent feature representation across domains for domain generalization . to be specific , we extend adversarial autoencoders by imposing the maximum mean discrepancy ( mmd ) measure to align the distributions among different domains , and matching the aligned distribution to an arbitrary prior distribution via adversarial feature learning . in this way , the learned feature representation is supposed to be universal to the seen source domains because of the mmd regularization , and is expected to generalize well on the target domain because of the introduction of the prior distribution . we proposed an algorithm to jointly train different components of our proposed framework . extensive experiments on various vision tasks demonstrate that our proposed framework can learn better generalized features for the unseen target domain compared with state-of-the-art domain generalization methods . story_separator_special_tag the vulnerabilities of face-based biometric systems to presentation attacks have been finally recognized but yet we lack generalized software-based face presentation attack detection ( pad ) methods performing robustly in practical mobile authentication scenarios . 
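editorial illustration of the maximum mean discrepancy ( mmd ) alignment used by the domain generalization framework described above ; this is a generic rbf-kernel mmd estimator sketched by the editor , not code from that paper .

import numpy as np

def rbf_kernel(x, y, gamma=1.0):
    # k ( a , b ) = exp ( -gamma * ||a - b||^2 ) for all pairs of rows
    d2 = ((x[:, None, :] - y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def mmd2(x, y, gamma=1.0):
    # biased estimator of the squared mmd between samples x and y ;
    # driving this toward zero aligns the two feature distributions
    return (rbf_kernel(x, x, gamma).mean()
            + rbf_kernel(y, y, gamma).mean()
            - 2.0 * rbf_kernel(x, y, gamma).mean())

rng = np.random.default_rng(2)
src = rng.normal(0.0, 1.0, (128, 16))  # features from one source domain
tgt = rng.normal(0.5, 1.0, (128, 16))  # features from another domain
print(mmd2(src, tgt))                  # strictly positive when the distributions differ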
this is mainly due to the fact that the existing public face pad datasets are beginning to cover a variety of attack scenarios and acquisition conditions but their standard evaluation protocols do not encourage researchers to assess the generalization capabilities of their methods across these variations . in this present work , we introduce a new public face pad database , oulu-npu , aiming at evaluating the generalization of pad methods in more realistic mobile authentication scenarios across three covariates : unknown environmental conditions ( namely illumination and background scene ) , acquisition devices and presentation attack instruments ( pai ) . this publicly available database consists of 5940 videos corresponding to 55 subjects recorded in three different environments using high-resolution frontal cameras of six different smartphones . the high-quality print and video-replay attacks were created using two different printers and two different display devices . each of the four unambiguously defined evaluation protocols introduces at least one previously unseen condition to the test set story_separator_special_tag face anti-spoofing is essential to prevent face recognition systems from security breaches . much of the progress has been made possible by the availability of face anti-spoofing benchmark datasets in recent years . however , existing face anti-spoofing benchmarks have a limited number of subjects ( 170 ) and modalities ( 2 ) , which hinders the further development of the academic community . to facilitate face anti-spoofing research , we introduce a large-scale multi-modal dataset , namely casia-surf , which is the largest publicly available dataset for face anti-spoofing in terms of both subjects and visual modalities . specifically , it consists of 1,000 subjects with 21,000 videos and each sample has 3 modalities ( i.e. , rgb , depth and ir ) . we also provide a measurement set , evaluation protocol and training/validation/testing subsets , developing a new benchmark for face anti-spoofing . moreover , we present a new multi-modal fusion method as baseline , which performs feature re-weighting to select the more informative channel features while suppressing the less useful ones for each modality . extensive experiments have been conducted on the proposed dataset to verify its significance and generalization capability . the dataset is available at story_separator_special_tag auto face annotation , which aims to detect human faces from a facial image and assign them proper human names , is a fundamental research problem and beneficial to many real-world applications . in this work , we address this problem by investigating a retrieval-based annotation scheme of mining massive web facial images that are freely available over the internet . in particular , given a facial image , we first retrieve the top n similar instances from a large-scale web facial image database using content-based image retrieval techniques , and then use their labels for auto annotation . such a scheme has two major challenges : 1 ) how to retrieve the similar facial images that truly match the query , and 2 ) how to exploit the noisy labels of the top similar facial images , which may be incorrect or incomplete due to the nature of web images .
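a minimal editorial sketch of the retrieval-based annotation scheme just outlined ( all embeddings , names and sizes here are hypothetical ) : retrieve the top-n most similar faces and vote over their possibly noisy web labels .

import numpy as np
from collections import Counter

def annotate(query_emb, db_embs, db_labels, n=5):
    # cosine similarity between the query face embedding and every database face
    sims = db_embs @ query_emb / (
        np.linalg.norm(db_embs, axis=1) * np.linalg.norm(query_emb) + 1e-12)
    top = np.argsort(-sims)[:n]
    # majority vote over the ( weak , possibly incorrect ) retrieved labels
    return Counter(db_labels[i] for i in top).most_common(1)[0][0]

rng = np.random.default_rng(5)
db = rng.normal(size=(1000, 128))
labels = [f"person_{i % 50}" for i in range(1000)]  # hypothetical name labels
print(annotate(db[0] + 0.01 * rng.normal(size=128), db, labels))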
in this paper , we propose an effective weak label regularized local coordinate coding ( wlrlcc ) technique , which exploits the principle of local coordinate coding by learning sparse features , and employs the idea of graph-based weak label regularization to enhance the weak labels of the story_separator_special_tag recent face recognition experiments on a major benchmark ( lfw [ 15 ] ) show stunning performance - a number of algorithms achieve near to perfect score , surpassing human recognition rates . in this paper , we advocate evaluations at the million scale ( lfw includes only 13k photos of 5k people ) . to this end , we have assembled the megaface dataset and created the first megaface challenge . our dataset includes one million photos that capture more than 690k different individuals . the challenge evaluates performance of algorithms with increasing numbers of `` distractors '' ( going from 10 to 1m ) in the gallery set . we present both identification and verification performance , evaluate performance with respect to pose and a person 's age , and compare as a function of training data size ( # photos and # people ) . we report results of state of the art and baseline algorithms . the megaface dataset , baseline code , and evaluation scripts are all publicly released for further experimentation . story_separator_special_tag we present a new dataset with the goal of advancing the state-of-the-art in object recognition by placing the question of object recognition in the context of the broader question of scene understanding . this is achieved by gathering images of complex everyday scenes containing common objects in their natural context . objects are labeled using per-instance segmentations to aid in precise object localization . our dataset contains photos of 91 object types that would be easily recognizable by a 4 year old . with a total of 2.5 million labeled instances in 328k images , the creation of our dataset drew upon extensive crowd worker involvement via novel user interfaces for category detection , instance spotting and instance segmentation . we present a detailed statistical analysis of the dataset in comparison to pascal , imagenet , and sun . finally , we provide baseline performance analysis for bounding box and segmentation detection results using a deformable parts model . story_separator_special_tag this paper describes a face detection framework that is capable of processing images extremely rapidly while achieving high detection rates . there are three key contributions . the first is the introduction of a new image representation called the integral image which allows the features used by our detector to be computed very quickly . the second is a simple and efficient classifier which is built using the adaboost learning algorithm ( freund and schapire , 1995 ) to select a small number of critical visual features from a very large set of potential features . the third contribution is a method for combining classifiers in a cascade which allows background regions of the image to be quickly discarded while spending more computation on promising face-like regions . a set of experiments in the domain of face detection is presented . the system yields face detection performance comparable to the best previous systems ( sung and poggio , 1998 ; rowley et al. , 1998 ; schneiderman and kanade , 2000 ; roth et al. , 2000 ) . implemented on a conventional desktop , face detection proceeds at 15 frames per second .
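editorial sketch of the integral image mentioned in the face detection framework above , which lets the sum over any rectangle be computed in at most four lookups ; a generic implementation consistent with , but not copied from , the paper .

import numpy as np

def integral_image(img):
    # ii [ r , c ] = sum of img over the rectangle [ 0..r , 0..c ]
    return img.cumsum(axis=0).cumsum(axis=1)

def rect_sum(ii, r0, c0, r1, c1):
    # inclusive corners ; inclusion-exclusion over four table entries
    total = ii[r1, c1]
    if r0 > 0:
        total -= ii[r0 - 1, c1]
    if c0 > 0:
        total -= ii[r1, c0 - 1]
    if r0 > 0 and c0 > 0:
        total += ii[r0 - 1, c0 - 1]
    return total

img = np.arange(16.0).reshape(4, 4)
ii = integral_image(img)
assert rect_sum(ii, 1, 1, 2, 3) == img[1:3, 1:4].sum()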
story_separator_special_tag in this paper , we present an enhanced pictorial structure ( ps ) model for precise eye localization , a fundamental problem involved in many face processing tasks . ps is a computationally efficient framework for part-based object modelling . for face images taken under uncontrolled conditions , however , the traditional ps model is not flexible enough for handling the complicated appearance and structural variations . to extend ps , we 1 ) propose a discriminative ps model for a more accurate part localization when appearance changes seriously , 2 ) introduce a series of global constraints to improve the robustness against scale , rotation and translation , and 3 ) adopt a heuristic prediction method to address the difficulty of eye localization with partial occlusion . experimental results on the challenging lfw ( labeled faces in the wild ) database show that our model can locate eyes accurately and efficiently under a broad range of uncontrolled variations involving poses , expressions , lightings , camera qualities , occlusions , etc . story_separator_special_tag convolutional networks are at the core of most state-of-the-art computer vision solutions for a wide variety of tasks . since 2014 very deep convolutional networks started to become mainstream , yielding substantial gains in various benchmarks . although increased model size and computational cost tend to translate to immediate quality gains for most tasks ( as long as enough labeled data is provided for training ) , computational efficiency and low parameter count are still enabling factors for various use cases such as mobile vision and big-data scenarios . here we explore ways to scale up networks in ways that aim at utilizing the added computation as efficiently as possible by suitably factorized convolutions and aggressive regularization . we benchmark our methods on the ilsvrc 2012 classification challenge validation set and demonstrate substantial gains over the state of the art : 21.2 % top-1 and 5.6 % top-5 error for single frame evaluation using a network with a computational cost of 5 billion multiply-adds per inference and with using less than 25 million parameters . with an ensemble of 4 models and multi-crop evaluation , we report 3.5 % top-5 error on the validation set ( 3.6 % error on the story_separator_special_tag identity spoofing is a contender for high-security face recognition applications . with the advent of social media and globalized search , our face images and videos are widespread on the internet and can be potentially used to attack biometric systems without previous user consent . yet , research to counter these threats is still in its infancy : we lack public standard databases , protocols to measure spoofing vulnerability and baseline methods to detect these attacks . the contributions of this work to the area are three-fold : firstly we introduce a publicly available photo-attack database with associated protocols to measure the effectiveness of counter-measures . based on the data available , we conduct a study on current state-of-the-art spoofing detection algorithms based on motion analysis , showing that they fail in the light of this new dataset . lastly , we propose a new technique of counter-measure solely based on foreground/background motion correlation using optical flow that outperforms all other algorithms , achieving nearly perfect scoring with an equal-error rate of 1.52 % on the available test data .
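a hedged editorial sketch of the foreground/background motion-correlation counter-measure described above ( hypothetical code , not the authors ' implementation ) : when face-region motion tracks background motion too closely , the input is likely a replayed photo or video .

import numpy as np

def motion_correlation(face_flow_mag, bg_flow_mag):
    # per-frame mean optical-flow magnitudes in the face region and in the
    # background ; high correlation suggests the whole scene moves as one
    # rigid plane , i.e. a photo or video replay attack
    f = face_flow_mag - face_flow_mag.mean()
    b = bg_flow_mag - bg_flow_mag.mean()
    denom = np.sqrt((f ** 2).sum() * (b ** 2).sum()) + 1e-12
    return float((f * b).sum() / denom)

rng = np.random.default_rng(3)
shared = rng.normal(size=200)  # rigid motion shared by face and background
score = motion_correlation(shared + 0.05 * rng.normal(size=200),
                           shared + 0.05 * rng.normal(size=200))
is_attack = score > 0.5       # hypothetical decision threshold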
the source code leading to the reported results is made available for the replicability of findings in this article . story_separator_special_tag facial makeup has the ability to alter the appearance of a person . such an alteration can degrade the accuracy of automated face recognition systems , as well as that of methods estimating age and beauty from faces . in this work , we design a method to automatically detect the presence of makeup in face images . the proposed algorithm extracts a feature vector that captures the shape , texture and color characteristics of the input face , and employs a classifier to determine the presence or absence of makeup . besides extracting features from the entire face , the algorithm also considers portions of the face pertaining to the left eye , right eye , and mouth . experiments on two datasets consisting of 151 subjects ( 600 images ) and 125 subjects ( 154 images ) , respectively , suggest that makeup detection rates of up to 93.5 % ( at a false positive rate of 1 % ) can be obtained using the proposed approach . further , an adaptive pre-processing scheme that exploits knowledge of the presence or absence of facial makeup to improve the matching accuracy of a face matcher is presented . story_separator_special_tag this paper introduces an automatic method for editing a portrait photo so that the subject appears to be wearing makeup in the style of another person in a reference photo . our unsupervised learning approach relies on a new framework of cycle-consistent generative adversarial networks . different from the image domain transfer problem , our style transfer problem involves two asymmetric functions : a forward function encodes example-based style transfer , whereas a backward function removes the style . we construct two coupled networks to implement these functions - one that transfers makeup style and a second that can remove makeup - such that the output of their successive application to an input photo will match the input . the learned style network can then quickly apply an arbitrary makeup style to an arbitrary photo . we demonstrate the effectiveness on a broad range of portraits and styles .
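an editorial toy sketch of the cycle-consistency idea behind the makeup transfer framework above ( hypothetical functions , not the paper 's networks ) : a forward mapping applies a style , a backward mapping removes it , and training penalizes any mismatch after the round trip .

import numpy as np

def cycle_consistency_loss(x, apply_style, remove_style):
    # the asymmetric pair of mappings should invert each other :
    # remove_style ( apply_style ( x ) ) ~ x
    x_rec = remove_style(apply_style(x))
    return float(np.abs(x - x_rec).mean())  # l1 reconstruction penalty

# stand-in "networks" : an invertible affine map and its inverse
apply_style = lambda x: 1.5 * x + 0.2
remove_style = lambda y: (y - 0.2) / 1.5

x = np.random.default_rng(4).normal(size=(32, 32, 3))
assert cycle_consistency_loss(x, apply_style, remove_style) < 1e-9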
we are concerned here with one of the oldest problems in combinatorial extremal theory . it is readily described after we have made a few conventions . denotes the set of story_separator_special_tag we describe several variants of the norm-graphs introduced by kollar , ronyai , and szabo and study some of their extremal properties . using these variants we construct , for infinitely many values of $n$ , a graph on $n$ vertices with more than $\frac{1}{2} n^{5/3}$ edges , containing no copy of $k_{3,3}$ , thus slightly improving an old construction of brown . we also prove that the maximum number of vertices in a complete graph whose edges can be colored by $k$ colors with no monochromatic copy of $k_{3,3}$ is $( 1 + o ( 1 ) ) k^3$ . this answers a question of chung and graham . in addition we prove that for every fixed $t$ , there is a family of subsets of an $n$ element set whose so-called dual shatter function is $o ( m^t )$ and whose discrepancy is $\omega ( n^{1/2 - 1/(2t)} \sqrt{\log n} )$ . this settles a problem of matousek . story_separator_special_tag given a set $x$ and a natural number $r$ denote by $x^{(r)}$ the set of $r$-element subsets of $x$ . an $r$-graph or hypergraph $g$ is a pair $( v , t )$ , where $v$ is a finite set and $t \subseteq v^{(r)}$ . we call $v \in v$ a vertex of $g$ and $\tau \in t$ an $r$-tuple or an edge of $g$ . thus a 1-graph is a set $v$ and a subset $t$ of $v$ . as the structure of 1-graphs is trivial , throughout the note we suppose $r \geq 2$ . a 2-graph is a graph in the sense of ( 5 ) . the degree $\deg v$ of a vertex $v \in v$ is the number of $r$-tuples containing $v$ . a set of pairwise disjoint $r$-tuples is said to be independent . we say $g' = ( v' , t' )$ is a subgraph of $g = ( v , t )$ and write $g' \subseteq g$ if $v' \subseteq v$ and $t' \subseteq t$ . if $g = ( v , t )$ and $v \in v$ then $g - v = ( v' ,$ story_separator_special_tag a thomsen graph [ 2 , p. 22 ] consists of six vertices partitioned into two classes of three each , with every vertex in one class connected to every vertex in the other ; it is the graph of the gas , water , and electricity problem [ 1 , p. 206 ] . ( all graphs considered in this paper will be undirected , having neither loops nor multiple edges . ) story_separator_special_tag if $g$ and $h$ are graphs ( which will mean finite , with no loops or parallel lines ) , define the ramsey number $r ( g , h )$ to be the least number $p$ such that if the lines of the complete graph $k_p$ are colored red and blue ( say ) , either the red subgraph contains a copy of $g$ or the blue subgraph contains $h$ . the diagonal ramsey numbers are given by $r ( g ) = r ( g , g )$ . these definitions follow those of chvatal and harary [ 1 ] . other terminology will follow harary [ 2 ] . these generalized ramsey numbers have been much studied recently ; see [ 3 ] for a survey . story_separator_special_tag a family of $k$-subsets $a_1 , a_2 , \ldots , a_d$ on $[ n ] = \{ 1 , 2 , \ldots , n \}$ is called a $( d , c )$-cluster if the union $a_1 \cup a_2 \cup \cdots \cup a_d$ contains at most $ck$ elements with $c < d$ . let $\mathcal{f}$ be a family of $k$-subsets of an $n$-element set . we show that for $k \geq 2$ and $n \geq k+2$ , if every $( k , 2 )$-cluster of $\mathcal{f}$ is intersecting , then $\mathcal{f}$ contains no $( k-1 )$-dimensional simplices . this leads to an affirmative answer to mubayi 's conjecture for $d = k$ based on chvatal 's simplex theorem .
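( editorial note : a toy instance of the $( d , c )$-cluster definition above , constructed here purely for illustration and not taken from the paper . ) for $k = 3$ the three 3-sets $a_1 = \{ 1 , 2 , 3 \}$ , $a_2 = \{ 1 , 2 , 4 \}$ , $a_3 = \{ 1 , 3 , 4 \}$ form a $( 3 , 2 )$-cluster , since $| a_1 \cup a_2 \cup a_3 | = | \{ 1 , 2 , 3 , 4 \} | = 4 \leq 2k = 6$ and $c = 2 < d = 3$ ; moreover every pair of these sets intersects ( all three share the element 1 ) , so this cluster satisfies the intersecting hypothesis of the theorem just quoted .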
we also show that for any $d$ satisfying $3 \leq d \leq k$ and $n \geq \frac{dk}{d-1}$ , if every $( d ,$ story_separator_special_tag suppose $\mathcal{f}$ is a collection of 3-subsets of $\{ 1 , 2 , \ldots , n \}$ . the problem of determining the least integer $f ( n , k )$ with the property that if $| \mathcal{f} | > f ( n , k )$ then $\mathcal{f}$ contains a $k$-star ( i.e. , $k$ 3-sets such that the intersection of any pair of them consists of exactly the same element ) is studied . it is proved that , for $k$ odd , $f ( n , k ) = k ( k-1 ) n + o ( k^3 )$ and , for $k$ even , $f ( n , k ) = k ( k - \frac{3}{2} ) n + o ( n + k^3 )$ . story_separator_special_tag suppose that $\mathcal{f}$ is a collection of 3-subsets of $\{ 1 , 2 , \ldots , n \}$ which does not contain a $k$-star ( i.e. , $k$ 3-sets any two of which intersect in the same singleton ) . for $k \geq 3$ and $n \geq n_0 ( k )$ , the collections having largest possible sizes are determined . story_separator_special_tag we propose a homological approach to two conjectures descended from the erdos-ko-rado theorem , one due to chvatal and the other to frankl and furedi . we apply the method to reprove , and in one case improve , results of these authors related to their conjectures . story_separator_special_tag let $x$ be a finite set of cardinality $n$ . if $l = \{ l_1 , \ldots , l_s \}$ is a set of nonnegative integers with $l_1 < \cdots < l_s$ ... $c n^{r-1}$ ( $c = c ( k )$ is a constant depending on $k$ ) , then ( i ) there exists an story_separator_special_tag an $r$-graph is a graph whose basic elements are its vertices and $r$-tuples . it is proved that to every $l$ and $r$ there is an $\varepsilon ( l , r )$ so that for $n > n_0$ every $r$-graph of $n$ vertices and $n^{r - \varepsilon ( l , r )}$ $r$-tuples contains $r \cdot l$ vertices $x_i^{(j)}$ , $1 \leq j \leq r$ , $1 \leq i \leq l$ , so that all the $r$-tuples $( x_{i_1}^{(1)} , x_{i_2}^{(2)} , \cdots , x_{i_r}^{(r)} )$ occur in the $r$-graph . story_separator_special_tag then $g ( n ; l )$ contains $k$ independent edges . it is easy to see that the above result is best possible since the complete graph of $2k-1$ vertices and the graph with vertices $x_1 , \ldots , x_{k-1} ; y_1 , \ldots , y_{n-k+1}$ and edges $( x_i , x_j )$ , $1 \leq i < j \leq k-1$ ; $( x_i , y_j )$ , $1 \leq i \leq k-1$ , $1 \leq j \leq n-k+1$ clearly do not contain $k$ independent edges . by an $r$-graph $g^{(r)}$ we shall mean a graph whose basic elements are its vertices and $r$-tuples ; for $r = 2$ we obtain the ordinary graphs . $g^{(r)} ( n ; m )$ will denote an $r$-graph of $n$ vertices and $m$ $r$-tuples . for $r > 2$ these generalised graphs have not yet been investigated very much . a set of $r$-tuples is called independent if no two of them have a vertex in common . $f ( n ; r , k )$ denotes the smallest integer so that every $g^{(r)} ( n ; f ( n ; r ,$ story_separator_special_tag i published several papers with similar titles . one of my latest ones [ 13 ] ( also see [ 16 ] and the yearly meetings at boca raton or baton rouge ) contains , in the introduction , many references to my previous papers . i discuss here as much as possible new problems , and present proofs in only one case . i use the same notation as used in my previous papers . $g^{(r)} ( n ; l )$ denotes an $r$-graph ( uniform hypergraph all of whose edges have size $r$ ) of $n$ vertices and $l$ edges . if $r = 2$ and there is no danger of confusion , i omit the upper index $r = 2$ . $k^{(r)} ( n )$ denotes the complete hypergraph $g^{(r)} ( n ; \binom{n}{r} )$ . $k ( a , b )$ denotes the complete bipartite graph ( $r = 2$ ) of $a$ white and $b$ black vertices .
$k^{(r)} ( t )$ denotes the hypergraph of $rt$ vertices $x_i^{(j)}$ , $1 \leq i \leq t$ , $1 \leq$ story_separator_special_tag many papers , and also the excellent book of bollobas , have recently appeared on extremal problems on graphs . two survey papers of simonovits are in the press and brown , simonovits and i have several papers , some appeared , some in the press and some in preparation on this subject . story_separator_special_tag 2. notation the letters $a , b , c , d , x , y , z$ denote finite sets of non-negative integers , all other lower-case letters denote non-negative integers . if $k \leq l$ , then $[ k , l )$ denotes the set $\{ k , k+1 , k+2 , \ldots , l-1 \} = \{ x : k \leq x < l \}$ . the obliteration operator $\hat{}$ serves to remove from any system of elements the element above which it is placed . thus $[ k , \hat{l} ) = \{ k , k+1 , \ldots , \hat{l} \}$ . the cardinal of $a$ is $| a |$ ; inclusion ( in the wide sense ) , union , difference , and intersection of sets are denoted by $a \subseteq b$ , $a + b$ , $a - b$ , $ab$ respectively , and $a - b = a - ab$ for all $a , b$ . by $s ( k , l , m )$ we denote the set of all systems $( a_0 , a_1 , \ldots , a_n )$ such that $a_v \subseteq [ 0 , m )$ ; $| a_v | \geq 1$ $( v \leq n )$ , story_separator_special_tag let $n$ be an arbitrary but fixed positive integer . let $t_n$ be the set of all monotone-increasing $n$-tuples of positive integers : ( 1 ) define ( 2 ) in this note we prove that this is a 1-1 mapping from $t_n$ onto $\{ 1 , 2 , 3 , \ldots \}$ . story_separator_special_tag in this paper $g ( n ; l )$ will denote a graph of $n$ vertices and $l$ edges , $k_p$ will denote the complete graph of $p$ vertices $g ( p ; \binom{p}{2} )$ and $k_r ( p_1 , \ldots , p_r )$ will denote the $r$-chromatic graph with $p_i$ vertices of the $i$-th colour , in which every two vertices of different colour are adjacent . $\pi ( g )$ will denote the number of vertices of $g$ and $\nu ( g )$ denotes the number of edges of $g$ . $\bar{g} ( n ; l )$ denotes the complementary graph of $g ( n ; l )$ , i.e. $\bar{g} ( n ; l )$ is the $g ( n ; \binom{n}{2} - l )$ which has the same vertices as $g ( n ; l )$ story_separator_special_tag we shall consider graphs ( hypergraphs ) without loops and multiple edges . let $\mathcal{l}$ be a family of so-called prohibited graphs and $\mathrm{ex} ( n , \mathcal{l} )$ denote the maximum number of edges ( hyperedges ) a graph ( hypergraph ) on $n$ vertices can have without containing subgraphs from $\mathcal{l}$ . a graph ( hypergraph ) will be called supersaturated if it has more edges than $\mathrm{ex} ( n , \mathcal{l} )$ . if $g$ has $n$ vertices and $\mathrm{ex} ( n , \mathcal{l} ) + k$ edges ( hyperedges ) , then it always contains prohibited subgraphs . the basic question investigated here is : at least how many copies of $l \in \mathcal{l}$ must occur in a graph $g_n$ on $n$ vertices with $\mathrm{ex} ( n , \mathcal{l} ) + k$ edges ( hyperedges ) ? story_separator_special_tag programme educational objectives ( peos ) : the master programme in applied geology aims to provide comprehensive knowledge based on various branches of geology , with special focus on applied geology subjects in the areas of geomorphology , structural geology , hydrogeology , petroleum geology , mining geology , remote sensing and environmental geology . to provide an in-depth knowledge and hands-on training to learners in the area of applied geology and enable them to work independently at a higher level education / career . to gain knowledge on the significance of the dynamics of the earth , basic principles of sedimentology and stratigraphy , and economic mineral formations and related exploration operations in industries .
to impart fundamental concepts of economic mineral explorations , geological mapping techniques , geomorphologic principles , and applications of geology in engineering and story_separator_special_tag denote by $g ( n ; m )$ a graph of $n$ vertices and $m$ edges . we prove that every $g ( n ; \lfloor n^2/4 \rfloor + 1 )$ contains a circuit of $l$ edges for every $3 \leq l < c_2 n$ , also that every $g ( n ; \lfloor n^2/4 \rfloor + 1 )$ contains a $k_e ( u_n , u_n )$ with $u_n = \lfloor c_1 \log n \rfloor$ ( for the definition of $k_e ( u_n , u_n )$ see the introduction ) . finally for $t > t_0$ every $g ( n ; \lfloor t n^{3/2} \rfloor )$ contains a circuit of $2l$ edges for every $2 \leq l < c_3 t^2$ . story_separator_special_tag we describe a simple and yet surprisingly powerful probabilistic technique which shows how to find in a dense graph a large subset of vertices in which all ( or almost all ) small subsets have many common neighbors . recently this technique has had several striking applications to extremal graph theory , ramsey theory , additive combinatorics , and combinatorial geometry . in this survey we discuss some of them . story_separator_special_tag let $x$ be a finite set of cardinality $n$ , and let $\mathcal{f}$ be a family of $k$-subsets of $x$ . in this paper we prove the following conjecture of p. erdos and v.t. sos : if $n > n_0 ( k )$ , $k \geq 4$ , then we can find two members $f$ and $g$ in $\mathcal{f}$ such that $| f \cap g | = 1$ . story_separator_special_tag let $n , k , t$ be integers , $n > k > t \geq 0$ , and let $m ( n , k , t )$ denote the maximum number of sets , in a family of $k$-subsets of an $n$-set , no two of which intersect in exactly $t$ elements . the problem of determining $m ( n , k , t )$ was raised by erdos in 1975 . in the present paper we prove that if $k \geq 2t+1$ and $k - t$ is a prime , then $m ( n , k , t ) \leq \binom{n}{t} \binom{2k-t-1}{k} / \binom{2k-t-1}{t}$ . moreover , equality holds if and only if an $( n , 2k-t-1 , t )$-steiner system exists . the proof uses a linear algebraic approach . story_separator_special_tag an $( n , s , q )$-graph is an $n$-vertex multigraph in which every $s$-set of vertices spans at most $q$ edges . turan-type questions on the maximum of the sum of the edge multiplicities in such multigraphs have been studied since the 1990s . more recently , mubayi and terry [ an extremal problem with a transcendental solution , combinatorics probability and computing 2019 ] posed the problem of determining the maximum of the product of the edge multiplicities in $( n , s , q )$-graphs . we give a general lower bound construction for this problem for many pairs $( s , q )$ , which we conjecture is asymptotically best possible . we prove various general cases of our conjecture , and in particular we settle a conjecture of mubayi and terry on the $( s , q ) = ( 4 , 6a+3 )$ case of the problem ( for $a \geq 2$ ) ; this in turn answers a question of alon . we also determine the asymptotic behaviour of the problem for ` sparse ' multigraphs story_separator_special_tag finding the exact solution to dynamical systems in the field of mathematical modeling is extremely important , and to achieve this goal various integral transforms have been developed . in this research analysis , non-integer order ordinary differential equations are analytically solved via the laplace-carson integral transform technique , which has not previously been employed to treat non-integer order differential systems .
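editorial note ( standard material supplied for readability , not taken from the paper above ) : the laplace-carson transform is usually defined as $$\mathcal{l}_c \{ f \} ( p ) = p \int_0^{\infty} e^{-pt} f ( t ) \, dt = p \, \mathcal{l} \{ f \} ( p ) ,$$ and since the classical laplace transform satisfies $\mathcal{l} \{ \int_0^t f ( s ) \, ds \} ( p ) = \mathcal{l} \{ f \} ( p ) / p$ , applying this rule $n$ times gives $$\mathcal{l}_c \Big\{ \underbrace{\textstyle\int_0^t \cdots \int_0^t}_{n} f \Big\} ( p ) = \frac{\mathcal{l}_c \{ f \} ( p )}{p^n} ,$$ i.e. the laplace-carson transform of the $n$-fold repeated integral is the transform of $f$ divided by the $n$-th power of $p$ , which is exactly the property stated next .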
firstly , it is proved that the laplace-carson transform of $n$-times repeated classical integrals can be computed by dividing the laplace-carson transform of the underlying function by the $n$-th power of a real number $p$ , which later helped us to present a new result for getting the laplace-carson transform of the fractional derivative of a function under the caputo operator . some initial value problems based upon the caputo type fractional operator have been precisely solved using the results obtained thereof . msc 2010 : 26a33 , 34m03 story_separator_special_tag we show that if the largest matching in a $k$-uniform hypergraph $g$ on $n$ vertices has precisely $s$ edges , and $n > 2k^2 s / \log k$ , then $g$ has at most $\binom{n}{k} - \binom{n-s}{k}$ edges and this upper bound is achieved only for hypergraphs in which the set of edges consists of all $k$-subsets which intersect a given set of $s$ vertices . story_separator_special_tag the number $\alpha$ , $0 \leq \alpha < 1$ , is a jump for $r$ if for any positive $\varepsilon$ and any integer $m$ , $m \geq r$ , any $r$-uniform hypergraph with $n > n_0 ( \varepsilon , m )$ vertices and at least $( \alpha + \varepsilon ) \binom{n}{r}$ edges contains a subgraph with $m$ vertices and at least $( \alpha + c ) \binom{m}{r}$ edges , where $c = c ( \alpha )$ does not depend on $\varepsilon$ and $m$ . it follows from a theorem of erdos , stone and simonovits that for $r = 2$ every $\alpha$ is a jump . erdos asked whether the same is true for $r \geq 3$ . he offered \$ 1000 for answering this question . in this paper we give a negative answer by showing that $1 - \frac{1}{l^{r-1}}$ is not a jump if $r \geq 3$ , $l > 2r$ . story_separator_special_tag we study the turan number of long cycles in random graphs and in pseudo-random graphs . denote by $\mathrm{ex} ( g ( n , p ) , h )$ the random variable counting the number of edges in a largest subgraph of $g ( n , p )$ without a copy of $h$ . we determine the asymptotic value of $\mathrm{ex} ( g ( n , p ) , c_t )$ where $c_t$ is a cycle of length $t$ , for $p \geq \frac{c}{n}$ and $a \log n \leq t \leq ( 1 - \varepsilon ) n$ . the typical behavior of $\mathrm{ex} ( g ( n , p ) , c_t )$ depends substantially on the parity of $t$ . in particular , our results match the classical result of woodall on the turan number of long cycles , and can be seen as its random version , showing that the transference principle holds here as well . in fact , our techniques apply in a more general sparse pseudo-random setting . we also prove a robustness-type result , story_separator_special_tag a subgraph $h$ of $g$ is singular if the vertices of $h$ either have the same degree in $g$ or have pairwise distinct degrees in $g$ . the largest number of edges of a graph on $n$ vertices that does not contain a singular copy of $h$ is denoted by $t_s ( n , h )$ . caro and tuza [ theory and applications of graphs , 6 ( 2019 ) , 1 -- 32 ] obtained the asymptotics of $t_s ( n , h )$ for every graph $h$ , but determined the exact value of this function only in the case $h = k_3$ and $n \equiv 2 \pmod 4$ . we determine $t_s ( n , k_3 )$ for all $n \equiv 0 \pmod 4$ and $n \equiv 1 \pmod 4$ , and also $t_s ( n , k_{r+1} )$ for large enough $n$ that is divisible by $r$ . we also explore the story_separator_special_tag given a tree $t$ on $v$ vertices and an integer $k$ exceeding one .
one can define the $k$-expansion $t^k$ as a $k$-uniform linear hypergraph by enlarging each edge with a new , distinct set of $k-2$ vertices . then $t^k$ has $v + ( v-1 ) ( k-2 )$ vertices . the aim of this paper is to show that using the delta-system method one can easily determine asymptotically the size of the largest $t^k$-free $n$-vertex hypergraph , i.e. , the turan number of $t^k$ . story_separator_special_tag let $k , t$ be positive integers and let $\mathcal{f}$ be a set-system which consists of $k$-element sets . in this paper it is proved that one can choose a subsystem $\mathcal{f}^* \subseteq \mathcal{f}$ containing a positive proportion of the members of $\mathcal{f}$ ( i.e. $| \mathcal{f}^* | > c ( k , t ) | \mathcal{f} |$ ) and having the property that every pairwise intersection is a kernel of a $t$-star in $\mathcal{f}^*$ ( i.e. for all $f , f' \in \mathcal{f}^*$ with $f \cap f' = a$ , there exist $f''_1 , \ldots , f''_t \in \mathcal{f}^*$ such that $f''_i \cap f''_j = a$ for $1 \leq$ story_separator_special_tag a $k$-uniform linear path of length $\ell$ , denoted by $\mathbb{p}_\ell^{(k)}$ , is a family of $k$-sets $\{ f_1 , \ldots , f_\ell \}$ such that $| f_i \cap f_{i+1} | = 1$ for each $i$ and $f_i \cap f_j = \emptyset$ whenever $| i - j | > 1$ . given a $k$-uniform hypergraph $h$ and a positive integer $n$ , the $k$-uniform hypergraph turan number of $h$ , denoted by $ex_k ( n , h )$ , is the maximum number of edges in a $k$-uniform hypergraph $\mathcal{f}$ on $n$ vertices that does not contain $h$ as a subhypergraph . with an intensive use of the delta-system method , we determine $ex_k ( n , \mathbb{p}_\ell^{(k)} )$ exactly for all fixed $\ell \geq 1$ , $k \geq 4$ , and sufficiently large $n$ . we show that $ex_k ( n , \mathbb{p}_{2t+1}^{(k)} ) = \binom{n-1}{k-1} + \binom{n-2}{k-1} + \cdots$ story_separator_special_tag more than forty years ago , erdos conjectured that for any $t \leq n/k$ , every $k$-uniform hypergraph on $n$ vertices without $t$ disjoint edges has at most $\max \{ \binom{kt-1}{k} , \binom{n}{k} - \binom{n-t+1}{k} \}$ edges . although this appears to be a basic instance of the hypergraph turan problem ( with a $t$-edge matching as the excluded hypergraph ) , progress on this question has remained elusive . in this paper , we verify this conjecture for all $t < n / ( 3k^2 )$ . this improves upon the best previously known range $t = o ( n / k^3 )$ , which dates back to the 1970 's . story_separator_special_tag a $d$-simplex is a collection of $d+1$ sets such that every $d$ of them have nonempty intersection and the intersection of all of them is empty . a strong $d$-simplex is a collection of $d+2$ sets $a , a_1 , \ldots , a_{d+1}$ such that $\{ a_1 , \ldots , a_{d+1} \}$ is a $d$-simplex , while $a$ contains an element of $\cap_{j \neq i} a_j$ for each $i$ , $1 \leq i \leq d+1$ . mubayi and ramadurai [ combin . probab . comput. , 18 ( 2009 ) , pp . 441-454 ] conjectured that if $k \geq d+1 \geq 3$ , $n > k ( d+1 ) / d$ , and $\mathcal{f}$ is a family of $k$-element subsets of an $n$-element set that contains no strong $d$-simplex , then $| \mathcal{f} | \leq \binom{n-1}{k-1}$ with equality only when story_separator_special_tag a cancellative hypergraph has no three edges $a , b , c$ with $a \triangle b \subseteq c$ . we give a new short proof of an old result of bollobas , which states that the maximum size of a cancellative triple system is achieved by the balanced complete tripartite 3-graph . one of the two forbidden subhypergraphs in a cancellative 3-graph is $f_5 = \{ abc , abd , cde \}$ .
for $n \geq 33$ we show that the maximum number of triples on $n$ vertices containing no copy of $f_5$ is also achieved by the balanced complete tripartite 3-graph . this strengthens a theorem of frankl and furedi , who proved it for $n \geq 3000$ . for both extremal results , we show that a 3-graph with almost as many edges as the extremal example is approximately tripartite . these stability theorems are analogous to the simonovits stability theorem for graphs . story_separator_special_tag a $d$-dimensional simplex is a collection of $d+1$ sets with empty intersection , every $d$ of which have nonempty intersection . a $k$-uniform $d$-cluster is a collection of $d+1$ sets of size $k$ with empty intersection and union of size at most $2k$ . we prove the following result which simultaneously addresses an old conjecture of chvatal [ 6 ] and a recent conjecture of the second author [ 28 ] . for $d \geq 2$ and $\zeta > 0$ there is a number $t$ such that the following holds for sufficiently large $n$ . let $g$ be a $k$-uniform set system on $[ n ] = \{ 1 , \ldots , n \}$ with $\zeta n < k < n/2 - t$ , and suppose either that $g$ contains no $d$-dimensional simplex or that $g$ contains no $d$-cluster . then $| g | \leq \binom{n-1}{k-1}$ with equality only for the family of all $k$-sets containing a specific element . in the non-uniform setting we obtain the following exact result that generalises a story_separator_special_tag let $\mathcal{f}$ be a $k$-uniform set system defined on a ground set of size $n$ with no singleton intersection ; i.e. , no pair $a , b \in \mathcal{f}$ has $| a \cap b | = 1$ . frankl showed that $| \mathcal{f} | \leq \binom{n-2}{k-2}$ for $k \geq 4$ and $n$ sufficiently large , confirming a conjecture of erdos and sos . we determine the maximum size of $\mathcal{f}$ for $k = 4$ and all $n$ , and also establish a stability result for general $k$ , showing that any $\mathcal{f}$ with size asymptotic to that of the best construction must be structurally similar to it . story_separator_special_tag background it is known that obesity , sodium intake , and alcohol consumption factors influence blood pressure . in this clinical trial , dietary approaches to stop hypertension , we assessed the effects of dietary patterns on blood pressure . methods we enrolled 459 adults with systolic blood pressures of less than 160 mm hg and diastolic blood pressures of 80 to 95 mm hg . for three weeks , the subjects were fed a control diet that was low in fruits , vegetables , and dairy products , with a fat content typical of the average diet in the united states . they were then randomly assigned to receive for eight weeks the control diet , a diet rich in fruits and vegetables , or a `` combination '' diet rich in fruits , vegetables , and low-fat dairy products and with reduced saturated and total fat . sodium intake and body weight were maintained at constant levels . results at base line , the mean ( +/-sd ) systolic and diastolic blood pressures were 131.3 +/- 10.8 mm hg and 84.7 +/- 4.7 mm hg , respectively . the combination diet reduced systolic and diastolic blood pressure by 5.5 and 3.0 mm story_separator_special_tag maximum of a square-free quadratic form on a simplex . the following question was suggested by a problem of j. e. macdonald jr. ( 1 ) : given a graph $g$ with vertices $1 , 2 , \ldots , n$ , let $s$ be the simplex in $e^n$ given by $x_i \geq 0$ , $\sum_i x_i = 1$ .
story_separator_special_tag we present new short proofs to both the exact and the stability results of two extremal problems . the first one is the extension of turán 's theorem in hypergraphs , which was firstly studied by mubayi $ \cite{mu06} $ . the second one is about the cancellative hypergraphs , which was firstly studied by bollobás $ \cite{bo74} $ and later by keevash and mubayi $ \cite{km04} $ . our proofs are concise and straightforward , but give a sharper version of stability theorems to both problems . story_separator_special_tag
abstract fix integers $ n , r \ge 4 $ and let f denote a family of r-sets of an n-element set . suppose that for every four distinct $ a , b , c , d \in f $ with $ | a \cup b \cup c \cup d | \le 2r $ , we have $ a \cap b \cap c \cap d \neq \emptyset $ . we prove that for n sufficiently large , $ | f | \le \binom{n-1}{r-1} $ , with equality only if $ \bigcap_{f \in f} f \neq \emptyset $ . this is closely related to a problem of katona and a result of frankl and füredi [ p. frankl , z. füredi , a new generalization of the erdős-ko-rado theorem , combinatorica 3 ( 3-4 ) ( 1983 ) 341-349 ] , who proved a similar statement for three sets . it has been conjectured by the author [ d. mubayi , erdős-ko-rado for three sets , j. combin . theory ser . a 113 ( 3 ) ( 2006 ) 547-550 ] that the same result holds for d sets ( instead of just four ) , where $ d \le r $ , and for all $ n \ge d r / ( d-1 ) $ story_separator_special_tag
a d-simplex is a collection of d+1 sets such that every d of them has non-empty intersection and the intersection of all of them is empty . fix $ k \ge d+2 \ge 3 $ and let $ \mathcal{g} $ be a family of k-element subsets of an n-element set that contains no d-simplex . we prove that if $ | \mathcal{g} | \ge ( 1 - o ( 1 ) ) \binom{n-1}{k-1} $ , then there is a vertex x of $ \mathcal{g} $ such that the number of sets in $ \mathcal{g} $ omitting x is $ o ( n^{k-1} ) $ ( here $ o ( 1 ) \to 0 $ as $ n \to \infty $ ) . a similar result when n/k is bounded from above was recently proved in [ 10 ] . our main result is actually stronger , and implies that if $ | \mathcal{g} | > ( 1 + \epsilon ) \binom{n-1}{k-1} $ for any $ \epsilon > 0 $ and n sufficiently large , then $ \mathcal{g} $ contains d+2 sets $ a , a_1 , \ldots , a_{d+1} $ such that the $ a_i $ form a d-simplex , and a contains an element of $ \cap_{j \neq i} a_j $ for each i . this generalizes , in asymptotic form , a recent result of verstraëte and the first author story_separator_special_tag
let $ 2 \le d \le k $ and let g be a k-uniform set system on n vertices with $ | g | > \binom{n-1}{k-1} $ . then g contains d sets with union of size at most 2k and empty intersection . this extends the erdős-ko-rado theorem and verifies a conjecture of the first author for large n .
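since several results above revolve around the d-simplex condition , here is a small python helper ( an added illustration , not code from the cited papers ) that checks the definition directly :

from itertools import combinations

def is_d_simplex(sets, d):
    # d+1 sets, every d of them intersect, all d+1 have empty intersection
    if len(sets) != d + 1:
        return False
    if set.intersection(*sets):
        return False
    return all(set.intersection(*sub) for sub in combinations(sets, d))

# the "triangle" {12, 23, 13} is a 2-simplex:
print(is_d_simplex([{1, 2}, {2, 3}, {1, 3}], 2))  # True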
story_separator_special_tag we present a conceptually simple , flexible , and general framework for object instance segmentation . our approach efficiently detects objects in an image while simultaneously generating a high-quality segmentation mask for each instance . the method , called mask r-cnn , extends faster r-cnn by adding a branch for predicting an object mask in parallel with the existing branch for bounding box recognition . mask r-cnn is simple to train and adds only a small overhead to faster r-cnn , running at 5 fps . moreover , mask r-cnn is easy to generalize to other tasks , e.g. , allowing us to estimate human poses in the same framework . we show top results in all three tracks of the coco suite of challenges , including instance segmentation , bounding-box object detection , and person keypoint detection . without bells and whistles , mask r-cnn outperforms all existing , single-model entries on every task , including the coco 2016 challenge winners . we hope our simple and effective approach will serve as a solid baseline and help ease future research in instance-level recognition . code has been made available at : https : //github.com/facebookresearch/detectron . story_separator_special_tag
a minimal k-cycle is a family of sets $ a_0 , \ldots , a_{k-1} $ for which $ a_i \cap a_j \neq \emptyset $ if and only if i = j or i and j are consecutive modulo k . let $ f_r ( n , k ) $ be the maximum size of a family of r-sets of an n element set containing no minimal k-cycle . our results imply that for fixed $ r , k \ge 3 $ , $ \ell \binom{n-1}{r-1} + o ( n^{r-2} ) \le f_r ( n , k ) \le 3 \ell \binom{n-1}{r-1} + o ( n^{r-2} ) $ , where $ \ell = \lfloor ( k-1 ) / 2 \rfloor $ . we also prove that $ f_r ( n , 4 ) = ( 1 + o ( 1 ) ) \binom{n-1}{r-1} $ as $ n \to \infty $ . this supports a conjecture of z. füredi [ hypergraphs in which all disjoint pairs have distinct unions , combinatorica 4 ( 2-3 ) ( 1984 ) 161-168 ] on families in which no two pairs of disjoint sets have the same union . story_separator_special_tag
we prove that the maximum number of edges in a k-uniform hypergraph on n vertices containing no 2-regular subhypergraph is $ \binom{n-1}{k-1} $ if $ k \ge 4 $ is even and n is sufficiently large . equality holds only if all edges contain a specific vertex v . for odd k we conjecture that this maximum is $ \binom{n-1}{k-1} + \lfloor ( n-1 ) / k \rfloor $ , with equality only for the hypergraph described above plus a maximum matching omitting v . story_separator_special_tag
let $ l > k \ge 3 $ . let the k-graph $ h_l^{(k)} $ be obtained from the complete 2-graph $ k_l^{(2)} $ by enlarging each edge with a new set of k-2 vertices . mubayi [ a hypergraph extension of turán 's theorem , j. combin . theory ser . b 96 ( 2006 ) 122-134 ] computed asymptotically the turán function $ ex ( n , h_l^{(k)} ) $ . here we determine the exact value of $ ex ( n , h_l^{(k)} ) $ for all sufficiently large n , settling a conjecture of mubayi . story_separator_special_tag
abstract an optimization complex of a graph is constructed , and on the basis of this a statement is formalised in the class of wave subgraphs introduced in this paper also , and a solution of extremal problems on an arbitrary flow graph is given . the applicability of the method to the solution of multi-iteration problems is considered using the example of the maximum matching problem of a bipartite graph . a new algorithm with a complexity estimate improving the known bound $ o ( n^{5/3} ) $ is presented . story_separator_special_tag
in this paper we adapt techniques used by ahlswede and khachatrian in their proof of the complete erdős-ko-rado theorem to show that if $ n \ge 2t+1 $ , then any pairwise t-cycle-intersecting family of permutations has cardinality less than or equal to $ ( n-t ) ! $ . furthermore , the only families attaining this size are the stabilizers of t points , that is , families consisting of all permutations having t 1-cycles in common . this is a strengthening of a previous result of ku and renshaw and supports a recent conjecture by ellis , friedgut and pilpel concerning the corresponding bound for t-intersecting families of permutations . story_separator_special_tag
obstructive sleep apnoea ( osa ) is a significant public health problem with large health and economic burden . despite the existence of effective treatment , undiagnosed osa remains a challenge . the gold standard diagnostic tool is polysomnography ( psg ) , yet this test is expensive , labour intensive , and time-consuming .
home-based , limited channel sleep study testing ( level 3 and 4 ) can advance and widen access to diagnostic services . this systematic review aims to summarise available evidence regarding the cost-effectiveness of limited channel tests compared to laboratory and home psg in diagnosing osa . eligible studies were identified across the following databases : medline , psychinfo , proquest , scopus , cinahl , cochrane , emcare and web of science . studies were screened , critically appraised and eligible data were extracted using a standardised template . relevant findings were summarised using a qualitative approach adhering to economic reporting standards . 915 non-duplicate abstracts were identified , 82 full-text articles were retrieved for review . 32 studies met the inclusion criteria and were included in the final analysis : 28 studies investigated level 3 and four assessed level 4 osa diagnostic tests
story_separator_special_tag roadmap a : topologically protected quantum computing . main objective : braiding with majoranas within 4 years in order to create qubits with the potential of very long coherence times . to demonstrate majorana braiding , nanowire crosses will need to be integrated in a superconducting circuit with a microwave resonator and several josephson junctions . in 2017 new in-plane nano-wire growth has been achieved . furthermore , detailed studies on materials and nano-fabrication workflows were performed . ( figure 1 : majorana device . ) roadmap b : fault-tolerant quantum computing . main objective : a 49 qubit device within 4 years , controlled with surface code driving . two principal types of qubits are under investigation . the transmon qubits are relatively speaking the most advanced type of qubits . in 2017 a full-stack quantum computer demonstrator has been established with a 2 qubit transmon processor ( http : //quantuminfinity.tnw.tudelft.nl/ ) , where simple quantum algorithms can be executed . the development of a 7 and a 17-qubit design was established and mobility measurements were performed . this is the smallest set of qubits required to demonstrate surface code protection . spin qubits may intrinsically have longer coherence times , and also use story_separator_special_tag
the single-user separation theorem of joint source-channel coding has been proved previously for wide classes of sources and channels . we find an information-stable source/channel pair which does not satisfy the separation theorem . new necessary and sufficient conditions for the transmissibility of a source through a channel are found , and we characterize the class of channels for which the separation theorem holds regardless of the source statistics . story_separator_special_tag
this paper is written in three main sections . in the first and third , w. w. is responsible both for the ideas and the form . the middle section , namely 2 ) communication problems at level a , is an interpretation of mathematical papers by dr. claude e. shannon of the bell telephone laboratories . dr. shannon 's work roots back , as von neumann has pointed out , to boltzmann 's observation , in some of his work on statistical physics ( 1894 ) , that entropy is related to missing information , inasmuch as it is related to the number of alternatives which remain possible to a physical system after all the macroscopically observable information concerning it has been recorded . l. szilard ( zsch . f. phys . vol . 53 , 1925 ) extended this idea to a general discussion of information in physics , and von neumann ( math . foundation of quantum mechanics , berlin , 1932 , chap . v ) treated information in quantum mechanics and particle physics . dr. shannon 's work connects more directly with certain ideas developed some twenty years ago by h. nyquist and r. v. l. hartley .
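the separation theorem above concerns when a source can be sent reliably over a channel . in the classical memoryless case the criterion is simply entropy rate versus capacity ; the toy python check below ( an added illustration restricted to that classical case , not code from the cited paper ) compares a bernoulli source with a binary symmetric channel :

from math import log2

def h(p):  # binary entropy in bits
    return 0.0 if p in (0.0, 1.0) else -p * log2(p) - (1 - p) * log2(1 - p)

def transmissible(p_source, e_channel):
    # classical separation: h(source) <= capacity = 1 - h(crossover)
    return h(p_source) <= 1 - h(e_channel)

print(transmissible(0.11, 0.02))  # True : h(0.11) ~ 0.50 <= 1 - h(0.02) ~ 0.86
print(transmissible(0.50, 0.11))  # False: 1.00 > 1 - h(0.11) ~ 0.50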
story_separator_special_tag the goal of this paper is to promote the idea that including semantic and goal-oriented aspects in future 6g networks can produce a significant leap forward in terms of system effectiveness and sustainability . semantic communication goes beyond the common shannon paradigm of guaranteeing the correct reception of each single transmitted packet , irrespective of the meaning conveyed by the packet . the idea is that , whenever communication occurs to convey meaning or to accomplish a goal , what really matters is the impact that the correct reception/interpretation of a packet is going to have on the goal accomplishment . focusing on semantic and goal-oriented aspects , and possibly combining them , helps to identify the relevant information , i.e . the information strictly necessary to recover the meaning intended by the transmitter or to accomplish a goal . combining knowledge representation and reasoning tools with machine learning algorithms paves the way to build semantic learning strategies enabling current machine learning algorithms to achieve better interpretation capabilities and contrast adversarial attacks . 6g semantic networks can bring semantic learning mechanisms at the edge of the network and , at the same time , semantic learning can help 6g networks story_separator_special_tag
wireless connectivity has traditionally been regarded as an opaque data pipe carrying messages , whose context-dependent meaning and effectiveness have been ignored . nevertheless , in emerging cyber-physical and autonomous networked systems , acquiring , processing , and sending excessive amounts of distributed real-time data , which ends up being stale or useless to the end user , will cause communication bottlenecks , increased latency , and safety issues . we envision a communication paradigm shift , which makes the semantics of information ( i.e. , the significance and usefulness of messages ) the foundation of the communication process . this entails a goal-oriented unification of information generation , transmission , and reconstruction , by taking into account process dynamics , signal sparsity , data correlation , and semantic information attributes . we apply this structurally new , synergetic approach to a communication scenario where the destination is tasked with real-time source reconstruction for the purpose of remote actuation . capitalizing on semantics-empowered sampling and communication policies , we show significant reduction in both reconstruction error and cost of actuation error , as well as in the number of uninformative samples generated . story_separator_special_tag
we present our vision for a departure from the established way of architecting and assessing communication networks , by incorporating the semantics of information for communications and control in networked systems . we define semantics of information , not as the meaning of the messages , but as their significance , possibly within a real time constraint , relative to the purpose of the data exchange . we argue that research efforts must focus on laying the theoretical foundations of a redesign of the entire process of information generation , transmission and usage in unison by developing : advanced semantic metrics for communications and control systems ; an optimal sampling theory combining signal sparsity and semantics , for real-time prediction , reconstruction and control under communication constraints and delays ; semantic compressed sensing techniques for decision making and inference directly in the compressed domain ; semantic-aware data generation , channel coding , feedback , multiple and random access schemes that reduce the volume of data and the energy consumption , increasing the number of supportable devices .
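one minimal way to phrase the semantics-empowered sampling idea above in code is to transmit a sample only when it changes the receiver 's estimate enough to matter for the task . the sketch below is an added toy illustration ; the simple threshold rule is an assumption of this sketch , not the policy of the cited papers :

def semantic_sampler(stream, threshold):
    # yield only samples that move the receiver's estimate by more than the threshold
    last_sent = None
    for t, x in enumerate(stream):
        if last_sent is None or abs(x - last_sent) > threshold:
            last_sent = x
            yield t, x

signal = [0.0, 0.02, 0.01, 0.5, 0.52, 0.9, 0.91]
print(list(semantic_sampler(signal, 0.1)))  # [(0, 0.0), (3, 0.5), (5, 0.9)]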
story_separator_special_tag this paper studies methods of quantitatively measuring semantic information in communication . we review existing work on quantifying semantic information , then investigate a model-theoretical approach for semantic data compression and reliable semantic communication . we relate our approach to the statistical measurement of information by shannon , and show that shannon 's source and channel coding theorems have semantic counterparts . story_separator_special_tag
abstract one of the main building blocks and major challenges for 5g cellular systems is the design of flexible network architectures which can be realized by the software defined networking paradigm . existing commercial cellular systems rely on closed and inflexible hardware-based architectures both at the radio frontend and in the core network . these problems significantly delay the adoption and deployment of new standards , impose significant challenges in implementing and innovation of new techniques to maximize the network capacity and accordingly the coverage , and prevent provisioning of truly-differentiated services which are able to adapt to growing , uneven and highly variable traffic patterns . in this paper , a new software-defined architecture , called softair , for next generation ( 5g ) wireless systems , is introduced . specifically , the novel ideas of network function cloudification and network virtualization are exploited to provide a scalable , flexible and resilient network architecture . moreover , the essential enabling technologies to support and manage the proposed architecture are discussed in details , including fine-grained base station decomposition , seamless incorporation of openflow , mobility-aware control traffic balancing , resource-efficient network virtualization , and distributed and collaborative story_separator_special_tag
the provision of high data rate services to mobile users combined with improved quality of experience ( i.e. , zero latency multimedia content ) drives technological evolution towards the design and implementation of fifth generation ( 5g ) broadband wireless networks . to this end , a dynamic network design approach is adopted whereby network topology is configured according to service demands . in parallel , many private companies are interested in developing their own 5g networks , also referred to as non-public networks ( npns ) , since this deployment is expected to leverage holistic production monitoring and support critical applications . in this context , this paper introduces a 5g npn architectural approach , supporting among others various key enabling technologies , such as cell densification , disaggregated ran with open interfaces , edge computing , and ai/ml-based network optimization . in the same framework , potential applications of our proposed approach in real world scenarios ( e.g. , support of mission critical services and computer vision analytics for emergencies ) are described . finally , scalability issues are also highlighted since a deployment framework of our architectural design in an additional real-world scenario related to industry 4.0 story_separator_special_tag
the 5g system is being developed and enhanced to provide unparalleled connectivity to connect everyone and everything , everywhere . the first version of the 5g system , based on the release 15 ( rel-15 ) version of the specifications developed by 3gpp , comprising the 5g core ( 5gc ) and 5g new radio ( nr ) with 5g user equipment ( ue ) , is currently being deployed commercially throughout the world both at sub-6 ghz and at mmwave frequencies . concurrently , the second phase of 5g is being standardized by 3gpp in the release 16 ( rel-16 ) version of the specifications which will be completed by march 2020 .
while the main focus of rel-15 was on enhanced mobile broadband services , the focus of rel-16 is on new features for urllc ( ultra-reliable low latency communication ) and industrial iot , including time sensitive communication ( tsc ) , enhanced location services , and support for non-public networks ( npns ) . in addition , some crucial new features , such as nr on unlicensed bands ( nr-u ) , integrated access & backhaul ( iab ) and nr vehicle-to-x ( v2x ) , are story_separator_special_tag high-performance wireless communication is crucial to the digital transformation of industrial systems , which is driven by industry 4.0 and industrial internet initiatives . among the candidate industrial wireless technologies , 5g ( cellular/mobile ) holds significant potential . the operation of private ( nonpublic ) 5g networks in industrial environments is promising to fully unleash this potential . this article provides a technical overview of private 5g networks . it introduces the concept and functional architecture of private 5g while highlighting key benefits and industrial use cases . it explores spectrum opportunities for private 5g networks and discusses design aspects of private 5g along with key challenges . finally , it examines the emerging standardization and open innovation ecosystem for private 5g . story_separator_special_tag future networks will be defined by software . in contrast to a wired network , the software defined wireless network ( sdwn ) experiences more challenges due to the fast-changing wireless channel environment . this article focuses on the state-of-the-art of sdwn architecture , including control plane virtualization strategies and semantic ontology for network resource description . in addition , a novel sdwn architecture with resource description function is proposed , along with two ontologies for the resource description of the latest wireless network . future research directions for sdwn , control strategy design , and resource description are also addressed . story_separator_special_tag with the blossoming of network functions virtualization and software-defined networks , networks are becoming more and more agile with features like resilience , programmability , and open interfaces , which help operators to launch a network or service with more flexibility and shorter time to market . recently , the concept of network slicing has been proposed to facilitate the building of a dedicated and customized logical network with virtualized resources . in this article , we introduce the concept of hierarchical nsaas , helping operators to offer customized end-to-end cellular networks as a service . moreover , the service orchestration and service level agreement mapping for quality assurance are introduced to illustrate the architecture of service management across different levels of service models . finally , we illustrate the process of network slicing as a service within operators by typical examples . with network slicing as a service , we believe that the supporting system will transform itself to a production system by merging the operation and business domains , and enabling operators to build network slices for vertical industries more agilely . story_separator_special_tag although the radio access network ( ran ) part of mobile networks offers a significant opportunity for benefiting from the use of sdn ideas , this opportunity is largely untapped due to the lack of a software-defined ran ( sd-ran ) platform . 
we fill this void with flexran , a flexible and programmable sd-ran platform that separates the ran control and data planes through a new , custom-tailored southbound api . aided by virtualized control functions and control delegation features , flexran provides a flexible control plane designed with support for real-time ran control applications , flexibility to realize various degrees of coordination among ran infrastructure entities , and programmability to adapt control over time and easier evolution to the future following sdn/nfv principles . we implement flexran as an extension to a modified version of the openairinterface lte platform , with evaluation results indicating the feasibility of using flexran under the stringent time constraints posed by the ran . to demonstrate the effectiveness of flexran as an sd-ran platform and highlight its applicability for a diverse set of use cases , we present three network services deployed over flexran focusing on interference management , mobile edge computing and story_separator_special_tag
in this paper , we study discrete-time stationary sources s with memory . the rate $ r ( \beta ) $ of the source relative to a distortion measure is compared with $ r^\ast ( \beta ) $ , the rate of the memoryless source $ s^\ast $ with the same marginal statistics as s . we show that $ r^\ast ( \beta ) - \delta \le r ( \beta ) \le r^\ast ( \beta ) $ , where $ \delta $ is a measure of the memory of the source . a number of interesting applications of these bounds are given . story_separator_special_tag
the task of manipulating correlated random variables in a distributed setting has received attention in the fields of both information theory and computer science . often shared correlations can be converted , using a little amount of communication , into perfectly shared uniform random variables . such perfect shared randomness , in turn , enables the solutions of many tasks . even the reverse conversion of perfectly shared uniform randomness into variables with a desired form of correlation turns out to be insightful and technically useful . in this article , we describe progress-to-date on such problems and lay out pertinent measures , achievability results , limits of performance , and point to new directions . story_separator_special_tag
we develop elements of a theory of cooperation and coordination in networks . rather than considering a communication network as a means of distributing information , or of reconstructing random processes at remote nodes , we ask what dependence can be established among the nodes given the communication constraints . specifically , in a network with communication rates $ \{ r_{i,j} \} $ between the nodes , we ask what is the set of all achievable joint distributions $ p ( x_1 , \ldots , x_m ) $ of actions at the nodes of the network . several networks are solved , including arbitrarily large cascade networks . distributed cooperation can be the solution to many problems such as distributed games , distributed control , and establishing mutual information bounds on the influence of one part of a physical system on another .
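for a concrete instance of the rate-distortion quantities compared above , the sketch below evaluates the standard textbook rate-distortion function of a memoryless bernoulli source under hamming distortion ( an added illustration , not code from the cited papers ) :

from math import log2

def h(p):
    return 0.0 if p in (0.0, 1.0) else -p * log2(p) - (1 - p) * log2(1 - p)

def rate_distortion_bernoulli(p, d):
    # R(D) = h(p) - h(D) for 0 <= D < min(p, 1-p), and 0 beyond
    if d >= min(p, 1 - p):
        return 0.0
    return h(p) - h(d)

print(round(rate_distortion_bernoulli(0.5, 0.11), 3))  # ~0.5 bits per symbol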
story_separator_special_tag we study the problem of empirical coordination subject to a fidelity criterion for a general set-up . we prove a result which indicates a strong connection between our framework and the framework of empirical coordination developed in [ 1 ] . it turns out that when we design codes that achieve empirical coordination according to a given distribution and subject to the fidelity criterion , it is sufficient to consider codes that produce actions of the same joint type for a class of types which is close enough to our desired distribution in some sense . story_separator_special_tag
in this paper , we review how shannon 's classical notion of capacity is not enough to characterize a noisy communication channel if the channel is intended to be used as part of a feedback loop to stabilize an unstable scalar linear system . while classical capacity is not enough , another sense of capacity ( parametrized by reliability ) called `` anytime capacity '' is necessary for the stabilization of an unstable process . the required rate is given by the log of the unstable system gain and the required reliability comes from the sense of stability desired . a consequence of this necessity result is a sequential generalization of the schalkwijk-kailath scheme for communication over the additive white gaussian noise ( awgn ) channel with feedback . in cases of sufficiently rich information patterns between the encoder and decoder , adequate anytime capacity is also shown to be sufficient for there to exist a stabilizing controller . these sufficiency results are then generalized to cases with noisy observations , delayed control actions , and without any explicit feedback between the observer and the controller . both necessary and sufficient conditions are extended to continuous time systems as well . story_separator_special_tag
a fundamental question in learning theory is the quantification of the basic tradeoff between the complexity of a model and its predictive accuracy . one valid way of quantifying this tradeoff , known as the information bottleneck , is to measure both the complexity of the model and its prediction accuracy by using shannon 's mutual information . in this paper we show that the information bottleneck framework answers a well defined and known coding problem and at the same time it provides a general relationship between complexity and prediction accuracy , measured by mutual information . we study the nature of this complexity-accuracy tradeoff and discuss some of its theoretical properties . furthermore , we present relations to classical information theoretic problems , such as rate-distortion theory , cost-capacity tradeoff and source coding with side information .
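the complexity-accuracy tradeoff above is commonly written as the lagrangian $ l = i ( x ; t ) - \beta i ( t ; y ) $ . the sketch below ( an added illustration with made-up toy joint distributions , not code from the cited papers ) evaluates this objective :

import numpy as np

def mutual_information(pxy):
    # I(X;Y) in bits from a joint distribution matrix
    px = pxy.sum(1, keepdims=True)
    py = pxy.sum(0, keepdims=True)
    mask = pxy > 0
    return float((pxy[mask] * np.log2(pxy[mask] / (px @ py)[mask])).sum())

def ib_objective(pxt, pty, beta):
    # compression I(X;T) minus beta times relevance I(T;Y)
    return mutual_information(pxt) - beta * mutual_information(pty)

pxt = np.array([[0.4, 0.1], [0.1, 0.4]])      # toy joint of X and T
pty = np.array([[0.45, 0.05], [0.05, 0.45]])  # toy joint of T and Y
print(round(ib_objective(pxt, pty, beta=2.0), 3))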
story_separator_special_tag in this theory paper , we investigate training deep neural networks ( dnns ) for classification via minimizing the information bottleneck ( ib ) functional . we show that the resulting optimization problem suffers from two severe issues : first , for deterministic dnns , either the ib functional is infinite for almost all values of network parameters , making the optimization problem ill-posed , or it is piecewise constant , hence not admitting gradient-based optimization methods . second , the invariance of the ib functional under bijections prevents it from capturing properties of the learned representation that are desirable for classification , such as robustness and simplicity . we argue that these issues are partly resolved for stochastic dnns , dnns that include a ( hard or soft ) decision rule , or by replacing the ib functional with related , but more well-behaved cost functions . we conclude that recent successes reported about training dnns using the ib framework must be attributed to such solutions . as a side effect , our results indicate limitations of the ib framework for the analysis of dnns . we also note that rather than trying to repair the inherent problems in story_separator_special_tag
it is well-known that the information bottleneck method and rate distortion theory are related . here it is described how the information bottleneck can be considered as rate distortion theory for a family of probability measures where information divergence is used as distortion measure . it is shown that the information bottleneck method has some properties that are not shared with rate distortion theory based on any other divergence measure . in this sense the information bottleneck method is unique . story_separator_special_tag
we present an extension of the well-known information bottleneck framework , called conditional information bottleneck , which takes negative relevance information into account by maximizing a conditional mutual information score . this general approach can be utilized in a data mining context to extract relevant information that is at the same time novel relative to known properties or structures of the data . we present possible applications of the conditional information bottleneck in information retrieval and text mining for recovering non-redundant clustering solutions , including experimental results on the webkb data set which validate the approach . story_separator_special_tag
this paper investigates the problem of coordinating several agents through their actions , focusing on an asymmetric observation structure with two agents . specifically , one agent knows the past , present , and future realizations of a state that affects a common payoff function , while the other agent knows either the past realizations of the state or nothing about it . in both cases , the second agent is assumed to have strictly causal observations of the first agent 's actions , which enables the two agents to coordinate . these scenarios are applied to distributed power control ; the key idea is that a transmitter may embed information about the wireless channel state into its transmit power levels so that an observation of these levels , e.g. , the signal-to-interference-plus-noise ratio , allows the other transmitter to coordinate its power levels . the main contributions of this paper are twofold . first , we provide a characterization of the set of feasible average payoffs when the agents repeatedly take long sequences of actions and the realizations of the system state are i.i.d . second , we exploit these results in the context of distributed power control and introduce the story_separator_special_tag
in this chapter , we describe several recent results on the problem of coordination among agents when they have partial information about a state which affects their utility , payoff , or reward function . the state is not controlled and rather evolves according to an independent and identically distributed ( i.i.d . ) random process . this random process might represent various phenomena . in control , it may represent a perturbation or model uncertainty . in the context of smart grids , it may represent a forecasting noise ( beaude et al. , 6th ieee international conference on smart grid communications ( smartgridcomm 2015 ) , miami , florida , 2015 , [ 3 ] ) . in wireless communications , it may represent the state of the global communication channel . the approach used is to exploit shannon theory to characterize the achievable long-term utility region .
two scenarios are described . in the first scenario , the number of agents is arbitrary , and the agents have causal knowledge about the state . in the second scenario , there are only two agents , and the agents have some knowledge about the future of the state story_separator_special_tag describes the design of generic virtual instruments used for real-time experimentation at polytechnic university 's control engineering laboratory in a remote-access environment . these instruments can be freely downloaded and the remote user can access the laboratory facilities from anywhere at any time . our internet-accessed remote laboratory is based on a client/server computer configuration . the server , situated near the experiment , transfers to it the received command signals transmitted by the client . the client locally computes the command signal based on the reference waveform and the transmitted system response . the remote user can select the transmission protocol , switch between asynchronous and synchronous sampling , use either a batch or a recursive data transfer mode , and view the experimental testbed . our approach is distinct from others in that it offers more flexibility and responsibility to the client side , since the remote user compiles and executes the controller locally . issues concerned with network reliability , dynamic delays caused by internet traffic , concurrent user access , and limited computing power have been addressed . the designed set of experiments is the first step toward our remote control laboratory . story_separator_special_tag distance learning is an emerging paradigm where students , teachers , and equipment may be at different geographic locations . at the college of engineering of oregon state university , we have developed and demonstrated an innovative real-time remote-access control engineering teaching laboratory . our remote laboratory uses the internet to provide complete access to the usual laboratory facilities . in addition , remote power control , network reliability , and safety features are integrated into our experimental hardware and software design . remotely located students are able to develop , compile , debug , and run controllers in real time on the experiments in our laboratory . students can watch the experiment in real time from a remote workstation , hear the sounds in the laboratory , and interact with other laboratory users . students can effectively use the laboratory from anywhere on the internet . story_separator_special_tag this paper addresses feedback stabilization problems for linear time-invariant control systems with saturating quantized measurements . we propose a new control design methodology , which relies on the possibility of changing the sensitivity of the quantizer while the system evolves . the equation that describes the evolution of the sensitivity with time ( discrete rather than continuous in most cases ) is interconnected with the given system ( either continuous or discrete ) , resulting in a hybrid system . when applied to systems that are stabilizable by linear time-invariant feedback , this approach yields global asymptotic stability . story_separator_special_tag there is an increasing interest in studying control systems employing multiple sensors and actuators that are geographically distributed . communication is an important component of these distributed and networked control systems . 
hence , there is a need to understand the interactions between the control components and the communication components of the distributed system . in this paper , we formulate a control problem with a communication channel connecting the sensor to the controller . our task involves designing the channel encoder and channel decoder along with the controller to achieve different control objectives . we provide upper and lower bounds on the channel rate required to achieve these different control objectives . in many cases , these bounds are tight . in doing so , we characterize the `` information complexity '' of different control objectives . story_separator_special_tag feedback control with limited data rates is an emerging area which incorporates ideas from both control and information theory . a fundamental question it poses is how low the closed-loop data rate can be made before a given dynamical system is impossible to stabilize by any coding and control law . analogously to source coding , this defines the smallest error-free data rate sufficient to achieve `` reliable '' control , and explicit expressions for it have been derived for linear time-invariant systems without disturbances . in this paper , the more general case of finite-dimensional linear systems with process and observation noise is considered , the object being mean square state stability . by inductive arguments employing the entropy power inequality of information theory , and a new quantizer error bound , an explicit expression for the infimum stabilizing data rate is derived , under very mild conditions on the initial state and noise probability distributions . story_separator_special_tag we study the stabilizability of uncertain stochastic systems in the presence of finite capacity feedback . motivated by the structure of communication networks , we consider a variable rate digital link . such link is used to transmit state measurements between the plant and the controller . we derive necessary and sufficient conditions for internal and external stabilizability of the feedback loop . in accordance with previous publications , stabilizability of unstable plants is possible if and only if the link 's average transmission rate is above a positive critical value . in addition , stability in the presence of uncertainty in the plant is analyzed using a small-gain argument . we also show that robustness can be increased at the expense of a higher transmission rate . story_separator_special_tag in this paper , we present a data rate theorem for stabilization of a linear , discrete-time , unstable dynamical system with arbitrarily large disturbances , over a noiseless communication channel with time-varying rates . necessary and sufficient conditions for stabilization are derived , their implications and relationships with related results in the literature are discussed . story_separator_special_tag this paper analyzes distributed control protocols for first- and second-order networked dynamical systems . we propose a class of nonlinear consensus controllers where the input of each agent can be written as a product of a nonlinear gain , and a sum of nonlinear interaction functions . by using integral lyapunov functions , we prove the stability of the proposed control protocols , and explicitly characterize the equilibrium set . we also propose a distributed proportional-integral ( pi ) controller for networked dynamical systems . the pi controllers successfully attenuate constant disturbances in the network . 
we prove that agents with single-integrator dynamics are stable for any integral gain , and give an explicit tight upper bound on the integral gain below which the system is stable for agents with double-integrator dynamics . throughout the paper we highlight some possible applications of the proposed controllers by realistic simulations of autonomous satellites , power systems and building temperature control . story_separator_special_tag
cyber-physical systems ( cpss ) resulting from the interconnection of computational , communication , and control ( cyber ) devices with physical processes are widespread in our society . in several cps applications it is crucial to minimize the communication burden , while still providing desirable closed-loop control properties . to this effect , a promising approach is to embrace the recently proposed event-triggered control paradigm , in which the transmission times are chosen based on well-defined events , using state information . however , few general event-triggered control methods guarantee closed-loop improvements over traditional periodic transmission strategies . here , we provide a new class of event-triggered controllers for linear systems which guarantee better quadratic performance than traditional periodic time-triggered control using the same average transmission rate . in particular , our main results explicitly quantify the obtained performance improvements for quadratic average cost problems . the proposed controllers are inspired by rollout ideas in the context of dynamic programming . story_separator_special_tag
wireless networked control systems ( wncss ) provide a key enabling technique for industrial internet of things ( iiot ) . however , in the literature of wncss , most of the research focuses on the control perspective and has considered oversimplified models of wireless communications that do not capture the key parameters of a practical wireless communication system , such as latency , data rate , and reliability . in this article , we focus on a wncs , where a controller transmits quantized and encoded control codewords to a remote actuator through a wireless channel , and adopt a detailed model of the wireless communication system , which jointly considers the interrelated communication parameters . we derive the stability region of the wncs . if and only if the tuple of the communication parameters lies in the region , the average cost function , i.e. , a performance metric of the wncs , is bounded . we further obtain a necessary and sufficient condition under which the stability region is $ n $ -bounded , where $ n $ is the control codeword blocklength . we also analyze the average cost function of the wncs . such analysis
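a minimal toy version of the event-triggered idea above , added as an illustration ( the scalar plant , the threshold rule and all parameters are assumptions of this sketch , not taken from the cited papers ) :

import random

def run(threshold, a=0.9, steps=1000):
    # transmit the state only when the estimation error exceeds the threshold
    x, x_hat, sends, cost = 1.0, 1.0, 0, 0.0
    for _ in range(steps):
        if abs(x - x_hat) > threshold:   # event: refresh controller's estimate
            x_hat, sends = x, sends + 1
        u = -a * x_hat                   # certainty-equivalent control
        x = a * x + u + random.gauss(0.0, 0.1)
        x_hat = a * x_hat + u            # model-based prediction between events
        cost += x * x
    return sends / steps, cost / steps

print(run(0.0))   # send every step : lowest cost, most transmissions
print(run(0.3))   # send rarely    : fewer transmissions, higher cost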
story_separator_special_tag the paper considers a wireless networked control system ( wncs ) , where a controller sends packets carrying control information to an actuator through a wireless channel to control a physical process for industrial-control applications . in most of the existing work on wncss , the packet length for transmission is fixed . however , from channel-coding theory , if a message is encoded into a longer codeword , its reliability is improved at the expense of longer delay . both delay and reliability have great impact on the control performance . such a fundamental delay-reliability tradeoff has rarely been considered in wncss . in this paper , we propose a novel wncs , where the controller adaptively changes the packet length for control based on the current status of the physical process . we formulate a decision-making problem and find the optimal variable-length packet-transmission policy for minimizing the long-term average cost of the wncss . we derive a necessary and sufficient condition on the existence of the optimal policy in terms of the transmission reliabilities with different packet lengths and the control system parameter . story_separator_special_tag
consider a distributed control problem with a communication channel connecting the observer of a linear stochastic system to the controller . the goal of the controller is to minimize a quadratic cost function . the most basic special case of that cost function is the mean-square deviation of the system state from the desired state . we study the fundamental tradeoff between the communication rate r bits/sec and the limsup of the expected cost b , and show a lower bound on the rate necessary to attain b . the bound applies as long as the system noise has a probability density function . if the target cost b is not too large , that bound can be closely approached by a simple lattice quantization scheme that only quantizes the innovation , that is , the difference between the controller 's belief about the current state and the true state . story_separator_special_tag
preface introduction topological entropy , observability , robustness , stabilizability , and optimal control stabilization of linear multiple sensor systems via limited capacity communication channels detectability and output feedback stabilizability of nonlinear systems via limited capacity communication channels robust set-valued state estimation via limited capacity communication channels an analog of shannon information theory : state estimation and stabilization of linear noiseless plants via noisy discrete channels an analog of shannon information theory : state estimation and stabilization of linear noisy plants via noisy discrete channels an analog of shannon information theory : stable in probability control and state estimation of linear noisy plants via noisy discrete channels decentralized stabilization of linear systems via limited capacity communication networks h-infinity state estimation via communication channels kalman state estimation and optimal control based on asynchronously and irregularly delayed measurements optimal computer control via asynchronous communication channels linear-quadratic gaussian optimal control via limited capacity communication channels kalman state estimation in networked systems with asynchronous communication channels and switched sensors robust kalman state estimation with switched sensors appendix a : proof of proposition 7.6.13 appendix b : some properties of square ensembles of matrices appendix c : discrete kalman filter and linear-quadratic gaussian optimal control story_separator_special_tag
in the past years , the problem of stabilising linear dynamical systems with low feedback data rates has been intensively investigated . a particular focus has been the characterisation of the infimum data rate for stabilisability , which specifies the smallest rate , in bits per unit time , at which information can circulate in a stable feedback loop . this paper extends this line of research to the case of fully-observed , finite-dimensional , linear systems without process noise but with control-independent , markov parameters .
unlike previous formulations , the coding alphabet is permitted to be random and time-varying via a possible dependence on the observed markov modes . using quantisation techniques and real jordan forms , it is shown that the smallest asymptotic mean data rate for stabilisability in r-th absolute output moment , over all coding and control schemes , is given by an exponent which measures the asymptotic mean growth rate of unstable eigenspace volumes . an explicit formula for it is obtained in the case of antistable dynamics . for scalar systems , this expression is quite different from an earlier one derived assuming a constant alphabet , in particular being independent of the story_separator_special_tag
abstract this paper studies the stabilisability and the performance of stochastic disturbance attenuation of a markov jump linear system whose feedback channel is subject to an additive white gaussian noise . first an inequality of differential entropy of random vectors under markov switching is presented . then by the concept of entropy power and the theory of information , a necessary condition to stabilise the system is obtained . this requires that the signal-to-noise ratio in the feedback channel is bigger than a specified value . furthermore , to evaluate the performance of disturbance attenuation , a lower bound of the maximum fluctuation of the system state is presented . story_separator_special_tag
this paper deals with the problem of mean square ( ms ) stabilization for a markov jump linear system ( mjls ) over a gaussian relay channel subject to an average power constraint . based on the theory of entropy , a necessary condition of channel capacity for ms stabilization is proposed . then by using the schalkwijk-kailath encoding scheme , a sufficient condition of ms stabilization is derived for a scalar mjls . finally , a numerical example is given to show the effectiveness and correctness of the theoretical work . story_separator_special_tag
in this paper , we study the second order stabilization problem of markovian jump linear systems ( mjlss ) with logarithmically quantized state feedbacks . we give explicit constructions of the stabilizing logarithmic quantizer and controller . we also present a semi-convex way to determine the coarsest stabilizing quantization density . in addition , we show that the problem of stabilizing a linear time-invariant ( lti ) system over a lossy channel can be viewed as a special example of the framework developed here . a contribution of the work is a simultaneous treatment of finite bandwidth constraints ( logarithmic quantization ) and latency in feedback channels . story_separator_special_tag
the problem of optimal sequential vector quantization of markov sources is cast as a stochastic control problem with partial observations and constraints , leading to useful existence results for optimal codes and their characterizations .
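the data-rate results above have a classical scalar prototype : the noiseless plant $ x_{t+1} = a x_t + u_t $ is stabilisable over an r-bit noiseless channel if and only if $ r > \log_2 | a | $ . the toy below ( an added illustration of the standard zooming-quantizer argument , not code from the cited papers ) makes the threshold visible :

def quantized_loop(a, bits, steps=50, x0=0.9):
    # maintain the invariant |x| <= box; send the index of x's cell each step
    x, box = x0, 1.0
    for _ in range(steps):
        levels = 2 ** bits
        cell = 2 * box / levels
        idx = min(levels - 1, int((x + box) / cell))   # transmitted symbol
        q = -box + cell * (idx + 0.5)                  # decoder's estimate of x
        x = a * (x - q)                                # plant under u = -a * q
        box = abs(a) * cell / 2                        # new certified bound
    return abs(x)

print(quantized_loop(3.0, bits=2) < 1e-6)  # True : 2 > log2(3) ~ 1.585
print(quantized_loop(3.0, bits=1) < 1e-6)  # False: 1 < log2(3)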
story_separator_special_tag a class of iterative aggregation algorithms for solving infinite horizon dynamic programming problems is proposed . the idea is to interject aggregation iterations in the course of the usual successive approximation method . an important feature that sets this method apart from earlier ones is that the aggregate groups of states change adaptively from one aggregation iteration to the next , depending on the progress of the computation . this allows acceleration of convergence in difficult problems involving multiple ergodic classes for which methods using fixed groups of aggregate states are ineffective . no knowledge of special problem structure is utilized by the algorithms . story_separator_special_tag
we propose a new aggregation framework for approximate dynamic programming , which provides a connection with rollout algorithms , approximate policy iteration , and other single and multistep lookahead methods . the central novel characteristic is the use of a bias function $ v $ of the state , which biases the values of the aggregate cost function towards their correct levels . the classical aggregation framework is obtained when $ v \equiv 0 $ , but our scheme works best when $ v $ is a known reasonably good approximation to the optimal cost function $ j^\ast $ . when $ v $ is equal to the cost function $ j_\mu $ of some known policy $ \mu $ and there is only one aggregate state , our scheme is equivalent to the rollout algorithm based on $ \mu $ ( i.e. , the result of a single policy improvement starting with the policy $ \mu $ ) . when $ v = j_\mu $ and there are multiple aggregate states , our aggregation approach can be used as a more powerful form of improvement of $ \mu $ . thus , when combined with an story_separator_special_tag
in this paper we study a class of modified policy iteration algorithms for solving markov decision problems . these correspond to performing policy evaluation by successive approximations . we discuss the relationship of these algorithms to newton-kantorovich iteration and demonstrate their convergence . we show that all of these algorithms converge at least as quickly as successive approximations and obtain estimates of their rates of convergence . an analysis of the computational requirements of these algorithms suggests that they may be appropriate for solving problems with either large numbers of actions , large numbers of states , sparse transition matrices , or small discount rates . these algorithms are compared to policy iteration , successive approximations , and gauss-seidel methods on large randomly generated test problems . story_separator_special_tag
this paper proposes bounds and action elimination procedures for policy iteration and modified policy iteration . procedures to eliminate nonoptimal actions for one iteration and for all subsequent iterations are presented . the implementation of these procedures is discussed and encouraging computational results are presented .
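modified policy iteration as studied above interpolates between value iteration ( one evaluation sweep per improvement ) and policy iteration ( exact evaluation ) . the compact sketch below is an added illustration on a made-up two-state mdp , not code from the cited papers :

import numpy as np

def modified_policy_iteration(P, R, gamma=0.9, m=5, iters=100):
    # P: (A, S, S) transition probabilities, R: (A, S) one-stage rewards
    A, S, _ = P.shape
    v = np.zeros(S)
    for _ in range(iters):
        pi = (R + gamma * (P @ v)).argmax(axis=0)  # greedy improvement
        Ppi = P[pi, np.arange(S)]                  # (S, S) transitions under pi
        Rpi = R[pi, np.arange(S)]                  # (S,) rewards under pi
        for _ in range(m):                         # m evaluation sweeps:
            v = Rpi + gamma * Ppi @ v              # m=1 is value iteration,
    return v, pi                                   # m -> inf is policy iteration

P = np.array([[[0.9, 0.1], [0.2, 0.8]],
              [[0.5, 0.5], [0.0, 1.0]]])
R = np.array([[1.0, 0.0],
              [2.0, 0.5]])
v, pi = modified_policy_iteration(P, R)
print(pi, np.round(v, 2))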
story_separator_special_tag abstract a manufacturer places orders periodically for products that are shipped from a supplier . during transit , orders get damaged with some probability , that is , the order is subject to random yield . the manufacturer has the option to track orders to receive information on damages and to potentially place additional orders . without tracking , the manufacturer identifies potential damages after the order has arrived . with tracking , the manufacturer is informed about the damage when it occurs and can respond to this information . we model the problem as a dynamic program with stochastic demand , tracking cost , and random yield . for small problem sizes , we provide an adjusted value iteration algorithm that finds the optimal solution . for moderate problem sizes , we propose a novel aggregation-based approximate dynamic programming ( adp ) algorithm and provide solutions for instances for which it is not possible to obtain optimal solutions . for large problem sizes , we develop a heuristic that takes tracking costs into account . in a computational study , we analyze the performance of our approaches . we observe that our adp algorithm achieves savings of up to story_separator_special_tag
abstract methods of successive approximation for solving linear systems or minimization problems are accelerated by aggregation-disaggregation processes . these processes , which modify the iterates being produced , are characterized by a two directional flow of information between the original higher dimensional problem and a lower dimensional aggregated version . this technique is characterized by means of galerkin approximations , and this in turn permits analysis of the method . a deterministic as well as probabilistic analysis is given of a number of specific aggregation-disaggregation examples . numerical experiments have been performed , and these confirm the analysis and demonstrate the acceleration . story_separator_special_tag
the article poses a general model for optimal control subject to information constraints , motivated in part by recent work of sims and others on information-constrained decision making by economic agents . in the average-cost optimal control framework , the general model introduced in this paper reduces to a variant of the linear-programming representation of the average-cost optimal control problem , subject to an additional mutual information constraint on the randomized stationary policy . the resulting optimization problem is convex and admits a decomposition based on the bellman error , which is the object of study in approximate dynamic programming . the theory is illustrated through the example of the information-constrained linear quadratic gaussian control problem . some results on the infinite-horizon discounted-cost criterion are also presented . story_separator_special_tag
in the design of closed-loop networked control systems ( ncss ) , induced transmission delay between sensors and the control station is an often present issue which compromises control performance and may even cause instability . a very relevant scenario in which network-induced delay needs to be investigated is costly usage of communication resources . more precisely , advanced communication technologies , e.g. , 5g , are capable of offering latency-varying information exchange for different prices . therefore , induced delay becomes a decision variable . it is then the matter of decision maker 's willingness to either pay the required cost to have low-latency access to the communication resource , or delay the access at a reduced price . in this letter , we consider an optimal price-based bi-variable decision making problem for a single-loop ncs with a stochastic linear time-invariant system . assuming that communication incurs cost such that transmission with shorter delay is more costly , a decision maker determines the switching strategy between communication links of different delays such that an optimal balance between the control performance and the communication cost is maintained . in this letter , we show that , under mild assumptions on the available story_separator_special_tag
this paper studies the optimization of observation channels ( stochastic kernels ) in partially observed stochastic control problems . in particular , existence , continuity , and convexity properties are investigated . continuity properties of the optimal cost in channels are explored under total variation , setwise convergence and weak convergence .
sufficient conditions for sequential compactness under total variation and setwise convergence are presented . it is shown that the optimization is concave in observation channels . this implies that the optimization problem is non-convex in quantization/coding policies for a class of networked control problems . furthermore , the paper explains why a class of decentralized control problems , under the non-classical information structure , is non-convex when signaling is present . story_separator_special_tag abstract a constraint that actions can depend on observations only through a communication channel with finite shannon capacity is shown to be able to play a role very similar to that of a signal extraction problem or an adjustment cost in standard control problems . the resulting theory looks enough like familiar dynamic rational expectations theories to suggest that it might be useful and practical , while the implications for policy are different enough to be interesting . story_separator_special_tag a collaborative task is assigned to a multiagent system ( mas ) in which agents are allowed to communicate . the mas runs over an underlying markov decision process and its task is to maximize the averaged sum of discounted one-stage rewards . although knowing the global state of the environment is necessary for the optimal action selection of the mas , agents are limited to individual observations . the inter-agent communication can tackle the issue of local observability , however , the limited rate of the inter-agent communication prevents the agent from acquiring the precise global state information . to overcome this challenge , agents need to communicate their observations in a compact way such that the mas compromises the minimum possible sum of rewards . we show that this problem is equivalent to a form of rate-distortion problem which we call the task-based information compression . we introduce two schemes for task-based information compression ( i ) learning-based information compression ( lbic ) which leverages reinforcement learning to compactly represent the observation space of the agents , and ( ii ) state aggregation for information compression ( saic ) , for which a state aggregation algorithm is analytically story_separator_special_tag a collaborative task is assigned to a multiagent system ( mas ) in which agents are allowed to communicate . the mas runs over an underlying markov decision process and its task is to maximize the averaged sum of discounted one-stage rewards . although knowing the global state of the environment is necessary for the optimal action selection of the mas , agents are limited to individual observations . the inter-agent communication can tackle the issue of local observability , however , the limited rate of the inter-agent communication prevents the agents from acquiring the precise global state information . to overcome this challenge , agents need to communicate their observations in a compact way such that the mas compromises the minimum possible sum of rewards . we show that this problem is equivalent to a form of rate-distortion problem which we call the task-based information compression . state aggregation for information compression ( saic ) is introduced here to perform the task-based information compression . the saic is shown , conditionally , to be capable of achieving the optimal performance in terms of the attained sum of discounted rewards . 
the proposed algorithm is applied to a rendezvous problem story_separator_special_tag consider a collaborative task carried out by two autonomous agents that can communicate over a noisy channel . each agent is only aware of its own state , while the accomplishment of the task depends on the value of the joint state of both agents . as an example , both agents must simultaneously reach a certain location of the environment , while only being aware of their own positions . assuming the presence of feedback in the form of a common reward to the agents , a conventional approach would apply separately : ( i ) an off-the-shelf coding and decoding scheme in order to enhance the reliability of the communication of the state of one agent to the other ; and ( ii ) a standard multiagent reinforcement learning strategy to learn how to act in the resulting environment . in this work , it is argued that the performance of the collaborative task can be improved if the agents learn how to jointly communicate and act . in particular , numerical results for a baseline grid world example demonstrate that the jointly learned policy carries out compression and unequal error protection by leveraging information about the action story_separator_special_tag we discuss the temporal-difference learning algorithm , as applied to approximating the cost-to-go function of an infinite-horizon discounted markov chain . the algorithm we analyze updates parameters of a linear function approximator online during a single endless trajectory of an irreducible aperiodic markov chain with a finite or infinite state space . we present a proof of convergence ( with probability one ) , a characterization of the limit of convergence , and a bound on the resulting approximation error . furthermore , our analysis is based on a new line of reasoning that provides new intuition about the dynamics of temporal-difference learning . in addition to proving new and stronger positive results than those previously available , we identify the significance of online updating and potential hazards associated with the use of nonlinear function approximators . first , we prove that divergence may occur when updates are not based on trajectories of the markov chain . this fact reconciles positive and negative results that have been discussed in the literature , regarding the soundness of temporal-difference learning . second , we present an example illustrating the possibility of divergence when temporal difference learning is used in the presence of story_separator_special_tag we discuss a relatively new class of dynamic programming methods for control and sequential decision making under uncertainty . these methods have the potential of dealing with problems that for a long time were thought to be intractable due to either a large state space or the lack of an accurate model . the methods discussed combine ideas from the fields of neural networks , artificial intelligence , cognitive science , simulation , and approximation theory . we delineate the major conceptual issues , survey a number of recent developments , describe some computational experience , and address a number of open questions . story_separator_special_tag this paper introduces nfq , an algorithm for efficient and effective training of a q-value function represented by a multi-layer perceptron . based on the principle of storing and reusing transition experiences , a model-free , neural network based reinforcement learning algorithm is proposed . 
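the temporal-difference abstract above analyzes td ( 0 ) with a linear function approximator updated online along a single trajectory of a markov chain . the following minimal sketch applies that semi-gradient update to a toy random-walk chain ; the chain , the one-hot features , and the step size are illustrative assumptions , not details taken from the paper .

    import numpy as np

    # td(0) with a linear function approximator, updated online along a single
    # trajectory of a finite markov chain (toy 5-state random walk).
    rng = np.random.default_rng(0)
    n_states, gamma, alpha = 5, 0.9, 0.05
    phi = np.eye(n_states)        # one-hot features (the tabular special case)
    theta = np.zeros(n_states)    # weights; v(s) is approximated by phi[s] @ theta

    s = 2
    for _ in range(50_000):
        s_next = int(np.clip(s + rng.choice([-1, 1]), 0, n_states - 1))
        r = 1.0 if s_next == n_states - 1 else 0.0
        # td error bootstraps on the current estimate of the next state's value
        delta = r + gamma * phi[s_next] @ theta - phi[s] @ theta
        theta += alpha * delta * phi[s]     # semi-gradient parameter update
        s = s_next

    print(np.round(theta, 2))   # approximate discounted values under the random policy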
the method is evaluated on three benchmark problems . it is shown empirically that reasonably few interactions with the plant are needed to generate control policies of high quality . story_separator_special_tag we consider continuous-state , continuous-action batch reinforcement learning where the goal is to learn a good policy from a sufficiently rich trajectory generated by some policy . we study a variant of fitted q-iteration , where the greedy action selection is replaced by searching for a policy in a restricted set of candidate policies by maximizing the average action values . we provide a rigorous analysis of this algorithm , proving what we believe is the first finite-time bound for value-function-based algorithms for continuous state and action problems . story_separator_special_tag we address the problem of computing the optimal q-function in markov decision problems with an infinite state space . we analyze the convergence properties of several variations of q-learning when combined with function approximation , extending the analysis of td-learning in ( tsitsiklis & van roy , 1996a ) to stochastic control settings . we identify conditions under which such approximate methods converge with probability 1 . we conclude with a brief discussion on the general applicability of our results and compare them with several related works . story_separator_special_tag we present the first deep learning model to successfully learn control policies directly from high-dimensional sensory input using reinforcement learning . the model is a convolutional neural network , trained with a variant of q-learning , whose input is raw pixels and whose output is a value function estimating future rewards . we apply our method to seven atari 2600 games from the arcade learning environment , with no adjustment of the architecture or learning algorithm . we find that it outperforms all previous approaches on six of the games and surpasses a human expert on three of them . story_separator_special_tag the theory of reinforcement learning provides a normative account , deeply rooted in psychological and neuroscientific perspectives on animal behaviour , of how agents may optimize their control of an environment . to use reinforcement learning successfully in situations approaching real-world complexity , however , agents are confronted with a difficult task : they must derive efficient representations of the environment from high-dimensional sensory inputs , and use these to generalize past experience to new situations . remarkably , humans and other animals seem to solve this problem through a harmonious combination of reinforcement learning and hierarchical sensory processing systems , the former evidenced by a wealth of neural data revealing notable parallels between the phasic signals emitted by dopaminergic neurons and temporal difference reinforcement learning algorithms . while reinforcement learning agents have achieved some successes in a variety of domains , their applicability has previously been limited to domains in which useful features can be handcrafted , or to domains with fully observed , low-dimensional state spaces .
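the nfq and fitted q-iteration abstracts above both recast value learning as repeated supervised regression on a batch of stored transitions . a minimal sketch of that scheme , with a linear least-squares fit standing in for the multi-layer perceptron ( the toy chain and the choice of regressor are assumptions made for illustration ) :

    import numpy as np

    # fitted q-iteration: repeated supervised regression of q onto bellman
    # targets built from a fixed batch of stored transitions (s, a, r, s').
    rng = np.random.default_rng(1)
    n_s, n_a, gamma = 6, 2, 0.95

    # toy batch: action 1 moves right, action 0 moves left, reward at the right end
    batch = []
    for _ in range(2000):
        s, a = int(rng.integers(n_s)), int(rng.integers(n_a))
        s2 = min(n_s - 1, s + 1) if a == 1 else max(0, s - 1)
        batch.append((s, a, 1.0 if s2 == n_s - 1 else 0.0, s2))

    def features(s, a):
        x = np.zeros(n_s * n_a)
        x[s * n_a + a] = 1.0      # one-hot over (state, action) pairs
        return x

    w = np.zeros(n_s * n_a)
    X = np.array([features(s, a) for s, a, _, _ in batch])
    for _ in range(60):           # each sweep is one regression on fresh targets
        y = np.array([r + gamma * max(features(s2, b) @ w for b in range(n_a))
                      for _, _, r, s2 in batch])
        w, *_ = np.linalg.lstsq(X, y, rcond=None)

    print([int(np.argmax([features(s, b) @ w for b in range(n_a)])) for s in range(n_s)])
    # greedy policy: action 1 (move right) in every state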
here we use recent advances in training deep neural networks to develop a novel artificial agent , termed a deep q-network , that can learn successful policies directly from high-dimensional sensory story_separator_special_tag decision-theoretic planning is a popular approach to sequential decision making problems , because it treats uncertainty in sensing and acting in a principled way . in single-agent frameworks like mdps and pomdps , planning can be carried out by resorting to q-value functions : an optimal q-value function q * is computed in a recursive manner by dynamic programming , and then an optimal policy is extracted from q * . in this paper we study whether similar q-value functions can be defined for decentralized pomdp models ( dec-pomdps ) , and how policies can be extracted from such value functions . we define two forms of the optimal q-value function for dec-pomdps : one that gives a normative description as the q-value function of an optimal pure joint policy and another one that is sequentially rational and thus gives a recipe for computation . this computation , however , is infeasible for all but the smallest problems . therefore , we analyze various approximate q-value functions that allow for efficient computation . we describe how they relate , and we prove that they all provide an upper bound to the optimal q-value function q * . finally , unifying story_separator_special_tag planning in single-agent models like mdps and pomdps can be carried out by resorting to q-value functions : a ( near- ) optimal q-value function is computed in a recursive manner by dynamic programming , and then a policy is extracted from this value function . in this paper we study whether similar q-value functions can be defined in decentralized pomdp models ( dec-pomdps ) , what the cost of computing such value functions is , and how policies can be extracted from such value functions . using the framework of bayesian games , we argue that searching for the optimal q-value function may be as costly as exhaustive policy search . then we analyze various approximate q-value functions that allow efficient computation . finally , we describe a family of algorithms for extracting policies from such q-value functions . story_separator_special_tag decentralized partially observable markov decision processes ( dec-pomdps ) constitute a generic and expressive framework for multiagent planning under uncertainty . however , planning optimally is difficult because solutions map local observation histories to actions , and the number of such histories grows exponentially in the planning horizon . in this work , we identify a criterion that allows for lossless clustering of observation histories : i.e. , we prove that when two histories satisfy the criterion , they have the same optimal value and thus can be treated as one . we show how this result can be exploited in optimal policy search and demonstrate empirically that it can provide a speed-up of multiple orders of magnitude , allowing the optimal solution of significantly larger problems . we also perform an empirical analysis of the generality of our clustering method , which suggests that it may also be useful in other ( approximate ) dec-pomdp solution methods . story_separator_special_tag we explore deep reinforcement learning methods for multi-agent domains . 
we begin by analyzing the difficulty of traditional algorithms in the multi-agent case : q-learning is challenged by an inherent non-stationarity of the environment , while policy gradient suffers from a variance that increases as the number of agents grows . we then present an adaptation of actor-critic methods that considers action policies of other agents and is able to successfully learn policies that require complex multi-agent coordination . additionally , we introduce a training regimen utilizing an ensemble of policies for each agent that leads to more robust multi-agent policies . we show the strength of our approach compared to existing methods in cooperative as well as competitive scenarios , where agent populations are able to discover various physical and informational coordination strategies . story_separator_special_tag this paper targets solving distributed machine learning problems such as federated learning in a communication-efficient fashion . a class of new stochastic gradient descent ( sgd ) approaches has been developed , which can be viewed as the stochastic generalization of the recently developed lazily aggregated gradient ( lag ) method , justifying the name lasg . lag adaptively predicts the contribution of each round of communication and chooses only the significant ones to perform . it saves communication while maintaining the rate of convergence . however , lag only works with deterministic gradients , and applying it to stochastic gradients yields poor performance . the key components of lasg are a set of new rules tailored for stochastic gradients that can be implemented either to save download , upload , or both . the new algorithms adaptively choose between fresh and stale stochastic gradients and have convergence rates comparable to the original sgd . lasg achieves impressive empirical performance , typically saving total communication by an order of magnitude . story_separator_special_tag this paper develops algorithms for decentralized machine learning over a network , where data are distributed , computation is localized , and communication is restricted between neighbors . a line of recent research in this area focuses on improving both computation and communication complexities . the methods ssda and msda ( scaman et al. , 2017 ) have optimal communication complexity when the objective is smooth and strongly convex , and are simple to derive . however , they require solving a subproblem at each step , so both the required accuracy of subproblem solutions and total computational complexities are uncertain . we propose new algorithms that instead of solving a subproblem , run warm-started katyusha for a small , fixed number of steps . in addition , when previous information is sufficiently useful , a local rule may even decide to skip a round of communication , leading to extra savings . we show that our algorithms are efficient in both computation and communication , provably reducing the communication and computation complexities of ssda and msda . in numerical experiments , our algorithms achieve significant computation and communication reduction compared with the state-of-the-art . story_separator_special_tag the combinatorial explosion that plagues planning and reinforcement learning ( rl ) algorithms can be moderated using state abstraction . prohibitively large task representations can be condensed such that essential information is preserved , and consequently , solutions are tractably computable .
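the lasg abstract above saves communication by letting a worker reuse its stale gradient whenever the fresh one has not changed much . a minimal sketch of such a lazy-upload rule for distributed gradient descent ( the quadratic local objectives and the fixed skipping threshold are illustrative assumptions , not the paper 's exact rules ) :

    import numpy as np

    # lazy gradient aggregation sketch: a worker uploads a fresh gradient only
    # when it differs enough from its last upload; otherwise the server reuses
    # the stale copy and a round of communication is saved.
    rng = np.random.default_rng(2)
    n_workers, dim, lr, thresh = 4, 3, 0.1, 1e-3
    targets = rng.normal(size=(n_workers, dim))   # worker i minimizes ||x - t_i||^2
    x = np.zeros(dim)
    stale = np.zeros((n_workers, dim))
    uploads = 0

    for _ in range(200):
        for i in range(n_workers):
            fresh = 2.0 * (x - targets[i])        # local gradient at the iterate x
            if np.sum((fresh - stale[i]) ** 2) > thresh:
                stale[i] = fresh                  # significant change: communicate
                uploads += 1
        x -= lr * stale.mean(axis=0)              # server step with mixed-age gradients

    print(np.round(x - targets.mean(axis=0), 4))  # close to zero: x is near the optimum
    print(uploads, "uploads instead of", 200 * n_workers)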
however , exact abstractions , which treat only fully-identical situations as equivalent , fail to present opportunities for abstraction in environments where no two situations are exactly alike . in this work , we investigate approximate state abstractions , which treat nearly-identical situations as equivalent . we present theoretical guarantees of the quality of behaviors derived from four types of approximate abstractions . additionally , we empirically demonstrate that approximate abstractions lead to reduction in task complexity and bounded loss of optimality of behavior in a variety of environments . story_separator_special_tag state abstraction ( or state aggregation ) has been extensively studied in the fields of artificial intelligence and operations research . instead of working in the ground state space , the decision maker usually finds solutions in the abstract state space much faster by treating groups of states as a unit and ignoring irrelevant state information . a number of abstractions have been proposed and studied in the reinforcement-learning and planning literatures , and positive and negative results are known . we provide a unified treatment of state abstraction for markov decision processes . we study five particular abstraction schemes , some of which have been proposed in the past in different forms , and analyze their usability for planning and learning . story_separator_special_tag we study the problem of representation learning in goal-conditioned hierarchical reinforcement learning . in such hierarchical structures , a higher-level controller solves tasks by iteratively communicating goals which a lower-level policy is trained to reach . accordingly , the choice of representation , that is , the mapping of observation space to goal space , is crucial . to study this problem , we develop a notion of sub-optimality of a representation , defined in terms of expected reward of the optimal hierarchical policy using this representation . we derive expressions which bound the sub-optimality and show how these expressions can be translated to representation learning objectives which may be optimized in practice . results on a number of difficult continuous-control tasks show that our approach to representation learning yields qualitatively better representations as well as quantitatively better hierarchical policies , compared to existing methods ( see videos at this https url ) . story_separator_special_tag a general approach for designing , and a theory for analysing , robust direct and indirect adaptive-control schemes for continuous-time plants is presented . the design approach involves the development of a general robust adaptive law and the use of the certainty equivalence principle to combine it with robust model reference and pole placement control structures . the global stability properties and robustness of the developed adaptive control schemes are established by using a general theory which relates the properties of signals in the mean sense over intervals of time . the developed theory and design approach are used to analyse and compare the robustness properties and performance of a wide class of robust adaptive laws which employ a dead-zone , a fixed-sigma , an epsilon-1 , and a switching-sigma modification , as well as their variations . story_separator_special_tag the paper considers the control of an unknown linear time-invariant plant using direct and indirect control .
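the abstraction abstracts above merge nearly-identical states instead of only fully-identical ones . a minimal sketch of one such criterion , grouping states whose q-value vectors agree to within epsilon in the max norm ( the toy q-table , the value of epsilon , and the greedy clustering order are assumptions for illustration ) :

    import numpy as np

    # approximate q-irrelevance abstraction: merge ground states whose q-value
    # vectors agree to within epsilon in the max norm.
    Q = np.array([[1.00, 0.50],   # toy q-table, one row per ground state
                  [0.98, 0.52],
                  [0.30, 0.90],
                  [0.31, 0.88]])
    epsilon = 0.05

    abstract_of, reps = {}, []    # reps: one ground state representing each cluster
    for s, q in enumerate(Q):
        for rep in reps:
            if np.max(np.abs(q - Q[rep])) <= epsilon:
                abstract_of[s] = abstract_of[rep]
                break
        else:                     # no close representative: open a new abstract state
            abstract_of[s] = len(reps)
            reps.append(s)

    print(abstract_of)            # {0: 0, 1: 0, 2: 1, 3: 1}: four states become two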
using a specific controller structure and the concept of positive realness , adaptive laws which are very similar are derived for the two cases . the stability questions that arise are also shown to be analogous and are discussed in some detail . simulation results are presented towards the end of the paper to demonstrate the effectiveness of the two schemes . story_separator_special_tag wireless networked control systems for the industrial internet of things ( iiot ) require low-latency communication techniques that are very reliable and resilient . in this article , we investigate a coding-free control method to achieve ultralow latency communications in single-controller-multiplant networked control systems for both slow- and fast-fading channels . we formulate a power allocation problem to optimize the sum cost functions of multiple plants , subject to the plant stabilization condition and the controller 's power limit . although the optimization problem is a nonconvex one , we derive a closed-form solution , which indicates that the optimal power allocation policy for stabilizing the plants with different channel conditions is reminiscent of the channel-inversion policy . we numerically compare the performance of the proposed coding-free control method and the conventional coding-based control methods in terms of the control performance ( i.e. , the cost function ) of a plant , which shows that the coding-free method is superior in a practical range of signal-to-noise ratios . story_separator_special_tag wireless networked control systems for industrial internet of things ( iiot ) require low latency communication techniques . in this paper , we investigate a coding-free control method to achieve ultra-low latency communications in single-controller-multi-plant networked control systems . we formulate a power allocation problem to optimize the sum cost functions of multiple plants , subject to the plant stabilization condition and the controller 's power limit . although the optimization problem is a non-convex one , we derive a closed-form solution , which indicates that the optimal power allocation policy for stabilizing the plants with different channel conditions is reminiscent of the channel-inversion policy . also , we numerically compare the performance of the proposed coding-free control method and the conventional coding-based control methods in terms of the cost function of a plant , which shows that the coding-free method is superior in a practical range of snrs . story_separator_special_tag the paper develops qd-learning , a distributed version of reinforcement q-learning , for multi-agent markov decision processes ( mdps ) ; the agents have no prior information on the global state transition and on the local agent cost statistics . the network agents minimize a network-averaged infinite horizon discounted cost , by local processing and by collaborating through mutual information exchange over a sparse ( possibly stochastic ) communication network . the agents respond differently ( depending on their instantaneous one-stage random costs ) to a global controlled state and the control actions of a remote controller .
when each agent is aware only of its local online cost data and the inter-agent communication network is weakly connected , we prove that qd-learning story_separator_special_tag a data rate theorem for stabilization of a linear , discrete-time , dynamical system with arbitrarily large disturbances , over a rate-limited , time-varying communication channel is presented . necessary and sufficient conditions for stabilization are derived , and their implications and relationships with related results in the literature are discussed . the proof techniques rely on both information-theoretic and control-theoretic tools . story_separator_special_tag this work studies the problem of lqg control when the link between the sensor and the controller relies on a wi-fi network . unfortunately , the communication on a wireless medium is sensitive to noise in the transmission band , which is characterized by the signal-to-noise ratio ( snr ) . wi-fi allows switching among different bit-rates in real time , thus permitting a trade-off of lower loss probabilities for larger latency , or vice versa , to achieve better closed-loop performance . to exploit this feature , under a constant snr scenario , we propose a cross-layer approach where the bit-rate is optimally selected based on a control performance metric ( i.e . minimum lqg cost ) and a model-based controller is used to compensate for the packet losses . under time-varying snr , we additionally propose a ( sub-optimal ) on-line rate adaptation strategy and we guarantee the closed-loop stability under some mild conditions . numerical comparisons with emulation-based approaches using truetime , a realistic matlab-based wi-fi simulator , are included to show the benefits of the adaptive approach under time-varying snr scenarios . story_separator_special_tag this paper considers control and estimation problems where the sensor signals and the actuator signals are transmitted to various subsystems over a network . in contrast to traditional control and estimation problems , here the observation and control packets may be lost or delayed . the unreliability of the underlying communication network is modeled stochastically by assigning probabilities to the successful transmission of packets . this requires a novel theory which generalizes classical control/estimation paradigms . the paper offers the foundations of such a novel theory . the central contribution is to characterize the impact of the network reliability on the performance of the feedback loop . specifically , it is shown that for network protocols where successful transmission of packets is acknowledged at the receiver ( e.g. , tcp-like protocols ) , there exists a critical threshold of network reliability ( i.e. , critical probabilities for the successful delivery of packets ) , below which the optimal controller fails to stabilize the system . further , for these protocols , the separation principle holds and the optimal lqg controller is a linear function of the estimated state . in stark contrast , it is shown that when there is story_separator_special_tag the paper addresses an lqg optimal control problem involving bit-rate communication capacity constraints . a discrete-time partially observed system perturbed by white noises is studied .
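several networked-control abstracts above study estimation when observation packets arrive only with some probability . a minimal sketch of the corresponding scalar riccati recursion , in which the measurement update runs only when a packet is delivered ( the plant parameters and arrival rates are toy assumptions ) :

    import numpy as np

    # estimation over a lossy link: the riccati recursion performs a time update
    # every step and a measurement update only when a packet is delivered.
    rng = np.random.default_rng(5)
    a, q, r = 1.2, 1.0, 0.5       # unstable scalar plant and noise variances

    def avg_error_variance(p_arrive, steps=100_000):
        P, acc = 1.0, 0.0
        for _ in range(steps):
            P = a * a * P + q                 # prediction (time) update
            if rng.random() < p_arrive:       # packet delivered this step
                P = P * r / (P + r)           # kalman measurement update
            acc += P
        return acc / steps

    # for a scalar plant the critical arrival probability is 1 - 1/a**2 (~0.31 here);
    # the time-averaged variance tends to blow up below it and stay modest above it
    for p in (0.2, 0.4, 0.8):
        print(p, round(avg_error_variance(p), 2))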
unlike the classic lqg control theory , the control signal must be first encoded , then transmitted to the actuators over a digital communication channel with a given bandwidth , and finally decoded . both the control law and the algorithms of encoding and decoding should be designed to achieve the best performance . the optimal control strategy is obtained . it is shown that whereas the estimator-coder separation principle holds , the controller-coder one fails to be true . story_separator_special_tag unmanned aerial vehicles ( uavs ) are capable of serving as aerial base stations ( bss ) for providing both cost-effective and on-demand wireless communications . this article investigates dynamic resource allocation in multi-uav-enabled communication networks with the goal of maximizing long-term rewards . more particularly , each uav communicates with a ground user by automatically selecting its communicating user , power level and subchannel without any information exchange among uavs . to model the dynamics and uncertainty in environments , we formulate the long-term resource allocation problem as a stochastic game for maximizing the expected rewards , where each uav becomes a learning agent and each resource allocation solution corresponds to an action taken by the uavs . afterwards , we develop a multi-agent reinforcement learning ( marl ) framework in which each agent discovers its best strategy according to its local observations using learning . more specifically , we propose an agent-independent method , for which all agents conduct a decision algorithm independently but share a common structure based on q-learning . finally , simulation results reveal that : 1 ) appropriate parameters for exploitation and exploration are capable of enhancing the performance of the proposed marl story_separator_special_tag recently , unmanned aerial vehicles ( uavs ) have been widely used in real-time sensing applications over cellular networks . the performance of a uav is determined by both its sensing and transmission processes , which are influenced by the trajectory of the uav . however , it is challenging for the uav to determine its trajectory , since it works in a dynamic environment , where other uavs determine their trajectories dynamically and compete for the limited spectrum resources at the same time . to tackle this challenge , we adopt reinforcement learning to solve the uav trajectory design problem in a decentralized manner . to coordinate multiple uavs performing real-time sensing tasks , we first propose a sense-and-send protocol , and analyze the probability of successful valid data transmission using nested markov chains . then , we propose an enhanced multi-uav q-learning algorithm to solve the decentralized uav trajectory design problem . simulation results show that the proposed algorithm converges faster and achieves higher utilities for the uavs , compared to traditional single- and multi-agent q-learning algorithms . story_separator_special_tag this letter investigates a novel unmanned aerial vehicle ( uav ) -enabled wireless communication system , where multiple uavs transmit information to multiple ground terminals ( gts ) . we study how the uavs can optimally employ their mobility to maximize the real-time downlink capacity while covering all gts . the system capacity is characterized by optimizing the uav locations subject to the coverage constraint .
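the marl abstracts above let each uav run q-learning independently on local observations while sharing a common algorithm structure . a minimal sketch of two independent q-learners choosing among shared channels ( the stateless two-agent collision model is a toy assumption ) :

    import numpy as np

    # independent q-learning: two agents repeatedly pick one of two channels;
    # a collision (same channel) pays 0 to both, otherwise each earns 1.
    rng = np.random.default_rng(3)
    n_agents, n_actions, eps, alpha = 2, 2, 0.1, 0.1
    Q = np.zeros((n_agents, n_actions))   # stateless (bandit-style) q-tables

    for _ in range(5000):
        acts = [int(rng.integers(n_actions)) if rng.random() < eps
                else int(np.argmax(Q[i])) for i in range(n_agents)]
        r = 0.0 if acts[0] == acts[1] else 1.0
        for i in range(n_agents):
            Q[i, acts[i]] += alpha * (r - Q[i, acts[i]])   # local update, no exchange

    print(np.round(Q, 2))   # the learners typically settle on different channels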
we formulate the uav movement problem as a constrained markov decision process ( cmdp ) problem and employ q-learning to solve it . since the state of the uav movement problem has large dimensions , we propose a dueling deep q-network ( ddqn ) algorithm , which introduces neural networks and a dueling structure into q-learning . simulation results demonstrate that the proposed movement algorithm is able to track the movement of the gts and obtains real-time optimal capacity , subject to the coverage constraint . story_separator_special_tag one of the major research topics in unmanned aerial vehicle ( uav ) collaborative control systems is the problem of multi-uav target assignment and path planning ( mutapp ) . it is a complicated optimization problem in which target assignment and path planning are solved separately . however , recalculation of the optimal results is too slow for real-time operations in dynamic environments because of the large number of calculations required . in this paper , we propose an artificial intelligence method named simultaneous target assignment and path planning ( stapp ) based on a multi-agent deep deterministic policy gradient ( maddpg ) algorithm , which is a type of multi-agent reinforcement learning algorithm . in stapp , the mutapp problem is first constructed as a multi-agent system . then , the maddpg framework is used to train the system to solve target assignment and path planning simultaneously according to a corresponding reward structure . the proposed system can deal with dynamic environments effectively as its execution only requires the locations of the uavs , targets , and threat areas . real-time performance can be guaranteed as the neural network used in the system is simple . in addition , story_separator_special_tag recently , unmanned aerial vehicle ( uav ) -assisted wireless communication technology has been proposed to exploit the favorable propagation property and flexibility of air-to-ground channels to support content-centric caching and enhance wireless network capacity . in this article , we propose an online uav-assisted wireless caching design via jointly optimizing uav trajectory , transmission power and caching content scheduling . specifically , we formulate the joint optimization of online uav trajectory and caching content delivery as an infinite-horizon ergodic markov decision process ( mdp ) problem to obtain a qoe-optimal solution based on the concept of request queues in wireless caching networks . by exploiting the fluid approximation approach , we first derive an optimal control policy from an approximated bellman equation . based on this , an actor-critic based online reinforcement learning algorithm is proposed to solve the problem . finally , simulation results are provided to show that the proposed solution can achieve significant gain over the existing baselines . story_separator_special_tag network slices for delay-constrained applications in 5g systems require computing facilities at the edge of the network to guarantee ultra-low latency in processing data flows generated by connected devices , which is challenging with larger volumes of data and larger distances to the edge of the network . to address this challenge , we propose to extend 5g network slices with unmanned aerial vehicles ( uavs ) equipped with multi-access edge computing ( mec ) facilities . however , onboard computing elements ( ces ) consume the uav 's battery power , thus impacting its flight duration .
we propose a framework where a system controller ( sc ) can turn a uav 's ces on and off , with the possibility of offloading jobs to other uavs , to maximize an objective function defined in terms of power consumption , job loss , and incurred delay . management of this framework is achieved by reinforcement learning . a markov model of the system is introduced to enable reinforcement learning and provide guidelines for the selection of system parameters . a use case is considered to demonstrate the gain achieved by the proposed framework and to discuss numerical results . story_separator_special_tag in current unmanned aircraft systems ( uass ) for sensing services , unmanned aerial vehicles ( uavs ) transmit their sensory data to terrestrial mobile devices over the unlicensed spectrum . however , the interference from surrounding terminals is uncontrollable due to the opportunistic channel access . in this paper , we consider a cellular internet of uavs to guarantee the quality-of-service ( qos ) , where the sensory data can be transmitted to the mobile devices either by uav-to-device ( u2d ) communications over cellular networks , or directly through the base station ( bs ) . since the uavs ' sensing and transmission may influence their trajectories , we study the trajectory design problem for uavs in consideration of their sensing and transmission . this is a markov decision problem ( mdp ) with a large state-action space , and thus , we utilize multi-agent deep reinforcement learning ( drl ) to approximate the state-action space , and then propose a multi-uav trajectory design algorithm to solve this problem . simulation results show that our proposed algorithm can achieve a higher total utility than policy gradient and single-agent algorithms . story_separator_special_tag in this paper , we aim to design a fully-distributed control solution to navigate a group of unmanned aerial vehicles ( uavs ) , acting as mobile base stations ( bss ) , to fly around a target area to provide long-term communication coverage for the ground mobile users . different from existing solutions that mainly solve the problem from optimization perspectives , we propose a decentralized deep reinforcement learning ( drl ) based framework to control each uav in a distributed manner . our goal is to maximize the temporal average coverage score achieved by all uavs in a task , maximize the geographical fairness of all considered points-of-interest ( pois ) , and minimize the total energy consumption , while keeping the uavs connected and not flying out of the area border . we design the state , observation , action space , and reward in an explicit manner , and model each uav by deep neural networks ( dnns ) . we conducted extensive simulations and found the appropriate set of hyperparameters , including experience replay buffer size , number of neural units for two fully-connected hidden layers of actor , critic , and their target networks , story_separator_special_tag a novel framework is proposed for quality-of-experience-driven deployment and dynamic movement of multiple unmanned aerial vehicles ( uavs ) . the problem of joint non-convex three-dimensional ( 3-d ) deployment and dynamic movement of the uavs is formulated for maximizing the sum mean opinion score of ground users , which is proved to be np-hard . with the aim of solving this pertinent problem , a three-step approach is proposed for attaining 3-d deployment and dynamic movement of multiple uavs .
first , a genetic-algorithm-based k-means ( gak-means ) algorithm is utilized for obtaining the cell partition of the users . second , a q-learning-based deployment algorithm is proposed , in which each uav acts as an agent , making its own decision for attaining a 3-d position by learning from trial and error . in contrast to conventional genetic-algorithm-based learning algorithms , the proposed algorithm is capable of training the direction selection strategy offline . third , a q-learning-based movement algorithm is proposed for the scenario in which the users are roaming . the proposed algorithm is capable of converging to an optimal state . numerical results reveal that the proposed algorithms show story_separator_special_tag in the paper the coordinated flight of an autonomously controlled group of uavs is considered . in this system , position information is exchanged between the separate group members to keep the formation shape and fulfil the prescribed flight mission . in case high bandwidth for transferring navigation data cannot be provided , it is possible to implement an adaptive coding procedure based on coding with an embedded state estimator . moreover , due to a lack of gnss navigation continuity , reducing the position estimation errors is a topical problem . the proposed novel adaptive coding procedure can be used to ensure a data transmission rate close to the maximum . application of the proposed adaptive binary coder to transferring navigation data between the quadrotors is studied in detail . the novel coding procedure makes it possible to improve data transfer quality through a more accurate estimation process . such a property is achieved by varying the coder quantization level . simulations , confirmed by the testbed experiments , show that the habitual adaptive coding procedure has lower accuracy , which can have a critical meaning for uav 's story_separator_special_tag recently , deep reinforcement learning ( rl ) methods have been applied successfully to multi-agent scenarios . typically , these methods rely on a concatenation of agent states to represent the information content required for decentralized decision making . however , concatenation scales poorly to swarm systems with a large number of homogeneous agents as it does not exploit the fundamental properties inherent to these systems : ( i ) the agents in the swarm are interchangeable and ( ii ) the exact number of agents in the swarm is irrelevant . therefore , we propose a new state representation for deep multi-agent rl based on mean embeddings of distributions . we treat the agents as samples of a distribution and use the empirical mean embedding as input for a decentralized policy . we define different feature spaces of the mean embedding using histograms , radial basis functions and a neural network learned end-to-end . we evaluate the representation on two well known problems from the swarm literature ( rendezvous and pursuit evasion ) , in a globally and locally observable setup . for the local setup we furthermore introduce simple communication protocols . of all approaches , the mean story_separator_special_tag multi-agent deep reinforcement learning is becoming a promising approach to the problem of coordination of swarms of drones in dynamic systems .
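the mean-embedding abstract above replaces a concatenation of neighbor states with the empirical mean of per-neighbor features , which is permutation invariant and independent of the swarm size . a minimal sketch using a histogram feature space ( the feature choice and the random offsets are illustrative ) :

    import numpy as np

    # mean embedding of a neighborhood: featurize every neighbor, then average.
    # the result has a fixed size and is invariant to neighbor order and count.
    def mean_embedding(rel_positions, edges=np.linspace(-1.0, 1.0, 9)):
        feats = [np.concatenate([np.histogram(p[:1], bins=edges)[0],
                                 np.histogram(p[1:], bins=edges)[0]]).astype(float)
                 for p in rel_positions]
        return np.mean(feats, axis=0)   # fixed-size input for a decentralized policy

    rng = np.random.default_rng(4)
    neighbors = rng.uniform(-1, 1, size=(7, 2))       # 7 neighbors, 2-d offsets
    print(mean_embedding(neighbors).shape)            # (16,), for 7 or 700 neighbors
    shuffled = neighbors[rng.permutation(7)]
    print(np.allclose(mean_embedding(neighbors), mean_embedding(shuffled)))  # True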
in particular , the use of autonomous aircraft for flood monitoring is now regarded as an economically viable option and it can benefit from this kind of automation : swarms of unmanned aerial vehicles could autonomously generate nearly real-time inundation maps that could improve relief work planning . in this work , we study the use of deep q-networks ( dqn ) as the optimization strategy for the trajectory planning that is required for monitoring floods . we train agents over simulated floods in procedurally generated terrain and demonstrate good performance with two different reward schemes . story_separator_special_tag we study the problem of tracking multiple moving targets using a team of mobile robots . each robot has a set of motion primitives to choose from in order to collectively maximize the number of targets tracked or the total quality of tracking . our focus is on scenarios where communication is limited and the robots have limited time to share information with their neighbors . as a result , we seek distributed algorithms that can find solutions in a bounded amount of time . we present two algorithms : ( 1 ) a greedy algorithm that is guaranteed to find a 2-approximation to the optimal ( centralized ) solution but requires |r| communication rounds in the worst case , where |r| denotes the number of robots , and ( 2 ) a local algorithm that finds an $\mathcal{O}((1+\epsilon)(1+1/h))$ approximation in $\mathcal{O}(h \log 1/\epsilon)$ communication rounds . here , $h$ and $\epsilon$ are parameters that allow the user to trade off the solution quality against communication time . in addition story_separator_special_tag the trend toward autonomous driving and the recent advances in vehicular networking led to a number of very successful proposals in cooperative driving . maneuvers can be coordinated among participating vehicles and controlled by means of wireless communications . one of the most challenging scenarios or applications in this context is cooperative adaptive cruise control ( cacc ) or platooning . when it comes to realizing safety gaps between the cars of less than 5 m , very strong requirements on the communication system need to be satisfied . the underlying distributed control system needs regular updates of sensor information from the other cars in the order of about 10 hz . this leads to message rates in the order of up to 10 khz for large networks , which , given the possibly unreliable wireless communication and the critical network congestion , is beyond the capabilities of current vehicular networking concepts . in this paper , we summarize the concepts of networked control systems and revisit the capabilities of current vehicular networking approaches . we then present opportunities of tactile internet concepts that integrate interdisciplinary approaches from control theory , mechanical engineering , and communication protocol design . this story_separator_special_tag this paper addresses the driver-automation shared driving control for lane keeping and obstacle avoidance of automated vehicles in highway traffic . the proposed shared control framework is established from a novel cooperative trajectory planning algorithm and a fuzzy steering controller . based on polynomial functions , the cooperative trajectory planning is formulated by judiciously exploiting the information on the maneuver decision , the conflict management , and the driver monitoring .
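the multi-robot tracking abstract above relies on a sequential greedy assignment that is guaranteed to be a 2-approximation of the centralized optimum . a minimal sketch of that greedy rule ( the utility matrix is a made-up input ; the guarantee holds for monotone submodular objectives ) :

    import numpy as np

    # sequential greedy assignment: robots pick, one after another, the remaining
    # target of highest utility; for monotone submodular tracking objectives this
    # is within a factor of two of the optimal centralized assignment.
    utility = np.array([[0.9, 0.4, 0.1],   # rows = robots, columns = targets
                        [0.8, 0.7, 0.2],
                        [0.3, 0.6, 0.5]])
    available = set(range(utility.shape[1]))
    assignment = {}
    for robot in range(utility.shape[0]):   # one communication round per robot
        best = max(available, key=lambda tgt: utility[robot, tgt])
        assignment[robot] = best
        available.remove(best)

    print(assignment)   # {0: 0, 1: 1, 2: 2}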
as a result , the planned trajectory of the vehicle is continuously adapted according to the driver 's actions and intentions . by means of lyapunov stability arguments , sufficient conditions in terms of linear matrix inequalities are given to design a takagi-sugeno fuzzy model-based controller . this robust steering controller provides a necessary assistive torque to track the planned vehicle trajectory . the new shared driving control framework allows effectively reducing the driver-automation conflict issue while offering the driver more freedom to swerve within a predefined lane . the advantages of the proposed approach are evaluated using both objective and subjective results , experimentally obtained from several human drivers and an advanced interactive dynamic driving simulator . story_separator_special_tag the worthwhile goal of reducing fatalities in road systems has inspired people ever since the appearance of the first vehicles . policy makers , researchers , developers , and others have adopted various measures with a positive effect on the number of fatalities . in germany , the number dropped from a peak of 21,095 in 1970 to 3,339 in 2013 [ 1 ] . measures include new laws and restrictions by policy makers such as reducing speed limits , penalizing drunken drivers , and enhancing education by driving schools [ 2 ] . researchers and developers mainly focus on technical safety and assistance systems . these systems include the anti-lock braking system ( abs ) , the electronic stabilization control ( esc ) , the emergency brake system , adaptive cruise control ( acc ) , and the lane-keeping control [ 3 ] . story_separator_special_tag platooning , which is the idea of cars autonomously following their leaders to form a road train , has huge potential to improve traffic flow efficiency and , most importantly , road traffic safety . wireless communication is a fundamental building block : it is needed to manage and maintain the platoons . to keep the system stable , strict constraints in terms of update frequency and communication reliability must be met . we investigate different communication strategies by explicitly taking into account the requirements of the controller , and by exploiting synchronized communication slots and transmit power adaptation . as a baseline , we compared the proposed approaches to two state-of-the-art adaptive beaconing protocols that have been designed for cooperative awareness applications , namely , the european telecommunications standards institute ( etsi ) decentralized congestion control ( dcc ) and dynamic beaconing ( dynb ) . our simulation models have been parameterized and validated by means of real-world experiments . our results demonstrate that the combination of synchronized communication slots with transmit power adaptation is perfectly suited for cooperative driving applications , even on very crowded freeway scenarios . story_separator_special_tag a co-adaptive system is a close coupling between a human and a software system cooperating to achieve shared goals . this co-adaptation requires adaptive actions to react to unpredictable circumstances . one of the challenges is to deal with uncertainties , and consequently , decision making under uncertainty , which may arise because of changes in the environment , unpredictable resources , etc . human behavior does contribute to large amounts of uncertainty .
this paper presents an approach for using a simulator as a means of feedback on a human 's decisions under uncertainty , which can assist the human in automated planning to generate a cooperative and symbiotic strategy of the human and the system to achieve given tasks . to validate the approach , this paper presents a customizable traffic simulator to measure the delays associated with passing vehicles through intersections . the simulator contains ai-based self-adaptive vehicles which can evaluate the quality of traffic at an intersection and change their driving behavior . the human operator , from outside the system , can manipulate the signaling time , the number of predicates per driving rule , the number of rules per rule set , the learning factor ( adaptation ) , etc . to story_separator_special_tag self-driving systems are expected to become increasingly popular in the foreseeable future . however , a driver who is out of the control loop might reduce overall situation awareness by overly trusting automated driving systems . alternatively , the introduction of automated driving systems could lead to misuse or disuse . for these reasons , an automated driving system should encourage appropriate driver reliance to achieve social acceptance . imperfect information of the system sensing range might adversely affect trust . this study used a vibrotactile display with an automated driving system to provide situation awareness . the display contributes to driver trust by enabling a driver to predict or perceive actions selected by the system . the display provides spatial information related to traffic objects by haptic stimulus . the driving scenario of passing a motorbike with vehicles approaching from behind was considered . the results of this driving simulator study demonstrated that the spatial information and the behavior of the system affected trust . story_separator_special_tag the introduction of automated driving systems raised questions about how the human driver interacts with the automated system . non-cooperative game theory is increasingly used for modelling and understanding such interaction , while its counterpart , cooperative game theory , is rarely discussed for similar applications even though it may be more suitable . this paper describes the modelling of a human driver 's steering interaction with an automated steering system using cooperative game theory . the distributed model predictive control approach is adopted to derive the driver 's and the automated steering system 's strategies in a pareto equilibrium sense , namely their cooperative pareto steering strategies . two separate numerical studies are carried out to study the influence of strategy parameters , and the influence of strategy types , on the driver 's and the automated system 's steering performance . it is found that when a driver interacts with an automated steering system using a cooperative pareto steering strategy , the driver can improve his/her performance in following a target path by increasing his/her effort in pursuing his/her own interest under the driver-automation cooperative control goal . it is also found that a driver 's adoption story_separator_special_tag for autonomous vehicles , it is an important requirement to obtain integrated static road information in real time in a dynamic driving environment .
a comprehensive perception of the surrounding road should cover the accurate detection of the entire road area despite occlusion , the 3d geometry , and the types of road topology , in order to facilitate practical applications in autonomous driving . to this end , we propose a lightweight and efficient lidar-based multi-task road perception network ( lmroadnet ) to conduct occlusion-free road segmentation , road ground height estimation , and road topology recognition simultaneously . to optimize the proposed network , a corresponding multi-task dataset , named multiroad , is built semi-automatically based on the public semantickitti dataset . specifically , our network architecture uses road segmentation as the main task , and the remaining two tasks are directly decoded on a concentrated 1/4-scale feature map derived from the main task 's feature maps of different scales and phases , which significantly reduces the complexity of the overall network while achieving high performance . in addition , a loss function with a learnable weight for each task is adopted to train the neural network , which effectively balances story_separator_special_tag a pedestrian detection system is a crucial component of advanced driver assistance systems since it contributes to road flow safety . the safety of traffic participants could be significantly improved if these systems could also predict and recognize pedestrians ' actions , or even estimate the time , for each pedestrian , to cross the street . in this paper , we focus not only on pedestrian detection and pedestrian action recognition but also on estimating whether the pedestrian 's action presents a risky situation according to the time to cross the street . we propose 1 ) a pedestrian detection and action recognition component based on retinanet ; 2 ) an estimation of the time to cross the street for multiple pedestrians using a recurrent neural network . for each pedestrian , the recurrent network estimates the pedestrian 's action intention in order to predict the time to cross the street . we base our experiments on the jaad dataset and show that integrating multiple pedestrian action tags in the detection part , when merged with a recurrent neural network ( lstm ) , allows a significant performance improvement . story_separator_special_tag in the autonomous vehicular cloud ( avc ) , when the services requested by users comprise multiple tasks with dependencies between them and the resources of the vehicle are limited , deploying the dependent tasks using a service function chain ( sfc ) can improve the success ratio of task execution . therefore , this paper studies the allocation problem for tasks with dependency relationships . taking into account the stability of the node communication link and its computing capability , we connect the tasks into a service sequence in the order of their dependencies and deploy the tasks so as to minimize the task completion time , thereby ensuring the smooth execution of services and improving the success ratio of task execution . compared with the optimal generation scheme ( ogs ) algorithm , the success ratio of this scheme is 11.36 % higher in the highway avc with rapidly changing topology . story_separator_special_tag the connected automated vehicle has often been touted as a technology that will become pervasive in society in the near future .
one can view an automated vehicle as having artificial intelligence ( ai ) capabilities , being able to self-drive , sense its surroundings , recognise objects in its vicinity , and perform reasoning and decision-making . rather than treating such vehicles as stand-alone , we examine the need for automated vehicles to cooperate and interact within their socio-cyber-physical environments , including the problems cooperation will solve , but also the issues and challenges . we review current work in cooperation for automated vehicles , based on selected examples from the literature . we conclude by noting the need for the ability to behave cooperatively as a form of social-ai capability for automated vehicles , beyond sensing the immediate environment and beyond the underlying networking technology . story_separator_special_tag the tactile internet ( ti ) is envisioned to create a paradigm shift from content-oriented communications to steer/control-based communications by enabling real-time transmission of haptic information ( i.e. , touch , actuation , motion , vibration , surface texture ) over the internet , in addition to the conventional audiovisual and data traffic . this emerging ti technology , also considered as the next evolution phase of the internet of things ( iot ) , is expected to create numerous opportunities for technology markets in a wide variety of applications ranging from teleoperation systems and augmented/virtual reality ( ar/vr ) to automotive safety and ehealthcare towards addressing the complex problems of human society . however , the realization of ti over wireless media in the upcoming fifth generation ( 5g ) and beyond networks creates various non-conventional communication challenges and stringent requirements in terms of ultra-low latency , ultra-high reliability , high data-rate connectivity , resource allocation , multiple access and quality-latency-rate tradeoff . to this end , this paper aims to provide a holistic view on wireless ti along with a thorough review of the existing state-of-the-art , to identify and analyze the involved technical issues , to highlight potential solutions and story_separator_special_tag touch is currently seen as the modality that will complement audition and vision as a third media stream over the internet in a variety of future haptic applications which will allow full immersion and that will , in many ways , impact society . nevertheless , the high requirements of these applications demand networks which allow ultra-reliable and low-latency communication for the challenging task of applying the required quality of service for maintaining the user 's quality of experience at optimum levels . in this survey , we list , discuss , and evaluate methodologies and technologies of the necessary infrastructure for haptic communication . furthermore , we focus on how the fifth generation of mobile networks will allow haptic applications to come to life , in combination with the needed haptic data communication protocols , bilateral teleoperation control schemes and haptic data processing . finally , we state the lessons learned throughout the surveyed research material , along with the future challenges , and draw our conclusions . story_separator_special_tag the past decade has witnessed how audio-visual communication has shaped the way humans interact with or through technical systems . in contemporary times , the potential of haptic communication has been recognized as being compelling to further augment human-to-human and human-to-machine interaction .
in the context of immersive communication , video and audio compression are considered key enabling technologies for high-quality interaction . in contrast , the compression of haptic data is a field of research that is still relatively young and not fully explored . this disregards the fact that we as humans rely heavily on the haptic modality to interact with our environment . true immersion into a distant environment and efficient collaboration between multiple participants both require the ability to physically interact with objects in the remote environment . with recent advances in virtual reality , man-machine interaction , telerobotics , telepresence , and teleaction , haptic communication is proving instrumental in enabling many novel applications . the goal of this overview article is to summarize the state of the art and the challenges of haptic data compression and communication for telepresence and teleaction . story_separator_special_tag immersive environments provide an artificial world to surround users . these environments consist of a composition of various types of immersidata : unique data types that are combined to render a virtual experience . to construct such an environment , immersidata acquisition is indispensable for storage and future query . however , this is challenging because of the real-time demands and sizeable amounts of data to be managed . we propose and evaluate alternative techniques for achieving efficient sampling and compression of one immersidata type , the haptic data , which describes the movement , rotation , and force associated with user-directed objects in an immersive environment . our experiments identify the benefits and limitations of various techniques in terms of their data storage , bandwidth and accuracy . story_separator_special_tag we present a novel approach for the transmission of haptic data in telepresence and teleaction systems . the goal of this work is to reduce the packet rate between an operator and a teleoperator without impairing the immersiveness of the system . our approach exploits the properties of human haptic perception and is , more specifically , based on the concept of just noticeable differences . in our scheme , updates of the haptic amplitude values are signaled across the network only if the change of a haptic stimulus is detectable by the human operator . we investigate haptic data communication for a 1 degree-of-freedom ( dof ) and a 3 dof teleaction system . our experimental results show that the presented approach is able to reduce the packet rate between the operator and teleoperator by up to 90 % of the original rate without affecting the performance of the system . story_separator_special_tag capitalizing on the latest developments in 5g and ultra-low delay networking as well as artificial intelligence ( ai ) and robotics , we advocate here for the emergence of an entirely novel internet which will enable the delivery of skills in digital form . we outline the technical challenges which need to be overcome to enable such a vision , i.e. , the development of a 5g tactile internet , standardized haptic codecs , and ai to enable the perception of zero-delay networks . the paper is concluded with an overview of the current capabilities and the standardization initiatives in the ieee 5g tactile internet standards working group as well as the ieee 5g initiative .
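the deadband abstracts above transmit a haptic sample only when it changes by more than a just-noticeable fraction of the last transmitted value . a minimal sketch of that weber-law-style rule ( the 10 % threshold and the toy force signal are assumptions ) :

    import numpy as np

    # perceptual deadband transmission: send an update only when the stimulus
    # deviates from the last transmitted value by a just-noticeable fraction k.
    def deadband_filter(samples, k=0.10):
        sent, last = [], None
        for t, x in enumerate(samples):
            if last is None or abs(x - last) > k * abs(last):
                sent.append((t, x))   # detectable change: transmit a packet
                last = x              # the receiver holds this value until the next one
        return sent

    t = np.linspace(0.0, 2.0 * np.pi, 1000)
    force = 2.0 + np.sin(t)           # toy 1-dof force signal, strictly positive
    packets = deadband_filter(force)
    saved = 100.0 * (1.0 - len(packets) / len(force))
    print(f"{len(packets)} of {len(force)} samples sent ({saved:.0f} % packet reduction)")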
story_separator_special_tag 5g is all about integrating new industries in the design of this whole new generation , where the mobile broadband connection is not the one and only use case in focus . within the umbrella of 5g , other technologies are brought to the surface enabling new ways of communication with unprecedented effects on many sectors of society such as healthcare . this article focuses on a practical implementation of a medical-oriented internet of skills application , where the doctor is able to perform remote diagnosis and palpation with the use of cutting-edge haptic technology . we present an examination of the main medical socio-economic drivers , as well as the description of specific technologies used in this practical demonstration . all this , with the main objective of delivering a proof of concept for the design and planning of multi-modal communications in 5g . story_separator_special_tag the ever-increasing growth of connected smart devices and iot verticals is leading to the crucial challenges of handling the massive amount of raw data generated by distributed iot systems and providing timely feedback to the end-users . although the existing cloud computing paradigm has an enormous amount of virtual computing power and storage capacity , it might not be able to satisfy delay-sensitive applications since computing tasks are usually processed at distant cloud servers . to this end , edge/fog computing has recently emerged as a new computing paradigm that helps to extend cloud functionalities to the network edge . despite several benefits of edge computing including geo-distribution , mobility support and location awareness , various communication and computing related challenges need to be addressed for future iot systems . in this regard , this article provides a comprehensive view of the current issues encountered in distributed iot systems and effective solutions by classifying them into three main categories , namely , radio and computing resource management , intelligent edge-iot systems , and flexible infrastructure management . furthermore , an optimization framework for edge-iot systems is proposed by considering the key performance metrics including throughput , delay , resource utilization story_separator_special_tag the paper addresses the issue of the send-on-delta data collecting strategy to capture information from the environment . the send-on-delta concept is a signal-dependent temporal sampling scheme , where the sampling is triggered if the signal deviates by delta , defined as a significant change of its value . it is an attractive scheme for wireless sensor networking due to its efficient energy consumption . quantitative evaluations of the send-on-delta scheme for a general class of continuous-time bandlimited signals are presented in the paper . the bounds on the mean traffic of reports for a given signal , and assumed sampling resolution , are evaluated . furthermore , the send-on-delta effectiveness , defined as the reduction of the mean rate of reports in comparison to periodic sampling for a given resolution , is derived . it is shown that the lower bound of the send-on-delta effectiveness ( i.e . the guaranteed reduction ) is independent of the sampling resolution , and constitutes a built-in feature of the input signal . the calculation of the effectiveness for standard signals that model the state evolution of a dynamic environment in time is exemplified .
finally , the example of send-on-delta programming is story_separator_special_tag rapid progress in intelligent sensing technology creates new interest in the analysis and design of non-conventional sampling schemes . the investigation of event-based sampling according to the integral criterion is presented in this paper . the investigated sampling scheme is an extension of the pure linear send-on-delta/level-crossing algorithm utilized for reporting the state of objects monitored by intelligent sensors . the motivation for using event-based integral sampling is outlined . the related works in adaptive sampling are summarized . the analytical closed-form formulas for the evaluation of the mean rate of event-based traffic , and the asymptotic integral sampling effectiveness , are derived . the simulation results verifying the analytical formulas are reported . the effectiveness of the integral sampling is compared with the related linear send-on-delta/level-crossing scheme . the calculation of the asymptotic effectiveness for common signals , which model the state evolution of dynamic systems in time , is exemplified . story_separator_special_tag we consider sensor data scheduling for remote state estimation . due to constrained communication energy and bandwidth , a sensor needs to decide whether it should send the measurement to a remote estimator for further processing . we propose an event-based sensor data scheduler for linear systems and derive the corresponding minimum squared error estimator . by selecting an appropriate event-triggering threshold , we illustrate how to achieve a desired balance between the sensor-to-estimator communication rate and the estimation quality . simulation examples are provided to demonstrate the theory . story_separator_special_tag an event-based state estimation scenario is considered where a sensor sporadically transmits observations of a scalar linear process to a remote estimator . the remote estimator is a time-varying kalman filter . the triggering decision is based on the estimation variance : the sensor runs a copy of the remote estimator and transmits a measurement if the associated measurement prediction variance exceeds a tolerable threshold . the resulting variance iteration is a new type of riccati equation with switching that corresponds to the availability or unavailability of a measurement and depends on the variance at the previous step . we study asymptotic properties of the variance iteration and , in particular , asymptotic convergence to a periodic solution . story_separator_special_tag we examine the usefulness of event-based sampling approaches for reducing communication in inertial-sensor-based analysis of human motion . to this end we consider real-time measurement of the knee joint angle during walking , employing a recently developed sensor fusion algorithm . we simulate the effects of different event-based sampling methods on a large set of experimental data with ground truth obtained from an external motion capture system . this results in a reduced wireless communication load at the cost of a slightly increased error in the calculated angles . the proposed methods are compared in terms of the best balance of these two aspects . we show that the transmitted data can be reduced by 66 % while maintaining the same level of accuracy .
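the send-on-delta idea running through the preceding abstracts reduces to a few lines of code : report only when the signal has moved by at least delta since the last report . the test signal and the threshold below are illustrative , not taken from any of the papers above .

```python
import numpy as np

def send_on_delta(signal, delta):
    """return the indices at which a send-on-delta sensor would report."""
    reports = [0]                       # the initial value is always sent
    last = signal[0]
    for i, x in enumerate(signal[1:], start=1):
        if abs(x - last) >= delta:      # significant change detected
            reports.append(i)
            last = x
    return reports

t = np.linspace(0, 10, 2000)            # 200 hz periodic sampling baseline
x = np.sin(t) + 0.5 * np.sin(3 * t)
idx = send_on_delta(x, delta=0.1)
print(f"{len(idx)} event-based reports vs {len(t)} periodic samples")
```

the mean report rate now depends on how fast the signal moves rather than on a fixed clock , which is the built-in effectiveness property analyzed above .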
story_separator_special_tag event-triggering ( et ) is an up-and-coming technological paradigm for monitoring , optimization , and control in the internet of things ( iot ) that achieves improved levels of operational efficiency . this paper first defines the envisioned et architecture for the iot domain . it then classifies and reviews the various et approaches obtained from the available literature for the three phases of et , namely behavior modeling , event detection , and event handling . thereafter , a novel data-driven technique is developed to address all three phases of et in an efficient and reliable manner . finally , the applicability of the proposed data-driven technique is showcased in a real-world public transport scenario , demonstrating a substantial improvement in energy and spectrum efficiency compared to existing periodic techniques . story_separator_special_tag the heterogeneity , large scale , and resource constraints of the iot environment raise some issues which hinder its development . we focus on two issues among them : 1 ) most of the existing iot applications are silos , that is , wireless sensor and actuator network resources and applications are tightly coupled , so applications cannot share and reuse resources or interact with each other ; and 2 ) how to efficiently disseminate the sensing information among the information providers and consumers , and rapidly respond to changes in the physical world . this paper proposes a multilevel and multidimensional model-based service provisioning platform , which can access large-scale heterogeneous resources and expose their capabilities as lightweight services . moreover , it presents a unified message space to facilitate the on-demand dissemination and sharing of sensing information in a distributed iot environment . the platform enables applications to share and reuse resources and provides the basic infrastructure for the iot application pattern : inner-domain autonomy and inter-domain coordination . story_separator_special_tag deep learning based on artificial neural networks is a very popular approach to modeling , classifying , and recognizing complex data such as images , speech , and text . the unprecedented accuracy of deep learning methods has turned them into the foundation of new ai-based services on the internet . commercial companies that collect user data on a large scale have been the main beneficiaries of this trend since the success of deep learning techniques is directly proportional to the amount of data available for training . story_separator_special_tag federated learning is a machine learning setting where the goal is to train a high-quality centralized model while training data remains distributed over a large number of clients each with unreliable and relatively slow network connections . we consider learning algorithms for this setting where on each round , each client independently computes an update to the current model based on its local data , and communicates this update to a central server , where the client-side updates are aggregated to compute a new global model . the typical clients in this setting are mobile phones , and communication efficiency is of the utmost importance . in this paper , we propose two ways to reduce the uplink communication costs : structured updates , where we directly learn an update from a restricted space parametrized using a smaller number of variables , e.g .
either low-rank or a random mask ; and sketched updates , where we learn a full model update and then compress it using a combination of quantization , random rotations , and subsampling before sending it to the server . experiments on both convolutional and recurrent networks show that the proposed methods can reduce the communication story_separator_special_tag in this paper , we present two new communication-efficient methods for distributed minimization of an average of functions . the first algorithm is an inexact variant of the dane algorithm that allows any local algorithm to return an approximate solution to a local subproblem . we show that such a strategy does not affect the theoretical guarantees of dane significantly . in fact , our approach can be viewed as a robustification strategy since the method is substantially better behaved than dane on data partitions arising in practice . it is well known that the dane algorithm does not match the communication complexity lower bounds . to bridge this gap , we propose an accelerated variant of the first method , called aide , that not only matches the communication lower bounds but can also be implemented using a purely first-order oracle . our empirical results show that aide is superior to other communication-efficient algorithms in settings that naturally arise in machine learning applications . story_separator_special_tag modern mobile devices have access to a wealth of data suitable for learning models , which in turn can greatly improve the user experience on the device . for example , language models can improve speech recognition and text entry , and image models can automatically select good photos . however , this rich data is often privacy sensitive , large in quantity , or both , which may preclude logging to the data center and training there using conventional approaches . we advocate an alternative that leaves the training data distributed on the mobile devices , and learns a shared model by aggregating locally-computed updates . we term this decentralized approach federated learning . we present a practical method for the federated learning of deep networks based on iterative model averaging , and conduct an extensive empirical evaluation , considering five different model architectures and four datasets . these experiments demonstrate the approach is robust to the unbalanced and non-iid data distributions that are a defining characteristic of this setting . communication costs are the principal constraint , and we show a reduction in required communication rounds by 10-100x as compared to synchronized stochastic gradient descent . story_separator_special_tag large-scale distributed training requires significant communication bandwidth for gradient exchange that limits the scalability of multi-node training , and requires expensive high-bandwidth network infrastructure . the situation gets even worse with distributed training on mobile devices ( federated learning ) , which suffers from higher latency , lower throughput , and intermittent poor connections . in this paper , we find 99.9 % of the gradient exchange in distributed sgd is redundant , and propose deep gradient compression ( dgc ) to greatly reduce the communication bandwidth . to preserve accuracy during compression , dgc employs four methods : momentum correction , local gradient clipping , momentum factor masking , and warm-up training .
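before moving to the compression results , it is worth making the iterative model averaging loop of federated averaging concrete . the sketch below uses toy least-squares clients ; the model , data sizes and hyperparameters are illustrative assumptions , not the paper 's setup .

```python
import numpy as np

rng = np.random.default_rng(0)
d, n_clients = 5, 10
w_true = rng.normal(size=d)
clients = []
for _ in range(n_clients):
    n = int(rng.integers(20, 100))                  # unbalanced local datasets
    X = rng.normal(size=(n, d))
    clients.append((X, X @ w_true + 0.1 * rng.normal(size=n)))

w_global = np.zeros(d)
for rnd in range(50):                               # communication rounds
    updates, sizes = [], []
    for X, y in clients:
        w = w_global.copy()
        for _ in range(5):                          # local epochs of gradient descent
            w -= 0.05 * (2 * X.T @ (X @ w - y) / len(X))
        updates.append(w)
        sizes.append(len(X))
    # the federated averaging step : average client models weighted by data size
    w_global = np.average(updates, axis=0, weights=sizes)

print("distance to w_true :", np.linalg.norm(w_global - w_true))
```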
we have applied deep gradient compression to image classification , speech recognition , and language modeling with multiple datasets including cifar10 , imagenet , penn treebank , and the librispeech corpus . in these scenarios , deep gradient compression achieves a gradient compression ratio from 270x to 600x without losing accuracy , cutting the gradient size of resnet-50 from 97mb to 0.35mb , and for deepspeech from 488mb to 0.74mb . deep gradient compression enables large-scale distributed training on inexpensive commodity 1gbps ethernet and facilitates distributed training story_separator_special_tag we show empirically that in sgd training of deep neural networks , one can , at no or nearly no loss of accuracy , quantize the gradients aggressively to but one bit per value if the quantization error is carried forward across minibatches ( error feedback ) . this size reduction makes it feasible to parallelize sgd through data-parallelism with fast processors like recent gpus . we implement data-parallel deterministically distributed sgd by combining this finding with adagrad , automatic minibatch-size selection , double buffering , and model parallelism . unexpectedly , quantization benefits adagrad , giving a small accuracy gain . for a typical switchboard dnn with 46m parameters , we reach computation speeds of 27k frames per second ( kfps ) when using 2880 samples per minibatch , and 51kfps with 16k , on a server with 8 k20x gpus . this corresponds to speed-ups over a single gpu of 3.6 and 6.3 , respectively . 7 training passes over 309h of data complete in under 7h . a 160m-parameter model training processes 3300h of data in under 16h on 20 dual-gpu servers , a 10 times speed-up , albeit at a small accuracy loss . story_separator_special_tag high network communication cost for synchronizing gradients and parameters is the well-known bottleneck of distributed training . in this work , we propose terngrad , which uses ternary gradients to accelerate distributed deep learning in data parallelism . our approach requires only three numerical levels { -1,0,1 } , which can aggressively reduce the communication time . we mathematically prove the convergence of terngrad under the assumption of a bound on gradients . guided by the bound , we propose layer-wise ternarizing and gradient clipping to improve its convergence . our experiments show that applying terngrad on alexnet does not incur any accuracy loss and can even improve accuracy . the accuracy loss of googlenet induced by terngrad is less than 2 % on average . finally , a performance model is proposed to study the scalability of terngrad . experiments show significant speed gains for various deep neural networks . our source code is available . story_separator_special_tag we propose dorefa-net , a method to train convolutional neural networks that have low bitwidth weights and activations using low bitwidth parameter gradients . in particular , during the backward pass , parameter gradients are stochastically quantized to low bitwidth numbers before being propagated to convolutional layers . as convolutions during forward/backward passes can now operate on low bitwidth weights and activations/gradients respectively , dorefa-net can use bit convolution kernels to accelerate both training and inference . moreover , as bit convolutions can be efficiently implemented on cpu , fpga , asic and gpu , dorefa-net opens the way to accelerate training of low bitwidth neural networks on this hardware .
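the error-feedback trick in the 1-bit sgd abstract above is compact enough to sketch directly : transmit only the sign of ( gradient + residual ) plus one scale , and carry the quantization error into the next minibatch . the mean-absolute-value scaling used here is a common variant and an assumption , not necessarily the exact scheme of the paper .

```python
import numpy as np

class OneBitCompressor:
    def __init__(self, dim):
        self.residual = np.zeros(dim)    # quantization error carried forward

    def compress(self, grad):
        v = grad + self.residual         # add back the previous error
        scale = np.mean(np.abs(v))       # one float transmitted with the signs
        q = scale * np.sign(v)           # the 1-bit-per-value message
        self.residual = v - q            # error feedback for the next minibatch
        return q

rng = np.random.default_rng(1)
comp = OneBitCompressor(dim=1000)
g = rng.normal(size=1000)
q = comp.compress(g)
print("one-step relative error :", np.linalg.norm(q - g) / np.linalg.norm(g))
```

the per-step error is large , but because the residual is re-injected at the next step , the error does not accumulate over iterations , which is why accuracy is largely preserved .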
our experiments on the svhn and imagenet datasets prove that dorefa-net can achieve prediction accuracy comparable to its 32-bit counterparts . for example , a dorefa-net derived from alexnet that has 1-bit weights and 2-bit activations can be trained from scratch using 6-bit gradients to get 46.1 % top-1 accuracy on the imagenet validation set . the dorefa-net alexnet model is released publicly . story_separator_special_tag we study federated learning ( fl ) , where power-limited wireless devices utilize their local datasets to collaboratively train a global model with the help of a remote parameter server ( ps ) . the ps has access to the global model and shares it with the devices for local training , and the devices return the result of their local updates to the ps to update the global model . this framework requires downlink transmission from the ps to the devices and uplink transmission from the devices to the ps . the goal of this study is to investigate the impact of the bandwidth-limited shared wireless medium in both the downlink and uplink on the performance of fl with a focus on the downlink . to this end , the downlink and uplink channels are modeled as fading broadcast and multiple access channels , respectively , both with limited bandwidth . for downlink transmission , we first introduce a digital approach , where a quantization technique is employed at the ps to broadcast the global model update at a common rate such that all the devices can decode it . next , we propose analog downlink transmission , where story_separator_special_tag we study collaborative machine learning at the wireless edge , where power and bandwidth-limited devices ( workers ) , with limited local datasets , implement distributed stochastic gradient descent ( dsgd ) over-the-air with the help of a remote parameter server ( ps ) . we consider a wireless multiple access channel ( mac ) from the workers to the ps for communicating the local gradient estimates . we first introduce a digital dsgd ( d-dsgd ) scheme , assuming that the workers operate on the boundary of the mac capacity region at each iteration of the dsgd algorithm , and digitize their estimates within the bit budget allowed by the employed power allocation . we then introduce an analog scheme , called a-dsgd , motivated by the additive nature of the wireless mac , where the workers send their gradient estimates over the mac through the available channel bandwidth without employing any digital code . numerical results show that a-dsgd converges much faster than d-dsgd . the improvement is particularly compelling in the low power and low bandwidth regimes . we also observe that the performance of a-dsgd improves with the number of workers , while d-dsgd deteriorates , limiting story_separator_special_tag a key learning scenario in large-scale applications is that of federated learning , where a centralized model is trained based on data originating from a large number of clients . we argue that , with the existing training and inference , federated models can be biased towards different clients . instead , we propose a new framework of agnostic federated learning , where the centralized model is optimized for any target distribution formed by a mixture of the client distributions . we further show that this framework naturally yields a notion of fairness . we present data-dependent rademacher complexity guarantees for learning with this objective , which guide the definition of an algorithm for agnostic federated learning .
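the analog schemes above ( a-dsgd and the analog downlink ) all exploit the same physical fact : if workers transmit their gradient estimates simultaneously , the multiple-access channel delivers their superposition , so the server obtains a noisy sum for free . the toy simulation below ignores fading and per-device power control , which real schemes must handle ; all parameters are illustrative .

```python
import numpy as np

rng = np.random.default_rng(2)
k, d, noise_std = 20, 1000, 0.1
grads = rng.normal(size=(k, d))          # local gradient estimates

# the additive multiple-access channel : superposition plus receiver noise
received = grads.sum(axis=0) + noise_std * rng.normal(size=d)
ota_avg = received / k                   # server-side rescaling
true_avg = grads.mean(axis=0)

err = np.linalg.norm(ota_avg - true_avg) / np.linalg.norm(true_avg)
print(f"relative aggregation error : {err:.4f}")
```

note that after rescaling , the noise term per coordinate shrinks as 1/k , which is one way to read the observation that a-dsgd improves with the number of workers .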
we also give a fast stochastic optimization algorithm for solving the corresponding optimization problem , for which we prove convergence bounds , assuming a convex loss function and hypothesis set . we further empirically demonstrate the benefits of our approach on several datasets . beyond federated learning , our framework and algorithm can be of interest to other learning scenarios such as cloud computing , domain adaptation , drifting , and other contexts where the training and test distributions do not coincide story_separator_special_tag for future internet-of-things based big data applications , data collection from ubiquitous smart sensors with limited spectrum bandwidth is very challenging . on the other hand , to interpret the meaning behind the collected data , it is also challenging for an edge fusion center running computing tasks over large data sets with a limited computation capacity . to tackle these challenges , by exploiting the superposition property of the multiple-access channel and functional decomposition , the recently proposed technique , over-the-air computation ( aircomp ) , enables an effective joint data collection and computation from concurrent sensor transmissions . in this paper , we focus on a single-antenna aircomp system consisting of k sensors and one receiver . we consider an optimization problem to minimize the computation mean-squared error ( mse ) of the k sensors ' signals at the receiver by optimizing the transmitting-receiving ( tx-rx ) policy , under the peak power constraint of each sensor . although the problem is not convex , we derive the computation-optimal policy in closed form . also , we comprehensively investigate the ergodic performance of the aircomp system , and the scaling laws of the average computation mse ( acm ) story_separator_special_tag over-the-air computation ( aircomp ) shows great promise to support fast data fusion in internet-of-things ( iot ) networks . aircomp typically computes desired functions of distributed sensing data by exploiting superposed data transmission in multiple access channels . to overcome its reliance on channel state information ( csi ) , this work proposes a novel blind over-the-air computation ( blaircomp ) without requiring csi access , particularly for low complexity and low latency iot networks . to solve the resulting non-convex optimization problem without the initialization dependency exhibited by the solutions of a number of recently proposed efficient algorithms , we develop a wirtinger flow solution to the blaircomp problem based on random initialization . we establish the global convergence guarantee of wirtinger flow with random initialization for the blaircomp problem , which enjoys a model-agnostic and natural initialization implementation for practitioners with theoretical guarantees . specifically , in the first stage of the algorithm , the iteration of randomly initialized wirtinger flow given sufficient data samples can enter a local region that enjoys strong convexity and strong smoothness within a few iterations . we also prove the estimation error of blaircomp in the local region to be sufficiently small story_separator_special_tag motivated by various applications in distributed machine learning ( ml ) in massive wireless sensor networks , this paper addresses the problem of computing approximate values of functions over the wireless channel and provides examples of applications of our results to distributed training and ml-based prediction .
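the agnostic federated learning objective discussed above is a minimax problem : optimize the model for the worst-case mixture of client distributions . a generic way to approximate such problems is to alternate gradient steps on the model with multiplicative-weight updates on the mixture ; the sketch below does exactly that and is a stand-in , not the paper 's actual algorithm . data and step sizes are illustrative .

```python
import numpy as np

rng = np.random.default_rng(3)
d, n_clients = 5, 3
# deliberately mismatched linear-regression clients
data = []
for _ in range(n_clients):
    X = rng.normal(size=(50, d))
    data.append((X, X @ rng.normal(size=d) + 0.1 * rng.normal(size=50)))

w = np.zeros(d)
lam = np.ones(n_clients) / n_clients        # mixture weights ( dual variable )
for step in range(200):
    losses, grads = [], []
    for X, y in data:
        r = X @ w - y
        losses.append(np.mean(r ** 2))
        grads.append(2 * X.T @ r / len(X))
    w -= 0.02 * sum(l * g for l, g in zip(lam, grads))   # primal descent
    lam *= np.exp(0.1 * np.array(losses))                # dual multiplicative step
    lam /= lam.sum()                                     # stay on the simplex

print("per-client losses :", np.round(losses, 3))
```

the mixture weights drift toward the hardest client , so the learned model hedges against the worst-case distribution instead of the average one .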
the `` over-the-air '' computation of a function of data captured at different wireless devices has a huge potential for reducing the communication cost , which is needed , for example , for the training of ml models . it is of particular interest to massive wireless scenarios because , as shown in this paper , its communication cost for training scales more favorably with the number of devices than that of traditional schemes that reconstruct all the data . we consider noisy fast-fading channels that pose major challenges to the `` over-the-air '' computation . as a result , function values are approximated from superimposed noisy signals transmitted by different devices . the fading and noise processes are not limited to gaussian distributions , and are assumed to be in the more general class of sub-gaussian distributions . our result does not assume necessarily independent fading and noise , thus allowing for correlations over time and story_separator_special_tag intelligent internet of things ( iot ) will be transformative with the advancement of artificial intelligence and high-dimensional data analysis , shifting from `` connected things '' to `` connected intelligence . '' this shall unleash the full potential of intelligent iot in a plethora of exciting applications , such as self-driving cars , unmanned aerial vehicles , healthcare , robotics , and supply chain finance . these applications drive the need to develop revolutionary computation , communication , and artificial intelligence technologies that can make low-latency decisions with massive real-time data . to this end , federated machine learning , as a disruptive technology , has emerged to distill intelligence from the data at the network edge , while guaranteeing device privacy and data security . however , the limited communication bandwidth is a key bottleneck of model aggregation for federated machine learning over radio channels . in this article , we shall develop an over-the-air computation-based communication-efficient federated machine learning framework for intelligent iot networks via exploiting the waveform superposition property of a multi-access channel . reconfigurable intelligent surface is further leveraged to reduce the model aggregation error via enhancing the signal strength by reconfiguring the wireless story_separator_special_tag self-organizing networks ( sons ) can help to manage the severe interference in dense heterogeneous networks ( hetnets ) . given their need to automatically configure power and other settings , machine learning is a promising tool for data-driven decision making in sons . in this paper , a hetnet is modeled as a dense two-tier network with conventional macrocells overlaid with denser small cells ( e.g . femto or pico cells ) . first , a distributed framework based on the multi-agent markov decision process is proposed that models the power optimization problem in the network . second , we present a systematic approach for designing a reward function based on the optimization problem . third , we introduce the q-learning-based distributed power allocation algorithm ( q-dpa ) as a self-organizing mechanism that enables ongoing transmit power adaptation as new small cells are added to the network . furthermore , the sample complexity of the q-dpa algorithm to achieve $ \epsilon $ -optimality with high probability is provided .
we demonstrate , at the density of several thousand femtocells per km2 , that the required quality of service of a macrocell user can be maintained via the proper selection of story_separator_special_tag self-organizing networks ( son ) aim at simplifying network management ( nm ) and optimizing network capital and operational expenditure through automation . most son functions ( sfs ) are rule-based control structures , which evaluate metrics and decide actions based on a set of rules . these rigid structures are , however , very complex to design since rules must be derived for each sf in each possible scenario . in practice , rules only support generic behavior , which cannot respond to the specific scenarios in each network or cell . moreover , son coordination becomes very complicated with such varied control structures . in this paper , we propose to advance son toward cognitive cellular networks ( ccn ) by adding cognition that enables the sfs to independently learn the required optimal configurations . we propose a generalized q-learning framework for the ccn functions and show how the framework fits into a general sf control loop . we then apply this framework to two functions , mobility robustness optimization ( mro ) and mobility load balancing ( mlb ) . our results show that the mro function learns to optimize handover performance while the mlb function story_separator_special_tag with the growing deployment of cellular networks , operators have to devote significant manual effort to network management . as a result , self-organizing networks ( sons ) have become increasingly important in order to raise the level of automated operation in cellular technologies . in this context , load balancing ( lb ) and handover optimization ( hoo ) have been identified by industry as key self-organizing mechanisms for the radio access networks ( rans ) . however , most efforts have been focused on developing a stand-alone entity for each self-organizing mechanism , which will run in parallel with other entities , as well as designing coordination mechanisms in charge of stabilizing the network as a whole . due to the importance of lb and hoo , in this paper , a unified self-management mechanism based on fuzzy logic and reinforcement learning is proposed . in particular , the proposed algorithm modifies handover parameters to optimize the main key performance indicators related to lb and hoo . results show that the proposed scheme effectively provides better performance than independent entities running simultaneously in the network . story_separator_special_tag in this paper we present simulation results of a self-organizing network in a long-term-evolution ( lte ) mobile communication system that uses two optimizing algorithms at the same time : load balancing and handover parameter optimization . based on previous work , we extend the optimization by a combined use case . we present the interactions of the two son algorithms and show an example of a coordination system . the coordination system for self-optimization observes system performance and controls the son algorithms . as both son algorithms deal with the handover decision itself , not only interactions , but also conflicts in the observation and control of the system are to be expected and are observed .
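the q-learning machinery behind q-dpa and the cognitive son functions above fits in a short tabular sketch : an agent picks a transmit-power level , observes a reward that trades its own rate against the interference it causes , and updates a q-table . the states , actions and reward model below are illustrative toys , not the papers ' system models .

```python
import numpy as np

rng = np.random.default_rng(4)
n_states, n_actions = 4, 3               # e.g. interference level x power level
Q = np.zeros((n_states, n_actions))
alpha, gamma, eps = 0.1, 0.9, 0.1

def step(state, action):
    # toy environment : more power raises own rate but penalizes interference
    rate = np.log2(1 + (action + 1) / (state + 1))
    penalty = 0.3 * action
    return rate - penalty, int(rng.integers(n_states))

s = 0
for _ in range(20000):
    a = int(rng.integers(n_actions)) if rng.random() < eps else int(np.argmax(Q[s]))
    r, s2 = step(s, a)
    Q[s, a] += alpha * (r + gamma * Q[s2].max() - Q[s, a])   # q-learning update
    s = s2

print("learned power level per interference state :", Q.argmax(axis=1))
```

in this toy , the learned policy backs off its transmit power as the interference state worsens , which is the qualitative behavior the son functions above aim for .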
the example of a coordination system here is not the optimal solution covering all aspects , but rather a working solution that shows performance equal to the individual algorithms or , in the best case , combines the strengths of the algorithms and achieves even better performance , albeit as a localized gain in time and area . story_separator_special_tag recently , a number of technologies have been developed to promote vehicular networks . when vehicles are associated with heterogeneous base stations ( e.g. , macrocells , picocells , and femtocells ) , one of the most important problems is to balance the load among these base stations . different from common mobile networks , data traffic in vehicular networks can be observed to have regularities in the spatial-temporal dimension due to the periodicity of urban traffic flow . by taking advantage of this feature , we propose an online reinforcement learning approach , called orla . it is a distributed user association algorithm for network load balancing in vehicular networks . based on historical association experiences , orla can obtain a good association solution through continual learning from the dynamic vehicular environment . in the long run , the real-time feedback and the regular traffic association patterns both help orla cope well with the network dynamics . in experiments , we use qiangsheng taxi movement data to evaluate the performance of orla . our experiments verify that orla achieves higher-quality load balancing compared with other popular association methods . story_separator_special_tag self-organizing networks ( son ) is a collection of functions for automatic configuration , optimization , and healing of networks , and mobility optimization is one of the main functions of self-organized cellular networks . state-of-the-art mobility robustness optimization ( mro ) schemes have relied on rule-based recommender systems to search the parameter space ; yet it is unwieldy to design rules for all possible mobility patterns in any network . in this regard , we present a deep reinforcement learning-based mro solution ( drl-mro ) , which learns the appropriate parameter values for each mobility pattern in individual cells . the optimal setting of handover parameters also depends on the distribution of users and their velocities in the network . in this framework , an effective mobility-aware load balancing approach is applied to autonomously configure the parameters in accordance with the mobility patterns , so that approximately the same quality level is provided for each subscriber . the simulation results show that the mobility robustness optimization function not only learns to optimize handover ( ho ) performance , but also learns how to distribute excess load throughout the network . the experimental results prove that this solution story_separator_special_tag in this article , we propose a deep reinforcement learning ( drl ) -based mobility load balancing ( mlb ) algorithm along with a two-layer architecture to solve the large-scale load balancing problem for ultradense networks ( udns ) . our contribution is threefold . first , this article proposes a two-layer architecture to solve the large-scale load balancing problem in a self-organized manner .
the proposed architecture can alleviate global traffic variations by dynamically grouping small cells into self-organized clusters according to their historical loads , and further adapt to local traffic variations through intracluster load balancing afterwards . second , for the intracluster load balancing , this article proposes an off-policy drl-based mlb algorithm to autonomously learn the optimal mlb policy under an asynchronous parallel learning framework , without any prior knowledge assumed over the underlying udn environments . moreover , the algorithm enables joint exploration with multiple behavior policies , such that traditional mlb methods can be used to guide the learning process , thereby improving the learning efficiency and stability . third , this article proposes an offline-evaluation-based safeguard mechanism to ensure that the online system can always operate with the optimal and well-trained mlb story_separator_special_tag the past few years have witnessed compelling applications of satellite communications and networking in our daily life . due to the extremely high moving speeds and limited networking resources of leo satellites , how to optimize inter-satellite traffic has received a great amount of attention from both academia and industry . in this paper , we propose a hybrid satellite network traffic control paradigm . in our architecture , the centralized platform collects the global state and the joint action from each agent during the training phase to ease training , while during execution each agent can derive its action from its local state through the trained policy . besides , we adopt a multi-agent actor-critic algorithm , multi-agent actor-critic for mixed cooperative-competitive environments ( maddpg ) , in our architecture . in addition , some simulation results are presented to evaluate the correctness of our architecture and algorithm . story_separator_special_tag the growing use of unmanned aerial vehicles ( uavs ) for various applications requires ubiquitous and reliable connectivity for safe control and data exchange between these devices and ground terminals . depending on the application , uav-mounted wireless equipment can either be an aerial user equipment ( aue ) that co-exists with the terrestrial users , or it can be a part of wireless infrastructure providing a range of services to the ground users . for instance , an aue can be used for real-time search and rescue , and an aerial base station ( abs ) can enhance the coverage , capacity and energy efficiency of wireless networks . in both cases , uav-based solutions are scalable , mobile , and fast to deploy . however , several technical challenges have to be addressed . in this work , we present a tutorial on wireless communication with uavs , taking into account a wide range of potential applications . the main goal of this work is to provide a complete overview of the main scenarios ( aue and abs ) , channel and performance models , compare them , and discuss open research points . this work gives a comprehensive overview of the research story_separator_special_tag the use of unmanned aerial vehicles ( uavs ) serving as aerial base stations is expected to become predominant in the next decade . however , in order for this technology to unfold its full potential , it is necessary to develop a fundamental understanding of the distinctive features of air-to-ground ( a2g ) links .
as a contribution in this direction , this paper proposes a generic framework for the analysis and optimization of a2g systems . in contrast to the existing literature , this framework incorporates both a height-dependent path loss exponent and small-scale fading , and unifies a widely used ground-to-ground channel model with that of a2g for the analysis of large-scale wireless networks . we derive analytical expressions for the optimal uav height that minimizes the outage probability of an arbitrary a2g link . moreover , our framework allows us to derive a height-dependent closed-form expression for the outage probability of an a2g cooperative communication network . our results suggest that the optimal location of the uavs with respect to the ground nodes is not changed by the inclusion of ground relays . this enables interesting insights about the deployment of future a2g networks story_separator_special_tag in unmanned aerial vehicle ( uav ) applications , the uav 's limited energy supply and storage have triggered the development of intelligent energy-conserving scheduling solutions . in this paper , we investigate energy minimization for uav-aided communication networks by jointly optimizing data-transmission scheduling and uav hovering time . the formulated problem is combinatorial and non-convex with bilinear constraints . to tackle the problem , firstly , we provide an optimal algorithm ( opt ) and a golden section search heuristic algorithm ( gss-heu ) . both solutions serve as offline performance benchmarks which might not be suitable for online operations . towards this end , from a deep reinforcement learning ( drl ) perspective , we propose an actor-critic-based deep stochastic online scheduling ( ac-dsos ) algorithm and develop a set of approaches to confine the action space . compared to conventional rl/drl , the novelty of ac-dsos lies in handling two major issues , i.e. , an exponentially increasing action space and infeasible actions . numerical results show that ac-dsos is able to provide feasible solutions and save around 25-30 % energy compared to two conventional deep ac-drl algorithms . compared to the developed gss-heu , ac-dsos consumes story_separator_special_tag federated learning ( fl ) allows multiple edge computing nodes to jointly build a shared learning model without having to transfer their raw data to a centralized server , thus reducing communication overhead . however , fl still faces a number of challenges such as non-independent and identically distributed data and heterogeneity of user equipments ( ues ) . enabling a large number of ues to join the training process in every round raises a potential issue of heavy global communication burden . to address these issues , we generalize the current state-of-the-art federated averaging ( fedavg ) by adding a weight-based proximal term to the local loss function . the proposed fl algorithm runs stochastic gradient descent in parallel on a sampled subset of the total ues with replacement during each global round . we provide a convergence upper bound characterizing the tradeoff between convergence rate and global rounds , showing that a small number of active ues per round still guarantees convergence .
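two of the abstracts above reduce to one-dimensional searches : the optimal uav height that minimizes outage , and the golden section search heuristic ( gss-heu ) used as an offline benchmark . below is a generic golden-section search over a toy unimodal outage-versus-height curve ; the objective is a made-up stand-in , not either paper 's model .

```python
import math

def golden_section_search(f, a, b, tol=1e-5):
    """narrow the bracket [a, b] around the minimizer of a unimodal f."""
    inv_phi = (math.sqrt(5) - 1) / 2            # ~0.618
    c, d = b - inv_phi * (b - a), a + inv_phi * (b - a)
    while abs(b - a) > tol:
        if f(c) < f(d):
            b, d = d, c
            c = b - inv_phi * (b - a)
        else:
            a, c = c, d
            d = a + inv_phi * (b - a)
    return (a + b) / 2

# toy trade-off : low altitude -> blockage , high altitude -> path loss
outage = lambda h: math.exp(-h / 50) + (h / 300) ** 2
h_opt = golden_section_search(outage, 10, 500)
print(f"optimal height ~ {h_opt:.1f} m , outage value {outage(h_opt):.3f}")
```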
next , we employ the proposed fl algorithm in wireless internet-of-things ( iot ) networks to minimize either total energy consumption or completion time of fl , where a simple yet efficient path-following algorithm story_separator_special_tag industry 4.0 aims to create a modern industrial system by introducing technologies such as cloud computing , intelligent robotics , and wireless sensor networks . in this article , we consider the multichannel access and task offloading problem in mobile-edge computing ( mec ) -enabled industry 4.0 and describe this problem in a multi-agent environment . to solve this problem , we propose a novel multiagent deep reinforcement learning ( madrl ) scheme . the solution enables edge devices ( eds ) to cooperate with each other , which can significantly reduce the computation delay and improve the channel access success rate . extensive simulation results with different system parameters reveal that the proposed scheme could reduce computation delay by 33.38 % and increase the channel access success rate by 14.88 % and channel utilization by 3.24 % compared to the traditional single-agent reinforcement learning method . story_separator_special_tag in general , there are two kinds of cooperative driving strategies for connected and automated vehicle merging problems : the planning-based strategy and the ad hoc negotiation-based strategy . the planning-based strategy aims to find the globally optimal passing order , but it is time-consuming when the number of considered vehicles is large . in contrast , the ad hoc negotiation-based strategy runs fast , but it always finds a locally optimal solution . in this paper , we propose a grouping-based cooperative driving strategy to make a good tradeoff between computation time and coordination performance . the key idea is to fix the passing orders for some vehicles whose inter-vehicle headways are small enough ( e.g. , smaller than the pre-selected grouping threshold ) . from the viewpoint of optimization , this method reduces the size of the solution space . then , two analyses are given to explain why this kind of strategy is good and how to determine suitable values for the strategy parameters . a series of simulation experiments are carried out to validate that the proposed strategy can yield a satisfactory coordination performance with less computation time and is promising to be used in practice story_separator_special_tag road accidents and traffic congestion are two critical problems for global transport systems . connected vehicles ( cv ) and automated vehicles ( av ) are among the most heavily researched and promising automotive technologies to reduce road accidents and improve road efficiency . however , both av and cv technologies have inherent shortcomings , for example , the line-of-sight sensing limitation of av sensors and the dependency of cvs on a high penetration rate . in this paper we present a cooperative connected intelligent vehicles ( cav ) framework . it is motivated by the observation that vehicles are increasingly intelligent with various levels of autonomous functionalities . the vehicles ' intelligence is boosted by more sensing and computing resources . these sensing and computing resources of cav vehicles and the transport infrastructure could be shared and exploited . with resource sharing and cooperation , cavs can have comprehensive perception of driving environments , and novel cooperative applications can be developed to improve road safety and efficiency ( rse ) .
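the grouping-based cooperative driving strategy above has a simple core : vehicles whose headways fall below a threshold are fused into a group whose internal passing order is fixed , so a planner only has to order the groups . the arrival times and the threshold below are illustrative .

```python
def group_vehicles(arrival_times, threshold=1.5):
    """partition sorted arrival times into groups with small internal headways."""
    times = sorted(arrival_times)
    groups, current = [], [times[0]]
    for prev, t in zip(times, times[1:]):
        if t - prev <= threshold:       # small headway -> same group
            current.append(t)
        else:                           # large headway -> start a new group
            groups.append(current)
            current = [t]
    groups.append(current)
    return groups

arrivals = [0.0, 0.8, 1.6, 5.0, 5.4, 9.9]
groups = group_vehicles(arrivals)
print(f"{len(arrivals)} vehicles -> {len(groups)} groups :", groups)
# the passing-order search shrinks from 6! orders to 3! group orders
```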
the key feature of the cooperative cav system is the cooperation within and across the key players in the road transport systems and across system layers . for example , story_separator_special_tag academic research in the field of autonomous vehicles has reached high popularity in recent years related to several topics such as sensor technologies , v2x communications , safety , security , decision making , control , and even legal and standardization rules . besides classic control design approaches , artificial intelligence and machine learning methods are present in almost all of these fields . another part of research focuses on different layers of motion planning , such as strategic decisions , trajectory planning , and control . a wide range of techniques in machine learning itself have been developed , and this article describes one of these fields , deep reinforcement learning ( drl ) . the paper provides insight into the hierarchical motion planning problem and describes the basics of drl . the main elements of designing such a system are the modeling of the environment , the modeling abstractions , the description of the state and the perception models , the appropriate rewarding , and the realization of the underlying neural network . the paper describes vehicle models , simulation possibilities and computational requirements . strategic decisions on different layers and the observation models , e.g. , continuous and story_separator_special_tag inverse reinforcement learning ( irl ) is the problem of inferring the reward function of an agent , given its policy or observed behavior . analogous to rl , irl is perceived both as a problem and as a class of methods . by categorically surveying the extant literature in irl , this article serves as a comprehensive reference for researchers and practitioners of machine learning as well as those new to it to understand the challenges of irl and select the approaches best suited for the problem at hand . the survey formally introduces the irl problem along with its central challenges such as the difficulty in performing accurate inference and its generalizability , its sensitivity to prior knowledge , and the disproportionate growth in solution complexity with problem size . the article surveys a vast collection of foundational methods grouped together by the commonality of their objectives , and elaborates on how these methods mitigate the challenges . we further discuss extensions to the traditional irl methods for handling imperfect perception , an incomplete model , learning multiple reward functions and nonlinear reward functions . the article concludes the survey with a discussion of some broad advances in story_separator_special_tag accurate behavior anticipation is essential for autonomous vehicles when navigating in close proximity to other vehicles , pedestrians , and cyclists . thanks to the recent advances in deep learning and inverse reinforcement learning ( irl ) , we observe a tremendous opportunity to address this need , which was once believed impossible given the complex nature of human decision making . in this article , we summarize the importance of accurate behavior modeling in autonomous driving and analyze the key approaches and major progress that researchers have made , focusing on the potential of deep irl ( d-irl ) to overcome the limitations of previous techniques . we provide quantitative and qualitative evaluations substantiating these observations .
although the field of d-irl has seen recent successes , its application to model behavior in autonomous driving is largely unexplored . as such , we conclude this article by summarizing the exciting pathways for future breakthroughs . story_separator_special_tag safe reinforcement learning can be defined as the process of learning policies that maximize the expectation of the return in problems in which it is important to ensure reasonable system performance and/or respect safety constraints during the learning and/or deployment processes . we categorize and analyze two approaches to safe reinforcement learning . the first is based on the modification of the optimality criterion , the classic discounted finite/infinite horizon , with a safety factor . the second is based on the modification of the exploration process through the incorporation of external knowledge or the guidance of a risk metric . we use the proposed classification to survey the existing literature , as well as suggesting future directions for safe reinforcement learning . story_separator_special_tag recent years have witnessed significant advances in reinforcement learning ( rl ) , which has registered tremendous success in solving various sequential decision-making problems in machine learning . most of the successful rl applications , e.g. , the games of go and poker , robotics , and autonomous driving , involve the participation of more than one single agent , which naturally falls into the realm of multi-agent rl ( marl ) , a domain with a relatively long history that has recently re-emerged due to advances in single-agent rl techniques . though empirically successful , theoretical foundations for marl are relatively lacking in the literature . in this chapter , we provide a selective overview of marl , with a focus on algorithms backed by theoretical analysis . more specifically , we review the theoretical results of marl algorithms mainly within two representative frameworks , markov/stochastic games and extensive-form games , in accordance with the types of tasks they address , i.e. , fully cooperative , fully competitive , and a mix of the two . we also introduce several significant but challenging applications of these algorithms . orthogonal to the existing reviews on marl , we highlight several story_separator_special_tag reinforcement learning ( rl ) algorithms have been around for decades and have been employed to solve various sequential decision-making problems . these algorithms , however , have faced great challenges when dealing with high-dimensional environments . the recent development of deep learning has enabled rl methods to derive optimal policies for sophisticated and capable agents , which can perform efficiently in these challenging environments . this article addresses an important aspect of deep rl related to situations that require multiple agents to communicate and cooperate to solve complex tasks . a survey of different approaches to problems related to multiagent deep rl ( madrl ) is presented , including nonstationarity , partial observability , continuous state and action spaces , multiagent training schemes , and multiagent transfer learning . the merits and demerits of the reviewed methods will be analyzed and discussed , with their corresponding applications explored . it is envisaged that this review provides insights about various madrl methods and can lead to the future development of more robust and highly useful multiagent learning methods for solving real-world problems .
scikit-learn is a python module integrating a wide range of state-of-the-art machine learning algorithms for medium-scale supervised and unsupervised problems . this package focuses on bringing machine learning to non-specialists using a general-purpose high-level language . emphasis is put on ease of use , performance , documentation , and api consistency . it has minimal dependencies and is distributed under the simplified bsd license , encouraging its use in both academic and commercial settings . source code , binaries , and documentation can be downloaded from http://scikit-learn.sourceforge.net . story_separator_special_tag education is a key factor in ensuring economic growth , especially for countries with growing economies . today , students have become more technologically savvy as teaching and learning use more advanced technology day in , day out . due to virtualized resources delivered through the internet , as well as dynamic scalability , cloud computing has continued to be adopted by more organizations . despite the looming financial crisis , there has been increasing pressure for educational institutions to deliver better services using minimal resources . learning institutions , both public and private , can utilize the potential advantage of cloud computing to ensure high-quality service regardless of the minimal resources available . cloud computing is taking center stage in academia because of its various benefits . various learning institutions use different cloud-based applications provided by the service providers to ensure that their students and other users can perform both academic as well as business-related tasks . thus , this research will seek to establish the benefits associated with the use of cloud computing in learning institutions . the solutions provided by the cloud technology ensure that the research and development , as well as the teaching , is more story_separator_special_tag even though the technology faces several significant challenges , many vendors and industry observers predict a bright future for cloud computing . story_separator_special_tag the indian education sector has seen a tremendous rise in the field of higher education , which has led to the demand for the automation of the education sector at all levels in order to cater to the information needs of various stakeholders . due to the burst in the field of communication technology , everyone expects access to relevant information in a fast , accurate and anytime-anywhere manner . information management of the educational sector , including statutory bodies , for the purpose of transparency and control through various information systems is the need and expectation of stakeholders . these stakeholders belong to diverse backgrounds and have different perspectives and information needs for their participation . the technological development in the abstraction and encapsulation of it resources has been successfully implemented with the help of cloud architecture . this technology not only caters to the various stakeholders ; it also ensures the sharing , availability , security and reliability of the information involved . this paper is a study of the cloud-based computing model with indian education as a scenario , comparing the various existing tools and applications that can be readily used story_separator_special_tag in this paper , we provide an overview of the cloud computing model and discuss its applications for collaboration between academic institutions .
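the api consistency emphasized in the scikit-learn abstract above is easy to demonstrate : every estimator exposes the same fit/predict interface . the dataset and classifier below are arbitrary choices for illustration .

```python
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_tr, y_tr)                      # the same api across all estimators
print("accuracy :", accuracy_score(y_te, clf.predict(X_te)))
```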
cloud computing is considered a typical paradigm that provides suitable , efficient network login to an appropriate pool of computing resources which can be provided and released with just nominal assiduity and service providers reciprocity . however , many organizations understand cloud computing in different ways . we briefly analyse the cloud computing applications in this paper , and describe some current and accomplished educational and research products . then , we go on to evaluate the successful applications of cloud computing models at educational institutions , and the different ways to implement cloud computing . finally , we present different education applications for education infrastructures which are implemented for academic use . story_separator_special_tag education today is becoming completely associated with the information technology on the content delivery , communication and collaboration . the need for servers , storage and software are highly demanding in the universities , colleges and schools . cloud computing is an internet based computing , whereby shared resources , software and information , are provided to computers and devices on-demand , like the electricity grid . currently , iaas ( infrastructure as a service ) , paas ( platform as a service ) and saas ( software as a service ) are used as business model for cloud computing . the paper also introduces the cloud computing infrastructure provided by microsoft , google and amazon web service . in this paper we will review the features the educational institutions can use from the cloud computing providers to increase the benefits of students and teachers . story_separator_special_tag molecular cloning has served as the foundation of technical expertise in labs worldwide for 30 years . no other manual has been so popular , or so influential . molecular cloning , fourth edition , by the celebrated founding author joe sambrook and new co-author , the distinguished hhmi investigator michael green , preserves the highly praised detail and clarity of previous editions and includes specific chapters and protocols commissioned for the book from expert practitioners at yale , u mass , rockefeller university , texas tech , cold spring harbor laboratory , washington university , and other leading institutions . the theoretical and historical underpinnings of techniques are prominent features of the presentation throughout , information that does much to help trouble-shoot experimental problems . for the fourth edition of this classic work , the content has been entirely recast to include nucleic-acid based methods selected as the most widely used and valuable in molecular and cellular biology laboratories . core chapters from the third edition have been revised to feature current strategies and approaches to the preparation and cloning of nucleic acids , gene transfer , and expression analysis . they are augmented by 12 new chapters which story_separator_special_tag 1. : / . . . . , . . , . . , . . . : - , 2006 . 648 . 2. lesk a. m. introduction to genomics . 3rd ed . new york : oxford university press , 2017 . 544 . 3. dankar f. k. , ptitsyn a. , dankar s. k. the development of large-scale de-identified biomedical databases in the age of genomics-principles and challenges // hum . genomics . 2018. vol . 12 ( 1 ) . p. 19. doi 10.1186/s40246-018-0147-5 . 4. langmead b. , nellore a. cloud computing for genomic data analysis and collaboration // nat . rev . genet . 2018. vol . 19 ( 4 ) . p. 208 219. 
doi 10.1038/nrg.2017.113 . 5. nakagawa h. , fujita m. whole genome sequencing analysis for cancer genomics and precision medicine // cancer sci . 2018. vol . 109 ( 3 ) . p. 513-522. doi 10.1111/cas.13505 . 6. hong d. , rhie a. , park s. s. , lee j. , ju y. s. , kim s. et al . fx : an rna-seq analysis tool on the cloud // bioinformatics . 2012. vol . 28. p. 721 story_separator_special_tag in a first aspect of the present invention , methods and apparatus implement graphical user interfaces for interactively specifying service level agreements used to regulate delivery of services to , for example , computer systems . an interactive graphical user interface allows a user to see the effects of varying values of service delivery variables on the level of service achievable in a particular service delivery context . in a second aspect , methods and apparatus of the present invention provision resources required for service delivery . in the second aspect , the methods and apparatus of the present invention select a service delivery model dependent on context . the selected service delivery model is used to provision resources that will be required during service delivery . in a third aspect , methods and apparatus of the present invention monitor compliance with a service level agreement during a service delivery event . in instances where a given service delivery does not comply with service level attributes specified in a controlling service level agreement , the methods and apparatus of the present invention take corrective action . story_separator_special_tag abstract : in this work we investigate the effect of the convolutional network depth on its accuracy in the large-scale image recognition setting . our main contribution is a thorough evaluation of networks of increasing depth using an architecture with very small ( 3x3 ) convolution filters , which shows that a significant improvement on the prior-art configurations can be achieved by pushing the depth to 16-19 weight layers . these findings were the basis of our imagenet challenge 2014 submission , where our team secured the first and the second places in the localisation and classification tracks respectively . we also show that our representations generalise well to other datasets , where they achieve state-of-the-art results . we have made our two best-performing convnet models publicly available to facilitate further research on the use of deep visual representations in computer vision . story_separator_special_tag educational establishments continue to seek opportunities to rationalize the way they manage their resources . the economic crisis that befell the world following the near collapse of the global financial system and the subsequent bailouts of local banks with billions of taxpayers ' money will continue to affect educational establishments that are likely to discover that governments will have less money than before to invest in them . it is argued in this article that cloud computing is likely to be one of those opportunities sought by the cash-strapped educational establishments in these difficult times and could prove to be of immense benefit ( and empowering in some situations ) to them due to its flexibility and pay-as-you-go cost structure . cloud computing is an emerging new computing paradigm for delivering computing services . this computing approach relies on a number of existing technologies , e.g . , the internet , virtualization , grid computing , web services , etc .
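to make the small-filter argument of the vgg abstract above concrete : two stacked 3x3 convolutions cover a 5x5 receptive field with fewer weights than a single 5x5 layer . a sketch of that arithmetic ( the channel count is an illustrative assumption ) :

# weights of a k x k convolution with c input and c output channels ( biases ignored )
def conv_weights(k, c):
    return k * k * c * c

c = 256
two_3x3 = 2 * conv_weights(3, c)  # two stacked 3x3 layers , 5x5 receptive field
one_5x5 = conv_weights(5, c)      # one 5x5 layer , same receptive field
print(two_3x3, one_5x5)           # 1179648 vs 1638400 , i.e. about 28 % fewer weights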
the provision of this service in a pay-as-you-go way through ( largely ) the popular medium of the internet gives this service a new distinctiveness . in this article , some aspects of this distinctiveness will be highlighted and story_separator_special_tag we study the production sensitivity of higgs bosons and , in relation to the possible existence of boson and a top quark pair at the energy scales that will be reached in the near future at projected linear colliders . we focus on the resonance and non-resonance effects of the annihilation processes and . furthermore , we develop and present novel analytical formulas to assess the total cross section involved in the production of higgs bosons . we find that the possibility of performing precision measurements for the higgs bosons and and for the boson is very promising at future linear colliders . story_separator_special_tag abstract cloud computing is becoming an adoptable technology for many organizations with its dynamic scalability and usage of virtualized resources as a service through the internet . it will likely have a significant impact on the educational environment in the future . cloud computing is an excellent alternative for educational institutions , especially those under budget shortage , enabling them to operate their information systems effectively without spending more capital on computers and network devices . universities take advantage of available cloud-based applications offered by service providers and enable their own users/students to perform business and academic tasks . in this paper , we will review what the cloud computing infrastructure will provide in the educational arena , especially in universities where the use of computers is more intensive , and what can be done to increase the benefits of common applications for students and teachers . story_separator_special_tag the increased use of public cloud computing for business , government , and education now seems inevitable . primarily due to lower cost and greater ease of access and use , wikis , social learning sites , and free or low-cost hosted services on such sites as facebook or google are now competing with traditional proprietary course management systems such as blackboard and angel . of particular concern is the merging of social media and virtual learning environments and the personally identifiable data that are stored on off-site computers . internal abuse ( misuse or sale of personal user data by vendors ) and insufficient protection against hacking and identity theft are additional concerns because of the large amounts of personally identifiable information ( pii ) that cloud vendors are storing . also , loss of management control or intellectual property rights over materials uploaded to free cloud services is a potential barrier for creators of learning objects . this chapter , designed for educational administrators and educators in the e-learning community , looks at the pros and cons of the use of current cloud services in education , with a focus on privacy and security issues . the united story_separator_special_tag in the current financial crisis and the growing need for quality education , educational institutions are under increasing pressure to deliver more from less . both public as well as private institutions can use the potential benefit of cloud computing to deliver better services even with fewer resources .
application of cloud computing in education not only relieves educational institutions of the burden of handling complex it infrastructure management and maintenance activities but also leads to huge cost savings . the government of india has an ambitious plan to raise the present 16 million enrolments in higher education to 42 million by 2020 , as well as to electronically interconnect india 's 572 universities , 25,000 colleges and at least 2,000 polytechnics to enable e-learning and content sharing across the country . the launch of low-cost , affordable aakash tablet pcs for the student community is likely to increase the number of users of online educational resources exponentially . in this paper we study the benefits of the use of cloud computing by educational institutions . story_separator_special_tag the utility model discloses an fpga-based ip core capable of freely converting multiple encoder protocols , and relates to the technical field of industrial control . the fpga-based multiple encoder protocol free-conversion ip core comprises a basic ip core module , a peripheral ip core module and a user-defined ip core module , wherein the peripheral ip core module is connected with an external upper computer through a parallel port , and is respectively connected with the basic ip core module and the custom ip core module through an avalon bus ; the basic ip core module is also connected with the custom ip core module through an avalon bus ; and the custom ip core module is internally provided with a control register and is also connected with an external encoder and at least one destination machine . according to the utility model , parallel processing , synchronous acquisition and on-demand conversion and output of encoder data are realized , the real-time performance and synchronism of information are high , and the defect of inconsistent encoder protocols is overcome . story_separator_special_tag an elastic neutron scattering instrument , the advanced neutron diffractometer/reflectometer ( and/r ) , has recently been commissioned at the national institute of standards and technology center for neutron research . the and/r is the centerpiece of the cold neutrons for biology and technology partnership , which is dedicated to the structural characterization of thin films and multilayers of biological interest . the instrument is capable of measuring both specular and nonspecular reflectivity , as well as crystalline or semicrystalline diffraction at wave-vector transfers up to approximately 2.20 å^-1 . a detailed description of this flexible instrument and its performance characteristics in various operating modes is given .
a goal of statistical language modeling is to learn the joint probability function of sequences of words in a language . this is intrinsically difficult because of the curse of dimensionality : a word sequence on which the model will be tested is likely to be different from all the word sequences seen during training . traditional but very successful approaches based on n-grams obtain generalization by concatenating very short overlapping sequences seen in the training set . we propose to fight the curse of dimensionality by learning a distributed representation for words which allows each training sentence to inform the model about an exponential number of semantically neighboring sentences . the model learns simultaneously ( 1 ) a distributed representation for each word along with ( 2 ) the probability function for word sequences , expressed in terms of these representations . generalization is obtained because a sequence of words that has never been seen before gets high probability if it is made of words that are similar ( in the sense of having a nearby representation ) to words forming an already seen sentence . training such large models ( with millions of parameters ) within a reasonable story_separator_special_tag continuous word representations , trained on large unlabeled corpora are useful for many natural language processing tasks . popular models that learn such representations ignore the morphology of words , by assigning a distinct vector to each word . this is a limitation , especially for languages with large vocabularies and many rare words . in this paper , we propose a new approach based on the skipgram model , where each word is represented as a bag of character $ n $ -grams . a vector representation is associated to each character $ n $ -gram ; words being represented as the sum of these representations . our method is fast , allowing to train models on large corpora quickly and allows us to compute word representations for words that did not appear in the training data . we evaluate our word representations on nine different languages , both on word similarity and analogy tasks . by comparing to recently proposed morphological word representations , we show that our vectors achieve state-of-the-art performance on these tasks . story_separator_special_tag we propose a unified neural network architecture and learning algorithm that can be applied to various natural language processing tasks including : part-of-speech tagging , chunking , named entity recognition , and semantic role labeling . this versatility is achieved by trying to avoid task-specific engineering and therefore disregarding a lot of prior knowledge . instead of exploiting man-made input features carefully optimized for each task , our system learns internal representations on the basis of vast amounts of mostly unlabeled training data . this work is then used as a basis for building a freely available tagging system with good performance and minimal computational requirements . story_separator_special_tag over the past few years , neural networks have re-emerged as powerful machine-learning models , yielding state-of-the-art results in fields such as image recognition and speech processing . more recently , neural network models started to be applied also to textual natural language signals , again with very promising results . 
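the subword abstract above represents each word as a bag of character n-grams ; a minimal sketch of that extraction , using the paper 's boundary symbols < and > and fixing n=3 for brevity :

# character n-grams of a word , with boundary markers as in the subword approach
def char_ngrams(word, n=3):
    padded = "<" + word + ">"
    return [padded[i:i + n] for i in range(len(padded) - n + 1)]

print(char_ngrams("where"))  # ['<wh', 'whe', 'her', 'ere', 're>']
# the word vector is then the sum of the vectors of these n-grams ( plus the word itself )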
this tutorial surveys neural network models from the perspective of natural language processing research , in an attempt to bring natural-language researchers up to speed with the neural techniques . the tutorial covers input encoding for natural language tasks , feed-forward networks , convolutional networks , recurrent networks and recursive networks , as well as the computation graph abstraction for automatic gradient computation . story_separator_special_tag this paper discusses the approach taken by the uwaterloo team to arrive at a solution for the fine-grained sentiment analysis problem posed by task 5 of semeval 2017. the paper describes the document vectorization and sentiment score prediction techniques used , as well as the design and implementation decisions taken while building the system for this task . the system uses text vectorization models , such as n-gram , tf-idf and paragraph embeddings , coupled with regression model variants to predict the sentiment scores . amongst the methods examined , unigrams and bigrams coupled with simple linear regression obtained the best baseline accuracy . the paper also explores data augmentation methods to supplement the training dataset . this system was designed for subtask 2 ( news statements and headlines ) . story_separator_special_tag this paper explores a simple and efficient baseline for text classification . our experiments show that our fast text classifier fasttext is often on par with deep learning classifiers in terms of accuracy , and many orders of magnitude faster for training and evaluation . we can train fasttext on more than one billion words in less than ten minutes using a standard multicore cpu , and classify half a million sentences among 312k classes in less than a minute . story_separator_special_tag many machine learning algorithms require the input to be represented as a fixed-length feature vector . when it comes to texts , one of the most common fixed-length features is bag-of-words . despite their popularity , bag-of-words features have two major weaknesses : they lose the ordering of the words and they also ignore semantics of the words . for example , `` powerful , '' `` strong '' and `` paris '' are equally distant . in this paper , we propose paragraph vector , an unsupervised algorithm that learns fixed-length feature representations from variable-length pieces of texts , such as sentences , paragraphs , and documents . our algorithm represents each document by a dense vector which is trained to predict words in the document . its construction gives our algorithm the potential to overcome the weaknesses of bag-of-words models . empirical results show that paragraph vectors outperform bag-of-words models as well as other techniques for text representations . finally , we achieve new state-of-the-art results on several text classification and sentiment analysis tasks . story_separator_special_tag vector-space word representations have been very successful in recent years at improving performance across a variety of nlp tasks . however , common to most existing work , words are regarded as independent entities without any explicit relationship among morphologically related words being modeled . as a result , rare and complex words are often poorly estimated , and all unknown words are represented in a rather crude way using only one or a few vectors . 
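a minimal sketch of the baseline reported in the semeval abstract above , tf-idf over unigrams and bigrams fed to a linear regressor ; the toy texts and sentiment targets are invented for illustration .

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LinearRegression

texts = ["shares surge after strong earnings", "stock plunges on weak outlook"]
scores = [0.8, -0.7]  # invented sentiment targets in [ -1 , 1 ]

vec = TfidfVectorizer(ngram_range=(1, 2))  # unigrams and bigrams
X = vec.fit_transform(texts)
model = LinearRegression().fit(X, scores)
print(model.predict(vec.transform(["strong earnings outlook"])))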
this paper addresses this shortcoming by proposing a novel model that is capable of building representations for morphologically complex words from their morphemes . we combine recursive neural networks ( rnns ) , where each morpheme is a basic unit , with neural language models ( nlms ) to consider contextual information in learning morphologically-aware word representations . our learned models outperform existing word representations by a good margin on word similarity tasks across many datasets , including a new dataset we introduce focused on rare words to complement existing ones in an interesting way . story_separator_special_tag we present an approach to speech recognition that uses only a neural network to map acoustic input to characters , a character-level language model , and a beam search decoding procedure . this approach eliminates much of the complex infrastructure of modern speech recognition systems , making it possible to directly train a speech recognizer using errors generated by spoken language understanding tasks . the system naturally handles out of vocabulary words and spoken word fragments . we demonstrate our approach using the challenging switchboard telephone conversation transcription task , achieving a word error rate competitive with existing baseline systems . to our knowledge , this is the first entirely neural-network-based system to achieve strong speech transcription results on a conversational speech task . we analyze qualitative differences between transcriptions produced by our lexicon-free approach and transcriptions produced by a standard speech recognition system . finally , we evaluate the impact of large context neural network character language models as compared to standard n-gram models within our framework . story_separator_special_tag we propose two novel model architectures for computing continuous vector representations of words from very large data sets . the quality of these representations is measured in a word similarity task , and the results are compared to the previously best performing techniques based on different types of neural networks . we observe large improvements in accuracy at much lower computational cost , i.e . it takes less than a day to learn high quality word vectors from a 1.6 billion words data set . furthermore , we show that these vectors provide state-of-the-art performance on our test set for measuring syntactic and semantic word similarities . story_separator_special_tag the recently introduced continuous skip-gram model is an efficient method for learning high-quality distributed vector representations that capture a large number of precise syntactic and semantic word relationships . in this paper we present several extensions that improve both the quality of the vectors and the training speed . by subsampling of the frequent words we obtain significant speedup and also learn more regular word representations . we also describe a simple alternative to the hierarchical softmax called negative sampling . an inherent limitation of word representations is their indifference to word order and their inability to represent idiomatic phrases . for example , the meanings of `` canada '' and `` air '' cannot be easily combined to obtain `` air canada '' . motivated by this example , we present a simple method for finding phrases in text , and show that learning good vector representations for millions of phrases is possible .
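the skip-gram abstract above mentions a simple data-driven method for finding phrases ; the paper scores candidate bigrams as ( count(ab) - delta ) / ( count(a) * count(b) ) and keeps those above a threshold . a sketch with invented corpus counts :

# phrase-detection heuristic from the skip-gram paper ; delta discounts very rare pairs
def phrase_score(count_ab, count_a, count_b, delta=5):
    return (count_ab - delta) / (count_a * count_b)

print(phrase_score(count_ab=900, count_a=1200, count_b=1000))   # true collocation -> large score
print(phrase_score(count_ab=30, count_a=50000, count_b=40000))  # chance co-occurrence -> near zero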
story_separator_special_tag continuous space language models have recently demonstrated outstanding results across a variety of tasks . in this paper , we examine the vector-space word representations that are implicitly learned by the input-layer weights . we find that these representations are surprisingly good at capturing syntactic and semantic regularities in language , and that each relationship is characterized by a relation-specific vector offset . this allows vector-oriented reasoning based on the offsets between words . for example , the male/female relationship is automatically learned , and with the induced vector representations , king - man + woman results in a vector very close to queen . we demonstrate that the word vectors capture syntactic regularities by means of syntactic analogy questions ( provided with this paper ) , and are able to correctly answer almost 40 % of the questions . we demonstrate that the word vectors capture semantic regularities by using the vector offset method to answer semeval-2012 task 2 questions . remarkably , this method outperforms the best previous systems . story_separator_special_tag in recent years , variants of a neural network architecture for statistical language modeling have been proposed and successfully applied , e.g . in the language modeling component of speech recognizers . the main advantage of these architectures is that they learn an embedding for words ( or other symbols ) in a continuous space that helps to smooth the language model and provide good generalization even when the number of training examples is insufficient . however , these models are extremely slow in comparison to the more commonly used n-gram models , both for training and recognition . as an alternative to an importance sampling method proposed to speed-up training , we introduce a hierarchical decomposition of the conditional probabilities that yields a speed-up of about 200 both during training and recognition . the hierarchical decomposition is a binary hierarchical clustering constrained by the prior knowledge extracted from the wordnet semantic hierarchy . story_separator_special_tag recent methods for learning vector space representations of words have succeeded in capturing fine-grained semantic and syntactic regularities using vector arithmetic , but the origin of these regularities has remained opaque . we analyze and make explicit the model properties needed for such regularities to emerge in word vectors . the result is a new global logbilinear regression model that combines the advantages of the two major model families in the literature : global matrix factorization and local context window methods . our model efficiently leverages statistical information by training only on the nonzero elements in a word-word cooccurrence matrix , rather than on the entire sparse matrix or on individual context windows in a large corpus . the model produces a vector space with meaningful substructure , as evidenced by its performance of 75 % on a recent word analogy task . it also outperforms related models on similarity tasks and named entity recognition . story_separator_special_tag recursive structure is commonly found in the inputs of different modalities such as natural scene images or natural language sentences . discovering this recursive structure helps us to not only identify the units that an image or sentence contains but also how they interact to form a whole .
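a minimal sketch of the vector-offset reasoning described above ( king - man + woman lands near queen ) , with tiny made-up vectors standing in for trained embeddings :

import numpy as np

# toy embeddings ; real word vectors would come from a trained model
emb = {
    "king":  np.array([0.9, 0.8, 0.1]),
    "man":   np.array([0.5, 0.1, 0.0]),
    "woman": np.array([0.5, 0.1, 0.9]),
    "queen": np.array([0.9, 0.8, 1.0]),
}

def cosine(u, v):
    return u @ v / (np.linalg.norm(u) * np.linalg.norm(v))

target = emb["king"] - emb["man"] + emb["woman"]
best = max((w for w in emb if w not in {"king", "man", "woman"}),
           key=lambda w: cosine(emb[w], target))
print(best)  # queen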
we introduce a max-margin structure prediction architecture based on recursive neural networks that can successfully recover such structure both in complex scene images as well as sentences . the same algorithm can be used both to provide a competitive syntactic parser for natural language sentences from the penn treebank and to outperform alternative approaches for semantic scene segmentation , annotation and classification . for segmentation and annotation our algorithm obtains a new level of state-of-the-art performance on the stanford background dataset ( 78.1 % ) . the features from the image parse tree outperform gist descriptors for scene classification by 4 % . story_separator_special_tag the exhaustivity of document descriptions and the specificity of index terms are usually regarded as independent . it is suggested that specificity should be interpreted statistically , as a function of term use rather than of term meaning . the effects on retrieval of variations in term specificity are examined , experiments with three test collections showing , in particular , that frequently-occurring terms are required for good overall performance . it is argued that terms should be weighted according to collection frequency , so that matches on less frequent , more specific , terms are of greater value than matches on frequent terms . results for the test collections show that considerable improvements in performance are obtained with this very simple procedure .
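the term-specificity argument above is what became the inverse document frequency weight ; a common form is idf(t) = log( n / df(t) ) , so matches on rare , specific terms count for more . a small sketch :

import math

# idf weight : rarer terms ( small document frequency ) receive higher weight
def idf(num_docs, doc_freq):
    return math.log(num_docs / doc_freq)

n = 10000
print(idf(n, 9000))  # frequent term -> weight near 0
print(idf(n, 10))    # rare , specific term -> much larger weight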
two special calorimeters are foreseen for the instrumentation of the very forward region of the ilc detector , a luminometer designed to measure the rate of low angle bhabha scattering events with a precision better than 10^-3 and a low polar angle calorimeter , adjacent to the beam-pipe . the latter will be hit by a large amount of beamstrahlung remnants . the amount and shape of these depositions will allow a fast luminosity estimate and the determination of beam parameters . the sensors of this calorimeter must be radiation hard . both devices will improve the hermeticity of the detector in the search for new particles . finely segmented and very compact calorimeters will match the requirements . due to the high occupancy , fast front-end electronics is needed . the design of the calorimeters developed and optimised with monte carlo simulations is presented . sensors and readout electronics asics have been designed and prototypes are available . results on the performance of these major components are summarised . story_separator_special_tag the ilc accelerator parameters and detector concepts are still under discussion in the worldwide community . as will be shown , the performance of the beamcal , the calorimeter in the very forward area of the ilc detector , is very sensitive to the beam parameter and crossing angle choices . we propose here beamcal designs for small ( 0 or 2 mrad ) and large ( 20 mrad ) crossing angles and report on the veto performance study done . as an illustration , the influence of several proposed beam parameter sets and crossing angles on the signal to background ratio in the stau search is estimated for a particular realization of the super-symmetric model . story_separator_special_tag we review the status of the calculation of next-to-next-to-leading order corrections to large angle bhabha scattering in pure qed . after discussing the electron-loop and photonic corrections , we focus on the recently calculated two-loop virtual corrections involving a heavy-flavor fermion loop . we conclude by assessing the numerical impact of these corrections on the bhabha scattering cross section at colliders operating at a center of mass energy of about 1 gev . story_separator_special_tag the beam calorimeter beamcal and the photon calorimeter gamcal of the ilc detectors will be used to obtain a set of parameters describing the beam properties at the interaction point . the real-time determination of the parameters is a challenge , but is mandatory to achieve the best possible luminosity for the ilc . we report on our studies of the possibilities to reduce the number of readout channels of beamcal without significant loss of precision in the beam parameter determination . in addition , the benefit of measuring the beamstrahlung photons ' energy using gamcal is evaluated . finally we report on the achievable precision of and the observed correlations between the beam parameters reconstructed in the multi-parameter regime . we also comment on possible solutions for how to deal with these correlations .
story_separator_special_tag european consensus for the management of patients with differentiated thyroid carcinoma of the follicular epithelium furio pacini , martin schlumberger , henning dralle , rossella elisei , johannes w a smit , wilmar wiersinga and the european thyroid cancer taskforce section of endocrinology and metabolism , university of siena , via bracci , 53100 siena , italy , service de medicine nucleaire , institut gustave roussy , villejuif , france , department of general , visceral and vascular surgery , university of halle , germany , department of endocrinology , university of pisa , italy , department of endocrinology and metabolic disease , leiden university medical center , the netherlands and department of endocrinology and metabolism , university of amsterdam , the netherlands story_separator_special_tag geant4 is a toolkit for simulating the passage of particles through matter . it includes a complete range of functionality including tracking , geometry , physics models and hits . the physics processes offered cover a comprehensive range , including electromagnetic , hadronic and optical processes , a large set of long-lived particles , materials and elements , over a wide energy range starting , in some cases , from 250 ev and extending in others to the tev energy range . it has been designed and constructed to expose the physics models utilised , to handle complex geometries , and to enable its easy adaptation for optimal use in different sets of applications . the toolkit is the result of a worldwide collaboration of physicists and software engineers . it has been created exploiting software engineering and object-oriented technology and implemented in the c++ programming language . it has been used in applications in particle physics , nuclear physics , accelerator design , space engineering and medical physics . story_separator_special_tag abstract a method is proposed to calculate the first and second moments of the spatial distribution of the energy of electromagnetic and hadronic showers measured in laterally segmented calorimeters . the technique uses a logarithmic weighting of the energy fraction observed in the individual detector cells . it is fast and simple , requiring no fitting or complicated corrections for the position or angle of incidence . the method is demonstrated with geant simulations of a bgo detector array . the position resolution results and the c/gp separation results are found to be equal or superior to those obtained with more complicated techniques . story_separator_special_tag a study was performed to define the constraints on the electronic readout for the proposed luminosity detector of the international linear collider . the required dynamical range was studied by simulating the passage of minimum ionizing particles and of electrons at the nominal energy of 250 gev through the detector . school of physics and astronomy , the raymond and beverly sackler faculty of exact sciences , tel aviv university , tel aviv israel . list of members can be found at : http : //www-zeuthen.desy.de/ilc/fcal/ story_separator_special_tag the international linear collider ( ilc ) is a proposed electron-positron collider with a center-of-mass energy of 500 gev , and a peak luminosity of 2 · 10^34 cm^-2 s^-1 .
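a sketch of the logarithmic weighting described in the position-reconstruction abstract above : each cell gets weight w_i = max( 0 , w0 + ln( e_i / e_tot ) ) and the shower position is the weighted centroid ; the cutoff w0 and the toy deposits are illustrative assumptions .

import math

def log_weighted_position(cell_x, cell_e, w0=4.0):
    e_tot = sum(cell_e)
    # logarithmic weights suppress low-energy tails ; w0 sets the cutoff
    w = [max(0.0, w0 + math.log(e / e_tot)) for e in cell_e]
    return sum(wi * xi for wi, xi in zip(w, cell_x)) / sum(w)

# toy transverse profile of a shower across five cells
print(log_weighted_position(cell_x=[-2, -1, 0, 1, 2], cell_e=[0.02, 0.2, 1.0, 0.25, 0.03]))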
the ilc will complement the large hadron collider , a proton-proton accelerator , and provide precision measurements , which may help in solving some of the fundamental questions at the frontier of scientific research , such as the origin of mass and the possible existence of new principles of nature . the linear collider community has set a goal to achieve a precision of 10^-4 on the luminosity measurement at the ilc . this may be accomplished by constructing a finely granulated calorimeter , which will measure bhabha scattering at small angles . the bhabha cross-section is theoretically known to great precision , yet the rate of bhabha scattering events , which would be measured by the luminosity detector , will be influenced by beam-beam effects , and by the inherent energy spread of the collider . the electroweak radiative effects can be calculated to high story_separator_special_tag lumical is the integrated luminosity calorimeter for the forward region of the future international linear collider . lumical 's two identical modules on either side of the interaction point will be used to estimate luminosity by counting bhabha scattering events , matching polar angle and energy deposition . using monte carlo simulation , we have determined that uninstrumented gaps in the sensor pads cause the energy resolution to be worse than the acceptable limit . this can only be remedied by discounting particles that impact these gaps . a second consequence is that , since lower energy depositions in the gap regions no longer must be ameliorated , it seems likely that the design of lumical can be simplified . agh-ust , krakow , poland ifj , krakow , poland story_separator_special_tag a study was performed to define the constraints on the electronic readout for the revised design of the proposed luminosity detector of the international linear collider . the required dynamical range was studied by simulating the passage of minimum ionizing particles and of electrons at the nominal energy of 250 gev through the detector . the minimal required digitization constant was determined , and the issue of channel occupancy was addressed . list of members can be found at : http : //www-zeuthen.desy.de/ilc/fcal/ story_separator_special_tag while the bunches in a linear collider cross once only , due to their small size they experience a strong beam-beam effect . guinea-pig is a code to simulate the impact of this effect on luminosity and background . a short overview of the program is given , with examples of its application to the background studies for tesla , the top threshold scan and a possible luminosity monitor , as well as some results for clic . story_separator_special_tag in this paper , we discuss optimization of the larger crossing angle interaction region of the linear collider , where the specially shaped transverse field of the detector integrated dipole can be reversed and adjusted to optimize the trajectories of the low energy pairs , so that the majority of them are directed into the extraction exit hole . this decreases the backscattering and makes the background in the 14 mrad ir similar to the background in the 2 mrad ir . story_separator_special_tag we present here an intra-nuclear cascade model implemented in geant4 5.0. the cascade model is based on a re-engineering of the inucl code . the models included are the bertini intra-nuclear cascade model with excitons , a preequilibrium model , a nucleus explosion model , a fission model , and an evaporation model .
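the 10^-4 luminosity goal quoted above is bounded from below by counting statistics : if the luminosity is obtained from the number of bhabha events accepted by the luminometer , the statistical part of the precision scales as 1/√n . a worked statement of that floor ( in practice systematics , not statistics , usually dominate ) :

l = n_bhabha / σ_bhabha^acc , δl/l |_stat = 1/√( n_bhabha ) ⇒ n_bhabha ≳ 10^8 accepted events for δl/l ~ 10^-4 .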
intermediate energy nuclear reactions from 100 mev to 3 gev are treated for protons , neutrons , pions , photons and nuclear isotopes . we present an overview of the models , review results achieved from simulations and make comparisons with experimental data . story_separator_special_tag beamcal is an electromagnetic sampling calorimeter in the very forward region of the detector at the ilc . beamcal will be hit by a large fraction of electron-positron pairs stemming from beamstrahlung . these pairs will create electromagnetic and hadronic showers which affect both the sensor layers and the front-end electronics of the detector . we report on our studies of background effects in beamcal sensor layers and electronics produced by e+e- beamstrahlung pairs as well as gammas and neutrons produced in photo- and electro-nuclear reactions of the e+e- pairs with the detector . the challenge of beamcal is to detect single highly energetic electrons or positrons on top of a widely spread background . story_separator_special_tag we describe a method to measure a nanometer beam size during collisions at future e+e- linear colliders by using e+e- pairs . a huge number of pairs are deflected in a strong coulomb potential made by an oncoming beam . since the potential is a function of the beam size ( σx , σy ) , the pairs are expected to carry this information , especially in their angular distributions . we investigated this process in detail by simulation , using the computer program abel under realistic experimental conditions as well as by analytic studies . besides the beam size , a vertical displacement between two beams and story_separator_special_tag it has been recognized that e+e- pair creation during the collision of intense beams in linear colliders will cause potential background problems for high energy experiments . detailed knowledge of the angular-momentum spectrum of these low energy pairs is essential to the design of the interaction region . in this paper , we derive the analytic formulae for the integrated cross-section of this process and we also modify the computer code abel ( analysis of beam-beam effects in linear colliders ) to include the pair creation processes , using the equivalent photon approximation . special care has been taken with the non-local nature of the virtual photon exchanges . the simulation results are then compared with the analytic formulae , and applied to next generation colliders such as jlc . story_separator_special_tag a compact and finely grained sandwich calorimeter is designed to instrument the very forward region of a detector at a future e+e- collider . the calorimeter will be exposed to low energy e+e- pairs originating from beamstrahlung , resulting in absorbed doses of about one mgy per year . gaas pad sensors interleaved with tungsten absorber plates are considered as an option for this calorimeter . several cr-doped gaas sensor prototypes were produced and irradiated with 8.5-10 mev electrons up to a dose of 1.5 mgy . the sensor performance was measured as a function of the absorbed dose . story_separator_special_tag the instrumentation of the very forward region of the ilc detectors is challenging .
at the lowest polar angles a beam calorimeter , beamcal , is foreseen . the main tasks of beamcal are the efficient detection of highly energetic particles at the lowest angles as well as the monitoring of the beam collisions . a large background of electron-positron pairs generated by beamstrahlung leads to an energy deposition of 10-20 tev per bunch crossing in beamcal . this corresponds locally to doses of up to 10 mgy per year of electromagnetic radiation . beamcal is a compact sandwich calorimeter . tungsten is the absorber and polycrystalline cvd diamond is under study as the sensor material to allow operation in this harsh radiation environment . the pcvd diamond sensors under investigation are from different manufacturers , fabricated usually on 4 inch wafers . we study the charge collection efficiencies of the diamond sensors as a function of the applied electric field and of the absorbed dose . in our application the homogeneity and linearity of the response are of critical importance . in a first test beam period we investigated the linearity of the response of different diamond sensors up to particle fluences of 10^7 mip/ story_separator_special_tag mosfet models for deep submicron technologies involve accurate and complex equations not suitable for hand analysis . although the gm/id design-oriented approach has overcome this limitation by combining hand calculations with data obtained from spice simulations , it has not been systematically used for noise calculations , since the dependence of noise on this parameter is not direct . an attempt to express noise as a function of gm/id is presented . by introducing the normalised noise concept , noise curves that depend solely on the device length and operation point can be obtained directly from spice simulations , and then used in the design flow . the main outcome is a simple design-oriented methodology for noise calculations that does not depend on equations for a specific technology or operating region , and that is easy to migrate among different technologies . story_separator_special_tag charge amplifiers represent the standard solution to amplify signals from capacitive detectors in high energy physics experiments . in a typical front-end , the noise due to the charge amplifier , and particularly from its input transistor , limits the achievable resolution . the classic approach to attenuate noise effects in mosfet charge amplifiers is to use the maximum power available , to use a minimum-length input device , and to establish the input transistor width in order to achieve the optimal capacitive matching at the input node . these conclusions , reached by analysis based on simple noise models , lead to sub-optimal results . in this work , a new approach to noise analysis for charge amplifiers based on an extension of the gm/id methodology is presented . this method combines circuit equations and results from spice simulations , both valid for all operation regions and including all noise sources . the method , which allows one to find the optimal operation point of the charge amplifier input device for maximum resolution , shows that the minimum device length is not necessarily optimal , that flicker noise is responsible for the non-monotonic noise versus current function , and story_separator_special_tag the investment costs given in this chapter include all components necessary for the baseline design of tesla , as described in chapters 3 to 9.
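a small numeric sketch of the gm/id idea used in the noise-analysis abstracts above : the transconductance follows from the chosen inversion level as gm = ( gm/id ) * id , and the input device 's thermal noise density then follows as √( 4kTγ / gm ) ; the bias point and the excess-noise factor γ are illustrative assumptions .

import math

K_B = 1.380649e-23  # boltzmann constant , j/k

def thermal_noise_density(gm, gamma=0.7, temp=300.0):
    # input-referred thermal noise voltage density , v/sqrt(hz)
    return math.sqrt(4 * K_B * temp * gamma / gm)

gm = 15.0 * 100e-6  # ( gm/id ) = 15 1/v at id = 100 ua -> gm = 1.5 ms
print(gm, thermal_noise_density(gm))  # ~1.5e-3 s , ~2.8 nv/sqrt(hz)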
not included are the costs for the high energy physics detector ( part iv ) and the x-ray fel experimental stations ( undulators , photon beam lines , etc. , see part v ) . all numbers are quoted at year 2000 prices . it is assumed that the manpower required for the various stages of the project ( i.e . preparation , procurement , testing , assembly and commissioning ) will be supplied by the existing manpower in the collaborating institutes ; however , some of this manpower may have to be hired . for this reason the manpower is quoted separately , and is not included in the total cost . to allow a comparison with other e+e- linear collider projects , the costs for the linear collider and x-fel have been separated as follows : the costs for the linear collider part of the tesla project amount to 3136 million eur ; the costs for the additional accelerator systems and civil engineering required for the x-fel are 241 million eur . story_separator_special_tag abstract a low noise , wide bandwidth preamplifier and signal processing filter were developed for high counting rate proportional counters . the filter consists of a seven pole gaussian integrator with a symmetrical weighting function and a continuously variable shaping time , s , of 8-50 ns ( fwhm ) , preceded by a second order pole zero circuit which cancels the long ( 1/t ) tails of the chamber signals . the preamplifier is an optimized common base input design with a 2 ns rise time and an equivalent noise input charge of less than 2000 r.m.s . electrons when connected to a chamber with 10 pf capacitance and at a filtering time , s , of 10 ns . story_separator_special_tag we derive cosmological constraints using a galaxy cluster sample selected from the 2500 deg^2 spt-sz survey . the sample spans the redshift range 0.25 < z < 1.75 . the sample is supplemented with optical weak gravitational lensing measurements of 32 clusters with 0.29 < z < 1.13 ( from magellan and the hubble space telescope ) and x-ray measurements of 89 clusters with 0.25 < z < 1.75 ( from chandra ) . we rely on minimal modeling assumptions : ( i ) weak lensing provides an accurate means of measuring halo masses , ( ii ) the mean sz and x-ray observables are related to the true halo mass through power-law relations in mass and the dimensionless hubble parameter e ( z ) with a priori unknown parameters , and ( iii ) there is ( correlated , lognormal ) intrinsic scatter and measurement noise relating these observables to their mean relations . we simultaneously fit for these astrophysical modeling parameters and for cosmology . assuming a flat λcdm model , in which the sum of neutrino masses is a free parameter , we measure ωm = 0.276 ± 0.047 , σ8 = 0.781 ± 0.037 , and σ8 ( ωm/0.3 ) story_separator_special_tag japanese encephalitis ( je ) is a common zoonosis caused by the japanese encephalitis virus ( jev ) with a high mortality and disability rate . to take timely preventive and control measures , early and rapid detection of je rna is necessary , but due to the characteristically brief and low viraemia , je rna detection remains challenging . in this study , a real-time nucleic acid sequence-based amplification ( rt-nasba ) assay was developed for rapid and simultaneous detection of jev . four pairs of primers were designed using a multiple genome alignment of all jev strains from genbank . the nasba assay was established and optimal reaction conditions were confirmed using primers and a probe targeting the ns1 gene of jev .
the specificity and sensitivity of the assay were compared with rt-pcr by using serial rna and virus cultivation dilutions . the results showed that the jev rt-nasba assay was established , and robust signals could be observed in 10 min with high specificity . the limit of detection of rt-nasba was 6 copies per reaction . the assay was thus 100 to 1,000 times more sensitive than rt-pcr . cross-reactivity was tested with other porcine pathogens , and negative story_separator_special_tag the silicon-tungsten calorimeter lumical , located in the very forward region of the future detector at the international linear collider , is proposed for precise luminosity measurement . one of the requirements to fulfil this task is the availability of information on the actual position of the calorimeter relative to the beam interaction area , which should be known with an accuracy of a few micrometers . in this paper the possible solutions for the positioning of the lumical detector using a laser alignment system ( las ) are discussed . the basic components of this system are laser beams and a ccd camera . the results of several displacement measurements are presented . the measurements achieved an accuracy of ± 0.5 µm in x , y and ± 1.5 µm in the z direction . further studies on the development of the laser alignment system are discussed . 1 institute of nuclear physics pan , cracow , poland 2 jagiellonian university , cracow , poland 3 agh , university of science and technology , cracow , poland 4 list of authors can be found at : http : //www-zeuthen.desy.de/ilc/fcal/ story_separator_special_tag abstract the laser alignment system of the zeus microvertex detector is described . the detector was installed in 2001 as part of an upgrade programme in preparation for the second phase of electron proton physics at the hera collider . the alignment system monitors the position of the vertex detector support structure with respect to the central tracking detector using semi-transparent amorphous-silicon sensors and diode lasers . the system is fully integrated into the general environmental monitoring of the zeus detector and data have been collected over a period of 5 years . the primary aim of defining periods of stability for track-based alignment has been achieved and the system is able to measure movements of the support structure to a precision of around 10 µm . story_separator_special_tag abstract a novel optical survey system for continuous alignment of hep experiments is described . the complete survey system is outlined and the underlying measurement technique , fsi , is described in detail . preliminary findings made with a laboratory demonstration system for fsi are presented . story_separator_special_tag the examination of a phd thesis marks an important stage in the phd student journey . here , the student 's research , thinking and writing are assessed by experts in their field . yet , in the early stages of candidature , students often do not know what is expected of their thesis , nor what examiners will scrutinise and comment on . however , what examiners look for , expect and comment on has been the subject of recent research . this article synthesises the literature on examiner expectations into a framework and tool that can assist students to understand phd thesis examination expectations . suggestions of how this tool may be used as part of a broader supervision pedagogy are offered .
story_separator_special_tag in this lecture we review the current understanding of the beam-beam interaction in e+e- linear colliders . strictly speaking , the two effects , disruption and beamstrahlung , during the beam-beam interaction are coupled . this is self-evident because without deflection there would be no radiation , and with radiation during bending the remaining trajectory of the particles would not be the same . fortunately , in a large range of beam parameters the average disruption angles are rather small , and the emission of hard photons is relatively rare . for these reasons the two effects can be isolated from each other to first order , and our study of the issue can be greatly simplified . this also happens to be how the subject developed historically . we discuss the effects associated with disruption with negligible beamstrahlung . here , an important parameter , the disruption parameter d , is introduced . we then discuss the maximum and rms disruption angles . the analytic scaling laws for d >> 1 and d << 1 are then compared with simulation results . next we investigate the enhancement of luminosity due to disruption . together with the story_separator_special_tag the intense radiation , called beamstrahlung , during the collision of e+e- beams in a linear collider is reviewed , with attention to the influence of beam-beam disruption on the beamstrahlung spectrum . we then discuss the various detector backgrounds induced by these hard beamstrahlung photons , as well as the weizsäcker-williams photons , through various qed and qcd processes , namely coherent and incoherent e+e- pair creation and hadron production and minijet yields . story_separator_special_tag this dissertation is a detailed study of several aspects of beamstrahlung and related phenomena . the problem is formulated as the relativistic scattering of an electron from a strong but slowly varying potential . the solution is readily interpreted in terms of a classical electron trajectory , and differs from the solution of the corresponding classical problem mainly in the effect of quantum recoil due to the emission of hard photons . when the general solution is expanded for the case of an almost-uniform field , the leading term is identical to the well-known formula for quantum synchrotron radiation . the first non-leading term is negligible in all cases of interest where the expansion is valid . in applying the standard synchrotron radiation formula to the beamstrahlung problem , the effects of radiation reaction on the emission of multiple photons can be significant for some machine designs . another interesting feature is the helicity dependence of the radiation process , which is relevant to the case where the electron beam is polarized . the inverse process of coherent electron-positron pair production by a beamstrahlung photon is a potentially serious background source at future colliders , since low-energy pairs can exit story_separator_special_tag the international linear collider ( ilc ) is an electron-positron collider with a variable center-of-mass energy √s between 200 and 500 gev . the small bunch sizes needed to reach the design luminosity of l_peak = 2 · 10^34 cm^-2 s^-1 necessary for the physics goals of the ilc cause the particles to radiate beamstrahlung during the bunch crossings . beamstrahlung reduces the center-of-mass energy from its nominal value to the effective center-of-mass energy √s' .
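for the beam-beam lecture above , the disruption parameter it introduces is commonly written ( e.g . in the yokoya-chen treatment ) as

d_x,y = 2 n r_e σ_z / ( γ σ_x,y ( σ_x + σ_y ) ) ,

with n the bunch population , r_e the classical electron radius , σ_z the bunch length , γ the lorentz factor and σ_x , σ_y the transverse beam sizes ; d >> 1 means particles oscillate inside the oncoming bunch during the crossing , while d << 1 means they receive only a single small kick .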
the spectrum of the effective center-of-mass energy √s' is the differential luminosity dl/d√s' , which has to be known to precisely measure particle masses through threshold scans . the differential luminosity can be measured by using bhabha events . the real differential luminosity is simulated by the guineapig [ 1 ] software . the energy spectrum of the bhabha events is measured by the detector and compared to the energy spectrum of monte carlo ( mc ) bhabha events with a known differential luminosity given by an approximate parameterization . the parameterization is used to assign each mc event a weight . by re-weighting the events until the energy spectra from the real and the mc bhabha events match , the story_separator_special_tag jun n-terminal kinase ( jnk ) is a stress-activated protein kinase that can be induced by inflammatory cytokines , bacterial endotoxin , osmotic shock , uv radiation , and hypoxia . we report the identification of an anthrapyrazolone series with significant inhibition of jnk1 , -2 , and -3 ( ki = 0.19 µm ) . sp600125 is a reversible atp-competitive inhibitor with > 20-fold selectivity vs. a range of kinases and enzymes tested . in cells , sp600125 dose dependently inhibited the phosphorylation of c-jun and the expression of the inflammatory genes cox-2 , il-2 , ifn-γ and tnf-α , and prevented the activation and differentiation of primary human cd4 cell cultures . in animal studies , sp600125 blocked ( bacterial ) lipopolysaccharide-induced expression of tumor necrosis factor-α and inhibited anti-cd3-induced apoptosis of cd4+ cd8+ thymocytes . our study supports targeting jnk as an important strategy in inflammatory disease , apoptotic cell death , and cancer . story_separator_special_tag the beam-beam interaction in electron-positron linear colliders shows very different aspects from that in storage rings . the single-pass nature of linear colliders allows drastic deformation of the bunch shape during one collision . also , under the very strong electro-magnetic field together with the high beam energy , phenomena which are not important in storage rings come into play , namely phenomena involving quantum field theory . the synchrotron radiation in the beam-beam field , called beamstrahlung , becomes extremely energetic . the strong field can even create electron-positron pairs from the beamstrahlung photons . in the present lecture note both the classical and quantum phenomena are described . kek preprint 91-2 , april 1991 . lecture at the 1990 us-cern school on particle accelerators , nov. 7-14 , 1990 , hilton head island , south carolina , usa . lecture notes in physics 400 , frontiers of particle beams : intensity limitations , springer verlag , pp . 415-445 , revised nov. 1992 . kaoru yokoya ( national laboratory for high energy physics , oho , tsukuba-shi , ibaraki , 305 , japan ) and pisin chen ( stanford linear accelerator story_separator_special_tag searches for the exclusive decays of the higgs and z bosons into a j/ψ , ψ ( 2s ) , or υ ( ns ) ( n=1,2,3 ) meson and a photon are performed with a pp collision data sample corresponding to an integrated luminosity of 36.1 fb^-1 collected at √s = 13 tev with the atlas detector at the cern large hadron collider .
no significant excess of events is observed above the expected backgrounds , and 95 % confidence-level upper limits on the branching fractions of the higgs boson decays to j/ψ , ψ ( 2s ) , and υ ( ns ) of 3.5 × 10^-4 , 2.0 × 10^-3 , and ( 4.9 , 5.9 , 5.7 ) × 10^-4 , respectively , are obtained assuming standard model production . the corresponding 95 % confidence-level upper limits for the branching fractions of the z boson decays are 2.3 × 10^-6 , 4.5 × 10^-6 and ( 2.8 , 1.7 , 4.8 ) × 10^-6 , respectively . story_separator_special_tag the results concerning the theoretical evaluation of the small-angle bhabha scattering cross section obtained during the workshop on physics at lep2 ( cern , geneva , switzerland , 1995 ) by the working group `` event generators for bhabha scattering '' are summarized . the estimate of the theoretical error on the cross section in the luminometry region is updated . story_separator_special_tag abstract displacement damage ( dd ) caused by neutron irradiation is one of the major causes of the degradation and failure of semiconductor devices in hazardous environments . classical molecular dynamics ( md ) has been the method of choice in computer simulation of dd . in this paper , it is found , contrary to common belief , that not including electronic effects is a serious flaw of classical md even in the study of low-energy dd . the dd of bulk silicon in the kev regime is investigated with the electron force field ( eff ) , which incorporates explicit electron movement in md . the eff results agree with those of the experiments , but differ significantly from those of classical md . story_separator_special_tag the use of nonionizing energy loss ( niel ) in predicting the effect of gamma , electron , and proton irradiations on si , gaas , and inp devices is discussed . the niel for electrons and protons has been calculated from the displacement threshold to 200 mev . convoluting the electron niel with the slowed down compton secondary electron spectrum gives an effective niel for co-60 gammas , enabling gamma-induced displacement damage to be correlated with particle results . the fluences of 1 mev electrons equivalent to irradiation with 1 mrad ( si ) for si , gaas , and inp are given . analytic proton niel calculations and results derived from the monte carlo trim agree exactly , as long as straggling is not significant . the niel calculations are compared with experimental proton and electron damage coefficients using solar cells as examples . a linear relationship is found between the niel and proton damage coefficients for si , gaas , and inp devices . for electrons , there appears to be a linear dependence for n-si and n-gaas , but for p-si there is a quadratic relationship which decreases the damage coefficient at 1 mev by
ultrasound is the most commonly used imaging technique for the evaluation of thyroid nodules . sonographic findings are often not specific , and definitive diagnosis is usually made through fine-needle aspiration biopsy or even surgery . in reviewing the literature , terms used to describe nodules are often poorly defined and inconsistently applied . several authors have recently described a standardized risk stratification system called the thyroid imaging , reporting and data system ( tirads ) , modeled on the bi-rads system for breast imaging . however , most of these tirads classifications have come from individual institutions , and none has been widely adopted in the united states . under the auspices of the acr , a committee was organized to develop tirads . the eventual goal is to provide practitioners with evidence-based recommendations for the management of thyroid nodules on the basis of a set of well-defined sonographic features or terms that can be applied to every lesion . terms were chosen on the basis of demonstration of consistency with regard to performance in the diagnosis of thyroid cancer or , conversely , classifying a nodule as benign and avoiding follow-up . the initial portion of this project story_separator_special_tag in this paper , we review the different studies that developed computer-aided diagnosis ( cad ) systems for automated classification of thyroid cancer into benign and malignant types . specifically , we discuss the different types of features that are used to study and analyze the differences between benign and malignant thyroid nodules . these features can be broadly categorized into ( a ) the sonographic features from the ultrasound images , and ( b ) the non-clinical features extracted from the ultrasound images using statistical and data mining techniques . we also present a brief description of the commonly used classifiers in ultrasound-based cad systems . we then review the studies that used features based on the ultrasound images for thyroid nodule classification and highlight the limitations of such studies . we also discuss and review the techniques used in studies that used the non-clinical features for thyroid nodule classification and report the classification accuracies obtained in these studies . story_separator_special_tag ultrasound is one of the most used imaging techniques for assessing and evaluating thyroid lesions . indeed , it shows good performance in terms of discrimination between benign and malignant thyroid nodules . diagnosis by ultrasound is , however , not as easy as it seems and depends strongly on the experience of the radiologists . to help physicians and radiologists to better diagnose , many computer-assisted diagnosis ( cad ) systems have been developed . these systems are based on image processing techniques and on machine learning . they represent effective and useful tools allowing doctors to have a second opinion free from human subjectivity . among the machine learning techniques , deep learning has recently made rapid progress in interpreting medical imaging and has demonstrated impressive efficiency . various cad systems treating ultrasound images of the thyroid have widely used it since then . this paper reviews the most recent research works on cad systems for analyzing ultrasound images of the thyroid to diagnose the benign or malignant nature of thyroid nodules . the cad systems studied in this paper are based on deep learning .
we present a brief description of each of these cad systems . story_separator_special_tag background : ultrasound ( us ) examination is helpful in the differential diagnosis of thyroid nodules ( malignant vs. benign ) , but its accuracy relies heavily on examiner experience . therefore , the aim of this study was to develop a less subjective diagnostic model aided by machine learning . methods : a total of 2064 thyroid nodules ( 2032 patients , 695 male ; mean age = 45.25 ± 13.49 years ) met all of the following inclusion criteria : ( i ) hemi- or total thyroidectomy , ( ii ) maximum nodule diameter 2.5 cm , ( iii ) examination by conventional us and real-time elastography within one month before surgery , and ( iv ) no previous thyroid surgery or percutaneous thermotherapy . models were developed using 60 % of randomly selected samples based on nine commonly used algorithms , and validated using the remaining 40 % of cases . all models function with a validation data set that has a pretest probability of malignancy of 10 % . the models were refined with machine learning that consisted of 1000 repetitions of derivatization and validation , and compared to diagnosis by an experienced radiologist . sensitivity , story_separator_special_tag the present work aims to assess the role of statistical classifiers in increasing the diagnostic accuracy when differentiating between benign and malignant thyroid nodules . the classifiers considered were based on combinations of both demographic and ultrasound data . two feature selection procedures were considered , along with three different classification methods : random forest , support vector machine and logistic regression . the best results were obtained with the latter , for which the 95 % confidence intervals for the area under the roc curve , the sensitivity and the specificity were [ 0.75 , 0.80 ] , [ 76 % , 83 % ] and [ 71 % , 77 % ] , respectively . the most relevant features were a description of the contour of the nodule , presence of halo , internal composition , sphericity and echogenicity , as well as the age of the participants . story_separator_special_tag texture analysis is an important topic in ultrasound ( us ) image analysis for structure segmentation and tissue classification . in this work a novel approach for us image texture feature extraction is presented . it is mainly based on parametric modelling of a signal version of the us image in order to process it as data resulting from a dynamical process . because of the predictive characteristics of such a model representation , good estimates of texture features can be obtained with less data than generally used methods require , allowing higher robustness to low signal-to-noise ratio and a more localized us image analysis . the usability of the proposed approach was demonstrated by extracting texture features for segmenting the thyroid in us images . the obtained results showed that features corresponding to energy ratios between different modelled texture frequency bands allowed thyroid and non-thyroid texture to be clearly distinguished . a simple k-means clustering algorithm has been used for separating us image patches as belonging to the thyroid or not . segmentation of the thyroid was performed in two different datasets , obtaining dice coefficients over 85 % .
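a minimal sketch , assuming a 2-d grayscale ultrasound image as a numpy array , of the patch-wise texture clustering idea just described ; the features here are simple first-order statistics rather than the model-based energy ratios of the cited work .

import numpy as np
from sklearn.cluster import KMeans

def patch_features(img, size=16):
    # first-order statistics (mean, std, histogram entropy) per patch
    feats, coords = [], []
    h, w = img.shape
    for y in range(0, h - size, size):
        for x in range(0, w - size, size):
            p = img[y:y + size, x:x + size].astype(float)
            hist, _ = np.histogram(p, bins=32, density=True)
            hist = hist[hist > 0]
            entropy = -np.sum(hist * np.log(hist))
            feats.append([p.mean(), p.std(), entropy])
            coords.append((y, x))
    return np.array(feats), coords

img = np.random.rand(256, 256)  # placeholder ultrasound image
feats, coords = patch_features(img)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(feats)
# `labels` assigns each patch to one of two texture classes (e.g. thyroid / non-thyroid)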
story_separator_special_tag the proposed method integrates snlm clustering and level sets . it is able to delineate thyroid nodules in ultrasound images accurately and automatically . it can be applied without preprocessing due to its indeterminacy handling capability . the parameters of sndrls are determined adaptively from snlm clustering . experimental results show the effectiveness of the proposed method . an accurate contour estimation plays a significant role in classification and estimation of the shape , size , and position of a thyroid nodule . this helps to reduce the number of false positives , and improves the accurate detection and efficient diagnosis of thyroid nodules . this paper introduces an automated delineation method that integrates spatial information with neutrosophic clustering and level-sets for accurate and effective segmentation of thyroid nodules in ultrasound images . the proposed delineation method , named spatial neutrosophic distance regularized level set ( sndrls ) , is based on neutrosophic l-means ( nlm ) clustering which incorporates spatial information for level set evolution . the sndrls takes a rough estimation of the region of interest ( roi ) as input , provided by spatial nlm ( snlm ) clustering , for precise delineation of one or more nodules . the performance of the proposed method is compared with level set , nlm clustering , active contour without edges story_separator_special_tag ultrasound ( us ) imaging deals with forming a brightness image from the amplified backscatter echo when an ultrasound wave is triggered at the region of interest . imaging artifacts and speckles occur in the image as a consequence of backscattering and subsequent amplification . we demonstrate the usefulness of speckle-related pixels and imaging artifacts as sources of information to perform multiorgan segmentation in us images of the thyroid gland . the speckle-related pixels are clustered based on a similarity constraint to quantize the image . the quantization results are used to locate useful anatomical landmarks that aid the detection of multiple organs in the image , which are the thyroid gland , the carotid artery , the muscles , and the trachea . the spatial locations of the carotid artery and the trachea are used to estimate the boundaries of the thyroid gland in transverse us scans . experiments performed on a multivendor dataset yield good quality segmentation results with probabilistic rand index > 0.83 and boundary error 1 mm , and an average accuracy greater than 94 % . analysis of the results using the dice coefficient as the metric shows that the proposed method performs better than the story_separator_special_tag physicians usually diagnose the pathology of the thyroid gland by its volume . however , even if the thyroid glands are found and the shapes are hand-marked from ultrasound ( us ) images , most physicians still depend on computed tomography ( ct ) images , which are expensive to obtain , for precise measurements of the volume of the thyroid gland . this approach relies heavily on the experience of the physicians and is very time consuming . patients are exposed to high radiation when obtaining ct images . in contrast , us imaging does not require ionizing radiation and is relatively inexpensive . us imaging is thus one of the most commonly used auxiliary tools in clinical diagnosis . the present study proposes a complete solution to estimate the volume of the thyroid gland directly from us images .
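as context for the pipeline that follows , a minimal sketch of the final volume computation from a binary 3-d segmentation mask , assuming hypothetical voxel spacings in millimetres ; the cited systems differ in how the mask itself is produced .

import numpy as np

mask = np.zeros((64, 128, 128), dtype=bool)   # placeholder thyroid segmentation
mask[20:40, 40:90, 40:90] = True
spacing_mm = (1.0, 0.3, 0.3)                  # (slice, row, col) spacing, illustrative

voxel_volume_ml = np.prod(spacing_mm) / 1000.0  # mm^3 -> ml
volume_ml = mask.sum() * voxel_volume_ml
print(f"estimated thyroid volume: {volume_ml:.1f} ml")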
the radial basis function neural network is used to classify blocks of the thyroid gland . the integral region is acquired by applying a specific region-growing method to potential points of interest . the parameters for evaluating the thyroid volume are estimated using a particle swarm optimization algorithm . experimental results of the thyroid region segmentation and volume estimation in story_separator_special_tag using the right equipment and well-trained personnel , ultrasound of the neck can detect a large number of non-palpable thyroid nodules . however , this technique often suffers from subjective interpretations and poor accuracy in the differential diagnosis of malignant and benign thyroid lesions . therefore , we developed an automated identification system based on knowledge representation techniques for characterizing the intra-nodular vascularization of thyroid lesions . twenty nodules ( 10 benign and 10 malignant ) , taken from 3-d high resolution ultrasound ( hrus ) images , were used for this work . malignancy was confirmed using fine needle aspiration biopsy and subsequent histological studies . a combination of discrete wavelet transformation ( dwt ) and texture algorithms was used to extract relevant features from the thyroid images . these features were fed to different configurations of the adaboost classifier . the performance of these configurations was compared using receiver operating characteristic ( roc ) curves . our results show that the combination of texture features and dwt features presented an accuracy value higher than that reported in the literature . among the different classifier setups , the perceptron-based adaboost yielded very good results and the area under the roc story_separator_special_tag this paper presents a computer-based classification scheme that utilizes various morphological and novel wavelet-based features for malignancy risk evaluation of thyroid nodules in ultrasonography . the study comprised 85 ultrasound images-patients that were cytologically confirmed ( 54 low-risk and 31 high-risk ) . a set of 20 features ( 12 based on nodule boundary shape and 8 based on wavelet local maxima located within each nodule ) has been generated . two powerful pattern recognition algorithms ( support vector machines and probabilistic neural networks ) have been designed and developed in order to quantify the power of differentiation of the introduced features . a comparative study has also been held in order to estimate the impact speckle had on the classification procedure . the diagnostic sensitivity and specificity of both classifiers were assessed by means of receiver operating characteristic ( roc ) analysis . in the speckle-free feature set , the area under the roc curve was 0.96 for the support vector machines classifier , whereas for the probabilistic neural networks it was 0.91 . in the feature set with speckle , the corresponding areas under the roc curves were 0.88 and 0.86 , respectively , for the two classifiers . the proposed features story_separator_special_tag objective : to evaluate color thyroid elastograms quantitatively and objectively . materials and methods : 125 cases ( 56 malignant and 69 benign ) were collected with the hitachi vision 900 system ( hitachi medical system , tokyo , japan ) and a linear-array transducer of 6-13 mhz . the standard of reference was cytology ( fna , fine needle aspiration ) or histology ( core biopsy ) .
the original color thyroid elastograms were transferred from the red , green , blue ( rgb ) color space to the hue , saturation , value ( hsv ) color space . then , a hard area ratio was defined . finally , an svm classifier was used to classify thyroid nodules into benign and malignant . the relation between the performance and the hard threshold was fully investigated and studied . results : the classification accuracy changed with the hard threshold , and reached its maximum ( 95.2 % ) at some values ( from 144 to 152 ) . it was higher than the strain ratio ( 87.2 % ) and the color score ( 83.2 % ) . it was also higher than that of our previous study ( 93.6 % ) . conclusion : the hard area ratio is story_separator_special_tag a total of 242 benign and malignant thyroid nodules are classified . various entropies are extracted from gabor-transformed images . these features are subjected to lsda and ranked by the relief-f method . various sampling strategies are used to balance the classification data . a classification accuracy of 94.3 % is obtained with the c4.5 decision tree classifier . thyroid cancer commences from an atypical growth of thyroid tissue at the edge of the thyroid gland . initially , it forms a lump in the throat , and an over-growth of this tissue leads to the formation of benign or malignant thyroid nodules . blood tests and biopsies are the standard techniques used to diagnose the presence of thyroid nodules . but imaging modalities can improve the diagnosis and are marked as cost-effective , non-invasive and risk-free to identify the stages of thyroid cancer . this study proposes a novel automated system for classification of benign and malignant thyroid nodules . raw images of thyroid nodules recorded using high resolution ultrasound ( hrus ) are subjected to the gabor transform . various entropy features are extracted from these transformed images and these features are reduced by locality sensitive discriminant analysis ( lsda ) and ranked by the relief-f method . over-sampling strategies with wilcoxon signed-rank story_separator_special_tag purpose : to develop a semiautomated computer-aided diagnosis ( cad ) system for thyroid cancer using two-dimensional ultrasound images that can be used to yield a second opinion in the clinic to differentiate malignant and benign lesions . methods : a total of 118 ultrasound images that included axial and longitudinal images from patients with biopsy-confirmed malignant ( n = 30 ) and benign ( n = 29 ) nodules were collected . thyroid cad software was developed to extract quantitative features from these images based on thyroid nodule segmentation in which adaptive diffusion flow for active contours was used . various features , including histogram , intensity differences , elliptical fit , gray-level co-occurrence matrices , and gray-level run-length matrices , were evaluated for each region imaged . based on these imaging features , a support vector machine ( svm ) classifier was used to differentiate benign and malignant nodules . leave-one-out cross-validation with sequential forward feature selection was performed to evaluate the overall accuracy of this method . additionally , analyses with contingency tables and receiver operating characteristic ( roc ) curves were performed to compare the performance of cad with visual inspection by expert radiologists based on established gold standards story_separator_special_tag most thyroid nodules are heterogeneous with various internal components , which confuse many radiologists and physicians with their various echo patterns in ultrasound images .
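a minimal sketch of the feature-plus-svm pattern used by several of the systems above , with synthetic data and simple first-order statistics standing in for the glcm / run-length features of the cited work ( class means and spreads are illustrative ) .

import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)

def features(region):
    # simple intensity statistics per nodule region
    return [region.mean(), region.std(),
            np.percentile(region, 10), np.percentile(region, 90)]

benign = [rng.normal(0.55, 0.08, (32, 32)) for _ in range(60)]
malignant = [rng.normal(0.45, 0.12, (32, 32)) for _ in range(60)]
X = np.array([features(r) for r in benign + malignant])
y = np.array([0] * 60 + [1] * 60)  # 0 = benign, 1 = malignant

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
print(cross_val_score(clf, X, y, cv=5).mean())  # cross-validated accuracy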
numerous textural feature extraction methods are used to characterize these patterns to reduce the misdiagnosis rate . thyroid nodules can be classified using the corresponding textural features . in this paper , six support vector machines ( svms ) are adopted to select significant textural features and to classify the nodular lesions of a thyroid . experimental results show that the proposed method can correctly and efficiently classify thyroid nodules . a comparison with existing methods shows that the feature-selection capability of the proposed method is similar to that of the sequential-floating-forward-selection ( sffs ) method , while the execution time is about 3-37 times faster . in addition , the proposed criterion function achieves higher accuracy than the f-score , t-test , entropy , and bhattacharyya distance methods . story_separator_special_tag the thyroid is one of the largest endocrine glands in the human body , and it is involved in several body mechanisms such as controlling protein synthesis , the body 's sensitivity to other hormones , and the use of energy sources . hence , it is of prime importance to track the shape and size of the thyroid over time in order to evaluate its state . thyroid segmentation and volume computation are important tools that can be used for thyroid state tracking assessment . most of the proposed approaches are not automatic and require a long time to correctly segment the thyroid . in this work , we compare three different nonautomatic segmentation algorithms ( i.e. , active contours without edges , graph cut , and pixel-based classifier ) in freehand three-dimensional ultrasound imaging in terms of accuracy , robustness , ease of use , level of human interaction required , and computation time . we found that these methods lack automation and machine intelligence and are not highly accurate . hence , we implemented two machine learning approaches ( i.e. , random forest and convolutional neural network ) to improve the accuracy of segmentation as well as provide automation . this comparative story_separator_special_tag this paper introduces a network for volumetric segmentation that learns from sparsely annotated volumetric images . we outline two attractive use cases of this method : ( 1 ) in a semi-automated setup , the user annotates some slices in the volume to be segmented . the network learns from these sparse annotations and provides a dense 3d segmentation . ( 2 ) in a fully-automated setup , we assume that a representative , sparsely annotated training set exists . trained on this data set , the network densely segments new volumetric images . the proposed network extends the previous u-net architecture from ronneberger et al . by replacing all 2d operations with their 3d counterparts . the implementation performs on-the-fly elastic deformations for efficient data augmentation during training . it is trained end-to-end from scratch , i.e. , no pre-trained network is required . we test the performance of the proposed method on a complex , highly variable 3d structure , the xenopus kidney , and achieve good results for both use cases . story_separator_special_tag the segmentation of the thyroid in ultrasound images is a field of active research . the thyroid is a gland of the endocrine system and regulates several body functions . measuring the volume of the thyroid is regular practice for diagnosing pathological changes .
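a minimal sketch of the pixel-classifier style of segmentation mentioned above ( a random forest on per-pixel intensity statistics ) ; the image and labels are placeholders , and real systems add many more features .

import numpy as np
from scipy.ndimage import uniform_filter
from sklearn.ensemble import RandomForestClassifier

def pixel_features(img):
    # intensity plus local mean / std in a 7x7 window, one row per pixel
    local_mean = uniform_filter(img, size=7)
    local_sq = uniform_filter(img ** 2, size=7)
    local_std = np.sqrt(np.maximum(local_sq - local_mean ** 2, 0))
    return np.stack([img, local_mean, local_std], axis=-1).reshape(-1, 3)

img = np.random.rand(128, 128)            # placeholder ultrasound slice
labels = (img > 0.5).astype(int).ravel()  # placeholder per-pixel ground truth

rf = RandomForestClassifier(n_estimators=100, random_state=0)
rf.fit(pixel_features(img), labels)
pred_mask = rf.predict(pixel_features(img)).reshape(img.shape)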
in this work , we compare three approaches for semi-automatic thyroid segmentation in freehand-tracked three-dimensional ultrasound images . the approaches are based on level set , graph cut and feature classification . for validation , sixteen 3d ultrasound records were created with ground truth segmentations , which we make publicly available . the properties analyzed are the dice coefficient when compared against the ground truth reference and the effort of required interaction . our results show that in terms of dice coefficient , all algorithms perform similarly . for interaction , however , each algorithm has advantages over the others . the graph cut-based approach gives the practitioner direct influence on the final segmentation . level set and feature classifier require less interaction , but offer less control over the result . all three compared methods show promising results for future work and provide several possible extensions . story_separator_special_tag segmentation of thyroid nodules in the ultrasound image is a challenging task not only because of the speckle noise in ultrasound images but also the heterogeneous appearance and blurry boundaries of thyroid nodules . in this paper , we apply u-net , a fully convolutional neural network , to thyroid nodule segmentation , and further propose an interactive segmentation method based on it and the guidance of annotation marks . firstly , the four end-points of the major and minor axes of a nodule are determined manually . then , four white spots are directly drawn at the four points on the image to guide the training and inference of the deep neural network . our method is evaluated on a dataset composed of 900 ultrasound thyroid images . the experimental results indicate that our mark-guided segmentation method is able to delineate nodules accurately with little human intervention and achieves a remarkable improvement over its automatic counterpart . story_separator_special_tag there is large consent that successful training of deep networks requires many thousand annotated training samples . in this paper , we present a network and training strategy that relies on the strong use of data augmentation to use the available annotated samples more efficiently . the architecture consists of a contracting path to capture context and a symmetric expanding path that enables precise localization . we show that such a network can be trained end-to-end from very few images and outperforms the prior best method ( a sliding-window convolutional network ) on the isbi challenge for segmentation of neuronal structures in electron microscopic stacks . using the same network trained on transmitted light microscopy images ( phase contrast and dic ) we won the isbi cell tracking challenge 2015 in these categories by a large margin . moreover , the network is fast . segmentation of a 512x512 image takes less than a second on a recent gpu . the full implementation ( based on caffe ) and the trained networks are available at this http url . story_separator_special_tag ultrasound image segmentation plays an important role in the judgement of benign and malignant thyroid nodules . compared with the traditional convolutional neural network , the fully convolutional network has better sparsity , higher precision and faster training speed . in this paper , we develop an 8-layer fully convolutional network for ultrasound image segmentation of thyroid nodules , which is called fcn-thyroid nodules , or fcn-tn for short .
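a minimal pytorch sketch of the u-net-style encoder-decoder with skip connections that underlies the segmentation networks above ; channel counts and depth are illustrative , not those of the cited architectures .

import torch
import torch.nn as nn

def block(c_in, c_out):
    return nn.Sequential(
        nn.Conv2d(c_in, c_out, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(c_out, c_out, 3, padding=1), nn.ReLU(inplace=True))

class TinyUNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.enc1, self.enc2 = block(1, 16), block(16, 32)   # contracting path
        self.pool = nn.MaxPool2d(2)
        self.bottleneck = block(32, 64)
        self.up2 = nn.ConvTranspose2d(64, 32, 2, stride=2)   # expanding path
        self.dec2 = block(64, 32)
        self.up1 = nn.ConvTranspose2d(32, 16, 2, stride=2)
        self.dec1 = block(32, 16)
        self.head = nn.Conv2d(16, 1, 1)                      # per-pixel nodule logit

    def forward(self, x):
        e1 = self.enc1(x)
        e2 = self.enc2(self.pool(e1))
        b = self.bottleneck(self.pool(e2))
        d2 = self.dec2(torch.cat([self.up2(b), e2], dim=1))   # skip connection
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))  # skip connection
        return self.head(d1)

net = TinyUNet()
logits = net(torch.randn(1, 1, 128, 128))  # -> (1, 1, 128, 128) segmentation logits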
we constructed a data set with 300 images to train fcn-tn . each nodule was delineated by an expert and served as ground truth for making comparisons . a segmentation accuracy of 91 % is obtained on the proposed network with 100 test images , which indicates that the fully convolutional network has great potential in the field of ultrasound image segmentation of thyroid nodules . story_separator_special_tag state-of-the-art object detection networks depend on region proposal algorithms to hypothesize object locations . advances like sppnet and fast r-cnn have reduced the running time of these detection networks , exposing region proposal computation as a bottleneck . in this work , we introduce a region proposal network ( rpn ) that shares full-image convolutional features with the detection network , thus enabling nearly cost-free region proposals . an rpn is a fully convolutional network that simultaneously predicts object bounds and objectness scores at each position . the rpn is trained end-to-end to generate high-quality region proposals , which are used by fast r-cnn for detection . we further merge rpn and fast r-cnn into a single network by sharing their convolutional features ; using the recently popular terminology of neural networks with ' attention ' mechanisms , the rpn component tells the unified network where to look . for the very deep vgg-16 model , our detection system has a frame rate of 5 fps ( including all steps ) on a gpu , while achieving state-of-the-art object detection accuracy on pascal voc 2007 , 2012 , and ms coco datasets with only 300 proposals per image . in ilsvrc and story_separator_special_tag unlike daily routine images , ultrasound images are usually monochrome and low-resolution . in ultrasound images , the cancer regions are usually blurred , have vague margins , and are irregular in shape . moreover , the features of a cancer region are very similar to those of normal or benign tissues . therefore , training ultrasound images with an original convolutional neural network ( cnn ) directly is not satisfactory . in our study , inspired by the state-of-the-art object detection network faster r-cnn , we develop a detector which is more suitable for thyroid papillary carcinoma detection in ultrasound images . in order to improve the accuracy of the detection , we add a spatially constrained layer to the cnn so that the detector can extract the features of the surrounding region in which the cancer regions reside . in addition , by concatenating the shallow and deep layers of the cnn , the detector can detect blurrier or smaller cancer regions . the experiments demonstrate the potential of this new methodology to reduce the workload for pathologists and increase the objectivity of diagnoses . we find that 93.5 % of papillary thyroid carcinoma regions could be detected automatically while 81.5 % of benign and normal story_separator_special_tag we introduce yolo9000 , a state-of-the-art , real-time object detection system that can detect over 9000 object categories . first , we propose various improvements to the yolo detection method , both novel and drawn from prior work . the improved model , yolov2 , is state-of-the-art on standard detection tasks like pascal voc and coco . using a novel , multi-scale training method the same yolov2 model can run at varying sizes , offering an easy tradeoff between speed and accuracy . at 67 fps , yolov2 gets 76.8 map on voc 2007 .
at 40 fps , yolov2 gets 78.6 map , outperforming state-of-the-art methods like faster r-cnn with resnet and ssd while still running significantly faster . finally , we propose a method to jointly train on object detection and classification . using this method we train yolo9000 simultaneously on the coco detection dataset and the imagenet classification dataset . our joint training allows yolo9000 to predict detections for object classes that do not have labelled detection data . we validate our approach on the imagenet detection task . yolo9000 gets 19.7 map on the imagenet detection validation set despite only having detection data for 44 of the 200 classes . story_separator_special_tag very deep convolutional networks have been central to the largest advances in image recognition performance in recent years . one example is the inception architecture that has been shown to achieve very good performance at relatively low computational cost . recently , the introduction of residual connections in conjunction with a more traditional architecture has yielded state-of-the-art performance in the 2015 ilsvrc challenge ; its performance was similar to the latest generation inception-v3 network . this raises the question of whether there is any benefit in combining the inception architecture with residual connections . here we give clear empirical evidence that training with residual connections accelerates the training of inception networks significantly . there is also some evidence of residual inception networks outperforming similarly expensive inception networks without residual connections by a thin margin . we also present several new streamlined architectures for both residual and non-residual inception networks . these variations improve the single-frame recognition performance on the ilsvrc 2012 classification task significantly . we further demonstrate how proper activation scaling stabilizes the training of very wide residual inception networks . with an ensemble of three residual networks and one inception-v4 , we achieve 3.08 percent top-5 error on the story_separator_special_tag background : the incidence of thyroid cancer is rising steadily because of overdiagnosis and overtreatment conferred by widespread use of sensitive imaging techniques for screening . this overall incidence growth is especially driven by increased diagnosis of the indolent and well-differentiated papillary subtype and early-stage thyroid cancer , whereas the incidence of advanced-stage thyroid cancer has increased only marginally . thyroid ultrasound is frequently used to diagnose thyroid cancer . the aim of this study was to use deep convolutional neural network ( dcnn ) models to improve the diagnostic accuracy of thyroid cancer by analysing sonographic imaging data from clinical ultrasounds . methods : we did a retrospective , multicohort , diagnostic study using ultrasound image sets from three hospitals in china . we developed and trained the dcnn model on the training set , 131,731 ultrasound images from 17,627 patients with thyroid cancer and 180,668 images from 25,325 controls from the thyroid imaging database at tianjin cancer hospital . clinical diagnosis of the training set was made by 16 radiologists from tianjin cancer hospital . images from anatomical sites that were judged as not having cancer were excluded from the training set and only individuals with suspected story_separator_special_tag deeper neural networks are more difficult to train .
we present a residual learning framework to ease the training of networks that are substantially deeper than those used previously . we explicitly reformulate the layers as learning residual functions with reference to the layer inputs , instead of learning unreferenced functions . we provide comprehensive empirical evidence showing that these residual networks are easier to optimize , and can gain accuracy from considerably increased depth . on the imagenet dataset we evaluate residual nets with a depth of up to 152 layers , 8x deeper than vgg nets but still having lower complexity . an ensemble of these residual nets achieves 3.57 % error on the imagenet test set . this result won the 1st place on the ilsvrc 2015 classification task . we also present analysis on cifar-10 with 100 and 1000 layers . the depth of representations is of central importance for many visual recognition tasks . solely due to our extremely deep representations , we obtain a 28 % relative improvement on the coco object detection dataset . deep residual nets are foundations of our submissions to the ilsvrc & coco 2015 competitions , where we also won story_separator_special_tag fine needle aspiration ( fna ) is the procedure of choice for evaluating thyroid nodules . it is indicated for nodules > 2 cm , even in cases of very low suspicion of malignancy . fna has associated risks and expenses . in this study , we developed an image analysis model using a deep learning algorithm and evaluated whether the algorithm could predict thyroid nodules with benign fna results . ultrasonographic images of thyroid nodules with cytologic or histologic results were retrospectively collected . for algorithm training , 1358 ( 670 benign , 688 malignant ) thyroid nodule images were input into the inception-v3 network model . the model , pretrained on the imagenet database , was trained to classify nodules as benign or malignant . the diagnostic performance of the algorithm was tested with the prospectively collected internal ( n = 55 ) and external test sets ( n = 100 ) . for the internal test set , 20 of the 21 fna malignant nodules were correctly classified as malignant by the algorithm ( sensitivity , 95.2 % ) ; and of the 22 nodules the algorithm classified as benign , 21 were fna benign ( negative predictive value story_separator_special_tag convolutional networks are at the core of most state-of-the-art computer vision solutions for a wide variety of tasks . since 2014 very deep convolutional networks started to become mainstream , yielding substantial gains in various benchmarks . although increased model size and computational cost tend to translate to immediate quality gains for most tasks ( as long as enough labeled data is provided for training ) , computational efficiency and low parameter count are still enabling factors for various use cases such as mobile vision and big-data scenarios . here we explore ways to scale up networks in ways that aim at utilizing the added computation as efficiently as possible by suitably factorized convolutions and aggressive regularization . we benchmark our methods on the ilsvrc 2012 classification challenge validation set and demonstrate substantial gains over the state of the art : 21.2 % top-1 and 5.6 % top-5 error for single frame evaluation using a network with a computational cost of 5 billion multiply-adds per inference and with using less than 25 million parameters .
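a minimal pytorch sketch of the residual block from the residual learning framework described earlier : the stacked layers learn a residual f ( x ) , and the block outputs f ( x ) + x through an identity shortcut ( channel count here is illustrative ) .

import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1, bias=False),
            nn.BatchNorm2d(channels), nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1, bias=False),
            nn.BatchNorm2d(channels))
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        # learn the residual f(x); the identity shortcut carries x through
        return self.relu(self.body(x) + x)

x = torch.randn(1, 32, 56, 56)
print(ResidualBlock(32)(x).shape)  # shape unchanged; mapping easier to optimize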
with an ensemble of 4 models and multi-crop evaluation , we report 3.5 % top-5 error on the validation set ( 3.6 % error on the story_separator_special_tag image-based computer-aided diagnosis ( cad ) systems have been developed to assist doctors in the diagnosis of thyroid cancer using ultrasound thyroid images . however , the performance of these systems is strongly dependent on the selection of detection and classification methods . although there is previous research on this topic , there is still room for enhancement of the classification accuracy of the existing methods . to address this issue , we propose an artificial intelligence-based method for enhancing the performance of the thyroid nodule classification system . thus , we extract image features from ultrasound thyroid images in two domains : the spatial domain based on deep learning , and the frequency domain based on the fast fourier transform ( fft ) . using the extracted features , we perform a cascade classifier scheme for classifying the input thyroid images into either benign ( negative ) or malignant ( positive ) cases . through extensive experiments using a public dataset , the thyroid digital image database ( tdid ) , we show that our proposed method outperforms the state-of-the-art methods and produces up-to-date classification results for the thyroid nodule classification problem . story_separator_special_tag computer aided diagnosis ( cad ) systems have been developed to assist radiologists in the detection and diagnosis of abnormalities , and a large number of pattern recognition techniques have been proposed to obtain a second opinion . most of these strategies have been evaluated using different datasets , making their performance incomparable . in this work , an open access database of thyroid ultrasound images is presented . the dataset consists of a set of b-mode ultrasound images , including a complete annotation and diagnostic description of suspicious thyroid lesions by expert radiologists . several types of lesions such as thyroiditis , cystic nodules , adenomas and thyroid cancers were included , while an accurate lesion delineation is provided in xml format . the diagnostic description of malignant lesions was confirmed by biopsy . the proposed new database is expected to be a resource for the community to assess different cad systems . story_separator_special_tag background : the presence of metastatic lymph nodes is a prognostic indicator for patients with thyroid carcinomas and is an important determinant of clinical decision making . however , evaluating neck lymph nodes requires experience and is labor- and time-intensive . therefore , the development of a computer-aided diagnosis ( cad ) system to identify and differentiate metastatic lymph nodes may be useful . methods : from january 2008 to december 2016 , we retrieved clinical records for 804 consecutive patients with 812 lymph nodes . the status of all lymph nodes was confirmed by fine-needle aspiration . the datasets were split into training ( 263 benign and 286 metastatic lymph nodes ) , validation ( 30 benign and 33 metastatic lymph nodes ) , and test ( 100 benign and 100 metastatic lymph nodes ) sets . using the vgg class activation map model , we developed a cad system to localize and differentiate metastatic lymph nodes . we then evaluated the diagnostic performance of this cad system in our test set .
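a minimal transfer-learning sketch of the pretrain-and-fine-tune recipe used by several of the studies above ( imagenet-pretrained backbone , new 2-way benign / malignant head ) ; the cited works used inception-v3 and vgg variants , while resnet18 is used here only for brevity . assumes torchvision >= 0.13 .

import torch
import torch.nn as nn
from torchvision import models

model = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
model.fc = nn.Linear(model.fc.in_features, 2)   # benign vs malignant head

# optionally freeze everything except the new head
for name, p in model.named_parameters():
    p.requires_grad = name.startswith("fc")

optimizer = torch.optim.Adam(
    filter(lambda p: p.requires_grad, model.parameters()), lr=1e-3)
criterion = nn.CrossEntropyLoss()

images = torch.randn(4, 3, 224, 224)        # placeholder ultrasound batch
labels = torch.tensor([0, 1, 0, 1])          # placeholder labels
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()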
results : in the test set , the accuracy , sensitivity , and specificity of our model for predicting lymph node malignancy were 83.0 % , 79.5 % , and story_separator_special_tag this paper proposes a semi-supervised learning method based on weakly-labeled data to automatically classify ultrasound ( us ) thyroid nodules . key to our new approach is the unification of multi-instance learning ( mil ) with deep learning . benefiting from this , our method can directly use off-the-shelf clinical data , which involves no labels to indicate nodule classes . to this end , we take the us images of a patient as a bag , and take the corresponding pathology report as the bag label . specifically , we first propose a bag generating method , wherein the detected thyroid nodules are considered as instances corresponding to a certain bag . after that , we design an effective em algorithm to train a convolutional neural network ( cnn ) for nodule classification . we conduct extensive experiments and comprehensive evaluations on different datasets , and all the experiments confirm that our method significantly outperforms state-of-the-art mil algorithms , which exhibits great potential in clinical applications . story_separator_special_tag thyroid nodules are severely jeopardizing our health . in the diagnosis of thyroid nodules , ultrasound images serve as an essential tool to discriminate the malignant nodules from the benign ones . in this paper , the method of transfer learning is applied to classify malignant and benign thyroid nodules based on their ultrasound images . the principal steps are preprocessing , data augmentation and classification by transfer learning . the preprocessing concentrates on extracting the region of interest ( roi ) . two techniques of data augmentation are realized in our experiment : the traditional ways of augmenting images and a small convolutional network of our own design . after the augmentation of the dataset , a pre-trained residual network is adopted for transfer learning , and the parameters of this pre-trained net are fine-tuned with three different datasets that we have obtained , including the original dataset , the dataset augmented via traditional methods and the dataset augmented via our convolutional network . the performances are then evaluated by three indices , and the final results prove the effectiveness of our convolutional augmentation network as well as the application of transfer learning . the best story_separator_special_tag ultrasonography is a valuable diagnosis method for thyroid nodules . automatically discriminating benign and malignant nodules in the ultrasound images can provide aided diagnosis suggestions , or increase the diagnosis accuracy when experts are lacking . the core problem in this issue is how to capture appropriate features for this specific task . here , we propose a feature extraction method for ultrasound images based on convolutional neural networks ( cnns ) , trying to introduce more meaningful semantic features to the classification . firstly , a cnn model trained with a massive natural image dataset is transferred to the ultrasound image domain to generate semantic deep features and handle the small sample problem . then , we combine those deep features with conventional features such as histogram of oriented gradient ( hog ) and local binary patterns ( lbp ) to form a hybrid feature space .
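a minimal sketch of the hybrid feature construction just described , assuming scikit-image is available ; a random vector stands in for the cnn activations , which in the cited work come from a transferred network .

import numpy as np
from skimage.feature import hog, local_binary_pattern

rng = np.random.default_rng(0)
img = rng.random((64, 64))  # placeholder grayscale nodule patch

hog_feat = hog(img, pixels_per_cell=(16, 16), cells_per_block=(2, 2))
lbp = local_binary_pattern(img, P=8, R=1, method="uniform")
lbp_hist, _ = np.histogram(lbp, bins=10, range=(0, 10), density=True)
deep_feat = rng.standard_normal(128)  # stand-in for transferred cnn activations

hybrid = np.concatenate([deep_feat, hog_feat, lbp_hist])  # hybrid feature vector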
finally , a positive-sample-first majority voting and a feature-selection based strategy are employed for the hybrid classification . experimental results on 1037 images show that the accuracy of our proposed method is 0.931 , which outperforms other related methods by over 10 % . story_separator_special_tag a statistical approach is a valuable way to describe texture primitives . the aim of this study is to design and implement a classifier framework to automatically identify thyroid nodules from ultrasound images . using rigorous mathematical foundations , this article focuses on developing a discriminative texture analysis method based on texture variations corresponding to four biological areas ( normal thyroid , thyroid nodule , subcutaneous tissues , and trachea ) . our research follows three steps : automatic extraction of the most discriminative first-order statistical texture features , building a classifier that automatically optimizes and selects the valuable features , and correlating significant texture parameters with the four biological areas of interest based on pixel classification and location characteristics . twenty ultrasound images of normal thyroid and 20 that present thyroid nodules were used . the analysis involves both the whole thyroid ultrasound images and the regions of interest ( rois ) . the proposed system and the classification results are validated using receiver operating characteristics , which give a better overall view of the classification performance of the methods . it is found that the proposed approach is capable of identifying thyroid nodules with a correct classification rate of
since traditional radar signals are unintelligent regarding the amount of information they convey for the bandwidth they occupy , a joint radar and wireless communication system would constitute a unique platform for future intelligent transportation networks , performing the essential tasks of environmental sensing and the allocation of ad-hoc communication links in terms of both spectrum efficiency and cost-effectiveness . in this paper , approaches to the design of intelligent waveforms that are suitable for simultaneously performing both data transmission and radar sensing are proposed . the approach is based on classical phase-coded waveforms utilized in wireless communications . in particular , requirements that allow for employing such signals for radar measurements with high dynamic range are investigated . also , a variety of possible radar processing algorithms are discussed . moreover , the applicability of multiple antenna techniques for direction-of-arrival estimation is considered . in addition to theoretical considerations , the paper presents system simulations and measurement results of complete radcom systems , demonstrating the practical feasibility of integrated communications and radar applications . story_separator_special_tag the historical development and current state-of-the-art of various joint wireless communication and radar sensing systems are reviewed and discussed in this study . different kinds of systems are categorised according to their modulation waveforms and duplex schemes . pros and cons of each category are highlighted . to showcase the current research advances , several demonstration systems are introduced with emphasis on proposed research contributions in this emerging area , and their performances are compared with respect to both communication and radar modes . also , a number of challenges are identified for near-future system developments and applications . story_separator_special_tag in this paper , we introduce a radar information metric , the estimation rate , that allows the radar user to be considered in a multiple-access channel , enabling performance bounds for joint radar-communications coexistence to be derived . traditionally , the two systems were isolated in one or multiple dimensions . we categorize new attempts at spectrum-space-time convergence as either coexistence , cooperation , or co-design . the meaning and interpretation of the estimation rate and what it means to alter it are discussed . additionally , we introduce and elaborate on the concept of not all bits are equal , which states that communications rate bits and estimation rate bits do not have equal value . finally , results for joint radar-communications information bounds and their accompanying weighted spectral efficiency measures are presented . story_separator_special_tag the last decade witnessed a growing demand for radio-frequency spectrum , driven by technological advances benefiting the end consumer but requiring new allocations of frequency bandwidths . further , higher data rates for faster communications and wireless connections have called for an expanded share of existing frequency allocations . concerns about spectrum congestion and frequency unavailability have spurred extensive research efforts on spectrum management and efficiency [ 1 ] - [ 4 ] within the same type of service and have led to cognitive radio [ 5 ] and cognitive radar [ 6 ] .
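a minimal numpy sketch of pulse compression for the phase-coded waveforms discussed at the start of this section : a binary phase code is detected by matched filtering , and the same chips can carry communication symbols ( delay , attenuation and noise level are illustrative ) .

import numpy as np

rng = np.random.default_rng(0)
code = rng.choice([-1.0, 1.0], size=64)          # transmitted phase code (bpsk chips)

delay, atten = 37, 0.5                           # hypothetical target echo
rx = np.zeros(256)
rx[delay:delay + code.size] = atten * code
rx += 0.1 * rng.standard_normal(rx.size)         # receiver noise

range_profile = np.abs(np.correlate(rx, code, mode="valid"))  # matched filter
print("estimated delay (samples):", int(np.argmax(range_profile)))  # ~37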
on the other hand , devising schemes for coexistence among different services has eased the competition for spectrum resources , especially for radar and wireless communication systems [ 7 ] - [ 14 ] . both systems have recently been given a common portion of the spectrum by the federal communications commission . story_separator_special_tag radio receivers , besides acting as wireless network nodes participating in the internet of things ( iot ) communication task , may act as opportunistic sensors participating in the iot sensing task . in particular , a radio receiver is intrinsically an electronic sensor which may be used for device-free human activity recognition . in this paper , we analyze recent results on how the identification of human body presence and movement can be carried out by analyzing the rf signals transmitted by sources of opportunity . the impact of channel bandwidth , transmission mode , carrier frequency , and signal descriptors on the recognition performance is discussed . moreover , we present a novel crowd counting system and assess its performance considering two different types of signal descriptors . results prove the effectiveness of the presented crowd counting system and allow more insight to be gained into the relation among the specific sensed environment , the chosen signal descriptors , and the classification accuracy . story_separator_special_tag increased amounts of bandwidth are required to guarantee both high-quality/high-rate wireless services ( 4g and 5g ) and reliable sensing capabilities , such as for automotive radar , air traffic control , earth geophysical monitoring , and security applications . therefore , coexistence between radar and communication systems using overlapping bandwidths has become a primary investigation field in recent years . various signal processing techniques , such as interference mitigation , precoding or spatial separation , and waveform design , allow both radar and communications to share the spectrum . story_separator_special_tag sharing of frequency bands between radar and communication systems has attracted substantial attention , as it can avoid under-utilization of otherwise permanently allocated spectral resources , thus improving efficiency . further , there is increasing demand for radar and communication systems that share the hardware platform as well as the frequency band , as this not only decongests the spectrum , but also benefits both sensing and signaling operations via the full cooperation between the two functionalities . nevertheless , the success of spectrum and hardware sharing between radar and communication systems critically depends on high-quality joint radar and communication designs . in the first part of this paper , we overview the research progress in the areas of radar-communication coexistence and dual-functional radar-communication ( dfrc ) systems , with particular emphasis on application scenarios and technical approaches . in the second part , we propose a novel transceiver architecture and frame structure for a dfrc base station ( bs ) operating in the millimeter wave ( mmwave ) band , using the hybrid analog-digital ( had ) beamforming technique . we assume that the bs is serving a multi-antenna user equipment ( ue ) over a mmwave channel , story_separator_special_tag to get the most use out of scarce spectrum , technologies have emerged that permit single systems to accommodate both radar and communications functions .
dual-function radar communication ( dfrc ) systems , where the two systems use the same platform and share the same hardware and spectral resources , form a specific class of radio-frequency ( rf ) technology . these systems support applications where communication data , whether as target and waveform parameter information or as information independent of the radar operation , are efficiently transmitted using the same radar aperture and frequency bandwidth . this is achieved by embedding communication signals into radar pulses . in this article , we review the principles of dfrc systems and describe the progress made to date in devising different forms of signal embedding . various approaches to dfrc system design , including downlink and uplink signaling schemes , are discussed along with their respective benefits and limitations . we present tangible applications of dfrc systems and delineate their design requirements and challenges . future trends and open research problems are also highlighted . story_separator_special_tag synergistic design of communications and radar systems with common spectral and hardware resources is heralding a new era of efficiently utilizing a limited radio-frequency ( rf ) spectrum . such a joint radar communications ( jrc ) model has the advantages of low cost , compact size , less power consumption , spectrum sharing , improved performance , and safety due to enhanced information sharing . today , millimeter-wave ( mmwave ) communications have emerged as the preferred technology for short distance wireless links because they provide transmission bandwidth that is several gigahertz wide . this band is also promising for short-range radar applications , which benefit from the high range resolution arising from large transmit signal bandwidths . signal processing techniques are critical to the implementation of mmwave jrc systems . major challenges are joint waveform design and performance criteria that would optimally trade off between communications and radar functionalities . novel multiple-input , multiple-output ( mimo ) signal processing techniques are required because mmwave jrc systems employ large antenna arrays . there are opportunities to exploit recent advances in cognition , compressed sensing , and machine learning to reduce the required resources and dynamically allocate them with low overheads . this story_separator_special_tag self-driving cars constantly assess their environment to choose routes , comply with traffic regulations , and avoid hazards . to that aim , such vehicles are equipped with wireless communications transceivers as well as multiple sensors , including automotive radars . the fact that autonomous vehicles implement both radar and communications motivates designing these functionalities in a joint manner . such dual-function radar-communications ( dfrc ) designs are the focus of a large body of recent work . these approaches can lead to substantial gains in size , cost , power consumption , robustness , and performance , especially when both radar and communications operate in the same range , which is the case in vehicular applications . story_separator_special_tag joint radar and communication ( jrc ) technology has been important for civil and military applications for decades . this paper introduces the concepts , characteristics and advantages of jrc technology , presenting the typical applications that have benefited from jrc technology currently and in the future .
this paper explores the state-of-the-art of jrc at the levels of coexistence , cooperation , co-design and collaboration . compared to previous surveys , this paper reviews the full range of trends that drive the development of radar sensing and wireless communication using jrc . specifically , we explore an open research issue on radar and communication operating with mutual benefits based on collaboration , which represents the fourth stage of jrc evolution . this paper provides useful perspectives for future research on jrc technology . story_separator_special_tag joint communication and radar/radio sensing ( jcas ) , also known as dual-function radar communications , enables the integration of communication and radio sensing into one system , sharing a single transmitted signal . the perceptive mobile network ( pmn ) is a natural evolution of jcas from simple point-to-point links to a mobile/cellular network with integrated radio-sensing capability . in this article , we present a system architecture that unifies three types of sensing , investigate the required modifications to existing mobile networks , and exemplify the signals applicable to sensing . we also provide a review to stimulate research on open problems and potential solutions , including mutual information , joint design and optimization for waveform and antenna grouping , clutter suppression , sensing parameter estimation and pattern recognition , and networked sensing under the cellular topology . story_separator_special_tag v2x communication in the mmwave band is one way to achieve high data rates for applications like infotainment , cooperative perception , augmented reality assisted driving , and so on . mmwave communication relies on large antenna arrays , and configuring these arrays poses a high training overhead . in this article , we motivate the use of infrastructure-mounted sensors ( which will be part of future smart cities ) to aid in establishing and maintaining mmwave vehicular communication links . we provide numerical and measurement results to demonstrate that information from these infrastructure sensors reduces the mmwave array configuration overhead . finally , we outline future research directions to help materialize the use of infrastructure sensors for mmwave communication . story_separator_special_tag spectrum sharing enables radar and communication systems to share the spectrum efficiently by minimizing mutual interference . recently proposed multiple-input multiple-output radars based on sparse sensing and matrix completion ( mimo-mc ) , in addition to reducing communication bandwidth and power as compared with mimo radars , offer a significant advantage for spectrum sharing . the advantage stems from the way the sampling scheme at the radar receivers modulates the interference channel from the communication system transmitters , rendering it symbol dependent and reducing its row space . this makes it easier for the communication system to design its waveforms in an adaptive fashion so that it minimizes the interference to the radar subject to meeting rate and power constraints . two methods are proposed . first , based on knowledge of the radar sampling scheme , the communication system transmit covariance matrix is designed to minimize the effective interference power ( eip ) at the radar receiver , while maintaining a certain average capacity and transmit power for the communication system .
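a minimal sketch of that first design , assuming cvxpy is available ; real-valued matrices and hypothetical dimensions are used for brevity , whereas the cited work operates on complex channels and a specific sampling scheme .

import cvxpy as cp
import numpy as np

rng = np.random.default_rng(0)
n_tx = 4
G = rng.standard_normal((6, n_tx))   # effective comm-to-radar interference channel
H = rng.standard_normal((4, n_tx))   # communication downlink channel
cap_floor, power_budget = 4.0, 10.0  # illustrative constraint levels

R = cp.Variable((n_tx, n_tx), PSD=True)                       # transmit covariance
eip = cp.trace(G @ R @ G.T)                                   # interference power at radar
capacity = cp.log_det(np.eye(4) + H @ R @ H.T) / np.log(2)    # bits/s/hz
problem = cp.Problem(cp.Minimize(eip),
                     [capacity >= cap_floor, cp.trace(R) <= power_budget])
problem.solve()
print("minimized eip:", problem.value)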
second , a joint design of the communication transmit covariance matrix and the mimo-mc radar sampling scheme is proposed , which achieves even further eip reduction . story_separator_special_tag the paper proposes a cooperative scheme for the coexistence of a multiple-input-multiple-output ( mimo ) communication system and a matrix completion ( mc ) based , collocated mimo ( mimo-mc ) radar . to facilitate the coexistence , and also deal with clutter , both the radar and the communication systems use transmit precoding . for waveform flexibility , the radar uses a random unitary waveform matrix . we prove that for such waveforms and any precoding matrix , the error performance of mc is guaranteed . the radar transmit precoder , the radar subsampling scheme , and the communication transmit covariance matrix are jointly designed in order to maximize the radar sinr , while meeting certain communication rate and power constraints . the joint design is implemented at a control center , which is a node with which both systems share physical layer information , and which also performs data fusion for the radar . we provide efficient algorithms for the proposed optimization problem , along with insight on the feasibility and properties of the proposed design . simulation results show that the proposed scheme significantly improves the spectrum sharing performance in various scenarios . story_separator_special_tag beamforming techniques are proposed for a joint multi-input-multi-output ( mimo ) radar-communication ( radcom ) system , where a single device acts as radar and a communication base station ( bs ) by simultaneously communicating with downlink users and detecting radar targets . two operational options are considered , where we first split the antennas into two groups , one for radar and the other for communication . under this deployment , the radar signal is designed to fall into the null-space of the downlink channel . the communication beamformer is optimized such that the beampattern obtained matches the radar's beampattern while satisfying the communication performance requirements . to reduce the optimization constraints , we consider a second operational option , where all the antennas transmit a joint waveform that is shared by both radar and communications . in this case , we formulate an appropriate probing beampattern , while guaranteeing the performance of the downlink communications . by incorporating the sinr constraints into objective functions as penalty terms , we further simplify the original beamforming designs to weighted optimizations , and solve them by efficient manifold algorithms . numerical results show that the shared deployment outperforms the separated deployment . story_separator_special_tag common approaches for radar and communication system spectrum sharing consider protection zones with power allocation for in-band operation , dynamic spectrum access ( dsa ) with spectrum sensing for in-band operation , and sense-and-avoid , frequency-agile approaches for out-of-band operation . in this paper , we introduce a cooperative spectrum sharing model that combines multiple aspects of the previously mentioned approaches for in-band and out-of-band coexistence . this model jointly optimizes multiple radar and communication system parameters for improved frequency agility and performance while mitigating mutual interference between secondary radiofrequency ( rf ) users .
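the null-space idea in the radcom abstract above is compact enough to sketch numerically . a minimal illustration , assuming a random rayleigh downlink channel and hypothetical sizes ( 16 transmit antennas , 4 single-antenna users ) ; the cited work jointly optimizes precoders and beampatterns , which this sketch does not attempt :

import numpy as np

rng = np.random.default_rng(1)
M, K = 16, 4                      # tx antennas, single-antenna downlink users (hypothetical)
H = (rng.standard_normal((K, M)) + 1j * rng.standard_normal((K, M))) / np.sqrt(2)

# any radar precoder built from the right singular vectors beyond the first K
# lies in the null space of H and reaches the users with ~zero power.
_, _, Vh = np.linalg.svd(H)
V_null = Vh[K:].conj().T          # M x (M - K) null-space basis

s_radar = rng.standard_normal((M - K, 1)) + 1j * rng.standard_normal((M - K, 1))
x_radar = V_null @ s_radar        # one transmitted radar snapshot

print("interference power at the users:", np.linalg.norm(H @ x_radar) ** 2)  # ~1e-30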
spectrum sensing is implemented to form a power spectral estimate of the electromagnetic environment ( eme ) to identify the secondary users . multi-objective optimization then adjusts the output power , center frequency , and bandwidth parameters of the radar and communication system to maximize range resolution , radar signal to interference plus noise ratio ( sinr ) , and channel capacity . simulations are used to evaluate the model for different rf spectra . the results indicate that spectrum sharing is achieved for all systems . story_separator_special_tag in this letter , we consider the coexistence and spectrum sharing between downlink multi-user multiple-input-multiple-output ( mu-mimo ) communication and a mimo radar . for a given performance requirement of the downlink communication system , we design the transmit beamforming such that the detection probability of the radar is maximized . while the original optimization problem is non-convex , we exploit the monotonically increasing relationship of the detection probability with the non-centrality parameter of the resulting probability distribution to obtain a convex lower-bound optimization . the proposed beamformer is designed to be robust to imperfect channel state information ( csi ) . simulation results verify that the proposed approach facilitates the coexistence between radar and communication links , and illustrates a scalable tradeoff between the two systems' performance . story_separator_special_tag missile range instrumentation radars are capable of transmitting pulse code groups in which some of the pulses in each group can be used to convey information from the ground to a space vehicle containing a suitable beacon receiver and decoder . thus , a one-way communication system is provided with only a small increase in ground and vehicle equipment over that required for the tracking function . since the system is one way , communication reliability becomes of paramount importance . this paper presents a method of computing bit error rates , word error rates , and frame error rates as a function of the snr at the beacon receiver . the snr vs range can then be computed by standard methods to obtain the ground-to-vehicle ranges over which reliable communications can be conducted . story_separator_special_tag millimeter-wave ( mmwave ) radar is widely used in vehicles for applications such as adaptive cruise control and collision avoidance . in this paper , we propose an ieee 802.11ad-based radar for long-range radar ( lrr ) applications at the 60 ghz unlicensed band . we exploit the preamble of a single-carrier physical layer frame , which consists of golay complementary sequences with good correlation properties that make them suitable for radar . this system enables a joint waveform for automotive radar and a potential mmwave vehicular communication system based on the mmwave consumer wireless local area network standard , allowing hardware reuse . to formulate an integrated framework of vehicle-to-vehicle communication and lrr , we make typical assumptions for lrr applications , incorporating full duplex radar operation . this new feature is motivated by the recent development of systems with sufficient isolation and self-interference cancellation . we develop single- and multi-frame radar receiver algorithms for target detection as well as range and velocity estimation for both single- and multi-target scenarios .
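the golay-preamble idea above rests on a property that is easy to verify numerically : the autocorrelations of a complementary pair sum to a perfect impulse , so a correlator sees no range sidelobes . a minimal sketch ( the length-128 pair below comes from the standard recursive doubling , not the exact ga128/gb128 sequences of the 802.11ad standard ; delay , noise level and snapshot length are hypothetical ) :

import numpy as np

def golay_pair(n_doublings):
    # recursive doubling: (a, b) -> ([a b], [a -b]) preserves complementarity.
    a, b = np.array([1.0]), np.array([1.0])
    for _ in range(n_doublings):
        a, b = np.concatenate([a, b]), np.concatenate([a, -b])
    return a, b

a, b = golay_pair(7)  # length-128 complementary pair

# complementary property: the two autocorrelations sum to a delta of height 2 * 128.
r = np.correlate(a, a, "full") + np.correlate(b, b, "full")
assert abs(r[len(a) - 1] - 2 * len(a)) < 1e-9
assert np.allclose(np.delete(r, len(a) - 1), 0.0)

# toy ranging: both sequences echo back delayed by 40 samples, in noise.
rng = np.random.default_rng(0)
delay, n = 40, 512
rx_a, rx_b = np.zeros(n), np.zeros(n)
rx_a[delay:delay + 128] = a
rx_b[delay:delay + 128] = b
rx_a = rx_a + 0.1 * rng.standard_normal(n)
rx_b = rx_b + 0.1 * rng.standard_normal(n)

# correlate each return with its own sequence and add: the sidelobes cancel.
profile = np.correlate(rx_a, a, "valid") + np.correlate(rx_b, b, "valid")
print("estimated delay (samples):", int(np.argmax(np.abs(profile))))  # -> 40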
our proposed radar processing algorithms leverage channel estimation and time-frequency synchronization techniques used in a conventional ieee 802.11ad receiver with minimal modifications . analysis and simulations show story_separator_special_tag increasing safety and automation in transportation systems has led to the proliferation of radar and ieee 802.11p-based dedicated short-range communication ( dsrc ) in vehicles . however , current implementations of vehicular radar devices are expensive , use a substantial amount of bandwidth , and are susceptible to multiple security risks . in this paper , we use the ieee 802.11 orthogonal frequency-division multiplexing communications waveform , as found in ieee 802.11a/g/p , to perform radar functions . we present an approach that determines the mean-normalized channel energy from frequency-domain channel estimates and models it as a direct sinusoidal function of target range , enabling closest target range estimation . in addition , we propose an alternative to vehicular forward collision detection by extending ieee 802.11 dsrc and wifi technology to radar , extending the foundation of joint communications and radar frameworks . furthermore , we perform an experimental demonstration near dsrc spectrum using ieee 802.11 standard-compliant software-defined radios with potentially minimal modification through algorithm processing on frequency-domain channel estimates . the results show that our solution delivers sufficient accuracy and reliability for vehicular radar if we use the largest bandwidth story_separator_special_tag joint communication and radar ( jcr ) waveforms with fully digital baseband generation and processing can now be realized at the millimeter-wave ( mmwave ) band . prior work has developed a mmwave wireless local area network ( wlan ) -based jcr that exploits the wlan preamble for radars . the performance of target velocity estimation , however , was limited . in this paper , we propose a virtual waveform design for an adaptive mmwave jcr . the proposed system transmits a few non-uniformly placed preambles to construct several receive virtual preambles for enhancing velocity estimation accuracy , at the cost of only a small reduction in the communication data rate . we evaluate jcr performance trade-offs using the cramér-rao bound ( crb ) metric for radar estimation and a novel distortion minimum mean square error ( mmse ) metric for data communication . additionally , we develop three different mmse-based optimization problems for the adaptive jcr waveform design . simulations show that an optimal virtual ( non-uniform ) waveform achieves a significant performance improvement as compared to a uniform waveform . for a radar crb constrained optimization , the optimal radar range of operation and the optimal communication story_separator_special_tag beamforming has great potential for joint communication and radar sensing ( jcas ) , which is becoming a demanding feature on many emerging platforms , such as unmanned aerial vehicles and smart cars . although beamforming has been extensively studied for communication and radar sensing respectively , its application in the joint system is not straightforward due to different beamforming requirements by communication and sensing . in this paper , we propose a novel multibeam framework using steerable analog antenna arrays , which allows seamless integration of communication and sensing .
different from conventional jcas schemes that support jcas using a single beam , our framework is based on the key innovation of multibeam technology : providing a fixed subbeam for communication and a packet-varying scanning subbeam for sensing , simultaneously from a single transmitting array . we provide a system architecture and protocols for the proposed framework , complying well with modern packet communication systems with multicarrier modulation . we also propose low-complexity and effective multibeam design and generation methods , which offer great flexibility in meeting different communication and sensing requirements . we further develop sensing parameter estimation algorithms using conventional digital fourier transform and one-dimensional compressive sensing techniques . story_separator_special_tag in this paper , we develop a framework for a novel perceptive mobile/cellular network that integrates radar sensing function into the mobile communication network . we propose a unified system platform that enables downlink and uplink sensing , sharing the same transmitted signals with communications . we aim to tackle the fundamental sensing parameter estimation problem in perceptive mobile networks , by addressing two key challenges associated with sophisticated mobile signals and rich multipath in mobile networks . to extract sensing parameters from orthogonal frequency division multiple access and spatial division multiple access communication signals , we propose two approaches that formulate it as problems that can be solved by compressive sensing techniques . most sensing algorithms have limits on the number of multipath signals for their inputs . to reduce the multipath signals , as well as to remove unwanted clutter signals , we propose a background subtraction method based on simple recursive computation , and provide a closed-form expression for performance characterization . the effectiveness of these methods is validated in simulations . story_separator_special_tag we focus on a dual-functional multi-input-multi-output ( mimo ) radar-communication ( radcom ) system , where a single transmitter with multiple antennas communicates with downlink cellular users and detects radar targets simultaneously . several design criteria are considered for minimizing the downlink multiuser interference . first , we consider both omnidirectional and directional beampattern design problems , where the closed-form globally optimal solutions are obtained . based on the derived waveforms , we further consider weighted optimizations targeting a flexible tradeoff between radar and communications performance and introduce low-complexity algorithms . moreover , to address the more practical constant modulus waveform design problem , we propose a branch-and-bound algorithm that obtains a globally optimal solution , and derive its worst-case complexity as a function of the maximum iteration number . finally , we assess the effectiveness of the proposed waveform design approaches via numerical results . story_separator_special_tag a novel dual-function radar communication ( dfrc ) system is proposed that achieves high target resolution and high communication rate . it consists of a multiple-input multiple-output ( mimo ) radar , where only a small number of antennas are active in each channel use . the probing waveforms are of orthogonal frequency division multiplexing ( ofdm ) type .
the ofdm carriers are divided into two groups , one that is used by the active antennas in a shared fashion , and another one , where each subcarrier is assigned to an active antenna in an exclusive fashion ( private subcarriers ) . target estimation is carried out based on the received and transmitted symbols . the system communicates information via the transmitted ofdm data symbols and the pattern of active antennas in a generalized spatial modulation ( gsm ) fashion . a multi-antenna communication receiver can identify the indices of active antennas via sparse signal recovery methods . the use of shared subcarriers enables high communication rate . the private subcarriers are used to synthesize a virtual array for high angular resolution , and also for improved estimation of the active antenna indices . the ofdm waveforms allow story_separator_special_tag we consider the problem of waveform design for multiple input/multiple output ( mimo ) radars , where the transmit waveforms are adjusted based on target and clutter statistics . a model for the radar returns which incorporates the transmit waveforms is developed . the target detection problem is formulated for that model . optimal and suboptimal algorithms are derived for designing the transmit waveforms under different assumptions regarding the statistical information available to the detector . the performance of these algorithms is illustrated by computer simulation . story_separator_special_tag the concept of multi-input multi-output ( mimo ) radar has drawn considerable attention in recent years . one of the key technologies to enable the mimo radar is the design of orthogonal signals to support simultaneous transmission from multiple antennas , because the orthogonality characteristics have a strong impact on the performance of the mimo radar . in this paper , a novel approach to generate and process mutually orthogonal waveforms based on orthogonal frequency division multiplexing ( ofdm ) signals is proposed . the ambiguity function of the suggested waveforms is derived and analyzed through statistical derivation . a corresponding multi-channel signal processing scheme at the receiver part is proposed to eliminate the doppler influence and suppress the cross-channel interference . numerical simulation results based on the monte carlo method are presented to validate the theoretical derivation and analysis . the findings indicate that the inter-channel interference for moving targets can be eliminated and that the proposed interleaved ofdm signals are a suitable waveform set for mimo radar applications . story_separator_special_tag radar with digitally generated orthogonal frequency division multiplexing ( ofdm ) signals is an emerging research field that has been studied for the past few years . another trend for radar is the multiple-input multiple-output ( mimo ) architecture used for efficient direction-of-arrival estimation . these two technologies can be efficiently combined into an ofdm-mimo radar with novel interleaving concepts enabled by the multicarrier structure of ofdm . by multiplexing transmit antennas via subcarrier interleaving , the whole bandwidth can be utilized by all transmit antennas simultaneously . in the case of equidistant subcarrier interleaving , however , the unambiguously measurable distance range is reduced .
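several of the ofdm-based entries above ( and the ircs and passive-radar entries later in this list ) share one processing baseline : divide the received symbols by the known transmitted symbols , then read range from an ifft across subcarriers and velocity from an fft across symbols . a minimal single-target sketch under hypothetical numerology ( 256 subcarriers at 120 khz , 64 symbols , 28 ghz carrier ) , ignoring the cyclic prefix :

import numpy as np

rng = np.random.default_rng(2)
Nc, Ns = 256, 64                  # subcarriers, ofdm symbols (hypothetical)
df, Tsym = 120e3, 1 / 120e3       # subcarrier spacing, symbol duration (no cp)
c, fc = 3e8, 28e9

tau = 2 * 45 / c                  # one point target at 45 m ...
fd = 2 * 20 * fc / c              # ... moving at 20 m/s radially

tx = np.exp(1j * 2 * np.pi * rng.random((Nc, Ns)))   # unit-modulus data symbols
n_idx = np.arange(Nc)[:, None]                       # subcarrier index
m_idx = np.arange(Ns)[None, :]                       # symbol index
rx = tx * np.exp(-2j * np.pi * n_idx * df * tau) \
        * np.exp(2j * np.pi * m_idx * Tsym * fd)
rx += 0.05 * (rng.standard_normal((Nc, Ns)) + 1j * rng.standard_normal((Nc, Ns)))

# symbol division strips the data, leaving pure delay/doppler phase ramps.
F = rx / tx
rd_map = np.fft.fft(np.fft.ifft(F, axis=0), axis=1)
r_bin, v_bin = np.unravel_index(np.argmax(np.abs(rd_map)), rd_map.shape)
print("range ~", r_bin * c / (2 * Nc * df), "m")           # ~44 m (bin-quantized)
print("speed ~", v_bin * c / (2 * fc * Ns * Tsym), "m/s")  # ~20 m/s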
to avoid this reduction , we propose an ofdm-mimo radar concept with nonequidistant subcarrier interleaving ( neqsi ) that maintains the full unambiguously measurable distance range . since neqsi leads to sidelobes in distance estimation , we present an approach for generating near-optimum nonequidistant interleaving patterns . to further complement the proposed concept , a compressed-sensing-based distance-velocity estimation algorithm that achieves a high dynamic range in both distance and velocity dimensions is used . we study the performance of the presented concept in simulations and validate it by measurements . story_separator_special_tag the concept of multiple-input multiple-output ( mimo ) radars has drawn considerable attention recently . unlike the traditional single-input multiple-output ( simo ) radar which emits coherent waveforms to form a focused beam , the mimo radar can transmit orthogonal ( or incoherent ) waveforms . these waveforms can be used to increase the system spatial resolution . the waveforms also affect the range and doppler resolution . in traditional ( simo ) radars , the ambiguity function of the transmitted pulse characterizes the compromise between range and doppler resolutions . it is a major tool for studying and analyzing radar signals . recently , the idea of the ambiguity function has been extended to the case of mimo radar . in this paper , some mathematical properties of the mimo radar ambiguity function are first derived . these properties provide some insights into mimo radar waveform design . then a new algorithm for designing orthogonal frequency-hopping waveforms is proposed . this algorithm reduces the sidelobes in the corresponding mimo radar ambiguity function and makes the energy of the ambiguity function spread evenly in the range and angular dimensions . story_separator_special_tag linear stepped frequency radar is used in wide-band radar applications , such as airborne synthetic aperture radar ( sar ) , turntable inverse sar , and ground penetration radar . the frequency is stepped linearly with a constant frequency change , and range cells are formed by fast fourier transform processing . the covered bandwidth defines the range resolution , and the length of the frequency step restricts the nonambiguous range interval . a random choice of the transmitted frequencies suppresses the range ambiguity , improves covert detection , and reduces the signal interference between adjacent sensors . as a result of the random modulation , however , a noise component is added to the range/doppler sidelobes . in this paper , the characteristics of random stepped frequency radar are compared with those of frequency-modulated continuous wave noise radar , and the statistical characteristics of the ambiguity function and the sidelobe noise floor are analyzed . algorithms are investigated which reduce the sidelobes and the noise-floor contribution from strong dominating reflectors in the scene . theoretical predictions are compared with monte carlo simulations and experimental data . story_separator_special_tag modern radar systems are expected to operate reliably in congested environments under cost and power constraints . a recent technology for realizing such systems is frequency agile radar ( far ) , which transmits narrowband pulses in a frequency hopping manner .
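the ambiguity function that several entries above analyze is straightforward to evaluate numerically on a grid . a minimal sketch for a single lfm pulse ( hypothetical 10 us / 5 mhz chirp ; the mimo extension in the cited work adds an angular dimension omitted here ) :

import numpy as np

def ambiguity(s, fs, delays, dopplers):
    """|chi(tau, nu)| = |sum_t s(t) conj(s(t - tau)) exp(j 2 pi nu t)| on a grid."""
    t = np.arange(len(s)) / fs
    chi = np.empty((len(delays), len(dopplers)))
    for i, tau in enumerate(delays):
        shift = int(round(tau * fs))
        s_del = np.roll(s, shift)
        if shift > 0:
            s_del[:shift] = 0
        elif shift < 0:
            s_del[shift:] = 0
        for k, nu in enumerate(dopplers):
            chi[i, k] = np.abs(np.sum(s * np.conj(s_del) * np.exp(2j * np.pi * nu * t)))
    return chi / np.sum(np.abs(s) ** 2)   # normalize so chi(0, 0) = 1

# lfm (chirp) pulse: 10 us duration, 5 mhz sweep, sampled at 20 mhz.
fs, T, B = 20e6, 10e-6, 5e6
t = np.arange(int(T * fs)) / fs
s = np.exp(1j * np.pi * (B / T) * t ** 2)

delays = np.linspace(-5e-6, 5e-6, 101)
dopplers = np.linspace(-1e6, 1e6, 101)
chi = ambiguity(s, fs, delays, dopplers)
print("peak at zero delay/doppler:", chi[50, 50])   # -> 1.0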
to enhance the target recovery performance of far in complex electromagnetic environments , and particularly its range-doppler recovery performance , multi-carrier agile phased array radar ( caesar ) was proposed . caesar extends far to multi-carrier waveforms while introducing the notion of spatial agility . in this paper , we theoretically analyze the range-doppler recovery capabilities of caesar . particularly , we derive conditions which guarantee accurate reconstruction of these range-doppler parameters . these conditions indicate that by increasing the number of frequencies transmitted in each pulse , caesar improves performance over conventional far , especially in complex environments where some radar measurements are severely corrupted by interference . story_separator_special_tag we examine the recovery of block sparse signals and extend the recovery framework in two important directions ; one by exploiting the signals ' intra-block correlation and the other by generalizing the signals ' block structure . we propose two families of algorithms based on the framework of block sparse bayesian learning ( bsbl ) . one family , directly derived from the bsbl framework , requires knowledge of the block structure . another family , derived from an expanded bsbl framework , is based on a weaker assumption on the block structure , and can be used when the block structure is completely unknown . using these algorithms , we show that exploiting intra-block correlation is very helpful in improving recovery performance . these algorithms also shed light on how to modify existing algorithms or design new ones to exploit such correlation and improve performance . story_separator_special_tag detection and estimation problems in multiple-input multiple-output ( mimo ) radar have recently drawn considerable interest in the signal processing community . radar has long been a staple of signal processing , and mimo radar presents challenges and opportunities in adapting classical radar imaging tools and developing new ones . our aim in this article is to showcase the potential of tensor algebra and multidimensional harmonic retrieval ( hr ) in signal processing for mimo radar . tensor algebra and multidimensional hr are relatively mature topics , albeit still on the fringes of signal processing research . we show they are in fact central for target localization in a variety of pertinent mimo radar scenarios . tensor algebra naturally comes into play when the coherent processing interval comprises multiple pulses , or multiple transmit and receive subarrays are used ( multistatic configuration ) . multidimensional harmonic structure emerges for far-field uniform linear transmit/receive array configurations , also taking into account doppler shift ; and hybrid models arise in-between . this viewpoint opens the door for the application and further development of powerful algorithms and identifiability results for mimo radar . compared to the classical radar-imaging-based methods such as capon or story_separator_special_tag in this paper , we consider the problem of joint delay-doppler estimation of moving targets in a passive radar that makes use of orthogonal frequency-division multiplexing communication signals . a compressed sensing algorithm is proposed to achieve super-resolution and better accuracy , using both the atomic norm and the $\ell_1$-norm . the atomic norm is used to manifest the signal sparsity in the continuous domain .
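a gridded stand-in for the sparse recovery above : orthogonal matching pursuit over a dictionary of delay signatures built from ofdm subcarrier phase ramps . the gridless atomic-norm sdp of the paper needs a convex solver , so this sketch only shows the on-grid counterpart ( two on-grid delays , hypothetical subcarrier spacing and grid ) :

import numpy as np

rng = np.random.default_rng(3)
Nc, df = 128, 120e3
taus_true = np.array([0.2e-6, 0.55e-6])          # two path delays (on-grid)
amps_true = np.array([1.0, 0.6])

n = np.arange(Nc)[:, None]
grid = np.linspace(0, 1e-6, 201)                  # delay grid (hypothetical)
A = np.exp(-2j * np.pi * n * df * grid[None, :])  # Nc x G delay dictionary

y = np.exp(-2j * np.pi * n * df * taus_true[None, :]) @ amps_true
y += 0.05 * (rng.standard_normal(Nc) + 1j * rng.standard_normal(Nc))

# omp: greedily pick the best-correlated delay atom, re-fit all selected
# atoms by least squares, and update the residual.
support, resid = [], y.copy()
for _ in range(2):
    support.append(int(np.argmax(np.abs(A.conj().T @ resid))))
    coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
    resid = y - A[:, support] @ coef
print("estimated delays (us):", np.sort(grid[support]) * 1e6)  # ~0.2, 0.55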
unlike previous works that assume the demodulation to be error free , we explicitly introduce the demodulation error signal whose sparsity is imposed by the $\ell_1$-norm . on this basis , the delays and doppler frequencies are estimated by solving a semidefinite program ( sdp ) which is convex . we also develop an iterative method for solving this sdp via the alternating direction method of multipliers where each iteration involves closed-form computation . simulation results are presented to illustrate the high performance of the proposed algorithm . story_separator_special_tag there is growing interest in integrating communication and radar sensing into one system . however , very limited results are reported on how to realize sensing using complicated mobile signals when joint communication and radar sensing ( jcas ) is applied to mobile networks . this paper studies radar sensing using one-dimensional ( 1d ) to three-dimensional ( 3d ) compressive sensing ( cs ) techniques , using signals compatible with the latest fifth generation ( 5g ) new radio ( nr ) standard . we demonstrate that radio sensing using both downlink and uplink 5g signals can be realized with reasonable performance using these cs techniques . story_separator_special_tag in intelligent transportation systems , an efficient way for an intelligent vehicle to obtain the range and velocity estimates of other vehicles while simultaneously communicating with other facilities , such as vehicles and base stations , is to use the integrated radar and communications system ( ircs ) . in the ircs , the transmitted waveform is the orthogonal frequency division multiplexing ( ofdm ) integrated radar and communications waveform that contains communications information . due to the existence of communications information , the traditional range and velocity estimation methods in radar cannot be directly utilized in the ircs . moreover , to improve the resolution of range and velocity estimations , the signal bandwidth and coherent processing interval ( cpi ) are usually required to be increased , which increases system cost and reduces the update rate . to solve these problems , an auto-paired super-resolution range and velocity estimation method is proposed using the ofdm integrated radar and communications waveform . first , the communications information in the received signals is compensated . then , frequency smoothing is performed to reduce the correlation between the echoes reflected by story_separator_special_tag the perceptive mobile network ( pmn ) is a recently proposed next-generation network that integrates radar sensing into communications . one major challenge for realizing sensing in pmns is how to deal with spatially-separated asynchronous transceivers . the asynchrony between sensing receiver and transmitter will cause both timing offsets ( tos ) and carrier frequency offsets ( cfos ) and lead to degraded sensing accuracy in both ranging and velocity measurements . in this paper , we propose an uplink sensing scheme for pmns with asynchronous transceivers , aiming to resolve the sensing ambiguity and improve the sensing accuracy . we first adopt a cross-antenna cross-correlation ( cacc ) operation to remove the sensing ambiguity associated with both tos and cfos . without sensing ambiguity , both the actual propagation delay and the actual doppler frequency of multiple targets can be obtained using the cacc outputs .
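the cacc operation above is easy to reproduce in a toy setting : conjugate-multiplying each antenna's csi by a reference antenna cancels the timing-offset and cfo phases that are common across antennas , at the price of a mirrored doppler spectrum , which is exactly what motivates the paper's mirrored-music estimator . a sketch with one static path plus one moving target ( all numerology hypothetical ) :

import numpy as np

rng = np.random.default_rng(4)
M, Nf, Nt = 4, 64, 128            # rx antennas, subcarriers, csi snapshots
df, Ts = 120e3, 1e-3
fd = 156.25                       # target doppler (hz), 20 fft bins exactly

n = np.arange(Nf)[None, :, None]
t = np.arange(Nt)[None, None, :]
m = np.arange(M)[:, None, None]

def path(tau, nu, aoa, amp):
    return (amp * np.exp(1j * np.pi * m * np.sin(aoa))
            * np.exp(-2j * np.pi * n * df * tau)
            * np.exp(2j * np.pi * nu * Ts * t))

# static (e.g. line-of-sight) path plus a moving target ...
h = path(0.1e-6, 0.0, np.deg2rad(-10), 1.0) + path(0.4e-6, fd, np.deg2rad(30), 0.3)
# ... then the unknown timing-offset and cfo phases, common to all antennas.
h = h * np.exp(-2j * np.pi * n * df * 1e-7 * rng.random(Nt)) \
      * np.exp(2j * np.pi * 41.0 * Ts * t)

# cacc: conjugate-multiply by a reference antenna; the common offsets cancel
# and the strong static path serves as the phase reference for the target.
cacc = h[1] * np.conj(h[0])

# the doppler spectrum of one subcarrier now peaks at 0 and at BOTH +fd and
# -fd: the mirrored spectrum that mirrored-music is built around.
spec = np.abs(np.fft.fft(cacc[0]))
freqs = np.fft.fftfreq(Nt, Ts)
print("strongest doppler bins (hz):", np.sort(freqs[np.argsort(spec)[-3:]]))
# -> [-156.25, 0.0, 156.25]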
to exploit the redundancy of the cacc outputs and reduce the complexity , we then propose a novel mirrored-music algorithm , which halves the number of unknown parameters to be estimated , to obtain the actual values of delays and doppler frequencies . finally , we propose a high-resolution angles-of-arrival ( aoas ) estimation algorithm , which jointly story_separator_special_tag passive radar is a concept where illuminators of opportunity are used in a multistatic radar setup . new digital signals , like digital audio/video broadcast ( dab/dvb ) , are excellent candidates for this scheme , as they are widely available , can be easily decoded to acquire the noise-free signal , and employ orthogonal frequency division multiplex ( ofdm ) . multicarrier transmission schemes like ofdm use block channel equalization in the frequency domain , efficiently implemented as a fast fourier transform , and these channel estimates can directly be used to identify targets based on fourier analysis across subsequent blocks . in this paper , we derive the exact matched filter formulation for passive radar using ofdm waveforms . we then show that the current approach using fourier analysis across block channel estimates is equivalent to the matched filter , based on a piecewise constant assumption on the doppler-induced phase rotation in the time domain . we next present high-resolution algorithms based on the same assumption : first we implement music as a 2-d spectral estimator using spatial smoothing ; then we use the new concept of compressed sensing to identify targets . we compare the new algorithms story_separator_special_tag indoor human tracking is fundamental to many real-world applications such as security surveillance , behavioral analysis , and elderly care . previous solutions usually require a dedicated device to be carried by the human target , which is inconvenient or even infeasible in scenarios such as elderly care and break-ins . however , compared with device-based tracking , device-free tracking is particularly challenging because the much weaker reflection signals are employed for tracking . the problem becomes even more difficult with commodity wi-fi devices , which have a limited number of antennas , small bandwidth , and severe hardware noise . in this work , we propose indotrack , a device-free indoor human tracking system that utilizes only commodity wi-fi devices . indotrack is composed of two innovative methods : ( 1 ) doppler-music is able to extract accurate doppler velocity information from noisy wi-fi channel state information ( csi ) samples ; and ( 2 ) doppler-aoa is able to determine the absolute trajectory of the target by jointly estimating target velocity and location via probabilistic co-modeling of spatial-temporal doppler and aoa information . extensive experiments demonstrate that indotrack can achieve a 35 cm median error in human trajectory estimation , outperforming story_separator_special_tag this paper presents widar2.0 , the first wifi-based system that enables passive human localization and tracking using a single link on commodity off-the-shelf devices . previous works based on either specialized or commercial hardware all require multiple links , preventing their wide adoption in scenarios like homes where typically only a single ap is installed . the key insight underlying widar2.0 to circumvent the use of multiple links is to leverage multi-dimensional signal parameters from one single link .
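the music variants above ( doppler-music , mirrored-music , 2-d music ) all build on the same subspace recipe , worth one compact sketch : eigendecompose the sample covariance , keep the noise subspace , and scan steering vectors for orthogonality . a textbook 1-d angle-of-arrival version for a hypothetical 8-element half-wavelength ula with two sources :

import numpy as np

rng = np.random.default_rng(5)
M, N, K = 8, 200, 2                                      # elements, snapshots, sources
doas = np.deg2rad([-20.0, 35.0])

def steer(theta):
    return np.exp(1j * np.pi * np.arange(M)[:, None] * np.sin(np.atleast_1d(theta)))

A = steer(doas)                                          # 8 x 2 steering matrix
S = (rng.standard_normal((K, N)) + 1j * rng.standard_normal((K, N))) / np.sqrt(2)
X = A @ S + 0.1 * (rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N)))

# noise subspace = eigenvectors of the sample covariance beyond the K largest.
R = X @ X.conj().T / N
_, eigvecs = np.linalg.eigh(R)                           # ascending eigenvalues
En = eigvecs[:, : M - K]

# the pseudospectrum peaks where steering vectors are orthogonal to En.
grid = np.deg2rad(np.linspace(-90, 90, 1801))
p = 1.0 / np.linalg.norm(En.conj().T @ steer(grid), axis=0) ** 2
pk = np.where((p[1:-1] > p[:-2]) & (p[1:-1] > p[2:]))[0] + 1   # local maxima
pk = pk[np.argsort(p[pk])][-K:]
print("estimated doas (deg):", np.sort(np.rad2deg(grid[pk])))  # ~ -20, 35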
to this end , we build a unified model accounting for angle-of-arrival , time-of-flight , and doppler shifts together and devise an efficient algorithm for their joint estimation . we then design a pipeline to translate the erroneous raw parameters into precise locations , which first finds parameters corresponding to the reflections of interest , then refines range estimates , and ultimately outputs target locations . our implementation and evaluation on commodity wifi devices demonstrate that widar2.0 achieves better or comparable performance to state-of-the-art localization systems , which either use specialized hardware or require 2 to 40 wi-fi links . story_separator_special_tag this article studies physical layer security in a multiple-input-multiple-output ( mimo ) dual-functional radar-communication ( dfrc ) system , which communicates with downlink cellular users and tracks radar targets simultaneously . here , the radar targets are considered as potential eavesdroppers which might intercept the information sent from the communication transmitter to legitimate users . to ensure transmission secrecy , we employ artificial noise ( an ) at the transmitter and formulate optimization problems by minimizing the signal-to-noise ratio ( snr ) received at radar targets , while guaranteeing the signal-to-interference-plus-noise ratio ( sinr ) requirement at legitimate users . we first consider the ideal case where both the target angle and the channel state information ( csi ) are precisely known . the scenario is further extended to more general cases with target location uncertainty and csi errors , where we propose robust optimization approaches to guarantee the worst-case performance . accordingly , the computational complexity is analyzed for each proposed method . our numerical results show the feasibility of the algorithms in the presence of instantaneous and statistical csi errors . in addition , the secrecy rate of the secure dfrc system grows with the increasing angular interval story_separator_special_tag millimeter wave ( mmwave ) communication is the only viable approach for high-bandwidth connected vehicles exchanging raw sensor data . a main challenge for mmwave in connected vehicles is that it requires frequent link reconfiguration in mobile environments , which is a source of high overhead . in this paper , we introduce the concept of radar-aided mmwave vehicular communication . side information derived from radar mounted on the infrastructure operating in a given mmwave band is used to adapt the beams of the vehicular communication system operating in another millimeter wave band . we propose a set of algorithms to perform the beam alignment task in a vehicle-to-infrastructure ( v2i ) scenario , from extracting information from the radar signal to configuring the beams that illuminate the different antennas in the vehicle . simulation results confirm that radar can be a useful source of side information that helps configure the mmwave v2i link . story_separator_special_tag in vehicular networks of the future , sensing and communication functionalities will be intertwined . in this article , we investigate a radar-assisted predictive beamforming design for vehicle-to-infrastructure ( v2i ) communication by exploiting the dual-functional radar-communication ( dfrc ) technique .
aiming to realize joint sensing and communication functionalities at road side units ( rsus ) , we present a novel extended kalman filtering ( ekf ) framework to track and predict the kinematic parameters of each vehicle . by exploiting the radar functionality of the rsu , we show that the communication beam tracking overheads can be drastically reduced . to improve the sensing accuracy while guaranteeing the downlink communication sum-rate , we further propose a power allocation scheme for multiple vehicles . numerical results show that the proposed dfrc-based beam tracking approach significantly outperforms the communication-only feedback-based technique in tracking performance . furthermore , the designed power allocation method is able to achieve a favorable performance trade-off between sensing and communication . story_separator_special_tag the development of dual-functional radar-communication ( dfrc ) systems , where vehicle localization and tracking can be combined with vehicular communication , will lead to more efficient future vehicular networks . in this paper , we develop a predictive beamforming scheme in the context of dfrc systems . we consider a system model where the road-side unit estimates and predicts the motion parameters of vehicles based on the echoes of the dfrc signal . compared to the conventional feedback-based beam tracking approaches , the proposed method can reduce the signaling overhead and improve the accuracy of the angle estimation . to accurately estimate the motion parameters of vehicles in real time , we propose a novel message passing algorithm based on a factor graph , which yields near-optimal performance approaching that of maximum a posteriori estimation . the beamformers are then designed based on the predicted angles for establishing the communication links . with the employment of appropriate approximations , all messages on the factor graph can be derived in closed form , thus reducing the complexity . simulation results show that the proposed dfrc-based beamforming scheme is superior to the feedback-based approach in terms of both estimation and story_separator_special_tag in parts i and ii of this three-part tutorial on dual-functional radar-communication ( dfrc ) design for vehicular networks , we overviewed the basics of radar and communication systems and the state of the art in dfrc , respectively . as part iii of the tutorial , we address the issue of predictive beamforming for the vehicle-to-infrastructure ( v2i ) links without the need for explicit state evolution models . the beam tracking is done with the aid of the dual-functional radar-communication signals transmitted by the road side unit ( rsu ) . the vehicle's location parameters are estimated by exploiting the reflected echo signals . given these estimates , we propose a method to predict the next position of the vehicle , without specifying a state model . finally , we verify the superiority of the proposed approaches via numerical simulations , which show that the proposed technique outperforms the conventional benchmark schemes in terms of the achievable communication rate . story_separator_special_tag in this paper , we consider the theory and implementation of a joint radar-communication system .
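the predictive-beamforming entries above all follow the same loop : filter the radar echoes into a kinematic state , predict the state one slot ahead , and point the beam at the predicted angle . a stripped-down stand-in ( a 1-d constant-velocity vehicle observed through a noisy angle measurement , hence an extended kalman filter with a linearized arctangent measurement ; the papers use richer state models , power allocation and factor graphs , none of which is attempted here , and all parameters are hypothetical ) :

import numpy as np

rng = np.random.default_rng(6)
T, d = 0.1, 20.0                       # update period (s), rsu-to-road offset (m)
F = np.array([[1, T], [0, 1]])         # constant-velocity state transition
Q = 0.01 * np.array([[T**3 / 3, T**2 / 2], [T**2 / 2, T]])
r_meas = np.deg2rad(1.0) ** 2          # angle-measurement noise variance

x_true = np.array([-40.0, 15.0])       # position along road (m), speed (m/s)
x_est, P = np.array([-35.0, 10.0]), np.diag([25.0, 25.0])

for _ in range(60):
    # ground truth and a noisy radar angle measurement from the echo.
    x_true = F @ x_true + rng.multivariate_normal(np.zeros(2), Q)
    z = np.arctan2(x_true[0], d) + np.sqrt(r_meas) * rng.standard_normal()

    # ekf predict.
    x_est, P = F @ x_est, F @ P @ F.T + Q
    # ekf update with linearized measurement h(x) = arctan2(x, d).
    H = np.array([[d / (x_est[0] ** 2 + d ** 2), 0.0]])
    S = H @ P @ H.T + r_meas
    K = P @ H.T / S
    x_est = x_est + (K * (z - np.arctan2(x_est[0], d))).ravel()
    P = (np.eye(2) - K @ H) @ P

    # steer the beam at the PREDICTED angle for the next slot.
    beam_angle = np.arctan2((F @ x_est)[0], d)

print("predicted beam angle (deg):", np.rad2deg(beam_angle))
print("final position error (m):", abs(x_true[0] - x_est[0]))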
a radar-software library compatible with texas instruments' ( ti's ) small-form-factor software-defined radio ( sdr ) platform is developed that allows users to implement real-time adaptive radar algorithms in matlab without requiring detailed knowledge of the inner workings of the sdr . as an example design and demonstration of the capabilities of the radar-software library , a joint radar-communication system has been implemented . the system uses wideband digital communication signals to simultaneously interrogate a scene as a radar while communicating range-doppler maps resulting from previous interrogations . this combined system eliminates the need for separate hardware for communicating recorded radar returns to a base station . we study digital communication waveforms under the performance criteria of a radar waveform . story_separator_special_tag we have designed , simulated , fabricated , and tested an ultra-wideband ( uwb ) multifunctional communication and radar system utilizing a single shared transmitting antenna aperture . two surface acoustic wave bandpass chirp filters were used to modulate the radar and communications pulses , generating linear frequency modulation waveforms with opposite slope factors . the system operates at a center frequency of 750 mhz with 500 mhz of instantaneous bandwidth . the measured range resolution is 63 cm ( 25 in ) using targets with a radar cross section of 2.7 m² . the probability of detection was measured to be 99 % , and the probability of false alarm was 7 % with the communication and radar systems operating simultaneously . the bit error rate for simultaneous communication at 1 mb/s , and radar at 150 khz pulse repetition frequency and 1.5-ns pulsewidth , is 2e-3 . our uwb multifunctional system demonstrates the ability to simultaneously interrogate the environment and communicate through a shared transmitting antenna aperture , while realizing a simple system architecture with low output power and without employing time-division multiplexing . story_separator_special_tag in this paper , we consider the design of integrated radar and communication systems that utilize weighted pulse trains with the elements of oppermann sequences serving as complex-valued weights . an analytical expression of the ambiguity function for weighted pulse trains with oppermann sequences is derived . given a family of oppermann sequences , it is shown that the related ambiguity function depends only on one sequence parameter . this property simplifies the design of the associated weighted pulse trains as it constrains the degrees of freedom . in contrast to the single polyphase pulse compression sequences that are typically deployed in radar applications , the families considered in this paper form sets of sequences . as such , they also readily facilitate multiple access in communication systems . numerical examples are provided that show the wide range of options offered by oppermann sequences in the design of integrated radar and communication systems . story_separator_special_tag dual-function radar communication ( dfrc ) systems implement both sensing and communication using the same hardware . such schemes are often more efficient in terms of size , power , and cost than using distinct radar and communication systems . since these functionalities share resources such as spectrum , power , and antennas , dfrc methods typically entail some degradation in both radar and communication performance .
in this work , we propose a dfrc scheme based on the carrier agile phased array radar ( caesar ) , which combines frequency and spatial agility . the proposed dfrc system , referred to as multi-carrier agile joint radar communication ( majorcom ) , exploits the inherent spatial and spectral randomness of caesar to convey digital messages in the form of index modulation . the resulting communication scheme naturally coexists with the radar functionality , and thus does not come at the cost of reduced radar performance . we analyze the performance of majorcom , quantifying its achievable bit rate . in addition , we develop a low-complexity decoder and a codebook design approach , which simplify the recovery of the communicated bits . our numerical results demonstrate that majorcom is story_separator_special_tag frequency-hopping ( fh ) mimo radar-based dual-function radar communication ( fh-mimo dfrc ) enables the communication symbol rate to exceed the radar pulse repetition frequency , which requires accurate estimations of timing offset and channel parameters . the estimations , however , are challenging due to unknown , fast-changing hopping frequencies and the multiplicative coupling between timing offset and channel parameters . in this article , we develop accurate methods for a single-antenna communication receiver to estimate timing offset and channel for fh-mimo dfrc . first , we design a novel fh-mimo radar waveform , which enables a communication receiver to estimate the hopping frequency sequence ( hfs ) used by the radar , instead of acquiring it from the radar . importantly , the novel waveform incurs no degradation to radar ranging performance . then , by capturing distinct hfs features , we develop two estimators for timing offset and derive the mean-squared-error lower bound of each estimator . using the bounds , we design an hfs that renders both estimators applicable . furthermore , we develop an accurate channel estimation method , reusing the single hop for timing offset estimation . validated by simulations , the accurate channel estimates attained by story_separator_special_tag recently , dual-function radar-communication systems in which the radar platform and resources are used for communication signal embedding have emerged as means to alleviate spectrum congestion and ease competition over frequency bandwidth . in this paper , we introduce a new technique for information embedding specific to multiple-input multiple-output ( mimo ) radar . we exploit the fact that in a mimo radar system , the receiver needs to know the association of the transmit waveforms to the transmit antennas . however , this association can change over different pulse repetition periods without impacting the radar functionality . we show that by shuffling the waveforms across the transmit antennas over consecutive pulse repetition periods , a data rate of megabits per second can be achieved for a moderate number of transmit antennas . the probability of error is analyzed and bounds on the symbol error rate are derived . simulation examples are provided for performance evaluation and to demonstrate the effectiveness of the proposed information embedding technique . story_separator_special_tag intensifying competition over the frequency spectrum has driven the research effort into strategies for coexistence between radar and communications .
strategies for achieving this range from spectrum sharing using cognitive radio techniques to co-design , where the radar and communications systems are re-imagined to ensure they do not interfere with each other . in this paper , we propose a new signaling scheme for dual-function radar communications ( dfrc ) that enables frequency-hopped multiple-input-multiple-output ( mimo ) orthogonal radar waveforms to carry communication symbols . the frequency-hopping ( fh ) code is changed from one subpulse to another such that the index of the selected code carries the desired symbol . contrary to recent phase-shift keying ( psk ) -based schemes which embed one psk symbol within each subpulse , the proposed scheme does not suffer from phase discontinuity and is shown to have better spectral efficiency . we show that the data rate that can be achieved using the proposed scheme is proportional to the size of the fh code , the number of transmit antennas , the number of subpulses within a pulse repetition interval , and the pulse repetition frequency . simulation examples are provided to evaluate the performance of the proposed method . story_separator_special_tag spectrum congestion and competition over frequency bandwidth could be alleviated by deploying dual-function radar-communications systems , where the radar platform presents itself as a system of opportunity to secondary communication functions . in this paper , we propose a new technique for communication information embedding into the emission of multiple-input multiple-output ( mimo ) radar using sparse antenna array configurations . the phases induced by antenna displacements in a sensor array are unique , which makes the array configuration feasible for symbol embedding . we also exploit the fact that in a mimo radar system , the association of independent waveforms with the transmit antennas can change over different pulse repetition periods without impacting the radar functionality . we show that by reconfiguring the sparse transmit array through antenna selection and reordering the waveform-antenna pairing , a data rate of megabits per second can be achieved for a moderate number of transmit antennas . to counteract practical implementation issues , we propose a regularized antenna-selection-based signaling scheme . the possible data rate is analyzed and the symbol/bit error rates are derived . simulation examples are provided for performance evaluation and to demonstrate the effectiveness of the proposed dfrc techniques . story_separator_special_tag dual-function radar-communication ( dfrc ) based on frequency hopping ( fh ) mimo radar ( fh-mimo dfrc ) achieves a symbol rate much higher than the radar pulse repetition frequency . such dfrc , however , is prone to eavesdropping due to the spatially uniform illumination of fh-mimo radar . how to enhance the physical layer security of fh-mimo dfrc is vital yet unsolved . in this paper , we reveal the potential of using permutations of hopping frequencies to achieve secure and high-speed fh-mimo dfrc . detecting permutations at a communication user is challenging due to the dependence on spatial angle . we propose a series of baseband waveform processing methods which address the challenge specifically for the legitimate user ( bob ) and meanwhile scramble constellations almost omnidirectionally . we discover a deterministic sign rule from the signals processed by the proposed methods . based on the rule , we develop accurate algorithms for information decoding at bob .
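the rates these index-modulation schemes quote follow from simple counting , which is worth making concrete . a back-of-envelope sketch ( all numbers hypothetical ; the fh count below ignores the distinct-hop constraint the actual designs impose , so it is an upper bound ) :

from math import comb, factorial, floor, log2

# bits per pulse from index modulation, per the schemes above:
#   carrier selection : choose k of f carriers           -> log2( c(f, k) )
#   antenna shuffling : which antenna gets which carrier -> log2( k! )
#   fh code selection : each of m antennas picks one of
#     q hopping codes in each of j subpulses             -> m * j * log2(q)
f, k = 8, 4           # carriers available / used per pulse (hypothetical)
m, j, q = 4, 10, 16   # tx antennas, subpulses per pri, fh codebook size
prf = 10e3            # pulses per second

bits_select = floor(log2(comb(f, k)))        # log2(70) -> 6 bits
bits_shuffle = floor(log2(factorial(k)))     # log2(24) -> 4 bits
bits_fh = m * j * floor(log2(q))             # 4*10*4   -> 160 bits
print("selection + shuffling :", bits_select + bits_shuffle, "bits/pulse")
print("fh-code indexing      :", bits_fh, "bits/pulse ->",
      bits_fh * prf / 1e6, "mbit/s at a 10 khz prf")   # -> 1.6 mbit/s

the megabit-per-second figures the abstracts report are consistent with this kind of count : the rate scales with the codebook size , the antenna count , the subpulse count , and the prf .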
confirmed by simulation , our design substantially enhances the physical layer security of fh-mimo dfrc , improves decoding performance compared with existing designs , and reduces mutual interference among radar targets . story_separator_special_tag increasing pressure on the available spectrum , particularly from wireless communications services , has led to significant research into strategies for coexistence between radar and communications . in this context , dual-function radar-communications ( dfrc ) systems have emerged as a potential solution to the spectrum congestion problem . dfrc schemes treat the radar as the primary modality and aim to embed communications symbols into the transmitted waveforms . one such approach is to use the frequency hops in a frequency-hopped ( fh ) multiple-input multiple-output ( mimo ) radar to encode communications symbols . while this scheme allows for higher data rates , embedding the information in the fast time impacts the radar performance . therefore , the codebook selection is an important design consideration that we focus on in this work . we discuss the impact of the selection of the codebook on the ambiguity functions of the radar waveforms and elucidate its relation to the probability of having degenerate waveforms . we illustrate these ideas using two codebook selections , namely an arbitrary selection and a balanced codebook . the results show that balancing the codebook leads to improved radar performance . story_separator_special_tag the use of information theory to design waveforms for the measurement of extended radar targets exhibiting resonance phenomena is investigated . the target impulse response is introduced to model target scattering behavior . two radar waveform design problems with constraints on waveform energy and duration are then solved . in the first , a deterministic target impulse response is used to design waveform/receiver-filter pairs for the optimal detection of extended targets in additive noise . in the second , a random target impulse response is used to design waveforms that maximize the mutual information between a target ensemble and the received signal in additive gaussian noise . the two solutions are contrasted to show the difference between the characteristics of waveforms for extended target detection and information extraction . the optimal target detection solution places as much energy as possible in the largest target scattering mode under the imposed constraints on waveform duration and energy . the optimal information extraction solution distributes the energy among the target scattering modes in order to maximize the mutual information between the target ensemble and the received radar waveform . story_separator_special_tag in this paper , we study optimal spatio-temporal power mask design to maximize mutual information ( mi ) for a joint communication and ( radio ) sensing ( jcas , a.k.a . radar-communication ) multi-input multi-output ( mimo ) downlink system . we consider a typical packet-based signal structure which includes training and data symbols . we first derive the conditional mi for both sensing and communication under correlated channels by considering the training overhead and channel estimation error ( cee ) . then , we derive a lower bound for the cee and optimize the energy arrangement between the training and data signals to minimize the cee .
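the contrast drawn above between detection ( put all energy in the strongest mode ) and information extraction ( spread it ) comes down to water-filling . a minimal sketch of the mi-maximizing allocation over hypothetical mode gains , with the water level found by bisection :

import numpy as np

def waterfill(g, p_total):
    """p_k = max(0, mu - 1/g_k), with the water level mu set by bisection."""
    lo, hi = 0.0, p_total + 1.0 / g.min()
    for _ in range(100):
        mu = 0.5 * (lo + hi)
        p = np.maximum(0.0, mu - 1.0 / g)
        lo, hi = (mu, hi) if p.sum() < p_total else (lo, mu)
    return p

rng = np.random.default_rng(7)
g = rng.exponential(1.0, 16)            # per-mode gain-to-noise ratios (hypothetical)
p = waterfill(g, p_total=10.0)

mi_wf = np.sum(np.log2(1 + p * g))                    # water-filling mi
mi_eq = np.sum(np.log2(1 + (10.0 / g.size) * g))      # equal-power mi
print("mutual information, water-filling vs equal power:", mi_wf, ">", mi_eq)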
based on the optimal energy arrangement , we provide optimal spatio-temporal power mask designs for three scenarios : maximizing mi for communication only , maximizing mi for sensing only , and maximizing a weighted sum mi for both communication and sensing . extensive simulations validate the effectiveness of the proposed designs . story_separator_special_tag to improve the effectiveness of limited spectral resources , an adaptive orthogonal frequency division multiplexing integrated radar and communications waveform design method is proposed . first , the conditional mutual information ( mi ) between the random target impulse response and the received signal , and the data information rate ( dir ) of the frequency-selective fading channel , are formulated . then , with the constraint on the total power , the optimization problem , which simultaneously considers the conditional mi for radar and the dir for communications , is devised , and the analytic solution is derived . with low transmit power , the designed integrated waveform outperforms the fixed waveform ( i.e . equal power allocation ) . finally , several simulation experiments are provided to verify the effectiveness of the designed waveform . story_separator_special_tag for a noncoherent multiple-input multiple-output ( mimo ) radar system , the maximum likelihood estimator ( mle ) of the target location and velocity , as well as the corresponding cramér-rao lower bound ( crlb ) matrix , is derived . mimo radar's potential in localization and tracking performance is demonstrated by adopting simple gaussian pulse waveforms . due to the short duration of the gaussian pulses , very high localization performance can be achieved , even when the matched filter ignores the doppler effect by matching to zero doppler shift . this leads to significantly reduced complexities for the matched filter and the mle . further , two interactive signal processing and tracking algorithms , based on the kalman filter and the particle filter ( pf ) , respectively , are proposed for noncoherent mimo radar target tracking . for a system with a large number of transmit/receive elements and a high signal-to-noise ratio ( snr ) value , the kalman filter ( kf ) is a good choice ; while for a system with a small number of elements and a low snr value , the pf outperforms the kf significantly . story_separator_special_tag the performance of a mobile multiple-input multiple-output orthogonal-frequency-division multiplexing ( mimo-ofdm ) system depends on the ability of the system to accurately account for the effects of the frequency-selective time-varying channel at every symbol time and at every frequency subcarrier . typically , pilot symbols are strategically placed at various times over various subcarriers in order to calculate sample channel estimates , and then these estimates are interpolated or extrapolated forward to provide channel estimates where no pilot data was transmitted . performance is highly dependent on the distribution of the pilots with respect to the coherence time and coherence bandwidth of the channel , and on the chosen channel parameterization . in this paper , a vector formulation of the cramér-rao bound ( crb ) for biased estimators and for functions of parameters is used to derive a lower bound on the channel estimation and prediction error of such a system .
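the crb machinery above can be sanity-checked in the simplest case : least-squares estimation of one flat channel tap from pilots , where the bound is sigma^2 / sum |pilot|^2 and the ls estimator attains it . a monte carlo sketch ( pilot count and noise level hypothetical ) :

import numpy as np

rng = np.random.default_rng(8)
n_pilots, sigma2, trials = 8, 0.1, 20000
x = np.exp(1j * 2 * np.pi * rng.random(n_pilots))      # unit-power pilots
h = (rng.standard_normal() + 1j * rng.standard_normal()) / np.sqrt(2)

err = np.empty(trials)
for i in range(trials):
    noise = np.sqrt(sigma2 / 2) * (rng.standard_normal(n_pilots)
                                   + 1j * rng.standard_normal(n_pilots))
    y = h * x + noise
    h_hat = np.vdot(x, y) / np.vdot(x, x)              # least-squares estimate
    err[i] = np.abs(h_hat - h) ** 2

print("empirical mse        :", err.mean())            # ~0.0125
print("crb sigma^2/sum|x|^2 :", sigma2 / n_pilots)     # = 0.0125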
numerical calculations using the bound demonstrate the benefits of multiple antennas for channel estimation and prediction and illustrate the impact of modeling errors on estimation performance when using channel models based on calibrated arrays . story_separator_special_tag to improve the availability of limited spectral resources and construct a cost-efficient platform simultaneously performing both radar and communication functions , the orthogonal frequency division multiplexing ( ofdm ) integrated radar and communication system ( ircs ) , which is referred to as ofdm-ircs , is provided . for a frequency-sensitive target and a frequency-selective fading channel , the ofdm-ircs is able to improve radar and communication performance by efficiently employing the limited transmit power . to this end , we propose two optimal waveform design methods that meet different users' demands . first , the cramér-rao bounds ( crbs ) for estimating range , velocity and target scattering coefficients with the integrated ofdm waveform are derived , and the channel capacity in communication is formulated . then , the multiobjective optimization problem is devised , and the adaptive weighted-optimal and pareto-optimal waveform design approaches are proposed to simultaneously improve the estimation accuracy of range and velocity in radar and the channel capacity in communication . finally , several numerical examples are presented to demonstrate the effectiveness of the proposed design methods . story_separator_special_tag joint communication and radar sensing ( jcas ) integrates communication and radar/radio sensing into one system , sharing one transmitted signal . in this paper , we investigate jcas waveform optimization underlying communication signals , where a base station detects radar targets and communicates with mobile users simultaneously . we first develop novel individual waveform optimization problems for communications and sensing . for communications , we propose a novel lower bound on the sum rate by integrating multi-user interference and effective channel gain into one metric that simplifies the optimization of the sum rate . for radar sensing , we consider optimizing one of two metrics , the mutual information or the cramér-rao bound . then , we formulate the jcas problem by optimizing the communication metric under different constraints on the radar metric , and we obtain both closed-form solutions and iterative solutions to the non-convex jcas optimization problem . numerical results are provided and verify the proposed optimization solutions . story_separator_special_tag joint communication and radio sensing ( jcas ) in millimeter-wave ( mmwave ) systems requires the use of a steerable beam . for analog antenna arrays , a single beam is typically used , which limits the sensing area to the direction of the communication . multibeam technology can overcome this limitation by separately generating packet-level direction-varying sensing subbeams and fixed communication subbeams and then combining them coherently . in this paper , we investigate the optimal combination of the two subbeams and the quantization of the beamforming ( bf ) vector that generates the combined beam . when either the full channel matrix or only the angle of departure ( aod ) of the dominating line-of-sight ( los ) path is known at the transmitter , we derive closed-form expressions for the optimal combining coefficients that maximize the received communication signal power .
for the quantization of the bf vector , we focus on the two-phase-shifter array where two phase shifters are used to represent each bf weight . we propose novel joint quantization methods by combining the codebooks of the two phase shifters . the mean squared quantization error is derived for various quantization methods . extensive story_separator_special_tag multibeam technology enables the use of two or more subbeams for joint communication and radio sensing , to meet different requirements of beamwidth and pointing directions . generating and optimizing multibeam subject to the requirements is critical and challenging , particularly for systems using analog arrays . this paper develops optimal solutions to a range of multibeam design problems , where both communication and sensing are considered . we first study the optimal combination of two pre-generated subbeams , and their beamforming vectors , using a combining phase coefficient . closed-form optimal solutions are derived to the constrained optimization problems , where the received signal powers for communication and the beamforming waveforms are alternatively used as the objective and constraint functions . we also develop global optimization methods which directly find optimal solutions for a single beamforming vector . by converting the original intractable complex np-hard global optimization problems to real quadratically constrained quadratic programs , near-optimal solutions are obtained using semidefinite relaxation techniques . extensive simulations validate the effectiveness of the proposed constrained multibeam generation and optimization methods .
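to make the multibeam combination above concrete , the sketch below brute-forces the combining phase coefficient that maximizes the received communication power for a fixed power split between two pre-generated subbeams . the array size , channel and subbeam weights are randomly generated stand-ins ; the papers derive this coefficient in closed form , which the grid search here merely approximates .

import numpy as np

rng = np.random.default_rng(0)
n = 16                                              # analog array size (assumed)
h = (rng.normal(size=n) + 1j * rng.normal(size=n)) / np.sqrt(2)   # channel
wc = np.exp(1j * rng.uniform(0, 2 * np.pi, n)) / np.sqrt(n)       # comm subbeam
ws = np.exp(1j * rng.uniform(0, 2 * np.pi, n)) / np.sqrt(n)       # sensing subbeam
rho = 0.7                                           # power split toward communication

best_phi, best_pow = 0.0, -1.0
for phi in np.linspace(0, 2 * np.pi, 360, endpoint=False):
    w = np.sqrt(rho) * wc + np.sqrt(1 - rho) * np.exp(1j * phi) * ws
    w /= np.linalg.norm(w)                          # unit-power combined beam
    pw = abs(h.conj() @ w) ** 2                     # received communication power
    if pw > best_pow:
        best_phi, best_pow = phi, pw
print(f"best combining phase {best_phi:.2f} rad, power {best_pow:.3f}")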
some readers may recall the classic episode of the tv series yes minister in which a new hospital is running beautifully , with 500 administrators but no patients : `` first of all you have to sort out the smooth running of the hospital . having patients around would be no help at all . '' this month 's issue of the journal opens with a fascinating article by an architect , dr jan golembiewski , in which he argues that our current facilities for psychiatric care are designed around staff efficiency , routines and protocols , and do not provide person-centred care with a focus on recovery . he describes the `` honeypot syndrome '' , with patients gathered around the nursing station , waiting like supplicants for a nurse to attend to their request . in contrast , some overseas hospitals do not have a nurses ' station , instead providing nurses with small , open workstations scattered through the day rooms . many factors other than patient welfare influence the design of new buildings : the budget , the size of the footprint available , occupational health and safety considerations , risk reduction story_separator_special_tag recent developments in mobile , cloud , and graphics processing technologies have enabled mobile cloud gaming , a gaming model where players use mobile devices to play graphics-intensive games that run remotely on cloud servers . this delivery paradigm is called gaming as a service ( gaas ) . gaas is used to stream computer games across the internet . it gives rise to various technical , legal , and ethical issues . in this paper , we present the current state of the art in gaas along with open issues and research challenges . story_separator_special_tag the widespread availability of and demand for multimedia capable devices and multimedia content have fueled the need for high-speed wireless connectivity beyond the capabilities of existing commercial standards . while fiber optic data transfer links can provide multigigabit-per-second data rates , cost and deployment are often prohibitive in many applications . wireless links , by contrast , can provide a cost-effective fiber alternative to interconnect the outlying areas beyond the reach of the fiber rollout . with this in mind , the ever-increasing demand for multi-gigabit wireless applications , fiber segment replacement , mobile backhauling and aggregation , and covering the last mile have posed enormous challenges for next generation wireless technologies . in particular , the unbalanced temporal and geographical variations of spectrum usage along with the rapid proliferation of bandwidth-hungry mobile applications , such as video streaming with high definition television ( hdtv ) and ultra-high definition video ( uhdv ) , have inspired millimeter-wave ( mmwave ) communications as a promising technology to alleviate the pressure of scarce spectrum resources for fifth generation ( 5g ) mobile broadband . story_separator_special_tag in the september 2014 issue of ieee communications magazine , the first part of this feature topic included five articles that covered the fundamentals of mmwave communications with topics ranging from propagation to coverage , presenting a holistic view of research challenges and opportunities in the emerging area of mmwave radio systems and 5g mobile broadband . the use of this technology is expected to surge in the next few years and to transform the internet industry in the next 10 years .
this part of the feature topic will address in more detail many technical and application issues related to beamforming , device-to-device communications , heterogeneous networks , and multimedia transmission . story_separator_special_tag the millimeter wave ( mmwave ) frequency band spanning from 30 to 300 ghz constitutes a substantial portion of the unused frequency spectrum , which is an important resource for future wireless communication systems in order to fulfill the escalating capacity demand . given the improvements in integrated components and enhanced power efficiency at high frequencies , wireless systems can operate in the mmwave frequency band . in this paper , we present a survey of the mmwave propagation characteristics , channel modeling , and design guidelines , such as system and antenna design considerations for mmwave , including the link budget of the network , which are essential for mmwave communication systems . we commence by introducing the main channel propagation characteristics of mmwaves followed by channel modeling and design guidelines . then , we report on the main measurement and modeling campaigns conducted in order to understand the mmwave band 's properties and present the associated channel models . we survey the different channel models focusing on the channel models available for the 28 , 38 , 60 , and 73 ghz frequency bands . finally , we present the mmwave channel model and its challenges in the story_separator_special_tag millimeter wave ( mmwave ) communication has raised increasing attention from both academia and industry due to its exceptional advantages . compared with existing wireless communication techniques , such as wifi and 4g , mmwave communications adopt much higher carrier frequencies and thus come with advantages including huge bandwidth , narrow beams , high transmission quality , and strong detection ability . these advantages can well address difficult situations caused by recently popular applications using wireless technologies . for example , mmwave communications can significantly alleviate the skyrocketing traffic demand of wireless communication from video streaming . meanwhile , mmwave communications have several natural disadvantages , e.g. , severe signal attenuation , easy blockage by obstacles , and small coverage , due to their short wavelengths . hence , the major challenge is how to overcome these shortcomings while fully utilizing the advantages . in this paper , we present a taxonomy based on the layered model and give an extensive review on mmwave communications . specifically , we divide existing efforts into four categories that investigate the physical layer , medium access control ( mac ) layer , network layer , and cross-layer optimization , respectively . first story_separator_special_tag almost all cellular mobile communications , including first generation analog systems , second generation digital systems , third generation wcdma , and fourth generation ofdma systems , use the ultra high frequency ( uhf ) band of radio spectrum with frequencies in the range of 300 mhz - 3 ghz . this band of spectrum is becoming increasingly crowded due to the spectacular growth in mobile data and other related services . the portion of the rf spectrum above 3 ghz has largely been unexploited for commercial mobile applications . in this paper , we reason why the wireless community should start looking at the 3 - 300 ghz spectrum for mobile broadband applications .
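a quick way to see why the 3 - 300 ghz bands discussed above were long left unexploited is the free-space path loss , fspl = 20 log10 ( 4 pi d f / c ) . the snippet below ( frequencies and distance chosen arbitrarily for illustration ) shows the extra loss that mmwave links must recover through antenna and beamforming gain :

import numpy as np

def fspl_db(d_m, f_hz):
    # free-space path loss in db: 20*log10(4*pi*d*f/c)
    return 20 * np.log10(4 * np.pi * d_m * f_hz / 3e8)

for f in (2.4e9, 28e9, 60e9):
    print(f"{f / 1e9:5.1f} ghz : {fspl_db(100.0, f):6.1f} db at 100 m")
# the ~28 db gap between 2.4 and 60 ghz is what array gain must make up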
we discuss propagation and device technology challenges associated with this band as well as its unique advantages such as spectrum availability and small component sizes for mobile applications . story_separator_special_tag in the millimeter-wave ( 30-300 ghz ) and terahertz ( 0.1-10 thz ) frequency bands , the high spreading loss and molecular absorption often limit the signal transmission distance and coverage range . in this article , four directions to tackle the crucial problem of distance limitation are investigated , namely , a distance-aware physical layer design , ultra-massive mimo communication , reflectarrays , and intelligent surfaces . additionally , the potential joint design of these solutions is proposed to combine the benefits and further extend the communication distance . qualitative and quantitative evaluations are provided to illustrate the benefits of the proposed solutions . the feasibility of mmwave and thz band communications up to 100 m in both line-of-sight and non-line-of-sight areas is demonstrated . story_separator_special_tag bioinformatics research depends on high-quality databases to provide accurate results . in silico experiments , correctly performed , may prospect novel discoveries and elucidate pathways for biological experiments through large-scale data analysis . however , most biological databases contain mistakes , such as incorrectly classified data or incomplete information . also , sometimes , data mining algorithms cannot handle these errors , leading to serious problems for the in silico analysis . manual curation of data extracted from literature is a possible solution for this problem . systematic literature review ( slr ) , or systematic review , is a method to identify , evaluate and summarize the state-of-the-art of a specific theme . moreover , slr restricts collection to selected databases , which allows an analysis with lower bias than traditional reviews . the slr approaches have been widely used for decision-making in medical and environmental studies . however , other research areas , such as bioinformatics , do not have a specific step-by-step procedure to guide researchers undertaking an slr . in this study , we propose a guideline , called bisrl , to perform slr in bioinformatics . our procedures story_separator_special_tag in this paper , we present a real-time interacting with hand description system via millimeter-wave sensor ( riddle ) for human-computer interaction . firstly , we describe a new approach to developing a radar-based system . when hand motions are captured by the millimeter-wave radar sensor , unique range information can be observed in the spectrogram . compared to traditional hand gesture recognition systems based on optical sensors , the radar-based system avoids the influence of ambient light conditions . secondly , we employ deep neural networks combined with a connectionist temporal classification algorithm to recognize diverse hand gestures in real-time . in addition , we visualize the feature maps extracted from different layers to understand the deep neural networks . the deep neural networks are powerful in extracting hand gesture features as well as class boundaries through a training process . finally , we demonstrate that riddle is capable of detecting six hand gestures and achieving a high recognition accuracy of 96 % .
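systems like riddle above feed doppler spectrograms of hand motion to a neural network . the following sketch synthesizes a toy micro-doppler return and computes its spectrogram with a short-time fourier transform ; the sampling rate , doppler excursion and noise level are invented for the example , and scipy 's stft stands in for whatever front end the paper uses .

import numpy as np
from scipy.signal import stft

fs = 2000.0                                   # slow-time sampling rate (assumed)
t = np.arange(0, 2.0, 1 / fs)
fd = 80.0 * np.sin(2 * np.pi * 1.0 * t)       # oscillating doppler of a toy hand
x = np.exp(2j * np.pi * np.cumsum(fd) / fs)   # unit-amplitude radar return
x += 0.1 * (np.random.randn(t.size) + 1j * np.random.randn(t.size))

f, frames, Z = stft(x, fs=fs, nperseg=256, noverlap=192, return_onesided=False)
spec_db = 20 * np.log10(np.abs(np.fft.fftshift(Z, axes=0)) + 1e-12)
print(spec_db.shape)   # (doppler bins x time frames): the input a cnn would ingest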
story_separator_special_tag while malicious attacks on electronic devices ( e-devices ) have become commonplace , the use of e-devices themselves for malicious attacks has increased ( e.g . , explosives and eavesdropping ) . modern e-devices ( e.g . , spy cameras , bugs or concealed weapons ) can be sealed in parcels/boxes , hidden under clothing or disguised with cardboard to conceal their identities ( hereafter referred to as hidden e-devices ) , which brings challenges in security screening . inspection equipment ( e.g . , x-ray machines ) is bulky and expensive . moreover , screening reliability still rests on human performance , and the throughput in security screening of passengers and luggage is very limited . to this end , we propose to develop a low-cost and practical hidden e-device recognition technique to enable efficient screening for threats of hidden electronic devices in daily life . first , we investigate and model the characteristics of nonlinear effects , a special passive response of electronic devices under millimeter-wave ( mmwave ) sensing . based on this theory and our preliminary experiments , we design and implement e-eye , an end-to-end portable hidden electronics recognition system . e-eye comprises a low-cost ( i.e. , story_separator_special_tag in this paper , a foreign object debris ( fod ) detection system using mm-wave fmcw radar is presented . characteristic system parameters and the tower allocation procedure are provided . land clutter signal modelling is summarized . in the land clutter modeling scenario , besides the 3-db beam of the antenna pattern , the whole pattern is also considered to obtain a more realistic approach . fod detection steps are summarized . the masking effect of high rcs objects located outside the runway is discussed . elimination of this effect by windowing is demonstrated by simulations . story_separator_special_tag the detection and location of objects concealed under clothing is a very challenging task that has crucial applications in security . in this domain , passive millimeter-wave images ( pmmwis ) can be used . however , the quality of the acquired images , and the unknown position , shape , and size of hidden objects render this task difficult . in this paper , we propose a machine learning-based solution to this detection/localization problem . our method outperforms currently used approaches . the effect of non-stationary noise on different classification algorithms is analyzed and discussed , and a detailed experimental comparative study of classification techniques is presented using a new and comprehensive pmmwi database . the low computational testing cost of this solution allows for its use in real-time applications . a new approach to the detection of hidden objects in pmmwi based on machine learning . a comparative experimental study between two types of features and six classifiers . a new database of passive millimeter wave images ( pmmwi ) . story_separator_special_tag this paper describes algorithms for detection and evaluation of trajectory parameters of small , spatially moving objects by a passive optical and radio thermal vision system . the algorithms are based on spatial and temporal image processing . during spatial processing , a system of equations representing a sufficient condition for the conjugation of direction vectors to objects in stereo pairs is solved . during temporal processing , the direction vectors are assigned to objects across a sequence of observation periods .
the results of the algorithms ' theoretical and experimental examination are given , showing the advantage of the joint application of the two approaches . story_separator_special_tag this paper addresses concealed object detection by passive millimeter wave ( mmw ) imaging . passive mmw imaging penetrates clothing to capture metal and man-made objects . in this paper , we propose a multi-level expectation maximization ( em ) method to separate the concealed object from the body area . the performance is evaluated by the average probability of error . we will show that the proposed em process segments the object area more accurately than the conventional em method . story_separator_special_tag in this paper , a high-performance detection algorithm for concealed forbidden objects on the human body is presented based on deep neural networks ( dnn ) and the complementary advantages of passive millimeter wave imagery ( pmmwi ) and visible imagery ( vi ) . with its good penetration capability , pmmwi can effectively reveal suspected forbidden objects concealed on the human body without the ionizing radiation hazard of conventional x-ray methods . however , due to its currently limited imaging capability , the resolution of pmmwi is still unsatisfactory and prone to false alarms . therefore , exploiting their complementary strengths , vi is employed to resolve such confusions . in this way , massive image samples of pmmwi and vi are simultaneously acquired and manually annotated as the necessary training datasets to carry out deep learning on dnn models so as to achieve high-performance human body profile segmentation on both pmmwi and vi . then , high-precision region registration of human body profiles is implemented between pmmwi and vi to localize and confirm high-confidence suspected targets and remove false alarm regions as well . according to the principle of synthetic integration and global optimization , a story_separator_special_tag as a result of its relatively short wavelength coupled with relatively high penetration of many materials , millimeter-wave imaging provides a powerful tool for the detection of concealed articles . by using a passive approach such as that implemented here , it is possible to image ( detect ) concealed weapons and articles or look through certain types of walls , all without generating any form of radiation that might raise health concerns . in this paper we will show images from our current first generation unit and discuss the technology and performance of the second generation state-of-the-art unit currently being built . story_separator_special_tag this paper describes a portable passive millimeter-wave sensor designed for remote detection of both metallic and non-metallic objects hidden on a human body under cloth . the sensor is based on direct detection and analysis of the energy emitted by a human body . the algorithm of detection estimates unimodality features of traces recorded in the process of a manual scan . the sensor demonstrates a detection probability in the laboratory environment close to 100 % at a distance of up to 3 m for the tested samples of explosives and metal objects hidden under cloth . story_separator_special_tag millimeter wave ( mmw ) imaging is finding rapid adoption in security applications such as concealed object detection under clothing . a passive mmw imaging system can operate as a stand-off type sensor that scans people both indoors and outdoors .
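the multi-level em method described earlier in this passage separates object pixels from body pixels statistically . a minimal single-level analogue , assuming a two-component 1-d gaussian mixture over pixel intensities ( the image here is synthetic , not a real pmmw frame ) :

import numpy as np

def em_segment(image, iters=60):
    # two-component 1-d gaussian mixture fitted by em on pixel intensities
    x = image.ravel().astype(float)
    mu = np.percentile(x, [25.0, 75.0])
    var = np.array([x.var(), x.var()])
    w = np.array([0.5, 0.5])
    for _ in range(iters):
        pdf = w * np.exp(-(x[:, None] - mu) ** 2 / (2 * var)) / np.sqrt(2 * np.pi * var)
        r = pdf / pdf.sum(axis=1, keepdims=True)            # e-step
        nk = r.sum(axis=0)                                  # m-step
        mu = (r * x[:, None]).sum(axis=0) / nk
        var = (r * (x[:, None] - mu) ** 2).sum(axis=0) / nk + 1e-9
        w = nk / x.size
    return r.argmax(axis=1).reshape(image.shape)            # component label mask

img = np.random.normal(200.0, 5.0, (64, 64))                # warm body pixels
img[20:35, 25:40] = np.random.normal(170.0, 5.0, (15, 15))  # cooler hidden object
mask = em_segment(img)
print((mask == 0).sum())   # component 0 has the lower mean: roughly the object area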
however , the imaging system often suffers from the diffraction limit and the low signal level . therefore , suitable intelligent image processing algorithms would be required for automatic detection and recognition of the concealed objects . this paper proposes real-time outdoor concealed-object detection and recognition with a radiometric imaging system . the concealed object region is extracted by the multi-level segmentation . a novel approach is proposed to measure similarity between two binary images . principal component analysis ( pca ) regularizes the shape in terms of translation and rotation . a geometric-based feature vector is composed of shape descriptors , which achieves scale- and orientation-invariance and distortion tolerance . the class is decided by the minimum euclidean distance between normalized feature vectors . experiments confirm that the proposed methods provide fast and reliable recognition of the concealed object carried by a moving human subject . story_separator_special_tag this paper mainly deals with the problem of detecting and identifying targets at close range , the performance of which is affected by the radiometer 's parameters and the target 's characteristics . according to the relationship between the range equation of the passive millimeter wave ( pmmw ) and these parameters , we present a convenient statistical method based on pmmw image detection to solve the inherent problem using statistics of the radiometer parameters , which can be obtained from w-band radiometer experimental data . finally , we validate the method by simulation and experiment . the results show that the method is convenient for detecting and identifying targets at close range . story_separator_special_tag the detection of land mines and other ordnance on the battlefield has grown in importance with their increased use , not only for military personnel , but for civilians after hostilities have ceased . the need for new approaches and sensors to increase the speed and efficiency of methods to clear mines is an issue that must be addressed . a method to detect metal mines , on top of or buried under dry sand , is demonstrated using the passive detection of naturally occurring millimeter wave radiation ( at 44 ghz ) emanating from the scene . measurements will be shown that indicate the feasibility of detection of metal under at least 3 inches of dry sand . story_separator_special_tag in this study , we present passive millimeter wave ( pmmw ) radiometric images for several concealed object experiments . pmmw is an imaging technique that is realized by collecting the existing cosmic background radiation ( cbr ) from the target and surrounding environment based on their temperatures and electromagnetic wave reflectivity . pmmw imaging , a method for detecting , classifying and localizing concealed objects in cases where security is prioritized , is presented ; it employs no active radiation and thus poses no risk to human health . we present a detection algorithm based on the histogram of the raw data and applying an auto-segmentation routine to the data . story_separator_special_tag sun-tracking ( st ) microwave radiometry is a technique where the sun is used as a microwave signal source , and it is here rigorously summarized . the antenna noise temperature of a ground-based microwave radiometer is measured by alternately pointing toward the sun and off the sun while tracking it along its diurnal ecliptic .
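the detectability arguments running through these radiometry abstracts rest on the ideal total-power radiometer equation , delta-t = t_sys / sqrt ( b tau ) . the numbers below are hypothetical , but they show why scene contrasts of tens of kelvin ( e.g . a metal mine reflecting cold sky against warm sand ) sit far above the sensor noise floor :

import numpy as np

def delta_t_rms(t_sys_k, bandwidth_hz, tau_s):
    # ideal total-power radiometer equation: dT = t_sys / sqrt(b * tau)
    return t_sys_k / np.sqrt(bandwidth_hz * tau_s)

t_sys, b = 800.0, 2e9            # hypothetical system noise temp [k], bandwidth [hz]
for tau in (1e-3, 1e-2, 1e-1):   # integration time per pixel [s]
    print(f"tau = {tau:5.3f} s  ->  dT = {delta_t_rms(t_sys, b, tau):.3f} k")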
under clear sky , the brightness temperature of the sun disk emission at k and ka band , and in the unexplored millimeter-wave frequency region at v and w band , can be estimated by adopting different techniques . using a unique dataset collected during 2015 through a st multifrequency radiometer , the sun brightness temperature shows a decreasing behavior with frequency , with values from about 9000 k at k band down to about 6600 k at w band . in the presence of precipitating clouds the st technique can also provide an accurate estimate of the atmospheric extinction up to about 32 db at w band with the current radiometric system . parametric prediction models for retrieving all-weather atmospheric extinction from ground-based microwave radiometers are then tested and their accuracy evaluated . story_separator_special_tag detection of non-moving humans is important for physical security , search-and-rescue , homeland protection , and military applications , although it has proven difficult to achieve because traditional techniques have significant drawbacks . for instance , ir sensors do not detect well in warm daytime environments and radars cannot easily discriminate stationary humans . the authors have recently presented a solution by applying millimeter-wave radiometry to the detection of stationary humans in cluttered outdoor environments [ 1 , 2 ] . mmw radiometers operate in both day and night environments and are affected only by moderate to heavy rain . fog , smoke , or dust , which render ir sensors inoperable , are also transparent to mmw sensors . the radiometric detection system was designed to operate from a moving platform , making image recognition prohibitive due to the high computational expense of image processing in real-time with a constantly changing background . the presence-detection radiometer operates on one-dimensional sensor signals , resulting in lower computation time . story_separator_special_tag this article focuses on the establishment of a convolutional neural network model to achieve the detection of human concealment in millimeter wave images . the convolutional neural network is applied to the image data set for detection training , pictures are randomly selected for identification , and the target location is marked . this paper proves that convolutional neural networks can be applied as a feasible general image detection technology to different detection problems . story_separator_special_tag a multilayer feedforward neural network was developed for passive microwave relative humidity profile retrievals . retrievals for radiosonde-based simulated radiances at fifteen frequencies between 23.8 and 183.3 ghz yielded rms retrieval errors in relative humidity of 6-14 percent both over ocean and over land at pressure levels ranging from 131 mb to 1013 mb . these radio frequencies approximated those of the temperature- and humidity-sounding noaa amsu and mhs microwave spectrometers planned for u.s. operational low-earth-orbit meteorological satellites . these retrieval results were comparable or superior to those obtained earlier with an iterative combined statistical and physical retrieval scheme . story_separator_special_tag the paper presents the results of designing and testing a non-imaging mm-wave sensor aimed at remotely detecting contraband hidden on a human body under clothing . it operates in passive direct receiver mode in the range 80 - 100 ghz . the unimodality approximation has been suggested for use in the detection algorithm .
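as a loose , purely illustrative analogue of the multilayer feedforward retrieval described above , the sketch below regresses relative humidity from simulated brightness temperatures with a small mlp ; the synthetic data , layer size and scikit-learn estimator are all assumptions of this example and bear no relation to the paper 's radiosonde-based radiances .

import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)
n = 2000
tb = rng.normal(250.0, 15.0, size=(n, 15))     # fake brightness temperatures [k]
# fake 'truth': humidity at 5 levels, loosely tied to the first 5 channels
rh = np.clip(0.3 * tb[:, :5] - 40.0 + rng.normal(0, 3, (n, 5)), 0, 100)

model = MLPRegressor(hidden_layer_sizes=(32,), max_iter=3000, random_state=0)
model.fit(tb[:1500], rh[:1500])
rmse = np.sqrt(((model.predict(tb[1500:]) - rh[1500:]) ** 2).mean())
print(f"rms retrieval error on toy data: {rmse:.1f} % rh")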
the antenna system of the sensor is based on a horn antenna integrated with a focusing dielectric lens . the sensor has demonstrated a detection probability of up to 100 % in a laboratory environment for the tested samples , both plastic and metal , hidden under clothing at distances up to 3 m with a spatial resolution of about 5 cm . story_separator_special_tag a simple and fast single channel passive millimeter wave ( pmmw ) imaging system for public security checks is presented in this paper . it distinguishes itself from traditional ones by an innovative scanning mechanism . indoor experiments against a human body with or without concealed items in clothes show that imaging could be completed in 3 s with an angular resolution of about 0.7° . in addition , its field of view ( fov ) is adjustable according to the size of the actual target . story_separator_special_tag the unique fingerprint spectra of volatile organic compounds for breath analysis and toxic industrial chemicals make an mm-wave ( mmw ) /thz gas sensor very specific and sensitive . this paper reviews and updates results of our recent work on sensor systems for gas spectroscopy based on integrated transmitter ( tx ) and receiver ( rx ) , which are developed and fabricated in ihp 's 0.13 µm sige bicmos technology . in this paper , we present an mmw/thz spectroscopic system including a folded gas absorption cell of 1.9 m length between the tx and rx modules . we discuss the results and specifications of our sensor system based on integrated tx and rx . we demonstrate txs and rxs with integrated antennas for spectroscopy at 238 - 252 ghz and 494 - 500 ghz using integer-n phase-locked loops ( plls ) . we present a compact system by using fractional-n plls allowing frequency ramps for the tx and rx , and for the tx with superimposed frequency shift keying or reference frequency modulation . in another configuration , the voltage controlled oscillators of the tx and rx local story_separator_special_tag this paper describes the development of a system for indoor intrusion detection that takes advantage of interference between asynchronous millimetre-wave radars . the approach exploits the information embedded in the interference pattern observed in the doppler domain when two or more radars operate in a common environment and share the same frequency spectrum . by continuously monitoring the interference , it is possible to detect the corresponding energy variations . a sharp decrease in the interference energy is thus interpreted as an intrusion of an object or a person . within this approach the source of the interference can be identified taking advantage of the beam-forming of mimo radars . compared with the standard configuration , which exploits the reflection of radar signals , the proposed setup has the advantage of maximizing the energy available for intrusion detection and an increased capacity to penetrate obstacles and walls . when combined with the capacity of mobile robots to dynamically position the radars , this scheme permits the implementation of highly versatile intrusion detection solutions . story_separator_special_tag studies of biological and artificial membrane systems , such as niosomes , currently rely on the use of fluorescent tags , which can influence the system under investigation . for this reason , the development of label-free , non-invasive detection techniques is of great interest .
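the interference-based intrusion detector described above flags a sharp drop in the energy received from a cooperating radar . a minimal sketch of that decision rule ( the window length , threshold and synthetic energy trace are invented here ) :

import numpy as np

def detect_intrusion(energy, win=50, drop_db=3.0):
    # flag samples whose energy falls sharply below the running baseline
    e_db = 10 * np.log10(np.asarray(energy) + 1e-12)
    alerts = []
    for k in range(win, e_db.size):
        if np.median(e_db[k - win:k]) - e_db[k] > drop_db:
            alerts.append(k)
    return alerts

rng = np.random.default_rng(2)
e = np.abs(rng.normal(1.0, 0.05, 500))   # steady radar-to-radar interference energy
e[300:340] *= 0.3                        # a person crosses the shared beam
print(detect_intrusion(e)[:5])           # first alert indices, near sample 300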
we demonstrate an open-volume label-free millimeter-wave sensing platform based on a coplanar waveguide , developed for identification and characterization of niosome constituents . a design based on a λ/2-line resonator was used and on-wafer measurements of transmission and reflection parameters were performed up to 110 ghz . our sensor was able to clearly distinguish between common niosome constituents , the non-ionic surfactants tween 20 and span 80 , measuring a resonance shift of 3 ghz between them . the complex permittivities of the molecular compounds have been extracted . our results indicate insignificant frequency dependence in the investigated frequency range ( 3 ghz - 110 ghz ) . values of permittivity around 3.0 + 0.7i and 2.2 + 0.4i were obtained for tween 20 and span 80 , respectively . story_separator_special_tag silicon carbide ( sic ) is used as a sintering aid by adding it to the heated objects , or as a heating assistance by surrounding the heated object , in studies of microwave and/or millimeter-wave sintering of ceramics . therefore , it is important to know the electric behaviors , dielectric constant , loss tangent ( tan δ ) and so on , of sic at the microwave and/or millimeter-wave frequencies . the importance is not only for dense bodies but also for low-density bodies including the powder state . millimeter-wave dielectric measurement of -sic powder was performed in this study by using measuring fixtures , wave-guide and free-space fixtures , with a millimeter-wave network analyzer system . measured results of dielectric properties such as dielectric constant , loss tangent and so on at the millimeter-wave frequency will be shown . story_separator_special_tag corrosion pitting detection is a critical issue in the maintenance of aircraft . near-field microwave nondestructive techniques have been successfully used for detection of corrosion under paint . in this paper a comparison between several different millimeter wave probes is made for the detection and evaluation of corrosion precursor pitting under paint at ka-band and v-band . since the pittings investigated here are very small in size , spatial resolution and sensitivity of the probes are critical issues . it is shown that modified open-ended rectangular probes , namely tapered waveguide and dielectric slab-loaded waveguide probes , provide high resolution and sensitivity for the detection and evaluation of very small pittings under paint . story_separator_special_tag millimeter-wave imaging has emerged over the last several years as an effective method for screening people for non-metallic weapons , including explosives . millimeter-waves are effective for personnel screening , since the waves pass through common clothing materials and are reflected by the human body and any concealed objects . completely passive imaging systems have also been developed that rely on the natural thermal emission of millimeter-waves from the body and concealed objects . millimeter-waves are non-ionizing and are harmless to people at low or moderate power levels . active and passive imaging systems have been developed by several research groups , with several commercial imaging sensors becoming available recently . these systems provide images revealing concealed items , and as such , do not specifically identify detected materials . rather , they provide indications of unusual concealed items .
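for the resonator-based sensing platform at the top of this passage , the measurable quantity is simply where the transmission dip sits . the toy sketch below locates the dips of two synthetic lorentzian |s21| traces and reports their shift ; the resonance frequencies and q factor are placeholders , chosen only to reproduce a shift of about 3 ghz :

import numpy as np

f = np.linspace(3e9, 110e9, 4001)          # on-wafer sweep grid (assumed)

def lorentzian_dip(f0, depth_db=20.0, q=80.0):
    # toy |s21| trace of a transmission-line resonator loaded by an analyte
    return -depth_db / (1 + ((f - f0) / (f0 / q)) ** 2)

def resonance_hz(s21_db):
    return f[np.argmin(s21_db)]            # the loaded resonance is the dip minimum

shift = resonance_hz(lorentzian_dip(61e9)) - resonance_hz(lorentzian_dip(58e9))
print(f"resonance shift between the two analytes: {shift / 1e9:.2f} ghz")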
the design of practical , effective , high-speed ( real-time or near real-time ) imaging systems presents a number of scientific and engineering challenges , and this chapter will describe the current state-of-the-art in active and passive millimeter-wave imaging for personnel screening . numerous imaging results are shown to demonstrate the effectiveness of the techniques described . the authors have story_separator_special_tag this article details the development of a gesture recognition technique using a mm-wave radar sensor for in-car infotainment control . gesture recognition is becoming a more prominent form of human-computer interaction and can be used in the automotive industry to provide a safe and intuitive control interface that will limit driver distraction . we use a 60 ghz mm-wave radar sensor to detect precise features of fine motion . specific gesture features are extracted and used to build a machine learning engine that can perform real-time gesture recognition . this article discusses the user requirements and in-car environmental constraints that influenced design decisions . accuracy results of the technique are presented , and recommendations for further research and improvements are made . story_separator_special_tag radar sensors offer several advantages over optical sensors in gesture recognition for the remote control of electronic devices . in this paper , we investigate the feasibility of human gesture recognition using the spectra of radar measurement parameters . with the combination of radar theory and classification methods , we found that the frequencies of different gestures ' parameters could be utilized as features for gesture recognition . six kinds of periodic dynamic gestures are designed to avoid the complexity of defining and extracting the start and end of the dynamic gesture . in addition to the frequency ratio , we also extracted some features related to motion range and detection coherence to eliminate the interference caused by unintended gestures . the decision tree classifier designed on the basis of experimental phenomena can guarantee effective classification between different gestures , and in general , the correct recognition rate of each gesture is higher than 90 % . finally , we collected the position and doppler velocity information of the hand for classification by a w-band millimeter wave radar in the experiment and verified the usability of the proposed method . story_separator_special_tag it is very difficult for visually impaired people to perceive and avoid obstacles at a distance . to address this problem , a unified framework of multiple target detection , recognition , and fusion is proposed based on a sensor fusion system comprising a low-power millimeter wave ( mmw ) radar and an rgb-depth ( rgb-d ) sensor . in this paper , the mask r-cnn and the single shot multibox detector network are utilized to detect and recognize the objects from color images . the obstacles ' depth information is obtained from the depth images using the meanshift algorithm . the position and velocity information of multiple targets is detected by the mmw radar based on the principle of a frequency modulated continuous wave . the data fusion based on the particle filter obtains more accurate state estimation and richer information by fusing the detection results from the color images , depth images , and radar data compared with using only one sensor . the experimental results show that the data fusion enriches the detection results .
meanwhile , the effective detection range is expanded compared to using only the rgb-d sensor . moreover , the data fusion story_separator_special_tag this paper presents soli , a new , robust , high-resolution , low-power , miniature gesture sensing technology for human-computer interaction based on millimeter-wave radar . we describe a new approach to developing a radar-based sensor optimized for human-computer interaction , building the sensor architecture from the ground up with the inclusion of radar design principles , high temporal resolution gesture tracking , a hardware abstraction layer ( hal ) , a solid-state radar chip and system architecture , interaction models and gesture vocabularies , and gesture recognition . we demonstrate that soli can be used for robust gesture recognition and can track gestures with sub-millimeter accuracy , running at over 10,000 frames per second on embedded hardware . story_separator_special_tag the key to offering personalised services in smart spaces is knowing where a particular person is with a high degree of accuracy . visual tracking is one such solution , but concerns arise around the potential leakage of raw video information and many people are not comfortable accepting cameras in their homes or workplaces . we propose a human tracking and identification system ( mid ) based on millimeter wave radar which has a high tracking accuracy , without being visually compromising . unlike competing techniques based on wifi channel state information ( csi ) , it is capable of tracking and identifying multiple people simultaneously . using a low-cost , commercial , off-the-shelf radar , we first obtain sparse point clouds and form temporally associated trajectories . with the aid of a deep recurrent network , we identify individual users . we evaluate and demonstrate our system across a variety of scenarios , showing median position errors of 0.16 m and identification accuracy of 89 % for 12 people . story_separator_special_tag accurate human activity recognition ( har ) is the key to enabling emerging context-aware applications that require an understanding and identification of human behavior , e.g. , monitoring disabled or elderly people who live alone . traditionally , har has been implemented either through ambient sensors , e.g. , cameras , or through wearable devices , e.g. , a smartwatch , with an inertial measurement unit ( imu ) . the ambient sensing approach is typically more generalizable for different environments as this does not require every user to have a wearable device . however , utilizing a camera in privacy-sensitive areas such as a home may capture superfluous ambient information that a user may not feel comfortable sharing . radars have been proposed as an alternative modality for coarse-grained activity recognition that captures a minimal subset of the ambient information using micro-doppler spectrograms . however , training fine-grained , accurate activity classifiers is a challenge as low-cost millimeter-wave ( mmwave ) radar systems produce sparse and non-uniform point clouds . in this paper , we propose radhar , a framework that performs accurate har using sparse and non-uniform point clouds . radhar utilizes a sliding time window to accumulate story_separator_special_tag a real-time behavior detection system using millimeter wave radar is presented in this article . radar is used to sense the micro-doppler information of targets .
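trackers like mid above start by grouping each frame 's sparse point cloud into per-person clusters before forming trajectories . a common way to do this , and the assumption made in this sketch , is density-based clustering ; the point cloud and the eps / min_samples values below are synthetic stand-ins :

import numpy as np
from sklearn.cluster import DBSCAN

rng = np.random.default_rng(3)
# one frame of a sparse point cloud (x, y in metres): two people plus clutter
p1 = rng.normal([1.0, 3.0], 0.15, size=(30, 2))
p2 = rng.normal([-2.0, 5.0], 0.15, size=(25, 2))
clutter = rng.uniform([-4.0, 0.0], [4.0, 8.0], size=(10, 2))
cloud = np.vstack([p1, p2, clutter])

labels = DBSCAN(eps=0.5, min_samples=8).fit_predict(cloud)   # -1 marks noise
for k in sorted(set(labels) - {-1}):
    c = cloud[labels == k].mean(axis=0)
    print(f"cluster {k}: centroid {c.round(2)}")   # centroids feed the tracker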
a convolutional neural network ( cnn ) is further implemented in the detection and classification of the human motion behaviors using this information . both the convolution layers and the architecture of cnns are presented . the analysis of loss and accuracy of the training results is also shown . the experimental results indicate precise human motion behavior detection using the proposed system . story_separator_special_tag biometrics offer a personal and convenient way of keeping our identities and our data secure . here , we introduce a method of using mm-wave sensors to identify various individuals . in our system prototype , the compact radar sensor has two transmit antennas and four receive ones . the transmitter ( s ) send a sequence of signals which are reflected and scattered from a nearby part of the body of a user ( a hand in our demo case ) . different signal processing algorithms are applied to the received signals in order to create a rich feature dataset . in our demo system , the resulting dataset is classified using a random forest machine learning model , which is shown to facilitate identifying a group of individuals with high accuracy . this technology has promising implications in terms of using mm-wave radars as an independent or an auxiliary tool for biometric authentication . story_separator_special_tag in this paper , we use a low-cost low-power mm-wave frequency modulated continuous wave ( fmcw ) radar for in-vehicle occupant detection . we propose an algorithm using a capon filter for the joint range-azimuth estimation . then , the minimum necessary features are extracted to train machine learning classifiers with reasonable computational complexity while achieving high accuracy . in addition , experiments were carried out in a minivan to detect the occupancy of each row using a support vector machine ( svm ) . finally , our proposed system achieved 97.8 % accuracy on average in identifying the defined scenarios . moreover , the system can correctly identify whether the vehicle is occupied or not with 100 % accuracy . story_separator_special_tag with the potential to increase road safety and provide economic benefits , intelligent vehicles have elicited a significant amount of interest from both academia and industry . a robust and reliable vehicle detection and tracking system is one of the key modules for intelligent vehicles to perceive the surrounding environment . the millimeter-wave radar and the monocular camera are two vehicular sensors commonly used for vehicle detection and tracking . despite their advantages , the drawbacks of these two sensors make them insufficient when used separately . thus , the fusion of these two sensors is considered as an efficient way to address the challenge . this paper presents a collaborative fusion approach to achieve the optimal balance between vehicle detection accuracy and computational efficiency . the proposed vehicle detection and tracking design is extensively evaluated with a real-world data set collected by the developed intelligent vehicle . experimental results show that the proposed system can detect on-road vehicles with a 92.36 % detection rate and a 0 % false alarm rate , and it only takes ten frames ( 0.16 s ) for the detection and tracking of each vehicle . this system is installed on kuafu-ii intelligent vehicle for the story_separator_special_tag with the continuous development of the intelligent transportation industry , target tracking has become an important research direction .
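as a baseline for the radar target tracking discussed here and in the following abstract , a constant-velocity kalman filter is the usual starting point ; the sage-husa variant described below additionally estimates the process and measurement noise statistics online , whereas this sketch fixes them . all model matrices and noise levels here are illustrative choices :

import numpy as np

dt = 0.05                                          # radar frame interval [s]
F = np.array([[1, 0, dt, 0], [0, 1, 0, dt],
              [0, 0, 1, 0], [0, 0, 0, 1]], float)  # constant-velocity motion model
H = np.array([[1, 0, 0, 0], [0, 1, 0, 0]], float)  # radar reports position only
Q = 0.01 * np.eye(4)                               # process noise, hand-tuned here
R = 0.25 * np.eye(2)                               # measurement noise, hand-tuned

def kf_step(x, P, z):
    x, P = F @ x, F @ P @ F.T + Q                  # predict
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)   # kalman gain
    return x + K @ (z - H @ x), (np.eye(4) - K @ H) @ P   # update

rng = np.random.default_rng(4)
x, P = np.zeros(4), np.eye(4)
for k in range(100):                               # noisy fixes on a moving target
    z = np.array([0.5 * k * dt, 10.0 - 0.2 * k * dt]) + rng.normal(0, 0.5, 2)
    x, P = kf_step(x, P, z)
print(x.round(2))                                  # position and velocity estimate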
under normal circumstances , due to the complex road environment and changing backgrounds , millimeter wave radar is subject to more interference when detecting targets . in addition to the variety of targets on the road and the different scattering intensity of multiple parts , the interference of flicker noise on the radar must be considered . the combination of these noises can affect the accuracy of radar measurement and even make the radar lose the target for a short time . the paper constructs a target tracking model based on the adaptive sage-husa kalman filter algorithm to track radar signals . the algorithm can not only estimate the real-time state of the system , but also estimate and modify the parameters of the system and the statistical parameters of the noise , so that the system model is closer to the current real state of the system , thus improving the accuracy of the target tracking . even if radar loses its target in a short time , the target tracking model can estimate the approximate value of the true value story_separator_special_tag to address potential gaps noted in patient monitoring in the hospital , a novel patient behavior detection system using mmwave radar and a deep convolutional neural network ( cnn ) , which supports the simultaneous recognition of multiple patients ' behaviors in real-time , is proposed . in this study , we use an mmwave radar to track multiple patients and detect the scattering point cloud of each one . for each patient , the doppler pattern of the point cloud over a time period is collected as the behavior signature . a three-layer cnn model is created to classify the behavior for each patient . the tracking and point cloud detection algorithm was also implemented on an mmwave radar hardware platform with an embedded graphics processing unit ( gpu ) board to collect doppler patterns and run the cnn model . a training dataset of six types of behavior was collected , over a long duration , to train the model using the adam optimizer with the objective of minimizing the cross-entropy loss function . lastly , the system was tested for real-time operation and obtained a very good inference accuracy when predicting each patient 's behavior in a two-patient scenario . story_separator_special_tag technology that can be used to unobtrusively detect and monitor the presence of human subjects from a distance and through barriers can be a powerful tool for meeting new security challenges , including asymmetric battlefield threats abroad and defense infrastructure needs back home . our team is developing mobile remote sensing technology for battle-space awareness and warfighter protection , based on microwave and millimeter-wave doppler radar motion sensing devices that detect human presence . this technology will help overcome a shortfall of current see-through-the-wall ( sttw ) systems , which is the poor detection of stationary personnel . by detecting the minute doppler shifts induced by a subject 's cardiopulmonary-related chest motion , the technology will allow users to detect personnel that are completely stationary more effectively . this personnel detection technique can also have an extremely low probability of intercept since the signals used can be those from everyday communications .
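the phase demodulation and rate extraction mentioned in this thread of abstracts can be sketched end to end : arctangent-demodulate the complex baseband signal , unwrap , and read the respiration and heartbeat peaks off the spectrum . the carrier frequency , displacement amplitudes and noise level below are invented for illustration :

import numpy as np

fs = 100.0                                  # slow-time sampling rate [hz]
t = np.arange(0, 30.0, 1 / fs)
lam = 3e8 / 24e9                            # wavelength of an assumed 24 ghz radar
d = 4e-3 * np.sin(2 * np.pi * 0.25 * t) \
    + 0.3e-3 * np.sin(2 * np.pi * 1.2 * t)  # chest: respiration + heartbeat [m]
iq = np.exp(1j * 4 * np.pi * d / lam)       # doppler phase of the echo
iq += 0.05 * (np.random.randn(t.size) + 1j * np.random.randn(t.size))

disp = np.unwrap(np.angle(iq)) * lam / (4 * np.pi)   # arctangent demodulation
spec = np.abs(np.fft.rfft(disp - disp.mean()))
freqs = np.fft.rfftfreq(disp.size, 1 / fs)

def peak(lo, hi):                            # strongest spectral line in a band
    m = (freqs > lo) & (freqs < hi)
    return freqs[m][np.argmax(spec[m])]

print(f"respiration ~{peak(0.1, 0.5):.2f} hz, heart ~{peak(0.8, 2.0):.2f} hz")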
the software and hardware developments and challenges for personnel detection and count at a distance will be discussed , including a 2.4 ghz quadrature radar single-chip silicon cmos implementation , a low-power double side-band ka-band transmission radar , and phase demodulation and heart rate extraction algorithms story_separator_special_tag gesture recognition is gaining attention as an attractive feature for the development of ubiquitous , context-aware , iot applications . use of radars as a primary or secondary system is tempting , as they can operate in darkness , in high light intensity environments , and at longer distances than many competitor systems . starting from this observation , we present a generic , low-cost , mm-wave radar-based gesture recognition system . among the potential benefits of mm-wave radars are a high spatial resolution due to the small wavelength , the availability of multiple antennas in a small area and the low interference due to the natural attenuation of mm-wave radiation . we experimentally evaluate our cots solution considering eight different gestures and using two low-complexity classification algorithms : the unsupervised self-organizing map ( som ) and the supervised learning vector quantization ( lvq ) . to test robustness , we consider gestures performed by a human hand and a human body , at short and long distance . from our preliminary evaluations , we observe that lvq and som correctly detect 75 % and 60 % of all gestures , respectively , from the raw , unprocessed data . the detection rate story_separator_special_tag electromagnetic radars have shown potential for remote sensing of biosignals in a more comfortable and easier way than wearable and contact devices . while there is an increasing interest in using radars for health monitoring , their performance has not been tested and reported either in practical scenarios or with acceptably low errors . therefore , we use a frequency modulated continuous wave ( fmcw ) radar operating at 77 ghz in a bedroom environment to extract the respiration and heart rates of a patient lying down on a bed . indeed , the proposed signal processing contains advanced phase unwrapping manipulation , which is unique . in addition , the results are compared with a reliable reference sensor . our results show that the correlations between the reference sensor and the radar estimates are 94 % and 80 % for breathing and heart rates , respectively . story_separator_special_tag an array radar needs sufficient array length to realize high angular resolution . however , increasing the number of physical elements is costly . recently , minimum redundancy multiple-input multiple-output ( mr-mimo ) was proposed to increase the number of virtual elements efficiently . further improvements by using the khatri-rao ( kr ) transformation in the case of incoherent waves were also proposed . in this paper , we show experimental results on indoor human tracking with millimeter-wave mr-mimo radar to verify the improvements of resolution and tracking performance in comparison with the results of a conventional mimo radar . story_separator_special_tag a compact millimeter-wave ( mmw ) sensor has been developed for remote monitoring of human vital signs ( heart and respiration rate ) . the low-power homodyne transceiver operating at 94 ghz was assembled by using solid-state active and passive block-type components and can be battery operated .
a description of the mmw system front end and the back-end acquisition hardware and software is presented . representative test case results on the application of various signal processing and data analysis algorithms developed to extract faint physiological signals of interest in the presence of strong background interference are provided . although the laboratory experiments so far have been limited to standoff distances of up to 15 m , the upper limit of the detection range is expected to be higher . in comparison with its microwave counterparts , the mmw system described here provides higher directivity , increased sensitivity , and longer detection range for measuring subtle mechanical displacements associated with heart and respiration functions . the system may be adapted for use in a wide range of standoff sensing applications including patient health care , structural health monitoring , nondestructive testing , biometric sensing , and remote vibrometry in general story_separator_special_tag in this paper , we will describe the development of a 228 ghz heterodyne radar system as a vital signs sensing monitor that can remotely measure respiration and heart rates from distances of 1 to 50 meters . we will discuss the design of the radar system along with several studies of its performance . the system includes the 228 ghz transmitter and heterodyne receiver that are optically coupled to the same 6 inch optical mirror that is used to illuminate the subject under study . intermediate frequency ( if ) signal processing allows the system to track the phase of the reflected signal through i and q detection and phase unwrapping . the system monitors the displacement in real time , allowing various studies of its performance to be made . we will review its successes by comparing the measured rates with a wireless health monitor and also describe the challenges of the system . story_separator_special_tag millimeter wave radar is currently widely used to detect obstacles in front of vehicles . when using millimeter wave radar to detect obstacles on the road , the radar has more noise interference due to the changeable road environment and complex background . combined with the complexity and variety of road targets , the random changes of scattering intensity and relative phase of different parts cause distortion of the echo phase , resulting in flicker noise that affects the accuracy of measurement and can even lead to the loss of targets . in this case , there are some shortcomings in tracking the target using the ordinary kalman filter algorithm . in this paper , a sage-husa adaptive kalman filtering algorithm is designed for the road environment to track radar targets and improve the accuracy of target tracking . then , the radar and machine vision information fusion method is used to intuitively judge the filtering effect and determine whether the radar loses the target . finally , the true value of the target position is approximated by the filtered value when the radar loses its target . the experimental results show story_separator_special_tag in this paper , we describe non-contact monitoring of vital signs of multiple people using a frequency modulated continuous wave ( fmcw ) sensor . we use the inherent range-gating ability of the fmcw waveforms and multiple receive channels to separate objects in the range-azimuth plane .
we subsequently utilize the fact that body surface movements due to physiological motions modulate the phase of the received radar signal and can be further processed to extract the breathing and heart-rate . range-gating and beamforming techniques allow the signal of interest to be isolated from surrounding clutter ; however , several challenges such as random body movements need to be addressed before radar-based non-contact measurements can be deployed in real-world settings . story_separator_special_tag gesture recognition is one of the most intuitive forms of human-computer interface . gesture sensing can replace interfaces such as touch and clicks needed for interacting with a device . in this article , we present a short-range compact 60-ghz mm-wave radar sensor that is sensitive to fine dynamic hand motions . a series of range-doppler images are extracted and processed using a long recurrent all-convolution neural network for real-time dynamic hand gesture recognition . furthermore , we make use of novel data augmentation techniques for the proposed gesture recognition system to generalize for multiple users and operating environments . the results show accurate classification performance requiring a very low processor footprint , facilitating implementation in embedded platforms with real-time user feedback . story_separator_special_tag from advanced driver assistance systems to conditional automation systems , monitoring of driver state is vital for predicting the driver 's capacity to supervise or maneuver the vehicle in cases of unexpected road events and to facilitate better in-car services . the paper presents a technique that exploits millimeter-wave doppler radar for 3d head tracking . identifying the bistatic and monostatic geometry for antennas to detect rotational vs. translational movements , the authors propose the biscattering angle for computing a distinctive feature set to isolate dynamic movements via class memberships . through data reduction and joint time frequency analysis , movement boundaries are marked for creation of a simplified , uncorrelated , and highly separable feature set . the authors report movement-prediction accuracy of 92 % . this non-invasive and simplified head tracking has the potential to enhance monitoring of driver state in autonomous vehicles and aid intelligent car assistants in guaranteeing seamless and safe journeys . story_separator_special_tag in medical and personal health systems for vital sign monitoring , contact-free remote detection is favourable compared to wired solutions . for example , contact-free methods help to avoid the severe pain involved when a patient with burned skin has to be examined . continuous wave ( cw ) radar systems have proven to be good candidates for this purpose . in this paper a monolithic millimetre-wave integrated circuit ( mmic ) based cw radar system operating in the w-band ( 75 - 110 ghz ) at 96 ghz is presented . the mmic components are custom-built and make use of 100 nm metamorphic high electron mobility transistors ( mhemts ) . the radar system employs a frequency multiplier-by-twelve mmic and a receiver mmic , both packaged in split-block modules . they allow for the determination of the respiration and heartbeat frequency of a human target sitting at a distance of 1 m . the analysis of the measured data is carried out in the time and frequency domains and each approach is shown to have its advantages and drawbacks .
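the range-doppler images that gesture networks like the one above consume come from two ffts over an fmcw frame : one across fast time ( range ) , one across slow time ( doppler ) . a self-contained toy version , with a single synthetic target at assumed normalized frequencies :

import numpy as np

n_fast, n_slow = 256, 128          # samples per chirp, chirps per frame (assumed)
rng = np.random.default_rng(5)
frame = 0.05 * (rng.standard_normal((n_slow, n_fast))
                + 1j * rng.standard_normal((n_slow, n_fast)))   # noise floor
fb, fd = 0.12, 0.02                # normalized beat (range) and doppler frequencies
frame += np.exp(2j * np.pi * (fb * np.arange(n_fast)[None, :]
                              + fd * np.arange(n_slow)[:, None]))

rd = np.fft.fft(frame, axis=1)                          # range fft over fast time
rd = np.fft.fftshift(np.fft.fft(rd, axis=0), axes=0)    # doppler fft over slow time
dop_bin, rng_bin = np.unravel_index(np.abs(rd).argmax(), rd.shape)
print(f"target at range bin {rng_bin}, doppler bin {dop_bin - n_slow // 2}")
# stacking such maps over time yields the tensor a gesture cnn would ingest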
story_separator_special_tag in this demo , we introduce a hands-free human activity recognition framework leveraging millimeter-wave ( mmwave ) sensors . compared to other existing approaches , our network protects user privacy and can remodel a human skeleton performing the activity . moreover , we show that our network can be realized in a single architecture and further optimized to have higher accuracy than networks that can only produce a single result ( i.e . only pose estimation or only activity recognition ) . to demonstrate the practicality and robustness of our model , we will show it in different settings ( i.e . facing different backgrounds ) and effectively show the accuracy of our network . story_separator_special_tag research in moving object tracking has shown significant progress towards application in recent years . however , single sensors suffer from illumination variations for the vision sensor and low directional resolution for the radar . in this paper , we propose a moving object tracking method that fuses radar and vision data . first , false radar objects are filtered out . second , an adaptive background subtraction method is used to detect candidate regions in images . finally , moving objects are determined when the effective radar objects are in the candidate regions . story_separator_special_tag the paper describes millimeter wave ( mmw ) sensors designed for detecting both metallic and nonmetallic objects placed on a human body and hidden under clothes . the sensor is based on the synchronized detection principle and on estimating the power of the back-scattered signal from hidden objects . a time-gating algorithm combined with a predetermined threshold level has been implemented in order to reach a detection probability of ~ 90 % or more for metal and plastic hidden objects at distances up to 3 m . story_separator_special_tag identifying cardiac abnormalities has mainly relied on the observation of electrocardiogram ( ecg ) signals . to collect ecg signals , it is often necessary to place ecg electrodes on the body for critical analysis of the ecg data transmitted by such electrodes . by analyzing this collected data , it is then possible , for example , to examine the intervals between the heartbeats ( or r-r intervals ) to measure the heart rate variability ( hrv ) . however , this process requires a multilayered setup for both hardware and software which can be costly and time consuming . to overcome these challenges , we introduce in this paper a real-time millimeter-wave radar-based , non-contact vital sign monitoring system that is capable of detecting heart rate variability without the use of any contact heart rate sensors or wires . through this system , it is then possible to detect heart rate abnormalities by analyzing the collected data . throughout the paper , we present results for three individuals and compare our approach to heart rate monitoring devices and an apple watch . story_separator_special_tag we present a method to estimate distance using millimeter-wave radar with an accuracy of around a few micrometers . with the radar operating using frequency-modulated continuous waves ( fmcw ) , we show how both the frequency and phase of the radar beat signal are used to precisely determine the distance between the radar sensor and the object from which the radar signal is reflected . the method was tested using a single-chip fmcw radar operating at 77 ghz employing a chirp bandwidth of 4 ghz .
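as background for the fmcw ranging abstract above , the beat frequency yields a coarse range with resolution c / ( 2b ) , while the beat phase resolves displacements far below that , at the cost of a half-wavelength ambiguity . a minimal sketch of that two-scale relation , with all radar parameters assumed for illustration :

```python
import numpy as np

c = 3e8
B, T, f0 = 4e9, 40e-6, 77e9     # chirp bandwidth (as above), duration, carrier
slope = B / T

R_true = 1.2000153              # metres, unknown to the estimator
f_beat = 2 * slope * R_true / c              # beat frequency of the dechirped signal
R_coarse = c * f_beat / (2 * slope)          # coarse range from frequency

# the beat-signal phase 4*pi*R/lambda resolves micrometre-scale changes,
# but only modulo half a wavelength (~1.95 mm here)
lam = c / f0
phase = np.mod(4 * np.pi * R_true / lam, 2 * np.pi)
dR = phase * lam / (4 * np.pi)  # position within the half-wavelength cell
print("range resolution c/(2B) : %.4f m" % (c / (2 * B)))
print("coarse range            : %.7f m" % R_coarse)
print("phase-derived offset    : %.7f m (modulo %.5f m)" % (dR, lam / 2))
```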
we compared the estimated relative distance against the true relative distance at a few different locations of the object . the estimated distance has a variance of less than 10 µm , which approaches the cramer-rao lower bound . we will also present a novel solution to minimize estimation bias due to reflections from neighboring objects as well as a solution to the undesired phase ambiguity problem that is associated with phase recovery during distance estimation . story_separator_special_tag this article contains experimental results of object detection in a typical urban environment using millimeter-wave radar and a comparative analysis of those results with data obtained from a simulation model . story_separator_special_tag all-weather sensors are necessary to realize automated driving of level 3 or more , one of which is millimeter-wave radar . however , the radar has some issues such as low spatial resolution and noisy signals . in order to solve them , stochastic processing such as deep learning techniques is necessary . in this research , classification and tracking of target objects were attempted by applying lstm ( long short-term memory ) networks , which can handle time-series data . reflection signals from a 76 ghz radar for randomly moving objects ( cars , bicycles and pedestrians ) in a parking lot were measured , and the objects were tracked and classified by the lstm-based classifier . three types of input feature sets for the four-class and three-class classifiers with two types of lstm were evaluated and compared . the best combination achieved a high accuracy of 98.67 % . furthermore , the classifier was evaluated on a dataset measured on an actual public road , and no misclassification occurred for crossing pedestrians at a signalized intersection . this indicates the high generalization ability of the classifier . story_separator_special_tag 79 ghz millimeter-wave radar has many advantages over image sensors , infrared sensors and microwave-band radar . in this paper , we present a bathroom monitoring system using a 79 ghz band sensor in order to prevent accidents caused by heat shock and the like , where a k-means clustering method is proposed to monitor the existence and motion of a bathing person . the measurement was conducted for various scenarios ( normal bathing , falling , drowning , etc . ) and the robustness against waves on the surface of the bathtub hot water is discussed . as a result , the estimated rate of the dangerous state is shown to be more than 90 % for various movements of a bathing person in a bathroom . story_separator_special_tag a millimeter wave obstacle detection system for helicopters is installed to assess the feasibility of detecting power lines from a safe distance , using 77 and 94 ghz center frequencies separately . a two-wire power line system is illuminated using two different antennas with 24 and 27 dbi gains , respectively . an external w-band ( 92 - 96 ghz ) power amplifier is also used to measure the power line response for increasing the maximum detection distance . it is shown that the use of the 77 ghz center frequency , as well as higher antenna gain and output power , is better in terms of increasing the detection range of the millimeter wave radar system . the feasibility measurements are also validated using the theoretical radar formula . a collision avoidance and warning system to detect power lines using a compact millimeter wave radar module is also discussed for future study .
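the power line abstract above validates its detection ranges with the theoretical radar formula ; a small sketch of that check , computing received snr as a function of range from the radar equation . all numbers below are placeholders , not the paper 's actual link budget :

```python
import numpy as np

# radar range equation: p_r = p_t * g^2 * lambda^2 * sigma / ((4*pi)^3 * r^4)
def received_snr_db(p_t_w, gain_dbi, freq_hz, rcs_m2, r_m, noise_w):
    lam = 3e8 / freq_hz
    g = 10 ** (gain_dbi / 10.0)
    p_r = p_t_w * g**2 * lam**2 * rcs_m2 / ((4 * np.pi) ** 3 * r_m**4)
    return 10 * np.log10(p_r / noise_w)

# illustrative comparison of the two antenna gains mentioned above
for gain in (24.0, 27.0):
    snr = received_snr_db(p_t_w=0.05, gain_dbi=gain, freq_hz=77e9,
                          rcs_m2=0.01, r_m=50.0, noise_w=1e-13)
    print("gain %.0f dbi -> snr %.1f db at 50 m" % (gain, snr))
# every 3 db of extra antenna gain adds 6 db of two-way snr, which by the
# r^4 law stretches the maximum detection range by 10**(6/40) ~ 1.41x
```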
story_separator_special_tag radar micro-doppler signatures can be utilized for security applications like detection and assessment of human activity at airports , power plants , etc . the micro-doppler signature reflects the movement of various body parts . using a 77 ghz radar we have obtained human micro-doppler signatures of one or two persons , with different ways of moving , different movement directions , and with or without carried objects . we have analyzed the micro-doppler signatures for these cases and observe general properties of the signatures . we further suggest properties to use when designing detectors and classifiers of human targets . story_separator_special_tag falls are one of the main causes of bodily injury among seniors . traditional fall detection methods are mainly achieved by wearable and non-wearable techniques , which may cause skin discomfort or invasion of privacy for users . in this paper , we propose an automatic fall detection method with the assistance of the mmwave radar signal to solve the aforementioned issues . the radar devices are capable of recording the reflection from objects in both the spatial and temporal domains , which can be used to depict the activities of users with the support of a recurrent neural network ( rnn ) with long short-term memory ( lstm ) units . first , we employ the radar low-dimension embedding ( rlde ) algorithm to preprocess the range-angle reflection heatmap sequence converted from the raw radar signal , reducing the redundancy in the spatial domain . then , the processed sequence is split into frames that are fed into the lstm units one by one . eventually , the output from the last lstm unit is fed into a softmax layer for classifying different activities . to validate the effectiveness of our proposed method , we construct a radar dataset with the assistance of story_separator_special_tag radar technology has great potential for use in security systems on college campuses , with privacy advantages over its visual counterparts . we experimented with the texas instruments iwr1642 radar sensor to evaluate the feasibility of using radar systems for security monitoring . we introduce an end-to-end neural architecture which is capable of taking radar data inputs in real time , identifying human versus nonhuman targets , and classifying various human behavioral motions . the model runs in real time , is robust to noise and displays state-of-the-art results for the problem . story_separator_special_tag radar micro-doppler signatures ( mds ) , which show how different parts of the target move , can be utilized for security and safety applications like detection and assessment of human activity at airports , nuclear power plants , etc . we have evaluated an mds classification method on measured data at 77 ghz . the important part of the method is the feature extraction , which is based on selecting the strongest parts of a cadence-velocity diagram ( cvd ) , which expresses how the curves in the mds repeat . through our classification of mdss of human gaits , we also study how mdss of more general target types and activities can be distinguished . we have analyzed and improved the method . the method is sound with good classification results but needs further evaluation and improvement . story_separator_special_tag the ability to identify human movements can serve as an important tool in many different applications such as surveillance , military combat situations , search and rescue operations and patient monitoring in hospitals .
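a minimal pytorch sketch of the kind of sequence classifier described in the fall detection abstract above : each range-angle heatmap frame is flattened , the sequence is passed through an lstm , and the last output is classified via a softmax layer . layer sizes and the class count are invented for illustration , and the rlde preprocessing step is not reproduced here .

```python
import torch
import torch.nn as nn

class HeatmapLSTMClassifier(nn.Module):
    """sequence of range-angle heatmap frames -> activity class (sketch)."""
    def __init__(self, frame_h=32, frame_w=32, hidden=128, n_classes=5):
        super().__init__()
        self.lstm = nn.LSTM(input_size=frame_h * frame_w,
                            hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, frames):             # frames: (batch, time, h, w)
        b, t, h, w = frames.shape
        seq = frames.reshape(b, t, h * w)  # flatten each frame
        out, _ = self.lstm(seq)            # out: (batch, time, hidden)
        logits = self.head(out[:, -1])     # last lstm output -> class scores
        return logits                      # softmax is applied inside the loss

model = HeatmapLSTMClassifier()
dummy = torch.randn(4, 20, 32, 32)         # 4 sequences of 20 frames
loss = nn.CrossEntropyLoss()(model(dummy), torch.tensor([0, 1, 2, 3]))
loss.backward()
```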
this information can provide soldiers , security personnel and search and rescue workers with critical knowledge that can be used to potentially save lives and/or avoid dangerous situations . most research involving human activity recognition employs the short-time fourier transform ( stft ) as a method of analysing human micro-doppler signatures . however , the stft has time-frequency resolution limitations , and fourier transform-based methods are not well-suited for use with non-stationary and non-linear signals . the authors ' approach uses the empirical mode decomposition to produce a unique feature vector from the human micro-doppler signals , following which a support vector machine is used to classify human motions . this study presents simulations of simple human motions , which are subsequently validated using experimental data obtained from both an s-band radar and a w-band millimetre wave ( mm-wave ) radar . very good classification accuracies are obtained at distances of up to 90 m between the human and the radar . story_separator_special_tag millimeter-wave ( mm-wave ) technology is emerging as a de facto enabler for next-generation high-rate communications . we propose that a unified mm-wave system for combined communication and robust sensing will turbocharge the capabilities of application domains from in-home digital health to new possibilities for building analytics . story_separator_special_tag ever-increasing energy consumption has propelled the need for a robust occupancy sensor . occupancy sensors can be used to control lighting , heating , ventilation , and air conditioning ( hvac ) in smart homes , as well as other presence-related loads in commercial , office , and public spaces . the evolution of ubiquitous sensing technologies , such as frequency modulated continuous wave radar , has led to the development of reliable occupancy sensors that can facilitate energy savings by being responsive to human presence and regulate energy load by intelligently adapting to their environment . we , in this article , present a short-range 60 ghz compact radar sensor that can enable detection and counting of people in a space by monitoring people 's vital signs , minute motions , and major bodily motions . we present the radar-based occupancy sensing solution and experimentally validate the performance of the proposed system . story_separator_special_tag non-contact vital sign detection using 60-ghz radar offers various advantages such as higher sensitivity and smaller antennas compared to lower-frequency systems ; however , a respiration amplitude comparable to the wavelength causes strong non-linear phase modulation , and the relatively small heartbeat amplitude results in detection difficulty . in this paper , theoretical analysis and simulation of 60-ghz detection are provided to address these issues . both shallow and deep breathing are tested in the experiments , and a detection technique monitoring both the fundamental and second harmonic of respiration is proposed . the phenomena explained in this work can be applied to many millimeter-wave doppler radar applications where the target displacement is comparable to or larger than the wavelength , to ensure robust detection . story_separator_special_tag weak target detection is one of the key problems facing foreign object debris ( fod ) surveillance radar at airport runways .
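since several of the abstracts above build classifiers on stft-based micro-doppler spectrograms , here is a minimal scipy sketch that produces such a spectrogram from a simulated return containing a steady torso line plus a periodic limb modulation , and then forms a cadence-velocity diagram from it . the radar and motion parameters are invented for illustration .

```python
import numpy as np
from scipy.signal import spectrogram

fs = 2000.0                         # slow-time sampling rate (assumed)
t = np.arange(0, 4, 1 / fs)
# torso at a steady 200 hz doppler line, limb swing modulating +/-150 hz at 2 hz
inst_freq = 200 + 150 * np.sin(2 * np.pi * 2.0 * t)
phase = 2 * np.pi * np.cumsum(inst_freq) / fs
sig = np.exp(1j * phase) + 0.1 * (np.random.randn(len(t))
                                  + 1j * np.random.randn(len(t)))

f, tt, sxx = spectrogram(sig, fs=fs, nperseg=256, noverlap=192,
                         return_onesided=False)   # complex input -> two-sided
# sxx is the micro-doppler spectrogram; the cadence-velocity diagram (cvd)
# mentioned above is an fft of each doppler bin along the time axis:
cvd = np.abs(np.fft.fft(sxx, axis=1))
print(sxx.shape, cvd.shape)
```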
a novel fod detection algorithm based on higher-order statistics features and a support vector domain description ( svdd ) classifier for 77 ghz millimeter wave ( mm-wave ) radar is proposed in this paper . clutter map constant false alarm rate ( cfar ) detection is first applied to the measured data to suppress the heavy background clutter , whereby the fod returns , accompanied by some false alarms , are distinguished from the background clutter . then higher-order statistics features are extracted to transform the radar returns into a feature domain where fod and false alarms are more distinguishable . finally , the one-class svdd classifier is utilized to accomplish the classification of fod and false alarms . experimental results based on measured data show that the proposed method can not only successfully detect fod but also correctly classify fod and false alarms . story_separator_special_tag millimeter wave ( mm-wave ) radar systems are widely utilized for detecting foreign object debris ( fod ) on airport runways due to their high resolution and low power . in this paper , a novel hierarchical fod detection method is proposed based on eigenvalue spectrum feature extraction and a minimax probability machine ( mpm ) for a 77 ghz mm-wave radar system . the clutter map constant false alarm rate ( cfar ) detection technology is first utilized to categorize radar echoes into two classes , i.e. , background clutter and fod returns ( including the false alarm returns ) . then eigenvalue spectrum features are extracted to transform the fod returns and false alarm returns into a feature domain where they are more distinguishable . finally , the mpm classifier is utilized to categorize the fod and false alarms into different kinds so as to reduce the false alarm rate . experimental results based on measured data show that the proposed method can achieve good detection performance . story_separator_special_tag the paper is a joint work between the leat ( france ) and the enri ( japan ) in the framework of a sakura project supported by the jsps and the french ministry of foreign affairs . the purpose is the study of a fod ( foreign object debris ) detection system on airport runways . an fm-cw mm-wave radar working between 76.25 and 76.75 ghz is used together with a high-directivity printed reflectarray . measurement results show detection capabilities of a -20 dbsm cylinder up to 35 m , which is 10 m less than the faa recommendations . antenna improvements are discussed for reaching the requirements and system performance , as well as the use of calibration objects . story_separator_special_tag this paper describes a compact broadband ( 73-80 ghz ) mm-wave front-end used for fod detection applications . the design philosophy of our system is to have several low-profile , low-cost mm-wave sensors placed along the runway . tests were conducted on the small airport of aix les milles ( south of france ) . high sensitivity and simultaneous object detection capabilities were shown . even very small objects like nuts were seen . an extension of the actual detection range is needed in order to go from 110 m ( in the most favourable case ) to 500 m . story_separator_special_tag safe , precise landing on planetary bodies requires knowledge of altitude and velocity , and may require active detection and avoidance of hazardous terrain . radar offers a superior solution to both problems due to its ability to operate at any time of day , through dust and engine plumes , and its ability to detect velocity coherently .
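both fod abstracts above start from a cfar detector ; the sketch below implements the simplest cell-averaging variant on a one-dimensional power profile . the papers use clutter-map cfar , which averages over scans rather than neighboring range cells , so this is an illustrative stand-in rather than their exact algorithm .

```python
import numpy as np

def ca_cfar(power, guard=2, train=8, pfa=1e-4):
    """cell-averaging cfar over a 1-d power profile; returns detection mask."""
    n = 2 * train                                  # training cells per test cell
    alpha = n * (pfa ** (-1.0 / n) - 1.0)          # scaling for the desired pfa
    det = np.zeros(len(power), dtype=bool)
    for i in range(train + guard, len(power) - train - guard):
        lead = power[i - train - guard : i - guard]
        lag = power[i + guard + 1 : i + guard + train + 1]
        noise = (lead.sum() + lag.sum()) / n       # local noise estimate
        det[i] = power[i] > alpha * noise
    return det

rng = np.random.default_rng(0)
profile = rng.exponential(1.0, 512)                # rayleigh clutter power
profile[200] += 40.0                               # one strong fod-like return
print("detections at bins :", np.flatnonzero(ca_cfar(profile)))
```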
while previous efforts have focused on providing near-term solutions to the safe landing problem , we are designing radar velocimeters and radar imagers for missions beyond the next decade . in this paper we identify the fundamental issues within each approach , and arrive at strawman sensor designs at a center frequency at or around 160 ghz ( g-band ) . we find that a g-band radar velocimeter design is capable of sub-10 cm/s accuracy , and a g-band imager is capable of sub-0.5 degree resolution over a 28 degree field of view . from those designs , we arrive at the key technology requirements for the development of power and low noise amplifiers , signal distribution methods , and antenna arrays that enable the construction of these next generation sensors . story_separator_special_tag the feasibility of a radar instrument working at 95 ghz placed on the iss ( international space station ) to detect very small debris has been investigated and analyzed in this paper . first of all , the debris population around the iss orbit has been studied by analyzing the debris flux and by determining the preliminary design and mission parameters for the warden instrument , for instance the pointing angle of the antenna reflector . a technology survey has also been performed to identify the state of the art in the millimeter-wave frequency band , with particular reference to the transmitter , the front-end , the master oscillator , and the a/d converters and dsps ( digital signal processors ) suitable for space . the proposed solution for the iss on-board experiment was basically composed of two segments : the on-orbit segment , that is , the payload , and the ground segment . the on-orbit segment deals with a radar sensor working at 95 ghz , which represents a good trade-off between satisfying the limited power consumption available on the iss expa ( express pallet adapter ) and having the significant range and detection performance necessary for the success story_separator_special_tag robust indoor ego-motion estimation has attracted significant interest in recent decades due to the fast-growing demand for location-based services in indoor environments . among various solutions , frequency-modulated continuous-wave ( fmcw ) radar sensors in the millimeter-wave ( mmwave ) spectrum are gaining more prominence due to their intrinsic advantages such as penetration capability and high accuracy . single-chip low-cost mmwave radar as an emerging technology provides an alternative and complementary solution for robust ego-motion estimation , making it feasible in resource-constrained platforms thanks to low power consumption and easy system integration . in this paper , we introduce milli-rio , an mmwave radar-based solution making use of a single-chip low-cost radar and an inertial measurement unit sensor to estimate the six-degrees-of-freedom ego-motion of a moving radar . detailed quantitative and qualitative evaluations prove that the proposed method achieves precisions on the order of a few centimeters for indoor localization tasks . story_separator_special_tag target classification based on power intensity models has been used to post-process the measured data captured by on-vehicle mmwave radar sensors . by grouping the point targets with compensated ego motion , moving vehicles and pedestrians are identified .
the measured intensity , inclusive of all practical environmental effects , has been used for calibration , and the relation between intensity and range is found based on a few scenarios for demonstration purposes . as a reference , the model has been tested on collected data ; the target capture rate is 65 % , while the classification accuracy reaches 90 % . further improvement can be expected when a tracking model is included . story_separator_special_tag robots need to use their end-effectors not only to grasp and manipulate objects but also to understand the environment surrounding them . object identification is of paramount importance in robotics applications , as it facilitates autonomous object handling , sorting , and quality inspection . in this paper , we present a new hyper-adaptive robot hand that is capable of discriminating between different everyday objects , as well as model objects with the same external geometry but varying material , density , or volume , with a single grasp . this work leverages all the benefits of simple , adaptive grasping mechanisms ( robustness , simplicity , low weight , adaptability ) , a random forests classifier , tactile modules based on barometric sensors , and the radar technology offered by the google soli sensor . unlike prior work , the method does not rely on object exploration , object release or re-grasping , and works for a wide variety of everyday objects . the feature space used consists of the google soli readings , the motor positions and the contact forces measured at different time instances of the grasping process . the whole approach is model-free and the hand is controlled story_separator_special_tag in this work , the authors present results for classification of different classes of targets ( car , single and multiple people , bicycle ) using automotive radar data and different neural networks . a fast implementation of radar algorithms for detection , tracking , and micro-doppler extraction is proposed in conjunction with the automotive radar transceiver tef810x and microcontroller unit sr32r274 manufactured by nxp semiconductors . three different types of neural networks are considered , namely a classic convolutional network , a residual network , and a combination of convolutional and recurrent networks , for different classification problems across the four classes of targets recorded . considerable accuracy ( close to 100 % in some cases ) and low latency of the radar pre-processing prior to classification ( 0.55 s to produce a 0.5 s long spectrogram ) are demonstrated in this study , and possible shortcomings and outstanding issues are discussed . story_separator_special_tag millimeter-wave ( mmw ) radars are being increasingly integrated in commercial vehicles to support new advanced driver assistance systems ( adas ) for their ability to provide high-accuracy location , velocity , and angle estimates of objects , largely independent of environmental conditions . such radar sensors not only perform basic functions such as detection and ranging/angular localization , but also provide critical inputs for environmental perception via object recognition and classification . to explore radar-based adas applications , we have assembled a lab-scale frequency modulated continuous wave ( fmcw ) radar test-bed ( https://depts.washington.edu/funlab/research ) based on texas instruments ' ( ti ) automotive chipset family . in this work , we describe the test-bed components and provide a summary of fmcw radar operational principles .
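as a companion to the fmcw operational principles mentioned above , the standard processing chain applies a range fft across each dechirped chirp and a doppler fft across chirps . a compact numpy sketch on a simulated beat signal , with every radar parameter assumed for illustration :

```python
import numpy as np

c = 3e8
f0, B, Tc = 77e9, 1e9, 50e-6           # carrier, sweep bandwidth, chirp duration
n_samp, n_chirp, fs = 256, 64, 5.12e6  # samples/chirp, chirps/frame, adc rate
slope = B / Tc

# simulated dechirped (beat) signal for one target at 20 m moving at 5 m/s
R, v = 20.0, 5.0
n = np.arange(n_samp) / fs                 # fast time within a chirp
m = np.arange(n_chirp)[:, None] * Tc       # slow time across chirps
f_beat = 2 * slope * R / c                 # range-dependent beat frequency
f_dopp = 2 * v * f0 / c                    # doppler frequency
beat = np.exp(1j * 2 * np.pi * (f_beat * n[None, :] + f_dopp * m))

# range fft over fast time, doppler fft over slow time -> range-doppler map
rd = np.fft.fftshift(np.fft.fft2(beat), axes=0)
d_bin, r_bin = np.unravel_index(np.argmax(np.abs(rd)), rd.shape)
rng_axis = np.arange(n_samp) * fs / n_samp * c / (2 * slope)
vel_axis = (np.arange(n_chirp) - n_chirp // 2) / (n_chirp * Tc) * c / (2 * f0)
print("peak at range %.1f m, velocity %.1f m/s" % (rng_axis[r_bin], vel_axis[d_bin]))
```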
to date , we have created a large raw radar dataset for various objects under controlled scenarios . thereafter , we apply some radar imaging algorithms to the collected dataset , and present some preliminary results that validate its capabilities in terms of object recognition . story_separator_special_tag among the manifold wireless technologies recently adopted for indoor localization and tracking , radars based on phased-array transceivers at 60 ghz are gaining momentum . the main advantages of this technology are high accuracy , a good ability to track multiple targets with a low computation burden , and preservation of privacy . despite the growing commercial success of low-cost radar platforms , accurate studies to evaluate their tracking performance are not frequent in the literature , the main reasons being the commercial policies that prevent direct access to the processed data and the difficult calibration of indoor positioning systems under dynamic conditions . this paper aims to fill this gap by providing an extensive and scientifically sound performance analysis of one of these sensors ( i.e. , the system-on-chip ( soc ) ti iwr6843 ) and exposing the benefits and limitations of 60-ghz mm-wave sensors for people localization and tracking . multiple experimental results show that the average standard positioning uncertainty is about 30 cm under dynamic conditions . our study also reveals the critical impact of three parameters , which are not properly documented by the manufacturer . localization accuracy and robustness are also significantly affected by the risk of story_separator_special_tag in this paper , a new radar-camera fusion system is presented . the fusion system takes into consideration the error bounds of the two different coordinate systems from the heterogeneous sensors , and further a new fusion-extended kalman filter is utilized to adapt to the heterogeneous sensors . real-world application considerations such as asynchronous sensors , multi-target tracking and association are also studied and illustrated in this paper . experimental results demonstrated that the proposed fusion system can realize a range accuracy of 0.29 m with an angular accuracy of 0.013 rad in real time . therefore , the proposed fusion system is effective , reliable and computationally efficient for real-time kinematic fusion applications . story_separator_special_tag high-resolution millimeter-wave radar that operates in the 79 ghz band is expected to achieve a significant increase in the distance resolution of radar systems because of the availability of a wide frequency bandwidth of 4 ghz , as compared with the 0.5 ghz of the existing 77 ghz-band millimeter-wave radar . for this reason , it has the potential to distinguish between a vehicle and a human , which was conventionally difficult , and recognize their movements . therefore it raises expectations for use as a surrounding monitoring radar in driving safety support and automatic driving . as one of fujitsu ten 's efforts regarding sensing technologies for driving safety support and automatic driving , it has been developing 79 ghz-band high-resolution millimeter-wave radar . this paper presents the specifications of a radar for application to systems that assist in safe driving and automatic driving , and the results of testing a prototype for the wider bandwidth that is required to accomplish the technology 's purpose .
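the radar-camera abstract above fuses asynchronous heterogeneous measurements with an extended kalman filter ; below is a stripped-down linear sketch of that idea for a single target in 2d , where the radar contributes a range-accurate measurement and the camera an angle-accurate one . the constant-velocity model and all noise levels are illustrative assumptions , not the paper 's filter .

```python
import numpy as np

dt = 0.05
F = np.array([[1, 0, dt, 0], [0, 1, 0, dt],
              [0, 0, 1, 0], [0, 0, 0, 1]], float)   # constant-velocity model
H = np.array([[1, 0, 0, 0], [0, 1, 0, 0]], float)   # both sensors observe x, y
Q = 0.01 * np.eye(4)
R_radar = np.diag([0.05, 0.40])    # radar: tight range, loose cross-range
R_cam = np.diag([0.40, 0.05])      # camera: the opposite

def kf_step(x, P, z, R):
    # predict with the motion model
    x = F @ x
    P = F @ P @ F.T + Q
    # update with whichever sensor delivered a measurement
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (z - H @ x)
    P = (np.eye(4) - K @ H) @ P
    return x, P

x, P = np.zeros(4), np.eye(4)
for k in range(100):               # alternate asynchronous sensor updates
    z = np.array([2.0 + 0.02 * k, 1.0]) + 0.1 * np.random.randn(2)
    x, P = kf_step(x, P, z, R_radar if k % 2 == 0 else R_cam)
print("fused state :", np.round(x, 2))
```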
this radar increases the ability to detect a pedestrian in the surroundings of a vehicle , which was difficult to do with the existing 77 ghz-band radar . furthermore , this paper also describes how the newly developed story_separator_special_tag this paper describes a millimeter-wave sensor that is able to detect pedestrians , thereby reducing the likelihood of human road injuries or fatalities . the sensor consists of a transmit/receive channel module , operating in the millimeter-wave range ( w-band ) using frequency-modulated continuous-wave mode . laboratory prototypes of the sensor have been designed and tested in a real-life environment . an analysis of system performance and the experiments conducted has indicated high-resolution detection ability for both adults and children at distances of up to 100-150 m . story_separator_special_tag the need to improve the efficiency , safety and security of airports becomes more demanding every day . today , different radar-based systems are available for these purposes : for advanced surface movement guidance and control systems , for foreign object debris detection , for bird strike prevention and for intrusion detection . millimeter-wave radars have , mainly , the capability to provide all these functions thanks to their high resolution , high renewal rate and high sensitivity to small objects . in this paper the performance of an airport w-band radar network is evaluated ( with real and simulated data ) in the presence of small objects and/or humans . in particular , a radar raw-level data fusion is proposed and evaluated to improve the system detection capability in the case of foreign objects and humans . story_separator_special_tag this paper describes a millimeter-wave ( mm-wave ) radar system that has been used to range humans concealed in light foliage at 30 meters and range exposed humans at distances up to 213 meters . human micro-doppler is also detected through light foliage at 30 meters and up to 90 meters when no foliage is present . this is done by utilizing a composite signal consisting of two waveforms : a wide-band noise waveform and a single tone . these waveforms are summed together and transmitted simultaneously . matched filtering of the received and transmitted noise signals is performed to range targets with high resolution , while the received single-tone signal is used for doppler analysis . the doppler measurements are used to distinguish between different human movements using characteristic micro-doppler signals . using hardware and software filters allows for simultaneous processing of both the noise and doppler waveforms . our measurements establish the mm-wave system 's ability to range humans up to 213 meters and distinguish between different human movements at 90 meters . the radar system was also tested through light foliage . in this paper , we present results on human target ranging and doppler characterization . story_separator_special_tag in this paper , mm-pose , a novel approach to detect and track human skeletons in real time using an mmwave radar , is proposed . to the best of the authors ' knowledge , this is the first method to detect > 15 distinct skeletal joints using mmwave radar reflection signals . the proposed method would find several applications in traffic monitoring systems , autonomous vehicles , patient monitoring systems and defense forces to detect and track human skeletons for effective and preventive decision making in real time .
the use of radar makes the system operationally robust to scene lighting and adverse weather conditions . the reflected radar point cloud in range , azimuth and elevation is first resolved and projected onto the range-azimuth and range-elevation planes . a novel low-size , high-resolution radar-to-image representation is also presented that overcomes the sparsity of traditional point cloud data and offers a significant reduction in the size of the subsequent machine learning architecture . the rgb channels were assigned the normalized values of range , elevation/azimuth and the power level of the reflection signals for each of the points . a forked cnn architecture was used to predict the real-world position of the skeletal joints in 3-d story_separator_special_tag the paper compares the waveform of a millimeter ( mm ) -wave differentiated gaussian pulse ( dgp ) centered at 30 ghz with other dgps centered at different microwave frequencies . the performance is assessed using an artificial neural network ( ann ) -based radar data processing technique for breast cancer detection . the radar signals are measured using a set of realistic two-dimensional ( 2d ) breast geometries derived from the realistic three-dimensional ( 3d ) breast phantoms provided by the numerical breast phantom repository of the university of wisconsin cross-disciplinary electromagnetic laboratory ( uwcem ) . the results show that , using the dgp with a central frequency of 30 ghz , tumors are detected with a sensitivity of 88 % , a specificity of 90 % , and an overall accuracy of 89 % . story_separator_special_tag this paper presents a solution to an aiming problem in the remote sensing of vital signs using an integration of two systems . the problem is that to collect meaningful data with a millimeter-wave sensor , the antenna must be pointed very precisely at the subject 's chest . even small movements could make the data unreliable . to solve this problem , we attached a camera to the millimeter-wave antenna , and mounted this combined system on a pan/tilt base . our algorithm initially finds a subject 's face and then tracks him/her through subsequent frames , while calculating the position of the subject 's chest . for each frame , the camera sends the location of the chest to the pan/tilt base , which rotates accordingly to make the antenna point at the subject 's chest . this paper presents a system for concurrent tracking and data acquisition with results from some sample scenarios . story_separator_special_tag non-contact vital sign monitoring of multiple people can be realized using a radar sensor that is able to provide a high-resolution two-dimensional image of the monitored area . this is particularly user-friendly , since no electrodes have to be attached to the body , which is especially important in view of applications like monitoring neonates or burn victims . there are many further applications of remote cardiopulmonary monitoring in the areas of home health care , driver monitoring , sleep monitoring , or even monitoring of arrested people in a prison cell . the radar is able to measure the tiny movements of the chest caused by respiration and heartbeat . due to the superposition of both movements , sophisticated processing of the measured data is necessary to extract the heartbeat and respiration signals from the measured overall signal .
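a minimal sketch of the separation step named at the end of the abstract above : once the chest displacement has been recovered , respiration and heartbeat can be split by band-pass filtering in their typical frequency ranges . the bands and filter order below are common choices , not the paper 's exact processing .

```python
import numpy as np
from scipy.signal import butter, filtfilt

fs = 50.0                              # displacement sampling rate (assumed)
t = np.arange(0, 60, 1 / fs)
chest = 4e-3 * np.sin(2 * np.pi * 0.25 * t) + 0.2e-3 * np.sin(2 * np.pi * 1.1 * t)

def bandpass(x, lo, hi, fs, order=4):
    b, a = butter(order, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    return filtfilt(b, a, x)           # zero-phase filtering

respiration = bandpass(chest, 0.1, 0.5, fs)   # typical breathing band
heartbeat = bandpass(chest, 0.8, 2.0, fs)     # typical heartbeat band

# rate estimates from the dominant spectral line of each component
for name, sig in (("respiration", respiration), ("heartbeat", heartbeat)):
    spec = np.abs(np.fft.rfft(sig))
    f = np.fft.rfftfreq(len(sig), 1 / fs)
    print("%s : %.2f hz ( %.0f per minute )" % (name, f[np.argmax(spec)],
                                                60 * f[np.argmax(spec)]))
```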
this paper presents radar measurements at 24 and 77 ghz , where the vital signs of multiple people have been simultaneously monitored , and proposes a signal processing method to separate heartbeat and respiration in the measured data . story_separator_special_tag in this paper an ultra-wideband 80 ghz fmcw radar system for contactless monitoring of respiration and heart rate is investigated and compared to a standard monitoring system with ecg and co2 measurements as reference . the novel fmcw radar enables the detection of the physiological displacement of the skin surface with submillimeter accuracy . this high accuracy is achieved with a large bandwidth of 10 ghz and the combination of intermediate frequency and phase evaluation . the concept is validated with a radar system simulation , and experimental measurements are performed with different radar sensor positions and orientations . story_separator_special_tag many patients need continuous vital sign monitoring . monitoring a patient with a wearable device , such as an electrode-based electrocardiography ( ecg ) signal recording device , for a long time is very inconvenient . one possible solution is using contactless sensors such as radars to find the vital signs of the subjects . in this paper , we demonstrate the use of frequency-modulated continuous wave ( fmcw ) radar operating at 77 ghz for monitoring a patient 's heart rate in a retirement environment . several experiments were conducted to validate the reliability of the radar responses . finally , the whole system is tested in a bedroom with the radar above a bed and a patient lying on it . story_separator_special_tag a first reported experimental study of a 60 ghz millimeter-wave life detection system ( mlds ) for noncontact human vital-signal monitoring is presented . this detection system is constructed using v-band millimeter-wave waveguide components . a clutter canceller is implemented in the system with an adjustable attenuator and phase shifter . it performs clutter cancellation for the transmitting power leakage from the circulator and the background reflection to enhance the detection sensitivity for weak vital signals . the noncontact vital signal measurements have been conducted on a human subject in four different physical orientations from distances of 1 and 2 m . the time-domain and spectrum waveforms of the measured breathing and heartbeat are presented . this prototype system will be useful for the development of the 60-ghz cmos mlds detector chip design . story_separator_special_tag air is not the only medium that can carry speech and be used to detect it . in our previous paper , another valuable medium , the millimeter wave ( mmw ) , was introduced to develop a new kind of speech acquisition technique ( 6 ) . because of the special features of mmw radar , this speech acquisition method may provide some exciting possibilities for a wide range of applications . in the proposed study , we have designed a new kind of speech acquisition radar system . a super-heterodyne receiver was used in the new system to mitigate the severe dc offset problem and the associated 1/f noise at baseband .
furthermore , in order to decrease the harmonic noise , electro-circuit noise , and ambient noise that are mixed into the mmw-detected speech , an adaptive wavelet packet entropy algorithm is also proposed in this study , which incorporates a wavelet packet entropy based voiced/unvoiced radar speech adaptive detection method and human ear perception properties in a wavelet packet time-scale adaptation speech enhancement process . the performance of the proposed method is evaluated objectively by the signal-to-noise ratio and subjectively by the mean opinion score . story_separator_special_tag millimeter wave ( mmw ) doppler radar with grating structures for the application of detecting speech signals has been discovered in our laboratory . the operating principle of detecting acoustic wave signals , based on the wave propagation theory and wave equations of the electromagnetic wave ( emw ) and acoustic wave ( aw ) propagating , scattering , reflecting and interacting , has been investigated . experimental and observational results have been provided to verify that an mmw cw 40 ghz dielectric integrated radar can detect and exactly identify speech signals present in free space from a person speaking . the received sound signals have been reproduced by the dsp and the reproducer . story_separator_special_tag different speech detection sensors have been developed over the years , but they are limited by the loss of high-frequency speech energy , and have restricted non-contact detection due to the lack of penetrability . this paper proposes a novel millimeter microwave radar sensor to detect speech signals . the utilization of a high operating frequency and a superheterodyne receiver contributes to the high sensitivity of the radar sensor for small sound vibrations . in addition , the penetrability of microwaves allows the novel sensor to detect speech signals through nonmetal barriers . results show that the novel sensor can detect high-frequency speech energies and that the speech quality is comparable to traditional microphone speech . moreover , the novel sensor can detect speech signals through a nonmetal material of a certain thickness between the sensor and the subject . thus , the novel speech sensor expands traditional speech detection techniques and provides an exciting alternative for broader application prospects . story_separator_special_tag high-frequency millimeter-wave ( mmw ) radar-like sensors enable the detection of speech signals . this novel non-acoustic speech detection method has some special advantages not offered by traditional microphones , such as preventing strong acoustic interference , high directional sensitivity with penetration , and long detection distance . a 94-ghz mmw radar sensor was employed in this study to test its speech acquisition ability . a 34-ghz zero intermediate frequency radar , a 34-ghz superheterodyne radar , and a microphone were also used for comparison purposes . a short-time phase-spectrum-compensation algorithm was used to enhance the detected speech . the results reveal that the 94-ghz radar sensor showed the highest sensitivity and obtained the highest speech quality subjective measurement score . this result suggests that the mmw radar sensor has better performance than a traditional microphone in terms of speech detection for detection distances longer than 1 m .
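a rough sketch of the wavelet packet entropy idea used above for voiced/unvoiced detection : decompose a frame into subbands , normalize the subband energies , and compute a shannon entropy , which is low when energy concentrates in a few bands as in voiced speech . this uses the pywt package with invented frame parameters ; the paper 's adaptive thresholding and perceptual weighting are not reproduced .

```python
import numpy as np
import pywt

def wavelet_packet_entropy(frame, wavelet="db4", level=4):
    """shannon entropy of normalized wavelet packet subband energies."""
    wp = pywt.WaveletPacket(data=frame, wavelet=wavelet,
                            mode="symmetric", maxlevel=level)
    energies = np.array([np.sum(np.square(node.data))
                         for node in wp.get_level(level, order="freq")])
    p = energies / (energies.sum() + 1e-12)
    return -np.sum(p * np.log2(p + 1e-12))

fs = 8000
t = np.arange(0, 0.032, 1 / fs)                 # 32 ms frames
voiced = np.sin(2 * np.pi * 150 * t) + 0.5 * np.sin(2 * np.pi * 300 * t)
unvoiced = np.random.randn(len(t))              # noise-like frame
print("voiced entropy   : %.2f" % wavelet_packet_entropy(voiced))
print("unvoiced entropy : %.2f" % wavelet_packet_entropy(unvoiced))
# a frame is flagged as voiced when its entropy falls below a threshold
```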
as a substitute for the traditional speech acquisition method , this novel speech acquisition method demonstrates a large potential for many speech-related applications . story_separator_special_tag the use of drones has seen a surge in the last few years , with their employment in a large variety of applications . however , this popularity has also made improper drone use a threat to privacy , the economy and human lives , requiring the development of methods to detect and track drones . in this work , we present and experimentally validate an airborne drone detection system that utilizes a millimeter wave radar system to detect and follow target drones . story_separator_special_tag the characteristics of a target are essential for classification purposes . input parameters such as blade length , rotation rate and the number of rotors can be used for the classification of drones . an algorithm for accurate measurement of blade length and rotation rate using pattern analysis has been proposed . the time-frequency domain image is used for the calculation of the maximum doppler frequency and the rotational speed of the blade . the accuracy and precision of the proposed technique are validated by experimental data using a w-band radar system . story_separator_special_tag the detection and defense of unmanned aerial systems ( uas ) is becoming increasingly important for the protection of public and private areas . the low cost of micro- and mini-drones , their easy handling , and a considerable payload make them an excellent tool for unwanted surveillance or attacks . the platforms can be equipped with all kinds of sensors or , in the worst case , with explosive devices . on the other hand , the size , material , and flight characteristics of these micro aerial vehicles are not advantageous for their detection with any kind of sensor . therefore , great efforts are needed to ensure reliable detection , localization , tracking , and classification of these low , small , and slow systems . story_separator_special_tag an estimator to infer parameters from radar measurements is presented , applicable for the further detection and localisation of unmanned aircraft systems . the problem of parameter estimation is presented as an inverse problem , necessitating a model of the measurement data . based on the data model , a maximum-likelihood based objective function is derived . the objective function is minimised by a framework similar to the rimax framework , including joint parameter and model order estimation . a simo radar system is presented and employed to test and demonstrate the estimator 's capability in simulations as well as test measurements with flying uas . story_separator_special_tag due to the substantial increase in the number of affordable drones in the consumer market and their regrettable misuse , there is a need for efficient technology to detect drones in airspace . this paper presents the characteristic radar micro-doppler properties of drones and birds . drones and birds both induce micro-doppler signatures due to their propeller blade rotation and wingbeats , respectively . these distinctive signatures can then be used to differentiate a drone from a bird , along with studying them separately . here , experimental measurements of micro-doppler signatures of different types of drones and birds are presented and discussed . the data have been collected using two radars operating at different frequencies : k-band ( 24 ghz ) and w-band ( 94 ghz ) .
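the blade-parameter abstract above reads the maximum doppler frequency off the time-frequency image and converts it to blade tip speed and rotation rate ; the conversion is simple enough to show directly , with all numbers illustrative :

```python
import numpy as np

c = 3e8
f_carrier = 94e9                # w-band, as in the measurements above
lam = c / f_carrier

f_dopp_max = 60e3               # max doppler read off the spectrogram (hz, assumed)
blade_length = 0.12             # metres, assumed known or estimated

# the maximum doppler comes from the blade tip: f_d = 2 * v_tip / lambda
v_tip = f_dopp_max * lam / 2.0
# tip speed relates to rotation rate: v_tip = 2 * pi * r * f_rot
f_rot = v_tip / (2 * np.pi * blade_length)
print("tip speed     : %.1f m/s" % v_tip)
print("rotation rate : %.1f rev/s ( %.0f rpm )" % (f_rot, 60 * f_rot))
```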
three different models of drones and four species of birds of varying sizes have been used for data collection . the results clearly demonstrate that a phase-coherent radar system can retrieve highly reliable and distinctive micro-doppler signatures of these flying targets , both at k-band and w-band . comparison of the signatures obtained at the two frequencies indicates that the micro-doppler return from the w-band radar has a higher snr . story_separator_special_tag for autonomous driving , it is important to detect obstacles at all scales accurately for safety considerations . in this paper , we propose a new spatial attention fusion ( saf ) method for obstacle detection using mmwave radar and a vision sensor , where the sparsity of radar points is considered in the proposed saf . the proposed fusion method can be embedded in the feature-extraction stage , which leverages the features of the mmwave radar and vision sensor effectively . based on the saf , an attention weight matrix is generated to fuse the vision features , which is different from concatenation fusion and element-wise add fusion . moreover , the proposed saf can be trained in an end-to-end manner incorporated with a recent deep learning object detection framework . in addition , we build a generation model , which converts radar points to radar images for neural network training . numerical results suggest that the newly developed fusion method achieves superior performance in public benchmarking . in addition , the source code will be released on github . story_separator_special_tag the paper presents the results of drone detection and an analysis of the propeller signature performed using the xy-demorad fmcw ( frequency-modulated continuous-wave ) radar sensor . a description of the propeller signature is provided with experimental results proving the possibility of distinguishing drones from other small objects , or even detecting them when they are not moving . story_separator_special_tag gait is a human 's natural walking style and is a complex biological process unique to each person . this paper aims to exploit millimeter wave ( mmwave ) sensing to extract fine-grained micro-doppler signatures of human movements , which are used as the mmwave gait biometric for user recognition . towards this goal , a deep micro-doppler learning system is proposed , which utilizes deep neural networks to automatically learn and extract the discriminative features in the mmwave gait biometric data to distinguish a large number of people from each other . in particular , our system consists of two subsystems : human target tracking and human target recognition . the tracking subsystem is responsible for detecting the appearance of a human subject , tracking his/her locations and estimating his/her walking velocity . the recognition subsystem utilizes the tracking data to generate the micro-doppler signatures as the mmwave biometrics , which are fed into a custom-designed residual deep convolutional neural network ( dcnn ) for automatic feature extraction . finally , a softmax classifier utilizes the extracted features for user identification . in a typical indoor environment , a top-1 identification accuracy of 97.45 % is achieved for a dataset story_separator_special_tag millimeter wave ( mmwave ) based gesture recognition technology provides a good human-computer interaction ( hci ) experience . prior works focus on close-range gesture recognition , but fall short in range extension , i.e .
, they are unable to recognize gestures more than one meter away amid considerable noise motions . in this paper , we design a long-range gesture recognition model which utilizes a novel data processing method and a customized convolutional neural network ( cnn ) . firstly , we break down gestures into multiple reflection points and extract their spatial-temporal features , which depict gesture details . secondly , we design a cnn to learn the changing patterns of the extracted features and output the recognition result . we thoroughly evaluate our proposed system by implementing it on a commodity mmwave radar . besides , we also provide more extensive assessments to demonstrate that the proposed system is practical in several real-world scenarios . story_separator_special_tag this paper proposes a novel machine learning architecture , specifically designed for radio-frequency based gesture recognition . we focus on high-frequency ( 60 ghz ) , short-range radar based sensing , in particular google 's soli sensor . the signal has unique properties such as resolving motion at a very fine level and allowing for segmentation in range and velocity spaces rather than image space . this enables recognition of new types of inputs but poses significant difficulties for the design of input recognition algorithms . the proposed algorithm is capable of detecting a rich set of dynamic gestures and can resolve small motions of fingers in fine detail . our technique is based on an end-to-end trained combination of deep convolutional and recurrent neural networks . the algorithm achieves high recognition rates ( avg 87 % ) on a challenging set of 11 dynamic gestures and generalizes well across 10 users . the proposed model runs on commodity hardware at 140 hz ( cpu only ) . story_separator_special_tag gesture recognition provides an easy , convenient and intuitive way of remotely controlling several consumer electronics devices such as audio devices , television sets , projectors or gaming consoles . in recent years , radar sensors have been shown to be an effective sensing modality for sensing and recognizing fine-grained dynamic finger gestures on a watch or smartphone , and thus offer a user-friendly human-computer interface in ultra-short-range applications . however , hand-gesture recognition from a farther distance , such as for controlling consumer devices like a tv or projector , poses a challenge , particularly due to interference from multiple humans in the field of view . in this paper , we present a novel unguided spatio-doppler attention mechanism to enable hand-gesture recognition in the presence of multiple humans using a low-power , compact 60-ghz fmcw radar operated in the 500 mhz ism frequency band . the spatio-doppler mechanism in a 2d deep convolutional neural network with long short-term memory ( 2d cnn-lstm ) makes use of the range-doppler images and range-angle images . we experimentally present the classification accuracy of 94.75 % of our proposed system on the test dataset using eight gestures , namely wave , push forward , pull , left swipe , right swipe , story_separator_special_tag gesture recognition is the most intuitive form of human-computer interface . gesture sensing can replace interfaces such as touch and clicks needed for interacting with a device . gesture recognition in a practical scenario is an open-set classification , i.e . the recognition system should correctly classify known gestures while rejecting arbitrary unknown gestures during inference .
to address the issue of gesture recognition in an open set , we present , in this paper , a novel distance-metric based meta-learning approach to learn embedding features from a video of range-doppler images generated by hand gestures at the radar receiver . further , k-nearest neighbor ( knn ) is used to classify known gestures , distance-thresholding is used to reject unknown gesture motions , and clustering is used to add new custom gestures on the fly without explicit model re-training . we propose a 3d deep convolutional neural network ( 3d-dcnn ) architecture to learn the embedding model using a distance-based triplet-loss similarity metric . we demonstrate that our approach correctly classifies gestures using a compact short-range 60-ghz radar sensor , achieving an overall classification accuracy of 94.5 % over six fine-grained gestures under challenging practical environments , while rejecting other unknown gestures with story_separator_special_tag radar technology plays a vital role in the contact-less detection of hand gestures or motions , which forms an alternate and intuitive form of human-computer interface . air-writing refers to the writing of linguistic characters or words in free space by hand gesture movements . in this paper , we propose an air-writing system based on a network of millimeter wave radars . we propose a two-stage approach for the extraction and recognition of handwriting gestures . the extraction stage uses a fine range estimate combined with the trilateration technique to detect and localize the hand marker , followed by a smoothing filter to create a trajectory of the character through the hand movement . for the recognition stage , we explore two approaches : one extracts the time-series trajectory data and recognizes the drawn character using long short-term memory ( lstm ) , bi-directional lstm ( blstm ) , and convolutional lstm ( convlstm ) with a connectionist temporal classification ( ctc ) loss function , and the other approach reconstructs a 2d image from the trajectory of the drawn character and uses a deep convolutional neural network ( dcnn ) to classify the letters drawn by the user . convlstm-ctc story_separator_special_tag this paper deals with gesture recognition using a 77 ghz fmcw radar system based on micro-doppler ( µd ) signatures . in addition to the doppler information , range information is also available in the fmcw radar . therefore , it is utilized to filter out irrelevant targets . we have proposed five micro-doppler based handcrafted features for gesture recognition . finally , a simple k-nearest neighbor ( k-nn ) classifier is applied to evaluate the importance of the five features . the classification results demonstrate that the proposed features can guarantee a promising recognition accuracy . story_separator_special_tag in this paper , a region-based deep convolutional neural network ( r-dcnn ) is proposed to detect and classify gestures measured by a frequency-modulated continuous wave radar system . micro-doppler ( µd ) signatures of gestures are exploited , and the resulting spectrograms are fed into a neural network . we are the first to use the r-dcnn for radar-based gesture recognition , such that multiple gestures can be automatically detected and classified without manually clipping the data streams according to each hand movement in advance .
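a bare-bones numpy sketch of the open-set decision rule described in the meta-learning abstract above : classify an embedding by k-nearest neighbors , but reject it as unknown when the neighbor distance exceeds a threshold . the embedding network is not reproduced , and the dimensions , cluster layout , and threshold are placeholders :

```python
import numpy as np

rng = np.random.default_rng(1)
# reference embeddings for three known gestures (would come from the 3d-dcnn)
protos = {g: rng.normal(loc=3 * i, scale=0.5, size=(20, 16))
          for i, g in enumerate(["swipe", "push", "pull"])}
bank = np.concatenate(list(protos.values()))
labels = np.repeat(list(protos.keys()), 20)

def classify_open_set(emb, k=5, reject_dist=4.0):
    d = np.linalg.norm(bank - emb, axis=1)
    nearest = np.argsort(d)[:k]
    if d[nearest].mean() > reject_dist:      # too far from every known gesture
        return "unknown"
    vals, counts = np.unique(labels[nearest], return_counts=True)
    return vals[np.argmax(counts)]           # majority vote among neighbors

print(classify_open_set(rng.normal(loc=3, scale=0.5, size=16)))   # ~ "push"
print(classify_open_set(rng.normal(loc=50, scale=0.5, size=16)))  # "unknown"
```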
further , along with the µd signatures , we incorporate phase-difference information of the received signals from an l-shaped antenna array to enhance the classification accuracy . finally , the classification results show that the proposed network trained with spectrogram and phase-difference information can guarantee promising performance for nine gestures . story_separator_special_tag foreign object debris ( fod ) , like stones and metal fasteners on the airport runway , may damage aircraft . many companies have developed relevant detection systems . however , the detection of static and dynamic targets with a weak radar cross section ( rcs ) is still a problem . this paper proposes a time accumulation method to remove background clutter and adaptively detect weak-rcs targets using a two-dimensional rectangular window function . then we propose a space-time joint estimation algorithm that introduces subarray delays to achieve high-precision and high-accuracy target speed and angle estimates . story_separator_special_tag in contrast to cameras , lidars , gps , and proprioceptive sensors , radars are affordable and efficient systems that operate well under variable weather and lighting conditions , require no external infrastructure , and detect long-range objects . in this paper , we present a reliable and accurate radar-only motion estimation algorithm for mobile autonomous systems . using a frequency-modulated continuous-wave ( fmcw ) scanning radar , we first extract landmarks with an algorithm that accounts for unwanted effects in radar returns . to estimate relative motion , we then perform scan matching by greedily adding point correspondences based on unary descriptors and pairwise compatibility scores . our radar odometry results are robust under a variety of conditions , including those under which visual odometry and gps/ins fail . story_separator_special_tag the growing use of doppler radars in the automotive field and the constantly increasing measurement accuracy open new possibilities for estimating the motion of the ego-vehicle . the following paper presents a robust and self-contained algorithm to instantly determine the velocity and yaw rate of the ego-vehicle . the algorithm is based on the received reflections ( targets ) of a single measurement cycle . it analyzes the distribution of their radial velocities over the azimuth angle . the algorithm does not require any preprocessing steps such as clustering or clutter suppression . storage of history and data association is avoided . as an additional benefit , all targets are instantly labeled as stationary or non-stationary . story_separator_special_tag for automotive applications , an accurate estimation of the ego-motion is required to make advanced driver assistance systems work reliably . the proposed framework for ego-motion estimation involves two components : the first component is the spatial registration of consecutive scans . in this paper , the reference scan is represented by a sparse gaussian mixture model . this structural representation is improved by incorporating clustering algorithms . for the spatial matching of consecutive scans , a normal distributions transform-based optimization is used . the second component is a likelihood model for the doppler velocity . using a hypothesis for the ego-motion state , the expected radial velocity can be calculated and compared to the actual measured doppler velocity .
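the single-cycle ego-motion abstract above exploits the fact that , for stationary targets , the measured radial velocity is a sinusoid in azimuth , v_r ( theta ) = - ( v_x cos theta + v_y sin theta ) , so the ego velocity follows from a least-squares fit . a minimal sketch of that estimator , assuming a sensor at the vehicle origin and ignoring the yaw-rate and lever-arm terms the paper also handles :

```python
import numpy as np

rng = np.random.default_rng(2)
vx, vy = 12.0, 0.8                       # true ego velocity (m/s)

theta = rng.uniform(-np.pi / 2, np.pi / 2, 40)   # azimuths of stationary targets
v_r = -(vx * np.cos(theta) + vy * np.sin(theta)) + 0.05 * rng.standard_normal(40)

# least squares: v_r = A @ [vx, vy] with A = -[cos(theta), sin(theta)]
A = -np.column_stack([np.cos(theta), np.sin(theta)])
est, *_ = np.linalg.lstsq(A, v_r, rcond=None)
print("estimated ego velocity : vx=%.2f m/s , vy=%.2f m/s" % tuple(est))

# residuals flag non-stationary targets (large deviation from the fit)
residuals = v_r - A @ est
print("suspected moving targets :", np.flatnonzero(np.abs(residuals) > 0.3))
```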
the ego-motion estimation framework of this paper is a joint spatial and doppler-based optimization function which shows reliable performance on real world data and compares well to state-of-the-art algorithms . story_separator_special_tag autonomous vehicles rely on gps aided by motion sensors to localize globally within the road network . however , not all driving surfaces have satellite visibility . therefore , it is important to augment these systems with localization based on environmental sensing such as cameras , lidar and radar in order to increase reliability and robustness . in this work we look at using radar for localization . radar sensors are available in compact format devices well suited to automotive applications . past work on localization using radar in automotive applications has been based on careful sensor modeling and sequential monte carlo ( particle ) filtering . in this work we investigate the use of the iterative closest point ( icp ) algorithm together with an extended kalman filter ( ekf ) for localizing a vehicle equipped with automotive grade radars . experiments using data acquired on public roads show that this computationally simpler approach yields sufficiently accurate results on par with more complex methods . story_separator_special_tag significant advances have been achieved in mobile robot localization and mapping in dynamic environments ; however , these are mostly incapable of dealing with the physical properties of automotive radar sensors . in this paper we present an accurate and robust solution to this problem , by introducing a memory efficient cluster map representation . our approach is validated by experiments that took place on a public parking space with pedestrians , moving cars , as well as different parking configurations to provide a challenging dynamic environment . the results prove its ability to reproducibly localize our vehicle within an error margin of below 1 % with respect to ground truth using only point based radar targets . a decay process enables our map representation to support local updates . story_separator_special_tag in radarcat we present a small , versatile radar-based system for material and object classification which enables new forms of everyday proximate interaction with digital devices . we demonstrate that we can train and classify different types of materials and objects which we can then recognize in real time . based on established research designs , we report on the results of three studies , first with 26 materials ( including complex composite objects ) , next with 16 transparent materials ( with different thicknesses and varying dyes ) and finally 10 body parts from 6 participants . both leave-one-out and 10-fold cross-validation demonstrate that our approach of classifying radar signals using a random forest classifier is robust and accurate . we further demonstrate four working examples including a physical object dictionary , painting and photo editing application , body shortcuts and automatic refill based on radarcat . we conclude with a discussion of our results , limitations and outline future directions . story_separator_special_tag in this paper , we propose thumouse , a novel interaction paradigm aimed at creating a gesture-based and touch-free cursor interaction that accurately tracks the motion of fingers in real-time . thumouse enables users to move the cursor using frequency-modulated continuous-wave ( fmcw ) radar .
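returning to the icp + ekf localization above , here is a minimal sketch of the icp building block : point-to-point matching with a kd-tree and the closed-form kabsch/svd alignment . the ekf fusion step and the outlier gating a real radar pipeline would need are omitted , and the demo data are synthetic .

```python
import numpy as np
from scipy.spatial import cKDTree

def icp_2d(scan, ref, iters=30):
    """Align 2-D 'scan' points to 'ref' points; returns accumulated (R, t).
    Minimal point-to-point ICP: nearest-neighbor matching + Kabsch SVD.
    """
    R, t = np.eye(2), np.zeros(2)
    tree = cKDTree(ref)
    cur = scan.copy()
    for _ in range(iters):
        _, idx = tree.query(cur)                 # nearest reference point
        matched = ref[idx]
        mu_s, mu_r = cur.mean(0), matched.mean(0)
        H = (cur - mu_s).T @ (matched - mu_r)    # cross-covariance
        U, _, Vt = np.linalg.svd(H)
        Ri = Vt.T @ U.T
        if np.linalg.det(Ri) < 0:                # guard against reflections
            Vt[-1] *= -1
            Ri = Vt.T @ U.T
        ti = mu_r - Ri @ mu_s
        cur = cur @ Ri.T + ti                    # apply incremental transform
        R, t = Ri @ R, Ri @ t + ti               # accumulate
    return R, t

# synthetic map and a scan offset by a small rigid motion
rng = np.random.default_rng(0)
ref = rng.uniform(-10, 10, size=(200, 2))
a = np.deg2rad(3.0)
R_true = np.array([[np.cos(a), -np.sin(a)], [np.sin(a), np.cos(a)]])
scan = ref @ R_true.T + np.array([0.3, -0.2])
R, t = icp_2d(scan, ref)
print(np.round(R @ R_true, 3))  # ~identity if ICP converged
```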
while previous work with fmcw radar in human-computer-interfaces ( hci ) has focused on classifying a set of predefined hand gestures , thumouse regressively tracks the position of a finger , which allows for finer-grained interaction . this paper presents the gesture sensing pipeline we built , with regressive tracking through deep neural networks , data augmentation for robustness , and computer vision as a training base . we also report on a proof-of-concept demonstration that shows how our system can function as a mouse , and identify areas for future work . this work builds a foundation for designing finer micro gesture-based interactions , allowing the finger to emulate external input devices such as a joystick and touch-pad . story_separator_special_tag research has explored miniature radar as a promising sensing technique for the recognition of gestures , objects , users ' presence and activity . however , within human-computer interaction ( hci ) , its use remains underexplored , in particular in tangible user interfaces ( tui ) . in this paper , we explore two research questions with radar as a platform for sensing tangible interaction : the counting , ordering and identification of objects , and the tracking of the orientation , movement and distance of these objects . we detail the design space and practical use-cases for such interaction , which allows us to identify a series of design patterns , beyond static interaction , which are continuous and dynamic . with a focus on planar objects , we report on a series of studies which demonstrate the suitability of this approach . this exploration is grounded in both a characterization of the radar sensing and our rigorous experiments which show that such sensing is accurate with minimal training . with these techniques , we envision both realistic and future applications and scenarios . the motivation for what we refer to as solinteraction is to demonstrate the potential for radar-based interaction story_separator_special_tag a radcom system allows for the combination of automotive radar sensing and wireless communication applications , while relying on a single hardware platform with a single transmission waveform . by using the joint millimeter-wave signals , both radar and communication applications could be operated simultaneously , thus granting the pervasive availability of both functions . this paper presents an end-to-end simulation of a complete radcom system in order to assess the feasibility and efficiency of performing both radar and communication functions using the joint millimeter-wave in the context of vehicle to vehicle ( v2v ) systems . we propose the use of a 77 ghz radio frequency and show , through various simulations , its suitability for both applications . through the evaluation of the auto-correlation function we optimized radar parameters between vehicles . for the communication function , we evaluate the bit error rate ( ber ) using the pulse jamming method . simulations show that our system outperforms existing schemes . story_separator_special_tag the rapid development of security inspection systems means that legacy security inspection equipment is gradually unable to meet the needs of the community . the physical characteristics of millimeter-waves make them more suitable for security imaging systems than x-rays , and an active millimeter-wave imaging system has higher sensitivity and is less affected by the environment than a passive millimeter-wave imaging system .
this paper introduces a ka-band active millimeter-wave imaging system and its imaging principle , and uses a new calibration method to correct the images . finally , a convolutional neural network is used to detect and identify the target . story_separator_special_tag this paper presents a novel non-contact , nondestructive crack/defect detection technique using an active millimeter wave radar system operating in v band for automatic structural health monitoring and quality checks of consumer products at the industry dispatch end . the increasing mishaps due to invisible cracks in building structures , roadways , etc. have led to the demand for an accurate , non-destructive forecast system . the designed mmw radar enjoys good cross-range and down-range resolution of 4.27 mm and 8.8 mm , respectively . further , experimental results show an accurate , non-contact multilayer dielectric thickness measurement as well as the concealed crack detection ability of the proposed imaging methodology . story_separator_special_tag in recent years , millimeter-wave ( mmwave ) is becoming a significant component of next-generation wireless communication due to its up to 7 gbps transmission rate . in addition to the communication benefits , the unique sensing features of mmwave attract more attention . nowadays , services for human detection and identification are needed in numerous application scenarios , such as the smart home and smart industry . rf-based sensing techniques , especially wifi-based ones , are widely utilized in human detection and identification . however , these works either require humans to carry devices or cannot detect and identify multiple people simultaneously . in this paper , we propose mmsense , a device-free multi-person detection and identification framework , which exploits the unique mmwave sensing features . first , we utilize the properties of directionality , impenetrability , and reflection of the 60 ghz signal for objects to fingerprint the environments . based on the generated environment fingerprints with and without human presence , mmsense can detect and localize the presence of multiple people simultaneously via an lstm-based classification model . moreover , we propose a novel approach to use humans ' outline profile and vital signs to story_separator_special_tag continuous monitoring of a human 's breathing and heart rates is useful in maintaining better health and early detection of many health issues . designing a technique that can enable contactless and ubiquitous vital sign monitoring is a challenging research problem . this paper presents mmvital , a system that uses 60 ghz millimeter wave ( mmwave ) signals for vital sign monitoring . we show that the mmwave signals can be directed to a human 's body and the rss of the reflections can be analyzed for accurate estimation of breathing and heart rates . we show how the directional beams of mmwave can be used to monitor multiple humans in an indoor space concurrently . mmvital relies on a novel human finding procedure where a human can be located within a room by reflection loss based object/human classification . we evaluate mmvital using a 60 ghz testbed in home and office environments and show that it provides a mean estimation error of 0.43 bpm ( breathing rate ) and 2.15 bpm ( heart rate ) . also , it can locate the human subject with 98.4 % accuracy within 100 ms of dwell time on reflection .
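the rate estimation step of a system like mmvital can be sketched as band-pass filtering of the reflected-signal trace followed by an fft peak search . the sampling rate , band edges and synthetic rss trace below are illustrative assumptions , not the system 's actual parameters .

```python
import numpy as np
from scipy.signal import butter, filtfilt

def rate_bpm(signal, fs, band):
    """Dominant rate (breaths/beats per minute) inside 'band' (Hz)."""
    b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
    x = filtfilt(b, a, signal - signal.mean())
    spec = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(len(x), 1 / fs)
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return 60.0 * freqs[mask][spec[mask].argmax()]

# synthetic 30 s reflected-signal trace: 0.25 Hz breathing + 1.2 Hz heartbeat
rng = np.random.default_rng(0)
fs = 50.0
t = np.arange(0, 30, 1 / fs)
rss = (np.sin(2 * np.pi * 0.25 * t) + 0.2 * np.sin(2 * np.pi * 1.2 * t)
       + 0.05 * rng.normal(size=t.size))
print(rate_bpm(rss, fs, (0.1, 0.5)))  # ~15 breaths per minute
print(rate_bpm(rss, fs, (0.8, 2.0)))  # ~72 beats per minute
```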
we also demonstrate story_separator_special_tag millimeter-wave ( mm-wave ) location systems not only provide accurate positioning for location-based services but can also help optimize network operations , for example , through location-driven beam steering and access point association . in this paper , we design and evaluate localization schemes that exploit the characteristics of mm-wave communication systems . we propose two range-free algorithms belonging to the broad classes of triangulation and angle difference of arrival . the schemes work both with multiple anchors and with as few as a single anchor , under the only assumption that the floor plan and the positions of the mm-wave access points are known . moreover , they are designed to be lightweight so that even computationally-constrained devices can run them . we evaluate our proposed algorithms against two benchmark approaches based on fingerprinting and angles of arrival , respectively . our results , obtained both by means of simulations and through measurements involving commercial 60-ghz mm-wave devices , show that sub-meter accuracy is achieved in most of the cases , even in the presence of only a single access point . the availability of multiple access points substantially improves the localization accuracy , especially for large indoor spaces story_separator_special_tag automated cars benefit greatly from millimeter wave broadband communication links . also , computer vision is becoming increasingly used in automotive applications . in this scenario , we propose a system capable of tracking the position of a given car on a road by fusing data from a camera system and a wireless radio link at 60 ghz . a gaussian mixture model and epipolar geometry have been used for computer vision . from a v-band wireless link a doppler shift estimate has been obtained . the effectiveness of our method is shown on measurement data . story_separator_special_tag continuous monitoring of a human 's breathing and heart rates is useful in maintaining better health and early detection of many health issues . designing a technique that can enable contactless and ubiquitous vital sign monitoring is a challenging research problem . this article presents mmvital , a system that uses 60 ghz millimeter wave ( mmwave ) signals for vital sign monitoring . we show that the mmwave signals can be directed to a human 's body and the received signal strength ( rss ) of the reflections can be analyzed for accurate estimation of breathing and heart rates . we show how the directional beams of mmwave can be used to monitor multiple humans in an indoor space concurrently . mmvital also provides sleep monitoring with sleeping posture identification and detection of central apnea and hypopnea events . it relies on a novel human finding procedure where a human can be located within a room by reflection loss-based object/human classification . we evaluate mmvital using a 60 ghz testbed in home and office environments and show that it provides a mean estimation error of 0.43 breaths per minute ( bpm ; breathing rate ) and 2.15 beats per minute ( bpm ; story_separator_special_tag we propose human mobility tracking and activity monitoring using 60 ghz millimeter wave ( mmwave ) . we discuss the benefits of using mmwave signals for the purpose over existing 2.4/5 ghz based techniques . we also identify the related challenges of determining a human 's initial location and tracking , and demonstrate the feasibility of activity monitoring using an example of walking activity .
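for the triangulation class of range-free schemes described above , the geometric core is intersecting bearing lines from access points at known positions . a sketch with noiseless angles follows ; the single-anchor and angle-difference-of-arrival variants , which additionally exploit the floor plan , are not shown .

```python
import numpy as np

def triangulate(p1, theta1, p2, theta2):
    """Intersect two bearing rays (global angles, radians) from known APs."""
    d1 = np.array([np.cos(theta1), np.sin(theta1)])
    d2 = np.array([np.cos(theta2), np.sin(theta2)])
    # solve p1 + s*d1 = p2 + t*d2 for the ray parameters (s, t)
    A = np.column_stack([d1, -d2])
    s, _ = np.linalg.solve(A, np.asarray(p2) - np.asarray(p1))
    return np.asarray(p1) + s * d1

ap1, ap2 = np.array([0.0, 0.0]), np.array([10.0, 0.0])
user = np.array([4.0, 3.0])
th1 = np.arctan2(*(user - ap1)[::-1])   # angle of arrival seen at ap1
th2 = np.arctan2(*(user - ap2)[::-1])
print(triangulate(ap1, th1, ap2, th2))  # ~[4., 3.]
```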
story_separator_special_tag an active-mode mm-wave ( 60 ghz ) imaging system with a yagi antenna has been developed . the optics for the system were designed with the ray tracing method to reduce aberration . signal processing using a neural network has been successfully introduced to recognize objects distorted by coherent mm-wave illumination . with 10 × 10 sampling points , a recognition rate of 98 % has been obtained for objects of 10 alphabetical letters with 5 teaching trials . story_separator_special_tag as a device-free approach , radio tomographic imaging ( rti ) is ideally suited for low-cost indoor localization in context-aware internet-of-things applications . however , the fundamental rti algorithm relies on shadowing of the line of sight ( los ) links and therefore , conventional rti implementations using 2.4 ghz sensing networks ( microrti ) fail to accurately localize users in multipath-rich indoor environments . the localization accuracy is further degraded by external human movement that affects the signal propagation . in this paper , we propose mmrti , a novel rti approach based on a highly-directional , millimeter-wave sensing network , that aims to improve indoor localization by utilizing the los-dominant nature of millimeter-wave signal propagation . we experimentally evaluate mmrti , operating at 60 ghz , with and without human movement around the sensing network , in two indoor environments , and compare its performance against the conventional microrti approach . we observe that mmrti achieves a 90 % -ile localization error of 0.07 m-0.25 m , an improvement of 2.41 m-2.60 m compared to microrti , while remaining unaffected by external human movement , which degrades the microrti localization accuracy by up to 1.2 m . story_separator_special_tag this paper presents non-destructive inspection of sub-millimeter wide concrete surface cracks covered by paper using near-field scattering . one of the problems for nondestructive imaging using near-field scattering is low millimeter-wave ( mmw ) image contrast at the surface crack , because the decrease of received power due to near-field scattering is about 1 db . this paper presents nondestructive inspection technologies that improve the mmw image contrast at the surface crack by using the standing wave between the probe antenna and the sample surface . we have found that the black and white of the mmw image pixels that correspond to the concrete cracks is inverted according to the paper thickness or antenna height , and the image contrast can be improved by calculating the difference of two mmw images obtained from different paper thicknesses . when we measured the frequency characteristics of the reflected mmw signal , a sharp spectral notch was observed in the reflected mmw signals , and the spectral notch frequency depends on the presence or absence of a crack , the paper thickness , and the antenna height . we have achieved an improvement of mmw image contrast at the surface crack by using these mmw spectroscopy technologies . story_separator_special_tag this paper examines a novel concept for estimating the position of a lap joint based on polarimetric scattering effects . while the basic measurement method and setup have already been presented in [ 1 ] , we focus here on the associated accuracy limits , i.e. , the cramer-rao lower bound ( crlb ) calculation for this approach . the minimum achievable position estimation variance is calculated for a variety of estimation scenarios .
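the rti reconstruction underlying the mmrti approach above can be sketched as a regularized linear inverse problem : each link 's attenuation change is modeled as a weighted sum of voxel attenuations , y = w x + n , and x is recovered by tikhonov-regularized least squares . the ellipse-based weight model , the link width and the regularization constant below are illustrative assumptions , not the published system 's parameters .

```python
import numpy as np

def rti_image(links, rss_change, grid, lam=0.5, width=0.3):
    """Tikhonov-regularized RTI: solve min ||W x - y||^2 + lam ||x||^2.

    links      : list of ((x1, y1), (x2, y2)) transceiver pairs
    rss_change : per-link attenuation change in dB
    grid       : (m, 2) voxel center coordinates
    A voxel contributes to a link if it lies inside an ellipse with the
    two nodes as foci (excess path < 'width'); weights are 1/sqrt(d).
    """
    W = np.zeros((len(links), len(grid)))
    for i, (a, b) in enumerate(links):
        a, b = np.asarray(a, float), np.asarray(b, float)
        d = np.linalg.norm(a - b)
        excess = (np.linalg.norm(grid - a, axis=1)
                  + np.linalg.norm(grid - b, axis=1) - d)
        W[i, excess < width] = 1.0 / np.sqrt(d)
    x = np.linalg.solve(W.T @ W + lam * np.eye(len(grid)), W.T @ rss_change)
    return x  # attenuation per voxel; the peak marks the person
```

in practice the resulting image is thresholded , or its peak taken , to produce the location estimate .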
these calculations are then validated with simulations and real world measurements . story_separator_special_tag the future of mobile computing involves autonomous drones , robots and vehicles . to accurately sense their surroundings in a variety of scenarios , these mobile computers require a robust environmental mapping system . one attractive approach is to reuse the millimeterwave communication hardware in these devices , e.g . a 60 ghz networking chipset , and capture signals reflected by the target surface . the devices can also move while collecting reflection signals , creating a large synthetic aperture radar ( sar ) for high-precision rf imaging . our experimental measurements , however , show that this approach provides poor precision in practice , as imaging results are highly sensitive to device positioning errors that translate into phase errors . we address this challenge by proposing a new 60 ghz imaging algorithm , rss series analysis , which images an object using only rss measurements recorded along the device 's trajectory . in addition to object location , our algorithm can discover a rich set of object surface properties at high precision , including object surface orientation , curvature , boundaries , and surface material . we tested our system on a variety of common household objects ( between 5 cm story_separator_special_tag in this work , we propose a novel approach for high accuracy user localization by merging tools from both millimeter wave ( mmwave ) imaging and communications . the key idea of the proposed solution is to leverage mmwave imaging to construct a high-resolution 3d image of the line-of-sight ( los ) and non-line-of-sight ( nlos ) objects in the environment at one antenna array . then , uplink pilot signaling with the user is used to estimate the angle-of-arrival and time-of-arrival of the dominant channel paths . by projecting the aoa and toa information on the 3d mmwave images of the environment , the proposed solution can locate the user with sub-centimeter accuracy . this approach has several gains . first , it allows accurate simultaneous localization and mapping ( slam ) from a single standpoint , i.e. , using only one antenna array . second , it does not require any prior knowledge of the surrounding environment . third , it can locate nlos users , even if their signals experience more than one reflection and without requiring an antenna array at the user . the approach is evaluated using a hardware setup and its ability to story_separator_special_tag millimeter wave ( mmwave ) bands are considered highly promising for localization and object detection . in this paper we assess the potential of commercial ieee 802.11ad mmwave equipment to offer accurate object detection , ultimately providing models of the physical environment . unlike solutions using bespoke mmwave equipment for detection , the use of ieee 802.11ad ensures a low-cost system , and one in which detection can be integrated with communication , creating potential for innovative applications . our approach is to build a laboratory testbed in which we capture reflected mmwave signals that are generated and transmitted by a commercial off-the-shelf ( cots ) ieee 802.11ad mmwave device . from the measured channel impulse response , we measure the distance from the mmwave transceiver to the objects in the environment using some simple signal processing techniques .
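the simplest version of the distance measurement just described is to locate the reflection peak in the channel impulse response and convert its delay to a round-trip range . the synthetic cir below assumes the 1.76 gs/s single-carrier sample rate of ieee 802.11ad , and the leakage-skipping heuristic is an illustrative assumption .

```python
import numpy as np

C = 3e8  # speed of light, m/s

def range_from_cir(cir, fs, skip_bins=3):
    """One-way range of the strongest reflection in a CIR magnitude.
    Skips the first few taps (direct leakage), then picks the peak.
    """
    mag = np.abs(cir)
    peak = skip_bins + mag[skip_bins:].argmax()
    tau = peak / fs                  # two-way time of flight, seconds
    return C * tau / 2.0             # one-way distance to the reflector

# synthetic CIR with a reflector at ~2 m
rng = np.random.default_rng(0)
fs = 1.76e9
cir = 0.01 * rng.normal(size=128)
cir[int(round(2 * 2.0 / C * fs))] = 1.0     # two-way delay of a 2 m target
print(round(range_from_cir(cir, fs), 2))    # ~2 m, quantized to c/(2*fs) ~ 8.5 cm bins
```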
by knowing the angle of mmwave departure/arrival and this measured distance , we can develop a 2d model of the physical environment . we report on the achieved accuracy , which is 2 cm in most experiments , and discuss technology limitations and research opportunities . story_separator_special_tag this chapter investigates the feasibility of using 3d holographic millimeter-wave ( hmmw ) imaging for diagnosis of concealed metallic forging objects ( mfos ) in an inhomogeneous medium . a 3d numerical system , including radio frequency ( rf ) transmitters and detectors , various realistic mfo models and signal and imaging processing , is developed to analyze the measured data and reconstruct images of target mfos . simulation and experimental validations are performed to evaluate the hmmw approach for diagnosis of concealed mfos . results show that various concealed objects can be clearly represented in the reconstructed images with accurate sizes , locations and shapes . the proposed system has the potential for further investigation of concealed mfos under clothing , with potential applications in on-body concealed weapon detection at security sites or mfo detection in children . story_separator_special_tag this paper presents recent progress on portable doppler , frequency-shift keying ( fsk ) and frequency-modulated continuous-wave ( fmcw ) radar systems for life activity sensing and human localization . it starts from a software-based calibration technology that significantly improves the accuracy and reliability of a millimeter-wave interferometry radar front-end for physiological motion and vocal vibration detection . then , the operation principle and unique features of fsk and fmcw radar , such as rf/digital beamforming , for human-aware sensing and localization will be presented . to reject clutter noise , which is a common challenge for practical deployment of short-range radar systems , an intermodulation radar technique will be discussed . then , machine learning will be presented as an efficient approach to make the radar system smart for automatic classification and decision making . finally , challenges for biomedical radar systems and future development directions will be discussed . story_separator_special_tag driven by a wide range of real-world applications , significant efforts have recently been made to explore device-free human activity recognition techniques that utilize the information collected by various wireless infrastructures to infer human activities without the need for the monitored subject to carry a dedicated device . existing device-free human activity recognition approaches and systems , though yielding reasonably good performance in certain cases , are faced with a major challenge . the wireless signals arriving at the receiving devices usually carry substantial information that is specific to the environment where the activities are recorded and the human subject who conducts the activities . due to this reason , an activity recognition model that is trained on a specific subject in a specific environment typically does not work well when being applied to predict another subject 's activities that are recorded in a different environment .
to address this challenge , in this paper , we propose ei , a deep-learning based device-free activity recognition framework that can remove the environment and subject specific information contained in the activity data and extract environment/subject-independent features shared by the data collected on different subjects under different environments . we story_separator_special_tag radio-based passive-object sensing can enable a new form of pervasive user-computer interface . prior work has employed various wireless signal features to sense objects under a set of predefined , coarse motion patterns . but an operational ui , like a trackpad , often needs to identify fine-grained , arbitrary motion . this paper explores the feasibility of tracking a passive writing object ( e.g. , a pen ) at sub-centimeter precision . we approach this goal through a practical design , mtrack , which uses highly-directional 60 ghz millimeter-wave radios as the key enabling technology . mtrack runs a discrete beam scanning mechanism to pinpoint the object 's initial location , and tracks its trajectory using a signal-phase based model . in addition , mtrack incorporates novel mechanisms to suppress interference from background reflections , taking advantage of the short wavelength of 60 ghz signals . we prototype mtrack and evaluate its performance on a 60 ghz reconfigurable radio platform . experimental results demonstrate that mtrack can locate/track a pen with a 90-percentile error below 8 mm , enabling new applications such as wireless transcription and virtual trackpads . story_separator_special_tag ambient environment information , including reflector location , dimension and reflectivity , is a key input to many millimeter-wave ( mmwave ) networking and sensing applications . it has found versatile applications in optimizing network coverage and robustness , enhancing mobile link performance , and enabling high-accuracy indoor localization and navigation . recent approaches to deriving mmwave environment information require heavy infrastructure support or non-trivial human labor , and rely on costly software defined radios , which prevents their usage in practice . in this work , we design and implement mmranger , a system that can automatically sense the environment without any infrastructure support . mmranger equips a commodity robot with a pair of low-cost off-the-shelf mmwave radios , which constantly sample the ambient environment by exchanging a series of mmwave signals while the robot moves and rotates . mmranger then re-engineers the time-domain signal series to derive the spatial-domain environment structure , through novel reflection path extraction and reflector mapping algorithms . our experiments verify that mmranger can accurately sense a given environment with minimal overhead , and the learned information can bring 1.6 × and 2.1 × performance gains , in terms of network coverage and mobile link throughput , respectively , story_separator_special_tag in this paper , we describe the overall system design for a radar being developed for the nasa mars science laboratory , set to launch in 2009 . story_separator_special_tag in this paper we analyze the benefit of using beamforming arrays as opposed to using omnidirectional antennas as the receiving antennas of motion tracking systems . a 60 ghz carrier-based ultra-wide band positioning system for the simultaneous location of diverse markers is considered .
two targets simultaneously moving in a realistic indoor environment are tracked using both antenna systems . results show that the tracking precision is greatly improved with the use of beamforming arrays , whereas for omnidirectional antennas the error is in the range of meters . story_separator_special_tag the article describes the irctr parsax radar system , a fully polarimetric fm-cw radar with dual-orthogonal sounding signals , which has the ability to measure all elements of a radar target 's polarization scattering matrix simultaneously , in one sweep . story_separator_special_tag a novel series-fed microstrip patch array antenna for 37/39 ghz beamforming is proposed . to improve the antenna bandwidth , two of the patches are modified with truncated corners in the diagonal direction . this truncation generates two degenerate resonances which result in a flattened frequency response of the input impedance . then , the recessed microstrip feeds for the other two patches are designed to yield a proper current distribution for radiation while maintaining minimal return loss , wide bandwidth , and low sidelobes . though the individual patch antenna is elliptically polarized due to the truncated corners , a phased array with linear polarization can still be obtained by alternately deploying left-handed and right-handed elliptically polarized patches . for validation of the proposed design , an array is fabricated with 16 elements on a substrate with 10 mil thickness and εr = 2.2 . the beamforming capability of the proposed array is also demonstrated . the experimental results agree well with the simulation and show that the antenna gain and the return loss bandwidth can be more than 21 dbi and 8 % , respectively . story_separator_special_tag this paper presents a review of the most recent information on the effects of the earth 's atmosphere on space communications systems . the design and reliable operation of satellite systems which provide the many applications in space and rely on the transmission of radio waves for communications and scientific purposes are dependent on the propagation characteristics of the transmission path . the presence of atmospheric gases , clouds , fog , precipitation , and turbulence causes uncontrolled variations in the signal characteristics which can result in a reduction of the quality and reliability of the transmitted information . models and techniques used in the prediction of atmospheric effects as influenced by frequency , geography , elevation angle , and type of transmission are discussed . recent data on performance characteristics obtained from direct measurements on satellite links operating up to and above 30 ghz are reviewed . particular emphasis is placed on the effects of precipitation on the earth-space path , including rain attenuation , and rain and ice-particle depolarization . sky noise , antenna gain degradation , scintillations , and bandwidth coherence are also discussed . the impact of the various propagation factors on communications system design criteria is presented story_separator_special_tag in this paper , we sum up our experience gathered working with mmwave fmcw radar sensors for localization problems . we give a glimpse of the foundations of radar that are necessary to understand the benefits and advantages of this technology . moreover , we introduce our open-source software toolbox pymmw , based on python , for texas instruments iwr1443 es2.0 evm sensors to provide students and researchers easy access to those radar sensors .
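as a small illustration of the kind of fmcw processing that toolboxes like pymmw give access to , the following generic numpy sketch computes a range-doppler map from one frame of de-chirped beat samples . this is not the pymmw api ; the frame dimensions and the injected target are synthetic , and the angle fft across antennas that a mimo sensor would add is omitted .

```python
import numpy as np

def range_doppler_map(frame, window=True):
    """2-D FFT processing of one FMCW frame.

    frame : (n_chirps, n_samples) de-chirped beat samples for one antenna.
    Range FFT across fast time (samples), Doppler FFT across slow time
    (chirps); a third FFT over antenna elements would yield angle.
    """
    n_chirps, n_samples = frame.shape
    if window:
        frame = frame * np.hanning(n_samples)[None, :]   # reduce range sidelobes
    rng = np.fft.fft(frame, axis=1)                      # fast time -> range
    rd = np.fft.fftshift(np.fft.fft(rng, axis=0), 0)     # slow time -> doppler
    return np.abs(rd)

# synthetic frame: one target at range bin 40, doppler bin +8
n_c, n_s = 64, 256
c, s = np.meshgrid(np.arange(n_c), np.arange(n_s), indexing="ij")
beat = np.exp(2j * np.pi * (40 * s / n_s + 8 * c / n_c))
rd = range_doppler_map(beat)
print(np.unravel_index(rd.argmax(), rd.shape))  # (32 + 8, 40) after fftshift
```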
in doing so , one can jump right into sensing with mmwave fmcw radar from a practical point of view and start doing experiments and developing their own applications . finally , pymmw is used for data acquisition of a scene illuminated by three virtual radars in three different states of occupancy , showing the potential of mmwave fmcw radar for indoor and distance-based localization applications . story_separator_special_tag in recent years , radar technology , once used predominantly in the military , has started to emerge in numerous civilian applications . one of the areas in which this technology has appeared is the automotive industry . nowadays , we can find various radars in modern cars that are used to assist the driver , ensure a safe drive and increase the quality of the driving experience . the future of the automotive industry promises to offer a fully autonomous car which is able to drive itself without any driver assistance . these vehicles will require powerful radar sensors that can provide precise information about the surroundings of the vehicle . these sensors will also need a computing platform that can ensure real-time processing of the received signals . the subject of this thesis is to investigate the processing platforms for the real-time signal processing of the automotive fmcw radar developed at nxp semiconductors . the radar sensor is designed to be used in self-driving vehicles . the thesis first investigates the signal processing algorithm for the mimo fmcw radar . it is found that the signal processing consists of three-dimensional fft processing . taking into story_separator_special_tag random and unwanted fluctuations that perturb the phase of an ideal reference sinusoidal signal may cause significant performance degradation in radar systems employing coherent integration techniques . in this second part of the study , resorting to the fast-time/slow-time data matrix representation developed in `` part i '' of this two-part study , we assess the performance of both pulse doppler processing ( pdp ) algorithms and sidelobe blanker ( slb ) techniques when phase noise and gaussian interference ( clutter plus noise ) impair the data . specifically , we derive analytically manageable expressions for : the probability of false alarm and the probability of detection of pdp algorithms ; the probability of false alarm , the probability of blanking a coherent repeater interference , and the probability of blanking a target in the mainlobe of slb processors . simulation results show that phase noise may slightly degrade the performance of pdp and slb processors as long as its power spectral density correctly represents the available measurements . additionally , the matched filter receiver ensures a high level of robustness against phase noise , highlighting its robustness against steering and covariance matrix mismatches , a property that we formally story_separator_special_tag in radar processing , coloured transmission aims at improving the detection of targets that appear as fast as they disappear . it consists in simultaneously transmitting different waveforms using a wide beam . to differentiate the transmitted waveforms , various orthogonal coding schemes such as phase and frequency coding have been considered . in this paper , we suggest using , in the mimo radar case , the so-called multicarrier phase coded signals ( mcpc ) initially introduced in siso radar processing .
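the angle dimension of the mimo fmcw pipeline above comes from processing across antenna elements , and the wide-beam transmission in the mcpc abstract likewise rests on array behavior . a related textbook building block is the array factor of a uniform linear array ; the element count , spacing and steering angle below are illustrative .

```python
import numpy as np

def ula_pattern(n_elem, d_over_lambda, steer_deg, angles_deg):
    """Array factor (dB) of an n-element uniform linear array steered
    to 'steer_deg'; d_over_lambda is the element spacing in wavelengths.
    """
    k = 2 * np.pi * d_over_lambda
    n = np.arange(n_elem)
    w = np.exp(-1j * k * n * np.sin(np.deg2rad(steer_deg)))  # steering weights
    af = np.array([np.abs(w @ np.exp(1j * k * n * np.sin(np.deg2rad(a))))
                   for a in angles_deg]) / n_elem
    return 20 * np.log10(np.maximum(af, 1e-6))

# 16-element half-wavelength array steered to 20 degrees
angles = np.linspace(-90, 90, 361)
pattern = ula_pattern(16, 0.5, 20.0, angles)
print(angles[pattern.argmax()])  # ~20 degrees
```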
this approach has the advantage of better performance than phase-coding based schemes . in addition , it has a lower computational cost than the frequency coding system . story_separator_special_tag this paper is concerned with the accuracy of single-hit angle estimation by a monopulse radar in the search mode . the system analyzed is an amplitude comparison estimator . off-axis targets are considered . the effects of narrow-band gaussian receiver noise on the sum and difference signals are considered . an asymptotic expansion for the angle estimate is derived which converges for snr 's of 12 db or greater . the mean and the standard deviation of the angle estimate are found . these statistics are given in terms of antenna radiation patterns , true target angular position , snr , and noise statistics . it is found that the expected value of off-boresight angle estimates is equal to the true angle of the target . further , the standard deviation of the angle estimate increases as the target is moved off boresight . this increase is greater than would be expected from the decrease in antenna gain due to off-axis operation . correlation between noise in the sum and difference channels reduces the off-axis errors . story_separator_special_tag the use of millimeter-waves for imaging purposes is becoming increasingly important , as millimeter-waves can penetrate most clothing and packaging materials , so that the detector does not require physical contact with the object . this will offer a view of the hidden content of , e.g. , packets or bags without the need to open them , whereby packaging and content will not be damaged . nowadays x-ray is used , but as the millimeter-wave quantum energy is far below the ionization energy , it is less harmful to human health . in this paper we report an active millimeter-wave imaging tomograph for material analysis and concealed object detection purposes . the system is built using in-house w-band components . the object is illuminated with low-power millimeter-waves in the frequency range between 89 and 96 ghz ; mirrors are used to guide and focus the beam . the object is moved through the focus point to scan the object pixel by pixel . depending on the actual material some parts of the waves are reflected , while the other parts penetrate the object . a single-antenna transmit and receive module is used for illumination and measurement of the material-specific reflected power story_separator_special_tag autonomous vehicles ( auto-v ) have received wide attention for their potential to provide much safer driving than human drivers . autonomous parking is a tricky issue because the parking environment is often too complex to be fully perceived . the information acquired with millimeter wave synthetic aperture radar ( sar ) can be helpful to solve this problem . in this paper , an information perception method for parking is presented . it adopts a visual saliency detection method based on spectral residual to obtain the locations of the vehicles and empty parking sites , and uses a morphological filter to judge the postures of the vehicles . then , the vehicle types are classified based on principal component analysis ( pca ) and support vector machine ( svm ) . finally , the suitable parking sites are obtained according to the parking information perception .
this study can be used to search for available parking sites and to confirm whether obstacles are present in an empty parking site , which can assist the safe parking of an autonomous vehicle . experimental results based on measured automotive millimeter wave sar images show the effectiveness and accuracy of story_separator_special_tag this paper proposes a parking space information monitoring system using millimeter wave synthetic aperture radar ( sar ) based on an unmanned aerial vehicle ( uav ) . the parking space information that people are concerned about includes vacant parking places , parking places occupied by obstacles and places occupied by parked vehicles . specifically , free parking space detection is an important module for the parking guidance system ( pgs ) that can help drivers find parking space efficiently . in this system , we first obtain high resolution sar images of parking lots . then , in order to identify the free parking space , the maximally stable extremal region ( mser ) method is exploited to extract the candidate regions occupied by vehicles from the millimeter wave sar images . next , the system utilizes a visual saliency detection method to extract obstacles from the non-parked parking space acquired by pre-detection . ultimately , the three types of information are determined , including vacant parking space , parking space occupied by obstacles and the parked places . experimental results prove that the integrated scheme performs well in parking information determination . story_separator_special_tag a 3d surface reconstruction method based on the unwrapped 2d phase grid of an 80 ghz synthetic aperture radar scan is presented . the introduction is followed by an explanation of the 2d phase unwrapping algorithm used . the results show the visualization of the radar scan , which enables the detection of defects which are less than 10 mm in diameter and 30 µm in depth . this underlines the potential of mmwave radar scans for high quality monitoring systems . story_separator_special_tag recent progress in complementary metal-oxide semiconductor ( cmos ) based frequency-modulated continuous-wave ( fmcw ) radars has made it possible to design low-cost and low-power millimeter-wave ( mmwave ) sensors . as a result , there is a strong desire to exploit the progress in mmwave sensors in a wide range of imaging applications including medical , automotive , and security . in this paper , we present a low-cost high-resolution mmwave imager prototype that combines commercially available 77 ghz system-on-chip fmcw radar sensors and synthetic aperture radar ( sar ) signal processing techniques for concealed item detection . to create a synthetic aperture over a target scene , the imager is constructed with a two-axis motorized rail system which can synthesize a large aperture in both horizontal and vertical directions . our prototype system is described in detail along with signal processing techniques for two-dimensional ( 2-d ) image reconstruction . the imaging examples of concealed items in various scenarios confirm that our low-cost prototype has great potential for high-resolution imaging tasks in security applications . story_separator_special_tag radar and millimeter wave methods provide a means of remote and noncontacting examination of structures through controlled electromagnetic ( em ) interactions . metallic and nonmetallic structures reflect and scatter em waves impinging at the outer surfaces . nonmetallic , i.e .
, dielectric , materials allow em waves to penetrate the surface , and scatter or reflect off of subsurface objects and features . actively measuring surface and subsurface reflectivity and scattering by the controlled launching and receiving of em waves provides information that , when suitably processed , can indicate surface and subsurface feature geometry , material properties , and overall structural condition . these active em technical tests are often called ground penetrating radar for em wave frequencies less than about 10 ghz and millimeter wave methods for those at higher frequencies . this chapter describes the range of uses , operating principles , signal processing , and data interpretation for these methods . story_separator_special_tag active millimeter wave ( ammw ) imaging techniques have been widely applied in the security industry because privacy concerns are kept under control and there are no health hazards . in fact , how to automatically and precisely detect concealed objects on the human body in ammw images is one key issue in industrial security scanner systems . in this paper , we first investigate deep-learning-based object detection approaches for ammw images , and then we develop a concealed object detector for the security system in the airport . for our particular application , the concealed objects include several small items such as the knife , the lighter , the phone , and so on . however , most of the current deep-learning-based methods focus mainly on detecting large objects , which occupy a large part of an image , resulting in unsatisfactory performance on small objects . the reason is that the signal from a small region is rather weak before being fed into the detector . to address this issue , we first enlarge the resolution of the feature map by applying dilated convolution . then , we propose a context embedding object detection story_separator_special_tag this paper outlines the basic principles of synthetic aperture radar ( sar ) . matched filter approaches for processing the received data and the pulse compression technique are presented . besides the sar radar equation , the linear frequency modulation ( lfm ) waveform and matched filter response are also discussed . finally , system design considerations for various parameters and aspects are also highlighted . story_separator_special_tag accurate , precise positioning at millimeter wave frequencies is possible due to the large available bandwidth that permits precise on-the-fly time of flight measurements using conventional air interface standards . in addition , narrow antenna beamwidths may be used to determine the angles of arrival and departure of the multipath components between the base station and mobile users . by combining accurate temporal and angular information of multipath components with a 3-d map of the environment ( that may be built by each user or downloaded a-priori ) , robust localization is possible , even in non-line-of-sight environments . in this work , we develop an accurate 3-d ray tracer for an indoor office environment and demonstrate how the fusion of angle of departure and time of flight information in concert with a 3-d map of a typical large office environment provides a mean accuracy of 12.6 cm in line-of-sight and 16.3 cm in non-line-of-sight , over 100 receiver distances ranging from 1.5 m to 24.5 m using a single base station .
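in the line-of-sight case , the fusion of time of flight and angle information described above reduces to a polar-to-cartesian conversion from the base station . a 2d sketch follows ; the non-line-of-sight handling via the 3-d map ( reflecting paths off mapped surfaces ) is omitted , and the numbers are illustrative .

```python
import numpy as np

C = 3e8  # speed of light, m/s

def los_position(bs_xy, aod_rad, tof_s):
    """User position from a single base station, line-of-sight case:
    travel distance from time of flight, direction from angle of departure.
    """
    d = C * tof_s
    return np.asarray(bs_xy) + d * np.array([np.cos(aod_rad), np.sin(aod_rad)])

bs = (0.0, 3.0)
print(los_position(bs, np.deg2rad(-30.0), 20e-9))  # a user ~6 m away at -30 degrees
```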
we show how increasing the number of base stations improves the average non-line-of-sight position location accuracy to 5.5 cm at 21 locations with a maximum propagation distance story_separator_special_tag vehicle positioning based on gps is limited due to multipath and blockage . 5g mmwave signals can provide an attractive complement , as it is possible to estimate the state of a vehicle ( position and heading ) from transmissions from a single base station . we propose a bayesian 5g mmwave tracking filter , which explicitly relies on mapping the radio environment . the filter thus solves a novel type of simultaneous localization and mapping problem , which enables estimating not only the vehicle heading and position , but also its clock bias . story_separator_special_tag simultaneous localization and environment mapping ( slam ) is at the core of robotic mapping and navigation as it simultaneously constructs the unknown environment and localizes the agent within it . however , in millimeter wave ( mmwave ) research , slam is still in its infancy . this paper presents a first-of-its-kind approach to mapping an indoor environment based on rss , time-difference-of-arrival , and angle-of-arrival measurements . we introduce mosaic as a new approach for slam in an indoor environment by exploiting the map-based channel model . more precisely , we perform localization and environment inference through obstacle detection and dimensioning . the concept of virtual anchor nodes ( vans ) , known in the literature as the mirrors of the real anchors with respect to the obstacles in the environment , is explored . then , based on these vans , the obstacle positions and dimensions are estimated by detecting the zones of path obstruction , points of reflection , and obstacle vertices . then , an extended kalman filter is adapted to the studied environment to improve the estimation of the points of reflection and hence the mapping accuracy . cramer rao lower bounds are also derived to find story_separator_special_tag [ edited volume contents ] sensor array processing ( mostafa kaveh ) ; complex random variables and stochastic processes ( daniel r. fuhrmann ) ; beamforming techniques for spatial filtering ( barry van veen and kevin m. buckley ) ; subspace-based direction-finding methods ( egemen gonen and jerry m. mendel ) ; esprit and closed-form 2d angle estimation with planar arrays ( martin haardt , michael d. zoltowski , cherian p. mathews , and javier ramos ) ; a unified instrumental variable approach to direction finding in colored noise fields ( p. stoica , mats viberg , m. wong , and q. wu ) ; electromagnetic vector-sensor array processing ( arye nehorai and eytan paldi ) ; subspace tracking ( r. d. degroat , e. m. dowling , and d. a. linebarger ) ; detection : determining the number of sources ( douglas b. williams ) ; array processing for mobile communications ( a. paulraj and c. b. papadias ) ; beamforming with correlated arrivals in mobile communications ( victor a. n. barroso and jose m. f. moura ) ; peak-to-average power ratio reduction ( robert j. baxley and g. tong zhou ) ; space-time adaptive processing for airborne surveillance radar ( hong wang ) ; nonlinear and fractal signal processing ( alan v. oppenheim and gregory w. wornell ) ; chaotic signals and signal processing ( alan v. oppenheim and kevin m. cuomo ) ; nonlinear maps ( steven h. isabelle and gregory w. wornell ) ; fractal signals ( gregory w. wornell )
story_separator_special_tag in this paper , we propose a novel method called patch-based mixture of gaussians-low rank matrix factorization ( patch-based mog-lrmf ) to detect concealed objects in body images acquired by a millimeter wave scanner during airport security procedures . concealed objects vary significantly in size and type , which makes them comparatively random . mog makes it possible to model complex and uncertain information , which exactly matches the characteristics of such objects . however , related work is only able to model objects with pixel-wise information , which neglects the structure of the object ; our patch-based mog utilizes the structure and uncertainty of objects to detect concealed items . we demonstrate the effectiveness of our approach with extensive experimental results . we find that many small objects of different materials can be detected with our method , which performs well on relatively complicated data . story_separator_special_tag in this paper , we propose a method to accurately estimate the position of a vehicle by millimeter wave radar ( mwr ) . in recent years , many autonomous driving techniques have been actively developed . in order to realize a safe autonomous driving system , mwr plays an important role due to its inherent robustness against external circumstances . however , mwr has low spatial resolution . to achieve highly accurate estimation , we propose a model-based matching approach for the point cloud observed from mwr . simulation experiments show that accurate estimation of the position of a moving vehicle can be obtained . story_separator_special_tag radar-based noncontact heart rate measurement is attracting increasing attention . however , the measurement accuracy is dependent on the position and posture of the target body . in this work , we generate radar echo waveforms using a physical optics approximation and a numerical human body model that was obtained using a depth camera . millimeter-wave array radar is used to form an optimal antenna pattern directed towards the most dominant scattering center on the human body . we demonstrate that the proposed method can track a moving human body automatically and accurately , even when the body is in motion , thus enabling noncontact heartbeat measurement . story_separator_special_tag we report results from millimeter wave vehicle-to-infrastructure ( v2i ) channel measurements conducted on sept. 25 , 2018 in an urban street environment , downtown vienna , austria . measurements of a frequency-division multiplexed multiple-input single-output channel have been acquired with a time-domain channel sounder at 60 ghz with a bandwidth of 100 mhz and a frequency resolution of 5 mhz . two horn antennas were used on a moving transmitter vehicle : one horn emitted a beam towards the horizon and the second horn emitted an elevated beam at 15-degree up-tilt . this configuration was chosen to assess the impact of beam elevation on v2i communication channel characteristics : propagation loss and sparsity of the local scattering function in the delay-doppler domain . the measurement results within urban speed limits show high sparsity in the delay-doppler domain . story_separator_special_tag active millimeter-wave ( mmw ) near-field human imaging is a means for concealed object detection .
a method of concealed object detection based on fast wavelet transforms ( fwt ) for active mmw images is presented , motivated by the image characteristics , which include high resolution , characteristics that vary across different parts of the human body , mutual imaging influence among the human body , concealed objects and other objects , and different textures of concealed objects . image segmentation utilizing the results of edge detection based on fwt is conducted , and preliminary segmentation results can be obtained . some kinds of concealed objects can be detected by comparing the gray value of concealed objects to the average gray value of the human body . experiments on actually acquired images of concealed objects are conducted , yielding an accuracy rate of 80.92 % and a false alarm rate of 11.78 % , illustrating the effectiveness of the method proposed in this paper . story_separator_special_tag millimeter wave imaging technology has been used in human body security inspection in public . compared to traditional x-ray imaging , it is more efficient and does no harm to the human body . thus , automatic dangerous object detection for millimeter wave images is useful to greatly save human labor . however , due to technological limitations , millimeter wave images usually have low resolution and high noise , so dangerous objects hidden on the human body are hard to find . in addition , the detection speed is of great significance in practice . this paper proposes an efficient method for dangerous object detection in millimeter wave images . it is based on a single unified cnn ( convolutional neural network ) . compared to traditional region-based methods like rcnn , by setting some default anchors over different aspect ratios at the last feature map , it is able to frame object detection as a regression problem to these anchors while predicting class probabilities . the model achieves 70.9 map at 50 frames per second on a millimeter wave image dataset , obtaining better performance than other methods and showing promise for practical use in the future . story_separator_special_tag millimeter wave imaging technology has become a leading field owing to its excellent performance at airports and other secure locations . this paper tries to detect items concealed on the human body with deep learning methods . deep learning needs a large number of images to achieve an excellent result . as a result of that , this paper first collected a large set of millimeter wave images and established a corresponding human dataset for detection , then proposed a detection method based on threshold segmentation and faster-rcnn . the experimental results on testing datasets validate the effectiveness of the proposed detection method . story_separator_special_tag short-range compact radar systems are non-invasive sensors that can locate the position and also monitor minute vibrations of targets . they find wide use in healthcare monitoring , medical imaging , surveillance , industrial sensing , occupancy sensing and gesture sensing applications . in this paper , we present a compact short-range 60 ghz low-power , system-integrated radar system which can simultaneously operate several functional modes such as range-doppler imaging , range-cross range imaging and doppler interferometric mode without loss of performance . we also present the mechanism and processing to interleave various modes and experimentally validate the performance of the system .
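the doppler interferometric mode mentioned above recovers sub-wavelength radial displacement from the phase of the received signal : for two-way propagation , x(t) = lambda * unwrap(phase(t)) / (4*pi) . a sketch on synthetic data follows ; the carrier frequency and chest-motion parameters are illustrative .

```python
import numpy as np

def displacement_from_phase(iq, wavelength):
    """Relative radial displacement from a complex radar time series.
    Two-way propagation: phase = 4*pi*x / lambda, so
    x(t) = lambda * unwrap(angle(iq)) / (4*pi).
    """
    phase = np.unwrap(np.angle(iq))
    return wavelength * phase / (4 * np.pi)

# synthetic 60 GHz return from a chest moving 0.5 mm peak at 1.2 Hz
lam = 3e8 / 60e9                        # ~5 mm wavelength
t = np.arange(0, 5, 0.01)
x_true = 0.5e-3 * np.sin(2 * np.pi * 1.2 * t)
iq = np.exp(1j * 4 * np.pi * x_true / lam)
x_est = displacement_from_phase(iq, lam)
print(np.allclose(x_est - x_est.mean(), x_true - x_true.mean(), atol=1e-6))  # True
```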
story_separator_special_tag this paper describes the results of our evaluation of a pedestrian 's radio wave reflection characteristics . the reflection characteristics of radio waves from a pedestrian were measured as part of the effort to improve the pedestrian detection performance of the radar sensor . a pedestrian 's radio wave reflection intensity is low , at about 15-20 db less than that of the rear of a vehicle , and can vary by as much as 20 db . evaluating these characteristics in detail is a prerequisite to the development of a radar sensor that is capable of detecting pedestrians reliably . story_separator_special_tag this paper describes a millimeter-wave ( mm-wave ) radar system that has been constructed to simultaneously range and detect humans at distances up to 82 meters . this is done by utilizing a composite signal consisting of two waveforms : a wideband noise waveform and a single tone . these waveforms are summed together and transmitted simultaneously . matched filtering of the received and transmitted noise signals is performed to range targets with high resolution , while the received single tone signal is used for doppler analysis . the doppler measurements are used to distinguish between different human movements using characteristic micro-doppler signals . using hardware and software filters allows for simultaneous processing of both the noise and doppler waveforms . our measurements establish the mm-wave system 's ability to detect humans up to and beyond 80 meters and distinguish between different human movements . in this paper , we describe the architecture of the multi-modal mm-wave radar system and present results on human target ranging and doppler characterization of human movements . in addition , data are presented showing the differences in reflected signal strength between a human with and without a concealed metallic object . story_separator_special_tag using a 94-ghz millimeter-wave interferometer , we are able to calculate the relative displacement of an object . when aimed at the chest of a human subject , we measure the minute motions of the chest due to cardiac activity . after processing the data using a wavelet multiresolution decomposition , we are able to obtain a signal with peaks at heartbeat temporal locations . in order for these heartbeat temporal locations to be accurate , the reflected signal must not be very noisy . since there is noise in all but the most ideal conditions , we created a statistical algorithm in order to compensate for unconfident temporal locations as computed by the wavelet transform . by analyzing the statistics of the peak locations , we fill in missing heartbeat temporal locations and eliminate superfluous ones . along with this , we adapt the processing procedure to the current signal , as opposed to using the same method for all signals . with this method , we are able to find the heart rate of ambulatory subjects without any physical contact . story_separator_special_tag it is well known that speech enhancement using spectral filtering will result in residual noise . residual noise which is musical in nature is very annoying to human listeners . many speech enhancement approaches assume that the transform coefficients are independent of one another and can thus be attenuated separately , thereby ignoring the correlations that exist between different time frames and within each frame .
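before the proposed hwsf system is described , here is a minimal sketch of the baseline wiener-style spectrogram filtering such systems build on : a per-bin gain snr/(snr+1) , with the noise psd estimated from leading frames assumed to be speech-free . the stft length and noise-frame count are illustrative assumptions , and the hybrid filter and multi-blade post-processor of the paper are not modeled .

```python
import numpy as np
from scipy.signal import stft, istft

def wiener_enhance(x, fs, noise_frames=10, nperseg=512):
    """Single-channel Wiener-style spectral filtering.

    Noise PSD is estimated from the first 'noise_frames' STFT frames
    (assumed speech-free); each bin is scaled by snr / (snr + 1).
    """
    f, t, X = stft(x, fs, nperseg=nperseg)
    noise_psd = (np.abs(X[:, :noise_frames]) ** 2).mean(axis=1, keepdims=True)
    # crude a-priori SNR estimate: a-posteriori SNR minus one, floored at zero
    snr = np.maximum(np.abs(X) ** 2 / (noise_psd + 1e-12) - 1.0, 0.0)
    gain = snr / (snr + 1.0)
    _, y = istft(gain * X, fs, nperseg=nperseg)
    return y

# toy usage: a tone buried in noise, with a noise-only lead-in
rng = np.random.default_rng(0)
fs = 16000
t = np.arange(fs) / fs
clean = np.sin(2 * np.pi * 440 * t) * (t > 0.3)   # signal starts after 0.3 s
noisy = clean + 0.3 * rng.normal(size=fs)
enhanced = wiener_enhance(noisy, fs)
```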
this paper proposes a single channel speech enhancement system which exploits such correlations between the different time frames to further reduce residual noise . unlike other 2d speech enhancement techniques which apply a post-processor after some classical algorithms such as spectral subtraction , the proposed approach uses a hybrid wiener spectrogram filter ( hwsf ) for effective noise reduction , followed by a multi-blade post-processor which exploits the 2d features of the spectrogram to preserve the speech quality and to further reduce the residual noise . this results in pleasant sounding speech for human listeners . spectrogram comparisons show that in the proposed scheme , musical noise is significantly reduced . the effectiveness of the proposed algorithm is further confirmed through objective assessments and informal subjective listening tests . story_separator_special_tag robust detection and tracking of objects is crucial for the deployment of autonomous vehicle technology . image based benchmark datasets have driven development in computer vision tasks such as object detection , tracking and segmentation of agents in the environment . most autonomous vehicles , however , carry a combination of cameras and range sensors such as lidar and radar . as machine learning based methods for detection and tracking become more prevalent , there is a need to train and evaluate such methods on datasets containing range sensor data along with images . in this work we present nutonomy scenes ( nuscenes ) , the first dataset to carry the full autonomous vehicle sensor suite : 6 cameras , 5 radars and 1 lidar , all with full 360 degree field of view . nuscenes comprises 1000 scenes , each 20s long and fully annotated with 3d bounding boxes for 23 classes and 8 attributes . it has 7x as many annotations and 100x as many images as the pioneering kitti dataset . we define novel 3d detection and tracking metrics . we also provide careful dataset analysis as well as baselines for lidar and image based detection and story_separator_special_tag adaptive radar target detection in a noise or clutter environment is a very important function in every radar receiver . in almost all detection procedures the received echo signal amplitude is simply compared with a certain threshold . the main objective in target detection is to maximize the target detection probability under the constraint of a very low and constant false alarm rate ( cfar ) . the noise and clutter background will be described by a statistical model with e.g . independent and identically rayleigh or exponentially distributed random variables of known average noise power . but in practical applications this average noise or clutter power is absolutely unknown and can additionally vary over range , time and azimuth angle . therefore this paper describes some of the so-called range cfar techniques for several different background signal situations in which the average noise power and some other additional statistical parameters are assumed to be unknown . all range cfar techniques therefore combine an estimation procedure ( to get precise or estimated knowledge about the noise power ) and a decision step applying an amplitude threshold to the echo signal amplitude inside the test cell . this general detection scheme
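the estimation-plus-threshold scheme described in the cfar abstract above can be made concrete with a basic cell-averaging cfar ( ca-cfar ) detector : the noise power in each cell under test is estimated from surrounding training cells , and the test amplitude is compared with a scaled version of that estimate . the sketch below is a minimal illustration ; the window sizes and false alarm setting are assumptions , not values from the paper .

```python
import numpy as np

def ca_cfar(power, n_train=16, n_guard=2, pfa=1e-6):
    """power : 1d array of squared-magnitude samples along range ."""
    n = len(power)
    half = n_train // 2
    # threshold multiplier for exponentially distributed noise :
    # pfa = (1 + alpha / n_train) ** (-n_train)
    alpha = n_train * (pfa ** (-1.0 / n_train) - 1.0)
    detections = np.zeros(n, dtype=bool)
    for i in range(half + n_guard, n - half - n_guard):
        lead = power[i - n_guard - half : i - n_guard]
        lag = power[i + n_guard + 1 : i + n_guard + 1 + half]
        noise_est = (lead.sum() + lag.sum()) / n_train
        detections[i] = power[i] > alpha * noise_est
    return detections

# usage : a strong target at range bin 100 in exponential clutter
x = np.random.exponential(1.0, 256)
x[100] += 50.0
hits = ca_cfar(x)
```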
story_separator_special_tag an analysis of the probability of target detection for a clutter map cfar using digital exponential filtering has been performed . general performance equations are derived . the probability of detection versus signal-to-noise ratio is plotted for a false alarm probability of 1.e-06 for several weight values . the cfar loss is plotted for a detection probability of 0.9 and false alarm probabilities of 1.e-06 and 1.e-08 . story_separator_special_tag the problem of extracting features from given input data is of critical importance for the successful application of machine learning . feature extraction , as usually understood , seeks an optimal transformation from input data into a ( typically real-valued ) feature vector that can be used as an input for a learning algorithm . over time , this problem has been attacked using a growing number of diverse techniques that originated in separate research communities , including feature selection , dimensionality reduction , manifold learning , distance metric learning and representation learning . the goal of this paper is to contrast and compare feature extraction techniques coming from different machine learning areas , discuss the modern challenges and open problems in feature extraction and suggest novel solutions to some of them . story_separator_special_tag this chapter introduces the reader to the various aspects of feature extraction covered in this book . section 1 reviews definitions and notations and proposes a unified view of the feature extraction problem . section 2 is an overview of the methods and results presented in the book , emphasizing novel contributions . section 3 provides the reader with an entry point in the field of feature extraction by showing small revealing examples and describing simple but effective algorithms . finally , section 4 introduces a more theoretical formalism and points to directions of research and open problems . story_separator_special_tag the success of machine learning algorithms generally depends on data representation , and we hypothesize that this is because different representations can entangle and hide more or less the different explanatory factors of variation behind the data . although specific domain knowledge can be used to help design representations , learning with generic priors can also be used , and the quest for ai is motivating the design of more powerful representation-learning algorithms implementing such priors . this paper reviews recent work in the area of unsupervised feature learning and deep learning , covering advances in probabilistic models , auto-encoders , manifold learning , and deep networks . this motivates longer-term unanswered questions about the appropriate objectives for learning good representations , for computing representations ( i.e. , inference ) , and the geometrical connections between representation learning , density estimation and manifold learning . story_separator_special_tag background : the acceptance of virtual preclinical testing of control algorithms is growing and thus also the need for robust and reliable models . models based on ordinary differential equations ( odes ) can rarely be validated with standard statistical tools . stochastic differential equations ( sdes ) offer the possibility of building models that can be validated statistically and that are capable of predicting not only a realistic trajectory , but also the uncertainty of the prediction . in an sde , the prediction error is split into two noise terms . this separation ensures that the errors are uncorrelated and provides the possibility to pinpoint model deficiencies .
methods : an identifiable model of the glucoregulatory system in a type 1 diabetes mellitus ( t1dm ) patient is used as the basis for development of a stochastic-differential-equation-based grey-box model ( sde-gb ) . the parameters are estimated on clinical data from four t1dm patients . the optimal sde-gb is determined from likelihood-ratio tests . finally , parameter tracking is used to track the variation in the time-to-peak-of-meal-response parameter . results : we found that the transformation of the ode model into an sde-gb story_separator_special_tag generally , in most applied fields , dynamic state space models are nonlinear with non-gaussian noise . however , the kalman filter , as a famous and simple algorithmic filter , can only estimate linear state space models with gaussian noise . the extended kalman filter and the unscented kalman filter still have limitations and are therefore not accurate enough for nonlinear estimation . the bayesian filtering approach based on sequential monte carlo sampling is called the particle filter . particle filters were developed and widely applied in various areas because of their ability to process observations represented by nonlinear state-space models where the noise of the models can be non-gaussian . however , particle filters suffer from two long-standing problems referred to as sample degeneracy and impoverishment . to fight these problems , a resampling step is necessary . in this review work , a variety of particle filter resampling methods as well as their characteristics and algorithms are introduced and discussed , such as sampling-importance resampling , the auxiliary particle filter , optimal resampling and so on , to combat sample degeneracy and impoverishment . finally , efficient importance sampling , as a more accurate story_separator_special_tag sequential filtering provides a suitable framework for estimating and updating the unknown parameters of a system as data become available . the foundations of sequential bayesian filtering with emphasis on practical issues are first reviewed , covering both kalman and particle filter approaches . filtering is demonstrated to be a powerful estimation tool , employing prediction from previous estimates and updates stemming from physical and statistical models that relate acoustic measurements to the unknown parameters . ocean acoustic applications are then reviewed , focusing on source tracking , estimation of environmental parameters evolving in time or space , and frequency tracking . spatial arrival time tracking is illustrated with 2006 shallow water experiment data .
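the prediction-update-resampling loop discussed in the two filtering abstracts above can be illustrated with a minimal bootstrap particle filter using systematic resampling ; the scalar state-space model , its noise levels and the particle count are illustrative assumptions .

```python
import numpy as np

rng = np.random.default_rng(0)

def systematic_resample(weights):
    """systematic resampling : one uniform draw , evenly spaced positions ."""
    n = len(weights)
    positions = (rng.random() + np.arange(n)) / n
    return np.searchsorted(np.cumsum(weights), positions)

def bootstrap_pf(observations, n_particles=500, q=1.0, r=1.0):
    """bootstrap filter for x_t = 0.9 x_{t-1} + v , y_t = x_t + w ."""
    particles = rng.normal(0.0, 1.0, n_particles)
    estimates = []
    for y in observations:
        # predict : propagate particles through the assumed dynamics
        particles = 0.9 * particles + rng.normal(0.0, np.sqrt(q), n_particles)
        # update : weight particles by the assumed gaussian likelihood
        w = np.exp(-0.5 * (y - particles) ** 2 / r)
        w /= w.sum()
        estimates.append(np.sum(w * particles))
        # resample to fight sample degeneracy
        particles = particles[systematic_resample(w)]
    return np.array(estimates)

# usage : simulate the assumed model and filter its observations
x, ys = 0.0, []
for _ in range(100):
    x = 0.9 * x + rng.normal()
    ys.append(x + rng.normal())
xhat = bootstrap_pf(np.array(ys))
```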
the research work presented in this thesis is concerned with the analysis of the human body as a calibration platform for estimation of a pinhole camera model used in augmented reality environments . story_separator_special_tag the postural sway in 24 subjects performing a boresight calibration task on a large format head-up display is studied to estimate the impact of human limits on boresight calibration precision and ultimately on static registration errors . the dependent variables , accumulated sway path and omni-directional standard deviation , are analyzed for the calibration exercise and compared against control cases where subjects are quietly standing with eyes open and eyes closed . findings show that postural stability significantly deteriorates during boresight calibration compared to when the subject is not occupied with a visual task . analysis over time shows that the calibration error can be reduced by 39 % if calibration measurements are recorded in a three second interval at approximately 15 seconds into the calibration session as opposed to an initial reading . furthermore , parameter optimization on experiment data suggests a weibull distribution as a possible error description and estimation for omni-directional calibration precision . this paper extends previously published preliminary analyses and the conclusions are verified with experiment data that has been corrected for subject inverted pendulum compensatory head rotation by providing a better estimate of the position of the eye . with correction the statistical findings are story_separator_special_tag the correct spatial registration between virtual and real objects in optical see-through augmented reality implies accurate estimates of the user 's eyepoint relative to the location and orientation . story_separator_special_tag the parameter estimation variance of the single point active alignment method ( spaam ) is studied through an experiment where 11 subjects are instructed to create alignments using an optical see-through head mounted display ( osthmd ) such that three separate correspondence point distributions are acquired . modeling the osthmd and the subject 's dominant eye as a pinhole camera , findings show that a correspondence point distribution well distributed along the user 's line of sight yields less variant parameter estimates . the estimated eye point location is studied in particular detail . the findings of the experiment are complemented with simulated data which show that image plane orientation is sensitive to the number of correspondence points . the simulated data also illustrate some interesting properties of the numerical stability of the calibration problem as a function of alignment noise , number of correspondence points , and correspondence point distribution .
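several of the abstracts above model the osthmd and the user 's eye as a pinhole camera estimated from 2d-3d correspondences . the sketch below shows the standard direct linear transformation ( dlt ) estimation of a 3x4 projection matrix , the core computation behind spaam-style calibration ; the synthetic correspondences are illustrative assumptions , and real data would normally be normalized first for numerical stability .

```python
import numpy as np

def dlt_projection(world_pts, image_pts):
    """estimate a 3x4 projection matrix : each 2d-3d correspondence
    contributes two rows to a homogeneous system a p = 0 , which is
    solved by the smallest right singular vector ."""
    rows = []
    for (x, y, z), (u, v) in zip(world_pts, image_pts):
        p = np.array([x, y, z, 1.0])
        rows.append([*p, 0, 0, 0, 0, *(-u * p)])
        rows.append([0, 0, 0, 0, *p, *(-v * p)])
    _, _, vt = np.linalg.svd(np.asarray(rows))
    return vt[-1].reshape(3, 4)

def project(p, pts):
    ph = p @ np.vstack([pts.T, np.ones(len(pts))])
    return (ph[:2] / ph[2]).T

# usage : recover a known projection from noiseless correspondences
p_true = np.hstack([np.eye(3), [[0.1], [0.2], [2.0]]])
xyz = np.random.rand(12, 3)
uv = project(p_true, xyz)
p_est = dlt_projection(xyz, uv)
```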
story_separator_special_tag uncertainty in the measurement of point correspondences negatively affects the accuracy and precision of the calibration of head-mounted displays ( hmd ) . in general , the distribution of alignment errors for optical see-through calibration is not isotropic , and one can estimate its distribution based on the interaction requirements of a given calibration process and the user 's measurable head motion and hand-eye coordination characteristics . current calibration methods , however , mostly utilize the direct linear transformation ( dlt ) method which minimizes euclidean distances for hmd projection matrix estimation , disregarding the anisotropicity in the alignment errors . we utilize the error covariance in order to take the anisotropic nature of the error distribution into account . the main hypothesis of this study is that using the mahalanobis distance within the nonlinear optimization can improve the accuracy of the hmd calibration . the simulation results indicate that our new method outperforms the standard dlt method both in accuracy and precision , and is more robust against user alignment errors . to the best of our knowledge , this is the first time that anisotropic noise has been accommodated in optical see-through hmd calibration . story_separator_special_tag in augmented reality , see-through hmds superimpose virtual 3d objects on the real world . this technology has the potential to enhance a user 's perception and interaction with the real world . however , many augmented reality applications will not be accepted until we can accurately register virtual objects with their real counterparts . in previous systems , such registration was achieved only from a limited range of viewpoints , when the user kept his head still . this paper offers improved registration in two areas . first , our system demonstrates accurate static registration across a wide variety of viewing angles and positions . an optoelectronic tracker provides the required range and accuracy . three calibration steps determine the viewing parameters . second , dynamic errors that occur when the user moves his head are reduced by predicting future head locations . inertial sensors mounted on the hmd aid head-motion prediction . accurate determination of prediction distances requires low-overhead operating systems and eliminating unpredictable sources of latency . on average , prediction with inertial sensors produces errors 2-3 times lower than prediction without inertial sensors and 5-10 times lower than using no prediction at all . future steps story_separator_special_tag this paper surveys the field of augmented reality ( ar ) , in which 3d virtual objects are integrated into a 3d real environment in real time . it describes the medical , manufacturing , visualization , path planning , entertainment , and military applications that have been explored . this paper describes the characteristics of augmented reality systems , including a detailed discussion of the tradeoffs between optical and video blending approaches . registration and sensing errors are two of the biggest problems in building effective augmented reality systems , so this paper summarizes current efforts to overcome these problems . future directions and areas requiring further research are discussed . this survey provides a starting point for anyone interested in researching or using augmented reality . story_separator_special_tag the authors describe the design and prototyping steps they have taken toward the implementation of a heads-up , see-through , head-mounted display ( hudset ) . combined with head position sensing and a real world registration system , this technology allows a computer-produced diagram to be superimposed and stabilized on a specific position on a real-world object . successful development of the hudset technology will enable cost reductions and efficiency improvements in many of the human-involved operations in aircraft manufacturing , by eliminating templates , formboard diagrams , and other masking devices . story_separator_special_tag accommodative depth cues , a wide field of view , and ever-higher resolutions all present major hardware design challenges for near-eye displays .
optimizing a design to overcome one of these challenges typically leads to a trade-off in the others . we tackle this problem by introducing an all-in-one solution : a new wide-field-of-view , gaze-tracked near-eye display for augmented reality applications . the key component of our solution is the use of a single see-through , varifocal deformable membrane mirror for each eye , reflecting a display . the membranes are controlled by airtight cavities and change the effective focal power to present a virtual image at a target depth plane which is determined by the gaze tracker . the benefits of using the membranes include a wide field of view ( 100° diagonal ) and fast depth switching ( from 20 cm to infinity within 300 ms ) . our subjective experiment verifies the prototype and demonstrates its potential benefits for near-eye see-through displays . story_separator_special_tag projective geometry ; modelling and calibrating cameras ; edge detection ; representing geometric primitives and their uncertainty ; stereo vision ; determining discrete motion from points and lines ; tracking tokens over time ; motion fields of curves ; interpolating and approximating three-dimensional data ; recognizing and locating objects and places ; answers to problems . appendices : constrained optimization ; some results from algebraic geometry ; differential geometry . story_separator_special_tag ever since the development of the first applications in image-guided therapy ( igt ) , the use of head-mounted displays ( hmds ) was considered an important extension of existing igt technologies . several approaches to utilizing hmds and modified medical devices for augmented reality ( ar ) visualization were implemented . these approaches include video see-through systems , semitransparent mirrors , modified endoscopes , and modified operating microscopes . common to all these devices is the fact that a precise calibration between the display and three-dimensional coordinates in the patient 's frame of reference is compulsory . in optical see-through devices based on complex optical systems such as operating microscopes or operating binoculars - as in the case of the system presented in this paper - this procedure can become increasingly difficult since precise camera calibration for every focus and zoom position is required . we present a method for fully automatic calibration of the operating binocular varioscope™ m5 ar for the full range of zoom and focus settings available . our method uses a special calibration pattern , a linear guide driven by a stepping motor , and special calibration software . the overlay error in the calibration plane was found story_separator_special_tag augmented reality overlays computer generated images over the real world . these images have to be generated using transformations which correctly project a point in virtual space onto its corresponding point in the real world . we present a simple and fast calibration scheme for head-mounted displays ( hmds ) , which does not require additional instrumentation or complicated procedures . the user is interactively guided through the calibration process , allowing even inexperienced users to calibrate the display to their eye distance and head geometry . the calibration is stable - meaning that slight errors made by the user do not result in gross miscalibrations - and easily applicable for see-through and video-based hmds .
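once such a calibration has produced a pinhole projection matrix , user-specific geometry can be read out of it : the center of projection is the modeled eyepoint ( its right null space ) , and an rq decomposition separates the intrinsics from the rotation . a minimal sketch follows ; the input is assumed to be a valid 3x4 pinhole projection , and note that some later papers in this section deliberately keep the matrix undecomposed . for the p_true used in the dlt sketch above , the recovered center is -t , as expected for a matrix of the form [ i | t ] .

```python
import numpy as np

def camera_center(p):
    """the homogeneous solution of p c = 0 , i.e . the eyepoint ."""
    _, _, vt = np.linalg.svd(p)
    c = vt[-1]
    return c[:3] / c[3]

def decompose(p):
    """split p = k [ r | t ] : rq decomposition of the left 3x3 block
    computed via a flipped qr , with k forced to a positive diagonal ."""
    m = p[:, :3]
    q, r_ = np.linalg.qr(np.flipud(m).T)
    k = np.flipud(np.fliplr(r_.T))
    r = np.flipud(q.T)
    s = np.diag(np.sign(np.diag(k)))
    return k @ s, s @ r  # intrinsics , rotation
```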
story_separator_special_tag augmented reality ( ar ) superimposes computer-generated virtual images on the real world to allow users to explore both virtual and real worlds simultaneously . for a successful augmented reality application , an accurate registration of a virtual object with its physical counterpart has to be achieved , which requires precise knowledge of the projection information of the viewing device . the paper proposes a fast and easy off-line calibration strategy based on well-established camera calibration methods . our method does not need exhaustive effort on the collection of world-to-image correspondence data . all the correspondence data are sampled with an image-based method and are able to achieve sub-pixel accuracy . the method is applicable for all ar systems based on an optical see-through head-mounted display ( hmd ) , though we took a head-mounted projective display ( hmpd ) as the example . we first review the calibration requirements for an augmented reality system and the existing calibration methods . then a new view projection model for the optical see-through hmd is addressed in detail , and the proposed calibration method and experimental results are presented . finally , the evaluation experiments and error analysis are also included . story_separator_special_tag we present initial results from ongoing work to calibrate optical see-through head-mounted displays ( hmds ) . we have developed a method to calibrate stereoscopic optical see-through hmds based on the 3d alignment of a target in the physical world with a virtual object in the user 's view . this is an extension of the single point active alignment method ( spaam ) ( tuceryan and navab , 2000 ) developed for monocular hmds . going from the monocular to the stereoscopic optical hmds for calibration purposes is not straightforward . this is in part due to the perceptual complexity of the stereo fusion process , which brings up completely new challenges , including the choice of the shape of the virtual object , the physical target , and how to display the virtual object without any knowledge of the characteristics of the hmd and eye combination , i.e . the projection model . we have addressed these issues and proposed a solution for the calibration problem which we have validated through experiments on the see-through hmd system described in ( sauer et al. , 2000 ) . by experimenting , we have found the appropriate type of virtual objects and physical features story_separator_special_tag recently , m. tuceryan and n. navab ( 2000 ) introduced a method for calibrating an optical see-through system based on the alignment of a set of 2d markers on the display with a single point in the scene , while not restricting the user 's head movements ( the single point active alignment method or spaam ) . this method is applicable with any tracking system , provided that it gives the pose of the sensor attached to the see-through display . when cameras are used for tracking , one can avoid the computationally intensive and potentially unstable pose estimation process . a vision-based tracker usually consists of a camera attached to the optical see-through display , which observes a set of known features in the scene . from the observed locations of these features , the pose of the camera can be computed . most pose computation methods are very involved and can be unstable at times .
the authors propose to keep the projection matrix for the tracker camera without decomposing it into intrinsic and extrinsic parameters and use it within the spaam method directly . the propagation of the projection matrices from the tracker camera to story_separator_special_tag registration is a crucial task in a see-through augmented reality ( ar ) system . the importance stems not only from the fact that registration requires careful calibration but also from the necessity that any calibration procedure should take the users into account . [ 14 ] proposed a general method for calibrating a see-through device based on dynamic alignment of virtual and real points . although a powerful tool , our experiments showed that users find alignment of many points overwhelming . we introduce improvements to simplify the calibration process and increase the success rate . we first identified which causes of user-to-user variation in the calibration parameters can be prevented by adopting particular configurations for the tracker sensor and the display . this allowed us to re-use the existing calibrations . furthermore , we have introduced a simpler model for the calibration that requires a smaller number of user inputs , typically four , to calibrate the system . story_separator_special_tag we present here a method for calibrating an optical see-through head-mounted display ( hmd ) using techniques usually applied to camera calibration ( photogrammetry ) . using a camera placed inside the hmd to take pictures simultaneously of a tracked object and features in the hmd display , we could exploit established camera calibration techniques to recover both the intrinsic and extrinsic properties of the hmd ( width , height , focal length , optic centre and principal ray of the display ) . our method gives low re-projection errors and , unlike existing methods , involves no time-consuming and error-prone human measurements , nor any prior estimates about the hmd geometry . story_separator_special_tag the potential of augmented reality ( ar ) to support industrial processes has been demonstrated in several studies . while there have been first investigations on user-related issues in the long-duration use of mobile ar systems , to date the impact of these systems on physiological and psychological aspects has not been explored extensively . we conducted an extended study in which 19 participants worked 4 hours continuously in an order picking process with and without ar support . results of the study comparing strain and work efficiency are presented and open issues are discussed . story_separator_special_tag calibration of optical see-through head-mounted displays as well as measuring the overall accuracy of an augmented reality system are challenging tasks . this paper describes a user study comparing the execution time and accuracy of the depth-spaam and mpaam see-through calibration methods . while both methods resulted in comparable accuracy inside the calibrated range , one of them was significantly faster to execute . story_separator_special_tag the results of a multi-year research program to identify the factors associated with variations in subjective workload within and between different types of tasks are reviewed . subjective evaluations of 10 workload-related factors were obtained from 16 different experiments . the experimental tasks included simple cognitive and manual control tasks , complex laboratory and supervisory control tasks , and aircraft simulation .
task- , behavior- , and subject-related correlates of subjective workload experiences varied as a function of difficulty manipulations within experiments , different sources of workload between experiments , and individual differences in workload definition . a multi-dimensional rating scale is proposed in which information about the magnitude and sources of six workload-related factors is combined to derive a sensitive and reliable estimate of workload . story_separator_special_tag a basic problem in computer vision is to understand the structure of a real world scene given several images of it . recent major developments in the theory and practice of scene reconstruction are described in detail in a unified framework . the book covers the geometric principles and how to represent objects algebraically so they can be computed and applied . the authors provide comprehensive background material and explain how to apply the methods and implement the algorithms directly . story_separator_special_tag whenever a sensor is mounted on a robot hand , it is important to know the relationship between the sensor and the hand . the problem of determining this relationship is referred to as the hand-eye calibration problem . hand-eye calibration is important in at least two types of tasks : ( 1 ) mapping sensor-centered measurements into the robot workspace frame and ( 2 ) allowing the robot to precisely move the sensor . in the past some solutions were proposed , particularly in the case of the sensor being a television camera . with almost no exception , all existing solutions attempt to solve a homogeneous matrix equation of the form ax = xb . this article has the following main contributions . first we show that there are two possible formulations of the hand-eye calibration problem . one formulation is the classic one just mentioned . a second formulation takes the form of the following homogeneous matrix equation : my = m'yb . the advantage of the latter formulation is that the extrinsic and intrinsic parameters of the camera need not be made explicit . indeed , this formulation directly uses the 3 story_separator_special_tag depth-fused multi-focal-plane displays were proposed to create a fixed-viewpoint volumetric display capable of rendering correct or nearly-correct focus cues in a stereoscopic display through a small number of discretely placed focal planes . they may effectively address the negative effects of conventional stereoscopic displays on depth perception accuracy and visual fatigue due to the lack of focus cues . in this paper , we present the design and assessment of a novel depth-fused six-focal-plane display prototype , capable of rendering nearly-accurate focus cues for a depth range of 3 diopters with high image quality at flicker-free speed . the optical system design , prototype implementation and demonstration , and experimental assessment of the prototype are discussed in detail . story_separator_special_tag conventional stereoscopic displays force an unnatural decoupling of the accommodation and convergence cues , which may contribute to various visual artifacts and have adverse effects on depth perception accuracy . in this paper , we present the design and implementation of a high-resolution optical see-through multi-focal-plane head-mounted display enabled by state-of-the-art freeform optics .
the prototype system is capable of rendering nearly-correct focus cues for a large volume of 3d space , extending into a depth range from 0 to 3 diopters . the freeform optics , consisting of a freeform prism eyepiece and a freeform lens , demonstrate an angular resolution of 1.8 arcminutes across a 40-degree diagonal field of view in the virtual display path while providing a 0.5 arcminute angular resolution to the see-through view . story_separator_special_tag in augmented reality ( ar ) applications , registering a virtual object with its real counterpart accurately and comfortably is one of the basic and challenging issues , in the sense that the size , depth , geometry , as well as physical attributes of the virtual objects have to be rendered precisely relative to a physical reference ; this is well known as the calibration or registration problem . this paper presents a systematic calibration process to address the static registration issue in a custom-designed augmented reality system , which is based upon the recent advancement of head-mounted projective display ( hmpd ) technology . following a concise review of the hmpd concept and system configuration , we present in detail a computational model for the system calibration , describe the calibration procedures to obtain the estimations of the unknown transformations , and include the calibration results , evaluation experiments and results . story_separator_special_tag a head-mounted display system with fully-integrated eyetracking capability offers multi-fold benefits , not only to fundamental scientific research but also to emerging applications of such technology . a key limitation of the state-of-the-art eyetracked head-mounted display ( et-hmd ) technology is the lack of compactness and portability . in this paper , we present an innovative design of a high resolution optical see-through et-hmd system based on freeform optical technology . a prototype system is demonstrated , which offers a goggle-like compact form factor , a non-obstructive see-through field of view and true high-definition image resolution for the virtual display . the see-through view , via the combination of a freeform prism and corrector , achieved better than 0.5 arc minute of angular resolution for the central region of approximately 40 degrees to ensure minimal impact on the see-through vision of an hmd user . story_separator_special_tag most patients with advanced age-related macular degeneration ( amd ) experience reduced visual acuity ( va ) and contrast sensitivity ( cs ) because they have to rely on the residual non-foveal retina to inspect targets of interest . although patients ' peri-peripheral vision is sufficient to recognize the gist of scenes [ 1 ] , their quality of life is significantly affected by the impairment [ 2 , 4 ] . the reduced visual function has a large impact on emotional well-being [ 5 ] and social engagement [ 6 ] , especially due to its effect on tasks such as face recognition [ 7 , 8 ] , which require the ability to discriminate fine details or small contrast differences . the low vision enhancement system ( lves ) was the first such commercial system for distance use , utilizing an opaque head-mounted display ( hmd ) . it improved visual acuity and contrast sensitivity by converting the camera image into high-contrast magnified video [ 9 ] . later , video-based contrast enhancement and zoom-controlling hmd devices were commercialized , such as the jordy ( enhanced vision , ca , usa ) and the sightmate ( vuzix , ny , usa ) .
all these devices use full virtual vision hmds that block the wearer 's natural field of vision . an alternative story_separator_special_tag we propose a method to calibrate the viewpoint-dependent , channel-wise image blur of near-eye displays , especially of optical see-through head-mounted displays ( ost-hmds ) . imperfections in hmd optics cause channel-wise image shift and blur that degrade the image quality of the display at a user 's viewpoint . if we could estimate such characteristics perfectly , we could mitigate the effect by applying correction techniques from computational photography in computer vision , analogous to cameras . unfortunately , directly applying existing camera calibration techniques to ost-hmds is not a straightforward task . unlike ordinary imaging systems , image blur in ost-hmds is viewpoint-dependent , i.e. , the optical characteristic of a display dynamically changes depending on the current viewpoint of the user . this constraint makes the problem challenging since we must measure the image blur of an hmd , ideally , over the entire 3d eyebox in which a user can see an image . to overcome this problem , we model the viewpoint-dependent blur as a gaussian light field ( glf ) that stores spatial information of the display screen as a ( 4d ) light field with depth information and the blur as point-spread functions story_separator_special_tag the fundamental issues in augmented reality ( ar ) concern how to naturally mediate reality with virtual content as seen by users . in ar applications with optical see-through head-mounted displays ( ost-hmd ) , these issues often raise the problem of rendering color on the ost-hmd consistently with the input colors . however , due to various display constraints and eye properties , it is still a challenging task to indistinguishably reproduce the colors on ost-hmds . an approach to solve this problem is to pre-process the input color so that a user perceives the output color on the display to be the same as the input . we propose a color calibration method for ost-hmds . we start from modeling the physical optics in the rendering and perception process between the hmd and the eye . we treat the color distortion as a semi-parametric model which separates the non-linear color distortion and the linear color shift . we demonstrate that calibrated images regain their original appearance on two ost-hmd setups with both synthetic and real datasets . furthermore , we analyze the limitations of the proposed method and the remaining problems of color reproduction in ost-hmds . story_separator_special_tag it is a common problem of ar applications that optical see-through head-mounted displays ( ost-hmd ) move on users ' heads or are even temporarily taken off , thus requiring frequent ( re ) calibrations . if such calibrations involve user interactions , they are time consuming and distract users from their applications . furthermore , they inject user-dependent errors into the system setup and reduce users ' acceptance of ost-hmds . to overcome these problems , we present a method that utilizes dynamic 3d eye position measurements from an eye tracker in combination with pre-computed , static display calibration parameters . our experiments provide a comparison of our calibration with spaam ( single point active alignment method ) for several head-display conditions : in the first condition , repeated calibrations are conducted while keeping the display position on the user 's head fixed .
in the second condition , users take the hmd off and put it back on in between calibrations . the results show that our new calibration with eye tracking is more stable than repeated spaam calibrations . we close with a discussion on potential error sources which should be removed to achieve higher calibration quality story_separator_special_tag an issue in ar applications with an optical see-through head-mounted display ( ost-hmd ) is how to correctly project 3d information to the current viewpoint of the user . manual calibration methods give the projection as a black box which explains observed 2d-3d relationships well ( fig . 1 ) . recently , we have proposed an interaction-free display calibration method ( indica ) for ost-hmds , utilizing camera-based eye tracking [ 7 ] . it reformulates the projection in two ways : a black box with an actual eye model ( recycle setup ) , and a combination of an explicit display model and an eye model ( full setup ) . although we have shown that the former performs more stably than a repeated spaam calibration , we could not yet prove whether the same holds for the full setup . more importantly , it is still unclear how the error in the calibration parameters affects the final results . thus , users can not know how accurately they need to estimate each parameter in practice . we provide : ( 1 ) the fact that the full setup performs as accurately as the recycle setup under a marker-based display story_separator_special_tag a critical requirement for ar applications with optical see-through head-mounted displays ( ost-hmd ) is to project 3d information correctly into the current viewpoint of the user - more particularly , according to the user 's eye position . recently proposed interaction-free calibration methods [ 16 ] , [ 17 ] automatically estimate this projection by tracking the user 's eye position , thereby freeing users from tedious manual calibrations . however , these methods are still prone to systematic calibration errors . such errors stem from eye- and hmd-related factors and are not represented in the conventional eye-hmd model used for hmd calibration . this paper investigates one of these factors - the fact that optical elements of ost-hmds distort incoming world-light rays before they reach the eye , just as corrective glasses do . any ost-hmd requires an optical element to display a virtual screen . each such optical element has different distortions . since users see a distorted world through the element , ignoring this distortion degrades the projection quality . we propose a light-field correction method , based on a machine learning technique , which compensates for the world-scene distortion caused by ost-hmd optics . we demonstrate that our story_separator_special_tag in augmented reality ( ar ) with an optical see-through head-mounted display ( ost-hmd ) , the spatial calibration between a user 's eye and the display screen is a crucial issue in realizing seamless ar experiences . a successful calibration hinges upon proper modeling of the display system , which is conceptually broken down into an eye part and an hmd part . this paper breaks the hmd part down even further to investigate optical aberration issues . the display optics cause two different optical aberrations that degrade the calibration quality : the distortion of incoming light from the physical world , and that of light from the image source of the hmd .
while methods exist for correcting either of the two distortions independently , there is , to our knowledge , no method which corrects for both simultaneously . this paper proposes a calibration method that corrects both distortions simultaneously for an arbitrary eye position given an ost-hmd system . we expand a light-field ( lf ) correction approach [ 8 ] originally designed for the former distortion . our method is camera-based and has an offline learning and an online correction step . we story_separator_special_tag vision is our primary , essential sense for perceiving the real world . human beings have been keen to extend the limits of eye function by inventing various vision devices such as corrective glasses , sunglasses , telescopes , and night vision goggles . recently , optical see-through head-mounted displays ( ost-hmd ) have penetrated the commercial market . while the traditional devices have improved our vision by altering or replacing it , ost-hmds can augment and mediate it . we believe that future ost-hmds will dramatically improve our vision capability , combined with wearable sensing systems including image sensors . to take a step toward this future , this paper investigates vision enhancement ( ve ) techniques via ost-hmds . we aim at correcting optical defects of human eyes , especially defocus , by overlaying a compensation image on the user 's actual view so that the filter cancels the aberration . our contributions are threefold . firstly , we formulate our method by taking the optical relationships between the ost-hmd and the human eye into consideration . secondly , we demonstrate the method in proof-of-concept experiments . lastly and most importantly , we provide a thorough analysis of story_separator_special_tag the ieee international symposium on mixed and augmented reality ( ismar ) is the leading venue for publishing the latest mixed and augmented reality research , applications , and technologies . this special section presents significantly extended versions of the best papers from the ieee ismar 2014 proceedings . within the past few years , augmented reality ( ar ) has reached a critical mass in both research and commercial applications . it is now becoming truly feasible to use augmented reality to place graphics anywhere at any time . however , although the basic capabilities exist , many open research problems remain . this collection of papers considers underlying issues and technologies . the first paper , `` design and error analysis of a vehicular ar system with auto-harmonization '' by eric foxlin , thomas calloway , and hongsheng zhang , considers the problem of developing an ar system for aerospace and ground vehicles . unlike many commercial applications where the registration merely has to be plausible , poor registration in these systems can jeopardize safety and mission criticality . furthermore , the display is being worn by an operator who is inside a vehicle which is moving . the authors present story_separator_special_tag this paper presents a schematic eye model designed for use by virtual environments researchers and practitioners . this model , based on a combination of several ophthalmic models , attempts to very closely approximate a user 's optical centers and intraocular separation using as little as a single measurement of pupillary distance ( pd ) . typically , these parameters are loosely approximated based on the pd of the user while converged to some known distance .
however , this may not be sufficient for users to accurately perform spatially sensitive tasks in the near field . we investigate this possibility by comparing the impact of several common pd-based models and our schematic eye model on users ' ability to accurately match real and virtual targets in depth . this was done using a specially designed display and robotic positioning apparatus that allowed sub-millimeter measurement of target positions and user responses . we found that the schematic eye model resulted in significantly improved real-to-virtual matches with average accuracy , in some cases , well under 1 mm . we also present a novel , low-cost method of accurately measuring pd using an off-the-shelf trial frame and pinhole filters . story_separator_special_tag we propose a fast and accurate calibration method for optical see-through ( ost ) head-mounted displays ( hmds ) , taking advantage of a low-cost time-of-flight depth-camera . recently , affordable ost-hmds and depth-cameras have been appearing widely in the commercial market . in order to correctly reflect the user experience in the calibration process , our method requires a user wearing the hmd to repeatedly point at rendered virtual circles with their fingertips . from the repeated calibration data , we perform two stages of full calibration and simplified calibration to compute the key calibration parameters . the full calibration is required when the depth-camera is first installed on the hmd ; afterwards , only the simplified calibration is performed whenever a user wears it again . our experimental results show that the full and simplified calibration can be achieved with 10 and 5 user repetitions ( theoretically 3 and 2 at minimum ) , which is significantly fewer than the roughly 20 required by stereo-spaam , one of the most popular existing calibration techniques . we also demonstrate that the 3d position errors of our calibration decrease much more quickly than those of the state-of-the-art method . story_separator_special_tag we describe an augmented reality conferencing system which uses the overlay of virtual images on the real world . remote collaborators are represented on virtual monitors which can be freely positioned about a user in space . users can collaboratively view and interact with virtual objects using a shared virtual whiteboard . this is possible through precise virtual image registration using fast and accurate computer vision techniques and head-mounted display ( hmd ) calibration . we propose a method for tracking fiducial markers and a calibration method for optical see-through hmds based on the marker tracking . story_separator_special_tag head-mounted displays ( hmds ) allow users to observe virtual environments ( ves ) from an egocentric perspective . however , several experiments have provided evidence that egocentric distances are perceived as compressed in ves relative to the real world . recent experiments suggest that the virtual view frustum set for rendering the ve has an essential impact on the user 's estimation of distances . in this article we analyze whether distance estimation can be improved by calibrating the view frustum for a given hmd and user . unfortunately , in an immersive virtual reality ( vr ) environment , a full per-user calibration is not trivial and manual per-user adjustment often leads to mini- or magnification of the scene . therefore , we propose a novel per-user calibration approach with optical see-through displays commonly used in augmented reality ( ar ) .
this calibration takes advantage of a geometric scheme based on 2d point - 3d line correspondences , which can be used intuitively by inexperienced users and requires less than a minute to complete . the required user interaction is based on taking aim at a distant target marker with a close marker , story_separator_special_tag simulator sickness ( ss ) in high-fidelity visual simulators is a byproduct of modern simulation technology . although it involves symptoms similar to those of motion-induced sickness ( ms ) , ss tends to be less severe , to be of lower incidence , and to originate from elements of visual display and visuo-vestibular interaction atypical of conditions that induce ms . most studies of ss to date index severity with some variant of the pensacola motion sickness questionnaire ( msq ) . the msq has several deficiencies as an instrument for measuring ss . some symptoms included in the scoring of ms are irrelevant for ss , and several are misleading . also , the configural approach of the msq is not readily adaptable to computer administration and scoring . this article describes the development of a simulator sickness questionnaire ( ssq ) , derived from the msq using a series of factor analyses , and illustrates its use in monitoring simulator performance with data from a computerized ssq survey of 3,691 simulator hops . the database . story_separator_special_tag augmented reality ( ar ) constitutes a very powerful three-dimensional user interface paradigm for many `` hands-on '' application scenarios in which users can not sit at a conventional desktop computer . users ' views of the real world are augmented with synthetic information from a computer . current ar research fans out into several different activities , all of which are essential to generating a system which eventually will be able to sustain a truly immersive ar experience in extended practical applications rather than short laboratory demonstrations . but the current state of technology can not yet provide simultaneous support for an optimal solution to all aspects of ar . today 's ar systems have to balance a wealth of trade-offs between striving for high quality , physically correct presentations and user modelling on the one hand , and making short cuts and simplifications on the other hand in order to achieve a real-time response . in our work , we have selected two different positions among many possible trade-offs , demonstrating the real-time immersive impression that can be generated with today 's technology in one approach , and presenting a glimpse of the future in the other approach , story_separator_special_tag optical see-through head-mounted displays are currently seeing a transition out of research labs towards the consumer-oriented market . however , whilst availability has improved and prices have decreased , the technology has not matured much . most commercially available optical see-through head-mounted displays follow a similar principle and use an optical combiner blending the physical environment with digital information . this approach yields problems as the colors for the overlaid digital information can not be correctly reproduced . the perceived pixel colors are always a result of the displayed pixel color and the color of the current physical environment seen through the head-mounted display . in this paper we present an initial approach for mitigating the effect of color-blending in optical see-through head-mounted displays by introducing a real-time radiometric compensation .
our approach is based on a novel prototype for an optical see-through head-mounted display that allows the capture of the current environment as seen by the user 's eye . we present three different algorithms using this prototype to compensate color blending in real-time and with pixel-accuracy . we demonstrate the benefits and performance as well as the results of a user study . we see application for story_separator_special_tag one of the problems in using hmds is that the virtual images shown through the hmds are usually warped due to their optical distortions . in order to correctly compensate the optical distortions through a predistortion technique , accurate values of the distortion parameters are required . although several distortion calibration methods have been developed in prior work , these methods have some limitations . in this paper , we propose a method for accurately estimating the optical distortion parameters of both immersive and ( optical ) see-through hmds , without these limitations . the proposed method is based on photogrammetry and considers not only the radial distortion but also the tangential distortion . its effectiveness , including the effects of different coefficient orders of radial and tangential distortions , is evaluated through an experiment conducted with two different hmds , an optical see-through head-mounted projection display ( hmpd ) and a commercially available immersive hmd . according to the experimental results , the proposed method showed significantly lower reprojection and line-fitting errors than a previous method proposed by owen , and the radial distortion coefficients estimated by the proposed method were significantly more accurate than the nominal values obtained story_separator_special_tag we describe an augmented reality , optical see-through display based on a dmd chip with an extremely fast ( 16 khz ) binary update rate . we combine the techniques of post-rendering 2-d offsets and just-in-time tracking updates with a novel modulation technique for turning binary pixels into perceived gray scale . these processing elements , implemented in an fpga , are physically mounted along with the optical display elements in a head-tracked rig through which users view synthetic imagery superimposed on their real environment . the combination of mechanical tracking at near-zero latency with reconfigurable display processing has given us a measured average of 80 μs of end-to-end latency ( from head motion to change in photons from the display ) and also a versatile test platform for extremely-low-latency display systems . we have used it to examine the trade-offs between image quality and cost ( i.e . power and logical complexity ) and have found that quality can be maintained with a fairly simple display modulation scheme . story_separator_special_tag we present the design and implementation of an optical see-through head-mounted display ( hmd ) with addressable focus cues utilizing a liquid lens . we implemented a monocular bench prototype capable of addressing the focal distance of the display from infinity to as close as 8 diopters . two operation modes of the system were demonstrated : a vari-focal plane mode in which the accommodation cue is addressable , and a time-multiplexed multi-focal plane mode in which both the accommodation and retinal blur cues can be rendered . we further performed experiments to assess the depth perception and eye accommodative response of the system operated in the vari-focal plane mode .
both subjective and objective measurements suggest that the perceived depths and accommodative responses of the user match the rendered depths of the virtual display with addressable accommodation cues , approximating the real-world 3-d viewing condition . story_separator_special_tag an optical see-through head-mounted display ( hmd ) system integrating a miniature camera that is aligned with the user 's pupil is developed and tested . such an hmd system has a potential value in many augmented reality applications , in which registration of the virtual display to the real scene is one of the critical aspects . the camera alignment to the user 's pupil results in a simple yet accurate calibration and a low registration error across a wide range of depths . in reality , a small camera-eye misalignment may still occur in such a system due to the inevitable variations of hmd wearing position with respect to the eye . the effects of such errors are measured . calculation further shows that the registration error as a function of viewing distance behaves nearly the same for different virtual image distances , except for a shift . the impact of the prismatic effect of the display lens on registration is also discussed . story_separator_special_tag this book introduces the geometry of 3-d vision , that is , the reconstruction of 3-d models of objects from a collection of 2-d images . it details the classic theory of two-view geometry and shows that a more proper tool for studying the geometry of multiple views is the so-called rank consideration of the multiple view matrix . it also develops practical reconstruction algorithms and discusses possible extensions of the theory . story_separator_special_tag the calibration of optical see-through head-mounted displays is an important foundation for correct object alignment in augmented reality . any calibration process for osthmds requires users to align 2d points in screen space with 3d points in the real world and to confirm each alignment . in this poster , we present the results of our empirical evaluation where we compared four confirmation methods : keyboard , hand-held , voice , and waiting . the waiting method , designed to reduce head motion during confirmation , showed a significantly higher accuracy than all other methods . averaging over a time frame for sampling user input before the time of confirmation additionally improved the accuracy of all methods . we conducted a further expert study showing that the results achieved with a video see-through head-mounted display are also valid for optical see-through head-mounted display calibration . story_separator_special_tag in this paper we discuss the design of an optical see-through head-worn display supporting a wide field of view , selective occlusion , and multiple simultaneous focal depths that can be constructed in a compact eyeglasses-like form factor . building on recent developments in multilayer desktop 3d displays , our approach requires no reflective , refractive , or diffractive components , but instead relies on a set of optimized patterns to produce a focused image when displayed on a stack of spatial light modulators positioned closer than the eye accommodation distance . we extend existing multilayer display ray constraint and optimization formulations while also purposing the spatial light modulators both as a display and as a selective occlusion mask . we verify the design on an experimental prototype and discuss challenges to building a practical display .
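the addressable-focus displays discussed above all rest on the same thin-lens vergence arithmetic : a screen a distance s behind the eyepiece emits light with divergence 1/s diopters , the lens removes p diopters of it , and whatever vergence remains is the depth the eye must accommodate to . the sketch below sweeps lens power across an assumed 0-3 diopter working range ; the screen distance and power values are illustrative , not taken from any of the prototypes .

```python
def image_depth_diopters(p_lens, s_screen):
    """accommodation demand ( diopters ) of the virtual image formed
    by a thin lens of power p_lens for a screen s_screen meters away ."""
    return 1.0 / s_screen - p_lens

# usage : screen 4 cm behind the lens ( 25 diopters of divergence ) ;
# lowering the lens power from 25 d to 22 d moves the virtual image
# from optical infinity ( 0 d ) in to 1/3 m ( 3 d )
for p in (25.0, 24.0, 23.0, 22.0):
    print(p, image_depth_diopters(p, s_screen=0.04))
```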
story_separator_special_tag we present novel designs for virtual and augmented reality near-eye displays based on phase-only holographic projection . our approach is built on the principles of fresnel holography and double phase amplitude encoding with additional hardware , phase correction factors , and spatial light modulator encodings to achieve full color , high contrast and low noise holograms with high resolution and true per-pixel focal control . we provide a gpu-accelerated implementation of all holographic computation that integrates with the standard graphics pipeline and enables real-time ( 90 hz ) calculation directly or through eye-tracked approximations . a unified focus , aberration correction , and vision correction model , along with a user calibration process , accounts for any optical defects between the light source and retina . we use this optical correction ability not only to fix minor aberrations but to enable truly compact , eyeglasses-like displays with wide fields of view ( 80° ) that would be inaccessible through conventional means . all functionality is evaluated across a series of hardware prototypes ; we discuss remaining challenges to incorporate all features into a single device . story_separator_special_tag we propose the vision-based robust calibration ( virc ) method for osthmds equipped with a camera . in the virc method , calibration parameters are decomposed into off-line parameters , which remain constant because they depend only on the positional relationship between the camera and the virtual screen , and on-line parameters related to the user 's eye . calculating the off-line parameters beforehand reduces the number of unknown parameters in the on-line phase , giving robust protection against the user 's misalignments during calibration . in the off-line phase , the approximate position of the user 's eye is calculated using the pnp algorithm . in the on-line phase , the actual position of the user 's eye is estimated from the approximate one by non-linear minimization . in our experiments , we show that the virc method can decrease reprojection error by as much as 83 % compared with the conventional method based on the dlt algorithm . story_separator_special_tag conventional binocular head-mounted displays ( hmds ) vary the stimulus to vergence with the information in the picture , while the stimulus to accommodation remains fixed at the apparent distance of the display , as created by the viewing optics . sustained vergence-accommodation conflict ( vac ) has been associated with visual discomfort , motivating numerous proposals for delivering near-correct accommodation cues . we introduce focal surface displays to meet this challenge , augmenting conventional hmds with a phase-only spatial light modulator ( slm ) placed between the display screen and viewing optics . this slm acts as a dynamic freeform lens , shaping synthesized focal surfaces to conform to the virtual scene geometry . we introduce a framework to decompose target focal stacks and depth maps into one or more pairs of piecewise smooth focal surfaces and underlying display images . we build on recent developments in `` optimized blending '' to implement a multifocal display that allows the accurate depiction of occluding , semi-transparent , and reflective objects .
practical benefits over prior accommodation-supporting hmds are demonstrated using a binocular focal surface display employing a liquid crystal on silicon ( lcos ) phase slm and an organic light-emitting story_separator_special_tag a crucial aspect in the implementation of an augmented reality ( ar ) system is determining its accuracy . the accuracy of a system determines the applications it can be used for . the aim of our research is to measure the overall accuracy of an arbitrary ar system . once measurements of a system are made , they can be analyzed for determining the structure and sources of errors . from the analysis it may also be possible to improve the methods used to calibrate and register the virtual to the real . this paper describes an online system for measuring the registration accuracy of optical see-through augmentation . by online , we mean that the user can measure the registration error they are experiencing while they are using the system . we overcome the difficulty of not having retinal access by having the user indicate the projection of a perceived object on a planar measurement device . our method provides information which can be used to analyze the structure of the system error in two or three dimensions . the results of the application of our method to two monocular optical see-through ar systems are shown . story_separator_special_tag this paper investigates discrete and continuous hand-drawn loops and marks in mid-air as a selection input for gesture-based menu systems on optical see-through head-mounted displays ( ost hmds ) . we explore two fundamental methods of providing menu selection : the marking menu and the loop menu , and a hybrid method which combines the two . the loop menu design uses a selection mechanism with loops to approximate directional selections in a menu system . we evaluate the merits of loop and marking menu selection in an experiment with two phases and report that 1 ) the loop-based selection mechanism provides smooth and effective interaction ; 2 ) users prioritize accuracy and comfort over speed for mid-air gestures ; 3 ) users can exploit the flexibility of a final hybrid marking/loop menu design ; and , finally , 4 ) users tend to chunk gestures depending on the selection task and their level of familiarity with the menu layout . story_separator_special_tag the nonlinear least-squares minimization problem is considered . algorithms for the numerical solution of this problem have been proposed in the past , notably by levenberg ( quart . appl . math. , 2 , 164-168 ( 1944 ) ) and marquardt ( siam j. appl . math. , 11 , 431-441 ( 1963 ) ) . the present work discusses a robust and efficient implementation of a version of the levenberg-marquardt algorithm and shows that it has strong convergence properties . in addition to robustness , the main features of this implementation are the proper use of implicitly scaled variables and the choice of the levenberg-marquardt parameter by means of a scheme due to hebden ( aere report tp515 ) . numerical results illustrating the behavior of this implementation are included . story_separator_special_tag with the growing availability of optical see-through ( ost ) head-mounted displays ( hmds ) there is a present need for robust , uncomplicated , and automatic calibration methods suited for non-expert users .
this work presents the results of a user study which both objectively and subjectively examines registration accuracy produced by three ost hmd calibration methods : ( 1 ) spaam , ( 2 ) degraded spaam , and ( 3 ) recycled indica , a recently developed semi-automatic calibration method . accuracy metrics used for evaluation include subject-provided quality values and error between perceived and absolute registration coordinates . our results show all three calibration methods produce very accurate registration in the horizontal direction but cause subjects to perceive the distance of virtual objects to be closer than intended . surprisingly , the semi-automatic calibration method produced more accurate registration vertically and in perceived object distance overall . user-assessed quality values were also the highest for recycled indica , particularly when objects were shown at distance . the results of this study confirm that recycled indica is capable of producing equal or superior on-screen registration compared to common ost hmd calibration methods . we also story_separator_special_tag we conducted an experiment in an attempt to generate baseline accuracy and precision values for optical see-through ( ost ) head-mounted display ( hmd ) calibration without the inclusion of human postural sway error . this preliminary work will act as a control condition for future studies into postural error reduction . an experimental apparatus was constructed to allow performance of a spaam calibration using 25 alignments taken using one of three distance distribution patterns : static , sequential , and magic square . the accuracy of the calibrations was determined by calculating the extrinsic x , y , z translation values from the resulting projection matrix . the standard deviation for each translation component was also calculated . the results show that the magic square distribution resulted in the most accurate parameter estimation and also resulted in the smallest standard deviation for each extrinsic translation component . story_separator_special_tag advances in optical see-through head-mounted display technology have yielded a number of consumer accessible options , such as the google glass and epson moverio bt-200 , and have paved the way for promising next generation hardware , including the microsoft hololens and epson pro bt-2000 . the release of consumer devices , though , has also been accompanied by an ever increasing need for standardized optical see-through display calibration procedures easily implemented and performed by researchers , developers , and novice users alike . automatic calibration techniques offer the possibility for ubiquitous environment independent solutions , not reliant upon user interaction . these processes , however , require the use of additional eye tracking hardware and algorithms not natively present in current display offerings . user-dependent approaches , therefore , remain the only viable option for effective calibration of current generation optical see-through hardware . inclusion of depth sensors and hand tracking cameras , promised in forthcoming consumer models , offers further potential to improve these manual methods and provide practical intuitive calibration options accessible to a wide user base .
in this work , we evaluate the accuracy and precision of manual optical see-through head-mounted display calibration performed using story_separator_special_tag this work introduces a technique that allows end users to evaluate and recalibrate their ar system as frequently as needed . we developed an interactive game as a prototype for such an evaluation system and explain how this technique can be implemented for use in real life . story_separator_special_tag see-through head-mounted displays ( sthmds ) , which superimpose the virtual environment generated by computer graphics ( cg ) on the real world , are expected to be able to vividly display various simulations and designs by using both the real environment and the virtual environment around us . however , we must ensure that the virtual environment is superimposed exactly on the real environment because both environments are visible . disagreement in location and size between real and virtual objects is likely to occur between the world coordinates of the real environment where the sthmd user actually exists and those of the virtual environment described as parameters of the cg . this disagreement directly causes displacement of the locations where virtual objects are superimposed . the sthmd must be calibrated so that the virtual environment is superimposed properly . among the causes of such errors , we focus on both systematic errors in the projection transformation parameters introduced during manufacturing and differences between the actual and assumed location of the user 's eye relative to the sthmd when in use , and propose a calibration method to eliminate these effects . in the calibration method , the virtual cursor drawn in the virtual environment is directly fitted onto targets story_separator_special_tag highly anticipated consumer-level optical see-through head-mounted display offerings , such as the microsoft hololens and epson moverio pro bt-2000 , not only include the standard imu and gps sensors common to modern mobile devices , but also feature additional depth sensing and hand tracking cameras intended to support and promote the development of innovative user interaction experiences . through this demonstration , we showcase the potential of these technologies in facilitating not only interaction , but also intuitive user-centric calibration , for optical see-through augmented reality . additionally , our hardware configuration provides a straightforward example for combining consumer-level sensors , such as the leap motion controller , with existing head-mounted displays and secondary tracking devices to ease the development and deployment of immersive stereoscopic experiences . we believe that the methodologies presented within our demonstration not only illustrate the potential for ubiquitous calibration across next generation consumer devices , but will also inspire and encourage further developmental efforts for optical see-through augmented reality from the community at large . story_separator_special_tag optical see-through head-mounted displays ( osthmds ) have many advantages in augmented reality applications , but their utility in practical applications has been limited by the complexity of calibration . because the human subject is an inseparable part of the eye-display system , previous methods for osthmd calibration have required extensive manual data collection using either instrumentation or manual point correspondences and are highly dependent on operator skill .
this paper describes display-relative calibration ( drc ) for osthmds , a new two-phase calibration method that minimizes the human element in the calibration process and ensures reliable calibration . phase i of the calibration captures the parameters of the display system relative to a normalized reference frame and is performed in a jig with no human factors issues . the second phase optimizes the display for a specific user and the placement of the display on the head . several phase ii alternatives provide flexibility in a variety of applications , including applications involving untrained users . story_separator_special_tag in recent years optical see-through head-mounted displays ( ost-hmds ) have moved from conceptual research to a market of mass-produced devices , with new models and applications being released continuously . it remains challenging to deploy augmented reality ( ar ) applications that require consistent spatial visualization . examples include maintenance , training and medical tasks , as the view of the attached scene camera is shifted from the user 's view . a calibration step can compute the relationship between the hmd screen and the user 's eye to align the digital content . however , this alignment is only viable as long as the display does not move , an assumption that rarely holds for an extended period of time . as a consequence , continuous recalibration is necessary . manual calibration methods are tedious and rarely support practical applications . existing automated methods do not account for user-specific parameters and are error prone . we propose the combination of a pre-calibrated display with a per-frame estimation of the user 's cornea position to estimate the individual eye center and continuously recalibrate the system . with this , we also obtain the gaze direction , which allows for instantaneous uncalibrated story_separator_special_tag properly calibrating an optical see-through head-mounted display ( ost-hmd ) and maintaining a consistent calibration over time can be a very challenging task . automated methods need an accurate model of both the ost-hmd screen and the user 's constantly changing eye position to correctly project virtual information . while some automated methods exist , they often have restrictions , including fixed eye cameras that cannot be adjusted for different users . to address this problem , we have developed a method that automatically determines the position of an adjustable eye-tracking camera and its unconstrained pose relative to the display . unlike methods that require a fixed pose between the hmd and eye camera , our framework allows for automatic calibration even after adjustments of the camera to a particular individual 's eye and even after the hmd moves on the user 's face . using two sets of ir leds rigidly attached to the camera and ost-hmd frame , we can calculate the correct projection for different eye positions in real time and accommodate changes in hmd position within several frames . to verify the accuracy of our method , we conducted two experiments with a commercial hmd by calibrating a number of different story_separator_special_tag from the publisher : the accessible presentation of this book gives both a general view of the entire computer vision enterprise and also offers sufficient detail to be able to build useful applications . users learn techniques that have proven to be useful by first-hand experience and a wide range of mathematical methods .
a cd-rom with every copy of the text contains source code for programming practice , color images , and illustrative movies . comprehensive and up-to-date , this book includes essential topics that either reflect practical significance or are of theoretical importance . topics are discussed in substantial and increasing depth . application surveys describe numerous important application areas such as image-based rendering and digital libraries . many important algorithms are broken down and illustrated in pseudocode . appropriate for use by engineers as a comprehensive reference to the computer vision enterprise . story_separator_special_tag in enhanced reality environments , the viewer perceives the surroundings together with additional information overlaid in a semi-transparent head-worn display . the calibration problem for such a display is the task of bringing the overlaid information into registration with the correct real background . today 's head-worn displays are comparatively bulky and heavy , so slight slippage of the display on the head occurs frequently . if this slippage is not accounted for in the position of the overlay , the overlay no longer matches the real background . this is illustrated by way of example in figure 1.1 . after initial calibration of the semi-transparent display to the viewer 's eye , a re-calibration should therefore be performed automatically and in real time whenever the display shifts relative to the eye ( through slight slippage ) . we are the first to investigate such automatic re-calibration under slippage . story_separator_special_tag for stereoscopic optical see-through head-mounted display calibration , existing methods that calibrate both eyes at the same time depend heavily on the hmd user 's unreliable depth perception . on the other hand , treating both eyes separately requires the user to perform twice the number of alignment tasks , and does not exploit the physical structure of the system . this paper introduces a novel method that models the physical structure as additional constraints and explicitly solves for intrinsic and extrinsic parameters of the stereoscopic system by optimizing a unified cost function . the calibration does not involve the unreliable depth alignment of the user , and lessens the burden of user interaction . story_separator_special_tag with users always involved in the calibration of optical see-through head-mounted displays , the accuracy of calibration is subject to human-related errors , for example , postural sway , an unstable input medium , and fatigue . in this paper we propose a new calibration approach : fixed-head 2 degree-of-freedom ( dof ) interaction for the single point active alignment method ( spaam ) reduces the interaction space from a typical 6 dof head motion to a 2 dof cursor position on the semi-transparent screen . it uses a mouse as the input medium , which is more intuitive and stable , and reduces user fatigue by simplifying and speeding up the calibration procedure . a multi-user study confirmed the significant reduction of human-related error by comparing our novel fixed-head 2 dof interaction to the traditional interaction methods for spaam . story_separator_special_tag the visual display transformation for virtual reality ( vr ) systems is typically much more complex than the standard viewing transformation discussed in the literature for conventional computer graphics .
the process can be represented as a series of transformations , some of which contain parameters that must match the physical configuration of the system hardware and the user 's body . because of the number and complexity of the transformations , a systematic approach and a thorough understanding of the mathematical models involved are essential . this paper presents a complete model for the visual display transformation for a vr system ; that is , the series of transformations used to map points from object coordinates to screen coordinates . virtual objects are typically defined in an object-centered coordinate system ( cs ) , but must be displayed using the screen-centered cs of each of the two screens of a head-mounted display ( hmd ) . this particular algorithm for the vr display computation allows multiple users to independently change position , orientation , and scale within the virtual world , allows users to pick up and move virtual objects , uses the measurements from a head tracker to immerse the user in the virtual world , story_separator_special_tag a large-area multiple-pen tablet system for three-dimensional data input is described . the large tablet area provides space for simultaneous use of several views of the three-dimensional object being digitized . the multiple pens enable the user to indicate a single point simultaneously in two such views , thus defining the three-dimensional position of the point . five significant techniques are outlined . first , the large-area digitizing surface with multiple pens has proved to be an instrument very different from the more familiar single-pen small tablets . second , a pair of two-dimensional positions is converted into a four-dimensional space and then back to three dimensions . third , the specification of view areas , viewing directions , view positions , and coordinate axes is accomplished by giving examples directly in the viewing space rather than by specifying abstract viewing parameters . fourth , an attitude about coordinate conversion using the inverse of a basis matrix is used throughout , which automatically compensates for any tilt in the views on the tablet surface and any nonperpendicularity of the tablet axes . fifth , the mathematics of converting from pairs of perspective views or pairs of photographs , while not new story_separator_special_tag optical see-through head-mounted displays ( hmds ) are less commonly used because they are difficult to accurately calibrate . in this article , we report a user study to compare the accuracy of 4 variants of the spaam calibration method . among the 4 variants , stylus-marker calibration , where the user aligns a crosshair projected in the hmd with a tracked stylus tip , achieved the most accurate result . a decomposition and analysis of the calibration matrices from the trials is performed and the characteristics of the computed calibration matrices are examined . a physiological engineering point of view is also discussed to explain why calibrating an optical see-through hmd is so difficult for users . story_separator_special_tag from the publisher : features : provides a guide to well-tested theory and algorithms , including solutions to problems encountered in modern computer vision . contains many practical hints highlighted in the book . develops two parallel tracks in the presentation , showing how fundamental problems are solved using both intensity and range images , the most popular types of images used today .
each chapter contains notes on the literature , review questions , numerical exercises , and projects . provides an internet list for accessing links to test images , demos , archives , and additional learning material . story_separator_special_tag a new technique for three-dimensional ( 3d ) camera calibration for machine vision metrology using off-the-shelf tv cameras and lenses is described . the two-stage technique is aimed at efficient computation of the camera 's external position and orientation relative to the object reference coordinate system , as well as the effective focal length , radial lens distortion , and image scanning parameters . the two-stage technique has advantages in terms of accuracy , speed , and versatility over the existing state of the art . a critical review of the state of the art is given in the beginning . a theoretical framework is established , supported by comprehensive proof in five appendixes , and may pave the way for future research on 3d robotics vision . test results using real data are described . both accuracy and speed are reported . the experimental results are analyzed and compared with theoretical prediction . recent effort indicates that with slight modification , the two-stage calibration can be done in real time . story_separator_special_tag augmented reality ( ar ) is a technology in which a user 's view of the real world is enhanced or augmented with additional information generated from a computer model . to have a working ar system , the see-through display system must be calibrated so that the graphics are properly rendered . the optical see-through systems present an additional challenge because , unlike the video see-through systems , we do not have direct access to the image data to be used in various calibration procedures . this paper reports on a calibration method we developed for optical see-through head-mounted displays . we first introduce a method for calibrating monocular optical see-through displays ( that is , a display for one eye only ) and then extend it to stereo optical see-through displays in which the displays for both eyes are calibrated in a single procedure . the method integrates the measurements for the camera and a six-degrees-of-freedom tracker that is attached to the camera to do the calibration . we have used both an off-the-shelf magnetic tracker and a vision-based infrared tracker we have built . in the monocular case , the calibration is based on the alignment of story_separator_special_tag augmented reality ( ar ) is a technology in which a user 's view of the real world is enhanced or augmented with additional information generated from a computer model . in order to have a working ar system , the see-through display system must be calibrated so that the graphics are properly rendered . the optical see-through systems present an additional challenge because we do not have direct access to the image data as in video see-through systems . this paper reports on a method we developed for optical see-through head-mounted displays . the method integrates the measurements for the camera and the magnetic tracker which is attached to the camera in order to do the calibration . the calibration is based on the alignment of image points with a single 3d point in the world coordinate system from various viewpoints . the user interaction needed to do the calibration is extremely easy compared to prior methods , and there is no requirement for keeping the head static while doing the calibration .
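in its basic form , the single point alignment procedure described above reduces to estimating a 3x4 projection matrix from correspondences between one world point ( re-expressed in head coordinates at each alignment ) and the 2d screen positions confirmed by the user . the following python sketch shows the textbook direct linear transform at the core of such calibrations ; it is our own minimal illustration ( the function name and the numpy dependency are assumptions ) , and practical implementations add input normalization and nonlinear refinement :

import numpy as np

def dlt_projection(world_pts, image_pts):
    # world_pts: (n, 3) alignment points in the head/tracker coordinate frame
    # image_pts: (n, 2) on-screen cursor positions confirmed by the user
    # stacks the standard dlt system A p = 0; needs n >= 6 alignments
    rows = []
    for (x, y, z), (u, v) in zip(world_pts, image_pts):
        rows.append([x, y, z, 1, 0, 0, 0, 0, -u * x, -u * y, -u * z, -u])
        rows.append([0, 0, 0, 0, x, y, z, 1, -v * x, -v * y, -v * z, -v])
    _, _, vt = np.linalg.svd(np.asarray(rows, dtype=float))
    p = vt[-1].reshape(3, 4)   # right singular vector of the smallest singular value
    return p / p[2, 3]         # fix the free scale (assumes p[2, 3] != 0)

with 20 or more well-spread alignments the estimate is reasonably stable ; with noisy input , hartley-style normalization of both point sets before building the system noticeably improves conditioning .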
story_separator_special_tag john vince explains a wide range of mathematical techniques and problem-solving strategies associated with computer games , computer animation , virtual reality , cad and other areas of computer graphics in this updated and expanded fourth edition . the first four chapters revise number sets , algebra , trigonometry and coordinate systems , which are employed in the following chapters on vectors , transforms , interpolation , 3d curves and patches , analytic geometry and barycentric coordinates . following this , the reader is introduced to the relatively new topic of geometric algebra , and the last two chapters provide an introduction to differential and integral calculus , with an emphasis on geometry . mathematics for computer graphics covers all of the key areas of the subject , including : number sets , algebra , trigonometry , coordinate systems , transforms , quaternions , interpolation , curves and surfaces , analytic geometry , barycentric coordinates , geometric algebra , differential calculus , and integral calculus . this fourth edition contains over 120 worked examples and over 270 illustrations , which are central to the author 's descriptive writing style . mathematics for computer graphics provides a sound understanding of the mathematics required for computer graphics , giving a fascinating insight into the design of computer graphics software and setting the scene for further reading of more advanced books story_separator_special_tag single point active alignment method ( spaam ) has become the basic calibration method for optical see-through head-mounted displays since its appearance . however , spaam is based on a simple static pinhole camera model that assumes a static relationship between the user 's eye and the hmd . such theoretical defects lead to a limitation in calibration accuracy . we model the eye as a dynamic pinhole camera to account for the displacement of the eye during the calibration process . we use region-induced data enhancement ( ride ) to reduce the system error in the acquisition process . the experimental results prove that the proposed dynamic model performs better than the traditional static model , and the ride method can help users obtain a more accurate calibration result based on the dynamic model , which improves the accuracy significantly compared to standard spaam . story_separator_special_tag the most commonly used single point active alignment method ( spaam ) is based on a static pinhole camera model , in which it is assumed that both the eye and the hmd are fixed . this limits calibration precision . in this work , we propose a dynamic pinhole camera model , motivated by the fact that the human eye experiences an appreciable displacement over the whole calibration process . based on such a camera model , we propose a new calibration data acquisition method called region-induced data enhancement ( ride ) to revise the calibration data . the experimental results prove that the proposed dynamic model performs better than the traditional static model in actual calibration .
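the dynamic-eye discussion above rests on the pinhole factorization p ~ k [ r | t ] : once a projection matrix has been recovered , splitting it into intrinsic and extrinsic parts exposes the implied eye position , which is exactly the quantity that drifts during a calibration session . a textbook decomposition sketch in python ( our own code , assuming numpy and scipy are available ; this is the standard rq-based factorization , not the ride procedure itself ) :

import numpy as np
from scipy.linalg import rq

def decompose_projection(P):
    # split P ~ K [R | t] into intrinsics K (upper triangular), rotation R,
    # and translation t; the eye center in world coordinates is -R.T @ t.
    K, R = rq(P[:, :3])
    S = np.diag(np.sign(np.diag(K)))   # rq is unique only up to signs:
    K, R = K @ S, S @ R                # force a positive diagonal on K
    if np.linalg.det(R) < 0:           # P is defined only up to scale; choose
        R, P = -R, -P                  # the sign that makes R a proper rotation
    t = np.linalg.solve(K, P[:, 3])
    return K / K[2, 2], R, t, -R.T @ t

comparing the recovered eye center across alignments collected early and late in a session gives a direct , if rough , view of the displacement that the dynamic model is meant to absorb .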