In path planning for autonomous driving applications, existing studies consider the path planning and trajectory tracking control of the autonomous vehicle separately, and few of them integrate trajectory planning and trajectory control together. To fill this research gap, this study proposes an integrated trajectory planning and trajectory control method. This paper also studies the collision avoidance problem of autonomous vehicles by considering both static and dynamic obstacles. Simulation results are presented to show the effectiveness of the proposed control method.
http://arxiv.org/abs/1905.03444
Recently, salient object detection has witnessed remarkable improvement owing to deep convolutional neural networks, which can harvest powerful features for images. In particular, state-of-the-art salient object detection methods enjoy high accuracy and efficiency from fully convolutional network (FCN) based frameworks, which are trained end to end and predict pixel-wise labels. However, such frameworks suffer from adversarial attacks which confuse neural networks by adding quasi-imperceptible noise to input images without changing the ground truth annotated by human subjects. To our knowledge, this paper is the first to mount successful adversarial attacks on salient object detection models and to verify that adversarial samples are effective against a wide range of existing methods. Furthermore, this paper proposes a novel end-to-end trainable framework to enhance the robustness of arbitrary FCN-based salient object detection models against adversarial attacks. The proposed framework adopts a novel idea: it first introduces some new generic noise to destroy adversarial perturbations, and then learns to predict saliency maps for input images with the introduced noise. Specifically, our proposed method consists of a segment-wise shielding component, which preserves boundaries and destroys delicate adversarial noise patterns, and a context-aware restoration component, which refines saliency maps through global contrast modeling. Experimental results suggest that our proposed framework significantly improves the performance of state-of-the-art models on a series of datasets.
http://arxiv.org/abs/1905.03434
Maximum a posteriori (MAP) inference is an important task for graphical models. Due to complex dependencies among variables in realistic models, finding an exact solution for MAP inference is often intractable. Thus, many approximation methods have been developed, among which linear programming (LP) relaxation based methods show promising performance. However, one major drawback of LP relaxation is that it may yield fractional solutions. Instead of presenting a tighter relaxation, in this work we propose a continuous but equivalent reformulation of the original MAP inference problem, called LS-LP. We add an L2-sphere constraint onto the original LP relaxation, leading to an intersection with the local marginal polytope that is equivalent to the space of all valid integer label configurations. Thus, LS-LP is equivalent to the original MAP inference problem. We propose a perturbed alternating direction method of multipliers (ADMM) algorithm to optimize the LS-LP problem, by adding a sufficiently small perturbation epsilon onto the objective function and constraints. We prove that the perturbed ADMM algorithm globally converges to the epsilon-Karush-Kuhn-Tucker (epsilon-KKT) point of the LS-LP problem, and we also analyze the convergence rate. Experiments on several benchmark datasets from the Probabilistic Inference Challenge (PIC 2011) and OpenGM 2 show competitive performance of our proposed method against state-of-the-art MAP inference methods.
http://arxiv.org/abs/1905.03433
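A hedged note on why the L2-sphere constraint above recovers integrality (our notation, not necessarily the paper's): for each variable with marginal vector $\mu$ on the probability simplex,
\[
\mu \in \Delta_K = \Big\{\mu \ge 0,\ \sum_k \mu_k = 1\Big\},\qquad
\|\mu\|_2^2 = \sum_k \mu_k^2 \le \Big(\max_k \mu_k\Big)\sum_k \mu_k \le 1,
\]
with equality if and only if $\mu$ is a one-hot vertex $e_k$. Hence $\Delta_K \cap \{\mu : \|\mu\|_2 = 1\} = \{e_1,\dots,e_K\}$, so intersecting the local marginal polytope with the L2-sphere leaves exactly the valid integer label configurations.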
This paper proposes the first, to our knowledge, open-source presentation attack detection (PAD) solution to distinguish between authentic iris images (possibly wearing clear contact lenses) and irises with textured contact lenses. This software can serve as a baseline in various PAD evaluations, and also as an open-source platform with an up-to-date reference method for iris PAD. The software is written in C++ and Python and uses only open-source resources, such as OpenCV. This method does not incorporate iris image segmentation, which may be problematic for unknown fake samples. Instead, it makes a best guess to localize the rough position of the iris. The PAD-related features are extracted with Binary Statistical Image Features (BSIF) and classified by an ensemble of classifiers incorporating a support vector machine, random forest, and multilayer perceptron. The models attached to the current software have been trained with the NDCLD’15 database and evaluated on the independent datasets included in the LivDet-Iris 2017 competition. The software implements the functionality of retraining the classifiers with any database of authentic and attack images. The accuracy of the current version offered with this paper exceeds 99% when tested on subject-disjoint subsets of NDCLD’15, and oscillates around 85% when tested on the LivDet-Iris 2017 benchmarks, which is on par with the results obtained by the LivDet-Iris 2017 winner.
http://arxiv.org/abs/1809.10172
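A minimal sketch of the feature-plus-ensemble pipeline described above, assuming placeholder random filters in place of the pre-learned BSIF filter bank (the released software uses real BSIF filters and NDCLD'15 training data; every name and number below is illustrative, not the tool's implementation):

    import numpy as np
    from scipy.signal import convolve2d
    from sklearn.ensemble import RandomForestClassifier, VotingClassifier
    from sklearn.neural_network import MLPClassifier
    from sklearn.svm import SVC

    def bsif_like_histogram(image, filters):
        """Binarize filter responses into bit codes and histogram them."""
        codes = np.zeros(image.shape, dtype=np.int64)
        for bit, f in enumerate(filters):
            response = convolve2d(image, f, mode="same", boundary="symm")
            codes += (response > 0).astype(np.int64) << bit
        n_bins = 2 ** len(filters)
        hist, _ = np.histogram(codes, bins=n_bins, range=(0, n_bins))
        return hist / hist.sum()

    rng = np.random.default_rng(0)
    filters = [rng.standard_normal((7, 7)) for _ in range(8)]  # placeholder, not real BSIF
    X = np.stack([bsif_like_histogram(rng.standard_normal((64, 64)), filters)
                  for _ in range(40)])                         # toy "iris" images
    y = rng.integers(0, 2, size=40)  # 0 = authentic, 1 = textured lens (toy labels)

    # Soft-voting ensemble of SVM, random forest, and MLP, as in the abstract.
    ensemble = VotingClassifier(
        estimators=[("svm", SVC(probability=True)),
                    ("rf", RandomForestClassifier()),
                    ("mlp", MLPClassifier(max_iter=500))],
        voting="soft")
    ensemble.fit(X, y)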
Testing and evaluation is a critical step in the development and deployment of connected and automated vehicles (CAVs), and yet there is no systematic framework to generate a testing scenario library. In Part I of the paper, a general framework is proposed to solve the testing scenario library generation (TSLG) problem, with four associated research questions, and the methodologies for solving each research question are proposed and analyzed theoretically. In Part II of the paper, three case studies are designed and implemented to demonstrate the proposed methodologies. First, a cut-in case is designed for safety evaluation and to answer three particular questions in the framework, i.e., auxiliary objective function design, naturalistic driving data (NDD) analysis, and surrogate model (SM) construction. Second, a highway exit case is designed for functionality evaluation. Third, a car-following case is designed to show the ability of the proposed methods to handle high-dimensional scenarios. To address the challenges brought by higher dimensions, the proposed methods are enhanced by reinforcement learning (RL) techniques. Typical CAV models are chosen and evaluated by simulations. Results show that the proposed methods can accelerate the CAV evaluation process by $255$ to $3.75\times10^5$ times compared with the public road test method, while maintaining the same accuracy of the evaluation indices.
http://arxiv.org/abs/1905.03428
Instance search is an interesting yet challenging task due to the lack of effective feature representations. In this paper, an instance-level feature representation built upon fully convolutional instance-aware segmentation is proposed. The feature is ROI-pooled from the segmented instance region, so that instances of various sizes and layouts are represented by deep features of uniform length. This representation is further enhanced by the use of deformable ResNeXt blocks. Superior performance is observed in terms of distinctiveness and scalability on a challenging evaluation dataset that we built ourselves. In addition, the proposed enhancement to the network structure also shows superior performance on the instance segmentation task.
http://arxiv.org/abs/1806.03576
Bin Packing with Conflicts (BPC) is a class of problems in which items with compatibility constraints must be packed into the fewest possible bins, without exceeding bin capacity and while ensuring that only non-conflicting items are packed together in each bin. In this work, we introduce the Bin Packing Problem with Compatible Categories (BPCC), a variant of the BPC in which items belong to conflicting or compatible categories, in contrast to the item-by-item incompatibility found in previous literature. It is a common problem in the context of last-mile distribution to nanostores located in densely populated areas. To efficiently solve real-life-sized instances of the problem, we propose a Variable Neighborhood Search (VNS) metaheuristic algorithm. Computational experiments suggest that the algorithm yields good solutions in very short times when compared to linear integer programming running on a high-performance computing environment.
http://arxiv.org/abs/1905.03427
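A hedged sketch of a VNS loop for this category-compatibility variant, assuming a first-fit packing as the inner heuristic and order swaps as the shaking move; none of these choices are claimed to match the paper's algorithm:

    import random

    def fits(bin_items, item, capacity, compatible, sizes, cat):
        """Item fits if capacity allows and its category is compatible with all bin items."""
        return (sum(sizes[i] for i in bin_items) + sizes[item] <= capacity and
                all(compatible[cat[item]][cat[i]] for i in bin_items))

    def first_fit(order, capacity, compatible, sizes, cat):
        bins = []
        for item in order:
            for b in bins:
                if fits(b, item, capacity, compatible, sizes, cat):
                    b.append(item)
                    break
            else:
                bins.append([item])
        return bins

    def vns(items, capacity, compatible, sizes, cat, iters=1000, k_max=4):
        best = first_fit(sorted(items, key=lambda i: -sizes[i]),
                         capacity, compatible, sizes, cat)
        order = [i for b in best for i in b]
        k = 1
        for _ in range(iters):
            shaken = order[:]
            for _ in range(k):                       # shaking: k random swaps
                a, b = random.sample(range(len(shaken)), 2)
                shaken[a], shaken[b] = shaken[b], shaken[a]
            candidate = first_fit(shaken, capacity, compatible, sizes, cat)
            if len(candidate) < len(best):           # fewer bins is better
                best, order, k = candidate, shaken, 1
            else:
                k = k % k_max + 1                    # move to next neighborhood
        return best

    # Toy instance: two mutually incompatible categories, capacity 5.
    sizes = [4, 3, 3, 2, 2, 1]; cat = [0, 0, 1, 1, 0, 1]
    compatible = [[True, False], [False, True]]
    print(len(vns(list(range(6)), 5, compatible, sizes, cat)))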
Targeted sentiment analysis (TSA), also known as aspect based sentiment analysis (ABSA), aims at detecting fine-grained sentiment polarity towards targets in a given opinion document. Due to the lack of labeled datasets and effective technology, TSA had been intractable for many years. The newly released datasets and the rapid development of deep learning technologies are key enablers for the recent significant progress made in this area. However, the TSA tasks have been defined in various ways with different understandings towards basic concepts like target' and
aspect’. In this paper, we categorize the different tasks and highlight the differences in the available datasets and their specific tasks. We then further discuss the challenges related to data collection and data annotation which are overlooked in many previous studies.
http://arxiv.org/abs/1905.03423
Contemporary person re-identification (Re-ID) methods usually require access to data from the deployment camera network during training in order to perform well. This is because contemporary Re-ID models trained on one dataset do not generalise to other camera networks due to the domain-shift between datasets. This requirement is often the bottleneck for deploying Re-ID systems in practical security or commercial applications, as it may be impossible to collect this data in advance or prohibitively costly to annotate it. This paper alleviates this issue by proposing a simple baseline for domain generalizable~(DG) person re-identification: learning a Re-ID model from a set of source domains that is suitable for application to unseen datasets out of the box, without any model updating. Specifically, we observe that the domain discrepancy in Re-ID is due to style and content variance across datasets, and demonstrate that appropriate Instance and Feature Normalization alleviates much of the resulting domain-shift in deep Re-ID models. Instance Normalization~(IN) in early layers filters out style statistic variations, and Feature Normalization~(FN) in deep layers is able to further eliminate disparity in content statistics. Compared to contemporary alternatives, this approach is extremely simple to implement, while being faster to train and test, thus making it an extremely valuable baseline for implementing Re-ID in practice. With a few lines of code, it increases rank-1 Re-ID accuracy by 11.7\%, 28.9\%, 10.1\% and 6.3\% on the VIPeR, PRID, GRID, and i-LIDS benchmarks respectively. Source code will be made available.
http://arxiv.org/abs/1905.03422
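A hedged PyTorch sketch of the recipe above: Instance Normalization in the early convolutional stages to filter style statistics, and L2 feature normalization on the deep embedding. The tiny backbone and exact layer placement are illustrative assumptions; the paper applies these ideas inside a full Re-ID network:

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class DGReIDBaseline(nn.Module):
        def __init__(self, num_classes=751, feat_dim=256):
            super().__init__()
            # Early layers with Instance Normalization to filter style variations.
            self.early = nn.Sequential(
                nn.Conv2d(3, 64, 3, stride=2, padding=1),
                nn.InstanceNorm2d(64, affine=True), nn.ReLU(),
                nn.Conv2d(64, 128, 3, stride=2, padding=1),
                nn.InstanceNorm2d(128, affine=True), nn.ReLU(),
            )
            # Deeper layers (BatchNorm as a common default here).
            self.deep = nn.Sequential(
                nn.Conv2d(128, 256, 3, stride=2, padding=1),
                nn.BatchNorm2d(256), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1),
            )
            self.embed = nn.Linear(256, feat_dim)
            self.classifier = nn.Linear(feat_dim, num_classes)

        def forward(self, x):
            f = self.deep(self.early(x)).flatten(1)
            f = self.embed(f)
            f = F.normalize(f, p=2, dim=1)  # Feature Normalization on the deep embedding
            return f, self.classifier(f)

    feat, logits = DGReIDBaseline()(torch.randn(2, 3, 256, 128))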
Thanks to recent advances in Deep Neural Networks (DNNs), face recognition systems have achieved high accuracy in the classification of a large number of face images. However, recent works demonstrate that DNNs can be vulnerable to adversarial examples, raising concerns about the robustness of face recognition systems. In particular, adversarial examples that are not restricted to small perturbations pose more serious risks, since conventional certified defenses might be ineffective against them. To shed light on the vulnerability of face recognition systems to this type of adversarial example, we propose a flexible and efficient method to generate unrestricted adversarial examples using image translation techniques. Our method enables us to translate a source image into any desired facial appearance with large perturbations, so that target face recognition systems can be deceived. We demonstrate through our experiments that our method achieves about $90\%$ and $30\%$ attack success rates under white-box and black-box settings, respectively. We also illustrate that our generated images are perceptually realistic and maintain personal identity, while the perturbations are large enough to defeat certified defenses.
http://arxiv.org/abs/1905.03421
Testing and evaluation is a critical step in the development and deployment of connected and automated vehicles (CAVs), and yet there is no systematic framework to generate a testing scenario library. This paper aims to provide a general framework to solve the testing scenario library generation (TSLG) problem for different scenario types, CAV models, and performance metrics. In Part I of the paper, four research questions are identified: (1) scenario description, (2) metric design, (3) library generation, and (4) CAV evaluation. To answer these questions, a unified framework is proposed. First, the operational design domain of CAVs is considered for scenario description and decision variable formulation. Second, a set of incremental performance metrics is designed, covering safety, functionality, mobility, and rider’s comfort. Third, a new definition of criticality is proposed as a combination of maneuver challenge and exposure frequency, and a critical scenario searching method is designed based on multi-start optimization and the seed-fill method. Finally, with the generated library, CAVs can be evaluated through three steps: scenario sampling, field testing, and index value estimation. The proposed framework is theoretically proved to obtain accurate evaluation results with far fewer tests than the public road test method. In Part II of the paper, three case studies are investigated to demonstrate the proposed methodologies, and reinforcement learning based techniques are applied to enhance the method under high-dimensional scenarios.
http://arxiv.org/abs/1905.03419
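One hedged reading of the criticality definition above (the product form and symbols are our illustration; the paper specifies the exact combination):
\[
V(x) = P(x)\, C(x),
\]
where, for a scenario $x$ in the operational design domain, $P(x)$ is the exposure frequency estimated from naturalistic driving data and $C(x)$ is the maneuver challenge for the CAV under test; scenarios with high $V(x)$ are the critical ones retained in the library.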
Deep Learning (DL) has become a crucial technology for Artificial Intelligence (AI). It is a powerful technique to automatically extract high-level features from complex data, which can be exploited for applications such as computer vision, natural language processing, cybersecurity, and communications. For the particular case of computer vision, several algorithms, such as real-time object detection in videos, have been proposed, and they work well on desktop GPUs and distributed computing platforms. However, these algorithms are still too heavy for mobile and embedded visual applications. The rapid spread of smart portable devices and the emerging 5G network are introducing new smart multimedia applications in mobile environments. As a consequence, the possibility of implementing deep neural networks in mobile environments has attracted many researchers. This paper presents emerging deep learning acceleration techniques that can enable the delivery of real-time visual recognition into the hands of end users, anytime and anywhere.
http://arxiv.org/abs/1905.03418
In this paper, we study various theoretical properties of a class of prioritized inverse kinematics (PIK) solutions that can be considered as a class of (regulation or output tracking) control laws of a dynamical system with prioritized multiple outputs. We first develop tools to investigate nonsmoothness of PIK solutions and find a sufficient condition for nonsmoothness. It implies that existence and uniqueness of a joint trajectory satisfying the PIK solution cannot be guaranteed by the classical theorems. We therefore construct an alternative existence and uniqueness theorem that uses structural information of the PIK solution. Then, we narrow the class of PIK solutions down to the case in which all tasks are designed to follow some desired task trajectories and discover a few properties related to convergence. The study goes further to analyze stability of equilibrium points of the differential equation whose right-hand side is the PIK solution when all tasks are designed to reach some desired task positions. Finally, we furnish an example with a two-link manipulator that shows how our findings can be used to analyze the behavior of the joint trajectory generated from the PIK solution.
http://arxiv.org/abs/1905.03416
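For orientation, the classical two-priority PIK solution that this class of control laws generalizes takes the standard textbook form
\[
\dot{q} = J_1^{+}\dot{x}_1 + (J_2 N_1)^{+}\big(\dot{x}_2 - J_2 J_1^{+}\dot{x}_1\big),
\qquad N_1 = I - J_1^{+}J_1,
\]
where $J_i$ is the Jacobian of the priority-$i$ task, $J_i^{+}$ its pseudoinverse, and $N_1$ the null-space projector of the primary task. The pseudoinverse is discontinuous wherever the matrix changes rank, which is one source of the nonsmoothness analyzed above.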
In this paper, we present a novel framework to detect line segments in man-made environments. Specifically, we propose to describe junctions, line segments and relationships between them with a simple graph, which is more structured and informative than end-point representation used in existing line segment detection methods. In order to extract a line segment graph from an image, we further introduce the PPGNet, a convolutional neural network that directly infers a graph from an image. We evaluate our method on published benchmarks including York Urban and Wireframe datasets. The results demonstrate that our method achieves satisfactory performance and generalizes well on all the benchmarks. The source code of our work is available at \url{https://github.com/svip-lab/PPGNet}.
http://arxiv.org/abs/1905.03415
This letter describes a network that is able to capture spatiotemporal correlations over arbitrary timestamps. The proposed scheme operates as a complementary, extended network over spatiotemporal regions. Recently, multimodal fusion has been extensively researched in deep learning. For action recognition, the spatial and temporal streams are vital components of deep Convolutional Neural Networks (CNNs), but reducing the occurrence of overfitting and fusing these two streams remain open problems. The existing fusion approach is simply to average the two streams. To this end, we propose a correlation network with Shannon fusion for learning from a pretrained CNN. Long-range video may contain spatiotemporal correlations over arbitrary times, and these correlations can be captured using simple fully connected layers to form the correlation network. This is found to be complementary to existing network fusion methods. We evaluate our approach on the UCF-101 and HMDB-51 datasets, and the resulting improvement in accuracy demonstrates the importance of multimodal correlation.
http://arxiv.org/abs/1807.08291
A general and fast method is conceived for computing the cyclic convolution of n points, where n is a prime number. This method fully exploits the internal structure of the cyclic matrix, and hence reduces the multiplication complexity, measured in CPU time, by 50% compared with Winograd’s algorithm. In this paper, we only consider the real and complex fields due to their most important applications, but in general the idea behind this method can be extended to any finite field of interest. It is well known that the discrete Fourier transform (DFT) can be expressed in terms of cyclic convolution, so the method can also be utilized to compute the DFT when the block length is a prime.
http://arxiv.org/abs/1905.03398
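Two standard facts behind the last sentence above (stated here for reference, not as the paper's contribution): the circular convolution theorem,
\[
y = x \circledast h \;\Longleftrightarrow\; Y_k = X_k H_k,\qquad k = 0,\dots,n-1,
\]
with capital letters denoting $n$-point DFTs; and Rader's algorithm, which rewrites an $n$-point DFT with $n$ prime as a cyclic convolution of length $n-1$ by reindexing through a generator of the multiplicative group $\mathbb{Z}_n^{*}$. A fast prime-length cyclic convolution therefore yields a fast prime-length DFT.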
In recent years, attention models have been extensively used for person and vehicle re-identification. Most re-identification methods are designed to focus attention on key-point locations. However, depending on the orientation, the contribution of each key-point varies. In this paper, we present a novel dual-path adaptive attention model for vehicle re-identification (AAVER). The global appearance path captures macroscopic vehicle features, while the orientation-conditioned part appearance path learns to capture localized discriminative features by focusing attention on the most informative key-points. Through extensive experimentation, we show that the proposed AAVER method is able to accurately re-identify vehicles in unconstrained scenarios, yielding state-of-the-art results on the challenging VeRi-776 dataset. As a byproduct, the proposed system is also able to accurately predict vehicle key-points, showing an improvement of more than 7% over the state of the art.
http://arxiv.org/abs/1905.03397
Visual defect assessment is a form of anomaly detection, highly relevant for finding faults such as cracks and markings in surface inspection tasks like pavement and automotive part inspection. The task involves detecting the deviation/divergence of anomalous samples from normal ones. Two of the major challenges in supervised anomaly detection are the lack of labelled training data and the low availability of anomaly instances. Semi-supervised methods, which learn the underlying distribution of the normal samples and then measure the deviation/divergence from the estimated model as the anomaly score, have limitations in their overall ability to detect anomalies. This paper proposes the application of network-based deep transfer learning using convolutional neural networks (CNNs) for the task of anomaly detection. Single-class SVMs have been used in the past with some success; however, we hypothesize that deeper networks for single-class classification should perform better. Results obtained on established anomaly detection benchmarks as well as on a real-world dataset show that the proposed method clearly outperforms existing state-of-the-art methods, achieving an average area under the receiver operating characteristic curve of 0.99 for the tested datasets, which is an average improvement of 41% on CIFAR10, 20% on MNIST and 16% on the Cement Crack dataset.
http://arxiv.org/abs/1905.13147
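A hedged sketch of the transfer-learning scaffolding discussed above, paired with the one-class SVM baseline the abstract contrasts against (the paper argues deeper single-class networks do better; the backbone, layer choice, and hyperparameters here are illustrative assumptions):

    import torch
    import torchvision.models as models
    import torchvision.transforms as T
    from sklearn.svm import OneClassSVM

    backbone = models.resnet18(pretrained=True)
    backbone.fc = torch.nn.Identity()   # expose the 512-d penultimate features
    backbone.eval()

    preprocess = T.Compose([T.Resize((224, 224)), T.ToTensor(),
                            T.Normalize([0.485, 0.456, 0.406],
                                        [0.229, 0.224, 0.225])])

    @torch.no_grad()
    def embed(images):
        """images: list of PIL images; returns transferred CNN features."""
        batch = torch.stack([preprocess(im) for im in images])
        return backbone(batch).numpy()

    # Fit on normal (defect-free) samples only; negate decision_function
    # so larger values mean "more anomalous".
    # scorer = OneClassSVM(nu=0.05, kernel="rbf").fit(embed(normal_images))
    # anomaly_scores = -scorer.decision_function(embed(test_images))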
Evolution and learning are two of the fundamental mechanisms by which life adapts in order to survive and to transcend limitations. These biological phenomena inspired successful computational methods such as evolutionary algorithms and deep learning. Evolution relies on random mutations and on random genetic recombination. Here we show that learning to evolve, i.e. learning to mutate and recombine better than at random, improves the result of evolution in terms of fitness increase per generation and even in terms of attainable fitness. We use deep reinforcement learning to learn to dynamically adjust the strategy of evolutionary algorithms to varying circumstances. Our methods outperform classical evolutionary algorithms on combinatorial and continuous optimization problems.
http://arxiv.org/abs/1905.03389
Deep neural networks have yielded superior performance in many applications; however, the gradient computation in a deep model with millions of instances leads to a lengthy training process even with modern GPU/TPU hardware acceleration. In this paper, we propose AutoAssist, a simple framework to accelerate the training of a deep neural network. Typically, as the training procedure evolves, the amount of improvement to the current model from a stochastic gradient update on each instance varies dynamically. AutoAssist exploits this fact with a simple instance shrinking operation that filters out instances with relatively low marginal improvement to the current model, so that the computationally intensive gradient computations are performed on informative instances as much as possible. We prove that the proposed technique outperforms vanilla SGD with existing importance sampling approaches for linear SVM problems, and establish an O(1/k) convergence for strongly convex problems. In order to apply the proposed techniques to accelerate the training of deep models, we propose to jointly train a very lightweight Assistant network in addition to the original deep network, referred to as the Boss. The Assistant network is designed to gauge the importance of a given instance with respect to the current Boss, such that the shrinking operation can be applied in the batch generator. With careful design, we train the Boss and Assistant in a non-blocking and asynchronous fashion such that the overhead is minimal. We demonstrate that AutoAssist reduces the number of epochs by 40% for training a ResNet to the same test accuracy on an image classification dataset, and saves 30% of the training time needed for a transformer model to yield the same BLEU scores on a translation dataset.
http://arxiv.org/abs/1905.03381
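A hedged sketch of the instance-shrinking idea: a lightweight Assistant scores each instance, and the batch generator drops instances predicted to yield little marginal improvement. The logistic Assistant, the features it sees, and the keep-probability floor are illustrative assumptions, not the paper's design:

    import numpy as np

    class Assistant:
        """Tiny logistic model predicting whether an instance is still informative."""
        def __init__(self, dim):
            self.w = np.zeros(dim)

        def score(self, x):
            return 1.0 / (1.0 + np.exp(-x @ self.w))

        def update(self, x, informative, lr=0.1):
            # informative = 1 if the Boss's loss on x was high (large marginal value)
            self.w += lr * (informative - self.score(x)) * x

    def shrunk_batches(data, assistant, batch_size, keep_prob_floor=0.1):
        """Yield batches, skipping instances the Assistant deems uninformative."""
        batch = []
        for x in data:
            p = max(assistant.score(x), keep_prob_floor)  # never discard forever
            if np.random.rand() < p:
                batch.append(x)
            if len(batch) == batch_size:
                yield np.stack(batch)
                batch = []

    assistant = Assistant(dim=32)
    for batch in shrunk_batches(np.random.randn(512, 32), assistant, batch_size=64):
        pass  # the Boss would take a gradient step here, then update the Assistant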
The exponential growth of information on the Internet is a big challenge for information retrieval systems in generating relevant results. Novel approaches are required to reformat or expand user queries to generate a satisfactory response and increase recall and precision. Query expansion (QE) is a technique to broaden users’ queries by introducing additional tokens or phrases based on semantic similarity metrics. The tradeoff is the added computational complexity of finding semantically similar words, and a possible increase in noise during information retrieval. Despite several research efforts on this topic, QE has not yet been explored enough, and more work is needed on similarity matching and the composition of query terms, with the objective of retrieving a small set of the most appropriate responses. QE should be scalable, fast, and robust in handling complex queries, with a good response time and noise ceiling. In this paper, we propose Xu, an automated QE technique that uses high-dimensional clustering of word vectors and Datamuse API, an open-source query engine, to find semantically similar words. We implemented Xu as a command line tool and evaluated its performance using datasets containing news articles and human-generated QEs. The evaluation results show that Xu outperformed Datamuse, achieving about 88% accuracy with reference to the human-generated QE.
http://arxiv.org/abs/1808.09353
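A hedged sketch of the lookup step: Datamuse's public endpoint exposes a documented `ml` ("means like") parameter for semantically similar words; the surrounding expansion logic below is our illustration, not Xu's exact pipeline:

    import requests

    def datamuse_similar(term, max_results=10):
        """Fetch semantically similar words from the Datamuse API."""
        resp = requests.get("https://api.datamuse.com/words",
                            params={"ml": term, "max": max_results}, timeout=10)
        resp.raise_for_status()
        return [entry["word"] for entry in resp.json()]

    def expand_query(query, per_term=3):
        """Append the top similar words for each query term (toy expansion)."""
        expanded = []
        for term in query.split():
            expanded.append(term)
            expanded.extend(datamuse_similar(term, max_results=per_term))
        return " ".join(expanded)

    # print(expand_query("economic recession"))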
Image retrieval utilizes image descriptors to retrieve the images most similar to a given query image. Convolutional neural networks (CNNs) are becoming the dominant approach for extracting image descriptors for image retrieval. For low-power hardware implementations of image retrieval, the drawback of CNN-based feature descriptors is that they require hundreds of megabytes of storage. To address this problem, this paper applies deep model quantization and compression to CNNs in an ASIC chip for image retrieval. It is demonstrated that CNN-based feature descriptors can be extracted using as little as 2-bit weight quantization while delivering performance similar to that of a floating-point model for image retrieval. In addition, to implement CNNs in an ASIC, especially for large-scale images, the limited buffer size of the chips should be considered. To retrieve large-scale images, we propose an improved pooling strategy, region nested invariance pooling (RNIP), which uses cropped sub-images for the CNN. On-chip testing results show that integrating RNIP with the proposed 2-bit CNN model compression approach is capable of retrieving large-scale images.
http://arxiv.org/abs/1905.03362
Constraint answer set programming integrates answer set programming with constraint processing. System EZSMT+ is a constraint answer set programming tool that utilizes satisfiability modulo theories solvers for search. Its theoretical foundation lies in generalizations of Niemelä’s characterization of answer sets of a logic program via so-called level rankings.
http://arxiv.org/abs/1905.03334
Recent deep learning approaches have achieved impressive performance on speech enhancement and separation tasks. However, these approaches have not been investigated for separating mixtures of arbitrary sounds of different types, a task we refer to as universal sound separation, and it is unknown whether performance on speech tasks carries over to non-speech tasks. To study this question, we develop a universal dataset of mixtures containing arbitrary sounds, and use it to investigate the space of mask-based separation architectures, varying both the overall network architecture and the framewise analysis-synthesis basis for signal transformations. These network architectures include convolutional long short-term memory networks and time-dilated convolution stacks inspired by the recent success of time-domain enhancement networks like ConvTasNet. For the latter architecture, we also propose novel modifications that further improve separation performance. In terms of the framewise analysis-synthesis basis, we explore using either a short-time Fourier transform (STFT) or a learnable basis, as used in ConvTasNet, and for both of these bases, we examine the effect of window size. In particular, for STFTs, we find that longer windows (25-50 ms) work best for speech/non-speech separation, while shorter windows (2.5 ms) work best for arbitrary sounds. For learnable bases, shorter windows (2.5 ms) work best on all tasks. Surprisingly, for universal sound separation, STFTs outperform learnable bases. Our best methods produce an improvement in scale-invariant signal-to-distortion ratio of over 13 dB for speech/non-speech separation and close to 10 dB for universal sound separation.
http://arxiv.org/abs/1905.03330
Antarctic penguins are important ecological indicators – especially in the face of climate change. In this work, we present a deep learning based model for semantic segmentation of Adélie penguin colonies in high-resolution satellite imagery. To train our segmentation models, we take advantage of the Penguin Colony Dataset: a unique dataset with 2044 georeferenced cropped images from 193 Adélie penguin colonies in Antarctica. In the face of a scarcity of pixel-level annotation masks, we propose a weakly-supervised framework to effectively learn a segmentation model from weak labels. We use a classification network to filter out data unsuitable for the segmentation network. This segmentation network is trained with a specific loss function, based on the average activation, to effectively learn from the data with the weakly-annotated labels. Our experiments show that adding weakly-annotated training examples significantly improves segmentation performance, increasing the mean Intersection-over-Union from 42.3% to 60.0% on the Penguin Colony Dataset.
http://arxiv.org/abs/1905.03313
Current approaches for service composition (assemblies of atomic services) require developers to use: (a) domain-specific semantics to formalize services, which restrict the vocabulary for their descriptions, and (b) translation mechanisms for service retrieval to convert unstructured user requests to strongly-typed semantic representations. In our work, we argue that the effort of developing service descriptions, request translations, and matching mechanisms could be reduced by using unrestricted natural language, allowing both: (1) end-users to intuitively express their needs using natural language, and (2) service developers to develop services without relying on syntactic/semantic description languages. Although there are some natural language-based service composition approaches, they restrict service retrieval to syntactic/semantic matching. With recent developments in machine learning and natural language processing, we motivate the use of sentence embeddings, leveraging richer semantic representations of sentences for service description, matching and retrieval. Experimental results show that service composition development effort may be reduced by more than 44\% while keeping a high precision/recall when matching high-level user requests with low-level service method invocations.
http://arxiv.org/abs/1901.07910
Confirmatory Factor Analysis (CFA) is a particular form of factor analysis, most commonly used in social research. In confirmatory factor analysis, the researcher first develops a hypothesis about what factors they believe underlie the measures used, and may impose constraints on the model based on these a priori hypotheses. For example, if two factors account for the covariance in the measures and these factors are unrelated to one another, we can create a model where the correlation between factor X and factor Y is set to zero. Measures can then be obtained to assess how well the fitted model captures the covariance between all the items or measures in the model. Thus, if the results of statistical tests of model fit indicate a poor fit, the model will be rejected; a weak fit may be due to a variety of reasons. We introduce state-of-the-art techniques for performing CFA in the R language, and then present several examples of CFA with R on various datasets, illustrating several scenarios where CFA is relevant.
https://arxiv.org/abs/1905.05598
Point cloud registration is a key problem for computer vision applied to robotics, medical imaging, and other applications. This problem involves finding a rigid transformation from one point cloud into another so that they align. Iterative Closest Point (ICP) and its variants provide simple and easily-implemented iterative methods for this task, but these algorithms can converge to spurious local optima. To address local optima and other difficulties in the ICP pipeline, we propose a learning-based method, titled Deep Closest Point (DCP), inspired by recent techniques in computer vision and natural language processing. Our model consists of three parts: a point cloud embedding network, an attention-based module combined with a pointer generation layer, to approximate combinatorial matching, and a differentiable singular value decomposition (SVD) layer to extract the final rigid transformation. We train our model end-to-end on the ModelNet40 dataset and show in several settings that it performs better than ICP, its variants (e.g., Go-ICP, FGR), and the recently-proposed learning-based method PointNetLK. Beyond providing a state-of-the-art registration technique, we evaluate the suitability of our learned features transferred to unseen objects. We also provide preliminary analysis of our learned model to help understand whether domain-specific and/or global features facilitate rigid registration.
http://arxiv.org/abs/1905.03304
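For reference, the differentiable SVD layer mentioned above implements the standard weighted orthogonal Procrustes (Kabsch) solution: given soft correspondences $(x_i, y_i)$ from the matching module,
\[
H=\sum_i (x_i-\bar{x})(y_i-\bar{y})^{\top},\quad H=U\Sigma V^{\top},\quad
R=V\,\operatorname{diag}\!\big(1,1,\det(VU^{\top})\big)\,U^{\top},\quad t=\bar{y}-R\bar{x},
\]
where $\bar{x},\bar{y}$ are the point-cloud centroids and the determinant factor prevents reflections. This closed form is standard; only its use as an end-to-end trainable layer is specific to DCP.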
Due to the variability in characteristics of audio scenes, some scenes can naturally be recognized earlier than others. In this work, rather than using equal-length snippets for all scene categories, as is common in the literature, we study to which temporal extent an audio scene can be reliably recognized given state-of-the-art models. Moreover, as model fusion with deep network ensemble is prevalent in audio scene classification, we further study whether, and if so, when model fusion is necessary for this task. To achieve these goals, we employ two single-network systems relying on a convolutional neural network and a recurrent neural network for classification as well as early fusion and late fusion of these networks. Experimental results on the LITIS-Rouen dataset show that some scenes can be reliably recognized with a few seconds while other scenes require significantly longer durations. In addition, model fusion is shown to be the most beneficial when the signal length is short.
http://arxiv.org/abs/1811.01095
Convolutional Neural Networks (CNNs) are the state of the art for image classification tasks. In this paper, we briefly discuss the different components of a CNN and explain different CNN architectures for image classification, tracing advancements from LeNet-5 to the latest SENet model. We discuss the model description and training details of each model and draw a comparison among those models.
http://arxiv.org/abs/1905.03288
In most automatic speech recognition (ASR) systems, the audio signal is processed to produce a time series of sensor measurements (e.g., filterbank outputs). This time series encodes semantic information in a speaker-dependent way. An earlier paper showed how to use the sequence of sensor measurements to derive an “inner” time series that is unaffected by any previous invertible transformation of the sensor measurements. The current paper considers two or more speakers, who mimic one another in the following sense: when they say the same words, they produce sensor states that are invertibly mapped onto one another. It follows that the inner time series of their utterances must be the same when they say the same words. In other words, the inner time series encodes their speech in a manner that is speaker-independent. Consequently, the ASR training process can be simplified by collecting and labelling the inner time series of the utterances of just one speaker, instead of training on the sensor time series of the utterances of a large variety of speakers. A similar argument suggests that the inner time series of music is instrument-independent. This is demonstrated in experiments on monophonic electronic music.
http://arxiv.org/abs/1905.03278
Compared to DSLR cameras, smartphone cameras have smaller sensors, which limits their spatial resolution; smaller apertures, which limits their light-gathering ability; and smaller pixels, which reduces their signal-to-noise ratio. The use of color filter arrays (CFAs) requires demosaicing, which further degrades resolution. In this paper, we supplant the use of traditional demosaicing in single-frame and burst photography pipelines with a multiframe super-resolution algorithm that creates a complete RGB image directly from a burst of CFA raw images. We harness natural hand tremor, typical in handheld photography, to acquire a burst of raw frames with small offsets. These frames are then aligned and merged to form a single image with red, green, and blue values at every pixel site. This approach, which includes no explicit demosaicing step, serves to both increase image resolution and boost the signal-to-noise ratio. Our algorithm is robust to challenging scene conditions: local motion, occlusion, or scene changes. It runs at 100 milliseconds per 12-megapixel RAW input burst frame on mass-produced mobile phones. Specifically, the algorithm is the basis of the Super-Res Zoom feature, as well as the default merge method in Night Sight mode (whether zooming or not) on Google’s flagship phone.
http://arxiv.org/abs/1905.03277
We present a conceptually simple yet effective algorithm to detect wireframes in a given image. Compared to the previous methods which first predict an intermediate heat map and then extract straight lines with heuristic algorithms, our method is end-to-end trainable and can directly output a vectorized wireframe that contains semantically meaningful and geometrically salient junctions and lines. To better understand the quality of the outputs, we propose a new metric for wireframe evaluation that penalizes overlapped line segments and incorrect line connectivities. We conduct extensive experiments and show that our method significantly outperforms the previous state-of-the-art wireframe and line extraction algorithms. We hope our simple approach can serve as a baseline for future wireframe parsing studies. Code has been made publicly available at https://github.com/zhou13/lcnn.
http://arxiv.org/abs/1905.03246
This paper addresses the problem of 3D human pose and shape estimation from a single image. Previous approaches consider a parametric model of the human body, SMPL, and attempt to regress the model parameters that give rise to a mesh consistent with image evidence. This parameter regression has been a very challenging task, with model-based approaches underperforming nonparametric solutions in terms of pose estimation. In our work, we propose to relax this heavy reliance on the model’s parameter space. We still retain the topology of the SMPL template mesh, but instead of predicting model parameters, we directly regress the 3D locations of the mesh vertices. This is a heavy task for a typical network, but our key insight is that the regression becomes significantly easier using a Graph-CNN. This architecture allows us to explicitly encode the template mesh structure within the network and leverage the spatial locality the mesh has to offer. Image-based features are attached to the mesh vertices, and the Graph-CNN is responsible for processing them on the mesh structure, while the regression target for each vertex is its 3D location. Having recovered the complete 3D geometry of the mesh, if we still require a specific model parametrization, this can be reliably regressed from the vertex locations. We demonstrate the flexibility and effectiveness of our proposed graph-based mesh regression by attaching different types of features to the mesh vertices. In all cases, we outperform the comparable baselines relying on model parameter regression, while we also achieve state-of-the-art results among model-based pose estimation approaches.
http://arxiv.org/abs/1905.03244
We analyze the stability of recurrent networks, specifically reservoir computing models, during training by evaluating the eigenvalue spectra of the reservoir dynamics. To circumvent the instability arising in examining a closed-loop reservoir system with feedback, we propose to break the closed loop. Essentially, we unroll the reservoir dynamics over time while incorporating the feedback effects that preserve the overall temporal integrity of the system. We evaluate our methodology for fixed-point and time-varying targets with least squares regression and FORCE training, respectively. Our analysis establishes the eigenvalue spectrum, namely the shrinking of the spectral circle as training progresses, as a valid and effective metric to gauge the convergence of training as well as the convergence of the chaotic activity of the reservoir toward stable states.
http://arxiv.org/abs/1905.03219
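A hedged numerical sketch of the diagnostic above: fold the readout-feedback loop into an effective recurrent matrix and inspect its eigenvalue spectrum across training checkpoints (the shrinking spectral circle would show up as a decreasing spectral radius). The rank-one linearization W_eff = W + w_fb w_out^T and the toy readout trajectory are our simplifying assumptions, not the paper's exact unrolling:

    import numpy as np

    N = 200
    rng = np.random.default_rng(0)
    W = 1.2 * rng.standard_normal((N, N)) / np.sqrt(N)  # spectral radius ~1.2 (chaotic)
    w_fb = rng.standard_normal((N, 1))
    w_fb /= np.linalg.norm(w_fb)

    def closed_loop_spectrum(w_out):
        """Eigenvalues of the reservoir with the feedback loop folded in."""
        W_eff = W + w_fb @ w_out.T       # rank-one feedback-readout coupling
        return np.linalg.eigvals(W_eff)

    # Hypothetical checkpoints of the readout w_out during training:
    for step, scale in enumerate([0.0, 0.1, 0.2]):
        eig = closed_loop_spectrum(-scale * w_fb)  # toy readout, not FORCE output
        print(f"checkpoint {step}: spectral radius = {np.abs(eig).max():.3f}")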
Endoscopic artifacts are a core challenge in facilitating the diagnosis and treatment of diseases in hollow organs. Precise detection of specific artifacts like pixel saturation, motion blur, specular reflections, bubbles and debris is essential for high-quality frame restoration and is crucial for realizing reliable computer-assisted tools for improved patient care. At present, most endoscopy videos are not analyzed due to the abundant presence of multi-class artifacts in video frames. Through the endoscopic artifact detection (EAD 2019) challenge, we address this key bottleneck by targeting the accurate identification and localization of endoscopic frame artifacts, enabling further key quantitative analyses of otherwise unusable video frames, such as mosaicking and 3D reconstruction, which are crucial for delivering improved patient care. This paper summarizes the challenge tasks and describes the dataset and evaluation criteria established in the EAD 2019 challenge.
http://arxiv.org/abs/1905.03209
Segmenting aerial images holds great potential for surveillance and scene understanding of urban areas, providing a means for automatic reporting of the different events that happen in inhabited areas and thereby promoting public safety and traffic management applications. Since the wide adoption of convolutional neural network methods, the accuracy of semantic segmentation algorithms can easily surpass 80% when a robust dataset is provided. Despite this success, deploying a pre-trained segmentation model to survey a new city that is not included in the training set significantly decreases accuracy, due to the domain shift between the source dataset on which the model is trained and the new target domain of the new city's images. In this paper, we address this issue and consider the challenge of domain adaptation in semantic segmentation of aerial images. We design an algorithm that reduces the domain shift impact using Generative Adversarial Networks (GANs). In the experiments, we test the proposed methodology on the International Society for Photogrammetry and Remote Sensing (ISPRS) semantic segmentation dataset and find that our method improves the overall accuracy from 35% to 52% when passing from the Potsdam domain (considered the source domain) to the Vaihingen domain (considered the target domain). In addition, the method efficiently recovers the classes inverted by sensor variation, improving their average segmentation accuracy from 14% to 61%.
http://arxiv.org/abs/1905.03198
This paper presents a new Unified pre-trained Language Model (UniLM) that can be fine-tuned for both natural language understanding and generation tasks. The model is pre-trained using three types of language modeling objectives: unidirectional (both left-to-right and right-to-left), bidirectional, and sequence-to-sequence prediction. The unified modeling is achieved by employing a shared Transformer network and utilizing specific self-attention masks to control what context the prediction conditions on. We can fine-tune UniLM as a unidirectional decoder, a bidirectional encoder, or a sequence-to-sequence model to support various downstream natural language understanding and generation tasks. UniLM compares favorably with BERT on the GLUE benchmark, and the SQuAD 2.0 and CoQA question answering tasks. Moreover, our model achieves new state-of-the-art results on three natural language generation tasks, including improving the CNN/DailyMail abstractive summarization ROUGE-L to 40.63 (2.16 absolute improvement), pushing the CoQA generative question answering F1 score to 82.5 (37.1 absolute improvement), and the SQuAD question generation BLEU-4 to 22.88 (6.50 absolute improvement).
http://arxiv.org/abs/1905.03197
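A hedged sketch of the self-attention masks that implement the three objectives described above (1 = may attend, 0 = blocked); the tensor layout is illustrative, not UniLM's exact conventions:

    import torch

    def unilm_masks(src_len, tgt_len):
        n = src_len + tgt_len
        bidirectional = torch.ones(n, n)              # every token sees every token
        left_to_right = torch.tril(torch.ones(n, n))  # token i sees tokens <= i
        seq2seq = torch.zeros(n, n)
        seq2seq[:src_len, :src_len] = 1               # source: bidirectional within source
        seq2seq[src_len:, :src_len] = 1               # target: sees the whole source
        seq2seq[src_len:, src_len:] = torch.tril(     # ... and only its own prefix
            torch.ones(tgt_len, tgt_len))
        return bidirectional, left_to_right, seq2seq

    bi, l2r, s2s = unilm_masks(src_len=4, tgt_len=3)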
Automation applications are pushing the deployment of many high-DoF manipulators in warehouse and manufacturing environments. This has motivated many efforts on optimizing manipulation tasks involving a single arm. Coordinating multiple arms for manipulation, however, introduces additional computational challenges arising from the increased DoFs, as well as the combinatorial increase in the available operations that many manipulators can perform, including handoffs between arms. The focus here is on pick-and-place tasks, which require a sequence of handoffs to be executed, with the aim of achieving computational efficiency, asymptotic optimality and practical anytime performance. The paper leverages recent advances in multi-robot motion planning for high-DoF systems to propose a novel multi-modal extension of the dRRT* algorithm. The key insight is that, instead of naively solving a sequence of motion planning problems, it is computationally advantageous to directly explore the composite space of the integrated multi-arm task and motion planning problem, given input sets of possible pick and handoff configurations. Asymptotic optimality guarantees are possible by sampling additional picks and handoffs over time. The evaluation shows that the approach finds initial solutions fast and improves their quality over time. It also succeeds in finding solutions to harder problem instances relative to alternatives and can scale effectively as the number of robots increases.
http://arxiv.org/abs/1905.03179
The Connectionist Temporal Classification (CTC) objective has achieved great success in sequence-to-sequence analysis tasks such as automatic speech recognition (ASR) and scene text recognition (STR). These applications can use the CTC objective function to train recurrent neural networks (RNNs), and decode the outputs of the RNNs during inference. While hardware architectures for RNNs have been studied, hardware-based CTC decoders are desired for high-speed CTC-based inference systems. This paper, for the first time, provides a low-complexity and memory-efficient approach to building a CTC decoder based on beam search decoding. First, we improve the beam search decoding algorithm to save storage space. Second, we compress a dictionary (reduced from 26.02MB to 1.12MB) and use it as the language model, while searching this dictionary remains trivial. Finally, a fixed-point CTC decoder for an English ASR task and an STR task using the proposed method is implemented in C++. It is shown that the proposed method has little precision loss compared with its floating-point counterpart. Our experiments demonstrate that the compression ratios of the storage required by the proposed beam search decoding algorithm are 29.49 (ASR) and 17.95 (STR).
https://arxiv.org/abs/1905.03175
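For orientation, the simplest CTC decoding rule (greedy best-path) just collapses repeated labels and removes blanks; the paper's contribution is a memory-efficient hardware beam search plus a compressed dictionary language model on top of this basic machinery. The sketch below is a software baseline, not the proposed decoder:

    import numpy as np

    def ctc_greedy_decode(log_probs, blank=0):
        """log_probs: (T, V) per-frame log posteriors; index `blank` is the CTC blank."""
        best_path = np.argmax(log_probs, axis=1)  # best label per frame
        decoded, prev = [], blank
        for label in best_path:
            if label != prev and label != blank:  # collapse repeats, drop blanks
                decoded.append(int(label))
            prev = label
        return decoded

    T, V = 8, 5
    rng = np.random.default_rng(1)
    print(ctc_greedy_decode(np.log(rng.dirichlet(np.ones(V), size=T))))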
Text effects are combinations of visual elements such as outlines, colors and textures of text, which can dramatically improve its artistry. Although text effects are extensively utilized in the design industry, they are usually created by human experts due to their extreme complexity, which is laborious and impractical for normal users. In recent years, some efforts have been made toward automatic text effects transfer; however, the lack of data limits the capability of transfer models. To address this problem, we introduce a new text effects dataset, TE141K, with 141,081 text effects/glyph pairs in total. Our dataset consists of 152 professionally designed text effects, rendered on glyphs including English letters, Chinese characters, Arabic numerals, etc. To the best of our knowledge, this is the largest dataset for text effects transfer to date. Based on this dataset, we propose a baseline approach named Text Effects Transfer GAN (TET-GAN), which supports the transfer of all 152 styles in one model and can efficiently extend to new styles. Finally, we conduct a comprehensive comparison in which 14 style transfer models are benchmarked. Experimental results demonstrate the superiority of TET-GAN both qualitatively and quantitatively, and indicate that our dataset is effective and challenging.
http://arxiv.org/abs/1905.03646
Artificial Intelligence (AI) has the opportunity to revolutionize the way the United States Department of Defense (DoD) and Intelligence Community (IC) address the challenges of evolving threats, data deluge, and rapid courses of action. Developing an end-to-end artificial intelligence system involves parallel development of different pieces that must work together in order to provide capabilities that can be used by decision makers, warfighters and analysts. These pieces include data collection, data conditioning, algorithms, computing, robust artificial intelligence, and human-machine teaming. While much of the popular press today surrounds advances in algorithms and computing, most modern AI systems leverage advances across numerous different fields. Further, while certain components may not be as visible to end-users as others, our experience has shown that each of these interrelated components play a major role in the success or failure of an AI system. This article is meant to highlight many of these technologies that are involved in an end-to-end AI system. The goal of this article is to provide readers with an overview of terminology, technical details and recent highlights from academia, industry and government. Where possible, we indicate relevant resources that can be used for further reading and understanding.
http://arxiv.org/abs/1905.03592
The Fact Extraction and VERification (FEVER) shared task was launched to support the development of systems able to verify claims by extracting supporting or refuting facts from raw text. The shared task organizers provide a large-scale dataset for the consecutive steps involved in claim verification, in particular, document retrieval, fact extraction, and claim classification. In this paper, we present our claim verification pipeline approach, which, according to the preliminary results, scored third in the shared task, out of 23 competing systems. For the document retrieval, we implemented a new entity linking approach. In order to be able to rank candidate facts and classify a claim on the basis of several selected facts, we introduce two extensions to the Enhanced LSTM (ESIM).
http://arxiv.org/abs/1809.01479
Addressing uncertainty is critical for autonomous systems to robustly adapt to the real world. We formulate the problem of model uncertainty as a continuous Bayes-Adaptive Markov Decision Process (BAMDP), where an agent maintains a posterior distribution over latent model parameters given a history of observations and maximizes its expected long-term reward with respect to this belief distribution. Our algorithm, Bayesian Policy Optimization, builds on recent policy optimization algorithms to learn a universal policy that navigates the exploration-exploitation trade-off to maximize the Bayesian value function. To address challenges from discretizing the continuous latent parameter space, we propose a new policy network architecture that encodes the belief distribution independently from the observable state. Our method significantly outperforms algorithms that address model uncertainty without explicitly reasoning about belief distributions and is competitive with state-of-the-art Partially Observable Markov Decision Process solvers.
http://arxiv.org/abs/1810.01014
We propose a method for room layout estimation that does not rely on the typical box approximation or Manhattan world assumption. Instead, we reformulate the geometry inference problem as an instance detection task, which we solve by directly regressing 3D planes using an R-CNN. We then use a variant of probabilistic clustering to combine the 3D planes regressed at each frame in a video sequence, together with their respective camera poses, into a single global 3D room layout estimate. Finally, we showcase results that make no assumptions about perpendicular alignment, so the method can deal effectively with walls in any alignment.
http://arxiv.org/abs/1905.03105
The potential benefit of hybrid X-ray and MR imaging in the interventional environment is large due to the combination of fast imaging with high contrast variety. However, a vast amount of existing image enhancement methods requires the image information of both modalities to be present in the same domain. To unlock this potential, we present a solution to image-to-image translation from MR projections to corresponding X-ray projection images. The approach is based on a state-of-the-art image generator network that is modified to fit the specific application. Furthermore, we propose the inclusion of a gradient map in the loss function to allow the network to emphasize high-frequency details in image generation. Our approach is capable of creating X-ray projection images with natural appearance. Additionally, our extensions show clear improvement compared to the baseline method.
http://arxiv.org/abs/1804.03955
This paper proposes a method for machine learning from unlabeled data in the form of a time-series. The mapping that is learned is shown to extract slowly evolving information that would be useful for control applications, while efficiently filtering out unwanted, higher-frequency noise. The method consists of training a feedforward artificial neural network with backpropagation using two opposing objectives. The first of these is to minimize the squared changes in activations between time steps of each unit in the network. This “temporal smoothing” has the effect of correlating inputs that occur close in time with outputs that are close in the L2-norm. The second objective is to maximize the log determinant of the covariance matrix of activations in each layer of the network. This objective ensures that information from each layer is passed through to the next. This second objective acts as a balance to the first, which on its own would result in a network with all input weights equal to zero.
http://arxiv.org/abs/1905.03100
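One hedged way to write the two opposing objectives above as a single loss (our notation; the weighting $\lambda$ and the jitter $\epsilon I$ are illustrative):
\[
\mathcal{L} = \sum_{t}\big\lVert h_t - h_{t-1} \big\rVert_2^{2}
\;-\; \lambda \sum_{\ell} \log\det\!\big(\operatorname{Cov}(H^{(\ell)}) + \epsilon I\big),
\]
where $h_t$ stacks the unit activations at time step $t$ and $H^{(\ell)}$ collects layer $\ell$'s activations over the sequence. The first term is the temporal smoothing penalty; the second keeps each layer's activation covariance well-conditioned so information passes through, preventing the all-zero-weights collapse noted above.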
Audio-driven 3D facial animation has been widely explored, but achieving realistic, human-like performance is still unsolved. This is due to the lack of available 3D datasets, models, and standard evaluation metrics. To address this, we introduce a unique 4D face dataset with about 29 minutes of 4D scans captured at 60 fps and synchronized audio from 12 speakers. We then train a neural network on our dataset that factors identity from facial motion. The learned model, VOCA (Voice Operated Character Animation) takes any speech signal as input - even speech in languages other than English - and realistically animates a wide range of adult faces. Conditioning on subject labels during training allows the model to learn a variety of realistic speaking styles. VOCA also provides animator controls to alter speaking style, identity-dependent facial shape, and pose (i.e. head, jaw, and eyeball rotations) during animation. To our knowledge, VOCA is the only realistic 3D facial animation model that is readily applicable to unseen subjects without retargeting. This makes VOCA suitable for tasks like in-game video, virtual reality avatars, or any scenario in which the speaker, speech, or language is not known in advance. We make the dataset and model available for research purposes at this http URL
http://arxiv.org/abs/1905.03079
We present state-of-the-art automatic speech recognition (ASR) systems employing a standard hybrid DNN/HMM architecture compared to an attention-based encoder-decoder design for the LibriSpeech task. Detailed descriptions of the system development, including model design, pretraining schemes, training schedules, and optimization approaches, are provided for both system architectures. Both the hybrid DNN/HMM and attention-based systems employ bi-directional LSTMs for acoustic modeling/encoding. For language modeling, we employ both LSTM and Transformer based architectures. All our systems are built using RWTH's open-source toolkits RASR and RETURNN. To the best of the authors' knowledge, the results obtained when training on the full LibriSpeech training set are the best published to date, both for the hybrid DNN/HMM and the attention-based systems. Our single hybrid system even outperforms previous results obtained from combining eight single systems. Our comparison shows that on the LibriSpeech 960h task, the hybrid DNN/HMM system outperforms the attention-based system by 15% relative on the clean and 40% relative on the other test sets in terms of word error rate. Moreover, experiments on a reduced 100h subset of the LibriSpeech training corpus show an even more pronounced margin between the hybrid DNN/HMM and attention-based architectures.
http://arxiv.org/abs/1905.03072
This paper presents an efficient model for object detection from LiDAR sensors in self-driving cars and a strategy to train the model using data from a different type of LiDAR sensor. Currently, the highest performing algorithms for object detection from LiDAR measurements are based on neural networks. Training these networks using supervised learning requires large annotated datasets. This has led to the situation that most research using neural networks for object detection from LiDAR point clouds is done on a very small number of publicly available datasets and a very small number of sensor types. This paper uses an existing annotated dataset to train a neural network that can be used with a LiDAR sensor that has a lower resolution than the one used for recording the annotated dataset. This is done by simulating data from the lower resolution LiDAR sensor based on the higher resolution dataset. Furthermore, improvements to models that use LiDAR range images for object detection are presented. The results are validated both on simulated sensor data and on data from an actual lower resolution sensor mounted to a research vehicle. It is shown that the model can predict objects from 360° range images in real time.
http://arxiv.org/abs/1905.03066