Recent advances in image super-resolution (SR) have explored the power of deep learning to achieve better reconstruction performance. However, the feedback mechanism, which commonly exists in the human visual system, has not been fully exploited in existing deep-learning-based image SR methods. In this paper, we propose an image super-resolution feedback network (SRFBN) to refine low-level representations with high-level information. Specifically, we use hidden states in an RNN with constraints to realize this feedback manner. A feedback block is designed to handle the feedback connections and to generate powerful high-level representations. The proposed SRFBN comes with a strong early reconstruction ability and can create the final high-resolution image step by step. In addition, we introduce a curriculum learning strategy to make the network well suited to more complicated tasks, where the low-resolution images are corrupted by multiple types of degradation. Extensive experimental results demonstrate the superiority of the proposed SRFBN in comparison with state-of-the-art methods. Code is available at https://github.com/Paper99/SRFBN_CVPR19.
http://arxiv.org/abs/1903.09814
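A minimal sketch of the feedback idea described above, assuming a PyTorch implementation; the channel count, fusion layer, and block body are illustrative assumptions rather than the authors' exact architecture. The high-level output of step t-1 plays the role of an RNN hidden state and is fused with the low-level input features at step t:

```python
# Minimal sketch of a feedback block (assumed structure, not the paper's exact one):
# the previous step's high-level output is fed back and fused with the low-level
# input features, like an RNN hidden state.
import torch
import torch.nn as nn

class FeedbackBlock(nn.Module):
    def __init__(self, channels=64):
        super().__init__()
        # Fuse current low-level features with the previous high-level state.
        self.fuse = nn.Conv2d(2 * channels, channels, kernel_size=1)
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1), nn.PReLU(),
            nn.Conv2d(channels, channels, 3, padding=1), nn.PReLU(),
        )

    def forward(self, low_level, hidden):
        if hidden is None:  # first step: no feedback available yet
            hidden = torch.zeros_like(low_level)
        x = self.fuse(torch.cat([low_level, hidden], dim=1))
        return self.body(x)  # becomes the hidden state for the next step

block = FeedbackBlock()
feat = torch.randn(1, 64, 32, 32)  # low-level features from a shallow extractor
hidden = None
for step in range(4):  # each unrolled step could feed a reconstruction head
    hidden = block(feat, hidden)
```

At each unrolled step, `hidden` would be passed to a reconstruction head, which is what yields the early, step-by-step HR estimates the abstract mentions.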
End-to-end dialogue generation has achieved promising results without using handcrafted features and attributes specific to each task and corpus. However, a fatal drawback of such approaches is that they are unable to generate informative utterances, which limits their use in real-world conversational applications. This paper attempts to generate diverse and informative responses with a variational generation model, which contains a joint attention mechanism conditioned on the information from both dialogue contexts and extra knowledge.
http://arxiv.org/abs/1903.09813
Noise in speckle-prone optical coherence tomography tends to obfuscate important details necessary for medical diagnosis. In this paper, a denoising approach that preserves disease characteristics on retinal optical coherence tomography images in ophthalmology is presented. By combining a deep convolutional autoencoder with a pre-trained ResNet image classifier as regularizer, the perceptibility of delicate details is encouraged and only uninformative background noise is filtered out. With our approach, a higher peak signal-to-noise ratio of $\mathrm{PSNR} = 31.2\,\mathrm{dB}$ and a higher classification accuracy of $\mathrm{ACC} = 85.0\,\%$ can be achieved for denoised images compared to state-of-the-art denoising with $\mathrm{PSNR} = 29.4\,\mathrm{dB}$ or $\mathrm{ACC} = 70.3\,\%$, depending on the method. It is shown that regularized autoencoders are capable of denoising retinal OCT images without blurring details of diseases.
http://arxiv.org/abs/1903.09809
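A hedged sketch of the regularized training objective described above, assuming a PyTorch setup: a pixel-wise reconstruction loss plus a cross-entropy term from a frozen, pre-trained classifier that discourages blurring disease-relevant details. The toy networks, input size, and weight `lam` are assumptions for illustration.

```python
# Sketch of a classifier-regularized denoising loss (weighting and shapes assumed).
import torch
import torch.nn as nn
import torch.nn.functional as F

def denoising_loss(autoencoder, classifier, noisy, labels, lam=0.1):
    # The classifier's parameters are assumed frozen (requires_grad=False);
    # only its gradient with respect to the denoised image guides training.
    denoised = autoencoder(noisy)
    recon = F.mse_loss(denoised, noisy)      # fidelity: reconstruct the scan
    logits = classifier(denoised)            # frozen classifier as regularizer
    reg = F.cross_entropy(logits, labels)    # keep disease-relevant details
    return recon + lam * reg

ae = nn.Sequential(nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
                   nn.Conv2d(8, 1, 3, padding=1))
clf = nn.Sequential(nn.Flatten(), nn.Linear(64 * 64, 4))  # 4 disease classes
loss = denoising_loss(ae, clf, torch.rand(2, 1, 64, 64), torch.tensor([0, 2]))
```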
Significant computational cost and memory requirements for deep neural networks (DNNs) make it difficult to utilize DNNs in resource-constrained environments. The binary neural network (BNN), which uses binary weights and binary activations, has been gaining interest for its hardware-friendly characteristics and minimal resource requirements. However, BNNs usually suffer from accuracy degradation. In this paper, we introduce “BitSplit-Net”, a neural network which maintains the hardware-friendly characteristics of BNNs while improving accuracy by using multi-bit precision. In BitSplit-Net, each bit of the multi-bit activations propagates independently throughout the network before being merged at the end of the network. Thus, each bit path of the BitSplit-Net resembles a BNN, and hardware-friendly features of BNNs, such as the bitwise binary activation function, are preserved in our scheme. We demonstrate that the BitSplit versions of LeNet-5, VGG-9, AlexNet, and ResNet-18 can be trained to have similar classification accuracy at a lower computational cost compared to conventional multi-bit networks with low bit precision (<= 4-bit). We further evaluate BitSplit-Net on GPU with a custom CUDA kernel, showing that BitSplit-Net can achieve better hardware performance in comparison to conventional multi-bit networks.
http://arxiv.org/abs/1903.09807
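A minimal sketch of the bit-splitting idea, assuming 2-bit unsigned activations; the quantization scheme and the weighted-sum merge at the end are illustrative assumptions, not the paper's exact formulation:

```python
# Sketch: decompose a 2-bit activation into two binary signals that could each
# propagate through a BNN-style path, then recombine them at the network output.
import torch

def bit_split(x, bits=2):
    # Quantize to an unsigned integer in [0, 2**bits - 1], then peel off bits.
    q = torch.clamp((x * (2**bits - 1)).round(), 0, 2**bits - 1).long()
    return [((q >> b) & 1).float() for b in range(bits)]  # LSB first

x = torch.rand(4, 8)                      # activations in [0, 1]
paths = bit_split(x)                      # each path is binary, as in a BNN
merged = sum((2**b) * p for b, p in enumerate(paths)) / (2**len(paths) - 1)
```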
This work focuses on recognizing unknown emotions based on the Third-Order Circular Suprasegmental Hidden Markov Model (CSPHMM3) as a classifier. Our approach has been tested on the Emotional Prosody Speech and Transcripts (EPST) database. The features extracted from the EPST database are Mel-Frequency Cepstral Coefficients (MFCCs). Our results give an average emotion recognition accuracy of 77.8% based on the CSPHMM3. The results of this work demonstrate that CSPHMM3 is superior to the Third-Order Hidden Markov Model (HMM3), Gaussian Mixture Model (GMM), Support Vector Machine (SVM), and Vector Quantization (VQ) by 6.0%, 4.9%, 3.5%, and 5.4%, respectively, for emotion recognition. The average emotion recognition accuracy achieved based on the CSPHMM3 is comparable to that found using subjective assessment by human judges.
http://arxiv.org/abs/1903.09803
According to the World Health Organization, 285 million people worldwide live with visual impairment. The most commonly used imaging technique for diagnosis in ophthalmology is optical coherence tomography (OCT). However, analysis of retinal OCT requires trained ophthalmologists and time, making a comprehensive early diagnosis unlikely. A recent study established a diagnostic tool based on convolutional neural networks (CNN), which was trained on a large database of retinal OCT images. The performance of the tool in classifying retinal conditions was on par with that of trained medical experts. However, the training of these networks requires an enormous amount of labeled data, which is expensive and difficult to obtain. Therefore, this paper describes a method based on variational autoencoder regularization that improves classification performance when using a limited amount of labeled data. This work uses a two-path CNN model combining a classification network with an autoencoder (AE) for regularization. The key idea behind this is to prevent overfitting when using a limited training dataset size with a small number of patients. Results show superior classification performance compared to a pre-trained and fully fine-tuned baseline ResNet-34. Clustering of the latent space in relation to the disease class is distinct. Neural networks for disease classification on OCTs can benefit from regularization using variational autoencoders when trained with a limited amount of patient data. Especially in the medical imaging domain, data annotated by experts is expensive to obtain.
http://arxiv.org/abs/1904.00790
Model robustness has been an important issue, since adding small adversarial perturbations to images is sufficient to drive the model accuracy down to nearly zero. In this paper, we propose a new training objective “Guided Complement Entropy” (GCE) that has dual desirable effects: (a) neutralizing the predicted probabilities of incorrect classes, and (b) maximizing the predicted probability of the ground-truth class, particularly when (a) is achieved. Training with GCE encourages models to learn latent representations where samples of different classes form distinct clusters, which, we argue, improves the model robustness against adversarial perturbations. Furthermore, compared with state-of-the-art models trained with cross-entropy, the same models trained with GCE achieve significant improvements in robustness against white-box adversarial attacks, both with and without adversarial training. When no attack is present, training with GCE also outperforms cross-entropy in terms of model accuracy.
http://arxiv.org/abs/1903.09799
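A hedged sketch of a guided complement-entropy-style objective consistent with the two effects described above (the paper's exact formula may differ): the entropy of the renormalized incorrect-class probabilities is maximized, scaled by a guiding factor that grows with the ground-truth probability; `alpha` is an assumed hyperparameter.

```python
# Sketch of a guided complement-entropy-style loss (form and alpha assumed).
import torch
import torch.nn.functional as F

def guided_complement_entropy(logits, target, alpha=0.2, eps=1e-7):
    probs = F.softmax(logits, dim=1)
    yg = probs.gather(1, target.unsqueeze(1))             # ground-truth prob
    mask = torch.ones_like(probs).scatter_(1, target.unsqueeze(1), 0.0)
    comp = probs * mask / (1.0 - yg + eps)                # renormalized wrong-class probs
    ent = -(comp * torch.log(comp + eps)).sum(dim=1)      # entropy of the complements
    guide = yg.squeeze(1).pow(alpha)                      # guiding factor
    return -(guide * ent).mean()                          # maximize guided entropy

logits = torch.randn(8, 10, requires_grad=True)
target = torch.randint(0, 10, (8,))
guided_complement_entropy(logits, target).backward()
```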
Visual anomaly detection is common in several applications, including medical screening and production quality checks. Although an anomaly is by definition an unknown trend in the data, in many cases some hints or samples of the anomaly class can be given in advance. Conventional methods cannot use the available anomaly data and are also not robust to noise. In this paper, we propose a novel spatially-weighted, reconstruction-loss-based anomaly detection method with a likelihood value from a regression model trained on all known data. The spatial weights are calculated from a region of interest generated by visualizing the regression model. We introduce several ways to combine these components with various strategies to obtain a state-of-the-art method. Comparing with other methods on three different datasets, we empirically verify that the proposed method performs better than the others.
http://arxiv.org/abs/1903.09798
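A minimal sketch of the spatial weighting described above, assuming the ROI weight map comes from a visualization of the regression model (e.g., a Grad-CAM-style heatmap); shapes and normalization are assumptions:

```python
# Sketch: weight the per-pixel reconstruction error by an ROI map before scoring.
import torch

def weighted_anomaly_score(x, recon, roi_weights):
    err = (x - recon).pow(2)                                  # per-pixel error
    w = roi_weights / (roi_weights.sum(dim=(1, 2, 3), keepdim=True) + 1e-8)
    return (w * err).sum(dim=(1, 2, 3))                       # one score per image

x = torch.rand(2, 1, 64, 64)
recon = torch.rand(2, 1, 64, 64)                              # autoencoder output
roi = torch.rand(2, 1, 64, 64)                                # higher inside the ROI
scores = weighted_anomaly_score(x, recon, roi)
```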
We present the ConditionaL Neural Network (CLNN) and the Masked ConditionaL Neural Network (MCLNN), designed for temporal signal recognition. The CLNN takes into consideration the temporal nature of the sound signal, and the MCLNN extends the CLNN through a binary mask that preserves the spatial locality of the features and allows an automated exploration of feature combinations, analogous to hand-crafting the most relevant features for the recognition task. The MCLNN has achieved competitive recognition accuracies on the GTZAN and ISMIR2004 music datasets, surpassing several state-of-the-art neural-network-based architectures and hand-crafted methods applied to both datasets.
http://arxiv.org/abs/1803.02421
Recent geometric methods need reliable estimates of 3D motion parameters to procure an accurate dense depth map of a complex dynamic scene from monocular images \cite{kumar2017monocular, ranftl2016dense}. Generally, estimating precise measurements of relative 3D motion parameters and validating their accuracy using image data is a challenging task. In this work, we propose an alternative approach that circumvents the 3D motion estimation requirement to obtain a dense depth map of a dynamic scene. Given per-pixel optical flow correspondences between two consecutive frames and the sparse depth prior for the reference frame, we show that we can effectively recover the dense depth map for the successive frames without solving for 3D motion parameters. Our method assumes a piece-wise planar model of a dynamic scene, which undergoes rigid transformation locally, and as-rigid-as-possible transformation globally, between two successive frames. Under our assumption, we can avoid the explicit estimation of 3D rotation and translation to estimate scene depth. In essence, our formulation provides an unconventional way to think about and recover the dense depth map of a complex dynamic scene, one that is incremental and motion-free in nature. Our proposed method does not make object-level or any other high-level prior assumptions about the dynamic scene; as a result, it is applicable to a wide range of scenarios. Experimental results on benchmark datasets show the competence of our approach for multiple frames.
http://arxiv.org/abs/1902.03791
Segmentation of highly slanted and horizontally overlapped characters is a challenging research area that is still relatively unexplored. Several techniques are reported in the state of the art, but they produce low accuracy for highly slanted character segmentation and thus low overall handwriting recognition precision. Accordingly, this paper presents a simple yet effective approach for character segmentation of such difficult slanted cursive words without using any slant correction technique. Instead, a new concept of a core-zone is introduced for segmenting such difficult slanted handwritten words. However, due to the inherent nature of cursive words, a few characters are over-segmented, and therefore a threshold is selected heuristically to overcome this problem. For fair comparison, difficult words are extracted from the IAM benchmark database. The experiments exhibit promising results and high speed.
http://arxiv.org/abs/1904.00792
We study relations between evidence theory and S-approximation spaces. Both theories have their roots in the analysis of Dempster’s multivalued mappings and lower and upper probabilities, and both have close relations to rough sets. We show that an S-approximation space satisfying a monotonicity condition can induce a natural belief structure, a fundamental building block of evidence theory. We also demonstrate that one can induce a natural belief structure on one set, given a belief structure on another set, if those sets are related by a partial monotone S-approximation space.
http://arxiv.org/abs/1805.10672
Socially-intelligent agents are of growing interest in artificial intelligence. To this end, we need systems that can understand social relationships in diverse social contexts. Inferring the social context in a given visual scene not only involves recognizing objects, but also demands a more in-depth understanding of the relationships and attributes of the people involved. To achieve this, one computational approach for representing human relationships and attributes is to use an explicit knowledge graph, which allows for high-level reasoning. We introduce a novel end-to-end-trainable neural network that is capable of generating a Social Relationship Graph - a structured, unified representation of social relationships and attributes - from a given input image. Our Social Relationship Graph Generation Network (SRG-GN) is the first to use memory cells like Gated Recurrent Units (GRUs) to iteratively update the social relationship states in a graph using scene and attribute context. The neural network exploits the recurrent connections among the GRUs to implement message passing between nodes and edges in the graph, and results in significant improvement over previous methods for social relationship recognition.
http://arxiv.org/abs/1903.09784
Scene parsing is a crucial step in enabling autonomous systems to understand and interact with their surroundings. Supervised deep learning methods have made great progress in solving scene parsing problems; however, they come at the cost of laborious manual pixel-level annotation. To alleviate this effort, synthetic data as well as weak supervision have both been investigated. Nonetheless, synthetically generated data still suffers from severe domain shift, while weak labels are often imprecise. Moreover, most existing works for weakly supervised scene parsing are limited to salient foreground objects. The aim of this work is hence twofold: exploit synthetic data where feasible and integrate weak supervision where necessary. More concretely, we address this goal by utilizing depth as the transfer domain, because its synthetic-to-real discrepancy is much lower than for color. At the same time, we perform weak localization from easily obtainable image-level labels and integrate both using a novel contour-based scheme. Our approach is implemented as a teacher-student learning framework to solve the transfer learning problem by generating a pseudo ground truth. Using only depth-based adaptation, this approach already outperforms previous transfer learning approaches on the popular indoor scene parsing SUN RGB-D dataset. Our proposed two-stage integration more than halves the gap towards fully supervised methods when compared to the previous state of the art in transfer learning.
http://arxiv.org/abs/1903.09781
Prevailing deep convolutional neural networks (CNNs) for person re-identification (reID) are usually built upon ResNet or VGG backbones, which were originally designed for classification. Because reID differs from classification in certain respects, the architecture should be modified accordingly. We propose to search for a CNN architecture that is specifically suitable for the reID task. There are three main problems. First, body structural information plays an important role in reID but is not encoded in these backbones. Part-based reID models incorporate structure information at the tail of a CNN; their performance relies heavily on human experts, and the models are backbone-dependent, requiring extensive human effort when a different backbone is used. Second, Neural Architecture Search (NAS) automates the process of architecture design without human effort, but no existing NAS methods incorporate the structure information of input images. Third, reID is essentially a retrieval task, but current NAS algorithms are designed merely for classification. To solve these problems, we propose a retrieval-based search algorithm over a specifically designed reID search space, named Auto-ReID. Our Auto-ReID enables an automated approach to finding an efficient and effective CNN architecture that is specifically suitable for reID. Extensive experiments indicate that the searched architecture achieves state-of-the-art performance while requiring only about 50% of the parameters and 53% of the FLOPs of competing models.
http://arxiv.org/abs/1903.09776
Weight pruning and weight quantization are two important categories of DNN model compression. Prior work on these techniques is mainly based on heuristics. A recent work developed a systematic framework for DNN weight pruning using the advanced optimization technique ADMM (Alternating Direction Method of Multipliers), achieving state-of-the-art weight pruning results. In this work, we first extend this one-shot ADMM-based framework to guarantee solution feasibility and provide a fast convergence rate, and generalize it to weight quantization as well. We have further developed a multi-step, progressive DNN weight pruning and quantization framework, with the dual benefits of (i) achieving further weight pruning/quantization thanks to the special property of ADMM regularization, and (ii) reducing the search space within each step. Extensive experimental results demonstrate superior performance compared with prior work. Some highlights: (i) we achieve 246x, 36x, and 8x weight pruning on LeNet-5, AlexNet, and ResNet-50 models, respectively, with (almost) zero accuracy loss; (ii) even a significant 61x weight pruning in AlexNet (ImageNet) results in only minor degradation in actual accuracy compared with prior work; (iii) we are among the first to derive notable weight pruning results for ResNet and MobileNet models; (iv) we derive the first lossless, fully binarized (for all layers) LeNet-5 for MNIST and VGG-16 for CIFAR-10; and (v) we derive the first fully binarized (for all layers) ResNet for ImageNet with reasonable accuracy loss.
http://arxiv.org/abs/1903.09769
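A sketch of the core ADMM step for weight pruning, under stated assumptions: the Z-update is the Euclidean projection of W + U onto the constraint set 'at most k nonzero weights' (keep the k largest magnitudes), followed by the dual update; the W-update, an SGD step on loss + (rho/2)||W - Z + U||^2, is not shown.

```python
# Sketch of the ADMM projection and dual updates for weight pruning.
import torch

def project_topk(w, k):
    # Euclidean projection onto {at most k nonzeros}: keep largest magnitudes.
    thresh = w.abs().flatten().topk(k).values.min()
    return w * (w.abs() >= thresh).float()

def admm_step(W, Z, U, k):
    Z = project_topk(W + U, k)   # Z-update: projection onto the sparse set
    U = U + W - Z                # dual variable update
    return Z, U

W = torch.randn(256, 256)
Z, U = torch.zeros_like(W), torch.zeros_like(W)
Z, U = admm_step(W, Z, U, k=6554)   # keep roughly 10% of the weights
```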
In this paper, we present a conditional generative adversarial network-based model for real-time underwater image enhancement. In order to supervise the adversarial training, we formulate an objective function that evaluates the perceptual image quality based on its global content, color, and local style information. In addition, we present EUVP, a large-scale dataset of paired and unpaired collections of underwater images (of poor and good quality) captured using seven different cameras under various visibility conditions during oceanic explorations and human-robot collaborative experiments. Furthermore, we perform a number of qualitative and quantitative evaluations which suggest that the proposed model can learn to enhance the quality of underwater images from both paired and unpaired training. More importantly, the enhanced images improve the performance of several standard models for underwater object detection and human pose estimation; hence, the proposed model can be used as an image processing pipeline by visually-guided underwater robots in real-time applications.
http://arxiv.org/abs/1903.09766
Model-free reinforcement learning (RL) provides an attractive approach for learning control policies directly in high-dimensional state spaces. However, many goal-oriented tasks involving sparse rewards remain difficult to solve with state-of-the-art model-free RL algorithms, even in simulation. One of the key difficulties is that deep RL, due to its relatively poor sample complexity, often requires a prohibitive number of trials to obtain a learning signal. We propose a novel, non-sparse reward function for robotic RL tasks by leveraging physical priors in the form of a time-to-reach (TTR) function computed from an approximate system dynamics model. TTR functions come from the optimal control field and measure the minimal time required to move from any state to the goal. However, TTR functions are intractable to compute for complex systems, so we compute them in a lower-dimensional state space and then apply a simple transformation to convert them into a TTR-based reward function for the MDP in RL tasks. Our TTR-based reward function provides highly informative rewards that account for system dynamics.
http://arxiv.org/abs/1903.09762
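A hedged sketch of how a precomputed TTR table could be turned into a reward, assuming the lower-dimensional state is looked up by nearest grid cell; the grid, bounds, and projection are illustrative assumptions:

```python
# Sketch: reward = negative time-to-reach, read from a precomputed grid.
import numpy as np

class TTRReward:
    def __init__(self, ttr_grid, lo, hi):
        self.ttr = ttr_grid                       # precomputed TTR values
        self.lo, self.hi = np.asarray(lo, float), np.asarray(hi, float)
        self.shape = np.array(ttr_grid.shape)

    def __call__(self, state_lowdim):
        # Map the projected state to its nearest grid cell.
        frac = (state_lowdim - self.lo) / (self.hi - self.lo)
        idx = np.clip((frac * (self.shape - 1)).round().astype(int),
                      0, self.shape - 1)
        return -float(self.ttr[tuple(idx)])       # nearer the goal => larger reward

grid = np.random.rand(50, 50)                     # stand-in for a computed TTR table
reward_fn = TTRReward(grid, lo=[-1.0, -1.0], hi=[1.0, 1.0])
r = reward_fn(np.array([0.3, -0.2]))
```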
Over the past few years, deep learning techniques have achieved tremendous success in many visual understanding tasks such as object detection, image segmentation, and caption generation. Despite this success in computer vision and natural language processing, deep learning has not yet shown significant impact in robotics. Due to the gap between theory and application, there are many challenges in applying the results of deep learning to real robotic systems. In this study, our long-term goal is to bridge the gap between computer vision and robotics by developing visual methods that can be used in real robots. In particular, this work tackles two fundamental visual problems for autonomous robotic manipulation: affordance detection and fine-grained action understanding. Theoretically, we propose different deep architectures to further improve the state of the art in each problem. Empirically, we show that the outcomes of our proposed methods can be applied in real robots and allow them to perform useful manipulation tasks.
http://arxiv.org/abs/1903.09761
Recent style transfer models have produced promising artistic results. However, given a photograph as a reference style, existing methods are limited by spatial distortions or unrealistic artifacts, which should not happen in real photographs. We introduce a theoretically sound correction to the network architecture that remarkably enhances photorealism and faithfully transfers the style. The key ingredient of our method is wavelet transforms, which naturally fit in deep networks. We propose a wavelet-corrected transfer based on whitening and coloring transforms (WCT$^2$) that allows features to preserve their structural information and the statistical properties of the VGG feature space during stylization. This is the first and only end-to-end model that can stylize a $1024\times1024$ resolution image in 4.7 seconds, giving a pleasing and photorealistic quality without any post-processing. Last but not least, our model provides stable video stylization without temporal constraints. The code, generated images, supplementary materials, and pre-trained models are all available at https://github.com/ClovaAI/WCT2.
http://arxiv.org/abs/1903.09760
We present a new minimal problem for relative pose estimation mixing point features with lines incident at points observed in three views, together with its efficient homotopy continuation solver. We demonstrate the generality of the approach by analyzing and solving an additional trifocal problem with mixed point and line correspondences in three views. The minimal problems include correspondences across three views of (i) three points and one line and (ii) three points and two lines through two of the points, which is reported and analyzed here for the first time. These are difficult to solve, as they have 216 and – as shown here – 312 solutions, but they cover important practical situations when line and point features appear together, e.g., in urban scenes or when observing curves. We demonstrate that even such difficult problems can be solved robustly using a suitable homotopy continuation technique, and we provide an implementation optimized for minimal problems that can be integrated into engineering applications. Our simulated and real experiments demonstrate our solvers in the camera geometry computation task in structure from motion. We show that the new solvers allow for reconstructing challenging scenes where the standard two-view initialization of structure from motion fails.
http://arxiv.org/abs/1903.09755
Person re-identification (re-ID) has attracted much attention recently due to its great importance in video surveillance. In general, distance metrics used to identify two person images are expected to be robust under various appearance changes. However, our work observes the extreme vulnerability of existing distance metrics to adversarial examples, generated by simply adding human-imperceptible perturbations to person images. Hence, the security danger is dramatically increased when deploying commercial re-ID systems in video surveillance. Although adversarial examples have been extensively applied to classification analysis, they are rarely studied in metric analysis such as person re-identification. The most likely reason is the natural gap between the training and testing of re-ID networks; that is, the predictions of a re-ID network cannot be directly used during testing without an effective metric. In this work, we bridge the gap by proposing Adversarial Metric Attack, a parallel methodology to adversarial classification attacks. Comprehensive experiments clearly reveal the adversarial effects in re-ID systems. Meanwhile, we also present an early attempt at training a metric-preserving network, thereby defending the metric against adversarial attacks. Finally, by benchmarking various adversarial settings, we expect that our work can facilitate the development of adversarial attack and defense in metric-based applications.
http://arxiv.org/abs/1901.10650
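A sketch of a metric attack in the spirit described above, with FGSM-style details assumed: the probe image is perturbed along the gradient that increases its embedding distance to a matching gallery image, breaking the ranking.

```python
# Sketch: FGSM-style attack on an embedding distance (model and eps assumed).
import torch

def metric_attack_fgsm(model, probe, gallery, eps=8 / 255):
    probe = probe.clone().requires_grad_(True)
    dist = torch.norm(model(probe) - model(gallery), p=2)  # embedding distance
    dist.backward()
    adv = probe + eps * probe.grad.sign()   # push the pair apart in feature space
    return adv.clamp(0, 1).detach()

# Toy embedding model standing in for a re-ID network.
model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 64 * 64, 128))
probe, gallery = torch.rand(1, 3, 64, 64), torch.rand(1, 3, 64, 64)
adv_probe = metric_attack_fgsm(model, probe, gallery)
```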
We propose V2CNet, a new deep learning framework to automatically translate demonstration videos into commands that can be directly used in robotic applications. Our V2CNet has two branches and aims at understanding the demonstration video in a fine-grained manner. The first branch has an encoder-decoder architecture to encode the visual features and sequentially generate the output words as a command, while the second branch uses a Temporal Convolutional Network (TCN) to learn the fine-grained actions. By jointly training both branches, the network is able to model the sequential information of the command while effectively encoding the fine-grained actions. The experimental results on our new large-scale dataset show that V2CNet outperforms recent state-of-the-art methods by a substantial margin, while its output can be applied in real robotic applications. The source code and trained models will be made available.
http://arxiv.org/abs/1903.10869
Impedance control, and specifically stiffness control, is widely applied for physical human-robot interaction. The series elastic actuator (SEA) provides inherent compliance, safety, and further benefits. This paper aims to improve the stiffness control performance of a cable-driven SEA. Existing impedance controllers were designed over the full frequency domain, though human-robot interaction commonly falls in the low frequency range. We enhance the stiffness rendering performance under formulated constraints of passivity, actuator limitations, disturbance attenuation, and noise rejection, each at its specific frequency range. First, we reformulate this multiple-frequency-band optimization problem within the $H_\infty$ synthesis framework. Then, the performance goals are quantitatively characterized by respective restricted frequency-domain specifications as norm bounds. Further, a structured controller is directly synthesized to satisfy all the competing performance requirements. Both simulation and experimental results showed that the produced controller enabled good interaction performance for each desired stiffness varying from 0 to 1 times the physical spring constant. Compared with the passivity-based PID method, the proposed $H_\infty$ synthesis method achieved more accurate and robust stiffness control performance with guaranteed passivity.
http://arxiv.org/abs/1903.09749
Purpose: This paper presents an impedance control method with mixed $H_2/H_\infty$ synthesis and relaxed passivity for a cable-driven series elastic actuator to be applied for physical human-robot interaction. Design/methodology/approach: To shape the system’s impedance to match a desired dynamic model, the impedance control problem was reformulated into an impedance matching structure. The desired competing performance requirements, as well as constraints from the physical system, can be characterized with weighting functions for the respective signals. Considering the frequency properties of human movements, the passivity constraint for stable human-robot interaction, which is required over the entire frequency spectrum and may yield conservative solutions, has been relaxed so that it only restrains the low frequency band. Thus, impedance control becomes a mixed $H_2/H_\infty$ synthesis problem, and a dynamic output feedback controller can be obtained. Findings: The proposed impedance control strategy has been tested for various desired impedances with both simulation and experiments on the cable-driven series elastic actuator platform. The actual interaction torque tracked the desired torque well within the desired norm bounds, and the control input was regulated below the motor velocity limit. The closed-loop system can guarantee relaxed passivity at low frequency. Both simulation and experimental results have validated the feasibility and efficacy of the proposed method. Originality/value: This impedance control strategy with mixed $H_2/H_\infty$ synthesis and relaxed passivity provides a novel, effective and less conservative method for physical human-robot interaction control.
http://arxiv.org/abs/1903.09748
Pixel-level semantic segmentation is a challenging task with a huge amount of computation, especially when the input is large. In segmentation models, apart from feature extraction, an extra decoder structure is often employed to recover spatial information. In this paper, we put forward a method for single-shot segmentation in a feature residual pyramid network (RPNet), which learns the main part and the residuals of the segmentation by decomposing the label at different levels of residual blocks. Specifically, we use the residual features to learn edges and details, and the identity features to learn the main part of targets. At test time, the predicted residuals are used to enhance the details of the top-level prediction. Residual learning blocks split the network into several shallow sub-networks, which facilitates the training of the RPNet. We then evaluate the proposed method and compare it with recent state-of-the-art methods on CamVid and Cityscapes. The proposed single-shot segmentation based on RPNet achieves impressive results with high efficiency on pixel-level segmentation.
http://arxiv.org/abs/1903.09746
Low-dose X-ray CT technology is one of the important directions of current research and development in medical imaging equipment. A fast algorithm for blockwise sinogram filtering is presented for real-time low-dose CT imaging. A nonstationary Gaussian noise model of low-dose sinogram data is proposed for the low-mA (tube current) CT protocol. Then, according to the linear minimum mean square error principle, an adaptive blockwise algorithm is built to filter sinogram data contaminated by photon starvation. A moving-sum technique is used to speed the algorithm up to linear time, regardless of the block size and the data range. The proposed fast filtering gives better performance in noise reduction and detail preservation in the reconstructed images, as verified in experiments on simulated and real data in comparison with several related filtering methods.
http://arxiv.org/abs/1903.09745
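A sketch of the moving-sum trick that makes blockwise filtering linear-time, shown in 1-D under stated assumptions (block size, noise variance, and the LMMSE shrinkage form are illustrative): with a cumulative sum, every window statistic costs O(1) regardless of block size.

```python
# Sketch: cumulative sums give O(1) window means; local mean/variance then feed
# an LMMSE-style shrinkage toward the local mean.
import numpy as np

def moving_mean(x, block):
    c = np.cumsum(np.insert(x, 0, 0.0))
    return (c[block:] - c[:-block]) / block   # mean of every length-`block` window

def lmmse_filter(noisy, block=9, noise_var=0.01):
    m = moving_mean(noisy, block)
    m2 = moving_mean(noisy**2, block)
    var = np.maximum(m2 - m**2, 1e-12)             # local variance (signal + noise)
    gain = np.maximum(var - noise_var, 0.0) / var  # LMMSE shrinkage factor
    center = noisy[block // 2 : block // 2 + len(m)]
    return m + gain * (center - m)

sino = np.sin(np.linspace(0, 6, 500)) + 0.1 * np.random.randn(500)
filtered = lmmse_filter(sino)
```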
Machine learning is proving invaluable across disciplines. However, its success is often limited by the quality and quantity of available data, while its adoption is limited by the level of trust that models afford users. Human vs. machine performance is commonly compared empirically to decide whether a certain task should be performed by a computer or an expert. In reality, the optimal learning strategy may involve combining the complementary strengths of man and machine. Here we present Expert-Augmented Machine Learning (EAML), an automated method that guides the extraction of expert knowledge and its integration into machine-learned models. We use a large dataset of intensive care patient data to predict mortality and show that we can extract expert knowledge using an online platform, help reveal hidden confounders, improve generalizability on a different population, and learn using less data. EAML presents a novel framework for high-performance and dependable machine learning in critical applications.
http://arxiv.org/abs/1903.09731
Class imbalance is a long-standing problem relevant to a number of real-world applications of deep learning. Oversampling techniques, which are effective for handling class imbalance in classical learning systems, cannot be directly applied to end-to-end deep learning systems. We propose a three-player adversarial game between a convex generator, a multi-class classifier network, and a real/fake discriminator to perform oversampling in deep learning systems. The convex generator generates new samples from the minority classes as convex combinations of existing instances, aiming to fool both the discriminator and the classifier into misclassifying the generated samples. Consequently, the artificial samples are generated at critical locations near the peripheries of the classes. This, in turn, adjusts the classifier-induced boundaries in a way that is more likely to reduce misclassification of the minority classes. Extensive experiments on multiple class-imbalanced image datasets establish the efficacy of our proposal.
http://arxiv.org/abs/1903.09730
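A hedged sketch of the convex generator described above; the mixing network, noise dimension, and anchor count are assumptions. A softmax over the generator's outputs guarantees nonnegative weights summing to one, so every generated sample is a convex combination of real minority instances:

```python
# Sketch: generate minority samples as convex combinations of real instances.
import torch
import torch.nn as nn

class ConvexGenerator(nn.Module):
    def __init__(self, n_anchors, noise_dim=32):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(noise_dim, 64), nn.ReLU(),
                                 nn.Linear(64, n_anchors))

    def forward(self, z, minority_batch):
        # Softmax => weights are nonnegative and sum to 1, so outputs stay
        # inside the convex hull of the real minority samples.
        w = torch.softmax(self.net(z), dim=1)
        return w @ minority_batch.flatten(1)

gen = ConvexGenerator(n_anchors=16)
minority = torch.rand(16, 1, 28, 28)          # a batch of real minority images
fake = gen(torch.randn(8, 32), minority)      # 8 synthetic minority samples
```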
Pre-trained language model representations have been successful in a wide range of language understanding tasks. In this paper, we examine different strategies for integrating pre-trained representations into sequence-to-sequence models and apply them to neural machine translation and abstractive summarization. We find that pre-trained representations are most effective when added to the encoder network, which slows inference by only 14%. Our experiments in machine translation show gains of up to 5.3 BLEU in a simulated resource-poor setup. While returns diminish with more labeled data, we still observe improvements when millions of sentence pairs are available. Finally, on abstractive summarization we achieve a new state of the art on the full-text version of CNN/DailyMail.
http://arxiv.org/abs/1903.09722
Extracting accurate foreground objects from a scene is a fundamental step in creating virtual reality content. However, the majority of professional software tools today still require human intervention, e.g., providing trimaps or labeling key frames. This is not only time-consuming but also error-prone. In this paper, we present a fully automatic algorithm to extract foreground objects. Our solution is based on a newly developed concept called Multi-Agent Consensus Equilibrium (MACE), a framework which allows us to integrate multiple sources of expertise to produce an overall superior result. Our MACE framework consists of three agents: (1) a new dual-layer closed-form matting agent to estimate the foreground mask using the color image and a background image; (2) a background probability estimator using color difference and object segmentation; and (3) a total variation minimization agent to control the smoothness of the foreground masks. We show how these agents are constructed and how their interactions lead to better performance. The algorithm is evaluated by comparing to several state-of-the-art methods. An extensive experimental study shows that the proposed method has less error than the competing methods.
http://arxiv.org/abs/1808.08210
Machine learning has been applied to a number of creative, design-oriented tasks. However, it remains unclear how to best empower human users with these machine learning approaches, particularly those users without technical expertise. In this paper we propose a general framework for turn-based interaction between human users and AI agents designed to support human creativity, called co-creative systems. The framework can be used to better understand the space of possible designs of co-creative systems and reveal future research directions. We demonstrate how to apply this framework in conjunction with a pair of recent human-subject studies, comparing the four human-AI systems employed in these studies and generating hypotheses for future studies.
http://arxiv.org/abs/1903.09709
We present a user study to investigate the impact of explanations on non-experts’ understanding of reinforcement learning (RL) agents. We investigate both a common RL visualization, saliency maps (the focus of attention), and a more recent explanation type, reward-decomposition bars (predictions of future types of rewards). We designed a 124-participant, four-treatment experiment to compare participants’ mental models of an RL agent in a simple real-time strategy (RTS) game. Our results show that the combination of both saliency and reward bars was needed to achieve a statistically significant improvement in mental model score over the control. In addition, our qualitative analysis of the data reveals a number of effects for further study.
http://arxiv.org/abs/1903.09708
We show how to teach machines to paint like human painters, who can use a few strokes to create fantastic paintings. By combining a neural renderer with model-based Deep Reinforcement Learning (DRL), our agent can decompose texture-rich images into strokes and make long-term plans. For each stroke, the agent directly determines the position and color of the stroke. Excellent visual effects can be achieved using hundreds of strokes. The training process requires neither human painting experience nor stroke tracking data.
http://arxiv.org/abs/1903.04411
While Generative Adversarial Networks (GANs) have seen huge successes in image synthesis tasks, they are notoriously difficult to use, in part due to instability during training. One commonly accepted reason for this instability is that gradients passing from the discriminator to the generator can quickly become uninformative, due to a learning imbalance during training. In this work, we propose the Multi-Scale Gradient Generative Adversarial Network (MSG-GAN), a simple but effective technique for addressing this problem that allows the flow of gradients from the discriminator to the generator at multiple scales. This technique provides a stable approach for generating synchronized multi-scale images. We present a very intuitive implementation of the mathematical MSG-GAN framework, which uses the concatenation operation in the discriminator computations. We empirically validate the effect of our MSG-GAN approach through experiments on the CIFAR10 and Oxford102 flowers datasets and compare it with other relevant techniques which perform multi-scale image synthesis. In addition, we also provide details of our experiment on the CelebA-HQ dataset for synthesizing 1024 x 1024 high-resolution images.
https://arxiv.org/abs/1903.06048
Recent object detectors use four-coordinate bounding box (bbox) regression to predict object locations. Providing additional information that indicates object positions and coordinates will improve detection performance. Thus, we propose two types of masks: a bbox mask and a bounding shape (bshape) mask, to represent the object’s bbox and boundary shape, respectively. For each of these types, we consider two variants: the Thick model and the Scored model, both of which have the same morphology but differ in how they make their boundaries thicker. To evaluate the proposed masks, we design extended frameworks by adding a bshape mask (or a bbox mask) branch to a Faster R-CNN framework, and call this BshapeNet (or BboxNet). Further, we propose BshapeNet+, a network that combines a bshape mask branch with Mask R-CNN to improve instance segmentation as well as detection. Among our proposed models, BshapeNet+ demonstrates the best performance in both tasks and achieves highly competitive results with state-of-the-art (SOTA) models. In particular, it improves the detection results over Faster R-CNN+RoIAlign (37.3% and 28.9%), with a detection AP of 42.4% and 32.3% on MS COCO test-dev and Cityscapes val, respectively. Furthermore, for small objects, it achieves 24.9% AP on COCO test-dev, a significant improvement over previous SOTA models. For instance segmentation, it is substantially superior to Mask R-CNN on both test datasets.
http://arxiv.org/abs/1810.10327
Exoskeletons which amplify the strength of their operators can enable heavy-duty manipulation of unknown objects. However, this type of behavior is difficult to accomplish; it requires the exoskeleton to sense and amplify the operator’s interaction forces while remaining stable. But, the goals of amplification and robust stability when connected to the operator fundamentally conflict. As a solution, we introduce a design with a spring in series with the force sensitive cuff. This allows us to design an exoskeleton compliance behavior which is nominally passive, even with high amplification ratios. In practice, time delay and discrete time filters prevent our strategy from actually achieving passivity, but the designed compliance still makes the exoskeleton more robust to spring-like human behaviors. Our exoskeleton is actuated by a series elastic actuator (SEA), which introduces another spring into the system. We show that shaping the cuff compliance for the exoskeleton can be made into approximately the same problem as shaping the spring compliance of an SEA. We therefore introduce a feedback controller and gain tuning method which takes advantage of an existing compliance shaping technique for SEAs. We call our strategy the “double compliance shaping” method. With large amplification ratios, this controller tends to amplify nonlinear transmission friction effects, so we additionally propose a “transmission disturbance observer” to mitigate this drawback. Our methods are validated on a single-degree-of-freedom elbow exoskeleton.
http://arxiv.org/abs/1903.09673
Capsule Networks (CapsNet) use the Softmax function to convert the logits of the routing coefficients into a set of normalized values that signify the assignment probabilities between capsules in adjacent layers. We show that the use of Softmax prevents capsule layers from forming optimal couplings between lower and higher-level capsules. Softmax constrains the dynamic range of the routing coefficients and leads to probabilities that remain mostly uniform after several routing iterations. Instead, we propose the use of Max-Min normalization. Max-Min performs a scale-invariant normalization of the logits that allows each lower-level capsule to take on an independent value, constrained only by the bounds of normalization. Max-Min provides consistent improvement in test accuracy across five datasets and allows more routing iterations without a decrease in network performance. A single CapsNet trained using Max-Min achieves an improved test error of 0.20% on the MNIST dataset. With a simple 3-model majority vote, we achieve a test error of 0.17% on MNIST.
http://arxiv.org/abs/1903.09662
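A minimal sketch of Max-Min normalization as contrasted with Softmax above: a scale-invariant rescaling of the routing logits to a fixed range that lets each lower-level capsule take an independent value instead of competing in a distribution. The bounds and tensor layout are assumptions.

```python
# Sketch: Max-Min normalization of routing logits (bounds assumed as [0, 1]).
import torch

def max_min_normalize(logits, lo=0.0, hi=1.0, eps=1e-8, dim=-1):
    mn = logits.min(dim=dim, keepdim=True).values
    mx = logits.max(dim=dim, keepdim=True).values
    return lo + (hi - lo) * (logits - mn) / (mx - mn + eps)

b = torch.randn(2, 10, 16)   # logits: [batch, lower capsules, higher capsules]
c = max_min_normalize(b)     # coefficients in [0, 1]; they need not sum to 1
```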
Adversarial examples are inputs to machine learning models designed by an adversary to cause an incorrect output. So far, adversarial examples have been studied most extensively in the image domain. In this domain, adversarial examples can be constructed by imperceptibly modifying images to cause misclassification, and are practical in the physical world. In contrast, current targeted adversarial examples applied to speech recognition systems have neither of these properties: humans can easily identify the adversarial perturbations, and they are not effective when played over-the-air. This paper makes advances on both of these fronts. First, we develop effectively imperceptible audio adversarial examples (verified through a human study) by leveraging the psychoacoustic principle of auditory masking, while retaining 100% targeted success rate on arbitrary full-sentence targets. Next, we make progress towards physical-world over-the-air audio adversarial examples by constructing perturbations which remain effective even after applying realistic simulated environmental distortions.
http://arxiv.org/abs/1903.10346
We focus on word-level visual lipreading, which requires decoding the word from the speaker’s video. Recently, many state-of-the-art visual lipreading methods explore end-to-end trainable deep models, involving the use of 2D convolutional networks (e.g., ResNet) as the front-end visual feature extractor and a sequential model (e.g., Bi-LSTM or Bi-GRU) as the back-end. Although a deep 2D convolutional neural network can provide informative image-based features, it ignores the temporal motion between adjacent frames. In this work, we investigate the spatio-temporal capacity of I3D (Inflated 3D ConvNet) for visual lipreading. We demonstrate that, after pre-training on a large-scale video action recognition dataset (e.g., Kinetics), our models show a considerable improvement in performance on the task of lipreading. A comparison between a set of video model architectures and input data representations is also reported. Our extensive experiments on LRW show that a two-stream I3D model with RGB video and optical flow as inputs achieves state-of-the-art performance.
http://arxiv.org/abs/1903.09616
Speech emotion recognition (SER) has attracted great attention in recent years due to the high demand for emotionally intelligent speech interfaces. Deriving speaker-invariant representations for speech emotion recognition is crucial. In this paper, we propose to apply adversarial training to SER to learn speaker-invariant representations. Our model consists of three parts: a representation learning sub-network with a time-delay neural network (TDNN) and an LSTM with statistical pooling, an emotion classification network, and a speaker classification network. Both the emotion and speaker classification networks take the output of the representation learning network as input. Two training strategies are employed: one based on domain adversarial training (DAT) and the other based on cross-gradient training (CGT). Besides the conventional dataset, we also evaluate our proposed models on a much larger publicly available emotion dataset with 250 speakers. Evaluation results show that on IEMOCAP, DAT and CGT provide 5.6% and 7.4% improvement, respectively, over a baseline system without speaker-invariant representation learning on 5-fold cross-validation. On the larger emotion dataset, while CGT fails to yield better results than the baseline, DAT can still provide a 9.8% relative improvement on a standalone test set.
http://arxiv.org/abs/1903.09606
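A minimal sketch of the DAT branch, assuming the standard gradient-reversal-layer formulation: the speaker classifier's gradient is flipped before reaching the shared representation, pushing the encoder toward speaker-invariant features. Layer sizes and the 250-speaker head are illustrative.

```python
# Sketch: gradient reversal layer for domain adversarial training (DAT).
import torch
from torch.autograd import Function

class GradReverse(Function):
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)           # identity in the forward pass

    @staticmethod
    def backward(ctx, grad):
        return -ctx.lam * grad, None  # reversed gradient reaches the encoder

def grad_reverse(x, lam=1.0):
    return GradReverse.apply(x, lam)

feat = torch.randn(4, 128, requires_grad=True)     # shared representation
speaker_head = torch.nn.Linear(128, 250)           # speaker classifier
speaker_logits = speaker_head(grad_reverse(feat))  # adversarial branch
```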
We propose a method to translate cartoon images to real-world images using a Generative Adversarial Network (GAN). Existing GAN-based image-to-image translation methods that are trained on paired datasets are impractical, as the data is difficult to accumulate. Therefore, in this paper we exploit the Cycle-Consistent Adversarial Networks (CycleGAN) method for image translation, which needs only an unpaired dataset. By applying CycleGAN, we show that our model is able to generate meaningful real-world images from cartoon images. In addition, we implement another state-of-the-art technique, Deep Analogy, to compare the performance of our approach.
https://arxiv.org/abs/1811.11796
In trick-taking card games, a two-step process of state sampling and evaluation is widely used to approximate move values. While the evaluation component is vital, the accuracy of move value estimates is also fundamentally linked to how well the sampling distribution corresponds to the true distribution. Despite this, recent work in trick-taking card game AI has mainly focused on improving evaluation algorithms, with limited work on improving sampling. In this paper, we focus on the effect of sampling on the strength of a player and propose a novel method of sampling more realistic states given the move history. In particular, we use predictions about the locations of individual cards, made by a deep neural network trained on data from human gameplay, in order to sample likely worlds for evaluation. This technique, used in conjunction with Perfect Information Monte Carlo (PIMC) search, provides a substantial increase in cardplay strength in the popular trick-taking card game of Skat.
http://arxiv.org/abs/1903.09604
The growing demand for industrial, automotive, and service robots presents a challenge to the centralized Cloud Robotics model in terms of privacy, security, latency, bandwidth, and reliability. In this paper, we present a “Fog Robotics” approach to deep robot learning that distributes compute, storage, and networking resources between the Cloud and the Edge in a federated manner. Deep models are trained on non-private (public) synthetic images in the Cloud; the models are adapted to the private real images of the environment at the Edge within a trusted network and subsequently deployed as a service for low-latency and secure inference/prediction for other robots in the network. We apply this approach to surface decluttering, where a mobile robot picks and sorts objects from a cluttered floor by learning a deep object recognition and grasp planning model. Experiments suggest that Fog Robotics can improve performance by sim-to-real domain adaptation in comparison to exclusively using Cloud or Edge resources, while reducing the inference cycle time by 4x to successfully declutter 86% of objects over 213 attempts.
http://arxiv.org/abs/1903.09589
Aspect-based sentiment analysis (ABSA), which aims to identify fine-grained opinion polarity towards a specific aspect, is a challenging subtask of sentiment analysis (SA). In this paper, we construct an auxiliary sentence from the aspect and convert ABSA to a sentence-pair classification task, such as question answering (QA) and natural language inference (NLI). We fine-tune the pre-trained model from BERT and achieve new state-of-the-art results on SentiHood and SemEval-2014 Task 4 datasets.
http://arxiv.org/abs/1903.09588
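A hedged illustration of the auxiliary-sentence construction described above: each (aspect, review) pair becomes a sentence-pair input for a BERT-style classifier. The QA-style template below is an example of the idea, not necessarily the paper's exact wording.

```python
# Sketch: turn (review, aspect) pairs into BERT sentence-pair inputs.
def build_pairs(review, aspects):
    pairs = []
    for aspect in aspects:
        auxiliary = f"what do you think of the {aspect} of it ?"  # QA-style template
        pairs.append((auxiliary, review))  # (sentence A, sentence B) for BERT
    return pairs

pairs = build_pairs("The food was great but the room was dirty.",
                    ["food", "room", "price"])
for a, b in pairs:
    print(a, "|", b)
```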
Researchers in artificial intelligence have achieved human-level intelligence in large-scale perfect-information games, but it is still a challenge to achieve (nearly) optimal results (in other words, an approximate Nash equilibrium) in large-scale imperfect-information games (e.g., war games, football coaching, or business strategy). Neural Fictitious Self Play (NFSP) is an effective algorithm for learning an approximate Nash equilibrium of imperfect-information games from self-play without prior domain knowledge. However, it relies on Deep Q-Networks, which are offline and hard to converge in online games with changing opponent strategies, so it cannot approach an approximate Nash equilibrium in games with a large search scale and deep search depth. In this paper, we propose Monte Carlo Neural Fictitious Self Play (MC-NFSP), an algorithm that combines Monte Carlo tree search with NFSP and greatly improves performance on large-scale zero-sum imperfect-information games. Experimentally, we demonstrate that the proposed Monte Carlo Neural Fictitious Self Play can converge to an approximate Nash equilibrium in games with large-scale search depth while Neural Fictitious Self Play cannot. Furthermore, we develop Asynchronous Neural Fictitious Self Play (ANFSP), which uses an asynchronous, parallel architecture to collect game experience. In experiments, we show that parallel actor-learners have a further accelerating and stabilizing effect on training.
http://arxiv.org/abs/1903.09569
Purpose: To validate the performance of a commercially-available, CE-certified deep learning (DL) system, RetCAD v.1.3.0 (Thirona, Nijmegen, The Netherlands), for the joint automatic detection of diabetic retinopathy (DR) and age-related macular degeneration (AMD) in color fundus (CF) images on a dataset with mixed presence of eye diseases. Methods: Evaluation of joint detection of referable DR and AMD was performed on a DR-AMD dataset with 600 images acquired during routine clinical practice, containing referable and non-referable cases of both diseases. Each image was graded for DR and AMD by an experienced ophthalmologist to establish the reference standard (RS), and by four independent observers for comparison with human performance. Validation was further assessed on Messidor (1200 images) for the individual identification of referable DR, and on the Age-Related Eye Disease Study (AREDS) dataset (133821 images) for referable AMD, against the corresponding RS. Results: Regarding joint validation on the DR-AMD dataset, the system achieved an area under the ROC curve (AUC) of 95.1% for detection of referable DR (SE=90.1%, SP=90.6%). For referable AMD, the AUC was 94.9% (SE=91.8%, SP=87.5%). Average human performance for DR was SE=61.5% and SP=97.8%; for AMD, SE=76.5% and SP=96.1%. Regarding detection of referable DR in Messidor, AUC was 97.5% (SE=92.0%, SP=92.1%); for referable AMD in AREDS, AUC was 92.7% (SE=85.8%, SP=86.0%). Conclusions: The validated system performs comparably to human experts at simultaneous detection of DR and AMD. This shows that DL systems can facilitate access to joint screening of eye diseases and become a quick and reliable support for ophthalmological experts.
http://arxiv.org/abs/1903.09555
In psychology and neuroscience it is common to describe cognitive systems as input/output devices where perceptual and motor functions are implemented in a purely feedforward, open-loop fashion. On this view, perception and action are often seen as encapsulated modules with limited interaction between them. While embodied and enactive approaches to cognitive science have challenged the idealisation of the brain as an input/output device, we argue that even the more recent attempts to model systems using closed-loop architectures still heavily rely on a strong separation between motor and perceptual functions. Previously, we have suggested that the mainstream notion of modularity strongly resonates with the separation principle of control theory. In this work we present a minimal model of a sensorimotor loop implementing an architecture based on the separation principle. We link this to popular formulations of perception and action in the cognitive sciences, and show its limitations when, for instance, external forces are not modelled by an agent. These forces can be seen as variables that an agent cannot directly control, i.e., a perturbation from the environment or an interference caused by other agents. As an alternative approach inspired by embodied cognitive science, we then propose a nonmodular architecture based on the active inference framework. We demonstrate the robustness of this architecture to unknown external inputs and show that the mechanism with which this is achieved in linear models is equivalent to integral control.
http://arxiv.org/abs/1903.09542
Deep reinforcement learning (DRL) is a promising approach for developing legged locomotion skills. However, the iterative design process that is inevitable in practice is poorly supported by the default methodology. It is difficult to predict the outcomes of changes made to the reward functions, policy architectures, and the set of tasks being trained on. In this paper, we propose a practical method that allows the reward function to be fully redefined on each successive design iteration while limiting the deviation from the previous iteration. We characterize policies via sets of Deterministic Action Stochastic State (DASS) tuples, which represent the deterministic policy state-action pairs as sampled from the states visited by the trained stochastic policy. New policies are trained using a policy gradient algorithm which then mixes RL-based policy gradients with gradient updates defined by the DASS tuples. The tuples also allow for robust policy distillation to new network architectures. We demonstrate the effectiveness of this iterative-design approach on the bipedal robot Cassie, achieving stable walking with different gait styles at various speeds. We demonstrate the successful transfer of policies learned in simulation to the physical robot without any dynamics randomization, and that variable-speed walking policies for the physical robot can be represented by a small dataset of 5-10k tuples.
http://arxiv.org/abs/1903.09537
Artificial intelligence problems, ranging from planning/scheduling to game control, include a crucial step: describing a model which accurately defines the problem's required data, requirements, allowed transitions, and established goals. The ways in which a model can fail are numerous and often lead to a failure of search strategies to provide a quick, optimal, or even any solution. This paper proposes using SMT (Satisfiability Modulo Theories) solvers, such as Z3, to check the validity of a model. We propose two tests: checking whether a final (goal) state exists in the model's described problem space, and checking whether the described transitions can provide a path from the identified initial states to any of the goal states (meaning a solution has been found). The advantage of using an SMT solver for AI model checking is that it substitutes for actual search strategies and works over an abstract representation of the model, that is, a set of logical formulas. Reasoning at an abstract level is not as expensive as exploring the entire solution space. SMT solvers use efficient decision procedures that provide proofs for the logical formulas corresponding to the AI model. A recent addition to Z3 allowed us to describe sequences of transitions as a recursive function, so we can check whether a solution can be found in the defined model.
http://arxiv.org/abs/1903.09475
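A small, self-contained Z3 sketch of the reachability test described above; the integer state space and the toy transition relation are invented for illustration (the `z3-solver` Python package provides the API used here). The transition relation is unrolled for a bounded number of steps, and the solver is asked whether some step hits the goal:

```python
# Sketch: bounded reachability check with Z3 (toy model; transitions assumed).
from z3 import Int, Solver, Or, And, sat

def transition(s, s_next):
    # Toy transition relation: from state n we may move to n + 1 or n + 2.
    return Or(s_next == s + 1, s_next == s + 2)

def reachable(init, goal, max_steps=10):
    states = [Int(f"s{i}") for i in range(max_steps + 1)]
    solver = Solver()
    solver.add(states[0] == init)
    solver.add(And(*[transition(states[i], states[i + 1])
                     for i in range(max_steps)]))
    solver.add(Or(*[s == goal for s in states]))  # goal reached at some step
    return solver.check() == sat

print(reachable(init=0, goal=7))  # True: e.g., seven +1 moves reach 7
```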