Associative memories are structures that store data patterns and retrieve them given partial inputs. Sparse Clustered Networks (SCNs) are recently introduced binary-weighted associative memories that significantly improve storage and retrieval capabilities over the prior state of the art. However, deleting or updating the data patterns results in a significant increase in the data retrieval error probability. In this paper, we propose an algorithm to address this problem by incorporating multiple-valued weights for the interconnections used in the network. The proposed algorithm lowers the error rate by an order of magnitude for our sample network with 60% deleted contents. We then investigate the advantages of the proposed algorithm for hardware implementations.
https://arxiv.org/abs/1402.0808
Intelligent machines require basic information, such as moving-object detection from videos, in order to deduce higher-level semantic information. In this paper, we propose a methodology that uses a texture measure to detect moving objects in video. The methodology is computationally inexpensive, requires minimal parameter fine-tuning, and is resilient to noise, illumination changes, dynamic backgrounds and low frame rates. Experimental results show that the performance of the proposed approach exceeds that of state-of-the-art approaches. We also present a framework for vehicular traffic density estimation using the foreground object detection technique, and we compare this foreground object detection-based framework with the classical density state modelling-based framework for vehicular traffic density estimation.
https://arxiv.org/abs/1402.0289
A class of channels is introduced for which there is memory inside blocks of a specified length and no memory across the blocks. The multi-user model is called an information network with in-block memory (NiBM). It is shown that block-fading channels, channels with state known causally at the encoder, and relay networks with delays are NiBMs. A cut-set bound is developed for NiBMs that unifies, strengthens, and generalizes existing cut bounds for discrete memoryless networks. The bound gives new finite-letter capacity expressions for several classes of networks including point-to-point channels, and certain multiaccess, broadcast, and relay channels. Cardinality bounds on the random coding alphabets are developed that improve on existing bounds for channels with action-dependent state available causally at the encoder and for relays without delay. Finally, quantize-forward network coding is shown to achieve rates within an additive gap of the new cut-set bound for linear, additive, Gaussian noise channels, symmetric power constraints, and a multicast session.
https://arxiv.org/abs/1206.5389
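For reference, the classical cut-set bound for discrete memoryless networks that the NiBM bound unifies and generalizes has, in textbook form (this display is the standard bound, not the new bound derived in the paper), the shape

$$\sum_{k:\ \text{source in } \mathcal{S},\ \text{sink in } \mathcal{S}^c} R_k \;\le\; I\!\left(X_{\mathcal{S}};\, Y_{\mathcal{S}^c} \,\middle|\, X_{\mathcal{S}^c}\right)$$

for every cut $(\mathcal{S}, \mathcal{S}^c)$ of the node set and for some joint distribution of the channel inputs, where $X_{\mathcal{S}}$ collects the inputs of the nodes in $\mathcal{S}$ and $Y_{\mathcal{S}^c}$ the outputs of the nodes in $\mathcal{S}^c$. The NiBM bound extends this picture to channels with memory inside blocks of a specified length.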
In coming years, the first truly Earth-like planets will be discovered orbiting other stars, and the search for signs of life on these worlds will begin. However, such observations will be hugely time-consuming and costly, and so it will be important to determine which of those planets represent the best prospects for life elsewhere. One of the key factors in such a decision will be the climate variability of the planet in question - too chaotic a climate might render a planet less promising as a target for our initial search for life elsewhere. On the Earth, the climate of the last few million years has been dominated by a series of glacial and interglacial periods, driven by periodic variations in the Earth’s orbital elements and axial tilt. These Milankovitch cycles are driven by the gravitational influence of the other planets, and as such are strongly dependent on the architecture of the Solar system. Here, we present the first results of a study investigating the influence of the orbit of Jupiter on the Milankovitch cycles at Earth - a first step in developing a means to characterise the nature of periodic climate change on planets beyond our Solar system.
https://arxiv.org/abs/1401.6741
The influence of a microwave treatment (MWT) on the optical properties of hexagonal GaN films has been studied. To estimate the internal mechanical strains and the degree of structural perfection in a thin near-surface layer of the film, the electroreflectance (ER) method is used. The ER spectra are measured in the interval of the first direct interband transitions. It has been shown that the MWT results in the relaxation of internal mechanical strains in the irradiated films. In addition, the structural perfection of the thin near-surface layer of the irradiated film is improved. A mechanism that includes resonance effects and the local heating of the film defect regions is proposed to explain the observed effects.
https://arxiv.org/abs/1401.5972
We show over 100-fold enhancement of the exciton oscillator strength as the diameter of an InGaN nanodisk in a GaN nanopillar is reduced from a few micrometers to less than 40 nm, corresponding to the quantum dot limit. The enhancement results from significant strain relaxation in nanodisks less than 100 nm in diameter. Meanwhile, the radiative decay rate is improved only tenfold, due to a strong reduction of the local density of photon states in small nanodisks. Further increase in the radiative decay rate can be achieved by engineering the local density of photon states, such as adding a dielectric coating.
https://arxiv.org/abs/1309.6264
We report the discovery of two long-period giant planets from the Anglo-Australian Planet Search. HD 154857c is in a multiple-planet system, while HD 114613b appears to be solitary. HD 114613b has an orbital period P=10.5 years, and a minimum mass m sin i of 0.48 Jupiter masses; HD 154857c has P=9.5 years and m sin i=2.6 Jupiter masses. These new data confirm the planetary nature of the previously unconstrained long-period object in the HD 154857 system. We have performed detailed dynamical stability simulations which show that the HD 154857 two-planet system is stable on timescales of at least 100 million years. These results highlight the continued importance of “legacy” surveys with long observational baselines; these ongoing campaigns are critical for determining the population of Jupiter analogs, and hence of those planetary systems with architectures most like our own Solar system.
https://arxiv.org/abs/1401.5525
We demonstrate second order optical nonlinearity in a silicon architecture through heterogeneous integration of single-crystalline gallium nitride (GaN) on silicon (100) substrates. By engineering GaN microrings for dual resonance around 1560 nm and 780 nm, we achieve efficient, tunable second harmonic generation at 780 nm. The $\chi^{(2)}$ nonlinear susceptibility is measured to be as high as 16 $\pm$ 7 pm/V. Because GaN has a wideband transparency window covering ultraviolet, visible and infrared wavelengths, our platform provides a viable route for the on-chip generation of optical wavelengths in both the far infrared and near-UV through a combination of $\chi^{(2)}$ enabled sum-/difference-frequency processes.
https://arxiv.org/abs/1401.4798
Today, mobile robots are expected to carry out increasingly complex tasks in multifarious, real-world environments. Often, the tasks require a certain semantic understanding of the workspace. Consider, for example, spoken instructions from a human collaborator referring to objects of interest; the robot must be able to accurately detect these objects to correctly understand the instructions. However, existing object detection, while competent, is not perfect. In particular, the performance of detection algorithms is commonly sensitive to the position of the sensor relative to the objects in the scene. This paper presents an online planning algorithm which learns an explicit model of the spatial dependence of object detection and generates plans which maximize the expected performance of the detection, and by extension the overall plan performance. Crucially, the learned sensor model incorporates spatial correlations between measurements, capturing the fact that successive measurements taken at the same or nearby locations are not independent. We show how this sensor model can be incorporated into an efficient forward search algorithm in the information space of detected objects, allowing the robot to generate motion plans efficiently. We investigate the performance of our approach by addressing the tasks of door and text detection in indoor environments and demonstrate significant improvement in detection performance during task execution over alternative methods in simulated and real robot experiments.
https://arxiv.org/abs/1401.4612
We present a case study of artificial intelligence techniques applied to the control of production printing equipment. Like many other real-world applications, this complex domain requires high-speed autonomous decision-making and robust continual operation. To our knowledge, this work represents the first successful industrial application of embedded domain-independent temporal planning. Our system handles execution failures and multi-objective preferences. At its heart is an on-line algorithm that combines techniques from state-space planning and partial-order scheduling. We suggest that this general architecture may prove useful in other applications as more intelligent systems operate in continual, on-line settings. Our system has been used to drive several commercial prototypes and has enabled a new product architecture for our industrial partner. When compared with state-of-the-art off-line planners, our system is hundreds of times faster and often finds better plans. Our experience demonstrates that domain-independent AI planning based on heuristic search can flexibly handle time, resources, replanning, and multiple objectives in a high-speed practical application without requiring hand-coded control knowledge.
https://arxiv.org/abs/1401.3875
Diagrammatic reasoning (DR) is pervasive in human problem solving as a powerful adjunct to symbolic reasoning based on language-like representations. The research reported in this paper is a contribution to building a general-purpose DR system as an extension to a SOAR-like problem solving architecture. The work is in a framework in which DR is modeled as a process where subtasks are solved, as appropriate, either by inference from symbolic representations or by interaction with a diagram, i.e., perceiving specified information from a diagram or modifying/creating objects in a diagram in specified ways according to problem solving needs. The perceptions and actions in most DR systems built so far are hand-coded for the specific application, even when the rest of the system is built using the general architecture. The absence of a general framework for executing perceptions/actions poses a major hindrance to using them opportunistically – the essence of open-ended search in problem solving. Our goal is to develop a framework for executing a wide variety of specified perceptions and actions across tasks/domains without human intervention. We observe that the domain/task-specific visual perceptions/actions can be transformed into domain/task-independent spatial problems. We specify a spatial problem as a quantified constraint satisfaction problem in the real domain using an open-ended vocabulary of properties, relations and actions involving three kinds of diagrammatic objects – points, curves, regions. Solving a spatial problem from this specification requires computing the equivalent simplified quantifier-free expression, the complexity of which is inherently doubly exponential. We represent objects as configurations of simple elements to facilitate decomposition of complex problems into simpler and similar subproblems. We show that, if the symbolic solution to a subproblem can be expressed concisely, quantifiers can be eliminated from spatial problems in low-order polynomial time using similar previously solved subproblems. This requires determining the similarity of two problems, the existence of a mapping between them computable in polynomial time, and designing a memory for storing previously solved problems so as to facilitate search. The efficacy of the idea is shown by time complexity analysis. We demonstrate the proposed approach by executing perceptions and actions involved in DR tasks in two army applications.
https://arxiv.org/abs/1401.3854
Pathways of diffusion observed in real-world systems often require stochastic processes going beyond first-order Markov models, as implicitly assumed in network theory. In this work, we focus on second-order Markov models, and derive an analytical expression for the effect of memory on the spectral gap and thus, equivalently, on the characteristic time needed for the stochastic process to asymptotically reach equilibrium. Perturbation analysis shows that standard first-order Markov models can either overestimate or underestimate the diffusion rate of flows across the modular structure of a system captured by a second-order Markov network. We test the theoretical predictions on a toy example and on numerical data, and discuss their implications for network theory, in particular in the case of temporal or multiplex networks.
https://arxiv.org/abs/1401.0447
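As a minimal illustration of the quantity involved (a first-order toy sketch in Python, not the paper's second-order analysis), the spectral gap of a transition matrix sets the characteristic relaxation time toward equilibrium; a second-order model can be handled the same way by building the transition matrix on directed edges, i.e. on the last two nodes visited. The matrix below is a made-up two-module example.

```python
import numpy as np

# Row-stochastic transition matrix of a toy two-module network
# (hypothetical values; modules {0,1} and {2,3} with weak coupling).
T = np.array([
    [0.45, 0.45, 0.05, 0.05],
    [0.45, 0.45, 0.05, 0.05],
    [0.05, 0.05, 0.45, 0.45],
    [0.05, 0.05, 0.45, 0.45],
])

# Eigenvalues of a stochastic matrix: the leading one is 1; the second
# largest modulus lambda_2 controls convergence to equilibrium.
eigvals = np.linalg.eigvals(T)
lam2 = sorted(np.abs(eigvals), reverse=True)[1]

spectral_gap = 1.0 - lam2              # larger gap -> faster mixing
relaxation_time = 1.0 / spectral_gap   # characteristic time to equilibrium

print(f"lambda_2 = {lam2:.3f}, gap = {spectral_gap:.3f}, "
      f"relaxation time ~ {relaxation_time:.1f} steps")
```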
We present total energy and force calculations on the (GaN)${1-x}$(ZnO)${x}$ alloy. Site-occupancy configurations are generated by Monte Carlo (MC) simulations, based on a cluster expansion (CE) model proposed in a previous study. Surprisingly large local atomic coordinate relaxations are found by density-functional calculations using a 432-atom periodic supercell, for three representative configurations at $x=0.5$. These are used to generate bond length distributions. The configurationally averaged composition- and temperature-dependent short-range order (SRO) parameters of the alloys are discussed. Entropy is approximated in terms of pair distribution statistics and thus related to SRO parameters. This approximate entropy is compared with accurate numerical values from MC. An empirical model for the dependence of bond length on local chemical environments is proposed.
https://arxiv.org/abs/1401.0072
Architecture has its basis in a dialectic search for new choices of representation. We deal with form in contemporary architecture under two approaches: expression and content. We examine how mathematical principles based on natural growth can be applied in architectural design in order to create a dynamic, rather than static, structure. The dynamic process of a cell and its growth provides the basic structure. We exemplify the impact of these new forms on the new society that has already begun to emerge.
https://arxiv.org/abs/1312.7256
These are the proceedings of the Second Workshop on GRAPH Inspection and Traversal Engineering (GRAPHITE 2013), which took place on March 24, 2013 in Rome, Italy, as a satellite event of the 16th European Joint Conferences on Theory and Practice of Software (ETAPS 2013). The topic of the GRAPHITE workshop is graph analysis in all its forms in computer science. Graphs are used to represent data in many application areas, and they are subjected to various computational algorithms in order to acquire the desired information. These graph algorithms tend to have common characteristics, such as duplicate detection to guarantee their termination, independent of their application domain. Over the past few years, it has been shown that the scalability of such algorithms can be dramatically improved by using, e.g., external memory, by exploiting parallel architectures, such as clusters, multi-core CPUs, and graphics processing units, and by using heuristics to guide the search. Novel techniques to further scale graph search algorithms, and new applications of graph search, are within the scope of this workshop. Another topic of interest of the event is more related to the structural properties of graphs: which kind of graph characteristics are relevant for a particular application area, and how can these be measured? Finally, any novel way of using graphs for a particular application area is on topic. The goal of this event is to gather scientists from different communities, such as model checking, artificial intelligence planning, game playing, and algorithm engineering, who do research on graph search algorithms, such that awareness of each other's work is increased.
https://arxiv.org/abs/1312.7062
We investigate the use of deep neural networks for the novel task of class generic object detection. We show that neural networks originally designed for image recognition can be trained to detect objects within images, regardless of their class, including objects for which no bounding box labels have been provided. In addition, we show that bounding box labels yield a 1% performance increase on the ImageNet recognition challenge.
https://arxiv.org/abs/1312.6885
With at least 50 cores, Intel Xeon Phi is a true many-core architecture. Featuring fairly powerful cores, two cache levels, and very fast interconnections, the Xeon Phi can reach a theoretical peak of 1000 GFLOPs and over 240 GB/s. These numbers, as well as its flexibility - it can be used both as a coprocessor and as a stand-alone processor - are very tempting for parallel applications looking for new performance records. In this paper, we present an empirical study of the Xeon Phi, stressing its performance limits and relevant performance factors, ultimately aiming to present a simplified view of the machine for regular programmers in search of performance. To do so, we have micro-benchmarked the main hardware components of the processor - the cores, the memory hierarchies, the ring interconnect, and the PCIe connection. We show that, in ideal microbenchmarking conditions, the performance that can be achieved is very close to the theoretical peak, as given in the official programmer’s guide. We have also identified and quantified several causes of significant performance penalties. Our findings have been captured in four optimization guidelines and used to build a simplified programmer’s view of the Xeon Phi, eventually enabling the design and prototyping of applications on a functionality-based model of the architecture.
https://arxiv.org/abs/1310.5842
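A crude illustration of the micro-benchmarking methodology (a host-side Python/NumPy sketch for intuition only; the paper's measurements use native, vectorized, thread-pinned benchmarks on the Xeon Phi itself): time a simple copy kernel and compare the sustained bandwidth against an advertised peak.

```python
import time
import numpy as np

# STREAM-like "copy" microbenchmark: time a large array copy and report
# the sustained memory bandwidth. Array size and repeat count are
# arbitrary illustrative choices.
N = 64 * 1024 * 1024          # 64M doubles = 512 MB per array
a = np.ones(N, dtype=np.float64)
b = np.empty_like(a)

best = float("inf")
for _ in range(5):            # keep the best of several runs
    t0 = time.perf_counter()
    np.copyto(b, a)           # reads a, writes b
    best = min(best, time.perf_counter() - t0)

bytes_moved = 2 * N * 8       # one read + one write per element
print(f"sustained copy bandwidth ~ {bytes_moved / best / 1e9:.1f} GB/s")
```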
We present GLB, a programming model and an associated implementation that can handle a wide range of irregular parallel programming problems running over large-scale distributed systems. GLB is applicable both to problems that are easily load-balanced via static scheduling and to problems that are hard to statically load balance. GLB hides the intricate synchronizations (e.g., inter-node communication, initialization and startup, load balancing, termination and result collection) from the users. GLB internally uses a version of the lifeline graph based work-stealing algorithm proposed by Saraswat et al. Users of GLB are simply required to write several pieces of sequential code that comply with the GLB interface. GLB then schedules and orchestrates the parallel execution of the code correctly and efficiently at scale. We have applied GLB to two representative benchmarks: Betweenness Centrality (BC) and Unbalanced Tree Search (UTS). Among them, BC can be statically load-balanced whereas UTS cannot. In either case, GLB scales well, achieving nearly linear speedup on different computer architectures (Power, Blue Gene/Q, and K) up to 16K cores.
https://arxiv.org/abs/1312.5691
In this paper, a concept of a multipurpose object detection system, recently introduced in our previous work, is clarified. The core of this method is the transformation of a classifier into an object detector/locator via an image grid. This is a universal framework for locating objects of interest through classification. The framework standardizes and simplifies the implementation of custom systems by requiring only a custom analysis of the classification results on the image grid.
https://arxiv.org/abs/1401.6126
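A minimal sketch of the image-grid idea, assuming nothing about the authors' implementation: any patch classifier can be slid over a grid of cells, and cells whose classification score clears a threshold become candidate object locations. The `classify` callable, cell size, stride and threshold below are placeholders.

```python
import numpy as np

def detect_on_grid(image, classify, cell=64, stride=32, threshold=0.5):
    """Turn a plain classifier into a crude locator via an image grid.

    `classify(patch) -> score in [0, 1]` is any classifier for the target
    class (a placeholder here); overlapping grid cells whose score exceeds
    `threshold` are returned as candidate object locations.
    """
    H, W = image.shape[:2]
    hits = []
    for y in range(0, H - cell + 1, stride):
        for x in range(0, W - cell + 1, stride):
            score = classify(image[y:y + cell, x:x + cell])
            if score >= threshold:
                hits.append((x, y, cell, cell, score))
    return hits

# Toy usage: a "classifier" that fires on bright patches of a synthetic image.
img = np.random.rand(256, 256) * 0.3
img[96:160, 96:160] += 0.6          # plant a bright "object"
boxes = detect_on_grid(img, classify=lambda p: float(p.mean()), threshold=0.5)
print(f"{len(boxes)} candidate cells")
```

The custom part of a concrete system would then be the analysis of which grid cells fired and how to merge them into object locations.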
This paper describes an information system designed to support the large volume of monitoring information generated by a distributed testbed. This monitoring information is produced by several subsystems and consists of status and performance data that needs to be federated, distributed, and stored in a timely and easy-to-use manner. Our approach differs from existing approaches because it federates and distributes information at a low architectural level via messaging, a natural match to many of the producers and consumers of information. In addition, a database is easily layered atop the messaging layer for consumers that want to query and search the information. Finally, a common language to represent information in all layers of the information system makes it significantly easier for users to consume information. Performance data shows that this approach meets the significant needs of FutureGrid and would meet the needs of an experimental infrastructure twice the size of FutureGrid. In addition, this design also meets the needs of existing distributed scientific infrastructures.
https://arxiv.org/abs/1312.3504
The muon tomography technique, based on multiple Coulomb scattering of cosmic ray muons, has been proposed as a tool to detect the presence of high density objects inside closed volumes. In this paper a new and innovative method is presented to handle the density fluctuations (noise) of reconstructed images, a well known problem of this technique. The effectiveness of our method is evaluated using experimental data obtained with a muon tomography prototype located at the Legnaro National Laboratories (LNL) of the Istituto Nazionale di Fisica Nucleare (INFN). The results reported in this paper, obtained with real cosmic ray data, show that with appropriate image filtering and muon momentum classification, the muon tomography technique can detect high density materials, such as lead, even when surrounded by light or medium density material, in short times. A comparison with algorithms published in the literature is also presented.
https://arxiv.org/abs/1307.6093
Deep convolutional neural networks have recently achieved state-of-the-art performance on a number of image recognition benchmarks, including the ImageNet Large-Scale Visual Recognition Challenge (ILSVRC-2012). The winning model on the localization sub-task was a network that predicts a single bounding box and a confidence score for each object category in the image. Such a model captures the whole-image context around the objects but cannot handle multiple instances of the same object in the image without naively replicating the number of outputs for each instance. In this work, we propose a saliency-inspired neural network model for detection, which predicts a set of class-agnostic bounding boxes along with a single score for each box, corresponding to its likelihood of containing any object of interest. The model naturally handles a variable number of instances for each class and allows for cross-class generalization at the highest levels of the network. We are able to obtain competitive recognition performance on VOC2007 and ILSVRC2012, while using only the top few predicted locations in each image and a small number of neural network evaluations.
https://arxiv.org/abs/1312.2249
Community Question Answering (CQA) websites have become valuable repositories which host a massive volume of human knowledge. To maximize the utility of such knowledge, it is essential to evaluate the quality of an existing question or answer, especially soon after it is posted on the CQA website. In this paper, we study the problem of inferring the quality of questions and answers through a case study of a software CQA (Stack Overflow). Our key finding is that the quality of an answer is strongly positively correlated with that of its question. Armed with this observation, we propose a family of algorithms to jointly predict the quality of questions and answers, for both quantifying numerical quality scores and differentiating the high-quality questions/answers from those of low quality. We conduct extensive experimental evaluations to demonstrate the effectiveness and efficiency of our methods.
http://arxiv.org/abs/1311.6876
An “elephant in the room” for most current object detection and localization methods is the lack of explicit modelling of partial visibility due to occlusion by other objects or truncation by the image boundary. Based on a sliding window approach, we propose a detection method which explicitly models partial visibility by treating it as a latent variable. A novel non-maximum suppression scheme is proposed which takes into account the inferred partial visibility of objects while providing a globally optimal solution. The method gives more detailed scene interpretations than conventional detectors in that we are able to identify the visible parts of an object. We report improved average precision on the PASCAL VOC 2010 dataset compared to a baseline detector.
https://arxiv.org/abs/1311.6758
Vapor growth of semiconductors is analyzed using the recently obtained dependence of the adsorption energy on the electron charge transfer between surface adsorbed species and the bulk [Krukowski et al., J. Appl. Phys. 114 (2013) 063507; Kempisty et al., arXiv:1307.5778 (2013)]. Ab initio calculations were performed to study the physical properties of the GaN(0001) surface in ammonia-rich conditions, i.e. covered by a mixture of NH3 molecules and NH2 radicals. The Fermi level is pinned at the valence band maximum (VBM) and the conduction band minimum (CBM) for full coverage by NH3 molecules and NH2 radicals, respectively. For a crossover ammonia content of about 25% of a monolayer (ML), the Fermi level is unpinned. It is shown that the hydrogen adsorption energy depends on the doping in the bulk for the unpinned Fermi level, i.e. for this coverage. Thermodynamic and mechanical stability criteria for the surface structure are defined and compared. The mechanical stability of such surface coverages was checked by determining the desorption energy of hydrogen molecules. Thermodynamic stability analysis indicates that the equilibrium hydrogen vapor partial pressure initially increases steeply with NH3 content until the crossover NH3/NH2 coverage, i.e. the unpinned Fermi level condition, is attained. The entire range of experimentally accessible pressures falls within this regime, showing that vapor growth of semiconductor crystals occurs predominantly for an unpinned Fermi level at the surface, i.e. for flat bands. Accordingly, the adsorption energy of most species depends on the doping in the bulk, which is the basis of a possible molecular scenario explaining the dependence of the growth and doping of semiconductor crystals on the doping in the bulk.
https://arxiv.org/abs/1311.5239
The Internet of Things (IoT) is growing quickly, and 50 billion IoT devices are expected to be interconnected by 2020. For this huge number of IoT devices, a highly scalable discovery architecture is required to provide autonomous registration and look-up of IoT resources and services. The architecture should enable dynamic updates when new IoT devices are incorporated into the Internet and changes are made to existing ones. Nowadays, the most widely used discovery architecture on the Internet is the Domain Name System (DNS). DNS offers a scalable solution through two distributed mechanisms: multicast DNS (mDNS) and DNS Service Discovery (DNS-SD). Both mechanisms have been applied to discover resources and services in local IoT domains. However, a full architecture has not yet been designed to support global discovery, local directories and a search engine for ubiquitous IoT domains. Moreover, the architecture should provide other transversal functionalities such as a common semantics for describing services and resources, and a service layer for interconnecting with M2M platforms and mobile clients. This paper presents a service-oriented architecture based on DNS to support global discovery, local directories and a distributed search engine that enables scalable look-up of IoT resources and services. The architecture provides two lightweight discovery mechanisms based on mDNS and DNS-SD that have been optimized for the constraints of IoT devices to allow autonomous registration. Moreover, we analyse and provide other relevant elements, such as semantic descriptions and communication interfaces, to support the heterogeneity of IoT devices and clients. All these elements contribute to building a scalable architecture for the discovery and access of heterogeneous and ubiquitous IoT domains.
https://arxiv.org/abs/1311.4293
In this paper, we push forward the idea of machine learning systems whose operators can be modified and fine-tuned for each problem. This allows us to propose a learning paradigm where users can write (or adapt) their operators according to the problem, the data representation and the way the information should be navigated. To achieve this goal, data instances, background knowledge, rules, programs and operators are all written in the same functional language, Erlang. Since changing operators affects how the search space needs to be explored, heuristics are learnt as the result of a decision process based on reinforcement learning, where each action is defined as a choice of operator and rule. As a result, the architecture can be seen as a ‘system for writing machine learning systems’ or as a way to explore new operators, where policy reuse (a kind of transfer learning) is allowed. States and actions are represented in a Q matrix, which is actually a table, from which a supervised model is learnt. This makes it possible to have a more flexible mapping between old and new problems, since we work with an abstraction of rules and actions. We include some examples of policy reuse and of the application of the system gErl to IQ problems. In order to evaluate gErl, we test it on structured problems: a selection of IQ test tasks and some structured prediction problems (list patterns).
https://arxiv.org/abs/1311.4235
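A schematic sketch of the decision process described above (written in Python for brevity, though gErl itself is written in Erlang; the operator and rule names are placeholders): the state-action table is indexed by composite actions that pair an operator with a rule, and is updated with a standard Q-learning rule.

```python
import random
from collections import defaultdict

# Minimal sketch (not gErl itself): the policy is a Q table indexed by
# state and by a composite action = (operator, rule).
operators = ["generalise", "specialise"]       # placeholder operator names
rules = ["r1", "r2", "r3"]                     # placeholder rule identifiers
actions = [(op, r) for op in operators for r in rules]

Q = defaultdict(float)                         # Q[(state, action)] -> value
alpha, gamma, eps = 0.5, 0.9, 0.2

def choose(state):
    """Epsilon-greedy choice of an (operator, rule) pair."""
    if random.random() < eps:
        return random.choice(actions)
    return max(actions, key=lambda a: Q[(state, a)])

def update(state, action, reward, next_state):
    """Standard Q-learning update over the composite action space."""
    best_next = max(Q[(next_state, a)] for a in actions)
    Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])

# One illustrative interaction step on made-up state labels.
s = "problem_state_0"
a = choose(s)
update(s, a, reward=1.0, next_state="problem_state_1")
```

In gErl, a supervised model is then learnt from this Q matrix, which is what allows the more flexible mapping between old and new problems mentioned in the abstract.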
Modern spectroscopic surveys produce large spectroscopic databases, generally with sizes well beyond the scope of manual investigation. The need arises, therefore, for an automated line detection method with objective indicators of detection significance. In this paper, we present an automated and objective method for emission line detection in spectroscopic surveys and apply this technique to 1574 spectra, obtained with the Hectospec spectrograph on the MMT Observatory (MMTO), to detect Lyman alpha emitters near z ~ 2.7. The basic idea is to generate on-source (signal plus noise) and off-source (noise only) mock observations using Monte Carlo simulations, and to calculate completeness and reliability values, (C, R), for each simulated signal. By comparing the detections from real data with the Monte Carlo results, we assign completeness and reliability values to each real detection. From 1574 spectra, we obtain 881 raw detections and, by removing low reliability detections, we finalize 649 detections from an automated pipeline. Most of the high completeness and reliability detections, (C, R) ~ (1.0, 1.0), are robust when visually inspected; the low C and R detections also appear marginal on visual inspection. The performance of this method at detecting faint sources depends on the accuracy of the sky subtraction.
https://arxiv.org/abs/1311.3667
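A toy sketch of the on-source/off-source Monte Carlo idea (illustrative only: the Gaussian noise model, single-pixel "line", threshold detector and the simple reliability proxy below are assumptions, not the paper's pipeline):

```python
import numpy as np

rng = np.random.default_rng(0)
NPIX, SIGMA = 200, 1.0            # pixels per mock spectrum, noise level

def detect(spectrum, threshold=4.0 * SIGMA):
    """Toy detector: a 'line' is any pixel above the threshold."""
    return spectrum.max() > threshold

def completeness_reliability(line_flux, n_trials=2000):
    """Estimate (C, R) for a given injected line flux via Monte Carlo."""
    on_hits = off_hits = 0
    for _ in range(n_trials):
        noise = rng.normal(0.0, SIGMA, NPIX)
        on = noise.copy()
        on[NPIX // 2] += line_flux            # on-source: signal + noise
        on_hits += detect(on)
        off_hits += detect(noise)             # off-source: noise only
    C = on_hits / n_trials                    # fraction of injected lines recovered
    # Simple proxy for reliability: real detections vs all (real + spurious).
    R = on_hits / max(on_hits + off_hits, 1)
    return C, R

for flux in (3.0, 5.0, 8.0):
    C, R = completeness_reliability(flux)
    print(f"flux={flux:.1f}: C={C:.2f}, R={R:.2f}")
```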
Owing to the variety of possible charge and spin states and to the different ways of coupling to the environment, paramagnetic centres in wide band-gap semiconductors and insulators exhibit a strikingly rich spectrum of properties and functionalities, exploited in commercial light emitters and proposed for applications in quantum information. Here we demonstrate, by combining synchrotron techniques with magnetic, optical and \emph{ab initio} studies, that the codoping of GaN:Mn with Mg makes it possible to control the Mn$^{n+}$ charge and spin state in the range $3\le n\le 5$ and $2\ge S\ge 1$. According to our results, this outstanding degree of tunability arises from the formation of hitherto concealed cation complexes Mn-Mg$_k$, where the number of ligands $k$ is pre-defined by the fabrication conditions. The properties of these complexes allow the already remarkable optical capabilities of nitrides to be extended towards the infrared, open the way to solotronics functionalities, and generally represent a fresh perspective for magnetic semiconductors.
https://arxiv.org/abs/1311.3106
The evolution of the optical branch in the Raman spectra of (Ga,Mn)N:Mg epitaxial layers as a function of the Mn and Mg concentrations reveals the interplay between the two dopants. We demonstrate that the various Mn-Mg-induced vibrational modes can be understood in the picture of functional Mn–Mg$_k$ complexes formed when substitutional Mn cations are bound to $k$ substitutional Mg through nitrogen atoms, the number of ligands $k$ being driven by the ratio between the Mg and Mn concentrations.
https://arxiv.org/abs/1311.3097
Single photon emission was observed from site-controlled InGaN/GaN quantum dots. The single-photon nature of the emission was verified by the second-order correlation function up to 90 K, the highest temperature to date for site-controlled quantum dots. Micro-photoluminescence study on individual quantum dots showed linearly polarized single exciton emission with a lifetime of a few nanoseconds. The dimensions of these quantum dots were well controlled to the precision of state-of-the-art fabrication technologies, as reflected in the uniformity of their optical properties. The yield of optically active quantum dots was greater than 90%, among which 13%-25% exhibited single photon emission at 10 K.
https://arxiv.org/abs/1308.5908
Sophisticated multilayer neural networks have achieved state of the art results on multiple supervised tasks. However, successful applications of such multilayer networks to control have so far been limited largely to the perception portion of the control pipeline. In this paper, we explore the application of deep and recurrent neural networks to a continuous, high-dimensional locomotion task, where the network is used to represent a control policy that maps the state of the system (represented by joint angles) directly to the torques at each joint. By using a recent reinforcement learning algorithm called guided policy search, we can successfully train neural network controllers with thousands of parameters, allowing us to compare a variety of architectures. We discuss the differences between the locomotion control task and previous supervised perception tasks, present experimental results comparing various architectures, and discuss future directions in the application of techniques from deep learning to the problem of optimal control.
https://arxiv.org/abs/1311.1761
Through an optical campaign performed at 4 telescopes located in the northern and the southern hemispheres, we have obtained optical spectroscopy for 75 counterparts of unclassified or poorly studied hard X-ray emitting objects detected with Swift/BAT and listed in the 54 month Palermo BAT catalogue. All these objects also have observations taken with the Swift/XRT, ROSAT or Chandra satellites, which allowed us to reduce the high energy error box and pinpoint the most likely optical counterpart(s). We find that 69 sources in our sample are Active Galactic Nuclei (AGNs); of them, 35 are classified as type 1 (with broad and narrow emission lines), 33 are classified as type 2 (with only narrow emission lines) and one is a high-redshift QSO; the remaining 6 objects are galactic cataclysmic variables (CVs). Among the type 1 AGNs, 32 are objects of intermediate Seyfert type (1.2-1.9) and one is a Narrow Line Seyfert 1 galaxy; for 29 out of 35 type 1 AGNs, we have been able to estimate the central black hole mass and the Eddington ratio. Among the type 2 AGNs, two display optical features typical of the LINER class, 3 are classified as transition objects, 1 is a starburst galaxy and 2 are instead X-ray bright, optically normal galaxies. All galaxies classified in this work are relatively nearby objects (redshift 0.006 - 0.213), except for one at redshift 1.137.
https://arxiv.org/abs/1311.1458
We investigate fundamental decisions in the design of instruction set architectures for linear genetic programs that are used as both model systems in evolutionary biology and underlying solution representations in evolutionary computation. We subjected digital organisms with each tested architecture to seven different computational environments designed to present a range of evolutionary challenges. Our goal was to engineer a general purpose architecture that would be effective under a broad range of evolutionary conditions. We evaluated six different types of architectural features for the virtual CPUs: (1) genetic flexibility: we allowed digital organisms to more precisely modify the function of genetic instructions, (2) memory: we provided an increased number of registers in the virtual CPUs, (3) decoupled sensors and actuators: we separated input and output operations to enable greater control over data flow. We also tested a variety of methods to regulate expression: (4) explicit labels that allow programs to dynamically refer to specific genome positions, (5) position-relative search instructions, and (6) multiple new flow control instructions, including conditionals and jumps. Each of these features also adds complication to the instruction set and risks slowing evolution due to epistatic interactions. Two features (multiple argument specification and separated I/O) demonstrated substantial improvements in the majority of test environments. Some of the remaining tested modifications were detrimental, though most exhibited no systematic effects on evolutionary potential, highlighting the robustness of digital evolution. Combined, these observations enhance our understanding of how instruction architecture impacts evolutionary potential, enabling the creation of architectures that support more rapid evolution of complex solutions to a broad range of challenges.
https://arxiv.org/abs/1309.0719
The molecular motor myosin V exhibits a wide repertoire of pathways during the stepping process, which is intimately connected to its biological function. The best understood of these is hand-over-hand stepping by a swinging lever arm movement toward the plus-end of actin filaments, essential to its role as a cellular transporter. However, single-molecule experiments have also shown that the motor “foot stomps”, with one hand detaching and rebinding to the same site, and backsteps under sufficient load. Explaining the complete taxonomy of myosin V’s load-dependent stepping pathways, and the extent to which these are constrained by motor structure and mechanochemistry, are still open questions. Starting from a polymer model, we develop an analytical theory to understand the minimal physical properties that govern motor dynamics. In particular, we solve the first-passage problem of the head reaching the target binding site, investigating the competing effects of load pulling back at the motor, strain in the leading head that biases the diffusion in the direction of the target, and the possibility of preferential binding to the forward site due to the recovery stroke. The theory reproduces a variety of experimental data, including the power stroke and slow diffusive search regimes in the mean trajectory of the detached head, and the force dependence of the forward-to-backward step ratio, run length, and velocity. The analytical approach yields a formula for the stall force, identifying the relative contributions of the chemical cycle rates and mechanical features like the bending rigidities of the lever arms. Most importantly, by fully exploring the design space of the motor, we predict that myosin V is a robust motor whose dynamical behavior is not compromised by reasonable perturbations to the reaction cycle, and changes in the architecture of the lever arm.
https://arxiv.org/abs/1310.6741
Salient object detection aims to locate objects that capture human attention within images. Previous approaches often pose this as a problem of image contrast analysis. In this work, we model an image as a hypergraph that utilizes a set of hyperedges to capture the contextual properties of image pixels or regions. As a result, the problem of salient object detection becomes one of finding salient vertices and hyperedges in the hypergraph. The main advantage of hypergraph modeling is that it takes into account each pixel’s (or region’s) affinity with its neighborhood as well as its separation from image background. Furthermore, we propose an alternative approach based on center-versus-surround contextual contrast analysis, which performs salient object detection by optimizing a cost-sensitive support vector machine (SVM) objective function. Experimental results on four challenging datasets demonstrate the effectiveness of the proposed approaches against the state-of-the-art approaches to salient object detection.
https://arxiv.org/abs/1310.5767
We combine the effects of the electron-electron and electron-phonon interactions to study the electronic and optical properties of zb-GaN. We show that only by treating the two effects at the same time is it possible to obtain an unprecedented agreement of the zero- and finite-temperature electronic gaps and absorption spectra with the experimental results. Compared to the state-of-the-art results, our calculations predict a large effect on the main absorption peak position and width as well as on the overall absorption lineshape. These important modifications are traced back to the combined electron-phonon damping mechanism and non-uniform GW level corrections. Our results demonstrate the importance of treating on an equal footing the electron- and phonon-mediated correlation effects to obtain an accurate description of the physical properties of the group-III nitrides.
https://arxiv.org/abs/1310.2038
Efficiency droop is a major obstacle facing high-power applications of InGaN/GaN quantum-well (QW) light-emitting diodes. In this letter, we report the suppression of efficiency droop induced by density-activated defect recombination in the nanorod structure of a-plane InGaN/GaN QWs. In the high carrier density regime, the retained emission efficiency in a dry-etched nanorod sample is observed to be over two times higher than that in its parent QW sample. We further argue that the improvement is a combined effect of the benefit contributed by lateral carrier confinement and the deterioration caused by surface trapping.
https://arxiv.org/abs/1310.2004
We propose a new framework for object detection based on a generalization of the keypoint correspondence framework. This framework is based on replacing keypoints by keygraphs, i.e. isomorphic directed graphs whose vertices are keypoints, in order to explore relative and structural information. Unlike similar works in the literature, we deal directly with graphs in the entire pipeline: we search for graph correspondences instead of searching for individual point correspondences and then building graph correspondences from them afterwards. We also estimate the pose from graph correspondences instead of falling back to point correspondences through a voting table. The contributions of this paper are the proposed framework and an implementation that properly handles its inherent issues of loss of locality and combinatorial explosion, showing its viability for real-time applications. In particular, we introduce the novel concept of keytuples to solve a running time issue. The accuracy of the implementation is shown by results of over 800 experiments with a well-known database of images. The speed is illustrated by real-time tracking with two different cameras in ordinary hardware.
https://arxiv.org/abs/1310.0171
Selection rules are presented for electron-phonon scattering in GaN with the wurtzite crystal structure. The results are obtained for the interband scattering between the lowest conduction band ($\Gamma$-valley) and the second conduction band ($U$-valley). These selection rules are derived based on the original group-theoretical analysis of the crystal vibrations in GaN, which included detailed compatibility relations for all phonon modes.
https://arxiv.org/abs/1310.0079
GaN is a wide-bandgap semiconductor used in high-efficiency LEDs and solar cells. The solid is produced industrially at high chemical purities by deposition from a vapour phase, and oxygen may be included at this stage. Oxidation represents a potential path for tuning its properties without introducing more exotic elements or extreme processing conditions. In this work, ab initio computational methods are used to examine the energy potentials and electronic properties of different extents of oxidation in GaN. Solid-state vibrational properties of Ga, GaN, Ga2O3 and a single substitutional oxygen defect have been studied using the harmonic approximation with supercells. A thermodynamic model is outlined which combines the results of ab initio calculations with data from experimental literature. This model allows free energies to be predicted for arbitrary reaction conditions within a wide process envelope. It is shown that complete oxidation is favourable for all industrially-relevant conditions, while the formation of defects can be opposed by the use of high temperatures and a high N2:O2 ratio.
https://arxiv.org/abs/1309.6232
One of the central problems in computer vision is the detection of semantically important objects and the estimation of their pose. Most of the work in object detection has been based on single image processing and its performance is limited by occlusions and ambiguity in appearance and geometry. This paper proposes an active approach to object detection by controlling the point of view of a mobile depth camera. When an initial static detection phase identifies an object of interest, several hypotheses are made about its class and orientation. The sensor then plans a sequence of views, which balances the amount of energy used to move with the chance of identifying the correct hypothesis. We formulate an active hypothesis testing problem, which includes sensor mobility, and solve it using a point-based approximate POMDP algorithm. The validity of our approach is verified through simulation and real-world experiments with the PR2 robot. The results suggest that our approach outperforms the widely-used greedy view point selection and provides a significant improvement over static object detection.
https://arxiv.org/abs/1309.5401
An information-centric network should realize significant economies by exploiting a favourable memory-bandwidth tradeoff: it is cheaper to store copies of popular content close to users than to fetch them repeatedly over the Internet. We evaluate this tradeoff for some simple cache network structures under realistic assumptions concerning the size of the content catalogue and its popularity distribution. Derived cost formulas reveal the relative impact of various cost, traffic and capacity parameters, allowing an appraisal of possible future network architectures. Our results suggest it probably makes more sense to envisage the future Internet as a loosely interconnected set of local data centers than a network like today’s with routers augmented by limited capacity content stores.
https://arxiv.org/abs/1309.5220
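A toy version of the memory-bandwidth tradeoff (all prices, the catalogue size and the Zipf exponent below are made-up illustrative numbers, and the hit rate is idealized as the aggregate popularity of the cached most-popular objects rather than a proper cache model):

```python
import numpy as np

# Catalogue of N equal-size objects with Zipf(alpha) popularity, a cache
# holding the C most popular objects, and linear storage/bandwidth prices
# expressed per request served (hypothetical cost units).
N, alpha = 1_000_000, 0.8
popularity = 1.0 / np.arange(1, N + 1) ** alpha
popularity /= popularity.sum()

storage_cost_per_object = 1e-6     # hypothetical cost per cached object
bandwidth_cost_per_miss = 1.0      # hypothetical cost per upstream fetch

def total_cost(cache_size):
    hit_rate = popularity[:cache_size].sum()   # ideal "cache most popular" policy
    return (storage_cost_per_object * cache_size
            + bandwidth_cost_per_miss * (1.0 - hit_rate))

sizes = np.logspace(1, 6, 26, dtype=int)
best = min(sizes, key=total_cost)
print(f"best cache size ~ {best} objects, cost = {total_cost(best):.3f}")
```

Sweeping the cost and popularity parameters in such a calculation is what reveals whether storing content locally or fetching it repeatedly upstream is cheaper in a given regime.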
We provide a formulation for Local Support Vector Machines (LSVMs) that generalizes previous formulations, and brings out the explicit connections to local polynomial learning used in nonparametric estimation literature. We investigate the simplest type of LSVMs called Local Linear Support Vector Machines (LLSVMs). For the first time we establish conditions under which LLSVMs make Bayes consistent predictions at each test point $x_0$. We also establish rates at which the local risk of LLSVMs converges to the minimum value of expected local risk at each point $x_0$. Using stability arguments we establish generalization error bounds for LLSVMs.
https://arxiv.org/abs/1309.3699
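A simple sketch of the local-SVM idea (this uses a k-nearest-neighbour localization and scikit-learn's LinearSVC as stand-ins; the paper's formulation is more general than fitting on a hard neighbourhood):

```python
import numpy as np
from sklearn.svm import LinearSVC

def llsvm_predict(X_train, y_train, x0, k=50, C=1.0):
    """Fit a linear SVM on the k training points nearest to the query x0
    and use that local model to label x0 (a sketch of the LLSVM idea)."""
    d = np.linalg.norm(X_train - x0, axis=1)
    idx = np.argsort(d)[:k]
    local_y = y_train[idx]
    if len(np.unique(local_y)) == 1:          # degenerate one-class neighbourhood
        return local_y[0]
    clf = LinearSVC(C=C).fit(X_train[idx], local_y)
    return clf.predict(x0.reshape(1, -1))[0]

# Toy usage on a problem with a curved decision boundary, where a single
# global linear SVM would do poorly but local linear models do well.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 2))
y = (X[:, 0] ** 2 + X[:, 1] > 0).astype(int)
x0 = np.array([0.5, -0.1])
print("predicted label:", llsvm_predict(X, y, x0))
```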
We demonstrate the use of hydrogen-induced changes in the emission of isoelectronic Eu ions, in Mg-doped p-type GaN, as a powerful probe to study the dynamics of hydrogen movement under electron beam irradiation. We identify, experimentally, a two-step process in the dissociation of Mg-H complexes and propose, based on density functional theory, that the presence of minority carriers and the resulting charge states of the hydrogen drive this process.
https://arxiv.org/abs/1309.2338
Astrometry is a powerful technique to study the populations of extrasolar planets around nearby stars. It gives access to a unique parameter space and is therefore required for obtaining a comprehensive picture of the properties, abundances, and architectures of exoplanetary systems. In this review, we discuss the scientific potential, present the available techniques and instruments, and highlight a few results of astrometric planet searches, with an emphasis on observations from the ground. In particular, we discuss astrometric observations with the Very Large Telescope (VLT) Interferometer and a programme employing optical imaging with a VLT camera, both aimed at the astrometric detection of exoplanets. Finally, we set these efforts into the context of Gaia, ESA’s astrometry mission scheduled for launch in 2013, and present an outlook on the future of astrometric exoplanet detection from the ground.
https://arxiv.org/abs/1309.0329
We perform an automatic analysis of television news programs, based on the closed captions that accompany them. Specifically, we collect all the news broadcasted in over 140 television channels in the US during a period of six months. We start by segmenting, processing, and annotating the closed captions automatically. Next, we focus on the analysis of their linguistic style and on mentions of people using NLP methods. We present a series of key insights about news providers, people in the news, and we discuss the biases that can be uncovered by automatic means. These insights are contrasted by looking at the data from multiple points of view, including qualitative assessment.
https://arxiv.org/abs/1307.4879
Associative memories are structures that can retrieve previously stored information given a partial input pattern instead of an explicit address as in indexed memories. A few hardware approaches have recently been introduced for a new family of associative memories based on Sparse-Clustered Networks (SCN) that show attractive features. These architectures are suitable for implementations with low retrieval latency, but are limited to small networks that store a few hundred data entries. In this paper, a new hardware architecture for SCNs is proposed that features a new data-storage technique as well as a method we refer to as Selective Decoding (SD-SCN). The SD-SCN has been implemented using an FPGA similar to that used in previous efforts and achieves two orders of magnitude higher capacity, with no error-performance penalty, at the cost of a few extra clock cycles per data access.
https://arxiv.org/abs/1308.6021
Saliency detection is an intuitive way to provide useful cues for object detection and segmentation, as desired for many vision and graphics applications. In this paper, we provide a robust method for salient object detection and segmentation. Rather than using various pixel-level contrast definitions, we exploit global image structures and propose a new geodesic method dedicated to salient object detection. In the proposed approach, a new geodesic scheme, namely geodesic tunneling, is proposed to handle textures and local chaotic structures. With our new geodesic approach, a geodesic saliency map is estimated in correspondence to spatial structures in an image. Experimental evaluation on a salient object benchmark dataset validates that our algorithm consistently outperforms a number of state-of-the-art saliency methods, yielding higher precision and better recall rates. With the robust saliency estimation, we also present an unsupervised hierarchical salient object cut scheme simply using adaptive saliency thresholding, which attains the highest score in our F-measure test. We also apply our geodesic cut scheme to a number of image editing tasks, as demonstrated in additional experiments.
https://arxiv.org/abs/1302.6557
There is cognitive, neurological, and computational support for the hypothesis that defocusing attention results in divergent or associative thought, conducive to insight and finding unusual connections, while focusing attention results in convergent or analytic thought, conducive to rule-based operations. Creativity appears to involve both. It is widely believed that it is possible to escape mental fixation by spontaneously and temporarily engaging in a more associative mode of thought. The resulting insight (if found) may be refined in a more analytic mode of thought. The questions addressed here are: (1) how does the architecture of memory support these two modes of thought, and (2) what is happening at the neural level when one shifts between them? Recent advances in neuroscience shed light on this. Activated cell assemblies are composed of multiple neural cliques, groups of neurons that respond differentially to general or context-specific aspects of a situation. I refer to neural cliques that would not be included in the assembly if one were in an analytic mode, but would be if one were in an associative mode, as neurds. It is posited that the shift to a more associative mode of thought is accomplished by recruiting neurds that respond to abstract or atypical microfeatures of the problem or task. Since memory is distributed and content-addressable, this fosters the forging of associations to potentially relevant items previously encoded in those neurons. Thus it is proposed that creative thought proceeds not by searching a space of predefined alternatives and blindly tweaking those that hold promise, but by evoking remotely associated items through the recruitment of neurds in a distributed, content-addressable memory.
https://arxiv.org/abs/1308.5037