The quality of life of many people could be improved by autonomous humanoid robots in the home. To function in the human world, a humanoid household robot must be able to locate itself and perceive the environment like a human; scene perception, object detection and segmentation, and object spatial localization in 3D are fundamental capabilities for such humanoid robots. This paper presents a 3D multi-class object detection and segmentation method. The contributions are twofold. Firstly, we present a multi-class detection method, where a minimal joint codebook is learned in a principled manner. Secondly, we incorporate depth information using RGB-D imagery, which increases the robustness of the method and gives the 3D location of objects – necessary since the robot reasons in 3D space. Experiments show that the multi-class extension improves detection efficiency with respect to the number of classes, and that the depth extension improves detection robustness and gives sufficiently accurate 3D localization of the objects.
https://arxiv.org/abs/1301.5582
We have studied the high-frequency properties of the non-equilibrium electron gas in GaN samples subjected to electric and magnetic fields. Spectra of the complex tensor of the dynamical mobility have been calculated for the THz frequency range. For compensated GaN at low temperatures, in electric fields of a few $kV/cm$ and magnetic fields of a few $T$, we identify the existence of cyclotron and optical-phonon transit-time resonances. We show that the interplay of the two resonances gives rise to specific spectra of THz transmission and absorption (or gain). We suggest that experimental investigation of these effects will facilitate the development of field-controlled devices for THz optoelectronics.
https://arxiv.org/abs/1301.5146
This paper presents a knowledge-based object detection approach using the OWL ontology language, the Semantic Web Rule Language (SWRL), and 3D processing built-ins, aiming to combine geometrical analysis of 3D point clouds with specialists' knowledge. Here, we share our experience regarding the creation of a 3D semantic facility model from unorganized 3D point clouds. A knowledge-based object detection approach using the OWL ontology language is presented, and this knowledge is used to define SWRL detection rules. In addition, the combination of 3D processing built-ins and topological built-ins in SWRL rules allows a more flexible and intelligent detection, as well as the annotation of objects contained in 3D point clouds. The resulting WiDOP prototype takes a set of 3D point clouds as input and produces as output a populated ontology corresponding to an indexed scene, visualized in VRML. The context of the study is the detection of railway objects in a Deutsche Bahn scene, such as signals, technical cupboards, electric poles, etc. The resulting enriched and populated ontology, which contains the annotations of the objects in the point clouds, is used to feed a GIS system or an IFC file for architectural purposes.
https://arxiv.org/abs/1301.4991
Enhanced interband tunnel injection of holes into a PN junction is demonstrated using P-GaN/InGaN/N-GaN tunnel junctions with a specific resistivity of $1.2 \times 10^{-4}\ \Omega\,\mathrm{cm}^2$. The design methodology and low-temperature characteristics of these tunnel junctions are discussed, and their insertion into a PN junction device is described. Applications of tunnel junctions in III-nitride optoelectronic devices are explained using energy band diagrams. The lower band gap and the polarization fields reduce the tunneling barrier, eliminating the need for ohmic contacts to p-type GaN. This demonstration of efficient tunnel injection of carriers in III-nitrides can lead to the replacement of the existing resistive p-type contact material in light emitters with tunneling contact layers, which require very little metal footprint on the surface, resulting in enhanced light extraction from top-emitting devices.
https://arxiv.org/abs/1211.4905
This paper presents a knowledge-based object detection approach using the OWL ontology language, the Semantic Web Rule Language, and 3D processing built-ins, aiming to combine geometrical analysis of 3D point clouds with specialists' knowledge. This combination allows the detection and annotation of objects contained in point clouds. The context of the study is the detection of railway objects such as signals, technical cupboards, electric poles, etc. The resulting enriched and populated ontology, which contains the annotations of the objects in the point clouds, is used to feed a GIS system or an IFC file for architectural purposes.
https://arxiv.org/abs/1301.4783
The strain state and composition of a 400 nm thick (In,Ga)N layer grown by metal-organic chemical vapor deposition on a GaN template are investigated by spatially integrated x-ray diffraction and cathodoluminescence (CL) spectroscopy as well as by spatially resolved CL and energy dispersive x-ray analysis. The CL investigations confirm a process of strain relaxation accompanied by an increasing indium content toward the surface of the (In,Ga)N layer, which is known as the compositional pulling effect. Moreover, we identify the strained bottom, unstrained top, and gradually relaxed intermediate region of the (In,Ga)N layer. In addition to an increase of the indium content along the growth direction, the strain relaxation leads to an enhancement of the lateral variations of the indium distribution toward the surface.
https://arxiv.org/abs/1301.4138
In this paper we systematically analyze the electronic structures of polar and nonpolar wurtzite-InN/GaN quantum dots and their modification due to the quantum-confined Stark effect caused by intrinsic fields. This is achieved by combining continuum elasticity theory with an empirical tight binding model to describe the elastic and single-particle electronic properties in these nitride systems. Based on these results, a many-body treatment is used to determine optical absorption spectra. The efficiency of optical transitions depends on the interplay between the Coulomb interaction and the quantum-confined Stark effect. We introduce an effective confinement potential which represents the electronic structure under the influence of the intrinsic polarization fields and calculate the needed strength of Coulomb interaction to diminish the separation of electrons and holes.
https://arxiv.org/abs/1301.2468
Reversible debuggers have been developed at least since 1970. Such a feature is useful when the cause of a bug is close in time to the bug's manifestation. When the cause is far back in time, one resorts to setting appropriate breakpoints in the debugger and beginning a new debugging session. For these "difficult" bugs, whose cause is far in time from their manifestation, diagnosis ordinarily requires a series of debugging sessions to narrow down the cause; this work presents an automated tool to search through the process lifetime and locate it. As an example, the bug could be related to a program invariant failing. A binary search through the process lifetime suffices, since the invariant expression is true at the beginning of the program execution and false when the bug is encountered. An algorithm for such a binary search is presented within the FReD (Fast Reversible Debugger) software. It relies on the ability to checkpoint, restart, and deterministically replay the multiple processes of a debugging session, and is built on GDB (a debugger), DMTCP (for checkpoint-restart), and a custom deterministic record-replay plugin for DMTCP. FReD supports complex, real-world multithreaded programs such as MySQL and Firefox. Further, the binary search is robust: it operates on multi-threaded programs and takes advantage of multi-core architectures during replay.
https://arxiv.org/abs/1212.5204
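A minimal sketch of the binary-search idea described above, in Python. This is not the FReD implementation: checkpointing and deterministic replay are abstracted into a hypothetical restore_and_run(index) callback that restores checkpoint index, evaluates the program invariant there, and returns True or False.

# Binary search over a process lifetime (illustrative sketch, not the FReD code).
# Assumption: checkpoints 0..n-1 are ordered in time, the invariant holds at
# checkpoint 0 and fails at checkpoint n-1.

def locate_failure(restore_and_run, n):
    """Return the first checkpoint index at which the invariant is False."""
    lo, hi = 0, n - 1          # invariant True at lo, False at hi
    while hi - lo > 1:
        mid = (lo + hi) // 2
        if restore_and_run(mid):
            lo = mid           # still healthy: bug lies later
        else:
            hi = mid           # already broken: bug lies here or earlier
    return hi

if __name__ == "__main__":
    # Toy stand-in: pretend the invariant breaks at checkpoint 137 of 1000.
    first_bad = 137
    print(locate_failure(lambda i: i < first_bad, 1000))  # -> 137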
This paper is a survey discussing Information Retrieval concepts, methods, and applications. It goes into depth on the document and query modelling involved in IR systems, in addition to pre-processing operations such as removing stop words and searching by synonym techniques. The paper also tackles text categorization along with its application in neural networks and machine learning. Finally, the architecture of web crawlers is discussed, shedding light on how internet spiders index web documents and how they allow users to search for items on the web.
https://arxiv.org/abs/1212.2065
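As an illustration of two of the building blocks surveyed above, stop-word removal during pre-processing and the inverted index that crawlers and IR systems build, here is a small generic Python sketch; the tokenizer and stop-word list are simplified assumptions, not taken from the survey.

# Toy inverted index with stop-word removal (illustrative only).
import re
from collections import defaultdict

STOP_WORDS = {"the", "a", "an", "of", "and", "to", "in", "is"}  # tiny sample list

def tokenize(text):
    return [t for t in re.findall(r"[a-z0-9]+", text.lower()) if t not in STOP_WORDS]

def build_index(docs):
    """docs: dict doc_id -> text. Returns term -> set of doc_ids containing it."""
    index = defaultdict(set)
    for doc_id, text in docs.items():
        for term in tokenize(text):
            index[term].add(doc_id)
    return index

docs = {1: "Information retrieval and web search", 2: "Neural networks for text categorization"}
index = build_index(docs)
print(sorted(index["search"]))  # -> [1]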
In this work, we compare the photodetector performance of single defect-free undoped and n-i-n GaN nanowires (NWs). In vacuum, undoped NWs exhibit an increased responsivity, nonlinearities, and persistent photoconductivity effects (~100 s). Their unpinned Fermi level at the m-plane NW sidewalls enhances the role of surface states in the photodetection dynamics. In air, adsorbed oxygen accelerates the carrier dynamics at the price of reducing the photoresponse. In contrast, in n-i-n NWs, the Fermi level pinning at the contact regions limits the photoinduced sweep of the surface band bending, and hence reduces the environment sensitivity and prevents persistent effects even in vacuum.
https://arxiv.org/abs/1212.1591
We introduce a condition for an ensemble of networked phase oscillators to feature an abrupt, first-order phase transition from an unsynchronized to a synchronized state. This condition is met in a very wide spectrum of situations, and for various oscillators’ initial frequency distributions. We show that the occurrence of such transitions is always accompanied by the spontaneous emergence of frequency-degree correlations in random network architectures. We also discuss ways to relax the condition, and to further extend the possibility for the first-order transition to occur, and illustrate how to engineer magnetic-like states of synchronization. Our findings thus indicate how to search for abrupt transitions in real-world applications.
https://arxiv.org/abs/1212.0404
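The abrupt transition and the frequency-degree correlation discussed above can be explored numerically. The following is an illustrative Python sketch, not the authors' code; the network size, coupling values, and normalization are arbitrary assumptions. It integrates the Kuramoto model on a scale-free network with each oscillator's natural frequency set equal to its degree and reports the final order parameter for a few couplings.

# Kuramoto model with natural frequencies tied to node degrees (illustrative sketch).
import numpy as np
import networkx as nx

def order_parameter(theta):
    return np.abs(np.mean(np.exp(1j * theta)))

def simulate(coupling, steps=4000, dt=0.01, seed=0):
    rng = np.random.default_rng(seed)
    G = nx.barabasi_albert_graph(500, 3, seed=seed)       # scale-free network
    A = nx.to_numpy_array(G)
    k = A.sum(axis=1)
    omega = k.copy()                                       # frequency-degree correlation
    theta = rng.uniform(0, 2 * np.pi, len(k))
    for _ in range(steps):
        phase_diff = theta[None, :] - theta[:, None]       # entry [i, j] = theta_j - theta_i
        theta += dt * (omega + coupling * (A * np.sin(phase_diff)).sum(axis=1) / len(k))
    return order_parameter(theta)

for lam in (0.5, 1.0, 1.5, 2.0):
    print(lam, round(simulate(lam), 3))   # sweep the coupling and report the final order parameter r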
We propose a theory that relates difficulty of learning in deep architectures to culture and language. It is articulated around the following hypotheses: (1) learning in an individual human brain is hampered by the presence of effective local minima; (2) this optimization difficulty is particularly important when it comes to learning higher-level abstractions, i.e., concepts that cover a vast and highly-nonlinear span of sensory configurations; (3) such high-level abstractions are best represented in brains by the composition of many levels of representation, i.e., by deep architectures; (4) a human brain can learn such high-level abstractions if guided by the signals produced by other humans, which act as hints or indirect supervision for these high-level abstractions; and (5), language and the recombination and optimization of mental concepts provide an efficient evolutionary recombination operator, and this gives rise to rapid search in the space of communicable ideas that help humans build up better high-level internal representations of their world. These hypotheses put together imply that human culture and the evolution of ideas have been crucial to counter an optimization difficulty: this optimization difficulty would otherwise make it very difficult for human brains to capture high-level knowledge of the world. The theory is grounded in experimental observations of the difficulties of training deep artificial neural networks. Plausible consequences of this theory for the efficiency of cultural evolutions are sketched.
https://arxiv.org/abs/1203.2990
Object recognition is an important task in image processing and computer vision. This paper presents an effective method for object recognition with full boundary detection by combining the affine scale invariant feature transform (ASIFT) and a region merging algorithm. ASIFT is a fully affine-invariant algorithm, meaning that features are invariant to the six affine parameters, namely translation (two parameters), zoom, rotation, and the two camera-axis orientations. The features are very reliable and provide strong keypoints that can be used for matching between different images of an object. We train an object on several images with different aspects to find its best keypoints. Then, a robust region merging algorithm is used to recognize and detect the object with its full boundary in the other images, based on the ASIFT keypoints and a similarity measure for merging regions in the image. Experimental results show that the presented method is efficient and powerful, recognizing and detecting the object with high accuracy.
https://arxiv.org/abs/1211.5829
This paper describes a prototype of an extended XDB. XDB is an open-source and extensible database architecture developed by the National Aeronautics and Space Administration (NASA) to provide integration of heterogeneous and distributed information resources for scientific and engineering applications. XDB enables an unlimited number of desktops and distributed information sources to be linked seamlessly and efficiently into an information grid using the Data Access and Retrieval Composition (DARC) protocol, which provides a contextual search and retrieval capability useful for lightweight web applications. This paper shows the usage of XDB for common data management in the enterprise without burdening users and application developers with unnecessary complexity and formal schemas. Supported by NASA Ames Research Center through a NASA Exploration System Mission Directorate (ESMD) Higher Education grant, a project team at Fairfield University extended this concept and developed an extended XDB protocol together with a prototype providing text searches for Wiki. The technical specification of the protocol was posted to Source Forge (this http URL). The prototype was created for 16 tags of the MediaWiki dialect. As part of future work, the prototype will be further extended to the complete Wiki markup and to other dialects of Wiki.
https://arxiv.org/abs/1211.5629
We provide algorithms for efficiently addressing quantum memory in parallel. These imply that the standard circuit model can be simulated with low overhead by the more realistic model of a distributed quantum computer. As a result, the circuit model can be used by algorithm designers without worrying whether the underlying architecture supports the connectivity of the circuit. In addition, we apply our results to existing memory intensive quantum algorithms. We present a parallel quantum search algorithm and improve the time-space trade-off for the Element Distinctness and Collision problems.
https://arxiv.org/abs/1207.2307
This paper considers the problem of the information capacity of a random neural network. The network is represented by square, symmetric matrices. Each matrix has a weight bound that determines the highest and lowest possible values found in the matrix. The examined matrices are randomly generated and analyzed by a computer program. We find the surprising result that the capacity of the network is maximal for the binary random neural network and does not change as the number of quantization levels associated with the weights increases.
https://arxiv.org/abs/1211.3451
Density functional theory simulations were used to obtain the physical properties of the GaN/AlN system. Combining these two compounds into a multi-quantum-well (MQW) structure induces a strong electrostatic effect leading to the emergence of high-magnitude dipole layers at the AlN/GaN interfaces, which were first postulated by Tersoff [Phys. Rev. B 30(8), 4874 (1984)] and already identified in GaN/InN by Romanowski et al. [J. Phys. Chem. C 114, 14410 (2010)]. When GaN and AlN are combined into a heterostructure, a spatial projection of the wavefunctions indicates that the valence band offset between the states is of the order of 0.85 V. A systematic analysis of the influence of the number of Ga atomic layers on the properties of the wells has shown that for thicknesses up to 4 Ga layers, GaN acts as a carrier-localizing potential minimum rather than a standard quantum well, while for larger thicknesses it behaves as a standard quantum well. In all cases the wells have strongly localized quantum states close to the valence band maximum (VBM) and the conduction band minimum (CBM). The calculated oscillator strength decreases rapidly for well thicknesses in excess of 8 Ga layers (~21 {\AA}), which indicates that wells for UV emitters should be much thinner than those based on InGaN/GaN systems. The quantum-confined Stark effect (QCSE) related changes of the transition energy as a function of the geometric arrangement were also obtained.
https://arxiv.org/abs/1208.5849
The XENON100 experiment, in operation at the Laboratori Nazionali del Gran Sasso (LNGS) in Italy, was designed to search for evidence of dark matter interactions inside a volume of liquid xenon using a dual-phase time projection chamber. This paper describes the Slow Control System (SCS) of the experiment with emphasis on the distributed architecture as well as on its modular and expandable nature. The system software was designed according to the rules of Object-Oriented Programming and coded in Java, thus promoting code reusability and maximum flexibility during commissioning of the experiment. The SCS has been continuously monitoring the XENON100 detector since mid 2008, remotely recording hundreds of parameters on a few dozen instruments in real time, and setting emergency alarms for the most important variables.
https://arxiv.org/abs/1211.0836
We present a detailed investigation concerning the exciton dynamics in GaN epilayers grown on c-plane sapphire substrates, focussing on the exciton formation and the transition from the nonthermal to the thermal regime. The time-resolved kinetics of LO-phonon replicas is used to address the energy relaxation in the excitonic band. From ps time-resolved spectra we provide evidence for a long-lasting non-thermal excitonic distribution which accounts for the first 50 ps. Such a behavior is confirmed in different experimental conditions, both when non-resonant and resonant excitation are used. At low excitation power density the exciton formation and the subsequent thermalization are dominated by impurity scattering rather than by acoustic phonon scattering. The estimate of the average energy of the excitons as a function of delay after the excitation pulse provides information on the relaxation time, which describes the evolution of the exciton population to the thermal regime.
https://arxiv.org/abs/1211.0824
The Polarbear Cosmic Microwave Background (CMB) polarization experiment is currently observing from the Atacama Desert in Northern Chile. It will characterize the expected B-mode polarization due to gravitational lensing of the CMB, and search for the possible B-mode signature of inflationary gravitational waves. Its 250 mK focal plane detector array consists of 1,274 polarization-sensitive antenna-coupled bolometers, each with an associated lithographed band-defining filter. Each detector’s planar antenna structure is coupled to the telescope’s optical system through a contacting dielectric lenslet, an architecture unique in current CMB experiments. We present the initial characterization of this focal plane.
https://arxiv.org/abs/1210.7877
GaN nanowire ensembles with axial InGaN multi-quantum wells (MQWs) were grown by molecular beam epitaxy. In a series of samples, we varied the In content in the MQWs from almost zero to about 20%. Within the nanowire ensemble, the MQWs fluctuate strongly in composition and size. Statistical information about the composition was obtained from x-ray diffraction and Raman spectroscopy. Photoluminescence at room temperature was obtained in the range from 2.2 eV to 2.5 eV depending on In content. Contrary to planar MQWs, the intensity increases with increasing In content. We compare the observed emission energies with transition energies obtained from a one-dimensional model, and conclude that several mechanisms for carrier localization affect the luminescence of these three-dimensional structures.
https://arxiv.org/abs/1210.7597
We present the architecture behind Twitter’s real-time related query suggestion and spelling correction service. Although these tasks have received much attention in the web search literature, the Twitter context introduces a real-time “twist”: after significant breaking news events, we aim to provide relevant results within minutes. This paper provides a case study illustrating the challenges of real-time data processing in the era of “big data”. We tell the story of how our system was built twice: our first implementation was built on a typical Hadoop-based analytics stack, but was later replaced because it did not meet the latency requirements necessary to generate meaningful real-time results. The second implementation, which is the system deployed in production, is a custom in-memory processing engine specifically designed for the task. This experience taught us that the current typical usage of Hadoop as a “big data” platform, while great for experimentation, is not well suited to low-latency processing, and points the way to future work on data analytics platforms that can handle “big” as well as “fast” data.
https://arxiv.org/abs/1210.7350
Object detection is a fundamental task in computer vision and has many applications in image processing. This paper proposes a new approach for object detection by applying the scale invariant feature transform (SIFT) in an automatic segmentation algorithm. SIFT is invariant with respect to scale, translation, and rotation. The features are very distinctive and provide stable keypoints that can be used for matching an object in different images. First, an object is trained on images showing different aspects to find the best keypoints. The object can then be recognized in other images by using the obtained keypoints. A robust segmentation algorithm is then used to detect the object with its full boundary based on the SIFT keypoints. In the segmentation algorithm, a merging rule is defined to merge regions in the image with the assistance of the keypoints. The results show that the proposed approach is reliable for object detection and can extract the object boundary well.
https://arxiv.org/abs/1210.7038
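A sketch of the keypoint extraction and matching step described above, assuming OpenCV's SIFT implementation (cv2.SIFT_create, available in recent OpenCV builds) and hypothetical image file names; the region-merging segmentation itself is the paper's contribution and is not reproduced here.

# SIFT keypoint matching between a trained object image and a test image (illustrative sketch).
import cv2

train = cv2.imread("object_train.png", cv2.IMREAD_GRAYSCALE)   # hypothetical file names
test = cv2.imread("scene_test.png", cv2.IMREAD_GRAYSCALE)

sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(train, None)
kp2, des2 = sift.detectAndCompute(test, None)

# Lowe's ratio test to keep only distinctive matches.
matcher = cv2.BFMatcher(cv2.NORM_L2)
good = [m for m, n in matcher.knnMatch(des1, des2, k=2) if m.distance < 0.75 * n.distance]

# The matched keypoint locations in the test image would seed the segmentation step.
seeds = [kp2[m.trainIdx].pt for m in good]
print(len(good), "matches; first seed points:", seeds[:3])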
We analyse the storage and retrieval capacity in a recurrent neural network of spiking integrate-and-fire neurons. In the model we distinguish between a learning mode, during which the synaptic connections change according to a Spike-Timing Dependent Plasticity (STDP) rule, and a recall mode, in which the connection strengths are no longer plastic. Our findings show the ability of the network to store and recall periodic phase-coded patterns when a small number of neurons has been stimulated. The self-sustained dynamics selectively gives an oscillating spiking activity that matches one of the stored patterns, depending on the initialization of the network.
https://arxiv.org/abs/1210.6979
We study the collective dynamics of a Leaky Integrate-and-Fire network in which precise relative phase relationships of spikes among neurons are stored, as attractors of the dynamics, and selectively replayed at different time scales. Using an STDP-based learning process, we store in the connectivity several phase-coded spike patterns, and we find that, depending on the excitability of the network, different working regimes are possible, with transient or persistent replay activity induced by a brief signal. We introduce an order parameter to evaluate the similarity between stored and recalled phase-coded patterns, and measure the storage capacity. Modulation of the spiking thresholds during replay changes the frequency of the collective oscillation or the number of spikes per cycle, while preserving the phase relationships. This allows a coding scheme in which phase, rate, and frequency are dissociable. Robustness with respect to noise and to heterogeneity of the neuron parameters is studied, showing that, since the dynamics is a retrieval process, the neurons preserve stable, precise phase relationships among units and keep a unique frequency of oscillation, even in noisy conditions and with heterogeneity of the internal parameters of the units.
https://arxiv.org/abs/1210.6789
These are the proceedings of the First Workshop on GRAPH Inspection and Traversal Engineering (GRAPHITE 2012), which took place on April 1, 2012 in Tallinn, Estonia, as a satellite event of the 15th European Joint Conferences on Theory and Practice of Software (ETAPS 2012). The topic of the GRAPHITE workshop is graph search in all its forms in computer science. Graph search algorithms tend to have common characteristics, such as duplicate state detection, independent of their application domain. Over the past few years, it has been shown that the scalability of such algorithms can be dramatically improved by using, e.g., external memory, by exploiting parallel architectures, such as clusters, multi-core CPUs, and graphics processing units, and by using heuristics to guide the search. The goal of this event is to gather scientists from different communities, such as model checking, artificial intelligence planning, game playing, and algorithm engineering, who do research on graph search algorithms, such that awareness of each other's work is increased.
https://arxiv.org/abs/1210.6118
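As a reminder of the "duplicate state detection" common to the graph search algorithms mentioned above, here is a generic breadth-first search sketch in Python with a closed set; it is illustrative only and not drawn from any workshop contribution.

# Generic BFS with duplicate state detection via a closed set (illustrative).
from collections import deque

def bfs(start, successors, is_goal):
    """successors(state) -> iterable of next states; returns a goal state or None."""
    frontier = deque([start])
    visited = {start}                      # duplicate detection: never expand a state twice
    while frontier:
        state = frontier.popleft()
        if is_goal(state):
            return state
        for nxt in successors(state):
            if nxt not in visited:
                visited.add(nxt)
                frontier.append(nxt)
    return None

# Toy usage: search the integers reachable from 1 via +1 or *2 for the value 37.
print(bfs(1, lambda s: [s + 1, s * 2] if s <= 100 else [], lambda s: s == 37))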
Social applications mine user social graphs to improve performance in search, provide recommendations, allow resource sharing and increase data privacy. When such applications are implemented on a peer-to-peer (P2P) architecture, the social graph is distributed on the P2P system: the traversal of the social graph translates into a socially-informed routing in the peer-to-peer layer. In this work we introduce the model of a projection graph that is the result of decentralizing a social graph onto a peer-to-peer network. We focus on three social network metrics: degree, node betweenness and edge betweenness centrality and analytically formulate the relation between metrics in the social graph and in the projection graph. Through experimental evaluation on real networks, we demonstrate that when mapping user communities of sizes up to 50-150 users on each peer, the association between the properties of the social graph and the projection graph is high, and thus the properties of the (dynamic) projection graph can be inferred from the properties of the (slower changing) social graph. Furthermore, we demonstrate with two application scenarios on large-scale social networks the usability of the projection graph in designing social search applications and unstructured P2P overlays.
https://arxiv.org/abs/1210.6052
Several M dwarfs are targets of systematic monitoring in searches for Doppler signals caused by low-mass exoplanet companions. As a result, an emerging population of high-multiplicity planetary systems around low-mass stars is being detected as well. We optimize classic data analysis methods and develop new ones to enhance sensitivity towards low-amplitude planets in high-multiplicity systems. We apply these methods to the public HARPS observations of GJ 676A, a nearby M dwarf with one reported gas giant companion. We re-derived Doppler measurements using the template matching method (HARPS-TERRA). We used refined versions of periodograms to assess the presence of additional low-mass companions. We analyse the same dataset with Bayesian tools and compare the performance of both approaches. We confirm the reported massive gas giant candidate and a long-period trend, whose curvature is now well detected. We also find very secure evidence for two new candidates in close-in orbits with masses in the super-Earth regime. Despite the increased sensitivity of the new periodogram tools, we find that Bayesian methods are more sensitive in the early detection of candidate signals. While hardware development is important, development of data analysis techniques can help to reveal new results from existing data sets with significantly fewer resources. This new system holds the record in minimum-mass range (from Msin i = 4.5 M_Earth to 5 M_Jup) and in period range (from 3.6 days to more than 10 years). Although all the planet candidates are substantially more massive, it is the first exoplanetary system with a general architecture similar to that of our solar system. GJ 676A can be happily added to the family of high-multiplicity planetary systems around M dwarfs.
https://arxiv.org/abs/1206.7118
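The periodogram analysis mentioned above can be illustrated with a plain Lomb-Scargle periodogram via astropy; this is only a generic stand-in for the refined periodograms and Bayesian tools actually used in the paper, run here on synthetic radial-velocity data with an assumed 3.6-day signal.

# Generic Lomb-Scargle periodogram on synthetic radial-velocity data (illustrative).
import numpy as np
from astropy.timeseries import LombScargle

rng = np.random.default_rng(1)
t = np.sort(rng.uniform(0, 1500, 200))                  # unevenly sampled epochs (days)
period, amp = 3.6, 3.0                                  # hypothetical close-in super-Earth signal
rv = amp * np.sin(2 * np.pi * t / period) + rng.normal(0, 1.5, t.size)
rv_err = np.full(t.size, 1.5)

frequency, power = LombScargle(t, rv, rv_err).autopower(maximum_frequency=1.0)
best = 1 / frequency[np.argmax(power)]
print(f"strongest periodogram peak near {best:.2f} days")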
This paper proposes a technique for the unsupervised detection and tracking of arbitrary objects in videos. It is intended to reduce the need for detection and localization methods tailored to specific object types and serve as a general framework applicable to videos with varied objects, backgrounds, and image qualities. The technique uses a dependent Dirichlet process mixture (DDPM) known as the Generalized Polya Urn (GPUDDPM) to model image pixel data that can be easily and efficiently extracted from the regions in a video that represent objects. This paper describes a specific implementation of the model using spatial and color pixel data extracted via frame differencing and gives two algorithms for performing inference in the model to accomplish detection and tracking. This technique is demonstrated on multiple synthetic and benchmark video datasets that illustrate its ability to, without modification, detect and track objects with diverse physical characteristics moving over non-uniform backgrounds and through occlusion.
https://arxiv.org/abs/1210.3288
In this paper, we propose {\em distributed network compression via memory}. We consider two spatially separated sources with correlated unknown source parameters. We wish to study the universal compression of a sequence of length $n$ from one of the sources provided that the decoder has access to (i.e., memorized) a sequence of length $m$ from the other source. In this setup, the correlation does not arise from symbol-by-symbol dependency of two outputs from the two sources (as in Slepian-Wolf setup). Instead, the two sequences are correlated because they are originated from the two sources with \emph{unknown} correlated parameters. The finite-length nature of the compression problem at hand requires considering a notion of almost lossless source coding, where coding incurs an error probability $p_e(n)$ that vanishes as sequence length $n$ grows to infinity. We obtain bounds on the redundancy of almost lossless codes when the decoder has access to a random memory of length $m$ as a function of the sequence length $n$ and the permissible error probability $p_e(n)$. Our results demonstrate that distributed network compression via memory has the potential to significantly improve over conventional end-to-end compression when sufficiently large memory from previous communications is available to the decoder.
https://arxiv.org/abs/1210.2144
Small distributed systems are limited by their main memory in generating massively large graphs. Trivially extending current graph generators to utilize external memory leads to a large amount of random I/O and hence does not scale with size. In this work we offer a technique to generate massive-scale graphs on a small cluster of compute nodes with limited main memory. We develop several distributed and external-memory algorithms, primarily shuffle, relabel, redistribute, and compressed-sparse-row (CSR) conversion. The algorithms are implemented in an MPI/pthread model to help parallelize the operations across the multiple cores within each node. Using our scheme it is feasible to generate a graph of $2^{38}$ nodes (scale 38) using only 64 compute nodes. This can be compared with the current scheme, which would require at least 8192 compute nodes, assuming 64 GB of main memory each. Our work has broader implications for external-memory graph libraries such as STXXL and for graph processing on SSD-based supercomputers such as Dash and Gordon [1][2].
https://arxiv.org/abs/1210.0187
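One of the kernels listed above, compressed-sparse-row (CSR) conversion, can be stated compactly in serial form; the sketch below is a single-node numpy illustration, not the distributed MPI/pthread implementation.

# Serial edge-list -> CSR conversion (illustration of the 'csr convert' kernel).
import numpy as np

def edges_to_csr(src, dst, num_nodes):
    order = np.argsort(src, kind="stable")        # group edges by source vertex
    src, dst = src[order], dst[order]
    row_ptr = np.zeros(num_nodes + 1, dtype=np.int64)
    np.add.at(row_ptr, src + 1, 1)                # out-degree counts
    row_ptr = np.cumsum(row_ptr)
    return row_ptr, dst                           # col_idx is the reordered dst array

src = np.array([0, 2, 0, 1, 2])
dst = np.array([1, 0, 2, 2, 1])
row_ptr, col_idx = edges_to_csr(src, dst, 3)
print(row_ptr.tolist(), col_idx.tolist())         # [0, 2, 3, 5] [1, 2, 2, 0, 1]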
We report the first detection of a radio-continuum and molecular jet associated with a dominant blue-shifted maser source, G353.273+0.641. The radio jet extends 3000 au along the NW-SE direction. H$_2$O masers are found to be clustered at the root of the bipolar radio jet. A molecular jet is detected via thermal SiO ($\upsilon$ = 0, $J$ = 2-1) emission. The SiO spectrum is extremely wide (-120 – +87 km s$^{-1}$) and significantly blue-shift dominated, similar to the maser emission. The observed geometry and the remarkable spectral similarity between the H$_2$O masers and SiO strongly suggest the existence of a maser-scale ($\sim$ 340 au) molecular jet that is enclosed by the extended radio jet. We propose a "disc-masking" scenario as the origin of the strong blue-shift dominance, where an optically thick disc obscures the red-shifted lobe of a compact jet.
https://arxiv.org/abs/1209.4313
Existing theoretical universal algorithmic intelligence models are not practically realizable. A more pragmatic approach to artificial general intelligence is based on cognitive architectures, which are, however, non-universal in the sense that they can construct and use models of the environment only from Turing-incomplete model spaces. We believe that the way to real AGI consists in bridging the gap between these two approaches. This is possible if one considers cognitive functions as a "cognitive bias" (priors and search heuristics) that should be incorporated into the models of universal algorithmic intelligence without violating their universality. Earlier reported results supporting this approach, and its overall feasibility, are discussed using the examples of perception, planning, knowledge representation, attention, theory of mind, language, and others.
https://arxiv.org/abs/1209.4290
We describe the recovery of the faint Main Belt comet P/2008 R1 Garradd using several telescopes, culminating in a successful low-$S/N$ recovery with GMOS on the Gemini North telescope. This recovery was a time-critical effort for a mission proposal and had to be performed in a crowded field. We describe techniques and software tools for eliminating systematic noise artifacts and stellar residuals, bringing the final detection image statistics close to the Gaussian ideal for a median image stack and achieving a detection sensitivity close to this theoretical optimum. The magnitude of $R_c$=26.1$\pm$0.2, with an assumed geometric albedo of 0.05, corresponds to a radius of 0.3 km. For ice to have survived in this object, it is likely a more recent collisional fragment rather than one that has persisted unaltered over the age of the solar system. We discuss the implications of the unexpectedly faint magnitude and small nuclear size of P/2008 R1 for the survival of ice inside very small bodies.
https://arxiv.org/abs/1209.3833
In this paper, we propose to develop a service model architecture by merging multi-agent systems and semantic web technology. The proposed architecture works in two stages, namely Query Identification and Solution Development. A person, referred to as the customer, submits the problem details or requirements, which are referred to as a query. Anyone who can provide a service needs to register with the registrar module of the architecture. Services can be anything, ranging from expert consultancy in the field of agriculture to academic research, from selling products to manufacturing goods, and from medical help to legal issues or even providing logistics. The query submitted by the customer is first parsed and then iteratively understood with the help of domain experts and the customer to obtain a precise set of properties. The query thus identified is then solved with the help of intelligent agent systems, which search the semantic web for all those who can find or provide a solution. A workable solution workflow is created and then, depending on the requirements, the solution is implemented using negotiation or auctioning techniques to complete the service for the customer. This part is termed solution development. In this service-oriented architecture, we first analyze the complex set of user requirements and then provide the best possible solution in an optimized way by combining better information searches through the semantic web with better workflow provisioning using multi-agent systems.
https://arxiv.org/abs/1208.6421
We present studies of the magnetic properties of 2 MeV $^4$He$^+$-irradiated GaN grown by metal-organic chemical-vapor deposition. Particle irradiation allowed the controllable introduction of Ga vacancies into the samples. Magnetic moments with concentrations varying between 4.3 and 8.3x10^17 cm^-3 and showing superparamagnetic blocking at room temperature are observed. The appearance of a clear hysteresis curve at T = 5 K with a coercive field of about H_C = 270 Oe suggests that the formation of more complex Ga-vacancy-related defects is promoted with increasing Ga vacancy content. The small concentration of the observed magnetically active defects with respect to the total Ga-vacancy concentration suggests that oxygen/hydrogen-related vacancy complexes are the source of the observed magnetic moments.
https://arxiv.org/abs/1203.1142
We consider the quantum capture of a nonrelativistic massive particle by a moving infinite curve (a cosmic string in the wire approximation). It is shown that a cusp appearing on the string at a certain point due to the string dynamics can make the wave function collapse at this point, irrespective of the capture location.
https://arxiv.org/abs/1202.2222
Recently, parallel search engines have been implemented based on scalable distributed file systems such as Google File System. However, we claim that building a massively-parallel search engine using a parallel DBMS can be an attractive alternative since it supports a higher-level (i.e., SQL-level) interface than that of a distributed file system for easy and less error-prone application development while providing scalability. In this paper, we propose a new approach of building a massively-parallel search engine using a DB-IR tightly-integrated parallel DBMS and demonstrate its commercial-level scalability and performance. In addition, we present a hybrid (i.e., analytic and experimental) performance model for the parallel search engine. We have built a five-node parallel search engine according to the proposed architecture using a DB-IR tightly-integrated DBMS. Through extensive experiments, we show the correctness of the model by comparing the projected output with the experimental results of the five-node engine. Our model demonstrates that ODYS is capable of handling 1 billion queries per day (81 queries/sec) for 30 billion web pages by using only 43,472 nodes with an average query response time of 211 ms, which is equivalent to or better than those of commercial search engines. We also show that, by using twice as many (86,944) nodes, ODYS can provide an average query response time of 162 ms, which is significantly lower than those of commercial search engines.
https://arxiv.org/abs/1208.4270
We present a method for identifying localized secondary populations in stellar velocity data using Bayesian statistical techniques. We apply this method to the dwarf spheroidal galaxy Ursa Minor and find two secondary objects in this satellite of the Milky Way. One object is kinematically cold with a velocity dispersion of $4.25 \pm 0.75\ \kms$ and centered at $(9.1\arcmin \pm 1.5, 7.2\arcmin \pm 1.2)$ in relative RA and DEC with respect to the center of Ursa Minor. The second object has a large velocity offset of $-12.8^{+1.75}_{-1.5}\ \kms$ compared to Ursa Minor and centered at $(-14.0\arcmin^{+2.4}_{-5.8}, -2.5\arcmin^{+0.4}_{-1.0})$. The kinematically cold object has been found before using a smaller data set but the prediction that this cold object has a velocity dispersion larger than $2.0\ \kms$ at 95% C.L. differs from previous work. We use two and three component models along with the information criteria and Bayesian evidence model selection methods to argue that Ursa Minor has one or two localized secondary populations. The significant probability for a large velocity dispersion in each secondary object raises the intriguing possibility that each has its own dark matter halo, that is, it is a satellite of a satellite of the Milky Way.
https://arxiv.org/abs/1208.4146
Motivation: The comparison of diverse genomic datasets is fundamental to understanding genome biology. Researchers must explore many large datasets of genome intervals (e.g., genes, sequence alignments) to place their experimental results in a broader context and to make new discoveries. Relationships between genomic datasets are typically measured by identifying intervals that intersect: that is, they overlap and thus share a common genome interval. Given the continued advances in DNA sequencing technologies, efficient methods for measuring statistically significant relationships between many sets of genomic features are crucial for future discovery. Results: We introduce the Binary Interval Search (BITS) algorithm, a novel and scalable approach to interval set intersection. We demonstrate that BITS outperforms existing methods at counting interval intersections. Moreover, we show that BITS is intrinsically suited to parallel computing architectures such as Graphics Processing Units (GPUs) by illustrating its utility for efficient Monte-Carlo simulations measuring the significance of relationships between sets of genomic intervals.
https://arxiv.org/abs/1208.3407
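The core counting idea behind a binary-search approach to interval intersection can be sketched as follows: for a query interval, the number of intersecting database intervals equals the total count minus those ending before the query starts and those starting after it ends, each found by one binary search over a sorted array. The Python below is an illustrative serial version of that idea, not the authors' implementation.

# Counting interval intersections with two binary searches per query (illustrative).
from bisect import bisect_left, bisect_right

def build(intervals):
    starts = sorted(s for s, _ in intervals)
    ends = sorted(e for _, e in intervals)
    return starts, ends

def count_overlaps(starts, ends, qs, qe):
    """Number of database intervals [s, e] with s <= qe and e >= qs (closed intervals)."""
    n = len(starts)
    start_after = n - bisect_right(starts, qe)   # intervals starting after the query ends
    end_before = bisect_left(ends, qs)           # intervals ending before the query starts
    return n - start_after - end_before

db = [(1, 5), (3, 8), (10, 12), (7, 9)]
starts, ends = build(db)
print(count_overlaps(starts, ends, 4, 7))        # -> 3: (1,5), (3,8), (7,9)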
This document reports the x-ray powder diffraction main reflections (intensity threshold >= 100) for possible Fe-related phases forming during the metal-organic vapor phase epitaxy (MOVPE) growth of Fe in NH_3/H_2 mixture on wurtzite-GaN/sapphire. The 2\theta values are given for Cu K\alpha_1 radiation (1.5406 \AA) in the range 25-100 deg (ordered by increasing 2\theta). The GaN(000l) and Al_2O_3(000l) are also reported for reference.
https://arxiv.org/abs/1208.3420
Decision Taking is discussed in the context of the role it may play for a selling agent in a search market, in particular for agents involved in the sale of valuable and relatively unique items, such as a dwelling, a second hand car, or a second hand recreational vessel. Detailed connections are made between the architecture of decision making processes and a sample of software technology based concepts including instruction sequences, multi-threading, and thread algebra. Ample attention is paid to the initialization or startup of a thread dedicated to achieving a given objective, and to corresponding decision taking. As an application, the selling of an item is taken as an objective to be achieved by running a thread that was designed for that purpose.
https://arxiv.org/abs/1208.2460
Single planar arrays of Ga(x)Fe(4-x)N magnetic nanocrystals embedded in GaN have been fabricated in an epitaxial process. The phase of the nanocrystals and their epitaxial relationship with the host matrix are studied $via$ high-resolution transmission electron microscopy and high-resolution x-ray diffraction. By changing the growth parameters and mode, the crystallographic phase and chemical composition of the nanocrystals can be varied on demand. In view of the different magnetic properties of the various phases, applications in room-temperature ferromagnetic as well as antiferromagnetic spintronic devices are envisaged.
https://arxiv.org/abs/1208.2356
Creating and monitoring competitive and cost-effective pay-per-click advertisement campaigns through the web-search channel is a resource-demanding task in terms of expertise and effort. Assisting or even automating the work of an advertising specialist would have unrivaled commercial value. In this paper we propose a methodology, an architecture, and a fully functional framework for semi- and fully-automated creation, monitoring, and optimization of cost-efficient pay-per-click campaigns with budget constraints. The campaign creation module automatically generates keywords based on the content of the web page to be advertised, extended with corresponding ad-texts. These keywords are used to automatically create campaigns fully equipped with the appropriate parameter values. The campaigns are uploaded to the auctioneer platform and start running. The optimization module focuses on learning from existing campaign statistics and from the strategies applied in previous periods in order to invest optimally in the next period. The objective is to maximize performance (i.e., clicks, actions) under the current budget constraint. The fully functional prototype is experimentally evaluated on real-world Google AdWords campaigns and shows promising behavior with regard to campaign performance statistics, as it systematically outperforms the competing manually maintained campaigns.
https://arxiv.org/abs/1208.1187
We present a universal approach for determining the spontaneous polarization Psp of a wurtzite semiconductor from the emission energies of excitons bound to the different types of stacking faults in these crystals. Employing micro-photoluminescence and cathodoluminescence spectroscopy, we observe emission lines from the intrinsic and extrinsic stacking faults in strain-free GaN micro-crystals. By treating the polarization sheet charges associated with these stacking faults as a plate capacitor, Psp can be obtained from the observed transition energies with no additional assumptions. Self-consistent Poisson-Schroedinger calculations, aided by the microscopic electrostatic potential computed using density-functional theory, lead to nearly identical values for Psp. Our recommended value for Psp of GaN is -0.022+/-0.007 C/m^{2}.
https://arxiv.org/abs/1201.4294
The thermal conductivity of a crystal is sensitive to the presence of surfaces and nanoscale defects. While this opens tremendous opportunities to tailor thermal conductivity, a true "phonon engineering" of nanocrystals for a specific electronic or thermoelectric application can only be achieved when the dependence of thermal conductivity on the defect density, size, and spatial population is understood and quantified. Unfortunately, experimental studies of the effects of nanoscale defects are quite challenging. While molecular dynamics simulations are effective in calculating thermal conductivity, the defect density range that can be explored with feasible computing resources is unrealistically high. As a result, previous work has not generated a fully detailed understanding of the dependence of thermal conductivity on nanoscale defects. Using GaN as an example, we have combined a physically-motivated analytical model with highly-converged, large-scale molecular dynamics simulations to study the effects of defects on thermal conductivity. An analytical expression for thermal conductivity as a function of void density, size, and population has been derived and corroborated with the model, simulations, and experiments.
https://arxiv.org/abs/1207.3144
We propose a multi-wing harmonium model for mining multimedia data that extends and improves on earlier models based on two-layer random fields, which capture bidirectional dependencies between hidden topic aspects and observed inputs. This model can be viewed as an undirected counterpart of the two-layer directed models such as LDA for similar tasks, but bears significant difference in inference/learning cost tradeoffs, latent topic representations, and topic mixing mechanisms. In particular, our model facilitates efficient inference and robust topic mixing, and potentially provides high flexibilities in modeling the latent topic spaces. A contrastive divergence and a variational algorithm are derived for learning. We specialized our model to a dual-wing harmonium for captioned images, incorporating a multivariate Poisson for word-counts and a multivariate Gaussian for color histogram. We present empirical results on the applications of this model to classification, retrieval and image annotation on news video collections, and we report an extensive comparison with various extant models.
https://arxiv.org/abs/1207.1423
In recent years, the crucial importance of metrics in machine learning algorithms has led to increasing interest in optimizing distance and similarity functions. Most of the state of the art focuses on learning Mahalanobis distances (which requires fulfilling a positive semi-definiteness constraint) for use in a local k-NN algorithm. However, no theoretical link is established between the learned metrics and their performance in classification. In this paper, we make use of the formal framework of good similarities introduced by Balcan et al. to design an algorithm for learning a non-PSD linear similarity optimized in a nonlinear feature space, which is then used to build a global linear classifier. We show that our approach has uniform stability and derive a generalization bound on the classification error. Experiments performed on various datasets confirm the effectiveness of our approach compared to state-of-the-art methods and provide evidence that (i) it is fast, (ii) it is robust to overfitting, and (iii) it produces very sparse classifiers.
http://arxiv.org/abs/1206.6476
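The "good similarity" route to a global linear classifier maps each example to its similarities with a set of landmark points and trains a linear classifier in that space. The sketch below illustrates only that final step, using a fixed cosine similarity and scikit-learn's LinearSVC on synthetic data; the paper's contribution, learning the non-PSD similarity in a nonlinear feature space, is not reproduced here.

# Similarity-space linear classifier in the spirit of the good-similarity framework (illustrative sketch).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import LinearSVC

def cosine_similarity_features(X, landmarks):
    """Map each row of X to its cosine similarities with the landmark points."""
    Xn = X / np.linalg.norm(X, axis=1, keepdims=True)
    Ln = landmarks / np.linalg.norm(landmarks, axis=1, keepdims=True)
    return Xn @ Ln.T

X, y = make_classification(n_samples=400, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

landmarks = X_tr[:30]                              # a small set of landmark examples
clf = LinearSVC(C=1.0, max_iter=5000)
clf.fit(cosine_similarity_features(X_tr, landmarks), y_tr)
print("test accuracy:", clf.score(cosine_similarity_features(X_te, landmarks), y_te))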
Significant differences exist among literature for thermal conductivity of various systems computed using molecular dynamics simulation. In some cases, unphysical results, for example, negative thermal conductivity, have been found. Using GaN as an example case and the direct non-equilibrium method, extensive molecular dynamics simulations and Monte Carlo analysis of the results have been carried out to quantify the uncertainty level of the molecular dynamics methods and to identify the conditions that can yield sufficiently accurate calculations of thermal conductivity. We found that the errors of the calculations are mainly due to the statistical thermal fluctuations. Extrapolating results to the limit of an infinite-size system tends to magnify the errors and occasionally leads to unphysical results. The error in bulk estimates can be reduced by performing longer time averages using properly selected systems over a range of sample lengths. If the errors in the conductivity estimates associated with each of the sample lengths are kept below a certain threshold, the likelihood of obtaining unphysical bulk values becomes insignificant. Using a Monte-Carlo approach developed here, we have determined the probability distributions for the bulk thermal conductivities obtained using the direct method. We also have observed a nonlinear effect that can become a source of significant errors. For the extremely accurate results presented here, we predict a [0001] GaN thermal conductivity of 185 $\rm{W/K \cdot m}$ at 300 K, 102 $\rm{W/K \cdot m}$ at 500 K, and 74 $\rm{W/K \cdot m}$ at 800 K. Using the insights obtained in the work, we have achieved a corresponding error level (standard deviation) for the bulk (infinite sample length) GaN thermal conductivity of less than 10 $\rm{W/K \cdot m}$, 5 $\rm{W/K \cdot m}$, and 15 $\rm{W/K \cdot m}$ at 300 K, 500 K, and 800 K respectively.
https://arxiv.org/abs/1206.5445
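The infinite-size extrapolation referred to above is commonly performed by fitting the inverse conductivity against the inverse sample length and inverting the intercept; the snippet below illustrates that step on made-up numbers, not the paper's data.

# Extrapolating direct-method thermal conductivities to infinite sample length (illustration).
# 1/kappa(L) is approximately linear in 1/L, so the bulk value is 1/intercept.
import numpy as np

lengths_nm = np.array([20.0, 40.0, 80.0, 160.0])          # hypothetical sample lengths
kappa = np.array([45.0, 75.0, 110.0, 140.0])              # hypothetical finite-size conductivities (W/m/K)

slope, intercept = np.polyfit(1.0 / lengths_nm, 1.0 / kappa, 1)
print(f"extrapolated bulk conductivity ~ {1.0 / intercept:.0f} W/m/K")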
Recent molecular dynamics simulation methods have enabled thermal conductivity of bulk materials to be estimated. In these simulations, periodic boundary conditions are used to extend the system dimensions to the thermodynamic limit. Such a strategy cannot be used for nanostructures with finite dimensions which are typically much larger than it is possible to simulate directly. To bridge the length scales between the simulated and the actual nanostructures, we perform large-scale molecular dynamics calculations of thermal conductivities at different system dimensions to examine a recently developed conductivity vs. dimension scaling theory for both film and wire configurations. We demonstrate that by an appropriate application of the scaling law, reliable interpolations can be used to accurately predict thermal conductivity of films and wires as a function of film thickness or wire radius at realistic length scales from molecular dynamics simulations. We apply this method to predict thermal conductivities for GaN wurtzite nanostructures.
https://arxiv.org/abs/1206.5355