Using time-resolved photoluminescence spectroscopy, we explore the transient behavior of bound and free excitons in GaN nanowire ensembles. We investigate samples with distinct diameter distributions and show that the pronounced biexponential decay of the donor-bound exciton observed in each case is not caused by the nanowire surface. At long times, the individual exciton transitions decay with a common lifetime, which suggests a strong coupling between the corresponding exciton states. A system of nonlinear rate equations that takes this coupling into account directly reproduces the experimentally observed biexponential decay.
https://arxiv.org/abs/1308.1799
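The coupled-state picture in the abstract above can be illustrated with a minimal rate-equation sketch. The paper's actual model is nonlinear; the linear two-level system below, with invented lifetimes and coupling strength, only shows how coupling produces a fast initial decay followed by a common slow decay of both populations:

```python
import numpy as np

def decay(tau_dx=0.2, tau_fx=1.0, k=2.0, dt=1e-4, t_max=5.0):
    """Euler integration of two coupled exciton populations.

    n_dx: donor-bound exciton, n_fx: free exciton (arbitrary units).
    The coupling term k*(other - self) transfers population between
    the two states; all parameter values here are illustrative only.
    """
    n_steps = int(round(t_max / dt))
    n_dx = np.empty(n_steps + 1)
    n_fx = np.empty(n_steps + 1)
    n_dx[0], n_fx[0] = 1.0, 0.5          # hypothetical initial populations
    for i in range(n_steps):
        d_dx = -n_dx[i] / tau_dx + k * (n_fx[i] - n_dx[i])
        d_fx = -n_fx[i] / tau_fx + k * (n_dx[i] - n_fx[i])
        n_dx[i + 1] = n_dx[i] + dt * d_dx
        n_fx[i + 1] = n_fx[i] + dt * d_fx
    ts = np.linspace(0.0, t_max, n_steps + 1)
    return ts, n_dx, n_fx
```

At early times the donor-bound population decays at a fast effective rate; at late times both populations decay with the common slow eigenvalue of the coupled system, which is the biexponential signature the abstract describes.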
In this paper, we present an overview of a multimodal system for content-based indexing and searching of video sequences, developed within the REGIMVid project. A large part of our system was developed as part of the TRECVideo evaluation. The MAVSIR platform provides high-level feature extraction from audio-visual content and concept/event-based video retrieval. We illustrate the architecture of the system and provide an overview of the descriptors supported to date. We then demonstrate the usefulness of the toolbox for feature extraction, concept/event learning, and retrieval on large video-surveillance datasets. The results are encouraging: we obtain good results on several event categories, and for all events we have gained valuable insights and experience.
https://arxiv.org/abs/1308.1150
Question answering systems can be seen as the next step in information retrieval, allowing users to pose questions in natural language and receive compact answers. For a question answering system to be successful, research has shown that correct classification of a question with respect to its expected answer type is a prerequisite. We propose a novel architecture for question classification and for searching an index maintained on the basis of expected answer types, enabling efficient question answering. The system uses an Answer Relevance Score criterion to assess the relevance of each answer it returns. In our analysis, the proposed system shows more promising results than existing systems based on question classification.
https://arxiv.org/abs/1307.6937
N-polar GaN channel mobility is important for high-frequency device applications. In this Letter, we report theoretical calculations of the surface-optical (SO) phonon scattering rate of the two-dimensional electron gas (2-DEG) in N-polar GaN quantum-well channels with high-k dielectrics. The effect of SO phonons on 2-DEG mobility was found to be small at channel thicknesses above 5 nm. However, the SO-limited mobility in 3 nm N-polar GaN channels with high-k dielectrics is low and limits the total mobility. SO scattering for GaN with a SiNx dielectric was found to be negligible due to the high SO phonon energy of SiNx.
https://arxiv.org/abs/1307.6405
The crucial importance of metrics in machine learning algorithms has led to an increasing interest in optimizing distance and similarity functions, an area of research known as metric learning. When data consist of feature vectors, a large body of work has focused on learning a Mahalanobis distance. Less work has been devoted to metric learning from structured objects (such as strings or trees), most of it focusing on optimizing a notion of edit distance. We identify two important limitations of current metric learning approaches. First, they make it possible to improve the performance of local algorithms such as k-nearest neighbors, but metric learning for global algorithms (such as linear classifiers) has not been studied so far. Second, the question of the generalization ability of metric learning methods has been largely ignored. In this thesis, we propose theoretical and algorithmic contributions that address these limitations. Our first contribution is the derivation of a new kernel function built from learned edit probabilities. Our second contribution is a novel framework for learning string and tree edit similarities inspired by the recent theory of (ε,γ,τ)-good similarity functions. Using uniform stability arguments, we establish theoretical guarantees for the learned similarity that give a bound on the generalization error of a linear classifier built from that similarity. In our third contribution, we extend these ideas to metric learning from feature vectors by proposing a bilinear similarity learning method that efficiently optimizes the (ε,γ,τ)-goodness. Generalization guarantees are derived for our approach, showing that our method minimizes a tighter bound on the generalization error of the classifier. Our last contribution is a framework for establishing generalization bounds for a large class of existing metric learning algorithms, based on a notion of algorithmic robustness.
http://arxiv.org/abs/1307.4514
A List Viterbi detector produces a rank-ordered list of the N globally best candidates in a trellis search. We propose a List Viterbi detector structure that incorporates noise prediction with periodic state-metric updates based on outer error detection codes (EDCs). More specifically, a periodic decision-making process is applied to non-overlapping sliding windows of P bits, based on the outer EDCs. In a number of magnetic recording applications, error correction coding (ECC) is adversely affected by the presence of long and dominant error events. Unlike conventional post-processing methods, which are usually tailored to a specific set of dominant error events, or joint modulation-code trellis architectures, which operate on larger state spaces at the expense of increased implementation complexity, the proposed detector uses no a priori information about the error event distributions and operates on a reduced-state trellis. We present pre-ECC bit error rate performance as well as post-ECC codeword failure rates of the proposed detector using both a perfect-detection scenario and practical detection codes, as the EDCs are not essential to the overall design. Furthermore, the proposed algorithm is observed not to introduce new error events. Simulation results show that the proposed algorithm improves bit error and post-ECC codeword failure rates at the expense of some increase in complexity.
https://arxiv.org/abs/1307.5906
We present techniques to parallelize membership tests for Deterministic Finite Automata (DFAs). Our method searches arbitrary regular expressions by matching multiple bytes in parallel using speculation. We partition the input string into chunks, match chunks in parallel, and combine the matching results. Our parallel matching algorithm exploits structural DFA properties to minimize the speculative overhead. Unlike previous approaches, our speculation is failure-free, i.e., (1) sequential semantics are maintained, and (2) speed-downs are avoided altogether. On architectures with a SIMD gather-operation for indexed memory loads, our matching operation is fully vectorized. The proposed load-balancing scheme uses an off-line profiling step to determine the matching capacity of each participating processor. Based on matching capacities, DFA matches are load-balanced on inhomogeneous parallel architectures such as cloud computing environments. We evaluated our speculative DFA membership test for a representative set of benchmarks from the Perl-compatible Regular Expression (PCRE) library and the PROSITE protein database. Evaluation was conducted on a 4-CPU (40-core) shared-memory node of the Intel Manycore Testing Lab (Intel MTL), on the Intel AVX2 SDE simulator for 8-way fully vectorized SIMD execution, and on a 20-node (288-core) cluster on the Amazon EC2 computing cloud.
https://arxiv.org/abs/1210.5093
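The chunk-and-speculate idea above can be sketched in a few lines of Python. This toy version runs sequentially for clarity (a real implementation matches chunks concurrently), and the example DFA and the choice of the start state as the speculated entry state are illustrative assumptions, not details from the paper:

```python
def run(delta, state, chunk):
    """Run the DFA transition function over a chunk of input."""
    for ch in chunk:
        state = delta[(state, ch)]
    return state

def speculative_match(delta, start, accepting, text, n_chunks=4):
    """Chunked DFA membership test with speculation on entry states."""
    guess = start  # speculated entry state for every chunk (assumption)
    size = max(1, -(-len(text) // n_chunks))  # ceil division
    chunks = [text[i:i + size] for i in range(0, len(text), size)]

    # Speculative phase: run every chunk from the guessed entry state.
    # In a parallel implementation these runs happen concurrently.
    spec_exit = [run(delta, guess, c) for c in chunks]

    # Fix-up phase: reuse a speculative result only when the actual
    # entry state matches the guess; otherwise re-run the chunk.
    state = start
    for chunk, exit_state in zip(chunks, spec_exit):
        state = exit_state if state == guess else run(delta, state, chunk)
    return state in accepting
```

Because mispredicted chunks are simply re-run, the sequential semantics are preserved, which mirrors the failure-free property claimed in the abstract.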
Associative memories store content in such a way that the content can be later retrieved by presenting the memory with a small portion of the content, rather than presenting the memory with an address as in more traditional memories. Associative memories are used as building blocks for algorithms within database engines, anomaly detection systems, compression algorithms, and face recognition systems. A classical example of an associative memory is the Hopfield neural network. Recently, Gripon and Berrou have introduced an alternative construction which builds on ideas from the theory of error correcting codes and which greatly outperforms the Hopfield network in capacity, diversity, and efficiency. In this paper we implement a variation of the Gripon-Berrou associative memory on a general-purpose graphics processing unit (GPU). The work of Gripon and Berrou proposes two retrieval rules, sum-of-sum and sum-of-max. The sum-of-sum rule uses only matrix-vector multiplication and is easily implemented on the GPU. The sum-of-max rule is much less straightforward to implement because it involves non-linear operations. However, the sum-of-max rule gives significantly better retrieval error rates. We propose a hybrid rule tailored for implementation on a GPU which achieves an 880-fold speedup without sacrificing any accuracy.
https://arxiv.org/abs/1303.7032
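The sum-of-sum rule mentioned above reduces to a matrix-vector product followed by a per-cluster winner-take-all, which is why it maps so directly onto a GPU. A minimal NumPy sketch of a Gripon-Berrou-style memory follows; the cluster sizes and stored messages are invented for illustration:

```python
import numpy as np

C, L = 4, 8          # clusters x neurons per cluster (illustrative sizes)
N = C * L

def idx(cluster, neuron):
    return cluster * L + neuron

def store(messages):
    """Build the binary connection matrix: each stored message is a
    clique joining one neuron per cluster."""
    W = np.zeros((N, N), dtype=np.int32)
    for msg in messages:
        for c1 in range(C):
            for c2 in range(C):
                if c1 != c2:
                    W[idx(c1, msg[c1]), idx(c2, msg[c2])] = 1
    return W

def retrieve(W, partial):
    """Sum-of-sum retrieval: scores = W @ v, then winner-take-all
    within each cluster. `partial` maps known clusters to neurons."""
    v = np.zeros(N, dtype=np.int32)
    for c, n in partial.items():
        v[idx(c, n)] = 1
    scores = W @ v                        # the matrix-vector step
    out = []
    for c in range(C):
        if c in partial:
            out.append(partial[c])        # clamp known clusters
        else:
            out.append(int(np.argmax(scores[c * L:(c + 1) * L])))
    return out
```

With few stored messages, the erased cluster's correct neuron is the only one connected to all the known neurons, so the winner-take-all recovers it.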
In this study, we investigate the suitability of the PGAS parallel language X10 for implementing a constraint-based local search solver. We chose this language to benefit from the ease of use and the architectural independence from parallel resources that it offers. We present our implementation strategy, exploring different sources of parallelism in the context of an implementation of the Adaptive Search algorithm, and discuss the algorithm and its implementation in detail. Performance evaluation on a representative set of benchmarks shows close to linear speed-ups on all the problems treated.
https://arxiv.org/abs/1307.4641
This study is part of the design of an audio system for in-house object detection for visually impaired and low-vision users, whether blind from birth, through accident, or due to old age. The input of the system is a scene and the output is audio. An alert facility is provided based on the severity level of detected objects (snake, broken glass, etc.) and during difficulties. The study proposes techniques for speedy detection of objects based on their shape and scale. Features are extracted into a minimal space using dynamic scaling. From a scene, clusters of objects are formed based on scale and shape. Searching is performed among the clusters, initially based on shape, scale, mean cluster value, and object index. The minimum number of operations needed to detect the likely shape of the object is performed. If the object has no likely matching shape or scale, the remaining operations required for object detection are skipped, and the object is instead declared new. In this way, the study arrives at a speedy way of detecting objects.
https://arxiv.org/abs/1307.3439
Introduction: Before embarking on the design of any computer system it is first necessary to assess the magnitude of the problem. In the case of a web search engine this assessment amounts to determining the current size of the web, the growth rate of the web, and the quantity of computing resources necessary to search it, and projecting this historical growth into the future. Method: The more than 20-year history of the web makes it possible to make short-term projections of future growth. The longer history of hard disk drives (and smartphone memory cards) makes it possible to make short-term hardware projections. Analysis: Historical data on Internet uptake and hardware growth is extrapolated. Results: It is predicted that within a decade the storage capacity of a single hard drive will exceed the size of the index of the web at that time. Within another decade it will be possible to store the entire searchable text on the same hard drive. Within another decade the entire searchable web (including images) will also fit. Conclusion: This result raises questions about the future architecture of search engines, and several new models are proposed. In one model the user's computer is an active part of the distributed search architecture: users search a pre-loaded snapshot (back-file) of the web on their local device, which frees the online data centre to search just the difference between the snapshot and the current time. Advantageously, this also makes it possible to search while disconnected from the Internet. In another model all changes to all files are broadcast to all users (forming a star-like network) and no data centre is needed.
https://arxiv.org/abs/1307.1179
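The style of projection used in the study above is simple compound-growth arithmetic: find the first year in which per-drive capacity, growing at one rate, overtakes the index size, growing at another. The starting sizes and growth rates below are placeholders, not the paper's figures:

```python
def crossover_year(start_year=2013, disk_tb=4.0, index_tb=10_000.0,
                   disk_growth=1.4, index_growth=1.2):
    """First year in which a single drive's capacity exceeds the
    index size, under constant annual growth factors.

    All parameter values are hypothetical stand-ins for the kind of
    extrapolation described in the abstract.
    """
    if disk_growth <= index_growth:
        raise ValueError("disk capacity must grow faster than the index")
    year, disk, index = start_year, disk_tb, index_tb
    while disk < index:
        year += 1
        disk *= disk_growth
        index *= index_growth
    return year
```

The crossover date is very sensitive to the assumed growth factors, which is why the paper grounds its projections in decades of historical disk-drive and web-size data.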
This paper presents our joint research efforts on big data benchmarking with several industrial partners. Considering the complexity, diversity, workload churn, and rapid evolution of big data systems, we take an incremental approach to big data benchmarking. As a first step, we focus on search engines, the most important domain in Internet services in terms of page views and daily visitors. However, search engine service providers treat their data, applications, and web access logs as confidential, which prevents us from building benchmarks directly. To overcome these difficulties, together with several industry partners we extensively investigated open source search engine solutions and obtained permission to use anonymized Web access logs. Moreover, through two years of effort, we created a semantic search engine named ProfSearch (available from this http URL). These efforts pave the way for our big data benchmark suite derived from search engines: BigDataBench, which is released on the web page (this http URL). We report a detailed analysis of search engine workloads and present our benchmarking methodology. An innovative data generation methodology and tool are proposed to generate scalable volumes of big data from a small seed of real data, preserving the semantics and locality of the data. We also preliminarily report two case studies using BigDataBench for both system and architecture research.
https://arxiv.org/abs/1307.0320
Recently, there has been considerable interest in new tiered network cellular architectures, which would likely use many more cell sites than found today. Two major challenges will be i) providing backhaul to all of these cells and ii) finding efficient techniques to leverage higher frequency bands for mobile access and backhaul. This paper proposes the use of outdoor millimeter wave communications for backhaul networking between cells and mobile access within a cell. To overcome the outdoor impairments found in millimeter wave propagation, this paper studies beamforming using large arrays. However, such systems will require narrow beams, increasing sensitivity to movement caused by pole sway and other environmental concerns. To overcome this, we propose an efficient beam alignment technique using adaptive subspace sampling and hierarchical beam codebooks. A wind sway analysis is presented to establish a notion of beam coherence time. This highlights a previously unexplored tradeoff between array size and wind-induced movement. Generally, it is not possible to use larger arrays without risking a corresponding performance loss from wind-induced beam misalignment. The performance of the proposed alignment technique is analyzed and compared with other search and alignment methods. The results show significant performance improvement with reduced search time.
https://arxiv.org/abs/1306.6659
ATLAAS-P2P is a two-layered P2P architecture for developing systems that provide resource aggregation and approximate discovery in P2P networks. Such systems allow users to search for the desired resources by specifying their requirements in a flexible and easy way. From the point of view of resource providers, the system offers an effective means of being reached by resource requests.
https://arxiv.org/abs/1306.2160
The IEEE 754-2008 standard recommends the correct rounding of some elementary functions. This requires solving the Table Maker's Dilemma, which demands a huge amount of CPU time. In this paper we consider accelerating these computations, namely Lefèvre's algorithm, on Graphics Processing Units (GPUs), which are massively parallel architectures with partial SIMD execution (Single Instruction Multiple Data). We first analyze the Lefèvre hard-to-round argument search using the concept of continued fractions. We then propose a new parallel search algorithm that is much more efficient on GPU thanks to its more regular control flow. We also present an efficient hybrid CPU-GPU deployment of the generation of the polynomial approximations required by Lefèvre's algorithm. In the end, we obtain overall speedups of up to 53.4x on one GPU over a sequential CPU execution, and up to 7.1x over a multi-core CPU, enabling a much faster solution of the Table Maker's Dilemma for the double-precision format.
https://arxiv.org/abs/1211.3056
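Continued fractions, which are central to the analysis of Lefèvre's hard-to-round argument search, are easy to compute exactly for rationals. The generic sketch below (expansion plus convergents) is only background for the abstract's analysis step, not the paper's GPU kernel:

```python
from fractions import Fraction

def cf_expansion(x):
    """Continued-fraction coefficients [a0; a1, a2, ...] of a rational."""
    coeffs = []
    while True:
        a, rem = divmod(x.numerator, x.denominator)
        coeffs.append(a)
        if rem == 0:
            return coeffs
        x = Fraction(x.denominator, rem)  # invert the fractional part

def convergents(coeffs):
    """Successive best rational approximations p_k/q_k, built with the
    standard recurrence p_k = a_k*p_{k-1} + p_{k-2} (same for q)."""
    p0, q0, p1, q1 = 0, 1, 1, 0
    out = []
    for a in coeffs:
        p0, q0, p1, q1 = p1, q1, a * p1 + p0, a * q1 + q0
        out.append(Fraction(p1, q1))
    return out
```

For example, expanding 355/113 yields [3; 7, 16], whose convergents are 3, 22/7, and 355/113 itself, each the best approximation for its denominator size.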
We report on the experimental realization of p-n heterojunctions based on p-type GaN and an n-type correlated oxide, VO2. The band offsets are evaluated by current-voltage and capacitance-voltage measurements at various temperatures. A band diagram based on the conventional band-bending picture is proposed to explain the evolution of the apparent barrier height extracted from electrical measurements; it suggests that the work function of VO2 decreases by ~0.2 eV when it goes through the insulator-to-metal transition, in qualitative agreement with Kelvin force microscopy measurements reported in the literature. Frequency-dependent capacitance measurements allow us to differentiate the minority carrier effect from the interface-state and series-resistance contributions, and to estimate the minority carrier lifetime in the insulating phase of VO2 to be of the order of a few microseconds. The nitride-oxide based p-n heterojunctions provide a new dimension
https://arxiv.org/abs/1306.0916
Indexing the Web is becoming a laborious task for search engines as the Web grows exponentially in size and distribution. Presently, the most effective known approach to this problem is the use of focused crawlers. A focused crawler applies a suitable algorithm to detect the pages on the Web that relate to its topic of interest. For this purpose we propose a custom method that uses specific HTML elements of a page to predict the topical focus of all the pages reachable through an unvisited link within the current page. These recognized on-topic pages are then sorted by their relevance to the crawler's main topic before actual download. In the Treasure-Crawler, we use a hierarchical structure called the T-Graph as an exemplary guide to assign an appropriate priority score to each unvisited link; URLs are later downloaded based on this priority. This paper outlines the architectural design and presents the implementation, test results, and performance evaluation of the Treasure-Crawler system. The Treasure-Crawler is evaluated in terms of information retrieval criteria such as recall and precision, both with values close to 0.5. This outcome confirms the significance of the proposed approach.
https://arxiv.org/abs/1306.0054
The two significant tasks of a focused Web crawler are finding relevant topic-specific documents on the Web and analytically prioritizing them for later effective and reliable download. For the first task, we propose a sophisticated custom algorithm that fetches and analyzes the most effective HTML structural elements of the page as well as the topical boundary and anchor text of each unvisited link, from which the topical focus of an unvisited page can be predicted with high accuracy. Our novel method thus uniquely combines link-based and content-based approaches. For the second task, we propose a scoring function over the relevant URLs that uses the T-Graph (Treasure Graph) to help prioritize the unvisited links to be placed in the fetching queue. Our Web search system is called the Treasure-Crawler. This paper embodies the architectural design of the Treasure-Crawler system, which satisfies the principal requirements of a focused Web crawler, and asserts the correctness of the system structure, including all its modules, through illustrations and test results.
https://arxiv.org/abs/1305.7265
Top-down-fabricated GaN nanowires, 250 nm in diameter and with various heights, have been used to experimentally determine the evolution of strain along the vertical direction of one-dimensional objects. X-ray diffraction and photoluminescence techniques have been used to obtain the strain profile inside the nanowires from their base to their top facet, for both initially compressive and initially tensile strain. The relaxation behavior derived from optical and structural characterization perfectly matches the numerical results of calculations based on a continuous-media approach. By monitoring the elastic relaxation enabled by the lateral free surfaces, the height above which the nanowires can be considered strain-free has been estimated. Based on this result, nanowires sufficiently tall to be strain-free have been coalesced to form a continuous GaN layer. X-ray diffraction, photoluminescence, and cathodoluminescence clearly show that, despite the initially strain-free nanowire template, the final GaN layer is strained.
https://arxiv.org/abs/1305.7115
The one-zone synchrotron-self-Compton (SSC) model aims to describe the spectral energy distribution (SED) of BL Lac objects via synchrotron emission by a non-thermal population of electrons and positrons in a single homogeneous emission region, partially upscattered to gamma-rays by the particles themselves. The model is usually considered as degenerate, given that the number of free parameters is higher than the number of observables. It is thus common to model the SED by choosing a single set of values for the SSC-model parameters that provide a good description of the data, without studying the entire parameter space. We present here a new numerical algorithm which permits us to find the complete set of solutions, using the information coming from the detection in the GeV and TeV energy bands. The algorithm is composed of three separate steps: we first prepare a grid of simulated SEDs and extract from each SED the values of the observables; we then parametrize each observable as a function of the SSC parameters; we finally solve the system for a given set of observables. We iteratively solve the system to take into account uncertainties in the values of the observables, producing a family of solutions. We present a first application of our algorithm to the typical high-frequency-peaked BL Lac object 1RXS J101015.9-311909, provide constraints on the SSC parameters, and discuss the result in terms of our understanding of the blazar emitting region.
https://arxiv.org/abs/1305.4597
We show that GdN nanoislands can enhance interband tunneling in GaN p-n junctions by several orders of magnitude, enabling low-optical-absorption, low-resistance tunnel junctions (specific resistivity 1.3 × 10⁻³ Ω·cm²) for various optoelectronic applications. We exploit the ability to overgrow high-quality GaN over GdN nanoislands to create new nanoscale heterostructure designs that are not feasible in planar epitaxy. GdN nanoisland-assisted interband tunneling was found to enhance tunneling in both polar orientations of GaN. Tunnel injection of holes was confirmed by low-temperature operation of a GaN p-n junction with a tunneling contact layer, showing strong electroluminescence down to 20 K. The availability of tunnel junctions with negligible absorption could not only significantly improve the efficiency of existing optoelectronic devices, but also enable new electronic and optical devices based on wide-band-gap materials.
https://arxiv.org/abs/1206.3810
Compound semiconducting nanowires are promising building blocks for several nanoelectronic devices yet the inability to reliably control their growth morphology is a major challenge. Here, we report the Au-catalyzed vapor-liquid-solid (VLS) growth of GaN nanowires with controlled growth direction, surface polarity and surface roughness. We develop a theoretical model that relates the growth form to the kinetic frustration induced by variations in the V(N)/III(Ga) ratio across the growing nanowire front. The model predictions are validated by the trends in the as-grown morphologies induced by systematic variations in the catalyst particle size and processing conditions. The principles of shape selection highlighted by our study pave the way for morphological control of technologically relevant compound semiconductor nanowires.
https://arxiv.org/abs/1305.3936
Compared to the giant planets in the solar system, exoplanets exhibit many remarkable properties, such as the prevalence of giant planets on eccentric orbits and the presence of hot Jupiters. Planet-planet scattering (PPS) between giant planets is a possible mechanism for interpreting these and other observed properties. If the observed giant-planet architectures are indeed the outcome of PPS, such a drastic dynamical process must have affected their primordial moon systems. In this Letter, we discuss the effect of PPS on the survival of regular moons. From the viewpoint of observations, some preliminary conclusions are drawn from the simulations. 1. PPS is a destructive process for moon systems; single planets on eccentric orbits are not ideal moon-search targets. 2. If hot Jupiters formed through PPS, their original moons have little chance of surviving. 3. Planets in multiple systems with small eccentricities are more likely to retain their primordial moons. 4. Compared to lower-mass planets, the massive ones in multiple systems may not be the preferred moon-search targets if the system underwent PPS.
https://arxiv.org/abs/1305.1717
The quasi-Hilda object 212P/2000YN30, which has a comet-like orbit, was found to display a dust tail structure between January and March 2009. Orbital calculations show that this object could have been an active comet earlier in its history, before being transported to its current orbital configuration in a quasi-stable 3:2 resonance with Jupiter.
https://arxiv.org/abs/1305.1099
We introduce algorithms to visualize feature spaces used by object detectors. The tools in this paper allow a human to put on ‘HOG goggles’ and perceive the visual world as a HOG-based object detector sees it. We found that these visualizations allow us to analyze object detection systems in new ways and gain new insight into the detector’s failures. For example, when we visualized the features for high-scoring false alarms, we discovered that, although they are clearly wrong in image space, they do look deceptively similar to true positives in feature space. This result suggests that many of these false alarms are caused by our choice of feature space, and indicates that creating a better learning algorithm or building bigger datasets is unlikely to correct these errors. By visualizing feature spaces, we can gain a more intuitive understanding of our detection systems.
https://arxiv.org/abs/1212.2278
Web service composition is the process of synthesizing a new composite service from a set of available Web services in order to satisfy a client request that cannot be handled by any single available Web service. The Web services space is a dynamic environment characterized by a huge number of elements; furthermore, many Web services offer similar functionalities. In this paper we propose a model for Web service composition designed to address the scale effect and the redundancy issue. The Web services space is represented by a two-layered network architecture. A concrete similarity network layer organizes the Web service operations into communities of functionally similar operations. An abstract interaction network layer represents the composition relationships between the sets of communities. Composition synthesis is performed by a two-phase graph search algorithm. First, the interaction network is mined to discover abstract solutions to the request goal. Then, the abstract compositions are instantiated with concrete operations selected from the similarity network. This strategy allows an efficient exploration of the Web services space. Furthermore, operations grouped in a community can easily be substituted for one another, if necessary, during the composition synthesis process.
https://arxiv.org/abs/1305.0187
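The two-phase synthesis described above (abstract path search over the interaction network, then instantiation from communities) can be sketched as a small graph search. The community names, operations, and graph below are invented for illustration:

```python
from collections import deque

# Abstract interaction layer: which communities can feed which (assumed).
abstract_graph = {
    "GeoCode": ["WeatherLookup"],
    "WeatherLookup": ["UnitConvert"],
    "UnitConvert": [],
}

# Concrete similarity layer: functionally similar operations per community.
communities = {
    "GeoCode": ["osm_geocode", "gmaps_geocode"],
    "WeatherLookup": ["owm_forecast"],
    "UnitConvert": ["c_to_f", "si_convert"],
}

def abstract_path(src, dst):
    """Phase 1: BFS over the interaction network for an abstract plan."""
    queue, seen = deque([[src]]), {src}
    while queue:
        path = queue.popleft()
        if path[-1] == dst:
            return path
        for nxt in abstract_graph.get(path[-1], []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None

def instantiate(path, preferred=None):
    """Phase 2: pick one concrete operation per community; any member
    of a community can substitute for another."""
    preferred = preferred or {}
    return [preferred.get(c, communities[c][0]) for c in path]
```

Searching the small abstract layer first, then instantiating, is what keeps the exploration cheap even when each community holds many redundant operations.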
A new system for object detection in cluttered RGB-D images is presented. Our main contribution is a new method called Bingham Procrustean Alignment (BPA) to align models with the scene. BPA uses point correspondences between oriented features to derive a probability distribution over possible model poses. The orientation component of this distribution, conditioned on the position, is shown to be a Bingham distribution. This result also applies to the classic problem of least-squares alignment of point sets, when point features are orientation-less, and gives a principled, probabilistic way to measure pose uncertainty in the rigid alignment problem. Our detection system leverages BPA to achieve more reliable object detections in clutter.
https://arxiv.org/abs/1304.7399
The structural and electronic properties of a new cubic (GaN)$_1$/(ZnO)$_1$ superlattice have been investigated using two different theoretical techniques: the full-potential linearized augmented plane wave (FP-LAPW) method and the linear combination of localized pseudo-atomic orbitals (LCPAO). The new modified Becke-Johnson (mBJ) exchange potential is chosen to improve the band gap and effective masses of the superlattice. The band gap is found to be slightly indirect and reduced with respect to those of pure GaN and ZnO. The origin of this reduction is attributed to the $p-d$ repulsion at the Zn-N interface and the presence of the O $p$ electrons. The electron effective mass is found to be isotropic. Good agreement is obtained between the two methods used, and with available theoretical and experimental data.
https://arxiv.org/abs/1304.7383
We report on a source of ultranarrow-band photon pairs generated by widely nondegenerate cavity-enhanced spontaneous down-conversion. The source is designed to be compatible with Pr3+ solid state quantum memories and telecommunication optical fibers, with signal and idler photons close to 606 nm and 1436 nm, respectively. Both photons have a spectral bandwidth around 2 MHz, matching the bandwidth of Pr3+ doped quantum memories. This source is ideally suited for long distance quantum communication architectures involving solid state quantum memories.
https://arxiv.org/abs/1304.6861
We report a realization of an associative memory signal/information processing system based on simple enzyme-catalyzed biochemical reactions. Optically detected chemical output is always obtained in response to the triggering input, but the system can also “learn” by association, to later respond to the second input if it is initially applied in combination with the triggering input as the “training” step. This second chemical input is not self-reinforcing in the present system, which therefore can later “unlearn” to react to the second input if it is applied several times on its own. Such processing steps realized with (bio)chemical kinetics promise applications of bio-inspired/memory-involving components in “networked” (concatenated) biomolecular processes for multi-signal sensing and complex information processing.
https://arxiv.org/abs/1304.5731
Nature-inspired devices and architectures are attracting considerable attention for various purposes, including the development of novel computing techniques based on spatiotemporal dynamics, exploiting stochastic processes for computing, and reducing energy dissipation. This paper demonstrates that networks of optical energy transfers between quantum nanostructures mediated by optical near-field interactions occurring at scales far below the wavelength of light could be utilized for solving a constraint satisfaction problem (CSP), the satisfiability problem (SAT), and a decision making problem. The optical energy transfer from smaller quantum dots to larger ones, which is a quantum stochastic process, depends on the existence of resonant energy levels between the quantum dots or a state-filling effect occurring at the larger quantum dots. Such a spatiotemporal mechanism yields different evolutions of energy transfer patterns in multi-quantum-dot systems. We numerically demonstrate that networks of optical energy transfers can be used for solution searching and decision making. We consider that such an approach paves the way to a novel physical informatics in which both coherent and dissipative processes are exploited, with low energy consumption.
https://arxiv.org/abs/1304.5649
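The solution-searching role described above can be illustrated with a conventional software sketch (a WalkSAT-style stochastic local search, offered only as an analogue of stochastic solution searching, not the photonic energy-transfer system itself; all names are illustrative):

```python
import random

def stochastic_sat_search(clauses, n_vars, max_flips=10_000, p_noise=0.3, seed=1):
    """WalkSAT-style stochastic local search over truth assignments.
    Literals are nonzero ints: +v means variable v, -v its negation."""
    rng = random.Random(seed)
    assign = [rng.choice([False, True]) for _ in range(n_vars)]

    def satisfied(clause):
        return any(assign[abs(l) - 1] == (l > 0) for l in clause)

    for _ in range(max_flips):
        unsat = [c for c in clauses if not satisfied(c)]
        if not unsat:
            return assign                      # all clauses satisfied
        clause = rng.choice(unsat)
        if rng.random() < p_noise:
            var = abs(rng.choice(clause)) - 1  # random "noise" flip
        else:
            def broken(v):                     # clauses broken if v is flipped
                assign[v] = not assign[v]
                n = sum(not satisfied(c) for c in clauses)
                assign[v] = not assign[v]
                return n
            var = min((abs(l) - 1 for l in clause), key=broken)
        assign[var] = not assign[var]
    return None

# Small satisfiable 3-SAT instance:
clauses = [(1, 2, -3), (-1, 3, 2), (-2, 3, 1)]
model = stochastic_sat_search(clauses, n_vars=3)
```

Here the stochastic flips play the role that spontaneous, probabilistic energy-transfer events play in the quantum-dot network.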
Cryptanalysis of block ciphers involves massive computations that are independent of each other and can therefore be instantiated simultaneously, so that the solution space is explored at a faster rate. With the advent of low-cost Field Programmable Gate Arrays (FPGAs), building special-purpose hardware for such computationally intensive applications has become practical. This paper presents a design for the hardware implementation of DES cryptanalysis on an FPGA using exhaustive key search, with the Data Encryption Standard (DES) serving as a proof of concept. Two architectures, a rolled and an unrolled DES architecture, are compared, and based on the experimental results the rolled architecture is implemented on the FPGA. The aim of this work is to make cryptanalysis faster and more effective.
https://arxiv.org/abs/1304.6672
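Exhaustive key search is embarrassingly parallel, which is what makes an FPGA implementation attractive: every key trial is independent. The sketch below shows that search structure in software, with a toy XOR cipher standing in for DES (illustrative only; DES itself is not implemented here):

```python
from itertools import product

# Toy stand-in cipher (NOT DES): XOR of the plaintext with a repeating
# 2-byte key, used only to illustrate the structure of the search.
def toy_encrypt(plaintext: bytes, key: bytes) -> bytes:
    return bytes(p ^ key[i % len(key)] for i, p in enumerate(plaintext))

def exhaustive_search(known_pt, known_ct, key_space):
    # Each candidate-key check is independent of every other one, so the
    # key space can be partitioned across parallel hardware units.
    for key in key_space:
        if toy_encrypt(known_pt, key) == known_ct:
            return key
    return None

pt = b"attack at dawn"
secret = bytes([0x5a, 0xc3])
ct = toy_encrypt(pt, secret)
keys = (bytes(k) for k in product(range(256), repeat=2))
found = exhaustive_search(pt, ct, keys)
```

On an FPGA, each parallel search unit would simply receive a disjoint slice of the key space.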
Modern processor architectures, in addition to having ever more cores, also require ever more attention to memory layout in order to run at full capacity. The usefulness of most languages is diminishing, as their abstractions, structures, or objects are hard to map efficiently onto modern processor architectures. The work in this paper introduces a new abstract machine framework, cphVB, that enables vector-oriented high-level programming languages to map efficiently onto a broad range of architectures. The idea is to close the gap between high-level languages and hardware-optimized low-level implementations. By translating high-level vector operations into an intermediate vector bytecode, cphVB enables specialized vector engines to execute the vector operations efficiently. The primary success parameters are to maintain a complete abstraction from low-level details and to provide efficient code execution across different modern processors. We evaluate the presented design through a setup that targets multi-core CPU architectures, using Python implementations of well-known algorithms: a Jacobi solver, a kNN search, a shallow-water simulation, and a synthetic stencil simulation. All demonstrate good performance.
https://arxiv.org/abs/1210.7774
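The translation step can be illustrated with a minimal sketch (invented names, not the actual cphVB API): elementwise vector operations are recorded as bytecode, and a pluggable engine executes them later; for brevity, operations here mutate their left operand.

```python
# Minimal sketch of the vector-bytecode idea (hypothetical names, not
# the actual cphVB API): high-level elementwise operations are recorded
# as bytecode, then a pluggable engine executes them.
ADD, MUL = "ADD", "MUL"

class LazyVector:
    def __init__(self, data):
        self.data = list(data)
        self.program = []          # accumulated vector bytecode
    def __add__(self, other):      # records, does not compute
        self.program.append((ADD, other.data))
        return self
    def __mul__(self, other):
        self.program.append((MUL, other.data))
        return self

def naive_engine(vec):
    """Reference engine; a tuned engine (SIMD, multi-core) could run
    the same bytecode unchanged."""
    out = list(vec.data)
    for op, operand in vec.program:
        if op == ADD:
            out = [a + b for a, b in zip(out, operand)]
        elif op == MUL:
            out = [a * b for a, b in zip(out, operand)]
    return out

x = LazyVector([1.0, 2.0, 3.0])
y = LazyVector([10.0, 20.0, 30.0])
z = (x + y) * y           # records bytecode only
result = naive_engine(z)  # -> [110.0, 440.0, 990.0]
```

The point of the indirection is that the high-level language never sees which engine, and hence which hardware, executes the bytecode.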
We consider an \textit{Adaptive Random Convolutional Network Coding} (ARCNC) algorithm to address the issue of field size in random network coding for multicast, and study its memory and decoding-delay performance through both analysis and numerical simulations. ARCNC operates as a convolutional code, with the coefficients of local encoding kernels chosen randomly over a small finite field. The cardinality of the local encoding kernels increases with time until the global encoding kernel matrices at the relevant sink nodes have full rank. ARCNC adapts to unknown network topologies without prior knowledge, by locally incrementing the dimensionality of the convolutional code. Because convolutional codes of different constraint lengths can coexist in different portions of the network, reductions in decoding delay and memory overhead can be achieved. We show that this method performs no worse than random linear network codes in terms of decodability, and can provide significant gains in average decoding delay or memory in combination, shuttle, and random geometric networks.
https://arxiv.org/abs/1303.4484
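The full-rank decodability condition can be illustrated in software with a simplified block-code sketch over GF(2) (not the convolutional ARCNC scheme itself): a sink can decode a generation exactly when its received global coding vectors span the full space.

```python
import random

def gf2_rank(rows):
    """Rank over GF(2); each row is an int bitmask (XOR linear basis)."""
    basis = []
    for row in rows:
        for b in basis:
            row = min(row, row ^ b)  # clear the pivot bit of b if set
        if row:
            basis.append(row)
    return len(basis)

def packets_until_decodable(n_sources, rng):
    """Draw random nonzero GF(2) coding vectors until the sink's global
    encoding matrix reaches full rank, i.e. the generation is decodable."""
    received, count = [], 0
    while gf2_rank(received) < n_sources:
        received.append(rng.randrange(1, 1 << n_sources))
        count += 1
    return count

rng = random.Random(7)
trials = [packets_until_decodable(8, rng) for _ in range(200)]
avg_overhead = sum(trials) / len(trials) - 8  # extra packets beyond 8
```

The small average overhead is the price of choosing coefficients randomly over a small field; ARCNC's adaptation addresses the same trade-off by growing the kernel cardinality only as needed.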
We present a theoretical study of broadening of defect luminescence bands due to vibronic coupling. Numerical proof is provided for the commonly used assumption that a multi-dimensional vibrational problem can be mapped onto an effective one-dimensional configuration coordinate diagram. Our approach is implemented based on density functional theory with a hybrid functional, resulting in luminescence lineshapes for important defects in GaN and ZnO that show unprecedented agreement with experiment. We find clear trends concerning effective parameters that characterize luminescence bands of donor- and acceptor-type defects, thus facilitating their identification.
https://arxiv.org/abs/1303.3043
The increasing popularity of decentralized P2P architectures highlights the need for an overlay structure that can provide an efficient content-discovery mechanism, accommodate high churn rates, and adapt to failures. Traditional P2P systems are unable to solve the problems related to scalability and high churn rates. Hierarchical models were introduced to provide better fault isolation, effective bandwidth utilization, superior adaptation to the underlying physical network, and a reduction of the lookup path length as additional advantages; they are more efficient and easier to manage than traditional P2P networks. This paper takes a further step in the P2P hierarchy via a three-layer hierarchical model with a distributed database architecture in each layer, connected through its root. Peers are divided into three categories according to their physical stability and strength: ultra super-peers, super-peers, and ordinary peers, assigned to the first, second, and third levels of the hierarchy, respectively. Peers in a group in the lower layer have their own local database, which is held by the associated super-peer in the middle layer, and access the database among the peers through user queries. For the DHT algorithms in our three-layer hierarchical model, we use an advanced Chord algorithm with an optimized finger table, which removes redundant entries from the finger table in the upper layer and thereby reduces lookup latency. Our results show that the model provides faster search, since network lookup latency is decreased by reducing the number of hops. Peers in such a network can then contribute improved functionality and perform well in P2P networks.
https://arxiv.org/abs/1303.1751
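The finger-table optimization can be sketched as follows (a simplified reading of redundant-entry removal on a standard Chord ring; illustrative, not the paper's exact algorithm):

```python
# Chord finger table on a 2**M identifier ring, with consecutive
# duplicate entries removed (hypothetical simplification of the
# "optimized finger table" idea).
M = 6                       # identifier space: 2**6 = 64 ids
nodes = sorted([1, 8, 14, 21, 32, 38, 42, 48, 51, 56])

def successor(ident):
    for n in nodes:
        if n >= ident:
            return n
    return nodes[0]         # wrap around the ring

def finger_table(n):
    # finger[i] = successor((n + 2**i) mod 2**M), i = 0..M-1
    return [successor((n + 2 ** i) % 2 ** M) for i in range(M)]

def optimized_finger_table(n):
    # Drop consecutive duplicates: redundant entries add no routing
    # information but cost comparisons and maintenance traffic.
    table, prev = [], None
    for f in finger_table(n):
        if f != prev:
            table.append(f)
        prev = f
    return table

full = finger_table(8)             # -> [14, 14, 14, 21, 32, 42]
compact = optimized_finger_table(8)  # -> [14, 21, 32, 42]
```

Fewer (distinct) entries to scan per routing step is one way the lookup latency reduction claimed above can be realized.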
Genetic and pharmacological perturbation experiments, such as deleting a gene and monitoring gene expression responses, are powerful tools for studying cellular signal transduction pathways. However, it remains a challenge to automatically derive knowledge of a cellular signaling system at a conceptual level from systematic perturbation-response data. In this study, we explored a framework that unifies knowledge mining and data mining approaches towards this goal. The framework consists of the following automated processes: 1) applying an ontology-driven knowledge mining approach to identify functional modules among the genes responding to a perturbation, in order to reveal potential signals affected by the perturbation; 2) applying a graph-based data mining approach to search for perturbations that affect a common signal with respect to a functional module; and 3) revealing the architecture of the signaling system by organizing signaling units into a hierarchy based on their relationships. Applying this framework to a compendium of yeast perturbation-response data, we successfully recovered many well-known signal transduction pathways; in addition, our analysis has led to many hypotheses regarding the yeast signal transduction system; finally, it automatically organized the perturbed genes into a graph reflecting the architecture of the yeast signaling system. Importantly, this framework transformed molecular findings from the gene level to a conceptual level, which can readily be translated into computable knowledge in the form of rules about the yeast signaling system, such as “if genes involved in MAPK signaling are perturbed, genes involved in pheromone responses will be differentially expressed”.
https://arxiv.org/abs/1302.5344
Object detection and recognition are important problems in computer vision. Since these problems are meta-heuristic, despite a lot of research, practically usable, intelligent, real-time, and dynamic object detection/recognition methods are still unavailable. We propose a new object detection/recognition method, which improves over the existing methods in every stage of the object detection/recognition process. In addition to the usual features, we propose to use geometric shapes, like linear cues, ellipses and quadrangles, as additional features. The full potential of geometric cues is exploited by using them to extract other features in a robust, computationally efficient, and less meta-heuristic manner. We also propose a new hierarchical codebook, which provides good generalization and discriminative properties. The codebook enables fast multi-path inference mechanisms based on propagation of conditional likelihoods, that make it robust to occlusion and noise. It has the capability of dynamic learning. We also propose a new learning method that has generative and discriminative learning capabilities, does not need large and fully supervised training dataset, and is capable of online learning. The preliminary work of detecting geometric shapes in real images has been completed. This preliminary work is the focus of this report. Future path for realizing the proposed object detection/recognition method is also discussed in brief.
https://arxiv.org/abs/1302.5189
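As a minimal illustration of extracting linear cues, the sketch below runs a textbook Hough transform on synthetic edge pixels (illustrative only, not the report's detector): each edge pixel votes for all lines rho = x*cos(theta) + y*sin(theta) passing through it, and the strongest accumulator cell recovers the dominant line.

```python
import math

# Synthetic 20x20 binary image: edge pixels on the main diagonal y = x.
H, W = 20, 20
points = [(i, i) for i in range(20)]

n_theta, max_rho = 36, int(math.hypot(H, W))
# accumulator[t][rho + max_rho] counts votes for the line
# rho = x*cos(theta) + y*sin(theta), theta = pi * t / n_theta
acc = [[0] * (2 * max_rho + 1) for _ in range(n_theta)]
for x, y in points:
    for t in range(n_theta):
        theta = math.pi * t / n_theta
        rho = round(x * math.cos(theta) + y * math.sin(theta))
        acc[t][rho + max_rho] += 1

# The strongest cell corresponds to the dominant line in the image.
best = max(
    ((t, r) for t in range(n_theta) for r in range(2 * max_rho + 1)),
    key=lambda c: acc[c[0]][c[1]],
)
theta_deg = 180 * best[0] / n_theta  # -> 135.0 for the diagonal
rho = best[1] - max_rho              # -> 0 (line through the origin)
```

Robust geometric cues of this kind are then available as features alongside the usual appearance-based ones.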
Stability and electronic properties of atomic layers of GaN are investigated in the framework of the van der Waals-density functional theory. We find that the ground state of the layered GaN is a planar graphene-like configuration rather than a buckled bulk-like configuration. Application of an external perpendicular electric field to the layered GaN induces distinct stacking-dependent features of the tunability of the band gap; the band gap of the monolayer does not change whereas that of the trilayer GaN is significantly reduced for the applied field of 0.4 V/ {\AA}. It is suggested that such a stacking-dependent tunability of the band gap in the presence of an applied field may lead to novel applications of the devices based on the layered GaN.
https://arxiv.org/abs/1302.5157
A low-power Content-Addressable Memory (CAM) is introduced, employing a new mechanism for associativity between the input tags and the corresponding addresses of the output data. The proposed architecture is based on a recently developed clustered sparse network using binary-weighted connections that, on average, eliminates most of the parallel comparisons performed during a search. The dynamic energy consumption of the proposed design is therefore significantly lower than that of a conventional low-power CAM design. Given an input tag, the proposed architecture computes a few possibilities for the location of the matched tag and performs comparisons on them to locate a single valid match. A 0.13 um CMOS technology was used for simulation purposes. The energy consumption and the search delay of the proposed design are 9.5% and 30.4%, respectively, of those of the conventional NAND architecture, at the cost of a 3.4% higher transistor count.
https://arxiv.org/abs/1302.4463
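The two-step search idea can be modeled in software: first narrow the search to a small candidate set, then compare only those entries instead of all entries in parallel. In this toy model a plain hash bucket stands in for the paper's clustered sparse network; the structure, not the circuit, is what is illustrated.

```python
# Toy model of the two-step CAM search. A plain modulo hash stands in
# for the clustered-sparse-network associativity function (hypothetical
# simplification).
N_BUCKETS = 16

def bucket(tag: int) -> int:
    return tag % N_BUCKETS          # stand-in associativity function

class ClusteredCAM:
    def __init__(self):
        self.buckets = [[] for _ in range(N_BUCKETS)]
        self.comparisons = 0
    def write(self, tag: int, addr: int):
        self.buckets[bucket(tag)].append((tag, addr))
    def search(self, tag: int):
        # Only the candidate bucket is compared, not the whole memory;
        # each comparison models an energy-costly match-line evaluation.
        for stored, addr in self.buckets[bucket(tag)]:
            self.comparisons += 1
            if stored == tag:
                return addr
        return None

cam = ClusteredCAM()
for addr, tag in enumerate(range(100, 356)):
    cam.write(tag, addr)
hit = cam.search(200)      # 256 entries stored, ~16 actually compared
compares = cam.comparisons
```

The energy saving in the paper comes from the same effect: most match lines are never activated for a given input tag.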
In this letter, we report on unipolar vertical transport characteristics in c-plane GaN/AlGaN/GaN heterostructures. Vertical current in heterostructures with random alloy barriers was found to be independent of dislocation density and heterostructure barrier height, and significantly higher than theoretical estimates. Percolation-based transport due to random alloy fluctuations in the ternary AlGaN is suggested as the dominant transport mechanism, and confirmed through experiments showing that non-random or digital AlGaN alloys and polarization-engineered binary GaN barriers can eliminate percolation transport and reduce leakage significantly. The understanding of vertical transport and methods for effective control proposed here will greatly impact III-nitride unipolar vertical devices.
https://arxiv.org/abs/1302.3942
This technical report is an extended version of the paper ‘Cooperative Multi-Target Localization With Noisy Sensors’ accepted to the 2013 IEEE International Conference on Robotics and Automation (ICRA). This paper addresses the task of searching for an unknown number of static targets within a known obstacle map using a team of mobile robots equipped with noisy, limited field-of-view sensors. Such sensors may fail to detect a subset of the visible targets or return false positive detections. These measurement sets are used to localize the targets using the Probability Hypothesis Density, or PHD, filter. Robots communicate with each other on a local peer-to-peer basis and with a server or the cloud via access points, exchanging measurements and poses to update their belief about the targets and plan future actions. The server provides a mechanism to collect and synthesize information from all robots and to share the global, albeit time-delayed, belief state to robots near access points. We design a decentralized control scheme that exploits this communication architecture and the PHD representation of the belief state. Specifically, robots move to maximize mutual information between the target set and measurements, both self-collected and those available by accessing the server, balancing local exploration with sharing knowledge across the team. Furthermore, robots coordinate their actions with other robots exploring the same local region of the environment.
https://arxiv.org/abs/1302.3857
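The information-based control objective can be illustrated with a much-simplified sketch: replacing the PHD representation with independent per-cell Bernoulli beliefs, each candidate sensing action is scored by the mutual information between target presence and the binary detections it would produce (the detection and false-alarm probabilities `pd`, `pf`, the belief values, and the action footprints are all illustrative values, not the paper's).

```python
import math

def h(p):
    """Binary entropy in bits."""
    if p <= 0.0 or p >= 1.0:
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def cell_mutual_info(q, pd=0.9, pf=0.05):
    """I(X; Z) for one cell: X ~ Bernoulli(q) is target presence,
    Z a noisy binary detection with P(Z=1|X=1)=pd, P(Z=1|X=0)=pf."""
    pz1 = q * pd + (1 - q) * pf
    return h(pz1) - (q * h(pd) + (1 - q) * h(pf))

# Belief over 6 cells and three candidate sensing actions, each
# observing the subset of cells within its field of view.
belief = [0.5, 0.1, 0.9, 0.5, 0.02, 0.5]
actions = {"west": [0, 1], "center": [2, 3], "east": [4, 5]}

# Cells are treated as independent, so an action's information gain is
# the sum of per-cell mutual informations over its field of view.
gains = {a: sum(cell_mutual_info(belief[i]) for i in cells)
         for a, cells in actions.items()}
best_action = max(gains, key=gains.get)
```

The real controller additionally fuses self-collected and server-provided measurements and coordinates with nearby robots, but the action-selection principle is the same: move where the measurements are expected to be most informative.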
The effect of doping on the carrier-phonon interaction in wurtzite GaN is investigated by pump-probe reflectivity measurements using 3.1 eV light in near resonance with the fundamental band gap of 3.39 eV. Coherent modulations of the reflectivity due to the E2 and the A1(LO) modes, as well as the 2A1(LO) overtone, are observed. Doping with acceptor and, even more so, with donor atoms enhances the dephasing of the polar A1(LO) phonon via coupling with plasmons. Doping also enhances the relative amplitude of the coherent A1(LO) phonon with respect to that of the high-frequency E2 phonon, even though it does not affect the relative intensity in Raman spectroscopic measurements. This enhanced coherent amplitude indicates that transient depletion field screening (TDFS), in addition to impulsive stimulated Raman scattering (ISRS), contributes to the generation of the coherent polar phonons even for sub-band-gap excitation. Because the TDFS mechanism requires photoexcitation of carriers, we argue that the interband transition is made possible at the surface, with photon energies below the bulk band gap, through the Franz-Keldysh effect.
https://arxiv.org/abs/1302.3658
A simple method for the creation of Ohmic contact to 2-D electron gas (2DEG) in AlGaN/GaN high electron-mobility transistors (HEMTs) using Cr/Graphene layer is demonstrated. A weak temperature dependence of this Ohmic contact observed in the range 77 to 300 K precludes thermionic emission or trap-assisted hopping as possible carrier-transport mechanisms. It is suggested that the Cr/Graphene combination acts akin to a doped n-type semiconductor in contact with AlGaN/GaN heterostructure, and promotes carrier transport along percolating Al-lean paths through the AlGaN layer. This new use of graphene offers a simple and reliable method for making Ohmic contacts to AlGaN/GaN heterostructures, circumventing complex additional processing steps involving high temperatures. These results could have important implications for the fabrication and manufacturing of AlGaN/GaN-based microelectronic and optoelectronic devices/sensors of the future.
https://arxiv.org/abs/1301.1952
Thanks to its high refractive index contrast, band gap and polarization mismatch compared to GaN, In0.17Al0.83N layers lattice-matched to GaN are an attractive solution for applications such as distributed Bragg reflectors, ultraviolet light-emitting diodes, or high electron mobility transistors. In order to study the structural degradation mechanism of InAlN layers with increasing thickness, we performed metalorganic vapor phase epitaxy of InAlN layers of thicknesses ranging from 2 to 500 nm, on free-standing (0001) GaN substrates with a low density of threading dislocations, for In compositions of 13.5% (layers under tensile strain) and 19.7% (layers under compressive strain). In both cases, a surface morphology with hillocks is initially observed, followed by the appearance of V-defects. We propose that those hillocks arise due to kinetic roughening, and that V-defects subsequently appear beyond a critical hillock size. It is seen that the critical thickness for the appearance of V-defects increases with the surface diffusion length, whether the latter is enhanced by increasing the temperature or by increasing the In flux (a surfactant effect). In thick InAlN layers, better (worse) In incorporation occurs on the concave (convex) surfaces of the V-defects, leading, after V-defect coalescence, to a phase-separated InAlN layer lying on top of the initially homogeneous InAlN layer. It is suggested that similar mechanisms could be responsible for the degradation of thick InGaN layers.
https://arxiv.org/abs/1302.3139
A method to incorporate polarization charges at heterojunctions in compact models for transistors is presented. By including the polarization sheet charge as a Dirac delta function, the Poisson equation is solved to yield a closed equation for the surface potential. A compact model for transistors based on the surface potential incorporating polarization charges describes the on-state as well as the off-state regimes of device operation. The new method of incorporating polarization charges in compact models helps make a direct connection to the material properties of the transistor. The current-voltage (I-V) curves generated by this model are in good agreement with the experimental data for GaN HEMTs.
https://arxiv.org/abs/1302.1243
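As a sketch of how a polarization sheet charge can enter such a model (generic symbols, not necessarily the paper's notation): with a sheet charge density \(\sigma_{\pi}\) placed at the heterojunction \(x = x_0\), the one-dimensional Poisson equation reads

```latex
\frac{d}{dx}\!\left[\varepsilon(x)\,\frac{d\psi}{dx}\right]
  = -\,\rho(x) \;-\; \sigma_{\pi}\,\delta(x - x_0),
```

and integrating across \(x_0\) yields the displacement discontinuity \(\varepsilon_1\,\psi'(x_0^-) - \varepsilon_2\,\psi'(x_0^+) = \sigma_{\pi}\), which supplies the boundary condition from which a closed equation for the surface potential can be obtained.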
Starting with empirical tight-binding band structures, the branch-point (BP) energies and resulting valence band offsets (VBOs) for the zincblende phase of InN, GaN and AlN are calculated from their k-averaged midgap energy. Furthermore, the directional dependence of the BPs of GaN and AlN is discussed using the Green’s function method of Tersoff. We then show how to obtain the BPs for binary semiconductor alloys within a band-diagonal representation of the coherent potential approximation (CPA) and apply this method to cubic AlGaN alloys. The resulting band offsets show good agreement with available experimental and theoretical data from the literature. Our results can be used to determine the band alignment in isovalent heterostructures involving pure cubic III-nitrides or AlGaN alloys for arbitrary concentrations.
https://arxiv.org/abs/1302.1725
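A common form of the k-averaged midgap (branch-point) energy used in such calculations is the following (a generic expression from the literature, with \(N_{\mathrm{CB}}\) conduction and \(N_{\mathrm{VB}}\) valence bands included in the average; not necessarily the paper's exact parameterization):

```latex
E_{\mathrm{BP}} = \frac{1}{2N_{\mathbf{k}}} \sum_{\mathbf{k}}
  \left[ \frac{1}{N_{\mathrm{CB}}} \sum_{i=1}^{N_{\mathrm{CB}}} \varepsilon_{c_i}(\mathbf{k})
       + \frac{1}{N_{\mathrm{VB}}} \sum_{j=1}^{N_{\mathrm{VB}}} \varepsilon_{v_j}(\mathbf{k}) \right]
```

Aligning the band structures of two materials at their respective \(E_{\mathrm{BP}}\) then yields the valence band offset directly.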
Conditions required for the streaming effect and the optical-phonon transit-time resonance to take place in a compensated bulk GaN are analyzed in detail. Monte Carlo calculations of the high-frequency differential electron mobility are carried out. It is shown that the negative dynamic differential mobility can be realized in the terahertz frequency range, at low lattice temperatures of 30–77 K, and applied electric fields of 3–10 kV/cm. New manifestations of the streaming effect are revealed, namely, the anisotropy of the dynamic differential mobility and a specific behavior of the diffusion coefficient in the direction perpendicular to the applied electric field. The theory of terahertz radiation transmission through the structure with an epitaxial GaN layer is developed. Conditions for the amplification of electromagnetic waves in the frequency range of 0.5–2 THz are obtained. The polarization dependence of the radiation transmission coefficient through the structure in electric fields above 1 kV/cm is found.
https://arxiv.org/abs/1302.1671
We have investigated the spin current polarization without the external magnetic field in the resonant tunneling diode with the emitter and quantum well layers made from the ferromagnetic GaMnN. For this purpose we have applied the self-consistent Wigner-Poisson method and studied the spin-polarizing effect of the parallel and antiparallel alignment of the magnetization in the ferromagnetic layers. The results of our calculations show that the antiparallel magnetization is much more advantageous for the spin filter operation and leads to the full spin current polarization at low temperatures and 35 % spin polarization of the current at room temperature.
https://arxiv.org/abs/1301.6544
Planets embedded within dust disks may drive the formation of large scale clumpy dust structures by trapping dust into resonant orbits. Detection and subsequent modeling of the dust structures would help constrain the mass and orbit of the planet and the disk architecture, give clues to the history of the planetary system, and provide a statistical estimate of disk asymmetry for future exoEarth-imaging missions. Here we present the first search for these resonant structures in the inner regions of planetary systems by analyzing the light curves of hot Jupiter planetary candidates identified by the Kepler mission. We detect only one candidate disk structure associated with KOI 838.01 at the 3-sigma confidence level, but subsequent radial velocity measurements reveal that KOI 838.01 is a grazing eclipsing binary and the candidate disk structure is a false positive. Using our null result, we place an upper limit on the frequency of dense exozodi structures created by hot Jupiters. We find that at the 90% confidence level, less than 21% of Kepler hot Jupiters create resonant dust clumps that lead and trail the planet by ~90 degrees with optical depths >~5*10^-6, which corresponds to the resonant structure expected for a lone hot Jupiter perturbing a dynamically cold dust disk 50 times as dense as the zodiacal cloud.
https://arxiv.org/abs/1301.6147