| title_a | abstract_a | title_b | abstract_b | id |
|---|---|---|---|---|
| exploration of unknown spaces by people who are blind using a multi-sensory virtual environment | exploration of unknown spaces is essential for the development of efficient orientation and mobility skills. most of the information required for the exploration is gathered through the visual channel. people who are blind lack this crucial information and consequently face difficulties in mapping as well as navigating spaces. this study is based on the assumption that the supply of appropriate spatial information through compensatory sensorial channels may contribute to the spatial performance of people who are blind. the main goals of this study were (a) the development of a haptic virtual environment enabling people who are blind to explore unknown spaces and (b) the study of the exploration process of these spaces by people who are blind. participants were 31 people who are blind: 21 in the experimental group exploring a new space using a multi-sensory virtual environment, and 10 in the control group directly exploring the real new space. the results of the study showed that the participants in the e... | the transfer of spatial knowledge in virtual environment training | many training applications of virtual environments (ves) require people to be able to transfer spatial knowledge acquired in a ve to a real-world situation. using the concept of fidelity, we examine the variables that mediate the transfer of spatial knowledge and discuss the form and development of spatial representations in ve training. we report the results of an experiment in which groups were trained in six different environments (no training, real world, map, ve desktop, ve immersive, and ve long immersive) and then were asked to apply route and configurational knowledge in a real-world maze environment. short periods of ve training were no more effective than map training; however, with sufficient exposure to the virtual training environment, ve training eventually surpassed real-world training. robust gender differences in training effectiveness of ves were also found. | 1 |
| 3d human pose estimation under limited supervision using metric learning. | estimating 3d human pose from monocular images demands large amounts of 3d pose and in-the-wild 2d pose annotated datasets which are costly and require sophisticated systems to acquire. in this regard, we propose a metric learning based approach to jointly learn a rich embedding and 3d pose regression from the embedding using multi-view synchronised videos of human motions and very limited 3d pose annotations. the inclusion of metric learning to the baseline pose estimation framework improves the performance by 21% when 3d supervision is limited. in addition, we make use of a person-identity based adversarial loss as additional weak supervision to outperform state-of-the-art whilst using a much smaller network. lastly, but importantly, we demonstrate the advantages of the learned embedding and establish view-invariant pose retrieval benchmarks on two popular, publicly available multi-view human pose datasets, human 3.6m and mpi-inf-3dhp, to facilitate future research. | structured prediction of 3d human pose with deep neural networks | most recent approaches to monocular 3d pose estimation rely on deep learning. they either train a convolutional neural network to directly regress from image to 3d pose, which ignores the dependencies between human joints, or model these dependencies via a max-margin structured learning framework, which involves a high computational cost at inference time. in this paper, we introduce a deep learning regression architecture for structured prediction of 3d human pose from monocular images that relies on an overcomplete autoencoder to learn a high-dimensional latent pose representation and account for joint dependencies. we demonstrate that our approach outperforms state-of-the-art ones both in terms of structure preservation and prediction accuracy. | 2 |
| acoustic modeling and training of a bilingual asr system when a minority language is involved | this paper describes our work in developing a bilingual speech recognition system using two speechdat databases. the bilingual aspect of this work is of particular importance in the galician region of spain, where both languages, galician and spanish, coexist and one of them, galician, is a minority language. based on a global spanish-galician phoneme set, we built a bilingual speech recognition system which can handle both languages: spanish and galician. the recognizer makes use of context dependent acoustic models based on continuous density hidden markov models. the system has been evaluated on an isolated-word large-vocabulary task. the tests show that the spanish system exhibits a better performance than the galician system due to its better training. the bilingual system provides an equivalent performance to that achieved by the language-specific systems. | transcrigal: a bilingual system for automatic indexing of broadcast news | this paper describes a broadcast news (bn) database called transcrigal-db. the news shows are mainly in the galician language, although around 11% of the data is in spanish. this database has been constructed for automatic speech recognition (asr) purposes. a bn-asr reference system is also described and evaluated on the test partition of transcrigal-db. the reference system has been designed having in mind that both languages, spanish and galician, may be used. the performance of the reference system is improved when language adaptation techniques are taken into consideration. | 3 |
| communication networks in geographically distributed software development | in this paper, we seek to shed light on how communication networks in geographically distributed projects evolve in order to address the limits of the modular design strategy. we collected data from a geographically distributed software development project covering 39 months of activity. our analysis showed that over time a group of developers emerge as the liaisons between formal teams and geographical locations. in addition to handling the communication and coordination load across teams and locations, those engineers contributed the most to the development effort. | on the criteria to be used in decomposing systems into modules | this paper discusses modularization as a mechanism for improving the flexibility and comprehensibility of a system while allowing the shortening of its development time. the effectiveness of a “modularization” is dependent upon the criteria used in dividing the system into modules. a system design problem is presented and both a conventional and unconventional decomposition are described. it is shown that the unconventional decompositions have distinct advantages for the goals outlined. the criteria used in arriving at the decompositions are discussed. the unconventional decomposition, if implemented with the conventional assumption that a module consists of one or more subroutines, will be less efficient in most cases. an alternative approach to implementation which does not have this effect is sketched. | 4 |
| mother, may i? owl-based policy management at nasa | among the challenges of managing nasa’s information systems is the management (that is, creation, coordination, verification, validation, and enforcement) of many different role-based access control policies and mechanisms. this paper describes an actual data federation use case that demonstrates the inefficiencies created by this challenge and presents an approach to reducing these inefficiencies using owl. the focus is on the representation of xacml policies in dl, but the approach generalizes to other policy languages. | semid: combining semantics with identity management | the need for information security and privacy in today's connected systems is overwhelming. in this paper, we focus on identity management in a corporate environment to access various project resources. capabilities of semantic web technology facilitate the development of the proposed semid ontology, which formally represents the identity management domain. it contains roles, policies and rules to control access to resources and to ensure privacy. a use case scenario of a project-oriented corporate working environment is introduced and then modeled using the protege ontology editor platform. | 5 |
| approach in high precision topic-specific resource discovery on the web | the internet presents numerous sources of useful information nowadays. however, these resources are drowning under the dynamic web, so accurately finding user-specific information is very difficult. in this paper we discuss a semantic graph web search (sgws) algorithm for topic-specific resource discovery on the web. this method combines the use of hyperlinks, characteristics of the web graph and semantic term weights. we implement the algorithm to find chinese medical information on the internet. our study showed that it has better precision than traditional ir (information retrieval) methods and traditional search engines. | graph-theoretic web algorithms: an overview | the world wide web is growing rapidly and revolutionizing the means of information access. it can be modeled as a directed graph in which a node represents a web page and an edge represents a hyperlink. currently the number of nodes in this gigantic web graph is over four billion and is growing by more than seven million nodes a day, without any centralized control. the study of this graph is essential for designing efficient algorithms for crawling, searching, and ranking web resources. knowledge of the structure of the web graph can also be exploited for attaining efficiency and comprehensiveness in web navigation. this paper describes algorithms for graph-theoretic analysis of the web. | 6 |
| automatic classification of one-dimensional cellular automata | cellular automata, a class of discrete dynamical systems, show a wide range of dynamic behavior, some very complex despite the simplicity of the system’s definition. this range of behavior can be organized into six classes, according to the li-packard system: null, fixed point, two-cycle, periodic, complex, and chaotic. an advanced method for automatically classifying cellular automata into these six classes is presented. seven parameters were used for automatic classification, six from existing literature and one newly presented. these seven parameters were used in conjunction with neural networks to automatically classify an average of 98.3% of elementary cellular automata and 93.9% of totalistic k = 2 r = 3 cellular automata. in addition, the seven parameters were ranked based on their effectiveness in classifying cellular automata into the six li-packard classes. | adaptive façades: an evaluation of cellular automata controlled dynamic shading system using new hourly-based metrics | this research explores utilizing cellular automata patterns as climate-adaptive dynamic shading systems to mitigate the undesirable impacts of excessive solar penetration in cooling-dominant climates. the methodological procedure is realized through two main phases. the first evaluates all 256 possible elementary cellular automata rules to select the ones with good visual and random patterns, to ensure an equitable distribution of natural daylight in internal spaces. based on the newly developed hourly-based metrics, simulations are conducted in the second phase to evaluate the performance of the cellular automata controlled dynamic shadings, and to formalize the adaptive facade variation logic that maximizes daylighting and minimizes energy demand. | 7 |
| comparison and application of time-frequency analysis methods for nonstationary signal processing | most signals in engineering are nonstationary and time-varying. the fourier transform as a traditional approach can only provide feature information in the frequency domain. time-frequency techniques may give a comprehensive description of signals in the time-frequency plane. based on some typical nonstationary signals, five time-frequency analysis methods, i.e., the short-time fourier transform (stft), wavelet transform (wt), wigner-ville distribution (wvd), pseudo-wvd (pwvd) and the hilbert-huang transform (hht), were performed and compared in this paper. the characteristics of each method were obtained and discussed. compared with the other methods, the hht, with its high time-frequency resolution, can clearly describe how the frequency compositions change with time, and is a good approach for feature extraction in nonstationary signal processing. | theory and applications of time-frequency methods for analysis of non-stationary vibration and seismic signal | the classical fourier transform method does not reveal the temporal information of the signal, whereas time-frequency methods provide comprehensive information about a non-stationary signal over the time-frequency plane. based on typical non-stationary signals, several time-frequency methods such as the short time fourier transform, wavelet transform, wigner-ville distribution, pseudo wigner-ville distribution and hilbert-huang transform are evaluated. in this paper, these time-frequency methods are compared to the performance of the gabor-wigner transform, which proves the effectiveness of the gwt in comparison to other methods. the efficacy of this method is validated by two different numerical case studies, and it is further applied for damage detection of buildings, which provides significant information pertaining to damage to the building. | 8 |
| pre-evaluation of invariant layout in functional variable-data documents | layout of content in variable data documents can be computationally expensive. when very large numbers of almost similar copies of a document are required, automated pre-evaluation of invariant sections may increase efficiency of final document generation. if the layout model is functional and combinatorial in nature (such as in the document description framework), there are some generalised conservative techniques to do this that involve very modest changes to implementations, independent of details of the actual layouts. this paper describes these techniques and how they might be used with other similar document layout models. | evaluating invariances in document layout functions | with the development of variable-data-driven digital presses, where each document printed is potentially unique, there is a need for pre-press optimization to identify material that is invariant from document to document. in this way rasterisation can be confined solely to those areas which change between successive documents, thereby alleviating a potential performance bottleneck. given a template document specified in terms of layout functions, where actual data is bound at the last possible moment before printing, we look at deriving and exploiting the invariant properties of layout functions from their formal specifications. we propose future work on generic extraction of invariance from such properties for certain classes of layout functions. | 9 |
| a voice response system for an office information system | one of the major activities in offices is communication. communication involves information exchange in different kinds of media such as text, voice, pictures, and most likely, a mix of these. as such, a successful office information system must provide adequate facilities for handling these kinds of communication media. this paper discusses the design and implementation of a voice response system for ofs (office form system), a prototype office information system. | synthesis of speech from unrestricted text | for many applications, it is desirable to be able to convert arbitrary english text to natural and intelligible sounding speech. this transformation between two surface forms is facilitated by first obtaining the common underlying abstract linguistic representation which relates to both text and speech surface representations. calculation of these abstract bases then permits proper selection of phonetic segments, lexical stress, juncture, and sentence-level stress and intonation. the resulting system serves as a model for the cognitive process of reading aloud, and also as a stable practical means for providing speech output in a broad class of computer-based systems. | 10 |
| a survey on replacement strategies in cache memory for embedded systems | cache is one of the most power-consuming components in computer architecture. power reduction in cache can be achieved by reducing miss rate, miss penalty, latency per access and power consumption per access. power reduction can also be achieved by shutting down unused parts of the cache, by allowing not-so-recently-used cache banks to sleep, by reconfiguring the cache for a specific application, and by various combinations of one or more of these. the cache hit rate depends on the cache size, associativity and cache line size. replacement strategies in associative mapping schemes play an important role in cache hit rate performance. this survey paper proposes a classification of these strategies with a detailed discussion of their advantages and disadvantages. | low-complexity algorithms for static cache locking in multitasking hard real-time systems | cache memories have been extensively used to bridge the gap between high speed processors and relatively slow main memories. however, they are a source of predictability problems because of their dynamic and adaptive behavior and thus need special attention to be used in hard-real time systems. a lot of progress has been achieved in the last ten years to statically predict the worst-case behavior of applications with respect to caches in order to determine safe and precise bounds on task worst-case execution times (wcets) and cache-related preemption delays. an alternative approach to cope with caches in real-time systems is to statically lock their contents such that memory access times and cache-related preemption times are predictable. in this paper, we propose two low-complexity algorithms for selecting the contents of statically-locked caches. we evaluate their performances and compare them with those of a state of the art static cache analysis method. | 11 |
| random sampling adc for sparse spectrum sensing | scanning large bandwidths (spectrum sensing) pushes today's analog hardware to its limits since periodic sampling at nyquist rate with sufficient resolution is often prohibitively complex. in this paper, we consider a scenario where the signal to be acquired is sparse in the frequency domain (e.g., spectrum sensing in cognitive radio applications) and we are interested in identifying the sparse support of the signal. for this type of applications, we describe a new analog-to-digital converter (adc) architecture that acquires unequally spaced samples based on a slope adc, which is one of the least complex adc architectures available. for the signal reconstruction, we employ algorithms from compressed sensing for the recovery of the dominant spectral components. the performance of the proposed design is compared to more traditional designs with comparable or higher hardware complexity. | a 9-bit, 14 μw and 0.06 mm² pulse position modulation adc in 90 nm digital cmos | this work presents a compact, low-power, time-based architecture for nanometer-scale cmos analog-to-digital conversion. a pulse position modulation adc architecture is proposed and a prototype 9 bit ppm adc incorporating a two-step tdc scheme is presented as proof of concept. the 0.06 mm² prototype is implemented in 90 nm cmos and achieves 7.9 effective bits across the entire input bandwidth and dissipates 14 μw at 1 ms/s. | 12 |
| the power of autonomic characteristics in opportunistic networks | autonomic characteristics can dramatically improve the efficiency of opportunistic communications. the main contribution of autonomic communications to opportunistic architectures is to simplify, automate, protect and enhance knowledge handling. this empowers and makes more effective the handling and exchange of information in the face of a complex, dynamic, uncertain, and unsecured environment. in this paper we identify the key elements that enable autonomic behavior of nodes in opportunistic networks. we then show how this can be implemented, and how beneficial it is. | impact of human mobility on the design of opportunistic forwarding algorithms | studying transfer opportunities between wireless devices carried by humans, we observe that the distribution of the inter-contact time, that is, the time gap separating two contacts of the same pair of devices, exhibits a heavy tail such as that of a power law over a large range of values. this observation is confirmed on six distinct experimental data sets. it is at odds with the exponential decay implied by most mobility models. in this paper, we study how this new characteristic of human mobility impacts a class of previously proposed forwarding algorithms. we use a simplified model based on renewal theory to study how the parameters of the distribution impact the delay performance of these algorithms. we make recommendations for the design of well-founded opportunistic forwarding algorithms in the context of human-carried devices. | 13 |
| coping with duplicate bug reports in free/open source software projects | free/open source software (foss) communities often use open bug reporting to allow users to participate by reporting bugs. this practice can lead to more duplicate reports, as users can be less rigorous about researching existing bug reports. this paper examines how foss projects deal with duplicate bug reports. we examined 12 foss projects: 4 small, 4 medium and 4 large, where size was determined by number of code contributors. first, we found that contrary to what has been reported from studies of individual large projects like mozilla and eclipse, duplicate bug reports are a problem for foss projects, especially medium-sized ones, which struggle with a large number of submissions without the resources of large projects. second, we found that the focus of a project does not affect the number of duplicate bug reports. our findings indicate a need for additional scaffolding and training for bug reporters. | are all duplicates value-neutral? an empirical analysis of duplicate issue reports | in open source communities, there are numerous duplicate issue reports, considered useless and negligible by developers. conversely, some studies have argued that duplicates deliver complementary information that could benefit issue resolving. considering all duplicates as value-neutral will result in either overestimation or underestimation of valuable information. it is necessary to be aware of whether all duplicates are redundant or beneficial. in this paper, we investigate whether duplicates have the same impacts on issue resolving and identification cost. we divide duplicates into three categories according to the statuses of master reports when duplicates are submitted. the results show duplicates in different categories play different roles in issue resolving, and identification cost is also significantly different. our study reveals that duplicates are different, but are paid almost equal attention. it is promising to propose new approaches and tools to resolve the problem. | 14 |
| behavioral selection using the utility function method: a case study involving a simple guard robot | in this paper, the performance of the utility function method for behavioral organization is investigated in the framework of a simple guard robot. in order to achieve the best possible results, it was found that high-order polynomials should be used for the utility functions, even though the use of such polynomials, involving many terms, increases the running time needed for the evolutionary algorithm to find good solutions. | action-selection in hamsterdam: lessons from ethology | a computational model of action-selection is presented, which by drawing on ideas from ethology, addresses a number of problems which have been noted in models proposed to date including the need for greater control over the temporal aspects of behavior, the need for a loose hierarchical structure with information sharing, and the need for a flexible means of modeling the influence of internal and external factors. the paper draws on arguments from ethology as well as on computational considerations to show why these are important aspects of any action-selection mechanism for animats which must satisfy multiple goals in a dynamic environment. the computational model is summarized, and its use in hamsterdam, an object-oriented tool kit for modeling animal behavior, is discussed briefly. results are presented which demonstrate the power and usefulness of the novel features incorporated in the algorithm. | 15 |
| multistage interconnection networks: a review | multistage interconnection networks (mins) are a basic class of switch-based network architectures, which are used for constructing scalable parallel computers or for connecting networks. mins provide fast and efficient communication. the reliability of mins is used to measure a system's ability to transfer information from input to output. in section i we review the introduction to min networks. in section ii the classification of mins is discussed. in section iii we survey the reliability of mins. section iv presents the conclusion and future scope of the paper. | irregular class of multistage interconnection network in parallel processing | a major problem in designing a large-scale parallel and distributed system was the construction of an interconnection network (in) to provide inter-processor communication. one of the biggest issues in the development of such a system was the development of an effective architecture and algorithms that have high reliability, give good performance (even in the presence of faults), low cost, low average path length, higher number of passes of request and a simple control. in this study, a new class of irregular fault tolerant multistage interconnection network (min) called improved four tree (ift) is introduced. algorithms for computing the cost and permutations passable in the presence and absence of faults are developed for the analysis of various networks with the proposed network. | 16 |
| backoff protocols for distributed mutual exclusion and ordering | this paper presents a simple and efficient protocol for mutual exclusion in synchronous message-passing distributed systems subject to failures. our protocol borrows design principles from prior work in backoff protocols for multiple access channels such as the ethernet. our protocol is adaptive in that the expected amortized system response time, informally, the average time a process waits before entering the critical section, is a function only of the number of clients currently contending and is independent of the maximum number of processes that might contend. in particular, in the contention-free case, a process can enter the critical section after only one round-trip message delay. we use this protocol to derive a protocol for ordering operations on a replicated object in an asynchronous distributed system subject to failures. this protocol is always safe, is probabilistically live during periods of stability and is suitable for deployment in practical systems. | mutual exclusion in asynchronous systems with failure detectors | this paper considers the fault-tolerant mutual exclusion problem in a message-passing asynchronous system and determines the weakest failure detector to solve the problem, given a majority of correct processes. this failure detector, which we call the trusting failure detector, and which we denote by t, is strictly weaker than the perfect failure detector p but strictly stronger than the eventually perfect failure detector ◇p. the paper shows that a majority of correct processes is necessary to solve the problem with t. moreover, t is also the weakest failure detector to solve the fault-tolerant group mutual exclusion problem, given a majority of correct processes. | 17 |
| the organization of distributed problem-solving networks: examining how core and periphery interact together to solve problems in mozilla's community | the emerging empirical literature on open source communities indicates that a majority of code writing and communication activity is concentrated with a few contributors, the “core” (maintainers). however, these communities allow and encourage participation from anybody, the “periphery”. the focus of this work is on explaining how distributed communities solve software problems through the participation of a large number of participants. in particular, this paper investigates interaction, collaboration and division of labor between the core and periphery in a distributed problem-solving activity. using a linguistic method of analysis, we study bugs that affected the firefox internet browser as reflected in the discussions and actions reported in bugzilla (mozilla's bug tracking system). we identify various categories in the modes of interaction between the core and periphery participants of the community and suggest that interactions are influenced by their status. | two case studies of open source software development: apache and mozilla | according to its proponents, open source style software development has the capacity to compete successfully, and perhaps in many cases displace, traditional commercial development methods. in order to begin investigating such claims, we examine data from two major open source projects, the apache web server and the mozilla browser. by using email archives of source code change history and problem reports we quantify aspects of developer participation, core team size, code ownership, productivity, defect density, and problem resolution intervals for these oss projects. we develop several hypotheses by comparing the apache project with several commercial projects. we then test and refine several of these hypotheses, based on an analysis of mozilla data. we conclude with thoughts about the prospects for high-performance commercial/open source process hybrids. | 18 |
| sdiva: structural delay insensitivity verification analysis method for bit-level pipelined systolic arrays with early output evaluation | a structural delay-insensitivity verification analysis method, sdiva, is proposed for asynchronous systolic arrays in dual-rail threshold logic style. the sdiva method employs symbolic delays for all output evaluation paths and works at the behavioral specification level. for bit-level pipelined systolic arrays, which have data-dependent early output evaluation in one dimension, the sdiva method reduces the verification analysis task to examination of three adjacent systoles, so that by analyzing all possible early/late output evaluation scenarios on three systoles, the delay-insensitivity of a complete systolic array can be verified at once, regardless of the array dimensions. delay-insensitivity violations are located and corrected at structural level, without diminishing the early output evaluation benefits. since symbolic delays are used without imposing any timing assumptions on the environment, the sdiva method is technology independent and robust against all physical and environmental variations. | application of bit-level pipelining to delay insensitive null convention adders | in this study, two asynchronous delay insensitive adder topologies in null convention logic (fant and brandt, 1997) style are adopted for bit-level pipelining: the reduced null convention logic adder (smith, 2001) and a null convention carry save adder. when pipelined at bit-level, the early carry generation feature of both adders violates the requirements of delay insensitivity. to solve this problem, new topologies are proposed. the resultant adders maintain both reliable delay insensitive operation and the speedup advantages of early carry generation, with o(log n) average completion time for n-bit addition and, as a result of bit-level pipelining, constant throughput against increased bit-length. | 19 |
user experience design goes agile in lean transformation -- a case study
|
this paper describes the results of a single-case case study, exploring the role of user experience (ux) work in agile software development. the case study company is a large multinational telecommunication company undergoing a lean transformation process. in this case, lean transformation includes the adoption of agile software development practices. transformation to agile practices had taken place one year prior to the analysis. the analysis is based on documentation analysis and semi-structured interviews of seven software development professionals. the results show that there were difficulties integrating ux design and software engineering work in an agile and iterative manner. the transition process succeeded in shifting ux and related documentation to a central planning role. the roles of the ux designers in the teams were still under re-definition. there was also a clear need to establish new ways of collaboration between ux professionals and software designers.
|
empirical analysis on the satisfaction of it employees comparing xp practices with other software development methodologies
|
job satisfaction has been studied by economists and psychologists. we believe this factor is very important in that it influences the effectiveness of the software development process. this paper reports the first results of a comparative analytic study on the job satisfaction of developers that use xp practices and others that do not use xp practices. by determining the factors that are highly valued by developers, the research can provide insight in currently practised software development processes, and help make changes that increase their strategic value.
| 20
|
c-scrip: collaborative security pattern integration process
|
collaboration is the act of working together towards a common goal. collaboration is essential to the success of a construction project. in software engineering projects, understanding and supporting collaboration has a broad impact on product quality. it appears to be difficult to interact effectively and achieve common project goals within the bounds of cost, quality, and time. the purpose of this paper is to propose a collaborative engineering process, called the collaborative security pattern integration process (c-scrip), and a tool that supports the full life-cycle of the development of a secure system from modeling to code.
|
safe design real-time embedded systems with security patterns
|
security is a fundamental property in the modeling of real-time embedded systems. unfortunately, integrating this property is a hard task for designers due to their limited background in this area. thankfully, design patterns can provide a practical solution for integrating security through an abstraction mode. however, the number of design patterns is increasing nowadays, and for that reason the selection of a suitable pattern is a fundamental challenge for designers. in this context, we propose in this position paper an approach for integrating security patterns in the modeling phase of real-time embedded systems. to solve the pattern selection problem, our approach uses an ontology-based solution, and we propose some methods to guarantee the performance of the system after integration.
| 21
|
adult content classification through deep convolution neural network
|
adult content filtering is one of the main challenges in indonesia in preventing children from accessing adult content. conventional web blocking and filtering through domain name server filtering is not enough to prevent adult content distribution. mobile phones, tablets, and personal computers can distribute adult content offline. in this case, a more sophisticated and autonomous system is needed that can detect adult content automatically. to address this problem, a deep neural network is used to build a model that is able to detect adult content automatically. in these experiments, our model is able to detect adult content with an accuracy of 75.08% and 69.02% during the validation and testing process, respectively.
|
the mask-sift cascading classifier for pornography detection
|
pornography detection using the scale invariant feature transform (sift) has been shown effective in identifying pornographic images. by including automated gaussian skin masking for feature isolation, classifier performance is significantly improved. similarly, utilizing a cascading classifier that pre-filters images based on size and skin percentage further improves precision and recall with a substantial increase in classification speed.
| 22
|
two approaches for the improvement in testability of communication protocols
|
protocols have grown larger and more complex with the advent of computer and communication technologies. as a result, the task of conformance testing of protocol implementations has also become more complex. the study of design for testability (dft) is a research area in which researchers investigate design principles that will help to overcome the ever increasing complexity of testing distributed systems. testability metrics are essential for evaluating and comparing designs. in a previous paper, we introduced a new metric for the testability of communication protocols, based on the detection probability of a fault, and demonstrated the usefulness of the metric for identifying faults that are more difficult to detect. in this paper, we present two approaches for improved testing of a protocol implementation once those faults that are difficult to detect are identified.
|
testability improvement by narrow input / output ( nio ) sequences
|
communication protocol conformance testing has become more complex as protocols have grown larger. the study of design for testability (dft) is a research area that helps to overcome the ever increasing complexity of testing distributed systems. in previous research, we introduced a new metric for the testability of communication protocols, demonstrated the usefulness of the metric for identifying faults that are more difficult to detect, and also presented an approach for improving testability without modifying the protocol structure by means of unique input/output (uio) sequences. since a uio sequence may not exist for every state of some protocol specification finite state machines (fsms), in this paper we extend the uio sequence to a new, more general concept called the narrow input/output (nio) sequence. we demonstrate the effectiveness of applying nio sequences to the improvement of protocol testability.
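as an illustration of the uio idea in the abstract above: a uio sequence for a state is an input sequence whose output distinguishes that state from every other state of the specification fsm. the brute-force search below is a minimal sketch; the three-state mealy machine and the length bound are invented for the example, and real protocol machines are far larger.

```python
from itertools import product

def output_seq(fsm, state, inputs):
    """apply an input sequence to a mealy fsm from `state` and return
    the produced output sequence; fsm maps (state, input) -> (next, out)."""
    outs = []
    for i in inputs:
        state, o = fsm[(state, i)]
        outs.append(o)
    return tuple(outs)

def find_uio(fsm, target, alphabet, max_len=4):
    """brute-force search for a uio sequence for `target`: an input
    sequence whose output from `target` differs from the output
    produced from every other state."""
    states = {s for s, _ in fsm}
    for n in range(1, max_len + 1):
        for seq in product(alphabet, repeat=n):
            ref = output_seq(fsm, target, seq)
            if all(output_seq(fsm, s, seq) != ref
                   for s in states if s != target):
                return seq
    return None  # no uio of length <= max_len exists

# toy 3-state mealy machine: (state, input) -> (next_state, output)
fsm = {
    ("s0", "a"): ("s1", 0), ("s0", "b"): ("s0", 0),
    ("s1", "a"): ("s2", 1), ("s1", "b"): ("s0", 0),
    ("s2", "a"): ("s2", 1), ("s2", "b"): ("s0", 1),
}
print(find_uio(fsm, "s2", ["a", "b"]))  # → ('b',)
```

note that s1 in this toy machine has no short uio sequence, which is exactly the situation that motivates the more general nio concept.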
| 23
|
ptidej and decor: identification of design patterns and design defects
|
the ptidej project started in 2001 to study code generation from and identification of patterns. since then, it has evolved into a complete reverse-engineering tool suite that includes several identification algorithms. it is a flexible tool suite that attempts to ease as much as possible the development of new identification and analysis algorithms. recently, the module decor has been added to ptidej, allowing the detection of design defects, which are recurring design problems. in this demonstration, we particularly focus on the creation and use of identification algorithms for design patterns and defects.
|
automatic generation of detection algorithms for design defects
|
maintenance is recognised as the most difficult and expensive activity of the software development process. numerous techniques and processes have been proposed to ease the maintenance of software. in particular, several authors have published design defects formalising "bad" solutions to recurring design problems (e.g., anti-patterns, code smells). we propose a language and a framework to express design defects synthetically and to generate detection algorithms automatically. we show that this language is sufficient to describe some design defects and to generate detection algorithms with good precision. we validate the generated algorithms on several programs.
| 24
|
saaspia platform: integrating and customizing on-demand applications supporting multi-tenancy
|
saas applications support multi-tenancy and dedicated service environments by providing multiple tenants with a user interface for customizing their own service. on the other hand, existing web-based applications do not support multi-tenancy and configurability. therefore, transforming existing web applications into saas applications and serving them to multiple tenants is a great challenge. in this paper, we introduce saaspia, a general purpose saas platform supporting multi-tenancy and configurability. we also introduce the saaspia integration tool and the saaspia configuration tool, the saaspia platform tools used to integrate and customize applications.
|
multi-tenant saas applications: maintenance dream or nightmare?
|
multi-tenancy is a relatively new software architecture principle in the realm of the software as a service (saas) business model. it allows full use of economies of scale, as multiple customers - "tenants" - share the same application and database instance. all the while, the tenants enjoy a highly configurable application, making it appear that the application is deployed on a dedicated server. the major benefits of multi-tenancy are increased utilization of hardware resources and improved ease of maintenance, in particular on the deployment side. these benefits should result in lower overall application costs, making the technology attractive for service providers targeting small and medium enterprises (smes). however, as this paper advocates, a wrong architectural choice might turn multi-tenancy into a maintenance nightmare.
| 25
|
a non-convex relaxation approach to sparse dictionary learning
|
dictionary learning is a challenging theme in computer vision. the basic goal is to learn a sparse representation from an overcomplete basis set. most existing approaches employ a convex relaxation scheme to tackle this challenge due to the strength of convexity in computation and theoretical analysis. in this paper we propose a non-convex online approach for dictionary learning. to achieve sparseness, our approach treats the so-called minimax concave (mc) penalty as a non-convex relaxation of the l0 penalty. this treatment is expected to yield a more robust and sparse representation than existing convex approaches. in addition, we employ an online algorithm to adaptively learn the dictionary, which makes the non-convex formulation computationally feasible. experimental results on sparseness comparison and on applications in image denoising and image inpainting demonstrate that our approach is more effective and flexible.
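as a concrete sketch of the relaxation idea, the code below implements the standard minimax concave penalty and its firm-thresholding proximal operator (the coefficient update used in coordinate-wise sparse coding). this is the textbook mcp form with parameters lam and gamma, not necessarily the exact formulation or algorithm of the paper.

```python
import math

def mcp_penalty(t, lam, gamma):
    """minimax concave (mc) penalty: behaves like lam*|t| near zero and
    flattens to the constant gamma*lam**2/2 for |t| >= gamma*lam, so large
    coefficients are not over-shrunk (unlike the convex l1 penalty)."""
    a = abs(t)
    if a <= gamma * lam:
        return lam * a - t * t / (2.0 * gamma)
    return gamma * lam * lam / 2.0

def mcp_prox(x, lam, gamma):
    """proximal (firm-thresholding) operator of the mc penalty, gamma > 1:
    small inputs go to zero, mid-range inputs are shrunk, and large
    inputs pass through unchanged."""
    a = abs(x)
    if a <= lam:
        return 0.0
    if a <= gamma * lam:
        return math.copysign((a - lam) / (1.0 - 1.0 / gamma), x)
    return x

print(mcp_prox(0.5, 1.0, 3.0))  # → 0.0 (below lam: sparsified)
print(mcp_prox(5.0, 1.0, 3.0))  # → 5.0 (above gamma*lam: unbiased)
```

the unbiasedness of large coefficients is what is meant above by "a more robust and sparse representation" compared with soft thresholding, which shrinks every coefficient by lam.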
|
online robust dictionary learning
|
online dictionary learning is particularly useful for processing large-scale and dynamic data in computer vision. it, however, faces the major difficulty to incorporate robust functions, rather than the square data fitting term, to handle outliers in training data. in this paper, we propose a new online framework enabling the use of l1 sparse data fitting term in robust dictionary learning, notably enhancing the usability and practicality of this important technique. extensive experiments have been carried out to validate our new framework.
| 26
|
locally interacting hybrid systems with embedded graph grammars
|
in many cooperative control methods, the network topology influences the evolution of its continuous states. in turn, the continuous state may influence the network topology due to local restrictions on connectivity. in this paper we present a grammatical approach to modeling and controlling the switching of a system's network topology, continuous controllers, and discrete modes. the approach is based on embedded graph grammars, which restrict interactions to small subgraphs and include spatial restrictions on connectivity and progress. this allows us to direct the behavior of large decentralized systems of robots. the grammatical approach also allows us to compose multiple subsystems into a larger whole in a principled manner. in this paper, we illustrate the approach by proving the correctness of a cooperative control system called the load balanced multiple rendezvous problem.
|
cooperative exploration and protection of a workspace assisted by information networks
|
we develop strategies that enable multiple robots to cooperatively explore an unknown workspace while building information networks. every robot deploys information nodes with sensing and communication capabilities while constructing the voronoi diagram as the topological map of the workspace. the resulting information networks constructed by individual robots will eventually meet, allowing for inter-robot information sharing. the constructed information network is then employed by the mobile robots to protect the workspace against intruders. we introduce the intruder capturing strategy on the voronoi diagram assisted by information networks.
| 27
|
benefits and drawbacks of asymmetric microwave links
|
the possibility of introducing asymmetric microwave links into existing networks is currently under consideration. the main benefit of the concept lies in the fact that it reflects the actual requirement for rf spectrum needed to handle download and upload traffic. currently used frequency plans anticipate the same amount of spectrum ('paired' channels) for both download and upload, but since traffic is significantly higher in download than in upload, the asymmetric approach seems a more natural option, which should provide certain spectrum savings. however, some concerns have been raised regarding the possibility of reusing the saved spectrum, which is the main challenge for the implementation of the asymmetric approach. this paper deals with the effects and consequences of introducing asymmetric microwave links, based on a comparative analysis of the asymmetric approach on two realistic representations of a wireless network in rural and urban scenarios.
|
analysis of spectrum efficiency when using asymmetric microwave links
|
asymmetric microwave links have been introduced only recently as a novel concept for wireless networks which should provide higher spectrum efficiency and thus significant savings of radio-frequency spectrum. this paper presents a technical assessment of this approach with its main advantages and disadvantages, focusing on a comparative analysis of symmetric and asymmetric microwave link networks. the analysis was performed on a realistic representation of a rural scenario at the capacity of the existing network. benefits in terms of increased spectrum efficiency have been assessed.
| 28
|
cluster based peers configuration using hcnp in peer-to-peer overlay networks
|
the paper addresses the need for efficient collaboration of peers with different levels of heterogeneity in order to share resources in p2p overlay networks. here heterogeneity reflects differences in peers' physical resources such as free storage space, processor speed, etc. a non-hierarchical cluster-based approach has been introduced to make a network comprising heterogeneous nodes more efficient and scalable. np (newscast protocol) has been used to generate the nodes (peers) randomly in the network, where nodes also maintain the properties of their neighboring nodes in a cache associated with each node. np has proved to be the most efficient protocol for maintaining the current state of the network. in this paper a heterogeneous cluster-based np (hcnp) is introduced, which not only preserves the properties of np, but also configures clusters on the basis of changes in the physical parameters of a node at any particular time instant.
|
overlay networks: a scalable alternative for p2p
|
overlay networks create a structured virtual topology above the basic transport protocol level that facilitates deterministic search and guarantees convergence. overlay networks are evolving into a critical component for self-organizing systems. here we outline the differences between flooding-style and overlay networks, and offer specific examples of how researchers are applying the latter to problems requiring high-speed, self-organizing network topologies.
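the "deterministic search" property of structured overlays can be illustrated with a minimal consistent-hashing ring, a simplified single-hop sketch of what dhts such as chord do over many hops; the node names and the 32-bit ring size are arbitrary choices for the example.

```python
import hashlib
from bisect import bisect_right

def h(key):
    """map a key onto a 32-bit identifier ring via sha-1."""
    return int(hashlib.sha1(key.encode()).hexdigest(), 16) % (2 ** 32)

class Ring:
    """minimal consistent-hash ring: each key is owned by the first node
    clockwise from the key's position, so every peer with the same
    membership view resolves a lookup to the same node deterministically
    (unlike flooding-style search)."""
    def __init__(self, nodes):
        self.ids = sorted((h(n), n) for n in nodes)

    def lookup(self, key):
        pos = h(key)
        idx = bisect_right([i for i, _ in self.ids], pos)
        return self.ids[idx % len(self.ids)][1]  # wrap around the ring

ring = Ring(["node-a", "node-b", "node-c"])
print(ring.lookup("some-file.dat"))
# node order in the membership list does not change the answer:
assert Ring(["node-c", "node-b", "node-a"]).lookup("some-file.dat") == ring.lookup("some-file.dat")
```

real overlays distribute the id table across nodes and route in o(log n) hops; here the whole table is local, which keeps the convergence guarantee visible in a few lines.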
| 29
|
towards precise indoor rf localization
|
precise indoor localization of wireless nodes remains a challenge today. while there are radio-frequency (rf) methods that offer significant advantages, the balance between accuracy, range, and cost is suboptimal for many applications. radio interferometry has been shown to be effective outdoors, however, its applicability indoors has not been demonstrated mainly due to its sensitivity to multipath. this paper presents a roadmap outlining how the method can be enhanced to advance the state-of-the-art in indoor rf localization.
|
performance analysis of the interferometric ranging system with hopped frequencies against multi-tone jammer
|
the interferometric ranging system with hopped frequencies (irhf) is a novel ranging technique with advanced anti-jamming capability for wireless sensor networks. this paper investigates the ranging performance of the maximum likelihood (ml) estimator of irhf under multi-tone jamming (mtj), a potential threat faced by wireless sensor nodes. first, the jamming model with one malicious node transmitting an mtj signal is introduced. second, the region where the false estimation is located is identified. finally, a closed-form expression for the probability of false estimation versus signal-to-jamming ratio and some system parameters is derived using pair-wise probability. the consistency between the simulation results and the theoretical approximations validates our analyses. the study shows that the probability of false estimation proposed here can predict the ml ranging performance of irhf accurately and relieve the requirement for time-consuming computer simulations.
| 30
|
formal techniques for java programs
|
this report gives an overview of the second ecoop workshop on formal techniques for java programs. it explains the motivation for such a workshop and summarizes the presentations and discussions.
|
reasoning about jml: differences between key and openjml
|
to increase the impact and capabilities of formal verification, it should be possible to apply different verification techniques on the same specification. however, this can only be achieved if verification tools agree on the syntax and underlying semantics of the specification language and unfortunately, in practice, this is often not the case.
| 31
|
gn-dtd: graphical notations for describing xml documents
|
this paper presents a graphical approach to modeling xml documents based on a data type documentation called graphical notations-data type documentation (gn-dtd). gn-dtd allows us to capture the syntax and semantics of xml documents in a simple but precise way. using various notations, the important features of xml documents, such as elements, attributes, relationships, hierarchical structure, cardinality, sequence, and disjunction between elements or attributes, are visualized clearly at the schema level. we believe that having gn-dtd as a tool helps the user to arrange the content of xml documents in order to give a better understanding of dtd structures and to improve the xml design and normalization process. in this paper we also present the transformation rules to convert from gn-dtd to dtd.
|
developing xml documents with guaranteed ``good'' properties
|
many xml documents are being produced, but there are no agreed-upon standards formally defining what it means for complying xml documents to have "good" properties. in this paper we present a formal definition for a proposed canonical normal form for xml documents called xnf. xnf guarantees that complying xml documents have maximally compact connectivity while simultaneously guaranteeing that the data in complying xml documents cannot be redundant. further, we present a conceptual-model-based methodology that automatically generates xnf-compliant dtds and prove that the algorithms, which are part of the methodology, produce dtds to ensure that all complying xml documents satisfy the properties of xnf.
| 32
|
datadroplets: a correlation-aware large scale decentralized tuple store
|
until now, relational database management systems have been the key technology to store and process structured data. however, these systems, based on highly centralized and rigid architectures, are facing a conundrum: the volume of data currently quadruples every eighteen months while the available performance per processor only doubles in the same time period. this is the breeding ground for a new generation of elastic data management solutions that can scale both in the sheer volume of data that can be held and in how required resources can be provisioned dynamically and incrementally.
|
bigtable: a distributed storage system for structured data
|
bigtable is a distributed storage system for managing structured data that is designed to scale to a very large size: petabytes of data across thousands of commodity servers. many projects at google store data in bigtable, including web indexing, google earth, and google finance. these applications place very different demands on bigtable, both in terms of data size (from urls to web pages to satellite imagery) and latency requirements (from backend bulk processing to real-time data serving). despite these varied demands, bigtable has successfully provided a flexible, high-performance solution for all of these google products. in this paper we describe the simple data model provided by bigtable, which gives clients dynamic control over data layout and format, and we describe the design and implementation of bigtable.
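the data model described above can be sketched as a sparse map from (row key, column key, timestamp) to an uninterpreted string, with multiple timestamped versions per cell. the toy class below is an in-memory illustration of that model only, not bigtable's implementation; the example row and column names are borrowed from the paper's web-indexing use case.

```python
class TinyTable:
    """toy in-memory sketch of bigtable's data model: a sparse map from
    (row key, column key, timestamp) to an uninterpreted value, where a
    read returns the most recent version at or before a given timestamp."""
    def __init__(self):
        self.cells = {}  # (row, column) -> {timestamp: value}

    def put(self, row, column, timestamp, value):
        self.cells.setdefault((row, column), {})[timestamp] = value

    def get(self, row, column, timestamp=float("inf")):
        versions = self.cells.get((row, column), {})
        usable = [t for t in versions if t <= timestamp]
        return versions[max(usable)] if usable else None

t = TinyTable()
t.put("com.cnn.www", "contents:", 1, "<html>v1")
t.put("com.cnn.www", "contents:", 2, "<html>v2")
print(t.get("com.cnn.www", "contents:"))     # → <html>v2 (latest)
print(t.get("com.cnn.www", "contents:", 1))  # → <html>v1 (as of t=1)
```

sparsity comes for free: unset cells simply have no entry, which is what lets clients control layout without a fixed relational schema.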
| 33
|
course practice teaching mode based on the exploration of online and offline integration
|
taking the "marketing" course of open education business administration students at the open university of china (chengdu branch) as a research vehicle, and farm visits and experiential teaching activities at flagship stores as the basis of discussion, this paper explores practical teaching strategies based on the integration of online and offline modes. it puts forward a course practice teaching mode that integrates theory with practice, classroom with on-site teaching, and online with offline learning, making instruction entertaining and grounded in everyday life, and fully activating students' enthusiasm, participation, initiative, interest, responsibility, and sense of cooperation in the learning process.
|
research on teaching presence evaluation indexes of online course
|
online courses lack face-to-face and direct communication compared with classroom teaching, and in-depth communication between teachers and students can only occur in an environment with a high level of teaching presence. therefore, how to establish and evaluate presence in online teaching is the research objective. for the evaluation of teaching presence in online teaching, a two-level evaluation index system is set up in a hierarchical way in our research. the first-level evaluation indexes include design and organization, facilitating discussion, and direct guidance. each primary evaluation index is refined into second-level evaluation indexes. the spoc course “computer composition principle” is evaluated according to the two-level evaluation indexes. by analyzing the results, it can be concluded that the teaching presence evaluation indexes can provide effective references for teachers to improve teaching methods and teaching design, and form a closed-loop feedback.
| 34
|
mri head segmentation for object based volume visualization
|
in this paper, we present a new image segmentation approach for mri of the head, which is a semi-automatic process. unlike automatic segmentation or manual segmentation, the semi-automatic segmentation approach is a robust and interactive segmentation process. this approach carries out 3d volume data segmentation based on 2d image slices. by utilising the user-provided image mask, including areas of interest or structural information, the semi-automatic segmentation process can generate a new segmented volume dataset and structural information. the object based volume visualization method can use this segmented dataset and structural information to perform structure based manipulation and visualization, which cannot be achieved using a normal volume rendering method.
|
image sequence segmentation using 3-d structure tensor and curve evolution
|
we describe a novel approach for image sequence segmentation. it contains three parts: global motion compensation, robust frame differencing, and curve evolution. it is computationally efficient, does not require dense-field motion estimation, and is insensitive to noise and global/background motion. it works for black-and-white and color image sequences. the efficacy of this approach is demonstrated on both tv and surveillance image sequences.
| 35
|
a current-mode approach to cmos neural network implementation
|
cmos equivalents of the synapse and the neuron are proposed for lsi implementation of an adaptive analog neural network. the synapse is a multiplying digital-to-analog converter based on an r-2r ladder and the neuron consists of the second-generation current conveyor. prototype chips fabricated independently using a 0.6 µm cmos process have confirmed the wideband signal processing capability owing to a fully current-mode approach. detailed analyses of measured performances have also given the design criteria for fully parallel implementation.
|
an inherently linear and compact most-only current division technique
|
a technique is presented that uses the same mos transistors for both division and switching functions, eliminating resistors or capacitors. although an mos transistor exhibits a nonlinear relation between the current and voltage (even in the linear region), it is shown that the current division is inherently linear. the most important measurement results are shown. the dynamic range in the audio band (0-20 khz) is 103 db with respect to a maximum input signal of 1 vrms. at 1 vrms, thd is below -80 db over the audio band and below -85 db under 3 khz. as the unity-gain frequency of the opamps is 4.5 mhz, the bandwidth of the circuit is limited to 1.5 mhz. attenuation accuracy is better than 0.15 db up to -48 db and better than 0.4 db over the entire attenuation range.
| 36
|
optimization of multirate filter banks within the framework of a human visual model
|
a perfect reconstruction filter optimization based on a human visual model is presented. the optimized filters are compared to a set of published filters for coding at a fixed bit rate. a method is suggested for adapting the filters to changes in source statistics. this method is applied to coding of intraframe data and residual frame data after motion-compensated prediction.
|
the effects of a visual fidelity criterion of the encoding of images
|
shannon's rate-distortion function provides a potentially useful lower bound against which to compare the rate-versus-distortion performance of practical encoding-transmission systems. however, this bound is not applicable unless one can arrive at a numerically-valued measure of distortion which is in reasonable correspondence with the subjective evaluation of the observer or interpreter. we have attempted to investigate this choice of distortion measure for monochrome still images. this investigation has considered a class of distortion measures for which it is possible to simulate the optimum (in a rate-distortion sense) encoding. such simulation was performed at a fixed rate for various measures in the class and the results compared subjectively by observers. for several choices of transmission rate and original images, one distortion measure was fairly consistently rated as yielding the most satisfactory appearing encoded images.
| 37
|
design knowledge and sequential plans
|
the application of sequential planning to the design process is discussed, considering design as a search through a space of states which are acted upon by transformation rules. various approaches to goal satisfaction are considered, including forward inference, regression, the satisfaction of implicit goals, and metaplanning. these issues are illustrated with an example from a simple design domain. the example has been implemented in prolog.
|
logic programming as a means of representing semantics in design languages
|
logic programming is discussed as a method for representing aspects of design language: descriptions of designs, domain knowledge, transformation rules (design grammar), and control mechanisms necessary to implement rules. the applicability of logic programming to the representation of semantics in design is also explored. control at the semantic level provides a means of directing the automated generation of designs. examples are drawn from a rule-based design system written in the logic programming language prolog.
| 38
|
compression word coding techniques for information retrieval
|
a description and comparison are presented of four compression techniques for word coding with application to information retrieval. the emphasis is on codes useful in creating directories to large data files. it is further shown how differing application objectives lead to differing measures of optimality for codes, though compression may be a common quality.
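one classic word-coding technique for directory files of the kind discussed above is front compression: since the directory is sorted, each word is stored as the length of the prefix it shares with its predecessor plus the differing suffix. the sketch below uses an invented word list and is one plausible instance of prefix differencing, not necessarily one of the paper's four techniques.

```python
def front_encode(words):
    """front-compress a sorted word list: for each word, store the length
    of the prefix shared with its predecessor and the remaining suffix."""
    out, prev = [], ""
    for w in words:
        k = 0
        while k < min(len(prev), len(w)) and prev[k] == w[k]:
            k += 1
        out.append((k, w[k:]))
        prev = w
    return out

def front_decode(pairs):
    """invert front_encode by rebuilding each word from the previous one."""
    words, prev = [], ""
    for k, suffix in pairs:
        prev = prev[:k] + suffix
        words.append(prev)
    return words

words = ["jezebel", "jezer", "jezerit", "jeziah", "jeziel"]
enc = front_encode(words)
print(enc)  # → [(0, 'jezebel'), (4, 'r'), (5, 'it'), (3, 'iah'), (4, 'el')]
assert front_decode(enc) == words
```

the shared-prefix counts replace most of each word, which is where the storage reduction for sorted directories comes from.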
|
data compression techniques for economic processing of large commercial files
|
the application of compact coding, differencing and other techniques to indexed sequential files is discussed. the effects on system performance are discussed and reductions of almost 80% in mass storage requirements for a particular file are reported.
| 39
|
an expert system framework based on a simulation generator
|
expert systems (es) implementations automatically perform tasks for which specially trained or talented people have been required. fifth generation simulation systems integrate the tools developed in the fourth generation and capture the knowledge of the expert programmer as well as that of the simulation modeling expert. haddock has programmed a user-oriented simulation generator for the design and control of flexible manufacturing systems (fms). the development of an es based on the generator is described in this paper. the system to be described solely requires knowledge of the system to be simulated from the user. fortran-written subroutines, incorporated within the software structure of siman, interpret the results of experimental runs and make statistical inferences about the performance measure. simulation generators can assist simulationists in model development and update, as well as in the analysis of alternative scenarios. a very desirable feature of intelligent front ends (ifes) is to ha...
|
performing simulation projects with the extended simulation system (tess)
|
tess, the extended simulation system, provides a comprehensive, flexible and integrated framework for performing simulation projects. capabilities include (1) graphically building slam ii networks and schematic models; (2) forms entry for simulation control information, simulation inputs, and animation specifications; (3) database management of user-defined data, model inputs and model outputs; (4) preparation of reports and graphs; (5) analysis of simulation results; and (6) the animation of simulation runs. a nonprocedural command language provides access to the graphical builders and forms system as well as selects data for reports, graphs, analysis or animation. in addition, a library of subroutines gives the programmer access to all data in the tess database. the tess project framework consists of nine elements of a project and three functions for operating on these elements. integration is based on a single, central database containing all element occurrences. a comprehensive example illustrat...
| 40
|
overview of structural reliability analysis methods — part ii: sampling methods
|
in part ii of the overview of structural reliability analysis methods, the category of sampling methods is reviewed. basic monte carlo simulation is the foundation for sampling methods of reliability analysis. sampling methods can evaluate the failure probability defined by both explicit and implicit performance functions. with a sufficient number of samples, simulation methods give accurate results; for complex problems, however, the computational cost is high. thus, based on variance reduction techniques, several variants of the basic monte carlo simulation method have been proposed to reduce the computational cost. monte carlo simulation and its variants, including importance sampling, adaptive sampling, latin hypercube sampling, directional simulation, and subset simulation, are presented and summarized in this paper.
|
simulation and the monte carlo method
|
from the publisher: provides the first simultaneous coverage of the statistical aspects of simulation and monte carlo methods, their commonalities and their differences for the solution of a wide spectrum of engineering and scientific problems. contains standard material usually considered in monte carlo simulation as well as new material such as variance reduction techniques, regenerative simulation, and monte carlo optimization.
| 41
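The two abstracts above both concern monte carlo estimation of small probabilities and variance reduction. As an illustrative sketch (not taken from either paper), the following Python compares crude monte carlo with a mean-shifted importance-sampling estimator on a toy limit-state g(x) = β − x for a standard-normal variable; all names and the choice of shift are my own assumptions.

```python
import math
import random

# toy limit-state: failure when a standard-normal "load" x reaches BETA
BETA = 2.0

def g(x):
    return BETA - x          # g(x) <= 0 means failure

def crude_mc(n=200_000, seed=1):
    """basic monte carlo estimate of pf = P(g(X) <= 0), X ~ N(0,1)"""
    rng = random.Random(seed)
    return sum(g(rng.gauss(0.0, 1.0)) <= 0 for _ in range(n)) / n

def importance_sampling(n=20_000, seed=1, shift=BETA):
    """sample from N(shift, 1), centred on the failure region, and
    reweight each failure sample by the likelihood ratio
    phi(x) / phi(x - shift) of the nominal to the sampling density"""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        x = rng.gauss(shift, 1.0)
        if g(x) <= 0:
            total += math.exp(-0.5 * x * x + 0.5 * (x - shift) ** 2)
    return total / n

# closed-form reference: P(X >= BETA) for the standard normal
pf_exact = 0.5 * math.erfc(BETA / math.sqrt(2.0))
```

With the failure region centred under the sampling density, the importance-sampling run reaches comparable accuracy with a tenth of the samples, which is the cost argument the first abstract makes.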
|
nonlinear model predictive control for stochastic differential equation systems
|
using the van der pol oscillator model as an example, we provide a tutorial introduction to nonlinear model predictive control (nmpc) for systems governed by stochastic differential equations (sdes) that are observed at discrete times. such systems are called continuous-discrete systems and provide a natural representation of systems evolving in continuous time. furthermore, this representation directly facilitates construction of the state estimator in the nmpc. we provide numerical details related to systematic model identification, state estimation, and optimization of dynamical systems that are relevant to the nmpc.
|
nonlinear model predictive control: theory and algorithms
|
nonlinear model predictive control is a thorough and rigorous introduction to nonlinear model predictive control (nmpc) for discrete-time and sampled-data systems. nmpc is interpreted as an approximation of infinite-horizon optimal control so that important properties like closed-loop stability, inverse optimality and suboptimality can be derived in a uniform manner. these results are complemented by discussions of feasibility and robustness. nmpc schemes with and without stabilizing terminal constraints are detailed, and intuitive examples illustrate the performance of different nmpc variants. an introduction to nonlinear optimal control algorithms gives insight into how the nonlinear optimisation routine, the core of any nmpc controller, works. an appendix covering nmpc software, with accompanying software in matlab and c++ (downloadable from www.springer.com/isbn), enables readers to perform computer experiments exploring the possibilities and limitations of nmpc.
| 42
|
reasoning with multi-modal sensor streams for m-health applications.
|
musculoskeletal disorders have a long-term impact on individuals as well as on the community. they require self-management, typically in the form of maintaining an active lifestyle that adheres to prescribed exercise regimes. in the recent past, m-health applications have gained popularity through the gamification of physical activity monitoring, with a positive impact on general health and well-being. however, maintaining a regular exercise routine with correct execution needs more sophisticated human movement recognition than monitoring ambulatory activities. in this research we propose a digital intervention which can intercept, recognize and evaluate exercises in real time with a view to supporting exercise self-management plans. we plan to compile a heterogeneous multi-sensor dataset for exercises; we will then improve upon state-of-the-art machine learning models and implement reasoning methods to recognise exercises and evaluate performance quality.
|
a deep learning approach to human activity recognition based on single accelerometer
|
in this paper, we propose an acceleration-based human activity recognition method using a popular deep architecture, the convolutional neural network (cnn). in particular, we construct a cnn model and modify the convolution kernel to adapt to the characteristics of tri-axial acceleration signals. for comparison, we also use some widely used methods to accomplish the recognition task on the same dataset. the large dataset we constructed consists of 31688 samples from eight typical activities. the experimental results show that the cnn works well, reaching an average accuracy of 93.8% without any feature extraction methods.
| 43
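The core operation the second abstract relies on is a multi-channel 1-d convolution over a tri-axial acceleration window. The following pure-Python sketch (my own, not the paper's model) shows that first-layer operation with a relu nonlinearity; the data layout and kernel are hypothetical.

```python
def conv1d_relu(signal, kernel, stride=1):
    """valid-mode 1-d convolution of a multi-channel signal
    (list of per-channel sample lists) with one multi-channel
    kernel of the same layout, followed by a relu"""
    width = len(kernel[0])
    length = len(signal[0])
    out = []
    for t in range(0, length - width + 1, stride):
        acc = sum(k * x
                  for ch, krow in enumerate(kernel)
                  for k, x in zip(krow, signal[ch][t:t + width]))
        out.append(max(acc, 0.0))
    return out

# a toy tri-axial window of 8 samples and one averaging kernel
signal = [list(range(0, 8)), list(range(8, 16)), list(range(16, 24))]
kernel = [[1.0 / 6.0, 1.0 / 6.0] for _ in range(3)]   # mean of 3 axes x 2 samples
features = conv1d_relu(signal, kernel)
```

A real cnn stacks many such kernels, learns their weights, and feeds the feature maps to further layers; this sketch only isolates the sliding-window arithmetic.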
|
synchrophasor-based dominant electromechanical oscillation modes extraction using opdmd considering measurement noise
|
a synchrophasor-based approach for security status monitoring is an effective method of assessing power system stability without models, but the accuracy of this method is greatly affected by noise at the same time. this paper proposes an optimized dynamic mode decomposition algorithm to extract electromechanical oscillation modes from the noise-containing data of wide-area measurements. this algorithm effectively improves the ability of the dynamic mode decomposition technique to resist noise in the observed data by using variable projection and a finite-difference style approximation method. the proposed method provides a reliable extraction of the spatial relationships (mode shape), dynamic trends (frequency and damping ratio), and dominant modes using the derived energy relationships in the measured data. the performance of the proposed method has been investigated using simulated data from a simplified 14-generator system and measured data from a real system.
|
nonlinear koopman modes and coherency identification of coupled swing dynamics
|
we perform modal analysis of short-term swing dynamics in multi-machine power systems. the analysis is based on the so-called koopman operator, a linear, infinite-dimensional operator that is defined for any nonlinear dynamical system and captures full information of the system. modes derived through spectral analysis of the koopman operator, called koopman modes, provide a nonlinear extension of linear oscillatory modes. computation of the koopman modes extracts single-frequency, spatial modes embedded in non-stationary data of short-term, nonlinear swing dynamics, and it provides a novel technique for identification of coherent swings and machines.
| 44
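Both abstracts above build on dynamic mode decomposition (dmd) as the data-driven route to oscillation modes. As a generic sketch, here is plain "exact dmd" in numpy applied to a synthetic damped swing; note the first paper's optimized, noise-robust variant (variable projection) is not reproduced here, only the baseline algorithm.

```python
import numpy as np

def dmd(X, r):
    """exact dynamic mode decomposition: fit the best rank-r linear
    operator A with X[:, 1:] ~ A @ X[:, :-1] via the svd of the first
    snapshot matrix, and return its eigenvalues and spatial modes"""
    X1, X2 = X[:, :-1], X[:, 1:]
    U, s, Vh = np.linalg.svd(X1, full_matrices=False)
    U, s, Vh = U[:, :r], s[:r], Vh[:r]
    pinv = Vh.conj().T @ np.diag(1.0 / s)     # pseudo-inverse pieces of X1
    A_tilde = U.conj().T @ X2 @ pinv          # reduced operator
    eigvals, W = np.linalg.eig(A_tilde)
    modes = X2 @ pinv @ W                     # dmd modes (mode shapes)
    return eigvals, modes

# synthetic "swing": a damped rotation, so the true eigenvalues are
# rho * exp(+/- 1j * theta) with rho = 0.95, theta = 0.3 rad/step
rho, theta = 0.95, 0.3
A = rho * np.array([[np.cos(theta), -np.sin(theta)],
                    [np.sin(theta),  np.cos(theta)]])
X = np.empty((2, 40))
X[:, 0] = [1.0, 0.0]
for k in range(39):
    X[:, k + 1] = A @ X[:, k]

eigvals, modes = dmd(X, r=2)
```

The eigenvalue magnitude gives the damping per step and the angle gives the oscillation frequency, which is exactly the (frequency, damping ratio, mode shape) information the synchrophasor paper extracts from measured data.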
|
a delay-locked loop using a synthesizer-based phase shifter for 3.2 gb/s chip-to-chip communication
|
a delay-locked loop in 0.18-μm cmos for chip-to-chip communication at 3.2 gb/s is presented. by leveraging the fractional-n synthesizer technique, this architecture provides fine-resolution and infinite-range delay and is less sensitive to process, temperature, and voltage variations than conventional techniques using a phase interpolator. a key element of the proposed structure is a digital sigma-delta modulator architecture that allows a high clock rate with compact area and reasonable power dissipation. the custom prototype ic operates at a 1.8-v supply voltage with a current consumption of 55 ma. the phase resolution is 1.4° and the measured differential and single-ended rms clock jitter is 3.6 ps and 4.8 ps, respectively. the core circuits occupy 0.42 mm².
|
adaptive low-power synchronization technique for multiple source-synchronous clocks in high-speed communication systems
|
advanced high-speed source-synchronous systems such as gddr5 use multiple source-synchronous clocks to increase memory bandwidth. therefore, well-defined phase relationships among multiple clocks are required to perform correct read/write operations. a gddr5 system solves this problem by adaptive clock synchronization training. for such multiple-clock synchronization training at the controller side, this paper proposes two simplified architectures based on: a) a unit-delay incrementer, and b) a pi (phase-interpolator) based pll (phase-locked loop). experiments show that the proposed unit-delay architecture consumes only 0.89 mw of power and 100 μm² of area in 65 nm, which is 16.8 times less power and 35 times less area than other works, while the power and area consumed in the pi-based pll architecture depend upon the complexity of the pi itself.
| 45
|
cross-platform software for the development of sign communication system: dactyl language modelling
|
a technology, implemented using cross-platform tools, is proposed for modeling gesture units of sign language and for dynamic mapping between states of gesture units and combinations of gesture structures (words, sentences). the technology implements simulated playback of gesture items and constructions using a virtual spatial hand model. thanks to the cross-platform tools, the technology can run on multiple platforms without a separate implementation for each platform.
|
modeling human hand movements, facial expressions, and articulation to synthesize and visualize gesture information
|
information and mathematical models are proposed for the animation of sign language communication based on a virtual human model. a model is developed to fix sign language morphemes and is used to create a technology and software to generate, store, and reproduce gestures. algorithmic solutions are proposed for the computation of human-like trajectories of hand and body movements in passing from one gesture to another, and also for facial expressions and articulation.
| 46
|
blind calibration algorithm for nonlinearity correction based on selective sampling
|
this paper proposes a blind calibration algorithm for suppressing nonlinearity in analog-to-digital converters (adcs). the proposed algorithm does not need any external calibration signal and is the first of its kind. it relies on the properties of downsampling and the orthogonality of sinusoidal signals to estimate the nonlinearity coefficients present in the system, and can remove even- and odd-order nonlinearities simultaneously. the working of the algorithm is demonstrated on a first-order ring-oscillator-based ΔΣ adc, whose performance is limited by the nonlinearity present in its system. built in 0.13 μm cmos, the algorithm improves the sndr of the adc by 39 db, while improving the sfdr by 45 db.
|
a 12-bit 20-msample/s pipelined analog-to-digital converter with nested digital background calibration
|
a 12-bit 20-msample/s pipelined analog-to-digital converter (adc) is calibrated in the background using an algorithmic adc, which is itself calibrated in the foreground. the overall calibration architecture is nested. the calibration overcomes the circuit nonidealities caused by capacitor mismatch and finite operational amplifier (opamp) gain both in the pipelined adc and the algorithmic adc. with a 58-khz sinusoidal input, test results show that the pipelined adc achieves a peak signal-to-noise-and-distortion ratio (sndr) of 70.8 db, a peak spurious-free dynamic range (sfdr) of 93.3 db, a total harmonic distortion (thd) of -92.9 db, and a peak integral nonlinearity (inl) of 0.47 least significant bit (lsb). the total power dissipation is 254 mw from 3.3 v. the active area is 7.5 mm² in 0.35-μm cmos.
| 47
|
retrieval performance and information theory
|
this paper challenges the meaningfulness of precision and recall values as a measure of performance of a retrieval system. instead, it advocates the use of a normalised form of shannon's functions (entropy and mutual information). shannon's four axioms are replaced by an equivalent set of five axioms which are more readily shown to be pertinent to document retrieval. the applicability of these axioms and the conceptual and operational advantages of shannon's functions are the central points of the work. the applicability of the results to any automatic classification is also outlined.
|
the knowledge in multiple human relevance judgments
|
we show first that the pooling of multiple human judgments of relevance provides a predictor of relevance that is superior to that obtained from a single human's relevance judgments. a learning algorithm trained on relevance judgments obtained from a single human would be expected to perform on new material at a level somewhat below that human. however, we examine two learning methods which, when trained on the superior source of pooled human relevance judgments, are able to perform at the level of a single human on new material. all performance comparisons are based on an independent human judge. both algorithms function by producing term weights, one by a log-odds calculation and the other by producing a least-squares fit to human relevance ratings. some characteristics of the algorithms are examined.
| 48
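The second abstract mentions a log-odds calculation for term weights learned from pooled relevance judgments. Here is a toy sketch of that kind of weighting; the smoothing constant, data layout, and scoring rule are my own assumptions, not the paper's exact formulation.

```python
import math

def log_odds_weights(docs, relevant, alpha=0.5):
    """per-term relevance weights from binary judgments via smoothed
    log-odds: w(t) = log odds(t | relevant) - log odds(t | non-relevant).
    docs is a list of token sets; relevant is the set of relevant indices."""
    n_rel = len(relevant)
    n_non = len(docs) - n_rel
    vocab = set().union(*docs)
    weights = {}
    for t in vocab:
        r = sum(1 for i in relevant if t in docs[i])
        nr = sum(1 for i in range(len(docs))
                 if i not in relevant and t in docs[i])
        p = (r + alpha) / (n_rel + 2 * alpha)    # smoothed p(t | relevant)
        q = (nr + alpha) / (n_non + 2 * alpha)   # smoothed p(t | non-relevant)
        weights[t] = math.log(p / (1 - p)) - math.log(q / (1 - q))
    return weights

def score(doc, weights):
    """rank a new document by summing the weights of its terms"""
    return sum(weights.get(t, 0.0) for t in doc)

docs = [{"grid", "power"}, {"grid", "load"},
        {"cooking", "recipe"}, {"recipe", "load"}]
weights = log_odds_weights(docs, relevant={0, 1})
```

Pooling several judges simply means the relevance labels fed in are majority or averaged judgments rather than one person's, which is the paper's central manipulation.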
|
automatic classification for analogue and digital modulation based on spectrum feature
|
an automatic recognition algorithm for distinguishing analogue and digital modulation signals based on a spectrum feature is introduced. the detected single-frequency value, extracted from the power spectrum of the square of the signal's envelope, serves as the characteristic parameter for automatically recognizing analogue and digital modulation. simulation results show that this method can achieve more than 99% classification accuracy when the snr is above 3 db. the scheme requires no prior knowledge, is easy to implement, is not sensitive to the roll-off factor of digital modulation signals, and has good practicability.
|
automatic classification of joint analog and digital modulated signal
|
a typical signal modulation identification method first divides signals into different modulation types and then performs within-class identification. in this paper, we propose a method to distinguish analog and digital signals by extracting the spectrum feature and the cyclostationarity feature of a signal. the simulation results show that this method has high classification accuracy at low snr and, compared with existing methods, is applicable to cpm signals.
| 49
|
an anycast routing algorithm supporting qos for service data
|
an anycast routing algorithm supporting qos for service data (arsd) is proposed. arsd pre-computes a path to the anycast destination using the reverse-path bandwidth, hop count and server load. the pre-computed path can fulfill the qos requirements of datagrams transmitted along the path's reverse direction, without worsening the load of the server at the path's end. arsd routes anycast datagrams along the path while resources are reserved for the service data requested by the anycast datagram. with arsd, resource reservation for service data requested by anycast datagrams can be accomplished without unicast qos routing protocols. the simulation results show that arsd can improve the bandwidth-reservation request acceptance ratio of service data requested by anycast datagrams, and balance the server load to some extent with little transmission delay of anycast datagrams. the message overhead and setup time of resource reservation can also be reduced with arsd.
|
a minimum interference anycast qos routing for mpls traffic engineering
|
anycast is a new service of ipv6. how to guarantee its quality of service (qos) has become an important subject. an anycast qos routing algorithm supporting traffic engineering (art) in multi-protocol label switching (mpls) networks is proposed. in order to select a qos path for a client's bandwidth request, information regarding the source-destination pairs is used to reduce the interference between the current request and future requests. art first assigns link weights associated with the links' residual capacity, server load and interference on the links from the anycast servers to the client. it then finds the best path using dijkstra's algorithm. art uses the constraint-routing label distribution protocol (cr-ldp) to reserve bandwidth on the label-switching path (lsp). simulations show art can balance the load of mpls networks, optimize the network resource utilization of anycast, and reduce delay and data loss during a server's data transfer.
| 50
|
the summarization of research and application of web2.0
|
the emergence and development of web2.0 deeply affects the fields of library and information services, electronic government, and knowledge management and services in enterprises. this paper sums up the background, concepts and characteristics of web2.0, summarizes the theoretical research and practical situation of web2.0 from five aspects including theoretical foundations, public information services, online business and education, and forecasts the trends of web2.0's future development.
|
research on web2.0-based anti-cheating mechanism for witkey e-commerce
|
web2.0 brings lots of amazing web applications. as a typical web2.0 application, witkey is an innovative e-commerce model. however, problems such as cheating and intellectual property seriously influence the sound development of witkey e-commerce. based on brief introduction to witkey model and its operational processes, this paper analyzes cheating phenomena on witkey websites in depth. after introducing current credit evaluation methods used by existing witkey websites and their problems, a new credit evaluation model adopting web2.0 concept for witkeys is explored. based on digg voting mechanism and rss technique, an anti-cheating mechanism for witkey e-commerce is proposed, which is combined with witkey credit evaluation system.
| 51
|
game theory analysis on underground space planning knowledge sharing via web 2.0
|
modern underground space planning knowledge sharing no longer depends only on face-to-face meetings, lectures or phone calls. the conventional read-only web 1.0, designed with html, helped us learn about worldwide underground space planning. nevertheless, in the era of web 1.0, online users could only read; they could not add anything new or express their ideas. the shift to web 2.0 has been regarded as an evolution of the world wide web from a static to a dynamic concept. rapid and dynamic cross-border knowledge sharing among planners becomes possible with the help of web 2.0. the use of web 2.0 as a knowledge-sharing method, however, is still not very common among urban planners. this paper reviews the web 2.0 tools available to underground space planners and analyzes their motivations for knowledge sharing via game theory.
|
resistance and motivation to share sustainable development knowledge by web 2.0
|
the concept of sustainable development, development which meets the needs of the present generation without depriving future generations of theirs, has been on the lips of many political leaders, educators, ngos and green groups. living in an age of knowledge explosion, we want to receive the most up-to-date information and knowledge. the web 2.0 revolution provides the best solution for all those hungry knowledge seekers. this paper sheds light on the major resistance and motivations around sustainable knowledge sharing.
| 52
|
a transformless electric spring with decoupled real and reactive power control
|
the electric spring has emerged as a promising technique to stabilize electrical grids with high penetration of renewable energy sources. in response to the grid frequency deviation, electric springs can flexibly regulate the noncritical load power. however, the previous versions of electric springs either suffer from the coupling effect between real and reactive powers, or require associated energy storage systems, or need an extra low-frequency transformer for the isolation purpose. to overcome these shortcomings, a transformless electric spring and its associated control strategy are proposed in this paper. the proposed electric spring can independently regulate the real and reactive power without using batteries for maintaining the dc-link voltage or handling power flow. finally, real time hardware-in-loop (hil) simulation results are provided for verification.
|
exploration of the relationship between inertia enhancement and dc-link capacitance for grid-connected converters
|
grid-connected converters (gccs) are showing potential in providing virtual inertia and have attracted wide attention recently. the virtual inertia emulated by gccs is proportional to the dc-link capacitance; thus, the dc-link capacitance can directly affect the dynamic performance of a gcc emulating inertia by modifying the inertia constant. however, the impacts of the dc-link capacitance have not previously been discussed in the literature. considering this issue, the influence of the dc-link capacitance on the dynamic performance of a gcc providing virtual inertia is analyzed in this paper. in addition, a selection approach for the dc-link capacitance is presented by tuning the system damping ratio within its optimal range. simulations verify the correctness of the analysis.
| 53
|
data detection method from printed images with different resolutions using tablet device
|
in this paper, we propose a method of embedding and detecting data in printed images with several resolutions using the camera of a tablet device. the proposed method is based on a method that uses block division and code diffusion. to specify the resolution of an image and the number of blocks, invisible markers that are embedded in the amplitude domain of the discrete fourier transform of the target image are used. the angles between the markers and the x-axis are used to specify the resolution and the number of blocks. the proposed method can increase the variety of images suitable for data embedding. from experimental results obtained using a tablet device, it is shown that the proposed method is effective for specifying the resolution and the number of blocks in a captured image.
|
a method for data embedding to printed images based on use of original images
|
data embedding to printed images has become an important issue for several applications. in this paper, we assume an original image to be known and the server based data retrieval model, and a new method for the data embedding to the printed images is proposed. to embed many number of data bits, we adopt “walsh code” for diffusion code, which is strong to the interference between diffusion codes. this can be also applied to the improvement of the detection process and we propose the embedding of same data to three blocks. this technique gives the tolerance to some distortion and noises. in the detection processing, a taken image is forwarded to the server which holds the original image and the embedded data is detected in the server. therefore, more accurate detection becomes possible.
| 54
|
moving to videokifu: the last steps toward a fully automatic record-keeping of a go game
|
in a previous paper [ arxiv:1508.03269 ] we described the techniques we successfully employed for automatically reconstructing the whole move sequence of a go game by means of a set of pictures. now we describe how it is possible to reconstruct the move sequence by means of a video stream (which may be provided by an unattended webcam), possibly in real-time. although the basic algorithms remain the same, we will discuss the new problems that arise when dealing with videos, with special care for the ones that could block a real-time analysis and require an improvement of our previous techniques or even a completely brand new approach. eventually we present a number of preliminary but positive experimental results supporting the effectiveness of the software we are developing, built on the ideas here outlined.
|
use of the hough transformation to detect lines and curves in pictures
|
hough has proposed an interesting and computationally efficient procedure for detecting lines in pictures. this paper points out that the use of angle-radius rather than slope-intercept parameters simplifies the computation further. it also shows how the method can be used for more general curve fitting, and gives alternative interpretations that explain the source of its efficiency.
| 55
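The second abstract's contribution is the angle-radius (rho = x cos θ + y sin θ) parameterisation of the hough transform. A minimal accumulator sketch in that parameterisation (resolution choices and test data are my own):

```python
import math

def hough_peak(points, n_theta=180, rho_res=1.0):
    """angle-radius hough transform: each point (x, y) votes for every
    discretised line rho = x*cos(theta) + y*sin(theta) through it; the
    accumulator cell with the most votes is the dominant line"""
    acc = {}
    for x, y in points:
        for i in range(n_theta):
            theta = math.pi * i / n_theta
            rho = rho_res * round((x * math.cos(theta)
                                   + y * math.sin(theta)) / rho_res)
            acc[(rho, i)] = acc.get((rho, i), 0) + 1
    (rho, i), votes = max(acc.items(), key=lambda kv: kv[1])
    return rho, math.pi * i / n_theta, votes

# ten collinear points on the horizontal line y = 5, plus two outliers
pts = [(float(x), 5.0) for x in range(10)] + [(2.0, 9.0), (7.0, 1.0)]
rho, theta, votes = hough_peak(pts)
```

The horizontal line surfaces as rho = 5 at theta near π/2, and the outliers do not disturb the peak, which is the robustness property that makes the transform useful for the go-board detection in the first paper.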
|
research on vlsi test compression
|
this paper surveys recent classic vlsi test compression methods, analyses the characteristics of each method, compares them, and expounds the advantages and disadvantages of each proposed approach. the main purposes of test compression are to reduce test application time, hardware overhead and test cost. the test compression solutions are mainly based on scan chains, and x-filling methods are adopted for test compression. finally, this paper discusses the prospects of vlsi test compression.
|
a dictionary-based test data compression method using tri-state coding
|
with the rapid development of integrated circuit manufacturing processes, the degree of integration of system-on-chip (soc) has increased dramatically. in this paper, a dictionary-based test data compression method with tri-state coding is proposed to reduce the increasing test data volume. first, partial input reduction is used to preprocess the test set; then tri-state coding is presented to mark the index so that encoding can occur anywhere in the test set. experimental results on iscas'89 benchmark circuits show that the average compression rate of the proposed scheme reaches 73.92% while increasing the hardware area overhead only slightly.
| 56
|
towards a faster simulation of systemc designs
|
accelerating simulation is one of the main reasons behind the introduction of system-level modeling. here systemc is one of the main players, proven to speed up simulation in comparison to classical hdl languages. however, the kernel architecture of the systemc simulator treats the design as a black box: for instance, all active processes are executed without checking whether they are relevant to the test plan. we illustrate the performance of our approach on a set of models built on top of the master/slave library shipped with the systemc release, at two levels of abstraction: untimed functional (utf) and bus-cycle-accurate (bca).
|
systemc validation of a low power analog cmos image sensor architecture
|
in a context of embedded steady camera for video surveillance with high performance requirements and hard power consumption constraints, a low power cmos image sensor architecture allowing sensor's acuity adaptation to the scene activity is considered. in this paper we present an original approach based on systemc modeling to validate a complex analog simd architecture (i.e. highly parallel and programmable) and the implemented algorithm.
| 57
|
imos: enabling voip qos monitoring at intermediate nodes in an openflow sdn
|
the growing popularity of outsourced enterprise voip services poses a significant quality assurance issue for service providers. voip traffic is very sensitive to network impairments and maintaining high qos across multiple domains can be challenging. we propose to use sdn and our implementation of intermediate voip call quality measurement to provide an advanced voip monitoring service. our solution can automatically detect and locate quality issues for voip traffic.
|
experience of developing an openflow sdn prototype for managing iptv networks
|
iptv is a method of delivering tv content to end-users that is growing in popularity. it is a paid service, hence the implications of poor video quality may ultimately be a loss of revenue for the provider. consequently, it is vital to provide service monitoring and reconfiguration mechanisms to ensure that quality requirements set out in service level agreements are upheld. this paper describes our experience of building an iptv software-defined network testbed that can be used to develop and validate new approaches for service assurance in iptv networks. the testbed is modular, and many of the concepts detailed in this tutorial may be applied to the management of other end-to-end services.
| 58
|
eye movements as implicit relevance feedback
|
reading detection is an important step in the process of automatic relevance feedback generation based on eye movements for information retrieval tasks. we describe a reading detection algorithm and present a preliminary study to find expressive eye movement measures.
|
arfled: ability recognition framework for learning and education
|
learning is one of the vital behaviors of human beings. this paper demonstrates a framework to augment learning activities by packaging two key ideas: eyetifact and hypermind. eyetifact is a system that converts eye-movement data across different sensing devices so that a large amount of training data can be collected for machine learning. hypermind is a digital textbook that displays learning materials dynamically based on a learner's cognitive states as measured by several sensors. in order to implement these two ideas, we have conducted experiments related to eyewear computing, textbook reading behavior analysis, and stress sensing. the contributions of this research are to investigate approaches that recognize human abilities and to transfer them from experts to others.
| 59
|
programmable vision processor/controller for flexible implementation of current and future image compression standards
|
the vision processor (vp) and vision controller (vc), two integrated products dedicated to video compression, are discussed. the chips implement the p*64, jpeg, and mpeg image compression standards. the vp forms the heart of the image compression system. it performs discrete cosine transform (dct), quantization, and motion estimation, as well as inverse dct, and inverse quantization. the highly parallel and microcode-based processor performs all of the jpeg, mpeg, and p*64 algorithms. the vc smart microcontroller controls the compression process and provides the interface to the host system. it captures pixels from a video source, performs video preprocessing, supervises pixel compression by the vp, performs huffman encoding, and passes the compressed data to the host over a buffered interface. it takes compressed data from the host, performs decoding, supervises decompression via the vp, performs postprocessing, and generates digital pixel output for a video destination such as a monitor.
|
a case study in pipeline processor farming: parallelising the h.263 encoder
|
this paper describes the parallelisation of the h.263 hybrid video encoder algorithm based upon the pipeline of processor farms (ppf) paradigm. in addition, a data-farming template, which can be very useful for several image coding algorithms, was incorporated in the ppf model. a variety of parallel topologies were implemented in order to obtain the best time performance for an eight-processor distributed-memory machine. results show that, due to communication overheads and algorithm constraints, the speed-up performance is below the value predicted by static analysis. however, the design examples indicated how to modify the ppf methodology to identify those algorithm components which restrict scaling performance. the paper highlights the problems associated with the parallelisation of sequential algorithms and emphasises the need for generic tools to facilitate such conversion.
| 60
|
machine translation pre-training for data-to-text generation -- a case study in czech
|
while there is a large body of research studying deep learning methods for text generation from structured data, almost all of it focuses purely on english. in this paper, we study the effectiveness of machine translation based pre-training for data-to-text generation in non-english languages. since the structured data is generally expressed in english, text generation into other languages involves elements of translation, transliteration and copying - elements already encoded in neural machine translation systems. moreover, since data-to-text corpora are typically small, this task can benefit greatly from pre-training. based on our experiments on czech, a morphologically complex language, we find that pre-training lets us train end-to-end models with significantly improved performance, as judged by automatic metrics and human evaluation. we also show that this approach enjoys several desirable properties, including improved performance in low data scenarios and robustness to unseen slot values.
|
findings of the e2e nlg challenge
|
this paper summarises the experimental setup and results of the first shared task on end-to-end (e2e) natural language generation (nlg) in spoken dialogue systems. recent end-to-end generation systems are promising since they reduce the need for data annotation. however, they are currently limited to small, delexicalised datasets. the e2e nlg shared task aims to assess whether these novel approaches can generate better-quality output by learning from a dataset containing higher lexical richness, syntactic complexity and diverse discourse phenomena. we compare 62 systems submitted by 17 institutions, covering a wide range of approaches, including machine learning architectures -- with the majority implementing sequence-to-sequence models (seq2seq) -- as well as systems based on grammatical rules and templates.
| 61
|
design and implementation of a module generator for low power multipliers
|
multiplication is an important part of real-time system applications. various hardware parallel multipliers used in such applications have been proposed. however, when the operand sizes of the mult ...
|
design of efficient add/shift algorithm for multiple constant multiplication
|
this paper deals with the remarkable linear phase and stability properties of fir filters, which make them appealing for a wide range of fir filter applications. constant filter coefficients are obtained from matlab r2011b, and optimization of these constant coefficients is then performed using the xilinx ise 13.1 tool. constant multiplication can be handled by the widely used and accepted mcm (multiple constant multiplication) technique, which computes the products of a set of constants with a single variable; in an fir filter, the set of constants refers to the coefficients and the variable refers to the input x(n). the constant coefficients are optimized using the cse, csd and gb techniques.
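the add/shift idea above can be sketched in code: a constant is recoded into canonical signed digit (csd) form, and the multiplication is then realized with shifts, additions and subtractions only. this is an illustrative sketch of the general technique, not the paper's actual fir synthesis flow:

```python
# sketch: canonical signed digit (csd) recoding of a constant, and
# constant multiplication realized with only shifts and adds/subtracts.

def to_csd(c):
    """return csd digits (lsb first), each in {-1, 0, 1}, no two adjacent nonzero."""
    digits = []
    while c != 0:
        if c & 1:
            d = 2 - (c & 3)  # 1 if c % 4 == 1, else -1 (c % 4 == 3)
            c -= d
        else:
            d = 0
        digits.append(d)
        c >>= 1
    return digits

def csd_multiply(x, c):
    """multiply x by the constant c using shifts and adds, per the csd digits."""
    acc = 0
    for shift, d in enumerate(to_csd(c)):
        if d == 1:
            acc += x << shift
        elif d == -1:
            acc -= x << shift
    return acc
```

csd recoding guarantees no two adjacent nonzero digits, which minimizes the number of add/subtract operations for a given constant.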
| 62
|
claynet: content adaptation in m-learning
|
m-learning is the natural evolution of e-learning based on the use of mobile devices. claynet, as an e-learning platform, takes this step by developing a web service that allows users of mobile devices to access the resources of the e-learning platform.
|
applications of service oriented architecture for the integration of lms and m-learning applications
|
a fuel injection pump for internal combustion engines which has an rpm-proportional injection onset adjusting device, a pressure control valve controlling the supply pressure (phd f), a cold-start acceleration device having a pressure valve associated with the pressure control valve, and a full-load stop, controlled by an adjustment device, for limiting the maximum full-load quantity injected by the fuel injection pump. the adjustment device has a piston, which is acted upon on one end by the supply pressure in the suction chamber of the injection pump and on the other end by a pressure (pa) that is made to differ from the supply pressure by means of two separate throttles. the two throttles have the effect that if the pressure valve is closed, the adaptation device is subjected to a differential pressure which is approximately equal to that when the pressure valve is opened, and so the full-load courses attained in normal operation and in cold operation are virtually identical.
| 63
|
reducing power consumption of wireless networks through collaborative dmc mobile clusters
|
reducing the energy consumption of the wireless networks is significantly important for the economic and ecological sustainability of the ict industry, as high energy consumption may limit the performance of wireless networks, and is one of the main network costs. to solve the energy consumption problem, especially on the terminal side, a scheme known as distributed mobile cloud (dmc) is considered to be a potential solution. multiple mobile terminals (mts) can cooperatively take advantage of good quality links among the mts to save energy when receiving from the base station. in this paper, we aim to find the optimal transmit power to further reduce the energy consumption of dmc. from simulation studies, it is shown that up to 80% energy savings can be accomplished when using optimal transmit power, compared to using the standard dmc without exploring the optimal transmit power.
|
a survey on mobile data offloading: technical and business perspectives
|
over the last few years, data traffic over cellular networks has seen an exponential rise, primarily due to the explosion of smartphones, tablets, and laptops. this increase in data traffic on cellular networks has caused an immediate need for offloading traffic for optimum performance of both voice and data services. as a result, different innovative solutions have emerged to manage data traffic. some of the key technologies include wi-fi, femtocells, and ip flow mobility. the growth of data traffic is also creating challenges for the backhaul of cellular networks; therefore, solutions such as core network offloading and media optimization are also gaining popularity. this article aims to provide a survey of mobile data offloading technologies including insights from the business perspective as well.
| 64
|
algorithm of ontology transformation to concept map for usage in semantic web expert system
|
the main purpose of this paper is to present an algorithm of owl (web ontology language) ontology transformation to a concept map for subsequent generation of rules, and also to evaluate the efficiency of this algorithm. these generated rules are necessary to supplement and even to develop the knowledge base of swes (semantic web expert system). this paper is a continuation of earlier research on owl ontology transformation to rules.
|
towards the semantic web expert system
|
the paper presents a conception of the semantic web expert system, which is the logical continuation of expert system development. the semantic web expert system emerges as the result of the evolution of the expert system concept: the expert system moves toward the web and uses new semantic web technologies. the proposed conception of the semantic web expert system promises to have new useful features that distinguish it from other types of expert systems.
| 65
|
a comprehensive mobile e-healthcare system
|
bio-sensors and communication devices such as smart phones have seen significant progress. to take advantage of these advances, we propose a mobile health-care system that reduces the distance between the patient and the health-care center, especially for patients who need long-term nursing care. this mobile system provides good monitoring of the patient both outdoors and indoors. the system is divided into two parts: the first is a mobile phone application installed on a smart phone connected to bio-sensors; the second is a database center connected to different health-care centers and the ambulance center. the application records the bio-signals from the sensors, decides the patient's status, and sends important information to the center, where it is added to the patient's history, which can be accessed by the physician from anywhere.
|
building long-distance health care network using minimized portable sensors and active alert system
|
the accelerating aging of the population around the world is becoming more obvious. in the usa there will be about 72 million people aged 65 or over by 2030, and in taiwan the elderly will make up nearly 36.98% of the population by 2050. those people will need health care too. nowadays, health care in taiwan focuses on passive home care; however, the devices that elders can carry outdoors are large and heavy, and they cannot actively send an alert message to the medical units when the patient has a health problem. therefore, this study proposes a miniaturized vital-sign detection and active alert system. the system gives elders privacy, convenience, and security as well. we hope this study can not only reduce the medical units' workload but also lower the cost of medical resources, and moreover enhance the quality of life of the patients.
| 66
|
secure and efficient search technique in cloud computing
|
cloud computing is a widely used technology nowadays, and it is increasingly regarded as the new era for mobile as well as steady computing environments. in cloud computing, data privacy and security are essential, which is why data stored in the cloud server database requires encryption. this, however, complicates access to the cloud data. it is therefore important to improve trust in the cloud server without making its use a computationally complex task, and the process should not increase the burden on the overall system. this paper presents a brief review of various methodologies that help the user store data securely and access it efficiently. a secure and efficient system is then proposed to reduce the burden on the system, decreasing complexity and improving the performance of the overall system.
|
mona: secure multi-owner data sharing for dynamic groups in the cloud
|
in this paper, we present a secure multi-owner data sharing scheme for dynamic groups in cloud computing. by leveraging group signature and dynamic broadcast encryption techniques, any cloud user can anonymously share data with others. we propose a new model for sharing secure data in the cloud for multi-user groups. one of the biggest concerns with cloud data storage is data integrity verification at untrusted servers. to preserve data privacy, the basic solution is to encrypt data files and then upload the encrypted data into the cloud. to address this problem, an efficient method, mona, was recently presented for secure multi-owner data sharing; however, we identified some limitations of that approach in terms of reliability and scalability. hence, in this paper, we extend the basic mona scheme by adding reliability and improving scalability through a dynamically increased number of group managers.
| 67
|
learning to plan probabilistically from neural networks
|
this paper discusses the learning of probabilistic planning without a priori domain-specific knowledge. different from existing reinforcement learning algorithms that generate only reactive policies, and from existing probabilistic planning algorithms that require a substantial amount of a priori knowledge in order to plan, we devise a two-stage bottom-up learning-to-plan process: reinforcement learning/dynamic programming is first applied, without the use of a priori domain-specific knowledge, to acquire a reactive policy, and explicit plans are then extracted from the learned reactive policy. plan extraction is based on a beam search algorithm that performs temporal projection in a restricted fashion, guided by the value functions resulting from the reinforcement learning/dynamic programming. experiments and theoretical analysis are presented.
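a minimal sketch of the two-stage process described above, on a toy deterministic chain mdp (the states, actions and rewards are illustrative assumptions, not the paper's domains): dynamic programming first learns a value function with no domain-specific knowledge, and an explicit plan is then extracted by greedy projection guided by that value function:

```python
# stage 1: value iteration learns a value function; stage 2: a plan is
# read off the learned values by greedy temporal projection.

def value_iteration(n_states, actions, step, reward, gamma=0.9, iters=200):
    v = [0.0] * n_states
    for _ in range(iters):
        v = [max(reward(s, a) + gamma * v[step(s, a)] for a in actions)
             for s in range(n_states)]
    return v

def extract_plan(start, goal, v, actions, step, reward, gamma=0.9):
    """greedy projection guided by the learned value function."""
    plan, s = [], start
    while s != goal and len(plan) < 100:
        a = max(actions, key=lambda a: reward(s, a) + gamma * v[step(s, a)])
        plan.append(a)
        s = step(s, a)
    return plan

# toy chain of 5 states; reaching the goal state 4 yields reward 1.
N = 5
step = lambda s, a: max(0, min(N - 1, s + (1 if a == "right" else -1)))
reward = lambda s, a: 1.0 if step(s, a) == N - 1 and s != N - 1 else 0.0
v = value_iteration(N, ["left", "right"], step, reward)
print(extract_plan(0, N - 1, v, ["left", "right"], step, reward))
```

the paper's beam search generalizes this single-path greedy rollout by keeping several candidate projections at each step.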
|
planning in a hierarchy of abstraction spaces
|
a problem domain can be represented as a hierarchy of abstraction spaces in which successively finer levels of detail are introduced. the problem solver abstrips, a modification of strips, can define an abstraction space hierarchy from the strips representation of a problem domain, and it can utilize the hierarchy in solving problems. examples of the system's performance are presented that demonstrate the significant increases in problem-solving power that this approach provides. then some further implications of the hierarchical planning approach are explored.
| 68
|
an interactive software package for the investigation of hydrodynamic-slider bearing-lubrication
|
the temperature-dependent character of viscosity complicates the numerical analysis of hydrodynamic slider bearings, and the geometry of the flow cavity plays a significant role in the design and performance of lubrication systems. in this paper, we present a recent software tool, named ''hydro-lub,'' capable of performing constant- and variable-viscosity runs for various pad styles with moving boundaries. results of the demonstration project are not only consistent with the available literature but also show the fast and reliable character of the package, which in turn puts forward the advantages of applying the program in the lubrication courses of mechanical engineering. © 2003 wiley periodicals, inc. comput appl eng educ 11: 103–115, 2003; published online in wiley interscience (www.interscience.wiley.com); doi 10.1002/cae.10047
|
nonadiabatic and frictional constant area duct flow: a visual software based simulation for compressible systems
|
numerical investigation of compressible flows in constant area ducts becomes substantially complicated when the surface roughness and heat flux conditions simultaneously act as independent and considerable boundary conditions that create significant influences on the flow and heat transfer characteristics. this article presents gas-dyn v2.0, a specialized software package, capable of handling nonadiabatic and frictional systems with the contribution of a database, developed also by the author, which involves the real time temperature-dependent gas properties. results of the presented computational analysis are in harmony with the available literature, which not only indicates the reliability of the package but also points out its adaptability to msc and phd level research studies and for the compressible flow system design projects. © 2006 wiley periodicals, inc. comput appl eng educ 14: 64–75, 2006; published online in wiley interscience (www.interscience.wiley.com); doi 10.1002/cae.20068
| 69
|
relation-based file management for portable device
|
as the storage capacity of ce devices has increased, the number and variety of files has grown accordingly. traditional file systems generally use a hierarchical directory/file structure to organize files: to store and retrieve a file, we must know its name and location exactly, and only one access path is available to users. such file systems are not adequate for managing files on portable storage. we present webfs to effectively manage files stored on mobile storage and to provide users with a convenient way to retrieve files. webfs represents complex file information using extended file metadata and provides varied and effective ways to access files through inter-file relationships.
|
richer file system metadata using links and attributes
|
traditional file systems provide a weak and inadequate structure for meaningful representations of file interrelationships and other context-providing metadata. existing designs, which store additional file-oriented metadata either in a database, on disk, or both are limited by the technologies upon which they depend. moreover, they do not provide for user-defined relationships among files. to address these issues, we created the linking file system (lifs), a file system design in which files may have both arbitrary user- or application-specified attributes, and attributed links between files. in order to assure performance when accessing links and attributes, the system is designed to store metadata in non-volatile memory. this paper discusses several use cases that take advantage of this approach and describes the user-space prototype we developed to test the concepts presented.
| 70
|
junction tree decomposition for parallel exact inference
|
we present a junction tree decomposition based algorithm for parallel exact inference. this is a novel parallel exact inference method for evidence propagation in an arbitrary junction tree. if multiple cliques contain evidence, the performance of any state-of-the-art parallel inference algorithm achieving logarithmic time performance is adversely affected. in this paper, we propose a new approach to overcome this problem. we decompose a junction tree into a set of chains. cliques in each chain are partially updated after the evidence propagation. these partially updated cliques are then merged in parallel to obtain fully updated cliques. we derive the formula for merging partially updated cliques and estimate the computation workload of each step. experiments conducted using mpi on state-of-the-art clusters showed that the proposed algorithm exhibits linear scalability and superior performance compared with other parallel inference methods.
|
global conditioning for probabilistic inference in belief networks
|
in this paper we propose a new approach to probabilistic inference on belief networks, global conditioning, which is a simple generalization of pearl's (1986b) method of loop-cutset conditioning. we show that global conditioning, as well as loop-cutset conditioning, can be thought of as a special case of the method of lauritzen and spiegelhalter (1988) as refined by jensen et al (1990a; 1990b). nonetheless, this approach provides new opportunities for parallel processing and, in the case of sequential processing, a tradeoff of time for memory. we also show how a hybrid method (suermondt and others 1990) combining loop-cutset conditioning with jensen's method can be viewed within our framework. by exploring the relationships between these methods, we develop a unifying framework in which the advantages of each approach can be combined successfully.
| 71
|
image deformation based on cubic splines and moving least squares
|
based on cubic splines, an image deformation method using moving least squares is proposed in this paper. according to shape information or deformation requirements, key points are set to create control curves; the curves are then moved to new positions to guide the image warping. transformation functions are derived to compute affine, similarity and rigid deformations, with which various image deformation results can be obtained. experimental results show that our method can describe outlines and achieve realistic deformation for complex image warping.
|
sketch-based aesthetic product form exploration from existing images using piecewise clothoid curves
|
we present a new sketch-based product form exploration technique that works from images and sketches of existing products. at the heart of our approach is a multi-stroke curve beautification method and a curve-based image deformation algorithm. the proposed approach converts groups of strokes into piecewise clothoid curves in order to produce visually pleasing shapes. the deformation diffusion algorithm then spatially distributes the user-specified deformations throughout the image to produce smooth transformations from the original image to the resulting image. we demonstrate the technique on a variety of images including photo-realistic images, real product images, and sketches.
| 72
|
the analysis of frequency deviation on synchrophasor calculation and correction methods
|
this paper gives a brief introduction to the phasor measurement unit (pmu) in the wide area measurement system (wams) of power systems, and studies the error characteristics of the discrete fourier calculation of the phasor for a sinusoidal signal whose frequency deviates from the nominal frequency. a program that calculates the phasor using the discrete fourier transform was implemented on the matlab platform, and the off-nominal-frequency case was simulated and analyzed in detail. the simulation results show that the error of the calculated phasor traces an error ellipse. finally, several error correction methods are introduced for calculating the accurate phasor in power systems.
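the error mechanism discussed above can be reproduced with a one-cycle dft phasor estimate: at the nominal 50 hz the magnitude is exact, while a 52 hz input leaks across the dft bin and the computed phasor deviates. the sampling rate and amplitude are illustrative assumptions:

```python
# one-cycle dft phasor of a sinusoid at nominal 50 hz and at an off-nominal
# 52 hz input, showing the magnitude error the paper analyzes.
import cmath, math

FS, F0, N = 1600.0, 50.0, 32           # sampling rate, nominal freq, samples/cycle

def dft_phasor(samples):
    """fundamental-frequency phasor from one nominal cycle of samples."""
    acc = sum(x * cmath.exp(-2j * math.pi * k / N) for k, x in enumerate(samples))
    return math.sqrt(2) / N * acc       # scaled so |phasor| = rms amplitude

def signal(f, amp=100.0, n=N):
    return [amp * math.cos(2 * math.pi * f * k / FS) for k in range(n)]

nominal = abs(dft_phasor(signal(50.0)))
off     = abs(dft_phasor(signal(52.0)))
print(round(nominal, 3), round(off, 3))  # off-nominal magnitude deviates
```

repeating the off-nominal estimate over successive windows traces out the oscillating (elliptical) error the abstract describes.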
|
fourier averaging algorithm for improving accuracy of basic wave voltages and currents calculations on industrial frequencies
|
accurate and fast acquisition of the effective values (or amplitudes) and phases of real-time basic wave currents and voltages is needed to guarantee the stability, security and economy of power systems. accuracy and speed are also important performance indices of various microprocessor relay protection devices and automatic control equipment. however, when the frequency of the power system is not 50 hz, using fixed sampling intervals with fast fourier transforms (fft) to calculate the basic wave current and voltage causes significant error. this paper presents a practical fft-based averaging algorithm that significantly improves the calculation accuracy of the basic wave current and voltage vectors. with this algorithm, when the power system frequency deviates by ±2 hz from the nominal frequency, the calculation error for the basic wave amplitude is limited to less than 0.5%, and the phase error is reduced significantly as well.
| 73
|
application of improved dv-hop localization algorithm in port container positioning
|
to overcome the high cost and poor precision of traditional positioning systems in warehouse and container terminal management, rssi (received signal strength indication) technology is used for the initial localization of containers, and a genetic dv-hop localization algorithm is then introduced for error correction. the simulation results show that the improved dv-hop algorithm increases the positioning accuracy of the nodes and reduces the positioning error, enabling real-time monitoring of the position, inventory and shift information of the goods. this is conducive to improving the efficiency of container management and reflects the actual distribution of the containers more accurately.
|
research on distance measurement based on rssi of zigbee
|
distance measurement based on rssi, featuring low communication overhead and low complexity, is widely applied in the range-based localization of wireless sensor networks. we first analyze the theory of distance measurement based on rssi and the influence of the environment on rssi, and then we propose three experimental data processing methods. after using the zigbee-based hardware platform to test the measurement error of the three methods, we conclude that the measurement error of the gauss model is within 2 meters at ranges up to 20 meters, without considering environmental effects.
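rssi ranging of this kind is conventionally based on the log-distance path-loss model; a minimal sketch, with the 1 m reference rssi `A` and path-loss exponent `N_EXP` as assumed example values:

```python
# log-distance path-loss model used for rssi ranging, and its inverse.
import math

A, N_EXP = -45.0, 2.7   # assumed: rssi at 1 m (dbm), path-loss exponent

def rssi_at(d):
    """expected rssi (dbm) at distance d metres under the model."""
    return A - 10 * N_EXP * math.log10(d)

def distance_from(rssi):
    """invert the model: estimated distance from a measured rssi."""
    return 10 ** ((A - rssi) / (10 * N_EXP))

print(round(distance_from(rssi_at(12.0)), 6))  # round-trips to 12.0
```

in practice `A` and `N_EXP` are fitted per environment, and rssi samples are smoothed (e.g. with the gauss model mentioned above) before inversion.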
| 74
|
a location-aware smart bus guide application for seoul
|
the goal of our research is to develop a smart context-aware guide system that provides smart, personalized guide services based on implicit awareness of context. as a context-aware guide application, we have been developing a location-aware smart bus guide system for seoul. it guides users to nearby bus stops and provides them with information about the bus lines at those stops.
|
developing a context-aware electronic tourist guide: some issues and experiences
|
in this paper, we describe our experiences of developing and evaluating guide, an intelligent electronic tourist guide. the guide system has been built to overcome many of the limitations of the traditional information and navigation tools available to city visitors. for example, group-based tours are inherently inflexible with fixed starting times and fixed durations and (like most guidebooks) are constrained by the need to satisfy the interests of the majority rather than the specific interests of individuals. following a period of requirements capture, involving experts in the field of tourism, we developed and installed a system for use by visitors to lancaster. the system combines mobile computing technologies with a wireless infrastructure to present city visitors with information tailored to both their personal and environmental contexts. in this paper we present an evaluation of guide, focusing on the quality of the visitor's experience when using the system.
| 75
|
on scheduling 3d model transmission in network virtual environments
|
view dependent 3d-geometry streaming allows navigation of large virtual environments without requiring replication of the entire database at the client. we present scheduling policies for optimal transmission of multiple multiresolution 3d-geometry objects over congested network links. using multiresolution objects allows smooth adaptation of the visual quality of the graphics environment with network bandwidth fluctuations. a greedy scheduling policy is used for minimizing the visual error received at the client. the performance is further increased using a look-ahead scheduling policy. finally a scheduling policy using multicast transmission of 3d models is presented. the scheduling policy combines unicast and multicast transmissions in order to provide optimal visual quality of the virtual environment rendered at the client.
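the greedy policy described above can be sketched as follows: each pending refinement is ranked by visual-error reduction per transmitted byte, and refinements are sent in that order while they fit the remaining bandwidth budget. the object names and numbers are illustrative, and inter-level dependencies are ignored for brevity:

```python
# greedy transmission schedule for multiresolution object refinements:
# send the refinement with the best error reduction per byte first.
import heapq

def greedy_schedule(refinements, budget):
    """refinements: list of (name, bytes, error_reduction). returns send order."""
    heap = [(-err / size, name, size) for name, size, err in refinements]
    heapq.heapify(heap)                 # min-heap on negated gain = max gain first
    order, used = [], 0
    while heap:
        gain, name, size = heapq.heappop(heap)
        if used + size <= budget:
            order.append(name)
            used += size
    return order

refs = [("teapot_l1", 100, 50.0), ("car_l1", 200, 40.0), ("car_l2", 400, 30.0)]
print(greedy_schedule(refs, 500))
```

the paper's look-ahead and multicast policies extend this baseline by anticipating future viewpoints and sharing transmissions among clients.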
|
multi-resolution next location prediction for distributed virtual environments
|
the computing performance of today's graphics hardware is growing fast, as is the amount of rendered data. modern graphics engines make it possible to use an arbitrary number of textures at arbitrary resolutions. on the other hand, high-quality distributed 3d virtual environments cannot exploit this computational power because of limited network bandwidth; the problem mainly appears when the designers of such environments use high-resolution textures. to overcome this streaming bottleneck, an efficient prefetching scheme is needed. instead of a blind greedy scheduling policy, we propose a scheme that exploits the movement history of users to realize a look-ahead policy, enabling clients to retrieve potentially rendered data in advance. the prediction itself is performed by markov chains, thanks to their fast learning, in conjunction with a 2-state predictor that increases the ability of the scheduling system to adapt to new habits of particular users.
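the markov-chain prediction step can be sketched as a first-order transition-count model over visited regions (the 2-state adaptivity predictor from the abstract is omitted for brevity; the location names are illustrative):

```python
# first-order markov chain over visited regions, learned from a movement
# history, predicting the most likely next location to prefetch.
from collections import Counter, defaultdict

class MarkovPredictor:
    def __init__(self):
        self.counts = defaultdict(Counter)

    def train(self, history):
        for prev, nxt in zip(history, history[1:]):
            self.counts[prev][nxt] += 1

    def predict(self, current):
        """most frequently observed successor of `current`, or None."""
        nxt = self.counts[current]
        return nxt.most_common(1)[0][0] if nxt else None

p = MarkovPredictor()
p.train(["lobby", "hall", "gallery", "hall", "gallery", "hall", "exit"])
print(p.predict("hall"))  # 'gallery'
```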
| 76
|
structured ontology and ir for email search discovery
|
this paper discusses an email discovery and information retrieval tool based on formal concept analysis. the program allows users to navigate email using a visual lattice metaphor rather than a tree. it implements a virtual file structure over email where files and entire directories can appear in multiple positions in a given view. the content and shape of the lattice formed by the conceptual ontology can assist in email discovery. the system described provides more flexibility in retrieving stored emails than what is normally available in email clients. the paper discusses how conceptual ontologies can leverage traditional information retrieval systems. keywords: email management, information retrieval, formal concept analysis
|
formal concept analysis: mathematical foundations
|
from the publisher: this is the first textbook on formal concept analysis. it gives a systematic presentation of the mathematical foundations and their relation to applications in computer science, especially in data analysis and knowledge processing. above all, it presents graphical methods for representing conceptual systems that have proved themselves in communicating knowledge. theory and graphical representation are thus closely coupled together. the mathematical foundations are treated thoroughly and illuminated by means of numerous examples.
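the mathematical core of formal concept analysis can be illustrated on a tiny binary context: a formal concept is a pair (extent, intent) where each set derives the other. this naive enumeration is a didactic sketch, not one of the efficient algorithms treated in the book:

```python
# naive enumeration of the formal concepts of a small objects x attributes
# context; the toy context itself is an illustrative assumption.
from itertools import combinations

objects = {"o1": {"a", "b"}, "o2": {"a", "c"}, "o3": {"a", "b", "c"}}
attributes = {"a", "b", "c"}

def extent(intent_set):
    """objects possessing every attribute in intent_set."""
    return frozenset(o for o, attrs in objects.items() if intent_set <= attrs)

def intent(ext):
    """attributes shared by every object in ext."""
    attrs = set(attributes)
    for o in ext:
        attrs &= objects[o]
    return frozenset(attrs) if ext else frozenset(attributes)

def concepts():
    """all (extent, intent) pairs where each set derives the other."""
    found = set()
    for r in range(len(attributes) + 1):
        for combo in combinations(sorted(attributes), r):
            e = extent(frozenset(combo))
            found.add((e, intent(e)))
    return found

for e, i in sorted(concepts(), key=lambda c: -len(c[0])):
    print(sorted(e), sorted(i))
```

ordered by extent inclusion, these concepts form the lattice that the email tool above uses as its navigation metaphor.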
| 77
|
myanmar phrases translation model with morphological analysis for statistical myanmar to english translation system
|
this paper presents a myanmar phrase translation model with morphological analysis. the system is based on a statistical approach. in statistical machine translation, a large amount of information is needed to guide the translation process; when only a small amount of training data is available, morphological analysis is needed, especially for a morphologically rich language. myanmar is an inflected language, and very few corpora have been created or researched for myanmar compared to other languages such as english, french and czech. therefore, the myanmar phrase translation model is based on the syntactic structure and morphology of the myanmar language. bayes' rule is also used to reformulate the translation probability of phrase pairs. experimental results showed that the proposed system can improve translation quality by applying morphological analysis to the myanmar language.
|
modeling with structures in statistical machine translation
|
most statistical machine translation systems employ a word-based alignment model. in this paper we demonstrate that word-based alignment is a major cause of translation errors. we propose a new alignment model based on shallow phrase structures, and the structures can be automatically acquired from parallel corpus. this new model achieved over 10% error reduction for our spoken language translation task.
| 78
|
named data networking for priority-based content dissemination in vanets
|
name-based communication and in-network caching make named data networking (ndn) a promising solution for content dissemination in vehicular ad hoc networks (vanets). so far, different ndn packet forwarding mechanisms have been proposed, but none of them have considered the idea of a prioritized traffic treatment based on vehicular content type. this is instead the focus of this paper, since priority-based content dissemination is a rather crucial objective in vehicular environments in order to meet the requirements of heterogeneous applications. based on the ndn hierarchical namespace, we propose specific “name-prefixes” that identify globally understood priorities for vehicular data traffic. a prefix-based prioritized technique is then implemented on top of basic ndn forwarding algorithms. simulation results show that the proposed enhancement succeeds in achieving differentiated traffic treatment and in reducing the latency of both high and low priority data.
|
enhancing content-centric networking for vehicular environments
|
content-centric networking (ccn) is a new popular communication paradigm that achieves information retrieval and distribution by using named data instead of end-to-end host-centric communications. this innovative model particularly fits mobile wireless environments characterized by dynamic topologies, unreliable broadcast channels, short-lived and intermittent connectivity, as proven by preliminary works in the literature. in this paper we extend the ccn framework to efficiently and reliably support content delivery on top of ieee 802.11p vehicular technology. achieved results show that the proposed solution, by leveraging distributed broadcast storm mitigation techniques, simple transport routines, and lightweight soft-state forwarding procedures, brings significant improvements w.r.t. a plain ccn model, confirming the effectiveness and efficiency of our design choices.
| 79
|
loop quasi-invariant chunk detection
|
several techniques for analysis and transformation are used in compilers. among them, the peeling of loops for hoisting quasi-invariants can be used to optimize generated code, or simply ease developers' lives. in this paper, we introduce a new concept of dependency analysis borrowed from the field of implicit computational complexity (icc), allowing us to work with composed statements called "chunks" to detect more quasi-invariants. based on an optimization idea given for a while language, we provide a transformation method - reusing icc concepts and techniques [8, 10] - to compilers. this new analysis computes an invariance degree for each statement or chunk of statements by building a new kind of dependency graph, finds the "maximum" or "worst" dependency graph for loops, and recognizes whether an entire block is quasi-invariant or not. this block could be an inner loop, and in that case the computational complexity of the overall program can be decreased.
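the hoisting optimization that motivates the analysis can be illustrated with a toy before/after pair: a chunk whose value is fixed after the first iteration is peeled out of the loop. this is a hand-made illustration, not the paper's while-language transformation:

```python
# before/after of hoisting a quasi-invariant chunk out of a loop.

def before(a, b, c, xs):
    out = []
    for x in xs:
        t = a * b          # invariant: recomputed every iteration
        u = t + c          # chunk depending only on t and c, also invariant
        out.append(x * u)
    return out

def after(a, b, c, xs):
    t = a * b              # hoisted once, outside the loop
    u = t + c
    return [x * u for x in xs]

print(before(2, 3, 4, [1, 2]) == after(2, 3, 4, [1, 2]))  # True
```

the paper's contribution is detecting such chunks automatically, including ones that only become invariant after a bounded number of iterations (hence "quasi"-invariant, handled by peeling that many iterations).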
|
global common subexpression elimination
|
when considering compiler optimization, there are two questions that immediately come to mind: one, why and to what extent is optimization necessary, and two, to what extent is it possible. when considering the second question, one might immediately become discouraged, since it is well known that the program equivalency problem is recursively unsolvable. it is, of course, clear from this that there will never be techniques for generating a completely optimum program. these unsolvability results, however, do not preclude the possibility of ad hoc techniques for program improvement, or even a partial theory which produces a class of equivalent programs optimized in varying degrees. the reasons why optimization is required seem to me to fall in two major categories: the first i will call "local" and the second "global".
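a local illustration of the idea (not the global, dataflow-based cse the paper discusses): structurally identical subexpressions in an expression tree are detected by hashing and evaluated only once:

```python
# common subexpression elimination on a tiny expression tree via caching:
# each distinct subexpression is evaluated exactly once.

def cse_eval(expr, env, cache=None):
    """expr: a variable name, or a nested tuple like ('+', lhs, rhs)."""
    if cache is None:
        cache = {}
    if isinstance(expr, str):
        return env[expr], cache
    if expr in cache:                      # reuse previously computed value
        return cache[expr], cache
    op, lhs, rhs = expr
    l, _ = cse_eval(lhs, env, cache)
    r, _ = cse_eval(rhs, env, cache)
    val = l + r if op == "+" else l * r
    cache[expr] = val
    return val, cache

# (a*b) + (a*b): the shared product is computed once and reused.
expr = ("+", ("*", "a", "b"), ("*", "a", "b"))
val, cache = cse_eval(expr, {"a": 3, "b": 4})
print(val, len(cache))  # 24 2
```

a compiler performs the same sharing symbolically (building a dag of expressions) rather than at evaluation time, but the hashing idea is the same.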
| 80
|
influence of the surrounded tissue in the detection of microcalcifications using wavelets
|
breast cancer is the most common cancer in women. many clinical decision support systems aimed to help in the diagnosis of breast cancer have been developed because an early diagnosis is fundamental to improve the results of the treatment. most of the developments are aimed to detect microcalcifications using the same system and parameters for all the mammograms without considering any other characteristic of the breast. in this paper we introduce the type of tissue in the breast as an element that can affect the selection of the right algorithm to improve the detection rates. we adapt the system setup depending on the type of tissue improving the results of the aid system.
|
representative frame decoration using unsharp filter in video summarization
|
representative frames are vital and attractive key components in providing internet users a way to swiftly browse a video clip at different levels of detail, without the need to view the entire clip. past studies reveal that several schemes have been developed for representative frame extraction; however, none of them decorates the key frames. this paper proposes a new concept called representative frame decoration (rfd), in which the essential operation is to enhance the key frame of a video shot. our approach employs an unsharp mask filter to enhance contrast and sharpen the edges of elements without increasing noise or blemishes. experimental results on representative frames found on the internet illustrate the high performance of the proposed scheme.
| 81
|
image denoising based on the bivariate model of dual tree complex wavelet transform
|
in our study, we designed an algorithm for image de-noising using a local adaptive window bivariate model based on the dual tree complex wavelet transform (dtcwt). the algorithm exploits the advantages of the dtcwt, namely approximate shift invariance and good direction selectivity; then, according to the correlation of neighboring coefficients, it selects a suitably sized neighborhood window for the bivariate de-noising model, so as to achieve better noise reduction.
|
improved image denoising based on haar wavelet transform
|
the wavelet decomposing levels and the selection of the thresholding function affect the performance of image denoising using the wavelet thresholding method. in this paper, a new method to identify the wavelet decomposing levels using the 2d haar wavelet thresholding method is presented to denoise an image. it uses the standard deviation values of the sub-bands to find out if the signal energy is strong or weak in the high frequency sub-bands after the 2d haar wavelet transform. in addition, a new thresholding function is proposed which achieves better denoising performance in terms of peak signal-to-noise ratio (psnr) and mean squared error (mse) than the soft thresholding method. especially, at high noise levels, the proposed new thresholding method outperforms hard thresholding, soft thresholding and semi-soft thresholding methods.
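the standard hard and soft thresholding rules that this abstract compares against can be sketched as follows. this is an illustrative sketch only — the paper's new thresholding function is not reproduced here, and `coeffs` and `t` are hypothetical names for a wavelet sub-band and its threshold:

```python
import numpy as np

def hard_threshold(coeffs, t):
    # keep coefficients whose magnitude exceeds t, zero out the rest
    return np.where(np.abs(coeffs) > t, coeffs, 0.0)

def soft_threshold(coeffs, t):
    # additionally shrink every surviving coefficient toward zero by t
    return np.sign(coeffs) * np.maximum(np.abs(coeffs) - t, 0.0)
```

applied to each high-frequency sub-band after the 2d haar transform, hard thresholding preserves large coefficients exactly, while soft thresholding biases them toward zero — the trade-off the proposed function aims to improve on in psnr/mse terms.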
| 82
|
character-based parsing with convolutional neural network
|
we describe a novel convolutional neural network architecture with a k-max pooling layer that is able to successfully recover the structure of chinese sentences. this network can capture active features for unseen segments of a sentence to measure how likely the segments are to be merged into constituents. given an input sentence, after all the scores of possible segments are computed, an efficient dynamic programming parsing algorithm is used to find the globally optimal parse tree. a similar network is then applied to predict syntactic categories for every node in the parse tree. our networks achieved competitive performance compared to existing benchmark parsers on the ctb-5 dataset without any task-specific feature engineering.
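the k-max pooling operation mentioned above can be sketched in isolation — a minimal illustration, assuming (as in prior k-max pooling work) that it selects the k largest activations of a sequence while preserving their original order; function and variable names are hypothetical:

```python
def k_max_pooling(seq, k):
    # indices of the k largest values, then restored to original order
    top_idx = sorted(range(len(seq)), key=lambda i: seq[i], reverse=True)[:k]
    return [seq[i] for i in sorted(top_idx)]
```

this yields a fixed-length output from a variable-length segment, which is what lets such a network score arbitrary sentence segments.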
|
parsing the internal structure of words: a new paradigm for chinese word segmentation
|
lots of chinese characters are very productive in that they can form many structured words either as prefixes or as suffixes. previous research in chinese word segmentation mainly focused on identifying only the word boundaries without considering the rich internal structures of many words. in this paper we argue that this is unsatisfying in many ways, both practically and theoretically. instead, we propose that word structures should be recovered in morphological analysis. an elegant approach for doing this is given and the result is shown to be promising enough for encouraging further effort in this direction. our probability model is trained with the penn chinese treebank and actually is able to parse both word and phrase structures in a unified way.
| 83
|
improved spectrum agility in narrow-band plc with cyclic block fmt modulation
|
narrow-band power line communications (nb-plc) operate in portions of the 3–500 khz spectrum and have to obey certain spectral masks for emc and coexistence issues. although orthogonal frequency division multiplexing (ofdm) allows simple spectrum management by switching on-off the subchannels, its poor sub-channel frequency selectivity translates into a poor spectrum usage. an agile use of the spectrum and higher spectral efficiency can be obtained with filter bank modulation. in particular, in this paper, we investigate the use of cyclic block filtered multitone (cb-fmt) modulation and compare it to pulse shaped ofdm (ps-ofdm) deployed in the g3-plc and ieee p1901.2 standards. the comparison shows that higher spectral efficiency and improved spectrum management can be achieved with cb-fmt.
|
power line network topology identification using admittance measurements and total least squares estimation
|
in this paper, we consider the identification of a smart grid network topology by means of admittance measurements performed at the network nodes. we show how the application of the transmission line theory allows the identification of the node-to-node connections. thereby, a topology identification algorithm is presented. the identification error due to the network noise is reduced by performing repeated measurements, either at different time instants or over a set of frequencies, at each network node. the information is then fused by means of the total least square regression technique, which is also compared to a simpler estimator. a thorough analysis of the results is then presented, which shows good reliability of the proposed algorithm.
| 84
|
mining a new fault-tolerant pattern type as an alternative to formal concept discovery
|
formal concept analysis has been proved useful to support knowledge discovery from boolean matrices. in many applications, such 0/1 data have to be computed from experimental data and it is common to miss some of the ‘1’ values. therefore, we extend formal concepts towards fault-tolerance. we define the dr-bi-set pattern domain by allowing some ‘0’ values to be inside the pattern. crucial properties of formal concepts are preserved (the number of ‘0’ values is bounded on objects and attributes, maximality, and the availability of functions which “connect” the set components). dr-bi-sets are defined by constraints which are actively used by our correct and complete algorithm. experimentation on both synthetic and real data validates the added value of dr-bi-sets.
|
constraint programming for mining n-ary patterns
|
the aim of this paper is to model and mine patterns combining several local patterns (n-ary patterns). first, the user expresses his/her query under constraints involving n-ary patterns. second, a constraint solver generates the correct and complete set of solutions. this approach enables flexible modeling of sets of constraints combining several local patterns, and it leads to the discovery of higher-level patterns. experiments show the feasibility and the interest of our approach.
| 85
|
human resources information systems improvement: involving financial systems and other sources data
|
human resources management systems currently enjoy a wide audience. however, no truly integrated solution has been proposed yet to improve the systems concerned. possible approaches to extra data collection for decision-making are considered, including psychological testing and fixed-assets information as well as product sales data. concept modeling is presented as a theoretical background for the systems in question. current technologies in state-of-the-art hr management software are discussed. design and implementation aspects of a web-enabled integrated enterprise system with high security and scalability are described. testing results for an improved enterprise-level hr system are given. perspectives of the field in question are discussed.
|
enterprise resource planning systems: the integrated approach
|
enterprise resource planning (erp) systems enjoy an increasingly wide coverage. however, no truly integrated solution has been proposed as yet. an erp classification is given. recent trends in commercial systems are analyzed on the basis of human resources (hr) management software. an innovative "straight through" design and implementation process of an open, secure, and scalable integrated event-driven enterprise solution is suggested. implementation results are presented.
| 86
|
understanding relaxed memory consistency through interactive visualization
|
we present rasho, a tool that extracts and depicts execution scenarios from upc source code, then allows learners to intervene in the scenarios, manipulating them to explore hypotheses about consistent and inconsistent code. this approach allows learners to play with relaxed consistency inside a responsive, informative visual space.
|
nemos: a framework for axiomatic and executable specifications of memory consistency models
|
summary form only given. conforming to the underlying memory consistency rules is a fundamental requirement for implementing shared memory systems and developing multiprocessor programs. in order to promote understanding and enable automated verification, it is highly desirable that a memory model specification be both declarative and executable. we present a specification framework called nemos (nonoperational yet executable memory ordering specifications), which supports precise specification and automatic execution in the same framework. we employ a uniform notation based on predicate logic to define shared memory semantics in an axiomatic as well as compositional style. we also apply constraint logic programming and sat solving to make the axiomatic specifications executable for memory model analysis. to illustrate our approach, we formalize a collection of classical memory models, including sequential consistency, coherence, pram, causal consistency, and processor consistency.
| 87
|
topic models and metadata for visualizing text corpora
|
effectively exploring and analyzing large text corpora requires visualizations that provide a high level summary. past work has relied on faceted browsing of document metadata or on natural language processing of document text. in this paper, we present a new web-based tool that integrates topics learned from an unsupervised topic model in a faceted browsing experience. the user can manage topics, filter documents by topic and summarize views with metadata and topic graphs. we report a user study of the usefulness of topics in our tool.
|
exploratory search: from finding to understanding
|
research tools critical for exploratory search success involve the creation of new interfaces that move the process beyond predictable fact retrieval.
| 88
|
visualizing the thematic update status of web and wap sites on mobile phones *
|
the primary goal of people accessing the web from mobile phones is to find specific pieces of information (poi, hereinafter), not to surf. well-designed sites for mobile users help them by minimizing the path needed to reach the desired poi. we propose a further improvement, based on visualizing thematic update status (i.e., how many poi have been added in each category and when). this can prevent unfruitful navigation of the site and also allow users to compare different sites to choose which one better suits their needs.
|
building usable wireless applications for mobile phones
|
building usable wap applications is not simple. wireless devices have many limitations, and the average user of a wap application is not technically oriented (and possibly not even used to the internet). finally, the interpretation of wml varies greatly between devices from different vendors. this poses an extra challenge to good usability.
| 89
|
nina - navigating and interacting with notation and audio
|
this article deals with an xml-based format that allows an integrated representation of music, as well as with a working software demo that demonstrates the power of the format. the format itself is the basis of an ieee international standard for a comprehensive description of music contents and their synchronisation. after a theoretical introduction, a music browser based on such concepts will be reviewed. the application, called nina (for navigating and interacting with notation and audio), has been recently presented at an exhibition about neapolitan music.
|
the representation levels of music information
|
the purpose of this article is to characterize the various kinds and specificities of music representations in technical systems. it shows that an appropriate division derived from existing applications relies on four main types, defined as the physical, signal, symbolic and knowledge levels. this fairly simple and straightforward division provides a powerful grid for analyzing all kinds of musical applications, up to those resulting from the most recent research advances. moreover, it is particularly adapted to exhibiting most current scientific issues in music technology as problems of conversion between the various representation levels. the effectiveness of these concepts is then illustrated through an overview of the functionalities of existing applications, in particular from examples of recent research performed at ircam.
| 90
|
single frequency-based visual servoing for microrobotics applications
|
recently, high resolution visual methods based on direct-phase measurement of periodic patterns have been proposed, with successful applications to microrobotics. this paper proposes a new implementation of direct-phase measurement methods to achieve 3-dof (degrees of freedom) visual servoing. the proposed algorithm relies on tracking a single frequency rather than the complete 2d discrete fourier transform that was required in previous works. the method does not require any calibration step and has many advantages such as high subpixelic resolution, high robustness and short computation time. several experimental validations (in favorable and unfavorable conditions of use) were performed using a xyδ microrobotic platform. the obtained results demonstrate the efficiency of the frequency-based controller in terms of accuracy (micrometric error), convergence rate (30 iterations in nominal conditions) and robustness.
|
pseudo-periodic encryption of extended 2-d surfaces for high accurate recovery of any random zone by vision
|
this article presents a binary position encoding method for extended two-dimensional surfaces. position encryption is based on linear feedback shift register sequences inserted within a periodic frame of spots. the position and orientation of any local view is retrieved accurately with respect to the encrypted surface. image processing combines phase computations with binary image feature analysis. measurement resolution is in the range of 10−2 pixel in position and 10−3 degree for in-plane orientation. the method is used as a visual sensor in a position control loop applied to fluorescence optical microscopy for the recovery of cells of interest within culture dishes.
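the linear feedback shift register sequences underlying this position encoding can be illustrated with a minimal sketch. a 4-bit maximal-length register is assumed here purely for brevity — the paper's actual register length and tap positions are not given in the abstract:

```python
def lfsr_bits(seed, length):
    # 4-bit Fibonacci LFSR with feedback from the two lowest register bits,
    # a maximal-length configuration with period 2**4 - 1 = 15
    state = seed & 0xF
    bits = []
    for _ in range(length):
        bits.append(state & 1)              # emit the low bit
        fb = (state ^ (state >> 1)) & 1     # xor of bits 0 and 1
        state = (state >> 1) | (fb << 3)    # shift right, insert feedback
    return bits
```

because every non-zero window of n consecutive bits occurs exactly once per period of a maximal-length sequence, reading a short local window of spots suffices to recover an absolute position along the encoded surface.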
| 91
|
low power cross-domain high-voltage transmitters for battery management systems
|
this work presents a pair of low power high-voltage (hv) transmitters for battery management systems (bms). besides, the hv transmitter is designed using cmos transistors without any isolator. to realize a solution on silicon, the proposed hv transmitter shall be fabricated using an advanced hv semiconductor process, which usually is constrained by the voltage drop limitation between gate and source of hv devices. the proposed design is implemented using a typical 0.25 µm 1-poly 3-metal 60 v bcd process. the post-layout simulation results show that the hv transmitters can transmit data with hv dc level (36.4 ∼ 54.6 v) and the power consumption is less than 0.342 mw/mbps.
|
a stackable, 6-cell, li-ion, battery management ic for electric vehicles with 13, 12-bit σδ adcs, cell balancing, and direct-connect current-mode communications
|
battery monitoring systems are a critical aspect of electric vehicles. to interface with the high voltages in these systems a stacked ic approach is used. in this work we present an ic suitable for monitoring 6 cells which can be stacked to monitor a total of 192 cells. the ic has 13 parallel σδ adcs (12 bit, 5mv accuracy, better than 1.5mv matching) to measure individual cell voltages, cell temperature and battery current. gate drivers with programmable frequency (5khz - 2.048mhz), duty cycle and dither for active and passive cell balancing are provided. on chip 3.3v and 5.2v ldos power the various subsystems. to eliminate external components, the ics use current-mode signaling, which allows them to be directly connected to one another to transmit information about cells from the top of the stack to the host. the ic consumes less than 15ua of quiescent current.
| 92
|
best-first vs. depth-first and/or search for multi-objective constraint optimization
|
in this paper we present and evaluate the power of best-first search over and/or search spaces for multi-objective constraint optimization. the main virtue of the and/or representation of the search space is its sensitivity to problem structure, which can translate into significant time savings. we introduce a linear-space best-first search algorithm that explores an and/or search tree and uses a class of partitioning-based heuristics for guidance. the superiority of the best-first approach over depth-first and/or branch-and-bound search using the same heuristic function is demonstrated empirically on random and real-world benchmarks for multi-objective constraint optimization.
|
optimal paths in graphs with stochastic or multidimensional weights
|
this paper explores computationally tractable formulations of stochastic and multidimensional optimal path problems, each as an extension of the shortest path problem. a single formulation encompassing both problems is considered, in which a utility function defines preference among candidate paths. the result is the ability to state explicit conditions for exact solutions using standard methods, and the applicability of well-understood approximation techniques.
| 93
|
security, liberty, and electronic communications
|
we live in perilous times. we live in times where a dirty bomb going off in lower manhattan is not unimaginable. we live in times where the cia interrogations of al qaeda leaders were so harsh that the fbi would not let its agent participate [36]. we live in times when security and liberty are both endangered.
|
preserving privacy versus data retention
|
the retention of communication data has recently attracted much public interest, mostly because of the possibility of its misuse. in this paper, we present protocols that address the privacy concerns of the communication partners. our data retention protocols store streams of encrypted data items, some of which may be flagged as critical (representing misbehavior). the frequent occurrence of critical data items justifies the self-decryption of all recently stored data items, critical or not. our first protocol allows the party gathering the retained data to decrypt all data items collected within, say, the last half year whenever the number of critical data items reaches some threshold within, say, the last month. the protocol ensures that the senders of data remain anonymous but may reveal that different critical data items came from the same sender. our second, computationally more complex scheme obscures this affiliation of critical data with high probability.
| 94
|
neural network topological evolvement
|
in order to obtain an instant response from evolving neural networks, we separate the evolving procedure and the output procedure of the traditional ga approach and run them simultaneously. thanks to the support of a database system, this modified ga has three characteristics, namely serialized storage, persistent evolution and instant response to the end user's request. in our experiment, this approach shows better performance than the conventional strategy.
|
on the use of biologically-inspired adaptive mutations to evolve artificial neural network structures
|
evolutionary algorithms have been used to successfully evolve artificial neural network structures. normally the evolutionary algorithm has several different mutation operators available to randomly change the number and location of neurons or connections. the scope of any mutation is typically limited by a user-selected parameter. nature, however, controls the total number of neurons and synaptic connections in more predictable ways, which suggests the methods typically used by evolutionary algorithms may be inefficient. this paper describes a simple evolutionary algorithm that adaptively mutates the network structure where the adaptation emulates neuron and synaptic growth in the rhesus monkey. our preliminary results indicate it is possible to evolve relatively sparse connected networks that exhibit quite reasonable performance.
| 95
|
adaptive customer profile configuration in mass customization
|
in the twenty-first century, a company has to organize around the customer in order to be a successful and viable firm. today, the marketplace is customer driven. customers expect to get what they would like, with a side order of customization. this alteration of the traditional organization of a company through the involvement of the customer in the configuration of the final product faces some obvious problems. the fundamental challenge is to prevent the customer from aborting the configuration process. the problem of adapting the process of co-creation to different customers can be solved by identifying different customer profiles that suit each individual customer's needs and limitations. the paper presents one approach to solving this problem, through the introduction of a methodology for the adaptive involvement of customers as co-creators in the mass customization of products and services.
|
true/false questions analysis using computerized certainty-based marking tests
|
writing effective and efficient exams is a crucial component of the teaching and learning process. exams are a common approach to assess student learning and the results are useful in a variety of ways. most often, results are used to provide students with feedback on what they have learnt or to evaluate the instructional effectiveness of a course. certainty-based marking (cbm) scores an objective test (usually done on a computer) in a way that rewards students for identifying and distinguishing between individual answers being reliable or unreliable. it penalizes confident errors and rewards a thoughtful and realistic judgment by the student on the basis of limitations of his/her knowledge.
| 96
|
pattern matching in music and its use for automated composition
|
it is something of a truism to say that self-reference abounds in ‘classical’ music. that is, given a single piece of music, it will most likely contain instances of verbatim repetition as well as more subtle variation. why then, when programming a computer to generate music, is little attention paid to ensuring the program’s output contains self-reference? if the program is intended to test a model of musical style then its inattention to selfreference may result in failure of the test. this report gives further exposition of the above issue and the research question thus motivated. current computational methods for pattern matching in music are exemplified and criticised. an introduction to markov models for music is also provided, which is intended to be suggestive of more intricate computational models of musical style. the research proposal emphasises the need to integrate pattern matchers and music generators, and makes feasible plans to pursue this objective.
|
algorithms for discovering repeated patterns in multidimensional representations of polyphonic music
|
in previous approaches to repetition discovery in music, the music to be analysed has been represented using strings. however, there are certain types of interesting musical repetitions that cannot be discovered using string algorithms. we propose a geometric approach to repetition discovery in which the music is represented as a multidimensional dataset. certain types of interesting musical repetition that cannot be found using string algorithms can efficiently be found using algorithms that process multidimensional datasets. our approach allows polyphonic music to be analysed as efficiently as monophonic music and it can be used to discover polyphonic repeated patterns “with gaps” in the timbre, dynamic and rhythmic structure of a passage as well as its pitch structure. we present two new algorithms: sia and siatec. sia computes all the maximal repeated patterns in a multidimensional dataset and siatec computes all the occurrences of all the maximal repeated patterns in a dataset. for a k -dimensional d...
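the core idea behind sia — grouping point pairs by their translation vector so that the points sharing a vector form the maximal pattern translatable by it — can be sketched as follows. this is a simplified illustration under the multidimensional-dataset representation described above (it computes the patterns only, not all their occurrences as siatec does), and the example point names are hypothetical:

```python
from collections import defaultdict

def maximal_translatable_patterns(points):
    # for each ordered pair (p, q) with p < q, record p under the
    # translation vector q - p; each bucket is a maximal pattern
    pts = sorted(points)
    patterns = defaultdict(list)
    for i, p in enumerate(pts):
        for q in pts[i + 1:]:
            vec = tuple(b - a for a, b in zip(p, q))
            patterns[vec].append(p)
    return dict(patterns)
```

with (onset, pitch) points, a repeated three-note motif transposed in time shows up as a three-point pattern under the corresponding time-shift vector — the kind of polyphonic repetition that string-based methods miss.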
| 97
|
on practical results of the differential power analysis
|
this paper describes practical differential power analysis attacks. successful and unsuccessful attack attempts are presented along with a description of the attack methodology. it provides relevant information about oscilloscope settings, optimization possibilities and fundamental attack principles, which are important when realizing this type of attack. the attack was conducted on the pic18f2420 microcontroller, using the aes cryptographic algorithm in ecb mode with a 128-bit key length. we used two implementations of this algorithm - in the c programming language and in assembler.
|
a dpa resistant dual rail precharge logic cell
|
this paper presents a novel differential pass-transistor precharge logic (dppl) based on the complementary pass-transistor logic (cpl). in the aspect of circuit-level countermeasure, the power consumption of dppl is more constant. the dppl also solves the problem of early propagation effects (epe) effectively and has a relatively small area overhead compared to the wave dynamic differential logic (wddl). the power constancy and dpa resistant capability of dppl are evaluated by spice simulation in this paper. the results show that the power constancy and dpa resistant capability of dppl are superior to wddl.
| 98
|
the design of a language-directed editor for block-structured languages
|
a language-directed editor combines the text manipulation functions of a general-purpose editor with the syntax-checking functions of a compiler. it allows a user to create and modify a program in terms of its syntactic structure. the design of a user interface and an implementation for one such editor is described in language-independent terms. the design rationale is given. the implementation is outlined in terms of its major data structures.
|
grammar-based definition of metaprogramming systems
|
description of a grammar-based approach to specifying the syntactic-manipulation components of a metaprogramming system. the method is applicable to any programming language and is illustrated through its particular application to pascal.
| 99
|
digital filtering in pcm telephone systems
|
the advent of cheap and fast digital integrated circuits permits, at least in principle, the application of digital processing directly to pcm coded telephone signals. at present, the following applications look most promising: channel filters; digital conference circuits; receivers for mfc or touch-tone signals; and sampling rate conversion. following a review of these fields, an account is given of the problems which exist specifically in a pcm telephone system.
|
optocoupler-based extension-line circuit for electronic pabx's
|
a new approach for an extension-line circuit of a private automatic branch exchange (pabx), which does not depend on a line transformer and high-power feeder resistors, is described. a matched pair of electrooptical couplers is used in place of the transformer; they provide high-voltage-protected signal transmission with stable insertion loss and good dynamic range. power dissipation of the feeding circuitry has been kept low by feeding the telephone with a constant line-independent current. a new status- and signaling-detection concept is also described, which yields fast and reliable detection of status changes even in the presence of large common-mode signals during the ring condition. elimination of the transformer, reduction of power dissipation in the feeding circuitry, and maximum use of integrated and hybrid circuit technology lead to a high packing density.
| 100
|