Abstract: Paxos and Fast Paxos are optimal consensus algorithms that are simple and elegant, yet suitable for efficient implementation. In this paper, we compare the performance of both algorithms in failure-free and failure-prone runs using Treplica, a general replication toolkit that implements these algorithms in a modular and efficient manner. We have found that Paxos outperforms Fast Paxos for small numbers of replicas and that collisions are not the cause of this performance difference.
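The quorum-size asymmetry behind this comparison can be made concrete. The sketch below (an illustration of the standard quorum requirements, not code from Treplica) derives minimal classic and fast quorum sizes from the intersection conditions:

```python
def classic_quorum(n):
    # Classic Paxos: any two quorums must intersect, so a majority suffices.
    return n // 2 + 1

def fast_quorum(n):
    # Fast Paxos: any two fast quorums F and any classic quorum C must
    # share an acceptor, i.e. 2*|F| + |C| > 2*n.  Shrink |F| from n
    # while the condition still holds for |F| - 1.
    c = classic_quorum(n)
    f = n
    while 2 * (f - 1) + c > 2 * n:
        f -= 1
    return f

for n in range(3, 8):
    print(n, classic_quorum(n), fast_quorum(n))
```

For five replicas this yields classic quorums of 3 acceptors but fast quorums of 4, which helps explain why Fast Paxos gains little at small replica counts.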
Abstract: This work assesses how crashes and recoveries affect the performance of a replicated dynamic-content web application. RobustStore is the result of retrofitting TPC-W's on-line bookstore with Treplica, a middleware for building dependable applications. Implementations of Paxos and Fast Paxos are at the core of Treplica's efficient and programmer-friendly support for replication and recovery. The TPC-W benchmark, augmented with faultloads and dependability measures, is used to evaluate the behaviour of RobustStore. Experiments apply faultloads that cause sequential and concurrent replica crashes. RobustStore's performance drops by less than 13% during recovery from two simultaneous replica crashes. When subject to an identical faultload and a shopping workload, a five-replica RobustStore maintains an accuracy of 99.999%. Our results display not only good performance, total autonomy and uninterrupted availability; they also indicate that Treplica simplifies the programming of application recovery, a great benefit for programmers of highly available applications.
Abstract: With the increasing availability of complete genomes, the genome rearrangement problem has attracted great interest in the field of computational biology. In this problem, a genome is modeled as a sequence of regions conserved within a group of genomes. The goal is to find a plausible evolutionary scenario for this group, considering the position and orientation of the conserved regions, building a phylogenetic tree using the rearrangement distances between the genomes, and possibly also reconstructing the ancestral genomes. In this article, we review the most recent advances and applications in this field.
Abstract: With the growing availability of complete genome sequence data, the genome rearrangement problem has received a lot of attention in the field of computational biology. In this problem, a genome is modeled as a sequence of regions that are conserved within a genome group. The goal is to find a plausible evolution scenario for this genome group, considering the position and orientation of the conserved regions, building a phylogenetic tree using pairwise distance estimates, optionally also rebuilding the ancestral genomes. This methodology has shown encouraging results when applied to the phylogenetic inference of several species groups. In this article, we review the state-of-the-art and applications in this field.
Abstract: We present a new method that addresses the various deficiencies of state-of-the-art non-linear invariant generation methods for hybrid systems. Present approaches to non-linear invariant generation are limited to linear systems, or they rely on non-scalable methods of high complexity. Moreover, for hybrid systems with discrete transitions, differential rules, and local conditions that are described by multivariate polynomial or fractional systems, no applicable method is known that lends itself to the generation of non-trivial non-linear invariants. We demonstrate a powerful, computationally complete method to solve this problem. By identifying suitable endomorphisms for each consecution condition, we reduce the problem to the intersection between specific eigenspaces and initial semi-affine/algebraic constraints. Our approach avoids first-order quantifier elimination, Gröbner basis computation and direct resolution of the systems, thereby circumventing difficulties met by other recent techniques.
Abstract: Non-linear loop invariant generation has seen tremendous progress in recent years. However, the weakness of these approaches is that they are limited to linear (affine) systems, and they often rely on trivial polynomial invariants (null or constant). Moreover, for programs with loops that describe multivariate polynomial or multivariate fractional systems, no method is known that lends itself to non-trivial non-linear invariants. In order to automate the generation of non-trivial multivariate polynomial invariants, one needs to handle the initiation and consecution conditions for non-linear (algebraic) loops. We demonstrate a powerful, computationally complete method that encodes these conditions for a candidate invariant (a multivariate polynomial assertion with indeterminate coefficients) into a set of multi-parametric constraints such that all solutions identify a non-trivial non-linear loop invariant. Then, we provide a complete decision procedure for this constraint-solving problem. For each type of loop (affine, polynomial, fractional system) we present necessary and sufficient conditions for the existence of non-trivial non-linear loop invariants, and we identify a large decidable class together with an undecidable class. Without computing Gröbner bases or using quantifier elimination techniques, we show that our method generates stronger invariants, thereby circumventing difficulties met by recent approaches.
Abstract: e-Cidadania is a research project inspired by one of the current grand challenges of Computer Science research in Brazil for the coming years: the Participative and Universal Access to Knowledge for the Brazilian Citizen. This project investigates the relationships people establish in their informal communities organized around special interests, and how they use societal artifacts, including computational technology. This work builds on Participatory Design (PD) techniques and Organizational Semiotics (OS) artifacts to conduct the 2nd Semio-Participatory Workshop of the e-Cidadania Project and to analyze its achievements. The Workshop aimed at clarifying the dynamics of social networks through narratives the participants record on special cards. Results of this practice will inform design elements for an inclusive social network system. This research report illustrates the use of the PD and OS artifacts, presents the main achievements of the Workshop and discusses results of the semiotic-informed analysis.
Abstract: e-Cidadania is a research project inspired by one of the current grand challenges of Computer Science research in Brazil for the coming years: the Participative and Universal Access to Knowledge for the Brazilian Citizen. This project investigates the relationships people establish in their informal communities organized around special interests, and how they use societal artifacts, including computational technology. This work builds on Organizational Semiotics (OS) to conduct the 1st Semio-Participatory Workshop of the e-Cidadania Project, aiming at problem clarification, project scope definition and requirements elicitation for an inclusive social network system. This research report illustrates the use of the OS artifacts, presents the main achievements of the Workshop and discusses results of the semiotic-informed analysis.
Abstract: Web Usage Mining (WUM) usually considers server logs as a data source for collecting usage data patterns. This solution presents limitations when the goal is to represent how users interact with specific user interface elements, since server logs may not contain detailed information about what users do on a Web page. This technical report presents a model for logging client-side events and an implementation of it in a tool called WELFIT (Web Event Logger and Flow Identification Tool). By using the model presented here, mining systems can capture more detailed Web usage data, making possible a finer-grained examination of Web page usage. In addition, the model can help HCI (Human-Computer Interaction) practitioners log client-side events on mobile devices, set-top boxes, Web pages, and other artifacts.
Abstract: Allocating variables to registers is a central task in program code generation. Though considered a well-studied and thoroughly scrutinized problem, register allocation sometimes reveals a new interesting facet. Spill code generation is part of any register allocation algorithm, and its efficacy can have a direct impact on the final program performance. In this paper we present a dynamic programming algorithm to generate spill code, which tries to simultaneously reallocate spilled variables into two types of holes left by the register allocator: intervals between live ranges (called dead holes) and low-density live ranges (called live holes). As a post-pass optimization, this technique can be used with any register allocation algorithm. Experimental results reveal that a considerable reduction in memory traffic can be achieved.
Abstract: This report summarizes what was presented and discussed at the industry workshop entitled "Sistemas de Gestão de Processos de Negócios" (Business Process Management Systems) during ENEGEP 2007 in Foz do Iguaçu-PR. Business process management systems aim to facilitate the execution and management of business processes in companies. The implementation and maintenance of such systems involves a cycle with four phases: (i) (re-)modeling of the business processes; (ii) configuration of these models for a given system; (iii) execution of the processes in supporting tools; and (iv) analysis of these executions. In this context, two problems are fundamental. The first is how to integrate the different intra- and/or inter-organizational business processes. The second is how to analyze the executions of these business processes so as to detect possible points of optimization, how to audit the processes automatically, among other goals. Note that the first problem is related to the first three phases of the implementation of management systems, while the second involves the analysis phase. Accordingly, this document reports our experience in solving these problems. For the modeling and management part, we present workflow modeling and management techniques, including experience with electronic contracts, which determine how different companies/departments negotiate the execution of processes. In addition, we show some advantages of including quality-of-service parameters for discovering the services that best meet consumer requirements. For the analysis part, the focus is on process mining techniques (www.processmining.org), which provide feedback on different perspectives (control flow, organization, etc.) of business processes. Since the ultimate goal of this document is to stimulate cooperation between industry and academia, the research presented is concretely illustrated through case studies.
Abstract: This technical report contains abstracts of 24 works presented at the 4th PhD Thesis Workshop of the Institute of Computing of UNICAMP. The workshop, held from September 22 to 24, 2008, allowed the Institute's doctoral students to present the main aspects of their research. Each chapter corresponds to one presentation, with the text limited to 4 pages.
Participation was voluntary, and the academic profile of the participants varied, ranging from students recently admitted to the program to those whose defense was already scheduled for September 2008.
The publication of the abstracts as a technical report aims at wider dissemination of the doctoral research in progress at the IC. It is also a concise record of the state of several of these research projects.
Abstract: Extraction of the mid-sagittal plane (MSP) is a key step for brain image registration and asymmetry analysis. We present a fast MSP extraction method for 3D MR images, based on automatic segmentation of the brain and on heuristic maximization of the cerebro-spinal fluid within the MSP. The method is robust to severe anatomical asymmetries between the hemispheres, caused by surgical procedures and lesions. The method is also accurate with respect to MSP delineations done by a specialist. The method was evaluated on 64 MR images (36 pathological, 20 healthy, 8 synthetic), and it found a precise and accurate approximation of the MSP in all of them, with a mean time of 60.0 seconds per image, mean angular variation within the same image (precision) of 1.26 degrees and mean angular difference from specialist delineations (accuracy) of 1.64 degrees.
Abstract: The use of computers as a means of communication has increased interest in the area of collaborative systems. At the same time, developments in the area foster this communication via computational systems. Collaborative systems must be able to offer mechanisms for users to perceive and use information about other users, objects and activities in the system. The multidisciplinary nature of the area, combined with its origin in research groups with different focuses, makes its fundamental concepts and frameworks difficult to grasp. In this article we present a synthesis of the literature on collaborative systems, with special focus on awareness, and propose an organization of concepts represented in an ontology diagram from Organizational Semiotics. Based on this study, we identify research challenges for the development of collaborative systems relevant to the field of HCI (Human-Computer Interaction).
Abstract: We propose an approach for data clustering based on optimum-path forest. The samples are taken as nodes of a graph, whose arcs are defined by an adjacency relation. The nodes are weighted by their probability density values (pdf) and a connectivity function is maximized, such that each maximum of the pdf becomes the root of an optimum-path tree (cluster), composed of samples "more strongly connected" to that maximum than to any other root. We discuss the advantages over other pdf-based approaches and present extensions to large datasets, with results for interactive image segmentation and for fast, accurate, and automatic brain tissue classification in magnetic resonance (MR) images.
Abstract: We introduce a framework for synergistic arc-weight estimation and show its application to several interactive segmentation approaches using graphs. The method is a dynamic training process, in which the user draws markers inside each object (including background), arc weights are estimated from image attributes and object information (pixels under the markers), and visual feedback guides the next user action. We demonstrate the advantages of the proposed framework over methods that do not exploit object information for arc-weight estimation and over methods that recompute weights during delineation, causing the user to lose control over the segmentation process.
Abstract: We present an approach for supervised classification, which interprets a training set as a complete graph, identifies prototypes in all classes, and computes an optimum-path forest rooted at them. The class of a sample in a tree is assumed to be the same as that of its root. A test sample is classified by identifying which tree would contain it. We show how to improve performance based on the errors on an evaluation set, without increasing the training set. The advantages over other approaches are demonstrated in several experiments.
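The classifier described above can be sketched on toy data (a minimal illustration with Euclidean arc weights; the function names and the two-cluster dataset are ours, not from the paper):

```python
import heapq
import math

def opf_train(X, y):
    """Optimum-path forest training (a simplified sketch, not the authors' code)."""
    n = len(X)
    d = [[math.dist(X[i], X[j]) for j in range(n)] for i in range(n)]

    # Prototypes: endpoints of minimum-spanning-tree edges that join
    # samples of different classes (Prim's algorithm on the complete graph).
    in_tree = [False] * n
    near = [math.inf] * n
    parent = [-1] * n
    near[0] = 0.0
    prototypes = set()
    for _ in range(n):
        u = min((i for i in range(n) if not in_tree[i]), key=lambda i: near[i])
        in_tree[u] = True
        if parent[u] != -1 and y[u] != y[parent[u]]:
            prototypes.update((u, parent[u]))
        for v in range(n):
            if not in_tree[v] and d[u][v] < near[v]:
                near[v], parent[v] = d[u][v], u

    # Propagate optimum paths from the prototypes with the f_max cost:
    # the cost of a path is its largest arc weight.
    cost = [math.inf] * n
    label = list(y)
    for p in prototypes:
        cost[p] = 0.0
    heap = [(0.0, p) for p in prototypes]
    heapq.heapify(heap)
    while heap:
        c, u = heapq.heappop(heap)
        if c > cost[u]:
            continue
        for v in range(n):
            new_cost = max(cost[u], d[u][v])
            if new_cost < cost[v]:
                cost[v], label[v] = new_cost, label[u]
                heapq.heappush(heap, (new_cost, v))
    return cost, label

def opf_classify(t, X, cost, label):
    # A test sample is assigned to the tree whose sample minimizes
    # max(training cost, arc weight to the test sample).
    best = min(range(len(X)), key=lambda i: max(cost[i], math.dist(X[i], t)))
    return label[best]

X = [(0, 0), (0, 1), (1, 0), (5, 5), (5, 6), (6, 5)]
y = [0, 0, 0, 1, 1, 1]
cost, label = opf_train(X, y)
```

A test point near either cluster is then labeled by the tree that would offer it the cheapest path.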
Abstract: This paper presents a new relevance feedback method for content-based image retrieval using local image features. This method adopts a genetic programming approach to learn user preferences and combine the region similarity values in a query session. Experiments demonstrate that the proposed method yields more effective results than the Local Aggregation Pattern (LAP)-based relevance feedback technique.
Abstract: In a previous report, we developed formulas for basic operations on simploidal Bernstein polynomials. Here we extend that work with formulas for partial and directional derivatives of such polynomials.
Abstract: Scientific research is producing and consuming large volumes of multimedia data at an ever growing rate. Metadata - data about data - is the primary mechanism through which context is associated to content to enhance content management. It also makes it easier to interpret and share data and helps digital curation. However, raw data often needs to go through complex processing steps before it can be consumed. During these transformation processes, original metadata from the production phase is often discarded or ignored, since its usefulness is usually limited to the first transformation step. New metadata must be associated with the final product, a time-consuming task often carried out manually. Systematically associating new metadata with the result of each data transformation step is known as metadata evolution or annotation propagation. This paper introduces techniques for semantically enhancing metadata and automatically transforming it along with the data transformation processes. This helps the construction of new annotated multimedia datasets, preserving contextual information. The solution is based on: (i) the notion of semantic annotations, which are metadata structures enriched with domain ontologies; (ii) a set of transformation rules based on ontological relations; and (iii) workflows, which steer the sequence of transformations.
Abstract: The bicontraction of a vertex of degree two in a graph consists of contracting both edges incident with that vertex. The retract of a graph is the graph obtained from it by bicontracting all its vertices of degree two. An edge e of a brick G is thin if the retract of G - e is a brick, and e is strictly thin if that retract is a simple brick. Thin and strictly thin edges in braces on six or more vertices are defined similarly. We showed in a previous paper that every brick distinct from K4, the triangular prism, and the Petersen graph has a thin edge. In the first part of this paper we show that every brace has a thin edge.
McCuaig showed that every brace that is not a biwheel, a prism, or a Möbius ladder has a strictly thin edge. Analogously, Norine and Thomas showed that every brick that is different from the Petersen graph and does not belong to any of five well-defined infinite families of graphs has a strictly thin edge. These theorems yield procedures for generating simple braces and bricks, respectively. In the second part of the paper we show that the results of McCuaig on braces, and of Norine and Thomas on bricks, may be deduced fairly easily from our results mentioned above. The proofs of these results are so remarkably similar that we are able to present them simultaneously.
Abstract: Dynamic binary translation can provide code portability between different architectures. The translation occurs on the fly, translating binary code from the source to the target architecture. Since this is a dynamic process, the translation overhead must be minimal. One way to compensate for the overhead of binary translation is to dynamically optimize traces of the translated code. This paper discusses the influence of trace quality on dynamic optimization, presents three dynamic optimizations, and shows what gains a dynamic optimization system can expect.
Abstract: In this paper we present a generalization of the W-method, which can be used for automatically generating test cases. In contrast to the W-method, this generalization allows for test case generation even in the absence of a characterization set for the specification. The work presents proofs of correctness for this generalization and derives the original W-method from it as a particular case. Formal proofs of correctness for the W-method, not given in the original paper, are also presented in a clear and detailed way.
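For context, the original W-method construction being generalized can be sketched as follows (a simplified illustration assuming a deterministic, fully specified machine; all names are ours):

```python
from itertools import product

def transition_cover(delta, q0, inputs):
    # Access sequences for every state (breadth-first), then the
    # transition cover P: the empty sequence plus every access
    # sequence extended by each input symbol.
    access = {q0: ()}
    frontier = [q0]
    while frontier:
        nxt = []
        for q in frontier:
            for a in inputs:
                r = delta[(q, a)]
                if r not in access:
                    access[r] = access[q] + (a,)
                    nxt.append(r)
        frontier = nxt
    return {()} | {seq + (a,) for seq in access.values() for a in inputs}

def w_method(delta, q0, inputs, W, extra_states=0):
    # Test suite P . I^k . W for k = 0 .. extra_states, where W is a
    # characterization set for the specification.
    P = transition_cover(delta, q0, inputs)
    return {p + mid + tuple(w)
            for k in range(extra_states + 1)
            for p in P
            for mid in product(inputs, repeat=k)
            for w in W}

# A two-state toggle machine with inputs 'a' (switch) and 'b' (stay).
delta = {('s0', 'a'): 's1', ('s0', 'b'): 's0',
         ('s1', 'a'): 's0', ('s1', 'b'): 's1'}
suite = w_method(delta, 's0', ('a', 'b'), [('a',)])
```

The generalization discussed in the paper replaces the requirement of a characterization set W; the sketch above shows only the classical construction.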
Abstract: This report discusses Problem Based Learning (PBL) and Computer Supported Collaborative Learning (CSCL), aiming to identify and unify concepts, definitions and characteristics of each model. Based on the results of this study, we propose the Problem-Based Collaborative Learning model (ACBP in its Portuguese acronym), which integrates different concepts and characteristics of PBL and CSCL.
ACBP is a model oriented to Internet-mediated teaching and learning, based on the interaction and collaboration of a small group of students solving problems. ACBP has group discussions as its backbone and comprises a set of activities oriented to collaborative work, grouped into five phases. Each phase allows the group to advance iteratively in the problem-solving process, while also allowing a return to any phase of the process.
ACBP draws on methods from Organizational Semiotics (OS) to guide the problem-solving process with a broader, social view. Initially, OS is incorporated into the second phase of ACBP through the Problem Articulation Model (PAM), supporting the interpretation and analysis of the proposed problems.
Abstract: This paper presents a survey and literature review of Problem Based Learning (PBL) and Computer Supported Collaborative Learning (CSCL), aiming to identify and unify concepts, definitions and characteristics of each model. Building on both models, we propose a new model called Problem-Based Collaborative Learning (ACBP in its Portuguese acronym) that integrates different concepts and features from PBL and CSCL.
ACBP is a model oriented to teaching and learning through Internet-based interaction and collaboration of a small group of students solving problems. ACBP has group discussions as its backbone and comprises a set of activities designed for collaborative work, grouped into five phases. Each phase advances the problem-solving process in an iterative way, while allowing the group to return to any stage of the process.
ACBP is based on methods of Organizational Semiotics (OS) to guide the process of clarifying problems with a broader, social vision. Initially, OS is applied in the second stage of the ACBP model through the Problem Articulation Model (PAM), supporting problem interpretation and analysis.
Abstract: In this report, we define simploidal polynomial functions and simploidal Bernstein bases, which are a generalization of the polynomials and Bernstein bases used in simplicial and tensorial Bézier patches. We then provide formulas for converting between simploidal polynomials expressed in various kinds of simploidal Bernstein bases.
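In the tensorial special case mentioned above, the bases involved can be illustrated directly (a sketch of ordinary univariate and tensor-product Bernstein bases, not of the simploidal generalization itself):

```python
from math import comb

def bernstein(n, i, t):
    # Univariate Bernstein basis polynomial B_{n,i}(t).
    return comb(n, i) * t**i * (1 - t)**(n - i)

def tensor_bezier(ctrl, u, v):
    # Evaluate a tensor-product (tensorial) Bezier patch with scalar
    # control values ctrl[i][j].
    m, n = len(ctrl) - 1, len(ctrl[0]) - 1
    return sum(bernstein(m, i, u) * bernstein(n, j, v) * ctrl[i][j]
               for i in range(m + 1) for j in range(n + 1))
```

Each such basis forms a partition of unity, which is one of the properties that makes conversions between Bernstein-type bases well behaved.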
Abstract: Brazilian society today faces a situation characterized by enormous differences in socio-economic conditions, culture, and access to technology and knowledge. This Technical Report, developed within the e-Cidadania project, contributes investigations into new design methods to address the questions raised by a system open to such differences. Here we present preliminary results of a design proposal for a system to support an inclusive social network. Considering the concepts of Universal Design and using techniques from Participatory Design, we propose a first approximation of the system design and a list of functional and non-functional requirements. The design decisions were recorded in a Design Rationale document.
Abstract: From an HCI point of view, current development processes do not adequately consider aspects of interaction design, nor participatory techniques for involving end users. Bonacin et al. (2008) proposed the Agile Inclusive Process Model (AIPM), a process model based on agile methods as well as on practices, methods and theories from Human-Computer Interaction, Participatory Design and Organizational Semiotics. In this technical report, we clarify and justify some of these practices and methods to emphasize and show the viability of considering them in software development processes.
Abstract: The Internet represents a new dimension of software development. It can be understood as an opportunity to develop systems that promote social inclusion and citizenship. These systems demand a particular way of developing software, in which accessibility and usability are key requirements. This technical report proposes a process model for agile software development that takes these requirements into account. The method brings together multidisciplinary practices from Participatory Design and Organizational Semiotics with concepts from agile methods.
Abstract: Some Content Management Systems (CMS) offer a number of facilities to make different services available through the Internet; however, these facilities may not meet all the requirements of a specific project. In addition, when developing an extension or plug-in that offers a specific service for a CMS, the developed software becomes highly coupled to the platform used. To address this problem, we propose an architecture that integrates CMSs with Web applications so that they can evolve independently. From the proposed architecture, it is possible to develop services for current and future CMSs.
Abstract: We present new quantum lower and upper bounds for several computational geometry problems. The bounds presented here improve on currently known results in a number of ways. We give asymptotically optimal bounds for one of the problems considered, and bounds that are optimal up to logarithmic factors for a number of other problems. We also settle an open problem of Bahadur et al. Some of these new bounds are obtained using a general algorithm for finding a minimum pair under a given arbitrary order relation.
Abstract: In this paper we present a new approach to solve the Graph Reconstruction Conjecture. We reduce the problem of reconstructing connected graphs to the problem of finding a special system of linear equations with a unique solution. We also show how this method can be applied to some simple cases and propose some extensions that possibly could be used for a proof of the conjecture.
Abstract: The Vibrionaceae family comprises a wide variety of organisms, including some severe human pathogens. Currently, 8 completely sequenced genomes are publicly available, and a natural challenge that arises from this scenario is to employ this information to establish a modern phylogeny.
Toward this goal, we propose an approach based on genomic rearrangements to compare complete genomes of Vibrionaceae strains, which can in principle be applied, with adaptations, to other species. In our approach, we employ a profile-based methodology to identify homologous genes, and we model evolutionary events such as gene losses, lateral gene transfers, and chromosomal rearrangements to estimate evolutionary distance, which we believe can substantially complement analyses based on gene markers or phenotypic characteristics.
Abstract: Software architecture and component-based development are complementary approaches to the development of software-intensive systems. Software architecture describes a system in terms of its logical architectural components, while component-based development focuses on the reuse of existing software components for building new software systems. However, software components are usually developed in programming languages that do not include component and software architecture abstractions, such as interfaces, architectural components and architectural connectors.
Thus, there is a gap between the conception of an abstract software architecture and its concrete implementation in a programming language. The COSMOS* model uses programming language features and well-known design patterns to represent software architectures explicitly in the program code. The COSMOS* model defines the specification and implementation of COSMOS* components, COSMOS* connectors, COSMOS* architectural configurations, and COSMOS* composite components. Case studies applying the COSMOS* model are discussed using the Java programming language.
Abstract: This work is a contribution to the Project "Acesso, Permanência e Prosseguimento da Escolaridade em Nível Superior de Pessoas com Deficiência: Ambientes Inclusivos" (PROESP/CAPES). Its objective is to propose a Process for Adapting Websites to Accessibility and Usability Requirements that supports website development teams in identifying and solving the problems commonly addressed by usability and accessibility tools, guidelines and heuristics. The process is meant to be adopted gradually, so that its content becomes integrated into the development routine of the team responsible. The development of the process was guided by the content and results of workshops held for the team responsible for the website of Unicamp's Grupo Gestor de Benefícios Sociais (GGBS); the process is thus inspired by the results obtained in these workshops. Once the content of the process is defined, we intend to make available a website offering ways to browse this content and a communication channel among its users (e.g., forum, blog, chat).
Abstract: We have presented a new approach that identifies clusters as trees of an optimum-path forest. We now extend the method to large datasets, with application to automatic GM/WM classification in MR-T1 images of the brain. The method is computed for a few randomly selected voxels, such that GM and WM define two optimum-path trees. The remaining voxels are classified incrementally, by identifying which tree would contain each voxel if it were part of the forest. Our method produces accurate results on phantom and real images, similar to those obtained by the state of the art, does not rely on templates, and takes less than 1.5 minutes on modern PCs.
Abstract: This paper presents a Markov model used to configure Chen, Toueg and Aguilera's NFD-U/NFD-E failure detector algorithm for unsynchronized clocks according to QoS requirements. The Markov model uses loss burst probabilities to capture message loss burst information. Experiments with data gathered from a link between two long-distance networks show that the proposed configurator performs better than the Chen, Toueg and Aguilera configurator when long loss bursts occur, and performs similarly to their configurator when no loss bursts occur.
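A two-state (Gilbert-style) loss chain is the simplest way to see how loss burst probabilities enter such a configurator (a sketch with hypothetical parameters, not the model from the paper):

```python
def gilbert_metrics(p, r):
    # Two-state loss model: 'good' -> 'bad' with probability p starts a
    # loss burst; 'bad' -> 'good' with probability r ends it.
    loss_rate = p / (p + r)   # stationary probability of the bad state
    mean_burst = 1.0 / r      # mean burst length (geometric distribution)
    return loss_rate, mean_burst

def burst_length_prob(r, k):
    # Probability that a loss burst lasts exactly k consecutive messages.
    return (1 - r) ** (k - 1) * r
```

With, say, p = 0.01 and r = 0.5, about 2% of messages are lost and bursts last 2 messages on average; feeding such burst statistics to a configurator is what distinguishes it from one that assumes independent losses.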
Instituto de Computação :: Universidade Estadual de Campinas
Caixa Postal 6176 • Av. Albert Einstein, 1251 - Cidade Universitária • CEP 13083-970 • Campinas/SP - Brasil • Fone: [19] 3521-5838