In 1981, the 25 universities that belong to the Council of Rectors of Chilean Universities (CRUCH) published 602 articles indexed in the Web of Science (WoS). By 2016, that number had increased to 11,133. How did these universities become so much more productive? Studies show that universities can improve their productivity through academic strategies, such as hiring more researchers, financing more research projects, and incentivizing collaboration with other universities, or through non-academic strategies, such as publishing the same paper in different languages, increasing the number of academics with double affiliations, or participating in large research groups in which all members are listed as authors on the group's publications. This study analyzes a dataset of 120,329 articles published in the Web of Science by academics from CRUCH universities between 1980 and 2016. Our main objective is to estimate the extent to which academic and non-academic strategies have been used to improve the productivity of public universities in Chile.
Probabilistic program models can be used to describe systems that exhibit uncertainty, such as communication protocols over unreliable channels, randomized algorithms in distributed systems, or fault-tolerant systems. Their semantics can be defined in terms of Markov chains, Markov decision processes, or stochastic games. The usage of resources (time, power, memory, bandwidth, etc.) can be modeled by assigning a reward (or cost) to individual transitions or, more generally, to whole computation paths. The resulting Markov reward model can then be analyzed to verify safety, liveness, and performance properties, for example: "What is the distribution/expectation/variance of the time/cost needed to reach a given target state?" More generally, such properties can be expressed in stochastic logics for probabilistic systems (e.g., PCTL and PRCTL) and verified by model-checking techniques. We give an overview of new techniques for verifying quantitative properties of general infinite-state probabilistic systems (with unbounded counters, buffers, process creation, or recursion). These techniques exploit special structural properties of the underlying system models, such as sequential decomposition, finite attractors, or partial orders and monotonicity properties.
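For the finite-state case, the expected-time question above reduces to solving a linear system. The following sketch (not from the talk; the chain and its transition probabilities are invented for illustration) computes the expected number of steps to reach an absorbing target state in a small Markov reward model, where every transition carries a reward of 1 time unit:

```python
import numpy as np

# Hypothetical 3-state Markov chain; state 2 is the target (absorbing).
P = np.array([
    [0.5, 0.3, 0.2],
    [0.2, 0.5, 0.3],
    [0.0, 0.0, 1.0],
])

transient = [0, 1]                       # the non-target states
Q = P[np.ix_(transient, transient)]      # transitions among transient states
r = np.ones(len(transient))              # per-step reward: 1 time unit

# Expected time-to-target x satisfies x = r + Q x, i.e. (I - Q) x = r.
x = np.linalg.solve(np.eye(len(transient)) - Q, r)
print(x)  # expected hitting times from states 0 and 1
```

For infinite-state models (unbounded counters, recursion), this direct approach no longer applies, which is precisely where the structural techniques surveyed in the talk come in.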
Although JSON is currently one of the most popular formats for exchanging data on the Web, there are very few studies of it, and there is no agreed-upon theoretical framework for dealing with JSON. In this talk, we propose a formal data model for JSON documents and, based on the features common to available systems that use JSON, we define a lightweight query language for navigating through JSON documents. We also introduce a logic capturing the schema proposal for JSON and study the complexity of the basic computational tasks associated with these two formalisms.
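To make the idea of a navigational query language concrete, here is a toy evaluator (my own illustration, not the formalism from the talk): a query is a sequence of steps, where a step is an object key, an array index, or the wildcard "*" matching any child:

```python
import json

def navigate(value, steps):
    """Return all values reachable from `value` by following `steps`."""
    if not steps:
        return [value]
    step, rest = steps[0], steps[1:]
    results = []
    if step == "*":
        # Wildcard: descend into every child of an object or array.
        children = (list(value.values()) if isinstance(value, dict)
                    else value if isinstance(value, list) else [])
        for child in children:
            results.extend(navigate(child, rest))
    elif isinstance(value, dict) and step in value:
        results.extend(navigate(value[step], rest))
    elif isinstance(value, list) and isinstance(step, int) and 0 <= step < len(value):
        results.extend(navigate(value[step], rest))
    return results

doc = json.loads('{"name": "a", "children": [{"name": "b"}, {"name": "c"}]}')
print(navigate(doc, ["children", "*", "name"]))  # ['b', 'c']
```

Even this tiny language raises the kinds of questions the talk studies: evaluation complexity, expressiveness, and how such navigation interacts with a schema formalism.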
The Web is the most powerful communication medium and the largest public data repository that humankind has created. Its content ranges from great reference sources such as Wikipedia to ugly fake news. Indeed, social (digital) media is just an amplifying mirror of ourselves. Hence, the main challenge for search engines and other websites that rely on web data is to assess the quality of that data. However, since all people have their own biases, web content, as well as our web interactions, is tainted with many biases. Data bias includes redundancy and spam, while interaction bias includes activity bias and presentation bias. In addition, algorithms sometimes add bias of their own, particularly in search and recommendation systems. Because bias generates bias, we stress the importance of debiasing data, as well as using context and other techniques, such as explore & exploit, to break the filter bubble. The main goal of this talk is to make people aware of the different biases that affect all of us on the Web. Awareness is the first step toward fighting and reducing the vicious cycle of bias.
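As a minimal sketch of the explore & exploit idea mentioned above (an epsilon-greedy policy; the item names and scores are invented for illustration): with probability epsilon the recommender serves a random item, injecting content the user's profile would never surface and thereby helping to break the filter bubble; otherwise it serves the top-scored item.

```python
import random

def recommend(scores, epsilon=0.1, rng=random):
    """scores: dict mapping item -> model score; returns one item."""
    if rng.random() < epsilon:
        return rng.choice(list(scores))   # explore: any item, regardless of score
    return max(scores, key=scores.get)    # exploit: the best-scored item

scores = {"news_a": 0.9, "news_b": 0.4, "news_c": 0.1}
picks = [recommend(scores, epsilon=0.2) for _ in range(1000)]
# Most picks are the top-scored item; a random minority are exploratory.
```

The design trade-off is the one the talk alludes to: exploration sacrifices some short-term relevance in exchange for less biased feedback data.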
In this talk, I will try to paraphrase the goals of the nucleus centre (as I perceived them from the introductory talk) based on my own complementary research experience in semantic search. I will try to sketch the design of a system that could enable the kind of semantic search put forward in the funding pitch. I will then try to give an impression of why such a system does not already exist: for each component of the system, I will cover the main challenges, the state of the art (including work I have been involved in), and the research questions to tackle going forward. Though I doubt we can make Google bankrupt by 2017, I do hope that the talk can help to identify important research questions in the area ... questions that hopefully match up with our joint expertise.