Prof. Dr. Asunción Gómez-Pérez is Vice-Rector for Research, Innovation and Doctoral Studies at Universidad Politécnica de Madrid (UPM). She is a Fellow of the European Academy of Science (2018) and a Full Professor of Artificial Intelligence. She has received the ARITMEL Award (National Prize of Computer Science, 2015), the UPM Annual Research Award (2015) and the National Ada Byron Award for Women Technologists in its second edition (2015). She leads the Ontology Engineering Group. Her research areas include ontological engineering, the Semantic Web, Linked Data, and multilingualism in information and knowledge management.
Her research experience also extends to national and international research programmes with public and private funding. She has published more than 300 papers and has supervised 22 Ph.D. theses.
Projects: she has directed a significant number of R&D&I projects funded by the Spanish National Plan, European programmes and private contracts. She has coordinated 5 European research projects and has participated as Principal Investigator for UPM in 22 European research projects.
Prof Enrico Motta has a Ph.D. in Artificial Intelligence from the UK’s Open University, where he is currently a Professor in Knowledge Technologies. He also holds a professorial appointment at the University of Bergen in Norway. In the course of his academic career he has authored over 350 refereed publications, and his h-index of 67 is an impact indicator that positions him among the top computer scientists in Europe.
His research focuses on large-scale data integration and analysis to support decision making in complex scenarios. Among his recent projects, he led the MK:Smart initiative, a £17.2M project that tackled key barriers to economic growth in Milton Keynes through the deployment of innovative data-intensive solutions. He is also currently working on new solutions for scholarly analytics; in particular, he is collaborating with Springer Nature to develop new tools that improve both the quality and the efficiency of editorial processes in the academic publishing industry.
Prof Motta was Editor-in-Chief of the International Journal of Human-Computer Studies from 2004 to 2018 and, over the years, has advised strategic research boards and governments in several countries, including the UK, the US, the Netherlands, Austria, Finland, and Estonia. In 2003 he founded the International Summer School on Ontology Engineering and the Semantic Web, a pioneering initiative that introduced an innovative pedagogical approach, which has since been adopted by several other similar initiatives.
Over the past 15-20 years we have witnessed a paradigm shift in Computer Science, brought about by the unprecedented availability of data at large scale. As a result of this paradigm shift, exciting new opportunities have arisen, allowing us to study the dynamics of a variety of phenomena, such as social networks, people’s mobility patterns and shopping behaviours. Within this overall context, a particularly interesting area of analysis is scholarly data, which allows us to gain significant insights into the dynamics of the research world. These insights are important not just because they allow us to better understand, at different levels of granularity, how research evolves, but also because they can help us identify biases and distortions in the system of scientific research and inform corrective actions.
In my presentation I will illustrate our work in this area, which has led to a number of significant research outcomes as well as the deployment of innovative technologies in commercial settings. Highlights include: an innovative hybrid approach to the generation of highly detailed maps of the space of research topics in a given field of science; a novel solution, inspired by epistemological models of science, which is able to predict the emergence of a new research field before this has been explicitly recognised by a research community; a geo-political analysis of Computer Science conferences that indicates a rather static landscape in which, despite the massive increase in scholarly production, new entries struggle to emerge; and a novel solution, in routine use at Springer Nature since 2016, which automatically generates the scholarly metadata for all conference proceedings in Computer Science. This has led to substantial efficiency improvements in the editorial workflow, as well as several million additional downloads of conference proceedings on the Springer portal.
AI has started entering our world in various forms and applications, and it will soon have profoundly changed the ways we live and work. But some parts of the world seem to be much more strongly affected by this transformation than others. Will the others be left behind? The USA, China, Europe and other regions are indeed moving at rather different speeds. Being first in research does not suffice to secure success in commercialization. Who in the world, and which sectors of society, will benefit, and which may suffer? What do we have to prepare for, and how could we do this?
Jan Hajič is a full professor of Computational Linguistics and the deputy head of the Institute of Formal and Applied Linguistics at the School of Computer Science, Charles University in Prague, where he also received his Ph.D. in 1995. His interests cover morphology and part-of-speech tagging of inflective languages, machine translation, deep language understanding, and the application of statistical methods in natural language processing in general. He also has extensive experience in building language resources with rich linguistic annotation for multiple languages, and is currently the main coordinator of LINDAT/CLARIAH-CZ, a large research infrastructure for language resources in the Czech Republic. His work experience includes both industrial research (IBM Research, Yorktown Heights, NY, USA, 1991-1993) and academia (Charles University in Prague, Czech Republic, and Johns Hopkins University, Baltimore, MD, USA, as a visitor in 1999-2000). He has published more than 200 conference and journal papers, a book on computational morphology, and several book chapters and encyclopaedia and handbook entries. He regularly teaches basic and advanced courses on statistical NLP and has extensive experience giving tutorials and lectures at various international training schools. He has been the PI or Co-PI of numerous large national and international grants and projects, including two U.S. NSF-funded projects and several EU-funded Framework and Horizon 2020 projects.
Language Resources have a long history, one that in fact predates their role as training data for machine learning approaches to Natural Language Processing. After briefly tracing their history from the Brown Corpus to Universal Dependencies, the talk will focus on how various linguistically relevant data contribute to both linguistic research and applications, and how their role is changing alongside new developments in machine learning, specifically in today’s prevalent Deep Learning approaches. Language Resources come in three types: naturally occurring resources (such as large monolingual or parallel corpora, for both text and speech, often correlated with real-world knowledge or events); linguistically structured resources, such as dictionaries and lexicons; and linguistically annotated corpora. Each has different uses for different purposes. The talk will argue that despite the latest progress in Deep Learning and in high-quality end-to-end practical systems, annotated and structured linguistic resources play a very important role not only in basic linguistic research but also in developing new methods for natural language understanding, and remain an indispensable ingredient in the quest for true Artificial Intelligence.
Dr. Yolanda Gil is Director of Knowledge Technologies at the Information Sciences Institute of the University of Southern California, and Research Professor in Computer Science and in Spatial Sciences. She is also Director of the USC Center for Knowledge-Powered Interdisciplinary Data Science. She received her M.S. and Ph.D. degrees in Computer Science from Carnegie Mellon University, with a focus on artificial intelligence. Her research is on intelligent interfaces for knowledge capture and discovery, which she investigates in a variety of projects concerning scientific discovery, knowledge-based planning and problem solving, information analysis and assessment of trust, semantic annotation and metadata, and community-wide development of knowledge bases. Dr. Gil collaborates with scientists in different domains on semantic workflows and metadata capture, social knowledge collection, computer-mediated collaboration, and automated discovery. She is a Fellow of the Association for Computing Machinery (ACM) and Past Chair of its Special Interest Group on Artificial Intelligence. She is also a Fellow of the Association for the Advancement of Artificial Intelligence (AAAI) and was elected its 24th President in 2016.
Artificial intelligence will play an increasingly prominent role in scientific research ecosystems, and will become indispensable as more interdisciplinary science questions are tackled. While in recent years computers have propelled science by crunching through data and leading to a data science revolution, qualitatively different scientific advances will result from advanced intelligent technologies for crunching through knowledge and ideas. We propose seven principles for developing “thoughtful artificial intelligence”, which will turn intelligent systems into partners for scientists. We present a personal perspective on a research agenda for thoughtful artificial intelligence, and discuss its potential for data science and scientific discovery.
Moshe Y. Vardi is the George Distinguished Service Professor in Computational Engineering and Director of the Ken Kennedy Institute for Information Technology at Rice University. He is the recipient of several awards, including the ACM SIGACT Gödel Prize, the ACM Kanellakis Award, the ACM SIGMOD Codd Award, the Blaise Pascal Medal, the IEEE Computer Society Goode Award, and the EATCS Distinguished Achievements Award. He is the author and co-author of over 600 papers, as well as two books. He is a fellow of several societies and a member of several academies, including the US National Academy of Engineering and the National Academy of Sciences. He holds honorary doctorates from six universities. He is a Senior Editor of the Communications of the ACM, the premier publication in computing.
Automation, driven by technological progress, has been increasing inexorably for the past several decades. Two schools of economic thinking have for many years been engaged in a debate about the potential effects of automation on jobs: will new technology spawn mass unemployment, as the robots take jobs away from humans? Or will the jobs robots take over create demand for new human jobs? I will present data that demonstrate that the concerns about automation are valid. In fact, technology has been hurting working-class people for the past 40 years. The discussion about humans, machines and work tends to be a discussion about some undetermined point in the far future. But it is time to face reality. The future is now.
Dr. Richard Benjamins is Data & AI Ambassador at Telefonica, LUCA, where he is responsible for making Data & AI sustainable from a business, societal and ethical perspective. He is among the 100 most influential people in data-driven business (DataIQ 100, 2018). He was Group Chief Data Officer at AXA (Insurance) and worked for 10 years at Telefonica, occupying several management positions related to Big Data and Analytics across all areas of the value chain. His passion lies in creating value from data: business value, but also value for society, and he is the founder of Telefonica’s Big Data for Social Good department. He is a member of the EC’s B2G data-sharing Expert Group and a frequent speaker at Data and Artificial Intelligence events. He is also a strategic advisor to BigML (“Machine Learning made easy”). He was co-founder and director at iSOCO (1999-2007) and has held positions at universities and research institutes in Madrid, Amsterdam, São Paulo, Paris, and Barcelona. He holds a PhD in Cognitive Science / Artificial Intelligence from the University of Amsterdam.
Recently, much attention has been given to the unintended and undesired consequences of Artificial Intelligence (AI), such as unfair bias leading to discrimination, or the lack of explanations for the results of AI systems. There are several important questions to answer before AI can be deployed at scale in our businesses and societies. Most of these issues are being discussed by experts and the wider community, and there seems to be broad consensus on where they come from. There is, however, less consensus on, and less experience with, how to deal with those issues in practice in organizations that develop and use AI, from both a technical and an organizational perspective. In this talk, I will briefly discuss the practical case of a large organization that is putting in place a company-wide methodology to minimize the risk of undesired consequences of AI. We hope that other organizations can learn from this and that our experience contributes to making the best of AI while minimizing its risks.
Roberto Navigli is Professor of Computer Science at the Sapienza University of Rome, where he heads the multilingual Natural Language Processing group. He is one of the few researchers to have received two prestigious ERC grants in computer science, namely an ERC Starting Grant on multilingual word sense disambiguation (2011-2016) and an ERC Consolidator Grant on multilingual language- and syntax-independent open-text unified representations (2017-2022). He was also a co-PI of a Google Focused Research Award on NLU. In 2015 he received the META prize for groundbreaking work in overcoming language barriers with BabelNet, a project also highlighted in The Guardian and Time magazine, and winner of the Artificial Intelligence Journal prominent paper award 2017. He is the co-founder of Babelscape, a successful Sapienza startup company which enables Natural Language Processing in dozens of languages.
Understanding language is a hard task for a computer even in the era of deep learning. In this talk I will advocate the importance of interdisciplinary work for multilingual Natural Language Understanding: I will present current state-of-the-art results attained by joining forces between computer scientists and (computational) linguists, in order to address the knowledge acquisition bottleneck and scale up key tasks in word- and sentence-level semantics which enable computers to understand what is written in a text. I will also showcase innovative multilingual solutions developed in my research group and at Babelscape, a Sapienza startup company I co-founded.
Víctor Maojo is a Full Professor at the Department of Artificial Intelligence, Universidad Politécnica de Madrid (UPM). He holds an MD degree from the University of Oviedo, a PhD in Medicine from the University of A Coruña and a PhD in Computer Science from the Universidad Politécnica de Madrid.
He has been a visiting professor and consultant at Georgia Tech (Atlanta, USA), and a Research Fellow in the joint program in medical informatics at the Health Science and Technology Division at Harvard University-MIT (Boston, USA). He has been a principal investigator in numerous national and European Commission-funded projects, in two of them (Action Grid and Africa Build) as coordinator. His main research interests lie in various areas related to Artificial Intelligence in Biomedicine.
He has worked as an expert for the European Commission since 1997 and for numerous international agencies. He currently serves on the editorial boards of several scientific journals in the area of biomedical informatics (Journal of the American Medical Informatics Association, Methods of Information in Medicine and Journal of Biomedical Informatics). In 2011 he was elected a Fellow of the American College of Medical Informatics (ACMI) for his contributions to the area of biomedical informatics. In 2017 he was elected a Founding Member of the new International Academy of Health Sciences Informatics. In 2019 he received the National Award of the Spanish Society of Health Informatics (SEIS). He currently coordinates the PhD program in Artificial Intelligence at UPM.
Over the last decade Artificial Intelligence (AI) has attracted enormous interest from scientists, industry, the media and society in general, raising great challenges and opportunities as well as many concerns related to issues such as future employment, ethics and the economy. The research and development of new approaches in areas such as machine learning, imaging and semantics, combined with a substantial improvement in computing power and growth in the number of scientists and institutions involved, have led to novel techniques and substantial promise.
In the area of biomedicine, the exponential growth of clinical, genomic and population information provides scientists with the basic data for improving scientific discovery and clinical diagnosis and prognosis in areas such as cancer and rare diseases, among many others. However, the lessons learned during five decades of research on AI in biomedicine, which show that this is one of the most difficult areas for the application of informatics in general, are not always remembered. In this talk I will summarize the many recent achievements in the area, which promise so many innovative systems, and argue that researchers must keep in mind the intrinsic biological nature of humans and the lessons learned in the past by AI pioneers. AI scientists must still face some fundamental scientific challenges in the area and avoid the kinds of mistakes that were made in the past and are beginning to be observed, again, in current developments.
Insights into learning and intelligence are provided from a deep bidirectional perspective, characterized by inward encoding/cognition and outward reconstruction/implementation. First, we give an overview of bidirectional learning, from approaches studied in the late eighties and early nineties, such as the autoencoder, Lmser reconstruction, and BYY harmony learning, to those developed in recent years, such as variational autoencoders, deep generative models, GANs, U-Net, and DenseNet. Then we proceed to bidirectional intelligence, driven by long-term dynamics for parameter learning and short-term dynamics for image thinking and rational thinking. Image thinking deals with information flow as if thinking were displayed in the real world, exemplified by typical tasks of bidirectional deep learning, while rational thinking handles symbolic strings, performing uncertainty reasoning and problem solving, exemplified by AlphaGo Zero-like searching, the IBM Watson system, and causal computation.
Lei Xu is Emeritus Professor at the Chinese University of Hong Kong; Zhiyuan Chair Professor at Shanghai Jiao Tong University (SJTU); Chief Scientist of the SJTU AI Research Institute; and Director of the Neural Computation Research Centre in the Brain and Intelligence Science-Technology Institute, Shanghai ZhangJiang National Lab. He has received several national and international academic awards, including the 1993 National Nature Science Award, the 1995 Leadership Award of the International Neural Networks Society (INNS) and the 2006 APNNA Outstanding Achievement Award. He was elected Fellow of the IEEE in 2001, Fellow of the International Association for Pattern Recognition in 2002, and Fellow of the European Academy of Sciences (EURASC) in 2003. He has published about 100 journal papers, with about 5,500 Web-of-Science citations (over 3,900 from his top 10 papers, with 1,319 for the top paper and 119 for the 10th). He has given dozens of keynote and invited lectures at various international conferences. He has served as Editor-in-Chief or associate editor of several academic journals, including Neural Networks (1995-2016) and IEEE Transactions on Neural Networks (1994-98), and has taken on various roles in academic societies, including the INNS Governing Board (2001-03), the INNS award committee (2002-03), the Fellow committee of the IEEE Computational Intelligence Society (2006-07), and the EURASC scientific committee (2014-17).
The history of renin is a "success story" of the kind we would like to count many more of in physiology and medicine: an observation by a physiologist that led, some 80 years later, to renin-angiotensin system (RAS) blocking drugs that have profoundly revolutionized cardiovascular therapy. At the beginning of the 1970s, little was known about the proteins and genes of the RAS. The structure and function of three of the major components of this system were elucidated in part in our laboratory using molecular biology and genetic tools: renin, angiotensinogen (the renin substrate), and angiotensin I-converting enzyme (ACE).
The entire RAS emerged some 400 million years ago, as shown by phylogenetic studies. This system was essential during the transition of several vertebrate species from salt water to fresh water, where salt conservation in the body became critical. That function is still relevant in humans today. Many of the RAS proteins have properties that recall their ancestral origin. This is the case with ACE, which converts angiotensin I into angiotensin II, the active peptide of the system, and inactivates bradykinin, a vasodilator peptide. ACE has evolved from an initial developmental and reproductive function to the regulation of body volume and blood pressure in vertebrates. In mice, inactivation of the ACE gene results in a marked decrease in blood pressure, decreased renal function and two unexpected effects: infertility in male mice and a selective decrease in hematopoiesis.
In humans, the renin-angiotensin system has probably also played a major role in adapting to hot and humid climates and in preserving salt. There are several genetic variants of the renin-angiotensin system, some of which, such as variants of angiotensinogen, promote salt retention. One of these variants is very common in African subjects, slightly less so in Asians and even less so in Europeans. This genetically determined ability to retain sodium in a warm climate may have provided a selective advantage during the evolution of the human species. Its persistence can nowadays become a disadvantage in a society where salt intake is often excessive. In humans, a loss-of-function mutation in any of the major genes of the RAS leads to neonatal death by anuria, probably due to a drop in perfusion pressure in the kidney, showing the importance of the RAS in the maintenance of blood pressure during fetal life.
In conclusion, the renin-angiotensin system, as we know it, is a good example of physiological adaptation in vertebrates. Inhibition of this system is indicated in many pathological situations, but inappropriate and excessive inhibition under certain circumstances may compromise the ancestral mechanisms of sodium conservation.
Replacing waste-generating chemical processes and polluting energy resources with environmentally benign alternatives is a major goal. In both directions, catalysis can play a center-stage role. Our work on the design and development of new “green” catalytic reactions of significance for (a) chemical synthesis and (b) organic hydrogen carrier systems will be briefly presented.
The ocean is considered one of the most promising renewable energy resources, and piezoelectric materials have been acknowledged as an effective technology for harvesting energy from ocean waves. This lecture introduces and reviews the development of these technologies. First, by comparing the power-generation capability, transmission efficiency, and structural installation and economic costs of the three major available energy-conversion techniques, namely electrostatic, electromagnetic and piezoelectric technologies, the advantages of piezoelectric energy conversion are identified. The review then sums up several methodologies and designs for harvesting energy from ocean waves. Finally, future research directions and extensions of piezoelectric technologies to other areas, such as high-rise buildings, wind energy and vehicles, are discussed in depth.
As the age of fossil fuels comes to an end, the need for more efficient and sustainable renewable energy technologies is greater than ever. This presentation will give an overview of recent developments in solar technologies that aim to address the energy challenge. In particular, nanostructured materials synthesized via the bottom-up approach present an opportunity for low-cost manufacturing of future generations of devices. We demonstrate various multifunctional materials, namely materials that exhibit more than one functionality, and structure/property relationships in such systems, including new strategies for the synthesis of multifunctional nanoscale materials for applications in photovoltaics, solar hydrogen production, luminescent solar concentrators and other emerging optoelectronic technologies [2-31].
[1] F. Rosei, J. Phys. Cond. Matt. 16, S1373 (2004)
[2] C. Yan et al., Adv. Mater. 22, 1741 (2010)
[3] C. Yan et al., J. Am. Chem. Soc. 132, 8868 (2010)
[4] R. Nechache et al., Adv. Mater. 23, 1724 (2011)
[5] R. Nechache et al., Appl. Phys. Lett. 98, 202902 (2011)
[6] G. Chen et al., Chem. Comm. 48, 8009 (2012)
[7] G. Chen et al., Adv. Func. Mater. 22, 3914 (2012)
[8] R. Nechache et al., Nanoscale 4, 5588 (2012)
[9] J. Toster et al., Nanoscale 5, 873 (2013)
[10] T. Dembele et al., J. Power Sources 233, 93 (2013)
[11] S. Li et al., Chem. Comm. 49, 5856 (2013)
[12] T. Dembele et al., J. Phys. Chem. C 117, 14510 (2013)
[13] R. Nechache et al., Nature Photonics 9, 61 (2015)
[14] R. Nechache et al., Nanoscale 8, 3237 (2016)
[15] R. Adhikari et al., Nano Energy 27, 265 (2016)
[16] H. Zhao et al., Small 12, 3888 (2016)
[17] J. Chakrabartty et al., Nanotechnology 27, 215402 (2016)
[18] D. Benetti et al., J. Mater. Chem. C 4, 3555 (2016)
[19] K. Basu et al., Sci. Rep. 6, 23312 (2016)
[20] Y. Zhou et al., Adv. En. Mater. 6, 1501913 (2016)
[21] H. Zhao et al., Nanoscale 8, 4217 (2016)
[22] L. Jin et al., Adv. Sci. 3, 1500345 (2016)
[23] H. Zhao et al., Small 11, 5741 (2015)
[24] S. Li et al., Small 11, 4018 (2015)
[25] K.T. Dembele et al., J. Mater. Chem. A 3, 2580 (2015)
[26] H. Zhao et al., Nano Energy 34, 214-223 (2017)
[27] S. Li et al., Nano Energy 35, 92-100 (2017)
[28] G.S. Selopal et al., Adv. Func. Mater. 27, 1401468 (2017)
[29] X. Tong et al., Adv. En. Mater. 8, 1701432 (2018)
[30] H. Zhao, F. Rosei, Chem 3, 229-258 (2017)
[31] J. Chakrabartty et al., Nature Phot. 12, 271-276 (2018)