Browse by Tags: wais research seminar

Number of items: 87.
  1.
    Linked Data in the Digital Humanities: Examples, Projects, and Tools
    Harnessing the potential of semantic web technologies to support and diversify scholarship is gaining popularity in the digital humanities. This talk describes a number of projects utilising Linked Data ranging from musicology and library metadata, to the representation of the narrative structure, philological, bibliographical, and museological data of ancient Mesopotamian literary compositions.

    Shared with the University by
    Ms Amber Bu
  2.
    "Thematically Analysing Social Network Content During Disasters Through the Lens of the Disaster Management Lifecycle" & "Investigating Similarity Between Privacy Policies of Social Networking Sites as a Precursor for Standardization"
    Abstract 1: Social networks such as Twitter are often used for disseminating and collecting information during natural disasters, and their potential for use in disaster management has been acknowledged. However, a more nuanced understanding of the communications that take place on social networks is required to integrate this information more effectively into disaster management processes. The type and value of information shared should be assessed, determining the benefits and issues, with credibility and reliability as known concerns. Mapping tweets onto the modelled stages of a disaster can be a useful evaluation for determining the benefits and drawbacks of using data from social networks, such as Twitter, in disaster management. A thematic analysis of tweets' content, language and tone during the UK storms and floods of 2013/14 was conducted. Manual scripting was used to determine the official sequence of events and to classify the stages of the disaster into the phases of the Disaster Management Lifecycle, producing a timeline. Twenty-five topics discussed on Twitter emerged, and three key types of tweets, based on language and tone, were identified. The timeline represents the events of the disaster, according to the Met Office reports, classed into B. Faulkner's Disaster Management Lifecycle framework. Context is provided when the analysed tweets are observed against the timeline, illustrating a potential basis and benefit for mapping tweets into the Disaster Management Lifecycle phases. Comparing the number of tweets submitted each month with the timeline suggests that users tweet more as an event heightens and persists, and that they generally express greater emotion and urgency in their tweets. This paper concludes that the thematic analysis of content on social networks, such as Twitter, can be useful in gaining additional perspectives for disaster management. It demonstrates that mapping tweets into the phases of a Disaster Management Lifecycle model can have benefits in the recovery phase, not just the response phase, to potentially improve future policies and activities.

    Abstract 2: The current execution of privacy policies, as a mode of communicating information to users, is unsatisfactory. Social networking sites (SNS) exemplify this issue, attracting growing concerns regarding their use of personal data and its effect on user privacy. This demonstrates the need for more informative policies. However, SNS lack the incentives required to improve policies, which is exacerbated by the difficulty of creating a policy that is both concise and compliant. Standardization addresses many of these issues, providing benefits for users and SNS, although it is only possible if policies share attributes which can be standardized. This investigation used thematic analysis and cross-document structure theory to assess the similarity of attributes between the privacy policies (as available in August 2014) of the six most frequently visited SNS globally. Using the Jaccard similarity coefficient, two types of attribute were measured: the clauses used by SNS and the coverage of forty recommendations made by the UK Information Commissioner's Office. Analysis showed that whilst similarity in the clauses used was low, similarity in the recommendations covered was high, indicating that SNS use different clauses to convey similar information. The analysis also showed that the low similarity in clauses was largely due to differences in semantics, elaboration and functionality between SNS. Therefore, this paper proposes that the policies of SNS already share attributes, indicating the feasibility of standardization, and makes five recommendations, based on the findings of the investigation, to begin facilitating this.
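    The Jaccard similarity coefficient used in the second study is simple to reproduce. Below is a minimal, illustrative sketch (not the authors' code) that applies it to two invented sets of clause labels; the clause names are placeholders.

```python
# Jaccard similarity between two sets of privacy-policy clause labels.
# The clause labels below are invented for illustration only.

def jaccard(a: set, b: set) -> float:
    """|A intersect B| / |A union B|; defined as 1.0 for two empty sets."""
    if not a and not b:
        return 1.0
    return len(a & b) / len(a | b)

policy_a = {"data collection", "third-party sharing", "cookies", "retention"}
policy_b = {"data collection", "advertising", "cookies", "retention"}

print(f"Clause similarity: {jaccard(policy_a, policy_b):.2f}")  # 0.60
```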

    Shared with the University by
    Mr Roushdat Elaheebocus
  3.
    A Vision for the future of Technology Enhanced Learning: Key trends and implications
    Abstract: The talk will provide a review of the digital landscape, along with an outline of the affordances of digital technologies. It will focus on networking and open practices and will argue that practitioners need new digital literacies to harness the potential of digital technologies. It will argue that the teacher's role is central and that teachers need a range of Continuing Professional Practice opportunities. It will describe a new project at DCU, #OpenTeach, which will develop a course for our 90 part-time tutors to help them design and facilitate online courses. It will argue that new approaches to Learning Design are needed and will describe a number of recent frameworks. It will conclude with a set of final reflections drawing on a recent EU-commissioned report. Bio: Gráinne Conole is a professor and Head of Open Education in the National Institute for Digital Learning at Dublin City University. She has worked at the Universities of Bath Spa, Bristol, Leicester, the OU and Southampton. Her research interests are in the use of technologies for learning, including Open Educational Resources (OER) and Massive Open Online Courses (MOOCs), new approaches to designing for learning, e-pedagogies, and social media. She holds an HEA National Teaching Fellowship and is a fellow of EDEN and ASCILITE. She has published and presented over 1000 talks, workshops and articles. See http://e4innovation.co.uk for more details.

    Shared with the University by
    Ms Amber Bu
  4.
    A data-driven approach to disease control
    As our world becomes increasingly interconnected, diseases can spread at a faster and faster rate. Recent years have seen large-scale influenza, cholera and Ebola outbreaks, and failing to react to an outbreak in a timely manner leads to a larger spread and longer persistence. Furthermore, diseases like malaria, polio and dengue fever have been eliminated in some parts of the world but continue to put a substantial burden on countries in which these diseases are still endemic. To reduce the disease burden and eventually move towards countrywide elimination of diseases such as malaria, understanding human mobility is crucial both for planning interventions and for estimating the prevalence of the disease. In this talk, I will discuss how various data sources can be used to estimate human movements, population distributions and disease prevalence, as well as the relevance of this information for intervention planning. In particular, anonymised mobile phone data has been shown to be a valuable source of information for countries with unreliable population density and migration data, and I will present several studies in which mobile phone data has been used to derive these measures.

    Shared with the University by
    Mr Roushdat Elaheebocus
  5.
    AI in recent art practice
    Abstract: Over the past couple of years, there has been increasing interest in applying the latest advances in machine learning to creative projects in art and design. From DeepDream and style transfer to a GAN-generated painting selling for $430,000 at auction, AI art has moved beyond the world of research and academia and become a trend in its own right. Meanwhile, the contemporary art world's fascination with the social impact of facial recognition, recommendation systems and deep fakes has encouraged artists to explore AI critically as subject matter. This talk will give an overview of how artists and technologists are using and thinking about machine learning, its creative potential and societal impact. Bio: Luba Elliott is a curator and researcher specialising in artificial intelligence in the creative industries. She is currently working to educate and engage the broader public about the latest developments in AI art through talks, workshops and exhibitions at venues across the art and technology spectrum including The Photographers’ Gallery, Victoria & Albert Museum, Seoul MediaCity Biennale, Impakt Festival, Leverhulme Centre for the Future of Intelligence, NeurIPS and ECCV. Her Creative AI London meetup community includes 2,300+ members. She has advised organisations including The World Economic Forum, Google and City University on the topic and was featured on the BBC, Forbes and The Guardian. She is a member of the AI council at the British Interactive Media Association. Previously, she worked in startups and venture capital and has a degree in Modern Languages from Cambridge University.

    Shared with the University by
    Ms Amber Bu
  6.
    Automating Travel Surveys
    Abstract: Travel surveys are used to estimate the percentage of citizens using certain modes of transport, and are the basic building block of data-driven transport policies. Traditionally, travel surveys are done on paper or with telephone assistance, a costly and cumbersome approach that limits the number of times the survey can be applied. The advent of ubiquitous smartphones opened the door to implementing travel surveys as a mobile app that (i) collects relevant sensor data, (ii) uses Machine Learning models to detect the transportation mode, and (iii) interfaces with the user to verify the machine's prediction, correcting it if necessary. In this talk I will present a general framework for mobile travel surveys, review the challenges and research questions associated with each phase, and present some of the advances we have made in the context of the H2020 QROWD project. Speaker information: Luis-Daniel Ibáñez is a research fellow in the Web and Internet Science group of the University of Southampton, UK. His research activity spans the Web, Linked Open Data and Crowdsourcing. He holds a PhD in Informatics from the University of Nantes. He is currently the technical director of the H2020 QROWD project, and also participates in two further EU-funded initiatives: the European Data Portal, where he contributes to the analysis of its effectiveness, and the EU Blockchain Observatory and Forum. Previously, he was deputy coordinator of the Open Data Incubator for Europe, an EU-funded incubation programme for SMEs innovating around Open Data.
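    As a rough illustration of steps (ii) and (iii) of the pipeline described above, the sketch below trains a classifier on invented summary features and produces a mode prediction that an app could then show to the user for confirmation. The features, values and labels are assumptions, not the QROWD implementation.

```python
# Toy transport-mode classifier: invented features and labels for illustration.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Each row: [mean speed (m/s), speed variance, mean acceleration magnitude]
X = np.array([
    [1.2,  0.1, 0.30],   # walking
    [4.5,  1.0, 0.60],   # cycling
    [9.0,  6.5, 1.20],   # bus
    [13.0, 9.0, 1.00],   # car
])
y = ["walk", "cycle", "bus", "car"]

clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)

# Step (iii): the app would show this prediction to the user for verification.
print(clf.predict([[1.0, 0.2, 0.25]]))  # likely ['walk']
```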

    Shared with the University by
    Ms Amber Bu
  7.
    B2C e-Commerce in Indonesia: Personalisation & Impulse Buying
    Abstract: This is work in progress on unleashing the potential of B2C e-Commerce in Indonesia. The study focuses on millennials' impulse buying as digital buyers from SMEs in Indonesia. Potential personalisation dimensions have been identified; these need further experimentation and evaluation. About the speaker: Dr Betty Purwandari is a member of academic staff in the Faculty of Computer Science, Universitas Indonesia, and course leader of the MSc in Information Technology there. Her research is on Web Science and e-commerce, mainly with SMEs. She also works on e-participation with the Executive Office of the President, Republic of Indonesia. She did her undergraduate degree at Universitas Indonesia, obtained her MSc from UCL as a British Chevening awardee, and completed her PhD at the University of Southampton. After returning to Universitas Indonesia, she was appointed as the university's Information Technology director. Her professional achievements were recognised by the British Council Indonesia in the 2016 UK Alumni Award.

    Shared with the University by
    Ms Amber Bu
  8.
    Bay 13 pecha kucha
    The talks are by EA Draffan, Nawar Halabi, Gareth Beeston and Neil Rogers. In 6m40s and 20 slides, each member of Bay 13 will introduce themselves, explaining their background and research interests, so those in WAIS can put a name to a face, and chat after the event if there are common interests.

    Shared with the University by
    Mr Roushdat Elaheebocus
  9.
    Bias in the Social Web
    Abstract: A frequent assumption about Social Media is that its open nature leads to a representative view of the world. In this talk we consider bias occurring in the Social Web. We will consider a case study of LiquidFeedback, a direct-democracy platform of the German Pirate Party, as well as models of (non-)discriminating systems. As a conclusion of this talk we stipulate the need for Social Media systems to bias their working according to social norms and to publish the bias they introduce. Speaker Biography: Prof Steffen Staab studied computer science and computational linguistics in Erlangen (Germany), Philadelphia (USA) and Freiburg (Germany). Afterwards he worked as a researcher at the University of Stuttgart/Fraunhofer and the University of Karlsruhe, before becoming a professor in Koblenz (Germany). Since March 2015 he has also held a chair in Web and Computer Science at the University of Southampton, sharing his time between Southampton and Koblenz. In his research career he has managed to avoid almost all of the good advice that he now gives to his team members. Such advice includes focusing on research (vs. a company) or concentrating on only one or two research areas (vs. considering ontologies, the Semantic Web, the Social Web, data engineering, text mining, peer-to-peer, multimedia, HCI, services, software modelling and programming, and more). Though, actually, improving how we understand and use text and data is a good common denominator for a lot of Steffen's professional activities.

    Shared with the University by
    Miss Priyanka Singh
  10.
    Big Data or Right Data?
    Abstract: Big data is nowadays a fashionable topic, independently of what people mean when they use the term. But being big is just a matter of volume, although there is no clear agreement on the size threshold. On the other hand, it is easy to capture large amounts of data using a brute-force approach. So the real goal should not be big data, but to ask ourselves, for a given problem, what is the right data and how much of it is needed. For some problems this implies big data, but for the majority of problems much less data is needed. In this talk we explore the trade-offs involved and the main problems that come with big data, using the Web as a case study: scalability, redundancy, bias, noise, spam, and privacy. Speaker Biography: Ricardo Baeza-Yates has been VP of Research for Yahoo Labs since 2006, leading teams in the United States, Europe and Latin America, and has been based in Sunnyvale, California, since August 2014. During this time he has led the labs in Barcelona and Santiago de Chile, and between 2008 and 2012 he also oversaw the Haifa lab. He is also a part-time Professor at the Department of Information and Communication Technologies of Universitat Pompeu Fabra in Barcelona, Spain. During 2005 he was an ICREA research professor at the same university. Until 2004 he was a Professor at, and before that founder and Director of, the Center for Web Research at the Department of Computing Science of the University of Chile (on leave of absence until today). He obtained a PhD in Computer Science from the University of Waterloo, Canada, in 1989. Before that he obtained two master's degrees (MSc CS and MEng EE) and an electronics engineering degree from the University of Chile in Santiago. He is co-author of the best-selling textbook Modern Information Retrieval, published in 1999 by Addison-Wesley with a second enlarged edition in 2011, which won the ASIST 2012 Book of the Year award. He is also co-author of the second edition of the Handbook of Algorithms and Data Structures (Addison-Wesley, 1991) and co-editor of Information Retrieval: Algorithms and Data Structures (Prentice-Hall, 1992), among more than 500 other publications. From 2002 to 2004 he was elected to the Board of Governors of the IEEE Computer Society, and in 2012 he was elected to the ACM Council. He has received the Organization of American States award for young researchers in exact sciences (1993), the Graham Medal for innovation in computing given by the University of Waterloo to distinguished alumni (2007), the CLEI Latin American distinction for contributions to computer science in the region (2009), and the National Award of the Chilean Association of Engineers (2010), among other distinctions. In 2003 he was the first computer scientist to be elected to the Chilean Academy of Sciences, and since 2010 he has been a founding member of the Chilean Academy of Engineering. In 2009 he was named an ACM Fellow and in 2011 an IEEE Fellow.

    Shared with the University by
    Mr Roushdat Elaheebocus
  11.
    Big Data: Wrongs and Rights by Andrew Cormack (WAIS Seminar)
    Abstract: Big Data has been characterised as a great economic opportunity and a massive threat to privacy. Both may be correct: the same technology can indeed be used in ways that are highly beneficial and those that are ethically intolerable, maybe even simultaneously. Using examples of how Big Data might be used in education - normally referred to as "learning analytics" - the seminar will discuss possible ethical and legal frameworks for Big Data, and how these might guide the development of technologies, processes and policies that can deliver the benefits of Big Data without the nightmares. Speaker Biography: Andrew Cormack is Chief Regulatory Adviser, Jisc Technologies. He joined the company in 1999 as head of the JANET-CERT and EuroCERT incident response teams. In his current role he concentrates on the security, policy and regulatory issues around the network and services that Janet provides to its customer universities and colleges. Previously he worked for Cardiff University running web and email services, and for NERC's Shipboard Computer Group. He has degrees in Mathematics, Humanities and Law.

    Shared with the University by
    Miss Priyanka Singh
  12.
    Blockchain in Education: does it make any sense?
    Abstract: Blockchain, thanks to Bitcoin, is in fashion. Nowadays it appears as the magic solution to issues in many areas, and education (whether face-to-face, blended or online) is one of them. There is some literature exploring potential applications and pointing out topics such as credentials, gamification, student tracking or assessment, among others. In this seminar I would like to discuss where and when it makes sense to think of blockchain as a useful technology, and when it is just a bluff. We will probably have more questions than answers, given the presumably disruptive nature of a technology like blockchain. Biodata: Miquel Oliver is a full professor at the School of Engineering of Universitat Pompeu Fabra. His background is in wireless and mobile communications, but he has been shifting towards the Internet and its impact on society. He has been following the MOOC phenomenon since its start, as researcher, student and practitioner. More info here: https://www.upf.edu/web/etic/entry/-/-/19279/409/miquel-oliver

    Shared with the University by
    Ms Amber Bu
  13.
    Can you tell if they're learning?
    The proliferation of Web-based learning objects makes finding and evaluating online resources problematic. While established Learning Analytics methods use Web interaction to evaluate learner engagement, there is uncertainty regarding the appropriateness of these measures. In this paper we propose a method for evaluating pedagogical activity in Web-based comments using a pedagogical framework, and present a preliminary study that assigns a Pedagogical Value (PV) to comments. This has value as it categorises discussion in terms of pedagogical activity rather than Web interaction. Results show that PV is distinct from typical interactional measures; there are negative or insignificant correlations with established Learning Analytics methods, but strong correlations with relevant linguistic indicators of learning, suggesting that the use of pedagogical frameworks may produce more accurate indicators than interaction analysis, and that linguistic rather than interaction analysis has the potential to automatically identify learning behaviour.
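    As a small, hypothetical illustration of the comparison reported above, the snippet below correlates an invented set of Pedagogical Value scores with a typical interaction measure (reply counts) using a rank correlation; all numbers are made up.

```python
# Hypothetical data: PV scores per comment vs. a Web-interaction measure.
from scipy.stats import spearmanr

pv_scores    = [3, 1, 4, 2, 5, 1, 3, 4]   # assigned Pedagogical Value
reply_counts = [2, 5, 1, 4, 0, 6, 2, 1]   # replies received by the same comments

rho, p = spearmanr(pv_scores, reply_counts)
print(f"Spearman rho = {rho:.2f} (p = {p:.3f})")
```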

    Shared with the University by
    Mr Roushdat Elaheebocus
  14.
    Co-designed platforms for delivering behaviour change interventions: Lessons learnt from the LifeGuide programme
    The LifeGuide research programme is a multidisciplinary initiative led by Professor Lucy Yardley (Psychology) and Dr Mark Weal (Computer Science) at the University of Southampton. We have developed a unique set of open-source software tools that allows intervention designers with no programming experience to create interactive web-based interventions to support healthy behaviour. In this talk I will give a brief overview of digital behavioural change interventions, describe the LifeGuide platform that has been developed at the University of Southampton, and, through a number of exemplar projects, discuss some of the lessons learnt from this interdisciplinary collaboration.

    Shared with the University by
    Ms Amber Bu
  15.
    Consider the Source: In Whose Interests, and How, of Big, Small and Other Data? Exploring data science through wellth scenarios.
    We're not a particularly healthy culture. Our "normal" practices are not optimised for our wellbeing. From the morning commute, to the number of hours we believe we need to put in to complete a task that may itself be unreasonable, to the choices we make about the time available to prepare food within these constraints - all these pressures tend to make us feel forced into treating ourselves as secondary to our jobs. How can data help improve our quality of life? FitBits and Apple Watches highlight the strengths and limits of Things that Count, not the least of which is their rather low uptake. So once we ask how data might improve quality of life, we may need to add the caveat: pervasively, ubiquitously, in the rich variety of contexts that isn't all about Counting. And once we think about such all-seeing, all-knowing environments, we then need to think about privacy and anonymity. That is: does everything have to be connected to the internet to deliver on a vision of improved quality of life through data? And if there is a Big Ubiquity, should we think about inverting new norms, such as making personal clouds and personal data stores far easier to manage, rather than outsourcing so much data and computation? In this short talk, I'd like to consider three scenarios - going where too few humans have gone before to help others, the challenges of qualitative data, and supporting privacy and content - to motivate thinking about data capture, re-use and re-presentation, and opportunities across ECS for machine learning, AI, infoviz and HCI.

    Shared with the University by
    Ms Amber Bu
  16.
    Data Observatories
    Abstract: The Data Observatory (DO) at the Data Science Institute (DSI) at Imperial College London is the largest interactive visualisation facility in Europe, consisting of 64 monitors arranged to give 313-degree surround vision and engagement. Opened in November 2015, the DO provides an opportunity for academics and industry to visualise data in a way that uncovers new insights, and promotes the communication of complex data sets and analysis in an immersive and multi-dimensional environment. Designed, built by, and housed within the DSI, the DO enables decision makers to derive new implications and actions from interrogating data sets in an innovative, unique environment. The talk will provide an overview of the DO's capabilities and case studies of its use.

    Shared with the University by
    Ms Amber Bu
  17.
    Data Pitch: Data-driven innovation programme
    Data Pitch is a €7m EU-funded open innovation project bringing together corporates and public sector organisations that have data, with startups and SMEs working with data. Data can be reused and repurposed in multiple ways to solve specific challenges. Data from organisations has the potential to create huge value for private and public sector organisations but often only a small percentage of this is exploited. There are many SMEs and startups across Europe that are building innovative solutions using data and new technologies. Many of them struggle to get access to data from public and private sector organisations to develop real-case pilot projects. Data Pitch bridges the gap that these two groups face, supporting them throughout the process, reducing risk and providing the necessary expertise and credibility. In this seminar we will outline the approach to innovation with shared data taken by Data Pitch, and describe some of the issues that have arisen in data sharing, especially with regards to personal data and business use-cases, and how we have addressed them.

    Shared with the University by
    Ms Amber Bu
  18.
    Data Science Seminar: Generic Big Data Processing for Advancing Situation Awareness and Decision-Support
    The generation of heterogeneous big data sources with ever-increasing volumes, velocities and veracities over the last few years has inspired the data science and research community to address the challenge of extracting knowledge from big data. Such a wealth of generated data across the board can be intelligently exploited to advance our knowledge about our environment, public health, critical infrastructure and security. In recent years we have developed generic approaches to process such big data at multiple levels for advancing decision-support. This specifically concerns data processing with semantic harmonisation, low-level fusion, analytics, knowledge modelling with high-level fusion, and reasoning. These approaches will be introduced and presented in the context of the TRIDEC project results on critical oil and gas industry drilling operations, and of the ongoing large-scale eVacuate project on critical crowd behaviour detection in confined spaces.

    Shared with the University by
    Mr Roushdat Elaheebocus
  19.
    Data Science Seminar: The data science revolution in Physics and Astronomy
    Abstract: Heading into the 2020s, Physics and Astronomy are undergoing experimental revolutions that will reshape our picture of the fabric of the Universe. The Large Hadron Collider (LHC), the largest particle physics project in the world, produces 30 petabytes of data annually that need to be sifted through, analysed, and modelled. In astrophysics, the Large Synoptic Survey Telescope (LSST) will be taking a high-resolution image of the full sky every 3 days, leading to data rates of 30 terabytes per night over ten years. These experiments endeavour to answer the question of why 96% of the content of the universe currently eludes our physical understanding. Both the LHC and LSST share the 5-dimensional nature of their data, with position, energy and time being the fundamental axes. This talk will present an overview of the experiments and the data that is gathered, and outline the challenges in extracting information. The strategies commonly employed are very similar to those used for industrial data science problems (e.g., data filtering, machine learning, statistical interpretation) and provide a seed for the exchange of knowledge between academia and industry. Speaker Biography: Professor Mark Sullivan is a Professor of Astrophysics in the Department of Physics and Astronomy. Mark completed his PhD at Cambridge and, following postdoctoral study in Durham, Toronto and Oxford, now leads a research group at Southampton studying dark energy using exploding stars called "type Ia supernovae". Mark has many years' experience of research that involves repeatedly imaging the night sky to track the arrival of transient objects, involving significant challenges in data handling, processing, classification and analysis.

    Shared with the University by
    Mr Roushdat Elaheebocus
  20.
    Data Science, Microsoft and You
    In this session we'll explore how Microsoft uses data science and machine learning across its entire business, from Windows and Office to Skype and Xbox. We'll look at how companies across the world use Microsoft technology to empower their businesses in many different industries. And we'll look at data science technologies you can use yourselves, such as Azure Machine Learning and Power BI. Finally, we'll discuss job opportunities for data scientists and tips on how you can be successful!

    Shared with the University by
    Mr Roushdat Elaheebocus
  21.
    Data Stories - Engaging with Data in a Post-truth World
    Shared with the University by
    Ms Amber Bu
  22.
    Data Trusts: What Are They There To Do, and What Should They Look Like?
    Abstract: In Wendy Hall's 2017 AI review, the first recommendation was to create data trusts to encourage data access and data sharing. A number of initiatives are exploring these ideas in concrete contexts, including Elena Simperl's Data Pitch project and the Open Data Institute's work on data innovation. This talk will consider the question more abstractly, looking at data trusts' function and how this will determine their form. I will argue that a data trust should work within existing law to provide ethical, architectural and governance support for trustworthy data processing, and I will unpack this in terms of how it might create trust, what architectures might help implement data trusts, and how they relate to the existing law of trusts. Biodata: Kieron O'Hara is an associate professor in the WAIS group of ECS. His research interests are in trust, privacy and the nature of digital modernity. He is a lead in the UKAN network of anonymisation experts. His latest book, The Theory and Practice of Social Machines (with Nigel Shadbolt, Dave De Roure and Wendy Hall), will be published by Springer in March.

    Shared with the University by
    Ms Amber Bu
  23.
    Data-Driven Text Generation using Neural Networks & Provenance is Complicated and Boring — Is there a solution?
    Title: Data-Driven Text Generation using Neural Networks. Speaker: Pavlos Vougiouklis, University of Southampton. Abstract: Recent work on neural networks shows their great potential at tackling a wide variety of Natural Language Processing (NLP) tasks. This talk will focus on the Natural Language Generation (NLG) problem and, more specifically, on the extent to which neural network language models can be employed for context-sensitive and data-driven text generation. In addition, a neural network architecture for response generation in social media, along with the training methods that enable it to capture contextual information and effectively participate in public conversations, will be discussed. Speaker Bio: Pavlos Vougiouklis obtained his 5-year Diploma in Electrical and Computer Engineering from the Aristotle University of Thessaloniki in 2013. He was awarded an MSc degree in Software Engineering from the University of Southampton in 2014. In 2015, he joined the Web and Internet Science (WAIS) research group of the University of Southampton, where he is currently working towards a PhD in the field of neural network approaches for Natural Language Processing. Title: Provenance is Complicated and Boring - Is there a solution? Speaker: Darren Richardson, University of Southampton. Abstract: Paper trails, auditing, and accountability - arguably not the sexiest terms in computer science. But then you discover that you've possibly been eating horse meat, and the importance of provenance becomes almost palpable. Having accepted that we should be creating provenance-enabled systems, the challenge of communicating that provenance to casual users is not trivial: users should not have to have a detailed working knowledge of your system, and they certainly shouldn't be expected to understand the data model. So how, then, do you give users an insight into the provenance without having to build a bespoke system for each and every provenance installation? Speaker Bio: Darren is a final-year Computer Science PhD student. He completed his undergraduate degree in Electronic Engineering at Southampton in 2012.
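    For readers unfamiliar with neural language models, the sketch below is a minimal toy example of the general idea behind data-driven text generation: a recurrent network trained to predict the next token, then sampled greedily. It is illustrative only and does not reflect the speaker's architecture; the vocabulary and training sequence are placeholders.

```python
# Tiny next-token language model: toy vocabulary and data, illustrative only.
import torch
import torch.nn as nn

vocab = ["<pad>", "hello", "how", "are", "you", "today", "?"]
stoi = {w: i for i, w in enumerate(vocab)}

class TinyLM(nn.Module):
    def __init__(self, vocab_size, emb=16, hidden=32):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb)
        self.rnn = nn.GRU(emb, hidden, batch_first=True)
        self.out = nn.Linear(hidden, vocab_size)

    def forward(self, x):
        h, _ = self.rnn(self.emb(x))
        return self.out(h)          # next-token logits at each position

model = TinyLM(len(vocab))
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
loss_fn = nn.CrossEntropyLoss()

# One toy training sequence: "hello how are you today ?"
seq = torch.tensor([[stoi[w] for w in ["hello", "how", "are", "you", "today", "?"]]])
inp, tgt = seq[:, :-1], seq[:, 1:]

for _ in range(200):
    opt.zero_grad()
    logits = model(inp)
    loss = loss_fn(logits.reshape(-1, len(vocab)), tgt.reshape(-1))
    loss.backward()
    opt.step()

# Greedy generation of the next word from a one-word prompt.
with torch.no_grad():
    next_id = model(torch.tensor([[stoi["hello"]]]))[0, -1].argmax().item()
print(vocab[next_id])  # after training on the toy sequence, most likely "how"
```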

    Shared with the University by
    Mr Roushdat Elaheebocus
  24.
    Decisions, decisions everywhere (in the open data era).
    Abstract: Decision support systems have been widely used for years in companies to gain insights from internal data and thus make successful decisions. Lately, thanks to the increasing availability of open data, these systems are also integrating open data to enrich the decision-making process with external data. On the other hand, within an open-data scenario, decision support systems can also be useful for deciding which data should be opened, not only by considering technical or legal constraints, but also other requirements, such as the "reuse potential" of the data. In this talk, we focus on both issues: (i) open data for decision making, and (ii) decision making for opening data. We will first briefly comment on some research problems regarding the use of open data for decision making. Then, we will outline a novel decision-making approach (based on how open data is actually being used in open-source projects hosted on GitHub) for supporting open data publication. Bio of the speaker: Jose-Norberto Mazón holds a PhD from the University of Alicante (Spain). He is head of the "Cátedra Telefónica" on Big Data and coordinator of the Computing degree at the University of Alicante. He is also a member of the WaKe research group at the University of Alicante. His research focuses on open data management, data integration and business intelligence within "big data" scenarios, and their application to the tourism domain (smart tourism destinations). He has published his research in international journals such as Decision Support Systems, Information Sciences, Data & Knowledge Engineering and ACM Transactions on the Web. Finally, he is involved in the open data project at the University of Alicante, including its open data portal at http://datos.ua.es

    Shared with the University by
    Mr Roushdat Elaheebocus
  25.
    Developing Music Technology for Health and Learning
    The use of music as an aid for improving body and mind has received enormous attention over the last 20 years from a wide range of disciplines, including neuroscience, cognitive science, physical therapy, exercise science, psychological medicine, and pedagogy. It is important to translate insights gained from the scientific study of music, learning, and medicine into real-life applications. Such applications should be delivered widely, effectively, and accurately, harnessing the synergy of sound and music computing (SMC), wearable computing, and cloud computing technologies to promote learning and to facilitate disease prevention, diagnosis, and treatment in both developed countries and resource-poor developing countries. In this talk, I will highlight our recent projects at the NUS Sound and Music Computing Lab that are designed to facilitate joyful learning and to motivate physical rehabilitation. Speaker information: WANG Ye is an Associate Professor in the Computer Science Department at the National University of Singapore (NUS) and the NUS Graduate School for Integrative Sciences and Engineering (NGS). He established and directed the Sound and Music Computing (SMC) Lab (www.smcnus.org). Before joining NUS he was a member of the technical staff at Nokia Research Center in Tampere, Finland, for 9 years. His research interests include sound analysis and music information retrieval (MIR), mobile computing, and cloud computing, and their applications in music edutainment, e-Learning, and e-Health, as well as determining their effectiveness via subjective and objective evaluations. He served as general chair of ISMIR 2017 (https://ismir2017.smcnus.org/) and TPC co-chair of ICOT 2017 (http://www.colips.org/conferences/icot2017/). His most recent projects involve the design and evaluation of systems to support (1) therapeutic gait training using Rhythmic Auditory Stimulation (RAS), and (2) auditory training and second language learning.

    Shared with the University by
    Ms Amber Bu
  26.
    Disruptive Innovator DNA
    This thought-provoking talk will be given by Bo Ji, founder of China Start, an inspiring TEDx speaker, a Chinapreneur, and a game changer in the global startup ecosystem. Bo is currently the Assistant Dean & Chief Representative for Europe at Cheung Kong Graduate School of Business (CKGSB), a top business school with more than 10,000 chairman/CEO-level alumni in China. Bo had a successful corporate career of over 20 years in global business development, innovation, strategy, supply chain management, M&A and more, serving at senior executive level for companies such as Monsanto, Cargill, Pfizer, Wrigley and Mars. After a long corporate life, Bo became a serial entrepreneur and investor and founded China Start to bring global startups and scaleups to China, creating a paradigm shift for global startups to expand to China instead of Silicon Valley. Is an innovator born or made? Is it possible to learn the ability to generate innovative ideas? The answer is yes! Our ability to generate innovative ideas is not just a function of our minds, but of our behaviours. There are five "discovery skills" used by innovative leaders that distinguish them from the ordinary: questioning, observation, networking, experimenting, and associational thinking. These skills can be learned and trained. Innovative companies have three components - people, process, and philosophy - that support the application of the five DNA skills.

    Shared with the University by
    Ms Amber Bu
  27.
    Dynamic Document Generation from Semantic Web Data
    This talk will present an overview of the ongoing ERCIM project SMARTDOCS (SeMAntically-cReaTed DOCuments) which aims at automatically generating webpages from RDF data. It will particularly focus on the current issues and the investigated solutions in the different modules of the project, which are related to document planning, natural language generation and multimedia perspectives. The second part of the talk will be dedicated to the KODA annotation system, which is a knowledge-base-agnostic annotator designed to provide the RDF annotations required in the document generation process.
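    As a trivially small illustration of the general idea of generating a web document from RDF data (not the SMARTDOCS pipeline itself), the sketch below turns a hand-written RDF snippet into an HTML list with rdflib; the data and markup are invented.

```python
# Generate a small HTML fragment from RDF data using rdflib (illustrative only).
from rdflib import Graph, Namespace

ttl = """
@prefix foaf: <http://xmlns.com/foaf/0.1/> .
@prefix ex:   <http://example.org/> .
ex:ada foaf:name "Ada Lovelace" ; foaf:interest "Mathematics" .
"""

g = Graph().parse(data=ttl, format="turtle")
FOAF = Namespace("http://xmlns.com/foaf/0.1/")

items = []
for person in g.subjects(FOAF.name, None):
    name = g.value(person, FOAF.name)
    interest = g.value(person, FOAF.interest)
    items.append(f"<li>{name}: {interest}</li>")

print("<ul>\n" + "\n".join(items) + "\n</ul>")
```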

    Shared with the University by
    Mr Roushdat Elaheebocus
  28.
    Enabling Provenance on the Web: Standardization and Research Questions
    Provenance is a record that describes the people, institutions, entities, and activities involved in producing, influencing, or delivering a piece of data or a thing in the world. Some 10 years after beginning research on the topic of provenance, I co-chaired the Provenance Working Group at the World Wide Web Consortium. The working group published the PROV standard for provenance in 2013. In this talk, I will present some use cases for provenance, the PROV standard and some flagship examples of adoption. I will then move on to our current research, which aims to exploit provenance in the context of the Sociam, SmartSociety and ORCHID projects. In doing so, I will present techniques to deal with large-scale provenance, to build predictive models based on provenance, and to analyse provenance.
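    For readers who have not seen PROV in practice, the sketch below records a tiny provenance graph with the third-party Python 'prov' package, one common implementation of the standard; the entity, activity and agent names are invented.

```python
# Record a minimal PROV document with the 'prov' package (names are illustrative).
from prov.model import ProvDocument

doc = ProvDocument()
doc.add_namespace("ex", "http://example.org/")

report    = doc.entity("ex:quarterly-report")
compiling = doc.activity("ex:compile-report")
analyst   = doc.agent("ex:analyst-1")

doc.wasGeneratedBy(report, compiling)        # the report came out of the activity
doc.wasAssociatedWith(compiling, analyst)    # the analyst ran the activity

print(doc.get_provn())                       # PROV-N serialisation of the record
```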

    Shared with the University by
    Mr Roushdat Elaheebocus
  29.
    Environmental sensing with the Internet of Things
    Abstract: After developing many sensor networks using custom protocols to save energy and minimise code complexity, we have now experimented with standards-based designs. These use IPv6 (6LoWPAN), RPL routing, CoAP for interfaces and data access, and Protocol Buffers for data encapsulation. Deployments in the Cairngorm mountains have shown the capabilities and limitations of the implementations. This seminar will outline the hardware and software we used and discuss the advantages of the more standards-based approach. At the same time we have been progressing with high-quality imaging of cultural heritage using the RTI domes, so some results and designs will be shown as well. This seminar will therefore cover everything from peat bogs to museums, and from binary HTTP-like REST to 3,500-year-old documents written on clay.
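    As a small illustration of the standards-based approach mentioned above, the sketch below issues a CoAP GET request with the Python aiocoap library; the sensor address and resource path are placeholders, and this is not the deployed firmware or gateway code.

```python
# Read a sensor resource over CoAP using aiocoap (address and path are placeholders).
import asyncio
from aiocoap import Context, Message, GET

async def read_sensor(uri: str) -> bytes:
    protocol = await Context.create_client_context()
    request = Message(code=GET, uri=uri)
    response = await protocol.request(request).response
    return response.payload

payload = asyncio.run(read_sensor("coap://192.0.2.1/sensors/temperature"))
print(payload)  # raw payload, e.g. a Protocol Buffers message to be decoded
```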

    Shared with the University by
    Mr Roushdat Elaheebocus
  30.
    Eristic Argumentation on the Social Web
    Argumentation, debate and discussion are key facets of human communication, shaping the way people form, share and promote ideas, hypotheses and solutions to problems. Argumentation can broadly be broken down into collaborative problem solving or truth-seeking, and quarrelling without hope of a resolution, instead for recreation, catharsis or entertainment. The social web is a growing way in which individuals, social groups and even corporations share content, ideas and information, as well as hold discussions and debates. Current models of argumentation often focus on formal argumentation techniques, in which participants are expected to abide by a stringent set of rules or practices. However, on the social web there is no such code of conduct: antisocial behaviour, which often stems from argumentation, can have a negative impact on online communities, driving away new users and stifling participation. How can we model these types of argumentation, and how do they affect a user's perception of the discussion?
    Title and abstract to be confirmed.

    Shared with the University by
    Ms Amber Bu
  31.
    Expressiveness Benchmarking for System-level Provenance
    Over the past decade, a number of research prototypes have been developed that record provenance or other forms of rich audit logs at the operating system level. The last few years have seen the increasing use of such systems for security and audit, notably in DARPA's $60m investment in the Transparent Computing program. Yet the foundations for trust in such systems remain unclear; the correct behaviour of a provenance recording system has not yet been clearly specified or proved correct. Therefore, attempts to improve security through auditing provenance records may fail due to missing or inaccurate provenance, or to misunderstanding the intentions of the system designers, particularly when integrating provenance records from different systems. Even worse, provenance recording systems are not even straightforward to test, because the expected behaviour is nondeterministic: running the same program at different times or on different machines is guaranteed to yield different provenance graphs, and running programs with nontrivial concurrency behaviour typically also yields multiple possible provenance graphs with different structure. We believe that such systems can be formally specified and verified, and should be, in order to remove complex provenance recording systems from the trusted computing base. However, formally verifying such a system seems to require first having an accepted formal model of the operating system kernel itself, which is a nontrivial undertaking. In the short term, we propose provenance expressiveness benchmarking, an approach to understanding the current behaviour of a provenance recording system. The key idea (which is simple in principle) is to generate provenance records for individual system calls or short sequences of calls, and for each one to generate a provenance graph fragment that shows how the call was recorded in the provenance graph. The challenge is how to automate this process, given that provenance recording tools work in different ways, use different output formats, and generate different (but similar) graphs containing both target activity and background noise. I will present work on this problem so far, focusing on how to automate the NP-complete approximate subgraph isomorphism problems we need to solve to automatically extract benchmark results.
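    To make the matching step concrete, the sketch below checks whether a hand-written fragment for a single write() call occurs inside a larger recorded graph, using exact (not approximate) subgraph isomorphism from networkx; the node and edge labels are invented and far simpler than real OS-level provenance.

```python
# Check whether a per-syscall provenance fragment appears in a recorded graph.
import networkx as nx
from networkx.algorithms import isomorphism

# Recorded provenance graph (background noise plus the activity of interest).
recorded = nx.DiGraph()
recorded.add_node("p1", type="process")
recorded.add_node("f1", type="file")
recorded.add_node("f2", type="file")
recorded.add_edge("p1", "f1", rel="wrote")
recorded.add_edge("p1", "f2", rel="read")

# Expected fragment for a single write() call: a process writing one file.
fragment = nx.DiGraph()
fragment.add_node("proc", type="process")
fragment.add_node("out", type="file")
fragment.add_edge("proc", "out", rel="wrote")

matcher = isomorphism.DiGraphMatcher(
    recorded, fragment,
    node_match=isomorphism.categorical_node_match("type", None),
    edge_match=isomorphism.categorical_edge_match("rel", None),
)
print(matcher.subgraph_is_isomorphic())  # True: the write pattern was recorded
```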

    Shared with the University by
    Ms Amber Bu
  32.
    Fake News: Fake Causes & Real Solutions
    Recent elections, including the 2016 UK referendum on Brexit and the 2016 US Presidential election, have seen a great deal of discussion about fake news. How exactly has the discussion of fake news become so central to debates about modern democracy? In this talk, Nick Anstead will examine the difficulty of defining fake news and the evidence that it has political consequences. He will argue that there is too great a tendency to see the problem of fake news as technological, when the reality is that the underlying causes are political, social and economic. This analysis has important ramifications for how societies seek to combat fake news and ensure a knowledgeable and engaged electorate.

    Shared with the University by
    Ms Amber Bu
  33.
    Five Big Challenges in Big Health Data
    Shared with the University by
    Mr Roushdat Elaheebocus
  34.
    From researcher to entrepreneur - my experience commercialising Synote
    In this seminar, I will share my experience of the early stages of becoming an entrepreneur from a research background. Since 2008, I have been working with Prof. Mike Wald on an innovative video annotation tool called Synote. After about eight years of research around Synote, I applied for a Royal Academy of Engineering Enterprise Fellowship in order to focus on developing Synote for real clients and making it sustainable and profitable. I am now eight months into the fellowship, which has totally changed my life. It is very exciting, but at the same time I am struggling all the time. The seminar will briefly go through my experience so far of commercialising Synote from a research background. I will also discuss the valuable resources you can get from the RAEng Enterprise Hub and from Future Worlds, a Southampton-based organisation that helps startups. If you are a PhD student or research fellow at the University and you want to start your own business, this is the seminar you want to attend.

    Shared with the University by
    Mr Roushdat Elaheebocus
  35.
    Fun with GPS glacier tracking in Iceland
    This summer I went back to basics to install a glacier movement sensor system in Iceland, sponsored by Formula E. This followed on from a very simple GPS tracker we installed on a Greenland iceberg last year. We chose some accurate dGPS units, the Iridium short messaging service and a MicroPython-based microcontroller. Putting it all together and installing it is a whole story in itself, however! So this seminar will mainly be a story of design issues, sand in the keyboard, off-road driving, some quadcopter imaging and, finally, some actual results.

    Shared with the University by
    Ms Amber Bu
  36.
    Future Worlds – The on-campus startup accelerator: Change the world with your ideas.
    Abstract: Future Worlds is a unique and vibrant startup accelerator at the University of Southampton which helps nurture aspiring entrepreneurs and cutting-edge technologies through one-to-one support and its network of seasoned founders, investors and millionaire entrepreneurs. This talk will introduce Future Worlds to our group and offer an opportunity to learn about its work and services. Biodata: Ben Clark is a specialist in taking companies from startup to scaleup, most recently with Southampton-based Snowflake Software and as Resident Mentor at Future Worlds. Ben was previously on the leadership team of Hippowaste, raising over £12m in funding and growing the company from a small startup to over £7m turnover in 4 years. He then moved on to work with venture capital and startups in Africa, and to a post as Chief Operating Officer at the software-as-a-service company iPresent, bootstrapping it to over $2m of annual recurring revenue.

    Shared with the University by
    Ms Amber Bu
  37.
    Hierarchical Prediction Machines and Big Data Analytics
    An emerging consensus in cognitive science views the biological brain as a hierarchically-organized predictive processing system. This is a system in which higher-order regions are continuously attempting to predict the activity of lower-order regions at a variety of (increasingly abstract) spatial and temporal scales. The brain is thus revealed as a hierarchical prediction machine that is constantly engaged in the effort to predict the flow of information originating from the sensory surfaces. Such a view seems to afford a great deal of explanatory leverage when it comes to a broad swathe of seemingly disparate psychological phenomena (e.g., learning, memory, perception, action, emotion, planning, reason, imagination, and conscious experience). In the most positive case, the predictive processing story seems to provide our first glimpse at what a unified (computationally tractable and neurobiologically plausible) account of human psychology might look like. This obviously marks out one reason why such models should be the focus of current empirical and theoretical attention. Another reason, however, is rooted in the potential of such models to advance the current state-of-the-art in machine intelligence and machine learning. Interestingly, the vision of the brain as a hierarchical prediction machine is one that establishes contact with work that goes under the heading of 'deep learning'. Deep learning systems thus often attempt to make use of predictive processing schemes and (increasingly abstract) generative models as a means of supporting the analysis of large data sets. But are such computational systems sufficient (by themselves) to provide a route to general human-level analytic capabilities? I will argue that they are not and that closer attention to a broader range of forces and factors (many of which are not confined to the neural realm) may be required to understand what it is that gives human cognition its distinctive (and largely unique) flavour. The vision that emerges is one of 'homomimetic deep learning systems', systems that situate a hierarchically-organized predictive processing core within a larger nexus of developmental, behavioural, symbolic, technological and social influences. Relative to that vision, I suggest that we should see the Web as a form of 'cognitive ecology', one that is as much involved with the transformation of machine intelligence as it is with the progressive reshaping of our own cognitive capabilities.

    Shared with the University by
    Mr Roushdat Elaheebocus
  38.
    Human Data Interaction
    Abstract: Data is everywhere. Today people are faced with the daunting task of understanding and managing the data created by them, about them, and for them, due to the lack of mechanisms between them and the data. In this talk, I will use examples from my own research to explain how we can bridge the gap between humans and data through a series of interaction mechanisms. I will first explain how we can use agencies such as recommender systems to help people manage access to their personal data. I will then explain how data visualisations can be used to help people extract better insights from their personal data. I will also introduce my ongoing work on applying data visualisations to public data to help people make better decisions and, beyond visualisations, on telling stories about data. Biodata: Yuchen Zhao is a research fellow in the Web and Internet Science research group (WAIS) in the School of Electronics and Computer Science (ECS) at the University of Southampton. He received his PhD in computer science from the University of St Andrews in 2017. His research aims to understand and address issues in human data interaction. His previous research focused on understanding privacy issues in personal data and designing agencies to help people solve those issues. His recent research has expanded to applying data visualisations and narrative visualisations to provide better insights, transparency, and engagement with public data.

    Shared with the University by
    Ms Amber Bu
  39.
    IBM's Internet of Things and Academic Initiative
    IBM provide a comprehensive academic initiative (http://www-304.ibm.com/ibm/university/academic/pub/page/academic_initiative) to universities, giving them free-of-charge access to a wide range of IBM software. As part of this initiative we are currently offering free IBM Bluemix accounts, either to be used within a course or for students to use for personal skills development. IBM Bluemix provides a comprehensive cloud-based platform-as-a-service solution set, which includes the ability to quickly and easily integrate data from devices in Internet of Things (IoT) solutions to develop and run productive and user-focused web and mobile applications. If you would be interested in hearing more about IBM and the Internet of Things, or you would like to discuss prospective research projects that you feel would work well in this environment, please come along to the seminar!

    Shared with the University by
    Mr Roushdat Elaheebocus
  40.
    ImageLearn - Decoding Britain's Landscape
    Abstract Ordnance Survey, our national mapping organisation, collects vast amounts of high-resolution aerial imagery covering the entirety of the country. Currently, photogrammetrists and surveyors use this to manually capture real-world objects and characteristics for a relatively small number of features. Arguably, the vast archive of imagery that we have obtained portraying the whole of Great Britain is highly underutilised and could be ‘mined’ for much more information. Over the last year the ImageLearn project has investigated the potential of "representation learning" to automatically extract relevant features from aerial imagery. Representation learning is a form of data-mining in which the feature-extractors are learned using machine-learning techniques, rather than being manually defined. At the beginning of the project we conjectured that representations learned could help with processes such as object detection and identification, change detection and social landscape regionalisation of Britain. This seminar will give an overview of the project and highlight some of our research results.
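    As a generic illustration of representation learning on image patches (not the ImageLearn models themselves), the sketch below trains a tiny convolutional autoencoder whose bottleneck activations act as learned features; random tensors stand in for aerial image patches.

```python
# Tiny convolutional autoencoder: bottleneck activations are learned features.
import torch
import torch.nn as nn
import torch.nn.functional as F

class PatchAutoencoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1),  # 32x32 -> 16x16
            nn.ReLU(),
            nn.Conv2d(16, 8, 3, stride=2, padding=1),  # 16x16 -> 8x8
            nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(8, 16, 2, stride=2),     # 8x8 -> 16x16
            nn.ReLU(),
            nn.ConvTranspose2d(16, 3, 2, stride=2),     # 16x16 -> 32x32
            nn.Sigmoid(),
        )

    def forward(self, x):
        z = self.encoder(x)          # learned representation of the patch
        return self.decoder(z), z

model = PatchAutoencoder()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

patches = torch.rand(8, 3, 32, 32)   # stand-in for 32x32 RGB aerial patches
for _ in range(5):
    recon, z = model(patches)
    loss = F.mse_loss(recon, patches)
    opt.zero_grad()
    loss.backward()
    opt.step()

print(z.shape)  # torch.Size([8, 8, 8, 8]): 8 feature maps of 8x8 per patch
```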

    Shared with the University by
    Mr Roushdat Elaheebocus
  41.
    It's always been about the links
    Abstract: The World Wide Web Consortium, W3C, is known for standards like HTML and CSS, but there's a lot more to it than that: mobile, automotive, publishing, graphics, TV and more. Then there are horizontal issues like privacy, security, accessibility and internationalisation. Many of these assume that there is an underlying data infrastructure to power applications. In this session, W3C's Data Activity Lead, Phil Archer, will describe the overall vision for better use of the Web as a platform for sharing data and how that translates into recent, current and possible future work. What's the difference between using the Web as a data platform and as a glorified USB stick? Why does it matter? And what makes a standard a standard anyway? Speaker Biography: Phil Archer is Data Activity Lead at W3C, the industry standards body for the World Wide Web, coordinating W3C's work on the Semantic Web and related technologies. He is most closely involved in the Data on the Web Best Practices, Permissions and Obligations Expression, and Spatial Data on the Web Working Groups. His key themes are interoperability through common terminology and URI persistence. As well as his work at the W3C, his career has encompassed broadcasting, teaching, linked data publishing, copy writing and, perhaps incongruously, countryside conservation. The common thread throughout has been a knack for communication, particularly communicating complex technical ideas to a more general audience.

    Shared with the University by
    Mr Roushdat Elaheebocus
  42.
    Joining the dots: Connecting the social determinants and physiological effects of air quality in offices
    Feeling drowsy at work? Despite findings that poor indoor air quality causes cognitive performance to decline, the average office worker has no access to information on the quality of air in the room until it becomes poor enough to cause discomfort. In this talk, I discuss our user-centred research from the REFRESH project, which joins the dots between the individual and social factors that affect perception of indoor air quality (IAQ) and the human physiological responses to changes in air quality. This involves (1) physiological measurement, such as electroencephalography (EEG), to detect the effect of air quality on drowsiness, (2) qualitative methods to understand the social factors which influence air quality in offices, and (3) designing ambient technology which visualises the CO2 level of an office, an indicator of indoor air quality. By the end of the talk you will have some actions for detecting, and doing something about, the air quality in your office, and will have seen how easily you can incorporate qualitative methods into your research and use technology to understand your users' needs.

    Shared with the University by
    Ms Amber Bu
  43. [img] [img]
    Justified assessments of service provider reputation
    Abstract Reputation, influenced by ratings from past clients, is crucial for providers competing for custom. For new providers with little track record, a few negative ratings can harm their chances of growing. In the JASPR project, we aim to look at how to ensure automated reputation assessments are justified and informative. Even an honest, balanced review of a service provision may still be an unreliable predictor of future performance if the circumstances differ. For example, a service may previously have relied on different sub-providers than it does now, or been affected by season-specific weather events. A common way to ameliorate ratings that may not reflect future performance is to weight them by recency. We argue that better results are obtained by querying provenance records of how services are provided for the circumstances of provision, to determine the significance of past interactions. Informed by case studies in global logistics, taxi hire, and courtesy car leasing, we are going on to explore the generation of explanations for reputation assessments, which can be valuable both for clients and for providers wishing to improve their match to the market, and to apply machine learning to predict aspects of service provision which may influence decisions on the appropriateness of a provider. In this talk, I will give an overview of the research conducted and planned on JASPR. Speaker Biography Dr Simon Miles Simon Miles is a Reader in Computer Science at King's College London, UK, and head of the Agents and Intelligent Systems group. He conducts research in the areas of normative systems, data provenance, and medical informatics at King's, has published widely, and manages a number of research projects in these areas. He was previously a researcher at the University of Southampton after graduating from his PhD at Warwick. He has twice been an organising committee member for the Autonomous Agents and Multi-Agent Systems conference series, and was a member of the W3C working group which published standards on interoperable provenance data in 2013.
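    The recency weighting that the abstract describes as the common baseline can be sketched in a few lines. The ratings, dates and decay constant below are illustrative only, not JASPR's model.

```python
# A minimal sketch of recency-weighted reputation scoring, the baseline the
# abstract contrasts with provenance-aware assessment. Ratings, timestamps and
# the decay constant are illustrative values.
import math
from datetime import datetime

ratings = [                       # (rating in [0, 1], when it was given)
    (0.9, datetime(2015, 1, 10)),
    (0.4, datetime(2015, 6, 2)),
    (0.8, datetime(2016, 2, 20)),
]

def recency_weighted_reputation(ratings, now, decay_days=180.0):
    """Exponentially down-weight older ratings and return the weighted mean."""
    weights = [math.exp(-(now - t).days / decay_days) for _, t in ratings]
    return sum(w * r for w, (r, _) in zip(weights, ratings)) / sum(weights)

print(recency_weighted_reputation(ratings, now=datetime(2016, 6, 1)))
```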

    Shared with the University by
    Mr Roushdat Elaheebocus
  44. [img]
    Learning Semantic Relatedness From Human Feedback Using Metric Learning
    Abstract: Assessing the degree of semantic relatedness between words is an important task with a variety of semantic applications, such as ontology learning for the Semantic Web, semantic search, recommendation or query expansion. To accomplish this in an automated fashion, many relatedness measures have been proposed. However, most of these metrics only encode information contained in the underlying corpus or in the navigation and thus do not directly model human intuition. In this talk, we show how metric learning can be used to improve existing semantic relatedness measures by learning from additional information, such as explicit human feedback. Our approach is based on knowledge that emerges as semantic information in social media systems and is embedded in users' content or their navigational traces. We argue for using word embeddings instead of traditional high-dimensional vector representations in order to leverage their semantic density and to reduce computational cost, as a first step towards improving the extraction of the hidden semantics. We present results on several domains including tagging data as well as publicly available embeddings based on Wikipedia texts and navigation. Human feedback about semantic relatedness for learning and evaluation is extracted from publicly available datasets such as MEN or WS-353. We will show that our method can significantly improve semantic relatedness measures by learning from the additional explicit human feedback. For tagging data, we are the first to generate and study embeddings. Our results are of special interest for researchers and practitioners of the Semantic Web and show the power of machine learning methods for extracting semantics. Biodata: Andreas Hotho is a professor at the University of Würzburg and the head of the DMIR group. In this context, he is directing the BibSonomy project spanning the L3S Research Center located in Hanover, the KDE group of the University of Kassel and the DMIR group. Before that, he was a senior researcher at the University of Kassel. He started his research at the AIFB Institute at the University of Karlsruhe where he was working on text mining, ontology learning and Semantic Web-related topics. Currently, he is working in the area of data science, data mining, semantic web mining and social media analysis.
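    To make the metric-learning idea concrete, here is a minimal sketch that learns per-dimension weights over word-embedding similarities so they better fit human relatedness scores. The random embeddings and scores stand in for real data such as MEN or WS-353, and this is not the authors' exact method.

```python
# A minimal sketch of metric learning on word embeddings: learn per-dimension
# weights so that the (weighted) similarity of word pairs better matches human
# relatedness judgements, as collected in datasets like MEN or WS-353.
# Embeddings and scores below are random stand-ins, not the talk's actual data.
import numpy as np

rng = np.random.default_rng(42)
dim, n_pairs = 50, 200
a = rng.normal(size=(n_pairs, dim))   # embedding of the first word in each pair
b = rng.normal(size=(n_pairs, dim))   # embedding of the second word in each pair
human = rng.random(n_pairs)           # human relatedness score in [0, 1]

w = np.ones(dim)                      # learned diagonal metric (starts as a plain dot product)
lr = 0.01
for _ in range(500):
    pred = (a * b) @ w                # weighted dot-product similarity
    grad = ((pred - human)[:, None] * (a * b)).mean(axis=0)
    w -= lr * grad                    # gradient step on the squared error

print("MSE before:", np.mean(((a * b) @ np.ones(dim) - human) ** 2))
print("MSE after: ", np.mean(((a * b) @ w - human) ** 2))
```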

    Shared with the University by
    Ms Amber Bu
  45. [img] [img]
    Location Aware Narratives: Strange Hypertexts, Sculptural Stories, and Digital Poetics
    Researchers from the Web and Internet Science group have been exploring hypertexts and computational narrative for nearly two decades. In this seminar we present our most recent work on the Leverhulme Trust-funded project StoryPlaces (http://storyplaces.soton.ac.uk/) where we have investigated the poetics and technology associated with location aware narratives. Location Aware Narratives are a type of Strange Hypertext (hypertexts that go beyond traditional node-link models) because location aware stories reflect the physical context of the reader - examples include tour guides where the reader is required to be in a particular location to access certain pages, interactive fiction where location is used to set the tone or backdrop to the drama, and dynamic narratives that change or respond to the user's wanderings. The StoryPlaces system is driven by a Sculptural Hypertext engine which models narrative as a state machine and delivers a mobile storytelling experience through a location aware web application. StoryPlaces is based on a general model for location aware narrative called "Canyons, Deltas, Plains" that we have shown to support the structures used in a broad sample of location aware storytelling systems. By working with both student and professional writers we have expanded our knowledge of the common patterns and structures used by authors in location aware narrative, and have begun to see how the structures of the narrative and the topology of the locations involved are intrinsically connected, and that the 'poetics of space' are a fundamental part of this medium. As part of the seminar we will demonstrate the StoryPlaces reader, and show how these patterns have begun to inform the design of our authorship tools.
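    The "sculptural hypertext as state machine" idea can be illustrated with a minimal sketch: nodes carry location and prerequisite conditions, and the engine simply filters for what is currently available to the reader. The node names, coordinates and distance threshold are invented for illustration; this is not the StoryPlaces engine.

```python
# A minimal sketch of a sculptural-hypertext narrative engine of the kind the
# abstract describes: every node is potentially available, and conditions on
# reader state (location, pages already read) "sculpt away" what can be shown.
# Node names, locations and the distance threshold are illustrative only.
import math

class Node:
    def __init__(self, name, text, location=None, requires=frozenset()):
        self.name, self.text = name, text
        self.location = location        # (lat, lon) the reader must be near, or None
        self.requires = set(requires)   # names of nodes that must already be read

def near(a, b, threshold_m=50):
    """Rough flat-earth distance check, good enough over tens of metres."""
    dlat = (a[0] - b[0]) * 111_000
    dlon = (a[1] - b[1]) * 111_000 * math.cos(math.radians(a[0]))
    return math.hypot(dlat, dlon) <= threshold_m

def available(nodes, reader_location, read):
    return [n for n in nodes
            if n.requires <= read
            and (n.location is None or near(reader_location, n.location))]

story = [
    Node("gate", "You stand at the old park gate.", location=(50.9097, -1.4044)),
    Node("oak",  "The oak remembers the fire.",     location=(50.9102, -1.4050),
         requires={"gate"}),
    Node("coda", "Wherever you are, the story ends.", requires={"gate", "oak"}),
]

print([n.name for n in available(story, (50.9097, -1.4044), read=set())])
```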

    Shared with the University by
    Ms Amber Bu
  46. [img]
    Lunchtime lecture with Dr Ted Nelson
    “Two Cheers for Now” …what I hoped for and what the world has become.

    Shared with the World by
    Miss Priyanka Singh
  47. [img] [img]
    Making data useful and usable
    Data is ubiquitous; everyone has it and deals with it. However, just because everyone deals with it doesn't mean that we naturally handle it well or efficiently. In this talk, Adriane Chapman will introduce herself to the WAIS group and describe her interest in making data useful and usable. She will describe her past work in provenance, and her current work in annotations, provenance and data modelling.

    Shared with the University by
    Ms Amber Bu
  48. [img] [img]
    Mandevillian Intelligence: From Individual Vice to Collective Virtue
    Abstract Mandevillian intelligence is a specific form of collective intelligence in which individual cognitive vices (i.e., shortcomings, limitations, constraints and biases) are seen to play a positive functional role in yielding collective forms of cognitive success. In this talk, I will introduce the concept of mandevillian intelligence and review a number of strands of empirical research that help to shed light on the phenomenon. I will also attempt to highlight the value of the concept of mandevillian intelligence from a philosophical, scientific and engineering perspective. Insofar as we accept the notion of mandevillian intelligence, it seems that the cognitive and epistemic value of a specific social or technological intervention will vary according to whether our attention is focused at the individual or collective level of analysis. This has a number of important implications for how we think about the cognitive impacts of a number of Web-based technologies (e.g., personalized search mechanisms). It also forces us to take seriously the idea that the exploitation (or even the accentuation!) of individual cognitive shortcomings could, in some situations, provide a productive route to collective forms of cognitive and epistemic success. Speaker Biography Dr Paul Smart Paul Smart is a senior research fellow in the Web and Internet Science research group at the University of Southampton in the UK. He is a Fellow of the British Computer Society, a professional member of the Association for Computing Machinery, and a member of the Cognitive Science Society. Paul's research interests span a number of disciplines, including philosophy, cognitive science, social science, and computer science. His primary area of research interest relates to the social and cognitive implications of Web and Internet technologies. Paul received his bachelor's degree in Psychology from the University of Nottingham. He also holds a PhD in Experimental Psychology from the University of Sussex.

    Shared with the University by
    Mr Roushdat Elaheebocus
  49. [img]
    Many Worlds on a Frame: Characterizing Online Social Cognition
    Abstract: The theme of the Web Observatory at IIIT Bangalore is "online social cognition." Our research aims to understand how social media activity molds collective worldview that in turn impacts several areas of human activity, like business, politics or even social harmony. We first categorize the web into three broad regions or realms, called the social, trigger, and inert realms. The social realm forms the participatory areas of the web, where opinions are actively exchanged and molded. The trigger realm refers to elements like news websites or blogs, whose publishing events often trigger activity in the social realm. The inert realm refers to static web content that gets used as a source of latent knowledge in the social interactions. The social realm itself is modeled as a "marketplace of opinions" -- where different vested interests invest their opinions in order to fetch returns. Opinions that are "compatible" come together to form one or more narratives. In order to characterize this, we first represent an opinion as comprising two dimensions: abstraction and expression. Abstraction refers to the opinion-holder's objective perspective on the issue, and expression refers to the communication of the opinion-holder's subjective sentiment about the issue. Cognitive science studies show that abstractions and expressions have vastly different characteristics in the way they diffuse through a population. Hence, the formation of narratives is sometimes catalyzed by abstractions, and sometimes by expressions. In order to represent narratives and their interplay, which constitutes social cognition, we also propose a hermeneutic framework called "Many Worlds on a Frame" (MWF). The framework models the semantic universe of discourse as comprising several semantic "worlds" or "narratives", within each of which other worlds may participate as entities. Interactions between worlds are either facilitated or hampered by their respective worldviews. The set of all interactions between worlds is called the Frame. We argue that the "many worlds" representation is more conducive to modeling social cognition than (say) a convergent multi-author knowledge model like a wiki. The MWF implementation does not impose an overarching ontology; at the same time, it is not completely unstructured either. We propose to use a modified form of the N-Quads W3C standard for representing knowledge about online social cognition. About the Speaker: Srinath Srinivasa heads the Web Science lab and is the Dean (R&D) at IIIT Bangalore, India. Srinath holds a Ph.D (magna cum laude) from the Berlin Brandenburg Graduate School for Distributed Information Systems (GkVI) Germany, an M.S. (by Research) from IIT-Madras and a B.E. in Computer Science and Engineering from The National Institute of Engineering (NIE) Mysore. He works in the area of Web Science, understanding the impact of the web on humanity. Technology for educational outreach and social empowerment has been a primary motivation driving his research. He has participated in several initiatives for technology enhanced education including the VTU Edusat program, The National Programme for Technology Enhanced Learning (NPTEL) and an educational outreach program in collaboration with Upgrad. He is a member of various technical and organizational committees for international conferences like the International Conference on Weblogs and Social Media (ICWSM), ACM Hypertext, COMAD/CoDS, ODBASE, etc.
He is also a life member of the Computer Society of India (CSI). As part of academic community outreach, Srinath has served on the Board of Studies of Goa University and as a member of the Academic Council of the National Institute of Engineering, Mysore. He has served as a technical reviewer for various journals like the VLDB journal, IEEE Transactions on Knowledge and Data Engineering, and IEEE Transactions on Cloud Computing. He is also the recipient of various national and international grants for his research activities.

    Shared with the University by
    Ms Amber Bu
  50. [img] [img]
    Modelling and Mining To Manage
    In this talk, I will describe various computational modelling and data mining solutions that form the basis of how the office of Deputy Head of Department (Resources) works to serve you. These include lessons I learn about, and from, optimisation issues in resource allocation, uncertainty analysis on league tables, modelling the process of winning external grants, and lessons we learn from student satisfaction surveys, some of which I have attempted to inject into our planning processes.

    Shared with the University by
    Mr Roushdat Elaheebocus
  51. [img] [img]
    On lions, impala, and bigraphs: modelling interactions in Ubiquitous Computing
    As ubiquitous systems have moved out of the lab and into the world, the need to think more systematically about how they are realised has grown. This talk will present intradisciplinary work I have been engaged in with other computing colleagues on how we might develop more formal models and understanding of ubiquitous computing systems. The formal modelling of computing systems has proved valuable in areas as diverse as reliability, security and robustness. However, the emergence of ubiquitous computing raises new challenges for formal modelling due to the contextual nature of such systems and their dependence on unreliable sensing systems. In this work we undertook an exploration of modelling an example ubiquitous system called the Savannah game using the approach of bigraphical rewriting systems. This required an unusual intradisciplinary dialogue between formal computing and human-computer interaction researchers to model systematically four perspectives on Savannah: computational, physical, human and technical. Each perspective in turn drew upon a range of different modelling traditions. For example, the human perspective built upon previous work on proxemics, which uses physical distance as a means to understand interaction. In this talk I hope to show how our model explains observed inconsistencies in Savannah and extend it to resolve these. I will then reflect on the need for intradisciplinary work of this form and the importance of the bigraph diagrammatic form to support this form of engagement. Speaker Biography Tom Rodden Tom Rodden (rodden.info) is a Professor of Interactive Computing at the University of Nottingham. His research brings together a range of human and technical disciplines, technologies and techniques to tackle the human, social, ethical and technical challenges involved in ubiquitous computing and the increasing use of personal data. He leads the Mixed Reality Laboratory (www.mrl.nott.ac.uk), an interdisciplinary research facility that is home to a team of over 40 researchers. He founded and currently co-directs the Horizon Digital Economy Research Institute (www.horizon.ac.uk), a university-wide interdisciplinary research centre focusing on ethical use of our growing digital footprint. He has previously directed the EPSRC Equator IRC (www.equator.ac.uk), a national interdisciplinary research collaboration exploring the place of digital interaction in our everyday world. He is a fellow of the British Computer Society and the ACM and was elected to the ACM SIGCHI Academy in 2009 (http://www.sigchi.org/about/awards/).

    Shared with the University by
    Mr Roushdat Elaheebocus
  52. [img]
    Open Science for Computational Science and for Computer Science
    Abstract: Computational science (also scientific computing) involves the development of models and simulations to understand natural systems, answering questions that neither theory nor experiment alone is equipped to answer. Despite the increasing importance of so-called in-silico experiments to the scientific discovery process, state-of-the-art software engineering practices are not fully adopted in computational science. However, software engineering is central to any effort to increase computational science's software productivity. Among the methods and techniques that software engineering can offer to computational science, I'll present work on model-driven software engineering with domain-specific languages and on modular software architectures. For good scientific practice, it is important that research results may properly be checked by reviewers, and possibly be repeated and extended by other researchers. This is of particular interest for "digital science", i.e. for in-silico experiments. In this talk, I'll discuss some efforts on open science in both computational science and computer science. Reference: A. Johanson, W. Hasselbring: "Software Engineering for Computational Science: Past, Present, Future", In: Computing in Science & Engineering, pp. 90-109, March/April 2018. https://doi.org/10.1109/MCSE.2018.108162940# Biodata: Prof. Dr. Wilhelm (Willi) Hasselbring is professor of Software Engineering and former dean of the Faculty of Engineering at Kiel University, Germany. In the competence cluster Software Systems Engineering (KoSSE), he coordinates technology transfer projects with industry. In the excellence cluster Future Ocean, in the Helmholtz Research School Ocean System Science and Technology (HOSST), and in the new Helmholtz School for Marine Data Science (MarDATA), he collaborates with the GEOMAR Helmholtz Centre for Ocean Research Kiel.

    Shared with the University by
    Ms Amber Bu
  53. [img] [img]
    Predicting sense of community and participation by applying machine learning to open government data
    Community capacity is used to monitor socio-economic development. It is composed of a number of dimensions, which can be measured to understand the possible issues in the implementation of a policy or the outcome of a project targeting a community. Measuring community capacity dimensions is usually expensive and time-consuming, requiring locally organised surveys. Therefore, we investigate a technique to estimate them by applying the Random Forests algorithm on secondary open government data. This research focuses on the prediction of measures for two dimensions: sense of community and participation. The most important variables for this prediction were determined. The variables included in the datasets used to train the predictive models complied with two criteria: nationwide availability and a sufficiently fine-grained geographic breakdown, i.e. neighbourhood level. The models explained 77% of the variance in the sense of community measures and 63% for participation. Due to the low geographic detail of the outcome measures available, further research is required to apply the predictive models to a neighbourhood level. The variables that were found to be most important for prediction were only partially in agreement with the factors that, according to the social science literature consulted, are the most influential for sense of community and participation. This finding should be further investigated from a social science perspective, in order to be understood in depth.
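    A minimal sketch of the general approach (not the study's actual data or variables): train a Random Forest on area-level indicators, report held-out R² and inspect which variables matter most. The synthetic data and column names below are placeholders.

```python
# A minimal sketch of the kind of approach described: train a Random Forest on
# open (secondary) data to predict a community-capacity measure and inspect
# which variables matter most. The synthetic data and column names are
# placeholders, not the project's actual datasets.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n = 500
X = rng.random((n, 4))                                   # e.g. open government indicators per area
y = 2 * X[:, 0] - X[:, 2] + 0.1 * rng.normal(size=n)     # stand-in "sense of community" score

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=1)
model = RandomForestRegressor(n_estimators=200, random_state=1).fit(X_train, y_train)

print("R^2 on held-out areas:", round(model.score(X_test, y_test), 2))
for name, imp in zip(["deprivation", "density", "turnout", "green_space"],
                     model.feature_importances_):
    print(f"{name:>12}: {imp:.2f}")
```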

    Shared with the University by
    Mr Roushdat Elaheebocus
  54. [img]
    Research methods - moving from the lab out 'into the wild'
    Moira McGregor has worked on various projects at the Mobile Life Research Centre including: everyday use of digital maps; the sharing economy; mobile battery maintenance; and speech technology in workplace meetings. What these projects have in common is a desire to look at the use of mobile technology as it happens in order to understand how users make sense of the technology, and also how users interweave this use with other interactions going on around them at the same time. The above coincides with a general move from studying mobile phone technology in the controlled setting of the lab, to the challenge of devising methods to allow the study of mobile phone use in situ, out 'in the wild'. This focus on use in situ calls for working with distributed research methods, including video analysis, interactional and conversational analysis, interviews, and technical probes – all of which have been deployed in Moira's work in order to give access to moment-by-moment interaction with mobile technology. The resulting small-scale and detailed perspective can complement the more pervasive approaches of recording mobile phone use by instrumenting technology with sensors and logging use over longer periods, with large cohorts of users. Moira is currently a PhD student at the Mobile Life Research Centre in Stockholm. Her work looks at how technology is used in everyday life – from mobile phone use in co-present interaction with others, to how an app like Uber is changing the work practices of taxi drivers. In this seminar, Moira will present some of the research methods used in her studies and some of her preliminary findings.

    Shared with the University by
    Ms Amber Bu
  55. [img] [img]
    Research plan discussion: The socio-technical construction of MOOCs and educator practices in HE
    In this seminar slot, we will discuss Steve's research aims and plan. Massive open online courses (MOOCs) have received substantial coverage in mainstream sources, academic media, and scholarly journals, both negative and positive. Numerous articles have addressed their potential impact on Higher Education systems in general, and some have highlighted problems with the instructional quality of MOOCs, and the lack of attention to research from online learning and distance education literature in MOOC design. However, few studies have looked at the relationship between social change and the construction of MOOCs within higher education, particularly in terms of educator and learning designer practices. This study aims to use the analytical strategy of Socio-Technical Interaction Networks (STIN) to explore the extent to which MOOCs are socially shaped and their relationship to educator and learning designer practices. The study involves a multi-site case study of 3 UK MOOC-producing universities and aims to capture an empirically based, nuanced understanding of the extent to which MOOCs are socially constructed in particular contexts, and the social implications of MOOCs, especially among educators and learning designers.

    Shared with the University by
    Mr Roushdat Elaheebocus
  56. [img]
    Rising to Challenges in Assessment and Feedback in HCI Education: A Peer-Supported Approach
    Human-Computer Interaction (HCI) is a research area which studies how people interact with computer systems. Because of its multidisciplinary nature, HCI modules often sit uneasily within the computer science curriculum, which is primarily composed of modules typically assessed through objective measures using quantitative methods. Assessment criteria for HCI topics need to make some subjective measures quantifiable (e.g. aesthetics and creativity). In the case of large classes, it is critical that the assessment can scale appropriately without compromising the validity of the judgement of how well the learning outcomes have been achieved. In this seminar talk Adriana will focus on her experiences in redesigning the assessment of an undergraduate module on HCI, taking into account increasingly large classes. Redesign decisions needed to preserve the validity and reliability of the assessment whilst respecting the need for timely feedback. Adriana will explain how learning activities in the module were aligned to the assessment. This included the use of PeerWise for student-authored MCQs, and the use of video to foster creativity and application of knowledge. The combination of these helped to leverage the power of peer interaction for learning.

    Shared with the World by
    Mrs Adriana Wilde
  57. [img]
    Scalable Data Integration
    Information and data integration focuses on providing an integrated view of multiple distributed and heterogeneous sources of information (such as web sites, databases, peer or sensor data etc.). Through information integration all this scattered data can be combined and queried. In this talk we are dealing with the problems of data integration, data exchange/warehousing, and query answering with or without ontologies. We present an algorithm for virtual data integration where data sources are queried in a distributed way and no centralized repository is materialized. Our algorithm processes queries in the presence of thousands of data sources in under a second. We extend this solution to virtual integration settings where domain knowledge is represented using constraints/ontologies (e.g. OWL2-QL). Subsequently, we examine the Chase algorithm which is the main tool to reason with constraints for data warehousing, and develop an optimization that performs orders of magnitude faster. We also examine hybrid solutions to data integration where both materialization/warehousing and virtual data integration are combined in order to optimize query answering. We discuss how these approaches can help set up future research directions and outline important applications to data management and analysis over integrated data.
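    The idea of virtual integration, answering a query by fanning it out to the sources at query time rather than materialising a central warehouse, can be sketched as follows. The in-memory "sources" and the mediated schema are placeholders for real endpoints, and this is not the talk's actual algorithm.

```python
# A minimal sketch of *virtual* data integration as described above: the query
# is answered by fanning it out to the individual sources at query time and
# merging the results, with no centralised repository being materialised.
# The three in-memory "sources" and their schema are illustrative only.
from concurrent.futures import ThreadPoolExecutor

# Each source exposes the same mediated schema: person -> city.
source_a = {"ada": "London", "grace": "New York"}
source_b = {"tim": "Southampton"}
source_c = {"ada": "London", "wendy": "Geneva"}
sources = [source_a, source_b, source_c]

def query_source(source, city):
    """Run the query locally at one source (could be a remote SQL/SPARQL endpoint)."""
    return {person for person, c in source.items() if c == city}

def virtual_query(city):
    # Fan the query out to all sources in parallel and union the answers.
    with ThreadPoolExecutor() as pool:
        partial_answers = pool.map(lambda s: query_source(s, city), sources)
    result = set()
    for answer in partial_answers:
        result |= answer
    return result

print(virtual_query("London"))   # {'ada'} -- combined from sources A and C
```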

    Shared with the University by
    Ms Amber Bu
  58. [img]
    Scoring Serious Educational Games Fairly
    Abstract: Many people would like to see new skills, such as managing systems or complex problem solving, introduced into mainstream education. These skills are in high demand in the workplace. However, education is still an assessment-driven environment, and until these skills can be tested fairly, their impact will be minimal. Online games provide a way of evidencing this kind of ability, but game scores do not currently meet requirements of fairness. In high-stakes assessment, 'fairness' is a complex concept, but it always has to have a mathematical argument behind it, for ethical and legal reasons. The analysis techniques that assessors use to locate problems in large data sets have been applied to a range of testing scenarios, from multiple choice to human-scored evaluation of complex tasks, but they will not work with game data at the moment. Bayesian analysis appears to be the best approach to deal with the more complex behaviour produced during game play, but few assessors have worked with Bayesian probability. This talk will give you an insight into how assessors mathematically model very common human test behaviours, such as cheating, guessing, a rogue examiner or poor question design. It will also outline how key assumptions about testing need to be re-conceptualised for game data, and suggest how existing approaches to identifying bias and error might be incorporated in Bayesian probability-based estimations of ability. Background of Speaker: Clare Walsh is a teacher and final-year PhD student. Before joining the CDT, she authored over 20 course books that are used in secondary schools and further education worldwide, and has worked for over 15 years in international high-stakes assessment design.
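    To illustrate the kind of Bayesian reasoning referred to, here is a minimal sketch that corrects an observed score for guessing. The guessing rate, prior and counts are invented for illustration; this is not the assessment model discussed in the talk.

```python
# A minimal sketch of the Bayesian style of reasoning the talk refers to:
# estimate a player's probability of genuinely solving a task when a correct
# response can also come from guessing. The guessing rate, prior and data are
# illustrative values only.
import numpy as np

guess_rate = 0.25                  # chance of getting an item right by guessing alone
correct, attempts = 14, 20         # observed game/test performance

# Grid over "ability" = probability the player truly knows how to solve an item.
ability = np.linspace(0, 1, 501)
prior = np.ones_like(ability)      # flat prior over ability

# If the player knows an item they score; otherwise they may still guess it.
p_correct = ability + (1 - ability) * guess_rate
likelihood = p_correct**correct * (1 - p_correct)**(attempts - correct)

posterior = prior * likelihood
posterior /= np.trapz(posterior, ability)

mean_ability = np.trapz(ability * posterior, ability)
print(f"Posterior mean ability, corrected for guessing: {mean_ability:.2f}")
print(f"Naive score (correct/attempts): {correct/attempts:.2f}")
```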

    Shared with the University by
    Ms Amber Bu
  59. [img]
    Sketching the vision of a Web of Debates
    Web users have changed the Web from a means for publishing and exchanging documents to a means for sharing their feelings, beliefs, and opinions and participating in debates on any conceivable topic. Current web technologies fail to support this change: arguments and opinions are uploaded in purely textual form; as a result, they cannot be easily retrieved, processed and interlinked, and all this information is largely left unexploited. This talk will sketch the vision of Debate Web, which will enable the extraction, discovery, retrieval, interrelation and visualisation of the vast variety of viewpoints that exist online, based on machine-readable representations of arguments and opinions.

    Shared with the University by
    Ms Amber Bu
  60. [img]
    Social Influence in Web interactions: from Contagion to a Richer Causal Understanding
    A central problem in the analysis of observational data is inferring causal relationships - what are the underlying causes of the observed behaviours? With the recent proliferation of Big Data from online social networks, it has become important to determine to what extent social influence causes certain messages to 'go viral', and to what extent other causes also play a role. In this thesis, we propose a methodological framework for quantitatively measuring and qualifying the effects of social influence from Web-mediated interactions on individual and collective outcomes, while accounting for other relevant causes, using 'found' observational digital data. This framework is based on causality theory and is informed by the social sciences, constituting a methodological contribution of the type that is much needed in the emergent interdisciplinary area of computational social science. We demonstrate theoretically and empirically how our framework offers a way of successfully addressing many of the limitations of the popular information diffusion-based paradigm for social influence online, enabling researchers to disentangle, measure and qualify the effects of social influence from online interactions, at the individual and the collective level.
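    The core difficulty, that a naive "contagion" estimate is confounded by other causes, can be illustrated with a small simulation in which adjusting for the confounder recovers something close to the true effect. All variables and effect sizes below are synthetic; this is a generic adjustment sketch, not the thesis's framework.

```python
# A minimal sketch of the problem the abstract describes: the naive "influence"
# estimate from observational sharing data is confounded, and adjusting for the
# confounder recovers something closer to the true causal effect. All variables
# and effect sizes below are simulated for illustration.
import numpy as np

rng = np.random.default_rng(7)
n = 100_000
interest = rng.random(n)                               # hidden common cause
exposed = rng.random(n) < 0.2 + 0.6 * interest         # the interested see friends' posts more often
true_effect = 0.10
p_share = 0.05 + 0.4 * interest + true_effect * exposed
shared = rng.random(n) < p_share

naive = shared[exposed].mean() - shared[~exposed].mean()

# Adjust by stratifying on the confounder (here: quartiles of interest).
strata = np.digitize(interest, np.quantile(interest, [0.25, 0.5, 0.75]))
adjusted = np.mean([shared[exposed & (strata == s)].mean()
                    - shared[~exposed & (strata == s)].mean()
                    for s in range(4)])

print(f"true effect: {true_effect:.2f}, naive: {naive:.3f}, adjusted: {adjusted:.3f}")
```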

    Shared with the University by
    Ms Amber Bu
  61. [img] [img]
    Social machines dictating social behaviors – When context is missing what is the fallout of uberveillance?
    Abstract: In the mid-1990s, when I worked for a telecommunications giant, I struggled to gain access to basic geodemographic data. It cost hundreds of thousands of dollars at the time to simply purchase a tile of satellite imagery from Marconi, and it was often cheaper to create my own maps using a digitizer and A0 paper maps. Everything from granular administrative boundaries to rights-of-way to points of interest and geocoding capabilities was either unavailable for the places I was working in throughout Asia or very limited. This data was either controlled by a government's census and statistical bureau or created by a handful of forward-thinking corporations. Twenty years on we find ourselves inundated with data (location and other) that we are challenged to amalgamate, and much of it still "dirty" in nature. Open data initiatives such as ODI give us great hope for how we might be able to share information together and capitalize not only on the crowdsourcing behavior but on the implications for positive usage for the environment and for the advancement of humanity. We are already gathering and amassing a great deal of data and insight through excellent citizen science participatory projects across the globe. In early 2015, I delivered a keynote at the Data Made Me Do It conference at UC Berkeley, and in the preceding year an invited talk at the inaugural QSymposium. In gathering research for these presentations, I began to ponder on the effect that social machines (in effect, autonomous data collection subjects and objects) might have on social behaviors. I focused on studying the problem of data from various veillance perspectives, with an emphasis on the shortcomings of uberveillance, which included the potential for misinformation, misinterpretation, and information manipulation when context was entirely missing. As we build advanced systems that rely almost entirely on social machines, we need to ponder on the risks associated with following a purely technocratic approach where machines devoid of intelligence may one day dictate what humans do at the fundamental praxis level. What might be the fallout of uberveillance? Bio: Dr Katina Michael is a professor in the School of Computing and Information Technology at the University of Wollongong. She presently holds the position of Associate Dean – International in the Faculty of Engineering and Information Sciences. Katina is the IEEE Technology and Society Magazine editor-in-chief, and IEEE Consumer Electronics Magazine senior editor. Since 2008 she has been a board member of the Australian Privacy Foundation, and until recently was the Vice-Chair. Michael researches the socio-ethical implications of emerging technologies with an emphasis on an all-hazards approach to national security. She has written and edited six books, and guest edited numerous special issue journals on themes related to radio-frequency identification (RFID) tags, supply chain management, location-based services, innovation and surveillance/uberveillance for Proceedings of the IEEE, Computer and IEEE Potentials. Prior to academia, Katina worked for Nortel Networks as a senior network engineer in Asia, and also in information systems for OTIS and Andersen Consulting. She holds cross-disciplinary qualifications in technology and law.

    Shared with the University by
    Mr Roushdat Elaheebocus
  62. [img] [img]
    Spatial data integration for mapping progress towards the Sustainable Development Goals
    Abstract: The UN sustainable development goals, an intergovernmental set of 17 aspirational goals and 169 targets to be achieved by 2030, were launched last year. These include ending poverty and malnutrition, improving health and education, and building resilience to natural disasters and climate change. A particular focus across the goals and targets is achievement 'everywhere', ensuring that no one gets left behind and that progress is monitored at subnational levels to avoid national-level statistics masking local heterogeneities. How will this subnational monitoring of progress towards meeting the goals be undertaken when many countries will undertake just a single census in the 2015-2030 monitoring period? Professor Tatem will present an overview of the work of the two organizations he directs, WorldPop (www.worldpop.org) and Flowminder (www.flowminder.org), in meeting the challenges of constructing consistent, comparable and regularly updated metrics to measure and map progress towards the sustainable development goals in low- and middle-income countries, and where the integration of traditional and new forms of data, including those derived from satellite imagery, GPS and mobile phones, can play a role.

    Shared with the University by
    Ms Amber Bu
  63. [img] [img]
    Studying the emergent properties of Social Machines
    In this talk, I will discuss the unexpected uses of social machines, and how individual and collective behaviour on platforms such as Twitter, Wikipedia, and the Zooniverse contribute to their development, success, and failure. Based on these observations, we will explore how we can take advantage of the emergent features and interpretive flexibility of social machines, in order to support current global challenges.

    Shared with the University by
    Ms Amber Bu
  64. [img]
    Synote-Inclusively Enhancing Learning from Lectures & Recordings
    Machine recognition of continuous speech became commercially available in 1998, creating the possibility of automatically transcribing what a lecturer was saying in class, changing approaches to notetaking as well as benefitting disabled and international students. In spite of continuous improvements in speech recognition accuracy, universities haven't been providing their students with automatically transcribed lectures, and so our spin-out company Synote was set up to help turn the possibility into reality. This seminar reviews the past 20 years of research into enhancing learning from lectures and recordings using speech recognition transcription that has involved researchers, universities and organisations worldwide as well as student projects and grant-funded projects in ECS.

    Shared with the University by
    Ms Amber Bu
  65. [img] [img]
    Temporal TF-IDF: A High Performance Approach for Event Summarization in Twitter
    In recent years, there has been increased interest in real-world event summarization using publicly accessible data made available through social networking services such as Twitter and Facebook. People use these outlets to communicate with others, express their opinion and commentate on a wide variety of real-world events, such as disasters and public disorder. Due to the heterogeneity, the sheer volume of text and the fact that some messages are more informative than others, automatic summarization is a very challenging task. This paper presents three techniques for summarizing microblog documents by selecting the most representative posts for real-world events (clusters). In particular, we tackle the task of multilingual summarization in Twitter. We evaluate the generated summaries by comparing them to both human-produced summaries and to the summarization results of similar leading summarization systems. Our results show that our proposed Temporal TF-IDF method outperforms all the other summarization systems for both the English and non-English corpora, producing more informative summaries.
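    A minimal sketch in the spirit of a temporal TF-IDF summariser (not the paper's exact formulation): score terms by their frequency within a time window against their rarity across windows, and pick the highest-scoring post per window. The tweets and windowing below are toy examples.

```python
# A minimal sketch in the spirit of a temporal TF-IDF summariser: terms are
# scored by their frequency within a time window against their rarity across
# windows, and the highest-scoring post in each window is picked as the summary.
# The tweets and windowing are toy examples; this is not the paper's exact method.
import math
from collections import Counter

windows = {                                   # time window -> posts in that window
    "14:00": ["flood warning issued for the river test at southampton",
              "lovely lunch by the river"],
    "15:00": ["river test bursts its banks near romsey",
              "flood water closes the main road near romsey"],
}

def tokens(post):
    return post.lower().split()

# Document frequency of each term across windows (a window = one "document").
df = Counter(term for posts in windows.values()
             for term in {t for p in posts for t in tokens(p)})
n_windows = len(windows)

def summarise(windows):
    summary = {}
    for window, posts in windows.items():
        tf = Counter(t for p in posts for t in tokens(p))
        def score(post):
            return sum(tf[t] * math.log(n_windows / df[t]) for t in tokens(post))
        summary[window] = max(posts, key=score)
    return summary

for window, post in summarise(windows).items():
    print(window, "->", post)
```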

    Shared with the University by
    Ms Amber Bu
  66. [img] [img]
    The Age of Social Machines
    Many of the most successful and important systems that impact our lives combine humans, data, and algorithms at Web Scale. These social machines are amalgamations of human and machine intelligence. This seminar will provide an update on SOCIAM, a five year EPSRC Programme Grant that seeks to gain a better understanding of social machines; how they are observed and constituted, how they can be designed and their fate determined. We will review how social machines can be of value to society, organisations and individuals. We will consider the challenges they present to our various disciplines.

    Shared with the University by
    Mr Roushdat Elaheebocus
  67. [img]
    The Chemistry of Data
    Abstract: In my talk I will discuss the ways in which the ideas of Data Science, the Web and Semantic Web, and Open Science contribute to new methods and approaches to data-driven chemistry and chemical informatics. A key aspect of the discussion will be how to facilitate the improved acquisition, integration and analysis of chemical data in context. I will refer to lessons learnt in the e-Science and Digital Economy (particularly the IT as a Utility Network) programmes and the EDISON H2020 project. Jeremy G. Frey Jeremy Frey obtained his DPhil on experimental and theoretical aspects of van der Waals complexes, in Oxford, followed by a fellowship at the Lawrence Berkeley Laboratory with Yuan Lee. In 1984 he joined the University of Southampton, where he is now Professor of Physical Chemistry and head of the Computational Systems Chemistry Group. His experimental research probes molecular organization from single molecules to liquid interfaces using laser spectroscopy from the IR to soft X-rays. In parallel he investigates how e-Science infrastructure supports intelligent access to scientific data. He is strongly committed to collaborative inter- and multi-disciplinary research and is skilled in facilitating communication between diverse disciplines speaking different languages. He has successfully led several large interdisciplinary collaborative RCUK research grants, from Basic Technology (Coherent Soft X-Ray imaging), e-Science (CombeChem) and most recently the Digital Economy Challenge area of IT as a Utility Network+, where he has successfully created a unique platform to facilitate collaboration across the social, science, engineering and design domains, working with all the research, commercial, third and governmental sectors.

    Shared with the University by
    Ms Amber Bu
  68. [img] [img]
    The Development and Exploitation of the Synchronised Timeline and Iconographic Interface in Healthcare and Beyond
    Mr. Rew is a Consultant Surgeon at the University Hospital of Southampton. He has been leading work within the NHS on data visualisation systems for the electronic patient record across the UHS Clinical Data Estate.

    Shared with the University by
    Ms Amber Bu
  69. [img] [img]
    The End of the World Wide Web
    Nothing lasts forever. The World Wide Web was an essential part of life for much of humanity in the early 21st century, but these days few people even remember that it existed. Members of the Web Science research group will present several possible scenarios for how the Web, as we know it, could cease to be. This will be followed by an open discussion about the future we want for the Web and what Web Science should be doing today to help make that future happen, or at least avoid some of the bad ones.

    Shared with the University by
    Mr Roushdat Elaheebocus
  70. [img] [img]
    The MOOC Dashboard: Visualising MOOC data for everyone
    Abstract Massive Open Online Courses (MOOCs) generate enormous amounts of data. The University of Southampton has run and is running dozens of MOOC instances. The vast amount of data resulting from our MOOCs can provide highly valuable information to all parties involved in the creation and delivery of these courses. However, analysing and visualising such data is a task that not all educators have the time or skills to undertake. The recently developed MOOC Dashboard is a tool aimed at bridging this gap: it provides reports and visualisations based on the data generated by learners in MOOCs. Speakers Manuel Leon is currently a Lecturer in Online Teaching and Learning in the Institute for Learning Innovation and Development (ILIaD). Adriana Wilde is a Teaching Fellow in Electronics and Computer Science, with research interests in MOOCs and Learning Analytics. Darron Tang (4th Year BEng Computer Science) and Jasmine Cheng (BSc Mathematics & Actuarial Science and starting MSc Data Science shortly) have been working as interns over this summer (2016) and have been developing the MOOC Dashboard.

    Shared with the World by
    Mr Roushdat Elaheebocus
  71. [img] [img]
    The Open Web of Things as a means to unlock the potential of the IoT
    Abstract: There is a lot of hype around the Internet of Things along with talk about 100 billion devices within 10 years' time. The promise of innovative new services and efficiency savings is fuelling interest in a wide range of potential applications across many sectors including smart homes, healthcare, smart grids, smart cities, retail, and smart industry. However, the current reality is one of fragmentation and data silos. W3C is seeking to fix that by exposing IoT platforms through the Web with shared semantics and data formats as the basis for interoperability. This talk will address the abstractions needed to move from a Web of pages to a Web of things, and introduce the work that is being done on standards and on open-source projects for a new breed of Web servers, from microcontrollers to cloud-based server farms. Speaker Biography Dave Raggett: Dave has been involved at the heart of web standards since 1992, and part of the W3C Team since 1995. As well as working on standards, he likes to dabble with software, and more recently with IoT hardware. He has participated in a wide range of European research projects on behalf of W3C/ERCIM. He currently focuses on Web payments, and realising the potential for the Web of Things as an evolution from the Web of pages. Dave has a doctorate from the University of Oxford. He is a visiting professor at the University of the West of England, and lives in the UK in a small town near Bath.

    Shared with the University by
    Mr Roushdat Elaheebocus
  72. [img]
    The Paradigm of Crowdsourced Systems
    Title: The Paradigm of Crowdsourced Systems Abstract: High acceptance rates of truly personal, portable devices such as smartphones and smart gadgets, along with the successful introduction of DIY computer platforms like Arduinos and Raspberry Pis, have led to an unprecedented abundance of well-connected and well-equipped devices. Crowdsourced Systems is a new system paradigm that seeks to exploit the high availability of such devices and thus change the way data is generated, processed and consumed. In this talk, we will discuss this new paradigm, the challenges and opportunities it poses, review real-world use-cases and present related ongoing standardization efforts. Short CV: Dr. Constantinos Marios Angelopoulos is Lecturer in Computing at Bournemouth University (U.K.) specializing in future and emerging paradigms of computer networks and distributed systems. He is also the Lead Editor of the ITU-T Work Item on Crowdsourced Systems; co-author of the ITU-T Technical Report on "Artificial Intelligence in IoT" and the Vocabulary Co-rapporteur for ITU-T SG20. In the past, he has worked for three years as a postdoctoral researcher at the University of Geneva (CH) under the prestigious Swiss Government Excellence Scholarship for Foreign Researchers.

    Shared with the University by
    Ms Amber Bu
  73. [img]
    The Paradigm of Smart Circular Economy
    Abstract: Circular Economy is a paradigm for sustainable growth that envisions the transformation of how modern societies design, produce and consume goods and services towards a regenerative economic cycle. Future and emerging technologies, such as 5G, the blockchain, and crowdsourced sensing systems, along with innovative models and paradigms, such as the Internet of Things, Industry 4.0, and community networks, will play an important role in the transition to a Circular Economy by enabling and facilitating, among others, digitization (efficient asset management, open data, etc.) and collaboration (co-innovation, shared value creation, etc.). In this talk we will introduce the paradigm of Smart (i.e. data-driven) Circular Economy, review the corresponding technological enablers and present data-driven circular business models. Biodata: Dr. Marios Angelopoulos is Principal Academic in Computing at Bournemouth University (UK), specialising in future and emerging paradigms of computer networks and distributed systems. Previously, he was with the University of Geneva (CH) under the prestigious Swiss Government Excellence Scholarship for Foreign Researchers. At BU, he has established and leads the Open Innovation Lab (OIL) and is a founding member of the Future & Complex Networks (FlexNet) Research Group. He is also the founding Programme Leader of three Master's courses in the area of the Internet of Things. Marios is also active in international standardisation bodies. Since 2018, he serves at the International Telecommunication Union (ITU-T) as Associate Rapporteur of Question 5 "Research and emerging technologies, terminology and definitions" in Study Group 20: Internet of Things (IoT) and Smart Cities and Communities (SC&C). He is also the Liaison co-Rapporteur of Study Group 20 to the Standardization Committee for Vocabulary (SCV). His research on Crowdsourced Systems has greatly contributed to the ITU-T work item on "Requirements and Functional Architecture of IoT-related Crowdsourced Systems". Finally, he has published in, and served on the committees of, several highly esteemed international journals and conferences in his area of expertise (such as IEEE ToN, Elsevier's ComNet, ComCom, Ad-Hoc Nets, IEEE ICDCS, DCOSS, ICC, ACM MSWiM, etc).

    Shared with the University by
    Ms Amber Bu
  74. [img] [img]
    The Zooniverse - Enabling Everyone
    Abstract Grant is a recovering astrophysicist, now based at the University of Oxford. He works as the special projects manager and communications lead for the Zooniverse - the world's leading citizen science platform. They run over 40 projects across fields ranging from astronomy to zoology, and have recently been working on a platform that allows researchers to create their own citizen science projects in no time at all.

    Shared with the University by
    Mr Roushdat Elaheebocus
  75. [img]
    Transmedia and Interactive Narrative
    Abstract: In this seminar we will present three PhD research projects currently underway in WAIS in the area of transmedia and interactive narrative. All three have been accepted as papers to the International Conference on Interactive Digital Storytelling (ICIDS'18) that takes place in Dublin next month. The three projects are: 1) Multiplayer Interactive Narrative Experiences (MINEs): A type of multiplayer interactive storytelling, in which each player may experience their own distinct narrative. Here, we explain what these narratives are, describe the design of a system to support them and explore some brief examples of the stories that are possible. 2) Authoring Interactive Digital Narratives: An experiment performed using one story across three different types of interactive writing tools. This was done to explore the impact a tool has in shaping a story and to observe how each affected the authorial task. 3) Models of Alternative Reality Games (ARGs): Transmedia Storytelling involves telling stories across multiple media channels. This method presents problems for researchers in that it is difficult to understand the structure of a transmedia story and how the story unfolds over time. We present a way of describing such stories using examples of several ARGs and explore the affordances of this technique. Biodata: Callum Spawforth is a PhD student in the Web and Internet Science group at the University of Southampton, UK. His main area of research is Multiplayer Interactive Storytelling, exploring the possibilities for interactive fiction featuring multiple participants. His research has also touched on understanding interactions in multiplayer games and authoring systems for sculptural hypertext. Callum did his undergraduate degree in Computer Science here at the University of Southampton, and is one of the organisers for the Southampton Game Jam. Sofia Kitromili is a Web Science PhD student at the University of Southampton looking into authoring digital storytelling and how the practice is reformed through different platforms. She is currently looking to investigate the notion of storytelling through an authorial perspective with a locative literature tool using cultural heritage collections as a case study. Ryan Javanshir is a Web Science CDT PhD student. His research lies in the area of transmedia storytelling, looking at how we can better understand how narrative unfolds across media. He is also interested in game design and the surrounding ideas that can be transferred over and applied to transmedia design.

    Shared with the University by
    Ms Amber Bu
  76. [img] [img]
    Understanding social media in everyday life: Ethnomethodological and conversation analytic perspectives
    Over the last decade, social media has become a hot topic for researchers of collaborative technologies (e.g., CSCW). The pervasive use of social media in our everyday lives provides a ready source of naturalistic data for researchers to empirically examine the complexities of the social world. In this talk I outline a different perspective informed by ethnomethodology and conversation analysis (EMCA) - an orientation that has been influential within CSCW, yet has only rarely been applied to social media use. EMCA approaches can complement existing perspectives through articulating how social media is embedded in everyday life, and how its social organisation is achieved by users of social media. Outlining a possible programme of research, I draw on a corpus of screen and ambient audio recordings of mobile device use to show how EMCA research can be generative for understanding social media through concepts such as adjacency pairs, sequential context, turn allocation / speaker selection, and repair. In doing so, I also raise questions about existing studies of social media use and the way they characterise interactional phenomena.

    Shared with the University by
    Ms Amber Bu
  77. [img] [img]
    User-Centred Methods for Measuring the Value of Open Data
    A project to identify metrics for assessing the quality of open data based on the needs of small voluntary sector organisations in the UK and India. For this project we assumed the purpose of open data metrics is to determine the value of a group of open datasets to a defined community of users. We adopted a much more user-centred approach than most open data research using small structured workshops to identify users’ key problems and then working from those problems to understand how open data can help address them and the key attributes of the data if it is to be successful. We then piloted different metrics that might be used to measure the presence of those attributes. The result was six metrics that we assessed for validity, reliability, discrimination, transferability and comparability. This user-centred approach to open data research highlighted some fundamental issues with expanding the use of open data from its enthusiast base.

    Shared with the University by
    Mr Roushdat Elaheebocus
  78. [img] [img]
    Virtual City Explorer: A crowdsourcing tool to locate and describe static points of interest in cities.
    Abstract: A common issue among European municipalities is the lack of information about mobility-infrastructure Points of Interest (PoIs). This information is valuable both for powering services to citizens and as a means to audit the urban health and environment of the city. Currently, municipalities need to send their employees to the field to do the counting, an expensive and error-prone approach that does not scale with the size of the area to be covered. Alternatively, one can rely on Volunteered Geographical Information (VGI) systems to crowdsource the data, but at the expense of having no control over data updates. We propose a faster and cheaper solution to tackle the problem by taking advantage of virtual imagery to use paid crowdworkers, who can perform the item-locating task remotely, with no need to be physically in place and without any prior local knowledge of the area to be explored. We implemented a standalone crowdsourcing system named Virtual City Explorer (VCE) which allows collecting locations and images of PoIs in virtual spaces with paid crowdworkers. Our system takes as input a virtual space (e.g. a Google Street View instance), a type of PoI and an area of interest (defined as a geo-spatial polygon) and returns coordinates of instances of the target PoI type inside the area of interest discovered by a fixed, configurable number of crowdworkers. Each crowdworker is asked to explore the area of interest and find a (configurable) number of PoIs, and is rewarded upon completion. Our first experiments were very encouraging, showing that the VCE can effectively be used to find and collect locations of PoIs in a cheaper and faster way. Speaker information Eddy Maddalena is a research fellow in the Web and Internet Science (WAIS) group of the University of Southampton, United Kingdom. Eddy got a PhD in Computer Science in 2017 at the University of Udine, Italy. During his PhD, Eddy mainly focused on Information Retrieval (IR) and crowdsourcing to create human-annotated test collections of documents to be used to evaluate the effectiveness of IR systems. After completing his PhD, Eddy moved to the University of Southampton where he is the leader of the Crowdsourcing Work Package for the H2020 QROWD Project, and designs and develops crowdsourcing-based solutions to improve mobility and reduce transportation issues for smart cities, which aim to include "human in the loop" participation in the Big Data Value Chain information flow.
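    Two small checks a system of this kind needs, whether a submitted PoI falls inside the area-of-interest polygon and whether two submissions are duplicates of the same object, can be sketched as follows. The coordinates, polygon and merge radius are illustrative values, not the VCE's actual parameters.

```python
# A minimal sketch of two checks a system like the Virtual City Explorer needs:
# (1) is a crowdworker's submitted PoI inside the area of interest (a geo polygon)?
# (2) merge near-duplicate submissions of the same PoI from different workers.
# Coordinates, the polygon and the 15 m merge radius are illustrative values.
import math

area_of_interest = [(50.905, -1.410), (50.905, -1.395),   # (lat, lon) vertices
                    (50.915, -1.395), (50.915, -1.410)]

def inside(point, polygon):
    """Even-odd rule point-in-polygon test (fine for small, non-self-intersecting areas)."""
    lat, lon = point
    result = False
    for i in range(len(polygon)):
        lat1, lon1 = polygon[i]
        lat2, lon2 = polygon[(i + 1) % len(polygon)]
        if (lat1 > lat) != (lat2 > lat):
            lon_cross = lon1 + (lat - lat1) * (lon2 - lon1) / (lat2 - lat1)
            if lon < lon_cross:
                result = not result
    return result

def metres(a, b):
    """Approximate distance between two nearby (lat, lon) points."""
    dlat = (a[0] - b[0]) * 111_000
    dlon = (a[1] - b[1]) * 111_000 * math.cos(math.radians(a[0]))
    return math.hypot(dlat, dlon)

def merge_reports(reports, radius_m=15):
    """Greedily keep submissions inside the area that are not near an accepted PoI."""
    accepted = []
    for p in reports:
        if inside(p, area_of_interest) and all(metres(p, q) > radius_m for q in accepted):
            accepted.append(p)
    return accepted

reports = [(50.9101, -1.4002), (50.91011, -1.40021),   # two workers, same bike rack
           (50.9180, -1.4005),                          # outside the polygon
           (50.9080, -1.4030)]
print(merge_reports(reports))
```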

    Shared with the University by
    Ms Amber Bu
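
    The following is a minimal Python sketch, not the authors' implementation, of how a VCE-style task might be specified and how submitted PoI coordinates could be kept only if they fall inside the area-of-interest polygon. All class names, field names and coordinates are illustrative assumptions.

        # Minimal sketch (not the VCE implementation): a task pairs a PoI type with
        # an area of interest given as a geo-spatial polygon and a configurable
        # number of crowdworkers; submitted coordinates are kept only if they fall
        # inside the polygon. All names and values here are illustrative assumptions.
        from dataclasses import dataclass, field
        from typing import List, Tuple

        Point = Tuple[float, float]  # (longitude, latitude)

        @dataclass
        class VceTask:
            poi_type: str                  # e.g. "bicycle rack"
            area_of_interest: List[Point]  # polygon vertices, in order
            workers: int = 5               # fixed, configurable number of crowdworkers
            pois_per_worker: int = 10      # configurable number of PoIs per worker
            submissions: List[Point] = field(default_factory=list)

            def accept(self, point: Point) -> bool:
                """Record a submitted PoI location if it lies inside the area of interest."""
                if point_in_polygon(point, self.area_of_interest):
                    self.submissions.append(point)
                    return True
                return False

        def point_in_polygon(p: Point, polygon: List[Point]) -> bool:
            """Standard ray-casting test: count crossings of a horizontal ray from p."""
            x, y = p
            inside = False
            n = len(polygon)
            for i in range(n):
                x1, y1 = polygon[i]
                x2, y2 = polygon[(i + 1) % n]
                if (y1 > y) != (y2 > y):
                    x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
                    if x < x_cross:
                        inside = not inside
            return inside

        if __name__ == "__main__":
            # A rough rectangle around part of a city centre (made-up coordinates).
            task = VceTask(poi_type="bicycle rack",
                           area_of_interest=[(-1.41, 50.90), (-1.39, 50.90),
                                             (-1.39, 50.92), (-1.41, 50.92)])
            print(task.accept((-1.40, 50.91)))  # True: inside the area of interest
            print(task.accept((-1.45, 50.91)))  # False: outside the area of interest

    In the real system the polygon would come from the task configuration and the submissions from crowdworkers exploring a virtual space such as a Google Street View instance.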
  79. [img] [img]
    WAIS Fest 2015 - Wrap up
    WAISfest is an opportunity to spend three days exploring an area of research that isn't part of your day-to-day job, a bit like Google's 20% time. At the kick-off session a set of themes is presented, and you choose which group to work with. You then spend a few working days on that challenge before presenting what you've achieved at the end.

    Shared with the University by
    Mr Roushdat Elaheebocus
  80. [img] [img]
    WAIS Tutorial: Publishing in Top Quality Journals
    The purpose of this seminar session is to share with you some of my experience of publishing in top quality journals. The session will be structured as follows: Publish or Perish; The CS Debate (conferences vs journals); Top Journals; Multidisciplinary Work; The Process.

    Shared with the University by
    Ms Amber Bu
  81. [img]
    WAIS/AIC Joint Seminar: Storytelling in Mixed Realities: Making Sense of the World
    Abstract "Storytelling in Mixed Realities: Making Sense of the World 1D Since the early days of civilization, the way we tell and consume stories defines how do we make sense of the world. Every new technology has an impact on our narrative artifacts. Today mobile ubiquitous digital technologies allow us to structure and distribute our narratives in novel and unprecedented ways. During this talk i will presents some old and recent projects developed in collaboration with a vast team of researchers and artists, that exemplify novel approaches to content and context through interactive storytelling and gaming. Bio Valentina Nisi is an Assistant Professor at the University of Madeira and founder and researcher at the Madeira Interactive Technologies Institute (M-ITI). Her area of investigation revolves around Digital Media Art and HCI. Her research focuses on designing and producing digitally mediated experiences in real spaces, merging culture, context and landscapes. Valentina previously worked with Glorianna Davenport and Mads Haahr at MediaLab Europe, MIT MediaLab European research partner. In 2006 she co-founded Amsterdam based non profit organization FattoriaMediale, together with Ian Oakley and Martine PostHuma de Boer, designing and producing interactive mobile stories for several Amsterdam neighbourhoods. Her work has won several Awards and been published and shown internationally,

    Shared with the University by
    Ms Amber Bu
  82. [img] [img]
    WEBS2002 Group Projects: What can Flickr photographs tell us about New York City?
    In their second year, our undergraduate web scientists undertake a group project module (WEBS2002, taught by Jonathon Hare & Su White) in which they get to apply what they learnt in the first year to a practical web-science problem, and also learn about team-working. For the project this semester, the students were provided with a large dataset of geolocated images and associated metadata collected from the Flickr website. Using this data, they were tasked with exploring what it could tell us about New York City. In this seminar the two groups will present the outcomes of their work. Team Alpha (Thomas Davidson, Adam Rann, Luke Gibbins & Ryan Dodd) will present their work on “Analysing Flickr Demographics: Identifying Optimal Advertising Locations in New York". This work aims to detect areas of high footfall for different demographics, so that this information can be used to target advertising more accurately (a minimal sketch of this kind of footfall analysis appears after this entry). Team Bravo (Thomas Rowledge, Xavier Voigt-Hill & Chloe Cripps) will present their work on “The Flickr that Never Sleeps: Observing a Changing City Through a Decade of Geotagged Uploads". This work explores the range of ways in which users' interactions with Flickr capture reactions, geographical trends and the changing picture of a prominent global city.

    Shared with the University by
    Ms Amber Bu
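
    The sketch below is purely illustrative and is not the students' code: it bins geotagged photos into a regular latitude/longitude grid and ranks cells by photo count, a crude proxy for high-footfall areas. The field names, cell size and sample coordinates are assumptions.

        # Illustrative sketch only: bin geotagged Flickr photos into a regular
        # lat/lon grid and rank cells by photo count, a crude proxy for footfall.
        from collections import Counter
        from typing import Iterable, Tuple

        CELL = 0.005  # grid cell size in degrees (roughly 500 m in latitude)

        def cell_of(lat: float, lon: float) -> Tuple[int, int]:
            """Map a coordinate to the index of the grid cell containing it."""
            return (int(lat // CELL), int(lon // CELL))

        def busiest_cells(photos: Iterable[dict], top_n: int = 5):
            """Count photos per grid cell and return the most photographed cells."""
            counts = Counter(cell_of(p["lat"], p["lon"]) for p in photos)
            return counts.most_common(top_n)

        if __name__ == "__main__":
            sample = [{"lat": 40.7580, "lon": -73.9862},  # near Times Square
                      {"lat": 40.7583, "lon": -73.9858},  # near Times Square
                      {"lat": 40.7061, "lon": -73.9969}]  # near Brooklyn Bridge
            print(busiest_cells(sample))  # the Times Square cell should come first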
  83. collection
    Web & Internet Science Seminar Recordings 2018
    A collection of all the recordings of the Web & Internet Science Research Seminars from 2018.

    Shared with the University by
    Mrs Kelly Terrell
  84. [img] [img]
    Web Knowledge and Web Governance: WAIS PhD Research Reports
    Abstract: This seminar consists of two very different research reports by PhD students in WAIS. Hypertext Engineering, Fettling or Tinkering (Mark Anderson): Contributors to a public hypertext such as Wikipedia do not necessarily record their maintenance activities, but some specific hypertext features, such as transclusion, could indicate deliberate editing with a mind to the hypertext’s long-term use. The MediaWiki software used to create Wikipedia supports transclusion, a deliberately hypertextual form of content creation which aids long-term consistency. This talk discusses the evidence for the use of hypertext transclusion in Wikipedia, and its implications for the coherence and stability of Wikipedia (a minimal sketch of detecting transclusion markup appears after this entry). Designing a Public Intervention - Towards a Sociotechnical Approach to Web Governance (Faranak Hardcastle): In this talk I introduce a critical and speculative design for a socio-technical intervention, called TATE (Transparency and Accountability Tracking Extension), that aims to enhance transparency and accountability in Online Behavioural Tracking and Advertising mechanisms and practices.

    Shared with the University by
    Mr Roushdat Elaheebocus
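
    A minimal sketch, assuming raw wikitext as input (this is not Mark Anderson's method): transcluded content in MediaWiki is written with {{...}} markup, so counting such targets while skipping parser functions ({{#if:...}} and similar) gives a rough signal of deliberate reuse. The regex is a deliberate simplification; real MediaWiki parsing is considerably more involved.

        # Rough sketch: count transclusion-style {{...}} targets in raw wikitext,
        # skipping parser functions such as {{#if:...}}. A simplification only.
        import re
        from collections import Counter

        TRANSCLUSION = re.compile(r"\{\{([^{}|]+)")  # capture target up to '|' or '}}'

        def transclusion_targets(wikitext: str) -> Counter:
            """Return a count of transcluded targets found in the wikitext."""
            targets = (m.group(1).strip() for m in TRANSCLUSION.finditer(wikitext))
            return Counter(t for t in targets if not t.startswith("#"))

        if __name__ == "__main__":
            sample = ("{{Infobox writer|name=...}} Some article text. "
                      "{{:Reusable section}} {{#if:x|y|z}}")
            print(transclusion_targets(sample))
            # Counter({'Infobox writer': 1, ':Reusable section': 1})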
  85. [img] [img]
    What can Flickr photographs tell us about New York City?
    In their second year, our undergraduate web scientists undertake a group project module (WEBS2002, led by Jonathon Hare & co-taught by Su White) in which they get to apply what they learnt in the first year to a practical web-science problem, and also learn about team-working. For the project this semester, the students were provided with a large dataset of geolocated images and associated metadata collected from the Flickr website. Using this data, they were tasked with exploring what it could tell us about New York City. In this seminar the two groups will present the outcomes of their work. Team Alpha (Wil Muskett, Mark Cole & Jiwanjot Guron) will present their work on "An exploration of deprivation in NYC through Flickr". This work explores whether social deprivation can be predicted geo-spatially from social media, by correlating the Flickr data with official statistics including poverty indices and crime rates (a minimal illustration of such a correlation appears after this entry). Team Bravo (Edward Baker, Callum Rooke & Rachel Whalley) will present their work on "Determining the Impact of the Flickr Relaunch on Usage and User Behaviour in New York City". This work explores the effect of the Flickr site relaunch in 2013 and looks at how user demographics and the types of content created by users changed with the relaunch.

    Shared with the University by
    Mr Roushdat Elaheebocus
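
    As a purely hypothetical illustration of the kind of correlation described (this is not the students' analysis), the sketch below computes a Pearson correlation between invented per-neighbourhood photo counts and an invented poverty index, using the statistics.correlation function available in Python 3.10+.

        # Hypothetical illustration: Pearson correlation between per-area Flickr
        # photo counts and an official poverty index. All figures are invented.
        from statistics import correlation  # available in Python 3.10+

        photo_counts = {"Area A": 1200, "Area B": 300, "Area C": 800, "Area D": 150}
        poverty_index = {"Area A": 0.12, "Area B": 0.35, "Area C": 0.18, "Area D": 0.41}

        areas = sorted(photo_counts)
        r = correlation([photo_counts[a] for a in areas],
                        [poverty_index[a] for a in areas])
        print(f"Pearson r between photo volume and poverty index: {r:.2f}")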
  86. [img] [img]
    What is privacy and why can't we agree about it?
    Abstract: The concept of privacy has divided lawyers, scholars and policymakers for decades, not only in terms of whether it is a good or bad thing, but even what it is. Some say it is a human right, some that it is a prerequisite for democracy; others note that individuals are prone to breaching their own privacy and are remarkably relaxed about it, and have described various privacy paradoxes or other common inconsistencies in attitude; some argue that it is unenforceable; still others argue that it is a blocker to the knowledge economy and the socially-beneficial use of big data; and many more say that whatever its merits it is dead. In this talk, Kieron O'Hara will argue that the reason for this apparently confused disarray is that different privacy discourses are going on simultaneously, talking past each other and cheerfully committing various category errors. He sets out a series of seven types of privacy discussion, which are distinct but relatable to each other, as a first step towards clearing up some of the confusion, and argues that privacy itself is strongly implicated at the boundaries between the self and world. Our attitudes towards privacy depend crucially on where we wish those boundaries to be.

    Shared with the University by
    Ms Amber Bu
  87. [img] [img]
    Why Cyber Security is Hard
    Abstract: There has been a great deal of interest in the area of cyber security in recent years. But what is cyber security exactly? And should society really care about it? We look at some of the challenges of being an academic working in the area of cyber security and explain why cyber security is, to put it rather simply, hard! Speaker Biography: Prof. Keith Martin is Professor of Information Security at Royal Holloway, University of London. He received his BSc (Hons) in Mathematics from the University of Glasgow in 1988 and a PhD from Royal Holloway in 1991. Between 1992 and 1996 he held a Research Fellowship at the University of Adelaide, investigating mathematical modelling of cryptographic key distribution problems. In 1996 he joined the COSIC research group of the Katholieke Universiteit Leuven in Belgium, working on security for third-generation mobile communications. Keith rejoined Royal Holloway in January 2000, became a Professor in Information Security in 2007, and was Director of the Information Security Group between 2010 and 2015. Keith's research interests range across cyber security, with a focus on cryptographic applications. He is the author of 'Everyday Cryptography', published by Oxford University Press.

    Shared with the University by
    Mr Roushdat Elaheebocus
This list was generated on Fri Apr 19 06:50:04 2024 UTC.