City University: Final Report



Beneficiaries

In addition to the scientific community, we can identify three broad categories of beneficiary: companies, regulators and the wider society.

Companies: Companies supplying systems, components and services used in the development of dependable systems will be direct users of the new mechanisms, techniques and processes developed by INDEED.
Regulators: We believe our work will inform established regulatory practice and allow the identification of new approaches to governance. The development of soundly based assessment practice will offer significant improvements in observed dependability and will provide a competitive advantage to firms operating within a well-informed regulatory context.
Wider society: We believe the work of the proposed project will facilitate timely, safe deployment of innovative products in a variety of domains (e.g. medical devices with safety requirements, e-commerce services with security requirements).

Interactions with beneficiaries will be via:

- interactions with our industrial partners (Voca, British Energy, QinetiQ, CAA), their supply chains and other key players in the relevant industry sectors
- the Scientific and Business Advisory Board, composed of senior and influential representatives from academia and our partners
- spin-off projects
- other dissemination activities such as Continuing Professional Development courses


Objectives

The main objectives of the research at proposal time

This project tackles the design challenge of dependability in computer-based systems. The overall objective is to support, via an interdisciplinary approach that takes into account the characteristics and behaviours of machines, individuals and organisations, the design of dependable socio-technical systems, and to be able to assess and communicate this dependability convincingly. This high-level objective encompasses the following more detailed, interdependent objectives:

  1. to develop ways of using time as a unifying abstraction for structuring dependable systems, by developing the DIRC time band model as it relates to system structure and the design of dependable systems and deriving a notation, language, logic and prototype tools for time bands.

  2. to address adaptation mechanisms and diversity within socio-technical systems, developing probabilistic models and tools that designers and assessors can use to gain insights and make explicit design trade-offs.

  3. to develop notations, tools and guidelines for modelling organisational responsibility and the underlying trust relations, with a particular emphasis on the evolution of responsibility and trust.

  4. to develop techniques for using these models, in conjunction with other system representations, to support the design of socio-technical systems and their associated dependability cases.

  5. to develop an approach to dependability cases that treats confidence, and the diversity of arguments often required to achieve confidence, in a technically sound manner with a supporting modelling language and pragmatics (case studies, templates).

  6. to incorporate time as a structuring mechanism and viewpoint on the case.

In addition, we intend to train a significant number of PhD students, essential for developing the next generation of dependability researchers.


The main objectives of the research at report time

The project timescale was extended to maximise co-ordination with the other sites and to exploit opportunities with the FDA and UK nuclear new build. At the funding stage the project was scaled down to approximately one research assistant per site. The objectives remained the same but need to be interpreted in the light of the reduced size of the project.


Publications

Journal publications

  1. Reasoning about the Reliability of Diverse Two-Channel Systems in Which One Channel Is Possibly Perfect. (2011). Professor Bev Littlewood & Dr John Rushby. IEEE Transactions on Software Engineering.

  2. Towards a Formalism for Conservative Claims about the Dependability of Software-Based Systems. (2010). Professor Robin Bloomfield, Professor Peter Bishop, Professor Bev Littlewood, Dr Andrey Povyakalo & Dr David Wright. IEEE Transactions on Software Engineering.

  3. CAD in mammography: lesion-level vs. case-level analysis of the effects of prompts on people's decisions. Dr Eugenio Alberdi, Professor Lorenzo Strigini, Dr Andrey Povyakalo & Professor Peter Ayton. International Journal of Computer Assisted Radiology and Surgery. 3, 115-122.

Conference publications

  1. Why are people's decisions sometimes worse with computer support? (2009). Dr Eugenio Alberdi, Professor Lorenzo Strigini, Dr Andrey Povyakalo & Professor Peter Ayton. SAFECOMP 2009, The 28th International Conference on Computer Safety, Reliability and Security. 18-31.

  2. Assessing asymmetric fault-tolerant software. (2010). Professor Lorenzo Strigini & Dr Peter Popov. 21st International Symposium on Software Reliability Engineering (ISSRE2010), San Jose, California. 41-50.


Other Research Outputs

Industrial Training Courses

Workshop given on "Confidence in Safety Cases" to CAA SRG and NATS; tutorial on "Structured Safety and Assurance Cases: Concepts, Practicalities and Research Directions" delivered at the IEEE 20th International Symposium on Software Reliability Engineering, ISSRE 2009, Mysuru, India.


Public policy engagement

UK Nuclear New Build: technical critique of the method of assessing the safety of the protection system of the proposed EPR reactor, in response to an HSE call for responses; workshop on "Assurance Cases for Software-based Systems in a Regulatory Environment: Justifying and Communicating Complex Risks" at the Software Engineering Institute.


Summary

This was a collaborative project between City University London and the Universities of York, Edinburgh and St Andrews. Each university produced a separate summary of its work, but the project results came through collaboration in which the different threads contributed to each other, in particular through workshop-style meetings held during the four years.

The emphasis of the City work was on two essential aspects of Assurance Cases: the semantics of confidence in the claims being made, and the development of a systematic approach to assessing the socio-technical aspects of adaptation.

The work on confidence addresses fundamental issues arising from the need to assess and communicate trust in complex socio-technical systems. It has led to new mathematical models of how confidence is propagated in assurance cases, and to some novel approaches to system architectures, following an international collaboration with Dr John Rushby of SRI. The Assurance Case work on confidence led to sponsored work from our nuclear industry partners and the CAA. We also held a workshop on "confidence" for the CAA regulation group and its service-provider partners.
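The flavour of the two-channel reasoning developed with Rushby can be illustrated with a small numerical sketch. The published result bounds the probability of failure on demand (pfd) of a 1-out-of-2 system in which one channel is "possibly perfect": P(system fails) <= P(channel A fails on the demand) x P(channel B is not perfect). The figures below are purely illustrative assumptions, not assessments from the project.

```python
def conservative_pfd_bound(p_a: float, p_not_perfect: float) -> float:
    """Conservative bound on the pfd of a 1-out-of-2 system where one
    channel is 'possibly perfect': the system fails on a demand only if
    channel A fails AND channel B turns out to be imperfect, so
    P(system fails) <= P(A fails) * P(B is not perfect)."""
    return p_a * p_not_perfect

# Hypothetical assessment figures, for illustration only:
p_a = 1e-3            # assessed pfd of the complex channel A
p_not_perfect = 1e-2  # assessor's (epistemic) probability that B is imperfect

bound = conservative_pfd_bound(p_a, p_not_perfect)
print(f"system pfd bound: {bound:.0e}")
```

Note that the two probabilities live at different levels: the first is aleatory (failure of A on a random demand), the second epistemic (the assessor's doubt about B's perfection), which is why the bound gives a principled way to combine a statistical claim with a confidence claim in an assurance case.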

The structuring of assurance around time bands was extended in a qualitative fashion, building on the vulnerabilities/defences approach that we had previously used with one of our industrial partners on financial processing systems. The Assurance Case approach was used as the overarching integration mechanism: the York work on time bands, the responsibility modelling at St Andrews and the trust work at Edinburgh all fit well into this framework, as does the CSR work on confidence, adaptation and diversity modelling.

Work on human adaptation and human–machine diversity made progress on issues of method through application to case-study data and subsequent elaboration of a general model. The issue studied is that, in decision-making situations in data-rich environments, computerised decision-support tools may have unexpected effects, changing the mental processes that people apply in unintended and sometimes detrimental ways. A well-known example is "complacency", in which decision makers in effect delegate their responsibility to the computer aids, accepting their explicit or perceived advice as always correct. This reduces the necessary "diversity" between the strengths and weaknesses of the decision aids and those of their users, so that the benefit of such aids is reduced, sometimes to the point that they make decision makers worse, rather than better, at their tasks.

New results from statistical data analysis in a case study, interesting both to the application community concerned (medical) and as a proof of existence of specific unintended influences of a decision-support tool, contributed to the development of a new descriptive model of the multiple possible mechanisms through which these unintended effects can be produced, both as unintended but rational uses of the computer advice and as "hidden" psychological effects on the process of using the advice. This model goes beyond the limited, behavioural view typical of the literature on "operator complacency" or "automation bias", identifying how different mechanisms producing the same unintended behaviour need different defences to counteract them: an assurance case needs to take these different possible mechanisms into account in order to determine whether sufficient mitigation measures are in place. In addition to initial publication through a conference paper, the modelling approach has been disseminated through presentations to industry.
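The way complacency erodes human–machine diversity can be made concrete with a deliberately simplified sketch. It assumes (our own illustrative model, not the project's published one, with made-up rates) that a wrong final decision occurs only when the tool gives wrong advice and the human fails to catch it; complacency then shows up as a high probability of accepting wrong advice.

```python
def decision_error_rate(p_tool_wrong: float, p_accept_wrong_advice: float) -> float:
    """Probability of a wrong final decision under the simplifying assumption
    that errors occur only when the tool is wrong AND the human accepts
    that wrong advice (hypothetical model, illustrative rates)."""
    return p_tool_wrong * p_accept_wrong_advice

p_tool_wrong = 0.05  # assumed rate of wrong tool advice

# A diverse human checker often catches the tool's errors;
# a complacent one usually defers to the tool.
diverse = decision_error_rate(p_tool_wrong, 0.10)
complacent = decision_error_rate(p_tool_wrong, 0.90)
print(f"diverse: {diverse:.3f}, complacent: {complacent:.3f}")
```

Even in this toy model the complacent pairing approaches the tool's raw error rate, losing almost all the benefit of the human check; and since different mechanisms (rational over-reliance vs. hidden psychological effects) can produce the same elevated acceptance rate, they call for different defences, which is the point the descriptive model makes for assurance cases.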
