Tuesday 1st February 2022, 09:00-17:00 (GMT)
Throughout the day there will be group discussions around tables, each hosted by a company presenting an AI Challenge. As well as networking, the objective is to spend a thought-provoking day discussing the topics and, where possible, relating them to the participants' research. At the end of the day, each table will feed back what was discussed.
If you are interested in attending, please contact [email protected] with details of the topics of most interest to you. Every effort will be made to assign you to your selected topic(s), and to move you to a new table for the afternoon session if you select more than one topic; however, we will also need to ensure a spread of participants across topics.
Programme 09:00-17:00 (GMT)
09:00-09:30 Arrival Tea/Coffee
09:30-09:45 Introduction to the day
09:45-10:45 Presentation of AI Challenges from the table hosts
10:45-11:00 Mid-Morning Tea/Coffee
11:00-12:30 Table discussions
13:30-15:00 Table discussions
15:00-15:15 Afternoon Tea/Coffee
15:15-15:45 Prepare presentation of table discussion
15:45-16:45 Presentations/Feedback from table discussions
16:45-17:00 Closing the day
‘Table Hosts’ and AI Challenges
- Explainable maritime autonomous navigation: We are seeing a surge in the technological development of autonomous navigation systems in the maritime sector. These systems are trained to meet the Collision Regulations (COLREGs) and to operate safely whatever the scenario and whoever is operating around them. However, because their behaviour is largely derived from neural networks, they are effectively a black box of controls, and it is hard to know exactly what they are "thinking". Given that autonomous and traditionally crewed vessels will need to work together for at least the next decade, how can we make the navigation system explainable, so that it can describe why it is taking the course it is?
- Explainable threat detection (military domain): A similar scenario to the above, but with automated threat detection and classification systems – when there are so many variables, how do you break the problem down to enable the customer to describe why an incoming asset is a threat?
- Simulated data: When we want to test systems, build visualisations or create AI models, one challenge is getting customer data. Given our market and the sensitivity of the data, customers often don't want to share it. As a result, the quality of our testing suffers, which may mean more bug fixing or longer UAT windows. If we could create simulated data that looks, feels and behaves like real data but isn't real, then we could get better testing results without customers having to risk sharing real data. We don't just want randomly auto-generated records: we want to learn from the original data and create a facsimile that behaves the same, but isn't the same. Challenges include how and where we deploy, how we train, which modelling technique to use, and how we validate.
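As a seed for the table discussion, the "learn from the original data, then generate a facsimile" idea can be sketched in a few lines of Python. All field names and distributions below are invented for illustration; a real system would fit far richer models (correlations between fields, categorical dependencies) and would have to validate the output against privacy requirements:

```python
import random
import statistics

random.seed(0)  # make this illustration reproducible

# Invented "real" records (hypothetical fields, not actual customer data).
real = [
    {"transactions": random.gauss(120, 15), "region": random.choice("ABC")}
    for _ in range(500)
]

def fit(records):
    """Learn simple per-field summaries from the original data."""
    values = [r["transactions"] for r in records]
    return {
        "mu": statistics.mean(values),
        "sigma": statistics.stdev(values),
        # Keep the observed categories so sampling preserves their frequencies.
        "regions": [r["region"] for r in records],
    }

def generate(model, n):
    """Emit synthetic records that mimic the learned summaries, not the originals."""
    return [
        {
            "transactions": random.gauss(model["mu"], model["sigma"]),
            "region": random.choice(model["regions"]),
        }
        for _ in range(n)
    ]

synthetic = generate(fit(real), 1000)
```

Validation – checking that the synthetic records reproduce the statistics of the originals without leaking any individual record – is itself one of the challenges listed above.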
- Socio-Technical AI: AI is often used to inform stakeholder decision-making. In Natural Language Processing, text generation and extraction can be used to create emergent value and even insight. However, where insight is the goal, the solution should not draw a boundary around the machine, but around the human-AI interaction. We define this as the socio-technical AI system. On the technical side, moving beyond language models towards purposive systems means incorporating additional information and features into the insight-generation process: how can additional features be incorporated into text generation? On the social side, human behaviour is individualistic: how can we ensure optimised user behaviour? And for the socio-technical whole, how can we measure the success of the entire system, i.e. how do we differentiate between good and bad AI-human interactions?
- The use of AI to decarbonise infrastructure development: Net zero and decarbonisation are key topics for infrastructure. Current approaches to designing infrastructure assets for a low carbon footprint are identifying carbon savings, but largely because the current carbon baselines (against which carbon reduction is measured) are very high, and because the solutions currently being worked on and delivered target changes that are easy to make and that also reduce cost by improving the way the construction process is carried out. As carbon baselines reduce over time, it will become harder to identify solutions that meet project KPIs of not only lower carbon but also reduced cost, whilst maintaining quality and lifecycle effectiveness. This challenge asks: how can we use AI to improve the decision-making process for decarbonising the development of infrastructure assets, whilst also balancing other KPIs?
- The role of AI in radical decision making: This is a philosophical challenge with real-world application: how do you make disruptive breakthroughs in decision making with AI, and evaluate the outcomes to reduce the probability of failure of the proposed solution? This use case came up when discussing the use of AI in battlefield scenarios, where there is a need to apply change beyond the "predictable decisions" that officers make. The contention is that there is a continuum from "predictable behaviour", which brings known "safe" results, to more "disruptive behaviour", which provides greater advantage and can lead to victory for the home side. How can this be captured in an AI, and what methods exist for validating the correctness of the decisions the AI makes? Although the initial idea came from the world of defence, it also has applications across fields such as project management and autonomous design.
- Synthetic data generation: Getting Data Sharing Agreements in place before starting a research project at Mayden is a costly and time-consuming process, and is required not only for internal projects but also for collaborative academic partnerships. Electronic Patient Record (EPR) data is complex in nature, with several hierarchies: each patient can have multiple referrals to a mental healthcare service, each referral can have multiple appointments, and each appointment in turn has several outcome measures and data points associated with it. Mayden would like to be able to generate meaningful EPR data that is based on, but does not include, any sensitive data. This would allow data to be shared, analysis to be planned and research to continue while the appropriate agreements are put in place for using the original sensitive data. Key outcomes from the day could include: (1) a deeper understanding of synthetic data generation, including the inputs (e.g. database schema, original data summaries) and outputs (e.g. synthetic data) required of a synthetic data generator; (2) an algorithm for generating synthetic data, scoped for further development; (3) an algorithm for generating synthetic data, developed for future use by Mayden.
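The hierarchical structure described above (patients → referrals → appointments → outcome measures) is what distinguishes this from flat tabular synthesis. A minimal sketch of a generator driven by learned summaries might look as follows; the summary values, field names and score range here are purely illustrative assumptions, not derived from iaptus or any real dataset:

```python
import random

random.seed(1)  # make this illustration reproducible

# Hypothetical summaries that a generator might learn from real EPR data
# (illustrative values only). Resampling from an observed list of counts
# preserves the empirical distribution of those counts.
SUMMARY = {
    "referrals_per_patient": [1, 1, 1, 2, 2, 3],
    "appointments_per_referral": [2, 4, 6, 6, 8],
    "outcome_range": (0, 27),  # e.g. a PHQ-9-style score range
}

def synthetic_patient(pid, summary):
    """Generate one synthetic patient with nested referrals and appointments."""
    lo, hi = summary["outcome_range"]
    referrals = []
    for r in range(random.choice(summary["referrals_per_patient"])):
        appointments = [
            {"appointment": a, "outcome_score": random.randint(lo, hi)}
            for a in range(random.choice(summary["appointments_per_referral"]))
        ]
        referrals.append({"referral": r, "appointments": appointments})
    return {"patient_id": pid, "referrals": referrals}

cohort = [synthetic_patient(i, SUMMARY) for i in range(100)]
```

A real generator would also need to learn dependencies between levels (e.g. outcome trajectories across appointments within a referral), which is where the modelling-technique discussion comes in.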
- Monitoring correct iaptus usage: Mayden's patient management system, iaptus, holds significant usage data. Mayden would like to use this data to identify correct or incorrect usage. This classification could enable the development of a recommendation engine to help therapists navigate the system accordingly, and could also enable service managers to identify users who are accessing areas of the system they don't need to, or shouldn't be, accessing. It should be noted that this data could not be shared on the day. Key outcomes from the day could include: (1) a deeper understanding of the classification of usage data; (2) an algorithm for usage-data classification, scoped for future development; (3) an algorithm for a recommendation engine, scoped for future development.
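One simple starting point the table might discuss is frequency-based flagging: an access is "unusual" if few users in the same role touch that area. The log below is entirely invented (real iaptus usage data could not be shared on the day), and a production classifier would use far richer features than role/area frequencies:

```python
from collections import Counter

# Invented usage log of (user, role, area) tuples - illustrative only.
log = [
    ("t1", "therapist", "notes"), ("t1", "therapist", "appointments"),
    ("t2", "therapist", "notes"), ("t2", "therapist", "appointments"),
    ("t3", "therapist", "notes"), ("t3", "therapist", "billing"),
    ("m1", "manager", "billing"), ("m1", "manager", "reports"),
]

def unusual_access(log, threshold=0.2):
    """Flag accesses to areas rarely used by others with the same role."""
    role_area = Counter((role, area) for _, role, area in log)
    role_total = Counter(role for _, role, _ in log)
    flags = []
    for user, role, area in log:
        # Fraction of this role's activity that goes to this area.
        if role_area[(role, area)] / role_total[role] < threshold:
            flags.append((user, area))
    return flags

# In this toy log, "billing" is rare for therapists, so t3's access is flagged.
flags = unusual_access(log)
```

The same frequency signal, inverted, could feed a recommendation engine: areas commonly used by a role but not yet visited by a given user are candidate suggestions.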
- Predicting customer behaviour in the Defence sector: How can we utilise AI to account for government economic policy and defence priorities, geopolitical tensions, weather conditions and other external factors to provide better service to our customers?
- Assuring security: Modern advanced security appliances are based on behavioural analysis, and AI systems are often very opaque. How do we prevent, and detect, compromise of these AI systems? AI systems consume large quantities of data of varying quality, from many sources with differing sovereignty and security levels. How do we ensure that this data is not "poisoned", deliberately or accidentally, resulting in unwanted or damaging behaviour? Separately, without incorporating insights from biology, AI will not get very far: a self-piloting vehicle, for example, may not differentiate between the decision-making processes of a 7-year-old, a 17-year-old and a 70-year-old approaching a pedestrian crossing or road. Without this understanding, driverless cars will never display the effectiveness needed for wide adoption. How do we ensure that the development of AI places sufficient importance on the combination of AI and biology, and not just AI alone?