The annual ART-AI ‘AI Challenge Day’ will take place on Monday, 26th January 2026, from 09:00 to 17:00 (GMT) at the APEX City of Bath Hotel, James St W, Bath BA1 2DA.
ART-AI students and industry partners will each present an AI Challenge and host group discussions around their tables. The goal is to foster thought-provoking conversations and advance research through collaboration, while also providing valuable networking opportunities. At the conclusion of the day, each group will present a summary of their discussions.
Details of the challenges are below. Please register and select which AI Challenges you would like to participate in by Monday, 12th January 2026.

Every effort will be made to assign you to your selected topic(s) and, if you select more than one topic, to move you to a new table for the afternoon session; however, we will also need to ensure a spread of participants across topics.
Programme 09:00-17:00 (GMT)
09:00-09:30 Arrival Tea/Coffee
09:30-09:45 Introduction to the day
09:45-10:45 Presentation of AI Challenges from the table hosts
10:45-11:00 Mid-Morning Tea/Coffee
11:00-12:30 Table discussions
12:30-13:30 Lunch
13:30-15:00 Table discussions
15:00-15:15 Afternoon Tea/Coffee
15:15-15:45 Preparation of table discussion presentations
15:45-16:45 Presentations/Feedback from table discussions
16:45-17:00 Closing the day
17:00 Drinks reception
Challenges
Who Guards the Guardrails? Accountability in AI Education
Lena Chauhan (Rise IQ)
Tech giants like Google and Microsoft are rapidly scaling AI education in schools, but their curricula lack transparency about commercial interests and don’t foster accountability for ethical AI use.
Meanwhile, children already have unrestricted access to AI tools and are learning risky behaviours through peer networks and social media, while traditional ethics-led curriculum development can’t compete with Silicon Valley’s speed or resources.
How do we create transparent, accountable AI education frameworks that can scale at the speed of corporate programs while genuinely influencing children’s responsible AI behaviour – particularly when we’ve already lost the race to control access?
Advanced Reasoning in Foundation Models and How to Deploy Them Effectively and Responsibly
Harish Tayyar Madabushi (Bath) and Steven Schockaert (Cardiff)
Pre-trained foundation models have achieved remarkable capabilities, yet their ability to perform reliable, multi-step reasoning remains an important area of active research. This gap is a primary roadblock for adoption in high-stakes domains like medicine, finance, and law, where clear explanations and near-perfect accuracy are critical. Therefore, this roundtable poses a two-part challenge:
- Part 1: The ‘Now’ Challenge (Deployment)
What are the biggest barriers to deploying current-generation models responsibly, and what practical safeguards, governance frameworks, and human-in-the-loop processes (see the sketch after this list) must we develop to increase adoption and safety?
- Part 2: The ‘Future’ Challenge (Advanced Reasoning)
What are the long-term technical pathways (e.g., neuro-symbolic hybrids, new architectures for causal inference, or formal verification methods) that will lead to a new generation of models with truly advanced and verifiable reasoning?
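To ground the Part 1 discussion, below is a minimal, hypothetical sketch of one widely discussed safeguard: a confidence-gated human-in-the-loop process, where a model’s answer is released automatically only when its confidence score clears a threshold, and is otherwise escalated to a human reviewer. The names (`ModelOutput`, `human_reviewer`) and the 0.9 threshold are illustrative assumptions, not a reference design.

```python
from dataclasses import dataclass
from typing import Callable

# Hypothetical sketch of a confidence-gated human-in-the-loop pipeline.
# All names and the 0.9 threshold are illustrative assumptions.

@dataclass
class ModelOutput:
    answer: str
    confidence: float  # model's self-reported confidence in [0, 1]

def gated_response(
    output: ModelOutput,
    escalate: Callable[[ModelOutput], str],
    threshold: float = 0.9,
) -> str:
    """Release the model's answer only if its confidence clears the
    threshold; otherwise route it to a human reviewer."""
    if output.confidence >= threshold:
        return output.answer
    # Low-confidence outputs are never shown to the end user directly.
    return escalate(output)

# A stand-in reviewer who vets (and here, simply tags) the draft answer.
def human_reviewer(output: ModelOutput) -> str:
    return f"[reviewed] {output.answer}"

print(gated_response(ModelOutput("The contract clause is enforceable.", 0.55),
                     escalate=human_reviewer))
```

In practice, the interesting questions for the table are exactly the ones this sketch glosses over: where the confidence score comes from, how it is calibrated, and who audits the reviewer.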
‘My Reviews Were ChatGPT-Generated’: On the Future Economics of Scientific Discovery and Evaluation
Joseph Marvin Imperial (ART-AI)
OpenAI recently announced its plans to develop an automated research assistant by 2026 and a fully autonomous AI researcher by 2028. Simultaneously, AI Scientist agents from startup labs like Sakana AI and Intology AI have reported having their papers accepted at prestigious AI conferences such as ICLR and ACL. At this pace, what will scientific inquiry look like in the next 50 years? The automation of scientific discovery and evaluation through peer review is no longer distant speculation; it is poised to directly impact how research is valued and who controls the means of discovery. This table will explore and dissect several key points regarding the future economics of producing and evaluating scientific works, including forecasting which aspects of research may soon be automated, why this shift matters for how we value ideas, and what new roles researchers might assume. We invite students, staff, and industry partners from a broad range of disciplines, including the pure sciences, social sciences, computing, and engineering, to participate in this table. The main discussion will be guided by the following questions:
1. How much of today’s research can be automated (e.g., literature review, experiment design, code generation, analysis of results)?
2. What shifts in incentive and attribution structures (e.g., intellectual property, knowledge ownership) may occur when ideas become cheap but experiments become more expensive?
3. How do we maintain scientific integrity in the face of a rising volume of low-quality ‘cheap science’?
4. What will be the new economic value of a PhD degree in the next 50 years under increasing automation of scientific inquiry and assessment?
5. What roles will emerge in a post-automated research world (e.g., orchestrators, auditors, meta-scientists)?
Hack for Science: From Computational Challenges to Prototyped Solutions
Miles Pemberton and James Proudfoot (ART-AI)
Join researchers from across physics, chemistry, biology, and related fields to explore how AI can meaningfully accelerate scientific workflows. In this challenge, we’ll uncover common computational bottlenecks in the natural sciences and brainstorm creative, cross-disciplinary solutions. No advanced AI expertise is required, only curiosity and a willingness to collaborate.
In the morning session, we’ll map out what slows down current scientific work by discussing which tasks are repetitive, fragile, or ripe for automation, and identifying where AI could provide real value in modelling, simulation, data handling, visualisation, documentation, experiment planning, or improving interdisciplinary communication.
The afternoon will be hands-on, as we design and build a minimal prototype or workflow demonstrating how AI could address one of the challenges identified earlier in the day. Coding experience is helpful but not essential; thoughtful design, scientific insight, and clear articulation of needs are just as important.
By the end of the day, we’ll share our concept and highlight opportunities for AI to support real scientific discovery. Join us to co-design tools researchers would actually want to use!
Financial Literacy That Actually Works
Goldman Sachs
Goldman Sachs is interested in creating AI-powered financial simulators that make abstract financial concepts – such as compound interest, risk/return and volatility – genuinely visceral and understandable, particularly for younger generations.
The core challenge is “making the invisible visible” so that people truly understand what they’re buying and the long-term implications of financial decisions.
We’re especially keen to develop a financial literacy product aimed at younger audiences, even children, to help build foundational understanding of financial concepts early in life.
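As a toy illustration of the kind of ‘invisible’ dynamic such a simulator would surface, the sketch below compares a lump sum held as cash with the same sum earning compound interest; the 5% rate, £1,000 principal, and 30-year horizon are arbitrary assumptions for the example, not product details.

```python
# Toy illustration of compound interest: the kind of "invisible" dynamic a
# financial simulator might make visible. The 5% rate, GBP 1,000 principal,
# and 30-year horizon are arbitrary assumptions for this example.

def compound(principal: float, annual_rate: float, years: int) -> float:
    """Value of a lump sum after `years` of annual compounding."""
    return principal * (1 + annual_rate) ** years

principal = 1_000.0
for year in (0, 10, 20, 30):
    grown = compound(principal, annual_rate=0.05, years=year)
    print(f"year {year:2d}: cash £{principal:,.0f} vs invested £{grown:,.2f}")
# year 30: cash £1,000 vs invested £4,321.94; the gap is the invisible part.
```

A real simulator would of course layer risk, volatility and fees on top, but even this minimal view makes the long-term gap tangible.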
The Future of Legal Services
Norton Rose Fulbright
Norton Rose is tackling a forward-looking question: What do legal services and advice look like in 2026 and beyond, and how can market advantage be created through responsible AI adoption?
Specific areas of interest include:
- Responsibly leveraging collective data and experience to offer new insights to clients
- Exploring whether AI-enabled services (such as benchmarking, predictive analytics or client portals) could redefine how to deliver value
- Understanding the ethical and competitive implications of monetising “market intelligence” and whether that changes who can be represented
The challenge is particularly complex given unique constraints: as a law firm, we have deliberately siloed much of our data to adhere to privilege and confidentiality obligations, and we operate on a pyramid resourcing model paid by the hour. We recognise that GenAI is shifting not only what we do, but how we sell, who does the work, and, most excitingly, what we can offer going forward. We want to explore these opportunities while adhering to strict professional standards and insurer requirements around privilege and confidentiality.
AI Literacy and Vulnerable Populations *May run for the morning or afternoon only, depending on numbers.*
Adam Joinson (Bath)
“Literacy”, in its various guises (e.g. digital literacy, financial literacy), is often promoted as a panacea for the risks posed by novel technologies or services. In the context of AI, there are courses aimed at supporting the use of AI tools by professionals (e.g. in law: https://www.kcl.ac.uk/news/kings-launches-world-first-ai-literacy-programme-for-law-students-and-st…), but little work on everyday populations, in particular those who might be most vulnerable to AI-enabled harms or who stand to gain most from AI support. This half-day round table will explore what we mean by “AI literacy”, what factors might contribute to it, and how it might be used as a construct to support vulnerable populations. For those interested, we could also work to develop a scale to measure AI literacy and the skills needed to support such interventions.
AI-Human Teaming *May run for the morning or afternoon only, depending on numbers.*
Julian Padget (Bath)
AI technologies are often talked about as a means to support or augment human activity, but how does this translate into situations that humans actually want, rather than ones technologists dream about? Depending on the interests of who joins, the debate could focus on disembodied agents, embodied agents, or both. What are the dimensions of AI-human teaming? How many participants does it take to make a team? What capacities and capabilities are wanted in team members? Which scenarios would tease out some of the technical, practical and moral problems? How do we migrate thinking from one-shot tech-bro scenarios, such as tourism/shopping/entertainment, to long-term quality-of-life-enhancing interaction with socio-economic impact? This half-day round table aims to explore these dimensions, and others that participants bring, in order to construct a human-centric characterisation of AI-human teaming, identify points in this design space that are achievable now or nearly so versus those that are not, pinpoint the barriers, and develop blue-sky thinking on what is worth pursuing and how.

