Artificial Intelligence currently has an exceptionally high profile and is at the heart of much envisaged economic, societal and industrial progress. There is a general feeling that AI's time has come. However, for AI to reach its full potential there needs to be confidence that it will behave safely and securely. Safety, in particular, is necessarily a conservative domain, and the way many AI techniques work leaves a persistent feeling that current assurance approaches simply aren't up to the job. Furthermore, the rise of malicious cyber-attacks means that there are new vectors by which safety problems can occur. Particularly nasty is the notion of a trapdoor in an AI component: one trained maliciously so that it operates well under almost all circumstances, but malfunctions when presented with a specially crafted input. In this talk I will consider the prospects for AI, and in particular the prospects for assured AI. I will also consider how AI itself may be used to advance the assurance of AI. I will mix established fact with personal judgement and, in some cases, speculation.