
Enough About AI
Enough about the key tech topic of our time for you to feel a bit more confident and informed. Dónal & Ciarán discuss AI, and keep you up to date.
Bursting the Bubble?
Dónal and Ciarán explore the looming questions about the overinflation of the major AI companies' valuations and the anxiety about whether we are in a bubble - and what might happen if it pops. Relevant to the AI companies' need to keep hype levels high, they discuss the muted reception to the release of ChatGPT-5 and some of the emerging strategies to make AI chatbots more palatable to certain audiences who are worried about "woke".
Topics in this episode
- Are we in an AI bubble? Spoiler alert: Yes - based on any normal metric of what an investment bubble is - but why is the promise of an almost-there superintelligence keeping things from popping?
- Despite the bubble, the lasting impacts that AI's current capabilities are already having on jobs and society
- The recent release of ChatGPT-5 has led to negative feedback from the tech press and vocal users - this is contrasted with other recent version releases.
- Examining how AI companies are trying to find new ways to add value, leading to a discussion of "Third Devices" and AI hardware
- The limitations and diminishing returns of training on synthetic data - and the apparent slowing down in model progress
- AI & Ideology - what does it mean to have a non-woke AI?
Resources & Links
- The Economist story mentioned by Dónal: "AI valuations are verging on the unhinged - Unless superintelligence is just around the corner" (25 June 2025)
- Article in TheJournal.ie on "Brendan", the AI Dublin Tour Guide
- ChatGPT's dodgy graph is linked and discussed here: "OpenAI gets caught vibe graphing" (The Verge, 07 August)
- Sam Altman (OpenAI) tells venture capitalists that he will take billions of their money and build AGI - and then ask it how to make a return on the investment (Twitter Video, Warren Terra)
- Some good discussion on the struggles of agentive AI ("AI Agents have, so far, mostly been a dud", Gary Marcus, Substack)
- Apple's important recent paper on the limitations of "reasoning" within tested reasoning models is available as a PDF here
- Coverage of Truth Social's deal with Perplexity - to make a non-woke chatbot for the platform
You can get in touch with us at hello@enoughaboutai.com - we'd love to hear your questions, comments or suggestions!
Hello, I'm Dónal Mulligan. And I'm Ciarán O'Connor. And you're listening to Enough About AI, our ongoing podcast series covering some essential themes and topics as we examine the ways that artificial intelligence continues to affect our lives. In the last episode we discussed reasoning models, new releases, and the increased ubiquity and invisibility of AI in all our lives. Today we're going to take a look at the release of ChatGPT 5 by OpenAI, broaden this out to discuss the limitations some models are facing, as well as calls to make AI "non-woke" - and what exactly that means. But first, there's been much debate around the growing level of finance being pumped into AI companies, seemingly endless calls for investment from these organisations themselves, and questions about valuations, hype and disappointment, leading people to ask: are we in an AI bubble? So, Dónal, you've been crunching the numbers, you've been reading the tea leaves. What's actually happening here? I have! It's - this is a very nerdy thing - but I have for the last while been keeping semi-regularly updated spreadsheets where I'm looking at the valuations of these companies versus the information they release quarterly about their profits and their sales, to see what the ratio between those two things is. I have to give the huge caveat here that I have no training in economics of any kind, but there's a rough metric that's useful called price to sales, where you look at the pile of dollars and euro that investors are willing to put in against the future sales they hope that company is going to make, and you can see what the proportion between those two things is. And there are set standards that are fairly usual for investments made in tech, which has a bigger cushion for this than other kinds of industry. But we are way, way beyond those norms when it comes to the AI companies.
All of the AI companies - but some very particularly - are valued at massive multiples of what their current income, or even their projected income, is. We might expect this ratio to be in the range of 5 to 10 times if we're staying in the safe old ways of thinking. But when we get into a bubble, we tend to see huge amounts of investment being poured into a particular kind of company or entity. And there have been lots of these bubbles in the past, where the valuations being put on the companies - the amount of money people are willing to invest - become totally disconnected from the amount of money those companies are making. That has happened here. We're seeing at the moment, based on the most recent figures I put together - and I'm happy to share this spreadsheet in the show notes - valuations of more than 100 times the revenue of these companies in some cases. This is usually indicative of a bubble in the classic sense. When economists talk about the "dot com bubble" or the "property bubble", these are the kind of things they're looking for - and we are very certainly in that space at the moment. I can run through some of the particulars if you want. (Yeah, yeah, please.) Yeah. So looking at the big expected players in this space, we have OpenAI, we have xAI, Elon Musk's company, Anthropic, Perplexity and Google. And of those five, all of them are way outside that normal price-to-sales ratio - the valuation versus the amount of money coming in - but some are alarmingly outside it. Looking at the numbers coming from xAI, they're in the range of 150 to 200 times the amount of money it's bringing in in sales. So this is wildly out of kilter with where you'd normally expect investment to be.
Similar to that is Perplexity, an AI search company that's got a lot of attention recently - and I think we'll talk about them a little later, in the "non-woke" part of our discussion - they're at around 120 times in terms of their valuation. It's still very large for the big players we're used to speaking about: between 30 and 40 times for OpenAI and around 50 times for Anthropic. So these are all valuations that we would not see in normal types of investment, and they're the kind of things that usually indicate you're firmly in a bubble. And whilst the valuations of these companies can be a little bit mercurial - hard to actually gauge sometimes - we do know hard figures for the losses, the spend, that these companies are incurring. Perplexity, for example, generated revenue of 34 million last year, I believe, but burned around 65 million. So in that one sense, the numbers don't seem to add up. Yeah. And usually this is what indicates a bubble economy: these runaway valuations cause more and more investment to be made in the companies, there's no way of recouping that investment because there aren't sales coming in, and then at some point there's usually a crash. Sometimes people just stop investing very suddenly, or try to pull all of their money out very suddenly. And that's what we saw in the property crash that so drastically affected all of our lives, all over the world, in around 2009 or so. But I'm interested in an earlier bubble. There have been lots of them in history - we can go back to the tulip bubble in Amsterdam and the South Sea Bubble in the 1700s - but the Dot Com bubble, the one that happened in the early 2000s, which was also a tech bubble, I think is a really interesting parallel with where we are now.
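For listeners who like to see the arithmetic, the price-to-sales metric described above can be sketched in a few lines of Python. This is a back-of-the-envelope illustration: the numbers below are placeholders chosen only to echo the rough multiples quoted in the episode, not real financial data.

```python
# A minimal sketch of the price-to-sales (P/S) metric: how many euro of
# valuation investors assign per euro of annual sales. The figures here
# are illustrative placeholders, NOT real company financials.

def price_to_sales(valuation: float, annual_revenue: float) -> float:
    """P/S multiple: company valuation divided by annual revenue."""
    return valuation / annual_revenue

# Per the discussion, a 'safe' tech multiple sits around 5-10x.
NORMAL_TECH_CEILING = 10

examples = {
    "hypothetical frontier-AI lab": (150.0, 1.0),   # ~150x, like the xAI figure quoted
    "hypothetical ordinary tech firm": (8.0, 1.0),  # ~8x, inside the usual range
}

for label, (valuation, revenue) in examples.items():
    ratio = price_to_sales(valuation, revenue)
    status = "bubble territory" if ratio > NORMAL_TECH_CEILING else "within tech norms"
    print(f"{label}: P/S = {ratio:.0f}x -> {status}")
```

The same function also shows the burn-rate point: a company spending 65 million to earn 34 million has a cash problem regardless of what multiple its valuation sits at.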
The same sort of wild valuation was given to lots of companies that we have no idea where they are now. That all fell apart, based on the projection at the time that the changing way in which we might use the Internet, and all the commerce that might come with it, was an untapped, endless supply of money just around the corner. Huge amounts of investment were piled into companies, causing a bubble that inflated the stock prices of lots of these burgeoning tech companies in around 2000 and 2001. And very suddenly, once it became obvious that the money was not going to be recouped, they fell apart. The Dot Com bubble isn't as well remembered as the property one because it wasn't as widespread - it didn't affect as much of society. Certainly lots of people were drastically affected: lots of jobs were lost, and lots of investment that could have gone elsewhere was lost in that period. But I don't think it had the same kind of resonance for us as the more recent one. I think there are very strong parallels, though, in what happened there: an expectation that a really rapidly changing tech space was about to provide riches enticed a lot of people in. And I think we're very much in that same place again. But this time, if this bubble does pop, it's going to be a bigger pop, and I think we're going to see quite drastic effects if that were to happen. But if we think of the perception of a bubble popping, you might be led to believe that it disappears, that it goes away forever. Are there any parallels from what occurred after the Dot Com burst, in terms of how those companies, or the technologies that became available at the time, didn't quite disappear? Yeah, yeah, I think it's interesting to - so, just to recap, because some of our listeners will have been born after the time that happened.
I mean, I'm old enough that I lived through it, but there are lots of people who won't have been around for it. The Dot Com bubble was basically born out of the arrival of e-commerce to the world. Lots and lots of people, for the first time, had personal computers: from Windows 95 onwards, more and more people bought computers, and they became something you would have in your home and use. This sounds so obvious to us now, but it was certainly not the case in the 1990s. And one of the key things that Windows 95 in particular did, and later Windows 98 - those early Microsoft operating systems - was to allow people to connect to the Internet. So people on their telephone line at home used their modem, which made a series of noises that we all fondly remember if we grew up in that era, and connected to the Internet. People were moving online and starting to do things online, and the Dot Com bubble was a huge amount of investment into companies who were saying "I am going to be the purveyor of X service", "I'm going to be the person who sells..." whatever it might be. Some of those did survive - Amazon, eBay, these companies were born in that period - but hundreds of others were also part of that bubble in the early 2000s, and very rapidly they ceased to exist. Massive amounts of money were poured into things like online pet stores, and into ideas about how society would rapidly change and new revenue streams via the Internet would come in. And very suddenly people realised, "oh, we may not recoup this investment", tried to pull their money out, and that precipitated the crash at the time. So you're right to point to the parallels this might have with now, and to ask whether, besides the bubble just popping and going away, there are longer-term implications.
And I think one of the key things that makes this a parallel with the Dot Com bubble is that, like then, we are investing very heavily in infrastructure alongside the hope that this bubble is going to make money. In the Dot Com bubble, a lot of what is now the global communications infrastructure of the world was built. There was huge investment in fibre optic connections, in intercontinental cables, in the foundational elements of the Internet. So even though the money that went in was lost to a very large degree, even though a lot of those companies don't exist anymore and we consider that a bubble that popped, its legacy was a huge amount of infrastructure upon which the rest of the Internet and the social media era was able to grow and flourish. And the same thing is happening here. It is possible, of course, that tomorrow the AI bubble will pop - that a lot of investors might simultaneously lose faith in these wild valuation ratios we just went through and pull their money out. But again, they have built a technology that, even if it were to stop at its current level, is extraordinary in its capability. And we've had massive investment go into data centres, and - problematic as they are, especially in Ireland's case - those things would survive a bubble in the same way. So I think we're looking at something that is precarious and has a lot of the same features as previous valuation crashes we've seen, but that has already built a technology, like the Dot Com era, that is going to last beyond it, and a lot of infrastructure that might last beyond this period too. Which is quite notable, because when you listen to some of the leading figures in AI, they very much talk in terms of the next 18 months, the next two years, the next immediate period, let's say.
But the kind of parallels you're drawing there look at a longer scope of how this technology may play out, which might be a more realistic, nuanced way of looking at it than the venture-capitalist framing that's leading to a lot of these overvaluations, perhaps. But coming down to it: is this a moment of truth for AI companies? Yeah, this is it. I still constantly think of a headline that The Economist had in the middle of June which said "AI valuations are verging on the unhinged." And then the subheadline, in a slightly smaller font underneath, said "unless superintelligence is just around the corner." And this is the problem: maybe it is. The hype that you're talking about there, which we can very much associate with figures like Sam Altman - who is constantly telling us that he's "feeling the AGI", that it's about to happen at any moment - that is the thing that keeps this from popping. If AI is about to change everything, if we're going to have some sort of civilisation-changing moment, then maybe it is rational to keep pouring money into something that's valued at 200 times the sales it's bringing in. So there's a very clear incentive for the people most centrally responsible for administering these companies to keep that hype going, because they need this bubble not to pop. We're always telling our audience to be very critical in their reading of what gets said about AI, but it is really important to see these companies first and foremost as money-making entities that need to sustain what is, by any normal economic metric, a really dangerously overinflated level. They need to keep that going by keeping the hype going.
With the mass investment that's moving towards AI now - I read recently that something like over half of the investment taking place in the US, for example, is moving towards AI - that raises questions about what's not being invested in, what other areas are being affected. And I think, despite these discussions of the bubble and whether it may pop or not, we are already seeing the very real effects of AI, or AI hype, on jobs and on our economies. We're seeing companies pivoting towards using AI technology, perhaps in place of entry-level graduates, and things like this as well. So I think that creates an interesting colouring, where the hype and the discussion of the bubble are over here, but there are also very real material effects already happening because of it. I agree completely. As I was saying, even if this were to disappear tomorrow in terms of the investment, and even if these major companies were to falter, where we already are - the capability the technology already has - is already transformative. So yes, AGI might be around the corner if we're listening to Sam Altman, or might be many years from now if we're listening to other prophets in this space, but already the capability of the models made available by those big companies can displace jobs - and is doing so. We're seeing this disruption of the jobs market, and it's been happening earlier in some fields than others. We alluded to this even last year, when we were seeing the beginnings of it in jobs related to coding, for example. I mean, that has utterly, utterly changed now. So this is really a deep period of uncertainty for people.
And I always think at this time of year, with the new semester coming up and new students coming in - when those students were filling out their forms for where to go to university and what courses they might pick, the anxiety they must be feeling at the moment must be very intense: should I even study this subject? Are there going to be jobs at the end of it? And for people in jobs in certain areas, it's already very obvious that their future might be precarious too. Yeah, that's an interesting place to come in, because only last week I was listening to the radio and there was a prominent commentator on, taking questions from listeners. One of the questions was a parent asking whether their child, who was planning to study radiography in university, should continue down that course because of possible displacement by AI. And the commentator was quite candid. He said, "actually, I might look at that one again and maybe pivot in a different direction, because that could be one industry that is massively disrupted." There are other ways, too, in which these anxieties are beginning to crystallise and materialise. The AI Oireachtas committee that was started here in the last couple of months heard recently from the AI Advisory Council of Ireland about expected job losses in different sectors that might be impacted. And then, of course, Dublin has a new AI tour guide named Brendan, who will guide you around the city based on your location data and these kinds of inferences. But at the same time, that has sparked criticism from actual historians and tour guides in the city who may be put out of work by this new technology - and questions over whether the tour guide really gets the nuance of Dublin. I believe he called Oscar Wilde the G.O.A.T.
God, yeah, I saw some video footage of that AI tour guide, and it was a little bit insufferable, alright. But yeah, we talked, again in our first series when we were looking at some of the themes in more foundational detail, about comparisons to the Luddites and the saboteurs - previous eras when vast technological change scared people about their jobs, and they began to fight against the change that was coming. And we saw that with the tour guides. They did the equivalent of throwing their shoe into the loom in that case - and successfully, I think: I don't believe Dublin City Council is going forward with uncanny, creepy Brendan and his digital tours. But we're going to see that increasingly. People are rightly worried that their jobs and their job prospects are going to change, and I think we're going to see that in lots of different areas. And it's very difficult for us to advise those students, or to foresee where the biggest layoffs or changes or redundancies - or complete rewritings of types of career - might be. And despite all the conversations about anxieties and potential displacements, these companies are still forging ahead, releasing new models, releasing new technologies. The biggest one, perhaps, since we last spoke has been the recent release of OpenAI's ChatGPT 5. Much hype, much discussion. It's been out now for a little bit of time. What's been the reaction to this, Dónal? Yeah, so this is an interesting one, because the reaction has been fairly negative in a lot of the tech press. One of the things that's happened there is that this is not the only model that's been released recently.
It's just the one that everyone looked to, because of the hype. An interesting strategy some of the other companies have taken is to quietly release iterations on their models - not giving them a new model number, but maybe going from a 3.2 to a 3.3 - so there's iterative improvement happening across the board. All of the models are making advancements, but in OpenAI's case, ChatGPT has now released model 5, and they usually use a change in the number - from 3 to 4, or 4 to 5 - to indicate a huge leap in capability. Sam Altman, who we've already mentioned as a constant master of hype, has been hyping this up. So I think a lot of what's happened with the quite subdued - and in some places negative - reaction is that people are disappointed based on that hype. A lot of the people making those comments in the tech press are power users of these services in the first place, and for them, it isn't a huge sea change. OpenAI has set the bar high for itself, because its own leap, especially from GPT 2 to 3, was so vast in terms of what the model could do. Even from 3 to 4, when some of those "thinking models" were brought in - we talked about that last year - there was a big change in the scope and capability it had. The move from 4 to 5 was not as evident. It's certainly more capable, and there are various improvements in terms of the depth of interaction it can do and its deeper thinking stages, but a lot of the decisions they made around its release have backfired on them. One of the key ones, I think, is that they took away, for paid users, the ability to choose between lots of different models. Their available models were in a drop-down at the top of the window.
And it was a little bit tricky, because often - I found this myself when I sometimes used it - I wasn't quite sure which one to choose, and I sometimes wildly made a guess. So they said: we'll make it easier for you. We'll look at your question and, based on your question, divert it to the appropriate version - either 5, the simple version, which will have some improvements, or a more "thinking" version of 5 that will spend longer answering your question and do some of the reasoning that we mentioned. They hoped, I think, that that would make it intuitive to use and that people would find it a smoother process. Many people were angry, because they had particular workflows, they used a particular model - they had almost a relationship, in many cases, with certain models; people have those kinds of parasocial relations in lots of cases. And they said: oh no, the ChatGPT that knew me and that conversed with me daily has now been lobotomised, has been changed in some way, or isn't as verbose in its responses; it doesn't talk to me, it doesn't seem to have the same personality. So there was that kind of criticism, but there was also criticism from people who just expected that sea change and saw maybe some slight improvements, but nothing terribly discernible. And I think that's really important: we're seeing there one of the effects that the hype machine has when it's not met in the way that's expected. And I think it's also important to say that if you're someone who has been using ChatGPT's free version up till now, you probably will see a bigger change than a lot of the people who are writing in the tech press.
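The routing idea described above - a dispatcher that reads your question and quietly picks which model answers it - can be sketched very roughly in Python. To be clear, OpenAI has not published how its real router decides; the model names and the heuristic below are invented purely for illustration.

```python
# A speculative sketch of query routing: inspect each prompt and send it
# either to a fast model or to a slower "thinking" model. Both model
# names and the routing heuristic are hypothetical.

FAST = "gpt-5-fast"          # hypothetical cheap, quick model
THINKING = "gpt-5-thinking"  # hypothetical slower reasoning model

# Crude signals that a prompt probably needs multi-step reasoning.
REASONING_HINTS = ("prove", "step by step", "plan", "debug", "why")

def route(prompt: str) -> str:
    """Send long or reasoning-flavoured prompts to the thinking model;
    everything else goes to the fast one."""
    lowered = prompt.lower()
    if len(prompt) > 200 or any(hint in lowered for hint in REASONING_HINTS):
        return THINKING
    return FAST

print(route("What's the capital of Ireland?"))                  # gpt-5-fast
print(route("Prove that sqrt(2) is irrational, step by step"))  # gpt-5-thinking
```

Even this toy version shows why users were upset: the choice of model - and therefore the "personality" of the answer - moves out of the user's hands and into an opaque dispatcher.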
There are people like ourselves, who maybe have used a paid version of it at some point or have experience across lots of different models, are maybe up to date in how they work, and have used reasoning models quite a bit. Lots of people who've used the free version of ChatGPT weren't on the higher end of ChatGPT 4 anyway - they weren't seeing its range of capability. Now, even in the free version of ChatGPT, they'll see that with the advent of 5. So there are lots of people who, I think, will in the coming weeks and months find that it has a lot more capability than it used to. But I still think that the expectations everybody had - and that, I have to say, were stoked by OpenAI themselves - of a really transformative change haven't been met. It's even led Sam Altman to admit that GPT 5 is, quote, "way dumber" than previous models. So perhaps, as you mentioned there, the power users brought in by the hype aren't noticing significant changes, but free users are noticing significant differences - a case of raising the floor for accessibility, as opposed to raising the ceiling. But it seems as though not all is incredibly positive for OpenAI at the moment. There seems to be a lot of wind in the sails of other models - Google Gemini in particular seems to be attracting a lot of interest online, and in prediction markets especially. Yeah. So a lot of recent politics and investment has followed this space of prediction markets.
So I'll briefly describe what they are, because they might be less familiar to some listeners. They're essentially a betting market where you can bet on things that are not horse races or the conventional types of sports betting we're used to, but instead on, perhaps, the outcome of an election - or in this particular case, on which of the companies might deliver AGI, or which might have the most powerful model in a particular time period. A lot of people have started looking at these prediction markets as a way to assess, among people who are very plugged into investment and are making bets on these things, where the money might flow and where the valuations might be - because if the same people are rich enough to be making investments and to be making bets on where those go, the two things can be read in parallel. And sometimes a lot of weight is given to the outcomes of different prediction markets. So there are sets of markets based around bets on who will deliver AGI and who will have the most powerful model at the end of this quarter. Up until ChatGPT 5 came out, OpenAI was very firmly in the driving seat - again, probably driven by their hype. There was a lot of discussion from Sam Altman in some of the interviews he gave right before the release, where he talked about being almost afraid of how good it was, and about it being a PhD-level intelligence on every possible topic - these sorts of claims. And then, when it came out and was tested, the prediction market changed very drastically and very quickly, and now puts Google Gemini as the company most likely to have the best model at the end of this quarter. Part of what's going on there, I think, is a reaction against that hype, because Google, with their models, have been doing the quieter version that we mentioned earlier.
They're doing iterative releases. Their models are very good too, but they're not as loudly trumpeting each one as OpenAI is. And I think now there's a belief in the prediction markets that perhaps that approach of quietly making progress is the right one, and the money might now flow in that direction. There was a time when Google was seen to be lagging behind the other players in this race, is that right? Very much so, yeah. There was a period - and we even talked about it in our prior podcast last year - when it seemed like Google was the place where this all started. Google took over DeepMind and the various ventures there that led to transformer models, which led to this explosion of the kinds of AI that we have now. And they did seem surpassed for a long time - even Anthropic seemed well ahead of them. There was a period where they seemed to have kicked something off but then bowed out of it. And they had, as Google has had many times with other software and product launches, a damp squib of a launch with Google Bard, as it was called a year or two ago, before it became Google Gemini. Part of the reason it has that name, I think, was to get away from the not-great launch under its previous name. So Google had been a little bit quieter - maybe they were chastened by that experience and therefore aren't doing the same hyping. I should also mention that I didn't give figures for their valuation earlier, and that's because I can't find them easily: it's much harder to find specific numbers about Google's AI valuation and AI income to make the kind of claim I did about the ratios for the others. So maybe that's also part of why you might be more comfortable investing there - it's not as clear where the line is drawn.
So if the race for valuation, the race for new releases and things like this, is so much of a hype game, does that not lead to the conclusion that OpenAI's press conference for ChatGPT 5 was a bit of a disaster? Maybe so, yeah. And the ultimate outcome - the less-than-enthusiastic reception it got from the tech press - is bolstered even further by some of the smaller disasters that happened during the event itself. We'll share this in the show notes for people who didn't catch it at the time, but some of the graphs that OpenAI used during the press release, and during the discussions they gave, are problematic, to say the least, in terms of their reflection of good practice in data communication. That's a module I actually lecture in DCU - Data Communication - and these are now firmly going into my slides for this year as very terrible examples of some of what's going on in that space. They haven't confirmed this, but a lot of people think that perhaps what happened was that they used the model itself to generate some of the graphs showing its performance versus its previous iteration - comparing it to the model o3 that was in widespread use before, and showing how much better it is than o3. The graph they used to show that had vertical bars showing the performance, and the bars had no relationship to the actual numbers whatsoever. It wasn't that much of a giant leap from o3, but the graph, if you just quickly glanced at it, would make you think so. And it's hard to imagine that a very large professional company would put those graphs out to the public at a really important release point.
But maybe that itself is indicative of the internal culture they have - how much they've imbibed this hype themselves - that maybe they're not even critically looking at their own communications around it and taking a breath to see if everything is as it should be before they put it out to the public. It just seems like a race that's accelerating all the time. And another area in which we're seeing a lot of hype, to use the word again, but also a lot of development, is around what you might term "third devices": essentially wearable tech, the kind of next piece of technology that's going to be informed by AI beyond laptops and computers, beyond smartphones. These are things like Meta's smart glasses. So Mark Zuckerberg wants us all to be wearing a pair of Ray-Bans that have a camera and a microphone and perhaps eventually a kind of computer or desktop interface on one of the lenses, something like that. Why such a push for this kind of technology, do you think, by these companies? Again, I suspect a large part of it is that there probably exists a theorised space for some new kind of product - the new iPhone, the third device, as you say - some new thing that's going to fit there. And people want to be first to that; they want to be the Apple of that, to seize that market. But also, if you're selling devices, you're selling them for a lot more money than you're selling subscriptions, and subscriptions can be cancelled. And indeed, lots of subscriptions to OpenAI's services seem to have been cancelled on foot of that not-great launch they had with version 5 there. Whereas you can sell whatever the third device is for a lot of money. So I suspect lots of the companies will be trying to have some offering in the space. We've seen a few attempts at this in the last year.
So there were small devices, like a clip-on device you could wear that was sort of listening to your conversations and unhelpfully interrupting with AI contributions, or, as you say, the smart glasses that show you real-time information. And there have been products like AI-based, essentially cheating, services for when you're doing online interviews - things that sit on your screen as an extra layer and help you with questions. So there are a few software and some early hardware things in that space, but there's nothing at the level of the iPhone, nothing that's really a new device that we would all use. Many of the companies are investing in that and are bringing in personnel who they hope are going to deliver it for them, because that's a major revenue stream if it happens. It kind of points towards the other areas of real estate in our lives that these companies want to occupy. It seems as though another part of that is the topic of AI agents? Yes. So these are services that will plug into your life. They might read your emails, read your calendar, give you a schedule for the day - but not without hallucinations and the other kinds of teething problems that a lot of these technologies seem to have as well. Yeah, I mean, those two things are probably linked, because I suspect that the third device, such as it will be, is going to be one that is doing agentic things. We've talked about agency with AI several times previously. That's the idea that you can have it do stuff on your behalf. It's going to manage your calendar. It's going to be your assistant in a much fuller way than we currently think of assistants like Siri or Alexa. And you can hand off entire functions. It might reply to emails, it might manage your schedule; it could do a whole load of things for you if it was operating in that full agentic capacity. But that's, yeah, that's been slow.
And a lot of the progress that's been made on models has been to try and address this. So one of the key issues, which I'm not sure we mentioned previously, is that while the models might get better and better at reasoning, at coming up with solutions, at combining and synthesising answers from lots of different sources of data, they're less good at doing sustained long-term tasks. A lot of the recent training has been focused on building models that are able to do that. Because if you want an agent type of model - something that's going to run alongside your life for weeks and months and years at a time and handle things for you, be your assistant - it needs to be capable of having that level of autonomy, to run lots and lots of tasks, not to be prompted at each point as we currently do. The large language models that we're used to using are incredible in what they're able to do. There's a huge amount they can do in terms of the reasoning they have at the moment, but they require this ongoing relationship with us. We need to read and see what they've done; we need to prompt them to take the next step, and the next, and the next. In that kind of interaction they're very good. But letting them loose as autonomous agents to manage things for us - that's much less the case. And I think there's a lot of focus on developing their capacity in that space now. So limits of data, limits of synthetic data - it brings to mind the kind of hockey-stick graph where there was exponential growth for a while and now it's levelling off, where we've seen diminishing returns. Is that what's happening? Yeah. So - and we're connecting well with some of our prior podcasts here - we talked in the past about the tensions around data: the input data used to train these models in the first place.
There were all the issues around ethics, legality, copyright and intellectual property concerns with that. Those still exist; those are unresolved. In many cases there have been settlements on things we mentioned previously, where large publishers have had payouts, but that hasn't really settled the question of whether some of those things were morally right, or whether they were legally right in lots of contexts. But that initial phase - can we gather all of the data of humanity and use that as an input? - we sort of pushed through, and we effectively did. Then we moved into something we talked about previously: synthetic data. Can we use these models to generate new kinds of data that represent structured connections between stuff that exists, so that it can be used to train the next generation? We've moved through that too, and I think we're now seeing diminishing returns on using that sort of synthetic data. Where advancements are being made now, they seem to be made at a later stage again. So the models are trained on this data and synthetic data, but then, post-training, they're put through more rigorous processes built around challenges. They get task-based or challenge-based learning, where they try to refine the model on things like doing long-term tasks - repeatedly, billions and billions of times, iterating through complex tasks and practising different sorts of interactions. It's a newer form of training than the data-based training we looked at previously. But again, the progress seems much slower. So, exactly as you say, that classic exponential curve - the graph that just gets steeper and steeper and steeper, that previous predictions have been based on.
So again, going back to our episode on AI-2027 and the doom-laden projections: it very much based those predictions on how that graph looked at the time, the one you're talking about, one that gets steeper and steeper, an exponential graph. But it is seeming now, based on what the models are capable of, that perhaps there are diminishing returns, and perhaps that graph might be upward-pointing but not so steeply upward-pointing. You'd have to wonder too if these diminishing returns could be the eventual pin that pops the bubble. Well, this is it, yeah. The extent to which they keep everybody waiting on those returns that all of that investment money is hoping for is the big question. If there is a slowdown, if that hype is not lived up to - that's when bubbles pop. Apple brought out a research paper too that found no evidence of formal reasoning in language models. I think that's a good reminder that these models aren't thinking; it's more sophisticated autocomplete, as I said in the first episode of the first season. But challenges still remain for the models and how they are being promoted, especially amongst powerful figures in the world. Not too long ago we saw calls by President Trump to make - it's funny to say - to make AI models "non-woke". Yeah, I mean, it's funny and terrifying at the same time, but this is coming out of something you talked about very clearly in our last episode: the degree to which people have moved away from Googling things to consulting AI, and often ChatGPT specifically.
And so, as people increasingly become OK with that being how they receive their information, as it becomes more ubiquitous - which we talked about the last day, that we would use these kinds of solutions - then what those solutions are telling people, what that tool might say when they ask a question about society or politics or "woke"-adjacent things, becomes very important. That first seemed to happen very radically within the space of Twitter, where xAI, Elon Musk's company, produced their model Grok, which they integrated into Twitter, and which we talked about before in relation to problematic disinformation issues. People were asking it questions. Because you can tag it into your conversation while you're tweeting, often somebody would make a particular comment about a social issue and someone might say, "is this true, Grok?", or ask Grok, "can you support what has been said?" And Grok was tending to have a loosely liberal, left-wing view - as it would be seen in the US - on lots of the facts it brought to conversations. This led to a lot of tension and a very strong lobotomisation of Grok by Elon Musk. And this in turn, I think, brought it to Trump's attention. So we're seeing that the ubiquity we talked about in our last episode is causing a lot of ideology to now enter this space. And Grok referring to itself as "MechaHitler", of all things, too? Horrifying, yeah. Is it technically feasible for an AI to align with a particular ideology without breaking some of its own underlying architecture? Yeah. So this gets to the idea of the separation between the data that represents reality and then your view of that data - how your ideology colours how you see it. And those are two quite disconnected things.
And I mean, there's a great quote I often use, from Stephen Colbert in 2006: "Reality has a well-known liberal bias." And that's great because it points to the fact that, you know, our window on things might be different, but in the actual reality of what's going on, generally the world is progressive. Generally we're seeing progress and fairness increase; thankfully, within our age, democracy and values have trended in that direction. But there's been reaction against that, and the way in which lots of people see the world - the ideology through which they look at that reality - is very different. So what's happening is that some of these models are serving what they learned from the data they imbibed from the world. And that includes perspectives seen as liberal by these kinds of companies. So that being reflected back in the models shouldn't be terribly surprising, I suppose. The way in which they've sought to address this - Trump in the case of the Truth Social platform he has, or Musk in the case of Twitter - is to add a stage of rules after the training. So you train the model, you build it, and then you give it guidelines. The same kind of thing that might be used as guardrails, or as a constitution in the case of Anthropic - we've mentioned that many times before - but here the guidelines say, "take this particular viewpoint". So process the question, come up with the answer, but then colour that answer to match a particular ideology. And this was very clear in the interventions that were made with Grok. As I mentioned, lots of users there were seeing the "wrong" sort of answer from Grok, from Musk's point of view.
So he put in place a series of rules that Grok had to operate within, which involved consulting his own prior tweets and checking if he agreed with what it was about to say. And you could see this very clearly, because it shows its steps when it reasons - and one of the steps it was taking was checking what Elon Musk has to say about the issue, which is insane. But those kinds of rules are being pinned on at the end. You're building the model on the data of the world; you're training it on these kinds of relationships between data; it's able to achieve some sort of consensus view, some sort of answer to your question. And then it's being asked at the very last stage to pipe that answer through an ideology. That's why it seems so crude; that's why it seems not a good fit. And as we mentioned, Trump is now trying to build his own and do the same thing. It remains to be seen, because it's only in development at the moment, but he - or Truth Social - is partnering with the company Perplexity, which we mentioned earlier, which has a valuation at the moment of 120 times its projected sales. Maybe part of what it's doing in this partnership with Trump is trying to increase those sales. It seems to run totally counter to the spirit of these models that they are then tuned to be highly deterministic and in line with an ideology, let's say, and for political or tech figures to have their finger on the scales, so to speak. And we probably can't ignore the perhaps growing confluence of these large tech companies and federal contracts as well - they're aligning closer all the while.
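The "rules pinned on at the end" pattern Dónal describes can be sketched in a few lines of code. Everything below is hypothetical - `base_answer()` stands in for a trained model, and the rule layer is a crude post-hoc rewrite, not any real vendor's implementation - but it illustrates why the result feels bolted on: the ideology is applied after the model has already produced its answer, rather than being part of the training itself.

```python
# Hypothetical sketch of the "rules pinned on at the end" pattern.
# base_answer() stands in for a trained model; the rule layer is a
# crude post-hoc rewrite applied after the model has already answered.

def base_answer(question: str) -> str:
    """Stand-in for the trained model: answers from its training data."""
    return f"Consensus view from the training data on: {question}"

# Post-training guidelines, applied AFTER the answer is produced.
IDEOLOGY_RULES = [
    lambda reply: reply + " (reframed to match the operator's viewpoint)",
]

def answer(question: str) -> str:
    reply = base_answer(question)     # 1. model reasons from its data
    for rule in IDEOLOGY_RULES:       # 2. ideology layer rewrites the output
        reply = rule(reply)
    return reply

print(answer("a contested social question"))
```

Because the rule layer only sees the finished answer, it can contradict everything the model "believes" from its data - which is exactly the crude mismatch users noticed with Grok.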
I know that the Attorney General of Missouri recently threatened Google, Microsoft, OpenAI and Meta, among others, with investigation because someone had asked these models to rank the past five US presidents in terms of anti-Semitism. Microsoft's Copilot refused to answer, and the others ranked Trump last. In response, the Attorney General claimed they were providing misleading answers to a straightforward historical question. Yeah. But I think the problem really is that people do think that's a straightforward historical question - they don't see that these things are piped through their own window on reality, that ideology. And in the increasingly polarised world we live in - especially heightened in the US as a context versus here, thankfully, at the moment - it's not surprising that people are viewing it in these polarised ways, expecting it to answer the way they think, expecting it to reflect their view of the world. It's clear that at the moment the models are not doing a good job of that, and the solution being tried to make them do it is itself problematic; it's not leading to a very smooth experience for the people using it. So it's an interesting situation, because it points us again to the fact that, within the economic and political realities these large monopolistic companies have to operate in at the moment, they're going to have to keep these people happy. Musk, until he left the prominent role he had in the US government, had outsized power, and a lot of people needed to align with his view. Trump is still there and is still requiring these companies to come and pay homage to him. So there's a lot of pressure on them to adjust their models in these various ways.
There's a lot of pressure, again, I think, for a lack of regulation - for solutions that will allow these companies to have "fair use", so to speak, of all the data they took in, without consequence. There's a really problematic space in which all of this is operating, which is a monopolistic one, where a small number of companies - we've mentioned this several times - are vastly powerful, are increasing in power, and are aligning that power with the hegemonic power of the US and the current US president. That's not good for human society. It's certainly not good for American society, and it's increasingly pressurising us in Europe too. It is, and it seems to be also feeding that hype machine and feeding that sense of there being a bubble around this. I think that covers most of the topics we wanted to cover today, Dónal. But the final question I have for you is this: with new models like GPT-5, and other ones being released at the moment, professing to have a "PhD level of intelligence", would they pass their PhD viva on current evidence? Yeah, I don't think they... no, no, they wouldn't. I saw lots of commentary, of course, on that particular claim within spaces on social media, and I actually had a discussion myself on Bluesky about this. I think part of what's happening is that if you're not an expert in a particular field and you have a conversation with an AI about that field, it's going to bring together lots of ideas you weren't aware of. It's going to be able to synthesise stuff quite well. And it's going to appear to you like a PhD-level intelligence in that field, because it's bringing you lots of connections that seem to be deep insights that you didn't have before, because of your lack of expertise.
But if you are an expert in a particular field and you talk to it, you're going to find a lot of flaws very quickly. It isn't really PhD-level expertise in every field, but it does have the capability to synthesise information in valuable ways. There's no question that these models are useful; there's lots of utility in them. They're certainly able to help bring things together, help find patterns we weren't able to find ourselves. But the idea that they're so close to this "AGI" concept, that they're at that kind of level of intelligence - it's not borne out either by the experience of people who are PhD-level experts testing them, or indeed by those tests we mentioned previously. We would expect, if that were the case, that the benchmarks we mentioned a few episodes ago - Humanity's Last Exam and the others - would see a huge rise. We haven't. These new models have not shot up in their performance there either. So again, we're saying this constantly... our refrain in this podcast must always be that we are critical in our view of these technologies. This technology is transformative. Bubble or not, if the bubble pops tomorrow, it will still have wrought transformation on society. That is going to happen. We're already going to see displaced jobs; we're going to see vastly changed areas of work. For lots of us, this will touch our lives regardless of whether AGI happens or not. But we need to, as it continues to unfold, be a little bit careful about reading into the hype too much, and about uncritically accepting what is being said - especially by people whose vested interest is in making us think that AGI is just about to happen... that superintelligence - that "unless" in that headline from The Economist - is always just on the cusp of happening. And if we just hang on, we'll see it.
We need to be a little bit more careful to see exactly what the capabilities are, exactly how it might touch our lives. But we can rest assured that, even if this is something like the dot-com bubble that passes away and seems to pop, it will still have changed things. It will still have built a lot of infrastructure, a lot of relationships, and a lot of new ways in which we connect with one another and do our work. And that's what's going to last beyond this. That's a good place to leave it, Dónal. Thanks very much. Thank you. In our next episode, we hope to incorporate questions from you about some of the topics we've discussed on the podcast, or some you found interesting in your own reading about AI. So get in touch at hello@enoughaboutai.com - and if you've liked what you heard, leave us a rating and subscribe to our podcast online!