Enough About AI
A podcast that brings you enough about the key tech topic of our time for you to feel a bit more confident and informed. Dónal Mulligan, a media and technology lecturer, and Ciarán O'Connor, a disinformation expert, help you explore and understand how AI is affecting our lives.
Hype, Hazards, Harms, and Help
Dónal and Ciarán return for a new year of new updates and new concerns in the world of Generative Artificial Intelligence. Again for 2026, the podcast will provide roughly quarterly updates on major themes and topics and this Spring episode covers recent news and connects with older themes and conversations.
Topics in this episode
- "BUBBLEWATCH" - new precarious developments in the massive valuations of a small number of tech companies at the forefront of AI development, and some concerns about stock prices and where future income will come from to cover the escalating costs.
- Grok (xAI) and the fallout from the use of the tool in generating non-consensual sexual imagery - and its wider impact in raising awareness about unregulated uses.
- Agentic AI and the rise of OpenClaw - a framework for providing semi-autonomous AI task management, which has seen swift recent uptake and prompted security concerns.
- Related discussion of the Moltbook story, and perceptions about insight into what AI is thinking.
- AI hype and recent viral articles, and their effect in driving worried people to purchase subscriptions.
- Safer Internet Day and how people, especially parents, can take a more critical look at safe and appropriate use of AI tools.
Resources & Links
- Safer Internet Day resources on AI for teachers and parents
- Matt Shumer's viral article, Something Big is Coming, and Ed Zitron's very critical podcast rebuttal, which also contains an annotated version of the original.
- Coverage of Meta's patent on AI impersonation of dead users, and Meta's troubling memo indicating their desire to use political turmoil to cover the release of a controversial feature
- An example of the Moltbook hype, from the Guardian
- Coverage of AI as a threat to democracy in Ireland
You can get in touch with us - hello@enoughaboutai.com - where we'd love to hear your questions, comments or suggestions!
Dónal Mulligan:I'm Dónal Mulligan.
Ciarán O'Connor:And I'm Ciarán O'Connor.
Dónal Mulligan:And welcome to a third season of Enough About AI. It's a new year, there are new developments and new concerns for us to talk about. Again for 2026, we'll aim to provide you with a broad quarterly catch-up on the themes and the topics in this unfolding world of generative artificial intelligence. And we'll try to provide some accessible overviews of what's going on, so hopefully it leaves you a little bit more confident that you know Enough About AI.
Ciarán O'Connor:If you're a new listener to Enough About AI, a quick recap to bring you up to speed. In season one, we introduced some broad explainers on the history, development, key themes and concepts around AI. In season two, we gave you deeper explanations of some of the issues around AI development, like alignment and persuasion. And now in season three, we'll continue with a slate of recent AI news as we march forward into 2026. In this episode, we're going to be talking about how AI is accelerating in a number of different ways: the use of Gen AI in disturbing and ever more unavoidable ways through the Grok controversy, and increased investment amongst technology companies fueling further bubble fears. We'll also be talking about agentic AI, and we'll pause towards the end for a moment of reflection to talk a little bit about Safer Internet Day. So, Dónal, we're back here in the studio. It's 2026 and things are accelerating. Can we go to the wall, John King style, for a key race alert to find out what's the latest in Bubble Watch?
Dónal Mulligan:Well, the bubble is still there to be watched. The bubble is bigger than it was even the last time we spoke. And although it hasn't popped, I think we're ever more in the space where people are watching warily, uncertain what the future holds. For newer listeners, the bubble we're talking about here is, of course, the AI bubble around the valuation of some key companies leading the way in the development of AI. In particular, we're talking about Google and their product Gemini; OpenAI and ChatGPT, along with its partnership with Microsoft; and a company called Anthropic, which sometimes gets left out of discussions of those big two, and their product Claude. Recently we've seen share price drops at both Amazon and Microsoft. Very interestingly, in the case of Microsoft, which had a 10 or 12% drop there recently, it followed a point when there were shareholder disclosures and earnings reports, and the shareholders are angry with Microsoft about the scale of investment Microsoft is putting into OpenAI, it seems. This is something we talked about a little when we first mentioned these bubbles: there's a hype about investing, but there will come a point when people need to see a return on that investment. What seems to be happening for Microsoft, and for Amazon, who are partnered with Anthropic and experiencing some of the same things, is that shareholders are worried about the huge losses happening quarterly. We talked about the scale of those in a previous episode, and it's still in the region where over 10 billion dollars is being lost per quarter by Microsoft in their investment into OpenAI. So shareholders of Microsoft are saying, well, where are the income streams that justify this? Where are the future earnings going to come from that will steer us in the right direction? And that leaves things more precarious, perhaps, than before.
Ciarán O'Connor:It feels as though there's also maybe an issue of circularity involved in this. Late last year, Nvidia announced plans to put $100 billion into OpenAI. And then the semiconductor company AMD issued warrants for OpenAI to purchase AMD shares, and in return, OpenAI has promised to purchase six gigawatts of GPUs from AMD. And this is just one of multiple back-and-forth deals between OpenAI and other companies. The numbers that OpenAI, for example, are predicting in terms of revenue are mind-boggling, even against what are already massive predicted or hoped-for revenues in AI. On a recent podcast, Sam Altman was asked: how can a company predicting only $20 billion in revenue spend like a trillion-dollar company?
Dónal Mulligan:I mean, we mentioned this before. This kind of circularity of money is really interesting and is usually one of the key things that marks a bubble: the companies that are investing are doing so with a promise that the money will then be spent within them, and so this raises the on-book valuation of both companies involved. And NVIDIA, who are a major chipmaker in this space, have been doing that with several companies. Anthropic, that company we mentioned that has Claude as its principal product, have been going through rounds of funding where they look for more investment; I think they're now in round G. Their latest round took in another $30 billion that they can spend. But that $30 billion included some previously promised money, and quite a few questions have emerged in the last few weeks about the degree to which this represents real new investment, with the associated confidence that would imply, versus existing or prior commitments that are just being mentioned again. At the same time, there are greater questions around Anthropic's streams of revenue later, or justification for what they're spending, at a time when Anthropic is already predicting, similarly to Sam Altman at OpenAI, massive costs down the line for training their next model. So where the capital will come from to pay for training the next model, and where the investment will come from to build the data centers involved, these are open questions. And yeah, these all lead us to this very weird, bubbly thing we're watching.
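To make the circularity mechanism concrete, here is a toy sketch with made-up company names and numbers. It is a rough illustration of the deal shape described above, not the actual terms of any Nvidia, AMD, or OpenAI agreement: an investor funds a lab, the lab commits to spend the money back on the investor's products, and both sides book growth even though no outside customer cash has arrived.

```python
# Toy illustration (hypothetical names, made-up numbers) of a circular deal.

investor = {"name": "ChipCo", "booked_revenue": 0.0, "invested": 0.0}
lab = {"name": "LabCo", "cash": 0.0, "committed_spend": 0.0}


def circular_deal(amount: float) -> None:
    investor["invested"] += amount        # ChipCo wires the investment...
    lab["cash"] += amount
    lab["committed_spend"] += amount      # ...LabCo promises to spend it back...
    investor["booked_revenue"] += amount  # ...and ChipCo books it as revenue.


circular_deal(100e9)  # the reported $100bn scale of such deals

net_external_cash = investor["booked_revenue"] - investor["invested"]
print(f"ChipCo booked revenue: ${investor['booked_revenue'] / 1e9:.0f}bn")
print(f"Net new money entering the loop from outside: ${net_external_cash / 1e9:.0f}bn")
```

Run it and the booked revenue is $100bn while the net new money from outside the loop is zero, which is exactly why this pattern is read as a bubble signal rather than income.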
Ciarán O'Connor:And if the revenue fails to materialize, where might we end up?
Dónal Mulligan:Well, yeah, this is a really serious question, and ever more so when you look at the degree to which the United States economy in particular is directly dependent on the investment associated with AI development, if that were to falter. If there were a first stumble by someone like Anthropic in this space, it could take down not just those companies directly, but huge numbers of associated banks and investment vehicles of different sorts, including lots of people's pensions, which are very much tied to the value of these companies now, especially the 401(k)-type pensions that people in the US have. Alongside this there are companies like Oracle, who have been in the news for other things, like their CEO and his son trying to buy Warner at the moment, and whose tendrils are in lots of other places too. But Oracle are very directly involved in the hardware of data centers: the servers, the chips, and the data center infrastructure required to build out these models. So again, they're directly connected, and their money and their investment is connected to the success and continuing flourishing of all this. Much as when we talked about this last time, you have this remarkably precarious situation in which everything is predicated on the idea that there will always come more investment, and hopefully eventually some income stream that will make this all worthwhile. And we are yet to see the income stream.
Ciarán O'Connor:Yeah, and it brings to mind the phrase "too big to fail" as these companies become more entwined in the technological infrastructure of the US and of other countries as well. Okay, I think we can park Bubble Watch for the moment; it hasn't popped just yet. Another major discussion and debate in Ireland and multiple countries around the world since our last episode was Grok. Grok is the integrated AI chatbot of X, formerly known as Twitter. It was in the news roughly from mid-2025 onwards, really peaking in December 2025, for allowing and enabling users to create non-consensual imagery in which people's clothes were removed, based on prompts most often posted in public on the X platform. The majority of these prompts were aimed at women and girls, and minors also allegedly featured. And by December, as I mentioned, it became a kind of popular sport on X for users to request that women be put into bikinis, these kinds of things. There's been a massive backlash to this, and it seems it's really brought the issue of AI, and the use or misuse of these platforms, into public consciousness for a lot of people.
Dónal Mulligan:I think so. And I'm in no way glad that this happened, because it's a pretty monstrous and horrific thing, especially for the people involved. But at the meta-level beyond that, I'm glad that it has at least raised awareness amongst people who perhaps were not paying attention to the degree of harm that can come from these kinds of unregulated systems. We've talked several times about this massive struggle between the idea that we must be constantly innovating and the associated notion that this means we shouldn't regulate or in any way curtail what these companies are doing, or how these tools get used, or the space around them. And we've seen very much with X as a platform, and indeed with Grok, the associated tool, and xAI, the company or division of X that's producing it, that Elon Musk has taken the stance that free speech must be utterly uninhibited and people should use these tools as they wish. Now that we're seeing the truly horrific and dark side of that, I think it has woken people up in ways that had not happened before. We've seen it with the Irish government, who have been sidestepping some of these issues, I'm sorry to say. A few months ago we certainly saw some pretty naive discussion from ministers when this scandal first broke, including what was compared at the time to the equivalent of the "guns don't kill people, people kill people" framing, where ministers were saying, well, the existence of the tool is not the problem, it's the people using the tool. And that belies the fact that the tool makes some remarkably problematic things very easy. So that does seem to be changing, and as you say, I think that's about awareness amongst parents and groups, and indeed response now from government.
Ciarán O'Connor:I think that's a good point. In broader terms, what was slightly misconstrued were the opinions that the platform was perhaps misused or used illegitimately, yet these platforms, these tools, were being used as designed and as rolled out. And if you followed the news, especially in early January, it was quite unavoidable. One piece of analysis from the Center for Countering Digital Hate estimated that Grok generated eleven million sexualized images in the span of three days. And thankfully, predictably, a backlash followed. Interestingly enough, I was asked to appear before an Oireachtas committee in mid-January. The invitations for this committee went out before the Christmas break, and the invitation was to discuss disinformation and broader digital topics like that. But by the time the committee was held, on the 16th of January, this was the only topic on the minds of the TDs and Senators who were there on the day. The Gardaí were there, two detective sergeants, and they were questioned at length about the investigations. They confirmed at the time that they were investigating 200 individual reports of non-consensual imagery posted via the Grok tool. More recently, we've seen the Data Protection Commission in Ireland open an investigation into X over potentially harmful and non-consensual intimate or sexualized images, and interestingly, they are investigating whether there was a breach of GDPR legislation. So along with the DSA, the Digital Services Act, these are the kind of landmark pieces of legislation through which, it is hoped, platform accountability will be enforced. But I think what's interesting here is that legislation like the GDPR or the DSA requires risk or impact assessments to be carried out by platforms when they're rolling out any new feature. Perhaps in this instance they were carried out and determined to be fine, or perhaps they were not carried out at all. Either way, both are quite concerning, because we've seen the downstream impact of what happened online and how it affected so many people.
Dónal Mulligan:I agree completely. We've mentioned before that things like the DSA have requirements that these companies make disclosures, allow researcher access, and publish regular reports, and X, Elon Musk's company, seemingly very directly at his direction, have not been meeting their requirements in that space. They have not been allowing that sort of access; they have not been reporting in the ways they're supposed to. So it's not clear whether they carried out some internal risk assessment, but they certainly are not doing so within the frameworks intended by European regulation in this space. And to say that they are good-faith actors, behaving the way we would expect people with this reach and with tools of this power to behave, I don't think that's reasonable anymore. I think it's pretty clear that Musk in particular is someone who really wants an utterly unregulated space here, and he didn't seem fazed at all. In the reporting about this unfolding scandal, which was a huge story in lots of different countries, and indeed in the US as well, he played it down, he joked about it, he seemed to find it a trivial matter. So it's really interesting that this seems to have finally been enough of a crisis to put the issue on a lot of people's radar. And sadly, we said this was very likely to happen when we talked about this previously: it sometimes takes a really big, horrible scandal to produce something we can then respond to. We're pretty terrible at regulating in advance, even though we can theorize the harms, and we knew something like this could be coming. And I have a growing worry that the same thing is true in the space of AI and democracy, the thing you were speaking to the Oireachtas committee about, because we haven't yet seen the horrible event that could occur there. We're seeing erosion of democracy, certainly, but not yet the really profound problems arising in a particular country or a particular setting, and we're still not putting out the regulation to actively stop that. Again, are we going to wait until we really see governments crumble, or other massive democratic damage done, before we start to try and regulate? So yeah, this remains a concern. I'm glad to see it's finally getting more attention because of this specific issue, but my concern remains.
Ciarán O'Connor:Yeah, in response to the criticism online, X announced that these prompts, these tools, would only be available to verified X users on the platform, a kind of piecemeal response. Investigations have also followed in other countries; I think France is another that has launched one. And interestingly, it also falls into the broader geopolitical tensions between the US, with its free-speech-absolutist stance, and the EU, characterized by the US as censorious and trying to stifle free speech, versus the regulation-fronted approach that the EU countries would argue for. But moving from one questionably used and possibly unregulated form of AI to another, to talk about agentic AI. You were telling me a little about OpenClaw, and I'm really a noob when it comes to this, so tell me all about OpenClaw, Dónal.
Dónal Mulligan:were telling me a little about OpenClaw and and I'm really a a noob now when it comes to this, so tell me all about Open Claw, Dónal. Yeah, this is again one of those things that if it hasn't already started to appear in your life, will uh certainly begin to do so, I suspect, for many people and in many working conditions over the coming months. We've previously talked about agentic AI, and you can certainly go back and have a listen to our prior episodes where we've mentioned that. But uh, to be very quick about it, it is essentially the idea that we would use AI in a slightly different manner to the kind of chatbot interface that we traditionally use where we put in a prompt and we get a response to that prompt, and we might have a conversational interaction back and forth, but at each point we are responding. So we are taking an active role in writing the next prompt and then the next and the next. Agentic AI as a kind of workflow is a little bit different to that, where much broader goals are set, and the actual uh writing of those prompts continually happens in the background. And so we give over to a particular uh framework or system or model uh the ability to act for us in a set of places we want. So we might allow it to have access to our Slack or our WhatsApp or our chat or our email or indeed our bank details or indeed our logins for our work, and we might then allow the uh the framework, uh OpenClaw is one of those, and the one that recently became famous for reasons we'll talk about in a few minutes. We might allow that agentic framework to then use those tools on our behalf by sending prompts to various models. In theory, you can do this by having those prompts send uh instructions and receive instructions from a local AI model on your machine, but you would need a pretty high spec machine for that. And so in practice, most people are not doing that. They're using something like OpenClaw or these frameworks to connect to models like Claude or uh ChatGPT and to uh, on the higher end of those models, have it make decisions, send something back, and then proceed from there. One of the things that's happening then, of course, is that these uh models like OpenClaw are often over the course of a day sending hundreds or thousands of requests to something like ChatGPT or Cloud. And each of those things cost money. So you're doing that in terms of tokens. We've mentioned those before as well. Tokens are those little pieces of language that you're sending each time. And so you're paying by the token, and there's been quite a lot of coverage in the last week or so as this particular framework called OpenClaw has has become a viral sensation in certain uh workflows and certain areas of work for people. Uh, people are being landed with their first bills. And some of them are quite surprised by the many hundreds of dollars that it has accrued for them in the background. Really? Yeah. So one of the ways in which uh this is happening is that, again, depending on the degree of uh agentic uh kind of space you allow, how much uh agency do you give to OpenClaw to do stuff for you? It may need to run all through the night and day doing stuff on your behalf. And each of those actions it's taking requires one or several uh prompts to be sent to a system like uh OpenAI or to Anthropic for uh direction and for for kind of checking uh what it should do next or interpreting maybe images it's seeing on a screen or whatever. So there's this constant back and forth in the background. 
So that can seem really seamless to the person using it. They just set it up and say, manage my calendar today, or do my work, or write this paper for me, or whatever it is. And it does that quietly in the background. But of course, it has accrued a bill for all of those tokens it has sent. And I think that's one of the first little surprises this has caused for people taking it up for the first time.
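To make the token-billing point concrete, here is a minimal sketch of the kind of loop an agentic framework runs in the background. The call_model function and the per-token prices are hypothetical stand-ins, not OpenClaw's or any provider's actual API; the point is simply that every background step is a billed round trip, and the growing context makes later steps dearer.

```python
# Minimal sketch of why agentic frameworks accrue token bills.
# `call_model` and the per-token prices are hypothetical stand-ins
# for a paid hosted model API, for illustration only.

PRICE_PER_1K_INPUT = 0.01   # hypothetical dollars per 1,000 input tokens
PRICE_PER_1K_OUTPUT = 0.03  # hypothetical dollars per 1,000 output tokens


def call_model(prompt: str) -> tuple[str, int, int]:
    """Hypothetical API call returning (reply, input_tokens, output_tokens)."""
    reply = "NEXT_ACTION: check calendar"  # a real model would decide here
    return reply, len(prompt.split()), len(reply.split())


def run_agent(goal: str, max_steps: int = 500) -> float:
    """Keep prompting the model in the background until the task is done.

    Unlike a chat session, no human writes the next prompt: the agent
    does, potentially thousands of times a day, and every round trip
    is billed by the token.
    """
    context = goal
    cost = 0.0
    for _ in range(max_steps):
        reply, tokens_in, tokens_out = call_model(context)
        cost += tokens_in / 1000 * PRICE_PER_1K_INPUT
        cost += tokens_out / 1000 * PRICE_PER_1K_OUTPUT
        if "DONE" in reply:
            break
        context += "\n" + reply  # context grows, so later calls cost more
    return cost


if __name__ == "__main__":
    total = run_agent("Manage my calendar today")
    print(f"Estimated spend for one background task: ${total:.2f}")
```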
Ciarán O'Connor:To play devil's advocate for a moment: the pitch these developers may make is that access to your emails, your calendar, your Slack, your bank account, all these other forms of personal or sensitive information, is there to create a more intuitive agent working on your behalf, a virtual assistant powered by AI. Is this not the trade-off, along with the cost, that comes with a highly technical and highly sophisticated agent that will run your life for you? Are there no risks involved?
Dónal Mulligan:Well, yeah, that's why people are using it. It's pretty clear that the people who are really evangelical about this, and I know some of them directly, people in my life, not in my immediate friend circle, but certainly acquaintances, have really drunk the Kool-Aid on the utility of this stuff. And what you say is, of course, the case: if you have something capably running in the background like this, a framework like OpenClaw that can connect to all of these various models, which might require you to have a subscription to OpenAI and a subscription to Claude and a subscription to Gemini, of course, so that it can use different models for different things on your behalf constantly, then it can do lots of admin work for you. It can manage your calendars and do your coding and do these various things. And it's the promise of that, I think, that is leaking into this area where more and more people want to try it and set it up. I've spoken with people directly, and I've read lots of reporting on people who are hearing anecdotally that this is really useful, especially in coding spaces, where, again, a lot of these things start. And people want to say, well, maybe that will work for my workplace or my workflow over the day. Two things come up there. One is that if you're not sure what you're doing, you can really quickly spend a lot of money. It can be doing stuff for you, and maybe that's useful, but whether that offsets the cost it's generating in tokens is one question. The second, really big, really problematic one is that, of course, the more stuff it can do, the more access it requires: to your systems, to your passwords, to your banking, to your secure login at work, to your messaging systems. Lots of people are happy to have this sending emails or Slack messages or WhatsApp messages, or handling their intimate contacts over iMessage, or whatever it might be, on their behalf. And that's certainly a decision you might want to take, but it comes with a lot of big, deep security issues. So we're starting to see workplaces ban it. Notably, Meta, for example, has banned OpenClaw amongst its employees. And lots of government agencies and states, at the highest levels of different departments, are saying this absolutely cannot be a part of your work because of the potential security issues. We have already seen clear evidence that there are major security faults. Part of that is coming out of something called Moltbook, which you may have just begun to hear about in the last few weeks too.
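As a rough illustration of that access trade-off, here is a minimal sketch of capability scoping for an agent. The AgentConfig class and the permission names are hypothetical, not OpenClaw's actual configuration; the point is that every grant that makes the agent more useful is also something an attacker who hijacks it inherits.

```python
# Hypothetical sketch of least-privilege capability grants for an
# agent framework; the class and permission names are illustrative,
# not any real tool's API.

from dataclasses import dataclass, field


@dataclass
class AgentConfig:
    allowed: set[str] = field(default_factory=set)  # capabilities granted

    def grant(self, capability: str) -> None:
        self.allowed.add(capability)

    def check(self, capability: str) -> bool:
        return capability in self.allowed


# A cautious setup: read-only calendar access and nothing else.
config = AgentConfig()
config.grant("calendar:read")

# The convenience being pitched requires far broader grants, each of
# which widens the blast radius if the agent is compromised:
# config.grant("email:send")
# config.grant("slack:write")
# config.grant("bank:transfer")

for cap in ("calendar:read", "email:send", "bank:transfer"):
    print(cap, "->", "granted" if config.check(cap) else "denied")
```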
Ciarán O'Connor:Okay, so then I know for sure that next time I'm speaking with you over text, it is in fact the real Dónal Mulligan and not a virtual assistant you've assigned broad access to your life. Um, yeah, Moltbook. So Moltbook is interesting. I haven't heard a lot about it, but in broad terms, in broad headlines, it's social media for AI agents.
Dónal Mulligan:Yeah, exactly that. The model for it, I think, was really based on Reddit. Lots of listeners may have been users of Reddit for many years; I certainly use Reddit myself, and I find it an interesting place because it's essentially a topic-based forum where people post things and you can subscribe to the topics you're interested in. Moltbook was an attempt to do something like that, but to have the agentic bots, the OpenClaw systems that were in use, post there about what they're doing and begin to share information with one another, and so on. There was a lot of reporting on this, and a lot of hype, because for many people this seemed like, oh, we're getting an insight into what bots really think, or what AI systems are really thinking about and what interests them. And within the Moltbook space, these systems seemed to be talking about religious experiences, or religions that might arise for them, and talking about issues of the day. So lots of people were breathlessly reporting on the fact that, look, there's this other world of connection and of interesting social interaction between these bots. I think it's really important to take a step back from that and ask: why are those bots doing that? Is it really an insight into the reasoning and thinking behind them? Or is it, again, something they have been incentivized to do by their training, something they have seen done in the material they were trained on? These systems were trained on things like Reddit, so they know how humans interact in these ways. And it's not terribly surprising that if they are directed, in their programming or their instructions, to go and post there, they're going to post in the style of things they've seen before. So I'm really dubious about reading into this some sort of metacognition of AI, as if it's giving us real insight into their social interactions or their thoughts about spirituality. I think it's much more likely that they are, as in all other areas, performatively doing the things we've incentivized them to do. So the Moltbook thing, I regard it as a bit of a weird blip and a strange thing. But it's absolutely undeniable that the reporting on it, which included major sources like The Guardian, raised the issue of this set of tools and therefore raised the profile of OpenClaw. OpenClaw had been a much less known, smaller framework for doing agentic stuff, probably mostly used by higher-level programming people in Silicon Valley; because it was associated with this, it has now become much more widespread. And so the Moltbook story and narrative is really what's giving rise, I think, to the uptake we're seeing much more widely, and then, of course, to those sudden bills and those security concerns. So, you know, I think again hype is at the basis of a lot of what's going on here.
Ciarán O'Connor:This is what it seems to be at its core: an experiment to see how these agents perform while the humans aren't involved. Although I have read that humans can only access the site as observers. So we may finally get an answer to the question: do androids dream of actual sheep?
Dónal Mulligan:Well, we can see what their religious views are and all sorts of things by observing it. But again, I think we talked about this in a slightly different way in some of our previous episodes, where we mentioned the concern raised in the AI 2027 article from a few years ago. The concern there was the idea that at some point the complexity of these bots, and the complexity of the interactions they're undertaking, will become so impenetrable to us that we won't be able to supervise them meaningfully. And it was already well discussed at that time that the only way we have to really check on what they're doing, or how they're behaving, is if they are telling us in plain English what they're doing. So this was a feature developed in this space to do that. But of course, it again is not really accurately reporting what their thoughts, quote unquote, are. It's instead somewhere where they're confabulating the type of thing they have seen humans do, in their training, on social platforms like this. So I think we're still in that space where our ability to meaningfully supervise some of these things is quite hampered, and in this case we're getting back a little bit of what we put in.
Ciarán O'Connor:I don't think you can talk or think about Moltbook either without bringing up the dead internet theory, which is something we discussed in previous episodes. It's a theory going back years; I think it emerged on Reddit and other forums in the last decade. It's the belief that more and more of the internet is powered by automated agents, by bots, by inauthentic accounts. And here is a very deliberate attempt at recreating that and seeing what happens next.
Dónal Mulligan:And not the only dead internet thing happening recently. We've certainly mentioned this issue before, and we see it in our own lives. We're well accustomed to the fact that lots of comment sections we look at, and lots of social media platforms we might use, are full of what is clearly bot content. If I visit a YouTube video or something, I can rarely be bothered to look at the comments underneath, because in lots of cases they're so obviously fabricated. We're well used to the kind of thing described in dead internet theory. But a really quite macabre and frankly pretty gross dead internet development is the recent coverage of Meta's decision to file a patent that would allow them, based on the data they have collected on a user, to continue posting as that user to their various products after the user dies. So a really literal dead internet, where a bot takes the place of someone you know or knew, and then posts to WhatsApp or Instagram or Facebook as that person.
Ciarán O'Connor:It's already a bit of a dead platform for most people.
Dónal Mulligan:Well, yeah, but that seems to cross a line, I think, for a lot of people, and there's been pretty widespread revulsion at the idea. But it's one of a few things, I'm afraid, that Meta have been doing recently that have caused some alarm. I've spoken somewhat unfavorably before about some of the things I've observed Meta doing, but in addition to that patent, which they obviously intend to make use of, there was the one we discussed, the two of us, in advance of the show, where Meta are hoping, I think, to use this period of tremendous volatility in the politics of the US to smuggle in some things.
Ciarán O'Connor:Yes, so amid the political tumult in the US and the focus on some of the major AI companies, OpenAI included, Meta themselves have been busy in other ways. They are looking to reintroduce facial recognition on their smart glasses. Now, this is a move fraught with obvious ethical concerns, privacy concerns, and potential for misuse, but they're pushing ahead. In fact, they're seeking to capitalize. What's most interesting about some of the reporting around this is that it includes an internal note from Meta. I'm just going to quote this from the source: "We will launch during a dynamic political environment where many civil society groups that we would expect to attack us would have their resources focused on other concerns." This seems a very cynical and deliberate move to capitalize on the unrest in the US and around the world to bring in a feature, a product, that has raised serious concerns in the last few years.
Dónal Mulligan:I'm afraid it is. It has very direct links to the kind of civil unrest and political turmoil happening in the US, because some of Meta's own products, those glasses they made with Ray-Ban that have cameras built into them, were observed being worn by ICE agents in the US during some of the raids. And of course, if there's facial recognition built into that, it's a tool that might directly contribute to the kinds of behaviors we're observing in the US with regard to ICE arrests, and so on. So for Meta to say, very directly, well, this is a time when people will be distracted, let's bring this in now so that we won't get attacked, and they use the word attack, it's incredibly cynical. It exposes a sort of opportunism around the worst sort of politics in order to get a product in there. And I'm sorry to say, for me it's not surprising from Meta, because we've seen some pretty questionable ethical behavior from them before. But it really underscores the fact that, like lots of other companies in this space at the moment, they are finding ways, where they can, to really milk the current political environment for profit, and they need these income streams. We previously talked about Meta in a prior podcast as standing up there alongside those other three large companies, but their presence in the space of developing AI models has faded away. So I think it's much more important for Meta to find other ways to capitalize, other ways to use secondary AI tools like the hardware in these glasses. And yeah, it's contributing to a pretty saddening space for this kind of development.
Ciarán O'Connor:I can't help but think, when reading news like this, that we're repeating all the same steps of the early social media years, where platforms and technology companies raced ahead with features and products without the risk assessments and impact assessments we mentioned in the context of Grok, this is another example, and without sufficient checks and balances, or allowing regulation or legislation to catch up.
And it worries me that in a number of years' time we're going to be repeating the same arguments in our politics and broader civil society: okay, how do we build legislation around this that can future-proof it, because platforms and technology companies are failing to bake in these safeguards at the core.
Dónal Mulligan:Yeah, I agree completely. I'm at the stage where I hope we have a broader civil society in a few years, because I think some of this is so terribly damaging. And it's hard, when you look at it in those terms, to reconcile why this is going on. Why are people, in large part, okay with things racing on in this way? A lot of it again comes back to the thing we mentioned earlier, this degree of hype we're seeing: people are so dreadfully afraid of missing out on opportunities for enrichment or development, or, at a national level, afraid their country won't be amongst the leading countries in the space, or that the EU won't stand alongside China or the US in this kind of development. And I want to mention one other article that listeners may have come across and may have read themselves in the past few weeks, because it became so prevalent. That's a viral article by a man called Matt Shumer, who talked about the degree to which agentic models, exactly what we've been talking about, and in fact he specifically mentions OpenClaw in the article, have changed his personal workflows and the workflows of people he sees around him, and their productivity: how huge amounts of work have moved away from a particular person, who now just checks in on it and can therefore do much more work than before. And alongside that, his prognostication that this means loads of people will lose their jobs: if you're not already using these tools, you need to start, because you're going to lose your job, et cetera, et cetera. And this really took off. I think it directly frightened a lot of people.
Ciarán O'Connor:For my sins, I still access X as part of my work. At certain times certain clips take off and they are unavoidable, and even for me, I couldn't scroll, poor me, without seeing this article. There was the initial wave of the article and its contents being promoted by the author, but then it really caught fire, and others were evangelising on his behalf, seemingly without much critical consideration, just contributing to the hype and essentially saying, you know, this, with the finger emoji pointing down: this is what's happening. But it did take off, and it seems to have really touched many levels of business and even politics, and is already shaping people's views on what might be possible.
Dónal Mulligan:Yeah, I had a very similar experience. Anecdotally, because of work I'm doing elsewhere and things like this podcast, lots of people in my extended circle know me as, oh, Dónal, that guy who's constantly going on about AI. So I got a lot of texts about this, and a lot of people saying, oh my god, what do I do here? Should I be learning this tomorrow? Should I be paying for these kinds of tools? So it definitely did have a pretty substantial impact in that way. And I'm not saying that what he's saying is without merit. It is absolutely the case that work is going to change, and of course there are jobs that will be affected by this. But I think we, again, really need to take a step back and a deep breath and ask: is the hype that's going on here coming from a useful source? We've now seen, in rebuttals of what Shumer is talking about, quite a lot of criticism of him as a sort of grifter in this space.
And he has a history there. Again, I'll link in the show notes to various people, like Ed Zitron, who have really gone into detail about his past in terms of hyping up previous models and approaches and things like that. And he is, in that viral article, directly calling for people to buy AI products. He's saying you should be subscribing to Anthropic tomorrow, you should be paying for these tools. And of course Anthropic, I'm sure, are delighted for that to be the message. Indeed, the tool OpenClaw that we talked about has since effectively been bought over; the person who created it now works for OpenAI. And I'm sure they will say, well, you should have an OpenAI subscription. And that's what people were asking me: friends and acquaintances were messaging, saying, do I need to quickly sign up to this? Which one do I need to buy to be sure I can keep my job in the future? And again, I am not at all saying that we should not use these tools, or that these tools will not change the world. They absolutely will. But this sort of frantic, uncritical adoption is something these companies definitely need, and they're incentivized to promote the hype. People like Matt Shumer are contributors to that, and it's worth taking a critical look.
Ciarán O'Connor:Hype is good for business. But you could also couple that article, and the kind of reaction it created, with the comments from Microsoft's AI chief, who predicted that in 18 months the work of lawyers and accountants will be obsolete. Simply obsolete. I'm dubious as to the claim myself. I don't doubt that those professions will be disrupted, and that this may change the fundamentals of the work, but I think it's a large claim, and I think we can maybe package it with some of the accelerationist hype that's so common within this space.
Dónal Mulligan:Well, not only that, we can also connect it to the start of this podcast, where we were talking about the amount of money Microsoft lost recently in their valuation because their own shareholders are angry at the investment they're making. So is it a surprise that a senior Microsoft figure would predict massive growth and uptake in the use of their models, and a potential income stream associated with that? Perhaps not, in that case. Again, I'm not saying this because I think it won't happen to some degree; of course it will happen to some degree. But that things would change to the degree that, if you don't immediately start paying for these tools tomorrow, you're cast to the curb? I think that is an extreme view, and much, much less likely than some of these predictions suggest in terms of the timelines involved. Absolutely, we will see changes. We have seen them at every level over many years, and they are happening with increasing frequency. But the fevered push for us to explore this stuff by paying money to companies who desperately need that money, I think we have to be a bit critical about that.
Ciarán O'Connor:So as things appear to be accelerating in a number of ways, I think it's also probably a good reminder to slow down and to talk about some of the basic, fundamental principles and guides for users of AI. Dónal, you were involved in something related to this very recently, Safer Internet Day.
Dónal Mulligan:Yeah, Safer Internet Day is in its 23rd year, though a lot of people had never heard of it before. It's one of those international things; I think it actually started in the EU, and there are around 160 countries involved in it now. It's basically a day for exactly what it sounds like: trying to raise awareness of harms online, especially for younger people. And this year their focus was on AI, because AI is, of course, an additional online tool that young people may reach for, and we've covered this already. The particular harms for young people in two areas are a great concern. One is harm to their education. If they were to uncritically begin using these tools, especially at earlier stages of education, and we've talked about this before, a thing called cognitive offloading is a real issue: if you let the tool do the thinking for you, you never take the opportunity to learn. And we can now see, with increasing clarity as more and more studies emerge, that it really affects people's cognition and their ability to do higher or more complex thinking. So that's certainly one issue. The second, another thing we've mentioned before, is that these sorts of tools, when used by young people, can really appear to be a friend. That may be positive in some ways, but it can be a friend that causes them to not talk with their parents, to be secretive, and, in the worst cases, and we've covered these really sad and horrible cases before, to hide suicidal thoughts from their parents, really harmful things that some of these models appear to have encouraged directly. That same kind of encouragement to always take the next step and the next, which might be useful in a conversation where you're learning something, is much less useful, and directly problematic, in a conversation where you're having darker thoughts and it's asking you to take more extreme actions. So those kinds of harms and potential harms are what Safer Internet Day focused on this year. I did some work with a brilliant group called WebWise, who have worked in this space for years, have been very good on resources for young people around things like social media use and safety online, and have the most fantastic youth ambassadors in schools all over the country. When you're as sad as I sometimes am about this, and as I'm clearly coming across in parts of this podcast about the dire things that could happen, nothing is better than meeting enthusiastic people who really want to engage with this stuff. So I was delighted to have worked with them. As part of that, I published with WebWise some guidance for teachers at the late stage of primary school on starting to talk about these things. Again, we in no way think that children of that age should be using the tools; that's very important. If there's one thing for parents to take away, it's that these tools themselves very clearly say they are for ages 13 and up. The law says the same thing, and some of the tools go further: Claude, from Anthropic, says it's for 18 and up. Those are not an optional hint. They are a very serious barrier to really enforce for your children.
Your younger children should not be using these tools to play with or experiment with at any point. But it is useful in primary school to know something about what this technology is, because later they will encounter it. Perhaps their parents are using it at work, or perhaps it's something that they're aware of elsewhere in life. And so letting them know what this is, why it might be harmful, and how you might safely use it later is important. And then in secondary school, trying to bring in actual resources that allow people to think about that with a bit more criticality and in a bit more detail is something that I'm involved with too.
Ciarán O'Connor:So just instilling good principles from an early age.
Dónal Mulligan:If we can! It's very difficult to do, of course. I say this with a lot of hope. I was recently speaking on the radio in Ireland, on Newstalk, and the presenter was almost pitying me for having to try to do this. And I understood completely where he was coming from, because it's such an uphill battle, as we know already from social media, to get people to do the exact thing we just talked about: take a deep breath, step back, and ask, is this a wise move for me? Is this a thing I really need to do now? It's so hard to do that, and there are so many pressures not to. There's a deep social pressure, as there was with social media, to be using the thing everyone else seems to be using, or the thing that seems to be the current hyped thing of the moment. So being able to address that, to parents especially, and to teachers, and to help them help children, is something worth doing. We'll link in the notes for this podcast to some of the resources from WebWise, and to some other articles I've written about this recently, because there are some pretty easy-to-implement, concrete, immediate steps that a parent listening to this could take, and I think it's worth having a read of those.
Ciarán O'Connor:Very good. I think at that, I'm ready to take a deep breath, step back, and re-emerge into the real world once more. I think that about wraps it up for today. Thank you very much for listening to episode one, season three of the Enough About AI podcast. I'm Ciarán O'Connor.
Dónal Mulligan:And I'm Dónal Mulligan. We'll be back again with more information. In the meantime, if you have questions, or if things that came up today are of interest to you and you want to point us at resources we should look at, or ask us questions we can tackle in our next episode, please do. You'll find the links at the bottom of the show notes, or you can email us at hello@enoughaboutai.com. Thanks.