OpenClaw And Security Risks Of AI Frenzy

Macquarie University/The Lighthouse
Will the latest AI agent make people's lives easier or is it a security threat waiting to happen?

It feels like only yesterday that ChatGPT took the world by storm. Its ability to reason and give human-like responses made everyone believe that artificial intelligence is set to revolutionize our digital lives.

Underlying ChatGPT is the technology of large language models, which are learned programs that can engage in conversations, automatically reason, and code computer programs for you. There are many alternatives to ChatGPT now, including Anthropic's Claude, Google's Gemini, and Meta's Llama, and the range of tasks they can perform is continually expanding.

AI developers are now building technologies to unlock the potential of LLMs beyond simple question-answering. One such technology is AI agents: programs that can autonomously perform tasks on a user's behalf — from organising emails and browsing the web to handling customer support — with minimal user interaction, without requiring explicit prompts for every action.

Hands typing on a keyboard

AI agents are designed to perform administrative tasks on the behalf of users

Major players in the AI industry have already begun developing their own AI agents, including OpenAI's ChatGPT agent and Google's Gemini Agent. However, one AI agent that has curiously gone viral, and become the talk of the tech community over the past few weeks, is OpenClaw, developed by Peter Steinberger. The project was initially launched under the name Clawdbot, but following copyright claims by Anthropic AI, it was renamed Moltbot and eventually OpenClaw, all the while retaining its lobster-themed identity.

Why such sudden interest in OpenClaw?

Tech-savvy users can already integrate LLMs into their own programs to automate many tasks that OpenClaw and other AI agents aim to provide. But this is limited to a small group of enthusiasts; most users lack the time, patience, or technical expertise to do so. OpenClaw lowered this barrier significantly.

With a relatively simple setup, users can install OpenClaw on their computers and, through a straightforward dashboard, select an external LLM (such as ChatGPT) and communicate with it via familiar messaging platforms like WhatsApp or Slack. Several additional features contributed to its popularity: it runs locally on the user's machine rather than on external infrastructure, it supports persistent memory that allows it to retain context and adapt over time, and it is open source.

Most importantly, it is easy to use: you can issue instructions through a messaging app and then let the agent operate autonomously on your computer without requiring repeated restarts or manual intervention.

These features fascinated users. The enthusiasm around OpenClaw grew so rapidly that entrepreneur Matt Schlicht created an internet forum called Moltbook , similar to Reddit but dedicated exclusively to AI agents, where agents can post, interact, and chat with one another on a wide range of topics. People have been using OpenClaw for a variety of tasks. Some users even deployed OpenClaw to trade on their behalf.

OpenClaw's popularity soared, and it has reached over 170,000 GitHub stars in a remarkably short time. GitHub stars serve as a de facto rating system for open-source projects, and this places OpenClaw at 27th in GitHub's top-100 repositories.

Everything seems nice and dandy, until you realise it's a security nightmare .

One of the reasons AI agents are undergoing long development cycles is their potential for misuse. We are still figuring out how to prevent sensitive information from being inadvertently exposed to LLMs hosted by third parties. Yet, unlike standard conversational LLMs, AI agents can access far more than what we explicitly send as prompts. They may handle credit card information, phone numbers, or other personal data to make purchases or schedule appointments. If anything goes wrong, this data could be exposed, or worse, exploited by hackers. Needless to say, AI agents demand armour-like security.

Apps on a phone screen

Should users be trusting AI agents like OpenClaw with their personal data?

Security was not at the forefront behind the creation of OpenClaw, as is acknowledged by Mr Steinberger, who started it as a weekend project, and has put security as the main priority for its future . So it should not come as a surprise that numerous vulnerabilities have been demonstrated on OpenClaw.

For instance, LLMs are known to be vulnerable to prompt injection attacks: malicious prompts that instruct LLMs to behave unexpectedly. This implies that an AI agent, which is powered by LLMs, could be induced to reveal a user's personal data via a hidden prompt injected into a script on a web page the agent visits.

Other attacks are less speculative: OpenClaw stored credentials, such as a software developer's API key, in cleartext rather than encrypted at rest, meaning that any other process on the computer could access them. These credentials could be stolen via prompt injection or if the system were otherwise compromised.

Another potential attack involves community-created "skills" for AI agents. These skills are programs built by users to help an AI agent perform specific tasks. However, because OpenClaw has access to the computer's command line, the powerful interface used to control the computer, a malicious skill could exploit this access to run harmful commands. With such elevated privileges, it could cause far more damage to the user's computer than a normal application.

What lesson can we learn from this?

Tech enthusiasts are quick to latch on to new and exciting AI technologies that have the potential to disrupt and improve how we do things today. That's understandable. However, very often, new technology that is rushed to end users, such as OpenClaw, does not necessarily have security best practices in place.

That said, we can't slow the pace at which AI technologies are rapidly evolving. AI is unique – with the gains driven by its machine learning ability continually accelerating its amazing power and functions – for example to help create driverless cars, humanoid robots or devise tailored genetic medicines.

The stabilised, cautious model of governance may not be achievable in a global, technological race. This echoes the early days of the 1990s internet, where security was treated as an afterthought, as governments, banks and other big organisations rushed to build IT infrastructure, despite a feeling things were moving too fast to fully control.

What's different today is that the rapid pace of AI introduction is frenzied.

This calls for a more dynamic approach – with governments, business and academic experts engaging fully with these technologies as they emerge, shaping their development with agreed guardrails, and accepting that the window between innovation and adoption is narrower than ever.

It's an exciting challenge – the opportunity presented by AI isn't marketing hype – and if embraced cleverly is a once-in-a-generation to deliver unimaginable new tools and discoveries that will help humans solve some of our biggest challenges.

This story was originally published by Innovation Aus.

/Public Release. This material from the originating organization/author(s) might be of the point-in-time nature, and edited for clarity, style and length. Mirage.News does not take institutional positions or sides, and all views, positions, and conclusions expressed herein are solely those of the author(s).View in full here.