Last month, Jason Grad issued a warning tardy last night to 20 employees of his technology startup. “You’ve probably seen Clawdbot trending on X/LinkedIn. While it’s cool, it’s currently unproven and poses a high risk to our environment,” he wrote in Casual message with red mermaid emoji. “Please keep Clawdbot away from all company equipment and work-related accounts.”
Grad is not the only CTO to express employee concerns about the experimental AI tool, which was briefly known as MoltBot and is now called OpenClaw. The Meta executive claims that he recently told his team to keep OpenClaw away from the laptops where they usually work or risk losing their jobs. The executive told reporters he believed the software was unpredictable and could lead to privacy breaches if used in secure environments. He spoke on the condition of anonymity to speak honestly.
Peter Steinberger, the independent founder of OpenClaw, launched it as free and open source tool in November last year. However, its popularity increased last month as other developers added fresh features and started sharing their experiences using it on social media. Last week Steinberger joined Developer of ChatGPT OpenAI, who says he will keep OpenClaw open source and support it through a foundation.
OpenClaw requires basic software engineering knowledge to configure. It then needs only confined guidance to take control of the user’s computer and interact with other applications to aid with tasks such as organizing files, conducting Internet research, and online shopping.
Some cybersecurity professionals do publicly he insisted companies to take action to strictly control how their employees employ OpenClaw. The recent bans show how companies are moving quickly to ensure they prioritize security over the desire to experiment with emerging AI technologies.
“Our policy is ‘mitigate first, investigate later’ when we come across something that could be harmful to our business, users or customers,” says Grad, co-founder and CEO of Massive, which provides online proxy tools to millions of users and businesses. His warning was issued to staff on January 26, before any of his employees installed OpenClaw.
At another technology company, Valere, which works on software for organizations including Johns Hopkins University, on January 29, an employee posted about OpenClaw to an internal Slack channel to share fresh technologies for potential trial. The company’s president quickly replied that using OpenClaw did strictly prohibitedValere CEO Guy Pistone tells WIRED.
“If he gained access to one of our developer machines, he could gain access to our cloud services and our customers’ sensitive information, including credit card data and GitHub codebases,” Pistone says. “He’s pretty good at cleaning up some of his actions, which scares me too.”
A week later, Pistone allowed Valere’s research team to run OpenClaw on the employee’s senior computer. The goal was to identify software vulnerabilities and potential fixes to augment its security. The research team later recommended limiting the number of people who could issue OpenClaw commands and making it available on the Internet only with a control panel password entered to prevent unwanted access.
In a report shared with WIRED, Valere researchers added that users must “accept the fact that the bot can be fooled.” For example, if OpenClaw is configured to summarize a user’s email messages, a hacker could send a malicious email to a person, instructing the AI to share copies of the files on that person’s computer.
However, Pistone believes that safeguards can be put in place to make OpenClaw more secure. He gave the team at Valere 60 days to investigate the matter. “If we think we can’t do it in a reasonable amount of time, we’ll give up on it,” he says. “Whoever figures out how to keep businesses safe will definitely have a winner.”
