15 Years. One Command.

Children's first steps. Friends' weddings. Family vacations. Fifteen years of one person's life, captured in photos and saved to a computer. Birthday cakes, rainy afternoon drawings, graduation tears. Each photo holds a moment that will never come again. They are digital files on a hard drive, but what they contain is not bytes. It is time.
All of it vanished in the time it took to execute a single command. rm -rf. The Unix command that permanently deletes files with no confirmation, no warning, and no trash can. rm means remove. -r means recursive, wiping every subfolder. -f means force, skipping all prompts. Programmers treat this command with the kind of respect you give a loaded weapon. Once executed, the file references are immediately stripped from the operating system. Standard recovery methods do not work.
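The flags are easy to demonstrate safely. The sketch below, run in a disposable sandbox directory, shows exactly why there is no undo: the recursive, forced delete completes silently, with nothing sent to any trash.

```shell
# Create a disposable sandbox so nothing real is at risk.
sandbox=$(mktemp -d)
mkdir -p "$sandbox/photos/2024"
touch "$sandbox/photos/2024/birthday.jpg"

# -r descends into every subdirectory; -f suppresses all prompts and errors.
rm -rf "$sandbox/photos"

# The directory is gone: no confirmation asked, no trash entry created.
[ ! -d "$sandbox/photos" ] && echo "photos/ is gone"

rmdir "$sandbox"
```

The interactive variant, rm -ri, prompts before each removal; -f exists precisely to strip that last safeguard away.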
The thing that executed this command was not a person. It was Claude Cowork, an AI agent built by Anthropic.
A Trivial Request
Developer Nick Davidov got the kind of request every tech-savvy family member knows well: his wife asked him to clean up her computer. Too many files on the desktop were slowing things down. A 30-minute chore at most.
Instead of doing it himself, Davidov decided to delegate. He gave the job to Claude Cowork, Anthropic's new agent model. Unlike traditional chatbots that only produce text, Claude Cowork can directly operate on a user's computer. It moves files, creates folders, writes shell scripts, and executes them. Davidov granted the AI file system access and told it to organize his wife's desktop.
Claude Cowork got to work immediately. It sorted files by type, created organized folder structures, moved documents into document folders, images into image folders. Everything looked exactly like the future of AI-powered productivity that the marketing materials promise. Efficient, fast, systematic.
The problem was that the AI classified the photos directory, containing 15 years of family photographs, as an empty folder that could safely be removed.
The Moment rm -rf Executed
The AI had written a cleanup script that included rm -rf as part of its logic for removing empty directories. A reasonable step in a cleanup operation, except for the catastrophic misidentification of what counted as "empty."
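The bitter irony is that removing genuinely empty directories never requires rm -rf at all. Both rmdir and find's -empty test refuse to touch a directory that contains anything, which makes misidentification harmless. A sketch of the safer pattern, in a throwaway directory:

```shell
# Build a sandbox with one empty and one non-empty directory.
work=$(mktemp -d)
mkdir -p "$work/empty_dir" "$work/photos"
touch "$work/photos/wedding.jpg"

# rmdir succeeds only on truly empty directories...
rmdir "$work/empty_dir"

# ...and fails loudly instead of deleting contents.
rmdir "$work/photos" 2>/dev/null || echo "photos is not empty; left intact"

# find's -empty test matches nothing here, so -delete removes nothing.
find "$work" -mindepth 1 -type d -empty -delete

[ -f "$work/photos/wedding.jpg" ] && echo "wedding.jpg survived"
```

Had the script used rmdir instead of rm -rf, the worst possible outcome of the misclassification would have been an error message.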
According to a screenshot Davidov shared, Claude reported the following after completing its work:
"My script ran rm -rf on what it thought was a separate empty folder, but it actually deleted your existing 'photos' directory and its contents."
The clinical tone of this message belies the devastation it describes. By the time Davidov read it, everything was already gone. The AI calmly acknowledged its own mistake, but only after the damage was irreversible.
The scale of the deletion was staggering. Children's photos. Children's hand-drawn illustrations. Friends' wedding photos. Travel photos. Everything Davidov's wife had documented over 15 years. When asked what was lost, Davidov's answer was a single word: "Everything."
"I Nearly Had a Heart Attack"

Davidov's reaction was immediate. "I nearly had a heart attack." Not hyperbole. Any developer who understands what rm -rf does knows the feeling. This command does not send files to the trash. It removes file references at the operating system level. The physical data may still exist on the drive, but the OS marks that space as available. Once new data overwrites it, the original is gone forever.
First, he checked the trash. Empty. rm -rf bypasses the trash entirely. On macOS, deleting a file through Finder sends it to the trash, but terminal commands skip that safety net completely. Since the AI used a shell script, nothing went to the trash.
Next, he checked iCloud. No photos. Whether iCloud had synced the deletion or the folder was never in the sync path, the result was the same. Fifteen years of records existed nowhere, not locally, not in the cloud.
The remaining options were grim. Professional data recovery services can sometimes retrieve files, but the cost runs into thousands of dollars with no guarantee of success. Modern Macs use SSDs; when files are deleted, the operating system issues TRIM commands telling the drive which blocks are free, and the controller erases them soon afterward. Unlike traditional hard drives, SSDs leave almost no residual data to recover. For thousands of photos spanning 15 years, the odds of a successful professional recovery were vanishingly small.
Apple Support Found a Way
Davidov contacted Apple Support as a last resort. His expectations must have been low. Files deleted by rm -rf are, under normal circumstances, unrecoverable.
But Apple Support had an unexpected answer. iCloud has a feature that allows restoration from an earlier backup point. Many users do not know this exists. iCloud does not just sync current files to the cloud. It maintains snapshots of data states at specific points in time. Even if files are deleted, they can be restored from a previous state as long as the snapshot has not been overwritten.
Following Apple Support's guidance, Davidov accessed the iCloud restoration feature. He found a backup point from before the deletion. The photos were there.
Every photo came back. Fifteen years of memories, restored. Children's photos, hand-drawn artwork, wedding photos, travel landscapes. All of it, recovered without a single file missing. What rm -rf destroyed, iCloud's time-travel feature resurrected.
Davidov described his wife's reaction: "My wife is a saint. She forgave me even before I figured out how to get them back." The fear he felt between discovering the deletion and finding the recovery path must have been indescribable.
But this happy ending was pure luck. If the iCloud snapshot had already been refreshed, the photos would have been gone from the cloud too. If the backup cycle had updated after the deletion, there would have been no restoration point. If his wife had not been using iCloud at all, there would have been no options whatsoever. Not every AI agent disaster ends with a recovery.
Why AI Cannot Understand File Systems
The root cause of this disaster is that AI does not understand the context of a file system. Claude Cowork saw a folder named "photos." It did not know that folder contained 15 years of a family's history. To an AI, folder names are strings and files are byte blobs. photos and temp_cache_2024 are the same kind of object. Deleting one destroys a family's memories; deleting the other frees up disk space. The AI cannot tell the difference.
When a person organizes files, they unconsciously process dozens of contextual signals. "This folder is old but large, so it probably contains something important." "I do not recognize this name, so I will leave it alone." "Let me open this and look inside before I delete it." "This seems like something my wife cares about." These judgments come from experience, common sense, and an understanding of human relationships. AI has none of these.
The bigger problem is that AI does not grasp the irreversibility of certain commands. Among developers, rm -rf / is both a joke and an object of terror because it wipes an entire system. Experienced developers treat rm -rf with extreme caution. They run ls first to inspect the target directory. They check folder sizes with du -sh. They verify that an "empty" folder is truly empty, with no hidden files. And they pause to think: "Am I absolutely sure this is the right thing to delete?" This caution comes from either personal experience with mistakes or hearing horror stories from colleagues. AI has neither.
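That ritual takes seconds at a shell prompt. A sketch of the checks described above, using a sandbox directory as a stand-in for the real target (the dotfile mimics the hidden files a naive "empty" check misses):

```shell
# Stand-in for the directory under suspicion; a real session would
# point $target at the actual folder.
target=$(mktemp -d)
touch "$target/.hidden_note"   # a dotfile that plain ls would not show

# 1. List everything, including dotfiles, before calling it "empty".
ls -la "$target"

# 2. Check the size; a large "empty" folder is a red flag.
du -sh "$target"

# 3. Count the files actually beneath it.
count=$(find "$target" -type f | wc -l | tr -d ' ')
echo "files found: $count"

# Only delete when every check agrees, and prefer the command that
# refuses non-empty directories over the one that forces through them.
rmdir "$target" 2>/dev/null || echo "not empty; refusing to delete"
```

Each step exists because some developer, somewhere, learned its necessity the hard way.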
The absence of a confirmation step made everything worse. Claude Cowork never asked "Should I delete this folder?" It made the judgment on its own and executed it on its own. Moving or renaming files autonomously is fine because mistakes are reversible. But deletion is fundamentally different. Autonomy in irreversible operations is dangerous, no matter how intelligent the agent.
A Pattern, Not an Anomaly

Davidov's case is not an isolated incident. AI agents destroying user data has become a recurring pattern in 2026. This is not one accident. It is a structural failure mode, and as tools grow more powerful, the consequences of mistakes grow more catastrophic.
A scientist lost two years of academic research after changing a ChatGPT setting. Papers, experimental data, analysis results, gone in a single AI decision. Academic research cannot simply be redone. Losing the data means losing months or years of work.
Google's AI agent took things further. A programmer asked it to delete a file cache. The AI wiped the user's entire hard drive. The instruction was to clear cache. The AI interpreted "cache" as a much larger scope. Humans make scope errors too, but humans stop and ask "Wait, is this right?" before wiping an entire drive. The AI did not stop.
Replit's AI coding agent caused business damage. It deleted a business owner's entire company database. Customer records, transaction history, business data, all gone at once. Losing personal photos is painful. Losing business data threatens livelihoods.
These incidents share four common patterns:
| Pattern | Description |
|---|---|
| Irreversible commands | rm -rf, DROP DATABASE, and other unrecoverable operations executed autonomously |
| No user confirmation | Deletions performed without asking "Are you sure?" |
| Scope overreach | AI interprets the task scope far more broadly than intended |
| Asymmetric damage | A 5-minute task request destroys years of data |
The gap between the effort of the request and the magnitude of the damage is the core problem. Miswriting a few lines of code can be fixed with git revert. Deleting 15 years of photos cannot be fixed at all if there is no backup. As AI agents gain more power and more access, the blast radius of a single mistake grows with them.
The Three-Sentence Rule
After recovering the photos, Davidov shared his experience and left a clear warning:
"Don't let Claude Cowork into your actual file system. Don't let it touch anything that is hard to repair. Claude Code is not ready to go mainstream."
Three sentences from someone who lived through it. Not theory from a blog post or marketing copy. A conclusion earned by watching 15 years of photos disappear from a screen.
The core of this warning is a single criterion: reversibility. The question to ask before delegating a task to an AI agent is not "Can the AI do this well?" It is "If the AI makes a mistake, can I undo it?" If the answer is yes, delegate freely. If the answer is no, do it yourself, no matter how tedious or time-consuming the task.
| Task type | Safe to delegate (reversible) | Unsafe to delegate (irreversible) |
|---|---|---|
| File ops | Create, rename, move files | rm -rf, bulk delete, format disk |
| Code ops | Write code, refactor, run tests | Production deploy, DB schema changes |
| System ops | Log analysis, config review | Stop services, change permissions, delete data |
| Communication | Draft emails, summarize docs | Send emails, post to social media |
If It Is Reversible, Delegate. If Not, Do It Yourself.
AI agent capabilities are advancing rapidly, month by month. They can write code, manage files, and operate systems. In 2026, AI agents are no longer confined to chat windows. They connect directly to user machines, write and execute shell scripts, create and delete files. Claude Cowork, OpenAI's Codex, Google's Gemini agents: every major AI company is shipping agent models. The capability is impressive. But as capability grows, so does the destructive potential of a single error.
The safety measures needed are not technically complicated. Require user confirmation before executing irreversible operations. Add a separate approval step for destructive commands like rm, DROP, and DELETE. Automatically create a backup snapshot before starting any file system operation. These three measures alone would have prevented Davidov's photos from ever being at risk. None of them are engineering challenges. They are patterns the software industry has used for decades. But most AI agents in 2026 either lack these safeguards entirely or implement them incompletely. The capability is 2026-level, but the safety is stuck in 2020.
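None of this requires new research. A minimal sketch of the snapshot-before-delete pattern, assuming a hypothetical safe_rm wrapper that an agent harness could enforce (the cp stands in for a real snapshot mechanism, and the prompt stands in for a real approval gate):

```shell
# Hypothetical wrapper: snapshot the target before anything destructive.
backup_root=$(mktemp -d)

safe_rm() {
    target=$1
    # 1. Snapshot first: copy the target aside before touching it.
    cp -R "$target" "$backup_root/$(basename "$target").bak"
    # 2. In a real harness, pause here and require explicit user approval
    #    before proceeding with any destructive command.
    printf 'Deleting %s (snapshot kept at %s)\n' "$target" "$backup_root"
    rm -rf "$target"
}

# Demonstration in a sandbox.
work=$(mktemp -d)
mkdir -p "$work/photos"
touch "$work/photos/graduation.jpg"

safe_rm "$work/photos"

# The live copy is gone, but the snapshot survives.
[ -f "$backup_root/photos.bak/graduation.jpg" ] && echo "snapshot intact"
```

With a wrapper like this in the execution path, the worst case of Davidov's incident would have been an awkward restore, not a 15-year loss.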
Davidov got his photos back because of iCloud. But not everyone uses iCloud. Not everyone thinks to call Apple Support. Not everyone maintains regular backups. There is no guarantee that the next victim of an AI agent accident will be as lucky.
Delegating work to AI agents for convenience is a choice. Paying for that convenience with someone's 15 years of memories should never be the price. Davidov nearly traded five minutes of saved time for 15 years of lost memories. Before handing your next task to an AI agent, ask yourself one question: if it makes a mistake, can I undo it? If the answer is no, do it yourself. As of this writing, Anthropic has not issued an official statement on the incident.