I keep seeing the same panic cycle play out on developer forums. Last week, two widely used open-source libraries, Axios and LiteLLM, were hijacked. Millions of apps depend on them. The attackers didn't hack a secure server. They just stole the maintainers' passwords and slipped remote-control malware into the code updates.
Half the tech industry woke up wondering if their apps were secretly sending user data to a random server.
This is exactly why we built OnDeviceAI differently.
The security cost of convenience
Most modern software is glued together from public code libraries to save time. It's incredibly convenient for developers. But it's also a security nightmare. If a single package anywhere in that dependency chain is compromised, every app that pulls it in is compromised too.
When we started OnDeviceAI, we made a deliberate choice to avoid that approach. It meant writing a lot more code from scratch. We bypassed the vulnerable web ecosystems entirely and built directly on Apple's native technologies.
Here is what that actually means for your data:
Zero external dependencies where it counts
We don't rely on Node.js or Python packages for our core processing. The app runs on Swift and Apple's own frameworks. There are no mysterious background scripts pulling updates from a community repository.
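For contrast, you can see what "zero external dependencies" looks like in a Swift build manifest. This is a hypothetical sketch, not OnDeviceAI's actual build configuration, but it illustrates the shape of a project that links only Apple's SDK frameworks:

```swift
// swift-tools-version:5.9
import PackageDescription

let package = Package(
    name: "OnDeviceAI",
    platforms: [.macOS(.v14), .iOS(.v17)],
    targets: [
        // A single native target: only Apple's SDK frameworks
        // (Foundation, Core ML, etc.) are linked. Nothing is
        // fetched from a community registry at build time.
        .executableTarget(name: "OnDeviceAI")
    ]
    // Note what's absent: no `dependencies:` array pulling
    // third-party code from GitHub or a package registry.
)
```

With no dependency list, there is no update channel for a compromised upstream package to slip malware through.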
It physically can't send your data away
The recent LiteLLM attack was designed to sweep a user's computer for passwords and API keys, encrypt them, and upload them to a hacker's domain. OnDeviceAI avoids this risk simply by not using the cloud for AI processing. Your prompts stay on your physical hardware.
Walled-in processing
Because we built this natively for Mac, iPad and iPhone, the app sits inside Apple's strict security sandbox. Even if something went catastrophically wrong, the software is walled off from the rest of your files. It doesn't have the permission to wander around your hard drive.
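Concretely, a Mac app opts into Apple's App Sandbox through an entitlements file at build time. The keys below are real Apple entitlement keys, though the exact set OnDeviceAI ships is an assumption; the point is what a locked-down configuration looks like:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN"
  "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <!-- Opt the app into Apple's App Sandbox. -->
    <key>com.apple.security.app-sandbox</key>
    <true/>
    <!-- What's absent matters: without the
         com.apple.security.network.client entitlement, a
         sandboxed macOS app cannot open outgoing network
         connections at all. -->
    <!-- And without file-access entitlements, it can only
         read and write inside its own container, not roam
         the rest of your disk. -->
</dict>
</plist>
```

The operating system enforces these boundaries, so even compromised code inside the app couldn't exfiltrate files it was never entitled to read.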
It takes longer to build apps this way. But watching the fallout from these supply chain attacks reinforces why we do it. You shouldn't have to wonder if your AI assistant is quietly leaking your data because of a compromised library update. With native processing, your data actually stays yours.