Ahead of the launch of its private AI cloud, Private Cloud Compute, Apple has announced a bug bounty program offering up to $1 million to security researchers who identify vulnerabilities that could compromise the platform.
In a statement on its security blog, Apple confirmed that the maximum $1 million reward will be given to anyone who discovers exploits enabling remote execution of malicious code on its Private Cloud Compute servers. Additionally, researchers can earn up to $250,000 for privately reporting vulnerabilities that could expose users’ sensitive information or the prompts they input into the private AI cloud.
Apple also noted that significant security issues beyond its predefined categories could be eligible for rewards. For instance, exploits that allow access to sensitive user data from privileged network positions may qualify for payouts of up to $150,000.
“We provide the highest rewards for vulnerabilities that impact user data and inference requests beyond the trust boundary of the [Private Cloud Compute],” Apple explained.
This initiative is an expansion of Apple’s existing bug bounty program, which incentivizes ethical hackers and researchers to disclose security flaws that might otherwise put customer devices and accounts at risk.
In recent years, Apple has taken steps to strengthen the security of its products. This includes developing a research-focused iPhone, known as the Security Research Device, to help researchers uncover vulnerabilities, a response to the growing threat posed by spyware targeting its devices.
Apple’s blog post also detailed the security framework for Private Cloud Compute, accompanied by relevant source code and documentation.
Positioned as an extension of Apple Intelligence, the company’s on-device AI model, Private Cloud Compute is designed to handle more complex AI tasks while maintaining robust privacy protections for users.