Google Play Store Data Safety Compliance
What now?
In case you missed it earlier this summer, Google recently added a “Data safety” section to app listings on the Play Store.
From Google:
By July 20, 2022, all developers must declare how they collect and handle user data for the apps they publish on Google Play, and provide details about how they protect this data through security practices like encryption. This includes data collected and handled through any third-party libraries or SDKs used in their apps.
As an app developer, you must disclose what data you collect and for what purpose, AND you must report the data collection practices of any third-party tools or SDKs you use. This just made app development a whole lot more complex. On top of that, the CCPA (California Consumer Privacy Act) governs how you must store that data and what control your customers must have over it.
Ugh, what is an ethical software developer to do?
The atPlatform has you covered. The atPlatform lets you easily develop apps that are end-to-end encrypted. Because your app’s customers hold the only keys to their data, you (and your atPlatform SDK) completely fall out of scope for data collection and don’t have to declare anything. That’s it!
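To make the key-ownership idea concrete, here is a minimal Kotlin sketch of on-device encryption: the key is generated on the customer’s device and never shared, so anything that leaves the device is already ciphertext. This is illustrative only, using the standard javax.crypto APIs, and is not the atPlatform SDK itself (the SDK handles key management and encryption for you).

```kotlin
import java.security.SecureRandom
import javax.crypto.Cipher
import javax.crypto.KeyGenerator
import javax.crypto.SecretKey
import javax.crypto.spec.GCMParameterSpec

// Conceptual sketch: data is encrypted on the customer's device with a key
// that never leaves that device, so the developer (and any back end) only
// ever sees ciphertext. Illustrative only; not the atPlatform SDK API.

fun generateCustomerKey(): SecretKey =
    KeyGenerator.getInstance("AES").apply { init(256) }.generateKey()

fun encryptOnDevice(key: SecretKey, plaintext: ByteArray): Pair<ByteArray, ByteArray> {
    val iv = ByteArray(12).also { SecureRandom().nextBytes(it) }  // fresh nonce per message
    val cipher = Cipher.getInstance("AES/GCM/NoPadding")
    cipher.init(Cipher.ENCRYPT_MODE, key, GCMParameterSpec(128, iv))
    return iv to cipher.doFinal(plaintext)  // only ciphertext ever leaves the device
}

fun decryptOnDevice(key: SecretKey, iv: ByteArray, ciphertext: ByteArray): ByteArray {
    val cipher = Cipher.getInstance("AES/GCM/NoPadding")
    cipher.init(Cipher.DECRYPT_MODE, key, GCMParameterSpec(128, iv))
    return cipher.doFinal(ciphertext)
}
```

Because decryption is only possible with the customer-held key, there is no plaintext on your side to collect, store, or declare.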
There are other benefits to developing on the atPlatform that are worth mentioning as well, including:
- All apps are peer-to-peer
- End-to-end encryption is built-in
- It’s open-source (free!)
- There’s no infrastructure – we run the encrypted microservers for you, so there is no back-end design or cost involved in running your app
- Apps can be polymorphic (OK, what the heck does that mean? In a nutshell, different information can be shown to different people. Imagine building an Instagram-like app that reliably shows different pictures to family vs. friends vs. business followers, based on what the app customer designates; see the sketch after this list.)
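To picture the “polymorphic” idea, here is a hedged Kotlin sketch in which the same post resolves to different content depending on which group the viewer belongs to. The types and names are hypothetical and not part of the atPlatform SDK; on the atPlatform the selective sharing is enforced by sharing each variant, encrypted, only with its designated recipients rather than by app-side branching.

```kotlin
// Hypothetical types for illustration; not part of the atPlatform SDK.
enum class Audience { FAMILY, FRIENDS, BUSINESS }

data class PolymorphicPost(
    val variants: Map<Audience, String>  // audience -> photo URL (or any payload)
)

// The viewer's designated group decides which variant of the post they see.
fun resolveView(post: PolymorphicPost, viewer: Audience): String? = post.variants[viewer]

fun main() {
    val post = PolymorphicPost(
        variants = mapOf(
            Audience.FAMILY to "https://example.com/photos/family.jpg",
            Audience.FRIENDS to "https://example.com/photos/friends.jpg",
            Audience.BUSINESS to "https://example.com/photos/headshot.jpg"
        )
    )
    println(resolveView(post, Audience.FRIENDS))  // friends see the friends-only photo
}
```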
Can’t wait to get started? Check out the open-source repo or visit docs.atsign.com.