To keep theorizing about the use of Digital Rights Management (DRM) systems to enforce privacy constraints in digital social systems, let’s take “the privacy of photos” as another example. (I mentioned “the privacy of email addresses” in my last post.)
Let’s say you give me viewing access to a photo that you’ve posted on your favorite server. How do you keep it private? What digital rights management could you employ? Would the DRM specify which software I’m allowed to use to view the photo? Am I allowed to view it in previously unknown software that I’ve written myself? Can I use any other cloud-based software to help me view or work with it? Am I allowed to derive knowledge from the photo and use that knowledge in other contexts? Could I get recommendations from other knowledge bases based on the fact that I enjoyed it?
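Just to make the contrast concrete before going further, here’s roughly what that kind of strict, enumerated DRM policy looks like if you try to write it down. This is a made-up Python sketch; none of the field names come from any real DRM system:

    # A naive, strictly enumerated DRM policy for one photo.
    # Every name here is hypothetical; this is what "strict DRM code"
    # tends to look like when written down, not any real system's schema.
    photo_policy = {
        "resource": "photo:your-favorite-server/1234",
        "allowed_viewers": ["me"],
        "allowed_clients": ["approved-viewer-1.0"],  # my homemade viewer fails this test
        "allow_cloud_processing": False,
        "allow_redistribution": False,
    }

    def may_view(policy, viewer, client):
        """Answer the only kind of question a strict policy can answer:
        is this viewer, using this client, on the list?"""
        return viewer in policy["allowed_viewers"] and client in policy["allowed_clients"]

    # Note what the policy has no vocabulary for: whether I may remember what
    # the photo shows, mention it to someone, or feed it to a recommender.
    print(may_view(photo_policy, "me", "approved-viewer-1.0"))  # True
    print(may_view(photo_policy, "me", "my-homemade-viewer"))   # False

Every question in the paragraph above that goes beyond “which viewer, which client” simply has no slot in a structure like this.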
The future possibilities of advanced social networking systems get progressively more difficult to represent with the kind of strict DRM code we might be tempted to use in social computing today. As federated social systems evolve, the concept of “privacy” will have to take on a semantic representation.
A real-world social example: Say I give you a photo of me and Mike exploring the secret tunnels under University Hall and say that it’s for your eyes only, since we probably shouldn’t have been down there. Say this is the first and only time you learn that I know how to get into the tunnels. Later, your friend John asks you for help getting into the tunnels. Should you recommend he talk to me? Of what I showed you, what did I intend to be kept private: the photo itself or the content of the photo?
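For the sake of sketching what a more semantic representation might look like, here’s a made-up Python version of the tunnel photo, where the pixels and the facts they reveal each carry their own stated intent (again, every name here is invented):

    # A loose sketch of a "semantic" privacy assertion (all names invented).
    # Instead of locking down the file, it tries to record which facts the
    # photo reveals and what I intended for each of them.
    tunnel_photo = {
        "artifact": "photo:tunnels-under-university-hall.jpg",
        "facts": [
            {"statement": "the photo itself (the pixels)",      "share": "no one"},
            {"statement": "I know how to get into the tunnels", "share": "ask me first"},
            {"statement": "Mike and I were in the tunnels",     "share": "no one"},
        ],
    }

    def can_tell(assertion, fact_statement):
        """Look up my stated intent for one fact derived from the photo."""
        for fact in assertion["facts"]:
            if fact["statement"] == fact_statement:
                return fact["share"]
        return "unspecified"  # the interesting, and common, case

    # John asks about the tunnels: the pixels stay locked, but at least the
    # knowledge has its own rule instead of inheriting the photo's.
    print(can_tell(tunnel_photo, "I know how to get into the tunnels"))  # "ask me first"

The point isn’t the data structure; it’s that once you separate the artifact from the knowledge it conveys, “private” stops being a single bit.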
Advocating strict concepts of privacy in social networks ties us into “walled-garden” systems. As the possible uses of data expand, there is no way to export privacy constraints beyond those walls without adding semantics. We’re going to need to get used to “privacy” being a somewhat loose concept in social systems.
UPDATE: Rewritten Dec. 29, 2008. Thanks, Chad.