Safe Social Spaces


Technologies that mediate social interaction can put our privacy and our safety at risk. Harassment, intimate partner violence and surveillance, data insecurity, and revenge porn are just a few of the harms that bedevil technosocial spaces and their users, particularly users from marginalized communities. This Article seeks to identify the building blocks of safe social spaces, or environments in which individuals can share personal information at low risk of privacy threats. Relying on analogies to offline social spaces—Alcoholics Anonymous meetings, teams of coworkers, and attorney-client relationships—this Article argues that if a social space is defined as an environment characterized by disclosure, then a safe social space is one in which disclosure norms are counterbalanced by equally powerful norms of trust that are both endogenously designed in and backed exogenously by law. Case studies of online social networks and social robots show how both the design of and the law governing technosocial spaces today not only fail to support trust, but actively undermine user safety by eroding trust and limiting the law’s regulatory power. The Article concludes with design and law reform proposals to better build and protect trust and safe social spaces.


Technosocial spaces, Safe social spaces, Privacy threats, Disclosure, Information fiduciaries, Section 230, Products liability



Ari Ezra Waldman (Princeton University, Center for Information Technology Policy; New York Law School)


