New research from Imperial College London has shown that Apple’s implementation of a widely accepted data protection model could leave users exposed to privacy attacks - Privacy Community

The researchers discovered that, by examining how Apple implemented the local differential privacy (LDP) model, they could infer people’s preferred emoji skin tone and political leanings. To improve apps and services, businesses collect behavioral data generated by users’ devices at large scale. These records, however, are fine-grained and contain sensitive information about specific people.

LDP allows businesses like Apple and Microsoft to collect user data without obtaining personally identifiable information. The new research, however, describes how emoji and website usage patterns gathered through LDP can be exploited to infer a person’s emoji skin-tone preferences and political affiliation. It was presented at the peer-reviewed USENIX Security Symposium. According to the academics from Imperial College London, this contradicts the guarantees LDP promises, and more needs to be done to safeguard the data of Apple’s customers.
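To make the idea concrete, here is a minimal sketch of randomized response, the textbook LDP mechanism (Apple’s production algorithms are more elaborate; this is an illustration of the model, not their implementation). Each device perturbs its own bit before reporting it, and the collector can still recover an unbiased estimate of the population frequency:

```python
import math
import random

def randomized_response(true_bit: bool, epsilon: float) -> bool:
    """Report the true bit with probability e^eps / (e^eps + 1),
    otherwise flip it. This satisfies epsilon-local differential privacy."""
    p_truth = math.exp(epsilon) / (math.exp(epsilon) + 1)
    return true_bit if random.random() < p_truth else not true_bit

def estimate_frequency(reports: list[bool], epsilon: float) -> float:
    """Unbiased estimate of the true fraction of 1-bits from noisy reports."""
    p = math.exp(epsilon) / (math.exp(epsilon) + 1)
    observed = sum(reports) / len(reports)
    return (observed - (1 - p)) / (2 * p - 1)

# Simulate 2,000 users, 30% of whom have the sensitive attribute.
random.seed(0)
epsilon = 1.0
truth = [i < 600 for i in range(2000)]
reports = [randomized_response(b, epsilon) for b in truth]
estimate = estimate_frequency(reports, epsilon)
```

The paper’s point is that guarantees like this hold per report: when the same user contributes many correlated reports over time (emoji choices, site visits), an observer can aggregate them and recover individual attributes despite the noise.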

  • It may come as a surprise to learn that large corporations break the law at every opportunity. “Don’t put yourself in a position that this will harm you” should not be controversial advice.

    • nicfabOPMA
11 years ago

I think the prerequisite is to comply with the law. Corporations have to respect the law like everyone else. It can be considered “normal” for lawyers or consultants to identify pathways that achieve a company’s goals without violating the legislation. That is legal. Declaring a behavior illegal is up to a judge, based on evidence.

      • Right, but that’s still beside the point. The point is that if they’re collecting obscene amounts of data to begin with, you should expect that they intend to use it for some ill, so you should avoid giving out that data in the first place. (Oh, or that someone will breach it.)