@nicfab - Privacy Community
nicfab
  • 498 Posts
  • 34 Comments
Joined 9M ago
Cake day: May 11, 2022

**Through the looking glass: Unfortunate, sometimes deadly, incidents involving self-driving cars do happen. However, recent occurrences with autonomous vehicles in San Francisco have been downright bizarre. Not only does the software controlling the cars struggle to negotiate the real world, but companies and authorities are still learning how humans behave around self-driving vehicles.** Recent formal complaints to California regulators reveal strange incidents that have occurred since fully autonomous taxis started operating in San Francisco and Los Angeles. The cases mainly involve robotaxis disrupting first responders or otherwise wasting their time. ...

(Reuters) - Microsoft Corp, Microsoft's GitHub Inc and OpenAI Inc told a San Francisco federal court that a proposed class-action lawsuit accusing them of improperly monetizing open-source code to train their artificial-intelligence systems cannot be sustained. The companies said in Thursday [court filings](https://tmsnrt.rs/3kDVJFo) that the complaint, filed by a group of anonymous copyright owners, did not outline their allegations specifically enough and that GitHub's Copilot system, which suggests lines of code for programmers, made fair use of the source code.

Happy Data Privacy Day! Yes, it's a completely made-up holiday, but it's as good a time as any to take a hard look at your online life and shore up your efforts to protect your personal digital privacy. The annual occasion, feted by cybersecurity and digital privacy enthusiasts around the world, began in the US and Canada back in 2008. It's an extension of a European commemoration marking 1981's [Convention 108](https://www.coe.int/en/web/data-protection/convention108-and-protocol), the first legally binding international treaty on protecting privacy and data.

I don’t really know how I feel about this. On one hand, the algorithm is presumably spitting out something unique based on other work rather than regurgitating other people’s work. On the other hand, it is making use of a huge body of work to create that new, unique work. Is that acceptable? I don’t know.

The other side of this is, can you really copyright code that has been produced by an AI? If something has been created by a mechanism, with very limited input from a human, can you really call that a creative work? In the monkey photo case, it was determined that the photograph taken by the monkey could not be copyrighted by the photographer, because the photographer did not take the photo. If you have a mechanical monkey spitting out code for you, can you copyright the equivalent of a mechanical monkey pressing a button?

There are several issues with content generated by AI systems and its copyright aspects. In the USA, a class-action lawsuit has already been filed on the most relevant issues related to AI-generated content concerning art. See https://stablediffusionlitigation.com


The European Union and the United States of America strengthen cooperation on research in Artificial Intelligence and computing for the Public Good
The United States Department of State and the Directorate-General for Communications Networks, Content and Technology (DG CONNECT) of the European Commission signed an “Administrative Arrangement on Artificial Intelligence for the Public Good” at a virtual ceremony held simultaneously on 27 January 2023 at the White House in Washington DC and in DG CONNECT, Brussels.

Protecting Data: Can we Engineer Data Sharing?
To celebrate the European Data Protection Day on 28 January 2023, ENISA today publishes its report on how cybersecurity technologies and techniques can support the implementation of the [General Data Protection Regulation (GDPR)](https://eur-lex.europa.eu/EN/legal-content/summary/general-data-protection-regulation-gdpr.html) principles when sharing personal data.

Engineering Personal Data Sharing
This report takes a closer look at specific use cases relating to personal data sharing, primarily in the health sector, and discusses how specific technologies and implementation considerations can support the meeting of specific data protection principles. After discussing some challenges in (personal) data sharing, the report demonstrates how to engineer specific technologies and techniques in order to enable privacy-preserving data sharing. More specifically, it discusses specific use cases for sharing data in the health sector, with the aim of demonstrating how data protection principles can be met through the proper use of technological solutions relying on advanced cryptographic techniques. Next, it discusses data sharing that takes place as part of another process or service, where the data is processed through some secondary channel or entity before reaching its primary recipient. Lastly, it identifies challenges, considerations and possible architectural solutions for intervenability aspects (such as the right to erasure and the right to rectification when sharing data).
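As a concrete illustration of the report's theme (not taken from the report itself), here is a minimal sketch of data protection by design when sharing a record: the payload is encrypted client-side with AES-GCM via the Web Crypto API, so a secondary channel or intermediary never sees plaintext. The function names and record fields are hypothetical, and a runtime with Web Crypto (a modern browser or Node 19+) is assumed.

```ts
// Minimal sketch (not from the ENISA report): encrypt a health record
// before sharing it, so the sharing channel never handles plaintext.
// Assumes a runtime with the Web Crypto API (modern browsers, Node 19+).

async function encryptRecord(record: object, key: CryptoKey) {
  const iv = crypto.getRandomValues(new Uint8Array(12)); // unique per message
  const plaintext = new TextEncoder().encode(JSON.stringify(record));
  const ciphertext = await crypto.subtle.encrypt({ name: "AES-GCM", iv }, key, plaintext);
  return { iv, ciphertext }; // share these; the key stays out of the channel
}

async function decryptRecord(iv: Uint8Array, ciphertext: ArrayBuffer, key: CryptoKey) {
  const plaintext = await crypto.subtle.decrypt({ name: "AES-GCM", iv }, key, ciphertext);
  return JSON.parse(new TextDecoder().decode(plaintext));
}

// Example usage. In practice the key would be agreed with or wrapped for the
// recipient (e.g. via a key exchange or a key-management service), which is
// where the advanced cryptographic techniques the report mentions come in.
const key = await crypto.subtle.generateKey({ name: "AES-GCM", length: 256 }, false, ["encrypt", "decrypt"]);
const shared = await encryptRecord({ patientId: "anon-123", bloodType: "0+" }, key);
console.log(await decryptRecord(shared.iv, shared.ciphertext, key));
```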

As generative AI enters the mainstream, each new day brings a new lawsuit. Microsoft, GitHub and OpenAI are currently being [sued](https://www.theverge.com/2022/11/8/23446821/microsoft-openai-github-copilot-class-action-lawsuit-ai-copyright-violation-training-data) in a [class action motion](https://www.theverge.com/2022/11/8/23446821/microsoft-openai-github-copilot-class-action-lawsuit-ai-copyright-violation-training-data) that accuses them of violating copyright law by allowing Copilot, a code-generating AI system trained on billions of lines of public code, to regurgitate licensed code snippets without providing credit.

Data Protection Day 2023
On the occasion of Data Protection Day, we invite you to take a look back at GDPR enforcement over the last few years and explore how the EDPB helps all EEA DPAs act as one to safeguard your rights, today and tomorrow. Join us to see how European data protection authorities (DPAs) work together to make sure that your data protection rights are protected and that the companies handling your data are held accountable.


**SearXNG is a meta-search engine.** We already described SearXNG in the article entitled “[**Are you aware of the privacy impact of online searches and thus the right choice of search engine? (updated)**](https://notes.nicfab.it/en/posts/privatesearchengine/)”, about search engines and meta-search engines that respect privacy. Below we recall some information about SearXNG from that article.

On 28 January each year, member states of the Council of Europe and EU institutions celebrate Data Protection Day. It marks the anniversary of the Council of Europe’s data protection convention, known as “Convention 108”. It was the first binding international law concerning individuals’ rights to the protection of their personal data. [Watch](https://edps.europa.eu/press-publications/press-news/videos/data-protection-day-2023_en) the European Data Protection Supervisor's video to mark Data Protection Day 2023. [Read](https://www.euractiv.com/section/all/opinion/it-is-time-to-tear-down-this-wall/) the op-ed by Wojciech Wiewiórowski published in Euractiv.

**From the EDPS website** All European Union (EU) institutions, bodies, offices and agencies (EUIs) process personal data in their day-to-day work. Discover more about your rights.

The CNIL creates an Artificial Intelligence Department and begins to work on learning databases
Creation of an Artificial Intelligence Department (AID)
Five of the CNIL’s agents will work in the Artificial Intelligence Department, including lawyers and specialized engineers. This department will be attached to the CNIL's Technology and Innovation Directorate, whose director, Bertrand PAILHES, was previously the national coordinator for the Artificial Intelligence strategy in the French Interministerial Directorate of Digital and Information Systems of the State.

Cookie consent banners that use blatant design tricks to try to manipulate web users into agreeing to hand over their data for behavioral advertising, instead of giving people a free and fair choice to refuse this kind of creepy tracking, are facing a coordinated pushback from the European Union’s data protection regulators. A taskforce of several DPAs, led by France’s CNIL along with Austria’s authority, has spent many months on a piece of joint work analyzing cookie banners. And in a [report](https://edpb.europa.eu/system/files/2023-01/edpb_20230118_report_cookie_banner_taskforce_en.pdf) published this week they’ve arrived at some consensus on how to approach complaints about certain types of cookie consent dark patterns in their respective jurisdictions — a development that looks set to make it harder for deceptive designs to fly around the EU.
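For illustration only (the shape below is hypothetical and not taken from the taskforce report), the kind of consent handling regulators are pushing toward looks roughly like this: nothing non-essential is pre-ticked, rejecting takes no more effort than accepting, and tracking only starts after an explicit opt-in.

```ts
// Hypothetical sketch of consent defaults in line with the principles the
// taskforce report discusses; names and structure are illustrative only.

interface ConsentState {
  essential: true;     // strictly necessary cookies need no consent
  analytics: boolean;
  advertising: boolean;
}

const defaultConsent: ConsentState = {
  essential: true,
  analytics: false,    // no pre-ticked boxes
  advertising: false,
};

function acceptAll(): ConsentState {
  return { essential: true, analytics: true, advertising: true };
}

// One click, same as acceptAll(): refusing must not take extra steps.
function rejectAll(): ConsentState {
  return { ...defaultConsent };
}

function mayLoadAdTrackers(consent: ConsentState): boolean {
  // Tracking runs only after an explicit, affirmative choice.
  return consent.advertising;
}
```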

The new international convention on cybercrime that the UN is working on has turned out to be trickier ground than expected. What on paper was born as a universal treaty to strengthen defences against, and prevention of, cybercrime, according to a [United Nations resolution of 26 May 2021](https://documents-dds-ny.un.org/doc/UNDOC/GEN/N21/133/51/PDF/N2113351.pdf?OpenElement), is turning into a raid on rights on the internet. This is shown by some of the proposals presented in Vienna, where the fourth session of the UN committee tasked with writing a draft of the treaty, which is expected to reach the General Assembly in 2024, closes on 20 January.

[Twitterrific](https://twitterrific.com/beyond), one of the most iconic third-party Twitter clients, said today that it has removed the iOS and Mac apps from the App Store. Iconfactory, the company that made Twitterrific, said in a [blog post](https://blog.iconfactory.com/2023/01/twitterrific-end-of-an-era/) that under Elon Musk’s management, the social media network has become “a Twitter that we no longer recognize as trustworthy nor want to work with any longer.” The app has had a rich association with Twitter. It was one of the first mobile and desktop clients for the platform, and it helped coin the [word “Tweet”](https://furbo.org/2013/06/28/the-origin-of-tweet/). In fact, Twitterrific was built back in 2007 — even before Twitter made its own iOS app. Twitterrific’s demise comes after Twitter intentionally [started blocking third-party clients last Friday](https://techcrunch.com/2023/01/16/twitters-third-party-client-issue-is-seemingly-a-deliberate-suspension/) without any explanation. Earlier this week, the TwitterDev account posted that the company had been suspending these apps for breaching “its longstanding API rules.” But it didn’t specify what rules were broken.

After [cutting off](https://techcrunch.com/2023/01/16/twitters-third-party-client-issue-is-seemingly-a-deliberate-suspension/) prominent app makers like Tweetbot and Twitterific, Twitter today quietly updated its developer terms to ban third-party clients altogether. [Spotted](https://www.engadget.com/twitter-new-developer-terms-ban-third-party-clients-211247096.html?src=rss) by Engadget, the “restrictions” section of Twitter’s 5,000-some-word [developer agreement](https://developer.twitter.com/en/developer-terms/agreement) was updated with a clause prohibiting “use or access the Licensed Materials to create or attempt to create a substitute or similar service or product to the Twitter Applications.” Earlier this week, Twitter said that it was “enforcing long-standing API rules” in disallowing clients access to its platform but didn’t cite which specific rules developers were violating. Now we know — retroactively.

Another bill has come in for Meta for failing to comply with the European Union’s General Data Protection Regulation (GDPR) — but this one’s a tiddler! Meta-owned messaging platform WhatsApp has been fined €5.5 million (just under $6M) by the tech giant’s lead data protection regulator in the region for failing to have a lawful basis for certain types of personal data processing. Back in December, Meta’s chief regulator, the Irish Data Protection Commission (DPC), was given orders to issue a final decision on this complaint (which dates back to May 2018) — via a binding decision from the European Data Protection Board (EDPB) — along with two other complaints, against Facebook and Instagram.

PayPal is sending out data breach notifications to thousands of users who had their accounts accessed through credential stuffing attacks that exposed some personal data. Credential stuffing is an attack in which hackers attempt to access accounts by trying out username and password pairs sourced from data leaks on various websites. This type of attack relies on an automated approach, with bots running lists of credentials to "stuff" into login portals for various services. Credential stuffing targets users who employ the same password for multiple online accounts, which is known as "password recycling." Credits: @avoidthehack@mastodon.social
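One mitigation on the user or service side, sketched below (not something PayPal has announced), is to reject passwords that already appear in known breaches, since those are exactly what stuffing bots try first. The sketch uses the public Pwned Passwords "range" API with k-anonymity: only the first five hex characters of the password's SHA-1 hash leave the client. A runtime with fetch and the Web Crypto API is assumed.

```ts
// Check whether a password appears in known breaches via the Pwned Passwords
// range API. Only the first five hex characters of the SHA-1 hash are sent.

async function timesSeenInBreaches(password: string): Promise<number> {
  const digest = await crypto.subtle.digest("SHA-1", new TextEncoder().encode(password));
  const hex = Array.from(new Uint8Array(digest))
    .map((b) => b.toString(16).padStart(2, "0"))
    .join("")
    .toUpperCase();
  const prefix = hex.slice(0, 5);
  const suffix = hex.slice(5);

  const res = await fetch(`https://api.pwnedpasswords.com/range/${prefix}`);
  const body = await res.text(); // lines of "SUFFIX:COUNT"
  for (const line of body.split("\n")) {
    const [candidate, count] = line.trim().split(":");
    if (candidate === suffix) return Number(count);
  }
  return 0;
}

// Example: a recycled password that shows up here is a credential-stuffing target.
console.log(await timesSeenInBreaches("correct horse battery staple"));
```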

EDPB determines privacy recommendations for use of cloud services by public sector & adopts report on Cookie Banner Task Force
Brussels, 18 January - Commissioner for Justice Didier Reynders participated in the Plenary meeting. He presented the draft adequacy decision for the EU-U.S. Data Privacy Framework to the Board and had an exchange of views with its Members. The Board is currently working on its opinion on the draft decision, which will be finalised in the coming weeks. The EDPB has adopted a report on the findings of its first coordinated enforcement action, which focused on the use of cloud-based services by the public sector. The EDPB underlines the need for public bodies to act in full compliance with the GDPR; the report includes recommendations for public sector organisations when using cloud-based products or services. In addition, a list of actions already taken by data protection authorities (DPAs) in the field of cloud computing is made available.

The scope of application provided for by the NIS 2 Directive is structured and governed by Article 2. Our interpretation, derived from a reading of the specific provisions, is described in the article, where the sense of the topic is clarified.


Letting regulators nose under the tent is bad. It might feel good to gotcha Twitter and Facebook, but they’re always coming for us next. :(

Indeed! It’s a dangerous game, and bigger than any single player. At certain levels there are great pressures, and sometimes there is also a lack of technical competence.


Certainly, anyone who exposes self-hosted services should know something about security. However, the themes of NIS 2 are different, above all the one addressed in the article. Against the European institutions’ declared intention to achieve European digital sovereignty and to intervene in the field of cybersecurity, the framework of the NIS 2 Directive seems to cover every area, including private individuals who make services available free of charge, thus running the risk of imposing heavy restrictions. There would be much to discuss…


lol, lmao. Will they ever learn? Relatedly, get into WebAuthn. And don’t make it someone else’s responsibility.

Indeed! 🤣 MFA/2FA helps, but IMHO the best overall is FIDO2.
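For anyone curious what "getting into WebAuthn" looks like in practice, here is a minimal browser-side sketch of registering a FIDO2/WebAuthn credential. In a real deployment the challenge and user handle come from your server (the relying party) and the resulting credential is sent back for verification; all the values below are placeholders.

```ts
// Minimal sketch of FIDO2/WebAuthn registration in the browser.
// Placeholders only; a real relying party supplies the challenge and user data.

async function registerSecurityKey(): Promise<PublicKeyCredential | null> {
  const options: PublicKeyCredentialCreationOptions = {
    challenge: crypto.getRandomValues(new Uint8Array(32)), // server-provided in practice
    rp: { name: "Example Service", id: "example.com" },
    user: {
      id: new TextEncoder().encode("user-handle-123"),     // opaque handle, not the username
      name: "alice@example.com",
      displayName: "Alice",
    },
    pubKeyCredParams: [
      { type: "public-key", alg: -7 },    // ES256
      { type: "public-key", alg: -257 },  // RS256
    ],
    authenticatorSelection: { userVerification: "preferred" },
    timeout: 60_000,
  };

  // The authenticator keeps the private key; the credential is bound to the
  // relying party's origin, which is what makes this phishing-resistant.
  return (await navigator.credentials.create({ publicKey: options })) as PublicKeyCredential | null;
}
```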



I think there are some real dangers of having non-humans involved with court proceedings.

First, there’s the obvious slippery slope: first your lawyer is an AI, then the prosecutor is an AI, then the judge is an AI, and suddenly we’re living entirely under the dictates of an AI system arguing with itself.

Second, there’s the fact that no AI is a human. This might not seem important, but there is a lot of truth that a human can perceive and an AI can’t. The law isn’t computer code; it’s extremely squishy, and that fact matters to it being just. It also matters because you can’t just enter text into a prompt and expect to get the results you want out of the system. There’s a big difference between the same question asked by a judge who appears to be convinced by your argument and one asked by a judge who appears to be skeptical of it.

You might argue that it’s just traffic violations, but there’s a slippery slope there as well. First it’s traffic violations; eventually you might have poor people using the AI for serious crimes, because by degrees you go “oh, it’s just a traffic violation, oh, it’s just a low-level possession charge, oh, it’s just for crimes with a guilty plea anyway, oh, it’s just a tort claim, oh, it’s just a real estate case…”

Another thing: as AI expands, you suddenly get a potential risk from hackers. If there’s a really important court case, it might seem worthwhile to someone to pay to break into the AI and sabotage it so they win the case.

I agree with you. The topic is complex and would deserve much more space to be explored in depth. Some issues are related, for example, to biases; there have been several cases decided wrongly due to AI biases, especially in the USA.


I don’t know if the encryption protocol used by Signal represents the state of the art. There are probably other valid encryption protocols; I am thinking, for example, of the one on which Matrix is based.
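For context, both the Signal protocol and Matrix’s Olm build on Double Ratchet ideas, where fresh Diffie-Hellman exchanges feed a key-derivation function to produce new message keys. The toy sketch below shows just one such step with the Web Crypto API, using P-256 for portability; it is an illustration only, since the real protocols use X25519 and add ratchet chains, authentication, and per-message forward secrecy.

```ts
// Toy sketch of a single DH-plus-KDF step, loosely in the spirit of the
// Double Ratchet used by Signal and by Matrix's Olm. Not a real protocol.

async function dhRatchetStep(
  myKey: CryptoKeyPair,
  theirPublicKey: CryptoKey,
  info: string,
): Promise<ArrayBuffer> {
  // 1. DH agreement between my private key and their public key.
  const sharedSecret = await crypto.subtle.deriveBits(
    { name: "ECDH", public: theirPublicKey },
    myKey.privateKey,
    256,
  );
  // 2. Feed the shared secret through HKDF to derive a fresh 256-bit message key.
  const ikm = await crypto.subtle.importKey("raw", sharedSecret, "HKDF", false, ["deriveBits"]);
  return crypto.subtle.deriveBits(
    { name: "HKDF", hash: "SHA-256", salt: new Uint8Array(32), info: new TextEncoder().encode(info) },
    ikm,
    256,
  );
}

// Example: both sides derive the same message key from their own private key
// and the other party's public key.
const alice = await crypto.subtle.generateKey({ name: "ECDH", namedCurve: "P-256" }, false, ["deriveBits"]);
const bob = await crypto.subtle.generateKey({ name: "ECDH", namedCurve: "P-256" }, false, ["deriveBits"]);
const k1 = await dhRatchetStep(alice, bob.publicKey, "msg-key-1");
const k2 = await dhRatchetStep(bob, alice.publicKey, "msg-key-1");
// k1 and k2 contain the same bytes.
```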


Thank you @graphito@beehaw.org I only want to highlight that I am reachable on Mastodon at @nicfab@mastodon.nicfab.it and not at the address you mentioned.


It has not escaped your notice. I usually talk about app-related issues. The choice of one solution or another is based on trust, and personally, after several trials with different solutions, I trust Apple. I am certainly aware that Apple is one of the biggies and that it is not exempt from criticism, but the policy it has adopted in recent years is user-friendly. It is worth mentioning that in 2018, during the international conference of Data Protection and Privacy Commissioners, Tim Cook expressed the wish that the U.S. had a privacy regulation like the GDPR. This is not the appropriate venue, but your comment will allow me to post something on the point you raise.


Well, that sounds huge. I wonder what consequences this will have. Only fines or actually more privacy in the future?

It isn’t easy to make forecasts. It’s an appropriate step, indeed. We should watch how things develop.


We retrieved the article from the Internet; we didn’t write it. We found that news interesting. Feel free to do what you want, even downvote it.


It is really unbelievable how people continue to use WhatsApp, especially for work (which is very serious), without bothering to check whether data protection regulations are being followed, especially by the controller (that is, WhatsApp). What has happened shows how high the risks are for the personal data of users who are not given control over their data. Join our awareness campaign on the conscious and correct use of IM apps that respect data protection and privacy.




I agree with you. Most people do not know the Fediverse.


I think the prerequisite is to comply with the law. Corporations have to respect the law like everyone else. It can be considered “normal” for lawyers or consultants to identify pathways to achieve a company’s possible goals without violating the legislation. This is legal. Declaring that a behavior is illegal is up to a judge, based on evidence.



All companies collect data, including personal data. They should respect privacy legislation (in the EU, the GDPR) and users’ rights. Notably, the processing of personal data should be in line with the purposes stated in the information provided to clients. I don’t think Apple would expose itself to risk by simply misusing personal data.


Hi, thank you for writing to us. At the moment, this community hosts content in both Italian and English. The content in Italian is limited compared to the English content. Anyway, we will consider your proposal.