ChatGPT is leaking users' private conversations and login credentials

Screenshots provided by an Ars reader indicate that ChatGPT is leaking private conversations containing login credentials and other personal details belonging to unrelated users. Two of the seven screenshots the reader shared stand out: both contain multiple username-and-password pairs that appear to be associated with a support system used by employees of a pharmacy prescription portal. An employee was apparently using the AI chatbot to troubleshoot problems encountered while using the portal.

The language in the conversation is far from polite, and beyond the login data it includes the name of the application the employee was troubleshooting and the store number where the problem occurred.

The full conversation goes beyond what is shown in the redacted screenshot above: a URL included with it, shared by Ars reader Chase Whiteside, revealed the entire exchange and exposed additional pairs of login credentials.

Chase Whiteside wrote in an email: “I asked a question (in this case, help coming up with creative names for colors in a palette), and when I came back, I noticed additional conversations. They weren’t there when I last used ChatGPT yesterday (I’m a fairly regular user). They were not my queries—they just showed up in my history and definitely aren’t from me (and I don’t think they’re from the same user either).”

The conversations leaked to Whiteside included the name of a presentation someone was working on, details of an unpublished research project, and a script written in the PHP programming language. The users in each leaked conversation appear to be different and unrelated to one another. The conversation about the prescription portal included a date from 2020; no dates appeared in the other conversations.

Incidents like this underscore the importance of stripping personal data from queries submitted to ChatGPT and other AI services whenever possible. In March last year, OpenAI, the maker of ChatGPT, temporarily took the chatbot offline after a bug showed titles from one active user’s conversation history to other, unrelated users.
In November, researchers published a paper describing how they used queries to prompt ChatGPT into revealing email addresses, phone and fax numbers, physical addresses, and other private data that was present in the large language model’s training material.

Out of concern that confidential or proprietary company data could leak, companies such as Apple have restricted their employees’ use of ChatGPT and similar sites.
As mentioned in a December article about Ubiquiti’s UniFi devices broadcasting private video to unrelated users, these kinds of experiences are as old as the internet. As that article explained:
The exact causes of these kinds of glitches vary from incident to incident, but they often involve “middlebox” devices that sit between front-end and back-end systems. To improve performance, middleboxes cache certain data, including the credentials of recently logged-in users. When mismatches occur, the credentials of one account can end up mapped to a different account.
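The failure mode described above can be sketched in a few lines. The class and function names below are purely illustrative (not from any real middlebox product): a caching layer keys its cache only on the request path, omitting the session identity, so a response that is private to one user gets served from cache to the next user who requests the same path.

```python
# Hypothetical, minimal sketch of a middlebox cache that mis-associates
# sessions. Nothing here models any specific vendor's device.

class Middlebox:
    """Caches back-end responses to speed up repeated requests."""

    def __init__(self):
        self._cache = {}

    def handle(self, path, session_token, backend):
        # BUG: the cache key omits the session token, so a response that
        # depends on who is logged in is shared across all users.
        key = path
        if key not in self._cache:
            self._cache[key] = backend(path, session_token)
        return self._cache[key]


def backend(path, session_token):
    # Stand-in back end: the response is private to the session owner.
    return f"history for {session_token}"


mb = Middlebox()
print(mb.handle("/history", "alice-token", backend))  # Alice's own data
# Bob requests the same path and receives Alice's cached history:
print(mb.handle("/history", "bob-token", backend))
```

The fix, in this toy model, is simply to include the session identity in the cache key; real HTTP caches express the same idea with mechanisms such as the `Vary` header.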
A representative from OpenAI said that the company is conducting an investigation into the matter.


Source: the blog coletivometranca.com.br