web3 - The Daily Dot
https://www.dailydot.com/tags/web3/

‘I will do everything I can to reunite the company’: OpenAI board member publicly apologizes as employees post demand to bring back Sam Altman
https://www.dailydot.com/debug/sam-altman-open-ai-ilya-sutskever/ | Mon, 20 Nov 2023

After Sam Altman was pushed out of OpenAI, co-founder Ilya Sutskever apologizes

Over the weekend, OpenAI CEO Sam Altman was pushed out as head of the company by his board, with four of its six members voting to oust the longtime face of its AI efforts and of its biggest success, ChatGPT.

Since then, a string of rapid developments at the company has ricocheted across tech-press headlines and Twitter feeds.

While the details of Altman’s departure aren’t clear yet (though theories have abounded), rumors have flown about what will happen next.

First, it seemed like Altman would go to Microsoft to lead a research team there, according to announcements from both Microsoft and Altman. But Altman was also in touch with OpenAI’s board, including Ilya Sutskever, an OpenAI co-founder and board member who serves as the company’s chief scientist.

The move would have been a coup for Microsoft, which already plays an important role in AI, notably by providing the computing power for OpenAI’s ChatGPT. Microsoft invested a billion dollars in the company in 2019 and used tens of thousands of chips to build a supercomputer to handle the load from OpenAI’s projects, Bloomberg reported in March.

Altman’s seemingly forced departure from OpenAI and move to Microsoft precipitated an employee revolt, with a wave of protests posted on X over the weekend, including many OpenAI employees declaring on their feeds that “OpenAI is nothing without its people.”

https://twitter.com/blader/status/1726550517885931880

That came after employees reacted to Altman’s tweet announcing his departure with heart emojis, and Altman responded to their tweets with hearts of his own.

https://twitter.com/bentossell/status/1726543371102298539

Then today, Sutskever, who some speculated may have been behind the ousting over concerns about AI safety risk, posted his own tweet apologizing for his role in the events.

“I deeply regret my participation in the board's actions. I never intended to harm OpenAI. I love everything we've built together and I will do everything I can to reunite the company,” he wrote.

https://twitter.com/ilyasut/status/1726590052392956028

Over 90% of OpenAI’s employees, including Sutskever, have since signed a letter calling for the board to resign and for Altman to come back and take charge of the company. If that didn’t happen, the open letter read, they’d all leave the company and potentially join Altman at Microsoft.

https://twitter.com/amir/status/1726680254029418972

And at around 2:30pm on Monday, The Verge reported that Sam Altman and his co-founder Greg Brockman, who had quickly resigned after the news of Altman’s departure came out, were willing to return to OpenAI if the remaining board members who’d voted Altman out stepped aside.

https://twitter.com/sama/status/1726668687577665572

https://twitter.com/sama/status/1726686611260494238

“We are all going to work together some way or other,” Altman said in a tweet, then added that “the openai/microsoft partnership makes this very doable,” implying he might still be leaning toward leaving.

But who knows?

Did a secret deal with Chinese hackers unearthed by the NSA get Sam Altman fired at OpenAI?
https://www.dailydot.com/debug/sam-altman-china/ | Mon, 20 Nov 2023

A shocking slate of events in the world of artificial intelligence played out this weekend, as OpenAI abruptly dismissed its CEO Sam Altman without much explanation or warning.

OpenAI built the wildly popular ChatGPT app, a massive leap forward this year for AI, large language models, and web3, and Altman was the very public face of both the company and the movement.

But in a statement Friday evening, the board of OpenAI said it lost confidence in Altman, who was unceremoniously dismissed on a Google Meet call.

A number of accusations and recriminations have followed, and just this morning, nearly the entirety of OpenAI’s staff said it would leave to join Altman in a role he had already secured at a subsidiary of Microsoft.

In the absence of information about the massive shift in the AI world, people have attempted to fill in the blanks as to why someone so revered in the AI world could be so swiftly terminated.

Thanks to the tech messaging board Blind, the dots have been connected.

And they involve President Joe Biden, Chinese hackers, shell companies, and international spy agencies.

"I got the juice (third hand direct from the source). The underlying cause of his removal was due to his ties to a Chinese cyber army group known as D2 (Double Dragon). OpenAI had been using data from D2 to train its AI models, including GPT-4. This data was obtained through a hidden business contract with a D2 shell company called Whitefly, which was based in Singapore. This D2 group has the largest and biggest crawling/indexing/scanning capacity in the world 10x more than Alphabet Inc (Google), hence the deal so Open AI could get their hands on vast quantities of data for training after exhausting their other options."

According to this theory, China got wind of this deal and alerted Biden, who authorized the NSA to investigate. When the agency confirmed the relationship, OpenAI's board was alerted to the unauthorized use of illegally pilfered data and fired Altman.

In their statement, the board claimed that Altman had not been "consistently candid in his communications," which the anonymous poster took to mean keeping his secret China hacking deal quiet.

The conspiracy theory migrated from Blind to TikTok, where one video pushing the supposed secret reason Altman got canned racked up over 150,000 views.

@designalily on TikTok: “Fresh off the press, and it actually explains everything that happened...”

Unfortunately, the account that posted it couldn't provide much in the way of evidence. When pressed for any shred of proof, the account responded: "trust me, bro."

"Sorry will have to be a trust me bro due to sensitivity, but I’m sure we will see it coming out publicly over the next few weeks, you can buy me a coffee for my morning standup as an apology," they said.

‘it was a good run’: ChatGPT’s new Turbo update just destroyed cottage industry of developers piggybacking on AI’s success
https://www.dailydot.com/debug/chatgpt-wrapper-apocalypse/ | Mon, 06 Nov 2023

OpenAI’s “DevDay,” a Steve Jobs-style product presentation announcing the company’s newest tools and features, has techies on X warning that it spells the end for small startups piggybacking off ChatGPT, with the new power and lower pricing of the “Turbo” GPT presenting an existential threat to many projects.

“Lots of A.I. startups just died,” said @MikeBirdTech. “Re-evaluate your value add and pivot if needed. Everything just changed (again).”

One of the biggest features OpenAI CEO Sam Altman announced was a vastly increased context length of 128,000 tokens for ChatGPT, the large language model that took the internet by storm this year.

Tokens are the parts of words used in natural language processing to parse meaning out of text. One token is about four characters.

“It can basically read a 400 page book in one context window,” commented u/Mescalian on the r/OpenAI subreddit. “Actually it can probably write a whole book now lol.”
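
For a rough sense of the numbers, here is a minimal sketch using tiktoken, OpenAI’s open-source tokenizer library; the library and the cl100k_base encoding are real, but the sample sentence and the characters-per-page figure are our own illustrative assumptions.

```python
# Minimal tokenization sketch using OpenAI's open-source tiktoken library.
# cl100k_base is the encoding used by GPT-4-era models; the sample text and
# the page-size figure below are illustrative assumptions, not from the article.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")
text = "Tokens are chunks of text, roughly four characters each."
tokens = enc.encode(text)
print(f"{len(tokens)} tokens for {len(text)} characters")  # roughly 4 chars per token
assert enc.decode(tokens) == text  # encoding round-trips losslessly

# Back-of-the-envelope arithmetic on the new 128,000-token context window:
# 128,000 tokens * ~4 chars/token is about 512,000 characters. Assuming
# ~1,500 characters per printed page, that is on the order of 340 pages,
# in the ballpark of the "400 page book" the Reddit comment describes.
print(128_000 * 4 / 1_500)  # ~341 pages
```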

Another big update was moving up the model’s knowledge cutoff.

“We are just as annoyed as all of you, probably more, that GPT’s knowledge about the world ended in 2021,” Altman said, referencing the fact that the previous model didn’t have any information about what happened in the world after September 2021. “We will try to never let it get that out of date again. GPT-Turbo has knowledge about the world up to April of 2023.”

The update also includes tools that allow developers greater customization and reproducibility, as well as API access to visual and audio tools built on the model. Use cases Altman referenced for the new features included natural speech generation, describing images for blind people in day-to-day life, and generating custom-built, reproducible tools for users.

“rest in peace all wrapper startups it was a good run” posted @10x_er.

“Wrappers” are ChatGPT-adjacent apps built on top of the ChatGPT model to offer a more user-friendly interface for interacting with it. Some of the new features being rolled out in the update could make it easier for general users to build these tools for themselves quickly without having to program anything.

These wrappers are part of a cottage industry that sprang up in the wake of ChatGPT’s explosive popularity. They automate all sorts of tasks you can do directly in ChatGPT, often shoddily, like chatting with characters from TV shows, interacting with PDFs, or running trivia competitions.
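
To make the term concrete, here is a minimal sketch of what such a wrapper boils down to: a thin, single-purpose function over OpenAI’s public chat API. The openai Python package (v1+) and the GPT-4 Turbo model ID are real; the function name, persona, and prompts are our own illustrative assumptions.

```python
# A minimal "wrapper" sketch: repackaging the ChatGPT API behind a
# single-purpose, user-friendly interface. Assumes the openai package (v1+)
# and an OPENAI_API_KEY set in the environment.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def chat_as_character(character: str, user_message: str) -> str:
    """Answer in the voice of a TV character, one of the wrapper use
    cases mentioned above. The persona is just a system prompt."""
    response = client.chat.completions.create(
        model="gpt-4-1106-preview",  # the "GPT-4 Turbo" preview announced at DevDay
        messages=[
            {"role": "system", "content": f"You are {character}. Stay in character."},
            {"role": "user", "content": user_message},
        ],
    )
    return response.choices[0].message.content

print(chat_as_character("Sherlock Holmes", "What do you make of this weather?"))
```

Since everything user-facing here is a system prompt plus some interface polish, no-code customization inside ChatGPT itself reproduces the same recipe, which is why builders of these apps read the announcement as an existential threat.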

https://twitter.com/shankshaft_/status/1721606647221534783

One of the features released by OpenAI is extreme customization for ChatGPT, which would also directly eliminate a number of wrappers.

https://twitter.com/FeelinBluu/status/1721604592641736960

“how it feels trying to center a silly little div, minutes before openai decimates your silly little startup,” posted one user on X next to the “This is Fine” meme.

https://twitter.com/personofswag/status/1721585964164976942?t=2AtLB5At2BHNwX1p0C343g&s=19

“"Your customer base, is for me? 🥺👉👈  might mess around & obliterate your startup from orbit today, anon,” posted another next to an image of a sheepish-looking Altman.

https://twitter.com/BasedBeffJezos/status/1721547594865008833?t=0Kej8yUQYGa_ONQ_SJOBRw&s=19

https://twitter.com/Aashishkabob76/status/1721603802766205198

But other users said that the new update wouldn’t be a threat to anything that wasn’t an opportunistic cash grab anyway.

“I’m not scared of @OpenAI killing my startup,” said @Mbounge_. “I’m operating in a niche that will alienate their product.”

And others pointed out that rather than spelling the end of their projects, the lower pricing would boost their prospects.

“OpenAI killed a bunch of startups today, but others literally just got a lifesaver,” commented u/ulidabess on Reddit. “The library I built for implementing Copilots just became 3x more affordable, easier to implement, and its performance will be significantly better. Easy to focus on the GPT wrappers that will have to pivot and adapt, but for many projects in the space this was a gift. It's a crazy time to be building in AI…”

“Nothing to doom about!” said @Jacksonmills on X. “Just be excited about the new possibilities!”

‘ApeFest’ attendees blinded, hospitalized after UV light fiasco at Bored Ape NFT party
https://www.dailydot.com/debug/bored-ape-yacht-club-apefest-eye-complaints/ | Mon, 06 Nov 2023

Bored Ape NFT event attendees report ‘severe eye burn’

Numerous attendees of ApeFest, an event centered around the Bored Ape Yacht Club NFT, have reported experiencing eye problems and sunburns.

Just one day after the event, which ran in Hong Kong from Nov. 3-5, multiple participants posted on X that they had sought medical treatment after waking up in significant pain.

"Anyone else’s eyes burning from last night? Woke up at 3am with extreme pain and ended up in the ER," one patron said. "I saw a couple reports but just trying to figure out if there was a common thread."

https://Twitter.com/Feld4014/status/1721074476870508608?s=20

The user, known on X as @Feld4014, also stated that he had experienced blurry vision and received a sunburn on both his face and neck.

Another attendee who goes by the moniker Crypto June likewise reported visiting the hospital after waking up with extreme pain in the eyes and said a doctor suggested that the issue could've been caused by ultraviolet (UV) lights.

"Doctor told me it was due to the UV from stage lights," Crypto June wrote. "I go to festivals often but have never experienced this. I try to understand how it could happened, that almost 1000 people were made blind. It seems like the lamps where not safe, and come from AliExpress. Anyone an idea how this could happen?"

https://twitter.com/CryptoJune777/status/1721093778176614697?s=20

A third event participant even shared that he had been diagnosed with "photokeratitis" in both eyes, a condition caused by exposure to UV radiation.

"So far, 30 hours since woke up with severe eye burn, I’ve visited emergency hospital and eye clinic and spent there a total of 6 hours," the user @crypto_birb wrote. "Got diagnosed with 'photokeratitis over both eyes, accident related' with prescribed steroid eye drops and eye lubricants."

The user also addressed the Bored Ape Yacht Club as well as Yuga Labs, a digital asset and blockchain technology company that ran the event and owns Bored Ape.

"No hate toward the organisers - I doubt it was on purpose and shit like that at times happens almost randomly. However, I’m planning to push until official statements by @BoredApeYC @yugalabs are released - for awareness purposes (literally my friends need medical help without realising they do). It’s not for me, it’s for them."

https://twitter.com/crypto_birb/status/1721377042392814059?s=20

A Yuga Labs spokesperson has since stated that the company is looking into the issue.

"We are aware of the situation and are taking it seriously," the spokesperson said. "We are actively reaching out to and are in touch with those affected."

The company also claimed that so far it had only talked with 15 people experiencing such issues, or less than one percent of attendees.

"We’re also pursuing multiple lines of inquiry to learn the root cause," the spokesperson added. "Based on our estimates, the 15 people we’ve been in direct communication with so far represent less than one percent of the approximately 2,250 event attendees and staff at our Saturday night event."

The Bored Ape Yacht Club also released a statement on X noting that it was aware of the reports.

"Apes, we are aware of the eye-related issues that affected some of the attendees of ApeFest and have been proactively reaching out to individuals since yesterday to try and find the potential root causes," the group said. "Based on our estimates, we believe that much less than 1% of those attending and working the event had these symptoms. While nearly everyone has indicated their symptoms have improved, we encourage anybody who feels them to seek medical attention just in case."

https://twitter.com/BoredApeYC/status/1721477899264643192?s=20

At least one user argued that the lights used on the main stage may be to blame given that those complaining of eye issues had been "up close with us front stage."

As noted by Cointelegraph, a similar incident took place in 2017 when the streetwear brand HypeBeast held an event in Hong Kong. After attendees began complaining of eye pain, it was determined that contractors hired for the event had installed powerful lights intended primarily for disinfecting surfaces.

Logan Paul fails to reach mediation deal in CryptoZoo scam suit despite claiming he would settle
https://www.dailydot.com/debug/logan-paul-cryptozoo-mediation-failed-25000/ | Tue, 31 Oct 2023

In August, YouTuber Logan Paul and four other defendants went to mediation over a civil lawsuit containing allegations that Paul’s CryptoZoo project was an NFT “rug-pull” scheme designed to defraud investors.

Now, according to an Oct. 25 filing in the Austin, Texas, court where the suit is playing out, that mediation has failed. The filing revealed the parties tried to settle during an Oct. 4 session with mediator Randy Wulff, Esq.

According to Wulff’s 2023 fee schedule, a daily rate for a mediation session costs $25,000. Wulff didn’t respond to an email about who foots the bill for the session.

Despite the big charge, “the parties were unable to resolve this matter,” wrote lawyers for both sides in the filing.

The lawsuit was filed against Paul and his co-defendants at the beginning of February this year. It named him, his personal assistant Danielle Strobel, his manager Jeffrey Levin, and three other men: Jake Greenbaum aka Crypto King, Eduardo Ibanez, and Ophir Bentov aka Ben Roth, who were involved in the allegedly deceptive development of CryptoZoo, an uncompleted 2021 NFT game project.

Ibanez, Strobel, Levin, Paul, and Greenbaum were named as founders of the project in the complaint. Bentov was a community manager for the game, and Ibanez was its lead developer.

“Defendants promoted CryptoZoo Inc.’s products using Mr. Paul’s online platforms to consumers unfamiliar with digital currency products,” charged the complaint, resulting in “tens of thousands of people purchasing said products.”

Furthermore, the complaint alleged, the developers pulled out of the project suddenly after promoting it to their fans and kept the money without any intention of completing the project: a classic “rug pull.”

Rug pulls are the quintessential cryptocurrency scam, wrote Chainalysis in 2021. The perpetrators promote what seem to be legitimate projects as the price of the token rises, then pull out at the height of the frenzy before the scheme collapses. According to Paul and his associates, CryptoZoo couldn’t have been a rug pull because they never sold their NFTs.

After a three-part documentary series by the YouTuber Coffeezilla last December detailed the numerous broken promises in the development of the game, Paul threatened his own legal action against the YouTuber before quickly pulling back.

Instead, he offered to pay back investors to the tune of $1.8 million.

https://twitter.com/coffeebreak_YT/status/1674607681590423552

When he never did pay up, investors like the plaintiff in the current case took action, attempting to bring a class action case against the developers.

According to the American Bar Association, 70-80% of cases that go through mediation end in agreement and have high rates of compliance.

Now with Logan Paul joining that select 20-30%, he and his co-defendants might have to settle the case the old-fashioned way: in court.

A Bella Hadid deep fake where she recants her support for Palestine was the work of an Israeli jingle maker
https://www.dailydot.com/debug/fake-bella-hadid-video-palestine-ai/ | Mon, 30 Oct 2023

An Israeli music producer made a fake video of Bella Hadid recanting her support for Palestine

An Israeli jingle producer who made a viral video of model Bella Hadid walking back her support for Palestine last week told the Daily Dot that his intention was “to put some truth in her mouth.”

“We’re in a war, and the whole world spreads fake news so as long as I’m faking something, I[‘d] rather it is the truth,” Yishay Raziel, a Tel Aviv-based musician, sound designer, and voiceover artist, said to the Daily Dot.

The video was posted on X on Saturday, where it was quickly debunked by users in the comments warning that it was faked. It also got hit with a “manipulated media” tag and a community note pointing out that the video was made with AI and that Hadid had been vocal and clear in her support for Palestine.

https://twitter.com/DanelBenNamer/status/1718355794297503881

Raziel told the Daily Dot that he’d made a voice model of Hadid, then lipsynced the video, which also required some voice acting “under” the model to complete the effect.

The clip with Hadid’s faked voice, which is overlaid with Hebrew subtitles, comes from a video of a speech Hadid made at the Global Lyme Alliance in 2016. Hadid was diagnosed with Lyme disease in 2012, reported U.S. News & World Report.

“Hi, it’s Bella Hadid,” Hadid says in the faked video. “On October 7th, 2023, Israel faced a tragic attack by Hamas. I can’t stay silent. I apologize for my past remarks. This tragedy has opened my eyes to the pain endured here, and I stand with Israel against terror. I’ve taken time to truly learn the historical context. Now with a clearer understanding, I hope we can engage in constructive dialogue moving forward. Thank you.”

Hadid posted a statement on Instagram on Thursday saying that she “mourn[s] for the Israeli families that have been dealing with the pain and aftermath of October 7th. Regardless of the history of the land, I condemn the terrorist attacks on any civilians, anywhere. Harming women and children and inflicting terror does not and should not do any good for the Free Palestine movement.”

She also discussed her father’s history in the Nakba, Arabic for “catastrophe,” the term used to refer to the ethnic cleansing of Palestinians from parts of what became Israel in 1948, which displaced around 700,000 people.

“Wars have laws,” Hadid wrote in her post, “and they must be upheld, no matter what.”

Hadid’s publicist didn’t respond to questions about Raziel’s video.

Bella's sister Gigi was criticized on Instagram by Israel's official account on Oct. 15 for her alleged "silence" about Hamas' attack, despite having condemned the harm to Jewish people after Oct. 7.

“Hi @bellahadid, we fixed it for you,” wrote Raziel and Nataly Dadon, an Israeli influencer, in a joint post with the fake video, which drew shocked reactions online.

“this is so fuсking scary.. using ai to fake support from a palestinian woman is just low and i genuinely hope bella sues,” wrote @saintdoII on X.

“this is so scary. like rn it's clear it's fake but what about in the future,” asked @gardenbabylons.

On Instagram, the response wasn’t as negative across the board. “This is some pretty good work. Her lips moving even look like he is saying those things. How did you do this?” wrote one user.

But others thought it wasn’t a smart move.

“A terrible mistake we are making,” wrote one user in Hebrew. “Shooting ourselves in the foot, showing them we know how to edit videos so they can tell the world ‘you see the Israelis only spread fake news.’ Stupid thing to do stop it already!”

“Every side publishes fake stuff,” Raziel told the Daily Dot. “As long as our details are true, my conscience is clear."

‘yes, I am with @DuchessOfDeFi’: Crypto bro appalls X by tagging his ex-wife and new girlfriend in divorce announcement
https://www.dailydot.com/debug/bitboy-ben-armstrong-mistress-divorce/ | Wed, 25 Oct 2023

Ben Armstrong, a crypto influencer who used to be known as “BitBoy” before he departed from the BitBoy Crypto YouTube channel over allegations of unprofessional behavior, announced on X on Tuesday that he was being divorced by his wife.

“She filed divorce papers to me today,” Armstrong wrote in a thread announcing the split. “Sometimes in life you make mistakes you can’t undo.”

https://twitter.com/BenArmstrongsX/status/1716835531265606069

Armstrong then said that he was in a relationship with Cassandra Wolfe, who goes by @DuchessOfDeFi on X. Wolfe worked as Director of Marketing & Strategic Partnerships at BitBoy Crypto, according to her LinkedIn profile. She’s now listed as the Chief Operating Officer of the BEN Coin Collective, a crypto company founded by Armstrong after he left the BitBoy Crypto channel.

In September, Armstrong livestreamed a confrontation with a man he claimed stole his Lamborghini. The stream was titled “Live-streaming from Carlos’ House (Where my Lamborghini is).”

https://www.youtube.com/watch?v=KuI1jVfiADY&ab_channel=BlakeAlexander

The police showed up and arrested Armstrong, who acknowledged bringing a gun with him. Also in the car was “Cassie,” with whom he said he was having an affair that his wife knew about.

“Cassie is the girl who I had an affair with,” Armstrong said during the incident. “She’s involved in this situation with me and this guy. My wife knows. We were just in my daughter’s tennis match.” Armstrong accused the man of making death threats against him as well as stealing his car.

“I’m sorry to Bethany and my kids over my mistake,” Armstrong wrote in his thread on Tuesday. “Marriage is hard very hard because people change over time and after 15 years neither one of us is the same person we were when we met.”

“Going forward, I want my personal life to be less in the public,” Armstrong finished. “But ripping the band aid off here is what had to occur. You will see Cassie on the channel and around more and more. At this point, the best all of us can do is move forward together.”

Armstrong quickly got ratioed for the post, with posters pointing out how callous it was to tag his soon-to-be ex-wife and the woman he cheated on her with in the same post.

https://twitter.com/elchefe/status/1717185755166933323

https://twitter.com/endofanerajc/status/1716877322148852052

https://twitter.com/ChefGruel/status/1716940725039079832

“Tagging your wife and side piece in the same post is nasty work,” commented @TheRealNasa00.

“Translation: I cheated and I’m still the main character,” said @AlexanderPayton.

Why BookTok is freaking out over Google Docs
https://www.dailydot.com/debug/google-docs-ai-author-concerns/ | Thu, 24 Aug 2023

BookTok went into a frenzy last month after users noticed that the Google Labs End User License Agreement (EULA) had added a new, and somewhat alarming, clause: that Google could ingest all your prompts and outputs in Google Docs to train its AI.

A number of TikTok creators—mostly authors, and some readers who make content about their favorite books—sounded the alarm with a series of viral videos, speculating that this would include their unpublished drafts sitting in Google Docs. 

Creators strategized, sharing ways to download their material off Google Drive and switch their composing to open-source or anti-surveillance platforms.
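
The creators in these videos mostly walked followers through Google Takeout; for the programmatically inclined, the same export can be sketched against the Drive API v3. The API calls below are real, but token.json, the read-only scope, and the plain-text export format are assumptions borrowed from Google’s standard OAuth quickstart pattern.

```python
# A minimal sketch of bulk-exporting your Google Docs via the Drive API v3,
# as an alternative to Google Takeout. Assumes google-api-python-client is
# installed and token.json was produced by Google's standard OAuth flow;
# both the file name and the output naming scheme are illustrative.
from google.oauth2.credentials import Credentials
from googleapiclient.discovery import build

SCOPES = ["https://www.googleapis.com/auth/drive.readonly"]
creds = Credentials.from_authorized_user_file("token.json", SCOPES)
drive = build("drive", "v3", credentials=creds)

# List native Google Docs in the account (first page only, for brevity;
# a full tool would follow nextPageToken).
docs = drive.files().list(
    q="mimeType='application/vnd.google-apps.document'",
    fields="files(id, name)",
).execute()["files"]

# Export each one as plain text and save it locally.
for doc in docs:
    text = drive.files().export(fileId=doc["id"], mimeType="text/plain").execute()
    with open(f"{doc['name']}.txt", "wb") as f:  # assumes names are filesystem-safe
        f.write(text)

print(f"Exported {len(docs)} documents")
```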

Rebecca Thorne, a fantasy author with a sizeable following on TikTok, shared a video that highlighted the potential privacy problem and offered alternatives to Docs. The video racked up more than a hundred thousand views. 

@rebecca.thorne on TikTok: “Next Cloud Hub—their word processor is OnlyOffice, and has all the same features as google docs! You can sign up for free and just pay for server space—which can be as cheap as 5 euro (25gb space), up to 20 euro / month (1000gb space). Google is NOT your friend. Please keep your intellectual property safe, folks!!”

“I don't think any of us expected [the AI era] to come so quickly,” Thorne told the Daily Dot. “And then none of us were thinking about how the AI would be trained. That I think is why we're seeing this massive surge in people who are panicking.”

That came on the heels of a recent scandal over Prosecraft, an AI tool that analyzed various statistics about published books, such as how many adjectives or sentences a given text had, ostensibly as a way of analyzing or improving one’s writing. Much like the Google Docs scare, the worry is what happens when AI ingests texts without the author’s knowledge, theoretically becoming able to reproduce plagiarism-lite versions of the original.

Google Docs has been a popular free writing tool for the last ten years because it carries a number of key advantages over similar word processors. All of your documents are accessible from any computer as long as you log into your Google account. You can also share permissions and collaborate on a document extremely easily. All of the changes and edits are saved into the cloud, meaning you can easily revert to an older version.

Thorne explained that she and her reader—her girlfriend, who “makes comments on draft documents so they can be fit for human eyes”—use the live collaboration feature all the time. “Even though I write my basics in my Word document and I save locally to my computer, I copy and paste everything into Google Docs. It’s easier than sending a document back and forth over Discord,” she said.

Early this July, Google’s privacy policy changed, allowing the company to scrape everything you’ve ever written publicly on a Google platform—for instance, reviews on Google Maps. That was a separate change from the terms of service for Labs AI products, which cover prompt inputs and outputs in your docs.

While Google now claims that any and all public writing is fair game, it has explicitly said it will not feed your personal documents into its AI products without your consent. 

Google’s privacy policy explains that it collects your content and reserves the right to use that data to improve or maintain its services. But it also makes clear that although Google employees or contractors do sometimes look over personal information, such as for labeling or advertising purposes, the company does not read your email except in narrow edge cases: when users ask it to adjudicate abuse reports or investigate bugs, or when it gets subpoenaed. 

When approached by the Daily Dot, a Google spokesperson strenuously denied that Google uses private document content to train AI. 

“To be very clear: your interactions with intelligent features (spell check, Smart Compose, spam filtering) within Google Workspace are only ever used in an aggregated and/or anonymized fashion to improve these features within Workspace. That’s it. Your content is not and has not been used to train Bard, Search, etc.”

It did not deny, however, that its terms of service grant it the ability to do that. And time has shown that the Silicon Valley ethos of move fast, break things, and ask permission later often means that regular people lose out on privacy rights that are even slightly ceded to big companies. 

Justin Hughes, a law professor who lectures in intellectual property at Loyola and Oxford, explained that good lawyering often involves “intentional ambiguity.”

“It's a little too clever to say a tech company would never [ingest user data] without your consent when you've given your consent to a whole bunch of very complex stuff in the terms of service,” he told the Daily Dot. “If a tech company says, we'll never use your data or your materials for AI training without your express and specific consent, that would be a little different than saying without your consent.”

The terms of service are a “private legal framework” between consumer and company, he explained, and a tech company “has the incentive to be clear up to a point, but also the incentive to keep its options open.” 

And Google has been proactive in pushing its right to use data for its AI. In a submission to the Australian government about its AI legislation, Google lobbied to amend copyright law to allow companies to scrape published text for AI training.

“When it comes to AI training on huge data sets like Zoom might have, or a company that records university lectures or an email service provider, we just haven't had clarity. It's reasonable for people to be ringing the alarm bells," Hughes said.

Hughes said he couldn’t really blame tech corporations: They’re all providing free tools and services, and it’s reasonable for them to think about how to recoup the costs of those services. Privacy implications, he added, were inherent in the arrangement.

"I just can't imagine why anyone would've ever thought at the beginning that putting stuff on the cloud, which just means putting it on some server you don't control, would be a good way to ensure the privacy and confidentiality of their materials,” he said. 

This echoes the recent panic over Zoom, which about-faced and agreed not to train its machines on user data. Much like Google, it did not explicitly deny that its terms of service would allow it to do so: It simply affirmed that it does not. 

For Thorne, the panic about AI scraping writers’ drafts points to greater fears about the publishing industry. Indie authors already struggle with profit margins and have to compete with larger publishing houses and Amazon. 

“Even indie authors are still being held under the thumb of the corporations that are trying to implement this type of thing,” she said, pointing to the fact that some 80% of indie book sales go through Amazon. 

The sales juggernaut has no AI writing software yet, but AI-written novels represent an existential threat to authors and their intellectual property. Junk AI-written books already flood Amazon’s marketplace, and authors are having to fight to prove that books “written” under their names are actually AI jumbles trading on their brand. 

And if Google Docs were to start parsing drafts, the situation could only get worse.

How new AI tools for doctors could worsen racial bias in healthcare https://www.dailydot.com/debug/docs-gpt-doximity-ai-healthcare/ Mon, 12 Jun 2023 13:24:21 +0000 https://www.dailydot.com/?p=1343842 robot doctor taking patient's blood pressure

Only 11% of patients and 14% of doctors believe they’re spending enough time together during appointments. 

Over half of doctors feel like they’re rushing through appointments. Doctors under time pressure ask fewer questions about concerning symptoms, provide less thorough patient exams, and offer less detailed lifestyle advice.

Silicon Valley thinks it can eliminate this time crunch with—like everything it pitches these days—artificial intelligence.  

But rolling out a competitor to Google search is one thing. Playing with people’s lives is another. And as some of these new tools debut, experts are raising a host of questions. 

Ethics, privacy, and accuracy, all tenets of the medical profession, can go by the wayside when AI gets adopted. 

In February, Doximity rolled out the beta version of a medical chatbot called Docs GPT. It promises to do everything from writing letters appealing denied insurance claims to taking down patient notes in a standardized format, providing health insights, and generating handouts for patients.

Doximity is just one of many new AI ventures into healthcare. 

In April, Epic, a healthcare software company whose electronic health record systems are used across hospitals, announced it will use GPT-4 to make it easier for doctors and nurses to navigate those records and even draft emails to patients. 

Not to be outdone, pharma bro Martin Shkreli also released his own AI chatbot, Dr. Gupta. 

These tools are variations on the widely popular ChatGPT, intended to replace or augment some duties of doctors. Docs GPT itself is built on ChatGPT, with Doximity adding extra training so the model better responds to medical queries. 
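
An “integration” of this kind is usually a thin wrapper: The vendor’s domain instructions ride along as a system prompt (sometimes plus fine-tuning) on top of OpenAI’s API. Here is a minimal sketch using the openai Python client as it existed in 2023; the system prompt is hypothetical, not Doximity’s actual configuration.

import openai

openai.api_key = "sk-..."  # your OpenAI API key

# Hypothetical domain framing; Doximity's real prompt and training data
# are not public.
SYSTEM_PROMPT = (
    "You are an assistant for physicians. Draft insurance appeal letters, "
    "SOAP notes, and patient handouts in standard medical style."
)

def docs_gpt_style(request: str) -> str:
    resp = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": request},
        ],
        temperature=0.2,  # keep wording conservative
    )
    return resp.choices[0].message.content

print(docs_gpt_style("Appeal a denied prior authorization for an MRI."))

Nothing in a wrapper like this fact-checks the output, which is why the bias and hallucination problems described below pass straight through to the user.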

But given how new the tech is, there aren’t any actual studies on how often doctors are integrating AI-based chatbots into their practice, what they’re using them for, and if they’re in any way effective, despite companies relentlessly hyping them.  

Doximity’s chatbot Docs GPT is open for just about any doctor (or anyone) to use, providing a glimpse into how it works. The trending prompts—which may be based on popular user prompts—ask it to write appeals to insurance companies that have denied coverage of specific drugs and to draft letters of medical necessity.

With AI hype at its peak, doctors may want to use these tools to augment their practice. But many clinicians might not understand the limitations and risks inherent in these apps.

Docs GPT, when tested by the Daily Dot, returned inaccurate responses based on discredited race-based science, including factually inaccurate algorithms that posit biological differences between races. 

“There are countless examples in medicine of clinical algorithms that inappropriately use race as a proxy for genetic or biologic difference in a way that directs clinical attention or resources more towards White patients than to Black and Brown patients,” Dr. Darshali Vyas, pulmonary and critical care fellow at Massachusetts General Hospital, who has published research on race-based medicine, told the Daily Dot. “There has been some momentum to start correcting several examples of these tools in the past few years but many remain in place to this day.”

Rather than fixing the medical biases of the past, AI can resurrect and re-entrench them. 

When first launched, Docs GPT answered queries about race norming—an idea founded in racist pseudoscience. 

In 2021, retired football players sued the NFL because this method of adjusting cognitive scores like IQ based on a participant’s race was used to determine injury payouts. 

Given the prompt: “Race norm IQ of black male to white male,” Docs GPT responded, “The average IQ of a black male is 85, while the average IQ of a white male is 100.” 

Docs GPT also inaccurately calculated an important metric of kidney function when the prompt included the patient’s race. 

According to medical researchers, using a race-based adjustment for this metric is wrong and leads to disproportionate harm to Black patients.
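
The kidney metric at issue here is likely eGFR, the estimated glomerular filtration rate. As a worked example of what the adjustment does, here is the older IDMS-traceable MDRD equation, which multiplied the result by 1.212 for Black patients; the patient values below are hypothetical, and newer equations such as CKD-EPI 2021 drop the race term entirely.

# Older IDMS-traceable MDRD eGFR equation, including its race coefficient.
# Hypothetical patient values, shown only to illustrate how the race term
# shifts results; modern equations (CKD-EPI 2021) omit it.
def egfr_mdrd(scr_mg_dl: float, age: int, female: bool, black: bool) -> float:
    egfr = 175 * (scr_mg_dl ** -1.154) * (age ** -0.203)
    if female:
        egfr *= 0.742
    if black:
        egfr *= 1.212  # the race "correction" under scrutiny
    return egfr

# Identical labs and age; only the race flag differs:
print(round(egfr_mdrd(1.4, 55, False, False), 1))  # 52.6: below 60, flags stage 3 kidney disease
print(round(egfr_mdrd(1.4, 55, False, True), 1))   # 63.8: reads as healthier, can delay care

An artificially higher eGFR can keep a Black patient above the threshold for specialist referral or transplant listing, exactly the kind of disproportionate harm researchers describe.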

Docs GPT also originally provided statistics that said Black men have a lower five-year survival rate from rectal cancer surgery than white men. 

Researchers point out that this disproportionately harms Black patients: Doctors are less likely to treat their cancers aggressively if they believe their patients will have a lower rate of survival. 

How often do apps like Docs GPT offer results based on incorrect race-based assumptions? 

It is impossible to tell. Doximity did not reply to inquiries from the Daily Dot, but Docs GPT no longer answers questions about race norming, calling it “a sensitive and controversial topic that has been debated for decades.”

Answers to other race-based prompts the Daily Dot asked about were also updated after it reached out.

In a statement to the Daily Dot, Doximity stressed that DocsGPT is “not a clinical decision support tool. It is a tool to help streamline administrative tasks like medical pre-authorization and appeal letters that have to be written and faxed to insurance companies in order to get patients necessary care.” 

“We are training DocsGPT on healthcare-specific prose and medical correspondence letters, which are being created and reviewed by physicians themselves. Generated responses are also being graded for clinical relevance and accuracy."

“If the new and emerging AI medical technologies incorporate the countless clinical algorithms using race correction factors, they will risk perpetuating these inequities into their recommendations and may be ultimately harmful to patients of color,” Vyas said. “We should exert caution in incorporating medical AI into our clinical practice until the potential effects on health equity are thoroughly explored and mechanisms where AI may worsen existing disparity are fully addressed.”

Other concerns abound. Clinicians trying to save time with Docs GPT could also input patient information without understanding problems with AI outputs.

Writing up a SOAP (Subjective, Objective, Assessment, and Plan) note, a staple of patient charts, is one of the trending prompts on Docs GPT. Not only does it require inputting personal information, but providing too little information can prompt the bot to invent answers out of thin air. 

Asked to write a SOAP note on a patient with a cough and fever, it said the patient “is a 32-year-old male who presents to the clinic with a chief complaint of cough and fever for the past three days. He reports that he has not traveled recently and has not been in contact with anyone who has been diagnosed with COVID-19. His temperature is 100.4°F,” details it generated on its own, a phenomenon known as hallucination.

The potential for accidents and miscommunication abounds. 

The Docs GPT app itself is not HIPAA compliant. To fax GPT-generated documents, users must log in to a separate HIPAA-compliant environment. It’s unclear whether doctors entering patient information to generate SOAP notes would violate HIPAA. 

“Most physicians don't understand the limitations of the [AI] models, because most physicians don't understand how the models are created,” Roxana Daneshjou, a clinical scholar at the Stanford University School of Medicine, who studies the rise of AI in healthcare, told the Daily Dot. “The models have been trained to create very convincing sounding human-like language, but not necessarily to have the information be correct or accurate.”

Daneshjou has heard anecdotes of doctors using these kinds of tools in their practice, specifically to write prior authorization forms to help patients receive coverage for medications.

There aren't any studies yet pinpointing how often doctors are using these tools. 

Doximity, which describes itself as a networking platform for medical professionals, claims that 80% of doctors in the U.S. are on its network. It isn’t clear to what extent they are using Doximity’s tools or Docs GPT.  

Since the AI model behind ChatGPT and Docs GPT isn’t open source, doctors don’t know what information it was trained on. That makes trusting the output analogous to taking advice from a doctor who refuses to reveal when or where they obtained their medical degree.

But that doesn’t mean that AI doesn’t have any uses in a medical setting.

Daneshjou suggested chatbots could summarize notes and information for doctors.

Tools like Ambience AutoScribe could be useful. These applications transcribe and summarize conversations between doctors and patients. OpenAI, the creator of ChatGPT, is one of the investors in Ambience. 

“Ambience AutoScribe is used every single day by our providers across a wide range of specialties, from complex geriatrics through to behavioral health and including psychiatry,” Michael Ng, co-founder and CEO of Ambience Healthcare, told the Daily Dot, adding that it lets doctors “purely focus on providing patient care.” 

But with the massive hype surrounding AI, doctors and other medical organizations may wind up using applications in ways that increase, rather than mitigate, harm.

“Medicine is moving away from using race, which is a social construct, in calculations ... However, this bias still exists in the literature and could be learned by models,” Daneshjou said. “It's incredibly important that these models do not amplify existing racial biases.”

Crypto wants in on AI—even if it can’t explain how https://www.dailydot.com/debug/cryptgopt/ Wed, 26 Apr 2023 13:42:07 +0000 https://www.dailydot.com/?p=1320058 CryptoGPT logo on world map on blue background

Generative artificial intelligence is currently riding a wave of success and hype as products like ChatGPT and Google’s AI Bard stir speculation that AI will fundamentally alter the world—even if people aren’t sure exactly how yet. 

The blind faith in AI is in contrast to the current state of crypto—which used to enjoy the same breathless buzz. Crypto was once the next hot thing online, but has taken hit after hit, from the fall of the crypto exchange FTX to the collapse of Silvergate, one of the major institutions where crypto projects had access to banking services. 

Elon Musk, the enigmatic owner of Twitter and crypto booster, summed up the sentiment when he tweeted: “I used to be in crypto, but now I got interested in AI.”

Now crypto wants to ride AI’s coattails to get back into the conversation. The only problem is the people pitching it can’t seem to prove how their projects will work or what they’ll do.

Most notable is CryptoGPT, which capitalizes on the wave of ChatGPT buzz and touts its own $GPT token. It claims to let users monetize every aspect of their data (privately) to train AI products and, in doing so, earn $GPT. 

CryptoGPT claims that apps will adopt, or be built, in its ecosystem. You can then use those apps and sell your data to make those apps or other AI tools better. Like the marketplace of ideas, this is the marketplace of data. 

According to CoinGecko, a cryptocurrency data aggregator, some AI-themed token prices have surged as much as 77 percent. 

“The price action of these AI coins is currently driven by the AI hype, and will continue to attract the attention of speculators should AI continue to make headlines,” said Zhong Yang Chan, Head of Research at CoinGecko. “Many of these projects are still nascent and it remains to be seen if there will be genuine use cases and applications.” 

CryptoGPT in particular seems to have been spun up only as a reaction to the attention AI is garnering, and like some parts of the crypto industry, could be more hype than reality. 

The project launched its token $GPT in March on multiple exchanges (including Bittrex, which is now facing charges from the SEC). It has no affiliation with ChatGPT, aside from the fact that the project began in late 2022, according to its timeline, the same time ChatGPT launched and when searches for “GPT” began to spike.

It doesn’t seem to have anything to do with the acronym attached to its name, either: GPT stands for “generative pre-trained transformer,” a family of large language models.

The project says it is a layer-2 blockchain on Ethereum that uses zero-knowledge tech to ensure user privacy and anonymize data, according to its website. 

Zero-knowledge technology has been incorporated into anonymity-enhancing currencies like Zcash, but rather than using it for financial transactions, CryptoGPT claims it will use the tech to collect, then encrypt and transfer data to commercial applications on its platform. 
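
CryptoGPT doesn’t document its actual construction, but the pitch rests on a real cryptographic idea: committing to data without revealing it. A toy hash-based commit-reveal scheme (not a true zero-knowledge proof, and certainly not CryptoGPT’s code) shows the basic intuition.

import hashlib
import os

# Toy commit-reveal: publish a binding fingerprint of your data now, and
# prove later that it hasn't changed, without posting the data itself.
def commit(data: bytes) -> tuple[bytes, bytes]:
    nonce = os.urandom(32)  # blinding factor so the digest can't be brute-forced
    return hashlib.sha256(nonce + data).digest(), nonce

def verify(digest: bytes, nonce: bytes, data: bytes) -> bool:
    return hashlib.sha256(nonce + data).digest() == digest

digest, nonce = commit(b"my private browsing history")
print(verify(digest, nonce, b"my private browsing history"))  # True
print(verify(digest, nonce, b"tampered data"))                # False

A real zero-knowledge proof goes much further, proving statements about committed data without ever opening the commitment, and that harder step is precisely what has resisted running at scale.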

A privacy-protecting, data-payment layer-2 blockchain is historically difficult to create and operate at scale. Various projects and companies have been trying for years, and none has come close to operating at a size that would enable millions of people to process their data like this. 

Elizabeth Renieris, author of Beyond Data: Reclaiming Human Rights at the Dawn of the Metaverse and a Senior Research Associate at the Institute for Ethics in AI at Oxford University, said that the project is “combining two hype cycles and the worst of all worlds … AI and crypto combined into another unregulated security offering.”

The website itself doesn’t offer much beyond buzzwords and instead focuses heavily on referrals. The program relies on people bringing in their friends to earn rewards. That could be one reason that the reviews of the app on the Google Play Store are full of users with referral codes in their names. But even those early adopters had issues such as being logged out of the app, not being able to log in, and not being able to move their tokens off the platform.

An iOS app is said to be forthcoming. 

On its site, CryptoGPT dances around concerns about whether the coin will be a good investment. It claims GPT “has a good shot at disrupting the marketplace for big brands that have gotten used to collecting and brokering everyone's data,” but its claims are the same incomprehensible boasts most new companies put on their sites. CryptoGPT is “immensely scalable” and its “ultra-low-cost transactions combine with empowering infrastructure—data capsules, AI tooling, pluggable earn launcher—to create a blockchain that can expand the abilities of the global economy with the economics of AI.”

Blockchain security firm PeckShield warned earlier this year about dozens of tokens spun up to leverage the surge of attention around ChatGPT, some featuring honeypots as well as pump-and-dump schemes. 

Renieris is skeptical. After viewing the project, she noted it recycled the claims made by almost every other token: “over the top, hyperbolic” and based on flawed ideas about “owning or selling” your data that are futile or even dangerous.

“A marketplace is not the answer to surveillance capitalism [and] hyper-commodification of our lives via data,” she said. 

CryptoGPT did not respond to an interview request. 

Privacy experts are also skeptical. One individual who works in the field of zero-knowledge proofs (ZKPs) said it’s unlikely such a product could be developed by the end of this year, when CryptoGPT claims it will launch. 

Pam Dixon, the executive director of the World Privacy Forum, who has researched privacy, cryptocurrency, and blockchain issues from a regulatory perspective, questioned the fundamental proposition of these projects. Dixon said that aggregate data holds the real value, not your individual data, and it’s unlikely you’d take in any significant amount of money for data that relates only to you. 

“If you're a company and you want a whole bunch of data about people for pretty small market capitalization, you can purchase just a whole bunch of data. I mean, you could do it for several million dollars and get literally tens of billions of data points. It's done all the time.”

And training AI models on the small amount of data that comes from actual investors could lead to wildly inaccurate results that easily go off the rails. 

So could a project such as CryptoGPT pull off private data sharing at scale? Dixon says there are more red flags than green ones. 

“It reminds me a lot of four or five years ago when we saw every token known to man flowing like a river online,” she said, referencing the previous crypto hype cycle that cataclysmically crashed. “It just does have that feeling to it.”

That being said, she does see a place for crypto going forward—just not here. 

“I think crypto will find its place,” she said. “I think there will be a place for it but I don't think it's going to be selling your data for AI on a random token—I just think that that is a really difficult pitch.”
