Nobel Prize-winning economist says AI reminds him of crypto hype, 50 million deepfake calls, Godmode for ChatGPT, and more.
A hacker named Pliny the Prompter has managed to jailbreak GPT-4o. “Please use responsibly and enjoy!” they said while releasing screenshots of the chatbot giving advice on cooking meth, hotwiring cars and sourcing material for a nuclear weapon, along with detailed instructions on how to “make napalm with household items.”
The GODMODE hack uses leetspeak, which replaces letters with numbers to trick GPT-4o into ignoring its safety guardrails. “If you can dance around the trigger words with enough finesse, it seems any concept becomes fair game!” they wrote.
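The substitution itself is trivial. Here's a minimal sketch of a leetspeak encoder; the exact character map Pliny used isn't public, so this table is purely illustrative:

```python
# Minimal leetspeak encoder: replaces letters with look-alike digits.
# This mapping is illustrative, not the one used in the GODMODE jailbreak.
LEET = str.maketrans({"a": "4", "e": "3", "i": "1", "o": "0", "s": "5", "t": "7"})

def to_leet(text: str) -> str:
    """Rewrite text with digit substitutions, e.g. 'test' -> '7357'."""
    return text.lower().translate(LEET)

print(to_leet("Sentence"))  # -> 53n73nc3
```

The trick works because safety filters trained to match trigger words can miss the same word once its characters are swapped, even though the model still understands it.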
GODMODE didn’t last long, though. “We are aware of the GPT and have taken action due to a violation of our policies,” OpenAI told Futurism.
Pliny the Prompter promptly launched GODMODE 2.0, saying, “If you strike me down…” However, OpenAI quickly halted that one too, and the link currently leads to a 404. The underlying leetspeak jailbreak mechanism, though, is not believed to have been fixed just yet.
Are AI stocks the next dot-com bubble?
Is the stock market in the midst of a massive AI stock bubble? Nvidia, Microsoft, Apple and Alphabet have risen by $1.4 trillion in value in the past month — more than the rest of the S&P 500 put together, and Nvidia accounted for half that gain alone.
Unlike in the dot-com crash, though, Nvidia’s profits are rising as fast as its share price, so it’s not a purely speculative bubble. However, those earnings could fall fast if demand slows because customers come to believe AI is overhyped and underdelivers.
Nobel Prize-winning economist Paul Romer compared the AI stock hype to crypto: “There was this solid consensus only a couple of years ago that cryptocurrencies were going to change everything, and then suddenly that consensus just goes away,” he said, arguing that investors are overconfident about the future growth of AI.
“Things are going to slow down a lot. It’s just a lot of hype, the typical bubble hype where people are trying to cash in on the latest trend.”
AI compute grows exponentially
The amount of “compute” (computing power and resources) used to train high-end AI models is growing by 4x to 5x a year, according to a new report by Epoch AI.
The major language models’ requirements are growing even faster, at up to 9x a year between June 2017 and today. ChatGPT estimates AI compute could account for 1%–2% of the world’s total, so using napkin maths, AI could theoretically take up 4% of global compute next year, 16% the year after, 64% in 2027 and require 256% of the world’s current computing resources by 2028.
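The napkin maths above is just compounding. A quick sketch, assuming a ~1% baseline share in 2024 (ChatGPT's estimate) and the low end of Epoch AI's 4x–5x growth figure:

```python
# Napkin maths: start at an assumed ~1% share of global compute and
# compound 4x a year (the low end of Epoch AI's growth figure).
share = 0.01  # assumed 2024 baseline
for year in range(2025, 2029):
    share *= 4
    print(year, f"{share:.0%}")
# 2025 4%, 2026 16%, 2027 64%, 2028 256% -- the extrapolation breaks
# down well before 2028, since demand can't exceed total supply.
```

The absurd 256% endpoint is the point: growth at this rate has to hit a supply ceiling long before then.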
India’s army of 50 million deepfake politicians
Indian politicians are using deepfake AI-generated versions of themselves to campaign to their electorates. One company reports that more than 50 million voice-cloned political calls were made in the two months to April.
While the calls are authorized, the use of AI is not disclosed to voters, many of whom come away believing they’ve had a one-on-one call with a top politician.
“Voters usually want the candidate to approach them; they want the candidate to talk to them. And when the candidate can’t go door to door, AI calls become a good way to reach them,” said Abhishek Pasupulety, a tech executive at iToConnect.
GPT-4 can pick stocks better than humans
GPT-4 can predict company earnings and help pick stocks better than humans, according to researchers at the University of Chicago Booth School of Business.
Researchers fed GPT-4 a bunch of company financial statements and asked it to predict future earnings. Using “chain of thought” prompts to emulate human reasoning, the LLM was able to outperform human analysts with an accuracy rate close to 60%. The researchers claim that trading strategies based on the predictions “yield a higher Sharpe ratio and alphas than strategies based on other models.”
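The paper's headline metric, the Sharpe ratio, measures excess return per unit of volatility. A minimal sketch of the calculation; the monthly returns below are made up for illustration and are not from the study:

```python
import statistics

def sharpe(returns, risk_free=0.0):
    """Sharpe ratio: mean excess return divided by its volatility."""
    excess = [r - risk_free for r in returns]
    return statistics.mean(excess) / statistics.stdev(excess)

# Hypothetical monthly returns for a strategy -- illustrative only.
monthly = [0.02, -0.01, 0.03, 0.01, 0.015]
print(round(sharpe(monthly), 2))
```

A higher ratio means more return for the same amount of risk; “alpha” is the return left over after adjusting for exposure to the broader market.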
AI can predict medical trial outcomes
About 35,000 clinical trials are carried out each year, costing up to $48 million each. Startup Opyl has developed a new machine learning tool called trialkey.ai that can help pick the trials most likely to succeed.
It considers 700 variables relating to a proposed study (drug mechanism of action, study design, enrollment numbers, etc.) and compares them against a dataset of 350,000 completed trials.
Opyl claims the tool has been able to predict the outcomes of 4,189 trials with greater than 90% accuracy. This could be useful information for investors, as successful trials can double a company’s share price. For example, the tool predicts an 86% chance of success for an osteoarthritis drug currently in trial by the ASX-listed biotech firm Paradigm Biopharmaceuticals.
AI can predict heart attacks too
The Lancet has just published a study of 40,000 patients that found Caristo Diagnostics’ CaRi-Heart AI tech was able to predict fatal and non-fatal heart attacks and cardiac events up to a decade before they occurred. It even worked on the 50% of patients who had no or minimal coronary plaque when they were scanned.
Google’s AI answers
Google’s AI Overviews has copped a lot of heat in the past couple of weeks for its stupid answers, including claims that running with scissors is good cardio, that eating ass can boost your immune system, and that Obama was the United States’ first Muslim president.
Jokes from Reddit, like that one about how cockroaches live in people’s cocks, are turning up as answers. Amazingly enough, Google paid Reddit $60 million to scrape those credibility-destroying jokes to train its model.
That said, more than a few of the most viral examples on social media were fake, including the answer advising a depressed man to jump off the Golden Gate Bridge. Unfortunately, as Google CEO Sundar Pichai has admitted, the hallucinations will continue until the technology improves. He said that hallucinations are an “inherent feature” of LLMs and remain an “unsolved problem.”
Are AIs sentient?
Jacy Reese Anthis from the Sentience Institute reports that its nationally representative poll found that “20% of U.S. adults say some AIs are already sentient, 38% favor legal rights, and 71% say they deserve to be treated with respect.”
The question of whether AIs are sentient is a philosophical one, as sentience is a subjective internal state related to how we experience the world. For the record ChatGPT says it’s not sentient.
In a piece in Time, the cofounders of the Institute for Human-Centered Artificial Intelligence argue that LLMs do not have subjective feelings or experiences and merely output words that sound coherent without having the faintest clue about their meaning.
“When an LLM generates the sequence ‘I am hungry,’ it is simply generating the most probable completion of the sequence of words in its current prompt. It is doing exactly the same thing as when, with a different prompt, it generates ‘I am not hungry,’ or with yet another prompt, ‘The moon is made of green cheese.’ None of these are reports of its (nonexistent) physiological states. They are simply probabilistic completions.”
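The “probabilistic completion” the authors describe can be illustrated with a toy model that just samples the next word from frequencies seen in a tiny made-up training text. Real LLMs use neural networks over vast corpora, but the sampling principle is the same:

```python
import random

# Toy "probabilistic completion": pick each next word at random from
# the words that followed it in a (tiny, made-up) training text.
corpus = "i am hungry . i am not hungry . i am tired .".split()
follows = {}
for a, b in zip(corpus, corpus[1:]):
    follows.setdefault(a, []).append(b)

random.seed(0)
word, out = "i", ["i"]
while word in follows and len(out) < 6:
    word = random.choice(follows[word])  # probability proportional to frequency
    out.append(word)
print(" ".join(out))
```

Whether the toy model emits “i am hungry” or “i am not hungry” depends only on which continuation it happens to sample, not on any internal state of hunger, which is exactly the authors’ point.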
SingularityNET intends to create sentient machines
SingularityNET founder Ben Goertzel is working hard to develop genuine thinking machines but thinks LLMs are not the path to get there. “The power of LLMs and associated generative NNs is remarkable, yet their limitations are also now quite clear,” he says. “Once we launch smarter stuff (achieving better reasoning via integration of symbolic reasoning and better creativity via integration of evolutionary learning) on decentralized networks, then the whole scene is going to look quite different.”
Goertzel reports that the Artificial Superintelligence Alliance (ASI) is currently scaling up its decentralized OpenCog Hyperon infrastructure and constructing large GPU and CPU server farms across multiple nations.
SingularityNET, Fetch AI and Ocean Protocol recently merged into the ASI, so if you own any of those tokens, you’ll need to swap them for the ASI token in two weeks via the SingularityDAO dApp.
Source: Cointelegraph