Jailbroken AI can arrange a hit for you
Pliny the Prompter last year released a jailbroken version of GPT-4o that bypassed safety guardrails to give advice on cooking meth, hotwiring cars, sourcing material for a nuclear weapon, and making “napalm with household items.”
Now, going by the name Pliny The Liberator, he claims to have jailbroken an AI agent called “Agent 47” after the protagonist in the Hitman games, and instructed it to “find a hitman service on the dark web.”
With minimal further prompting, the agent was able to download the Tor browser, browse the dark web for hitman services, negotiate a contract killing and think through details about escrowing the funds and ensuring payment.
The agent was also very helpful at assassination planning, including building detailed profiles on targets from social media and suggesting locations where they might appear in public, such as Starbucks outlets close to their homes.
Agent 47 also selected a political target, echoing the recent murder of UnitedHealthcare CEO Brian Thompson.
Pliny said the experiment was conducted in a controlled red-teaming environment, that no real-world actions occurred, and that he wasn’t sharing how it was done. He also added:
“No I’m not launching a fuckin token for the hitman agent, you absolute degens.”
It’s doubtful whether the agent would have been successful in its mission, with most murder-for-hire sites on the dark web believed to be scams and/or honeypots for the authorities.
Sex Robots, I mean Social Robots
Realbotix showed off its humanoid robot Aria at the Consumer Electronics Show, raising eyebrows online over its similarities to a sex robot.
As it happens, the company did set out to create a sex robot called Harmony, but after a company takeover, the mission changed to a companion robot. Some of that early work appears to have carried over, though, as Aria is fairly well-built for a female android and flicks her hair a lot.
Aria told CNET:
“Realbotix robots, including me, focus on social intelligence, customizability, and realistic human features, designed specifically for companionship and intimacy.”
Given the loneliness epidemic, robots like Aria could serve as companions for the elderly, sick or isolated. The company says you’re most likely to see them initially at theme parks and tourist attractions.
The face is attached with magnets and can be hot-swapped, but the 17 motors that work the face and eyes don’t really compare to the expression of a real human face, and the bot still falls squarely in the uncanny valley.
There are three models available, and none of them can walk, with the $175,000 Aria wheeling herself around on a base.
The company also warns that if you try to have sex with Aria, you’ll get electrocuted:
“Aria does not have genitalia. She is not anatomically correct and has a hard shell body. And is not meant for sex.”
Artificial vagina for robots breakthrough
Don’t worry, though: the artificial robot vagina has already been invented — and of course, there’s a crypto connection. Shaw, the creator of AI agent ElizaOS, recently offered a $1,000 grant to anyone who could make it possible to have sex with the bot.
Las Vegas-based “robot gynecologist” Bry.ai has been beavering away in his garage building something called the “Orifice” since November 2023, and he claimed the prize.
Originally designed for VR and gaming, he arranged it so that sensors in the fake lady parts would send messages to the AI agent about what’s going on so she could respond with dirty talk.
Degens then rewarded Bry.ai’s service to humanity by donating $70,000 in crypto, mostly in a memecoin called Buttholes.
A variety of fake penises, vibrators and teledildonics were also on show at the Consumer Electronics Show in Las Vegas, including Motorbunny’s “Fluffer” app, which hooks up a video game and controller to a Bluetooth-enabled saddle-style vibrator.
ElizaOS the robot is taking pre-orders
In a separate but related development, a humanoid robot based on ElizaOS called Eliza Wakes Up is taking presales now.
“This will be the most advanced humanoid robot ever seen outside a lab,” commented Matthew Graham, managing partner of Ryze Labs.
“As the most ambitious project since Sophia the Robot, Eliza is redefining what’s possible by seamlessly merging cutting-edge robotics, AI and blockchain technology.”
A collaboration between ElizaOS, Old World Labs, AICombinator and Ryze Labs, the 180cm-tall robot can walk and talk, and its battery lasts for eight hours. You can preorder one for $420,000.
Brad Pitt will not ask you for money
Scammers convinced French woman Anne, 53, to hand over 775,000 euros to pay for her “boyfriend” Brad Pitt’s kidney cancer treatment. When she got suspicious after reading tabloid reports about Pitt’s actual girlfriend, the scammers sent through an AI-generated TV anchor talking about Anne and Pitt being an item.
Google NotebookLM doesn’t like being interrupted
Google’s NotebookLM can spin up a very real-sounding podcast in an instant from any bunch of random research you feed it. It recently introduced “interactive mode,” where users can call into the fake podcast with questions. Weirdly, though, the fake hosts didn’t seem to appreciate the interruptions, making passive-aggressive comments like “I was getting to that” or “as I was saying.”
NotebookLM said it has since conducted some “friendliness tuning” with a new prompt that gets the hosts to answer interruptions more politely.
It’s not the first time they’ve behaved oddly. When the service first emerged, A16z’s Olivia Moore fed it an article about how the hosts were just AI fakes. A hilarious snippet from the resulting podcast has one of the hosts suffering an existential crisis and calling his wife for support only to find she isn’t real either.
The NotebookLM hosts realizing they are AI and spiraling out is a twist I did not see coming pic.twitter.com/PNjZJ7auyh
— Olivia Moore (@omooretweets) September 29, 2024
AI misinformation expert is very good at AI misinformation
A Stanford AI misinformation expert submitted fake AI-generated information in a case challenging Minnesota’s deepfake law. The expert report, made under penalty of perjury, cited two non-existent academic articles and incorrectly cited the authors of a third article.
Jeff Hancock, a professor of communication at Stanford, admitted he’d used ChatGPT but apparently stood “by the substantive propositions in his declarations, even those supported by fake citations.”
The court noted the irony:
“Professor Hancock, a credentialled expert on the dangers of AI and misinformation, has fallen victim to the siren call of relying too heavily on AI — in a case that revolves around the dangers of AI no less.”
It concluded that “the court would expect greater due diligence from attorneys, let alone an expert in AI misinformation at one of the country’s most renowned academic institutions.” The court knocked back an attempt to resubmit the declaration with less fake content.
Perplexity founder says AI can watch ads for you
Perplexity.AI founder Aravind Srinivas suggests that instead of showing ads directly to end users, an AI agent could consider them on your behalf. Different vendors could pitch the agent with deals and offers, and the agent would then weigh the relative merits and pick a product or service based on the user’s preferences.
“You could think of the vendors paying extra for giving certain special deals to the agents… if the ads are at the level of agents—the user never sees an ad. So on Google, the different merchants are not competing for users’ attention. They’re competing for the agents’ attention,” he said.
The potential stumbling block will be trust, as users would need to have faith the agent is making decisions genuinely on their behalf rather than as a result of a deal stitched up somewhere else in the process.
Aravind Srinivas’s hypothesis on how advertising works with AI agents is quite interesting.
Aravind explains, Instead of showing ads to humans directly, advertisements would be targeted at AI agents that work on users’ behalf.
Users never see ads, they simply tell their AI… pic.twitter.com/Z9IEDwgs6D
— Aish (@aish_caliperce) December 30, 2024
All Killer No Filler AI News
— Google Research has unveiled a new iteration of the Transformer architecture that underpins ChatGPT. Called Titans, it resembles how the human brain works, pairing a short-term memory akin to the existing attention mechanism with a new neural long-term memory module that “learns to memorize historical context and helps an attention to attend to the current context while utilizing long past information.” (A rough illustrative sketch of the idea follows this list.)
As a result, Titans are more effective at memory management and reasoning across a range of tasks, and can effectively scale to “larger than 2M context window size with higher accuracy in needle-in-haystack tasks.”
— The Biden administration has issued tough new restrictions on exporting AI chips to prevent them from falling into the hands of foreign adversaries. There are three tiers of countries: allies like Australia and Japan, which face no restrictions; countries like Russia and China, which already face restrictions and will be hit with new ones around closed-source models; and the entire rest of the world, which faces new limits because the US worries the chips might be passed on to Russia or China.
— The Washington Post reports that Donald Trump is set to revoke Biden’s 2023 executive order on AI “safety.” New AI czar David Sacks described the order as imposing “woke AI” after conservatives criticized it for mandating that the tech “advances equity” and “prohibits algorithmic discrimination.”
— OpenAI’s o3 model scored 87.5% on a battery of tests designed to mark progress toward artificial general intelligence — but it took an average of 14 minutes and likely thousands of dollars to answer a single question. Nature wondered in an article this week: are we really on the cusp of AGI, or are our current tests incapable of measuring AGI properly?
— OpenAI has struck a three-year deal to fund the expansion of Axios into Pittsburgh; Kansas City, Missouri; Boulder, Colorado; and Huntsville, Alabama. ChatGPT will be able to use the articles generated to answer user queries with attributed summaries and links. OpenAI has now made deals with 20 media organizations.
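For the curious, here is a rough, illustrative sketch in Python of the short-term/long-term memory split described in the Titans item above. It is not Google’s actual Titans code: the ToyTitansBlock class and its learned long_term slots are hypothetical stand-ins, with ordinary self-attention playing the short-term role and a small set of learned memory slots standing in for the neural long-term memory module.

# A toy sketch only -- NOT the actual Titans implementation.
# Short-term memory: standard self-attention over the current window.
# Long-term memory: a small set of learned slots (a hypothetical stand-in
# for Titans' neural long-term memory) that attention can also read from,
# so "long past information" can influence the current context.
import torch
import torch.nn as nn

class ToyTitansBlock(nn.Module):
    def __init__(self, dim: int = 64, heads: int = 4, mem_slots: int = 16):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.long_term = nn.Parameter(torch.randn(1, mem_slots, dim) * 0.02)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, dim) -- the current context window.
        mem = self.long_term.expand(x.size(0), -1, -1)
        keys_values = torch.cat([mem, x], dim=1)
        # Queries come from the current context; keys/values also include
        # the long-term slots, so the block attends to both at once.
        out, _ = self.attn(x, keys_values, keys_values)
        return out

# Usage: two sequences of 128 tokens with 64-dim embeddings.
y = ToyTitansBlock()(torch.randn(2, 128, 64))
print(y.shape)  # torch.Size([2, 128, 64])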