How an AI-written book shows why the tech 'terrifies' creatives
For Christmas I received an intriguing gift from a friend - my very own "best-selling" book.
"Tech-Splaining for Dummies" (terrific title) bears my name and my photo on its cover, and it has glowing reviews.
Yet it was written entirely by AI, with a few simple prompts about me supplied by my friend Janet.
It's an interesting read, and hilarious in parts. But it also meanders quite a lot, and sits somewhere between a self-help book and a stream of anecdotes.
It mimics my chatty style of writing, but it's also a bit repetitive and very verbose. It may have gone beyond Janet's prompts in drawing on data about me.
Several sentences start "as a leading technology journalist ..." - cringe - which could have been scraped from an online bio.
There's also a strange, repeated hallucination in the form of my cat (I have no pets). And there's a metaphor on almost every page - some more random than others.
There are lots of companies online offering AI-book writing services. My book was from BookByAnyone.
When I contacted the company's president, Adir Mashiach, based in Israel, he told me he had sold around 150,000 personalised books, mainly in the US, since pivoting from compiling AI-generated travel guides in June 2024.
A paperback copy of your own 240-page bestseller costs £26. The company uses its own AI tools to generate them, based on an open source large language model.
I'm not asking you to buy my book. In fact you can't - only Janet, who created it, can order any further copies.
There is currently no barrier to anyone creating a book in anyone's name, including celebrities - although Mr Mashiach says there are guardrails around violent content. Each book contains a printed disclaimer stating that it is fictional, created by AI, and designed "entirely to bring humour and happiness".
Legally, the copyright belongs to the company, but Mr Mashiach stresses that the product is intended as a "personalised gag gift", and the books are not sold on further.
He hopes to broaden his range, creating different genres such as sci-fi, and perhaps offering an autobiography service. It's designed to be a light-hearted form of consumer AI - selling AI-generated goods to human customers.
It's also a bit frightening if, like me, you write for a living. Not least because it probably took less than a minute to generate, and it does, in parts at least, sound just like me.
Musicians, authors, artists and actors worldwide have expressed alarm about their work being used to train generative AI tools that then churn out similar content based on it.
"We need to be clear, when we are discussing data here, we actually suggest human creators' life works," says Ed Newton Rex, creator of Fairly Trained, which campaigns for AI firms to regard developers' rights.
"This is books, this is short articles, this is pictures. It's masterpieces. It's records ... The entire point of AI training is to learn how to do something and after that do more like that."
In 2023 a tune including AI-generated voices of Canadian singers Drake and The Weeknd went viral on social networks before being pulled from streaming platforms because it was not their work and they had actually not consented to it. It didn't stop the track's creator attempting to nominate it for a Grammy award. And even though the artists were phony, it was still extremely popular.
"I do not think the use of generative AI for creative functions must be banned, but I do think that generative AI for these functions that is trained on people's work without approval must be prohibited," Mr Newton Rex includes. "AI can be very effective however let's build it morally and relatively."
In the UK some organisations - including the BBC - have chosen to block AI developers from trawling their online content for training purposes. Others have decided to collaborate - the Financial Times has partnered with ChatGPT developer OpenAI, for example.
The UK government is considering an overhaul of the law that would allow AI developers to use creators' content on the internet to help develop their models, unless the rights holders opt out.
Ed Newton Rex describes this as "madness".
He points out that AI can make advances in areas like defence, healthcare and logistics without trawling the work of authors, journalists and artists.
"All of these things work without going and altering copyright law and ruining the livelihoods of the nation's creatives," he argues.
Baroness Kidron, a crossbench peer in the House of Lords, is also strongly against removing copyright law for AI.
"Creative markets are wealth creators, 2.4 million tasks and a whole lot of pleasure," states the Baroness, who is also an advisor to the Institute for Ethics in AI at .
"The federal government is weakening one of its best performing industries on the vague pledge of development."
A government spokesperson said: "No move will be made till we are absolutely positive we have a useful strategy that provides each of our objectives: increased control for right holders to assist them accredit their content, access to high-quality material to train leading AI models in the UK, and more openness for right holders from AI designers."
Under the UK government's new AI plan, a national data library containing public data from a wide range of sources will also be made available to AI researchers.
In the US the future of federal rules to regulate AI is now up in the air following President Trump's return to the White House.
In 2023 Biden signed an executive order that aimed to boost the safety of AI with, among other things, firms in the sector required to share details of the workings of their systems with the US government before they are released.
But this has now been repealed by Trump. It remains to be seen what Trump will do instead, but he is said to want the AI sector to face less regulation.
This comes as a number of lawsuits against AI firms, and particularly against OpenAI, continue in the US. They have been brought by everyone from the New York Times to authors, music labels, and even a comedian.
They claim that the AI firms broke the law when they took their content from the internet without their consent, and used it to train their systems.
The AI companies argue that their actions fall under "fair use" and are therefore exempt. There are a number of factors which can constitute fair use - it's not a straightforward definition. But the AI sector is under increasing scrutiny over how it gathers training data and whether it should be paying for it.
If all this wasn't enough to ponder, Chinese AI firm DeepSeek has shaken the sector over the past week. It became the most downloaded free app on Apple's US App Store.
DeepSeek claims that it built its technology for a fraction of the cost of the likes of OpenAI. Its success has raised security concerns in the US, and threatens America's current dominance of the sector.
As for me and a career as an author, I think that for now, if I really want a "bestseller" I'll still have to write it myself. If anything, Tech-Splaining for Dummies highlights the current weakness of generative AI tools for bigger projects. It is full of inaccuracies and hallucinations, and it can be quite hard to read in parts because it's so long-winded.
But given how quickly the tech is evolving, I'm not sure how long I can remain confident that my considerably slower human writing and editing skills are any better.