Here’s the lowdown: Amazon has already started rolling out two new AI-powered features in the Kindle iOS app in the U.S. for thousands of books. It plans to continue the rollout to Kindle devices and the Android app sometime in 2026.
What is the “Ask This Book” Feature?
According to Amazon, “Ask This Book” “lets you ask questions about the book you’re reading and receive spoiler-free answers. The feature answers questions about the plot, characters, and other relevant details.”
The “Story So Far” feature will supply a recap of what a reader has read thus far in the story.
The Problem with Gen-AI
We’ve all seen generative-AI (gen-AI) large language models (LLMs) invent answers to queries (known as AI hallucinations) and answer questions incorrectly. Amazon’s in-house LLM is unlikely to be immune. Incorrect generated answers already litter the Internet because no LLM knows right from wrong or correct from incorrect.
One of the more glaring issues is what happens when Kindle’s new features get it wrong. The author (you know, the one who actually wrote the story?) will have no way to correct errors. They will be completely shut out, unable to monitor what information these features spew out.
You might ask: so what?
Who cares if it shares a spoiler or generates a wrong answer once in a while?
Well… if you’ve ever read through the reviews section of any book on Amazon, you’ll know that readers are incredibly discerning. They will call out anything they see that’s incorrect. And if they aren’t aware of how the features work, or that those answers were generated purely from scraped content by generative AI, it doesn’t take a genius to figure out this will result in negative star ratings and reviews. And, here again, authors will have no way to correct the information, relay corrected answers to angry readers, or retract inevitable spoilers.
The vast majority of the time, only the authors themselves are going to be able to answer such questions correctly. In addition, authors may not want generative-AI speculating on what a scene or a symbol means in a storyline. They may want to leave that up to the reader to decide.
Do these new features violate copyright law?
You can be certain conversations surrounding copyright law and these new features are happening in publishing houses across the U.S. They are certainly being discussed among the authors I know, and the news is spreading like wildfire.
Personally, I foresee heavy pushback from authors, as well as a class-action lawsuit similar in scope to the Anthropic AI lawsuit.
Discussions will center on whether these features constitute derivative works and direct copyright infringement.
For my part, I think Amazon’s endgame is to circumvent paying human authors by scraping all of the books in the Amazon store and creating an in-house “generate your own book” feature for a nominal fee that will, of course, go straight into Amazon’s pockets. I mean, why bother with human-breathed works when you’ve got a money-making machine that can pump out sub-par AI slop at a fraction of the price?
I’ll go ahead and answer this question right now: Because we wrote it FIRST and we wrote it BETTER.
If you’re a concerned author wanting to make your voice heard, what can you do about this?
If you’re not comfortable with Amazon using your copyrighted works to train their in-house LLM or give answers on your stories without your consent or your ability to monitor outputs, here are some proactive ideas:
- Spread the word about these new features, especially to the big-name authors you follow on social media, at book events, and in your author groups.
- Contact the Authors Guild and ALLi to ask whether they are aware of these features and plan to address the potential legal issues.
- Contact KDP Support Chat (in your KDP dashboard, click Help in the upper right-hand corner) and ask them to remove the feature from your KDP books. When they relay the company line (“This feature is always on to provide readers with a consistent experience across the books they read, so there is no option to un-enroll”), ask them to email you that same information so you have it for your records. When they do, reply with a longer email laying out all of your concerns: your rights as a copyright holder, what they might do with your data in the future, and so on.
- Before you hit send on that email, consider CC’ing the Jeff@Amazon.com email address. While he’s no longer at the helm of Amazon, his email address still works, and eventually someone on the Executive Customer Relations Team will write you back. They, too, will give you the company line, but they will also pass your concerns along to someone higher up. This is what they emailed me: “Thanks for taking time to share your thoughts about Ask this Book. This feature is always on to provide readers with a consistent experience across the books they read, so there is no option to un-enroll. I appreciate your thoughts and will be sure to pass your suggestion along.”
- Voice your concerns on your social media platforms and tag Amazon in your posts.
- Consider adding a disclaimer similar to the one I use below on the copyright page of all of your books:
No generative-AI (artificial intelligence) was used in the writing or creation of this work (content or cover art). The author/publisher expressly prohibits any individual or entity from using this book to train current or future generative-AI technologies or large language models.