DeepSeek Might Have Just Killed the Text Tokeniser

Researchers at DeepSeek have introduced DeepSeek-OCR, a new model that explores how visual inputs can help large language models (LLMs) handle longer text efficiently. Instead of feeding text directly into a model, DeepSeek-OCR compresses it into visual tokens, essentially ‘images of text’ that carry the same information in fewer tokens. This approach, called contexts optical compression, could help LLMs overcome one of their biggest limitations.
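As a rough sketch of the idea (not DeepSeek-OCR's actual pipeline), the snippet below renders a passage to an image, splits it into ViT-style patches, and applies an assumed token compressor before comparing against a naive text-token count. The patch size, image resolution, 16x compression factor, and whitespace "tokenizer" are all illustrative assumptions.

```python
# Illustrative sketch only: render text as an image and compare a naive
# text-token count against an assumed visual-token budget. Every figure
# here is an assumption, not DeepSeek-OCR's real architecture.
import textwrap
from PIL import Image, ImageDraw

PATCH = 16                 # assumed ViT patch size in pixels
IMG_W, IMG_H = 1024, 1024  # assumed resolution of the rendered page
COMPRESS = 16              # assumed downsampling of patch tokens before the LLM

passage = ("Large language models pay a steep attention cost per token, so long "
           "documents quickly exhaust the context window. Rendering the same "
           "text as an image lets a vision encoder summarise it with far fewer "
           "tokens than a subword tokenizer would produce. ") * 18

# 1) Naive text-token count (whitespace split as a stand-in for a real tokenizer).
text_tokens = len(passage.split())

# 2) Render the passage onto a fixed-size white canvas.
page = Image.new("RGB", (IMG_W, IMG_H), "white")
ImageDraw.Draw(page).multiline_text(
    (8, 8), textwrap.fill(passage, width=120), fill="black"
)

# 3) A ViT-style encoder splits the image into non-overlapping patches; an
#    assumed compressor then reduces those patch tokens before the decoder.
patches = (IMG_W // PATCH) * (IMG_H // PATCH)
vision_tokens = patches // COMPRESS

print(f"text tokens (naive):     {text_tokens}")
print(f"vision tokens (assumed): {vision_tokens}")
```

With these assumed numbers, the rendered page costs roughly a third of the tokens of the raw text; the real gain depends entirely on how aggressively the vision encoder compresses and how dense the page is.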


Instead of text tokens, the Chinese AI company DeepSeek packs information more efficiently into images, while matching top models on key benchmarks. Read more on t3n.de.

Artificial intelligence has a short-term memory, and that is what slows its progress. Instead of decomposing language into countless text tokens, the Chinese company DeepSeek transforms the information its AI processes into images. Using a novel optical character recognition (OCR) system, machines are supposed to learn to remember more efficiently and for longer (see the back-of-envelope sketch below). The post "The End of Forgetting – DeepSeek lets AI think in images" first appeared on ingenieur.de.

Düsseldorf, Germany
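For a rough sense of what fewer tokens per page buys in context length, here is a back-of-envelope calculation; the window size and per-page token counts are assumed figures, not numbers reported by DeepSeek or the articles above.

```python
# Back-of-envelope context arithmetic with assumed figures.
def pages_in_context(context_window: int, tokens_per_page: int) -> int:
    """How many pages fit in a fixed token budget."""
    return context_window // tokens_per_page

CONTEXT = 128_000       # assumed context window
TEXT_PER_PAGE = 1_000   # assumed text tokens for a dense page
VISION_PER_PAGE = 250   # assumed visual tokens for the same page, rendered

print(pages_in_context(CONTEXT, TEXT_PER_PAGE))    # 128 pages as raw text
print(pages_in_context(CONTEXT, VISION_PER_PAGE))  # 512 pages as rendered images
```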

Bioethics.com broke the news on Wednesday, October 29, 2025.