🔤

How People Are Using AI For Language Translation

In 2025, people rely on live translation in meetings, on devices, and in creator workflows: Google Meet translates as you speak, Apple is adding live translation, and one-click tools publish stories in nine languages. Users build Whisper-based interpreters for on-site conversations and report that LLMs handle context better than Google Translate; creators localize with ElevenLabs voices and mobile recorders like Notta Memo; teams add languages such as Polish, Romanian, German, Portuguese, and Hindi; and SignGemma targets ASL-to-English translation for accessibility.

🤖 ai summary based on 18 tweets

Popular demos from official product accounts, team members, or affiliated creators.


Bolna (@bolna_dev) is enabling 500+ Indian businesses to automate calls with multilingual, realistic voice AI agents. Half a million monthly calls are already running on Bolna’s platform! https://t.co/GaC5PKwMoY Congrats on the launch, @maitreya_wagh & @xan_ps! https://t.co/bZCzURJxSw

🌐 TAMKIN: SILENCE ENDS HERE🧏 600M+ people speak sign language. Most of the world doesn’t speak back. Tamkin does. 💠 Real-time sign language translation. 💠 AI-powered accessibility. 💠 Built for the 600M+ unheard voices. 💠 Websites, apps, hospitals, schools — all connected. Runs on $TSLT. Fuels the inclusion. 💠 10% of profits go to investors. 💠 Supply shrinks through burn & swap. 💠 Holding $TSLT = owning the system. Built for real change. Backed by $TSLT. Tamkin connects the world.

We've been working with Yoto to bring more stories to more kids. Their players are in 200+ countries, but most stories support only three languages. Together, we’re changing this. Launching now: Polish, Romanian, German, Portuguese and Hindi. One day: every language. https://t.co/7mKTpNjDDG

We built AI glasses software that gives you superhuman intelligence. Imagine instantly becoming the most knowledgeable person in any meeting, calculating large numbers in seconds, and understanding everything from physics and history to philosophy, economics, and more. Beta app out now.

The race for LLM "cognitive core": a few-billion-param model that maximally sacrifices encyclopedic knowledge for capability. It lives always-on and by default on every computer as the kernel of LLM personal computing. Its features are slowly crystallizing:

- Natively multimodal text/vision/audio at both input and output.
- Matryoshka-style architecture allowing a dial of capability up and down at test time.
- Reasoning, also with a dial (system 2).
- Aggressively tool-using.
- On-device finetuning LoRA slots for test-time training, personalization, and customization.
- Delegates and double-checks just the right parts with the oracles in the cloud if internet is available.

It doesn't know that William the Conqueror's reign ended on September 9, 1087, but it vaguely recognizes the name and can look up the date. It can't recite the SHA-256 of the empty string as e3b0c442..., but it can calculate it quickly should you really want it. What LLM personal computing lacks in broad world knowledge and top-tier problem-solving capability it will make up in super-low interaction latency (especially as multimodal matures), direct/private access to data and state, offline continuity, and sovereignty ("not your weights, not your brain"): many of the same reasons we like, use, and buy personal computers instead of having thin clients access a cloud via remote desktop.
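The SHA-256 point above is the kind of fact a small model should compute rather than memorize, and it is easy to check locally; a minimal sketch using Python's standard `hashlib`:

```python
import hashlib

# SHA-256 of the empty byte string: a fixed, well-known constant
# that any machine can compute instantly instead of memorizing.
digest = hashlib.sha256(b"").hexdigest()
print(digest)  # e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855
```

The first eight hex characters are the `e3b0c442...` prefix quoted in the tweet.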

Introducing the ElevenLabs mobile app for iOS and Android. The most powerful AI voice tools, now in your pocket. Generate studio-quality voiceovers for your videos in seconds. Built for creators, educators, and professionals. https://t.co/aodeTIgJc7

2️⃣ SignGemma is a sign language understanding model that’s coming later this year 🤟🏼 It’s a massively multilingual model that’s best at translating ASL into English text, enabling further development of tech access for Deaf and Hard of Hearing users. 🧏 Share your feedback and interest in early testing here: https://t.co/BCcsm3yUqf
