🧱 KIRI Engine releases 3DGS to Mesh 2.0, turning any object into a 3D model
KIRI Engine has launched 3DGS to Mesh 2.0, a breakthrough AI model that lets anyone scan real-world objects and instantly convert them into editable 3D models. The system uses a refined pipeline that merges mesh generation, normal prediction, and reflection filtering for ultra-realistic results.
🔸 Generates high-fidelity 3D meshes with accurate lighting and textures straight from a phone camera.
🔸 Combines 3D Gaussian Splatting (3DGS) with optimized mesh reconstruction for smoother surfaces.
🔸 Models can be exported and edited in most 3D software, ideal for game devs, artists, and product designers.
🔸 The app is free to use on both Android and iOS, making high-quality 3D capture accessible to everyone.
AI photogrammetry just leveled up: 3D creation is becoming as simple as taking a photo.
⌨️ A viral TikTok just taught the internet how to properly delete text
A short TikTok clip went viral after revealing that pressing Ctrl + Backspace deletes entire words instead of single letters, something millions of users apparently never knew.
🔸 The video hit 30 million views in days, sparking disbelief across social media.
🔸 Even Microsoft’s official account commented, admitting they “didn’t know” about the shortcut.
🔸 The trick works across most text editors, browsers, and messaging apps, saving seconds that now feel like years of lost time.
The internet just collectively discovered productivity’s biggest cheat code, and yes, the world will never be the same.
🎧 David AI secures $50M to expand its global audio data lab
San Francisco–based David AI, a YC-backed audio data research company, has raised $50M in Series B funding led by Meritech and NVIDIA, with participation from Alt Capital, First Round, Amplify Partners, and Y Combinator. The round brings total funding to $80M.
Founded in 2024 by ex-Scale AI engineers Tomer Cohen and Ben Wiley, David AI calls itself the world’s first dedicated audio data research lab. The company builds large, diverse datasets that capture global human speech — across languages, emotions, and real-world environments — to train next-generation voice and audio AI models.
🔸 The new capital will fund research expansion, team growth, and deeper collaboration with global AI and hardware firms.
🔸 David AI’s datasets already power voice models used by top AI labs and several “Mag 7” tech giants.
🔸 The company recently surpassed an eight-figure annual revenue run rate.
🔸 Competitors include Defined.ai, PublicAI, Human Native AI, Ydata, and Syntheticus.
David AI positions itself as the data backbone for the coming wave of audio-first AI, powering the interfaces that will connect humans and machines in the physical world.
⚡️ Startups don’t buy AI, they buy speed
a16z and Mercury analyzed transactions from 200,000 startups (June–August 2025) to understand how companies actually spend on AI tools, especially for note-taking, design, coding, and communication. The findings reveal a clear trend: startups aren’t chasing models or hype, they’re investing in speed and efficiency.
🔸 “For everyone” products dominate 60% of AI expenses: OpenAI, Anthropic, Perplexity, Notion, Manus. Assistants and “smart workspaces” remain a competitive, leaderless category.
🔸 The creative stack (Freepik, ElevenLabs, Canva, Photoroom, Midjourney, Descript, Opus Clip, CapCut) now defines daily work. Content creation isn’t a separate role anymore.
🔸 “Vibe-coding” tools like Replit, Cursor, Lovable, and Emergent are mainstream. Replit earns 15× more than Lovable, as companies pay for faster prototyping instead of large dev teams.
🔸 Vertical AI tools are turning into “AI employees”: Crosby Legal (law), 11x (GTM), Alma (immigration). Startups prefer automation over contractors or hires.
🔸 Nearly 70% of top AI tools began as consumer apps and scaled bottom-up; the B2C-to-B2B path now takes just 1–2 years.
AI has stopped being about futuristic tech: the only metric that matters now is how fast it delivers results.
🏠 Design your dream home, no architect needed
Interior designers and soon-to-be homeowners, meet HomeByMe, a 3D home design platform that lets you build and furnish your dream space just like in The Sims, but with real-world accuracy.
🔸 You can draw your home layout, add walls, doors, and windows directly in 3D.
🔸 The platform includes thousands of real furniture and décor items from major brands.
🔸 Everything is drag-and-drop, so you can instantly see how your dream space looks and feels.
🔸 Projects can be shared with designers or contractors for easy collaboration.
🔸 It’s perfect for planning renovations, new homes, or even testing interior aesthetics.
It’s basically The Sims for real life, but this time your living room isn’t fictional.
⚙️ Booking.com, Spotify, and Figma now live inside ChatGPT
🔸 Apps work as native chat integrations: no installs, no separate mode.
🔸 OpenAI also launched an Apps SDK, letting developers build custom chat-based apps.
🔸 It’s essentially the next iteration of plugins, but more stable and natively integrated.
🔸 Monetization isn’t live yet, though Altman hinted at “various ways” to earn in the future.
🔸 It’s unclear if brands will be able to pay for higher visibility or priority placement in ChatGPT results.
OpenAI is reviving the plugin dream, hoping apps inside ChatGPT succeed where plugins fizzled out.
⚛️ Harvard builds quantum machine that runs for 2 hours nonstop
Physicists at Harvard have created the first quantum computer capable of continuous operation for over two hours, shattering the previous record of just 13 seconds.
🔸 The team, led by Mikhail Lukin, solved a key challenge called atomic loss, where qubits (atoms) disappear due to heat, field errors, or gas collisions.
🔸 Their system uses “optical lattice conveyor belts” and “optical tweezers” to automatically replace lost qubits in real time.
🔸 New atoms instantly sync with the state of existing ones, preserving quantum information without rebooting.
🔸 The machine generates 300,000 atoms per second, maintaining about 3,000 active qubits during operation.
🔸 Researchers believe this approach could enable quantum computers with near-unlimited uptime within a few years.
Harvard’s quantum “conveyor belt” may have just solved the biggest bottleneck in making stable, practical quantum machines.
✅ Top 50 most valuable private companies today
14 of them did not exist 5 years ago.
📞 AOL’s dial-up goes silent after 34 years, the end of an internet era
America Online has officially shut down its iconic dial-up Internet service, marking the end of a 34-year chapter that once defined how millions first logged onto the web. The final modem screech echoed this week, closing the curtain on a service that introduced the world to “You’ve got mail.”
🔸 At its peak in the late 1990s, AOL connected over 23 million users through its dial-up modems and CDs mailed to nearly every U.S. household
🔸 The shutdown ends both AOL’s Internet access and its classic AOL Dialer software, though some competitors like NetZero and Juno still offer dial-up options
🔸 The decision follows the ongoing shift toward broadband and fiber, leaving rural users and nostalgia-driven collectors as the last holdouts
🔸 AOL’s parent, Yahoo (under Apollo Global Management), said the closure reflects the company’s evolution toward modern digital media and advertising
AOL didn’t just sell Internet access, it sold the feeling of being online for the first time. Now, that sound of a modem connecting fades into history, replaced by a permanent broadband hum.
🚀 Mira Murati builds Thinking Machines, AI infrastructure for everyone
Former OpenAI CTO Mira Murati has launched Thinking Machines, a public benefit startup that raised $2B at a $12B valuation, before releasing a single product. The company aims to democratize how AI models are customized and deployed.
🔸 Focused on infrastructure for fine-tuning open models, not creating ever-larger proprietary ones
🔸 Built by a hand-picked team so loyal that engineers reportedly turned down $50M–$1.5B offers from Zuckerberg
🔸 Structured as a public benefit corporation, signaling long-term social and ethical commitments
🔸 Backed by a star roster of investors betting on Murati’s track record from OpenAI and Tesla
🔸 Aims to make AI development accessible to smaller companies, challenging the “winner-takes-all” model
Murati’s journey from Albania to Tesla to OpenAI’s helm shows how engineering rigor can outpace pedigree. Now, she’s betting that the next AI revolution won’t come from bigger models, but smarter infrastructure.
🧠 Richard Sutton vs LLMs: The Bitter Debate
In a recent interview with Dwarkesh Patel, AI pioneer and Turing Award laureate Richard Sutton surprised many by arguing that large language models still don’t embody the Bitter Lesson.
Back in 2019, Sutton’s now-legendary essay “The Bitter Lesson” argued that real AI progress comes not from hand-coded human knowledge, but from scaling computation and general learning methods. It became a cornerstone idea in modern ML thinking, and LLMs were widely seen as its perfect embodiment.
So why does Sutton disagree?
🔸 He believes LLMs still rely too heavily on human-created data - data that can run out and carry biases.
🔸 Unlike humans or animals, they don’t learn through continuous real-time interaction with their environment.
🔸 In his view, true AI must be able to learn autonomously, not just be fine-tuned on curated text.
Andrej Karpathy responded with a thoughtful counterpoint. He noted that animals aren’t truly “blank slates” either: evolution preloads them with survival knowledge. In that sense, LLM pretraining could be viewed as an algorithmic version of evolution itself.
Karpathy concluded that Sutton’s ideal, a perfectly self-learning system, may be more of a philosophical north star than an attainable endpoint. Still, Sutton’s call for new paradigms is a timely reminder that scaling alone isn’t the whole story.
This exchange will likely be remembered as one of the defining moments in the ongoing debate over what “real intelligence” actually means.
🔍 Wikipedia’s knowledge engine, rebuilt for AI
Wikimedia Deutschland, Jina.AI, and DataStax unveiled the Wikidata Embedding Project, a revamped interface to make Wikipedia’s 120 million+ entries AI-ready. The goal is to let LLMs query not just keywords, but meaning, relationships, and context.
🔸 Converts Wikidata into vector embeddings, enabling semantic search rather than just keyword or SPARQL queries
🔸 Supports the Model Context Protocol (MCP) so AI systems can plug in Wikipedia as a live knowledge source
🔸 Queries return richer context, e.g. “scientist” yields related fields, translations, images, linked concepts
🔸 Openly hosted (on Toolforge) and free for developers to use
🔸 Designed to help retrieval-augmented generation (RAG) systems ground answers in verified knowledge
With this, Wikipedia moves from passive data pile to active real-time feed for AI. The irony: the world’s largest crowdsourced encyclopedia is becoming one of AI’s most reliable knowledge backbones.
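For intuition, here’s what embedding-based semantic lookup looks like in minimal form. This is a generic sketch using the sentence-transformers library with a small off-the-shelf model, not the Wikidata Embedding Project’s actual endpoint or data:

```python
# Generic illustration of embedding-based semantic search (NOT the
# Wikidata Embedding Project's API): encode entity descriptions and a
# free-text query into vectors, then rank entities by cosine similarity.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")  # any text-embedding model works

entities = [
    "Marie Curie: physicist and chemist, pioneer of radioactivity research",
    "Ada Lovelace: mathematician, wrote the first published computer algorithm",
    "Mount Everest: highest mountain above sea level, located in the Himalayas",
]
query = "scientist who studied radiation"

entity_vecs = model.encode(entities, convert_to_tensor=True)
query_vec = model.encode(query, convert_to_tensor=True)

# Cosine similarity gives a semantic relevance score for each entity,
# matching meaning rather than exact keywords.
scores = util.cos_sim(query_vec, entity_vecs)[0]
for entity, score in sorted(zip(entities, scores.tolist()), key=lambda x: -x[1]):
    print(f"{score:.3f}  {entity}")
```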
🎨 How to Avoid the “AI Look” in Generated Images
You can make AI-generated images look more natural by controlling palette, lighting, film style, and texture, essentially giving the model a human‑photography cheat sheet.
Palette: natural, muted, no neon or HDR. Example: "earth tone palette, saturation -15%, no acidic colors."
Light: soft diffused daylight, color temperature. Example: "natural daylight 5600K, no glare or bloom."
Film/grading: film or cinematic look emulation. Examples: "Kodak Portra 400," "Fujifilm Pro 400H," "cinematic grade, low contrast."
Exposure/contrast: avoid "overexposed" HDR. Example: "low-medium contrast, exposure -1/3 EV, preserved shadows and highlights."
Optics/angle: realistic lens and DOF. Example: "35mm f/4, natural depth of field, no oversharpening."
Textures/"noise": a little grain instead of plastic. Example: "light film grain, no plastic skin or smoothing."
Prohibitions (negative cues): "no neon saturation, no gloss, no oversharpen, no HDR, no bloom, no plastic skin, no synthetic glow, no acid-blue shadows."
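As a rough illustration, here’s how those cues might be assembled into positive and negative prompts in code. The subject line and model name are placeholders, and how negative cues are passed depends on your generator (diffusers-style pipelines accept a negative_prompt argument; many hosted tools expect everything in one prompt string):

```python
# Minimal sketch: assembling the "anti-AI-look" cues into prompts.
# The subject and model id below are illustrative placeholders.
positive_cues = [
    "portrait of a street musician",          # your actual subject goes here
    "earth tone palette, saturation -15%",
    "natural daylight 5600K, soft diffused light",
    "Kodak Portra 400, cinematic grade, low contrast",
    "exposure -1/3 EV, preserved shadows and highlights",
    "35mm f/4, natural depth of field",
    "light film grain",
]
negative_cues = [
    "neon saturation", "gloss", "oversharpen", "HDR", "bloom",
    "plastic skin", "synthetic glow", "acid-blue shadows",
]

prompt = ", ".join(positive_cues)
negative_prompt = ", ".join(negative_cues)
print(prompt)
print("NOT:", negative_prompt)

# One possible toolchain (requires a GPU and a model download):
# from diffusers import StableDiffusionPipeline
# pipe = StableDiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-2-1")
# image = pipe(prompt, negative_prompt=negative_prompt).images[0]
```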
💦 Corintis brings micro-liquid cooling to AI chips
Swiss startup Corintis is tackling one of AI’s biggest bottlenecks, chip overheating, with cooling built directly inside processors. The company has raised $24M Series A at a ~$400M valuation, with Intel’s CEO on its board and Microsoft as an early tester.
🔸 Liquid flows through microchannels etched into the chip, pulling heat from hotspots.
🔸 Up to 3× more efficient than traditional fans or heat sinks.
🔸 Cuts both energy and water use in data centers.
🔸 Compatible with existing infrastructure, with potential for full chip integration.
🔸 European production capacity targeted at 1M wafers per year.
🔸 Cooling can account for 20%+ of chip costs, a huge value driver as AI demand surges.
By solving heat at the source, Corintis could turn cooling from a cost burden into one of the most strategic levers in the AI hardware race.
🛒 OpenAI brings shopping into ChatGPT
OpenAI has rolled out native product purchases inside ChatGPT, starting with Etsy integration in the US.
🔸 Users can click “Buy” in chat, with payments charged to a linked card and fulfilled through the seller’s own system.
🔸 Powered by the new open-source Agentic Commerce Protocol, designed to standardize AI-driven transactions.
🔸 OpenAI plans to expand beyond Etsy and outside the US in the coming months.
🔸 Google recently demoed a similar approach, signaling a race to define AI-native commerce.
After this launch, ChatGPT moves from conversation into direct consumer transactions, turning chat into checkout.
🔔 Google launches AI fitting room for virtual shoe try-ons
Google has rolled out a new AI-powered shoe try-on feature, letting users see how sneakers look on their own feet before buying. Just tap “Try on” on a product card and upload a full-length photo; Google’s model then generates a realistic preview.
🔸 Available for now in the U.S., Australia, Canada, and Japan, with plans to expand to more regions.
🔸 The tool uses generative AI and pose-estimation to match shoes to your body type and lighting.
🔸 Initially supports major brands on Google Shopping, with apparel integration expected next.
🔸 Builds on Google’s earlier AI try-on for clothing, expanding into full-body visualization for retail.
E-commerce is entering its AR moment, and Google wants to own the virtual dressing room.
🤖 Figure unveils “Figure 03”, its first humanoid ready for mass production
Figure AI has introduced Figure 03, a next-generation humanoid robot designed as a versatile assistant for both homes and businesses. It can handle everyday tasks like doing laundry or greeting guests at a hotel reception, marking Figure’s shift from prototypes to production-ready robotics.
🔸 The robot features a textile suit and soft protective inserts to make human interaction safer and more natural.
🔸 It adds new sensors, cameras in its hands, an upgraded battery, and wireless charging, all powered by Figure’s in-house Helix neural network.
🔸 Figure 03 is optimized for large-scale manufacturing, with the first line capable of producing up to 12,000 units per year.
🔸 Pricing and launch details remain undisclosed, but the design suggests a move toward commercial deployment.
With Figure 03, the company is signaling that humanoid robotics is moving out of the lab, and into real-world service.
🔘 Founders map the next AI winners at AI Challenges
The AI Challenges online conference gathered visionary founders to outline the unsolved problems that will decide which AI startups survive the next decade, from data access and inference costs to human alignment and product feedback loops. Here’s a concise, investor-facing digest.
🔸 “Data Freedom”: confidential, siloed data is the biggest blocker; secure, auditable data pipes will unlock healthcare, legal, and enterprise verticals.
🔸 Inference & LLM-training costs: fine-tuning and per-query inference are inefficient and expensive; parameter-efficient adaptation and hybrid edge/cloud designs are prime infra bets.
🔸 The AI Dealmaker: an agent for VCs that automates sourcing, diligence, and portfolio monitoring; retention hinges on measurable time-saved.
🔸 Turning LLM feedback into product outcomes: teams need SDKs and control planes to convert user signals into reliable model and UX improvements.
🔸 AI-native economy: autonomous agents will create new marketplaces (agent payroll, billing, reputation) and reshape how work is exchanged.
🔸 The Future of Human-AI Relationship: empathy, trust, and explainability are now product features, not optional extras.
🔸 Pitch session: five strong AI-native startups presented (themes: agent platforms, data-vaults, feedback control planes, vertical AI employees, cost-efficient inference).
🔸 Online afterparty hosted by venture investor DJ Mak.
A clear signal for investors: bet on speed + trust + economics. Products that demonstrably save time, keep customer data under the customer’s control, and make inference affordable will act like profitable, accountable “AI employees.”
🧠 Neuralink patient controls robotic arm with his thoughts
Nick Ray, who suffers from amyotrophic lateral sclerosis (ALS), has become the first Neuralink patient to control a robotic arm purely through thought. Implanted with Neuralink’s brain interface in July 2025, he recently connected it to Tesla’s Optimus robot, and the results are stunning.
🔸 Nick reports that the delay between his thoughts and the arm’s movement is “almost unnoticeable.”
🔸 For the first time in years, he was able to put on a cap, heat up nuggets in the microwave, and eat by himself.
🔸 He’s also learned to slowly control his wheelchair through the same neural interface.
🔸 The experiment marks the first integration of Neuralink’s implant with a humanoid robot, showing early progress toward mind-controlled assistive machines.
A powerful glimpse into the future, where human thought could directly control the tools that restore independence.
⚛️ Quantum pioneers win the 2025 Nobel Prize in Physics
The Nobel Committee awarded this year’s Physics Prize to John Clarke, Michel Devoret, and John Martinis for experiments that showed quantum mechanics at work in everyday electronic circuits, a breakthrough that paved the way for today’s quantum computers.
🔸 In the 1980s, their superconducting circuits proved that quantum effects can appear in macroscopic systems.
🔸 Their discoveries laid the foundation for quantum processors, sensors, and encryption technologies.
🔸 Devoret is now chief scientist at Google Quantum AI, while Martinis once led the Google Quantum Lab.
🔸 Clarke, based at UC Berkeley, helped design early quantum devices that bridge physics and engineering.
🔸 The trio will share the 11 million SEK ($1.2M) prize.
Quantum theory finally leaves the lab, and the engineers who made it practical are getting their due.
🎥 Sora Watermark Remover lets TikTokers clean videos
A new tool called Sora Watermark Remover allows creators to remove watermarks from Sora 2 videos while keeping 100% of the original quality.
🔸 OpenAI’s Sora 2 is freely available, but videos come with watermarks that can be annoying for social sharing.
🔸 The service simply requires uploading the video; it automatically removes the watermark.
🔸 This is especially useful for TikTokers and other social creators who want polished AI-generated content without branding distractions.
Now Sora 2 videos can look professional without the watermark hassle.
🤖 Mercor hits record growth, and rewrites the AI playbook
In just 17 months, Mercor rocketed from $1M to $500M in revenue, becoming the fastest-growing company in AI history. What began as a remote engineer marketplace is now critical AI infrastructure connecting labs with domain experts who help train and evaluate models.
🔸 From freelance coders to scientists, doctors, and lawyers, Mercor turned expert knowledge into AI training fuel.
🔸 Now partners with OpenAI, Google, Meta, Microsoft, Amazon, and Nvidia.
🔸 Raised $100M at a $2B valuation, now targeting $10B in its next round.
🔸 Already profitable with $6M net profit in the first half of the year.
🔸 Boasts 1600% net retention and zero churn, an almost impossible metric.
🔸 Founded by Thiel Fellows aged just 22–23, now building tools for RL and AI expert marketplaces.
Mercor scaled faster than any AI startup ever, but with OpenAI launching a rival hiring platform, the next round might be a fight for the ecosystem itself.
🎤 NeuTTS-Air kills ElevenLabs’ moat, open-source voice cloning for everyone
A new open-source model called NeuTTS-Air is going viral for cloning any voice locally, no cloud, no paywalls, and total privacy. It can run on a PC or even a smartphone, using just a 3-second audio sample to generate natural, high-quality speech.
🔸 748M-parameter model, fine-tuned for speed and realism, rivals ElevenLabs and OpenAI’s Voice Engine
🔸 Works fully offline, ensuring voice data never leaves your device
🔸 Can generate entire podcasts, narrations, or dialogues from a single short recording
🔸 Released under an open-source license, meaning anyone can build apps, chatbots, or AI creators on top
If ElevenLabs dominated with convenience and polish, NeuTTS-Air is betting on freedom and decentralization, the future of voice AI may no longer need a server.
🎓 Stanford drops free AI lectures from Andrew Ng
Stanford has begun releasing a new open series of AI lectures led by Andrew Ng, the Coursera co-founder and pioneer of modern machine learning education.
🔸 Covers neural network training, AI agent design, and career-building in AI
🔸 Taught by Andrew Ng himself, returning to his Stanford teaching roots
🔸 Includes hands-on examples and practical labs using current AI frameworks
🔸 Designed for both beginners and professionals seeking to deepen ML foundations
🔸 Available free online, part of Stanford’s push to make AI education globally accessible
Decade after decade, Andrew Ng keeps doing what AI models can’t: teaching humans how to think like machines.
👩🎨 Google introduces PASTA, an AI agent for iterative image generation
Google Research has unveiled PASTA, a Preference Adaptive and Sequential Text-to-image Agent that interacts with users step by step, refining visuals through dialogue instead of raw prompt tweaking.
Unlike traditional models, PASTA learns from user sessions rather than isolated “prompt–image” pairs. It studies how prompts evolve and which images people ultimately choose - effectively learning from the creative process itself.
🔸 The team has open-sourced the dataset of these real user sessions.
🔸 Two auxiliary models were trained: one predicts user satisfaction with generated images, the other estimates which image the user would select.
🔸 Using these simulators, researchers produced 30K additional synthetic sessions to train the main agent.
🔸 Training used Implicit Q-Learning (IQL), with the goal of maximizing total user satisfaction over multiple iterations.
The result is a genuine text2image agent, not just a generator - one that learns to collaborate. It’s still research-only, but the dataset is available for exploration on Kaggle.
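For the curious, the core trick in Implicit Q-Learning is expectile regression: the value function is fit to an upper expectile of Q over actions actually seen in the logged sessions, so the agent never has to evaluate actions outside the data. A minimal PyTorch sketch of that loss (placeholder tensors, not Google’s training code):

```python
import torch

# Schematic of IQL's expectile value loss (placeholder data, not PASTA code).
# Fitting V(s) to an upper expectile (tau > 0.5) of Q(s, a) approximates
# max_a Q(s, a) using only actions present in the offline dataset.
def expectile_loss(q_values, v_values, tau=0.7):
    diff = q_values - v_values
    weight = torch.abs(tau - (diff < 0).float())  # asymmetric L2 weight
    return (weight * diff.pow(2)).mean()

# Toy usage with tensors standing in for network outputs:
q = torch.randn(32)                        # Q(s, a) for sampled session steps
v = torch.randn(32, requires_grad=True)    # V(s) predictions
loss = expectile_loss(q.detach(), v)
loss.backward()                            # gradients flow into the value network
print(loss.item())
```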
⚙️ Stapply: Your AI job agent
A new AI tool called Stapply promises to make job hunting effortless by acting as a personal AI recruiter, finding, ranking, and even applying to roles for you.
🔸 Indexes vacancies matching your search across multiple sources in real time.
🔸 Ranks results based on your preferences and career goals.
🔸 Automatically fills out application forms and attaches your resume.
🔸 Sends the applications directly, eliminating tedious manual steps.
🔸 Works as a personal assistant that adapts to your job-hunting style.
By taking over both the search and application process, Stapply could turn the often frustrating task of job hunting into a seamless, AI-powered matchmaking experience.
🛠 Thinking Machines launches Tinker for AI fine-tuning
Thinking Machines, founded by OpenAI veterans including Mira Murati and John Schulman, has unveiled its first product: Tinker, a platform that makes it simple to fine-tune large AI models without heavyweight infrastructure. The startup is already valued at $12B after a $2B seed round.
🔸 Provides API-based fine-tuning for models like Llama and Qwen.
🔸 Automates GPU cluster setup, training stability, and deployment.
🔸 Lets researchers export customized models for their own use.
🔸 Free in beta, with plans for future monetization.
🔸 Aims to democratize access to tuning tools once limited to big tech labs.
🔸 Raises both excitement and safety concerns over broader model manipulation.
Tinker is positioning Thinking Machines as a key infrastructure player in the AI stack, where the competitive edge may come not from training bigger models, but from adapting existing ones.
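Tinker’s own API isn’t detailed here, but the kind of parameter-efficient fine-tuning it abstracts away looks roughly like this with Hugging Face transformers and peft; the model name and hyperparameters below are illustrative, not Tinker’s:

```python
# Rough analogue of fine-tuning an open model with LoRA adapters using
# Hugging Face transformers + peft. This is NOT Tinker's API; the model
# name and hyperparameters below are illustrative only.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

base = "Qwen/Qwen2.5-0.5B"  # placeholder open model
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base)

# LoRA trains small low-rank adapter matrices instead of all weights,
# which is what makes hosted fine-tuning cheap enough to offer as an API.
lora = LoraConfig(r=8, lora_alpha=16, target_modules=["q_proj", "v_proj"],
                  lora_dropout=0.05, task_type="CAUSAL_LM")
model = get_peft_model(model, lora)
model.print_trainable_parameters()  # typically well under 1% of base parameters
```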
💼 MyTinyTools unites hundreds of free mini-services
MyTinyTools is a web platform that bundles text, file, code, and cybersecurity tools into a single place, all free to use.
🔸 Dozens of utilities for text generation, editing, images, video, code, and file conversion.
🔸 Productivity helpers like calendars, planners, and timers.
🔸 Everything runs directly in the browser, no downloads needed.
🔸 No registration, no limits, and no hidden fees.
By centralizing everyday tools under one roof, MyTinyTools aims to be a lightweight all-in-one workspace for both personal and professional use.
🎥 OpenAI releases Sora 2 for AI video generation
OpenAI has launched Sora 2, the next generation of its video model, adding realism and creative control.
🔸 More accurate physics, object interactions, and scene coherence.
🔸 Synchronized sound and dialogue for lifelike storytelling.
🔸 Fine-grained control over style, pacing, and scene sequencing.
🔸 Ability to insert yourself or friends into videos while preserving voice and appearance.
🔸 Free with “generous limits,” rolling out first via invites in the US and Canada; Pro tier offers higher-quality generations.
🔸 New iOS app introduces an endless feed of short AI-generated videos.
OpenAI is turning video generation from a demo into a mainstream creative platform.
💻 Claude Sonnet 4.5 launches as Anthropic’s top coding model
Anthropic has released Claude Sonnet 4.5, now leading benchmarks and aiming to act as an autonomous software engineer.
🔸 Beats all models on SWE-bench Verified, the gold-standard benchmark for real-world software engineering tasks.
🔸 In tests, ran 30+ hours straight, setting up databases, buying domains, and conducting security audits.
🔸 Can move from prototypes to production-ready applications, not just snippets.
🔸 Launch includes Claude Agent SDK and preview of Imagine with Claude, a tool for on-the-fly software generation.
🔸 Pricing unchanged from Sonnet 4: $3 per million input tokens, $15 per million output tokens.
With Sonnet 4.5, Anthropic isn’t just building a coding assistant, it’s positioning Claude as a tireless full-stack engineer.
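For developers, calling the new model through Anthropic’s Python SDK looks roughly like this; the model identifier string is an assumption, so check Anthropic’s current model list before using it:

```python
# Rough sketch of calling Claude Sonnet 4.5 via Anthropic's Python SDK.
# The model identifier below is an assumption; confirm the current name
# in Anthropic's docs. Requires ANTHROPIC_API_KEY in the environment.
import anthropic

client = anthropic.Anthropic()
response = client.messages.create(
    model="claude-sonnet-4-5",   # assumed identifier
    max_tokens=1024,
    messages=[{
        "role": "user",
        "content": "Refactor this function and add unit tests: ...",
    }],
)
print(response.content[0].text)
```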