hacker_news_feed | Technologies

Telegram channel hacker_news_feed - Hacker News

24205 subscribers

Top stories from https://news.ycombinator.com (with 100+ score). Contribute to the development here: https://github.com/phil-r/hackernewsbot. Also check https://t.me/designer_news. Contacts: @philr

Hacker News

Desed: Demystify and debug your sed scripts (Score: 150+ in 21 hours)

Link: https://readhacker.news/s/6e4C7
Comments: https://readhacker.news/c/6e4C7

Hacker News

Why I self host my servers and what I've recently learned (Score: 153+ in 1 day)

Link: https://readhacker.news/s/6dYzh
Comments: https://readhacker.news/c/6dYzh

Hacker News

My job is to watch dreams die (2011) (🔥 Score: 154+ in 3 hours)

Link: https://readhacker.news/s/6e6tP
Comments: https://readhacker.news/c/6e6tP

Hacker News

UE5 Nanite in WebGPU (🔥 Score: 152+ in 3 hours)

Link: https://readhacker.news/s/6e6m5
Comments: https://readhacker.news/c/6e6m5

Hacker News

AlphaProteo generates novel proteins for biology and health research (Score: 152+ in 4 hours)

Link: https://readhacker.news/s/6e5Pv
Comments: https://readhacker.news/c/6e5Pv

Hacker News

Show HN: Hacker League – Open-Source Rocket League on Linux (Score: 150+ in 5 hours)

Link: https://readhacker.news/s/6e5x5
Comments: https://readhacker.news/c/6e5x5

Hacker News

Tinystatus: A tiny status page generated by a Python script (Score: 150+ in 16 hours)

Link: https://readhacker.news/s/6e4em
Comments: https://readhacker.news/c/6e4em

Hacker News

Building a WoW (World of Warcraft) Server in Elixir (Score: 150+ in 7 hours)

Link: https://readhacker.news/s/6e4Zf
Comments: https://readhacker.news/c/6e4Zf

Hacker News

Yi-Coder: A Small but Mighty LLM for Code (Score: 154+ in 10 hours)

Link: https://readhacker.news/s/6e4wp
Comments: https://readhacker.news/c/6e4wp

Hacker News

Show HN: Laminar – Open-Source DataDog + PostHog for LLM Apps, Built in Rust (Score: 150+ in 13 hours)

Link: https://readhacker.news/s/6e42U
Comments: https://readhacker.news/c/6e42U

Hey HN, we’re Robert, Din and Temirlan from Laminar (https://www.lmnr.ai), an open-source observability and analytics platform for complex LLM apps. It’s designed to be fast, reliable, and scalable. The stack is RabbitMQ for message queues, Postgres for storage, Clickhouse for analytics, Qdrant for semantic search - all powered by Rust.
How is Laminar different from the swarm of other “LLM observability” platforms?
On the observability side, we're focused on handling full execution traces, not just LLM calls. We built a Rust ingestor for OpenTelemetry (Otel) spans with GenAI semantic conventions. As LLM apps get more complex (think agents with hundreds of LLM and function calls, or complex RAG pipelines), full tracing is critical. With Otel spans, we can: 1. Cover the entire execution trace. 2. Keep the platform future-proof. 3. Leverage OpenLLMetry (https://github.com/traceloop/openllmetry), an amazing open-source package for span production.
The key difference is that we tie text analytics directly to execution traces. Rich text data makes LLM traces unique, so we let you track “semantic metrics” (like what your AI agent is actually saying) and connect those metrics to where they happen in the trace. If you want to know if your AI drive-through agent made an upsell, you can design an LLM extraction pipeline in our builder (more on it later), host it on Laminar, and handle everything from event requests to output logging. Processing requests simply come as events in the Otel span.
We think it’s a win to separate core app logic from LLM event processing. Most devs don’t want to manage background queues for LLM analytics processing but still want insights into how their Agents or RAGs are working.
Our Pipeline Builder uses a graph UI where nodes are LLM and utility functions and edges show data flow. We built a custom task execution engine with support for parallel branch execution, cycles, and branches (it's overkill for simple pipelines, but it's extremely cool, and we've spent a lot of time designing a robust engine). You can also call pipelines directly as API endpoints. We found them extremely useful for iterating on and separating LLM logic. Laminar also traces pipelines directly, which removes the overhead of sending large outputs over the network.
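The engine itself isn't shown in the post, but the core idea of a graph executor, where nodes are functions and edges carry data between them, can be sketched with the standard library. The `run_pipeline` helper and node names below are hypothetical, not Laminar's actual API:

```python
from graphlib import TopologicalSorter

def run_pipeline(nodes, edges, inputs):
    """Execute a DAG of functions. `nodes` maps name -> callable;
    `edges` maps name -> list of upstream names whose outputs feed it."""
    order = TopologicalSorter(edges).static_order()
    results = dict(inputs)
    for name in order:
        if name in results:  # input node, value already provided
            continue
        args = [results[dep] for dep in edges.get(name, [])]
        results[name] = nodes[name](*args)
    return results

# Toy pipeline: prompt -> mock LLM call -> extract a yes/no "semantic metric"
nodes = {
    "llm": lambda prompt: f"ANSWER: yes ({prompt})",
    "made_upsell": lambda reply: reply.startswith("ANSWER: yes"),
}
edges = {"llm": ["prompt"], "made_upsell": ["llm"]}
out = run_pipeline(nodes, edges, {"prompt": "Did the agent upsell?"})
print(out["made_upsell"])  # True
```

A topological order is enough for the acyclic case; a real engine adds parallel branch execution and cycle handling on top.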
One thing missing from all LLM observability platforms right now is an adequate search over traces. We're attacking this problem by indexing each span in a vector DB and performing hybrid search at query time. This feature is still in beta, but we think it's going to be a crucial part of our platform going forward.
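As a rough illustration of hybrid search, the sketch below blends a keyword-overlap score with cosine similarity over embeddings. The scoring functions and `alpha` weight are illustrative only; production systems typically use BM25 plus an approximate-nearest-neighbor index rather than this brute-force scan:

```python
import math
from collections import Counter

def cosine(a, b):
    """Cosine similarity of two dense vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def keyword_score(query, doc):
    """Fraction of doc terms matching query terms (toy lexical score)."""
    q, d = set(query.lower().split()), Counter(doc.lower().split())
    return sum(d[t] for t in q) / (len(doc.split()) or 1)

def hybrid_search(query, query_vec, corpus, alpha=0.5):
    """corpus: list of (text, embedding). Blend lexical and vector scores."""
    scored = [
        (alpha * keyword_score(query, text)
         + (1 - alpha) * cosine(query_vec, vec), text)
        for text, vec in corpus
    ]
    return [text for _, text in sorted(scored, reverse=True)]

corpus = [
    ("error calling openai api", [1.0, 0.0]),
    ("user asked about refunds", [0.0, 1.0]),
]
print(hybrid_search("openai error", [0.9, 0.1], corpus)[0])
# error calling openai api
```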
We also support evaluations. We loved the “run everything locally, send results to a server” approach from Braintrust and Weights & Biases, so we did that too: a simple SDK and nice dashboards to track everything. Evals are still early, but we’re pushing hard on them.
Our goal is to make Laminar the Supabase for LLMOps: the go-to open-source, comprehensive platform for all things LLMs/GenAI. In its current shape, Laminar is just a few weeks old and developing rapidly; we'd love any feedback, or for you to give Laminar a try in your LLM projects!

Hacker News

Show HN: Mem0 – open-source Memory Layer for AI apps (Score: 151+ in 16 hours)

Link: https://readhacker.news/s/6e2CF
Comments: https://readhacker.news/c/6e2CF

Hey HN! We're Taranjeet and Deshraj, the founders of Mem0 (https://mem0.ai). Mem0 adds a stateful memory layer to AI applications, allowing them to remember user interactions, preferences, and context over time. This enables AI apps to deliver increasingly personalized and intelligent experiences that evolve with every interaction. There’s a demo video at https://youtu.be/VtRuBCTZL1o and a playground to try out at https://app.mem0.ai/playground. You'll need to sign up to use the playground – this helps ensure responses are more tailored to you by associating interactions with an individual profile.
Current LLMs are stateless—they forget everything between sessions. This limitation leads to repetitive interactions, a lack of personalization, and increased computational costs because developers must repeatedly include extensive context in every prompt.
When we were building Embedchain (an open-source RAG framework with over 2M downloads), users constantly shared their frustration with LLMs’ inability to remember anything between sessions. They had to repeatedly input the same context, which was costly and inefficient. We realized that for AI to deliver more useful and intelligent responses, it needed memory. That’s when we started building Mem0.
Mem0 employs a hybrid datastore architecture that combines graph, vector, and key-value stores to store and manage memories effectively. Here is how it works:
Adding memories: When you use Mem0 with your AI app, it can take in any messages or interactions and automatically detect the important parts to remember.
Organizing information: Mem0 sorts this information into different categories:
- Facts and structured data go into a key-value store for quick access.
- Connections between things (like people, places, or objects) are saved in a graph store that understands relationships between different entities.
- The overall meaning and context of conversations are stored in a vector store that allows for finding similar memories later.
Retrieving memories: When given an input query, Mem0 searches for and retrieves related stored information by leveraging a combination of graph traversal techniques, vector similarity and key-value lookups. It prioritizes the most important, relevant, and recent information, making sure the AI always has the right context, no matter how much memory is stored.
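The three stores described above can be sketched as a toy class: a dict for facts, an adjacency list for entity relations, and brute-force cosine similarity for semantic recall. All names here are hypothetical illustrations, not Mem0's API:

```python
import math

class ToyMemory:
    """Minimal sketch of a hybrid memory store combining key-value,
    graph, and vector retrieval in one recall call."""

    def __init__(self):
        self.facts = {}      # key-value store for structured facts
        self.graph = {}      # entity -> set of related entities
        self.vectors = []    # (text, embedding) pairs for semantic recall

    def remember_fact(self, key, value):
        self.facts[key] = value

    def relate(self, a, b):
        self.graph.setdefault(a, set()).add(b)
        self.graph.setdefault(b, set()).add(a)

    def remember_text(self, text, embedding):
        self.vectors.append((text, embedding))

    def recall(self, query_vec, key=None, entity=None, top_k=1):
        """Combine all three stores for a single retrieval call."""
        sims = sorted(self.vectors,
                      key=lambda tv: -self._cos(query_vec, tv[1]))[:top_k]
        return {
            "fact": self.facts.get(key),
            "related": sorted(self.graph.get(entity, set())),
            "similar": [t for t, _ in sims],
        }

    @staticmethod
    def _cos(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        na = math.sqrt(sum(x * x for x in a))
        nb = math.sqrt(sum(x * x for x in b))
        return dot / (na * nb) if na and nb else 0.0

m = ToyMemory()
m.remember_fact("favorite_drink", "black coffee")
m.relate("alice", "acme corp")
m.remember_text("alice prefers morning meetings", [0.9, 0.1])
m.remember_text("the office dog is named rex", [0.1, 0.9])
print(m.recall([1.0, 0.0], key="favorite_drink", entity="alice"))
```

The real system additionally weighs importance and recency when ranking; this sketch ranks by similarity alone.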
Unlike traditional AI applications that operate without memory, Mem0 introduces a continuously learning memory layer. This reduces the need to repeatedly include long blocks of context in every prompt, which lowers computational costs and speeds up response times. As Mem0 learns and retains information over time, AI applications become more adaptive and provide more relevant responses without relying on large context windows in each interaction.
We’ve open-sourced the core technology that powers Mem0—specifically the memory management functionality in the vector and graph databases, as well as the stateful memory layer—under the Apache 2.0 license. This includes the ability to add, organize, and retrieve memories within your AI applications.
However, certain features that are optimized for production use, such as low-latency inference and the scalable graph and vector datastore for real-time memory updates, are part of our paid platform. These advanced capabilities are not part of the open-source package but are available for those who need to scale memory management in production environments.
We’ve made both our open-source version and platform available for HN users. You can check out our GitHub repo (https://github.com/mem0ai/mem0) or explore the platform directly at https://app.mem0.ai/playground.
We’d love to hear what you think! Please feel free to dive into the playground, check out the code, and share any thoughts or suggestions with us. Your feedback will help shape where we take Mem0 from here!

Hacker News

A Real Life Off-by-One Error (❄️ Score: 151+ in 3 days)

Link: https://readhacker.news/s/6dS8S
Comments: https://readhacker.news/c/6dS8S

Hacker News

The Internet Archive has lost its appeal in Hachette vs. Internet Archive (Score: 157+ in 7 hours)

Link: https://readhacker.news/s/6e2Ly
Comments: https://readhacker.news/c/6e2Ly

Hacker News

DAGitty – draw and analyze causal diagrams (Score: 150+ in 14 hours)

Link: https://readhacker.news/s/6dZqp
Comments: https://readhacker.news/c/6dZqp

Hacker News

Internet Archive loses appeal over eBook lending (🔥 Score: 150+ in 2 hours)

Link: https://readhacker.news/s/6e3eP
Comments: https://readhacker.news/c/6e3eP

Hacker News

Tell HN: Burnout is bad to your brain, take care (🔥 Score: 165+ in 1 hour)

Link: https://readhacker.news/c/6e79V

I have been depressed and burned out for quite some time now; unfortunately, my brain still hasn't recovered.
To summarize the impact of burnout on my brain:
- Before: I could learn things quickly, come up with solutions to problems, and even spot common patterns and see bigger underlying problems.
- After: I can't learn, can't work, can't remember, and can't see solutions to trivial problems (e.g., if your shirt is wet you can change it, but I stare at it wondering when it will dry).
Take care of your mental health.

Hacker News

Phind-405B and faster, high quality AI answers for everyone (Score: 150+ in 6 hours)

Link: https://readhacker.news/s/6e64V
Comments: https://readhacker.news/c/6e64V

Hacker News

Show HN: AnythingLLM – Open-Source, All-in-One Desktop AI Assistant (Score: 151+ in 5 hours)

Link: https://readhacker.news/s/6e5UT
Comments: https://readhacker.news/c/6e5UT

Hey HN!
This is Tim from AnythingLLM (https://github.com/Mintplex-Labs/anything-llm). AnythingLLM is an open-source desktop assistant that brings together RAG (Retrieval-Augmented Generation), agents, embeddings, vector databases, and more—all in one seamless package.
We built AnythingLLM over the last year, iterating and iterating on user feedback. Our primary mission is to enable people with a layperson's understanding of AI to use AI with little to no setup, whether for themselves, for their jobs, or just to try out an AI assistant, all with *privacy by default*.
From these iterations & feedback, we have a couple of key learnings I wanted to share:
- "Chat with your docs" solutions are a dime-a-dozen
- Agent frameworks require knowing how to code or are too isolated from other tools
- Users do not care about benchmarks, only outputs. The magic box needs to be magic to them.
- Asking consumers to start a Docker container or open a terminal is a non-starter for most.
- Privacy by default is non-negotiable, whether by personal preference or legal constraint.
- Everything needs to be in one place
From these ideas, we landed on the current state of AnythingLLM:
- Everything in AnythingLLM is private by default, but fully customizable for advanced users.
- Built-in LLM provider, but you can swap at any time to hundreds of other local or cloud LLM providers & models.
- Built-in vector database; most users don't even know that it is there.
- Built-in embedding model, which can of course be changed if the user wants.
- Scrape websites, import GitHub/GitLab repos, YouTube transcripts, and Confluence spaces; all of this is already built in for the user.
- An entire baked-in agent framework that works seamlessly within the app. We even pre-built a handful of agent skills for customers. Custom plugins are coming in the next update and can be built with code or with a no-code builder.
- All of this just works out of the box in a single installable app that can run on any consumer-grade laptop. Everything a user does, chats, or configures is stored on the user's device. Available for Mac, Windows, and Linux.
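As a rough sketch of the retrieve-then-prompt loop behind a "chat with your docs" feature like this, the snippet below uses a bag-of-words counter as a stand-in for the built-in embedding model. All function names are hypothetical illustrations, not AnythingLLM's code:

```python
import math
from collections import Counter

def embed(text):
    """Stand-in embedding: bag-of-words counts (a real app uses a model)."""
    return Counter(text.lower().split())

def similarity(a, b):
    """Cosine similarity between two sparse bag-of-words vectors."""
    dot = sum(a[t] * b[t] for t in set(a) & set(b))
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def rag_prompt(question, documents, top_k=2):
    """Retrieve the most similar docs and pack them into an LLM prompt."""
    q = embed(question)
    ranked = sorted(documents, key=lambda d: -similarity(q, embed(d)))
    context = "\n".join(ranked[:top_k])
    return f"Context:\n{context}\n\nQuestion: {question}\nAnswer:"

docs = [
    "Refunds are processed within 5 business days.",
    "Our office is closed on public holidays.",
]
print(rag_prompt("How long do refunds take?", docs, top_k=1))
```

The app's job is to hide exactly this plumbing (embedding, vector storage, retrieval, prompt assembly) behind a chat window.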
We have been actively maintaining AnythingLLM via our open-source repo for a while now, and we welcome contributors; we hope to launch a Community Hub soon to expand users' ability to add more niche agent skills, data connectors, and more.
*But there is even more*
We view the desktop app as a hyper-accessible single-player version of AnythingLLM. We publish a Docker image too (https://hub.docker.com/r/mintplexlabs/anythingllm) that supports multi-user management with permissioning so that you can easily bring AnythingLLM into an organization with all of the same features with minimal headache or lift.
The Docker image is for those more adept with a CLI, but being able to comfortably go from a single-user to a multi-user version of the same familiar app was very important for us.
AnythingLLM aims to be more than a UI for LLMs: we are building a comprehensive tool to leverage LLMs and all that they can do, while maintaining user privacy and without requiring you to be an AI expert.
https://anythingllm.com/

Hacker News

OAuth from First Principles (❄️ Score: 150+ in 3 days)

Link: https://readhacker.news/s/6dSaR
Comments: https://readhacker.news/c/6dSaR

Hacker News

Accelerando (2005) (Score: 150+ in 16 hours)

Link: https://readhacker.news/s/6e4ru
Comments: https://readhacker.news/c/6e4ru

Hacker News

What's functional programming all about? (2017) (Score: 151+ in 1 day)

Link: https://readhacker.news/s/6e34J
Comments: https://readhacker.news/c/6e34J

Hacker News

Porting systemd to musl Libc-powered Linux (Score: 150+ in 7 hours)

Link: https://readhacker.news/s/6e4ZV
Comments: https://readhacker.news/c/6e4ZV

Hacker News

Lesser known parts of Python standard library (Score: 152+ in 19 hours)

Link: https://readhacker.news/s/6e3Ki
Comments: https://readhacker.news/c/6e3Ki

Hacker News

Kids who use ChatGPT as a study assistant do worse on tests (Score: 150+ in 9 hours)

Link: https://readhacker.news/s/6e4xw
Comments: https://readhacker.news/c/6e4xw

Hacker News

Routed Gothic Font (❄️ Score: 150+ in 4 days)

Link: https://readhacker.news/s/6dPyK
Comments: https://readhacker.news/c/6dPyK

Hacker News

Ask HN: Who wants to be hired? (September 2024) (❄️ Score: 150+ in 2 days)

Link: https://readhacker.news/c/6dTNn

Share your information if you are looking for work. Please use this format:

  Location:
  Remote:
  Willing to relocate:
  Technologies:
  Résumé/CV:
  Email:

Please only post if you are personally looking for work. Agencies, recruiters, job boards, and so on, are off topic here.
Readers: please only email these addresses to discuss work opportunities.
There's a site for searching these posts at https://www.wantstobehired.com.

Hacker News

The first nuclear clock will test if fundamental constants change (Score: 153+ in 8 hours)

Link: https://readhacker.news/s/6e2Gd
Comments: https://readhacker.news/c/6e2Gd

Hacker News

Physics is unreasonably good at creating new math (Score: 152+ in 10 hours)

Link: https://readhacker.news/s/6e29q
Comments: https://readhacker.news/c/6e29q

Hacker News

The Threat to OpenAI (❄️ Score: 150+ in 4 days)

Link: https://readhacker.news/s/6dPcG
Comments: https://readhacker.news/c/6dPcG

Hacker News

Kagi: Announcing The Assistant (🔥 Score: 152+ in 2 hours)

Link: https://readhacker.news/s/6e3at
Comments: https://readhacker.news/c/6e3at
