-
Stay up-to-date with everything C++! Content directly fetched from the subreddit just for you. Join our group for discussions : @programminginc Powered by : @r_channels
"Parse, don't Validate" through the years with C++
https://derekrodriguez.dev/parse-dont-validate-through-the-years-with-c-/
https://redd.it/1swhwhp
@r_cpp
I made my C++ vector search engine 16x faster by changing data layout, not the algorithm
https://dubeykartikay.com/posts/sembed-engine-vector-search-performance/
https://redd.it/1swbilt
@r_cpp
Code Examples From an App Using C++ Modules
https://abuehl.github.io/2026/04/26/code-examples-from-an-app-using-modules.html
https://redd.it/1sw1ten
@r_cpp
C++ Modules and clangd: go or no go? Debating pulling the plug on modules in a project
Every now and then, someone asks how C++ modules are going. I am the lead dev (mentioned not to brag but to emphasize the crushing weight of responsibility) on a robotics project; since we are building a CV codebase from scratch, I have the opportunity to choose the architecture and conventions of the code (even more crushing responsibility).
I have gotten some C++23 codebases set up with modules; however, whenever I add a new module, I have to recompile to get clangd to stop spamming spurious errors. I am starting to have second thoughts about pushing modules all the way. To avoid the gcc/clangd mismatch, I ended up specifying clang as the compiler (although I could use clangd's compiler-flag add/remove options instead).
The codebase is at a point where I can relatively easily pull the plug on modules.
Requirements:
1. No vscode or intellisense - I'm tired of vendor lock-in
2. Clangd, so we have neovim-coc-clangd + vscodium support
3. Sits well with cmake
4. Doesn't really have to sit too well with clangd, as long as it sits well with compile_commands.json, since that is a more decentralized standard for code completion etc.
5. As much as I would be willing to learn to code without code completion, I would prefer to have enough leeway to radicalize newbies to an nvim plugin with vscodium or neovim-coc-clangd + telescope itself ^_^
Ideals:
1. Codebase can work with as many compilers as possible (as long as they support #pragma once; the time spent on botched header guards, plus incoming newbies, concerns me enough to diverge a lil' bit from standard C++). Also, our CV codebase is one of those projects that isn't meant to be portable; all of our portable code does use header guards to please the great Bjarne Stroustrup.
2. Ease with other code completion tools
Why (for the curious):
1. Vendor lock sucks
2. Proprietary vendor lock sucks even more
3. I have a bone to pick with microslop
4. Because, as an embedded project, vscode's oddities disrupt portions of our toolchain from time to time
5. Even though the CV codebase is not embedded, the fact that vscode support for modules was much harder to get working than neovim/clangd just left a bad taste in my mouth. Call me unskilled, but it convinced the newer devs to learn modern C++ on the pure command line rather than a shiny GUI-based IDE setup :P
For the curiouser: no, we don't use github anymore :P
https://redd.it/1svded9
@r_cpp
StockholmCpp 0x3D: Intro, Eventhost, Info and the C++ Quiz
https://youtu.be/i9LQS0QvWpw
https://redd.it/1sv5wtm
@r_cpp
Defending against exceptions in a scope_exit RAII type
https://devblogs.microsoft.com/oldnewthing/20260424-00/?p=112266
https://redd.it/1sv03wx
@r_cpp
Maintaining libraries in multiple formats is a bad idea
Library authors shouldn't maintain header-only / header+source / module variants of a library in one repo. It is a bad idea.
First of all, library authors assume that if tests pass for the header-only format, the module version also works, which is not correct.
Second, compilation and packaging become very ugly; it ends up looking like C++ standard-versioning macros. A project should compile against one standard, and other users should either stick to a version/branch or kick rocks.
It is very pleasant to just use modules for libraries; everything is clean. By adopting a support-everything approach, library authors harm themselves first and then everyone else, because everything lags behind.
https://redd.it/1sufvwe
@r_cpp
C++26: Structured Bindings can introduce a Pack
https://www.sandordargo.com/blog/2026/04/22/cpp26-structured-bindings-packs
https://redd.it/1su6ipi
@r_cpp
Devirtualization and Static Polymorphism
https://david.alvarezrosa.com/posts/devirtualization-and-static-polymorphism/
https://redd.it/1sto3hp
@r_cpp
Good Resource for Topics
Hi,
Please suggest good resources for multithreading, smart pointers, and copy constructors.
Thanks
https://redd.it/1stj0ka
@r_cpp
Boost.Decimal: IEEE 754 Decimal Floating Point for C++ — Header-Only, Constexpr, C++14
https://www.boost.org/library/latest/decimal/
https://redd.it/1stha9t
@r_cpp
Example for Implementing Functions of Partitions
https://abuehl.github.io/2026/04/22/implementing-functions-of-partitions.html
https://redd.it/1ssvtts
@r_cpp
Boost 1.91 Released: New Decimal Library, SIMD UUID, Redis Sentinel, C++26 Reflection in PFR
https://boost.org/releases/1.91.0/
https://redd.it/1ssrnzf
@r_cpp
                            context->TryCancel();
                            throw co_await asyncio::error::StacktraceError<std::system_error>::make(res.error());
                        }
                    }
                }),
                asyncio::task::spawn([&]() -> asyncio::task::Task<void> {
                    while (true) {
                        auto element = co_await receiver.receive();
                        if (!element) {
                            if (!co_await stream.writeDone())
                                fmt::print(stderr, "Write done failed\n");
                            if (element.error() != asyncio::ReceiveError::Disconnected)
                                throw co_await asyncio::error::StacktraceError<std::system_error>::make(element.error());
                            break;
                        }
                        co_await stream.write(*std::move(element));
                    }
                })
            )
        );
        stream.RemoveHold();
        co_await stream.done();
        if (!result)
            std::rethrow_exception(result.error());
    }

private:
    std::unique_ptr<Stub> mStub;
};
The four overloads are distinguished automatically by parameter types — the compiler selects the correct overload based on the method pointer type passed in. This is a classic use of template metaprogramming: different call patterns map to different function signatures, with zero runtime overhead.
For streaming RPCs, `GenericClient` uses `asyncio::channel` as the data conduit: `Sender` writes data into the channel, `Receiver` reads from it. The channel's close signal (`Receiver` receiving a `Disconnected` error) naturally maps to stream EOF.
# Implementing the Concrete Client
With `GenericClient` in place, implementing a concrete service client is straightforward:
class Client final : public GenericClient<sample::SampleService> {
public:
    using GenericClient::GenericClient;

    static Client make(const std::string &address) {
        return Client{sample::SampleService::NewStub(grpc::CreateChannel(address, grpc::InsecureChannelCredentials()))};
    }

    asyncio::task::Task<sample::EchoResponse>
    echo(
        sample::EchoRequest request,
        std::unique_ptr<grpc::ClientContext> context = std::make_unique<grpc::ClientContext>()
    ) {
        co_return co_await call(&sample::SampleService::Stub::async::Echo, std::move(context), std::move(request));
    }

    asyncio::task::Task<void>
    getNumbers(
        sample::GetNumbersRequest request,
        asyncio::Sender<sample::Number> sender,
        std::unique_ptr<grpc::ClientContext> context = std::make_unique<grpc::ClientContext>()
    ) {
        co_await call(
            &sample::SampleService::Stub::async::GetNumbers,
            std::move(context),
            std::move(request),
            std::move(sender)
        );
    }

    asyncio::task::Task<sample::SumResponse> sum(
        asyncio::Receiver<sample::Number> receiver,
        std::unique_ptr<grpc::ClientContext> context = std::make_unique<grpc::ClientContext>()
    ) {
        co_return co_await call(&sample::SampleService::Stub::async::Sum, std::move(context), std::move(receiver));
    }

    asyncio::task::Task<void>
    chat(
        asyncio::Receiver<sample::ChatMessage> receiver,
        asyncio::Sender<sample::ChatMessage> sender,
        std::unique_ptr<grpc::ClientContext> context = std::make_unique<grpc::ClientContext>()
    ) {
        co_return co_await call(
            &sample::SampleService::Stub::async::Chat,
            std::move(context),
            std::move(receiver),
            std::move(sender)
        );
    }
};
Here is how to call all four RPC types concurrently:
I built a performance profiling SaaS as a hobby project — inspired by High Performance C++ — and I'd love some feedback from people who actually know profiling
Hi r/cpp,
My name is Sascha Kohler. I'm a hobby programmer — not a professional software engineer — and over the past few months I built something I'm calling **RealBench**: a performance profiling as a service platform for C++, Rust, and Go projects.
# What RealBench does technically
* **C++ core** (`lib/profiler/`): sampling profiler using `perf_event_open` directly — no instrumentation, no code changes. Stack unwinding via `libunwind`, symbol resolution via `libelf` + DWARF `debug_info`, C++ demangling via `__cxa_demangle`. Rust gets a separate demangling path. Go uses `--call-graph fp` instead of DWARF because frame-pointer unwinding is what actually works there. Flamegraph SVG generated in C++.
* The C++ library is exposed to Node.js via **N-API bindings** (`node-addon-api`), so the profiling worker runs in Node.js but the heavy lifting stays in C++.
* **Backend**: Hono (TypeScript) + pg-boss job queue (PostgreSQL-native, no Redis) + Cloudflare R2 for flamegraph storage. Fly.io for hosting. Clerk for auth.
* **AI analysis**: Claude takes the hotspot JSON and produces ranked, file-and-line-specific optimization suggestions. This is also one reason I'm counting on honest test users.
* **CI integration**: a GitHub Actions workflow template — upload your binary, get results back
It reminds me of the old days of using assembly with C: the same concept applies here, with C++ as the inner workhorse exposed to Node.js.
# Where I'm genuinely unsure
That's the main reason for this post.
There are plenty of profiling tools. `perf` is free, Valgrind/Callgrind is free, Tracy is excellent, Orbit exists. I don't know if there's a real market need here. The pitch is "zero setup, CI-native, flamegraph + AI suggestions in one step" — but whether that's enough of an edge over just running `perf` directly, I genuinely can't tell from the inside.
So: I'd love to hear from people who profile C++ code in their day-to-day work. Does the CI integration angle solve a real friction point for you? Is the AI suggestion layer interesting, or does it feel like noise? What would actually make you reach for something like this instead of your current toolchain? Or is this just another proof of concept, project number 2456 in my never-ending story of proof-of-concepts?
# Try it
* 🌐 **App**: [https://realbench-web.fly.dev](https://realbench-web.fly.dev)
* 💻 **GitHub** (MIT licensed, 'Open core'): [https://github.com/SaschaKohler/realbench](https://github.com/SaschaKohler/realbench)
Completely free during beta. I live in Austria, so I sleep at night and work during the day; if I don't answer immediately, please be patient.
I'll personally respond to everyone who comments. Thanks for reading.
— Sascha
https://redd.it/1swcs94
@r_cpp
Is there a C++ "venv" equivalent?
Python has venv, Rust has cargo, Node has nvm. You clone a repo, run one command, and you're in a reproducible environment.
Is there a viable equivalent for C++? I don't think the standard will ever bother with this, and in its defence, I'm not sure that's even possible.
I've tried Conan, vcpkg, various CMake setups. They're not bad tools, but there's no standard "activation" ritual. No isolated-per-project environment that pins compiler + deps + toolchain together. No single lockfile that means the same thing on my machine, my colleague's machine, and CI. What I keep wanting is something like: "cppenv activate" and suddenly I'm in a clean, isolated, reproducible build environment for that project. Exit it, and my system is untouched. Share a lockfile, and a teammate gets the exact same thing.
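For what it's worth, the closest approximation I've seen with the tools above is vcpkg's manifest mode with a pinned baseline, which at least pins library versions to a registry snapshot (though not the compiler or toolchain). A sketch, with placeholder names and a placeholder hash:

```json
{
  "name": "myproject",
  "version": "0.1.0",
  "dependencies": ["fmt", "catch2"],
  "builtin-baseline": "<commit-hash-of-a-vcpkg-registry-snapshot>"
}
```

Checking this `vcpkg.json` into the repo gives teammates and CI the same dependency set on a clean checkout, but there is still no "activation" step isolating compiler and toolchain.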
How are you handling reproducible build/development environments?
https://redd.it/1sw2x3z
@r_cpp
How To Write PyTorch C++ Extensions in 2026
https://www.youtube.com/watch?v=v2LcTzpUOYU
https://redd.it/1svx1ea
@r_cpp
Interview with Guy Davidson - the new ISO C++ convener
https://www.youtube.com/watch?v=RppmIyWVw4w
https://redd.it/1svdael
@r_cpp
Using Reflection For Parsing Command Line Arguments
I've been very excited about reflection, so I built a small library to parse command line arguments.
Basic example:
struct Args
{
    std::string first_name;
    int age;
    bool active;
};

// ./program --first-name John --age 99 --active
const auto args = clap::parse<Args>(argc, argv);
assert(args.first_name == "John");
assert(args.age == 99);
assert(args.active);
More interesting example:
struct Args
{
    [[=clap::Description<"host to connect to">{}]]
    std::string host = "localhost";

    [[=clap::ShortName<'p'>{}]]
    std::uint16_t port;

    [[=clap::Env<"RETRY_COUNT">{}]]
    std::uint32_t retry_count;

    std::optional<std::string> log_file;

    [[=clap::ShortName<'e'>{}]]
    bool encrypted;

    [[=clap::ShortName<'c'>{}]]
    bool compressed;

    [[=clap::ShortName<'h'>{}]]
    bool hashed;
};

// ./program -p 8080 -ec
const auto args = clap::parse<Args>(argc, argv);
assert(args.host == "localhost");
assert(args.port == 8080);
assert(args.retry_count == std::stoul(std::getenv("RETRY_COUNT")));
assert(!args.log_file);
assert(args.encrypted);
assert(args.compressed);
assert(!args.hashed);
The amount of code to handle this is actually quite minimal: < 500 lines.
There are a few modern goodies that make this code work:
Reflection [P2996](https://isocpp.org/files/papers/P2996R13.html)
Annotations for Reflection [P3394]
Contracts [P2900](https://isocpp.org/files/papers/P2900R14.pdf)
constexpr exceptions [P3068]
I guess we don't know what "idiomatic" reflection usage looks like yet; I'm interested to come back to this code in a year's time and see what mistakes I made!
Link to the code: https://github.com/nathan-baggs/clap
Any feedback, queries, questions are welcome!
https://redd.it/1sv4w78
@r_cpp
CppCast Looking for Guests
As you may be aware, I've restarted [CppCast](https://cppcast.com/) (every 4th week, in rhythm with [C++ Weekly](https://www.youtube.com/@cppweekly)) with u/mropert as my cohost.
We are trying to focus on new people and projects who have never before been on CppCast. I have been trawling the show-and-tell posts here for potential guests and projects.
But I want to ask directly - if you are interested in coming on the podcast to talk about your project / presentation / things you are passionate about and have never before been on CppCast, please comment!
A couple of notes:
* please don't be offended if I don't respond to your post, I have a very busy travel and conference schedule coming up (I'll see you at an upcoming conference!)
* if you're interested, please watch for a DM so we can get the conversation started.
* being only 1 podcast per month, we don't need a ton of guests, and it might be a few months before your specific interview gets aired
Thank you everyone!
https://redd.it/1suls4e
@r_cpp
Parallel C++ for Scientific Applications: Integrating C++ and Python
https://www.youtube.com/watch?v=qlLgtSCw0ZA
https://redd.it/1suiegc
@r_cpp
Hunting a Windows ARM crash through Rust, C, and build-system configurations
https://autoexplore.medium.com/hunting-a-windows-arm-crash-through-rust-c-and-a-build-system-configurations-f768dd66d5c5
https://redd.it/1sucxti
@r_cpp
Grupo SIGNAL
https://signal.group/#CjQKIJF6YRUebui-iF3UU0PoOmCX5MCUYIMAYlrIeQM6T1UUEhAmZqnBHrQ9Ky22CrNZ3VSu
https://redd.it/1su2pnu
@r_cpp
Can AI write truly optimized C++?
https://pvs-studio.com/en/blog/posts/cpp/1366/
https://redd.it/1stiofv
@r_cpp
Libraries for general purpose 2D/3D geometry vocabulary types?
I work in the geospatial industry and have worked on plenty of large projects that have their own internal geometry libraries. Some good, some bad, most with interesting historical choices. I recently joined a new project that hasn't really defined its vocabulary types yet, and I'm finding that extremely inconvenient, so I'm looking around at what is common.
The kinds of things I'm looking for are:
`Vector<typename T, size_t Dimension>`: Basically a `std::array<T, Dimension>` with a vector-like API
`Point`: A wrapper around a `Vector` with point semantics
`Size`: A wrapper around a `Vector` with size semantics
`Range`: A basic min/max interval
`AxisAlignedBox`: A set of `Range`s in N dimensions
`RotatedBox`: An `AxisAlignedBox` with a basis `Vector`
`Polyline`: A `std::vector<Point>` assumed to be open
`Polygon`: A `std::vector<Point>` assumed to be closed
`Matrix`: An NxM matrix
...
I know there are plenty of vector/matrix/linear algebra libraries out there, often focused on flexibility and raw computational performance. I'm more interested in nice vocabulary types that implement proper semantics via convenient methods and operators.
It seems these things are often provided by game engines, but pulling in an entire game engine for a non-gaming project feels silly.
So if you were starting a new, greenfield C++ application dealing with 3D geometric data, which existing library, if any, would you reach for?
https://redd.it/1stif6d
@r_cpp
C++ is unsafe. Rust is safe. Should we all move to Rust? 44 CVEs found in Rust Coreutils audit.
https://www.phoronix.com/news/Ubuntu-Rust-Coreutils-Audit
https://redd.it/1st3rbf
@r_cpp
Binary debug for nested big ass structures.
Heyaa,
So recently I had to compare binary layouts of multi-level nested, big fat structures. I was surprised to find that there are no good tools to do that. The best I could find was the watch window in Visual Studio. I also tried WinDbg, which doesn't work well with macros and arrays. To make matters worse, this big structure has offsets that point beyond the structure itself. There is no good tool that automatically shows the value of each field.
Tldr: I have a custom buffer layout with multiple levels of nested structures.
https://redd.it/1sssd6w
@r_cpp
showcasing the elegance of asyncio's concurrent programming model:
asyncio::task::Task<void> asyncMain(const int argc, char *argv[]) {
    auto client = Client::make("localhost:50051");
    co_await all(
        // Unary RPC
        asyncio::task::spawn([&]() -> asyncio::task::Task<void> {
            sample::EchoRequest req;
            req.set_message("Hello gRPC!");
            const auto resp = co_await client.echo(req);
            fmt::print("Echo: {}\n", resp.message());
        }),
        // Server streaming + client streaming, connected via a channel
        asyncio::task::spawn([&]() -> asyncio::task::Task<void> {
            sample::GetNumbersRequest req;
            req.set_value(1);
            req.set_count(5);
            auto [sender, receiver] = asyncio::channel<sample::Number>();
            const auto result = co_await all(
                client.getNumbers(req, std::move(sender)),
                client.sum(std::move(receiver))
            );
            const auto &resp = std::get<sample::SumResponse>(result);
            fmt::print("Sum: {}, count: {}\n", resp.total(), resp.count());
        }),
        // Bidirectional streaming
        asyncio::task::spawn([&]() -> asyncio::task::Task<void> {
            auto [inSender, inReceiver] = asyncio::channel<sample::ChatMessage>();
            auto [outSender, outReceiver] = asyncio::channel<sample::ChatMessage>();
            co_await all(
                client.chat(std::move(outReceiver), std::move(inSender)),
                asyncio::task::spawn([&]() -> asyncio::task::Task<void> {
                    sample::ChatMessage msg;
                    msg.set_content("Hello server!");
                    co_await asyncio::error::guard(outSender.send(std::move(msg)));
                    outSender.close();
                }),
                asyncio::task::spawn([&]() -> asyncio::task::Task<void> {
                    const auto msg = co_await asyncio::error::guard(inReceiver.receive());
                    fmt::print("Chat reply: {}\n", msg.content());
                })
            );
        })
    );
}
The channel-based pipeline connecting `getNumbers` and `sum` is especially worth noting: numbers produced by the server-streaming RPC flow directly through the channel into the client-streaming RPC. The whole pipeline looks like synchronous code, but is fully asynchronous underneath.
> Full source code: [GitHub](https://github.com/Hackerl/asyncio/tree/master/sample/grpc)
> Due to length limits, the server-side section will be covered in Part 2.
https://redd.it/1ssjrsx
@r_cpp
        void (AsyncStub::*method)(grpc::ClientContext *, const Request *,
                                  grpc::ClientReadReactor<Element> *),
        std::shared_ptr<grpc::ClientContext> context,
        Request request,
        asyncio::Sender<Element> sender
    ) {
        Reader<Element> reader{context};
        std::invoke(method, mStub->async(), context.get(), &request, &reader);
        reader.AddHold();
        reader.StartCall();
        const auto result = co_await asyncio::error::capture(
            asyncio::task::spawn([&]() -> asyncio::task::Task<void> {
                while (true) {
                    auto element = co_await reader.read();
                    if (!element)
                        break;
                    co_await asyncio::error::guard(sender.send(*std::move(element)));
                }
            })
        );
        reader.RemoveHold();
        co_await reader.done();
        if (!result)
            std::rethrow_exception(result.error());
    }

    // 3. Client streaming: read data from Receiver and write into the stream
    template<typename Response, typename Element>
    asyncio::task::Task<Response>
    call(
        void (AsyncStub::*method)(grpc::ClientContext *, Response *,
                                  grpc::ClientWriteReactor<Element> *),
        std::shared_ptr<grpc::ClientContext> context,
        asyncio::Receiver<Element> receiver
    ) {
        Response response;
        Writer<Element> writer{context};
        std::invoke(method, mStub->async(), context.get(), &response, &writer);
        writer.AddHold();
        writer.StartCall();
        const auto result = co_await asyncio::error::capture(
            asyncio::task::spawn([&]() -> asyncio::task::Task<void> {
                while (true) {
                    auto element = co_await receiver.receive();
                    if (!element) {
                        if (!co_await writer.writeDone())
                            fmt::print(stderr, "Write done failed\n");
                        if (element.error() != asyncio::ReceiveError::Disconnected)
                            throw co_await asyncio::error::StacktraceError<std::system_error>::make(element.error());
                        break;
                    }
                    co_await writer.write(*std::move(element));
                }
            })
        );
        writer.RemoveHold();
        co_await writer.done();
        if (!result)
            std::rethrow_exception(result.error());
        co_return response;
    }

    // 4. Bidirectional streaming: hold both a Receiver (input) and a Sender (output)
    template<typename RequestElement, typename ResponseElement>
    asyncio::task::Task<void>
    call(
        void (AsyncStub::*method)(grpc::ClientContext *,
                                  grpc::ClientBidiReactor<RequestElement, ResponseElement> *),
        std::shared_ptr<grpc::ClientContext> context,
        asyncio::Receiver<RequestElement> receiver,
        asyncio::Sender<ResponseElement> sender
    ) {
        Stream<RequestElement, ResponseElement> stream{context};
        std::invoke(method, mStub->async(), context.get(), &stream);
        stream.AddHold();
        stream.StartCall();
        const auto result = co_await asyncio::error::capture(
            all(
                asyncio::task::spawn([&]() -> asyncio::task::Task<void> {
                    while (true) {
                        auto element = co_await stream.read();
                        if (!element)
                            break;
                        if (const auto res = co_await sender.send(*std::move(element)); !res) {