Stay up-to-date with everything C++! Content directly fetched from the subreddit just for you. Join our group for discussions: @programminginc Powered by: @r_channels
The STL for Geometry: Thirty-Year Evolution of C++ Libraries
https://polydera.com/algorithms/the-stl-for-geometry
https://redd.it/1t19d0y
@r_cpp
Microsoft ODBC Driver 17.11.1 for SQL Server released
ODBC Driver 17.11.1 is out.
Fixes:
Parameter array processing: SQL_ATTR_PARAMS_PROCESSED_PTR now reports correctly, and row counting is fixed when SQL_PARAM_IGNORE is used
Connection error with Data Classification metadata in async mode
XA recovery transaction ID computation
RPM side-by-side installs now work
Debian package license acceptance
New platforms:
macOS 14, 15, 26
Debian 13
RHEL 10
Oracle Linux 9, 10
SUSE 16
Ubuntu 24.04, 25.10
Alpine 3.21, 3.22, 3.23
Download: https://learn.microsoft.com/en-us/sql/connect/odbc/download-odbc-driver-for-sql-server
Full blog post: Microsoft ODBC Driver 17.11.1 for SQL Server Released | Microsoft Community Hub
https://redd.it/1t15ffy
@r_cpp
CSC4700: Parallel C++ for Scientific Applications
https://www.youtube.com/playlist?list=PL7vEgTL3Falab59uJoOb7AtFQKVuL0MV-
https://redd.it/1t0viu1
@r_cpp
Sub-microsecond timing on EC2 is way messier than I expected
Been doing sub-microsecond profiling on EC2 and kept getting wildly inconsistent cycle counts.
One mistake was using cpuid as the serialization barrier before rdtsc. On a VM that can be a mess, since cpuid often traps so the hypervisor can fake feature flags. So now the "measurement overhead" includes a VM exit, which is thousands of cycles on some runs.
Switching to lfence + rdtsc made the numbers a lot more stable.
Then I hit the calibration problem. Measuring TSC frequency with a short sleep() looked simple, but the results were all over the place. Scheduler delay, timer granularity, and probably vCPU steal time were enough to make the calibration useless at this scale. A busy-wait loop with pause gave me a much saner number.
Also forgot to pin the thread at first. rdtscp at least tells you when you migrated, but those samples are basically trash. Same with the first few iterations before icache/branch predictor warm up.
Curious what people here actually use for sub-microsecond timing. Do you just trust nanobench / Google Benchmark, or do you still end up writing your own rdtsc wrappers once VMs get involved?
https://redd.it/1t0o16n
@r_cpp
I made a super fast CNN library for C++20 from scratch.
I was exploring Convolutional Neural Networks (CNNs) in more depth and had the idea of making a dependency-free, header-only CNN library for C++20.
I did some research and found tiny-dnn, a CNN library for C++14 that is super fast but hasn't been updated since 2016, so I took on the challenge of writing my own CNN library from scratch for C++20 with aggressive CPU performance tuning, and I got close to what I was hoping for.
I benchmarked against PyTorch and the results were good enough to post; I have documented the library along with the benchmark results. In some instances it even outperformed PyTorch, which shocked me too.
Documentation: https://lnkd.in/gNFF74JJ
To give a rough idea of how fast the engine is: it reaches 97.51% accuracy on the MNIST dataset after just 25 seconds of training, with a throughput of 2k+ images/second.
Processor: Ryzen 7 5800H (mobile)
An overview:
My engine uses a DAG layout
It has zero allocations
Multithreading support
L1/L2 cache optimization
and a lot of internal machinery beyond that. Here is the repository link:
"https://github.com/KunwarPrabhat/CustomCNN"
My engine is still at an early stage, so there are a lot of things that can be improved. I need more developers to contribute if they're interested :))
https://redd.it/1t080e4
@r_cpp
Is anyone still using the CTRE library in 2026?
Back in 2019, an engineer/committee member made a splash by introducing CTRE, Compile Time Regular Expressions.
It was implemented using type aliases instead of constexpr function calls, which made it feasible even in C++11. Very impressive stuff.
It also gave you a nicer callsite spelling in C++20 leveraging generalized non-type template parameter passing.
I'm curious if anyone is still using it. If no, why not? I'd love to hear!
https://redd.it/1t05xxt
@r_cpp
C++26: string and string_view improvements
https://www.sandordargo.com/blog/2026/04/29/cpp26-string-string_view-improvements
https://redd.it/1t02001
@r_cpp
Some theory and practice of alignment in C++ (guide part 3).
https://pvs-studio.com/en/blog/posts/cpp/1369/
https://redd.it/1szx3m5
@r_cpp
CPP* Compiler Project *Almost Too Good To Be True
https://github.com/LukeSchoen/CPrime
https://redd.it/1szi16y
@r_cpp
Why C++ Is Growing and What C++26 Means for Production Systems
https://www.youtube.com/watch?v=Qvr9MTAU_y4
https://redd.it/1szbl25
@r_cpp
Cpp Files Still Help Breaking Build Dependencies of Modules
https://abuehl.github.io/2026/04/23/cpp-files-still-help-breaking-dependencies.html
https://redd.it/1sz5a9o
@r_cpp
Build a simple and yet powerful TUI file-organizer (forg?)
And yes, I know, good organizing can spare you the searching. But I'm constantly annoyed down to my bones (not sure how to put this in proper English) when I look at my /Downloads, /Documents, /Active_Projects, /AnythingAtAll and still find a lot of stuff in them after some days of intensive work on my laptop... Aaargh... I often think to myself.
So here it is: a simple OSX/Linux tool with a TUI... everything lined up in your /Organized... (configurable, of course)
May it be as useful for you as it already is for me.
Completely FREE. Clone... compile... have fun... cheers...
github: https://github.com/SaschaKohler/file-organizer
https://redd.it/1sywf3f
@r_cpp
atomic_queue benchmarks SMT vs no-SMT performance
https://max0x7ba.github.io/atomic_queue/html/benchmarks.html
https://redd.it/1syrt4e
@r_cpp
boost::container::hub ACCEPTED into Boost
Hi all, I'm glad to announce that the proposed boost::container::hub container has been ACCEPTED into Boost. Congrats u/joaquintides!
More details here:
https://lists.boost.org/archives/list/boost@lists.boost.org/thread/7WZ7QTPE2YDYD5OYCKXKKV2N74JHJRZL/
Reminder: hub is a sequence container with O(1) insertion and erasure, element stability, and great performance (see these benchmarks): pointers/iterators to an element remain valid as long as the element is not erased. hub is very similar, but not entirely equivalent, to C++26 std::hive (hence the different name; consult the section "Comparison with std::hive" for details).
https://redd.it/1sycrw7
@r_cpp
stackless coroutines for gamedev in ~200 lines of C++
https://vittorioromeo.com/index/blog/sfex_coroutine.html
https://redd.it/1t171z7
@r_cpp
External Polymorphism in C++26
While developing our type erasure library Any++ for C++23, we had to resort to a preprocessor-based EDSL to eliminate the boilerplate.
While studying the C++26 reflection proposals, I kept searching for a way to replace this preprocessor programming.
Implementing Type Erasure requires three components:
- A "V-Table" for indirecting function calls.
- A "Facade" for the ergonomic connection between the data and the "V-Table."
- An "Adapter" to connect the functions of the "V-Table" to the specific type.
It suffices to describe one of these components. The other two can then be generated automatically.
One way to do this with C++26 is to specify in code the default adapter and generate the V-table and facade.
C++26 allows you to generate a class using define_aggregate.
The key is that the class can only contain data members.
However, since a data member can have an operator(), member functions can also be simulated this way.
These data members can be specified so that they don't occupy any memory. This allows you to access the enclosing class in the operator() and use the information it manages (the V-table and the data reference).
Interestingly, this method also enables static dispatch with an elegant interface.
To coincide with the GCC 16 release and its excellent reflection implementation, I've sketched out such an API:
template <typename Self>
struct stringable {
    [[=default_{}]] // that says: when not specialized, call self.as_string()
    static std::string as_string(Self const& self);
};

void print(std::vector<dyn<stringable>> const& things) {
    for (auto& thing : things) {
        std::println("{}", thing.as_string());
    }
}

template <>
struct stringable<int> {
    static std::string as_string(int const& self) {
        return std::to_string(self);
    }
};

template <>
struct stringable<std::string> {
    static std::string as_string(std::string const& self) {
        return self;
    }
};

struct foo { double f; };

template <>
struct stringable<foo> {
    static std::string as_string(foo const& self) {
        return "foo: " + std::to_string(self.f);
    }
};

struct boo {
    bool b = false;
    std::string as_string() {
        return std::string{"boo? "} + (b ? "T" : "F");
    }
};

int main(int argc, char *argv[]) {
    // static dispatch
    auto a1 = trait_as<int, stringable>{{42}};
    auto z_from_self = a1.as_string();
    std::println("z_from_trait = {}", z_from_self);

    // dynamic dispatch, reference semantics only
    int i = 4711;
    auto dyn_stringable = dyn<stringable>{i};
    auto z_from_dyn_stringable = dyn_stringable.as_string();
    std::println("z_from_dyn_stringable = {}", z_from_dyn_stringable);

    std::string s = "hello world";
    foo a_foo{3.14};
    boo a_boo{true};
    print({dyn<stringable>{i}, dyn<stringable>{s}, dyn<stringable>{a_foo}, dyn<stringable>{a_boo}});
}
Juan Alday of Citadel Securities: Why C++ Wins in Finance (April 28th, 2026)
https://www.youtube.com/watch?v=InLxLEqg_fs
https://redd.it/1t0tnv4
@r_cpp
Syntactic sugar of member function binding?
Instead of
std::bind_front(&Very::Long::Namespace::VeryLongClassName::method, pobject)
you can define
#define BIND_FRONT(method, pobject) std::bind_front(&std::remove_cvref_t<decltype(*(pobject))>::method, (pobject))
and write BIND_FRONT(method, pobject).
Which spelling of such sugar would you prefer?
(pobject->method)
((*pobject).method)
((*pobject)::method)
BIND_FRONT(method, pobject) // a.k.a. std::bind_front(&std::remove_cvref_t<decltype(*(pobject))>::method, (pobject))
Safe Optimistic Lock Coupling
https://databasearchitects.blogspot.com/2026/04/safe-optimistic-lock-coupling.html
https://redd.it/1t06gif
@r_cpp
Fast GPU Linear Algebra via Compile Time Expression Fusion
https://arxiv.org/abs/2604.22242
https://redd.it/1t01hj5
@r_cpp
Working on a new language (vibe-compiler) – Looking for feedback on my C++23 Lexer/Parser
Hey everyone,
I've been working on a custom programming language called vibe-compiler. It's a low-level project built with C++23.
I want to learn more about how I can build OOP support into my custom language. I'm following www.craftinginterpreters.com. I'd love some feedback on my approach to naming conventions and on how I can improve the project. Also, if anyone is interested in contributing or just chatting about compiler design, I'd love to connect!
This is the GitHub repo: https://github.com/gemrey13/vibe-compiler
https://redd.it/1szwqdg
@r_cpp
GCC 16.1 released with many new C++26/23 features, C++20 now the default stable language version
https://gcc.gnu.org/gcc-16/changes.html#cxx
https://redd.it/1szveq3
@r_cpp
What do you think is a keyword that should be added to C++?
https://redd.it/1szgavx
@r_cpp
ACAV v1.0.0: an open-source GUI tool for exploring Clang ASTs in C/C++ projects
I am the author of ACAV, the Aurora Clang AST Viewer, and I have just made the first public release, v1.0.0.
ACAV is an open-source Qt desktop application for exploring Clang ASTs in C, C++, Objective-C, and Objective-C++ projects that provide a `compile_commands.json` compilation database.
It supports source-to-AST navigation, AST-node search, source-code search, declaration-context views, selected-subtree JSON export, and background AST generation/caching.
Links:
\- GitHub: https://github.com/uvic-aurora/acav
\- Release: https://github.com/uvic-aurora/acav/releases/tag/v1.0.0
\- Manual: https://uvic-aurora.github.io/acav-manual/index.html
\- Demo video: https://youtu.be/0M7dYAlnrTI
There are also prebuilt Docker/Podman demo images for LLVM 20, 21, and 22.
https://github.com/uvic-aurora/acav/pkgs/container/acav
I would appreciate feedback from C++ users, especially anyone who works with Clang tooling or wants a more visual way to inspect ASTs.
https://redd.it/1sz6w3w
@r_cpp
Custom shell i have been working on
Hi, I have been working on this shell for some time and I thought of getting some feedback. Note that I am still working on the project: https://github.com/Indective/Ish
https://redd.it/1sz22ws
@r_cpp
learncpp.com not online anymore
I can't seem to reach the site learncpp.com anymore.
Anyone else with the same issue?
Do you have any suggestions for a similar site where I can learn C++? I am a complete novice trying to learn C++ after C (still learning C).
https://redd.it/1syvxrb
@r_cpp
Discussion: Using Python as a control plane for high-performance systems - Is the overhead of the C-API still the main bottleneck?
Hi everyone,
I’ve been working on a high-performance infrastructure project where the core data engines are written in Rust and C++ (using Polars and custom C-style binary protocols), but the orchestration and business logic are handled by Python 3.13+.
In the C++ world, we often avoid Python for anything "mission-critical" due to the GIL and the overhead of the Python C-API. However, I’ve been experimenting with a Shared Memory IPC approach to decouple the high-level logic from the data plane.
I’d love to get some feedback from the C++ community on this architecture:
1. Zero-Copy IPC via Shared Memory: Instead of using Protobuf or JSON over a socket, I'm allocating segments in the Windows Kernel and using struct packing to write raw bytes. For those of you building C++ engines that need to talk to high-level "glue" languages, do you still prefer Unix Domain Sockets/Named Pipes, or has Shared Memory become your standard?
2. Memory Alignment and Padding: When interfacing Python’s struct module (C-style) with C++ structs, I've had to be extremely careful with Little Endianness and memory alignment. Is there a more robust way to handle this without bringing in heavy dependencies like FlatBuffers?
3. CPU Affinity: I'm pinning the Python consumer to specific cores to avoid context switching when reading from the shared buffer. In a hybrid system, do you usually reserve specific cores for the C++ engine and leave others for the management scripts, or do you let the OS scheduler handle it?
The Goal:
I'm trying to build a system where Python handles the "thinking" (logic/orchestration) while the "doing" (I/O and computation) happens at O(1) or O(log n) at the hardware level.
I'm curious: What is your "threshold" for moving a component out of Python/Go and into pure C++? Is it strictly latency, or is it about memory safety and deterministic behavior?
https://redd.it/1syrus9
@r_cpp
I tried compile-time heapsort in TMP. It basically became selection sort.
Tried implementing compile-time sorting with old-school TMP (recursive templates, no constexpr). Yeah, constexpr sort exists now, but I wanted to see how far pure template recursion could go. Quicksort and mergesort worked fine. Heapsort was the one that broke.
Then it clicked: heapsort assumes cheap random access. Parent node, left child, right child, all index arithmetic. But in a typelist like arr<5, 3, 8, 1> there's no arr[i]. Every element access peels the head off recursively, so it's O(n) per lookup. Heapify becomes expensive, sift-down becomes expensive, and the whole thing degrades.
What I actually ended up with was... selection sort. Find the min by scanning the whole list, pull it out, recurse. O(n²) template instantiations. Not great.
Quicksort doesn't have this problem because it just filters into two sublists (less-than pivot, greater-than pivot). No indexing needed. Mergesort splits with take/drop which is O(n) but only happens once per level, so it stays O(n log n) overall.
I didn't really clock the random access dependency until I was halfway through writing the heap version. Felt kind of dumb in retrospect. Never really felt how much big-O depends on the data structure until TMP took away my arrays.
Full code in the comments if anyone wants to look at it. Fair warning: the mergesort lives in namespace www because I was iterating on these in separate files and never bothered renaming.
Anyone else run into algorithms that stop making sense in TMP?
https://redd.it/1syqjdu
@r_cpp
A Principal Software Engineer at Epic Games / 25 Year Vet, talks about why AI is just a "giant switchboard" and why code is a delicate crystal.
I've been thinking a lot about how people actually get comfortable with complex topics like programming: not through tutorials, but by just being passively around the conversations.
So I recorded one of those conversations.
I sat down with Dietmar Hauser (25+ years in the industry, Principal Software Engineer at Epic), and we went from Commodore 64 days, literally typing code out of magazines, all the way to modern C++ and where we find ourselves now with yet another layer of abstraction: LLMs.
What stuck with me wasn't just the history, but how he talks about coding as this fragile, interconnected system ("a delicate crystal") that shatters if you touch the wrong thing, which I found very interesting.
It’s a long, unfiltered discussion, more like something you overhear between two people deep in the field than a structured interview.
If you’re trying to get a feel for how experienced engineers actually think about code, or if you wanna warm up to the idea, this convo might be useful:
https://youtu.be/PE3aCgSHvTQ
https://redd.it/1syd309
@r_cpp