Extension for real-time profiling in Visual Studio
https://youtu.be/FPQ3cStwy6s
https://redd.it/1g8rd7j
@r_cpp
Security in C++ - Hardening Techniques From the Trenches - Louis Dionne - C++Now 2024
https://www.youtube.com/watch?v=t7EJTO0-reg
https://redd.it/1g8pgzu
@r_cpp
Long Term Project Idea
Hello r/cpp! I am super familiar with Python, Java, SQL, etc., but recently I've been learning C++. I want to work on a project and devote 2-3 hours a day to building something in C++, but I can't find anything to get my hands on. Do you guys have any idea of what I could do? I would prefer something related to finance, but I am open to almost anything cool enough lol.
P.S. I'm really willing to devote a lot of time and effort to something; I'm just lacking direction.
Thank you :)
https://redd.it/1g8idx7
@r_cpp
c++17 asio tcp http server keeps saying invalid pointer when writing
So I'm making a simple TCP HTTP server and, no matter what I try, I keep getting error code system:10014 (yes, I'm on Windows):
"read_some: The system detected an invalid pointer address in attempting to use a pointer argument in a call [system:10014]"
C++17 code using Boost.Asio:
using tcp = boost::asio::ip::tcp;

try
{
    boost::asio::io_service io_service;
    tcp::acceptor acceptor(io_service, tcp::endpoint(tcp::v4(), 8080));
    for (;;)
    {
        tcp::socket* socket = new tcp::socket(io_service);
        acceptor.accept(*socket);
        boost::system::error_code ec;
        //std::vector<char> buffer(1024); // Buffer of 1024 bytes.
        std::vector<char> data(socket->available());
        // Synchronously read some data into the buffer.
        size_t bytes_read = socket->read_some(boost::asio::buffer(data), ec);
        if (ec) {
            CORE_DEBUG("ERROR Reading {0}", ec.message());
        } else {
            std::cout << "Received " << bytes_read << " bytes:\n";
            std::cout.write(data.data(), bytes_read);
            std::cout << std::endl;
        }
        // Print the data received (as a string).
        const char message[] = "HTTP/1.0 200 OK\r\nContent-Length: 45\r\n\r\n<html><body><i>Hello, world</i></body></html>";
        boost::asio::write(*socket, boost::asio::buffer(message), ec);
        if (ec) {
            CORE_DEBUG("ERROR WRITING {0}", ec.message());
        } else {
            CORE_DEBUG("Wrote");
        }
    }
}
catch (std::exception& e)
{
    std::cerr << "Exception: " << e.what() << std::endl;
}
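For what it's worth, a minimal sketch of the loop body I would try instead (my guess at the cause, not a confirmed fix): socket->available() is often still 0 right after accept(), so data ends up empty and boost::asio::buffer(data) hands read_some a null pointer, which matches what WSAEFAULT (system:10014) complains about. A fixed-size buffer sidesteps that:
    tcp::socket socket(io_service);            // value socket instead of new (no leak)
    acceptor.accept(socket);
    boost::system::error_code ec;
    std::vector<char> data(1024);              // never zero-sized
    std::size_t n = socket.read_some(boost::asio::buffer(data), ec);
    if (!ec)
        std::cout.write(data.data(), static_cast<std::streamsize>(n));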
https://redd.it/1g8dd5l
@r_cpp
return either X3 or Y3 in main() [link to godbolt](https://godbolt.org/z/85o4n5b56)
main:
mov eax, 6
ret
Since closures have to be allocated on the heap, as they can and will outlive the stack frame of the function in which they are created, does this make C++ the only language that can achieve this kind of behavior?
AFAIK Rust's const cannot allocate on the heap, C has no way to do closures, and maybe Zig can do this (?).
What do you think? Would you come up with something else? (You cannot use classes or structs, and it has to be "polymorphic" at compile time)
https://redd.it/1g7xdou
@r_cpp
Clang-tidy scanning system headers
Alright, I've been down the rabbit hole trying to speed up my Clang-Tidy scan.
At the moment it's almost useless as it takes 30sec or more to scan just a few files. The reason it takes so long seems to be that Clang-tidy finds thousands of warnings in 'non-user' code:
"Suppressed 19619 warnings (19619 in non-user code)."
I don't know if it's possible to ignore system headers, but why would anyone ever want to scan system headers for readability/modernization checks when they're not part of user-written code?
Command:
clang-tidy -p Workspace/build/utility/compile_commands.json --enable-check-profile Workspace/src/utility/src/managed_node.cpp
My compile_commands.json file seems very reasonable. I have 5 cpp files with a couple of local includes and then a sequence of system headers that are prefixed with '-isystem'. Interestingly, I tried simply removing the '-isystem' paths, which led to clang-tidy finishing in 0.1s, so it is without a doubt wasting time on global files that I have no way to change anyway. The problem is that it then errors on all the system headers.
Can anyone explain how to configure clang-tidy to skip system header checks or perhaps explain why it might not even be possible?
Edit: The setup I'm working on uses VS Code, which integrates clang-tidy nicely by automatically scanning open files and suggesting fixes, either via clang-tidy itself or via Copilot. Since it takes minutes before suggestions appear and it's quite CPU-intensive, I've had to turn it all off.
https://redd.it/1g7wt5y
@r_cpp
Seeking Advice on Designing Large-Scale Code Optimization Exercises
I am the Head of Training at a company that specializes in software development, with a strong focus on code writing and optimization, primarily in C++ with occasional use of Python.
I am currently looking to design or source a new training exercise for our trainees, centered around large-scale code optimization. The objective is to help them develop skills in identifying bottlenecks, eliminating redundant data passes, and optimizing entire codebases, as opposed to refining specific functions or small sections of code, which are most of the exercises we have as of right now.
I would appreciate any guidance on how to approach this, whether through existing exercises or best practices for creating such training modules from scratch.
Thank you for your insights.
https://redd.it/1g7p0cn
@r_cpp
Scanned</th></tr>");
for (JsonPair kv : doc.as<JsonObject>()) {
String uid = kv.value()["uid"].as<String>();
String lastScanned = kv.value()["last_scanned"].as<String>();
html.concat("<tr><td>");
html.concat(uid);
html.concat("</td>");
html.concat("<td>");
html.concat(lastScanned);
html.concat("</td></tr>");
}
html.concat("</table></div>");
renderFooter(html);
server.send(200, "text/html", html);
} else {
server.send(500, "text/plain", "Failed to retrieve data from Firebase");
Serial.println(fbdo.errorReason());
}
}
void handleAPIRFID() {
// This handler can be used to provide RFID data as JSON
if (Firebase.RTDB.getJSON(&fbdo, "/rfid")) {
FirebaseJson& json = fbdo.jsonObject();
String jsonStr;
json.toString(jsonStr, true);
server.send(200, "application/json", jsonStr);
} else {
server.send(500, "text/plain", "Failed to retrieve data from Firebase");
Serial.println(fbdo.errorReason());
}
}
void handleAPIUpdateName() {
if (server.method() != HTTP_POST) {
server.send(405, "text/plain", "Method Not Allowed");
return;
}
// Parse the JSON body
StaticJsonDocument<512> doc;
DeserializationError error = deserializeJson(doc, server.arg("plain"));
if (error) {
Serial.print("Failed to parse updateName request: ");
Serial.println(error.c_str());
server.send(400, "application/json", "{\"success\":false, \"message\":\"Invalid JSON\"}");
return;
}
String uid = doc["uid"].as<String>();
String name = doc["name"].as<String>();
if (uid.isEmpty()) {
server.send(400, "application/json", "{\"success\":false, \"message\":\"UID is required\"}");
return;
}
// Update name in Firebase
String basePath = "/rfid/";
basePath.concat(uid);
basePath.concat("/name");
if (Firebase.RTDB.setString(&fbdo, basePath.c_str(), name)) {
server.send(200, "application/json", "{\"success\":true}");
} else {
server.send(500, "application/json", "{\"success\":false, \"message\":\"Failed to update name in Firebase\"}");
Serial.println(fbdo.errorReason());
}
}
void handleExportData() {
// Export data as JSON
if (Firebase.RTDB.getJSON(&fbdo, "/rfid")) {
FirebaseJson& json = fbdo.jsonObject();
String jsonStr;
json.toString(jsonStr, true);
server.sendHeader("Content-Disposition", "attachment; filename=\"rfid_data.json\"");
server.send(200, "application/json", jsonStr);
} else {
server.send(500, "text/plain", "Failed to retrieve data from Firebase");
Serial.println(fbdo.errorReason());
}
}
void handleFavicon() {
server.send(204, "image/x-icon", ""); // No Content
}
void handleNotFound() {
server.send(404, "text/plain", "404: Not Found");
}
// Helper functions
void renderHeader(String& html, String title) {
html.concat("<!DOCTYPE html><html><head>");
html.concat("<meta name='viewport' content='width=device-width, initial-scale=1'>");
html.concat("<title>");
html.concat(title);
html.concat("</title>");
html.concat("<style>");
html.concat("body { font-family: Arial; margin: 0; padding: 0; background-color: #f2f2f2; }");
html.concat("nav { background-color: #333; color: #fff; padding: 10px; }");
html.concat("nav a { color: #fff; margin-right: 15px; text-decoration: none; }");
html.concat(".container { padding: 20px; }");
html.concat("table { width: 100%; border-collapse: collapse; }");
html.concat("th, td { border: 1px solid #ddd; padding: 8px; }");
html.concat("th { background-color: #333; color: white; }");
html.concat("</style></head><body>");
html.concat("<nav><a
Firebase ESP32 Client: Getting INVALID_EMAIL Error on ESP32 PlatformIO
[error code found in serial monitor](https://i.sstatic.net/YjilDEfx.png)
I'm working on an ESP32 project that uses an RFID reader to log attendance data to Firebase Realtime Database, in PlatformIO. I'm using the Firebase ESP Client library by Mobizt.
I keep encountering the following error messages in the Serial Monitor:
Token Info: type = ID token, status = error
Token Error: INVALID_EMAIL
Token Info: type = ID token, status = error
Token Error: bad request
**Fixes Attempted:**
* Verified that Anonymous Authentication is enabled in the Firebase Console under Authentication > Sign-in method.
* Double-checked my API key and database URL to ensure they are correct.
* Ensured that auth.user.email and auth.user.password are not set anywhere in my code.
* Updated the Firebase ESP Client library to the latest version (I'm using version 4.3.1).
* Waited for authentication to complete before interacting with Firebase, using a loop to check Firebase.ready().
* Erased the ESP32 flash memory to clear any old credentials.
Despite these efforts, the INVALID_EMAIL error persists.
**Here's my code:**
#include <WiFi.h>
#include <Firebase_ESP_Client.h>
#include <SPI.h>
#include <MFRC522.h>
#include <LittleFS.h>
#include <ArduinoJson.h>
#include <WebServer.h>
#include <time.h>
// Wi-Fi credentials
const char* ssid = "X";
const char* password = "X";
// Firebase API key and database URL
#define API_KEY "X"
#define DATABASE_URL "X"
// Firebase objects
FirebaseData fbdo;
FirebaseAuth auth;
FirebaseConfig config;
// RFID setup
#define RST_PIN 22
#define SS_PIN 21
MFRC522 mfrc522(SS_PIN, RST_PIN);
// Web server setup
WebServer server(80);
// Time setup
const char* ntpServer = "pool.ntp.org";
const long gmtOffset_sec = -28800;
const int daylightOffset_sec = 3600;
// Variables
String uidString;
String currentTime;
// Function declarations
void handleRoot();
void handleProfiles();
void handleNotFound();
void tokenStatusCallback(TokenInfo info);
void configureTime();
String getFormattedTime();
void handleAPIRFID();
void handleAPIUpdateName();
void handleExportData();
void renderHeader(String& html, String title);
void renderFooter(String& html);
void handleFavicon();
String getTokenType(TokenInfo info);
String getTokenStatus(TokenInfo info);
void setup() {
Serial.begin(115200);
SPI.begin();
mfrc522.PCD_Init();
delay(4);
// Initialize LittleFS
if (!LittleFS.begin()) {
Serial.println("An error occurred while mounting LittleFS");
} else {
Serial.println("LittleFS mounted successfully");
}
// Connect to Wi-Fi
WiFi.begin(ssid, password);
Serial.print("Connecting to Wi-Fi");
while (WiFi.status() != WL_CONNECTED) {
delay(500);
Serial.print(".");
}
Serial.println("\nConnected to Wi-Fi");
Serial.print("IP Address: ");
Serial.println(WiFi.localIP());
// Configure time
configureTime();
// Ensure no email/password is set
auth.user.email = "";
auth.user.password = "";
// Initialize Firebase Config
config.api_key = API_KEY;
config.database_url = DATABASE_URL;
config.signer.anonymous = true;
// Assign the callback function for token status
config.token_status_callback = tokenStatusCallback;
// Initialize Firebase
Firebase.begin(&config, &auth);
Firebase.reconnectWiFi(true);
// Wait for authentication to complete
Serial.println("Authenticating with Firebase...");
unsigned long authTimeout = millis();
const unsigned long authTimeoutDuration = 10000; // 10 seconds timeout
while ((auth.token.uid.length() == 0) && (millis() - authTimeout <
Merge add algorithm
Is there an easy way to take two sorted vectors and merge them into a new vector such that, if two elements have identical keys, the resulting vector just sums their values?
I can code it myself by modifying a standard merge algorithm, but I want to learn how to do it with just the STL or ranges.
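A minimal sketch of one way to do it with the STL (my example, not a canonical answer; C++23's std::views::chunk_by would allow a more range-y version): std::merge keeps things sorted, then a single pass sums duplicate keys.
#include <algorithm>
#include <cstdio>
#include <iterator>
#include <utility>
#include <vector>

using kv = std::pair<int, int>; // key, value

std::vector<kv> merge_add(const std::vector<kv>& a, const std::vector<kv>& b) {
    std::vector<kv> merged;
    merged.reserve(a.size() + b.size());
    std::merge(a.begin(), a.end(), b.begin(), b.end(), std::back_inserter(merged),
               [](const kv& x, const kv& y) { return x.first < y.first; });
    std::vector<kv> out;
    for (const kv& e : merged) {
        if (!out.empty() && out.back().first == e.first)
            out.back().second += e.second;   // identical keys: sum the values
        else
            out.push_back(e);
    }
    return out;
}

int main() {
    std::vector<kv> a{{1, 10}, {3, 30}};
    std::vector<kv> b{{1, 5}, {2, 20}};
    for (auto [k, v] : merge_add(a, b))
        std::printf("%d -> %d\n", k, v);     // 1 -> 15, 2 -> 20, 3 -> 30
}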
https://redd.it/1g7bmm2
@r_cpp
String-interpolation (f'strings) for C++ (P3412) on godbolt
Would be really handy to see this in C++26!
int main() {
    int x = 17;
    std::print(f"X is {x}");
}
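For comparison, the closest thing you can write in standard C++23 today (my example, not from the paper) passes the argument explicitly:
#include <print>

int main() {
    int x = 17;
    std::print("X is {}", x); // no f-string capture; the argument is spelled out
}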
Paper: wg21.link/P3412
Implementation on compiler explorer is available now
https://godbolt.org/z/rK67MWGoz
https://redd.it/1g7e8ms
@r_cpp
Do Projects Like Safe C++ and C++ Circle Compiler Have the Potential to Make C++ Inherently Memory Safe?
As you may know, there are projects being developed with the goal of making C++ memory safe. My question is: what's your personal opinion on them? Do you think they will succeed? Will these projects be able to integrate with existing code without making the syntax more complex or harder to use? Do you personally believe in the success of Safe C++? Do you see a future for it?
https://redd.it/1g7bsn9
@r_cpp
Writing realistic benchmarks is hard with optimizing compiler
Hi, this will be a brief report on my mostly failed efforts to compare std::views::filter
performance to the good ol' for-each (aka the range-based for loop).
I think there will be nothing here that experts do not already know, but it was interesting to me how "sensitive" the results are to seemingly minor changes in the source code, so I wanted to share in case somebody finds it interesting.
First of all I want to say that I know benchmarking std::views::filter is very hard (many dimensions in the benchmark matrix, e.g. type of range elements, size of range, percent of hits, whether it is combined with other views, what you do with the results...) and this is just documenting attempts to benchmark one simple use case.
And before you ask: no, I do not think I benchmarked -O0,
and the order of running the lambdas does not affect the results.
And yes, I do know about Google Benchmark; I was intrigued after reading P3406R0 section 2.7 to hack together a quick comparison of the view and "regular" styles. A proper benchmark would, as I said previously, have a huge number of values in each dimension.
Originally I started with code like this (time_fn is some helper for timing an arbitrary functor):
template<typename Fn>
std::optional<int> time_fn(Fn&& fn, const std::string_view desc) {
    const auto start = std::chrono::steady_clock::now();
    const auto ret = fn();
    const auto end = std::chrono::steady_clock::now();
    std::print("{:<20} took {}\n", desc, std::chrono::round<std::chrono::microseconds>(end - start));
    return ret;
}

int main()
{
    size_t num = 16*1024*1024;
    int needle = 123;
    std::vector<int> vals(num);
    sr::transform(vals, vals.begin(), [num, needle, i = 0](int) mutable {
        if (i++ < num/2) { return needle - 1; } else return (rand() % 100'000);
    });
    const auto pred = [needle](const int val) { return val == needle; };
    auto ancient_way = [&vals, pred] -> std::optional<int>
    {
        for (const auto& val : vals)
        {
            if (pred(val))
            {
                return std::optional{val};
            }
        }
        return std::nullopt;
    };
    auto views_way = [&vals, pred] -> std::optional<int>
    {
        auto maybe_val = vals | sv::filter(pred) | sv::take(1);
        if (maybe_val)
        {
            return std::optional{*maybe_val.begin()};
        }
        else
        {
            return std::nullopt;
        }
    };
    const auto ret_ancient = time_fn(std::move(ancient_way), "ancient");
    const auto ret_views_way = time_fn(std::move(views_way), "views");
}
This kept printing 0 micros because clang is so damn smart that it figured out the values we find are never used, so it optimized away the entire line:
`const auto ret = fn();`
Now this is easily fixed by just using the result; not much to say here, except that if this were part of a bigger benchmark it could easily have been missed.
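A minimal sketch of what "using the result" can look like here (my addition; the names mirror the snippet above): print the returned optionals so the compiler has to keep the timed calls.
    const auto ret_ancient = time_fn(std::move(ancient_way), "ancient");
    const auto ret_views_way = time_fn(std::move(views_way), "views");
    // Observable side effect: clang can no longer delete fn() inside time_fn.
    std::print("found: {} {}\n", ret_ancient.value_or(-1), ret_views_way.value_or(-1));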
Anyways, after this the results were shocking:
ancient took 2757µs
views took 2057µs
I was shocked that the views approach was faster than the regular for loop. Well, it turns out that this was just because in one case the compiler managed to unroll (not vectorize) the loop, and in the other it did not.
"Fix" for this was just breaking inlining of a *helper* function:
[[gnu::noinline]] std::optional<int> time_fn(Fn&& fn, const std::string_view desc) {
Now both ways are the same
ancient took 2674µs
views took 2680µs
But what is more interesting to me is the following. Clang managed to figure out the dynamic size of the vector and propagate that. I mean, it is not dynamic in the sense that it is constant here, but it is not like we are dealing with a std::array with a fixed length; clang actually understood what the length of the vector would be when iterated.
So instead of putting noinline on the helper function, let's break this by just randomizing the length a little bit:
size_t num
Come to the dark side. We have cookies! - Reflections on template<>
C++ is like no other programming language I've encountered.
Sure it's object oriented. So is Smalltalk. So is C#.
Sure it's procedural (or can be) and mid level. So is C.
What really sets it apart is all the things you can do with the template keyword - things that aren't immediately apparent, and things that are very powerful, like genericizing an "interface" at the source level, rather than having to rely on virtual calls to bind to it, allowing the compiler to inline across an interface boundary.
Template wasn't designed specifically to do that, but it allows for it due to the way it works.
Contrast that with C# generics, which do not bind to code at the source level, but rather at the binary level.
What do I mean by binary vs. source-level binding? I had an article at CodeProject to illustrate the difference. X( until today. Let me see if I can boil it down. The template keyword basically makes the compiler work like a mail merge, but with typed, checked and evaluated arguments. That means the result of a template instantiation is - wait for it - more C++ code, in text, which the compiler then re-ingests and compiles as part of its process. Because it works that way, you can do things with it that you couldn't if it didn't produce C++ textual source code as the result (unlike C#'s generics, which produce binary code as the result of an instantiation).
But inlining across interfaces isn't the only thing it's good for that it wasn't designed for.
I have code that allows you to do this
// declare a 16-bit RGB pixel - rgb_pixel<16> is shorthand
// for this:
using rgb565 = pixel<channel_traits<channel_name::R, 5>,  // 5 bits to red
                     channel_traits<channel_name::G, 6>,  // 6 bits to green
                     channel_traits<channel_name::B, 5>>; // 5 bits to blue
// you can now do
rgb565 px(0, 0, 0); // black
int red = px.template channel<channel_name::R>();
int green = px.template channel<channel_name::G>();
int blue = px.template channel<channel_name::B>();
// swap red and blue
px.template channel<channel_name::R>(blue);
px.template channel<channel_name::B>(red);
Astute readers will notice that it's effectively doing compile-time searches through a list of color channel "names" every time a channel<channel_name::?> template instantiation is created.
This is craziness. But it is very useful, and it's not easy to do without relying on the STL (which I often can't do because of complications on embedded platforms).
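A minimal, self-contained sketch of that kind of compile-time search (my illustration, not the author's actual pixel library; channel_name and channel_traits here are stand-ins):
#include <cstddef>
#include <cstdio>

enum class channel_name { R, G, B };

template <channel_name Name, std::size_t Bits>
struct channel_traits {
    static constexpr channel_name name = Name;
    static constexpr std::size_t bits = Bits;
};

// Recursive compile-time search through the pack of channel traits.
template <channel_name Wanted, typename First, typename... Rest>
constexpr std::size_t index_of() {
    if constexpr (First::name == Wanted)
        return 0;
    else
        return 1 + index_of<Wanted, Rest...>();
}

int main() {
    using R = channel_traits<channel_name::R, 5>;
    using G = channel_traits<channel_name::G, 6>;
    using B = channel_traits<channel_name::B, 5>;
    static_assert(index_of<channel_name::G, R, G, B>() == 1); // resolved at compile time
    std::printf("green lives at index %zu\n", index_of<channel_name::G, R, G, B>());
}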
Template specializations are magical, and probably why I most love the language. I'll leave it at that.
https://redd.it/1g700dj
@r_cpp
Blog: Exploring Smart Pointers in C++
I had recently started writing C++ for some of my projects, and the idea of 'smart pointers' had intrigued me for a long time. Last week, I decided to try 'smart pointers' and learn about them in detail.
I have documented my learning in the following blog: https://shubham0204.github.io/blogpost/programming/cpp-smart-pointers
The content is quite basic and does not cover any advanced use-cases of smart pointers (if they exist) due to my limited knowledge as a beginner. I would be glad if the C++ Reddit community could share their feedback on the blog, point out any mistakes that I might have made, and provide more insightful ideas that I should include.
https://redd.it/1ftbw4o
@r_cpp
Simple and fast neural network inference library
Dear all,
I would like to integrate a tool into my simulation library that could allow me to use trained DNN models. I will be only doing the inference (Training is done using python).
I have seen onnxruntime, but compiling it is complex and it has plenty of dependencies. More or less the same goes for the C++ APIs of torch and tensorflow/keras. Though I am not against generating an ONNX model once my models are trained.
I was wondering if you guys have any suggestion?
Ideally I would like to run models containing mainly multi-layer perceptrons, convolutional networks, recurrent NNs (LSTM, etc.), graph neural networks, and maybe Transformers.
Am I asking too much?
Best
https://redd.it/1g8p910
@r_cpp
CppCast: Type Erasure, SIMD-Within-a-Register and more
https://cppcast.com/type_erasure-simd-within-a-register_and_more/
https://redd.it/1g8lfum
@r_cpp
should i use "using namespace std" in coding interview
Hi! I have a coding interview coming up, and I'm going to use C++ to code. Do you recommend "using namespace std" in interviews, just because I'd be able to code up my solution faster, or would that be a red flag because it's generally bad practice?
https://redd.it/1g8hcax
@r_cpp
learncpp equivalents for Go and Python languages
Hi folks!
I have gone through the learncpp website, taking the suggestions of many people here. At my workplace, along with C++ I also have to work with Go and Python. What are some learncpp equivalents for these languages that you have come across?
https://redd.it/1g89azw
@r_cpp
Objects are a poor man's Closures - a modern C++ take
I learned about this koan (title) while reading the chapter of Crafting Interpreters (Robert Nystrom) that addresses closures.
If you find it interesting, the longer version goes like [this](https://gist.github.com/jackrusher/5653669) (and it's about Scheme, of course)
(the post will be about C++, promise)
For the scope of this book, the author wants you to understand that you essentially do not need classes representing objects to achieve (runtime, in this case) polymorphism in the programming language you are building together. (Not because classes aren't useful; he goes on to add them in later chapters, but because at this point they are not implemented yet.)
His challenge goes like this (note that Bob Nystrom published his book for free, on this website, and the full chapter is [here](https://craftinginterpreters.com/closures.html)):
>A [famous koan](http://wiki.c2.com/?ClosuresAndObjectsAreEquivalent) teaches us that “objects are a poor man’s closure” (and vice versa). Our VM doesn’t support objects yet, but now that we have closures we can approximate them. Using closures, write a Lox program that models two-dimensional vector “objects”. It should:
>Define a “constructor” function to create a new vector with the given *x* and *y* coordinates.
>Provide “methods” to access the *x* and *y* coordinates of values returned from that constructor.
>Define an addition “method” that adds two vectors and produces a third.
For Lox, which looks a bit like JavaScript, I came up with this:
fun Vector(x, y) {
    fun getX() {
        return x;
    }
    fun getY() {
        return y;
    }
    fun add(other) {
        return Vector(x + other("getX")(), y + other("getY")());
    }
    fun ret(method) {
        if (method == "getX") {
            return getX;
        } else if (method == "getY") {
            return getY;
        } else if (method == "add") {
            return add;
        } else {
            return nil;
        }
    }
    return ret;
}
var vector1 = Vector(1, 2);
var vector2 = Vector(3, 4);
var v1X = vector1("getX");
print v1X(); // 1
var v2Y = vector2("getY");
print v2Y(); // 4
var vector3 = vector1("add")(vector2);
print vector3("getX")(); // 4
print vector3("getY")(); // 6
The weird final return function is like that because Lox has no collection types (or a switch statement). This also plays well with the language being dynamically typed.
This essentially achieves polymorphic behavior without using classes.
Now, the beauty of C++ (for me) is the compile time behavior we can guarantee with constexpr (consteval) for something like this. The best version I could come up with is [this](https://gist.github.com/AndreiMoraru123/51a793f50631ed295ae6d1a4e079cdd2):
#include <print>
#include <tuple>
consteval auto Vector(int x, int y) {
    auto getX = [x] consteval { return x; };
    auto getY = [y] consteval { return y; };
    auto add = [x, y](auto other) consteval {
        const auto [otherX, otherY, _] = other;
        return Vector(x + otherX(), y + otherY());
    };
    return std::make_tuple(getX, getY, add);
}

auto main() -> int {
    constexpr auto vector1 = Vector(1, 2);
    constexpr auto vector2 = Vector(2, 4);
    constexpr auto v1Add = std::get<2>(vector1);
    constexpr auto vector3 = v1Add(vector2);
    constexpr auto X3 = std::get<0>(vector3);
    constexpr auto Y3 = std::get<1>(vector3);
    std::println("{}", X3()); // 3
    std::println("{}", Y3()); // 6
}
Except for not being allowed to use structured bindings for constexpr functions (and instead having to use std::get), I really like this. We can also return a tuple as we now have collection types and it plays better with static typing.
Now, if we drop the prints, this compiles down to two lines of asm if we
OOP Concepts: Class and Object
[https://www.compilersutra.com/docs/opp-in-cpp](https://www.compilersutra.com/docs/opp-in-cpp) The attached article covers concepts like classes, objects, `encapsulation`, and `constructors` in a fun and engaging way, using real-world analogies. The playful tone makes complex topics more digestible.
# 🗳️ What Should We Cover Next?
We’d love your input! Which programming topics would you like to see next?
* Advanced C++ (Design Patterns, Concurrency)
* Debugging and Profiling in C++
* GPU/TPU Programming Basics
* AI and Machine Learning for Compilers
* Build Automation with CMake
Share your thoughts in the comments.
https://redd.it/1g7ull4
@r_cpp
href='/'>Home</a><a href='/profiles'>Profiles</a><a href='/exportData'>Export Data</a></nav>");
}
void renderFooter(String& html) {
html.concat("</body></html>");
}
void configureTime() {
configTime(gmtOffset_sec, daylightOffset_sec, ntpServer);
Serial.println("Configuring time...");
struct tm timeinfo;
int retries = 0;
const int maxRetries = 10;
while (!getLocalTime(&timeinfo) && retries < maxRetries) {
Serial.println("Waiting for time synchronization...");
delay(1000);
retries++;
}
if (retries < maxRetries) {
Serial.printf("Time configured: %04d-%02d-%02d %02d:%02d:%02d\n",
timeinfo.tm_year + 1900,
timeinfo.tm_mon + 1,
timeinfo.tm_mday,
timeinfo.tm_hour,
timeinfo.tm_min,
timeinfo.tm_sec);
} else {
Serial.println("Failed to obtain time");
}
}
String getFormattedTime() {
struct tm timeinfo;
if (!getLocalTime(&timeinfo)) {
return "Time not available";
}
char buffer[25];
strftime(buffer, sizeof(buffer), "%Y-%m-%d %H:%M:%S", &timeinfo);
return String(buffer);
}
// Token status callback function
void tokenStatusCallback(TokenInfo info) {
Serial.printf("Token Info: type = %s, status = %s\n", getTokenType(info).c_str(), getTokenStatus(info).c_str());
if (info.status == token_status_error) {
Serial.printf("Token Error: %s\n", info.error.message.c_str());
}
}
String getTokenType(TokenInfo info) {
switch (info.type) {
case token_type_undefined:
return "undefined";
case token_type_legacy_token:
return "legacy token";
case token_type_id_token:
return "ID token";
case token_type_custom_token:
return "custom token";
case token_type_oauth2_access_token:
return "OAuth2 access token";
default:
return "unknown";
}
}
String getTokenStatus(TokenInfo info) {
switch (info.status) {
case token_status_uninitialized:
return "uninitialized";
case token_status_on_signing:
return "on signing";
case token_status_on_request:
return "on request";
case token_status_on_refresh:
return "on refresh";
case token_status_ready:
return "ready";
case token_status_error:
return "error";
default:
return "unknown";
}
}
https://redd.it/1g7mmye
@r_cpp
authTimeoutDuration)) {
Firebase.ready(); // This updates the auth.token information
delay(500);
Serial.print(".");
}
if (auth.token.uid.length() != 0) {
Serial.println("\nFirebase authentication successful.");
Serial.print("User UID: ");
Serial.println(auth.token.uid.c_str());
} else {
Serial.println("\nFailed to authenticate with Firebase.");
Serial.println("Check your Firebase configuration and ensure anonymous authentication is enabled.");
}
// Set up web server routes
server.on("/", handleRoot);
server.on("/profiles", handleProfiles);
server.on("/api/rfid", handleAPIRFID);
server.on("/api/updateName", HTTP_POST, handleAPIUpdateName);
server.on("/exportData", handleExportData);
server.on("/favicon.ico", handleFavicon);
server.onNotFound(handleNotFound);
server.begin();
Serial.println("Web server started");
}
void loop() {
server.handleClient();
// Check for new RFID cards
if (mfrc522.PICC_IsNewCardPresent() && mfrc522.PICC_ReadCardSerial()) {
uidString = "";
for (byte i = 0; i < mfrc522.uid.size; i++) {
uidString.concat(mfrc522.uid.uidByte[i] < 0x10 ? "0" : "");
uidString.concat(String(mfrc522.uid.uidByte[i], HEX));
}
uidString.toUpperCase();
Serial.print("Card UID: ");
Serial.println(uidString);
// Get current time
time_t now = time(nullptr);
struct tm timeinfo;
localtime_r(&now, &timeinfo);
char timeString[25];
strftime(timeString, sizeof(timeString), "%Y-%m-%d %H:%M:%S", &timeinfo);
currentTime = String(timeString);
// Send data to Firebase
String basePath = "/rfid/";
basePath.concat(uidString);
String path_last_scanned = basePath;
path_last_scanned.concat("/last_scanned");
if (Firebase.RTDB.setString(&fbdo, path_last_scanned.c_str(), currentTime)) {
Serial.println("Timestamp sent to Firebase successfully");
} else {
Serial.println("Failed to send timestamp to Firebase");
Serial.println(fbdo.errorReason());
}
String path_uid = basePath;
path_uid.concat("/uid");
if (Firebase.RTDB.setString(&fbdo, path_uid.c_str(), uidString)) {
Serial.println("UID sent to Firebase successfully");
} else {
Serial.println("Failed to send UID to Firebase");
Serial.println(fbdo.errorReason());
}
mfrc522.PICC_HaltA();
mfrc522.PCD_StopCrypto1();
}
}
// Web server handlers
void handleRoot() {
String html;
renderHeader(html, "RFID Attendance Tracker");
html.concat("<div class='container'>");
html.concat("<h1>Welcome to the RFID Attendance Tracker</h1>");
html.concat("<p>Use your RFID card to register your attendance.</p>");
html.concat("<p>Current Time: ");
html.concat(getFormattedTime());
html.concat("</p>");
html.concat("</div>");
renderFooter(html);
server.send(200, "text/html", html);
}
void handleProfiles() {
// Retrieve data from Firebase
if (Firebase.RTDB.getJSON(&fbdo, "/rfid")) {
FirebaseJson& json = fbdo.jsonObject();
String jsonStr;
json.toString(jsonStr, true);
// Parse JSON data
DynamicJsonDocument doc(8192);
DeserializationError error = deserializeJson(doc, jsonStr);
if (error) {
Serial.print("deserializeJson() failed: ");
Serial.println(error.c_str());
server.send(500, "text/plain", "Failed to parse data");
return;
}
// Generate HTML page
String html;
renderHeader(html, "Profiles");
html.concat("<div class='container'>");
html.concat("<h1>Scanned Cards</h1>");
html.concat("<table><tr><th>UID</th><th>Last
ISO/IEC 14882:2024
https://www.iso.org/standard/83626.html
https://redd.it/1g7jpbt
@r_cpp
Code review - hft backtester
Hi everyone, I am a current junior year cs student with way too aspirational dreams of working in the high frequency trading industry. I have been working on a backtesting suite to test different strategies, and would like a code review. It is still currently a WIP, but would like some feedback on what I have so far.
https://github.com/DJ824/orderbook-reconstruction
If anyone wants to run this code, DM me for the market data, I have it hosted on google drive as the csv files are around 500mb.
https://redd.it/1g7ft23
@r_cpp
Is std::print the new std::regex? std::print vs fmt::print code generation seems insane
Why is the code generation 10x worse with std::print vs. fmt::print, and why does compilation seem a bit slower, too?
https://godbolt.org/z/543j58djd
What is the `std::__unicode::__v15_1_0::__gcb_edges` stuff that fmt doesn't generate? Maybe we can opt out of Unicode in the std?
I'm working in an environment where Unicode is applicable, but I wonder if it's needed by everybody. Usually, when we use fmt, it's not to show the user strings; it's mostly for debugging, so I don't need Unicode support 99% of the time. Qt can handle my UI's translated strings and other Unicode characters.
https://redd.it/1g7dn5f
@r_cpp
= 16*1024*1024 + (rand()%2);
At this point we are in the situation that the code performs the same. I know this is relatively simple views code, but I was still amazed, so I decided to see if I could help the poor old for loop by giving hints to the compiler (since the predicate is true only once and we know our test case has a long search before it finds the element).
if (pred(val)) [[unlikely]]
That made the code almost twice as slow (from around 2700µs to around 4900µs), unless we also make the value we search for non-constant. :) int needle = 123 + (rand()%2);
Now the unlikely attribute does not help, but at least it does not hurt.
At this point I decided to stop, since it was becoming a huge investment of time and the gods of loop unrolling are a moody bunch, but here are some of my half-conclusions:
1. I should never presume that clang is too dumb to see through memory allocations (not just talking about heap elision).
2. I am honestly shocked that even in this simple example filter | take
optimizes so well (or at least as well as the manual loop), as I was honestly sure it would be slower; I just did not know by how much.
3. It would be interesting to see if clang is smart enough to bypass even benchmark::DoNotOptimize from Google Benchmark (see the sketch after this list).
4. I am still disappointed that I did not get any vectorization despite using -march=native.
5. I worked in a company where people loved to sprinkle likely and unlikely everywhere. I never liked that much, and now I like it even less. :)
6. Not too much should be concluded from this, as it was just 1 test case on 1 compiler.
7. I have also watched an interesting code_report video where Bryce and Conor are having fun with views::filter vectorization, and it is true what they say: clang's "diagnostic" about why it did not vectorize something is useless.
8. I hope PGO would remove a lot of these random swings, but then making the benchmark data "representative" becomes critical.
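For point 3, a minimal Google Benchmark sketch (my addition, not something I actually ran) of how the manual loop would typically be wrapped with benchmark::DoNotOptimize:
#include <benchmark/benchmark.h>
#include <vector>

static void BM_ancient(benchmark::State& state) {
    std::vector<int> vals(16 * 1024 * 1024, 1);
    vals.back() = 123;                        // needle placed at the very end
    for (auto _ : state) {
        int found = -1;
        for (int v : vals) {
            if (v == 123) { found = v; break; }
        }
        benchmark::DoNotOptimize(found);      // result stays observable to the optimizer
    }
}
BENCHMARK(BM_ancient);
BENCHMARK_MAIN();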
P.S. I have used clang 19 with -O3 and -stdlib=libc++
https://redd.it/1g7a3qy
@r_cpp
Latest Updates to Stand Modmenu – What’s New?
Superkeys.net is a comprehensive platform for gamers looking to enhance their gameplay with mod menus and tools. One of the most popular mods available is the Stand Modmenu (https://superkeys.net/product/midnight-cs2/) for GTA V. This mod menu is renowned for its advanced features, providing players with everything from enhanced controls and teleportation to game recovery options and a wealth of trolling options, such as spawning objects and modifying in-game physics. Stand offers a user-friendly interface and regular updates, making it a go-to choice for GTA V modders. Users frequently share tips on how to utilize the mod's advanced features and troubleshoot common issues, such as avoiding detection by anti-cheat systems. The forum also serves as a support hub where players help each other with installation issues or update delays, and they often share the latest developments about the mod's evolving capabilities.
https://redd.it/1g73fox
@r_cpp
codeproject.com is no more :(
I hope this is an appropriate place to break the bad news, as it has been a premier site on the web for showcasing projects, and was heavy on C++ especially in the early days, but expanded to all languages over its 25+ year run.
16 million user accounts, and decades of community, gone to the wind. The site isn't up right now, but as I understand it, the hope is to bring it back in a read-only form so people can still access past submissions.
There goes one of the best places online to ask a coding question.
If this is too off topic, I apologize. I wasn't sure, but I felt it was worth risking it, as it was a big site, and I'm sure some people here will want the news, however bad.
https://redd.it/1g6y1l5
@r_cpp
Pulling a single item from a C++ parameter pack by its index, remarks (The Old New Thing, Raymond Chen)
https://devblogs.microsoft.com/oldnewthing/20240930-00/?p=110324
https://redd.it/1ft6wyi
@r_cpp