Why UDP is King for Remote Mouse Apps: How I fixed cursor lag and "ghosting"
I’ve been building a WiFi Mouse app, and I wanted to share some technical insights on how I finally achieved "silky smooth" cursor movement. If you’re building any real-time input sharing tool, you might find this journey useful.
# 1. The TCP Pitfall (and the TCP_NODELAY Band-aid)
Initially, I started with **TCP**. It’s reliable, right? Wrong for this use case. I immediately hit "stuttering" issues. This was mostly due to **Nagle’s Algorithm**, which buffers small packets to save bandwidth—a disaster for mouse coordinates.
Enabling `TCP_NODELAY` (disabling Nagle) helped significantly, but it still wasn't perfect. Because TCP is a stream-oriented protocol that requires ACKs and handles retransmission, any slight network hiccup causes "Head-of-Line Blocking." Your cursor stops, then jumps forward as the buffered packets finally arrive.
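If you're still on TCP, the band-aid is a single property on the socket. A minimal Kotlin sketch (host, port, and the timeout value are placeholders):

```kotlin
import java.net.InetSocketAddress
import java.net.Socket

// Minimal sketch: disable Nagle's algorithm so small packets go out immediately.
fun connectLowLatency(host: String, port: Int): Socket {
    val socket = Socket()
    socket.tcpNoDelay = true  // TCP_NODELAY: don't buffer small writes
    socket.connect(InetSocketAddress(host, port), 3000)  // 3s timeout, placeholder
    return socket
}
```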
# 2. Switching to UDP: 2x Performance Boost
I recently rewrote the transmission layer to use **UDP**. Since we don't need to wait for ACKs or worry about retransmission, the perceived latency was cut nearly in half. In the context of a mouse, "lost" data is better than "late" data.
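A fire-and-forget UDP sender needs very little code. A minimal Kotlin sketch; the two-Int payload layout (dx, dy) is an assumed example format, not the app's actual protocol:

```kotlin
import java.net.DatagramPacket
import java.net.DatagramSocket
import java.net.InetAddress
import java.nio.ByteBuffer

// Minimal sketch: each movement is a self-contained datagram. No ACKs,
// no retransmission — a lost packet is simply never seen again.
class UdpMouseSender(host: String, port: Int) {
    // Note: resolve and send off the main thread; Android forbids network I/O on it.
    private val socket = DatagramSocket()
    private val address = InetAddress.getByName(host)
    private val targetPort = port

    fun sendDelta(dx: Int, dy: Int) {
        val payload = ByteBuffer.allocate(8).putInt(dx).putInt(dy).array()
        socket.send(DatagramPacket(payload, payload.size, address, targetPort))
    }
}
```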
# 3. Three Rules for Implementation
If you're implementing this, here are the three optimizations that made the biggest difference:
* **Move it off the UI Thread:** This is obvious but critical. Never let network I/O block your touch event processing.
* **The power of `HandlerThread`:** I found `HandlerThread` to be the best choice here. It provides a dedicated Looper for background work, letting me queue up movement events without constantly creating new threads (see the sketch after this list).
* **Drop Old Data (Preventing "The Tail"):** This was the "Aha!" moment. If the network is congested, don't let the queue build up. If a new movement packet comes in and the previous one hasn't sent yet, **discard the old one.** It's much better to have a tiny "jump" in position than to have the cursor "ghost" or "crawl" across the screen trying to catch up to events that happened 500ms ago.
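Here's the promised sketch combining rules 2 and 3, assuming a `UdpMouseSender` like the one above. Posting with a token lets us drop a stale movement that hasn't gone out yet:

```kotlin
import android.os.Handler
import android.os.HandlerThread
import android.os.SystemClock

// Minimal sketch: a dedicated sender thread whose queue never grows a "tail".
class MovementDispatcher(private val sender: UdpMouseSender) {
    private val thread = HandlerThread("mouse-sender").apply { start() }
    private val handler = Handler(thread.looper)
    private val moveToken = Any()

    fun onMove(dx: Int, dy: Int) {
        // Rule 3: discard any movement still waiting in the queue.
        handler.removeCallbacksAndMessages(moveToken)
        handler.postAtTime({ sender.sendDelta(dx, dy) }, moveToken,
            SystemClock.uptimeMillis())
    }

    fun shutdown() = thread.quitSafely()
}
```

One design note: with relative deltas, dropping a pending packet loses a little motion, so you may prefer to accumulate dx/dy into the pending packet instead of discarding it outright. With absolute coordinates, plain replacement is exactly right.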
**The result?** A remote cursor that feels like it’s physically plugged into the PC.
I’m currently looking for testers to push the limits of this new UDP implementation. If you’re interested in trying the beta, check the comments!
https://redd.it/1qklt2s
@reddit_androiddev
Ffmpeg
I can't find any reliable ffmpeg build.
I tried this `https://github.com/moizhassankh/ffmpeg-kit-android-16KB` but hardware acceleration isn't working.
https://redd.it/1qkdnd6
@reddit_androiddev
Can I run Android Studio with 8GB of RAM for my Jetpack Compose assignment?
I've asked Google and it said 16GB is recommended but 8GB is still doable.
https://redd.it/1qka44y
@reddit_androiddev
I built a free launch stack for mobile app developers
https://redd.it/1qk2i1b
@reddit_androiddev
Automatic Instagram follower bot using Droidrun #DroidrunDevSprint
https://redd.it/1qk2fza
@reddit_androiddev
Free and open source
https://redd.it/1qjyeso
@reddit_androiddev
Google Play app suspended after repeated rejections (metadata + CSAE) — appeal already submitted, looking for technical insight
Hi all,
Posting after exhausting official channels and submitting an appeal. I’m looking for technical insight from developers who’ve handled similar Google Play enforcement cases.
# Background
* Individual Google Play developer account
* ~14 other active apps currently live and compliant
* App category: Matrimony / dating (18+)
# Timeline & Issues Raised by Google
1. **Metadata policy**
* Long description included wording like “Join thousands of families…”
* Short description used “Trusted”
* Interpreted as testimonials / performance claims
* Metadata has since been rewritten to neutral, functional language
2. **Child Safety Standards (CSAE)**
* App requires published standards prohibiting CSAE
* A compliant policy page existed previously
* During a website migration, the policy URL changed and the Play Console link was not updated
* Resulted in a missing policy page during review
* A new permanent CSAE policy page is now live and compliant
3. **Enforcement Process**
* After multiple rejections, the app was suspended citing “Repeated app rejections”
# Steps Already Taken
* Metadata fully revised and cleaned
* CSAE policy page restored at a permanent URL
* Internal checklist created to prevent future metadata and compliance misses
* Appeal submitted via Play Console
* Google Play Help Community thread created here: **[link to your support thread]**
# Questions for experienced developers
* In cases where suspensions result from procedural issues (metadata + missing compliance page), have appeals been successful?
* Is it safer to accept the suspension and not re-publish, rather than using a new package name?
* Any guidance on minimizing account-level risk after a suspension like this?
I’m not disputing policy interpretation — just trying to understand best practices going forward while keeping the account in good standing.
Thanks for your time.
https://redd.it/1qju51a
@reddit_androiddev
Publish for client
Can a European IT consulting company publish a client’s app under their own Apple or Google account with a signed authorization? The client is a financial services company. Any EU-specific experiences?
https://redd.it/1qjr4yz
@reddit_androiddev
NEED SOMEONE TO HELP FOR A SMALL ISSUE
Can anyone help me bypass root and emulator detection in an Android Studio virtual device?
I'm asking for serious help, and if you successfully help me bypass it, I'll pay you.
https://redd.it/1qjrjf2
@reddit_androiddev
Struggling with Compose UI
As a beginner learning Compose and Android development in general, I struggle with building UI. I have some minor experience with CSS, but even there the process of arranging and formatting items felt like a nightmare, like torture. I'm afraid of developing the same feeling for Compose UI.
I still can't quite wrap my head around Modifier chaining and how exactly it affects the UI. Mainly, I struggle with spacing.
What would you advise for learning Compose UI quickly? Maybe there are online games similar to CSS's Flexbox Froggy? Maybe there are articles covering the core principles? Any advice is appreciated.
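For reference, the core spacing rule is that a Modifier chain applies in order, each modifier wrapping the ones after it. A minimal sketch of how order changes the result:

```kotlin
import androidx.compose.foundation.background
import androidx.compose.foundation.layout.Column
import androidx.compose.foundation.layout.padding
import androidx.compose.material3.Text
import androidx.compose.runtime.Composable
import androidx.compose.ui.Modifier
import androidx.compose.ui.graphics.Color
import androidx.compose.ui.unit.dp

@Composable
fun PaddingOrderDemo() {
    Column {
        // padding BEFORE background: the 16.dp gap stays unpainted (an outer margin).
        Text(
            "padding then background",
            Modifier.padding(16.dp).background(Color.Yellow)
        )
        // background BEFORE padding: the 16.dp gap is painted too (an inner padding).
        Text(
            "background then padding",
            Modifier.background(Color.Yellow).padding(16.dp)
        )
    }
}
```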
https://redd.it/1qj9tzw
@reddit_androiddev
We built QuickV to solve a very real problem with quick-commerce apps.
We developed QuickV because comparing prices on quick-commerce apps is a lot more painful than it should be.
If you're hunting for the cheapest price, you're stuck rotating between delivery services like Blinkit, Zepto, Instamart, and BigBasket, searching for the same item over and over while forgetting prices.
So we attempted to remedy that.
QuickV allows comparison of products and prices for Blinkit, Zepto, Instamart, and BigBasket in a single application (JioMart coming soon).
What it does:
1. Search once, view results from all providers
2. Prices and availability compared immediately
3. Location set once for all platforms, changeable later with one tap
4. Look around: categories and hot deals
5. See full product details within the chosen platform
6. Each provider maintains a separate cart
7. Add items to all carts in one tap and check out at the provider
In short, no more app hopping. It all happens in one spot, and you decide where to purchase.
Play Store
https://play.google.com/store/apps/details?id=com.quickV.app
Would love honest feedback – what works, what doesn't, and what you'd like to see next!
https://redd.it/1qjng6q
@reddit_androiddev
If a ViewModel is testable on the JVM and doesn’t depend on Context — why isn’t it considered part of the Domain layer?
I’ve been revisiting **Clean Architecture in Android**, and this question keeps coming up for me.
ViewModels:
* are testable on the **JVM**
* don’t depend on Android **Context**
* often contain **business-related logic**
* survive **configuration changes**
Given that, why are ViewModels still strictly considered **Presentation layer** and not **Domain**?
Is it because:
* they model ***UI state*** **rather than** ***business rules***?
* they depend on lifecycle and navigation concerns?
* or simply because they’re framework-driven?
I’m curious how **experienced Android devs reason about this in real-world projects**, not just textbook diagrams.
**Would love to hear different perspectives.**
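For concreteness, the textbook split keeps the rule in a plain use case while the ViewModel only adapts its output into UI state through lifecycle-bound machinery. A minimal sketch with illustrative names:

```kotlin
import androidx.lifecycle.ViewModel
import androidx.lifecycle.viewModelScope
import kotlinx.coroutines.flow.MutableStateFlow
import kotlinx.coroutines.flow.StateFlow
import kotlinx.coroutines.launch

// Domain layer: a pure business rule, no Android or lifecycle types.
class GetDiscountUseCase {
    operator fun invoke(total: Double): Double =
        if (total > 100.0) total * 0.9 else total
}

// Presentation layer: adapts domain output into UI state. The lifecycle
// dependency (viewModelScope) is exactly what keeps it out of Domain.
class CheckoutViewModel(private val getDiscount: GetDiscountUseCase) : ViewModel() {
    private val _uiState = MutableStateFlow("")
    val uiState: StateFlow<String> = _uiState

    fun onTotalEntered(total: Double) {
        viewModelScope.launch {
            _uiState.value = "To pay: ${getDiscount(total)}"
        }
    }
}
```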
https://redd.it/1qj99bx
@reddit_androiddev
Building an Android app is easy. Getting users is not.
I am building a voice keyboard app and trying to figure out what actually works for early growth.
What got you your first 100 users?
What looked promising but was a complete waste of time?
Not interested in theory or growth hacks.
Only things you would do again if starting from zero today.
https://redd.it/1qezjku
@reddit_androiddev
Why does the Gemini app not use Compose?
I was checking which UI SDKs different apps use via Show Layout Bounds and saw that the Gemini app, which came out in 2024, is purely XML/Views. Anyone know the reason for this?
https://redd.it/1qejtuf
@reddit_androiddev
I just started learning Android development with Kotlin + Jetpack Compose, but I feel completely lost. What learning path would you recommend for a total beginner? Which topics should I learn first before diving deeper into Compose?
https://github.com/AMillionDriver/Basic_Compose
https://redd.it/1qdzb8o
@reddit_androiddev
What's your workflow for shipping app updates to the play store?
My process has been creating the release AAB locally, creating a new release in Play Console, then uploading the AAB into the new release. Are there any CI tools you can recommend for a solo developer to automate this process?
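One common solo-dev setup is the Gradle Play Publisher plugin, which uploads the AAB from a Gradle task using a Play service account. A minimal sketch of the app module's `build.gradle.kts` (credentials path is a placeholder; check the plugin's current version):

```kotlin
// build.gradle.kts (app module) — minimal Gradle Play Publisher setup.
plugins {
    id("com.android.application")
    id("com.github.triplet.play") version "3.12.1"  // verify current release
}

play {
    // Service account JSON exported from Google Cloud (placeholder path).
    serviceAccountCredentials.set(file("play-publisher.json"))
    track.set("internal")          // or "production"
    defaultToAppBundles.set(true)  // upload AABs rather than APKs
}
```

With that in place, `./gradlew publishReleaseBundle` builds and uploads the release AAB, and any CI runner (or your own machine) can invoke it.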
https://redd.it/1qkhd68
@reddit_androiddev
Post-capture image redaction on Android (open-source learning project)
Many Android apps capture images that may contain faces or sensitive text (e.g. marketplaces, receipts, compliance-sensitive flows).
While looking into this, I found that most approaches either rely on manual post-processing steps or paid SDKs.
As a learning project, I built a small open-source Android SDK that applies privacy masking immediately after an image is captured — before it is uploaded or stored.
The SDK:
- takes a captured Bitmap / File / Uri
- detects faces and text on-device
- returns a masked image so only the redacted result is persisted or shared
This project is mainly to learn about Android SDK design and open-source practices, and I’d really appreciate feedback on the approach or API design.
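Not the SnapSafe API itself, but the core capture-then-mask idea can be sketched with ML Kit's on-device face detector roughly like this (text redaction would follow the same pattern with the text recognizer):

```kotlin
import android.graphics.Bitmap
import android.graphics.Canvas
import android.graphics.Color
import android.graphics.Paint
import com.google.mlkit.vision.common.InputImage
import com.google.mlkit.vision.face.FaceDetection

// Minimal sketch: detect faces on-device, paint over their bounding boxes,
// and hand back only the redacted copy.
fun maskFaces(source: Bitmap, onMasked: (Bitmap) -> Unit) {
    val detector = FaceDetection.getClient()
    detector.process(InputImage.fromBitmap(source, 0))
        .addOnSuccessListener { faces ->
            val masked = source.copy(Bitmap.Config.ARGB_8888, true)
            val canvas = Canvas(masked)
            val paint = Paint().apply { color = Color.BLACK }
            faces.forEach { canvas.drawRect(it.boundingBox, paint) }
            onMasked(masked)
        }
        .addOnFailureListener { /* fail closed: never share the unmasked original */ }
}
```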
Source:
https://github.com/jtl4098/SnapSafe
https://redd.it/1qkb8el
@reddit_androiddev
Kotlin or Flutter for job security and satisfaction + fintech?
Hi all, I'm a mid-level developer, primarily in the Vue/TS/Node.js ecosystem, and I'm thinking of moving to app development.
I see a lot of arguments here and there about native vs. multiplatform. Many say to generally stay away from Flutter, and some say KMP is still too new.
I've been looking at fintech for a while and can see that some use native Kotlin and Swift (mainly big, established companies), while startups stick to multiplatform options like Flutter.
Any suggestions? I guess my primary goal is to get really good at one technology and stick to it, and not hate it like I do JS (long story, but JS fatigue is real).
What I'm seeing is that Kotlin also opens doors for backend development, while Dart and Flutter are more of a niche? Thanks
https://redd.it/1qk7e1l
@reddit_androiddev
Need help extracting data from ID cards using OCR in Android (Kotlin)
Hi, I’m working on an Android project in Kotlin, and my client asked me to take a photo of the ID cards of people who will be working. I decided to look for alternatives and tried taking a temporary photo of the IDs and scanning them with OCR, using Google’s ML Kit library, to extract the first name, last name, and ID number. That would be enough for the system, but right now I can’t get it to work properly: there’s always some error, like the last name being saved in the first-name field, or, if the image is uploaded in a different rotation, completely wrong data being saved. I don’t know what to do.
Has anyone done something similar? Is there a library that could help me?
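The rotation symptom specifically suggests ML Kit isn't being told the photo's orientation. A hedged sketch that reads the EXIF tag and passes the rotation along (`recognizeIdText` and the callback are illustrative names):

```kotlin
import android.graphics.BitmapFactory
import androidx.exifinterface.media.ExifInterface
import com.google.mlkit.vision.common.InputImage
import com.google.mlkit.vision.text.TextRecognition
import com.google.mlkit.vision.text.latin.TextRecognizerOptions
import java.io.File

// Minimal sketch: feed ML Kit the photo's real rotation so lines aren't scrambled.
fun recognizeIdText(photo: File, onText: (String) -> Unit) {
    // Map the EXIF orientation tag to the degrees ML Kit expects.
    val rotation = when (ExifInterface(photo.absolutePath)
        .getAttributeInt(ExifInterface.TAG_ORIENTATION, ExifInterface.ORIENTATION_NORMAL)) {
        ExifInterface.ORIENTATION_ROTATE_90 -> 90
        ExifInterface.ORIENTATION_ROTATE_180 -> 180
        ExifInterface.ORIENTATION_ROTATE_270 -> 270
        else -> 0
    }
    val bitmap = BitmapFactory.decodeFile(photo.absolutePath)
    TextRecognition.getClient(TextRecognizerOptions.DEFAULT_OPTIONS)
        .process(InputImage.fromBitmap(bitmap, rotation))
        .addOnSuccessListener { result ->
            // result.textBlocks keeps per-block bounding boxes, useful for mapping
            // "first name" vs "last name" by position instead of order of appearance.
            onText(result.text)
        }
}
```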
https://redd.it/1qk3ap8
@reddit_androiddev
Can I monetize an Android app with banners without using Google Play Billing?
Hi everyone,
I’m building a simple Android app that shows a feed of local events. I want to monetize it in the simplest way possible: by displaying sponsor banners that say something like “Advertise Here” with contact info (phone, WhatsApp, email, or website).
My question is:
Can I charge sponsors directly outside the app (PayPal, bank transfer, etc.) for these banners?
Or does Google Play consider this a “digital product” and force me to use Google Play Billing?
I just want the app to remain free for users and avoid any Play Store violations.
Thanks for your advice!
https://redd.it/1qjydo4
@reddit_androiddev
Seeking advice: My open-source code was stolen, admitted by the thief, and Google Play reinstated their app
https://redd.it/1qjxcaw
@reddit_androiddev
Architecture patterns for using Model Context Protocol (MCP) in Android UI Automation/Testing
Hi everyone,
I am currently conducting a technical evaluation of mobile-next/mobile-mcp (a Kotlin Multiplatform implementation of the Model Context Protocol) for my company. Our goal is to enable LLM agents to interact with our Android and iOS application for automated testing and QA validation.
I’ve done a POC and I wanted to open a discussion on the architectural trade-offs others might be facing with similar setups.
[Let me know in the comments if I should do a POC with any MCP for mobile app testing]
My Current Observations:
The speed of test creation is the biggest pro.
However, I am aware of the inherent non-determinism (flakiness) of LLM-generated actions. We are accepting this trade-off for now in exchange for velocity, but I have a major concern regarding long-term maintenance.
The Discussion Points:
1. "Self-Healing" vs. Maintenance
In a traditional setup, if a View ID changes, the test fails, and we manually update the script. With an MCP-driven architecture, does the Agent context effectively "update" itself?
My concern: If the test fails, how are you handling the feedback loop? Does the Agent retry with updated context, or do we end up maintaining complex prompt-engineering files that are just as hard to maintain as Espresso code?
2. Real-world Pros/Cons
Has anyone here moved past the POC stage with MCP on Android?
Pros: rapid exploration, uncovering edge cases manual scripts miss.
Cons: latency of the LLM roundtrip, context window limits when passing large View hierarchies.
I’m interested to hear if anyone is using this strictly for internal tooling/debugging or if you are actually relying on it for CI pipelines.
Thanks!
https://redd.it/1qjucqd
@reddit_androiddev
Using adb to force release a decoder or increase its buffer size on a runtime build.
Hello,
I'm having an issue with an app (COD Warzone) where flushing the c2.mtk.avc.decoder causes graphical corruption, artifacts, and polygons stretching to infinity (really high). I'm just wondering if there are any ways, direct or indirect, to force a decoder release that does **NOT require root**, or to increase the buffer size without root.
I've also seen this appear in the logs when the artifacts happen:
01-17 14:38:34.576 1078 1078 E mali_gralloc: ERROR: Unrecognized and/or unsupported format 0x38 and usage 0xb00
Google says the Mali G720 supports 8-bit red formats, but I'm somehow getting this error. I also went through AOSP to find this format; it came up alongside other pixel formats I'd seen in a decoder app before, so I checked them and they matched. My S23 Ultra is also missing this 0x38 pixel format (RED_8), yet I'm not seeing this unsupported-pixel-type error in the S23U's logs (and it doesn't have the graphical corruption issues either).
Build: One UI 8, Tab S10 Ultra, Android 16
I am not a developer for Warzone Mobile, but I want to try to fix this issue without tampering with game files.
Thanks in advance
https://redd.it/1qjreyu
@reddit_androiddev
Can someone guide me? I'm publishing my first Android app right now. Do organic downloads still work? Can you share a case study of yours?
https://redd.it/1qjlypn
@reddit_androiddev
Has anyone been through something like this? Suddenly I have many sales from the same location, but they're all being refunded after a few minutes.
https://redd.it/1qjo9hh
@reddit_androiddev
I saw those viral "Life Calendar" wallpapers on Reels, but I hated that they were static... so I built a live widget instead.
https://redd.it/1qj1sxt
@reddit_androiddev
Has anyone been able to find the page they're looking for on Play Console?
https://redd.it/1qj9xbx
@reddit_androiddev
App idea: in need of suggestions, guidance, and some opinions.
Hey everyone, I’m looking for some honest feedback on an app idea I’ve been thinking about.
The core problem I’m trying to solve is **group journeys in multiple vehicles** — like road trips, convoys, friends/family traveling together in cars or bikes.
# The idea
An app where people traveling together can:
* Create a **temporary “journey room”**
* See all group members **moving live on a map** (real time)
* Have a **hands-free voice room** for the group (no calling each person)
* Follow a **leader’s navigation route** so everyone stays on the same path (optional)
* Send quick **one-tap alerts** like:
* “Stop needed”
* “Fuel / food”
* “Slowing down”
* “Problem / help”
The focus is **coordination and awareness**, not social media:
* No feeds
* No chatting/texting while driving
* Journey ends automatically when the trip ends
# Who I imagine using it
* Friends on road trips in multiple cars
* Families traveling together
* Group bike rides or mixed car + bike trips
* Convoys / rally drives / college trips
# What I’m trying to understand
* Does this solve a **real pain point** for you?
* Would you actually install/use something like this?
* Is this already solved well by existing apps (Google Maps, Waze, WhatsApp, etc.)?
* What would make this *not* worth using?
* Any safety or practicality concerns I should think about?
I’m not trying to sell anything — just validating whether this is a useful idea before building a prototype.
Would really appreciate blunt opinions 🙏
Thanks!
https://redd.it/1qeltn4
@reddit_androiddev
How can I access health data from commercial wearables for a student prototype?
Hi, I’m an industrial design student working on a thesis prototype. I’m trying to understand how commercial smart rings and wearables handle user data access, and what options I have to access this data for my college project.
I want to build my product using user health data, but since this is a student project, I can’t develop my own health-tracking hardware right now and have to rely on data from third-party wearable apps.
Is there any way to access this data for a proof-of-concept prototype? I’m interested in understanding all possible approaches—official ones like APIs or data exports, as well as technical or restricted approaches such as modified APKs, root access, firmware modification, or encrypted data access—using only my own data.
Also, I'm looking to purchase a Boat smart ring for my prototype, because it's cheap.
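On the official-API side: if the ring's companion app syncs into Health Connect (many Android wearables do; whether Boat's does is something to verify), a prototype can read its own data roughly like this (a sketch assuming the read permission was already requested and granted):

```kotlin
import android.content.Context
import androidx.health.connect.client.HealthConnectClient
import androidx.health.connect.client.records.HeartRateRecord
import androidx.health.connect.client.request.ReadRecordsRequest
import androidx.health.connect.client.time.TimeRangeFilter
import java.time.Instant
import java.time.temporal.ChronoUnit

// Minimal sketch: read the last 24h of heart-rate samples from Health Connect.
// Assumes the heart-rate read permission has already been granted.
suspend fun readHeartRate(context: Context): List<HeartRateRecord> {
    val client = HealthConnectClient.getOrCreate(context)
    val response = client.readRecords(
        ReadRecordsRequest(
            recordType = HeartRateRecord::class,
            timeRangeFilter = TimeRangeFilter.between(
                Instant.now().minus(24, ChronoUnit.HOURS),
                Instant.now()
            )
        )
    )
    return response.records
}
```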
https://redd.it/1qed94e
@reddit_androiddev
Detection-only vs recognition — when less ML leads to better UX
I’ve been working on a small Android utility where I had to choose between using face detection only versus adding full face recognition.
On paper, recognition seems like the obvious upgrade — automatic matching, labeling, fewer taps. But in practice, I kept running into UX issues that surprised me. A detection-only flow (bounding boxes + explicit user selection) ended up feeling:
* clearer about what the app is actually doing
* less error-prone when faces are similar or partially visible
* easier for users to trust (no hidden assumptions)
* simpler to explain and reason about in the UI
It made me question whether “more ML” is always better for consumer UX.
For those who’ve shipped production apps:
Have you seen cases where doing less ML (or avoiding recognition altogether) actually led to a better experience?
How do you decide when automation helps versus when it just adds invisible complexity?
I’m curious how others think about this tradeoff in real-world apps.
https://redd.it/1qe48jn
@reddit_androiddev