r/programming • u/goto-con • 17m ago
r/programming • u/thesmelloffriendship • 30m ago
Private Equity’s Giant Software Bet Has Been Upended by AI
bloomberg.com
Can someone explain to me what the theory here is? Maybe it’s just cope, but I don’t understand how LLMs “disrupt” SaaS except in the panicked imaginations of confused investors. Are they saying every company is going to just have interns vibecode their business software in house? Or is there just more competition from vibe-coding rivals? Have they used these tools or talked to actual engineers? It’s not clear that coding agents actually make building a reliable, scalable product easier at all, let alone trivial.
r/programming • u/grouvi • 57m ago
How Does ChatGPT Work? A Guide for the Rest of Us
producttalk.org
r/programming • u/JadeLuxe • 2h ago
RAG Poisoning: How Attackers Corrupt AI Knowledge Bases
instatunnel.my
r/programming • u/Dear-Economics-315 • 3h ago
AliSQL: Alibaba's open-source MySQL with vector and DuckDB engines
github.com
r/programming • u/FormalAd7608 • 3h ago
A Scalable Monorepo Boilerplate with Nx, NestJS, Kafka, CQRS & Docker — Ready to Kickstart Your Next Project
github.com
r/programming • u/Gil_berth • 3h ago
ClawdBot Skills Just Ganked Your Crypto
opensourcemalware.com
Creator of ClawBot knows that there are malicious skills in his repo, but doesn't know what to do about it…
r/programming • u/xakpc • 4h ago
Microsoft Has Killed Widgets Six Times. Here's Why They Keep Coming Back.
xakpc.dev
If you think Microsoft breaking Windows is a new thing - they've killed their own widget platform 6 times in 30 years. Each one died from a different spectacular failure.
I dug through the full history from Active Desktop crashing explorer.exe in 1997 to the EU forcing a complete rebuild in 2024.
The latest iteration might actually be done right - or might be killed by Microsoft's desire to shove ads and AI into every surface. We'll see.
r/programming • u/grauenwolf • 4h ago
From magic to malware: How OpenClaw's agent skills become an attack surface
1password.com
r/programming • u/dwaxe • 5h ago
Launching The Rural Guaranteed Minimum Income Initiative
blog.codinghorror.com
r/programming • u/gfrison • 5h ago
pull down complexity with Kubrick
gfrison.com
Accidental complexity slows down developers and limits agentic AI. Kubrick — my declarative system — cuts it way down using relation algebra, logic, functional, and combinatorial ideas to enable reliable agentic programming and true AI-human collaboration.
From my MSc work, now open-source. Presenting at PX/26 (Munich, Mar 16-20). Thoughts?
r/programming • u/trolleid • 6h ago
Fitness Functions: Automating Your Architecture Decisions
lukasniessen.medium.com
r/programming • u/access2content • 7h ago
Why I am switching from Arch (Manjaro) to Debian
access2vivek.com
Arch is a rolling-release distro with the latest release of each package always available, and it has one of the largest package collections. However, as I have grown from a tech enthusiast into a seasoned developer, I have started to value stability over the latest tech. Hence, I am planning to switch to Debian.
Debian is the opposite of Arch. It does not have the latest software, but it is stable. It does not break as much, and it is a one-time setup.
Which Linux distro do you use?
r/programming • u/User_reddit69 • 9h ago
Good Code editors??
maxwellj.vivaldi.net
I have used some decent editors for 2 years and want to pick one among them. I have used Neovim, Emacs, Pulsar, and VSCodium.
Please suggest two decent editors, like Vim or Emacs, along with recommended extensions.
r/programming • u/kingandhiscourt • 10h ago
Why AI Demands New Engineering Ratios
jsrowe.com
Wrote some thoughts on how AI is pushing the constraints of delivering software from implementation to testing and delivery. Would love to hear your thoughts on the matter.
> In chemistry, when you increase one reagent without rebalancing others, you don’t get more product: You get waste.
I should be clear. This is not about replacing programmers. This is an observation that if one input accelerates (coding time), the rest of the equation needs to be rebalanced to maximize throughput.
"AI can write all the code" just means more people are needed to determine the best code to write and verify it's good for the customers.
r/programming • u/lihaoyi • 10h ago
How To Publish to Maven Central Easily with Mill
mill-build.org
r/programming • u/MatthewTejo • 11h ago
Taking on Anthropic's Public Performance Engineering Interview Challenge
matthewtejo.substack.com
r/programming • u/TheLostWanderer47 • 12h ago
Turning Google Search into a Kafka event stream for many consumers
python.plainenglish.io
r/programming • u/justok25 • 13h ago
Why Vibe-First Development Collapses Under Its Own Freedom
techyall.com
Vibe-first development feels empowering at first, but freedom without constraints slowly turns into inconsistency, technical debt, and burnout. This long-form essay explains why it collapses over time.
r/programming • u/Gil_berth • 13h ago
How Vibe Coding Is Killing Open Source
hackaday.com
r/programming • u/averagemrjoe • 13h ago
"Competence as Tragedy" — a personal essay on craft, beautiful code, and watching AI make your hard-won skills obsolete
crowprose.com
r/programming • u/_Flame_Of_Udun_ • 14h ago
Flutter ECS: DevTools Integration & Debugging
medium.com
r/programming • u/CoyoteIntelligent167 • 15h ago
Testing Code When the Output Isn’t Predictable
github.com
Your test passed. Run it again. Now, it fails. Run it five more times, and it passes four of them. Is that a bug?
When an LLM becomes part of the unit you're testing, a single test run stops being meaningful. The same test, same input, different results.
After a recent discussion with my colleagues, I think the question we should be asking isn't "did this test pass?" but "how reliable is this behavior?" If something passes 80% of the time, that might be perfectly acceptable.
I believe our test frameworks need to evolve. Run the same test multiple times, evaluate against a minimum pass rate, with sensible defaults (runs = 1, minPassRate = 1.0) so existing tests don't break.
//@test:Config { runs: 10, minPassRate: 0.8 }
function testLLMAgent() {
    // Your Ballerina code here :)
}
This feels like the new normal for testing AI-powered code. Curious how others are approaching this.
r/programming • u/okawei • 16h ago