The Silent Scientist: When Software Research Fails to Reach Its Audience (cacm.acm.org)
muragekibicho 23 hours ago [-]
Shameless plug: I run LeetArxiv. It's a successor to Papers with Code, built on the thesis "Every research paper should be a brief blog post with relevant code".

Here's our "Sora's Annotated Diffusion Transformer" writeup (code+paper side-by-side) Link: https://leetarxiv.substack.com/p/the-annotated-diffusion-tra...

looobay 21 hours ago [-]
That's an awesome project! It's literally a gold mine lol. Congrats and thank you for this!
ngriffiths 21 hours ago [-]
> active science communication has been sparse in the area of software research, and those who have tried often find their efforts unrewarded or unsuccessful.

The authors suggest:

> Identify your target audience to tailor your message! Use diverse communication channels beyond papers, and actively engage with practitioners to foster dialogue rather than broadcasting information!

What I would emphasize is that many researchers just don't know how to do it. It isn't as simple as just thinking up a target audience and churning out a blog post. If you are the median researcher, ~0 people will read that post!

I think people underestimate:

- How hard it is to find the right target audience
- How hard it is to understand the target audience's language
- How hard it is to persuade the target reader that this work you've done should matter even a little to their work, even when you designed it specifically for them
- How few people in the audience will ever understand your work well
- How narrow your target audience should be

I also think many researchers want to be able to do this, if not as a primary career goal then at least as a fulfilling, public-service type of activity. I'm currently testing this out a bit (more: https://griffens.net).

pajamasam 1 days ago [-]
Personally, I much prefer “software research” from engineers working in the industry. I’m sceptical of software research being done at universities.
billfruit 1 days ago [-]
I feel much of the knowledge and experience in the industry is simply lost because it isn't widely documented and studied. There need to be detailed histories of major software development projects from the industry, in book form, for people to learn from, in the same way as histories of military campaigns and railway projects.

It's not widely done, and we end up with mere "software archeology", where we have artefacts like code, but the entire human process of why and how it reached that form is left unknown.

zwnow 1 days ago [-]
This is actually one of the things I struggle with the most. Even small scale apps are mysterious to me, I have no idea how to build, deploy and maintain an app correctly.

For context, I work for a startup as solo dev and I manage 5 projects ranging from mobile to fullstack apps.

I am completely overwhelmed because there basically does not exist any clear answer to this. Some go all in on cloud or managed infra, but I am not rich, so I'd much prefer cheap deployment/hosting. The next issue I don't know how to fix is scaling the app. I also find myself getting stuck while building, because there are WAY too many attack vectors in web development that I don't know how to properly protect against.

It doesn't help not having any kind of mentor either. Thus far the apps I've built run fine, considering I only have like 30-40 users max. But my boss wants to scale, and I think it'll doom the projects.

I wish there were actual standards making the web safe without requiring third-party infra for auth, for example. It should be way easier for people to build web apps in a secure way.
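To be fair, some auth building blocks do ship with standard libraries. A minimal sketch of salted password hashing in Python, with no third-party infra (this is one primitive, not a complete auth system, and the scrypt parameters shown are common defaults, not a mandated standard):

```python
# Salted password hashing using only the Python standard library.
import hashlib
import hmac
import os

def hash_password(password, salt=None):
    """Return (salt, digest) for storage; use a fresh random salt per user."""
    salt = salt if salt is not None else os.urandom(16)
    digest = hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1)
    return salt, digest

def verify_password(password, salt, stored_digest):
    """Recompute the digest and compare in constant time (avoids timing leaks)."""
    _, digest = hash_password(password, salt)
    return hmac.compare_digest(digest, stored_digest)
```

Store the salt and digest per user; `hmac.compare_digest` keeps the comparison constant-time so attackers can't learn anything from response timing.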

It's gotten to the point where I don't want to build web stuff anymore, because rather than building features I have to think about attack vectors, scaling issues, dependency issues, and legacy code. It makes me regret joining the industry, tbh.

datadrivenangel 1 days ago [-]
You can load test! Make another copy of the app in a copy of your prod environment and then call it until it breaks!

Also look at the 12 factor app [0] and the DORA capabilities [1].

[0] https://12factor.net/
[1] https://dora.dev/capabilities/
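The "call it until it breaks" idea can be sketched with the standard library alone; this is a toy illustration rather than a real tool like k6 or Locust, and the URL and counts are placeholders:

```python
# Toy load tester: fire concurrent GETs at a staging copy of the app
# and collect latencies; ramp concurrency until errors or latency spike.
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

def load_test(url, total_requests=100, concurrency=10, timeout=5):
    """Send `total_requests` GETs to `url`, `concurrency` at a time.
    Returns (success_count, failure_count, list of latencies in seconds)."""
    def one_request(_):
        start = time.monotonic()
        try:
            with urllib.request.urlopen(url, timeout=timeout) as resp:
                resp.read()
            return time.monotonic() - start
        except Exception:
            return None  # timeout, connection refused, HTTP error, ...

    with ThreadPoolExecutor(max_workers=concurrency) as pool:
        results = list(pool.map(one_request, range(total_requests)))

    latencies = [r for r in results if r is not None]
    return len(latencies), results.count(None), latencies
```

Run it against a staging copy at increasing `concurrency` values and watch where failures or tail latency start climbing; never point it at production.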

tayo42 24 hours ago [-]
People (and companies) need to be incentivized somehow to write and share.
BeFlatXIII 23 hours ago [-]
If you want a truly egregious example of university research steering actual practitioners in the wrong direction, K–12 education is the worst offender.
spookie 1 days ago [-]
Scientists come from all facets of life, you know? Some might've even been at FAANG at some point :)
neves 1 days ago [-]
Do you have any good recommendations about recent software development research?

Till today I still share with my coworkers this 15yo article from Microsoft Research:

https://www.microsoft.com/en-us/research/blog/exploding-soft...

mavhc 1 days ago [-]
Thanks. I read the paper about testing the mythical man-month theory.

The conclusions seem to be: fewer people is better; only one "organisation" or group should contribute to the code; ownership should sit at the lowest level possible, not with a high-up manager; and knowledge retention is important, so effectively manage people leaving and make sure the design rationale is well documented.

PaulKeeble 1 days ago [-]
There is an enormous gulf between research in general and the people who should be reading it from a professional point of view. Science communication is really broken: what makes the trade press, or the press generally, is largely a matter of whether a paper's authors manage to write a good press release and effectively write an article themselves.

We need more New Scientist-style magazines that do decent round-ups of scientific findings for various fields, doing the work of shuffling through the thousands of papers a month and finding the highest-impact ones. The pipeline from research to use in the professions could be drastically improved. At the moment you end up with a hosepipe of abstracts, and reviewing it daily takes a lot of time.

atrettel 23 hours ago [-]
I get what you are saying, and I just want to add another factor here.

Science journalism has gotten a lot harder over the years simply due to how fragmented ("salami sliced" [1] as it is sometimes called) so much research is now. "Publish or perish" encourages researchers to break up a single, coherent body of research into many smaller papers ("minimum publishable units") rather than publishing a larger, easier-to-follow paper. I find it to be one of the most annoying current practices in scientific publishing, because it makes it difficult to see the bigger picture even if you are a subject matter expert. It is hard to find every piece of the research given how split up it becomes, though forward and backward citation analysis helps. That only gets worse when trying to summarize the research from a less technical perspective, as science journalists do.

[1] https://en.wikipedia.org/wiki/Salami_slicing_tactics#Salami_...

m0rc 23 hours ago [-]
So far, the best reference for software engineering research appears to be Robert L. Glass's 2002 book, Facts and Fallacies of Software Engineering. I haven't found a better or more comprehensive reference.

It would be great to see an updated edition.

Do you know a better source of information?

spit2wind 7 hours ago [-]
Glass is great!

The only others who compare are his contemporaries, Steve McConnell, Timothy Lister, Tom DeMarco, and Barry Boehm.

Unfortunately, they're all basically retired. It feels like this kind of interest in software development, at least the publishing, ended around the mid-2000s.

My guess is the shift to blogs from books, adoption of Agile (in whatever form), and a shift in industry focus to getting rich rather than getting good ended the efforts to come up with resources like Glass put together.

mmooss 22 hours ago [-]
For the audience here, the opposite side of the coin is more relevant: Why don't you read software research?

Based on this and other articles (and on experience), it's an especially underutilized resource. By reading it, you would gain an advantage over the competition. Why aren't you using an advantage that is there for the taking?

And why don't we see papers posted to HN?

mkozlows 22 hours ago [-]
Because it's usually not that useful. I have a friend who was software-adjacent and would post all these exciting studies showing that this or that practice was a big deal and massively boosted productivity. And without fail, those studies were some toy experimental design that had nothing to do with actual real-world conditions, and weren't remotely strong enough to convince me to upend opinions based on my actual experience.

I'm sure there are sub-fields where academic papers are more important -- AI research, or really anything with "research" in the name -- but if you're just building normal software, I don't think there's much there.

mmooss 22 hours ago [-]
Thanks for your POV.

> those studies were some toy experiment design that had nothing to do with actual real-world conditions

Isn't that the nature of understanding and applying science? Science is not engineering: Science discovers new knowledge. Applying that knowledge to the real world is engineering.

Perhaps overcoming that barrier, to some degree, is worthwhile. In a sense, it's a well-known gap.

mkozlows 22 hours ago [-]
The question is whether spherical cow research tells you anything that holds up once you introduce the complications of reality into it. In physics, it clearly does. In economics, I think it does in a lot of cases (though with limits). In software engineering... well, like I say, there are areas where I'm sure it does, but research about e.g. strong typing or unit tests or PR review or whatever just doesn't have the juice, IME.
rossdavidh 21 hours ago [-]
In real-world software development, managing complexity is often (usually) the core of the challenge. A simplified example leaves out the very thing that is the obstacle to most good software development. In fact, it is sometimes the case that doing something that helps with managing complexity will impair performance as measured in some other way. For example, it may slow execution speed by some amount but allow the software to be broken into smaller pieces, each of which is more comprehensible. Managing this tradeoff is the key to much software development. If you test with a "toy experiment design", you may be throwing out the very thing that is most important to study.
mmooss 20 hours ago [-]
Great point. I think that also emphasizes the necessity of the D in R&D: The research has to be adapted to the real world to be useful, for example to organizational frameworks and processes that manage complexity as you say.

Most software organizations I know don't have anything like the time to do D (to distinguish it from software development), except in a few clear high-ROI cases. Big software companies like Microsoft and Google have research divisions; I wonder how much they devote to D as opposed to R, and how much of that is released publicly.

Strilanc 21 hours ago [-]
Well, for example, consider this recent study that claimed developers using AI tools take 19% longer to finish tasks [1].

This was their methodology:

> we recruited 16 experienced developers from large open-source repositories (averaging 22k+ stars and 1M+ lines of code) that they’ve contributed to for multiple years. Developers provide lists of real issues (246 total) that would be valuable to the repository—bug fixes, features, and refactors that would normally be part of their regular work. Then, we randomly assign each issue to either allow or disallow use of AI while working on the issue.

Now consider the question of whether you expect this research to generalize. Do you expect that if you / your friends / your coworkers started using AI tools (or stopped using AI tools) that the difference in productivity would also be 19%? Of course not! They didn't look at enough people or contexts to get two sig figs of precision on that average, nor enough to expect the conclusion to generalize. Plus the AI tools are constantly changing, so even if the study was nailing the average productivity change it would be wrong a few months later. Plus the time period wasn't long enough for the people to build expertise, and "if I spend time getting good at this will it be worth it" is probably the real question we want answered. The study is so weak that I don't even feel compelled to trust the sign of their result to be predictive. And I would be saying the same thing if it reported 19% higher instead of 19% lower.
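The sig-figs point can be made concrete with a back-of-the-envelope interval. The n=16 comes from the study; the 40-point between-developer spread is purely an assumed number for illustration, not a figure from the paper:

```python
# With only 16 developers, how precise can a "19% slower" estimate be?
# All numbers except n=16 are illustrative assumptions.
import math

def mean_ci_halfwidth(stdev, n, t_crit):
    """Half-width of a t-based confidence interval for a sample mean."""
    return t_crit * stdev / math.sqrt(n)

n = 16               # developers in the study
mean_effect = 19.0   # reported slowdown, in percentage points
stdev = 40.0         # ASSUMED between-developer spread, percentage points
t_975_df15 = 2.131   # t critical value for a 95% CI with 15 degrees of freedom

hw = mean_ci_halfwidth(stdev, n, t_975_df15)
print(f"95% CI: {mean_effect - hw:.1f}% to {mean_effect + hw:.1f}%")
```

Under that assumed spread, the interval runs from roughly -2% to +40%: wide enough to include "no effect at all", which is why two significant figures on the point estimate overstate the precision.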

I don't want to be too harsh on the study authors; I have a hard time imagining any way to do better given resource constraints and real world practicalities... but that's kind of the whole problem with such studies. They're too small and too specific and that's really hard to fix. Honestly I think I'd trust five anecdotes at lunch more than most software studies (mainly because the anecdotes have the huge advantage of being from the same context I work in). Contrast with medical studies where I'd trust the studies over the anecdotes, because for all their flaws at least they actually put in the necessary resources.

To be pithy: maybe we upvote Carmack quotes more than software studies because Carmack quotes are informed by more written code than most software studies.

[1]: https://metr.org/blog/2025-07-10-early-2025-ai-experienced-o...

mmooss 20 hours ago [-]
Taking issues like that into account is reading critically, which is great and essential. But dismissing ideas on that basis, as is often done on HN, even for large medical studies, is intellectually lazy, imho:

Life is full of flaws and uncertainty; that is the medium in which we swim and breathe and work. The solution is not to lie at the bottom until the ocean becomes pure H2O; the trick is to find value.

Mars008 16 hours ago [-]
> I don't want to be too harsh on the study authors

Well, I'll do it for you. There's a lot of attention-grabbing bull*t out there. For example, I saw a study on LinkedIn claiming that 60% of Indians use AI daily in their jobs, versus only 10% of Japanese. You can guess who did it: very patriotic, but far from reality.

dylanowen 22 hours ago [-]
For me it's a discovery problem. I have a hard time finding papers to read. Where do you go to find interesting or relevant papers?
bsoles 21 hours ago [-]
The audience of software research is other software researchers.

The expectation that a practicing CS graduate, even one with a master's degree, should be able to read, understand, and apply in their work research articles published in academic journals is not very meaningful.

Not because they are not capable people, but because research articles these days are highly specialized, building upon specialized fields, language, etc.

We don't expect mechanical engineers to read the latest research on, say, fluid mechanics making use of the Navier-Stokes equations. I am a mechanical engineer with a graduate degree in another field, and I would be immediately lost if I tried to read such an article. So why do we expect this of software engineers?

KalMann 21 hours ago [-]
Well, I think you have to ask what the goals of the researchers are. In the case of fluid mechanics, for example, they may research new algorithms that make it into the software mechanical engineers use, even if the engineers don't understand the algorithms themselves. So mechanical engineers still benefit from the research.

So I guess what I'm wondering is whether software engineers benefit from the research that software researchers produce (even if they don't understand it themselves).

ngriffiths 20 hours ago [-]
Not all engineers are in the target audience, and not all details of research findings need to be conveyed to the target audience to make a real impact. The point is that if no findings ever make it to engineers (in the broadest sense), there is zero real-world impact. I guess real impact is not the only goal, but it's a valid one.
tarr11 23 hours ago [-]
I’ve discovered a lot of research via two minute papers on YT. Entertaining and easy to understand.

https://youtube.com/@twominutepapers?si=hyvCvW4UwS0QBbrZ

nitwit005 18 hours ago [-]
This assumes a wide audience, but that tends not to be the case. Say you have a paper on some sort of database optimization. How many people are genuinely working on database optimizers in industry? Even a quite successful social media post has low odds of reaching them.
jhd3 19 hours ago [-]
Emery Berger's thoughts on the topic

How to Have Real-World Impact: Five “Easy” Pieces - https://emeryberger.medium.com/how-to-have-real-world-impact...

physarum_salad 17 hours ago [-]
What about the case where publicity is intentionally avoided so as not to have results drowned in a deluge of hyperbole and journalistic hubris?
NedF 1 days ago [-]
One example would help their case.

> Thanks to software research, we know that most code comprehensibility metrics do not, in practice, reflect what they are supposed to measure.

The linked research doesn't really agree. But if it did, so what?

If comprehensibility is not a simple metric, then who has a magic wand to do the fancy feedback? It sounds like it would take a human or an AGI, which is useless; that's why we have metrics.

Are any real programmers who produce things for the world using comprehensibility metrics, or is it all the university fakers and their virtual world they have created?

If this is their 'one example' it sucks.

zkmon 1 days ago [-]
Scientific research doesn't happen for its own sake. Every effort needs to be part of the pipeline of demand and supply. Otherwise it's just a tune that you sing in the shower.
embedding-shape 1 days ago [-]
> Every effort needs to be a part of the pipeline of demand and supply

It's almost unthinkable how much technology and innovation we would never have gotten if this were actually true in practice. So many inventions happened because two people happened to be in the same place for no particular reason, or because someone noticed something strange or interesting and went down the rabbit hole for curiosity's sake, with demand and supply having absolutely zero bearing on it.

I've got to be honest: it's slightly strange to see something like that stated here, of all places, where most of us dive into rabbit holes for fun and no profit all the time, and supply and demand is probably the least interesting part of the puzzle when it comes to understanding and solving problems.

zkmon 16 hours ago [-]
You need to take it in the current context, not the nostalgic past when pure research happened out of curiosity. The world is more purpose-driven now. Scientists are employees of some establishment, with goals driven by the funding. If you are funded to do Fourier-like research, it must be for patents, which are fully driven by commercial goals.

Everyone is connected to financial strings like a puppet. If you are doing research without a financial connection, you must be very rich, have a personal lab, have lots of free time, or have no family duties or worldly goals. Such people are rare, just as wealthier countries had more scientists in past centuries.

digitalPhonix 1 days ago [-]
The Fourier transform existed for the sake of existing for ~200 years before it turned out to be useful enough to build the entirety of our communications infrastructure on top of it.
nyeah 1 days ago [-]
I agree 100% in spirit. Electrical transmission lines were not understood when Fourier did his work. Maxwell wasn't even born yet. And the math ultimately unleashed by Fourier transforms goes way beyond applications.

In cold hard dates, though, Fourier was already using his series to solve heat transfer problems in 1822.

I don't agree with the bizarre idea that every bit of research should have a clear utility. I'm just being careful about dates. And I think FTs were, in a sense, invented with a view towards solving differential equations for physics. Just not electrical ones.
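For concreteness, the 1822 heat-conduction problem can be sketched in standard textbook form (not Fourier's original notation):

```latex
% Heat equation on a rod of length L, ends held at temperature zero:
%   \partial_t u = \alpha\,\partial_x^2 u, \qquad u(0,t) = u(L,t) = 0.
% Fourier's series solution, with coefficients read off the initial profile:
u(x,t) = \sum_{n=1}^{\infty} b_n \sin\!\left(\frac{n\pi x}{L}\right)
         \exp\!\left(-\alpha \left(\tfrac{n\pi}{L}\right)^{2} t\right),
\qquad
b_n = \frac{2}{L}\int_{0}^{L} u(x,0)\,\sin\!\left(\frac{n\pi x}{L}\right)\mathrm{d}x.
```

Each sine mode decays independently, which is exactly the "expand in frequencies, evolve each one separately" trick that later made the transform central to signal processing.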

assemblyman 1 days ago [-]
A lot of people have already mentioned cases where this is neither true nor desirable, e.g. high-energy and condensed-matter physics, astrophysics, any branch of pure mathematics, etc.

But, more importantly, who dictates what needs to happen? If you so desire, you should absolutely sing a tune in the shower, write a poem for yourself, calculate an effect and throw the piece of paper away, write code and delete it. The satisfaction is in exercising your creative urges, learning a new skill, and exploring your curiosity, even if no one else sees or uses it.

I have had the privilege of working with some of the best physicists on the planet. Every single one of them has exposed only part of their work to the world. The rest might not be remarkable in terms of novelty but was crucial to them. They had to do it irrespective of "impact" or "importance". The dead-ends weren't dead to them.

Philosophically, as far as I know, we all get one shot at living. If I can help it, I am going to only spend a fraction of my time choosing to be "part of the pipeline of demand and supply". The rest will be wanderings.

tpoacher 1 days ago [-]
This is only partly true. MRI technology came out of people hunting for aliens in space. The paths science and discovery take are rarely as linear as the funders would like them to be.
noir_lord 1 days ago [-]
Indeed, not to mention that the fundamental science you do now may become a product later, sometimes 50 years later.

The transistor was 1947, but a lot of the basic science was from the 1890s to 1920s.

Still, transistors, right? What did they ever do for us? (Apologies to the Monty Python team.)

zkmon 1 days ago [-]
There are always edge cases. But the bulk follows the gravity flow. Even poetry, these days, should find a buyer.
n4r9 1 days ago [-]
If you've ever wondered why progress in fundamental physics seems to have slowed down; look no further!
nosianu 1 days ago [-]
Maybe you are wrong about what is the cause, what is the effect? You describe how we fund most research, so of course this is what we get.
auggierose 1 days ago [-]
Sometimes edge cases is all there is.
missingdays 1 days ago [-]
> Even poetry, these days, should find a buyer.

Why?

philipwhiuk 23 hours ago [-]
There's a groupthink that financialization of everything is good.
thibaut_barrere 1 days ago [-]
You are describing applied research. But fundamental research seeks to expand knowledge itself, and unsurprisingly delivers a lot of unplanned value.
glitchc 1 days ago [-]
Applied research consists of taking theoretical findings and applying them to a specific problem. As such, applied research requires fundamental research.
nathan_compton 1 days ago [-]
The whole purpose of life is singing tunes in the shower, arguably.