
In academia, those sorts of tests would still need to be approved by the human experimentation ethics board. There are no exceptions for trivial tests.


> In academia, those sorts of tests would still need to be approved by the human experimentation ethics board. There are no exceptions for trivial tests.

That's the point. It's like requiring a report to be filed whenever there is a "use of force" but then applying that rule using the Newtonian definition of force. Sat in your chair? File a report. Stand back up? File a report. Filed a report? File a report.

Worse, this kind of thing can happen retroactively. If you discover that your numbers are different from what you expected, but you hadn't declared any experiment, then comparing what changed before and after *is* the experiment. You hadn't notified those users that you were running an experiment because you had no reason to expect one. So now you can't even let the people with the before-and-after data talk to the people who know what changed in the system in that time frame, because comparing that information would constitute doing the experiment.

It's like telling a car company they can't see their sales data when deciding which models to continue producing because it would constitute doing a psychological experiment on what kind of cars people like.

(On the other hand, it sounds like the law would only apply to entities the size of Facebook, and screw those guys in general. But it really is kind of a silly rule.)


We can all talk and be flippant about how trivial it is to change the colour of a button or whatever, but the sum total of all these changes is something different.

These services are running huge numbers of experiments in order to maximize engagement. Then everyone wonders what happened when tons of people on Facebook end up depressed and tons of people on YouTube end up radicalized by extremist rabbit holes.

It's death by a thousand cuts.


That's a separate problem though. The solution for that isn't to do something at the level of the individual experiments, it's to do something at the agglomeration level where the trivial individual harms are actually accumulating.

If you have some food which is infected with salmonella, you don't pick it apart with a microscope at the level of individual cells and try to separate it back out, you just throw the whole thing away and eat something else.

In this context the contaminated food is Facebook.


To continue with your analogy, Facebook is just one tainted chicken breast in the meat counter. We need to examine the entire meat packing and inspection infrastructure that gave rise to this mess.


> comparing what changed before and after is the experiment.

IIUC, in order to do that comparison you still need to collect data. You may throw that data away and your experiment ends right there, you may do analysis on that data, but you said it yourself - it is an experiment.


Right, so what are we trying to do here then? Having a notification that you're constantly participating in an open-ended experiment with a purpose to be determined at a future date seems worse than nothing. But if you require a more specific notification before the data is collected then the after the fact analysis doesn't just require user notification, it's inherently prohibited.


Yeah, I'd expect the notification to be an opt-in: do you want to be part of this experiment?


The experiment where they change the color of the submit button? What should cause me to care about that?

And what does opt-in even look like? No matter whether you want to "participate in the experiment" the submit button still needs to be some color for you, which is the only part of the "experiment" with any direct effect on you.
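To make the "some color either way" point concrete, here is a hypothetical sketch of how A/B variant assignment is commonly done: the user is deterministically hashed into a bucket, and every user lands in *some* bucket whether or not they "opted in". (The function and experiment names are illustrative, not any real platform's API.)

```python
import hashlib

def assign_variant(user_id: str, experiment: str, variants: list[str]) -> str:
    """Deterministically bucket a user into one of the variants.

    Hashing (experiment, user_id) gives a stable assignment: the same
    user always sees the same variant for a given experiment.
    """
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % len(variants)
    return variants[bucket]

# Every user gets *some* button color; there is no "no variant" state.
color = assign_variant("user-123", "submit-button-color", ["blue", "green"])
```

An "opt-out" user still has to be shown one of the colors, which is exactly the part of the experiment that touches them.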

The concern with psychological experiments isn't that they're collecting data. That's a different bailiwick. The major issue with psychological experiments is that they may have significant direct psychological consequences. If you show people only news stories about mass shootings and conflict it may cause them to become violent or suicidal -- which has nothing to do with whether you collect data on it or what you do with it afterwards. The experiment itself is the harm.

Which means we would need some kind of principled and efficient way of distinguishing those kinds of "real" experiments from just measuring what happens when you make a subtle adjustment to a context menu.


Yes, and it's dumb. It's a bureaucratic nightmare that most likely inhibits progress. Not only that, it's also being used as a cudgel to silence the people who did the grievance studies hoax. [0]

[0] https://reason.com/blog/2019/01/07/peter-boghossian-portland...


Neither does it deliver any guarantee on the results you get out at the end: the Stanford prison experiment was faked, and academics have struggled to replicate the marshmallow test.


While I agree with your point, I do think that some kind of ethics oversight should happen over experiments like the ones you mentioned. I just think that it's absurd to expect the same from simple tests.


I thought the marshmallow test was replicated but clarified: it showed how children react to trusted vs. untrusted adults, not that it uncovered some genetic propensity for deferring rewards.



